CN116385894A - Coastline identification method, device and equipment based on remote sensing image - Google Patents


Publication number
CN116385894A
CN116385894A (Application No. CN202310174506.3A)
Authority
CN
China
Legal status: Pending (status is an assumption, not a legal conclusion)
Application number
CN202310174506.3A
Other languages
Chinese (zh)
Inventor
荆文龙
胡义强
邓琰
杨骥
李勇
梁枝浩
邓应彬
蓝文陆
彭小燕
Current Assignee (the listed assignees may be inaccurate)
Marine Environment Monitoring Center Of Guangxi Zhuang Autonomous Region
Guangzhou Institute of Geography of GDAS
Southern Marine Science and Engineering Guangdong Laboratory Guangzhou
Original Assignee
Marine Environment Monitoring Center Of Guangxi Zhuang Autonomous Region
Guangzhou Institute of Geography of GDAS
Southern Marine Science and Engineering Guangdong Laboratory Guangzhou
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Marine Environment Monitoring Center Of Guangxi Zhuang Autonomous Region, Guangzhou Institute of Geography of GDAS, and Southern Marine Science and Engineering Guangdong Laboratory Guangzhou
Priority: CN202310174506.3A
Publication: CN116385894A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/803Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of input or preprocessed data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping


Abstract

The invention relates to the field of remote sensing data analysis, and in particular to a coastline identification method based on remote sensing images, comprising the following steps: acquiring a remote sensing image set of a sample area; performing image fusion on the first multispectral remote sensing image and the panchromatic remote sensing image of each remote sensing image subset to obtain a fused image for each subset; training a preset neural network model on the fused images to obtain a trained neural network model serving as a land-water segmentation model; and, in response to an identification instruction containing a first multispectral remote sensing image and a panchromatic remote sensing image of a region to be detected, fusing the two images to obtain a fused image of the region to be detected, inputting that fused image into the land-water segmentation model to obtain a land-water segmentation image of the region, and obtaining the coastline identification result of the region from the land-water segmentation image.

Description

Coastline identification method, device and equipment based on remote sensing image
Technical Field
The invention relates to the field of remote sensing data analysis, in particular to a coastline identification method, a device, equipment and a storage medium based on remote sensing images.
Background
The coastline is the datum line dividing sea and land management areas, and is central to the study of sea-land interaction, of the influence of marine activities on coastal zones, and of the integrated management of coastal zones and near-shore ecosystems. Changes in the coastline directly alter the tidal-flat resources of the intertidal zone and the coastal-zone environment, and affect people's livelihood and development, so rapid and accurate monitoring of coastline dynamics is of great significance.
The traditional approach to coastline extraction is manual field GPS survey, which is time-consuming, labor-intensive, inefficient, slow and of limited precision, and therefore cannot extract the coastline quickly and accurately.
Disclosure of Invention
In view of the above, the invention aims to provide a coastline identification method, device, equipment and storage medium based on remote sensing images, in which a multispectral remote sensing image and a panchromatic remote sensing image are fused and the resulting fused image is input into a preset land-water segmentation model to identify the coastline, improving the accuracy and efficiency of coastline identification while reducing its labor and time costs.
In a first aspect, an embodiment of the present application provides a coastline identification method based on a remote sensing image, including the following steps:
acquiring a remote sensing image set of a sample area, wherein the remote sensing image set comprises a plurality of groups of remote sensing image subsets, and the remote sensing image subsets comprise a first multispectral remote sensing image and a panchromatic remote sensing image;
performing image fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of each group of remote sensing image subsets to obtain fusion images corresponding to each group of remote sensing image subsets;
training a preset neural network model on the fused images corresponding to each remote sensing image subset, and taking the trained neural network model as a land-water segmentation model;
in response to an identification instruction containing a first multispectral remote sensing image and a panchromatic remote sensing image of a region to be detected: fusing the two images to obtain a fused image of the region to be detected, inputting the fused image into the land-water segmentation model to obtain a land-water segmentation image of the region to be detected, and obtaining a coastline identification result of the region to be detected from the land-water segmentation image.
In a second aspect, an embodiment of the present application provides a coastline identification device based on remote sensing images, including:
the acquisition module is used for acquiring a remote sensing image set of the sample area, wherein the remote sensing image set comprises a plurality of groups of remote sensing image subsets, and the remote sensing image subsets comprise a first multispectral remote sensing image and a panchromatic remote sensing image;
the fusion module is used for carrying out image fusion processing on the first multispectral remote sensing images and the panchromatic remote sensing images of the remote sensing image subsets of each group to obtain fusion images corresponding to the remote sensing image subsets of each group;
the training module is used for training a preset neural network model on the fused images corresponding to each remote sensing image subset to obtain a trained neural network model serving as a land-water segmentation model;
the identification module is used for responding to an identification instruction containing a first multispectral remote sensing image and a panchromatic remote sensing image of a region to be detected: the two images are fused to obtain a fused image of the region to be detected, the fused image is input into the land-water segmentation model to obtain a land-water segmentation image of the region to be detected, and a coastline identification result of the region to be detected is obtained from the land-water segmentation image.
In a third aspect, embodiments of the present application provide a computer device, including: a processor, a memory, and a computer program stored on the memory and executable on the processor; the computer program when executed by the processor implements the steps of the remote sensing image based coastline identification method as described in the first aspect.
In a fourth aspect, embodiments of the present application provide a storage medium storing a computer program, which when executed by a processor, implements the steps of the remote sensing image based coastline identification method according to the first aspect.
The embodiments of the application provide a coastline identification method, device, equipment and storage medium based on remote sensing images. A multispectral remote sensing image and a panchromatic remote sensing image are fused, and the fused image is input into a preset land-water segmentation model to identify the coastline, which improves the accuracy and efficiency of coastline identification and reduces its labor and time costs.
For a better understanding and implementation, the present invention is described in detail below with reference to the drawings.
Drawings
Fig. 1 is a schematic flow chart of a coastline identification method based on remote sensing images according to a first embodiment of the present application;
fig. 2 is a schematic flow chart of a coastline identification method based on remote sensing images according to a second embodiment of the present application;
fig. 3 is a schematic flow chart of S2 in the remote sensing image-based coastline recognition method according to the first embodiment of the present application;
fig. 4 is a schematic flow chart of S3 in the remote sensing image-based coastline identification method according to the first embodiment of the present application;
fig. 5 is a schematic flow chart of S4 in the remote sensing image-based coastline identification method according to the first embodiment of the present application;
fig. 6 is a flowchart of a coastline identification method based on remote sensing images according to a third embodiment of the present application;
fig. 7 is a schematic structural diagram of a coastline identification device based on remote sensing images according to a fourth embodiment of the present application;
fig. 8 is a schematic structural diagram of a computer device according to a fifth embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the accompanying claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first message may also be referred to as a second message, and similarly a second message may be referred to as a first message, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon" or "in response to determining", depending on the context.
Referring to fig. 1, fig. 1 is a flowchart of a coastline identification method based on remote sensing images according to a first embodiment of the present application, where the method includes the following steps:
s1: a remote sensing image set of the sample area is acquired.
The method is executed by an identification device (hereinafter, the identification device). In an optional embodiment, the identification device may be a computer device or a server, or a server cluster formed by multiple computer devices.
The remote sensing image set comprises a plurality of remote sensing image subsets, each comprising a first multispectral remote sensing image and a panchromatic remote sensing image. The remote sensing image set of the sample area is acquired by the GF-2 (Gaofen-2) satellite, the first independently developed Chinese civil optical remote sensing satellite with a spatial resolution better than 1 meter. It carries a 1-meter panchromatic camera and a 4-meter multispectral camera, and features sub-meter spatial resolution, high positioning accuracy and fast attitude maneuvering.
In this embodiment, the identification device may acquire the remote sensing image set of the sample area through a satellite, or may acquire the remote sensing image set of the sample area from a preset database.
Referring to fig. 2, fig. 2 is a flow chart of a coastline identification method based on remote sensing images according to a second embodiment of the present application, and further includes step S5, specifically as follows:
s5: and preprocessing the remote sensing image subset to obtain a preprocessed first multispectral remote sensing image and a full-color remote sensing image.
The preprocessing comprises radiometric calibration, atmospheric correction, orthorectification, and geometric registration.
Radiometric calibration converts the digital number (DN) of an image into a physical quantity such as radiance, reflectance or surface temperature. The calibration parameters are typically stored in a metadata file, and the generic Radiometric Calibration tool in ENVI can read them automatically from the metadata to complete the calibration.
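As an illustration of the linear calibration model described above, the following sketch applies L = gain × DN + offset with numpy; the function name and the gain and offset values are placeholders, not real GF-2 coefficients, which would be read from the image metadata:

```python
import numpy as np

def calibrate_radiance(dn, gain, offset):
    """Convert raw digital numbers (DN) to at-sensor radiance with the
    linear model L = gain * DN + offset.  gain and offset normally come
    from the image's metadata file; the values below are placeholders."""
    return gain * np.asarray(dn, dtype=float) + offset

dn = np.array([[0, 512], [1023, 255]])
radiance = calibrate_radiance(dn, gain=0.05, offset=1.2)
print(radiance[0, 1])  # 0.05 * 512 + 1.2 = 26.8
```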
While acquiring surface information, a satellite sensor is inevitably affected by illumination conditions, the observation angle, and absorption and scattering by atmospheric constituents such as molecules, aerosols and cloud particles (scattering and absorption of solar radiation and ground reflection by the atmosphere), so the remote sensing data contain imaging information of non-target objects. Quantitative analysis of remote sensing images requires the true reflectance spectrum of the surface target, so these atmospheric effects must be eliminated; this process is known as atmospheric correction.
Orthorectification processes the remote sensing image according to the attitude and orbit data transmitted by the satellite, high-precision DEM (digital elevation model) data, and a rational polynomial model.
Geometric registration is the operation of bringing the same-name image points into exact coincidence in position and orientation, by geometric transformation, for remote sensing images of the same region acquired by different sensor systems at different times and in different bands.
In this embodiment, the identification device performs preprocessing on the first multispectral remote sensing image and the panchromatic remote sensing image in the remote sensing image subset, and obtains the preprocessed first multispectral remote sensing image and panchromatic remote sensing image.
S2: and carrying out image fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of each group of remote sensing image subsets to obtain fusion images corresponding to each group of remote sensing image subsets.
The first multispectral remote sensing image has the higher spectral resolution, with four bands: R (red), G (green), B (blue) and NIR (near infrared). To obtain a fused image with both high spectral resolution and high spatial resolution, in this embodiment the identification device performs image fusion on the first multispectral remote sensing image and the panchromatic remote sensing image of each remote sensing image subset to obtain the fused image corresponding to each subset.
Referring to fig. 3, fig. 3 is a schematic flow chart of step S2 in the remote sensing image-based coastline recognition method according to the first embodiment of the present application, including steps S201 to S203, specifically as follows:
s201: and acquiring the resolution of the full-color remote sensing image, and resampling the first multispectral remote sensing images of the same group according to the resolution to acquire a second multispectral remote sensing image.
Resampling maps an image onto a new pixel grid at a target resolution. In an alternative embodiment, the identification device obtains the resolution of the panchromatic remote sensing image and resamples the first multispectral remote sensing image of the same group to that resolution, so that the two images have the same resolution; the resampled first multispectral remote sensing image is taken as the second multispectral remote sensing image. The resampling method may be nearest neighbor interpolation, bilinear interpolation or cubic convolution interpolation.
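The resampling step can be sketched in plain numpy with nearest-neighbour interpolation; the function name and array shapes are illustrative (production code would use a GIS library and the image geotransform rather than raw index arithmetic):

```python
import numpy as np

def resample_nearest(ms: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Resample a (bands, H, W) multispectral array to (bands, out_h, out_w)
    by nearest-neighbour interpolation."""
    bands, h, w = ms.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return ms[:, rows[:, None], cols[None, :]]

# Upsample a 4-band 2x2 image onto the 4x4 grid of a panchromatic band.
ms = np.arange(16, dtype=float).reshape(4, 2, 2)
up = resample_nearest(ms, 4, 4)
print(up.shape)  # (4, 4, 4)
```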
S202: and performing IHS conversion on the second multispectral remote sensing image to acquire IHS space components of the second multispectral remote sensing image.
IHS-transform image fusion is based on the IHS color space model; the basic idea is to replace, in IHS space, the intensity component of a multispectral image of low spatial resolution with a gray-scale image of high spatial resolution.
The IHS space components are intensity, hue and saturation. The intensity component represents the overall brightness of the spectrum and corresponds to the spatial information of the image; the hue component describes the pure-color attribute, determined by the dominant wavelength of the spectrum; the saturation component represents the proportion of the dominant wavelength in the intensity. Hue and saturation together carry the spectral information of the image.
In this embodiment, the identifying device performs IHS transformation on the second multispectral remote sensing image to obtain an IHS spatial component of the second multispectral remote sensing image.
S203: and acquiring the intensity component of the full-color remote sensing image, and carrying out histogram matching on the full-color remote sensing image according to the intensity components of the full-color remote sensing image and the second multispectral remote sensing image to acquire the intensity component of the matched full-color remote sensing image.
The histogram reflects the distribution of gray values over all pixels of the image; histogram matching is an image enhancement method that adjusts the global brightness and contrast of an image.
To eliminate the influence of the atmosphere, illumination and sensor differences and obtain a more accurate intensity component for the panchromatic remote sensing image, in this embodiment the identification device acquires the intensity component of the panchromatic image and matches it to the intensity component of the second multispectral remote sensing image, obtaining the matched intensity component of the panchromatic image as follows:
A_PAN = (B_PAN − μ(B_PAN)) × σ(I) / σ(B_PAN) + μ(I)
where A_PAN is the matched intensity component of the panchromatic remote sensing image, B_PAN is the intensity component of the panchromatic remote sensing image, μ(B_PAN) and σ(B_PAN) are its mean and standard deviation, and μ(I) and σ(I) are the mean and standard deviation of the intensity component of the second multispectral remote sensing image.
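The matching in S203 amounts to shifting and scaling the panchromatic intensity to the mean and standard deviation of the multispectral intensity component. A minimal numpy sketch (names are illustrative):

```python
import numpy as np

def match_intensity(pan: np.ndarray, ms_intensity: np.ndarray) -> np.ndarray:
    """Match the panchromatic intensity to the mean and standard deviation
    of the multispectral intensity component:
        A_PAN = (B_PAN - mu(B_PAN)) * sigma(I) / sigma(B_PAN) + mu(I)"""
    return (pan - pan.mean()) * (ms_intensity.std() / pan.std()) + ms_intensity.mean()

pan = np.array([[0.0, 1.0], [2.0, 3.0]])
intensity = np.array([[10.0, 12.0], [14.0, 16.0]])
matched = match_intensity(pan, intensity)
print(matched.mean())  # 13.0 — the multispectral mean is reproduced exactly
```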
S204: and replacing the intensity component of the second multispectral remote sensing image according to the intensity component of the full-color remote sensing image after matching, and performing inverse IHS (IHS) transformation on the second multispectral remote sensing image with the replaced intensity component to obtain a third multispectral remote sensing image which is used as a fusion image corresponding to the sample area.
In this embodiment, the identification device replaces the intensity component of the second multispectral remote sensing image with the matched intensity component of the panchromatic image and applies the inverse IHS transform, obtaining the third multispectral remote sensing image as the fused image of the sample area. This raises the spatial resolution of the image while preserving its spectral curve and color texture, so that the model can be trained more effectively and its accuracy improved.
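Steps S202–S204 can be sketched end to end with a linear IHS transform. The transform matrix below is one of several variants found in the pan-sharpening literature (the patent does not fix a particular one), and the inverse is computed numerically so the round trip is exact:

```python
import numpy as np

# One common linear IHS transform: row 0 is the intensity I = (R + G + B) / 3;
# rows 1-2 carry the chromatic information.
IHS = np.array([
    [1 / 3,           1 / 3,            1 / 3],
    [-np.sqrt(2) / 6, -np.sqrt(2) / 6,  np.sqrt(2) / 3],
    [1 / np.sqrt(2),  -1 / np.sqrt(2),  0.0],
])
IHS_INV = np.linalg.inv(IHS)

def ihs_fuse(rgb: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Replace the intensity component of a (3, H, W) stack with a
    (matched) panchromatic band, then apply the inverse transform."""
    flat = rgb.reshape(3, -1)
    components = IHS @ flat          # forward transform: I, v1, v2
    components[0] = pan.ravel()      # substitute the intensity component
    return (IHS_INV @ components).reshape(rgb.shape)

rgb = np.random.rand(3, 4, 4)
pan = rgb.mean(axis=0)               # identical intensity: fusion is a no-op
fused = ihs_fuse(rgb, pan)
print(np.allclose(fused, rgb))  # True
```

Substituting a panchromatic band whose statistics match the multispectral intensity (the histogram matching of S203) keeps the chromatic components untouched, which is why the spectral character of the image survives the fusion.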
S3: training a preset neural network model according to the fusion images corresponding to the remote sensing image subsets of each group to obtain a trained neural network model serving as an amphibious segmentation model.
The land-water segmentation model is one of a U-net model and a convolutional neural network model; it fuses low-level features with higher-level features and generates a high-quality, seamless segmentation result from the input image.
In this embodiment, the identification device trains the preset neural network model on the fused image corresponding to each remote sensing image subset, and takes the trained neural network model as the land-water segmentation model.
Referring to fig. 4, fig. 4 is a schematic flow chart of step S3 in the remote sensing image-based coastline recognition method according to the first embodiment of the present application, including steps S301 to S303, specifically including the following steps:
s301: and respectively carrying out image segmentation on the fusion images corresponding to the remote sensing image subsets of each group, and obtaining a plurality of sample segmentation pictures of the sample region as a training image set.
In this embodiment, the identification device performs image segmentation on the fused image corresponding to each remote sensing image subset, obtains the sample segmentation pictures corresponding to each subset, and aggregates them into the training image set, where the sample segmentation pictures include a water-body segmentation image, a land segmentation image and a background segmentation image.
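One common way to build such a training set is to cut each fused scene into fixed-size chips before labeling. A minimal numpy sketch (the tile size, shapes and the edge-dropping policy are assumptions, not taken from the patent):

```python
import numpy as np

def tile_image(img: np.ndarray, tile: int) -> list:
    """Cut a (bands, H, W) fused image into non-overlapping tile x tile
    patches; edge strips that do not fill a whole tile are dropped."""
    _, h, w = img.shape
    return [img[:, r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

fused = np.zeros((4, 256, 300))          # 4-band fused scene
patches = tile_image(fused, 128)
print(len(patches), patches[0].shape)    # 4 (4, 128, 128)
```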
S302: and respectively labeling the sample segmentation pictures to obtain label data corresponding to a plurality of sample segmentation pictures as a training label set.
In this embodiment, the identification device performs label labeling on the sample segmentation pictures respectively, and obtains label data corresponding to a plurality of sample segmentation pictures as a training label set, where the label data includes label data corresponding to a water body segmentation image, a land segmentation region and a background segmentation image.
S303: and inputting the training image set and the training label set into the neural network model for training to obtain a trained neural network model serving as an amphibious segmentation model.
In this embodiment, the identification device inputs the training image set and the training label set into the preset neural network model and trains it iteratively for a preset number of iterations, obtaining several trained neural network models. It then computes the accuracy of each trained model with a preset accuracy calculation algorithm and, according to accuracy and recall, selects from the trained models the target neural network model with the highest accuracy and recall as the land-water segmentation model.
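The checkpoint selection described above can be sketched as follows. How accuracy and recall are combined into a single score is not fixed by the text, so the equal-weight mean used here is an assumption, as are all names:

```python
import numpy as np

def accuracy_recall(pred: np.ndarray, truth: np.ndarray):
    """Pixel accuracy and water-class recall for binary masks (1 = water)."""
    acc = (pred == truth).mean()
    recall = (pred[truth == 1] == 1).mean() if (truth == 1).any() else 0.0
    return acc, recall

def select_checkpoint(preds: list, truth: np.ndarray) -> int:
    """Pick the checkpoint whose mean of accuracy and recall is highest
    (one simple way to weight the two scores)."""
    scores = [sum(accuracy_recall(p, truth)) / 2 for p in preds]
    return int(np.argmax(scores))

truth = np.array([[1, 1], [0, 0]])
preds = [np.array([[0, 0], [0, 0]]),   # epoch 1: misses all water
         np.array([[1, 1], [0, 1]]),   # epoch 2: finds water, one false alarm
         np.array([[1, 1], [0, 0]])]   # epoch 3: perfect
print(select_checkpoint(preds, truth))  # 2
```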
S4: responding to an identification instruction, wherein the identification instruction comprises a first multispectral remote sensing image and a full-color remote sensing image of a region to be detected, carrying out fusion processing on the first multispectral remote sensing image and the full-color remote sensing image of the region to be detected, acquiring a fusion image of the region to be detected, inputting the fusion image of the region to be detected into the land and water segmentation model, acquiring a land and water segmentation image of the region to be detected, and acquiring a coastline identification result of the region to be detected according to the land and water segmentation image.
The identification instruction is sent by a user and received by the identification equipment.
In this embodiment, the identification device obtains the identification instruction sent by the user and, in response, obtains the first multispectral remote sensing image and the panchromatic remote sensing image of the region to be detected. The recognition device performs fusion processing on the two images to obtain the fused image of the region to be detected, inputs the fused image into the land and water segmentation model to obtain the land and water segmented image of the region to be detected, and obtains the coastline recognition result of the region to be detected according to the land and water segmented image.
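The IHS component-substitution fusion used by this application (detailed later in claim 3) can be sketched roughly as follows. For brevity the sketch uses the fast additive form of IHS fusion with intensity taken as the band mean, which for three bands is algebraically equivalent to replacing the intensity component and applying the inverse transform; the multispectral input is assumed to be already resampled to the panchromatic grid:

```python
import numpy as np

def histogram_match(src: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Match the grey-level histogram of `src` to that of `ref`."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(src.shape)

def ihs_fuse(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """Fast IHS pansharpening sketch.

    ms  : (3, H, W) multispectral bands resampled to the pan resolution
    pan : (H, W) panchromatic band
    The band mean stands in for the IHS intensity component; substituting
    the histogram-matched pan band for it amounts to adding (pan' - I)
    to every band.
    """
    intensity = ms.mean(axis=0)
    pan_matched = histogram_match(pan, intensity)
    return ms + (pan_matched - intensity)[None, :, :]
```

The fused image keeps the spectral shape of the multispectral bands while inheriting the spatial detail of the pan band, which is what the subsequent segmentation model relies on.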
Referring to fig. 5, fig. 5 is a schematic flow chart of step S4 in the remote sensing image-based coastline recognition method according to the first embodiment of the present application, including step S401, specifically as follows:
S401: removing the land-water interleaved region from the land and water segmented image to obtain a land and water segmented image after removal, and extracting, from the land and water segmented image after removal, the boundary line between the water body region and the land region as the coastline identification result of the region to be detected.
The coastline obtained from a remote sensing image is generally the instantaneous boundary line (also called the waterline) between sea water and land at a given moment. Because the coastline changes continuously under the influence of tides and other factors, satellite images of the mean high-tide line are difficult to obtain, so most automatic coastline extraction from satellite remote sensing actually extracts an instantaneous waterline.
The land and water segmented image includes a water body region, a land region and a land-water interleaved region. To obtain the coastline recognition result more accurately, in this embodiment the recognition device identifies the land-water interleaved region in the land and water segmented image and removes it, obtains the land and water segmented image after removal, and extracts, from the land and water segmented image after removal, the boundary line between the water body region and the land region as the coastline recognition result of the region to be detected.
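Under the assumption that the segmented image uses one integer code per class (the actual codes are not given in the embodiment), the removal-and-boundary-extraction step could look like:

```python
import numpy as np

WATER, LAND, MIXED = 1, 2, 3  # hypothetical class codes

def extract_coastline(seg: np.ndarray) -> np.ndarray:
    """Mark water pixels that 4-adjacently touch land.

    Pixels of the land-water interleaved class are set to background first,
    so they cannot produce spurious boundary fragments.
    """
    cleaned = np.where(seg == MIXED, 0, seg)   # remove the interleaved region
    water = cleaned == WATER
    land_padded = np.pad(cleaned == LAND, 1)   # pad so the shifts stay in bounds
    touches_land = (land_padded[:-2, 1:-1] | land_padded[2:, 1:-1] |
                    land_padded[1:-1, :-2] | land_padded[1:-1, 2:])
    return water & touches_land
```

The resulting boolean mask is a one-pixel-wide waterline that can be vectorised into the coastline recognition result.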
Referring to fig. 6, fig. 6 is a flowchart of a coastline identification method based on remote sensing images according to a third embodiment of the present application, and further includes step S6, specifically as follows:
S6: responding to a display instruction by acquiring electronic map data corresponding to the remote sensing image of the region to be detected, acquiring the coastline type identifier corresponding to the coastline identification result according to the coastline identification result of the region to be detected, and displaying and labeling the coastline type identifier on the electronic map data.
The display instruction is sent by a user and received by the identification equipment.
In this embodiment, the identification device obtains the display instruction sent by the user and, in response, obtains the electronic map data associated with the region to be detected. The identification device acquires the coastline type identifier corresponding to the coastline identification result from a preset database according to the coastline identification result of the region to be detected, returns the coastline type identifier to the display interface of the identification device, and displays and labels the coastline type identifier on the electronic map data according to the coastline identification result and the coastline type identifier.
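A minimal sketch of the identifier lookup, with a hypothetical type table standing in for the preset database (the real database schema and type names are not described here):

```python
# Hypothetical coastline-type table; the actual identifiers used by the
# identification device are not given in the embodiment.
COASTLINE_TYPES = {
    "natural": {"id": "CL-N", "color": "#2e7d32"},
    "artificial": {"id": "CL-A", "color": "#c62828"},
}

UNKNOWN = {"id": "CL-U", "color": "#616161"}

def type_identifier(coastline_type: str) -> dict:
    """Return the display identifier for a recognised coastline type,
    falling back to an 'unknown' marker for types not in the table."""
    return COASTLINE_TYPES.get(coastline_type, UNKNOWN)
```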
Referring to fig. 7, fig. 7 is a schematic structural diagram of a remote sensing image-based coastline type recognition device according to a fourth embodiment of the present application. The device may be implemented, in whole or in part, by software, hardware or a combination of both, and the device 7 includes:
an acquisition module 71, configured to acquire a remote sensing image set of a sample area, where the remote sensing image set includes a plurality of remote sensing image subsets, and the remote sensing image subsets include a first multispectral remote sensing image and a panchromatic remote sensing image;
the fusion module 72 is configured to perform image fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of the subset of the remote sensing images to obtain fused images corresponding to the subset of the remote sensing images of each group;
the training module 73 is configured to train a preset neural network model according to the fused images corresponding to the remote sensing image subsets of each group, so as to obtain a trained neural network model as an amphibious segmentation model;
the identifying module 74 is configured to respond to an identifying instruction, where the identifying instruction includes a first multispectral remote sensing image and a panchromatic remote sensing image of a region to be detected, perform fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of the region to be detected, obtain a fused image of the region to be detected, input the fused image of the region to be detected into the land and water segmentation model, obtain a land and water segmentation image of the region to be detected, and obtain a coastline identifying result of the region to be detected according to the land and water segmentation image.
In this embodiment, a remote sensing image set of a sample area is acquired by the acquisition module, where the remote sensing image set includes a plurality of groups of remote sensing image subsets and each remote sensing image subset includes a first multispectral remote sensing image and a panchromatic remote sensing image. The fusion module performs image fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of each group of remote sensing image subsets to obtain the fused image corresponding to each group. The training module trains a preset neural network model according to the fused images corresponding to the groups of remote sensing image subsets to obtain a trained neural network model as the land and water segmentation model. The identification module responds to an identification instruction, where the identification instruction includes a first multispectral remote sensing image and a panchromatic remote sensing image of a region to be detected; it performs fusion processing on the two images to obtain the fused image of the region to be detected, inputs the fused image into the land and water segmentation model to obtain the land and water segmented image of the region to be detected, and obtains the coastline identification result of the region to be detected according to the land and water segmented image.
The multispectral remote sensing image and the panchromatic remote sensing image are fused, and the fused image obtained after the fusion processing is input into a preset land and water segmentation model to identify the coastline, which improves the accuracy and efficiency of coastline identification and reduces the labor cost and time cost of coastline identification.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a computer device according to a fifth embodiment of the present application, where the computer device 8 includes: a processor 81, a memory 82, and a computer program 83 stored on the memory 82 and executable on the processor 81. The memory 82 may store a plurality of instructions adapted to be loaded by the processor 81 and executed to perform the method steps of the first, second and third embodiments; for the specific implementation procedure, reference may be made to the descriptions of the first, second and third embodiments, which are not repeated here.
The processor 81 may include one or more processing cores. Using various interfaces and lines connecting the parts of the server, the processor 81 performs the various functions of the remote sensing image-based coastline type recognition device 7 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 82 and invoking the data in the memory 82. Optionally, the processor 81 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA) or programmable logic array (PLA). The processor 81 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem and the like. The CPU mainly handles the operating system, user interface, application programs and the like; the GPU is responsible for rendering and drawing the content to be displayed by the touch display screen; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 81 and may instead be implemented by a separate chip.
The memory 82 may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory 82 includes a non-transitory computer-readable storage medium. The memory 82 may be used to store instructions, programs, code sets or instruction sets. The memory 82 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function), instructions for implementing the foregoing method embodiments, and the like; the data storage area may store the data involved in the foregoing method embodiments. Optionally, the memory 82 may also be at least one storage device located remotely from the processor 81.
An embodiment of the present application further provides a storage medium. The storage medium may store a plurality of instructions adapted to be loaded by a processor and executed to perform the method steps of the first, second and third embodiments; for the specific execution process, reference may be made to the descriptions of the first, second and third embodiments, which are not repeated here.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis. For parts that are not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the computer program may implement the steps of each of the method embodiments described above. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc.
The present invention is not limited to the above-described embodiments. Any modifications or variations that do not depart from the spirit and scope of the present invention are intended to fall within the scope of the claims and their equivalents.

Claims (10)

1. The coastline identification method based on the remote sensing image is characterized by comprising the following steps of:
acquiring a remote sensing image set of a sample area, wherein the remote sensing image set comprises a plurality of groups of remote sensing image subsets, and the remote sensing image subsets comprise a first multispectral remote sensing image and a panchromatic remote sensing image;
performing image fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of each group of remote sensing image subsets to obtain fusion images corresponding to each group of remote sensing image subsets;
training a preset neural network model according to the fusion images corresponding to the remote sensing image subsets of each group to obtain a trained neural network model serving as an amphibious segmentation model;
responding to an identification instruction, wherein the identification instruction comprises a first multispectral remote sensing image and a panchromatic remote sensing image of a region to be detected; performing fusion processing on the first multispectral remote sensing image and the panchromatic remote sensing image of the region to be detected to obtain a fused image of the region to be detected; inputting the fused image of the region to be detected into the land and water segmentation model to obtain a land and water segmented image of the region to be detected; and obtaining a coastline identification result of the region to be detected according to the land and water segmented image.
2. The remote sensing image-based coastline identification method of claim 1, wherein: the image fusion processing is performed on the first multispectral remote sensing image and the panchromatic remote sensing image of each group of remote sensing image subsets, and before the fused images corresponding to each group of remote sensing image subsets are obtained, the method comprises the following steps:
and preprocessing the remote sensing image subset to obtain a preprocessed first multispectral remote sensing image and a panchromatic remote sensing image, wherein the preprocessing step comprises radiation calibration, atmospheric correction, orthographic correction and geometric registration.
3. The coastline recognition method based on remote sensing images according to claim 2, wherein the performing image fusion processing on the first multispectral remote sensing images and the panchromatic remote sensing images of the subset of the remote sensing images to obtain fused images corresponding to the subset of the remote sensing images comprises the steps of:
acquiring the resolution of the full-color remote sensing image, and resampling a first multispectral remote sensing image of the same group according to the resolution to acquire a second multispectral remote sensing image, wherein the second multispectral remote sensing image is the resampled first multispectral remote sensing image;
performing IHS transformation on the second multispectral remote sensing image to obtain IHS space components of the second multispectral remote sensing image, wherein the IHS space components comprise an intensity component;
acquiring an intensity component of the full-color remote sensing image, and performing histogram matching on the full-color remote sensing image according to the intensity components of the full-color remote sensing image and the second multispectral remote sensing image to acquire the intensity component of the matched full-color remote sensing image;
and replacing the intensity component of the second multispectral remote sensing image with the intensity component of the matched panchromatic remote sensing image, and performing inverse IHS transformation on the second multispectral remote sensing image with the replaced intensity component to obtain a third multispectral remote sensing image as the fused image corresponding to the remote sensing image subset.
4. The coastline recognition method based on remote sensing images according to claim 3, wherein training a preset neural network model according to the fused image corresponding to each set of remote sensing image subsets to obtain a trained neural network model as an amphibious segmentation model comprises the steps of:
respectively carrying out image segmentation on the fusion images corresponding to the remote sensing image subsets of each group to obtain a plurality of sample segmentation pictures of the sample region as a training image set, wherein the sample segmentation pictures comprise a water body segmentation image, a land segmentation region and a background segmentation image;
respectively labeling the sample segmentation pictures to obtain label data corresponding to a plurality of sample segmentation pictures as a training label set, wherein the label data comprises label data corresponding to a water body segmentation image, a land segmentation region and a background segmentation image;
and inputting the training image set and the training label set into the neural network model for training to obtain a trained neural network model serving as an amphibious segmentation model.
5. The remote sensing image-based coastline identification method of claim 1, wherein: the amphibious segmented image includes a body of water region, a land region, and an amphibious staggered region.
6. The remote sensing image-based coastline recognition method of claim 5, wherein the acquiring the coastline recognition result of the region to be measured from the land and water segmented image comprises the steps of:
removing the land-water interleaved region from the land and water segmented image to obtain a land and water segmented image after removal, and extracting, from the land and water segmented image after removal, the boundary line between the water body region and the land region as the coastline identification result of the region to be detected.
7. The remote sensing image based coastline identification method of claim 1, further comprising the steps of:
and responding to a display instruction, acquiring electronic map data corresponding to the remote sensing image of the region to be detected, acquiring a coastline type identifier corresponding to the coastline identification result according to the coastline identification result of the region to be detected, and displaying and labeling the coastline type identification identifier on the electronic map data according to the coastline type identifier.
8. Coastline type recognition device based on remote sensing image, characterized by comprising:
the acquisition module is used for acquiring a remote sensing image set of the sample area, wherein the remote sensing image set comprises a plurality of groups of remote sensing image subsets, and the remote sensing image subsets comprise a first multispectral remote sensing image and a panchromatic remote sensing image;
the fusion module is used for carrying out image fusion processing on the first multispectral remote sensing images and the panchromatic remote sensing images of the remote sensing image subsets of each group to obtain fusion images corresponding to the remote sensing image subsets of each group;
the training module is used for training a preset neural network model according to the fusion images corresponding to the remote sensing image subsets of each group to obtain a trained neural network model serving as an amphibious segmentation model;
the recognition module is used for responding to a recognition instruction, wherein the recognition instruction comprises a first multispectral remote sensing image and a full-color remote sensing image of a region to be detected, the first multispectral remote sensing image and the full-color remote sensing image of the region to be detected are fused, the fused image of the region to be detected is obtained, the fused image of the region to be detected is input into the land and water segmentation model, the land and water segmentation image of the region to be detected is obtained, and a coastline recognition result of the region to be detected is obtained according to the land and water segmentation image.
9. A computer device, comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor; the computer program, when executed by the processor, implements the steps of the remote sensing image based coastline identification method as claimed in any one of claims 1 to 7.
10. A storage medium, characterized by: the storage medium stores a computer program which, when executed by a processor, implements the steps of the remote sensing image based coastline identification method as claimed in any one of claims 1 to 7.
CN202310174506.3A 2023-02-24 2023-02-24 Coastline identification method, device and equipment based on remote sensing image Pending CN116385894A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310174506.3A CN116385894A (en) 2023-02-24 2023-02-24 Coastline identification method, device and equipment based on remote sensing image

Publications (1)

Publication Number Publication Date
CN116385894A true CN116385894A (en) 2023-07-04

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117456192A (en) * 2023-12-21 2024-01-26 广东省海洋发展规划研究中心 Remote sensing image color correction method, device, equipment and storage medium
CN117456192B (en) * 2023-12-21 2024-05-07 广东省海洋发展规划研究中心 Remote sensing image color correction method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination