CN117496278A - Water depth map inversion method based on radiation transmission parameter application convolutional neural network - Google Patents


Info

Publication number
CN117496278A
Authority
CN
China
Prior art keywords
remote sensing
radiation transmission
neural network
convolutional neural
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410004085.4A
Other languages
Chinese (zh)
Other versions
CN117496278B (en)
Inventor
陈鹏
谢丛霜
张镇华
张思琪
黄海清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Second Institute of Oceanography MNR
Original Assignee
Second Institute of Oceanography MNR
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Second Institute of Oceanography MNR filed Critical Second Institute of Oceanography MNR
Priority to CN202410004085.4A priority Critical patent/CN117496278B/en
Publication of CN117496278A publication Critical patent/CN117496278A/en
Application granted granted Critical
Publication of CN117496278B publication Critical patent/CN117496278B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/56 - Extraction of image or video features relating to colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A - TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 - Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/30 - Assessment of water resources

Abstract

The invention discloses a water depth map inversion method that applies a convolutional neural network based on radiation transmission parameters, comprising the following steps: acquiring reference submarine topography sounding points as prior sounding points; acquiring a passive remote sensing image and preprocessing it to obtain remote sensing reflectivity images for the different wave bands, from which the corresponding radiation transmission data layer information and the diffuse attenuation coefficient are derived; forming a feature data set from the remote sensing reflectivities of the red, green and blue wave bands, the radiation transmission data layer information, and the diffuse attenuation coefficient; extracting the 7×7 sub-images centered on each prior sounding point position from the feature data set to form a feature tensor training data set, with the prior sounding points as training labels; inputting the feature tensor training data set into a convolutional neural network for training; and inputting the feature data set into the trained convolutional neural network to invert the water depth map. The invention makes full use of the spectral information of the passive remote sensing image, takes the pixels surrounding each water depth point into account, and improves the inversion accuracy.

Description

Water depth map inversion method based on radiation transmission parameter application convolutional neural network
Technical Field
The invention relates to the field of remote sensing images, and in particular to a water depth map inversion method that applies a convolutional neural network based on radiation transmission parameters.
Background
An accurate water depth map describes the underwater topography and plays a key supporting role in activities such as coastal research, environmental management and ocean space planning. However, global ocean maps made using high-resolution detection techniques have so far covered less than 15% of the sea area. Both active and passive satellite- and airborne platforms can be used effectively for water depth measurement. Conventional active sounding methods, such as shipborne or airborne multi-beam echo sounders (Multibeam Echo Sounder, hereinafter MBES) and lidars, provide detailed marine mapping data with high accuracy. However, these methods run into limitations in offshore waters featuring harsh environments, complex terrain, shallow disputed waters, and the like. Satellite-based water depth measurement is therefore becoming a promising alternative to in-situ measurement for acquiring water depth information.
Recently, machine learning methods have attracted attention for offshore water depth measurement. Such methods can effectively extract high-dimensional features of the data to construct a nonlinear function, need no strict atmospheric correction, and make no assumptions about the optical characteristics of the water. Various machine learning algorithms, such as Neural Networks (NN), Support Vector Machines (SVM), Random Forests (RF), Multilayer Perceptrons (MLP) and Convolutional Neural Networks (CNN), have been applied to water depth estimation. Relative to other machine learning models, a CNN can detect the reflectivity similarity between adjacent pixels and its spatial correlation with the corresponding water depths, which is expected to improve the accuracy of the sounding inversion. However, existing machine learning methods mainly focus on extracting high-dimensional features from the training data by data mining to improve the robustness of the model. Previous studies have shown that incorporating multispectral feature information into the model input can improve the accuracy of the water depth inversion. Yet existing machine learning methods, such as NN, SVM and RF, only consider the radiation transmission information at the water depth point itself, neglect the spatial and radiation transfer characteristics of the passive remote sensing image, and yield poor inversion efficiency and insufficient accuracy of the inversion result.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a water depth map inversion method that applies a convolutional neural network based on radiation transmission parameters: a machine learning method that performs water depth inversion by fully utilizing the spectral information of the passive remote sensing image and by referring to the pixel information surrounding each water depth point.
The specific technical scheme is as follows:
a water depth map inversion method applying a convolutional neural network based on radiation transmission parameters comprises the following steps:
S1: acquiring reference submarine topography sounding points of the research area from ICESat-2 ATL03 data as prior sounding points, and acquiring a passive remote sensing image of the research area;
S2: preprocessing the passive remote sensing image to obtain remote sensing reflectivity images for the different wave bands;
S3: calculating the radiation transmission data layer information between the corresponding wave bands from the remote sensing reflectivities of the red, green and blue wave bands of the preprocessed passive remote sensing image;
S4: estimating the diffuse attenuation coefficient from the remote sensing reflectivities of the green and blue wave bands of the preprocessed passive remote sensing image;
S5: composing a whole-image feature data set with a depth of 7 from the remote sensing reflectivities of the red, green and blue wave bands, the radiation transmission data layer information obtained in S3, and the diffuse attenuation coefficient;
S6: based on the feature data set, extracting the 7×7 sub-images centered on each prior sounding point position to form a feature tensor training data set, with the prior sounding points as training labels;
S7: inputting the feature tensor training data set into a convolutional neural network for training;
S8: inputting the whole-image feature data set obtained in S5 into the trained convolutional neural network, and inverting the water depth map of the whole research area.
Further, in S2, the preprocessing operation includes: atmospheric correction, resampling, land masking, clipping, and coordinate transformation.
Further, in S3, the radiation transmission data layer information is calculated as follows (the expressions are given as formula images in the original publication):
where P1 is the radiation transmission data layer information between the blue and green wave bands, P2 is that between the red and green wave bands, and P3 is that between the red and blue wave bands; R_R, R_G and R_B are the remote sensing reflectivities of the red, green and blue wave bands of the passive remote sensing image; and R̄_R, R̄_G and R̄_B are the average remote sensing reflectivities of the red, green and blue wave bands over the optically deep water area of the passive remote sensing image.
Further, in S4, the diffuse attenuation coefficient K_d is calculated as follows (the expression is given as a formula image in the original publication): where R_G is the remote sensing reflectivity of the green wave band and R_B is the remote sensing reflectivity of the blue wave band of the passive remote sensing image.
Further, in S7, the convolutional neural network has the following structure: the first layer is the input layer; the second layer is a convolution layer with 32 convolution kernels of size 2×2; the third layer is a convolution layer with 64 convolution kernels of size 2×2; the fourth layer is a convolution layer with 128 convolution kernels of size 3×3; the fifth layer is a convolution layer with 32 convolution kernels of size 3×3; the sixth layer is a fully connected layer, which performs the regression with a linear activation function to invert the water depth data.
Further, in S1, the prior sounding points are obtained by using the DBSCAN algorithm.
The beneficial effects of the invention are as follows:
the method combines the optical radiation transmission data and the convolutional neural network model, can better highlight the correlation between the spectral information and the water depth, and improves the inversion precision; the optical radiation transmission data particularly highlights the spectral characteristic information of the water body in the passive remote sensing image, and the convolutional neural network model well considers the surrounding information of the reference sounding point pixels, so that the method can provide powerful support for large-scale inversion of shallow water sea area maps.
Drawings
FIG. 1 is a flow chart of the water depth map inversion method of the present invention employing convolutional neural networks based on radiation transmission parameters.
FIG. 2 is a scatter plot of the water depth inversion map obtained by the present invention compared to in situ data.
Detailed Description
The objects and effects of the present invention will become more apparent from the following detailed description of the preferred embodiments taken together with the accompanying drawings. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, a water depth map inversion method using a convolutional neural network based on radiation transmission parameters includes the following steps:
s1: acquiring reference submarine topography sounding points of a research area in ICESat-2 ATL03 (namely a laser altimeter system of an ICESat-2 satellite) data as priori sounding point data, and acquiring a passive remote sensing image of the research area; in the embodiment, a DBSCAN algorithm is adopted to obtain the priori sounding points, and the algorithm has higher efficiency.
S2: preprocessing the passive remote sensing image, wherein the preprocessing operation comprises the following steps: atmospheric correction, resampling, land masking, clipping, coordinate transformation. The preprocessing aims to remove interference of the atmosphere and the land on the image, and obtain remote sensing reflectivity images corresponding to different wave bands, wherein the remote sensing reflectivity images at least comprise a remote sensing reflectivity image of a red wave band, a remote sensing reflectivity image of a green wave band and a remote sensing reflectivity image of a blue wave band.
S3: because the wavelengths of the red wave band, the green wave band and the blue wave band can penetrate through deeper water, the remote sensing reflectivities of the red wave band, the green wave band and the blue wave band in the preprocessed passive remote sensing image obtained in the S2 are used selectively, and the radiation transmission data layer information between the corresponding wave bands is calculated respectively. The radiation transmissive data layer information calculation expression is as follows:
wherein P is 1 Is the radiation transmission data layer information between blue and green wave bands, P 2 Is the radiation transmission data layer information between red and green wave bands, P 3 Is the radiation transmission data layer information between the red and blue wave bands; r is R R Is the remote sensing reflectivity of red wave band in the passive remote sensing image, R G Is the remote sensing reflectivity of a green wave band in a passive remote sensing image, R B The remote sensing reflectivity of the blue wave band in the passive remote sensing image;is the average remote sensing reflectivity of red wave band of optical deep water area in passive remote sensing image,/or->Is the average remote sensing reflectivity of the green wave band of the optical deep water area in the passive remote sensing image, < ->Is the average remote sensing reflectivity of the blue wave band of the optical deep water area in the passive remote sensing image.
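The P-layer expressions appear only as formula images in the original publication. Purely for illustration, a deep-water-corrected log-ratio of the Lyzenga/Stumpf family, with the same inputs and outputs as described above, might look as follows; the functional form is an assumption, not the patent's actual expression.

```python
import numpy as np

def band_ratio_layers(R_R, R_G, R_B, deep_mask):
    """Hypothetical deep-water-corrected log-ratio layers between band
    pairs (an illustrative assumption; the patent's P1/P2/P3 formulas
    are published as images and are not reproduced here)."""
    # Average deep-water reflectivity per band (the R-bar terms).
    Rbar_R = R_R[deep_mask].mean()
    Rbar_G = R_G[deep_mask].mean()
    Rbar_B = R_B[deep_mask].mean()
    eps = 1e-6  # guard against log of non-positive arguments
    lR = np.log(np.clip(R_R - Rbar_R, eps, None))
    lG = np.log(np.clip(R_G - Rbar_G, eps, None))
    lB = np.log(np.clip(R_B - Rbar_B, eps, None))
    P1 = lB / lG  # blue-green layer
    P2 = lR / lG  # red-green layer
    P3 = lR / lB  # red-blue layer
    return P1, P2, P3
```

Whatever the exact form, the key property is shared with the patent's layers: each P combines two bands and the deep-water averages into one per-pixel value that varies with depth.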
S4: estimating a diffusion attenuation coefficient K according to the remote sensing reflectivity of a green wave band and a blue wave band in the preprocessed passive remote sensing image obtained in the step S2 d The diffusion attenuation coefficient can invert the turbidity degree of the water body, is favorable for inversion of water depth information, and has the following calculation expression:
s5: transmitting data layer information (namely P) of three radiation obtained in S3 by using red band remote sensing reflectivity, green band remote sensing reflectivity and blue band remote sensing reflectivity in the passive remote sensing image obtained in S2 1 、P 2 、P 3 ) And the diffusion attenuation coefficient K obtained in S4 d A feature data set of the whole image comprising radiation transmission information with a depth of 7 is composed.
S6: based on the characteristic data set of the whole image, extracting 7 multiplied by 7 sub-images which take the ICESat-2 prior sounding point position as the center to form a characteristic tensor training data set, wherein the training label is the position of the ICESat-2 prior sounding point. The sub-image with the size can ensure the processing efficiency and accuracy of the CNN network.
S7: the feature tensor training data set is input into a convolutional neural network to train the convolutional neural network. The convolutional neural network structure is as follows: the first layer is the input layer; the second layer is a convolution layer, which has 32 convolution kernels, and the convolution kernels are 2×2 in size; the third layer is a convolution layer, which has 64 convolution kernels, and the convolution kernels are 2×2 in size; the fourth layer is a convolution layer, and has 128 convolution kernels with the same size, and the sizes of the convolution kernels are 3 multiplied by 3; the fifth layer is a convolution layer, and has 32 convolution kernels with the same size, and the sizes of the convolution kernels are 3 multiplied by 3; the sixth layer is a full-connection layer, and regression operation is carried out by adopting a linear activation function to invert out water depth data.
S8: and (5) inputting the characteristic data set of the whole image obtained in the step (S5) into a trained convolutional neural network, and inverting a water depth map of the whole research area.
The effect of the method according to the invention is described below in a specific embodiment.
In this embodiment, a certain satellite remote sensing image of a water area near a certain island is used, and a water depth map is obtained by inversion with the method of the invention, in combination with water depth data extracted from the original ICESat-2 satellite files. The continuously updated digital elevation model data from NCEI (a series of water depths measured in the field, which can be regarded as standard water depth data) is compared with the water depth map obtained by inversion in this embodiment. The verification result is shown in Fig. 2: the water depth data obtained by the method agrees with the standard water depth data with an accuracy of 95.6%, indicating that the inversion accuracy of the method is high.
It will be appreciated by persons skilled in the art that the foregoing describes preferred embodiments of the invention and is not intended to limit it. Those skilled in the art may modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their elements. Any modifications, equivalents and alternatives falling within the spirit and principles of the invention are intended to be included within its scope.

Claims (6)

1. The water depth map inversion method applying the convolutional neural network based on the radiation transmission parameters is characterized by comprising the following steps of:
S1: acquiring reference submarine topography sounding points of the research area from ICESat-2 ATL03 data as prior sounding points, and acquiring a passive remote sensing image of the research area;
S2: preprocessing the passive remote sensing image to obtain remote sensing reflectivity images for the different wave bands;
S3: calculating the radiation transmission data layer information between the corresponding wave bands from the remote sensing reflectivities of the red, green and blue wave bands of the preprocessed passive remote sensing image;
S4: estimating the diffuse attenuation coefficient from the remote sensing reflectivities of the green and blue wave bands of the preprocessed passive remote sensing image;
S5: composing a whole-image feature data set with a depth of 7 from the remote sensing reflectivities of the red, green and blue wave bands, the radiation transmission data layer information obtained in S3, and the diffuse attenuation coefficient;
S6: based on the feature data set, extracting the 7×7 sub-images centered on each prior sounding point position to form a feature tensor training data set, with the prior sounding points as training labels;
S7: inputting the feature tensor training data set into a convolutional neural network for training;
S8: inputting the whole-image feature data set obtained in S5 into the trained convolutional neural network, and inverting the water depth map of the whole research area.
2. The method for water depth map inversion using convolutional neural network based on radiation transmission parameters according to claim 1, wherein in S2, the preprocessing operation comprises: atmospheric correction, resampling, land masking, clipping, coordinate transformation.
3. The method for inverting the water depth map by applying the convolutional neural network based on the radiation transmission parameters according to claim 1, wherein in S3, the radiation transmission data layer information is calculated as follows (the expressions are given as formula images in the original publication):
where P1 is the radiation transmission data layer information between the blue and green wave bands, P2 is that between the red and green wave bands, and P3 is that between the red and blue wave bands; R_R, R_G and R_B are the remote sensing reflectivities of the red, green and blue wave bands of the passive remote sensing image; and R̄_R, R̄_G and R̄_B are the average remote sensing reflectivities of the red, green and blue wave bands over the optically deep water area of the passive remote sensing image.
4. The method for water depth map inversion using the convolutional neural network based on the radiation transmission parameters according to claim 1, wherein in S4, the diffuse attenuation coefficient K_d is calculated as follows (the expression is given as a formula image in the original publication):
where R_G is the remote sensing reflectivity of the green wave band and R_B is the remote sensing reflectivity of the blue wave band of the passive remote sensing image.
5. The method for inverting a water depth map by applying a convolutional neural network based on a radiation transmission parameter according to claim 1, wherein in S7, the convolutional neural network has the following structure: the first layer is the input layer; the second layer is a convolution layer with 32 convolution kernels of size 2×2; the third layer is a convolution layer with 64 convolution kernels of size 2×2; the fourth layer is a convolution layer with 128 convolution kernels of size 3×3; the fifth layer is a convolution layer with 32 convolution kernels of size 3×3; the sixth layer is a fully connected layer, which performs the regression with a linear activation function to invert the water depth data.
6. The method for inverting the water depth map by applying the convolutional neural network based on the radiation transmission parameters according to claim 1, wherein in S1, the prior sounding points are obtained by the DBSCAN algorithm.
CN202410004085.4A 2024-01-03 2024-01-03 Water depth map inversion method based on radiation transmission parameter application convolutional neural network Active CN117496278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410004085.4A CN117496278B (en) 2024-01-03 2024-01-03 Water depth map inversion method based on radiation transmission parameter application convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410004085.4A CN117496278B (en) 2024-01-03 2024-01-03 Water depth map inversion method based on radiation transmission parameter application convolutional neural network

Publications (2)

Publication Number Publication Date
CN117496278A true CN117496278A (en) 2024-02-02
CN117496278B CN117496278B (en) 2024-04-05

Family

ID=89683415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410004085.4A Active CN117496278B (en) 2024-01-03 2024-01-03 Water depth map inversion method based on radiation transmission parameter application convolutional neural network

Country Status (1)

Country Link
CN (1) CN117496278B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002329382A1 (en) * 2001-08-29 2003-03-18 Isis Innovation Limited Surface texture determination method and apparatus
CN109657392A (en) * 2018-12-28 2019-04-19 北京航空航天大学 A kind of high-spectrum remote-sensing inversion method based on deep learning
CN113793374A (en) * 2021-09-01 2021-12-14 自然资源部第二海洋研究所 Method for inverting water depth based on water quality inversion result by using improved four-waveband remote sensing image QAA algorithm
CN114117886A (en) * 2021-10-28 2022-03-01 南京信息工程大学 Water depth inversion method for multispectral remote sensing
CN115235431A (en) * 2022-05-19 2022-10-25 南京大学 Shallow sea water depth inversion method and system based on spectrum layering
CN116295285A (en) * 2023-02-14 2023-06-23 国家海洋信息中心 Shallow sea water depth remote sensing inversion method based on region self-adaption

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002329382A1 (en) * 2001-08-29 2003-03-18 Isis Innovation Limited Surface texture determination method and apparatus
CN109657392A (en) * 2018-12-28 2019-04-19 北京航空航天大学 A kind of high-spectrum remote-sensing inversion method based on deep learning
CN113793374A (en) * 2021-09-01 2021-12-14 自然资源部第二海洋研究所 Method for inverting water depth based on water quality inversion result by using improved four-waveband remote sensing image QAA algorithm
CN114117886A (en) * 2021-10-28 2022-03-01 南京信息工程大学 Water depth inversion method for multispectral remote sensing
CN115235431A (en) * 2022-05-19 2022-10-25 南京大学 Shallow sea water depth inversion method and system based on spectrum layering
CN116295285A (en) * 2023-02-14 2023-06-23 国家海洋信息中心 Shallow sea water depth remote sensing inversion method based on region self-adaption

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
QIMING YUAN: "Water-Body Detection From Spaceborne SAR Images With DBO-CNN", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》, 19 October 2023 (2023-10-19) *
LI Juedong: "Application of hyperspectral data in shallow-sea substrate classification and water depth inversion", China Master's Theses Full-text Database, 15 February 2012 (2012-02-15) *

Also Published As

Publication number Publication date
CN117496278B (en) 2024-04-05

Similar Documents

Publication Publication Date Title
Misra et al. Shallow water bathymetry mapping using Support Vector Machine (SVM) technique and multispectral imagery
Pe’eri et al. Satellite remote sensing as a reconnaissance tool for assessing nautical chart adequacy and completeness
Casal et al. Understanding satellite-derived bathymetry using Sentinel 2 imagery and spatial prediction models
Su et al. Prediction of water depth from multispectral satellite imagery—the regression Kriging alternative
Chu et al. Technical framework for shallow-water bathymetry with high reliability and no missing data based on time-series sentinel-2 images
CN112013822A (en) Multispectral remote sensing water depth inversion method based on improved GWR model
Lee et al. Confidence measure of the shallow-water bathymetry map obtained through the fusion of Lidar and multiband image data
CN114612769A (en) Integrated sensing infrared imaging ship detection method integrated with local structure information
WO2021205424A2 (en) System and method of feature detection in satellite images using neural networks
Marghany et al. 3-D visualizations of coastal bathymetry by utilization of airborne TOPSAR polarized data
Slocum et al. Combined geometric-radiometric and neural network approach to shallow bathymetric mapping with UAS imagery
Thomas et al. A purely spaceborne open source approach for regional bathymetry mapping
Sung et al. Image-based super resolution of underwater sonar images using generative adversarial network
Lumban-Gaol et al. Extracting Coastal Water Depths from Multi-Temporal Sentinel-2 Images Using Convolutional Neural Networks
CN113960625A (en) Water depth inversion method based on satellite-borne single photon laser active and passive remote sensing fusion
CN117496278B (en) Water depth map inversion method based on radiation transmission parameter application convolutional neural network
Ahmed et al. Deep neural network for oil spill detection using Sentinel-1 data: application to Egyptian coastal regions
CN117274831A (en) Offshore turbid water body depth inversion method based on machine learning and hyperspectral satellite remote sensing image
CN111561916B (en) Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image
CN113989612A (en) Remote sensing image target detection method based on attention and generation countermeasure network
CN110097562B (en) Sea surface oil spill area image detection method
Kao et al. Determination of shallow water depth using optical satellite images
Oh et al. Coastal shallow-water bathymetry survey through a drone and optical remote sensors
Arai et al. Method for Frequent High Resolution of Optical Sensor Image Acquisition using Satellite-Based SAR Image for Disaster Mitigation
Agrafiotis Image-based bathymetry mapping for shallow waters

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant