CN114297938B - Inversion method of optical shallow water depth based on neural network - Google Patents

Inversion method of optical shallow water depth based on neural network

Info

Publication number: CN114297938B (application CN202111670094.XA; published as application CN114297938A)
Authority: CN (China)
Legal status: Active (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Inventors: 赖文典, 李忠平, 王俊帏, 汪永超, 林供
Original and current assignee: Xiamen University (assignee list may be inaccurate; Google has not performed a legal analysis)
Application filed by Xiamen University; priority to CN202111670094.XA
Other languages: Chinese (zh)

Landscapes

  • Image Processing (AREA)

Abstract

An inversion method of optical shallow-water depth based on a neural network, relating to geophysical exploration. The method comprises the following steps: 1) perform gas-absorption correction and Rayleigh correction on the raw optical-image data to obtain the Rayleigh-corrected reflectance; 2) compute the cloud albedo of each pixel in the optical image and, taking it as a reference, mask cloud pixels in the reflectance from step 1) with a threshold method; 3) divide the water body corresponding to each pixel of the remotely sensed image into an optical deep-water area and an optical shallow-water area according to the known water-body type; 4) construct a water-depth data set; 5) build a multilayer perceptron neural network model, train it with the broad-coverage water-depth data set from step 4) to capture the optical shallow-water depth signal, and predict the water depth.

Description

Inversion method of optical shallow water depth based on neural network
Technical Field
The invention relates to the technical field of geophysical exploration, in particular to an inversion method of optical shallow water depth based on a neural network.
Background
The nearshore shallow-water environment is an important ecosystem that includes coral reefs, seagrass meadows, kelp beds and the like. Besides the health of these benthic ecosystems, an important monitoring parameter is the depth of the sea floor. Bathymetry matters not only for navigation and scientific research; it is also a necessary reference for coastal management, including storm-surge monitoring and wind-farm site selection.
With the rapid development of computing, neural networks (NNs) have become an effective means of building water-depth inversion algorithms with wide applicability. Compared with traditional empirical algorithms for optical remote-sensing bathymetry, neural networks are flexible in their inputs. In Liu et al., "Deriving Bathymetry From Optical Images With a Localized Neural Network Algorithm" (IEEE Transactions on Geoscience and Remote Sensing, 2018, 56(9): 5334-5342), the authors combined the locality of empirical regression models with the low data sensitivity of neural networks and proposed a locally adaptive back-propagation neural network: remote-sensing reflectance (R_rs) in the blue and green bands at sites regularly distributed over the study region is used as input and the corresponding water depths as output, several back-propagation neural networks are trained separately, and the per-pixel depths produced by these networks are combined by voting to predict the water depth. In Ai et al., "Convolutional Neural Network to Retrieve Water Depth in Marine Shallow Water Area From Remote Sensing Images" (IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2020, 13: 2888-2898), the authors used three high-spatial-resolution satellites (Ziyuan-3, Gaofen-1 and WorldView-2), matched measured lidar depth points with the remote-sensing reflectance (R_rs) of the corresponding pixel and its neighbouring pixels in four near-infrared and visible bands, and predicted the depth of each pixel by training a convolutional neural network (CNN).
Current research on water-depth inversion mostly follows the traditional remote-sensing approach: it uses the spectral information of the remote-sensing reflectance (R_rs), which in satellite remote sensing is obtained from the raw data through atmospheric correction. For nearshore shallow water, R_rs must first be obtained by atmospheric correction before depth can be retrieved from satellite imagery, and the inversion result is sensitive to the accuracy of the atmospheric-correction algorithm. Because atmospheric correction is uncertain in nearshore shallow water, such regions often have no valid R_rs, or an erroneous R_rs, so that no depth can be retrieved at those sites, or the retrieved depth carries very large errors. By contrast, using top-of-atmosphere data as the algorithm input avoids invalid or erroneous R_rs and yields a depth estimate at the corresponding location. Moreover, compared with traditional empirical bathymetry algorithms, a neural network is flexible in its inputs, and training on a large data set gives the model generality.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a neural-network-based inversion method of optical shallow-water depth that accurately computes global optical shallow-water depth from the top-of-atmosphere remote-sensing data of an optical satellite.
The invention comprises the following steps:
1) Carrying out gas absorption correction and Rayleigh correction on the original data of the optical image to obtain Rayleigh corrected reflectivity;
2) Calculating the cloud albedo of each pixel in the optical image and, taking it as a reference, performing cloud-pixel masking on the reflectance obtained in step 1) by a threshold method;
3) Dividing the water body corresponding to each pixel in the image data obtained by remote sensing into an optical deepwater area (ODW) and an optical shallow water area (OSW) according to the known water body type;
4) Constructing a water depth data set;
5) Establishing a multilayer perceptron (MLP) neural network model, training the model with the broad-coverage water-depth data set from step 4) to capture the optical shallow-water depth signal, and predicting the water depth.
In step 1), the specific steps of performing gas absorption correction and rayleigh correction on the original data of the optical image may be:
The reflectance ρ_t observed by the satellite sensor at the top of the atmosphere is expressed as:

ρ_t(λ) = t_g[ρ_r(λ) + ρ_aer(λ) + ρ_ra(λ) + T(λ)ρ_g + t_s t_v π R_rs(λ)]   (1)

where λ is the wavelength, t_g is the transmittance of the atmospheric gases, ρ_r is the Rayleigh reflectance due to multiple molecular scattering in the absence of aerosol, ρ_aer is the aerosol reflectance due to multiple scattering in the absence of air molecules, ρ_ra is the signal due to the coupling of air molecules and aerosol, T is the direct transmittance, ρ_g is the scattering signal due to sun glint, t_s and t_v are the atmospheric transmittances from the sun to the water surface and from the water surface to the sensor, respectively, and R_rs is the remote-sensing reflectance determined by the optical properties of seawater (absorption and scattering of the water body, etc.) and the bottom signal.

In formula (1) the gas transmittance t_g is a known quantity and ρ_r and ρ_g can be computed precisely; atmospheric correction in remote sensing is essentially the accurate estimation of t_s, t_v, ρ_aer and ρ_ra in order to obtain R_rs. To avoid erroneous estimation, these four terms are kept, and the Rayleigh-corrected reflectance ρ_rc is obtained by applying only Rayleigh correction to ρ_t:

ρ_rc(λ) = ρ_t(λ)/t_g(λ) − T(λ)ρ_g − ρ_r(λ) = ρ_aer(λ) + ρ_ra(λ) + t_s t_v π R_rs(λ)   (2)

This step of precisely computing and removing the Rayleigh reflectance signal and the sun-glint scattering signal that would contaminate the water signal is called Rayleigh correction, and it has no regional dependence. The four remaining parameters t_s, t_v, ρ_aer and ρ_ra are atmospheric parameters that also affect the water signal; they vary with region and atmospheric conditions, so inaccurate estimation of them can make the correction fail and leave no valid data, or only low-quality data.
In step 2), the cloud albedo of each pixel in the optical image is calculated, specifically expressed as:

cloud albedo = π[L_t(865) − L_r(865)] / [F_0(865) t_oz cos θ_s]   (3)

where L_t(865) is the top-of-atmosphere radiance at 865 nm, L_r is the Rayleigh radiance, F_0 is the extraterrestrial solar irradiance, θ_s is the solar zenith angle, and t_oz is the ozone transmittance.

The specific method of cloud-pixel masking of the reflectance obtained in step 1) by the threshold method may be as follows: when the cloud cover in an image pixel is thick, most of the light is reflected by the cloud, so the cloud albedo increases; a threshold is set according to the characteristics of the optical satellite, and pixels whose cloud albedo exceeds the threshold are removed, to avoid the influence of cloud on the data.
In step 3), the optical deep-water area (ODW) and the optical shallow-water area (OSW) are distinguished mainly by whether sunlight can reach the bottom; an ODW typically comprises deep water whose bottom natural light cannot reach and turbid shallow water that natural light cannot penetrate, and the remainder is OSW.
In step 4), constructing the water-depth data set means matching the Rayleigh-corrected reflectance of the "shallow water" water-body type from step 3) with the measured water depths within the pixels. Globally representative shallow-water bodies may be selected so that ρ_rc and water depth are matched across different seasons, substrates and atmospheric conditions, improving the coverage of the data. If the time of the measured depth data differs from the satellite observation time, the tidal height of the measured data must be corrected, via harmonic analysis, to the tidal height at the satellite observation time. The spatial resolution of the measured depths is usually much higher than that of the high-resolution satellite image, so when several depth points fall within one satellite image pixel, the pixel's water depth is computed with formula (4):

H_pix = (1/n) Σ_{i=1}^{n} H_i   (4)

where H_pix is the water depth of the satellite image pixel, n is the number of measured depth points within the pixel, and H_i is the depth of the i-th measured point.
In step 5), a multilayer perceptron (MLP) neural network model is built and trained with the broad-coverage water-depth data set from step 4); the input is ρ_rc and the output is water depth. By training on data of different types, the network learns to extract the optical shallow-water depth signal under any atmospheric condition; that is, the network autonomously learns, for every season and atmospheric condition, the atmospheric parameters of step 1) that affect the water signal, replacing the traditional ocean-colour remote-sensing atmospheric-correction step, and, combined with the different substrate information, predicts the water depth better.
Compared with the prior art, the invention has the following advantages:
1. The invention uses ρ_rc as input, avoiding the cases of failed atmospheric correction with no valid data (or only low-quality data) that occur when R_rs is used.
2. A neural network has flexible inputs, high fault tolerance and adaptivity; combining ρ_rc with the neural network effectively folds the atmospheric-correction process into the network's training, adapting it to water-depth environments under different atmospheres and in different seasons.
3. Most current empirical bathymetry algorithms lack generality. Compared with existing water-depth inversion methods, the present method, given sufficient training data from local regions (different seasons, substrates and atmospheric conditions), applies well to optical shallow-water depth inversion in other regions of the world.
4. The invention offers a new, global approach to bathymetric inversion, in contrast to current optical-satellite methods that predict local water depth from the remote-sensing reflectance R_rs.
Drawings
FIG. 1 is the data-matching region of the measured water-depth data and the ρ_rc training data of the multispectral satellite in an embodiment of the present invention;
FIG. 2 is a block diagram of an MLP neural network for shallow water depth prediction in an embodiment of the invention;
FIG. 3 is a shallow water depth algorithm verification area (solid lines in the figure are paths of image-matched ICESat-2 data) according to an embodiment of the present invention;
FIG. 4 is a diagram of the result of remote sensing inversion of the water depth of the verification area in the embodiment of the invention;
FIG. 5 is an inversion water depth accuracy verification result in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the following examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. On the contrary, the invention is intended to cover any alternatives, modifications, equivalents, and variations as may be included within the spirit and scope of the invention as defined by the appended claims. Further, in the following detailed description, certain specific details are set forth to provide a thorough understanding of the invention; those skilled in the art will nevertheless fully understand the invention without some of these details.
The invention uses ρ_rc data from a multispectral satellite together with a multilayer perceptron neural network model to achieve global remote-sensing inversion of optical shallow-water depth.
The invention comprises the following steps:
1) The raw optical-image data are subjected to gas-absorption correction and Rayleigh correction to obtain the Rayleigh-corrected reflectance (ρ_rc). The reflectance ρ_t observed by the satellite sensor at the top of the atmosphere is expressed as:

ρ_t(λ) = t_g[ρ_r(λ) + ρ_aer(λ) + ρ_ra(λ) + T(λ)ρ_g + t_s t_v π R_rs(λ)]   (1)

where λ is the wavelength, t_g is the transmittance of the atmospheric gases, ρ_r is the Rayleigh reflectance due to multiple molecular scattering in the absence of aerosol, ρ_aer is the aerosol reflectance due to multiple scattering in the absence of air molecules, ρ_ra is the signal due to the coupling of air molecules and aerosol, T is the direct transmittance, ρ_g is the scattering signal due to sun glint, t_s and t_v are the atmospheric transmittances from the sun to the water surface and from the water surface to the sensor, respectively, and R_rs is the remote-sensing reflectance determined by the optical properties of seawater (absorption and scattering of the water body, etc.) and the bottom signal.

In formula (1), t_g is a known quantity and ρ_r and ρ_g can be computed precisely; atmospheric correction in remote sensing is in fact the accurate estimation of t_s, t_v, ρ_aer and ρ_ra to obtain R_rs. To avoid erroneous estimation, these four terms are kept and the Rayleigh-corrected reflectance ρ_rc is obtained by applying only Rayleigh correction to ρ_t:

ρ_rc(λ) = ρ_t(λ)/t_g(λ) − T(λ)ρ_g − ρ_r(λ) = ρ_aer(λ) + ρ_ra(λ) + t_s t_v π R_rs(λ)   (2)

This step of precisely computing and removing the Rayleigh reflectance signal and the sun-glint scattering signal that would contaminate the water signal is called Rayleigh correction, and it has no regional dependence. The four parameters t_s, t_v, ρ_aer and ρ_ra are atmospheric parameters that also affect the water signal; they vary with region and atmospheric conditions, so their inaccurate estimation can make the correction fail, leaving no valid data (or only low-quality data).
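Under the decomposition above, once t_g, T, ρ_g and ρ_r have been computed by the processing software, the Rayleigh correction of formula (2) is simple per-band arithmetic. A minimal sketch (all numeric values are illustrative, not from the patent):

```python
import numpy as np

# Hypothetical per-band quantities for three bands (illustrative only):
rho_t = np.array([0.12, 0.10, 0.08])   # TOA reflectance rho_t
t_g   = np.array([0.98, 0.97, 0.95])   # gas transmittance
T     = np.array([0.90, 0.91, 0.92])   # direct transmittance
rho_g = np.array([0.002, 0.002, 0.001])  # sun-glint signal
rho_r = np.array([0.05, 0.04, 0.03])   # Rayleigh reflectance

def rayleigh_corrected(rho_t, t_g, T, rho_g, rho_r):
    """Formula (2): remove gas absorption, sun glint and Rayleigh reflectance,
    keeping the aerosol terms and the water signal inside rho_rc."""
    return rho_t / t_g - T * rho_g - rho_r

rho_rc = rayleigh_corrected(rho_t, t_g, T, rho_g, rho_r)
```

Note that ρ_rc still contains ρ_aer and ρ_ra; the neural network, not an aerosol model, is left to absorb those terms.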
2) The cloud albedo of each pixel in the optical image is calculated and, taking it as a reference, cloud-pixel masking is applied by a threshold method to the ρ_rc obtained in step 1). The cloud albedo is expressed as:

cloud albedo = π[L_t(865) − L_r(865)] / [F_0(865) t_oz cos θ_s]   (3)

where L_t(865) is the top-of-atmosphere radiance at 865 nm, L_r is the Rayleigh radiance, F_0 is the extraterrestrial solar irradiance, θ_s is the solar zenith angle, and t_oz is the ozone transmittance.

When the cloud cover in an image pixel is thick, most of the light is reflected by the cloud and the cloud albedo increases. A threshold is set according to the characteristics of the optical satellite, and pixels whose cloud albedo exceeds the threshold are removed, avoiding the influence of cloud on the data.
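The thresholding in this step can be sketched as follows, with formula (3) as reconstructed here; every number (radiances, irradiance, and the threshold) is purely illustrative:

```python
import numpy as np

# Illustrative per-pixel values; real ones come from the satellite product.
Lt_865 = np.array([1.9, 2.2, 1.6])  # TOA radiance at 865 nm
Lr_865 = 1.5                         # Rayleigh radiance at 865 nm
F0_865 = 95.9                        # extraterrestrial solar irradiance
theta_s = np.deg2rad(30.0)           # solar zenith angle
t_oz = 0.99                          # ozone transmittance
THRESHOLD = 0.018                    # satellite-specific (assumed value)

# Formula (3): reflectance-like cloud albedo at 865 nm.
albedo = np.pi * (Lt_865 - Lr_865) / (F0_865 * t_oz * np.cos(theta_s))

cloud_mask = albedo > THRESHOLD      # True = cloud pixel, data set to invalid
```

Only pixels where `cloud_mask` is False keep their ρ_rc values for the later matching step.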
3) According to the known water-body type, the water body corresponding to each pixel of the remotely sensed image is divided into an optical deep-water area (ODW) and an optical shallow-water area (OSW). ODW and OSW are distinguished mainly by whether sunlight can reach the bottom: an ODW typically comprises deep water whose bottom natural light cannot reach and turbid shallow water that natural light cannot penetrate; everything else is OSW.
4) The ρ_rc of the "shallow water" water-body type from step 3) is matched with the measured water depths within the pixels to form the water-depth data set. Globally representative shallow-water bodies are selected so that ρ_rc and water depth are matched across different seasons, substrates and atmospheric conditions, improving the coverage of the data. If the time of the measured depth data differs from the satellite data time, the tidal height of the measured data must be corrected, via harmonic analysis, to the tidal height at the satellite observation time. The spatial resolution of the measured depths is usually much higher than that of the high-resolution satellite image, so when several depth points fall within one satellite image pixel, the pixel's water depth is computed with formula (4):

H_pix = (1/n) Σ_{i=1}^{n} H_i   (4)

where H_pix is the water depth of the satellite image pixel, n is the number of measured depth points within the pixel, and H_i is the depth of the i-th measured point.
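Formula (4) amounts to grouping the measured points by the image pixel they fall in and averaging. A small self-contained sketch, with hypothetical pixel indices and depths:

```python
from collections import defaultdict

# Illustrative lidar depth points, each tagged with the (row, col) of the
# satellite image pixel it falls into.
points = [((10, 4), 3.2), ((10, 4), 3.6), ((10, 4), 3.4), ((11, 4), 7.0)]

# Accumulate sum and count per pixel, then take the mean (formula (4)).
sums = defaultdict(lambda: [0.0, 0])
for pix, h in points:
    sums[pix][0] += h
    sums[pix][1] += 1

H_pix = {pix: s / n for pix, (s, n) in sums.items()}
```

Each entry of `H_pix` is then paired with that pixel's ρ_rc spectrum to form one training sample.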
5) A multilayer perceptron (MLP) neural network model is built and trained with the broad-coverage water-depth data set from step 4); the input is ρ_rc and the output is water depth. By training on data of different types, the network learns to extract the optical shallow-water depth signal under any atmospheric condition; that is, the network autonomously learns, for every season and atmospheric condition, the atmospheric parameters of step 1) that affect the water signal, replacing the traditional ocean-colour remote-sensing atmospheric-correction step, and, combined with the different substrate information, predicts the water depth better.
The following detailed description of the invention refers to the accompanying drawings and specific embodiments:
1. A multispectral high-resolution satellite image of the study area is acquired. This embodiment uses Landsat-8 optical satellite data, and the raw image is Rayleigh-corrected with the open-source software ACOLITE, giving the Rayleigh-corrected reflectance ρ_rc of each pixel:

ρ_rc(λ) = ρ_t(λ)/t_g(λ) − T(λ)ρ_g − ρ_r(λ)   (2)

where λ is the wavelength, ρ_t is the reflectance observed by the satellite sensor at the top of the atmosphere, t_g is the transmittance of the atmospheric gases, T is the direct transmittance, ρ_g is the scattering signal due to sun glint, and ρ_r is the Rayleigh reflectance due to multiple molecular scattering in the absence of aerosol. The Rayleigh reflectance signal and the sun-glint scattering signal can be computed precisely, and this correction has no regional dependence.
2. The cloud albedo of each pixel in the original image is computed with the open-source software SeaDAS, and each pixel is masked by a threshold method. The cloud albedo is calculated as:

cloud albedo = π[L_t(865) − L_r(865)] / [F_0(865) t_oz cos θ_s]   (3)

where L_t(865) is the top-of-atmosphere radiance at 865 nm, L_r is the Rayleigh radiance, F_0 is the extraterrestrial solar irradiance, θ_s is the solar zenith angle, and t_oz is the ozone transmittance.

Pixels whose cloud albedo exceeds the set threshold are cloud pixels and their data are set to invalid values; the remaining water pixels are kept. The cloud-albedo threshold differs between optical satellites; for the Landsat-8 data of this example, the threshold is 0.018.
3. The measured water-depth data are matched with the "shallow water" spectra of the study area (FIG. 1) to form the shallow-water depth data set. This embodiment uses measured depths from the ICESat-2 lidar. The training study areas are the Great Bahama Bank and the Cay Sal Bank, matching all Landsat-8 and ICESat-2 satellite data from October 2018 to March 2021: 72,178 data pairs for the Great Bahama Bank and 20,728 for the Cay Sal Bank. Depths on the Great Bahama Bank span 0-15 m over substrates including seagrass, microalgae, brown macroalgae and sand; depths on the Cay Sal Bank span 5-25 m over seagrass, sand and the like. The 0-25 m range covers almost the entire depth range of optical shallow water, and the substrates of the two areas cover most optical shallow-water substrate types; the roughly two and a half years of matching cover all seasons and almost all types of atmospheric conditions. Because the ICESat-2 acquisition times differ from the Landsat-8 times, the ICESat-2 depths must have their tides corrected, by harmonic analysis, to the Landsat-8 acquisition times. The spatial resolution of ICESat-2 lidar bathymetry is far higher than that of the high-resolution imagery, so several ICESat-2 depth points fall within one Landsat-8 pixel and the pixel depth must be computed with formula (4):

H_pix = (1/n) Σ_{i=1}^{n} H_i   (4)

where H_pix is the water depth of the pixel, n is the number of measured depth points within the pixel, and H_i is the depth of the i-th measured point.
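The tide correction in this step can be sketched as follows, assuming harmonic analysis has already produced constituent amplitudes, periods and phases; the two constituents and all numbers below are hypothetical, not fitted values from the embodiment:

```python
import math

# Hypothetical tidal constituents (amplitude in m, period in h, phase in rad),
# e.g. M2- and S2-like terms from a harmonic analysis of a nearby tide record.
CONSTITUENTS = [(0.45, 12.42, 0.3), (0.12, 12.00, 1.1)]

def tide_height(t_hours):
    """Tidal height as a sum of harmonic constituents at time t (hours)."""
    return sum(a * math.cos(2 * math.pi * t_hours / period - phase)
               for a, period, phase in CONSTITUENTS)

def to_image_datum(depth_at_lidar_time, t_lidar, t_image):
    """Shift a lidar-measured depth to the tidal stage at the image time."""
    return depth_at_lidar_time + tide_height(t_image) - tide_height(t_lidar)
```

In the embodiment this shift moves every ICESat-2 depth onto the tidal stage of its matched Landsat-8 acquisition before the pixel averaging of formula (4).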
4. An MLP model for optical shallow-water depth prediction is built (FIG. 2). After testing, the number of input neurons equals the number of ρ_rc bands; the Landsat-8 data used in this embodiment have 7 ρ_rc bands spanning 430-2300 nm. There are 3 hidden layers with 128, 32 and 16 neurons respectively, using the rectified linear unit (ReLU) activation; the output layer has 1 neuron with a linear activation and outputs the water depth. The model is trained with the shallow-water depth data set obtained in step 3, with mean squared error (MSE) as the loss function, adaptive moment estimation (Adam) with batch gradient descent (BGD) to optimize the gradient descent, and a learning rate of 0.001; training ends when the loss converges.
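The stated layer sizes (7 → 128 → 32 → 16 → 1, ReLU hidden layers, linear output) can be sketched as a plain NumPy forward pass; the random weights below stand in for trained ones, and the MSE/Adam training loop is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

# Layer sizes from the embodiment: 7 rho_rc bands -> 128 -> 32 -> 16 -> 1.
sizes = [7, 128, 32, 16, 1]
weights = [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n))
           for m, n in zip(sizes[:-1], sizes[1:])]   # He-style init (assumed)
biases = [np.zeros(n) for n in sizes[1:]]

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    """One forward pass: ReLU hidden layers, linear output (predicted depth)."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ W + b)
    return x @ weights[-1] + biases[-1]

# Four synthetic rho_rc spectra -> four depth predictions (untrained weights).
depth = forward(rng.uniform(0.0, 0.2, size=(4, 7)))
```

In practice the weights would be fitted to the (ρ_rc, H_pix) pairs with an MSE loss and the Adam optimizer, as described above.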
5. The trained MLP model from step 4 is applied to Landsat-8 optical satellite images of optical shallow-water regions outside the training area (Little Bahama Bank, Dry Tortugas, the Bight of Acklins, Xisha Islands and Dongsha Islands) (FIG. 3), yielding the water-depth inversion results (FIG. 4).
6. ICESat-2 lidar depth points (distribution shown in FIG. 4) are selected to validate the accuracy of the inversion results: a scatter plot of measured depth (H_true) against inverted depth (H_est) is drawn, and the coefficient of determination R², root-mean-square difference (RMSD) and mean absolute relative difference (MARD) are computed (FIG. 5). Overall, R² is 0.98, RMSD is 0.5 m and MARD is 8.3%. The model trained on Great Bahama Bank and Cay Sal Bank data thus applies well to the waters of the six independent validation areas elsewhere in the world, showing the broad applicability of the invention.
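The three validation statistics can be computed as below; the depth arrays are invented stand-ins for the ICESat-2 H_true and inverted H_est values:

```python
import numpy as np

# Illustrative measured and inverted depths (m), not the embodiment's data.
H_true = np.array([2.0, 5.0, 8.0, 12.0, 15.0])
H_est  = np.array([2.2, 4.8, 8.4, 11.5, 15.3])

# Coefficient of determination R^2.
r2 = 1.0 - np.sum((H_true - H_est) ** 2) / np.sum((H_true - H_true.mean()) ** 2)

# Root-mean-square difference (m).
rmsd = np.sqrt(np.mean((H_est - H_true) ** 2))

# Mean absolute relative difference (dimensionless; often reported as %).
mard = np.mean(np.abs(H_est - H_true) / H_true)
```

With these definitions, the embodiment's figures correspond to R² = 0.98, RMSD = 0.5 m and MARD = 0.083 (8.3%).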

Claims (5)

1. An inversion method of optical shallow water depth based on a neural network is characterized by comprising the following steps:
1) Carrying out gas absorption correction and Rayleigh correction on the original data of the optical image to obtain Rayleigh corrected reflectivity;
2) Calculating the cloud albedo of each pixel in the optical image and, taking it as a reference, performing cloud-pixel masking on the reflectance obtained in step 1) by a threshold method;
3) Dividing the water body corresponding to each pixel in the image data obtained by remote sensing into an optical deep water area and an optical shallow water area according to the known water body type;
4) Constructing a water-depth data set: matching the Rayleigh-corrected reflectance of the optical shallow-water water-body type from step 3) with the measured water depths within the pixels to form the water-depth data set; selecting globally representative shallow-water bodies so that ρ_rc and water depth are matched across different seasons, substrates and atmospheric conditions, improving the coverage of the data; if the time of the measured depth data differs from the satellite data time, correcting the tidal height of the measured data, by harmonic analysis, to the tidal height at the satellite observation time; when several measured depth points fall within one satellite image pixel, calculating the water depth of the satellite image pixel with formula (4):

H_pix = (1/n) Σ_{i=1}^{n} H_i   (4)

where H_pix is the water depth of the satellite image pixel, n is the number of measured depth points within the pixel, and H_i is the depth of the i-th measured point;
5) Establishing a multilayer perceptron neural network model, training the model with the broad-coverage water-depth data set from step 4) to capture the optical shallow-water depth signal, and predicting the water depth;

The multilayer perceptron neural network model is built and trained with the broad-coverage water-depth data set from step 4); the input is ρ_rc and the output is water depth; by training on data of different types, the network learns to extract the optical shallow-water depth signal under any atmospheric condition, that is, the network autonomously learns, for every season and atmospheric condition, the atmospheric parameters of step 1) that affect the water signal, replacing the traditional ocean-colour remote-sensing atmospheric-correction step, and, combined with the different substrate information, predicts the water depth.
2. The inversion method of optical shallow water depth based on a neural network according to claim 1, wherein in step 1) the specific steps of performing gas-absorption correction and Rayleigh correction on the raw data of the optical image are as follows:

The reflectance ρ_t observed by the satellite sensor at the top of the atmosphere is expressed as:

ρ_t(λ) = t_g[ρ_r(λ) + ρ_aer(λ) + ρ_ra(λ) + T(λ)ρ_g + t_s t_v π R_rs(λ)]   (1)

where λ is the wavelength, t_g is the transmittance of the atmospheric gases, ρ_r is the Rayleigh reflectance due to multiple molecular scattering in the absence of aerosol, ρ_aer is the aerosol reflectance due to multiple scattering in the absence of air molecules, ρ_ra is the signal due to the coupling of air molecules and aerosol, T is the direct transmittance, ρ_g is the scattering signal due to sun glint, t_s and t_v are the atmospheric transmittances from the sun to the water surface and from the water surface to the sensor, respectively, and R_rs is the remote-sensing reflectance determined by the optical properties of seawater and the bottom signal;

In formula (1), the gas transmittance t_g is a known quantity and ρ_r and ρ_g can be computed precisely; atmospheric correction in remote sensing is the estimation of t_s, t_v, ρ_aer and ρ_ra to obtain R_rs, and the step of precisely computing and removing the Rayleigh reflectance signal and the sun-glint scattering signal that affect the water signal is called Rayleigh correction; to avoid erroneous estimation, these four terms are kept and the Rayleigh-corrected reflectance ρ_rc is obtained by applying only Rayleigh correction to ρ_t:

ρ_rc(λ) = ρ_t(λ)/t_g(λ) − T(λ)ρ_g − ρ_r(λ) = ρ_aer(λ) + ρ_ra(λ) + t_s t_v π R_rs(λ)   (2)
3. The inversion method of optical shallow water depth based on a neural network according to claim 1, wherein in step 2) the cloud albedo of each pixel in the optical image is calculated, specifically expressed as:

cloud albedo = π[L_t(865) − L_r(865)] / [F_0(865) t_oz cos θ_s]   (3)

where L_t(865) is the top-of-atmosphere radiance at 865 nm, L_r is the Rayleigh radiance, F_0 is the extraterrestrial solar irradiance, θ_s is the solar zenith angle, and t_oz is the ozone transmittance.
4. The inversion method of optical shallow water depth based on a neural network according to claim 1, wherein in step 2) the specific method of cloud-pixel masking of the reflectance obtained in step 1) by a threshold method is as follows: when the cloud cover in an image pixel is thick, most of the light is reflected by the cloud, so the cloud albedo increases; a threshold is set according to the characteristics of the optical satellite, and pixels whose cloud albedo is higher than the threshold are removed to avoid the influence of cloud on the data.
5. The inversion method of the optical shallow water depth based on the neural network as claimed in claim 1, wherein in step 3), the optical deep water area and the optical shallow water area are distinguished by whether sunlight can reach the water bottom; the optical deep water area comprises deep water bodies whose bottom natural light cannot reach and turbid shallow water bodies that natural light cannot penetrate, and the remaining areas are the optical shallow water area.
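Per the abstract, step 5 trains a multi-layer perceptron on the wide-depth-coverage dataset of step 4 to predict water depth from the reflectance signal. A self-contained numpy sketch on synthetic reflectance–depth pairs; the layer sizes, learning rate, and synthetic data are all illustrative, not the patent's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the step-4 training set: Rayleigh-corrected
# band reflectances (inputs) and known bottom depths in metres
# (targets).  A real dataset pairs image pixels with surveyed depths.
X = rng.uniform(0.0, 0.1, size=(256, 6))          # 6 illustrative bands
y = (X @ rng.uniform(-30.0, 30.0, size=6)).reshape(-1, 1) + 10.0

# One-hidden-layer perceptron; sizes are illustrative.
W1 = rng.normal(0.0, 0.5, (6, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)

def predict(X):
    h = np.tanh(X @ W1 + b1)      # hidden activations
    return h, h @ W2 + b2         # activations, predicted depth (m)

rmse0 = float(np.sqrt(np.mean((predict(X)[1] - y) ** 2)))

lr = 1e-2
for _ in range(2000):             # plain full-batch gradient descent
    h, pred = predict(X)
    err = pred - y                # gradient of MSE w.r.t. predictions
    gh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * (X.T @ gh) / len(X); b1 -= lr * gh.mean(0)

rmse = float(np.sqrt(np.mean((predict(X)[1] - y) ** 2)))
```

The trained network would then be applied pixel-wise to the masked ρ_rc values of the optically shallow area to map bottom depth.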
CN202111670094.XA 2021-12-31 2021-12-31 Inversion method of optical shallow water depth based on neural network Active CN114297938B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111670094.XA CN114297938B (en) 2021-12-31 2021-12-31 Inversion method of optical shallow water depth based on neural network

Publications (2)

Publication Number Publication Date
CN114297938A CN114297938A (en) 2022-04-08
CN114297938B true CN114297938B (en) 2024-06-07

Family

ID=80973181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111670094.XA Active CN114297938B (en) 2021-12-31 2021-12-31 Inversion method of optical shallow water depth based on neural network

Country Status (1)

Country Link
CN (1) CN114297938B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116879237B (en) * 2023-09-04 2023-12-12 自然资源部第二海洋研究所 Atmospheric correction method for offshore turbid water body
CN117789056B (en) * 2024-02-27 2024-05-07 杭州蚁联传感科技有限公司 Remote sensing data processing method and device with solar flare and storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102176001A (en) * 2011-02-10 2011-09-07 哈尔滨工程大学 Permeable band ratio factor-based water depth inversion method
CN109657392A (en) * 2018-12-28 2019-04-19 北京航空航天大学 A kind of high-spectrum remote-sensing inversion method based on deep learning
CN113255144A (en) * 2021-06-02 2021-08-13 中国地质大学(武汉) Shallow sea remote sensing water depth inversion method based on FUI partition and Randac
CN113639716A (en) * 2021-07-29 2021-11-12 北京航空航天大学 Depth residual shrinkage network-based water depth remote sensing inversion method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7349806B2 (en) * 2004-09-15 2008-03-25 United States Of America As Represented By The Secretary Of The Navy System and method for extracting optical properties from environmental parameters in water

Non-Patent Citations (2)

Title
A Portable Algorithm to Retrieve Bottom Depth of Optically Shallow Waters from Top-Of-Atmosphere Measurements; Lai Wendian; JOURNAL OF REMOTE SENSING; 2022-02-03; full text *
Shallow-sea water depth inversion from hyperspectral remote sensing based on a neural network method; Shi Yingni; Zhang Tinglu; Zhou Xiaozhong; Wu Yaoping; Shi Lijian; High Technology Letters; 2008-01-25 (No. 01); full text *

Similar Documents

Publication Publication Date Title
CN109581372B (en) Ecological environment remote sensing monitoring method
Hsu et al. A semi-empirical scheme for bathymetric mapping in shallow water by ICESat-2 and Sentinel-2: A case study in the South China Sea
CN114297938B (en) Inversion method of optical shallow water depth based on neural network
CN108303044B (en) Leaf area index obtaining method and system
Xu et al. Deriving highly accurate shallow water bathymetry from Sentinel-2 and ICESat-2 datasets by a multitemporal stacking method
CN111832518B (en) Space-time fusion-based TSA remote sensing image land utilization method
CN110909491A (en) Sea surface salinity inversion algorithm based on wind and cloud meteorological satellite
CN112013822A (en) Multispectral remote sensing water depth inversion method based on improved GWR model
CN113639716A (en) Depth residual shrinkage network-based water depth remote sensing inversion method
Lai et al. A portable algorithm to retrieve bottom depth of optically shallow waters from top-of-atmosphere measurements
Chu et al. Technical framework for shallow-water bathymetry with high reliability and no missing data based on time-series sentinel-2 images
CN115308386B (en) Soil salinity inversion method and system based on CYGNSS satellite data
Matsui et al. Improving the resolution of UAV-based remote sensing data of water quality of Lake Hachiroko, Japan by neural networks
CN116295285A (en) Shallow sea water depth remote sensing inversion method based on region self-adaption
CN113960625B (en) Water depth inversion method based on satellite-borne single-photon laser active and passive remote sensing fusion
Peng et al. A physics-assisted convolutional neural network for bathymetric mapping using ICESat-2 and Sentinel-2 data
CN117274831B (en) Offshore turbid water body depth inversion method based on machine learning and hyperspectral satellite remote sensing image
Xie et al. Satellite-derived bathymetry combined with Sentinel-2 and ICESat-2 datasets using machine learning
Zhang et al. A multiband model with successive projections algorithm for bathymetry estimation based on remotely sensed hyperspectral data in Qinghai Lake
CN116817869B (en) Submarine photon signal determination method using laser radar data
CN111751286B (en) Soil moisture extraction method based on change detection algorithm
CN111650128B (en) High-resolution atmospheric aerosol inversion method based on surface reflectivity library
Zhang et al. Satellite-derived bathymetry model in the Arctic waters based on support vector regression
CN115060656B (en) Satellite remote sensing water depth inversion method based on sparse priori real measurement points
Liu et al. Evaluation of the effectiveness of multiple machine learning methods in remote sensing quantitative retrieval of suspended matter concentrations: A case study of Nansi Lake in North China

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant