CN105651263B - Shallow water depth multi-source remote sensing fusion inversion method - Google Patents
Shallow water depth multi-source remote sensing fusion inversion method
- Publication number
- CN105651263B CN105651263B CN201510975396.6A CN201510975396A CN105651263B CN 105651263 B CN105651263 B CN 105651263B CN 201510975396 A CN201510975396 A CN 201510975396A CN 105651263 B CN105651263 B CN 105651263B
- Authority
- CN
- China
- Prior art keywords
- depth
- water
- water depth
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C13/00—Surveying specially adapted to open water, e.g. sea, lake, river or canal
- G01C13/008—Surveying specially adapted to open water, e.g. sea, lake, river or canal measuring depth of open water
Abstract
The shallow water depth multi-source remote sensing fusion inversion method includes: Step 1: preprocess the multispectral remote sensing images to obtain the sea-surface reflectance; Step 2: acquire and process in-situ measured water depths; Step 3: single-source depth inversion and depth-segment labelling; Step 4: multi-source depth inversion fusion; Step 5: verification of depth inversion accuracy. The n single-source depth inversion results, their corresponding depth-segment label images and the fusion parameters are taken as input, and multi-source depth inversion fusion is carried out pixel by pixel; once the accuracy verification is complete, the final depth values are output as the actual water depths of the remote sensing image. Compared with existing inversion methods, this method exploits the different responses of multiple remote sensing data sources to water depth information, mines the depth data they contain, and improves inversion accuracy through decision-level fusion; it is especially suitable for ocean bathymetry of shallow-water areas under complex conditions.
Description
Technical Field
The invention relates to a method for measuring ocean water depth, belongs to the technical field of space remote sensing, and in particular relates to a shallow water depth multi-source remote sensing fusion inversion method that can use multiple remote sensors to detect ocean water depth.
Background Art
Ocean bathymetry provides the basic data needed to safeguard ship navigation, to build ports, wharves and marine engineering works, and to draw up plans for coasts and islands. Compared with in-situ depth measurement, remote sensing offers wide coverage, short revisit cycles, low cost and high spatial resolution. Since the 1970s, research on passive remote sensing depth inversion models has been carried out worldwide; the commonly used visible-light models fall mainly into analytical, semi-analytical/semi-empirical and statistical models. Using these models, inversion has in recent years been applied to bathymetry of rivers, lakes, reservoirs, islands and coastal zones.
Visible-light remote sensing inversion is an effective way to obtain water depths over complex shallow-sea terrain, in particular for areas that ships cannot approach or enter. However, because such models struggle to reconcile physical mechanisms with parameterisation, the accuracy of existing visible-light depth inversion models has limited room for further improvement.
Multi-source remote sensing depth inversion can overcome the environmental limits of single-source imaging, and the richer band information and differing spectral resolutions of multi-source imagery also favour the extraction of depth information. Multi-source data have been applied in depth inversion research, but mostly for the interpolation of spatial information; they have not been developed and applied at the level of decision fusion. Decision fusion can make full use of existing remote sensing image resources and information, providing a new way to improve the accuracy of optical remote sensing depth inversion.
A Chinese patent (application No. 201310188829.4, published as CN 104181515A) discloses "a shallow water depth inversion method based on blue-yellow band hyperspectral data". It addresses the fact that most optical models for inverting the depth of clean water bodies were built for multispectral data, which are constrained by wide bands and limited spectral information. Based on the light-attenuation mechanism of water, that invention proposed a new method that uses blue-yellow band (450-610 nm) hyperspectral data to invert shallow-water depths in clean water. The method can accurately extract depth distributions within 30 m, and for a given remote sensor the algorithm coefficients need to be calibrated only once, markedly improving the algorithm's generality. However, it uses imagery from a single-source remote sensor as the detection data source, so the usable spectral bands and spectral information of the imagery are limited, which does not help improve the accuracy of shallow-water depth inversion for bathymetry; in particular, its detection of shallow-sea depths under complex conditions is insufficient.
Summary of the Invention
The invention provides a shallow water depth multi-source remote sensing fusion inversion method based on decision fusion, to solve the problems of the prior art, in which only single-source remote sensing imagery is used as the data source, the usable spectral bands and spectral information of the imagery are limited, and bathymetric precision and accuracy are poor.
The shallow water depth multi-source remote sensing fusion inversion method includes the following steps:
Step 1: preprocess the multispectral remote sensing images to obtain the sea-surface reflectance.
The preprocessing includes radiance conversion, atmospheric correction and sun-glint removal.
Step 2: acquisition and processing of in-situ measured water depths.
Obtain the water depth data of the experimental area and the corresponding longitude/latitude coordinates; confirm the tide height at the survey time from tide tables and reduce the soundings to the theoretical depth datum; then, according to the acquisition time of the multispectral image, apply a tidal correction to the datum depths to obtain the instantaneous water depths.
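To make the two tide reductions concrete, the following is a minimal Python sketch; the function names and the sign convention (tide height measured above the theoretical depth datum) are assumptions for illustration, not prescribed by the method.

```python
def to_chart_datum(measured_depth_m: float, tide_at_survey_m: float) -> float:
    """Reduce an in-situ sounding to the theoretical depth datum:
    subtract the tide height (from tide tables) at the survey moment."""
    return measured_depth_m - tide_at_survey_m


def to_instantaneous_depth(datum_depth_m: float, tide_at_image_m: float) -> float:
    """Restore the instantaneous depth at the image acquisition time,
    for regression against the pixel reflectances."""
    return datum_depth_m + tide_at_image_m
```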
Step 3: single-source depth inversion and depth-segment labelling.
Based on the relation between the depth at each depth control point and the reflectance of the corresponding image pixel, statistical regression with a multi-band model is performed; the parameters obtained from the depth inversion of that source image are output as one input to the multi-source inversion fusion, and the multi-band model is calibrated. The multi-band model is

$$Z = A_0 + \sum_{i=1}^{n} A_i X_i \quad (1)$$

$$X_i = \ln(\rho_i - \rho_{si}) \quad (2)$$

where Z is the water depth, n is the number of bands participating in the inversion, $A_0$ and $A_i$ are coefficients to be determined, $\rho_i$ is the reflectance in band i, and $\rho_{si}$ is the deep-water reflectance in that band;
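As an illustration of Eqs. (1)-(2), the sketch below calibrates the log-linear model at the control points by ordinary least squares and then applies it pixel by pixel. The use of NumPy and plain least squares is an assumption of this sketch; the text specifies only "statistical regression".

```python
import numpy as np

def calibrate_multiband_model(refl, deep_refl, depths):
    """Fit Z = A0 + sum_i Ai * ln(rho_i - rho_si) at the control points.

    refl:      (m, n) reflectance of the n bands at m control points
    deep_refl: (n,) deep-water reflectance rho_si per band
    depths:    (m,) instantaneous depths Z at the control points
    Returns (A0, A) with A of shape (n,).
    """
    X = np.log(refl - deep_refl)                    # X_i = ln(rho_i - rho_si)
    G = np.column_stack([np.ones(len(depths)), X])  # design matrix [1, X_1..X_n]
    coeffs, *_ = np.linalg.lstsq(G, depths, rcond=None)
    return coeffs[0], coeffs[1:]

def invert_depth(refl, deep_refl, A0, A):
    """Apply the calibrated model (vectorised over pixels)."""
    return A0 + np.log(refl - deep_refl) @ A
```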
The depth control points are divided into several depth segments as input, and the mean relative error of each depth segment is output as another input to the multi-source fusion, i.e., a fusion parameter. The fusion parameters input to the multi-source fusion also include the Kappa coefficient of each single-source image and the segment mean accuracy of every depth segment;
$$\delta_k = \frac{1}{n}\sum_{i=1}^{n}\frac{|z_i - z_i'|}{z_i} \quad (3)$$

$$\kappa = \frac{n\sum_i x_{ii} - \sum_i x_{i+}x_{+i}}{n^2 - \sum_i x_{i+}x_{+i}} \quad (4)$$

$$\delta_{ma\_k} = \frac{PA_k + UA_k}{2} \quad (5)$$

where n is the number of depth control points and k denotes a depth segment. In Eq. (3), $\delta_k$ is the mean relative error, $z_i$ the measured value of the i-th depth control point and $z_i'$ its inverted value. In Eq. (4), $\kappa$ is the Kappa coefficient, $x_{ii}$ the number of correctly classified control points, and $x_{i+}$, $x_{+i}$ the row and column marginal totals of the error matrix obtained from the segment statistics of the control points. In Eq. (5), $\delta_{ma\_k}$ is the segment mean accuracy, $PA_k$ the producer's accuracy of the k-th depth segment and $UA_k$ its user's accuracy;
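The three fusion parameters of Eqs. (3)-(5) can be computed from the control points and their error matrix as sketched below; the array conventions (rows = inverted segments, columns = measured segments, as in Table 1 of the embodiment) are assumptions of this illustration.

```python
import numpy as np

def mean_relative_error(z_meas, z_inv):
    """Eq. (3): mean relative error over a set of control points."""
    z_meas, z_inv = np.asarray(z_meas, float), np.asarray(z_inv, float)
    return np.mean(np.abs(z_meas - z_inv) / z_meas)

def kappa(conf):
    """Eq. (4): Kappa coefficient of a k x k error (confusion) matrix."""
    conf = np.asarray(conf, float)
    n = conf.sum()                                        # total control points
    chance = (conf.sum(axis=1) * conf.sum(axis=0)).sum()  # sum of x_i+ * x_+i
    return (n * np.trace(conf) - chance) / (n**2 - chance)

def segment_mean_accuracy(conf, k):
    """Eq. (5): mean of producer's (PA_k) and user's (UA_k) accuracy
    for depth segment k (0-based row/column index)."""
    conf = np.asarray(conf, float)
    pa = conf[k, k] / conf[:, k].sum()   # producer's accuracy
    ua = conf[k, k] / conf[k, :].sum()   # user's accuracy
    return 0.5 * (pa + ua)
```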
Using the fusion parameters and the full-scene remote sensing image, the single-source depth inversion result is computed, corrected to the theoretical depth datum, and then segmented to obtain the depth-segment label image;
Step 4: multi-source depth inversion fusion.
The n single-source depth inversion results with their corresponding depth-segment label images and fusion parameters are taken as input, and multi-source depth inversion fusion is carried out pixel by pixel, specifically:
a) When the number of votes t for some depth segment satisfies t ≥ ⌊n/2⌋ + 1, the inversion results of ⌊n/2⌋ + 1 or more images fall within the same depth segment, where ⌊·⌋ denotes rounding down. In this case, if two or more images yield equal inverted depths, that value is assigned directly to the current pixel; otherwise the mean relative errors and segment mean accuracies of these images in that depth segment are compared, and the value from the image with the largest segment mean accuracy is taken as the final pixel value. Only when the image with the largest segment mean accuracy also has the largest segment mean relative error is the image with the second-largest mean accuracy chosen instead;
b) When the maximum number of votes t satisfies 1 < t ≤ ⌊n/2⌋ and x segments (x ≥ 2) each receive t votes, the Kappa coefficients and the mean accuracies of the n classifiers in their respective depth segments are compared. If the image with the largest Kappa coefficient and the image with the largest mean accuracy indicate the same depth segment and are the same image, the depth value of that image's pixel is taken as the result; if they are not the same image, the one of the two with the smaller mean relative error in that depth segment is chosen. If the depth segment indicated by the image with the largest Kappa coefficient differs from that of the image with the largest mean accuracy, the former's depth value is taken. If only one segment receives t votes, then within that segment the image with the largest segment mean accuracy gives the final pixel value; when that image also has the largest segment mean relative error, the image with the second-largest mean accuracy is chosen;
c) When the maximum number of votes is t = 1, the depth value of the image with the largest Kappa coefficient is selected.
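A per-pixel routine in the spirit of rules a)-c) is sketched below. It simplifies some tie-breaking corner cases (for example several segments tied at t votes in rule b), so it approximates, rather than reproduces exactly, the procedure above.

```python
from collections import Counter

def fuse_pixel(depths, segments, seg_mre, seg_acc, kappas, n):
    """Decision fusion for one pixel.

    depths:   n single-source inverted depths at this pixel
    segments: n depth-segment labels for those depths
    seg_mre:  seg_mre[src][seg] -> mean relative error, Eq. (3)
    seg_acc:  seg_acc[src][seg] -> segment mean accuracy, Eq. (5)
    kappas:   kappas[src]       -> Kappa coefficient, Eq. (4)
    """
    seg, t = Counter(segments).most_common(1)[0]

    if t >= n // 2 + 1:                       # rule a): clear majority
        srcs = [i for i in range(n) if segments[i] == seg]
        vals = [depths[i] for i in srcs]
        if len(set(vals)) < len(vals):        # >= 2 equal inverted depths
            return max(set(vals), key=vals.count)
        by_acc = sorted(srcs, key=lambda i: seg_acc[i][seg], reverse=True)
        worst = max(srcs, key=lambda i: seg_mre[i][seg])
        return depths[by_acc[1] if by_acc[0] == worst else by_acc[0]]

    if t > 1:                                 # rule b)
        bk = max(range(n), key=lambda i: kappas[i])       # best Kappa
        srcs = [i for i in range(n) if segments[i] == seg]
        ba = max(srcs, key=lambda i: seg_acc[i][seg])     # best accuracy
        if segments[bk] != seg:
            return depths[bk]                 # trust the best-Kappa source
        if bk == ba:
            return depths[bk]
        return depths[min((bk, ba), key=lambda i: seg_mre[i][seg])]

    return depths[max(range(n), key=lambda i: kappas[i])]  # rule c): t == 1
```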
Step 5: verification of depth inversion accuracy.
The accuracy verification uses depth check points to compare the single-source inversion results before fusion with the multi-source result after fusion; once the verification is complete, the final depth values are output as the actual water depths of the remote sensing image.
In the shallow water depth multi-source remote sensing fusion inversion method described above, the radiance conversion in Step 1 converts the DN values of the remote sensing image into radiance; the sun-glint removal may use the median, mean or wavelet method; the atmospheric correction may use the FLAASH, dark-pixel or 6S method.
The reference image chosen for decision fusion in the present invention depends on the scale and resolution required of the depth image; there is no special requirement. If the coarsest-resolution image were used as the reference, fusion would run somewhat faster, but spatial matching would represent each whole pixel by the decision-fusion value at its centre coordinates, losing a considerable amount of information. Therefore, weighing processing efficiency against fusion accuracy, it is preferable to take the depth inversion result generated from the highest-resolution image as the reference, match the centre coordinates of its pixels to the positions of the depth images from the other remote sensing sources, and collect all single-source inverted depths and other information at those coordinates for decision fusion, so as to reduce the information lost and ensure inversion accuracy.
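One way to realise this pixel-centre matching is to sample every coarser single-source depth raster at the centre coordinates of the reference raster's pixels. The sketch below uses rasterio; the library choice and the file-based workflow are assumptions, not prescribed by the invention.

```python
import numpy as np
import rasterio
from rasterio.transform import xy

def sample_at_reference(ref_path, other_paths):
    """Stack all single-source depths on the reference image's pixel grid."""
    with rasterio.open(ref_path) as ref:
        ref_depth = ref.read(1)
        rows, cols = np.indices(ref_depth.shape)
        xs, ys = xy(ref.transform, rows.ravel(), cols.ravel())
    coords = list(zip(xs, ys))
    layers = [ref_depth.ravel()]
    for path in other_paths:                 # sample the coarser rasters
        with rasterio.open(path) as src:
            layers.append(np.array([v[0] for v in src.sample(coords)]))
    return np.stack(layers)                  # (n_sources, n_pixels)
```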
Beneficial effects of the invention:
Compared with existing inversion methods, this method exploits the different responses of multiple remote sensing data sources to depth information, broadens the range of usable spectral bands and spectral information, mines the depth data they contain, and improves inversion accuracy through decision fusion; it is especially suitable for ocean bathymetry of shallow-water areas under complex conditions.
Brief Description of the Drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the flow chart of the multi-source depth inversion fusion of the present invention;
Fig. 3a is a scatter plot of the multi-source depth fusion result;
Fig. 3b is a scatter plot of the single-source WorldView-2 depth inversion result;
Fig. 3c is a scatter plot of the single-source Pleiades depth inversion result;
Fig. 3d is a scatter plot of the single-source QuickBird depth inversion result;
Fig. 3e is a scatter plot of the single-source SPOT-6 depth inversion result;
Fig. 4 is the multi-source remote sensing depth inversion fusion result of the present invention.
Detailed Description of the Embodiments
To make the purpose, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely in conjunction with the accompanying drawings.
With reference to Fig. 1, the shallow water depth multi-source remote sensing fusion inversion method specifically includes the following steps:
Step 1: multispectral remote sensing data preprocessing.
First, the multispectral images acquired by the multi-source sensors and participating in the depth inversion fusion are preprocessed, including radiance conversion, atmospheric correction and sun-glint removal. Radiance conversion turns image DN values into radiance; different remote sensing data products use different conversion formulas, generally one of the following two forms:
The corresponding parameters can all be obtained from the image metadata file. After the multispectral radiance image is obtained, atmospheric correction with FLAASH, the dark-pixel method or 6S yields the sea-surface reflectance data; then, to remove interference from sun glint, floating objects and the like, glint is removed with the median, mean or wavelet method.
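The exact conversion formulas are product-specific and supplied with each image's metadata; as a hedged illustration only, two commonly used forms (a gain/offset form, and a calibration-factor form used by WorldView-style products) and a median-filter deglinting step might look like this:

```python
import numpy as np
from scipy.ndimage import median_filter

def dn_to_radiance_gain_offset(dn, gain, offset):
    """Common form 1: L = gain * DN + offset, per band (metadata values)."""
    return gain * np.asarray(dn, float) + offset

def dn_to_radiance_calfactor(dn, abs_cal_factor, effective_bandwidth):
    """Common form 2: L = absCalFactor * DN / effectiveBandwidth."""
    return abs_cal_factor * np.asarray(dn, float) / effective_bandwidth

def deglint_median(reflectance, size=5):
    """Sun-glint suppression by median filtering, one of the three
    options (median / mean / wavelet) named in the text."""
    return median_filter(reflectance, size=size)
```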
Step 2: acquisition and processing of in-situ measured water depths.
The depth data of the experimental area are acquired with a multibeam echo sounder or other bathymetric means, together with the corresponding longitude/latitude coordinates. The soundings are corrected with the tide height at the survey time, confirmed from tide tables.
Step 3: single-source depth inversion and depth-segment labelling.
Based on the relation between the depth at each depth control point and the reflectance of the corresponding image pixel, statistical regression with a multi-band model is performed; the parameters obtained from the depth inversion of that source image are output as one input to the multi-source inversion fusion, and the multi-band model is calibrated. The multi-band model is

$$Z = A_0 + \sum_{i=1}^{n} A_i X_i \quad (1)$$

$$X_i = \ln(\rho_i - \rho_{si}) \quad (2)$$

where Z is the water depth, n is the number of bands participating in the inversion, $A_0$ and $A_i$ are coefficients to be determined, $\rho_i$ is the reflectance in band i, and $\rho_{si}$ is the deep-water reflectance in that band.
The depth control points are divided into several depth segments as input, and the mean relative error of each segment is output as another input to the multi-source fusion, i.e., a fusion parameter;
The input fusion parameters also include the Kappa coefficient of each single-source image and the segment mean accuracy of every depth segment;
$$\delta_k = \frac{1}{n}\sum_{i=1}^{n}\frac{|z_i - z_i'|}{z_i} \quad (3)$$

$$\kappa = \frac{n\sum_i x_{ii} - \sum_i x_{i+}x_{+i}}{n^2 - \sum_i x_{i+}x_{+i}} \quad (4)$$

$$\delta_{ma\_k} = \frac{PA_k + UA_k}{2} \quad (5)$$

where n is the number of depth control points and k denotes a depth segment. In Eq. (3), $\delta_k$ is the mean relative error, $z_i$ the measured value of the i-th depth control point and $z_i'$ its inverted value. In Eq. (4), $\kappa$ is the Kappa coefficient, $x_{ii}$ the number of correctly classified control points, and $x_{i+}$, $x_{+i}$ the row and column marginal totals of the error matrix from the segment statistics of the control points; $\delta_{ma\_k}$ is the segment mean accuracy, $PA_k$ the producer's accuracy of the k-th depth segment and $UA_k$ its user's accuracy;
Using the obtained parameters and the full-scene remote sensing image, the single-source depth inversion result is computed and its instantaneous depths are corrected to the theoretical depth datum; the result is then segmented to obtain the depth-segment label image;
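Segmenting the datum-corrected depth raster into labelled segments is then a one-liner; the 2 m / 5 m / 10 m breakpoints below are those of this embodiment, not fixed by the method.

```python
import numpy as np

def label_depth_segments(depth, breaks=(2.0, 5.0, 10.0)):
    """Label each pixel 1-4 by the depth segment its value falls in."""
    return np.digitize(depth, bins=breaks) + 1
```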
Here the error matrix of the WorldView-2 image at the depth control points, as used in the multi-source inversion fusion, is taken as an example and shown in Table 1.
Table 1: Error matrix of the WorldView-2 depth control points
In the error matrix, columns represent the ground-reference verification information and rows the classification obtained from the remote sensing data; the main-diagonal elements (such as $x_{11}$, written $x_{ii}$ in the formulas) are correctly classified pixels, and the off-diagonal elements count pixels whose remote sensing classification disagrees with the ground reference. In this experiment, classes 1-4 denote the four depth segments split at 2 m, 5 m and 10 m, z is the measured depth and z' the inverted depth. Columns give the number of control points in each of the four measured depth segments, and rows the number of control points in the four segments obtained from the image inversion; the main diagonal counts points whose inverted depth falls in the correct measured depth segment, while the off-diagonal entries count wrongly segmented points. The producer's accuracy (PA) is the probability that, given a depth control point of class k, the image inversion assigns the corresponding pixel to class k; it is obtained by dividing the number of correct classifications in class k by the sum of column k (written $x_{+i}$ in the formulas). The user's accuracy (UA) is the percentage of control points assigned to class k by the image inversion whose true measured depth also belongs to class k; it is computed by dividing the number correctly classified as k by the total assigned to k (i.e., the sum of row k, $x_{i+}$ in the formulas).
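For illustration, the error matrix itself can be accumulated from the control points' measured and inverted segment labels as below (1-based labels; rows = inverted segments, columns = measured segments, matching Table 1). The kappa() and segment_mean_accuracy() functions from the earlier sketch then apply directly to it.

```python
import numpy as np

def error_matrix(seg_measured, seg_inverted, n_segments=4):
    """Count control points per (inverted segment, measured segment) pair."""
    m = np.zeros((n_segments, n_segments), dtype=int)
    for true_k, pred_k in zip(seg_measured, seg_inverted):
        m[pred_k - 1, true_k - 1] += 1
    return m
```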
The Kappa coefficient measures the agreement, or accuracy, between a remote sensing classification map and the reference data, expressed through the probability of agreement given by the main diagonal and the row/column totals. The Kappa coefficient of 0.7686 in the example can be interpreted as the depth distribution obtained by inverting this WorldView-2 image being 76.86% better than a random assignment of depth segments.
The closer the producer's and user's accuracies are to 1 the better, the ideal case being both equal to 1. Therefore, to weigh them without bias and to reduce the number of parameters finally entering the decision fusion, this embodiment takes their mean as the fusion parameter, i.e., the segment mean accuracy.
Step 4: multi-source depth inversion fusion.
The single-source depth inversion results, the depth-segment label images and the fusion parameters are taken as the input of the multi-source fusion, which is carried out pixel by pixel. The number of votes that the four single sources (i.e., remote sensing images) cast for the depth segment of the current pixel determines the final value, as follows:
a) When some depth segment receives 3 or more votes, the inversion results of 3 or more images fall within the same depth segment. In this case, if two or more images yield equal inverted depths, that value is assigned directly to the current pixel; otherwise the mean relative errors and mean accuracies of these images in the segment are compared, and as far as possible the image with large mean accuracy and small mean relative error provides the depth of the current pixel. Given the larger mean accuracy, that image's mean relative error in the segment is inspected; if the image with the largest mean accuracy also has the largest mean relative error, it is discarded in favour of the image with the second-largest mean accuracy;
b) When the maximum number of votes equals 2 with a 2-2 split, the inverted depths of two pairs of images fall in two depth segments. The Kappa coefficients and the mean accuracies of the four classifiers in their respective depth segments are then inspected: if the image with the largest Kappa coefficient and the image with the largest mean accuracy indicate the same depth segment and are the same image, its pixel depth is chosen as the result; if they are not the same image, the one of the two with the smaller mean relative error in that segment is chosen. If the depth segment indicated by the image with the largest Kappa coefficient differs from that of the image with the largest mean accuracy, the former's depth value is chosen. When the maximum number of votes equals 2 with a 2-1-1 split, then within the segment with two votes the image with large mean accuracy and small mean relative error is, as far as possible, chosen to provide the depth of the current pixel;
c) When the maximum number of votes is 1, the classification results of the four single sources all differ, i.e., the inverted depths of the four scenes fall in four different depth segments; in this case the image with the largest Kappa coefficient is trusted.
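A hypothetical call of the fuse_pixel sketch given earlier, for this four-source embodiment (ordered WorldView-2, Pleiades, QuickBird, SPOT-6), might look as follows; every number is a placeholder, not a value measured in the experiment.

```python
seg_mre = [{1: 0.90, 2: 0.29, 3: 0.15, 4: 0.11},   # WorldView-2, Eq. (3)
           {1: 0.43, 2: 0.40, 3: 0.14, 4: 0.23},   # Pleiades
           {1: 0.55, 2: 0.33, 3: 0.12, 4: 0.08},   # QuickBird
           {1: 0.47, 2: 0.05, 3: 0.07, 4: 0.06}]   # SPOT-6
seg_acc = [{1: 0.5, 2: 0.7, 3: 0.7, 4: 0.7},       # Eq. (5) per segment
           {1: 0.9, 2: 0.8, 3: 0.6, 4: 0.6},
           {1: 0.6, 2: 0.6, 3: 0.7, 4: 0.7},
           {1: 0.8, 2: 0.8, 3: 0.8, 4: 0.9}]

fused = fuse_pixel(depths=[3.1, 3.4, 2.9, 3.3],    # one pixel, metres
                   segments=[2, 2, 2, 2],          # all vote for segment 2
                   seg_mre=seg_mre, seg_acc=seg_acc,
                   kappas=[0.61, 0.83, 0.80, 0.85], n=4)
# rule a) applies: 4 votes for segment 2; Pleiades has the highest accuracy
# there but also the worst relative error, so SPOT-6's 3.3 m is chosen.
```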
Step 5: verification of depth inversion accuracy.
Check points are used to verify the accuracy of the single-source and multi-source inversion results, computing the mean relative error and mean absolute error both overall and per depth segment, thereby validating the accuracy of the multi-source depth inversion fusion.
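A verification helper in the spirit of this step (the names, inputs and report format are assumed):

```python
import numpy as np

def verify(z_check, singles, z_fused, names):
    """Print mean relative error (MRE) and mean absolute error (MAE)
    at the check points for each single-source result and the fused one."""
    zc = np.asarray(z_check, dtype=float)
    for name, z in zip(list(names) + ["fused"], list(singles) + [z_fused]):
        err = np.abs(zc - np.asarray(z, dtype=float))
        print(f"{name}: MRE={np.mean(err / zc):.1%}  MAE={err.mean():.2f} m")
```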
(1) Fusion parameters and execution of the fusion model
In this embodiment, multi-source depth inversion fusion is carried out and cross-validated on QuickBird imagery of 10 January 2008, WorldView-2 imagery of 7 February 2010, Pleiades imagery of 9 March 2012 and SPOT-6 imagery of 5 April 2013. Table 2 lists the inversion parameters obtained with the blue-green-red three-band log-linear model, together with the segment mean relative errors at the control points. The multi-source fusion takes the WorldView-2 depth inversion image as the basis and performs decision fusion with the depth inversion results of the Pleiades, QuickBird and SPOT-6 images. In Table 2 the image types are ordered left to right by increasing spatial resolution, and indices 1-4 of the segment mean accuracy and segment mean relative error denote the four depth segments split at 2 m, 5 m and 10 m.
Table 2: Single-source depth inversion parameters and multi-source decision fusion parameters
Comparative analysis shows that the SPOT-6 image has the highest overall segment accuracy and the QuickBird image the worst. Considering segment mean accuracy and segment mean relative error together, the Pleiades image is best in segment 1, with the highest segment accuracy and a fairly small mean relative error there, followed by the SPOT-6 image, whose mean relative error is 8 percentage points smaller but whose segment mean accuracy is inferior. Although the Pleiades image has the best segment mean accuracy in segment 2, its mean relative error there is the largest of the four scenes, so with a high segment mean accuracy assured, the SPOT-6 image is also better in mean relative error. In segments 3 and 4 the SPOT-6 image is the best in both segment mean accuracy and segment mean relative error.
As shown in Fig. 2, the fused inversion depth image generated by the multi-source fusion has 1002 rows and 1054 columns, i.e., 1,056,108 pixels in total. By count, the pixels whose depth was determined by rule 2 are the most numerous at 860,835, or 81.51% of all pixels, showing that in the pixel-by-pixel decision fusion the maximum vote count of most pixels exceeds half the total; that is, at least three scenes place the pixel's inverted depth in the same depth segment, and the final value comes from the depth inversion image with the largest mean accuracy in that segment. The next most executed is rule 6, with 72,132 pixels (6.83%); the least is rule 9, with only 128 pixels. Rule 9 applies only when the depths inverted from the four images all fall in different depth segments, which means that although the four scenes differ in depth inversion capability, their depth segmentation results do not differ greatly; only a very small fraction diverge markedly, so all four scenes can play a role in the decision fusion.
(2) Overall accuracy verification of the multi-source fusion
The accuracy of the multi-source fusion result is compared with the pre-fusion single-source results; the accuracy evaluation indices obtained are shown in Table 3 below.
Table 3: Overall accuracy comparison of the multi-source depth inversion fusion
All three evaluation indices show that the result after multi-source decision fusion improves markedly on the original image inversions. Ordered by mean relative error from small to large: the decision fusion image, the SPOT-6 image, the QuickBird image, the Pleiades image and the WorldView-2 image. Compared with the worst, WorldView-2, the fused image's mean relative error at the depth control points falls by more than 40 percentage points; and since the result image was initialised before fusion from that scene's inversion, the decision fusion indeed greatly improves the original inversion result. Even against the SPOT-6 image, the most accurate of the four scenes, the fused image reduces the relative error by 12.7 percentage points. The minimum and maximum mean absolute errors differ by 1.4 m, between the fused or SPOT-6 image and the Pleiades image; the QuickBird and WorldView-2 images have larger mean absolute errors of 1.6 m and 1.8 m, as much as 0.8 m and 1 m above the minimum. The Kappa coefficient used to evaluate segment accuracy likewise shows the fused image to be the most accurate in assigning pixels to depth segments, followed by the Pleiades and SPOT-6 images, with the segment label image inverted from QuickBird less accurate. It is generally held that a Kappa value above 0.80 indicates strong agreement between the classification map and the ground reference; the Kappa values of these four images all exceed that threshold, so their agreement is fairly good. The worst is the WorldView-2 image, last with a Kappa value of 0.6139.
Figs. 3a-3e give scatter plots of measured versus inverted depths before and after the multi-source fusion. They show that, except for the Pleiades image, the other three scenes invert depth points shallower than 2 m poorly. In the WorldView-2 scatter plot the data points cluster tightly, and its large mean relative error should be an effect of the shallow-water points. The maximum depths inverted from the WorldView-2 and Pleiades images exceed the range of the measured check points, which does not occur in the QuickBird and SPOT-6 scatter plots.
As shown in Fig. 4, this embodiment combines the single-source inversion results of different resolutions through decision fusion. The waters shallower than 20 m ringing the island show fine texture and small depth variation, and the reef flat on which North Island sits is clearly visible; the texture of the deeper water to the southwest and northeast of the island is coarser; at a depth of about 20 m the depth gradient is large and the shallow-to-deep transition is evident.
(3) Segment accuracy verification of the multi-source fusion
Observing the segment error distribution of the multi-source fusion in Table 4, neither the mean relative error nor the mean absolute error shows any regular increase or decrease with depth.
Table 4: Segment error comparison of the multi-source depth inversion fusion
In the 0-2 m segment, although the mean relative errors of the inversion results are generally low, the gaps are still very marked. The most accurate is the multi-source fusion image, with a mean relative error of 39.1% and a mean absolute error of 0.3 m. Next is the Pleiades image, 3.9% worse in mean relative error with an equal mean absolute error. Then come the QuickBird, SPOT-6 and WorldView-2 images, whose mean relative and absolute errors increase steadily; in particular, the WorldView-2 image's mean relative error in this segment is twice that of the SPOT-6 image and as much as 210.1% above the best fused image, while its mean absolute error of 1.1 m is almost four times that of the fused image. In the 2-5 m segment the fused and SPOT-6 images are the most accurate, with equal mean relative and absolute errors of 5.3% and 0.2 m. The WorldView-2 image's inversion in this segment improves greatly over the shallow segment, with a mean relative error of 28.8% and a mean absolute error of 1.0 m, yet the gap to the best performers in the segment is already considerable. The QuickBird image ranks fourth with 32.8% and 1.4 m, while the Pleiades image, best in the 0-2 m segment, is the worst here. In the 5-10 m segment, ordered by increasing mean relative and absolute error: the fused and SPOT-6 images, the QuickBird image, the Pleiades image and the WorldView-2 image; the smallest and largest mean relative errors differ by 8 percentage points, and the mean absolute errors by at most 0.6 m. In the 10-20 m segment, the smallest mean relative and absolute errors come from the SPOT-6 image and the largest from the Pleiades image, with values of 6.3% and 22.5%, and 3.4 m and 0.9 m, respectively. The fused image performs well, with a mean relative error of 6.4% and a mean absolute error of 1.0 m.
In the multi-source fusion inversion, the SPOT-6 image, apart from its poor performance in the shallow segment, is the most accurate of all the single-source depth inversion images in every other segment. The Pleiades image effectively makes up for SPOT-6's weakness in shallow water, but its accuracy at 2-5 m and 10-20 m is the worst of the four scenes. The WorldView-2 image has the worst accuracy in the 0-2 m and 5-10 m segments and middling accuracy in the other two. The QuickBird image's inversion accuracy hovers at a medium level in every segment. Except in the 10-20 m segment, where it is slightly below the SPOT-6 image, the multi-source fusion image has the best inversion accuracy in every depth segment.
Compared with existing inversion methods, the present invention exploits the different responses of multiple remote sensing data sources to depth information, broadens the range of usable spectral bands and spectral information, mines the depth data they contain, and improves inversion accuracy through decision fusion; it is especially suitable for ocean bathymetry of shallow-water areas under complex conditions.
Technical content not described in detail in the present invention is common knowledge in the art.
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510975396.6A CN105651263B (en) | 2015-12-23 | 2015-12-23 | Shallow water depth multi-source remote sensing fusion inversion method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510975396.6A CN105651263B (en) | 2015-12-23 | 2015-12-23 | Shallow water depth multi-source remote sensing fusion inversion method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105651263A CN105651263A (en) | 2016-06-08 |
CN105651263B true CN105651263B (en) | 2018-02-23 |
Family
ID=56476649
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510975396.6A Expired - Fee Related CN105651263B (en) | 2015-12-23 | 2015-12-23 | Shallow water depth multi-source remote sensing fusion inversion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105651263B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109059796B (en) * | 2018-07-20 | 2020-07-31 | 自然资源部第三海洋研究所 | Shallow sea water depth multispectral satellite remote sensing inversion method for water depth control point-free area |
CN109657392A (en) * | 2018-12-28 | 2019-04-19 | 北京航空航天大学 | A kind of high-spectrum remote-sensing inversion method based on deep learning |
CN109631862A (en) * | 2019-01-22 | 2019-04-16 | 青岛秀山移动测量有限公司 | A kind of multi-Sensor Information Fusion Approach of intertidal zone integration mapping |
CN111561916B (en) * | 2020-01-19 | 2021-09-28 | 自然资源部第二海洋研究所 | Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image |
CN111651707B (en) * | 2020-05-28 | 2023-04-25 | 广西大学 | A Tide Level Retrieval Method Based on Satellite Remote Sensing Images in Optical Shallow Water Area |
CN112013822A (en) * | 2020-07-22 | 2020-12-01 | 武汉智图云起科技有限公司 | Multispectral remote sensing water depth inversion method based on improved GWR model |
CN111947628B (en) * | 2020-08-25 | 2022-05-27 | 自然资源部第一海洋研究所 | Linear water depth inversion method based on inherent optical parameters |
CN113326470B (en) * | 2021-04-11 | 2022-08-16 | 桂林理工大学 | Remote sensing water depth inversion tidal height correction method |
CN113255144B (en) * | 2021-06-02 | 2021-09-07 | 中国地质大学(武汉) | A shallow sea remote sensing bathymetric inversion method based on FUI partition and Ransac |
CN113639716A (en) * | 2021-07-29 | 2021-11-12 | 北京航空航天大学 | A water depth remote sensing inversion method based on depth residual shrinkage network |
CN113793374B (en) * | 2021-09-01 | 2023-12-22 | 自然资源部第二海洋研究所 | Method for inverting water depth based on water quality inversion result by improved four-band remote sensing image QAA algorithm |
CN114943161B (en) * | 2022-07-27 | 2022-09-27 | 中国水利水电科学研究院 | A topographic inversion method for inland lakes based on multi-source remote sensing data |
CN117514148B (en) * | 2024-01-05 | 2024-03-26 | 贵州航天凯山石油仪器有限公司 | Oil-gas well working fluid level identification and diagnosis method based on multidimensional credibility fusion |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9046363B2 (en) * | 2012-04-27 | 2015-06-02 | SATOP GmbH | Using multispectral satellite data to determine littoral water depths despite varying water turbidity |
CN104457901B (en) * | 2014-11-28 | 2018-01-05 | 南京信息工程大学 | A kind of method and system for determining the depth of water |
2015-12-23: application CN201510975396.6A filed in China; granted as patent CN105651263B; current status: not active (Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
CN105651263A (en) | 2016-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105651263B (en) | Shallow water depth multi-source remote sensing fusion inversion method | |
CN107063197B (en) | Reservoir characteristic curve extraction method based on spatial information technology | |
CN103363962B (en) | Remote sensing evaluation method of lake water reserves based on multispectral images | |
CN101799921B (en) | Cloud detection method of optic remote sensing image | |
CN112013822A (en) | Multispectral remote sensing water depth inversion method based on improved GWR model | |
CN104361590B (en) | High-resolution remote sensing image registration method with control points distributed in adaptive manner | |
CN105469393A (en) | Shallow water depth multi-temporal remote sensing image inversion method based on decision fusion | |
CN107610164B (en) | High-resolution four-number image registration method based on multi-feature mixing | |
CN105139375B (en) | Combining global DEM and stereoscopic vision a kind of satellite image cloud detection method of optic | |
CN112926468B (en) | Tidal flat elevation automatic extraction method | |
CN103793907A (en) | Water body information extracting method and device | |
CN103295239A (en) | Laser-point cloud data automatic registration method based on plane base images | |
CN109359533B (en) | A coastline extraction method based on multi-band remote sensing images | |
CN113569760B (en) | Three-dimensional change detection method based on multi-mode deep learning | |
CN117274831B (en) | A method for inverting water depth of nearshore turbid water bodies based on machine learning and hyperspectral satellite remote sensing images | |
CN105627997A (en) | Multi-angle remote sensing water depth decision fusion inversion method | |
WO2024036739A1 (en) | Reservoir water reserve inversion method and apparatus | |
CN104197902A (en) | Method for extracting shallow sea terrain by single-shot high-resolution optical remote sensing image | |
CN115546656A (en) | Remote sensing image breeding area extraction method based on deep learning | |
CN113221813B (en) | Coastline remote sensing extraction method | |
CN104680151A (en) | High-resolution panchromatic remote-sensing image change detection method considering snow covering effect | |
CN114724045A (en) | A method for inversion of underwater topography in coastal shallow water | |
CN116817869A (en) | Submarine photon signal determination method using laser radar data | |
CN117036442A (en) | Robust monocular depth completion method, system and storage medium | |
Zhang et al. | A study on coastline extraction and its trend based on remote sensing image data mining |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20180223; Termination date: 20181223 |