CN105651263A - Shallow sea water depth multi-source remote sensing fusion inversion method - Google Patents


Info

Publication number
CN105651263A
Authority
CN
China
Prior art keywords
depth
water
image
source
precision
Prior art date
Legal status
Granted
Application number
CN201510975396.6A
Other languages
Chinese (zh)
Other versions
CN105651263B (en)
Inventor
张靖宇
马毅
张震
梁建
Current Assignee
First Institute of Oceanography SOA
Original Assignee
First Institute of Oceanography SOA
Priority date
Filing date
Publication date
Application filed by First Institute of Oceanography SOA
Priority to CN201510975396.6A
Publication of CN105651263A
Application granted
Publication of CN105651263B
Legal status: Expired - Fee Related
Anticipated expiration


Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 13/00 — Surveying specially adapted to open water, e.g. sea, lake, river or canal
    • G01C 13/008 — Surveying specially adapted to open water, measuring depth of open water

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Hydrology & Water Resources (AREA)
  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

A shallow-sea water depth multi-source remote sensing fusion inversion method comprises the steps of: 1, preprocessing the multispectral remote sensing images to obtain sea-surface reflectance; 2, acquiring and processing field-measured water depth values; 3, conducting single-source water depth inversion and water depth segment identification; 4, conducting multi-source water depth inversion fusion, taking the n single-source water depth results, the corresponding water depth segment identification images and the fusion parameters as input and fusing pixel by pixel; and 5, verifying the water depth inversion accuracy and outputting the final depth values as the actual water depth of the remote sensing image. Compared with prior-art inversion methods, this method comprehensively exploits the different responses of multiple remote sensing data sources to water depth information through decision-level fusion, mines the water depth data they contain, and improves inversion accuracy; it is especially applicable to bathymetry of shallow-water regions under complex conditions.

Description

Shallow-sea water depth multi-source remote sensing fusion inversion method
Technical field
The present invention relates to a marine bathymetry method in the field of space remote sensing technology, and in particular to a shallow-sea water depth multi-source remote sensing fusion inversion method that can use multiple remote sensors for ocean depth sounding.
Background technology
Marine bathymetry provides the basic data needed to guarantee safe ship navigation, to build ports, wharves and ocean engineering works, and to formulate coastal and island plans. Compared with in-situ depth measurement, remote sensing offers wide coverage, short revisit cycles, low cost and high spatial resolution. Since the 1970s, various passive remote sensing depth inversion models have been studied at home and abroad; the common visible-light models include analytical models, semi-analytical/semi-empirical models and statistical models. In recent years these models have been applied to bathymetry of rivers, lakes, reservoirs, islands and coastal zones.
Visible-spectrum remote sensing inversion is an effective way to obtain shallow-sea depths over complex terrain, and in particular can recover water depth in areas that ships cannot approach or enter. However, because such models struggle to account for the full physical mechanism and its parameterization, the accuracy of existing visible-light depth inversion models has limited room for improvement.
Multi-source remote sensing depth inversion can overcome the environmental constraints present when a single-source image is acquired, and the richer band information offered by multi-source images, with their differing spectral responses, benefits the extraction of water depth information. Existing research on applying multiple sources to depth inversion mostly concerns the interpolation of spatial information; decision-level fusion has not been developed or applied. Yet decision-level fusion can make full use of existing remote sensing image resources and information, providing a new way to improve the accuracy of optical remote sensing depth inversion.
Chinese patent application 201310188829.4 (publication number CN104181515A) discloses "a shallow water depth inversion method based on blue-yellow band hyperspectral data". It addresses the fact that most optical depth inversion models for clear water are built for multispectral data and are limited by broad bands and sparse spectral information; based on the optical attenuation mechanism of the water body, it proposes a method that inverts clear-water shallow depths from blue-yellow band (450-610 nm) hyperspectral data. The method can accurately extract depth distributions shallower than 30 m, and for a given sensor only one algorithm-coefficient calibration is needed, which significantly improves the algorithm's universality. However, it uses a single-source sensor image as the detection data source, so the usable bands and spectral information of the image are limited; this is unfavorable for improving shallow-water inversion accuracy, and in particular its detection of neritic depths under complex conditions is insufficient.
Summary of the invention
The present invention provides a shallow-sea water depth multi-source remote sensing fusion inversion method based on decision-level fusion, to solve the problem that the prior art uses only a single-source sensor image as the data source, so that the usable range of image bands and spectral information is limited and bathymetry precision and accuracy are poor.
The shallow-sea water depth multi-source remote sensing fusion inversion method comprises the following steps:
Step 1: preprocess the multispectral remote sensing images to obtain sea-surface reflectance;
the preprocessing comprises radiance conversion, atmospheric correction and sun-glint removal;
Step 2: acquire and process field-measured water depth values;
obtain the water depth data of the test area and the corresponding longitude/latitude coordinates; determine the tidal height at the measurement time from tide tables and correct the depth data to the theoretical depth datum; then, according to the acquisition time of each multispectral image, apply a tidal correction to the datum depths to obtain the instantaneous water depth;
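The two-stage tidal reduction of step 2 (sounding, then theoretical depth datum, then instantaneous depth at image acquisition time) can be sketched as below. The sign convention (tide heights given above the depth datum) and all function names are illustrative assumptions, not taken from the patent.

```python
def to_datum(measured_depth, tide_at_sounding):
    # Reduce an echo-sounder depth to the theoretical depth datum by
    # removing the tidal height (above datum) present when the sounding
    # was taken.
    return measured_depth - tide_at_sounding

def to_instantaneous(datum_depth, tide_at_image_time):
    # Instantaneous water depth at the image acquisition moment: add
    # back the tidal height at that moment.
    return datum_depth + tide_at_image_time

# Example: 12.4 m sounding with 1.1 m tide; image acquired at 0.6 m tide.
d_datum = to_datum(12.4, 1.1)            # depth at the datum (about 11.3 m)
d_image = to_instantaneous(d_datum, 0.6) # instantaneous depth (about 11.9 m)
```

The same correction is applied to every control point before regression, so that the depths being regressed are those the image actually "saw".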
Step 3: single-source depth inversion and depth segment identification;
according to the relation between the water depth at each control point and the reflectance of the corresponding image pixel, perform statistical regression with a multiband model, output the depth inversion parameters of this source image as one input to the multi-source fusion, and calibrate the model parameters; the multiband model is as follows:
Z = A0 + Σ_{i=1..n} Ai·Xi   (1)
Xi = ln(ρi − ρsi)   (2)
where Z is the water depth, n is the number of bands participating in the inversion, A0 and Ai are coefficients to be determined, ρi is the reflectance of band i, and ρsi is the deep-water reflectance of that band;
Divide the depth control points into several depth segments as input and output the mean relative error of each segment as another input to the multi-source fusion, i.e. a fusion parameter; the fusion parameters also include the Kappa coefficient of each single-source image and the mean segmentation accuracy of each output depth segment;
δk = (1/n) Σ_{i=1..n} |zi − zi′| / zi   (3)
K̂ = (n Σ_{i=1..k} xii − Σ_{i=1..k} (xi+ × x+i)) / (n² − Σ_{i=1..k} (xi+ × x+i))   (4)
δma_k = (PAk + UAk) / 2   (5)
where n is the number of depth control points and k denotes the depth segment; in formula (3), δk is the mean relative error, zi is the measured depth at the i-th control point and zi′ its inverted value; in formula (4), K̂ is the Kappa coefficient, xii is the number of correctly classified control points, and xi+ and x+i are the row and column totals of the error matrix built from the segmented control-point statistics; in formula (5), δma_k is the mean segmentation accuracy, PAk is the producer's accuracy of the k-th depth segment and UAk its user's accuracy;
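Formulas (3)-(5) can be computed directly from the control-point statistics. A hedged sketch, assuming the error matrix stores the inversion's classes in rows and the ground-reference classes in columns; names are illustrative:

```python
import numpy as np

def mean_relative_error(z_meas, z_inv):
    """Formula (3): mean relative error over control points."""
    z_meas = np.asarray(z_meas, float)
    z_inv = np.asarray(z_inv, float)
    return np.mean(np.abs(z_meas - z_inv) / z_meas)

def kappa(conf):
    """Formula (4): Kappa coefficient of an error (confusion) matrix."""
    conf = np.asarray(conf, float)
    n = conf.sum()
    po = np.trace(conf)                               # sum of x_ii
    pe = (conf.sum(axis=1) * conf.sum(axis=0)).sum()  # sum of x_i+ * x_+i
    return (n * po - pe) / (n * n - pe)

def mean_segment_accuracy(conf, k):
    """Formula (5): (producer's + user's accuracy) / 2 for segment k."""
    conf = np.asarray(conf, float)
    pa = conf[k, k] / conf[:, k].sum()   # producer's accuracy: column total
    ua = conf[k, k] / conf[k, :].sum()   # user's accuracy: row total
    return (pa + ua) / 2
```

These three quantities, computed once per source image, are exactly the fusion parameters carried into step 4.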
Using the fusion parameters and the whole-scene remote sensing image, compute the single-source depth inversion result, correct it to the theoretical depth datum, and then segment the result to obtain the depth segment identification image;
Step 4: multi-source depth inversion fusion;
taking the n single-source depth inversion results, their corresponding depth segment identification images and the fusion parameters as input, perform multi-source depth inversion fusion pixel by pixel, specifically comprising:
a) When the vote count t of some depth segment satisfies t ≥ ⌊n/2⌋ + 1 (where ⌊·⌋ denotes rounding down), the inversion results of at least ⌊n/2⌋ + 1 images fall in the same depth segment. If two or more of these images yield an equal depth value, that value is assigned directly to the current pixel; otherwise, compare these images' mean relative error and mean accuracy in this depth segment, and take the value of the image with the largest mean segmentation accuracy as the final pixel value; only when the image with the largest mean accuracy also has the largest mean relative error in the segment is the image with the second-largest mean accuracy selected instead;
b) when the maximum vote count t satisfies t ≤ ⌊n/2⌋ and x (x ≥ 2) segments each receive t votes, compare the Kappa coefficients and the per-segment mean accuracies of the n classifiers: if the image with the largest Kappa coefficient and the image with the largest mean accuracy are assigned to the same depth segment and are the same scene, take that image pixel's depth value as the result; if they are not the same scene, take whichever of the two has the smaller mean relative error in that segment; if the segment chosen by the largest-Kappa image differs from that of the largest-mean-accuracy image, take the former's depth value. If only one segment receives t votes, then within that segment take the value of the image with the largest mean segmentation accuracy as the final pixel value, falling back to the second-largest mean accuracy when the largest-accuracy image also has the largest mean relative error;
c) when the maximum vote count is t = 1, select the depth value of the image with the largest Kappa coefficient;
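For one pixel, the voting of rules a)-c) might be sketched as follows. This is a simplified illustration (the t ≤ n/2 sub-cases and tie handling are condensed), and all names and data layouts are assumptions, not the patent's implementation:

```python
from collections import Counter

def fuse_pixel(depths, segments, mean_acc, mean_rel_err, kappas):
    """Decision fusion for a single pixel.

    depths[i], segments[i]:  source i's inverted depth and segment id
    mean_acc[i][s]:          source i's mean segmentation accuracy in segment s
    mean_rel_err[i][s]:      source i's mean relative error in segment s
    kappas[i]:               Kappa coefficient of source i."""
    n = len(depths)
    seg, t = Counter(segments).most_common(1)[0]

    if t > n // 2:                          # rule a): clear majority segment
        members = [i for i in range(n) if segments[i] == seg]
        vals = [depths[i] for i in members]
        if len(set(vals)) == 1:             # identical inverted depths
            return vals[0]
        by_acc = sorted(members, key=lambda i: mean_acc[i][seg], reverse=True)
        best = by_acc[0]
        # if the most accurate image also has the worst relative error
        # in this segment, fall back to the runner-up
        if mean_rel_err[best][seg] == max(mean_rel_err[i][seg] for i in members):
            best = by_acc[1]
        return depths[best]

    if t > 1:                               # rule b): weak majority / tie
        members = [i for i in range(n) if segments[i] == seg]
        best_kappa = max(range(n), key=lambda i: kappas[i])
        if segments[best_kappa] == seg:
            return depths[best_kappa]
        return depths[max(members, key=lambda i: mean_acc[i][seg])]

    # rule c): all sources disagree -> trust the largest Kappa
    return depths[max(range(n), key=lambda i: kappas[i])]
```

Run over every pixel of the reference grid, this produces the fused depth image of step 4.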
Step 5: depth inversion accuracy verification;
the verification compares, at the depth check points, the single-source inversion results before fusion with the multi-source result after fusion; once the verification is complete, the final depth values are output as the actual water depth of the remote sensing image.
In the method above, the radiance conversion in step 1 converts the image DN values into radiance values; the sun-glint removal may use a median, mean or wavelet method; the atmospheric correction may use the FLAASH, dark-pixel or 6S method.
In the present invention the reference image for decision fusion is chosen according to the required scale and resolution of the depth image, without special requirements. If the image with the largest pixel size were used as the reference, fusion would run somewhat faster, but since spatial matching represents an entire pixel by the fused value at its centre coordinate, considerable information would be lost. Therefore, weighing processing efficiency against fusion accuracy, it is preferable to use the depth inversion image generated from the highest-resolution image as the reference, match the reference pixel centre coordinates to the positions of the other sources' depth images, collect all single-source inverted depths and related information at each coordinate, and perform decision fusion there, so as to reduce the information lost and guarantee inversion precision.
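Matching a reference pixel centre to another source's grid is a plain geotransform round trip. A sketch under the assumption of GDAL-style affine geotransforms with north-up images; illustrative, not from the patent:

```python
def pixel_center_to_geo(row, col, gt):
    """Pixel centre -> map coordinates, with a GDAL-style geotransform
    gt = (x_origin, x_pixel_size, 0, y_origin, 0, y_pixel_size)."""
    x = gt[0] + (col + 0.5) * gt[1]
    y = gt[3] + (row + 0.5) * gt[5]
    return x, y

def geo_to_pixel(x, y, gt):
    """Map coordinates -> (row, col) of the containing pixel in
    another image's grid."""
    col = int((x - gt[0]) / gt[1])
    row = int((y - gt[3]) / gt[5])
    return row, col

# Reference grid at 2 m resolution; a coarser source at 8 m resolution.
gt_ref = (500000.0, 2.0, 0.0, 4200000.0, 0.0, -2.0)
gt_src = (500000.0, 8.0, 0.0, 4200000.0, 0.0, -8.0)
x, y = pixel_center_to_geo(10, 20, gt_ref)
r, c = geo_to_pixel(x, y, gt_src)   # source pixel covering that centre
```

Sampling each source at the reference pixel centres, rather than resampling whole rasters, is what keeps the fusion loss small.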
Beneficial effects of the present invention:
Compared with existing inversion methods, this method comprehensively exploits the different responses of multiple remote sensing data sources to water depth information, expands the usable range of image bands and spectral information, mines the depth data they contain, and improves inversion accuracy through decision-level fusion; it is particularly suitable for bathymetry of shallow-water areas under complex conditions.
Brief description of the drawings
Fig. 1 is the flowchart of the present invention;
Fig. 2 is the flowchart of the multi-source water depth inversion fusion of the present invention;
Fig. 3a is the scatter diagram of the multi-source fusion water depth result;
Fig. 3b is the scatter diagram of the single-source WorldView-2 depth inversion result;
Fig. 3c is the scatter diagram of the single-source Pleiades depth inversion result;
Fig. 3d is the scatter diagram of the single-source QuickBird depth inversion result;
Fig. 3e is the scatter diagram of the single-source SPOT-6 depth inversion result;
Fig. 4 shows the multi-source remote sensing depth inversion fusion result of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings.
With reference to Fig. 1, the shallow-sea water depth multi-source remote sensing fusion inversion method specifically comprises the following steps:
Step 1: multispectral remote sensing data preprocessing.
First, the multispectral images acquired by the multiple sensors participating in the depth inversion fusion are preprocessed, including radiance conversion, atmospheric correction and sun-glint removal. Radiance conversion turns image DN values into radiance; the conversion formula differs among remote sensing data products, the following two being common:
L = DN × Gain + Bias   (6)
L = DN × absCalFactor / effectiveBandwidth   (7)
The parameters in the formulas can all be obtained from the image metadata files. After obtaining the multispectral radiance images, atmospheric correction with FLAASH, the dark-pixel method or 6S yields the sea-surface reflectance data; to remove the interference of sun glint, floating matter and the like, a median, mean or wavelet method is then applied for glint removal.
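Formulas (6) and (7) are one-line conversions; a sketch with illustrative parameter names (the gain/bias and absCalFactor/effectiveBandwidth values come from the image metadata):

```python
def dn_to_radiance_gain_bias(dn, gain, bias):
    """Formula (6): L = DN * Gain + Bias."""
    return dn * gain + bias

def dn_to_radiance_abscal(dn, abs_cal_factor, effective_bandwidth):
    """Formula (7), as used e.g. by WorldView-style products:
    L = DN * absCalFactor / effectiveBandwidth."""
    return dn * abs_cal_factor / effective_bandwidth
```

Either function is applied band by band before atmospheric correction.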
Step 2: acquisition and processing of field-measured depth values.
Use a multibeam echo sounder or other bathymetric means to obtain the depth data of the test area together with the corresponding longitude/latitude coordinates. Determine the tidal height at the measurement time from tide tables and correct the depth data accordingly.
Step 3: single-source depth inversion and depth segment identification.
According to the relation between the water depth at each control point and the reflectance of the corresponding image pixel, perform statistical regression with the multiband model, output the depth inversion parameters of this source image as one input to the multi-source fusion, and calibrate the model parameters; the multiband model is as follows:
Z = A0 + Σ_{i=1..n} Ai·Xi   (1)
Xi = ln(ρi − ρsi)   (2)
where Z is the water depth, n is the number of bands participating in the inversion, A0 and Ai are coefficients to be determined, ρi is the reflectance of band i, and ρsi is the deep-water reflectance of that band.
Divide the depth control points into several depth segments as input and output the mean relative error of each segment as another input to the multi-source fusion, i.e. a fusion parameter.
The fusion parameters also include the Kappa coefficient of each single-source image and the mean segmentation accuracy of each output depth segment:
δk = (1/n) Σ_{i=1..n} |zi − zi′| / zi   (3)
K̂ = (n Σ_{i=1..k} xii − Σ_{i=1..k} (xi+ × x+i)) / (n² − Σ_{i=1..k} (xi+ × x+i))   (4)
δma_k = (PAk + UAk) / 2   (5)
where n is the number of depth control points and k denotes the depth segment; in formula (3), δk is the mean relative error, zi is the measured depth at the i-th control point and zi′ its inverted value; in formula (4), K̂ is the Kappa coefficient, xii is the number of correctly classified control points, and xi+ and x+i are the row and column totals of the error matrix built from the segmented control-point statistics; in formula (5), δma_k is the mean segmentation accuracy, PAk is the producer's accuracy of the k-th depth segment and UAk its user's accuracy.
Using the obtained parameters and the whole-scene remote sensing image, compute the single-source depth inversion result, correct the instantaneous depths to the theoretical depth datum, and then segment the result to obtain the depth segment identification image.
As an example, the error matrix of the WorldView-2 image at the depth control points used in the multi-source fusion is shown in Table 1.
Table 1: error matrix of the WorldView-2 depth control points
In an error matrix, the columns hold the ground-reference information and the rows hold the classes obtained from the remote sensing data. The main-diagonal elements (x11 etc., written xii in the formulas) are the correctly classified pixels, and the off-diagonal elements are the pixels misclassified by the remote sensing data relative to the ground reference. In this experiment, classes 1-4 are the four depth segments split at 2 m, 5 m and 10 m, z is the measured depth and z′ the inverted depth. The columns give the numbers of control points falling in the four measured depth segments, and the rows give the numbers assigned to the four segments by image inversion; the main diagonal counts the points whose inverted depth falls in the correct measured segment, and the off-diagonal entries count the wrongly segmented points. The producer's accuracy (PA) in the error matrix is the probability that, given a control point in segment k, its pixel is also classed as segment k by the image depth inversion; it is obtained by dividing the number correctly classified as k by the sum of column k (x+i in the formulas). The user's accuracy (UA) is the percentage of pixels classed as segment k by the inversion whose true measured depth really belongs to segment k; it is computed by dividing the number correctly classified as k by the total classified as k, i.e. the sum of row k (xi+ in the formulas).
The Kappa coefficient measures the consistency, or accuracy, between the remote sensing classification and the reference data, expressed through the main diagonal and the row/column totals. The Kappa coefficient of 0.7686 in the example can be interpreted as: the depth distribution obtained by inverting this WorldView-2 scene is better than a random assignment to depth segments to the degree of 76.86%.
Producer's and user's accuracies are better the closer they are to 1, the optimum being both equal to 1. Therefore, to remain unbiased and to reduce the number of parameters entering the decision fusion, this embodiment takes their mean as the fusion parameter, i.e. the mean segmentation accuracy.
Using the fusion parameters and the whole-scene remote sensing image, compute the single-source depth inversion result, correct it to the theoretical depth datum, and segment the result to obtain the depth segment identification image.
Step 4: multi-source depth inversion fusion.
Take the single-source depth inversion results, the depth segment identification images and the fusion parameters as input to the multi-source fusion and fuse pixel by pixel. Here four single sources (i.e. remote sensing images) are used, and the final value is determined by the votes each depth segment receives at the current pixel, as follows:
a) When the vote count of some depth segment is at least 3, the inversion results of three or more images fall in the same segment. If two or more images yield an equal depth value, that value is assigned directly to the current pixel; otherwise, compare these images' mean relative error and mean accuracy in this segment, and choose the depth value whose mean accuracy is as large, and whose mean relative error is as small, as possible: under the premise of larger mean accuracy, examine the mean relative error of the image in this segment, and if the image with the largest mean accuracy also has the largest mean relative error, abandon it and select the image with the second-largest mean accuracy.
b) When the maximum vote count equals 2 and the votes fall 2-2, the inverted depths of two pairs of images lie in two segments. Examine the Kappa coefficients and the four classifiers' mean accuracies in their respective segments: if the image with the largest Kappa coefficient and the image with the largest mean accuracy are assigned to the same depth segment and are the same scene, take that image pixel's depth value as the result; if they are not the same scene, take whichever of the two has the smaller mean relative error in the segment; if the segments chosen by the largest-Kappa image and the largest-mean-accuracy image differ, take the former's depth value. When the maximum vote count equals 2 and the votes fall 2-1-1, then within the segment with 2 votes choose the depth value whose mean accuracy is as large, and whose mean relative error is as small, as possible.
c) When the maximum vote count is 1, i.e. the four single-source classifications all differ and the inverted depths of the four scenes fall in four different segments, trust the image with the largest Kappa coefficient.
Step 5: depth inversion accuracy verification.
Use the check points to verify the accuracy of the single-source and multi-source inversion results, computing the overall and per-depth-segment mean relative errors and mean absolute errors, thereby validating the accuracy of the multi-source depth inversion fusion.
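The overall and per-segment verification statistics of step 5 can be sketched as follows; the 2 m / 5 m / 10 m split points follow the embodiment, and everything else (names, data layout) is illustrative:

```python
import bisect

def mean_relative_error(z, z_inv):
    """Overall mean relative error of inverted vs. measured depths."""
    return sum(abs(a - b) / a for a, b in zip(z, z_inv)) / len(z)

def mean_absolute_error(z, z_inv):
    """Overall mean absolute error of inverted vs. measured depths."""
    return sum(abs(a - b) for a, b in zip(z, z_inv)) / len(z)

def segment_errors(z, z_inv, bounds=(2.0, 5.0, 10.0)):
    """Group check points into depth segments split at `bounds` (by the
    measured depth) and report (mean relative, mean absolute) error per
    segment, keyed by segment index 0..len(bounds)."""
    segs = {}
    for a, b in zip(z, z_inv):
        k = bisect.bisect(bounds, a)
        segs.setdefault(k, []).append((a, b))
    return {k: (mean_relative_error(*zip(*pts)), mean_absolute_error(*zip(*pts)))
            for k, pts in segs.items()}
```

Running these on the fused result and on each single-source result gives exactly the comparison tables discussed below.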
(1) Multi-source fusion parameters and fusion model performance
For contrast and verification, this embodiment fuses depth inversions of a QuickBird image of 10 January 2008, a WorldView-2 image of 7 February 2010, a Pleiades image of 9 March 2012 and a SPOT-6 image of 5 April 2013. Table 2 lists the inversion parameters obtained with the blue-green-red three-band log-linear model and the segment-wise mean relative errors at the control points. The multi-source fusion takes the WorldView-2 depth inversion image as the reference and performs decision fusion with the inversion results of the Pleiades, QuickBird and SPOT-6 images. In Table 2 the images are ordered from left to right by increasing spatial resolution, and segments 1-4 of the mean segmentation accuracy and segment mean relative error denote the four depth segments split at 2 m, 5 m and 10 m.
Table 2: single-source depth inversion parameters and multi-source decision fusion parameters
Comparative analysis shows that SPOT-6 has the highest overall segmentation accuracy and QuickBird the worst. Considering mean segmentation accuracy and segment mean relative error together, Pleiades is best in segment 1: its segmentation accuracy is the highest and its mean relative error in the segment is also small; SPOT-6 comes second, its mean relative error being 8 percentage points lower than the former's but its mean segmentation accuracy inferior. Although Pleiades has the best mean segmentation accuracy in segment 2, its mean relative error there is the largest of the four scenes, so under the premise of a high mean segmentation accuracy, SPOT-6 is again preferable on mean relative error. In segments 3 and 4, SPOT-6 is best in both mean segmentation accuracy and segment mean relative error.
As shown in Fig. 2, the fused depth inversion image has 1002 rows and 1054 columns, i.e. 1,056,108 pixels. Statistics show that the depth values of most pixels, 860,835 or 81.51% of the total, are determined by rule 2: when fusing pixel by pixel, the maximum vote count of most pixels exceeds half the total, i.e. at least three scenes place the inverted depth of the pixel in the same segment, and the final value comes from the inversion image with the largest mean accuracy in that segment. The next most frequently executed is rule 6, with 72,132 pixels or 6.83%; the rarest is rule 9, with only 128 pixels. Rule 9 runs only when the depths inverted from all four images fall in different segments; although the four scenes differ in depth inversion capability, their depth segmentations do not differ greatly, only a very small part showing obvious disagreement, so all four scenes can contribute to the decision fusion.
(2) Overall accuracy verification of the multi-source fusion
Comparing the accuracy of the multi-source fusion result with the single-source results before fusion yields the evaluation indices shown in Table 3.
Table 3: overall accuracy comparison of the multi-source depth inversion fusion
Three evaluation indexes all show that the result of the result more former image inverting after depth of water multi-source decision-making is merged is improved comparatively remarkable. Average relative error is followed successively by decision-making fusion evaluation, SPOT-6 image, QuickBird image, Pleiades image and WorldView-2 image from small to large, compare worst WorldView-2 image, the average relative error of the rear image of fusion at depth of water reference mark place reduces more than 40 percentage point, and merge before to result image initialize time, it is exactly using the inversion result of this scape image as benchmark, illustrates that decision-making is merged very big degree really and improved the inversion result of former image. Even if compared with the SPOT-6 image best with inversion accuracy in 4 scape images, fusion evaluation also has the reduction of 12.7 per-cents in relative error. The minimum value of mean absolute error differs 1.4m with between maximum value, it is obtain compared with Pleiades image by fusion evaluation or SPOT-6 image, the mean absolute error of QuickBird image and WorldView-2 image is bigger, value is respectively 1.6m and 1.8m, differs with between minimum value more than 0.8m and 1m. Also show for evaluating the Kappa coefficient of segmentation precision: the picture unit of fusion evaluation is more accurate in the differentiation of depth of water section ownership, is secondly Pleiades image and SPOT-6 image, obtains depth of water segment identification image precision by QuickBird image inverting poor. It is generally understood that Kappa value is when being greater than 0.80, the consistence between classification chart and ground reference information is very big or precision is very high, and the Kappa value of these 4 images is all greater than this threshold value, illustrates that its consistence is all relatively good. 
The worst is the WorldView-2 image, which ranks last with a Kappa value of 0.6139.
Figs. 3a-3e show scatter plots of the measured water depth against the inverted water depth before and after multi-source inversion fusion. The scatter plots show that, except for the Pleiades image, the other three scenes invert depth points shallower than 2 m poorly. In the WorldView-2 scatter plot the data points are relatively concentrated, so its large average relative error should mainly reflect the influence of the shallow-water data points. The maximum inverted depths of the WorldView-2 and Pleiades images exceed the range of the measured depth check points, which does not occur in the scatter plots of the QuickBird and SPOT-6 images.
As shown in Fig. 4, this embodiment combines the single-source inversion results of different resolutions through decision fusion. Within the 20 m isobath surrounding the island the shallow-water texture is fine and the depth gradient is small, and the reef flat at the northern island is clearly visible; the texture of the shoal to the southwest of the island and of the deeper water to the northeast is coarser. At a depth of about 20 m the depth gradient is large and the transition from shallow to deep is evident.
(3) Analysis of the segment-wise accuracy verification of multi-source remote-sensing depth-inversion fusion
Examining the segment-wise error distribution of the multi-source inversion fusion (Table 4), neither the average relative error nor the mean absolute error shows a regular increasing or decreasing trend with depth.
Table 4: Segment-wise error comparison of multi-source water-depth inversion fusion
In the 0-2 m depth segment, although the average relative errors of the inversion results are generally low, the gaps between them remain significant. The most accurate is the multi-source fusion image, with an average relative error of 39.1% and a mean absolute error of 0.3 m. Next is the Pleiades image, whose average relative error is 3.9 percentage points higher and whose mean absolute error is equal. Then follow the QuickBird, SPOT-6 and WorldView-2 images, with both errors increasing in turn; in particular, the average relative error of the WorldView-2 image in this segment is twice that of the SPOT-6 image and 210.1 percentage points above that of the best-performing fusion image, while its mean absolute error of 1.1 m is nearly four times that of the fusion image. In the 2-5 m segment the fusion image and the SPOT-6 image are the most accurate, with equal average relative errors and mean absolute errors of 5.3% and 0.2 m respectively. The WorldView-2 image improves markedly on its shallow-segment performance, with an average relative error of 28.8% and a mean absolute error of 1.0 m, but the gap to the fusion image and the SPOT-6 image remains considerable. The QuickBird image ranks fourth with an average relative error of 32.8% and a mean absolute error of 1.4 m, and the Pleiades image, the most accurate in the 0-2 m segment, is the least accurate here. In the 5-10 m segment, ordered by increasing average relative error and mean absolute error, the ranking is: fusion image and SPOT-6 image, QuickBird image, Pleiades image, WorldView-2 image; the extreme average relative errors differ by 8 percentage points and the mean absolute errors by at most 0.6 m. In the 10-20 m segment the smallest average relative error and mean absolute error both come from the SPOT-6 image and the largest from the Pleiades image, the values being 6.3% and 0.9 m against 22.5% and 3.4 m. The fusion image also performs well here, with an average relative error of 6.4% and a mean absolute error of 1.0 m.
Within the multi-source inversion fusion, the SPOT-6 image, apart from its poor performance in the shallow segment, is the most accurate of all the bathymetric remote-sensing images in every other depth segment. The Pleiades image effectively compensates for SPOT-6's weakness in shallow water, but is the least accurate of the four scenes in the 2-5 m and 10-20 m segments. The WorldView-2 image is the least accurate in the 0-2 m and 5-10 m segments and mediocre in the other two. The QuickBird image hovers around the middle in every depth segment. Except for being slightly less accurate than the SPOT-6 image in the 10-20 m segment, the multi-source fusion image has the best inversion accuracy in all other depth segments.
Compared with existing inversion methods, the present method comprehensively exploits the different responses of multiple remote-sensing data sources to water-depth information, extends the usable range of image spectral bands and spectral information, mines the depth information they contain, and improves inversion accuracy through decision-fusion processing; it is particularly suitable for bathymetry in shallow-water areas with complex conditions.
Technical content not described in detail herein is known in the art.

Claims (2)

1. A shallow-sea water-depth multi-source remote-sensing fusion inversion method, characterized in that it comprises the following steps:
Step 1: pre-process the multispectral remote-sensing images to obtain the sea-surface reflectance;
said pre-processing comprises radiance conversion, atmospheric correction and sun-glint removal;
Step 2: acquire and process field-measured water-depth values;
acquire the water-depth data of the test area and the corresponding longitude and latitude coordinates; determine the tidal height at the survey moment from tide tables and correct the depth data to the theoretical depth datum; then, according to the acquisition moment of the multispectral remote-sensing image, apply a tide correction to the datum-referenced depth data to obtain the instantaneous water depth;
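The two tide corrections of step 2 can be sketched as a pair of simple adjustments; the function names, sign conventions and sample values below are illustrative assumptions, not part of the claimed method:

```python
# Hypothetical sketch of the step-2 tide corrections: a sounding is first
# reduced to the theoretical depth datum using the tidal height at the survey
# moment, then converted to the instantaneous depth at the image moment.

def to_theoretical_datum(measured_depth_m, tide_height_survey_m):
    """Reduce an echo-sounder depth to the theoretical depth datum by
    removing the tidal height (from tide tables) at the survey moment."""
    return measured_depth_m - tide_height_survey_m

def to_instantaneous_depth(datum_depth_m, tide_height_image_m):
    """Convert a datum-referenced depth to the instantaneous depth at the
    satellite acquisition moment by restoring that moment's tidal height."""
    return datum_depth_m + tide_height_image_m

# Example: a 12.4 m sounding at a 1.1 m tide, imaged at a 0.6 m tide
datum = to_theoretical_datum(12.4, 1.1)       # 11.3 m relative to datum
instant = to_instantaneous_depth(datum, 0.6)  # 11.9 m at image time
```

The sign convention (subtract, then add) assumes depths are measured downward from the instantaneous sea surface; a different datum definition would flip the signs.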
Step 3: single-source depth inversion and depth-segment identification;
according to the relation between the water depth at the depth control points and the reflectance of the corresponding image pixels, perform statistical regression with a multiband model, output the parameters of this source image's depth inversion as an input to the multi-source inversion fusion, and calibrate the parameters of the multiband model; the multiband model is formulated as follows:
Z = A_0 + Σ_{i=1}^{n} A_i X_i    (1)
X_i = ln(ρ_i − ρ_si)    (2)
wherein Z is the water depth, n is the number of bands participating in the inversion, A_0 and A_i are coefficients to be determined, ρ_i is the reflectance of the i-th band, and ρ_si is the deep-water reflectance of that band;
divide the depth control points into multiple depth segments as input, and output the average relative error of each depth segment as another input, i.e. a fusion parameter, of the multi-source depth-inversion fusion; the fusion parameters input to the multi-source depth-inversion fusion also comprise the Kappa coefficient of each single-source image and the average segmentation precision of each output depth segment;
δ_k = (1/n) Σ_{i=1}^{n} |z_i − z_i'| / z_i    (3)
K̂ = (n Σ_{i=1}^{k} x_ii − Σ_{i=1}^{k} (x_{i+} × x_{+i})) / (n² − Σ_{i=1}^{k} (x_{i+} × x_{+i}))    (4)
δ_{ma_k} = (PA_k + UA_k) / 2    (5)
wherein n is the number of depth control points and k denotes the depth segment; in formula (3), δ_k is the average relative error, z_i is the measured value at the i-th depth control point and z_i' is its inverted value; in formula (4), K̂ is the Kappa coefficient, x_ii is the number of correctly classified control points, and x_{i+} and x_{+i} are the row and column marginal totals of the error matrix formed when the depth control points are tallied by segment; in formula (5), δ_{ma_k} is the average segmentation precision, PA_k is the producer's accuracy and UA_k is the user's accuracy of the k-th depth segment;
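The three fusion parameters of formulas (3)-(5) could be computed along the following lines; the confusion-matrix convention (rows = inverted segment, columns = reference segment) and all names are assumptions of this sketch:

```python
import numpy as np

def avg_relative_error(z_meas, z_inv):
    """Formula (3): mean of |z_i - z_i'| / z_i over the depth control points."""
    z_meas, z_inv = np.asarray(z_meas, float), np.asarray(z_inv, float)
    return np.mean(np.abs(z_meas - z_inv) / z_meas)

def kappa(conf):
    """Formula (4): Kappa coefficient from a k x k error (confusion) matrix."""
    conf = np.asarray(conf, float)
    n = conf.sum()                                       # total control points
    diag = np.trace(conf)                                # sum of x_ii
    marg = (conf.sum(axis=1) * conf.sum(axis=0)).sum()   # sum of x_i+ * x_+i
    return (n * diag - marg) / (n * n - marg)

def seg_avg_precision(conf, k):
    """Formula (5): (producer's + user's accuracy) / 2 for depth segment k."""
    conf = np.asarray(conf, float)
    pa = conf[k, k] / conf[:, k].sum()  # producer's accuracy (reference column)
    ua = conf[k, k] / conf[k, :].sum()  # user's accuracy (inverted row)
    return (pa + ua) / 2.0
```

With the opposite row/column convention, PA_k and UA_k simply swap roles; formula (5) is unaffected since it averages them.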
using the fusion parameters and the whole-scene remote-sensing image, compute the single-source depth-inversion result, correct it to the theoretical depth datum, and then segment the result to obtain the depth-segment identification image;
Step 4: multi-source depth-inversion fusion;
taking the depth-inversion results of the n single sources together with their corresponding depth-segment identification images and fusion parameters as input, perform the multi-source depth-inversion fusion pixel by pixel, specifically comprising:
a) when the vote count of some depth segment is t and t ≥ ⌊n/2⌋ + 1, the inversion results of ⌊n/2⌋ + 1 or more images fall in the same depth segment, where ⌊·⌋ denotes rounding down; in this case, if two or more images yield equal depth-inversion values, that value is assigned directly to the current pixel; otherwise, compare those images' average relative errors and average segmentation precisions in this depth segment, and take the value of the image whose average segmentation precision for the segment is largest as the final pixel value; only when the image with the largest average segmentation precision for the segment also has the largest average relative error for the segment is the image with the second-largest average precision selected;
b) when the maximum vote count t satisfies t ≤ ⌊n/2⌋ and x (x ≥ 2) depth segments each receive t votes, compare the Kappa coefficients and the average segmentation precisions of the n classifiers in their respective depth segments: if the image with the largest Kappa coefficient and the image with the largest average precision are assigned to the same depth segment and are the same image, take that image's pixel depth value as the result; if they are not the same scene, take whichever of the two has the smaller average relative error in that depth segment; if the depth segment judged by the image with the largest Kappa coefficient differs from that of the image with the largest average precision, take the former's depth value; if only one depth segment receives t votes, then within that segment take the value of the image with the largest average segmentation precision as the final pixel value; when the image with the largest average segmentation precision for the segment also has the largest average relative error for the segment, select the image with the second-largest average precision;
c) when the maximum vote count t = 1, select the depth value of the image with the largest Kappa coefficient;
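A simplified per-pixel sketch of the voting logic above, covering the clear-majority case a) and the no-agreement case c); the tie handling of case b) and the "second-largest precision" refinement are omitted (in this sketch such ties fall through to the Kappa rule), and all names are assumptions:

```python
import numpy as np

def fuse_pixel(depths, segments, seg_precision, kappas):
    """Fuse n single-source estimates for one pixel.

    depths        : (n,) inverted depth per source
    segments      : (n,) depth-segment label per source
    seg_precision : dict {(source_index, segment): average segmentation precision}
    kappas        : (n,) Kappa coefficient per source
    """
    n = len(depths)
    labels, votes = np.unique(segments, return_counts=True)
    t = votes.max()
    if t >= np.floor(n / 2) + 1:                 # case a): clear majority
        win_label = labels[votes.argmax()]
        winners = [i for i in range(n) if segments[i] == win_label]
        if len({float(depths[i]) for i in winners}) == 1:
            return float(depths[winners[0]])     # identical estimates: take them
        # otherwise take the winner with the best segment precision
        best = max(winners, key=lambda i: seg_precision[(i, int(segments[i]))])
        return float(depths[best])
    # case c): no agreement -> trust the source with the highest Kappa
    return float(depths[int(np.argmax(kappas))])
```

A full implementation would apply this function to every pixel of the n co-registered result images, e.g. via a vectorized loop over the scene.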
Step 5: depth-inversion accuracy verification;
said accuracy verification uses the depth check points to compare the single-source inversion results before fusion with the multi-source inversion result after fusion; after the accuracy verification is completed, the final depth values are output as the actual water-depth data of the remote-sensing image.
2. The shallow-sea water-depth multi-source remote-sensing fusion inversion method according to claim 1, characterized in that the radiance conversion in said step 1 converts the DN values of the remote-sensing image into radiance values; said sun-glint removal may adopt the median method, the mean method or the wavelet method; and said atmospheric correction may adopt the FLAASH, dark-pixel or 6S atmospheric correction method.
CN201510975396.6A 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method Expired - Fee Related CN105651263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510975396.6A CN105651263B (en) 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method


Publications (2)

Publication Number Publication Date
CN105651263A true CN105651263A (en) 2016-06-08
CN105651263B CN105651263B (en) 2018-02-23

Family

ID=56476649

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510975396.6A Expired - Fee Related CN105651263B (en) 2015-12-23 2015-12-23 Shallow water depth multi-source remote sensing merges inversion method

Country Status (1)

Country Link
CN (1) CN105651263B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109059796A (en) * 2018-07-20 2018-12-21 国家海洋局第三海洋研究所 The multispectral satellite remote sensing inversion method of shallow water depth without depth of water control point region
CN109657392A (en) * 2018-12-28 2019-04-19 北京航空航天大学 A kind of high-spectrum remote-sensing inversion method based on deep learning
WO2020151214A1 (en) * 2019-01-22 2020-07-30 青岛秀山移动测量有限公司 Multi-sensor data fusion method for integrated surveying and mapping of intertidal zone
CN111561916A (en) * 2020-01-19 2020-08-21 自然资源部第二海洋研究所 Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image
CN111651707A (en) * 2020-05-28 2020-09-11 广西大学 Tidal level inversion method based on optical shallow water satellite remote sensing image
CN111947628A (en) * 2020-08-25 2020-11-17 自然资源部第一海洋研究所 Linear water depth inversion method based on inherent optical parameters
CN112013822A (en) * 2020-07-22 2020-12-01 武汉智图云起科技有限公司 Multispectral remote sensing water depth inversion method based on improved GWR model
CN113255144A (en) * 2021-06-02 2021-08-13 中国地质大学(武汉) Shallow sea remote sensing water depth inversion method based on FUI partition and Randac
CN113326470A (en) * 2021-04-11 2021-08-31 桂林理工大学 Remote sensing water depth inversion tidal height correction method
CN113639716A (en) * 2021-07-29 2021-11-12 北京航空航天大学 Depth residual shrinkage network-based water depth remote sensing inversion method
CN113793374A (en) * 2021-09-01 2021-12-14 自然资源部第二海洋研究所 Method for inverting water depth based on water quality inversion result by using improved four-waveband remote sensing image QAA algorithm
CN114943161A (en) * 2022-07-27 2022-08-26 中国水利水电科学研究院 Inland lake terrain inversion method based on multi-source remote sensing data
CN117514148A (en) * 2024-01-05 2024-02-06 贵州航天凯山石油仪器有限公司 Oil-gas well working fluid level identification and diagnosis method based on multidimensional credibility fusion

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104457901A (en) * 2014-11-28 2015-03-25 南京信息工程大学 Water depth determining method and system
US20150310618A1 (en) * 2012-04-27 2015-10-29 SATOP GmbH Using Multispectral Satellite Data to Determine Littoral Water Depths Despite Varying Water Turbidity


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
张靖宇等: "小波滤噪对多光谱遥感水深反演精度的影响分析", 《海洋科学进展》 *
徐升等: "长江口水域多光谱遥感水深反演模型研究", 《地理与地理信息科学》 *
田震等: "基于Landsat-8 遥感影像和LiDAR 测深数据的水深主被动遥感反演研究", 《海洋技术学报》 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109059796A (en) * 2018-07-20 2018-12-21 国家海洋局第三海洋研究所 The multispectral satellite remote sensing inversion method of shallow water depth without depth of water control point region
CN109657392A (en) * 2018-12-28 2019-04-19 北京航空航天大学 A kind of high-spectrum remote-sensing inversion method based on deep learning
WO2020151214A1 (en) * 2019-01-22 2020-07-30 青岛秀山移动测量有限公司 Multi-sensor data fusion method for integrated surveying and mapping of intertidal zone
CN111561916B (en) * 2020-01-19 2021-09-28 自然资源部第二海洋研究所 Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image
CN111561916A (en) * 2020-01-19 2020-08-21 自然资源部第二海洋研究所 Shallow sea water depth uncontrolled extraction method based on four-waveband multispectral remote sensing image
CN111651707A (en) * 2020-05-28 2020-09-11 广西大学 Tidal level inversion method based on optical shallow water satellite remote sensing image
CN111651707B (en) * 2020-05-28 2023-04-25 广西大学 Tidal level inversion method based on optical shallow water region satellite remote sensing image
CN112013822A (en) * 2020-07-22 2020-12-01 武汉智图云起科技有限公司 Multispectral remote sensing water depth inversion method based on improved GWR model
CN111947628A (en) * 2020-08-25 2020-11-17 自然资源部第一海洋研究所 Linear water depth inversion method based on inherent optical parameters
CN113326470A (en) * 2021-04-11 2021-08-31 桂林理工大学 Remote sensing water depth inversion tidal height correction method
CN113326470B (en) * 2021-04-11 2022-08-16 桂林理工大学 Remote sensing water depth inversion tidal height correction method
CN113255144B (en) * 2021-06-02 2021-09-07 中国地质大学(武汉) Shallow sea remote sensing water depth inversion method based on FUI partition and Randac
CN113255144A (en) * 2021-06-02 2021-08-13 中国地质大学(武汉) Shallow sea remote sensing water depth inversion method based on FUI partition and Randac
CN113639716A (en) * 2021-07-29 2021-11-12 北京航空航天大学 Depth residual shrinkage network-based water depth remote sensing inversion method
CN113793374A (en) * 2021-09-01 2021-12-14 自然资源部第二海洋研究所 Method for inverting water depth based on water quality inversion result by using improved four-waveband remote sensing image QAA algorithm
CN113793374B (en) * 2021-09-01 2023-12-22 自然资源部第二海洋研究所 Method for inverting water depth based on water quality inversion result by improved four-band remote sensing image QAA algorithm
CN114943161A (en) * 2022-07-27 2022-08-26 中国水利水电科学研究院 Inland lake terrain inversion method based on multi-source remote sensing data
CN114943161B (en) * 2022-07-27 2022-09-27 中国水利水电科学研究院 Inland lake terrain inversion method based on multi-source remote sensing data
CN117514148A (en) * 2024-01-05 2024-02-06 贵州航天凯山石油仪器有限公司 Oil-gas well working fluid level identification and diagnosis method based on multidimensional credibility fusion
CN117514148B (en) * 2024-01-05 2024-03-26 贵州航天凯山石油仪器有限公司 Oil-gas well working fluid level identification and diagnosis method based on multidimensional credibility fusion

Also Published As

Publication number Publication date
CN105651263B (en) 2018-02-23

Similar Documents

Publication Publication Date Title
CN105651263A (en) Shallow sea water depth multi-source remote sensing fusion inversion method
CN102436652B (en) Automatic registering method of multisource remote sensing images
CN103295239B (en) A kind of autoegistration method of the laser point cloud data based on datum plane image
CN102609701B (en) Remote sensing detection method based on optimal scale for high-resolution SAR (synthetic aperture radar)
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN102110227B (en) Compound method for classifying multiresolution remote sensing images based on context
CN103914678B (en) Abandoned land remote sensing recognition method based on texture and vegetation indexes
CN102750696B (en) Affine invariant feature and coastline constraint-based automatic coastal zone remote-sensing image registration method
CN105243367A (en) Method and device for monitoring scope of water body based on satellite remote sensing data
CN105627997A (en) Multi-angle remote sensing water depth decision fusion inversion method
CN105469098A (en) Precise LINDAR data ground object classification method based on adaptive characteristic weight synthesis
CN111008664B (en) Hyperspectral sea ice detection method based on space-spectrum combined characteristics
CN101738172B (en) Method for three-dimensional measurement of high sampling density color structured light based on green stripe segmentation
CN102279973A (en) Sea-sky-line detection method based on high gradient key points
CN108154094A (en) The non-supervisory band selection method of high spectrum image divided based on subinterval
CN106128121A (en) Vehicle queue length fast algorithm of detecting based on Local Features Analysis
CN112013822A (en) Multispectral remote sensing water depth inversion method based on improved GWR model
CN105139401A (en) Depth credibility assessment method for depth map
CN105139375A (en) Satellite image cloud detection method combined with global DEM and stereo vision
CN110765912A (en) SAR image ship target detection method based on statistical constraint and Mask R-CNN
CN116415843A (en) Multi-mode remote sensing auxiliary mine ecological environment evaluation method for weak network environment
CN113111706B (en) SAR target feature unwrapping and identifying method for azimuth continuous deletion
CN102208103A (en) Method of image rapid fusion and evaluation
Li et al. A pseudo-siamese deep convolutional neural network for spatiotemporal satellite image fusion
Li et al. Multiscale cross-modal homogeneity enhancement and confidence-aware fusion for multispectral pedestrian detection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180223

Termination date: 20181223

CF01 Termination of patent right due to non-payment of annual fee