CN116957935A - Side-scan sonar stripe image stitching method based on path line constraint - Google Patents

Side-scan sonar stripe image stitching method based on path line constraint

Info

Publication number: CN116957935A
Authority: CN (China)
Application number: CN202310928212.5A
Filing date: 2023-07-26
Publication date: 2023-10-27
Legal status: Pending
Family ID: 88457931
Original language: Chinese (zh)
Inventors: 张红梅, 赵建虎, 龙佳威
Applicant and current assignee: Wuhan University (WHU)
Classifications

    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/13: Edge detection
    • G06V 10/757: Matching configurations of points or features
    • G06V 10/763: Clustering using non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V 20/05: Underwater scenes
    • G06T 2207/20221: Image fusion; image merging

Abstract

The invention discloses a side-scan sonar stripe image stitching method based on path line constraint, which comprises the following steps: acquiring side-scan sonar strip data of a survey area, and determining, from the position coordinates of the track lines, each group of two side-scan sonar strips to be stitched; determining whether a common coverage area exists between the two strips of each group, proceeding to the next step if it does, and otherwise skipping the group and selecting the next group of side-scan sonar strips; detecting feature points on the two strips of each group and, after initial matching, removing mismatched point pairs to obtain correct matching point pairs; partitioning the common coverage area into blocks and, within each block, completing the stitching and fusion of the adjacent side-scan sonar strip images according to the correct matching point pairs; and repeating the above steps until all groups of side-scan sonar images in the large area are stitched. The invention thereby obtains a large-area seabed image with clear targets and consistent positions.

Description

Side-scan sonar stripe image stitching method based on path line constraint
Technical Field
The invention belongs to the technical field of geodetic survey and mapping engineering, and particularly relates to a side scan sonar stripe image stitching method based on path line constraint.
Background
China is rich in ocean resources, and their development continues to widen the space of ocean exploitation. High-quality seabed imagery and topography play an important role in ocean scientific research, ocean surveying, maritime shipping and the like. The seabed image, the topography and the substrate reflect the texture features, the relief and the composition of the seabed surface, and are three very important elements of marine basic geographic information.
Seabed images are obtained mainly by means of a side-scan sonar system (Side Scan Sonar System, SSS). Because of its limited swath width, a side-scan sonar can only image the seabed surface in local areas on either side of the track line. If a large-area seabed image is required, multiple survey lines must be laid out. To ensure sufficient coverage of the seabed surface and adequate data quality, the survey lines are generally arranged in parallel, a certain overlap between adjacent strips is guaranteed by setting the track-line spacing and the swath width, and the measured strip images are finally stitched into a large-area, fully covering seabed image. At present, two classes of methods exist for stitching multi-strip side-scan sonar images: geographic stitching and feature stitching. Geographic stitching is affected by the inaccurate positions of the side-scan sonar images, so target ghosting and misalignment appear on the stitched image; feature stitching still has problems in automatically determining and partitioning the common coverage area, matching features, and constructing the coordinate conversion model between adjacent strips. Research on side-scan sonar image stitching that addresses these problems is therefore of great significance for acquiring large-area, fully covering, high-resolution seabed images.
A single-strip side-scan sonar image can only provide a seabed image of a local area. To obtain a large-area, fully covering seabed image, the measurement results of the single strips must be stitched. Existing side-scan sonar image stitching methods divide into geographic stitching and feature stitching.
The stitching method based on geographic coordinates places the geocoded sonar images, endowed with geographic position information, under a unified geographic frame. When multiple pixel values fall at the same location, their average is usually taken; when a location has exactly one pixel value, that value is assigned directly; when a location has no corresponding pixel value, it is filled by linear interpolation. The method is fast, but its accuracy is low: the geographic position of a side-scan sonar image is affected by the positioning method (underwater short-baseline positioning, or estimation from the tow-cable length) and by external factors (water flow, terrain complexity and the like), so the same seabed target appears at different positions on the side-scan sonar images of different strips. In this case, stitching directly by geographic coordinates leads to target ghosting or misalignment and hampers subsequent image interpretation.
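To make the pixel-assignment rule concrete, the following is a minimal sketch of geographic mosaicking under these rules; the input layout (per-strip index/value arrays already geocoded into a shared grid) and all function names are assumptions for illustration, not part of the patent:

```python
import numpy as np

def geographic_mosaic(strips, grid_shape):
    """Mosaic pre-geocoded strips: average overlaps, assign single values
    directly, and fill empty cells by linear interpolation along rows."""
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape, dtype=int)
    for rows, cols, values in strips:         # per-strip pixel indices/values
        np.add.at(acc, (rows, cols), values)  # sum where strips overlap
        np.add.at(cnt, (rows, cols), 1)
    mosaic = np.full(grid_shape, np.nan)
    covered = cnt > 0
    mosaic[covered] = acc[covered] / cnt[covered]  # mean of overlapping pixels
    for r in range(grid_shape[0]):            # simple row-wise gap filling
        row, gap = mosaic[r], np.isnan(mosaic[r])
        if gap.any() and (~gap).sum() >= 2:
            row[gap] = np.interp(np.flatnonzero(gap),
                                 np.flatnonzero(~gap), row[~gap])
    return mosaic
```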
Feature stitching exploits the consistency of common feature points within the coverage area shared by adjacent strips. It detects and matches feature points of the images in the common coverage area, constructs a coordinate conversion model from the positions of the common feature points, and corrects the geographic position of one image with the other image as reference, thereby unifying the geographic positions of the common features. Feature stitching effectively avoids the target ghosting that can appear in the stitched result and is widely used for generating large-area seabed images. A study of existing feature stitching methods shows that the following problems remain:
first, since feature point detection and matching are performed within the common coverage area of adjacent strips, that common coverage area must be determined. When the survey lines are laid out, a certain overlap is usually set between adjacent strips to guarantee full coverage of the survey area and adequate data quality. When the overlap is low, the width of the common coverage area is far smaller than the swath width, target features in the common area are detected insufficiently, and a geographic stitching method still has to be used to obtain the large-area seabed image. To enable image matching between adjacent strips, 40% overlap between adjacent strips is required. If the data quality directly beneath the side-scan sonar is poor and that data is to be replaced by the edge data of the adjacent strip, the overlap must be increased further and can be set to 75%. Setting the overlap in this way ensures a sufficiently large common coverage area between adjacent strips for subsequent feature point matching.
In summary, for multi-strip side scan sonar image stitching, there are shortcomings in automatic determination and segmentation of an image overlapping region, accurate matching of image features, and construction of a coordinate conversion model between image feature point pairs.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a side-scan sonar stripe image stitching method based on path line constraint, which effectively solves the problems of the current geographic and feature stitching methods and obtains a large-area seabed image with clear targets and consistent positions.
In order to solve the technical problems, the invention adopts the following technical scheme:
a side-scan sonar stripe image stitching method based on path line constraint comprises the following steps:
step 1, acquiring side-scan sonar strip data of a research area, and determining each group of side-scan sonar strips to be spliced according to the position coordinates of a navigation line;
step 2, determining, from the track lines and the swath-width constraint, whether a common coverage area exists between the two side-scan sonar strips to be stitched in each group; if so, proceeding to the next step; if not, skipping the group and selecting the next group of side-scan sonar strips;
step 3, detecting characteristic points of two side-scan sonar strips to be spliced in each group, and removing mismatching point pairs after initial matching to obtain correct matching point pairs;
step 4, partitioning the common coverage area into blocks, and completing the stitching and fusion of the adjacent side-scan sonar strip images according to the correct matching point pairs within each block;
and 5, repeating the steps 2-4 until the splicing of all groups of side-scan sonar images in the large area is completed.
Further, the step 1 specifically includes:
step 1.1, calculating a function expression of a path line corresponding to each side scan sonar strip according to the position coordinates of the path line, calculating function values according to the function expression, and arranging the function values in a sequence from small to large or from large to small;
and 1.2, selecting the side-scan sonar strips corresponding to the two nearest function values as the two side-scan sonar strips to be spliced.
Further, the step 2 specifically comprises the following steps:
step 2.1, obtaining the edge region of the side-scan sonar strip images through a model built from the track lines;
step 2.2, judging whether pixel points of a single side-scan sonar strip image fall inside the edge region; if not, no common coverage area exists, namely the group of adjacent strips is skipped, the subsequent steps are not performed, and the next group of adjacent strips is selected; if so, the common coverage area exists, and the next step is performed.
Further, the edge region is obtained by the following model:
Y_{2(i-1)-1} = a_{i-1} x + b_{i-1} + D;
Y_{2(i-1)} = a_i x + b_i − D
wherein a_i is the slope and b_i the intercept, i = 1, 2, …, n; x is the plane abscissa along the track line, Y is the plane ordinate, and D is the vertical distance between the edge of each side-scan sonar image and its track line;
the two functions are the two boundary lines of the edge region, and the region enclosed between them is the edge region.
Further, the step 3 specifically includes:
step 3.1, performing feature point detection and description on the side-scan sonar strip images to be spliced by using a SURF algorithm;
step 3.2, selecting one of the side-scan sonar strips to be stitched as the reference image and the other as the image to be matched; for each feature point on the reference image, selecting the m feature points on the image to be matched that are geographically closest to it as candidate matching points, and taking the candidate with the smallest Euclidean distance between SURF feature vectors as the final matching point;
and 3.3, repeating the steps 3.1 and 3.2 for all the characteristic points on the reference image, finally obtaining all the initial matching point pairs, and removing the error matching point pairs in the initial matching point pairs by adopting a random sampling consistency algorithm to obtain correct matching point pairs.
Further, when the random sample consensus algorithm is used to reject mismatches, some point pairs are randomly selected to establish a two-parameter translation model between the strips:
x′ = x + a, y′ = y + b
wherein x, y and x′, y′ are the x and y coordinates of a feature point pair in the reference image and the image to be matched respectively, and a and b are the parameters to be solved;
the remaining point pairs are then tested, the verification condition being
|Δx − a| ≤ u and |Δy − b| ≤ u
wherein Δx = x′ − x and Δy = y′ − y are the coordinate differences of a matching point pair, and u is a set threshold;
a matching point pair that satisfies the verification condition is a correct matching point pair.
Further, the block stitching of the side-scan sonar images in step 4 is performed as follows:
step 4.1, partitioning the common coverage area with the K-means algorithm; if a block does not contain enough feature point pairs, the geographic stitching method is used directly; if it does, step 4.2 is performed;
and step 4.2, using the feature point pairs to construct a coordinate conversion model under the path line constraint with a thin-plate spline function, and completing the stitching and fusion of the adjacent side-scan sonar strip images with the coordinate conversion model.
Further, "enough feature point pairs" in step 4.1 means no fewer than 4 pairs.
Further, in step 4.1, based on the correct matching point pairs, position points on the track lines are selected as constraints, and K-means clustering is applied to the feature points in the common coverage area together with the corresponding track position points according to their coordinates and distribution; for each clustering result, the sum of the standard deviations of the coordinate deviations of the feature point pairs within the classes and the gradient of that sum are calculated, and the number of blocks of the common coverage area is obtained from the gradient.
Further, the coordinate conversion model under the path line constraint is constructed with the thin-plate spline function as follows:
in each partitioned block, the matching point pairs (Xf, Yf) and the coordinate points (Xc, Yc) on the track lines are selected, and the coordinate conversion model is built with the TPS function, where n is the total number of matching points and track-line points; in block-matrix notation (semicolons separating rows), the model is
[K Q′; Q 0_{3×3}] · [W V; A B] = [F G; 0_{3×1} 0_{3×1}]
wherein W, V, A and B are the parameters to be solved; F is the vector of x-coordinate differences and G the vector of y-coordinate differences between the image to be matched and the reference image at the matching points and track-line points; 0_{3×1} denotes a 3×1 zero matrix and 0_{3×3} a 3×3 zero matrix;
in the model, K is the n×n matrix with elements K_{ij} = r_{ij}² ln r_{ij}², wherein r_{ij}² is calculated by
r_{ij}² = (x_i′ − x_j′)² + (y_i′ − y_j′)²
wherein the i-th feature point coordinate on the reference image is (x_i′, y_i′) and the i-th feature point coordinate on the image to be matched is (x_i, y_i);
wherein Q is:
Q = (ones_{n×1}  X_sen  Y_sen)′
wherein ones_{n×1} denotes an n×1 matrix of ones, and X_sen and Y_sen are the x and y coordinates of the matching points and track-line points on the reference image:
X_sen = (Xf_sen′, Xc_sen′)′ = (x_1′, x_2′, …, x_n′)′;
Y_sen = (Yf_sen′, Yc_sen′)′ = (y_1′, y_2′, …, y_n′)′;
wherein Xf_sen′ denotes the x coordinates of the matching points on the reference image, Xc_sen′ the x coordinates of the track-line points on the reference image, and x_n′ the x coordinate of the n-th point on the reference image; Yf_sen′, Yc_sen′ and y_n′ denote the corresponding y coordinates;
wherein F = X_ref − X_sen and G = Y_ref − Y_sen, with
X_ref = (Xf_ref′, Xc_ref′)′ = (x_1, x_2, …, x_n)′; Y_ref = (Yf_ref′, Yc_ref′)′ = (y_1, y_2, …, y_n)′;
wherein Xf_ref′ denotes the x coordinates of the matching points on the image to be matched, Xc_ref′ the x coordinates of the track-line points on the image to be matched, and x_n the x coordinate of the n-th point on the image to be matched; Yf_ref′, Yc_ref′ and y_n denote the corresponding y coordinates;
solving the model yields the parameters
W = (w_1, w_2, …, w_n)′; V = (v_1, v_2, …, v_n)′; A = (a_1, a_2, a_3)′; B = (b_1, b_2, b_3)′
wherein w_n and v_n are the n-th elements of W and V, and a_i and b_i (i = 1, 2, 3) are the i-th elements of A and B;
after the parameters W, V, A and B of the conversion model have been obtained, the kernel terms and Q are evaluated at all points of the image, so that the resulting F and G are the coordinate corrections of all points, and
X_ref = F + X_sen; Y_ref = G + Y_sen
which completes the stitching and fusion of the side-scan sonar images.
Compared with the prior art, the invention has the following beneficial effects: aiming at the target misalignment and ghosting caused by the geographic stitching method and at the remaining problems of the feature stitching method, the invention combines accurate SURF-based matching with feature point position constraints, partitions the common coverage area with the K-means algorithm, and performs feature stitching or geographic stitching within each block according to the accurately matched point pairs, thereby completing the stitching of the side-scan sonar images.
Drawings
FIG. 1 is a flow chart of a method for stitching side-scan sonar stripe images based on path constraints in an embodiment of the invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described in the following in conjunction with the embodiments of the present invention, and it is obvious that the described embodiments are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that, without conflict, the embodiments of the present invention and features of the embodiments may be combined with each other.
The invention will be further illustrated, but is not limited, by the following examples.
The embodiment of the invention provides a side-scan sonar stripe image stitching method based on path line constraint, which comprises the following steps:
step 1, measuring and acquiring side-scan sonar strip data of a research area, and determining two side-scan sonar strips to be spliced in each group according to position coordinates of a navigation line;
in the embodiment, four side-scan sonar strips are obtained by measuring a certain water depth of Shenzhen Dapengwen bay through Edgetech towing, and the original data recording format is 'xtf'. The obtained original data is subjected to data decoding, radiation distortion correction, geometric distortion correction, geocoding and image denoising to obtain a side-scan sonar strip with pixel resolution of 0.6 m.
According to the position coordinates of the track lines, the function expression of the track line corresponding to each side-scan sonar strip is calculated, and the function values are arranged from small to large. In this embodiment, the function expression of each track line is obtained as follows:
a function y_i = a_i x + b_i (i = 1, 2, …, n) describes the i-th track line mathematically, x and y being the plane coordinates. Several points on the track are selected, and the slope a_i and intercept b_i of the function are obtained by least-squares estimation. The same x is substituted into all track-line functions, the function values are calculated, and they are sorted from small to large as y_1 < y_2 < … < y_n.
And selecting the side-scan sonar strips corresponding to the two nearest function values as a group of two side-scan sonar strips to be spliced.
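A small sketch of this pairing step, assuming the recorded track positions are available as coordinate arrays (the function names are illustrative, not from the patent):

```python
import numpy as np

def fit_track_line(xs, ys):
    """Least-squares fit y = a*x + b through the recorded track positions."""
    a, b = np.polyfit(xs, ys, deg=1)
    return a, b

def pair_adjacent_strips(track_lines, x_probe):
    """Order strips by their track-line function value at a common x
    (y_1 < y_2 < ... < y_n) and pair each strip with its nearest neighbour."""
    ys = [a * x_probe + b for a, b in track_lines]
    order = np.argsort(ys)
    return [(int(order[i]), int(order[i + 1])) for i in range(len(order) - 1)]
```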
Step 2, determining, from the track lines and the swath-width constraint, whether a common coverage area exists between the two side-scan sonar strips to be stitched in each group; if so, the next step is performed; if not, the group is skipped and the next group of side-scan sonar strips is selected. This step comprises the following substeps:
step 2.1, obtaining the edge region of the side-scan sonar strip images through a model built from the track lines; the edge region is determined by:
Y_{2(i-1)-1} = a_{i-1} x + b_{i-1} + D
Y_{2(i-1)} = a_i x + b_i − D
wherein D is the vertical distance between the edge of each side-scan sonar image and its track line, and the other parameters have the same meaning as in step 1, i.e. a_i is the slope, b_i the intercept, i = 1, 2, …, n, x the plane abscissa along the track line and Y the plane ordinate;
the two functions are the two boundary lines of the edge region, and the region enclosed between them is the edge region;
step 2.2, judging whether pixel points of a single side-scan sonar strip image fall inside the edge region; if not, no common coverage area exists, namely the group of adjacent strips is skipped, the subsequent steps are not performed, and the next group of adjacent strips is selected; if so, the common coverage area exists, and the next step is performed.
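A minimal sketch of this common-coverage test, assuming the boundary-line parameters come from the fitted track lines of the two adjacent strips (names are illustrative):

```python
import numpy as np

def has_common_coverage(px, py, a_prev, b_prev, a_cur, b_cur, D):
    """Test whether any pixel (px, py) of a single strip image falls inside
    the edge region bounded by Y = a_prev*x + b_prev + D and
    Y = a_cur*x + b_cur - D (the model of step 2.1)."""
    upper = a_prev * px + b_prev + D
    lower = a_cur * px + b_cur - D
    lo, hi = np.minimum(upper, lower), np.maximum(upper, lower)
    return bool(((py >= lo) & (py <= hi)).any())
```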
Step 3, detecting feature points on the two side-scan sonar strips to be stitched in each group, matching the feature points of the two strips to obtain initial matching point pairs, and then removing the mismatched pairs to obtain correct matching point pairs. This step comprises the following substeps:
step 3.1, performing feature point detection and description on the two side-scan sonar strip images to be stitched with the SURF algorithm. In this embodiment, the SURF algorithm extracts feature points and their descriptors from each of the two strip images for use in the subsequent steps.
Step 3.2, selecting one of the two strip images as the reference image and the other as the image to be matched; for each feature point on the reference image, the m feature points on the image to be matched that are geographically closest to it are selected as candidate matching points, and the candidate with the smallest Euclidean distance between SURF feature vectors is taken as the final matching point. The Euclidean distance in this embodiment is calculated as
d(X, Y) = sqrt( Σ_{i=1}^{n} (x_i − y_i)² )
wherein X = (x_1, x_2, …, x_n) is the SURF feature descriptor of a reference-image feature point and Y = (y_1, y_2, …, y_n) is the feature descriptor of a candidate point on the image to be matched.
In this embodiment the parameter m is set to 20; this is a preferred choice, and the invention is not limited to this specific value, which can be adjusted according to the actual situation.
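A sketch of this position-constrained matching, assuming an OpenCV build that still ships the non-free SURF module (cv2.xfeatures2d) and hypothetical geo_ref/geo_mov callables that map pixel coordinates to plane coordinates:

```python
import numpy as np
import cv2  # SURF requires the opencv-contrib (non-free) build

def constrained_match(img_ref, img_mov, geo_ref, geo_mov, m=20):
    """For each SURF keypoint on the reference strip, restrict candidates to
    the m geographically nearest keypoints on the other strip, then keep the
    one with the smallest descriptor Euclidean distance."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img_ref, None)
    kp2, des2 = surf.detectAndCompute(img_mov, None)
    pos1 = np.array([geo_ref(*k.pt) for k in kp1])
    pos2 = np.array([geo_mov(*k.pt) for k in kp2])
    matches = []
    for i, p in enumerate(pos1):
        near = np.argsort(np.linalg.norm(pos2 - p, axis=1))[:m]
        d = np.linalg.norm(des2[near] - des1[i], axis=1)  # descriptor distance
        matches.append((i, int(near[np.argmin(d)])))
    return kp1, kp2, matches
```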
step 3.3, repeating steps 3.1 and 3.2 for all feature points on the reference image finally yields all initial matching point pairs. In this embodiment, 570 initial matching point pairs are extracted, among which there are many mismatches, so a random sample consensus (Random Sample Consensus, RANSAC) algorithm is required to reject them: some point pairs are randomly selected from all the data, a mathematical model is built from them, and the remaining point pairs are tested to verify whether they satisfy that model. After screening by the RANSAC algorithm, 116 correct matching point pairs are finally obtained.
In this embodiment, when the RANSAC algorithm is used to reject mismatches, randomly selected point pairs establish a two-parameter translation model between the strips:
x′ = x + a, y′ = y + b
wherein x, y and x′, y′ are the x and y coordinates of a feature point pair in the reference image and the image to be matched respectively, and a and b are the parameters to be solved;
the remaining point pairs are then tested, the verification condition being
|Δx − a| ≤ u and |Δy − b| ≤ u
wherein Δx = x′ − x and Δy = y′ − y are the coordinate differences of a matching point pair, and u is a set threshold, 40 in this embodiment;
a matching point pair that satisfies the verification condition is a correct matching point pair.
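A compact sketch of this rejection step; the translation model x′ = x + a, y′ = y + b is a reading of the two-parameter model described above (the patent's own formula images are not reproduced in the text):

```python
import numpy as np

def ransac_translation(ref_pts, mov_pts, u=40, iters=1000, seed=None):
    """RANSAC with a two-parameter translation model between point sets.
    Pairs whose coordinate differences deviate from (a, b) by more than the
    threshold u are rejected as mismatches."""
    rng = np.random.default_rng(seed)
    d = mov_pts - ref_pts                  # per-pair differences (dx, dy)
    best = np.zeros(len(d), dtype=bool)
    for _ in range(iters):
        a, b = d[rng.integers(len(d))]     # model from one random pair
        inl = (np.abs(d[:, 0] - a) <= u) & (np.abs(d[:, 1] - b) <= u)
        if inl.sum() > best.sum():
            best = inl
    return best                            # boolean mask of correct pairs
```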
Step 4, partitioning the common coverage area into blocks, and completing the stitching and fusion of the adjacent side-scan sonar strip images according to the correct matching point pairs within each block;
step 4.1, partitioning the common coverage area with the K-means algorithm; if a block does not contain enough feature point pairs to solve the coordinate conversion parameters, the geographic stitching method is used directly; if it does, step 4.2 is performed.
In this embodiment, based on the correct matching point pairs, position points on the track lines are selected as constraints, and K-means clustering is applied to the feature points in the common coverage area together with the corresponding track position points according to their coordinates and distribution, with the number of classes varied from 2 to 10 (K-means is an existing algorithm and is not described again here). For each clustering result, the sum of the standard deviations of the coordinate deviations of the feature point pairs within the classes, and the gradient of that sum, are calculated.
The standard deviation of the coordinate deviations in each block is calculated as
σ = sqrt( (1/n) Σ_{i=1}^{n} (x_i − x̄)² )
wherein x_i is the coordinate deviation of a point in the block and x̄ is the mean coordinate deviation;
the sum of standard deviations is the sum of these per-block results, and the gradient value is the difference between the sum for i classes and the sum for i−1 classes.
The number of classes at the inflection point where the gradient becomes gentle is taken as the final number of blocks of the common coverage area. If a block contains fewer than 4 feature point pairs, the coordinate conversion requirement cannot be met and the geographic stitching method is used directly; if a block contains no fewer than 4 feature point pairs, the operation of step 4.2 is performed.
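A sketch of this block-count selection, assuming scikit-learn's KMeans and a per-point array of coordinate deviations aligned with the clustered points; the flat-gradient test is a simple stand-in for the inflection-point rule:

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_block_count(points, deviations, k_range=range(2, 11)):
    """Cluster feature/track points for each candidate class number and pick
    the class number where the summed per-block standard deviation of the
    coordinate deviations stops changing quickly."""
    sums = []
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
        sums.append(sum(deviations[labels == c].std() for c in range(k)))
    grads = np.diff(sums)                     # gradient between consecutive k
    knee = int(np.argmin(np.abs(grads))) + 1  # flattest change as the knee
    return list(k_range)[knee]
```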
Step 4.2, using the feature point pairs to construct a coordinate conversion model under the path line constraint with a thin-plate spline function, and completing the stitching and fusion of the adjacent side-scan sonar strip images with the coordinate conversion model.
In this embodiment, the coordinate conversion model under the path line constraint constructed with the thin-plate spline function is specifically as follows:
in each partitioned block, the matching point pairs (Xf, Yf) and the coordinate points (Xc, Yc) on the track lines are selected, and the coordinate conversion model is built with the TPS function, where n is the total number of matching points and track-line points; in block-matrix notation (semicolons separating rows), the model is
[K Q′; Q 0_{3×3}] · [W V; A B] = [F G; 0_{3×1} 0_{3×1}]
wherein W, V, A and B are the parameters to be solved; F is the vector of x-coordinate differences and G the vector of y-coordinate differences between the image to be matched and the reference image at the matching points and track-line points; 0_{3×1} denotes a 3×1 zero matrix and 0_{3×3} a 3×3 zero matrix;
in the model, K is the n×n matrix with elements K_{ij} = r_{ij}² ln r_{ij}², wherein r_{ij}² is calculated by
r_{ij}² = (x_i′ − x_j′)² + (y_i′ − y_j′)²
wherein the i-th feature point coordinate on the reference image is (x_i′, y_i′) and the i-th feature point coordinate on the image to be matched is (x_i, y_i);
wherein Q is:
Q = (ones_{n×1}  X_sen  Y_sen)′
wherein ones_{n×1} denotes an n×1 matrix of ones, and X_sen and Y_sen are the x and y coordinates of the matching points and track-line points on the reference image:
X_sen = (Xf_sen′, Xc_sen′)′ = (x_1′, x_2′, …, x_n′)′;
Y_sen = (Yf_sen′, Yc_sen′)′ = (y_1′, y_2′, …, y_n′)′;
wherein Xf_sen′ denotes the x coordinates of the matching points on the reference image, Xc_sen′ the x coordinates of the track-line points on the reference image, and x_n′ the x coordinate of the n-th point on the reference image; Yf_sen′, Yc_sen′ and y_n′ denote the corresponding y coordinates;
wherein F = X_ref − X_sen and G = Y_ref − Y_sen, with
X_ref = (Xf_ref′, Xc_ref′)′ = (x_1, x_2, …, x_n)′; Y_ref = (Yf_ref′, Yc_ref′)′ = (y_1, y_2, …, y_n)′;
wherein Xf_ref′ denotes the x coordinates of the matching points on the image to be matched, Xc_ref′ the x coordinates of the track-line points on the image to be matched, and x_n the x coordinate of the n-th point on the image to be matched; Yf_ref′, Yc_ref′ and y_n denote the corresponding y coordinates;
solving the model yields the parameters
W = (w_1, w_2, …, w_n)′; V = (v_1, v_2, …, v_n)′; A = (a_1, a_2, a_3)′; B = (b_1, b_2, b_3)′
wherein w_n and v_n are the n-th elements of W and V, and a_i and b_i (i = 1, 2, 3) are the i-th elements of A and B;
after the parameters W, V, A and B of the conversion model have been obtained, the kernel terms and Q are evaluated at all points of the image, so that the resulting F and G are the coordinate corrections of all points, and
X_ref = F + X_sen; Y_ref = G + Y_sen
which completes the stitching and fusion of the side-scan sonar images.
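A sketch of the TPS solve and the final correction X_ref = F + X_sen, Y_ref = G + Y_sen, under the standard thin-plate spline kernel r² ln r²; array names follow the text above, but the implementation details are assumptions:

```python
import numpy as np

def tps_kernel(a, b):
    """Pairwise TPS kernel U(r) = r^2 ln(r^2) between point sets a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d2 > 0.0, d2 * np.log(d2), 0.0)

def tps_solve(sen_xy, f, g):
    """Solve [K Q'; Q 0][W V; A B] = [F G; 0 0] for the control points
    sen_xy (matching points plus track-line points on the reference image)
    and their per-point x/y coordinate differences f, g."""
    n = len(sen_xy)
    M = np.zeros((n + 3, n + 3))
    M[:n, :n] = tps_kernel(sen_xy, sen_xy)             # K
    M[:n, n:] = np.column_stack([np.ones(n), sen_xy])  # Q' (n x 3)
    M[n:, :n] = M[:n, n:].T                            # Q  (3 x n)
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = np.column_stack([f, g])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]                            # (W,V) and (A,B)

def tps_apply(weights, affine, ctrl, pts):
    """Correct arbitrary points: X_ref = F + X_sen, Y_ref = G + Y_sen."""
    fg = (tps_kernel(pts, ctrl) @ weights
          + np.column_stack([np.ones(len(pts)), pts]) @ affine)
    return pts + fg
```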
Step 5, repeating steps 2-4 until the stitching of the side-scan sonar images of the whole large area is completed.
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the embodiments and scope of the present invention, and it should be appreciated by those skilled in the art that equivalent substitutions and obvious variations may be made using the teachings of the present invention, which are intended to be included within the scope of the present invention.

Claims (10)

1. A side-scan sonar stripe image stitching method based on path line constraint is characterized by comprising the following steps:
step 1, acquiring side-scan sonar strip data of a research area, and determining two side-scan sonar strips to be spliced in each group according to position coordinates of a navigation line;
step 2, determining, from the track lines and the swath-width constraint, whether a common coverage area exists between the two side-scan sonar strips to be stitched in each group; if so, proceeding to the next step; if not, skipping the group and selecting the next group of side-scan sonar strips;
step 3, detecting characteristic points of two side-scan sonar strips to be spliced in each group, and removing mismatching point pairs after initial matching to obtain correct matching point pairs;
step 4, partitioning the common coverage area into blocks, and completing the stitching and fusion of the adjacent side-scan sonar strip images according to the correct matching point pairs within each block;
and 5, repeating the steps 2-4 until the splicing of all groups of side-scan sonar images in the large area is completed.
2. The method for stitching the side-scan sonar ribbon images based on the path constraints of claim 1, wherein the step 1 specifically comprises:
step 1.1, calculating a function expression of a path line corresponding to each side scan sonar strip according to the position coordinates of the path line, calculating function values according to the function expression, and arranging the function values in a sequence from small to large or from large to small;
and 1.2, selecting the side-scan sonar strips corresponding to the two nearest function values as the two side-scan sonar strips to be spliced.
3. The method for stitching the side-scan sonar ribbon images based on the path constraints according to claim 1, wherein the step 2 specifically comprises the following steps:
step 2.1, obtaining an edge area of a side scan sonar stripe image through a model built by a navigation path;
step 2.2, judging whether pixel points of a single side-scan sonar strip image fall inside the edge region; if not, no common coverage area exists, namely the group of adjacent side-scan sonar strips is skipped, the subsequent steps are not performed, and the next group of adjacent strips is selected; if so, the common coverage area exists, and the next step is performed.
4. The method for stitching side-scan sonar banding images based on path constraints according to claim 1, wherein the edge area is obtained by the following model:
Y_{2(i-1)-1} = a_{i-1} x + b_{i-1} + D;
Y_{2(i-1)} = a_i x + b_i − D
wherein a_i is the slope and b_i the intercept, i = 1, 2, …, n; x and Y are respectively the plane abscissa and ordinate along the track line, and D is the vertical distance between the edge of each side-scan sonar image and its track line;
the two functions are the two boundary lines of the edge region, and the region enclosed between them is the edge region.
5. The method for stitching the side-scan sonar ribbon images based on the path constraints of claim 1, wherein the step 3 specifically comprises:
step 3.1, performing feature point detection and description on the side-scan sonar strip images to be spliced by using a SURF algorithm;
step 3.2, selecting one of the side-scan sonar strips to be stitched as the reference image and the other as the image to be matched; for each feature point on the reference image, selecting the m feature points on the image to be matched that are geographically closest to it as candidate matching points, and taking the candidate with the smallest Euclidean distance between SURF feature vectors as the final matching point;
and 3.3, repeating the steps 3.1 and 3.2 for all the characteristic points on the reference image, finally obtaining all the initial matching point pairs, and removing the error matching point pairs in the initial matching point pairs by adopting a random sampling consistency algorithm to obtain correct matching point pairs.
6. The method for stitching the side-scan sonar banding image based on the path constraint according to claim 5, wherein when the random sample consensus algorithm is used to reject mismatches, some point pairs are randomly selected to establish a two-parameter translation model:
x′ = x + a, y′ = y + b
wherein x, y and x′, y′ are the x and y coordinates of a feature point pair in the reference image and the image to be matched respectively, and a and b are the parameters to be solved;
the remaining point pairs are then tested, the verification condition being
|Δx − a| ≤ u and |Δy − b| ≤ u
wherein Δx = x′ − x and Δy = y′ − y are the coordinate differences of a matching point pair and u is a set threshold;
a matching point pair that satisfies the verification condition is a correct matching point pair.
7. The method for stitching side-scan sonar stripe images based on path constraints according to claim 1, wherein the block stitching of the side-scan sonar images in step 4 is performed as follows:
step 4.1, partitioning the common coverage area with the K-means algorithm; if a block does not contain enough feature point pairs, the geographic stitching method is used directly; if it does, performing step 4.2;
and step 4.2, using the feature point pairs to construct a coordinate conversion model under the path line constraint with a thin-plate spline function, and completing the stitching and fusion of the adjacent side-scan sonar strip images with the coordinate conversion model.
8. The method for stitching the side-scan sonar banding image based on the path constraint according to claim 7, wherein "enough feature point pairs" in step 4.1 means no fewer than 4 pairs.
9. The method for stitching the side-scan sonar banding image based on the path constraint according to claim 7, wherein in step 4.1, based on the correct matching point pairs, position points on the track lines are selected as constraints, K-means clustering is applied to the feature points in the common coverage area together with the corresponding track position points according to their coordinates and distribution, and, for each clustering result, the sum of the standard deviations of the coordinate deviations of the feature point pairs within the classes and the gradient of that sum are calculated, the number of blocks of the common coverage area being obtained from the gradient.
10. The method for stitching the side-scan sonar stripe images based on the path line constraints according to claim 7, wherein the coordinate conversion model under the path line constraint is constructed with the thin-plate spline function as follows:
in each partitioned block, the matching point pairs (Xf, Yf) and the coordinate points (Xc, Yc) on the track lines are selected, and the coordinate conversion model is built with the TPS function, where n is the total number of matching points and track-line points; in block-matrix notation (semicolons separating rows), the model is
[K Q′; Q 0_{3×3}] · [W V; A B] = [F G; 0_{3×1} 0_{3×1}]
wherein W, V, A and B are the parameters to be solved; F is the vector of x-coordinate differences and G the vector of y-coordinate differences between the image to be matched and the reference image at the matching points and track-line points; 0_{3×1} denotes a 3×1 zero matrix and 0_{3×3} a 3×3 zero matrix;
in the model, K is the n×n matrix with elements K_{ij} = r_{ij}² ln r_{ij}², wherein r_{ij}² is calculated by
r_{ij}² = (x_i′ − x_j′)² + (y_i′ − y_j′)²
wherein the i-th feature point coordinate on the reference image is (x_i′, y_i′) and the i-th feature point coordinate on the image to be matched is (x_i, y_i);
wherein Q is:
Q = (ones_{n×1}  X_sen  Y_sen)′
wherein ones_{n×1} denotes an n×1 matrix of ones, and X_sen and Y_sen are the x and y coordinates of the matching points and track-line points on the reference image:
X_sen = (Xf_sen′, Xc_sen′)′ = (x_1′, x_2′, …, x_n′)′;
Y_sen = (Yf_sen′, Yc_sen′)′ = (y_1′, y_2′, …, y_n′)′;
wherein Xf_sen′ denotes the x coordinates of the matching points on the reference image, Xc_sen′ the x coordinates of the track-line points on the reference image, and x_n′ the x coordinate of the n-th point on the reference image; Yf_sen′, Yc_sen′ and y_n′ denote the corresponding y coordinates;
wherein F = X_ref − X_sen and G = Y_ref − Y_sen, with
X_ref = (Xf_ref′, Xc_ref′)′ = (x_1, x_2, …, x_n)′; Y_ref = (Yf_ref′, Yc_ref′)′ = (y_1, y_2, …, y_n)′;
wherein Xf_ref′ denotes the x coordinates of the matching points on the image to be matched, Xc_ref′ the x coordinates of the track-line points on the image to be matched, and x_n the x coordinate of the n-th point on the image to be matched; Yf_ref′, Yc_ref′ and y_n denote the corresponding y coordinates;
solving the model yields the parameters
W = (w_1, w_2, …, w_n)′; V = (v_1, v_2, …, v_n)′; A = (a_1, a_2, a_3)′; B = (b_1, b_2, b_3)′
wherein w_n and v_n are the n-th elements of W and V, and a_i and b_i (i = 1, 2, 3) are the i-th elements of A and B;
after the parameters W, V, A and B of the conversion model have been obtained, the kernel terms and Q are evaluated at all points of the image, so that the resulting F and G are the coordinate corrections of all points, and
X_ref = F + X_sen; Y_ref = G + Y_sen
thereby completing the stitching and fusion of the side-scan sonar images.

Cited By (2)

* Cited by examiner, † Cited by third party

Publication number Priority date Publication date Assignee Title
CN117522684A * 2023-12-29 2024-02-06 湖南大学无锡智能控制研究院 Underwater side-scan sonar image stitching method, device and system
CN117522684B * 2023-12-29 2024-03-19 湖南大学无锡智能控制研究院 Underwater side-scan sonar image stitching method, device and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination