CN113313172A - Underwater sonar image matching method based on Gaussian distribution clustering - Google Patents


Info

Publication number
CN113313172A
CN113313172A (application CN202110601626.8A; granted as CN113313172B)
Authority
CN
China
Prior art keywords
image
sonar
target
dimensional
map
Prior art date
Legal status
Granted
Application number
CN202110601626.8A
Other languages
Chinese (zh)
Other versions
CN113313172B (en)
Inventor
Qiu Haiyang (邱海洋)
Dong Miao (董苗)
Wang Hui (王慧)
Zhi Pengfei (智鹏飞)
Zhu Zhiyu (朱志宇)
Current Assignee
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Jiangsu University of Science and Technology filed Critical Jiangsu University of Science and Technology
Priority to CN202110601626.8A priority Critical patent/CN113313172B/en
Publication of CN113313172A publication Critical patent/CN113313172A/en
Application granted granted Critical
Publication of CN113313172B publication Critical patent/CN113313172B/en
Priority to PCT/CN2022/094444 priority patent/WO2022253027A1/en
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features


Abstract

An underwater sonar image matching method based on Gaussian distribution clustering performs accurate three-dimensional reconstruction from two-dimensional sonar images through image registration and optimization of the sonar's three-dimensional motion parameters, and comprises the following steps. Step A: extract features, establish matching relations among the features, and obtain the elevation information of the map. Step B: perform pose estimation and update the feature map to generate a three-dimensional space map. The method reconstructs sonar images from features up to an environment map covering elevation information, provides motion attitude estimation information, and can be used in sonar image processing and in map building for underwater robots.

Description

Underwater sonar image matching method based on Gaussian distribution clustering
Technical Field
The invention relates to the fields of ship and ocean technology and sonar image processing, in particular to underwater detection, perception, and three-dimensional reconstruction of targets, and specifically provides an underwater sonar image matching method based on Gaussian distribution clustering.
Background
Over the past decade, high-resolution two-dimensional (2D) multibeam forward-scan (FS) sonar video systems have been commercialized for operation and imaging in turbid waters, and feature-based, template-based, region-based, and Fourier-based approaches to 2D FS sonar image registration have been explored. Previous studies, however, are generally based on single feature points, with image registration then formulated as an optimization problem using the normal distribution transform. The choice of grid size involves a trade-off between resolution and computation time and is not easily automated. Moreover, the distribution of points does not follow a univariate Gaussian, so the original Gaussian-distribution computation misrepresents the true distribution, and optimization over a complex grid structure takes too long. When the elevation map is generated, the elevation angle lost in projecting the 3D scene onto the 2D sonar image makes the reconstruction error large, and a non-ideal optimization method further degrades image registration and accurate three-dimensional reconstruction.
Disclosure of Invention
The invention aims to provide an underwater sonar image matching method based on Gaussian distribution clustering, which performs three-dimensional reconstruction of the underwater environment from two-dimensional sonar images through image registration and optimization of the sonar's three-dimensional motion parameters.
An underwater sonar image matching method based on Gaussian distribution clustering comprises the following steps:
step A, extracting features, establishing matching relations among the features, and obtaining the elevation information of the map; this specifically comprises the following 3 steps:
a1, collecting sonar data and carrying out feature detection;
step A2, registering images to generate a Gaussian map;
step A3, calculating a scene elevation map;
step B, estimating the pose and updating the feature map to generate a three-dimensional space map; this specifically comprises the following 3 steps:
step B1, pose estimation;
step B2, optimizing parameters;
and step B3, generating a three-dimensional map.
Further, collecting sonar data and performing feature extraction in step A1 comprises the following 2 steps:
step A1-1, collecting sonar data and selecting an area with distinct pixels from the sonar data image as the target processing area;
step A1-2, using the gray-level distribution information of the target feature area and scene analysis to divide the target area into three categories, distinguishing the target pixel area from other non-target areas, and eliminating pixel areas produced by noise according to the gray-level distribution information, so as to describe the feature points.
Further, step A1-2 comprises the following 2 steps:
step A1-2a, using the gray-level distribution information of the target feature region and scene analysis to divide the relatively distinct gray values in the sonar image into three categories: bright speckles that make up 3D objects or structures, shadow regions cast by surfaces adjacent to the 3D objects, and flat seafloor surfaces whose gray values lie between the first two;
step A1-2b, taking the bright speckles that make up 3D objects or structures as target areas through scene analysis, distinguishing target pixels from other non-target areas, and removing pixel areas produced by noise to reduce image errors.
Further, in the step a2, the image registration and the generation of the gaussian map include the following steps:
a2-1, clustering target pixels with strong pixel intensity by a K-means clustering method;
and step A2-2, removing noise by adopting low-pass filtering, and reducing errors of feature extraction.
Further, step a2-1 includes the following steps:
step A2-1a, gridding the target feature points obtained in step A1-2b, and clustering the gridded target pixels within each grid cell by the K-means clustering method, based on the points in the cell;
step A2-1b, representing the k-means-clustered target feature points by Gaussian distributions, computing the eigenvalues of the covariance matrix and the eigenvectors, which reflect the orientation and smoothness of the surface, representing each cluster by its cluster mean and covariance, and generating a Gaussian map. The mean vector and covariance matrix correspond to the location, size, and orientation of each two-dimensional image region.
Further, step a2-1b includes the following 2 steps:
step A2-1b-1, selecting the brightest 1%-2% of pixels, eliminating regions of fewer than 8 pixels, and choosing the value of k so that no sub-region exceeds 32 pixels, regions larger than 32 pixels being split into smaller ones to obtain suitable feature regions;
step A2-1b-2, computing the location, size, and orientation of each two-dimensional image region from the corresponding mean vector and covariance matrix, and generating the Gaussian map.
Further, the calculating of the scene elevation map in step a3 includes the following steps:
step A3-1, under the planarity assumption, calculating measured values of the elevation angle of the shadow-casting points at each scanning azimuth;
step A3-2, fixing the plane using the image points with the minimum and maximum range values, i.e., fixing the three-dimensional coordinates of the leading-edge and trailing-edge points; scanning similar point pairs at different azimuths and determining occlusion contour points from the cast shadows to establish the correspondence between the target's elevation angle and its shadow, the elevation values of each three-dimensional object being filled from front to back by linear interpolation;
step A3-3, smoothing the data by low-pass filtering to reduce the influence of image noise on local peaks; establishing thresholds for the background, object, and shadow regions by k-means segmentation, i.e., dividing the clustered regions into three categories by gray value, and partitioning the image classes to locate the object-to-shadow and shadow-to-ground transitions;
step A3-4, estimating the size of the three-dimensional object from the elevation angle of the trailing-edge point and the angle of the occlusion contour, and calculating the scene elevation map with the following formula:
[Formula 3: equation image not reproduced]
where Hs is the sonar's height above the seabed, Rs the slant range, Ls the shadow length, Ht the target height, and Lt the target length.
Further, the pose estimation of step B1 includes the following steps:
step B1-1, in the scene elevation map obtained in step A3-4, selecting the first frame of the sonar image as the reference image and the second frame as the image to be matched; each image point moves to a new position under the sonar's rigid motion, and the two sonar images are registered by calculating the frame-to-frame motion parameters;
step B1-2, calculating the sonar motion parameters, searching among the candidate transformations for the image to be matched that best fits the reference image, performing pose estimation with a spatial transformation function, and comparing the transformations of the two images, i.e., rotation and translation, to obtain the best transformation parameters; then taking the previously obtained registered image as the new reference and registering the next sonar frame, until all sonar images to be matched in the whole sequence are registered, unifying the registrations of all adjacent frames into one common reference frame and reducing the accumulated error of pairwise registration.
Further, the parameter optimization of step B2 comprises the following steps:
step B2-1, projecting each image point of the elevation map obtained in step A3-4 to its corresponding space point, and calculating the new positions of the three-dimensional scene points after the sonar's rigid motion, yielding two sonar views;
step B2-2, optimizing the parameters over the two sonar views, transforming the three-dimensional points of the second view into the coordinate system of the first view, and taking as the optimal registration the transformation that maximizes the evaluation of all feature Gaussian distributions.
Further, the three-dimensional map generation of step B3 comprises the following step:
after the parameters are optimized as in step B2, the optimal solution is obtained with increased calculation speed, the sonar track relative to the initial position is calculated, and the accurate three-dimensional reconstruction is completed.
The invention achieves the following beneficial effects: it reconstructs sonar images from features up to an environment map covering elevation information while providing motion attitude estimation information, and can be used in sonar image processing and in map building for underwater robots.
Drawings
Fig. 1 is a flowchart of the entire sonar image processing in the embodiment of the present invention.
Fig. 2 shows the target processing region of distinct pixels in a sonar data image in an embodiment of the present invention.
Fig. 3 shows the three classes of pixel regions obtained by scene analysis in an embodiment of the present invention.
Fig. 4 shows a seafloor sonar image and the corresponding k-means-clustered image in an embodiment of the present invention.
Fig. 5 illustrates the Gaussian-distribution representation of the k-means-clustered target pixels in an embodiment of the present invention.
FIG. 6 is a diagram illustrating elevation calculations using elevation information in accordance with an embodiment of the present invention.
Fig. 7 is a sonar view 1 and a view 2 obtained by rigid motion in an embodiment of the present invention.
Detailed Description
The technical solution of the invention is explained in further detail below with reference to the drawings of the specification.
The invention aims to provide an underwater sonar image matching method based on Gaussian distribution clustering, which performs three-dimensional reconstruction of the underwater environment from two-dimensional sonar images through image registration and optimization of the sonar's three-dimensional motion parameters. The overall procedure is shown in figure 1 and comprises the following steps:
Step A: extracting features, establishing matching relations among the features, and obtaining the elevation information of the map.
Step B: performing pose estimation and updating the feature map to generate a three-dimensional space map.
Step A, feature extraction, comprises: step A1, collecting sonar data and performing feature detection; step A2, registering images and generating a Gaussian map; step A3, calculating the scene elevation map.
Collecting sonar data and performing feature extraction in step A1 comprises the following 2 steps:
step A1-1, collecting sonar data and selecting an area with distinct pixels from the sonar data image as the target processing area, as shown in fig. 2.
step A1-2, using the gray-level distribution information of the target feature area and scene analysis to divide the target area into three categories, distinguishing the target pixel area from other non-target areas, and eliminating pixel areas produced by noise according to the gray-level distribution information, so as to describe the feature points.
Step a1-2 includes the following 2 steps:
step A1-2a, using the gray-level distribution information of the target feature region and scene analysis to divide the relatively distinct gray values in the sonar image into three categories, as shown in fig. 3: 1. bright speckles that make up 3D objects or structures; 2. shadow regions cast by surfaces adjacent to the 3D objects; 3. flat seafloor surfaces whose gray values lie between the first two.
step A1-2b, distinguishing target pixels from other non-target areas through scene analysis and pixel intensity, and removing pixel areas produced by noise to reduce image errors.
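To make this scene analysis concrete, here is a minimal Python sketch (the function name, the plain Lloyd-iteration k-means, and the scipy-based blob filter are illustrative assumptions, not the patent's prescription): gray levels are clustered into three classes, the class with the highest center is kept as the target highlight region, and small blobs produced by noise are removed by a size filter.

```python
import numpy as np
from scipy import ndimage

def highlight_mask(img, min_blob=8, iters=50):
    """Three-class gray-level scene analysis of a sonar image.

    1-D k-means (k=3) on pixel intensities separates shadow, seafloor,
    and highlight classes; the class with the highest center is taken
    as the target region, and connected blobs smaller than `min_blob`
    pixels are discarded as noise.
    """
    vals = img.astype(float).ravel()
    centers = np.percentile(vals, [10, 50, 95])   # shadow / floor / highlight
    for _ in range(iters):                        # plain Lloyd iterations
        labels = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        new = np.array([vals[labels == k].mean() if np.any(labels == k)
                        else centers[k] for k in range(3)])
        if np.allclose(new, centers):
            break
        centers = new
    mask = labels.reshape(img.shape) == np.argmax(centers)
    blobs, n = ndimage.label(mask)                # connected components
    sizes = ndimage.sum(mask, blobs, index=np.arange(1, n + 1))
    return np.isin(blobs, 1 + np.flatnonzero(sizes >= min_blob))
```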
In step a2, image registration and gaussian map generation comprise the following steps:
step A2-1, clustering the target pixels with strong pixel intensity by the K-means clustering method, as shown in fig. 4.
And step A2-2, removing noise by adopting low-pass filtering, and reducing errors of feature extraction.
As a further limitation of the present invention, step A2-1 comprises the steps of:
step A2-1a, gridding the target feature points obtained in step A1-2b and clustering the gridded target pixels within each grid cell by the K-means clustering method based on the points in the cell, as shown in figure 5; the brightest 1%-2% of pixels are selected, regions of fewer than 8 pixels are eliminated, and the value of k is chosen so that no sub-region exceeds 32 pixels, regions larger than 32 pixels being split into smaller ones to obtain suitable feature regions.
step A2-1b, representing the k-means-clustered target feature points by Gaussian distributions, using formulas 1 and 2 to compute the eigenvalues of the covariance matrix and the eigenvectors, which reflect the orientation and smoothness of the surface, representing each cluster by its cluster mean and covariance, and generating a Gaussian map. The mean vector and covariance matrix correspond to the location, size, and orientation of each two-dimensional image region:

$$\mu_j=\frac{1}{N_j}\sum_{i=1}^{N_j}P_i^j \qquad \text{(formula 1)}$$

$$\Sigma_j=\frac{1}{N_j}\sum_{i=1}^{N_j}\left(P_i^j-\mu_j\right)\left(P_i^j-\mu_j\right)^{T} \qquad \text{(formula 2)}$$

where $P_i^j$ denotes the $i$-th of the $N_j$ scan points in region $j$ (with $R$ regions in total), so that each feature region $j$ is represented by its mean $\mu_j$ and covariance $\Sigma_j$. This cluster-based Gaussian mapping (statistical mean and covariance), unlike the grid-based normal distribution transform, eliminates the need to maintain a complicated grid structure without losing precision, and reduces the computation time of the optimization.
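A minimal sketch of steps A2-1a/A2-1b and formulas 1 and 2 under the constraints just stated; the explicit gridding stage is omitted for brevity, and the recursive splitting rule and scikit-learn's KMeans are our assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_gaussian_map(img, bright_frac=0.02, min_pts=8, max_pts=32):
    """Represent the brightest sonar pixels as a set of 2-D Gaussians.

    Keeps the brightest `bright_frac` of pixels, k-means-clusters their
    (row, col) coordinates, splits any cluster above `max_pts` points,
    drops clusters below `min_pts`, and returns per cluster the mean
    mu_j (formula 1), covariance Sigma_j (formula 2), and the
    eigenvalues/eigenvectors of Sigma_j, which encode the size and
    orientation of each two-dimensional image region.
    """
    thresh = np.quantile(img, 1.0 - bright_frac)
    pts = np.argwhere(img >= thresh).astype(float)   # N x 2 coordinates
    pending, accepted = [pts], []
    while pending:
        c = pending.pop()
        if len(c) > max_pts:                         # choose k so that no
            k = int(np.ceil(len(c) / max_pts))       # sub-region need exceed
            lab = KMeans(n_clusters=k, n_init=10).fit(c).labels_
            pending.extend(c[lab == i] for i in range(k))
        elif len(c) >= min_pts:                      # reject noise clusters
            accepted.append(c)
    features = []
    for c in accepted:
        mu = c.mean(axis=0)                          # formula 1
        sigma = (c - mu).T @ (c - mu) / len(c)       # formula 2
        evals, evecs = np.linalg.eigh(sigma)         # spread and orientation
        features.append((mu, sigma, evals, evecs))
    return features
```

The eigen-decomposition of each $\Sigma_j$ gives the principal axes of the region: the eigenvectors encode its orientation and the eigenvalues its spread, which is how the surface direction and smoothness are read off each feature.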
The step of calculating the scene elevation map in the step a3 includes the following steps:
step A3-1, calculating under the planarity assumption: the elevation angles of the relatively flat parts of the scene and of the points on three-dimensional objects are estimated from the shadows they cast.
step A3-2, fixing the plane using the image points with the minimum and maximum range values, i.e., fixing the three-dimensional coordinates of the leading-edge and trailing-edge points, as shown in fig. 6; similar point pairs are scanned out at different azimuths, occlusion contour points are determined from the cast shadows to establish the correspondence between the target's elevation angle and its shadow, and the elevation values of each three-dimensional object are filled from front to back by linear interpolation.
step A3-3, smoothing the data by low-pass filtering to reduce the influence of image noise on local peaks; thresholds for the background, object, and shadow regions (established in step A1-2) are obtained by k-means segmentation to locate the object-to-shadow and shadow-to-ground transitions.
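A sketch of this step on a single azimuth beam; the helper name, the window length, and the reuse of the k-means class centers from the segmentation step are assumptions:

```python
import numpy as np

def beam_transitions(profile, centers, win=5):
    """Locate object->shadow and shadow->ground transitions on one beam.

    `profile` is the intensity along one azimuth scan line and `centers`
    the three class centers from k-means segmentation.  A moving-average
    low-pass filter suppresses local peaks before classification.
    """
    smooth = np.convolve(profile, np.ones(win) / win, mode="same")
    cls = np.argmin(np.abs(smooth[:, None] - np.asarray(centers)[None, :]),
                    axis=1)
    shadow, ground, obj = np.argsort(centers)  # darkest, mid, brightest class
    hits = []
    for i in range(1, len(cls)):
        if cls[i - 1] == obj and cls[i] == shadow:
            hits.append(("object->shadow", i))
        elif cls[i - 1] == shadow and cls[i] == ground:
            hits.append(("shadow->ground", i))
    return hits
```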
step A3-4, estimating the size of the three-dimensional object from the elevation angle of the trailing-edge point and the angle of the occlusion contour using formula 3, and calculating the scene elevation map. Small objects may cast no noticeable shadow; for these the object height is set to zero.
[Formula 3: equation image not reproduced]
where Hs is the sonar's height above the seabed, Rs the slant range, Ls the shadow length, Rh the ground range, Ht the target height, and Lt the target length.
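Formula 3 survives only as an image in this record. As a hedged reconstruction from the variable list and from standard shadow geometry over a flat seabed (similar triangles between the sonar at height $H_s$, the target top at height $H_t$, and the far end of the shadow), the relation is plausibly of the form

$$\frac{H_t}{L_s}=\frac{H_s}{R_h+L_s}\quad\Rightarrow\quad H_t=\frac{H_s\,L_s}{R_h+L_s}\approx\frac{H_s\,L_s}{R_s},$$

where the approximation replaces the ground range to the shadow's far end by the slant range $R_s$, as is common in side-scan practice; the target length $L_t$ then follows from the along-range extent of the highlight region. This is a reading of the geometry, not the patent's verbatim equation.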
Step B comprises: step B1, pose estimation; step B2, parameter optimization; and step B3, three-dimensional map generation.
The pose estimation of the step B1 comprises the following steps:
step B1-1, in the scene elevation map obtained in step A3-4, selecting the first frame of the sonar image as the reference image and the second frame as the image to be matched; each image point moves to a new position under the sonar's rigid motion, as shown in figure 7, and the two sonar images are registered by calculating the frame-to-frame motion parameters.
The elevation angle lost in projecting the three-dimensional world onto the two-dimensional sonar image can be expressed by mapping a three-dimensional scene point $P_s$ onto the zero-elevation plane along the point's range and azimuth, $h(P_s)$, computed as

$$h(P_s)=R\begin{bmatrix}\cos\theta\\ \sin\theta\end{bmatrix}=\frac{1}{\cos\phi}\begin{bmatrix}X_s\\ Y_s\end{bmatrix} \qquad \text{(formula 4)}$$

where $R$ is the range from the sonar to the reflecting target, $\theta$ the azimuth angle, and $\phi$ the elevation angle.
A point $P_s=(X_s\ Y_s\ Z_s)^T$ in the Cartesian sonar coordinate system can also be expressed as $(R\ \theta\ \phi)^T$; the Cartesian and spherical sonar coordinates are related by

$$P_s=\begin{bmatrix}X_s\\ Y_s\\ Z_s\end{bmatrix}=R\begin{bmatrix}\cos\phi\cos\theta\\ \cos\phi\sin\theta\\ \sin\phi\end{bmatrix} \qquad \text{(formula 5)}$$

$$\begin{bmatrix}R\\ \theta\\ \phi\end{bmatrix}=\begin{bmatrix}\sqrt{X_s^2+Y_s^2+Z_s^2}\\ \tan^{-1}\!\left(Y_s/X_s\right)\\ \tan^{-1}\!\left(Z_s\big/\sqrt{X_s^2+Y_s^2}\right)\end{bmatrix} \qquad \text{(formula 6)}$$
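A small runnable sketch of formulas 4-6 as reconstructed above (function names are ours), with a round-trip check that a scene point and its zero-elevation image share range and azimuth:

```python
import numpy as np

def to_spherical(P):
    """Cartesian (X, Y, Z) -> (R, theta, phi); formula 6."""
    X, Y, Z = P
    return (np.sqrt(X**2 + Y**2 + Z**2),    # range R
            np.arctan2(Y, X),               # azimuth theta
            np.arctan2(Z, np.hypot(X, Y)))  # elevation phi

def to_cartesian(R, theta, phi):
    """(R, theta, phi) -> Cartesian (X, Y, Z); formula 5."""
    return np.array([R * np.cos(phi) * np.cos(theta),
                     R * np.cos(phi) * np.sin(theta),
                     R * np.sin(phi)])

def h(P):
    """Zero-elevation mapping of formula 4: the image point keeps the
    range and azimuth of P but loses the elevation angle phi."""
    R, theta, _ = to_spherical(P)
    return np.array([R * np.cos(theta), R * np.sin(theta)])

P = to_cartesian(10.0, np.deg2rad(30.0), np.deg2rad(7.0))
s = h(P)
assert np.isclose(np.linalg.norm(s), 10.0)                   # same range
assert np.isclose(np.arctan2(s[1], s[0]), np.deg2rad(30.0))  # same azimuth
```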
describing the three-dimensional motion of the sonar with 6 components, T ═ Tx,ty,tz]TAnd W ═ Wx,wy,wz]TNot including a rotation component [ w ] in the two-dimensional imagex,wy,]By calculating the frame-to-frame motion parameter tx,ty,tz,wz]The two maps are registered.
Figure BDA0003092851290000103
step B1-2, calculating the sonar motion parameters, searching among the candidate transformations for the image to be matched that best fits the reference image, performing pose estimation with a spatial transformation function, and projecting the first image onto the second image S' by formula 7 to compare the transformations, i.e., rotation and translation, of the two images. The sonar's motion produces two scene views (at the same range R but different elevation angles); image registration estimates the motion parameters $[t_x,t_y,t_z,w_z]$ of the two frames through formula 8, from the elevation-angle changes of the scene features, minimizing the discrepancy between S and S' (formula 9) to obtain the best transformation parameters. The previously obtained registered image is then taken as the new reference and the next sonar frame is registered, until all sonar images to be matched in the whole sequence are registered; the registrations of all adjacent frames are unified into one common reference frame, reducing the accumulated error of pairwise registration.
[Formula 8: equation image not reproduced]

$$S' = HS \qquad \text{(formula 9)}$$
Where H is a transformation matrix containing translation and in-plane rotation.
The step B2 parameter optimization comprises the following steps:
step B2-1, each image point of the elevation map obtained in step A3-4 can be projected to its corresponding space point $P_s$ using formula 5; after the sonar's rigid motion with components $m=[T,W]$, the new position $P_s'$ of the three-dimensional scene point is determined by formulas 10 and 11.
$$P_s' = R\,P_s + T \qquad \text{(formula 10)}$$

[Formula 11, giving the rotation matrix R in terms of W: equation image not reproduced]

where R is the rotation matrix and W the rotation velocity vector.
The corresponding sonar images $S=h(P_s)$ and $S'=h(P_s')$ are then related through formulas 9 and 12.
[Formula 12: equation image not reproduced]
After the sonar's rigid motion, the new positions of the three-dimensional scene points are calculated, yielding the two sonar views.
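A sketch of this projection step, taking formula 10 as the rigid transform $P' = RP + T$; since formula 11 survives only as an image, the rotation here is built with Rodrigues' formula as a stand-in assumption:

```python
import numpy as np

def rotation_from_w(w):
    """Rotation matrix from W = (w_x, w_y, w_z) via Rodrigues' formula
    (a stand-in for formula 11, which is only available as an image)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    kx, ky, kz = np.asarray(w) / th
    K = np.array([[0, -kz, ky], [kz, 0, -kx], [-ky, kx, 0]])
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def second_view(points, T, W):
    """Move the N x 3 scene points of view 1 by the rigid motion of
    formula 10, P' = R P + T, then reproject with the zero-elevation
    mapping h() to obtain the image points of view 2."""
    moved = points @ rotation_from_w(W).T + np.asarray(T)
    rng = np.linalg.norm(moved, axis=1)            # range of each P'
    az = np.arctan2(moved[:, 1], moved[:, 0])      # azimuth of each P'
    return np.stack([rng * np.cos(az), rng * np.sin(az)], axis=1)
```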
Step B2-2 includes the following steps:
the two sonar views obtained in step B2-1 are optimized by the parameters of equation 13, and the three-dimensional points of the second view are transformed into the coordinate system of the first view, with the best registration yielding the maximum function value when evaluating all transformed feature gaussian distributions.
[Formula 13: equation image not reproduced]
where $G_j(s)$ denotes the Gaussian distribution of the $j$-th feature in the first view, $G'_j(s)$ the corresponding Gaussian distribution in the second view, $P_{si}$ and $P'_{si}$ the three-dimensional scene points corresponding to the two sets of view feature points, and $M^{-1}$ transforms the three-dimensional points of the second view into the coordinate system of the first view.
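Formula 13 is only available as an image; the sketch below implements one NDT-style reading of the surrounding text: second-view points are mapped back by $M^{-1}$ (here restricted to $[t_x,t_y,t_z,w_z]$), projected with $h(\cdot)$, and scored under the first view's feature Gaussians, the best registration maximizing the total score. The objective shape, the assumption that features and points share one image coordinate frame, and the Nelder-Mead optimizer are ours.

```python
import numpy as np
from scipy.optimize import minimize

def gauss_response(s, mu, sigma):
    """Unnormalized 2-D Gaussian response of one feature at image point s."""
    d = s - mu
    return np.exp(-0.5 * d @ np.linalg.solve(sigma, d))

def score(params, pts2, features):
    """Sum of first-view Gaussian responses over back-transformed
    second-view points (one reading of formula 13).  `features` holds
    (mu_j, Sigma_j, ...) tuples in the same image coordinate frame."""
    tx, ty, tz, wz = params
    c, s_ = np.cos(wz), np.sin(wz)
    Rz = np.array([[c, -s_, 0.0], [s_, c, 0.0], [0.0, 0.0, 1.0]])
    back = (pts2 - np.array([tx, ty, tz])) @ Rz      # rows: Rz^T (P' - T)
    rho = np.linalg.norm(back[:, :2], axis=1)
    img = back[:, :2] * (np.linalg.norm(back, axis=1) / rho)[:, None]  # h()
    return sum(gauss_response(p, mu, sig)
               for p in img for mu, sig, *_ in features)

# maximize the score (minimize its negative), e.g. with Nelder-Mead:
# best = minimize(lambda p: -score(p, pts2, features),
#                 x0=np.zeros(4), method="Nelder-Mead")
```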
The step B3 of generating a three-dimensional map includes the steps of:
After the parameters are optimized as in step B2, the first image is mapped directly into the second view S'; using the transformation matrix $H \approx M$ under the small-elevation approximation $\cos\phi \approx 1$ in formula 8, $S = h(MP_s)$ and $S' = h(M^{-1}P_s)$ are calculated by formula 14.
[Formula 14: equation image not reproduced]
The homogeneous transformation M is applied, and the sonar track relative to the initial position is computed by the recursion of formula 15, completing the accurate three-dimensional reconstruction.

$${}^{k}M_0 = {}^{k}M_{k-1}\,{}^{k-1}M_0 \qquad \text{(formula 15)}$$

where k denotes the frame number (time index).
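A short sketch of the recursion of formula 15 (hypothetical names), accumulating the 4x4 homogeneous frame-to-frame transforms into the track relative to the initial frame:

```python
import numpy as np

def accumulate_track(step_transforms):
    """Formula 15 as a recursion: given each homogeneous kM_{k-1}
    (4x4), return every kM_0, the sonar pose of frame k relative
    to the initial position."""
    M = np.eye(4)                         # 0M_0: identity at frame 0
    track = []
    for step in step_transforms:          # step = kM_{k-1}
        M = step @ M                      # kM_0 = kM_{k-1} @ (k-1)M_0
        track.append(M.copy())
    return track
```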
The above is only a preferred embodiment of the present invention, and the scope of the invention is not limited to this embodiment; equivalent modifications or changes made by those skilled in the art according to the present disclosure are included in the scope of protection set forth in the appended claims.

Claims (10)

1. An underwater sonar image matching method based on Gaussian distribution clustering, characterized in that it comprises the following steps:
step A, extracting features, establishing matching relations among the features, and obtaining the elevation information of the map; this specifically comprises the following 3 steps:
a1, collecting sonar data and carrying out feature detection;
step A2, registering images to generate a Gaussian map;
step A3, calculating a scene elevation map;
step B, estimating the pose and updating the feature map to generate a three-dimensional space map; this specifically comprises the following 3 steps:
step B1, pose estimation;
step B2, optimizing parameters;
and step B3, generating a three-dimensional map.
2. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 1, characterized in that: collecting sonar data and performing feature extraction in step A1 comprises the following 2 steps:
step A1-1, collecting sonar data and selecting an area with distinct pixels from the sonar data image as the target processing area;
step A1-2, using the gray-level distribution information of the target feature area and scene analysis to divide the target area into three categories, distinguishing the target pixel area from other non-target areas, and eliminating pixel areas produced by noise according to the gray-level distribution information, so as to describe the feature points.
3. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 2, characterized in that step A1-2 comprises the following 2 steps:
step A1-2a, using the gray-level distribution information of the target feature region and scene analysis to divide the relatively distinct gray values in the sonar image into three categories: bright speckles that make up 3D objects or structures, shadow regions cast by surfaces adjacent to the 3D objects, and flat seafloor surfaces whose gray values lie between the first two;
step A1-2b, taking the bright speckles that make up 3D objects or structures as target areas through scene analysis, distinguishing target pixels from other non-target areas, and removing pixel areas produced by noise to reduce image errors.
4. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 1, is characterized in that: in step a2, image registration and gaussian map generation comprise the following steps:
a2-1, clustering target pixels with strong pixel intensity by a K-means clustering method;
and step A2-2, removing noise by adopting low-pass filtering, and reducing errors of feature extraction.
5. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 4, is characterized in that: step A2-1 includes the following steps:
step A2-1a, gridding the target feature points obtained in step A1-2b, and clustering the gridded target pixels within each grid cell by the K-means clustering method, based on the points in the cell;
step A2-1b, representing the k-means-clustered target feature points by Gaussian distributions, computing the eigenvalues of the covariance matrix and the eigenvectors, which reflect the orientation and smoothness of the surface, representing each cluster by its cluster mean and covariance, and generating a Gaussian map, the mean vector and covariance matrix corresponding to the location, size, and orientation of each two-dimensional image region.
6. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 5, is characterized in that: step A2-1b includes the following 2 steps:
step A2-1b-1, selecting the brightest 1%-2% of pixels, eliminating regions of fewer than 8 pixels, and choosing the value of k so that no sub-region exceeds 32 pixels, regions larger than 32 pixels being split into smaller ones to obtain suitable feature regions;
step A2-1b-2, computing the location, size, and orientation of each two-dimensional image region from the corresponding mean vector and covariance matrix, and generating the Gaussian map.
7. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 1, is characterized in that: the step of calculating the scene elevation map in the step a3 includes the following steps:
step A3-1, under the planarity assumption, calculating measured values of the elevation angle of the shadow-casting points at each scanning azimuth;
step A3-2, fixing the plane using the image points with the minimum and maximum range values, i.e., fixing the three-dimensional coordinates of the leading-edge and trailing-edge points; scanning similar point pairs at different azimuths and determining occlusion contour points from the cast shadows to establish the correspondence between the target's elevation angle and its shadow, the elevation values of each three-dimensional object being filled from front to back by linear interpolation;
step A3-3, smoothing the data by low-pass filtering to reduce the influence of image noise on local peaks; establishing thresholds for the background, object, and shadow regions by k-means segmentation, i.e., dividing the clustered regions into three categories by gray value, and partitioning the image classes to locate the object-to-shadow and shadow-to-ground transitions;
step A3-4, estimating the size of the three-dimensional object from the elevation angle of the trailing-edge point and the angle of the occlusion contour, and calculating the scene elevation map with the following formula:
[Formula 3: equation image not reproduced]
where Hs is the sonar's height above the seabed, Rs the slant range, Ls the shadow length, Ht the target height, and Lt the target length.
8. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 1, is characterized in that: the pose estimation of the step B1 comprises the following steps:
step B1-1, in the scene elevation map obtained in step A3-4, selecting the first frame of the sonar image as the reference image and the second frame as the image to be matched; each image point moves to a new position under the sonar's rigid motion, and the two sonar images are registered by calculating the frame-to-frame motion parameters;
step B1-2, calculating the sonar motion parameters, searching among the candidate transformations for the image to be matched that best fits the reference image, performing pose estimation with a spatial transformation function, and comparing the transformations of the two images, i.e., rotation and translation, to obtain the best transformation parameters; then taking the previously obtained registered image as the new reference and registering the next sonar frame, until all sonar images to be matched in the whole sequence are registered, unifying the registrations of all adjacent frames into one common reference frame and reducing the accumulated error of pairwise registration.
9. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 8, is characterized in that: the step B2 parameter optimization comprises the following steps:
step B2-1, projecting each image point of the elevation map obtained in step A3-4 to its corresponding space point, and calculating the new positions of the three-dimensional scene points after the sonar's rigid motion, yielding two sonar views;
step B2-2, optimizing the parameters over the two sonar views, transforming the three-dimensional points of the second view into the coordinate system of the first view, and taking as the optimal registration the transformation that maximizes the evaluation of all feature Gaussian distributions.
10. The underwater sonar image matching method based on Gaussian distribution clustering according to claim 1, is characterized in that: the step B3 of generating a three-dimensional map includes the steps of:
after the parameters are optimized as in step B2, the optimal solution is obtained with increased calculation speed, the sonar track relative to the initial position is calculated, and the accurate three-dimensional reconstruction is completed.
CN202110601626.8A 2021-05-31 2021-05-31 Underwater sonar image matching method based on Gaussian distribution clustering Active CN113313172B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110601626.8A CN113313172B (en) 2021-05-31 2021-05-31 Underwater sonar image matching method based on Gaussian distribution clustering
PCT/CN2022/094444 WO2022253027A1 (en) 2021-05-31 2022-05-23 Underwater sonar image matching method based on gaussian distribution clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110601626.8A CN113313172B (en) 2021-05-31 2021-05-31 Underwater sonar image matching method based on Gaussian distribution clustering

Publications (2)

Publication Number Publication Date
CN113313172A (en) 2021-08-27
CN113313172B CN113313172B (en) 2022-03-18

Family

ID=77376534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110601626.8A Active CN113313172B (en) 2021-05-31 2021-05-31 Underwater sonar image matching method based on Gaussian distribution clustering

Country Status (2)

Country Link
CN (1) CN113313172B (en)
WO (1) WO2022253027A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116012539B (en) * 2023-03-27 2023-06-06 埃尔法(山东)仪器有限公司 Calculation method for three-dimensional imaging of air mass by combining unmanned aerial vehicle with laser detection
CN116342965B (en) * 2023-05-26 2023-11-24 中国电建集团江西省电力设计院有限公司 Water level measurement error analysis and control method and system
CN117824664B (en) * 2024-03-05 2024-05-28 河海大学 Active SLAM method of autonomous unmanned system based on multi-beam sounding sonar
CN118071754A (en) * 2024-04-24 2024-05-24 胜利油田中心医院 Thyroid ultrasound image data processing method


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113313172B (en) * 2021-05-31 2022-03-18 江苏科技大学 Underwater sonar image matching method based on Gaussian distribution clustering

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105405146A (en) * 2015-11-17 2016-03-16 中国海洋大学 Feature density clustering and normal distribution transformation based side-scan sonar registration method
CN109544609A (en) * 2018-10-11 2019-03-29 天津大学 A kind of sidescan-sonar image matching process based on SIFT algorithm
WO2021055646A1 (en) * 2019-09-17 2021-03-25 FLIR Belgium BVBA Navigational danger identification and feedback systems and methods
CN112581610A (en) * 2020-10-16 2021-03-30 武汉理工大学 Robust optimization method and system for establishing map from multi-beam sonar data
CN112802195A (en) * 2020-12-30 2021-05-14 浙江大学 Underwater robot continuous occupying and mapping method based on sonar

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAIYANG QIU ET AL.: "Single Variable-constrained NDT Matching in Traffic Data Collection Using a Laser-based Detector", IEEE ACCESS *
NIU BOCHENG: "Research on UUV underwater simultaneous localization and mapping based on acoustic vision", China Master's Theses Full-text Database (Master), Engineering Science and Technology II *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022253027A1 (en) * 2021-05-31 2022-12-08 江苏科技大学 Underwater sonar image matching method based on gaussian distribution clustering
CN113744337A (en) * 2021-09-07 2021-12-03 江苏科技大学 Synchronous positioning and mapping method integrating vision, IMU and sonar
CN113744337B (en) * 2021-09-07 2023-11-24 江苏科技大学 Synchronous positioning and mapping method integrating vision, IMU and sonar
CN114596408A (en) * 2022-02-08 2022-06-07 武汉大学 Micro-parallel three-dimensional reconstruction method based on continuous two-dimensional metal distribution image

Also Published As

Publication number Publication date
CN113313172B (en) 2022-03-18
WO2022253027A1 (en) 2022-12-08

Similar Documents

Publication Publication Date Title
CN113313172B (en) Underwater sonar image matching method based on Gaussian distribution clustering
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
RU2713611C2 (en) Three-dimensional space simulation method
US9317741B2 (en) Three-dimensional object modeling fitting and tracking
CN111899328B (en) Point cloud three-dimensional reconstruction method based on RGB data and generation countermeasure network
CN112001926B (en) RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping
CN106780576A (en) A kind of camera position and orientation estimation method towards RGBD data flows
Aykin et al. On feature extraction and region matching for forward scan sonar imaging
CN113610889A (en) Human body three-dimensional model obtaining method and device, intelligent terminal and storage medium
CN112132876B (en) Initial pose estimation method in 2D-3D image registration
CN111127613B (en) Image sequence three-dimensional reconstruction method and system based on scanning electron microscope
JP7173471B2 (en) 3D position estimation device and program
CN112927251B (en) Morphology-based scene dense depth map acquisition method, system and device
CN113740871A (en) Laser SLAM method, system equipment and storage medium in high dynamic environment
Yu et al. A stereovision method for obstacle detection and tracking in non-flat urban environments
Kim et al. High-precision underwater 3d mapping using imaging sonar for navigation of autonomous underwater vehicle
CN111915651A (en) Visual pose real-time estimation method based on digital image map and feature point tracking
CN117422753A (en) High-precision scene real-time three-dimensional reconstruction method combining optics and SAR (synthetic aperture radar) images
EP1307705A1 (en) Height measurement apparatus
CN116704112A (en) 3D scanning system for object reconstruction
CN112146647B (en) Binocular vision positioning method and chip for ground texture
CN114660641A (en) Self-adaptive GPS fusion positioning system, method and medium
Spears et al. Determining underwater vehicle movement from sonar data in relatively featureless seafloor tracking missions
An et al. Tracking an RGB-D camera on mobile devices using an improved frame-to-frame pose estimation method
WO2022198603A1 (en) Real-time simultaneous localization and mapping using an event camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant