CN114693524A - Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium - Google Patents


Info

Publication number
CN114693524A
CN114693524A (application CN202210348998.9A)
Authority
CN
China
Prior art keywords
image
scan sonar
matching
point
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210348998.9A
Other languages
Chinese (zh)
Inventor
冯霞
吴功才
杨乃如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Vocational and Technical College
Original Assignee
Hangzhou Vocational and Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Vocational and Technical College filed Critical Hangzhou Vocational and Technical College
Priority to CN202210348998.9A priority Critical patent/CN114693524A/en
Publication of CN114693524A publication Critical patent/CN114693524A/en
Pending legal-status Critical Current

Classifications

    • G06T 3/4038: Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06F 18/23: Pattern recognition; clustering techniques
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/80; G06T 5/90
    • G06T 2200/32: Indexing scheme involving image mosaicing
    • G06T 2207/10132: Image acquisition modality: ultrasound image
    • G06T 2207/20221: Image fusion; image merging

Abstract

The invention provides a side-scan sonar image accurate-matching and fast-splicing method based on an ORB-GMS algorithm, comprising the following steps: acquiring a side-scan sonar raw data fragment in real time and parsing it into a picture; performing slant-range correction, speed correction and gray gain on the original side-scan sonar image segment so that it clearly presents the submarine geomorphological information; extracting and describing feature points of the adjacent image segments with an improved ORB algorithm; coarse-matching the feature descriptors with a FLANN matching strategy; introducing latitude/longitude information and refining the coarse matches with an improved GMS algorithm; locally fitting the matched result with the PROSAC algorithm and calculating a homography matrix; and performing weighted-average fusion of the two images according to the homography matrix to obtain the side-scan sonar spliced image. With this method, real-time side-scan sonar image splicing can be realized quickly, accurately and efficiently; the sonar images while it works, and the submarine geomorphological feature map is obtained in real time.

Description

Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium
Technical Field
The invention relates to side-scan sonar image splicing, and particularly to a side-scan sonar image accurate-matching and fast-splicing method, device, and storage medium based on an ORB-GMS algorithm.
Background
The physical properties of seawater prevent humans from effectively surveying the ocean and obtaining information about the seafloor surface directly with optical imaging devices. Side-scan sonar, a device that rapidly acquires high-resolution images of the seabed, is therefore widely used in ocean engineering construction, seabed resource development, target detection and recognition, and related fields.
However, limited by the imaging angle and range of the sonar equipment, a single sonar image obtained from a raw data strip by simple filtering or correction cannot comprehensively depict the high-quality submarine geomorphology of a large area. Fusing multiple images with an appropriate method to generate an image of higher spatial resolution is therefore of great significance for accurately imaging the distribution of seabed surface landforms and for promoting marine scientific research and marine engineering construction.
In existing methods, side-scan sonar images are typically stitched after fitting with the RANSAC algorithm on SIFT (scale-invariant feature transform) or SURF (speeded-up robust features) features, but the computation is too slow to meet the requirements of real-time splicing and dynamic display while the side-scan sonar moves at high speed. The traditional ORB technique searches for feature points with the FAST algorithm, which improves computation speed, but leaves many mismatched points, reducing accuracy and causing duplicated interpretation of the image.
For example, CN111028154A discloses a method for matching and splicing side-scan sonar images of rugged seabed, which comprises: preprocessing the side-scan sonar images so that they correspond to the actual seabed scene; labeling a large number of side-scan sonar images to obtain a semantic-segmentation data set; building and training a semantic-segmentation neural network; segmenting the side-scan sonar images to be matched with the trained network and performing template matching on the segmented images to obtain the relative position of the two side-scan sonar images; and finally fusing and splicing the preprocessed side-scan sonar images according to the obtained relative position information. That method performs template matching after labeling and training an image data set in advance; but the actual submarine topography is complex, and even with a large data set it is difficult for template matching to capture the correlations of complexly varying real seabed data, so the method is limited to scenes with obvious topographic changes.
CN113284048A likewise discloses a side-scan sonar image stitching method, comprising: sequentially applying image filtering, geometric correction, and gray correction based on a generative-adversarial-network gray-correction model to the side-scan sonar image; calculating the longitude and latitude of each pixel in the preprocessed image; extracting feature points of the images to be matched to obtain the region to be matched; dividing the overlapping area into grids based on the latitude/longitude information; matching the feature points in the region to be matched according to the grid division; screening the obtained matches and eliminating mismatches; and fusing the images with a combination of multiband fusion and maximum fusion to obtain the spliced side-scan sonar image. That method preprocesses the image with a neural network, which is a posterior data-processing step lacking the fast real-time behaviour required in field acquisition; when matching with latitude/longitude information it does not consider that, in a real-time splicing scenario, the latitude and longitude of adjacent positions in the overlapping area barely change, so it suits only the post-processing of large sonar image segments.
In conclusion, a real-time, high-precision side-scan sonar image splicing technique with strong real-time performance and high matching accuracy has wide practical application demand.
Disclosure of Invention
The invention aims to provide a side-scan sonar image accurate-matching and fast-splicing method based on the ORB-GMS algorithm that can accurately splice sonar images in complex underwater environments at high splicing speed, quickly and accurately restoring the geomorphological features of a large seabed area; it addresses the poor real-time performance and low accuracy of existing side-scan sonar image splicing and overcomes the image distortion and low efficiency of the splicing process.
The invention provides the following technical scheme:
the first aspect of the embodiments of the present invention discloses: a side scan sonar image accurate matching fast splicing method based on an ORB-GMS algorithm is characterized by comprising the following steps:
s1: acquiring a side scan sonar original data fragment in real time and analyzing the side scan sonar original data fragment into a picture;
s2: performing slant-range correction, speed correction and gray gain on the original side-scan sonar image segment obtained in step S1, so that the side-scan sonar image segment clearly presents the submarine geomorphological information;
s3: extracting and describing feature points of the adjacent image segments obtained in the S2 by using an improved ORB algorithm, namely replacing a BRIEF descriptor in the ORB algorithm with a BEBLID descriptor;
s4: carrying out coarse matching on the feature descriptors by using a FLANN matching strategy;
s5: introducing latitude/longitude information and purifying the coarse matches with an improved GMS algorithm, i.e. after the 20×20 grid of the GMS algorithm is divided, the selection is restricted to the overlapping area with the same latitude/longitude in the two images;
s6: performing local fitting on the matched result by using a PROSAC algorithm, and calculating a homography matrix;
s7: and performing weighted average fusion on the two images according to the homography matrix to obtain a side-scan sonar spliced image.
Preferably, in S2: the slant-range correction is as follows: the ground-range position is calculated, and the gray value at each slant-range point is assigned to the corrected ground-range point; the position calculation formula is:
Xg = Width/2 ± sqrt((Ys − Width/2)² − H²)
Yg = Xs
where Width is the image width, (Xs, Ys) is a point on the original image, (Xg, Yg) is the corresponding point on the ground-range image after slant-range correction, and H is the height of the side-scan sonar above the seabed;
the speed correction comprises: longitudinally compressing or stretching the side-scan sonar image according to the latitude/longitude time of each ping record of the side-scan sonar and the ratio of each ping's acquisition period;
the gray gain comprises: the mean of the compensated sampling points over a transverse sliding window of the image is calculated as:
v̄(j) = (1/N) · Σ_{i=1}^{N} v(i)
where N is the total number of pixels in the window, v(i) is the gray value of pixel i, j is the window index, and i is the pixel index.
Preferably, in S3 the BRIEF descriptor in the ORB algorithm is replaced with the BEBLID descriptor as follows:
s31, the characteristic point extraction method comprises the following steps:
selecting a pixel point from the image and denoting its gray value I_p; if, among the 16 points on the circle around it, n consecutive points differ from it, the point is set as a feature point; the comparison operation T is defined as:
T(A, B) = 1 if |I_A − I_B| > t, 0 otherwise
where I_A is the gray value of point A and I_B is the gray value of point B;
s32, in the feature description stage, the feature points obtained in S31 are described with BEBLID descriptors, specifically as follows:
let C = {(x_i, y_i, l_i)}_{i=1}^{N} be a training set of image-patch pairs with labels l_i ∈ {−1, 1}, where l_i = 1 indicates that the two patches correspond to the same salient image structure and l_i = −1 to different structures; the training loss function is:
L(γ) = Σ_{i=1}^{N} exp(−γ · l_i · Σ_{k=1}^{K} h_k(x_i)·h_k(y_i))
where h_k(z) = h_k(z; f, T) depends on the feature extraction function f: x → R and a threshold T. Given f and T, the descriptor bit is defined by thresholding f(x) with T:
h(x; f, T) = +1 if f(x) ≤ T, −1 otherwise
The BEBLID binary descriptor is obtained according to the above formula.
Preferably, the FLANN fast nearest neighbor feature matching strategy in S4 is as follows:
rebuilding a hash over the binary descriptors, cluster-modeling the descriptors with a k-d tree, and searching for the nearest matching point;
the specific steps are as follows: the binary descriptor set is split into two parts on its highest dimension, the same process is repeated on each subset, and several randomized k-d trees are built; the neighbor search proceeds as follows:
(1) starting retrieval from the root node N;
(2) if N is a leaf node, the points contained in the node are added to the search result and count += |N|;
(3) if N is not a leaf node, comparing the child node with the query Q, finding out the nearest node Cq, and adding other nodes in the same level into a priority queue;
(4) carrying out recursive search on the Cq nodes;
(5) if the priority queue is not empty and count < L, the first element of the priority queue is assigned to N and the search returns to step (2).
Preferably, the purification process in S5 is as follows:
grid division is performed according to the size of the overlapping image segments that share the same latitude/longitude information; GMS defaults to G = 20×20 grid cells for 10000 feature points, and here the number of cells is divided dynamically according to the degree of overlap; each grid region is scored with the following formula:
S_i = Σ_{k=1}^{K} |X_{a_k b_k}|
where K denotes the number of disjoint region pairs, {a_k, b_k} is the k-th predicted region pair, and X_{a_k b_k} is the subset of matches located in region pair {a_k, b_k}; the distribution of the score S_i is as follows:
S_i ~ B(K·n, p_t) if the match is true; S_i ~ B(K·n, p_f) if the match is false (n being the average number of feature points per cell)
this distribution is binomial, and the corresponding means and standard deviations are calculated as:
m_t = K·n·p_t, s_t = sqrt(K·n·p_t·(1 − p_t)); m_f = K·n·p_f, s_f = sqrt(K·n·p_f·(1 − p_f))
the discriminability P between true and false matches, i.e. the difference of the means divided by the sum of the standard deviations, is calculated according to the above as:
P = (m_t − m_f) / (s_t + s_f)
the more matching points the image contains, the stronger the separability between correct and incorrect matches; mismatch elimination is then applied to the pre-matched point pairs obtained in S4 on the basis of this discriminability.
Preferably, the PROSAC algorithm in S6 includes:
the progressive sample consensus (PROSAC) semi-random method sorts the quality evaluations of all point pairs in descending order and performs model hypothesis and verification only on the high-quality point pairs;
the homography matrix is calculated as follows: progressive consistent sampling is applied to the matching set, the quality of all point pairs is evaluated in a semi-random way to compute Q values, the pairs are then arranged in descending order of Q, models are hypothesized and verified only among the high-quality pairs each time, and the point pairs are filtered to obtain the final homography matrix, i.e. the transformation matrix H;
the method comprises the following steps:
s61: calculating the minimum Euclidean distance d_min of the feature points, the Euclidean distance ratio β, and the quality factor γ;
s62: selecting the 4 groups of matching points with the highest quality sum to calculate the homography matrix H;
s63: for the remaining matching points other than those 4 groups, calculating the corresponding projection points according to H;
s64: calculating errors between the projection points and the matching points to judge inner and outer points;
s65: and repeating the step S62 by using the interior points, and returning the homography matrix after the maximum iteration times are reached.
Preferably, the weighted-average fusion of the two images by the homography matrix in S7 specifically comprises the following steps:
the homography matrix formula obtained from S65 is as follows:
H = s · M · [r_1 r_2 t]
where f_x, f_y, u_0, v_0 and γ are intrinsic parameters, s is a scale factor, r_1, r_2 and t are extrinsic parameters, and M is the intrinsic matrix:
M = [f_x γ u_0; 0 f_y v_0; 0 0 1]
and carrying out affine transformation alignment according to the homography matrix, wherein the affine transformation alignment comprises linear transformation and translation transformation, and the image seams are fused by average weighting to finally obtain a side-scan sonar spliced image.
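The weighted-average fusion of the seam can be put in concrete terms with a minimal 1-D sketch. The linear distance-weight ramp across the overlap is an assumption; the text only specifies that the image seams are fused by average weighting:

```python
# Hedged sketch (not the patent's exact implementation): weighted-average fusion
# of the seam between two already-aligned grayscale pixel rows. In the overlap
# region each output pixel is a weighted average of the two source pixels, with
# the weight of image B growing linearly across the seam.

def blend_overlap(row_a, row_b, overlap):
    """Blend two 1-D pixel rows whose last/first `overlap` samples coincide."""
    assert 0 < overlap <= min(len(row_a), len(row_b))
    out = list(row_a[:-overlap])                 # A-only part
    for i in range(overlap):                     # blended seam
        w = (i + 1) / (overlap + 1)              # weight of B ramps up across the seam
        a = row_a[len(row_a) - overlap + i]
        b = row_b[i]
        out.append((1.0 - w) * a + w * b)
    out.extend(row_b[overlap:])                  # B-only part
    return out

row_a = [10, 10, 10, 20, 30]
row_b = [22, 28, 40, 40, 40]
print([round(v, 2) for v in blend_overlap(row_a, row_b, 2)])
# → [10, 10, 10, 20.67, 28.67, 40, 40, 40]
```

In a full 2-D mosaic the same ramp would run along the overlap direction after warping one image by H; the 1-D form keeps the arithmetic visible.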
In a second aspect of the embodiment of the present invention, an electronic device is further disclosed, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the method for accurately matching and rapidly splicing the side scan sonar images based on the ORB-GMS algorithm according to the first aspect of the present invention is implemented.
The third aspect of the embodiment of the present invention further discloses a computer-readable storage medium, in which a stored computer program enables a processor to execute the method for accurately matching and rapidly splicing the side scan sonar images based on the ORB-GMS algorithm in the first aspect.
The invention has the beneficial effects that:
1. compared with the reference documents in the background art, the method needs no large-scale data-set labeling and training and is a prior (not posterior) method;
2. the invention introduces the ORB algorithm for feature point extraction and replaces the BRIEF descriptor with the BEBLID descriptor, mainly solving the low speed and low efficiency of existing side-scan sonar image splicing methods;
3. aiming at the duplicated splicing caused by low image-matching accuracy, the invention uses a double matching strategy: first, nearest-neighbor coarse matching of the descriptors with the FLANN strategy; then the overlapping area is computed from the latitude/longitude position information and used as the basis of grid division, the grids are scored from the distances of the feature point pairs within them, and robust GMS feature purification based on motion statistics is performed according to the scores, yielding a more accurate matching result for the final splicing; this solves the excessive mismatching of CN113284048A in short-range shallow-water scanning caused by uncertain latitude/longitude information, and achieves real-time performance without posterior processing;
4. with the method, real-time side-scan sonar image splicing is realized quickly, accurately and efficiently; the sonar images while it works, and the submarine geomorphological feature map is obtained in real time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method embodying the present invention;
FIG. 2 is an original side-scan sonar image segment analyzed by the method of the present invention;
FIG. 3 is a side scan sonar image slice after pre-processing and correction by the method of the present invention;
FIG. 4 is a side scan sonar image feature extraction result obtained by the method of the present invention;
FIG. 5 shows the side scan sonar image matching result obtained by the method of the present invention;
FIG. 6 is a side scan sonar splicing effect diagram obtained by the method of the present invention.
Detailed Description
Example 1
As shown in the figure, the side scan sonar image accurate matching fast splicing method based on the ORB-GMS algorithm comprises the following steps:
s1: acquiring a side scan sonar original data fragment in real time and analyzing the side scan sonar original data fragment into a picture;
the specific process is as follows: the side-scan system adopts the 450 kHz low-frequency channel with a swath of 30 m; the data segments are cut with 30% overlap; the sampling precision is 16 bits; the echo intensity of each ping is mapped to a gray-value original image segment; the data quantization formulas are:
[quantization formulas rendered as embedded images in the original publication]
where G is the quantized gray-value data, GB is the echo data before quantization, G_max and G_min are the maximum and minimum values of the gray image, C is a constant, m is the sampling precision of the raw echo data, and n is the numerical precision after quantization; the echo intensity is enhanced during the gray-level conversion; the original image segment is shown in fig. 2.
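Since the exact quantization formulas are reproduced only as images, the following is a generic sketch of the mapping described in the text (m-bit echo samples to n-bit gray values); the min-max stretch and treating C as an additive offset are assumptions:

```python
# Hedged sketch: map raw echo samples to n-bit gray values by linear min-max
# stretching. The constant C of the text is assumed to act as an additive
# offset; the result is clamped to the valid n-bit range.

def quantize(echo, n=8, C=0):
    """Map raw echo samples (any bit depth m) to n-bit gray values."""
    g_min, g_max = min(echo), max(echo)
    span = (g_max - g_min) or 1                  # avoid division by zero on flat pings
    levels = (1 << n) - 1                        # 2^n - 1 gray levels
    return [min(levels, max(0, round((v - g_min) / span * levels) + C))
            for v in echo]

ping = [0, 1000, 32768, 65535]                   # 16-bit echo samples (m = 16)
print(quantize(ping))                            # → [0, 4, 128, 255]
```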
S2: performing slope correction, speed correction and gray gain on the original side-scan sonar image segment obtained in the step S1 to enable the side-scan sonar image segment to clearly present submarine landform information; the side-scan sonar image slice after pre-processing is shown in figure 3.
Because the sonar pulse propagates as a spherical wave while the side-scan sonar works, objects are compressed in the horizontal direction, and the effect is that the projection stretch grows with distance from the side-scan sonar; the slant-range correction is therefore performed: the ground-range coordinate is calculated, and the gray value at the slant-range coordinate is assigned to the corrected ground-range coordinate; the position coordinate calculation formula is:
Xg = Width/2 ± sqrt((Ys − Width/2)² − H²)
Yg = Xs
where Width is the image width, (Xs, Ys) is the coordinate of a point on the original image, (Xg, Yg) is the coordinate of the corresponding point on the ground-range image after slant-range correction, and H is the height of the side-scan sonar above the seabed;
the ground-range pixel width of each echo is then calculated from the corrected ground-range coordinates (Xg, Yg); the formula of the ground-range pixel width is:
n′_i = sqrt(n_i² − n_h²)
where n_h is the slant-range pixel width from the seabed line to the transmission line, n_i is the slant-range pixel width from the current echo to the transmission line, and n′_i is the ground-range pixel width from each echo to the seabed line.
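The slant-range correction above can be sketched for one channel of the swath. The relation ground range = sqrt(slant² − H²) follows the pixel-width formula; nearest-neighbour assignment and unit resolution are assumptions, since the text only states that the slant-range gray value is assigned to the corrected ground-range point:

```python
# Hedged sketch of slant-to-ground-range correction for one side of the swath:
# each slant-range sample at distance r from the transmission line is relocated
# to ground range sqrt(r^2 - H^2), where H is the sonar height above the seabed.
import math

def slant_to_ground(row, H, resolution=1.0):
    """Resample one slant-range scan line (one channel) to ground range."""
    out = [0.0] * len(row)
    for xs, gray in enumerate(row):
        r = xs * resolution                      # slant range of this sample
        if r <= H:                               # inside the water column: no seabed return
            continue
        xg = int(round(math.sqrt(r * r - H * H) / resolution))
        if xg < len(out):
            out[xg] = gray                       # assign gray value to ground-range bin
    return out

row = [5, 5, 5, 9, 7, 3, 2, 1]                   # one slant-range channel
print(slant_to_ground(row, H=2.0))               # → [0, 0, 9, 7, 0, 3, 2, 1]
```

Note how the water-column samples (r ≤ H) vanish and the near-nadir returns bunch toward the origin, which is exactly the stretching the correction removes.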
The speed correction includes the following: the image suffers compression distortion from the speed of the towing platform; the side-scan sonar image is longitudinally compressed or stretched using the latitude/longitude time of each ping record and the ratio of each ping's acquisition period. Specifically, a scan-filling method is applied to two pings adjacent in latitude/longitude time, i.e. the pixel values of the closed connected region between the two ping images are obtained by inverse-distance weighting. The overlap ratio of two adjacent pings within the acquisition period is calculated, and the image is compressed according to that ratio.
The gray gain resolves radiometric distortion in the image. The gray gain comprises: calculating the mean v̄(j) of the sampling points in a transverse sliding window of the image and a compensation term TL, with:
v̄(j) = (1/N) · Σ_{i=1}^{N} v(i)
TL = n·10·log R + a·R
where N is the total number of pixels in the window, v(i) is the gray value of pixel i, j is the window index, i is the pixel index, n is the beam-shape factor, a is the absorption coefficient, and R is the slant range.
The sampling-point means of the successive transverse sliding windows are accumulated to obtain the final compensation equation:
[final compensation equation rendered as an embedded image in the original publication]
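The TL compensation term above can be sketched numerically. Applying the dB gain multiplicatively to pixel values is an assumption, and the window-mean accumulation is omitted because its formula appears only as an image in the original:

```python
# Hedged sketch of the transmission-loss compensation TL = n*10*log(R) + a*R
# described above, applied as a range-dependent amplitude gain. The parameter
# values and the bin-centre range convention are illustrative.
import math

def tvg(R, n=2.0, a=0.01):
    """Transmission-loss compensation in dB for slant range R (R > 0)."""
    return n * 10.0 * math.log10(R) + a * R

def compensate(row, resolution=1.0, n=2.0, a=0.01):
    """Scale each sample by its range-dependent gain (converted from dB)."""
    out = []
    for i, v in enumerate(row):
        R = max((i + 0.5) * resolution, 1e-6)    # bin-centre range; avoids log10(0)
        out.append(v * 10 ** (tvg(R, n, a) / 20.0))
    return out

# a flat echo row gets progressively boosted with range, as a TVG should do
print([round(x, 2) for x in compensate([10, 10, 10, 10], resolution=10.0)])
```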
s3: feature points of the adjacent image segments obtained in S2 are extracted and described with the improved ORB algorithm, i.e. the BRIEF descriptor in the ORB algorithm is replaced with the BEBLID descriptor. Feature points are extracted with the FAST algorithm: taking any pixel p of the image as the circle center, the 16 surrounding pixels are automatically searched for gray values smaller or larger than the selected threshold, and if more than 8 such pixels are found, p is selected as a feature point; the specific steps are as follows:
s31, the characteristic point extraction method comprises the following steps:
a pixel point is selected from the image and its gray value is denoted I_p; if, among the 16 points on the circle around it, n consecutive points differ from it, the point is set as a feature point; the comparison operation T is defined as:
T(A, B) = 1 if |I_A − I_B| > t, 0 otherwise
where I_A is the gray value of point A and I_B is the gray value of point B; the extraction result is shown in fig. 4.
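The FAST-style ring test above can be sketched as follows. The radius-3 ring offsets and the contiguous-run criterion are the standard FAST formulation; the threshold t and arc length n used here are illustrative:

```python
# Hedged sketch of a FAST-style corner test: a pixel is a feature point if, on
# the 16-pixel circle around it, at least n consecutive pixels all differ from
# the centre by more than a threshold t (all brighter or all darker).

RING = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
        (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """img: 2-D list of gray values; (x, y): candidate pixel (column, row)."""
    Ip = img[y][x]
    # classify each ring pixel: +1 brighter, -1 darker, 0 similar
    marks = []
    for dx, dy in RING:
        v = img[y + dy][x + dx]
        marks.append(1 if v - Ip > t else (-1 if Ip - v > t else 0))
    # look for n consecutive equal non-zero marks (the ring wraps around)
    run, prev = 0, 0
    for m in marks + marks:
        run = run + 1 if (m != 0 and m == prev) else (1 if m != 0 else 0)
        prev = m
        if run >= n:
            return True
    return False

# synthetic 9x9 patch: a bright quadrant whose corner sits at (4, 4)
img = [[0] * 9 for _ in range(9)]
for yy in range(9):
    for xx in range(9):
        if xx <= 4 and yy <= 4:
            img[yy][xx] = 100
print(is_fast_corner(img, 4, 4))                 # → True
```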
The feature point description algorithm replaces ORB's default binary descriptor BRIEF with the BEBLID descriptor to re-describe the extracted feature points, specifically as follows:
let C = {(x_i, y_i, l_i)}_{i=1}^{N} be a training set of image-patch pairs with labels l_i ∈ {−1, 1}, where l_i = 1 indicates that the two patches correspond to the same salient image structure and l_i = −1 to different structures; the training loss function is:
L(γ) = Σ_{i=1}^{N} exp(−γ · l_i · Σ_{k=1}^{K} h_k(x_i)·h_k(y_i))
where h_k(z) = h_k(z; f, T) depends on the feature extraction function f: x → R and a threshold T. Given f and T, the descriptor bit is defined by thresholding f(x) with T:
h(x; f, T) = +1 if f(x) ≤ T, −1 otherwise
The BEBLID binary descriptor is obtained according to the above formula.
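A minimal sketch of the BEBLID-style weak learner h(x; f, T): the box-difference form of f follows the BEBLID construction, but the box positions and thresholds below are hand-picked for illustration rather than learned by minimising the boosting loss:

```python
# Hedged sketch of the BEBLID weak learner: f(x) is assumed to be the difference
# of the average gray values inside two square boxes of the patch, thresholded
# at T to give a +/-1 descriptor bit. In BEBLID these (box1, box2, s, T) tuples
# would be selected by boosting; here they are fixed by hand.

def box_mean(patch, x0, y0, s):
    """Average gray value of the s-by-s box with top-left corner (x0, y0)."""
    total = sum(patch[y][x] for y in range(y0, y0 + s) for x in range(x0, x0 + s))
    return total / (s * s)

def h(patch, box1, box2, s, T):
    """Weak learner: +1 if f(x) <= T else -1, with f = mean(box1) - mean(box2)."""
    f = box_mean(patch, *box1, s) - box_mean(patch, *box2, s)
    return 1 if f <= T else -1

def describe(patch, learners):
    """Concatenate K weak-learner outputs into a +/-1 descriptor vector."""
    return [h(patch, b1, b2, s, T) for (b1, b2, s, T) in learners]

# toy 6x6 patch: top half bright, bottom half dark
patch = [[200] * 6] * 3 + [[10] * 6] * 3
learners = [((0, 0), (0, 3), 3, 0.0),            # top box vs bottom box
            ((0, 3), (0, 0), 3, 0.0)]            # the reverse comparison
print(describe(patch, learners))                 # → [-1, 1]
```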
S4: coarse matching of the feature descriptors with the FLANN matching strategy; the strategy performs nearest-neighbor search on the high-dimensional features, mainly by rebuilding a hash over the binary descriptors and cluster-modeling them with a k-d tree to search for the nearest matching points. The specific steps are as follows:
dividing the highest dimensionality in the binary description subset into two parts, repeating the same process on the subset, and establishing a plurality of random k-d trees, wherein the neighbor search process is as follows:
(1) starting retrieval from the root node N;
(2) if N is a leaf node, the points contained in the node are added to the search result and count += |N|;
(3) if N is not a leaf node, comparing the child node with the query Q, finding out the nearest node Cq, and adding other nodes in the same level into a priority queue;
(4) carrying out recursive search on the Cq node;
(5) if the priority queue is not empty and count < L, the first element of the priority queue is assigned to N and the search returns to step (2).
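The tree build and priority-queue search (1)-(5) can be sketched, much simplified, as follows. This uses a single tree instead of FLANN's several randomized trees, a balanced-bit split as a stand-in for "the highest dimension", and a leaf budget playing the role of L; all three simplifications are assumptions:

```python
# Hedged sketch of k-d-tree + priority-queue nearest-neighbour search over
# binary descriptors under Hamming distance (FLANN-style best-bin-first).
import heapq

def hamming(a, b):
    return bin(a ^ b).count("1")

def build(points, bits, depth=0):
    if len(points) <= 2 or depth >= bits:            # small node -> leaf
        return ("leaf", points)
    # split on the bit whose 0/1 partition is most balanced
    bit = max(range(bits),
              key=lambda b: -abs(sum((p >> b) & 1 for p in points) * 2 - len(points)))
    zeros = [p for p in points if not (p >> bit) & 1]
    ones = [p for p in points if (p >> bit) & 1]
    if not zeros or not ones:
        return ("leaf", points)
    return ("node", bit, build(zeros, bits, depth + 1), build(ones, bits, depth + 1))

def nearest(tree, q, budget=16):
    best, best_d, checked, tie = None, float("inf"), 0, 0
    pq = [(0, tie, tree)]                            # (distance lower bound, tiebreak, node)
    while pq and checked < budget:                   # budget ~ the leaf limit L
        bound, _, node = heapq.heappop(pq)
        if bound >= best_d:
            continue
        while node[0] == "node":                     # descend toward the query's bin
            _, bit, zeros, ones = node
            near, far = (zeros, ones) if not (q >> bit) & 1 else (ones, zeros)
            tie += 1
            heapq.heappush(pq, (bound + 1, tie, far))  # far branch differs on >= 1 more bit
            node = near
        for p in node[1]:                            # leaf: scan candidates
            checked += 1
            d = hamming(p, q)
            if d < best_d:
                best, best_d = p, d
    return best, best_d

descs = [0b00001111, 0b11110000, 0b10101010, 0b00000000, 0b11111111]
tree = build(descs, bits=8)
print(nearest(tree, 0b00000111))                     # → (15, 1)
```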
S5: introducing latitude/longitude information and purifying the coarse matches obtained in S4 with the improved GMS algorithm, i.e. after the 20×20 grid of the GMS algorithm is divided, the selection is restricted to the overlapping area with the same latitude/longitude in the two images;
the specific process is as follows:
grid division is performed according to the size of the overlapping image segments that share the same latitude/longitude information; GMS defaults to G = 20×20 grid cells for 10000 feature points, and here the number of cells is divided dynamically according to the degree of overlap; each grid region is scored with the following formula:
S_i = Σ_{k=1}^{K} |X_{a_k b_k}|
where K denotes the number of disjoint region pairs, {a_k, b_k} is the k-th predicted region pair, and X_{a_k b_k} is the subset of matches located in region pair {a_k, b_k}; the distribution of the score S_i is as follows:
S_i ~ B(K·n, p_t) if the match is true; S_i ~ B(K·n, p_f) if the match is false (n being the average number of feature points per cell)
this distribution is binomial, and the corresponding means and standard deviations are calculated as:
m_t = K·n·p_t, s_t = sqrt(K·n·p_t·(1 − p_t)); m_f = K·n·p_f, s_f = sqrt(K·n·p_f·(1 − p_f))
the discriminability P between true and false matches, i.e. the difference of the means divided by the sum of the standard deviations, is calculated according to the above as:
P = (m_t − m_f) / (s_t + s_f)
the more matching points the image contains, the stronger the separability between correct and incorrect matches; mismatch elimination is applied to the pre-matched point pairs obtained in step 4 on the basis of this discriminability, and the matching result is shown in fig. 5.
Based on the image grid segmentation, the correctness of a match is judged from the number of other matching pairs present in the cells surrounding the region to be matched: the interpretation is that if other matching pairs exist in the neighborhood of a pair of matching points, the probability that the pair is correct is high.
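The grid statistics above can be put in numbers with a short sketch. The values of K, n, p_t and p_f below are illustrative, not taken from the patent:

```python
# Hedged numeric sketch of the grid-score statistics: with K cell-pairs of n
# features each, a match's support score S_i follows B(K*n, p_t) when the match
# is true and B(K*n, p_f) when it is false; the discriminability P is the
# difference of the means over the sum of the standard deviations.
import math

def gms_discriminability(K, n, p_t, p_f):
    m_t = K * n * p_t
    s_t = math.sqrt(K * n * p_t * (1 - p_t))
    m_f = K * n * p_f
    s_f = math.sqrt(K * n * p_f * (1 - p_f))
    return (m_t - m_f) / (s_t + s_f)

# more supporting features per cell (larger n) -> better true/false separation,
# which is the "more matching points, stronger separability" claim in the text
for n in (5, 25, 100):
    print(n, round(gms_discriminability(K=9, n=n, p_t=0.5, p_f=0.1), 2))
```

Note that P grows like sqrt(n), so denser feature sets make the statistical test sharper.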
S6: performing local fitting on the matched result by using a PROSAC algorithm, and calculating a homography matrix;
The homography matrix is calculated as follows: progressive consistent sampling is applied to the matching set, the quality of all point pairs is evaluated in a semi-random manner to compute Q values, the pairs are sorted in descending order of Q, model hypothesis and verification are performed only on the high-quality point pairs each time, and the point pairs are filtered to obtain the final homography matrix, i.e. the transformation matrix H. The method comprises the following steps:
S61: calculating the minimum Euclidean distance d_min of the feature points, the Euclidean distance ratio β and the quality factor γ;
S62: selecting the 4 groups of matching points with the highest quality sum to calculate the homography matrix H;
S63: taking the remaining matching points other than the 4 selected groups and calculating the corresponding projection points according to H;
S64: calculating the errors between the projection points and the matching points to distinguish inliers from outliers;
S65: repeating step S62 with the inliers, and returning the homography matrix once the maximum number of iterations is reached.
Progressive sample consensus adopts a semi-random method: the quality evaluations of all the points are sorted in descending order, and model hypothesis and verification are performed only on the high-quality point pairs.
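Steps S61–S65 can be sketched as a simplified progressive-sampling loop. This is an illustrative reduction, not the patented PROSAC implementation: the per-pair quality is passed in externally (rather than computed from d_min, β and γ), the sampling pool simply grows with the iteration count, and the homography hypothesis is estimated with a plain direct linear transform:

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate a homography from >= 4 point pairs via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    """Euclidean error between H-projected src points and dst points."""
    p = np.c_[src, np.ones(len(src))] @ H.T
    p = p[:, :2] / p[:, 2:3]
    return np.linalg.norm(p - dst, axis=1)

def prosac_like(src, dst, quality, iters=50, thresh=1.0, seed=0):
    """PROSAC-style loop: sort pairs by descending quality and draw
    each 4-point sample from a slowly growing top-quality pool."""
    rng = np.random.default_rng(seed)
    order = np.argsort(-quality)
    src, dst = src[order], dst[order]
    best_H, best_inliers = None, -1
    m = 4
    for it in range(iters):
        m = min(len(src), m + (it % 2))        # enlarge the pool slowly
        idx = rng.choice(m, size=4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        inliers = int((reprojection_error(H, src, dst) < thresh).sum())
        if inliers > best_inliers:             # keep the best hypothesis
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

Because sampling starts inside the highest-quality pairs, a good hypothesis is usually found in the first few iterations, which is the speed advantage PROSAC has over uniform RANSAC sampling.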
S7: and performing weighted average fusion on the two images according to the homography matrix to obtain a side-scan sonar spliced image. The process is as follows:
the homography matrix formula obtained from S65 is as follows:
H = sM[r_1  r_2  t]

wherein f_x, f_y, u_0, v_0 and γ denote the intrinsic parameters, s denotes a scale factor, r_1, r_2 and t denote the extrinsic parameters, and M is the intrinsic parameter matrix:

M = [ f_x  γ  u_0 ; 0  f_y  v_0 ; 0  0  1 ]
Affine transformation alignment is performed according to the homography matrix, including linear transformation and translation transformation; average weighted fusion is adopted at the image seams, and the side-scan sonar stitched image is finally obtained, as shown in fig. 6.
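The average-weighted fusion at the seam can be sketched with a linear weight ramp across the overlap. This is an illustrative sketch, not the patented implementation: it assumes the two images are already aligned so that the last/first `overlap` columns cover the same scene, and uses the full overlap as the ramp width:

```python
import numpy as np

def blend_seam(left, right, overlap):
    """Weighted-average fusion of two aligned grayscale strips whose
    last/first `overlap` columns overlap; the weight for the left image
    ramps linearly from 1 to 0 across the seam."""
    h = left.shape[0]
    out_w = left.shape[1] + right.shape[1] - overlap
    out = np.zeros((h, out_w), float)
    out[:, :left.shape[1] - overlap] = left[:, :-overlap]   # left-only part
    out[:, left.shape[1]:] = right[:, overlap:]             # right-only part
    w = np.linspace(1.0, 0.0, overlap)                      # left weight
    out[:, left.shape[1] - overlap:left.shape[1]] = (
        w * left[:, -overlap:] + (1.0 - w) * right[:, :overlap]
    )
    return out
```

The linear ramp makes the intensity transition continuous across the seam, which suppresses the visible stitching line in the mosaic.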
Example 2
An embodiment of the present invention further provides an electronic device, including: one or more processors; and
one or more machine-readable media having instructions stored thereon, which when executed by the one or more processors, cause the apparatus to perform the ORB-GMS algorithm-based side scan sonar image exact-match fast stitching method as described in embodiment 1 of the present invention above.
Example 3
The embodiment of the present invention further provides a computer-readable storage medium, in which a stored computer program enables a processor to execute the method for accurately matching and rapidly splicing the side scan sonar images based on the ORB-GMS algorithm according to the above embodiment 1 of the present invention.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A side scan sonar image accurate matching fast splicing method based on an ORB-GMS algorithm is characterized by comprising the following steps:
s1: acquiring a side scan sonar original data fragment in real time and analyzing the side scan sonar original data fragment into a picture;
s2: performing slant range correction, speed correction and gray gain on the original side-scan sonar image segment obtained in S1, so that the side-scan sonar image segment clearly presents the submarine geomorphic information;
s3: extracting and describing feature points of the adjacent image segments obtained in the S2 by using an improved ORB algorithm, namely replacing a BRIEF descriptor in the ORB algorithm with a BEBLID descriptor;
s4: carrying out coarse matching on the feature descriptors by using a FLANN matching strategy;
s5: introducing longitude and latitude information, and purifying the coarse matching result by using an improved GMS algorithm, namely, after the 20 × 20 grids in the GMS algorithm are divided, selection is performed using the overlapping area with the same longitude and latitude in the two images;
s6: performing local fitting on the matched result by using a PROSAC algorithm, and calculating a homography matrix;
s7: and performing weighted average fusion on the two images according to the homography matrix to obtain a side-scan sonar spliced image.
2. The ORB-GMS algorithm-based side-scan sonar image accurate matching fast stitching method according to claim 1,
in S2:
the slant range correction is as follows: the flat (ground) distance position is calculated, and the gray value at the slant range point is assigned to the corrected flat distance point; the position calculation formula is:

X_g = Width/2 ± √((X_s − Width/2)² − H²)

Y_g = Y_s

(the sign being taken according to whether the point lies on the port or starboard half of the image), wherein Width is the image width, (X_s, Y_s) is a point on the original image, (X_g, Y_g) is the corresponding point on the flat distance image after slant range correction, and H is the height of the side-scan sonar above the seabed;
the speed correction includes the following: longitudinally compressing or stretching the side-scan sonar image by utilizing the longitude and latitude time of each ping data record of the side-scan sonar and the proportion of each ping data acquisition period;
the gray gain includes the following: the mean value of the compensated sampling points in a transverse sliding window on the image is calculated by:

v̄(j) = (1/N) Σ_{i=1}^{N} v(i)

wherein N is the total number of pixel points in the window, v(i) is the gray value of each pixel point, j is the window serial number, and i is the pixel index.
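The two corrections of this claim can be sketched as follows. This is an illustrative reading, not the patented implementation: the nadir is assumed to lie at the image centre Width/2, and H is assumed to be in the same units as the column spacing (the patent's conventions may differ):

```python
import numpy as np

def slant_to_ground(xs, width, H):
    """Map a slant-range column index xs to its flat (ground) range column.
    Points inside the water column (slant distance <= H) collapse onto
    the nadir at width/2."""
    half = width / 2.0
    r = abs(xs - half)                       # slant distance from nadir
    if r <= H:
        return half
    g = np.sqrt(r * r - H * H)               # ground distance
    return half + np.sign(xs - half) * g     # keep port/starboard side

def window_gain(v, N):
    """Mean of each length-N transverse window: v_bar(j) = (1/N) * sum v(i)."""
    v = np.asarray(v, float)
    return v[: len(v) // N * N].reshape(-1, N).mean(axis=1)
```

In practice the slant mapping is evaluated for every output column and the gray value of the nearest slant-range sample is copied across, while `window_gain` supplies the per-window mean used for the gain compensation.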
3. The method for accurately matching and rapidly splicing the side-scan sonar images based on the ORB-GMS algorithm according to claim 1, wherein S3 replaces a BRIEF descriptor in the ORB algorithm with a BEBLID descriptor as follows:
s31, the characteristic point extraction method comprises the following steps:
a pixel point p is selected from the image and its gray value is denoted I_p; if n consecutive points among the 16 points on the circle around p differ from p, p is set as a feature point, the comparison operation T being defined as:

T(A, B) = { 1, if |I_A − I_B| > t; 0, otherwise }

wherein I_A represents the gray value of point A, I_B represents the gray value of point B, and t is the gray difference threshold;
s32, in the feature description stage, the feature points obtained in S31 are changed into BEBLID descriptors, and the details are as follows:
let

{(x_i, y_i), l_i}_{i=1}^{N}

be a training set consisting of pairs of image patches with labels l_i ∈ {−1, 1}, wherein l_i = 1 indicates that the two patches correspond to the same salient image structure and l_i = −1 indicates different salient image structures; the training loss function is:

L(γ, h) = Σ_{i=1}^{N} exp(−γ l_i Σ_{k=1}^{K} h_k(x_i) h_k(y_i))

wherein h_k(z) = h_k(z; f, T) depends on the feature extraction function f: x → R and a threshold T; given f and T, the descriptor component is defined by thresholding f(x) with T:

h(x; f, T) = { +1, if f(x) ≤ T; −1, otherwise }
the BEBLID binary descriptor is obtained according to the above formula.
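The thresholded weak learner h and a typical box-difference feature f can be sketched as follows. This is an illustrative sketch only: the test positions (p1, p2), box half-size s and thresholds T would come from the BEBLID boosting stage, which is omitted here, and the (cx, cy, s, T) values below are made up:

```python
import numpy as np

def box_mean(img, cx, cy, s):
    """Mean gray value of the (2s+1)x(2s+1) box centred at (cx, cy)."""
    return img[cy - s:cy + s + 1, cx - s:cx + s + 1].mean()

def f(img, p1, p2, s):
    """Feature: difference of the average gray values of two boxes."""
    return box_mean(img, *p1, s) - box_mean(img, *p2, s)

def h(img, p1, p2, s, T):
    """Weak learner h(x; f, T): +1 if f(x) <= T, else -1."""
    return 1 if f(img, p1, p2, s) <= T else -1

def beblid_describe(img, tests):
    """Binary descriptor: one thresholded box-difference test per bit,
    mapping {+1, -1} responses to {1, 0} bits."""
    return [(h(img, p1, p2, s, T) + 1) // 2 for (p1, p2, s, T) in tests]
```

Each bit of the descriptor is thus a single learned box comparison, which is why BEBLID matches the speed of BRIEF-style descriptors while being trained rather than hand-designed.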
4. The ORB-GMS algorithm-based side-scan sonar image accurate matching fast splicing method according to claim 1, which is characterized in that:
the FLANN fast nearest neighbor feature matching strategy in S4 is as follows:
reconstructing HASH for the binary descriptor, performing cluster modeling on the descriptor by adopting a KD tree, and searching a nearest matching point;
The specific steps are as follows: the binary descriptor set is split into two parts along its highest-variance dimension, the same process is repeated on each subset, and a plurality of random k-d trees are established, wherein the neighbor search process is as follows:
(1) start the retrieval from the root node N;
(2) if N is a leaf node, add the points it contains to the search result and set count += |N|;
(3) if N is not a leaf node, compare its child nodes with the query Q to find the nearest node C_q, and add the other nodes in the same level into a priority queue;
(4) search the node C_q recursively;
(5) if the priority queue is not empty and count < L, assign the first element of the priority queue to N and continue from (2).
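Steps (1)–(5) can be sketched as a best-bin-first search over a single tree on toy binary descriptors. This is a simplified illustration, not FLANN itself: one tree is built instead of several random ones, the split bit is chosen as the most evenly divided one, and siblings are queued with a constant penalty rather than a true distance bound:

```python
import heapq
from collections import namedtuple

Node = namedtuple("Node", "bit children")   # internal: split bit, two subtrees
Leaf = namedtuple("Leaf", "idx")            # leaf: list of descriptor indices

def build(descs, idx, depth=0, leaf_size=2):
    """Recursively split the index set on the most balanced bit."""
    if len(idx) <= leaf_size or depth >= len(descs[0]):
        return Leaf(idx)
    bit = max(range(len(descs[0])),
              key=lambda b: min(sum(descs[i][b] for i in idx),
                                len(idx) - sum(descs[i][b] for i in idx)))
    zeros = [i for i in idx if descs[i][bit] == 0]
    ones = [i for i in idx if descs[i][bit] == 1]
    if not zeros or not ones:
        return Leaf(idx)
    return Node(bit, (build(descs, zeros, depth + 1, leaf_size),
                      build(descs, ones, depth + 1, leaf_size)))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def search(root, descs, q, L):
    """Best-bin-first: descend to the nearest child, queue the sibling,
    and stop once at least L descriptors have been examined."""
    best, best_d, count = None, float("inf"), 0
    pq = [(0, 0, root)]                      # (penalty, tiebreak, node)
    tick = 0
    while pq and count < L:
        _, _, node = heapq.heappop(pq)
        while isinstance(node, Node):        # step (3): pick nearest child
            near = node.children[q[node.bit]]
            far = node.children[1 - q[node.bit]]
            tick += 1
            heapq.heappush(pq, (1, tick, far))
            node = near                      # step (4): recurse into C_q
        for i in node.idx:                   # step (2): scan the leaf
            d = hamming(descs[i], q)
            count += 1
            if d < best_d:
                best, best_d = i, d
    return best, best_d
```

With L at least the dataset size the search is exhaustive; smaller L trades accuracy for speed exactly as the count < L condition in step (5) describes.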
5. The ORB-GMS algorithm-based side-scan sonar image accurate matching fast splicing method according to claim 1, which is characterized in that:
the purification process in S5 is as follows:
grid division is performed according to the size of the overlapping image segments that share the same longitude and latitude information; GMS defaults to G = 20 × 20 grids for 10000 feature points, and here the grid number is divided dynamically according to the degree of overlap; each grid region is scored by:

S_i = Σ_{k=1}^{K} |X_{a_k b_k}|

wherein K denotes the number of disjoint regions, {a_k, b_k} is the predicted region pair, and X_{a_k b_k} is the subset of matches located on the region pair {a_k, b_k}; the score obeys the following distribution:

S_i ~ B(Kn, p_t) if the match is true, and S_i ~ B(Kn, p_f) if the match is false

wherein n is the average number of feature points per region; this distribution is binomial, and the corresponding means and standard deviations are:

m_t = Kn·p_t, s_t = √(Kn·p_t(1 − p_t)); m_f = Kn·p_f, s_f = √(Kn·p_f(1 − p_f))

the discriminative power P between true and false matches, i.e. the difference of the means divided by the sum of the standard deviations, is calculated as:

P = (m_t − m_f) / (s_t + s_f)

when the image contains more matching points, the distinguishability between correct and incorrect matches is stronger; mismatch elimination processing is performed on the pre-matched point pairs obtained in S4 based on this discriminative power.
6. The ORB-GMS algorithm-based side-scan sonar image accurate matching fast splicing method according to claim 1, which is characterized in that:
the PROSAC algorithm in S6 includes:
the semi-random method for asymptotically approximating the consistency of the samples performs descending sorting on the quality evaluation of all the points, and only performs model assumption and verification on the high-quality point pairs;
the homography matrix is calculated as follows: progressive consistent sampling is applied to the matching set, the quality of all point pairs is evaluated in a semi-random manner to compute Q values, the pairs are sorted in descending order of Q, model hypothesis and verification are performed only on the high-quality point pairs each time, and the point pairs are filtered to obtain the final homography matrix, i.e. the transformation matrix H;
the method comprises the following steps:
s61: calculating the minimum Euclidean distance d_min of the feature points, the Euclidean distance ratio β and the quality factor γ;
s62: selecting the 4 groups of matching points with the highest quality sum to calculate the homography matrix H;
s63: taking the remaining matching points other than the 4 selected groups and calculating the corresponding projection points according to H;
s64: calculating the errors between the projection points and the matching points to distinguish inliers from outliers;
s65: repeating step S62 with the inliers, and returning the homography matrix once the maximum number of iterations is reached.
7. The ORB-GMS algorithm-based side-scan sonar image accurate matching fast splicing method according to claim 6, characterized in that:
the weighted average fusion process of the homography matrix in the S7 on the two images is as follows:
the homography matrix formula obtained from S65 is as follows:
H = sM[r_1  r_2  t]

wherein f_x, f_y, u_0, v_0 and γ denote the intrinsic parameters, s denotes a scale factor, r_1, r_2 and t denote the extrinsic parameters, and M is the intrinsic parameter matrix:

M = [ f_x  γ  u_0 ; 0  f_y  v_0 ; 0  0  1 ]
affine transformation alignment is performed according to the homography matrix, including linear transformation and translation transformation; average weighted fusion is adopted at the image seams, and the side-scan sonar stitched image is finally obtained.
8. An electronic device, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the ORB-GMS algorithm-based side scan sonar image accurate matching fast splicing method according to any one of claims 1 to 7.
9. A computer-readable storage medium, characterized in that it stores a computer program to make a processor execute the ORB-GMS algorithm-based side scan sonar image exact match fast stitching method according to any one of claims 1 to 7.
CN202210348998.9A 2022-04-01 2022-04-01 Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium Pending CN114693524A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210348998.9A CN114693524A (en) 2022-04-01 2022-04-01 Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210348998.9A CN114693524A (en) 2022-04-01 2022-04-01 Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114693524A true CN114693524A (en) 2022-07-01

Family

ID=82141308

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210348998.9A Pending CN114693524A (en) 2022-04-01 2022-04-01 Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114693524A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205562A (en) * 2022-07-22 2022-10-18 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points
CN115205562B (en) * 2022-07-22 2023-03-14 四川云数赋智教育科技有限公司 Random test paper registration method based on feature points
CN117408879A (en) * 2023-10-26 2024-01-16 中国人民解放军32021部队 Side-scan sonar image stitching method and device
CN117408879B (en) * 2023-10-26 2024-05-10 中国人民解放军32021部队 Side-scan sonar image stitching method and device

Similar Documents

Publication Publication Date Title
CN110570433B (en) Image semantic segmentation model construction method and device based on generation countermeasure network
CN110490158B (en) Robust face alignment method based on multistage model
CN112633382B (en) Method and system for classifying few sample images based on mutual neighbor
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN108428220A (en) Satellite sequence remote sensing image sea island reef region automatic geometric correction method
CN112200163B (en) Underwater benthos detection method and system
CN116468995A (en) Sonar image classification method combining SLIC super-pixel and graph annotation meaning network
CN110598581B (en) Optical music score recognition method based on convolutional neural network
CN114693524A (en) Side-scan sonar image accurate matching and fast splicing method, equipment and storage medium
CN108921872B (en) Robust visual target tracking method suitable for long-range tracking
CN107748885B (en) Method for recognizing fuzzy character
CN116486408B (en) Cross-domain semantic segmentation method and device for remote sensing image
CN117274627A (en) Multi-temporal snow remote sensing image matching method and system based on image conversion
CN110378167B (en) Bar code image correction method based on deep learning
CN116703932A (en) CBAM-HRNet model wheat spike grain segmentation and counting method based on convolution attention mechanism
CN114782822A (en) Method and device for detecting state of power equipment, electronic equipment and storage medium
CN114821098A (en) High-speed pavement damage detection algorithm based on gray gradient fusion characteristics and CNN
CN114820987A (en) Three-dimensional reconstruction method and system based on multi-view image sequence
CN112132835A (en) SeFa and artificial intelligence-based jelly effect analysis method for photovoltaic track camera
Yu et al. A lightweight ship detection method in optical remote sensing image under cloud interference
Zhou et al. Design identification of curve patterns on cultural heritage objects: combining template matching and CNN-based re-ranking
CN112418262A (en) Vehicle re-identification method, client and system
CN112949385A (en) Water surface target detection and identification method based on optical vision
CN112200850A (en) ORB extraction method based on mature characteristic points
CN114882292B (en) Remote sensing image ocean target identification method based on cross-sample attention mechanism graph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination