CN112184783A - Three-dimensional point cloud registration method combined with image information

Info

Publication number
CN112184783A
CN112184783A (application CN202011006715.XA)
Authority
CN
China
Prior art keywords
point
dimensional
point cloud
source
target
Prior art date
Legal status: Pending (assumed status; not a legal conclusion)
Application number
CN202011006715.XA
Other languages
Chinese (zh)
Inventor
梁晋
邬宏
陆旺
陈仁虹
张继耀
赫景彬
冯超
刘世凡
Current Assignee
Xi'an Jiaotong University
Original Assignee
Xi'an Jiaotong University
Priority date: 2020-09-22
Filing date: 2020-09-22
Publication date: 2021-01-05
Application filed by Xi'an Jiaotong University
Priority to CN202011006715.XA
Publication of CN112184783A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a three-dimensional point cloud registration method combining image information, which comprises the following steps: reading a source three-dimensional point cloud and its corresponding source two-dimensional image, and a target three-dimensional point cloud and its corresponding target two-dimensional image; cropping the source and target two-dimensional images; detecting feature points in the cropped source and target two-dimensional images and describing them with feature descriptors; performing fuzzy matching on the feature points by brute-force matching to obtain fuzzy matching feature point pairs; screening the fuzzy matching feature point pairs to obtain best matching feature point pairs; screening the best matching feature point pairs to obtain valid feature point pairs; selecting the valid feature point pairs by random sample consensus to obtain robust valid feature point pairs; and mapping the robust valid feature point pairs to the source and target three-dimensional point clouds to obtain three-dimensional point pairs, solving the rotation-translation matrix between the source and target three-dimensional point clouds from these point pairs, and completing registration of the source and target three-dimensional point clouds according to the rotation-translation matrix.

Description

Three-dimensional point cloud registration method combined with image information
Technical Field
The disclosure belongs to the technical fields of computer vision, image processing, and three-dimensional measurement, and particularly relates to a high-precision point cloud registration method for a three-dimensional scanning system.
Background
As a current research hotspot, three-dimensional reconstruction has been widely applied in fields such as medical technology, cultural relic restoration, human-computer interaction, 3D animation, and immersive motion-sensing games. Within the reconstruction process, registration of three-dimensional point cloud data has always been both a research focus and a difficulty. Limited by the field of view of the three-dimensional scanning system, the shape of the scanned object, the scanning range, occlusion, and similar factors, a real object is usually scanned from multiple viewpoints. Point cloud registration transforms the point cloud data acquired from different viewpoints into a unified coordinate system so that the point clouds are aligned, yielding a three-dimensional model closer to the real object.
Point cloud registration comprises coarse registration and fine registration. Coarse registration quickly estimates an approximate point cloud registration matrix when the original relative positions of the source and target point clouds are completely unknown; fine registration refines the initial transformation matrix from the coarse stage, for example with the iterative closest point (ICP) algorithm, to obtain a more accurate solution. Commonly used point cloud registration methods require marker points to be pasted on the surface of the measured object, and coarse registration between two adjacent point clouds is achieved through the shared markers. However, pasting marker points is time-consuming and labor-intensive, imposes significant limitations, and makes it difficult to meet the registration requirements of different occasions. Registering point clouds without auxiliary means is therefore a difficult problem that urgently needs solving.
Owing to their convenience and wide applicability, unassisted registration methods have become the focus of research on automatic point cloud registration. Existing methods without auxiliary means either register with two-dimensional image feature points, such as the Speeded Up Robust Features (SURF) of images used by Lin et al., or extract three-dimensional feature points from the point clouds themselves, such as Fast Point Feature Histograms (FPFH) and local three-dimensional feature points. Methods based on two-dimensional images alone or on geometric features alone suffer from low robustness and low precision, while methods based on three-dimensional feature points must search point by point through the whole cloud, which makes them inefficient. Providing a robust, efficient, and high-precision point cloud registration method is therefore an urgent problem.
Disclosure of Invention
Aiming at the above defects in the prior art, the present disclosure combines image and point cloud information to provide a three-dimensional point cloud registration method with high robustness, high efficiency, and high precision; the method is not only computationally efficient but can also meet the registration requirements of different occasions.
In order to achieve the above purpose, the present disclosure provides the following technical solutions:
A three-dimensional point cloud registration method combining image information comprises the following steps:
S100: reading a source three-dimensional point cloud and its corresponding source two-dimensional image, and a target three-dimensional point cloud and its corresponding target two-dimensional image;
S200: cropping the source two-dimensional image and the target two-dimensional image;
S300: detecting feature points in the cropped source and target two-dimensional images with a keypoint detection operator, and describing the detected feature points with an image feature descriptor;
S400: performing fuzzy matching on the described feature points by brute-force matching to obtain fuzzy matching feature point pairs;
S500: screening the fuzzy matching feature point pairs to obtain best matching feature point pairs;
S600: screening the best matching feature point pairs to obtain valid feature point pairs;
S700: selecting the valid feature point pairs by random sample consensus (RANSAC) to obtain robust valid feature point pairs;
S800: mapping the robust valid feature point pairs to the source and target three-dimensional point clouds to obtain three-dimensional point pairs, solving the rotation-translation matrix between the source and target three-dimensional point clouds from these point pairs, and completing registration of the source and target three-dimensional point clouds according to the rotation-translation matrix.
Preferably, in step S200, the source and target two-dimensional images are cropped as follows: the two-dimensional coordinates of each point's corresponding point on the two-dimensional image are obtained from the point's three-dimensional coordinates, and the two-dimensional image is cropped according to the distribution range of the whole point cloud's projected coordinates on the image.
Preferably, step S400 comprises the following steps:
S401: comparing all feature points in the source two-dimensional image with all feature points in the target two-dimensional image one by one;
S402: matching the compared feature points according to the similarity measure.
Preferably, in step S402, the feature points are matched by computing the Hamming distance between each feature point in the source two-dimensional image and all feature points in the target two-dimensional image; for each source feature point, the distance to its closest target feature point is recorded as dist1, and that pair of feature points is regarded as a fuzzy matching feature point pair.
Preferably, step S500 comprises the following steps:
S501: for each feature point in the source two-dimensional image, retaining its second-closest feature point in the target two-dimensional image and recording the distance between them as dist2;
S502: computing the ratio of dist1 to dist2 and comparing it with the nearest neighbor distance ratio, i.e.,

$$\frac{dist_1}{dist_2} < \text{nearest neighbor distance ratio}$$

When the ratio of dist1 to dist2 is smaller than the nearest neighbor distance ratio, the feature point pair is a best matching feature point pair; otherwise, it is not.
Preferably, in step S600, the best matching feature point pairs are screened as follows: if a best matching feature point pair has corresponding three-dimensional points in both the source and target three-dimensional point clouds, it is a valid feature point pair; otherwise, it is an invalid feature point pair.
Preferably, step S700 comprises the following steps:
S701: randomly sampling four valid feature point pairs to estimate an initial basic matrix, using the initial basic matrix to verify and judge inliers and outliers over the whole point cloud, recording the inlier count as N_MAX, and taking this group's inlier data set as data set A;
S702: executing step S701 again to obtain a second basic matrix; when the inlier count N judged with the second basic matrix is greater than N_MAX, assigning N to N_MAX and updating data set A with this group's inlier set; otherwise, not updating;
S703: repeatedly executing step S701 and stopping when the execution count n exceeds the initialized iteration count K, i.e. n > K, where K is determined from N_valid, the number of valid feature point pairs (the defining expression appears only as an image in the original document); the valid feature point pairs in data set A are taken as the robust valid feature point pairs.
Preferably, in step S701, inliers and outliers are verified and judged as follows: each point of the source point cloud is transformed by the basic matrix, and the point closest to the transformed point is searched in the target point cloud; if the distance is smaller than a threshold, the point is an inlier; otherwise, it is an outlier.
Preferably, in step S300, the keypoint detection operator comprises any one of: HARRIS, FAST, BRISK, ORB, SIFT, SURF; the image feature descriptor comprises any one of: BRIEF, ORB, FREAK, BRISK, SIFT, SURF.
Preferably, in step S800, the rotation-translation matrix between the source and target three-dimensional point clouds is solved by singular value decomposition.
Compared with the prior art, the present disclosure brings the following beneficial effects:
1. Cropping the two-dimensional images narrows the search range, reduces the interference that background feature points may cause during feature point matching on the actual object, and improves matching efficiency and precision;
2. After brute-force matching, fuzzy matches are eliminated with the nearest neighbor distance ratio; only those pixel point pairs of the source and target images for which corresponding points can be found in the three-dimensional point clouds are retained as valid feature point pairs; and the valid feature point pairs are further screened by random sample consensus to obtain more robust valid feature point pairs. These three processes greatly improve the precision of point cloud registration;
3. The requirements of different occasions can be met without marker points or other auxiliary equipment.
Drawings
Fig. 1 is a flowchart of a three-dimensional point cloud registration method in combination with image information according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a relationship between three-dimensional point coordinates and pixel point coordinates on a two-dimensional image according to another embodiment of the present disclosure;
fig. 3 is a schematic flowchart of determining valid pairs of feature points according to another embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating a comparison between before and after cropping a two-dimensional image according to another embodiment of the present disclosure;
FIG. 5 is a diagram illustrating an effect of detecting feature points of a clipped source image and a clipped target image by using a BRISK key point detector according to another embodiment of the present disclosure;
FIG. 6 is a diagram illustrating the effect of brute force matching of feature points on a source image and a target image according to another embodiment of the present disclosure;
FIG. 7 is a diagram illustrating the effect of eliminating fuzzy matching by using nearest neighbor distance ratio method according to another embodiment of the present disclosure;
fig. 8 is an effect diagram provided by another embodiment of the present disclosure, which can find valid pairs of feature points of a corresponding three-dimensional point cloud at the same time;
FIG. 9 is a flow chart of inlier dataset acquisition provided by another embodiment of the present disclosure;
FIG. 10 is a graph of the effect of pairs of robust feature points picked by random sampling consistency according to another embodiment of the present disclosure;
fig. 11 is a comparison graph of a source point cloud and a target point cloud before and after registration according to another embodiment of the disclosure.
Detailed Description
Specific embodiments of the present disclosure will be described in detail below with reference to fig. 1 to 11. While specific embodiments of the disclosure are shown in the drawings, it should be understood that the disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that certain terms are used throughout the description and claims to refer to particular components. As one skilled in the art will appreciate, various names may be used to refer to the same component; this specification and the claims do not distinguish between components that differ in name but not in function. In the following description and claims, the terms "include" and "comprise" are used in an open-ended fashion and should be interpreted to mean "including, but not limited to". The description that follows covers preferred embodiments of the disclosure, but it is made for the purpose of illustrating the general principles of the disclosure and not to limit its scope. The scope of the present disclosure is determined by the appended claims.
To facilitate an understanding of the embodiments of the present disclosure, the following detailed description is to be considered in conjunction with the accompanying drawings, and the drawings are not to be construed as limiting the embodiments of the present disclosure.
In one embodiment, as shown in Fig. 1, the present disclosure provides a three-dimensional point cloud registration method combining image information, comprising the following steps:
S100: reading a source three-dimensional point cloud and its corresponding source two-dimensional image, and a target three-dimensional point cloud and its corresponding target two-dimensional image;
S200: cropping the source and target two-dimensional images according to the coordinate distribution of the source three-dimensional point cloud on the source two-dimensional image and of the target three-dimensional point cloud on the target two-dimensional image;
S300: detecting feature points in the cropped source and target two-dimensional images with a keypoint detection operator, and describing the feature points with an image feature descriptor;
S400: performing fuzzy matching on the described feature points of the source and target two-dimensional images by brute-force matching to obtain fuzzy matching feature point pairs;
S500: screening the fuzzy matching feature point pairs to obtain best matching feature point pairs;
S600: screening the best matching feature point pairs to obtain valid feature point pairs;
S700: selecting the valid feature point pairs by random sample consensus to obtain robust valid feature point pairs;
S800: mapping the robust valid feature point pairs to the source and target three-dimensional point clouds to obtain three-dimensional point pairs, solving the rotation-translation matrix between the source and target three-dimensional point clouds from these point pairs, and completing registration of the source and target three-dimensional point clouds according to the rotation-translation matrix.
Compared with the prior art, cropping the two-dimensional images narrows the search range and reduces the interference that background feature points may cause during feature point matching on the actual object, improving matching efficiency and precision. In addition, fuzzy matches produced by brute-force matching are eliminated with the nearest neighbor distance ratio; only those pixel point pairs of the source and target images for which corresponding points can be found in the three-dimensional point clouds are retained as valid feature point pairs; and the valid feature point pairs are further screened by random sample consensus to obtain more robust valid feature point pairs, which greatly improves the precision of point cloud registration. Moreover, the method meets the requirements of different occasions without marker points or other auxiliary equipment.
In another embodiment, the source and target two-dimensional images are cropped as follows: the two-dimensional coordinates of each point's corresponding point on the two-dimensional image are obtained from the point's three-dimensional coordinates, and the two-dimensional image is cropped according to the distribution range of the whole point cloud's projected coordinates on the image.
In this embodiment, according to the three-dimensional measurement principle shown in Fig. 2, the pixel coordinates on the two-dimensional image and the spatial coordinates of the corresponding three-dimensional point satisfy

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}$$

where s denotes depth, (u, v) are the pixel coordinates, M1 is the camera intrinsic parameter matrix, M2 is the camera extrinsic parameter matrix, and (Xw, Yw, Zw) are the spatial coordinates of the three-dimensional point. Every three-dimensional point acquired by the three-dimensional scanning system therefore has a unique corresponding point in the two-dimensional image. The two-dimensional coordinates of the corresponding points can be obtained from the three-dimensional coordinates of the points in the cloud, and the two-dimensional image can then be cropped according to the distribution range of the whole point cloud's projected coordinates. Cropping removes as much of the image background as possible, narrows the feature point detection range, and avoids interference from the background during feature point matching.
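The following is a minimal sketch of this projection-and-crop step, assuming a standard pinhole model with known intrinsic matrix K and extrinsic rotation R and translation t; the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def crop_to_point_cloud(image, points_xyz, K, R, t, margin=10):
    """Project the point cloud into the image and crop to its bounding box."""
    cam = R @ points_xyz.T + t.reshape(3, 1)     # 3 x N points in camera frame
    uv_h = K @ cam                               # homogeneous pixel coordinates
    uv = (uv_h[:2] / uv_h[2]).T                  # divide by depth s -> N x 2
    h, w = image.shape[:2]
    u0 = max(int(uv[:, 0].min()) - margin, 0)    # bounding box of projections
    u1 = min(int(uv[:, 0].max()) + margin, w)
    v0 = max(int(uv[:, 1].min()) - margin, 0)
    v1 = min(int(uv[:, 1].max()) + margin, h)
    return image[v0:v1, u0:u1], (u0, v0)         # cropped image and its offset
```

Returning the offset (u0, v0) allows feature point coordinates detected in the cropped image to be mapped back to the original image later.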
Illustratively, the source two-dimensional image is cropped; a comparison before and after cropping is shown in Fig. 4. The image size changes from 2448 × 2048 to 560 × 1320 pixels, reducing the feature point detection area by a factor of nearly seven, which directly reduces the time for subsequent feature point detection and the complexity of the subsequent steps.
The image sizes before and after cropping and the running times of the related steps were reported in a table that appears only as an image in the original document; its values are not recoverable from the text.
In another embodiment, the keypoint detection operator comprises any one of: HARRIS, FAST, BRISK, ORB, SIFT, SURF; the image feature descriptor comprises any one of: BRIEF, ORB, FREAK, BRISK, SIFT, SURF.
In this embodiment, the detection and description operators can be combined and selected arbitrarily, but verification on actual data shows that BRISK detection combined with BRISK description yields the best registration result for the three-dimensional point clouds. This embodiment therefore preferably describes the detected feature points with the BRISK feature descriptor, which is based on binary comparisons of pixel pairs: sampling points are taken around each feature point so that the sampling points plus the feature point itself total N points, and combining the N points two by two gives

$$\binom{N}{2} = \frac{N(N-1)}{2}$$

point pairs. Considering the 512 short-distance point pairs in the short-distance subset S, the point pairs are binary-encoded as follows:

$$b = \begin{cases} 1, & I(p_j^{\alpha}, \sigma_j) > I(p_i^{\alpha}, \sigma_i) \\ 0, & \text{otherwise} \end{cases} \quad \forall (p_i^{\alpha}, p_j^{\alpha}) \in S$$

where b denotes the encoded value, S the short-distance subset, (p_i^α, p_j^α) an arbitrary point pair in S (the indices i and j denote different scales, with j < i), I(p^α, σ) the gray value of the sampling point p after rotation by angle α at scale σ.
The resulting 512-bit binary code serves as the BRISK feature descriptor. The effect of detecting feature points in the cropped source and target images with the BRISK keypoint detector is shown in Fig. 5 (the small white dots in the figure are the feature points).
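A minimal sketch of this detection and description step with OpenCV's BRISK implementation follows; the file names are illustrative.

```python
import cv2

# BRISK keypoint detector and descriptor, the combination reported above
brisk = cv2.BRISK_create()

src_gray = cv2.imread("source_cropped.png", cv2.IMREAD_GRAYSCALE)
tgt_gray = cv2.imread("target_cropped.png", cv2.IMREAD_GRAYSCALE)

# Each descriptor row is 64 bytes, i.e. the 512-bit binary code described above
kp_src, des_src = brisk.detectAndCompute(src_gray, None)
kp_tgt, des_tgt = brisk.detectAndCompute(tgt_gray, None)
```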
In another embodiment, step S400 specifically comprises the following steps:
S401: comparing all feature points in the source two-dimensional image with all feature points in the target two-dimensional image one by one;
S402: matching the compared feature points according to the similarity measure.
In this embodiment, suppose the source two-dimensional image has M keypoints with their associated descriptors and the target two-dimensional image has N keypoints with theirs. All features of the two images are compared with each other, requiring M × N comparisons, and the feature points are then assigned to each other according to the similarity measure. That is, for each feature point detected in the source two-dimensional image, the Hamming distance to every feature point in the target two-dimensional image is computed; the pair with the smallest Hamming distance is regarded as a fuzzy matching feature point pair, and its distance is denoted dist1. The effect of brute-force matching is shown in Fig. 6.
Computation of the Hamming distance: for a BRISK binary descriptor consisting only of 1s and 0s, the Hamming distance is computed with an XOR between the two vectors, which returns 0 where two bits are equal and 1 otherwise; the sum of all XOR results is the number of differing bits between the two descriptors.
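A minimal sketch of brute-force matching under the Hamming metric follows, continuing from the descriptors computed above; for binary descriptors, OpenCV's BFMatcher evaluates exactly the XOR bit count just described.

```python
import cv2
import numpy as np

def hamming(d1, d2):
    # XOR the two byte vectors, then count the differing bits
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=False)
# One nearest target descriptor per source descriptor; m.distance is dist1
fuzzy_matches = bf.match(des_src, des_tgt)
```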
Some of the fuzzy matching feature points in Fig. 6 are selected and illustrated in Table 1:
TABLE 1
[Table 1 appears only as an image in the original document; its values are not recoverable from the text.]
Table 1 shows that the fuzzy matching feature points obtained after brute-force matching contain many errors; subsequent screening is required to obtain valid point pairs usable for point cloud registration.
In another embodiment, step S500 specifically comprises the following steps:
S501: for each feature point in the source two-dimensional image, retaining its second-closest feature point in the target two-dimensional image and recording the distance between them as dist2;
S502: computing the ratio of dist1 to dist2 and comparing it with the nearest neighbor distance ratio, i.e.,

$$\frac{dist_1}{dist_2} < \text{nearest neighbor distance ratio}$$

where dist1 is the Hamming distance between a source image feature point and its nearest neighbor feature point in the target image, and dist2 is the Hamming distance between the same source feature point and its second-nearest neighbor in the target image. When the ratio is smaller than the nearest neighbor distance ratio, the feature point pair is a best matching feature point pair; otherwise, it is not.
In this embodiment, brute-force matching always returns a matched feature point pair for every source feature point, which inevitably produces many false matches. These fuzzy matches are eliminated by applying the nearest neighbor distance ratio test to each feature point as above.
In practice, setting the nearest neighbor distance ratio (minDescDistRatio) to 0.8 has been shown to yield the best matches.
The effect of eliminating fuzzy matches with the nearest neighbor distance ratio is shown in Fig. 7.
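A minimal sketch of the ratio test follows, using knnMatch to obtain the nearest (dist1) and second-nearest (dist2) target descriptor for each source descriptor.

```python
min_desc_dist_ratio = 0.8          # the value reported above
best_matches = []
for pair in bf.knnMatch(des_src, des_tgt, k=2):
    if len(pair) < 2:              # fewer than two candidates: cannot test
        continue
    m, n = pair                    # m.distance = dist1, n.distance = dist2
    if n.distance > 0 and m.distance / n.distance < min_desc_dist_ratio:
        best_matches.append(m)
```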
After the fuzzy matching feature point pairs of Table 1 are screened, the resulting best matching feature point pairs are shown in Table 2:
TABLE 2
[Table 2 appears only as an image in the original document; its values are not recoverable from the text.]
As Table 2 shows, the screening deletes the obviously mismatched feature point pairs of Table 1, and the matching quality of the retained feature point pairs improves.
In another embodiment, the best matching feature point pairs are screened as follows: if a best matching feature point pair has corresponding three-dimensional points in both the source and target three-dimensional point clouds, it is a valid feature point pair; otherwise, it is an invalid feature point pair.
In this embodiment, the matched pixel feature point pairs on the two-dimensional images are mapped to the three-dimensional point clouds; in practice, however, some pixels cannot be reconstructed into three-dimensional points during reconstruction. If one or both points of a matched feature point pair have no corresponding three-dimensional point, the pair is an invalid feature point pair (the process of selecting valid feature point pairs is shown in Fig. 3). A large number of invalid pairs would affect the result of the subsequent algorithm (selecting robust feature point pairs by random sample consensus). Therefore, when converting pixel point pairs into three-dimensional point pairs, only those pairs whose source image pixel and target image pixel both have corresponding points in the three-dimensional point clouds are retained as valid feature point pairs. The matching effect of the valid two-dimensional feature point pairs is shown in Fig. 8.
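A minimal sketch of this screening step follows, assuming a helper pixel_to_3d() that returns the reconstructed three-dimensional point for a pixel, or None where reconstruction failed; this helper and its index arguments are hypothetical, since the patent only requires that both pixels of a pair have corresponding three-dimensional points.

```python
valid_pairs = []
for m in best_matches:
    # pixel_to_3d is a hypothetical lookup from pixel to reconstructed 3D point
    p_src = pixel_to_3d(kp_src[m.queryIdx].pt, source_cloud_index)
    p_tgt = pixel_to_3d(kp_tgt[m.trainIdx].pt, target_cloud_index)
    if p_src is not None and p_tgt is not None:   # both 3D points must exist
        valid_pairs.append((p_src, p_tgt))
```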
After the best matching feature point pairs of Table 2 are further screened, the resulting valid feature point pairs are shown in Table 3:
TABLE 3
[Table 3 appears only as an image in the original document; its values are not recoverable from the text.]
As Table 3 shows, the further screening deletes the poorly matched feature point pairs of Table 2, and the matching quality of the retained feature point pairs improves further.
In another embodiment, step S700 specifically comprises the following steps:
S701: randomly sampling four valid feature point pairs to estimate an initial basic matrix, using the initial basic matrix to verify and judge inliers and outliers over the whole point cloud, recording the inlier count as N_MAX, and taking this group's inlier data set as data set A;
S702: executing step S701 again to obtain a second basic matrix; when the inlier count N judged with the second basic matrix is greater than N_MAX, assigning N to N_MAX and updating data set A with this group's inlier set; otherwise, not updating;
S703: repeatedly executing step S701 and stopping when the execution count n exceeds the initialized iteration count K, i.e. n > K, where K is determined from N_valid, the number of valid feature point pairs (the defining expression appears only as an image in the original document); the valid feature point pairs in data set A are taken as the robust valid feature point pairs.
In this embodiment, the specific process of selecting valid feature point pairs by random sample consensus is as follows: initialize the iteration count K; randomly select four pixel point pairs to estimate a basic matrix; use the basic matrix to verify and judge the inliers (correct matching point pairs) and outliers (false matching point pairs) over the whole point cloud; and record the first inlier count N_max and its inlier data set. While the iteration count n is smaller than K, repeat the above process and judge whether the new inlier count N exceeds N_max; if N > N_max, update the inlier count and the corresponding inlier data set, otherwise do not update; the process stops once n > K. Since a larger inlier count means the basic matrix estimated from the four pixel point pairs is more likely to be correct, the valid feature point pairs in the final inlier data set are taken as the robust valid feature point pairs. The inlier data set acquisition process is shown in Fig. 9, and the matching effect of the robust valid feature point pairs in Fig. 10.
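A minimal RANSAC sketch following steps S701 to S703 is given below. The exact form of the patent's basic matrix and the formula for K appear only as images in the original, so this sketch substitutes a rigid transform estimated from the four sampled pairs (the SVD construction discussed in the next section) and a fixed iteration count; the inlier test here compares each transformed source point with its paired target point rather than searching the whole target cloud, which is a simplification.

```python
import numpy as np

def solve_rigid_svd(P, Q):
    """Least-squares rigid transform (R, T) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def ransac_pairs(valid_pairs, K=1000, inlier_thresh=2.0):
    P = np.array([p for p, _ in valid_pairs])    # source 3D points
    Q = np.array([q for _, q in valid_pairs])    # target 3D points
    best = np.zeros(len(P), dtype=bool)          # inlier data set A
    rng = np.random.default_rng(0)
    for _ in range(K):                           # stop once n > K
        idx = rng.choice(len(P), size=4, replace=False)  # four sampled pairs
        R, T = solve_rigid_svd(P[idx], Q[idx])
        dist = np.linalg.norm(P @ R.T + T - Q, axis=1)
        inliers = dist < inlier_thresh           # distance-threshold inlier test
        if inliers.sum() > best.sum():           # keep the largest inlier set
            best = inliers
    return [pair for pair, ok in zip(valid_pairs, best) if ok]
```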
After the valid feature point pairs of Table 3 are selected, the resulting robust valid feature point pairs are shown in Table 4:
TABLE 4
[Table 4 appears only as an image in the original document; its values are not recoverable from the text.]
As Table 4 shows, the selection further optimizes the feature point pairs of Table 3, yielding feature point pairs that can be used for three-dimensional point cloud registration.
In another embodiment, in step S701, inliers and outliers are judged as follows: each point of the source point cloud is transformed by the basic matrix, and the point closest to the transformed point is searched in the target point cloud; if the distance is smaller than a threshold, the point is an inlier; otherwise, it is an outlier.
In another embodiment, the rotation-translation matrix between the source and target three-dimensional point clouds is solved by singular value decomposition.
In this embodiment, the robust valid feature point pairs are mapped to the source and target three-dimensional point clouds to obtain three-dimensional point pairs. Illustratively, the three-dimensional point pairs corresponding to the robust valid feature point pairs of Table 4 are shown in Table 5:
TABLE 5
[Table 5 appears only as an image in the original document; its values are not recoverable from the text.]
From the three-dimensional point pairs shown in Table 5, the rotation-translation matrix between the source and target three-dimensional point clouds can be solved by singular value decomposition (SVD). The problem can be described as follows: suppose P = {p_1, p_2, ..., p_n} and Q = {q_1, q_2, ..., q_n} are two sets of corresponding points in three-dimensional space; the rigid transformation matrix between the two sets is computed by converting the problem into the least squares optimization

$$\min_{R, T} \sum_{i=1}^{n} \left\| q_i - (R p_i + T) \right\|^2$$

whose rotation matrix R and translation vector T can be obtained by singular value decomposition (SVD).
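For completeness, the standard closed-form SVD solution of this least squares problem (the Kabsch construction, which the patent does not spell out) proceeds as follows:

$$\bar{p} = \frac{1}{n}\sum_{i=1}^{n} p_i, \qquad \bar{q} = \frac{1}{n}\sum_{i=1}^{n} q_i, \qquad H = \sum_{i=1}^{n} (p_i - \bar{p})(q_i - \bar{q})^{\top}$$

$$H = U \Sigma V^{\top}, \qquad R = V \,\mathrm{diag}\!\left(1, 1, \det(V U^{\top})\right) U^{\top}, \qquad T = \bar{q} - R\,\bar{p}$$

The diagonal correction term guards against R being a reflection rather than a proper rotation.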
The obtained rotation-translation matrix is applied to the source three-dimensional point cloud, transforming it to the position of the target three-dimensional point cloud and thereby completing the registration of the source and target three-dimensional point clouds.
The rotation-translation matrix obtained by singular value decomposition (SVD) is used as the initial three-dimensional rigid transformation for coarse registration, transforming the source point cloud toward the target point cloud to obtain its initial position; fine registration of the source point cloud to the target point cloud is then performed with the iterative closest point (ICP) algorithm from that initial position, completing the point cloud registration. A comparison of the source and target point clouds before and after registration is shown in Fig. 11; the overlapping regions of the two differently colored clouds blend uniformly after registration, indicating a good registration result.
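A minimal sketch of this coarse-to-fine stage with Open3D follows, assuming R and T from the SVD step above; the file names and the correspondence distance are illustrative and must be tuned to the scan scale.

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("source.ply")    # illustrative file names
target = o3d.io.read_point_cloud("target.ply")

init = np.eye(4)                                  # coarse registration from SVD
init[:3, :3], init[:3, 3] = R, T

# Point-to-point ICP refinement starting from the coarse transform
result = o3d.pipelines.registration.registration_icp(
    source, target, 2.0, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

source.transform(result.transformation)           # fine-registered source cloud
```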
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the foregoing disclosure is not intended to be exhaustive or to limit the disclosure to the precise details disclosed.

Claims (10)

1. A three-dimensional point cloud registration method combining image information, comprising the following steps:
S100: reading a source three-dimensional point cloud and its corresponding source two-dimensional image, and a target three-dimensional point cloud and its corresponding target two-dimensional image;
S200: cropping the source two-dimensional image and the target two-dimensional image;
S300: detecting feature points in the cropped source and target two-dimensional images with a keypoint detection operator, and describing the detected feature points with an image feature descriptor;
S400: performing fuzzy matching on the described feature points by brute-force matching to obtain fuzzy matching feature point pairs;
S500: screening the fuzzy matching feature point pairs to obtain best matching feature point pairs;
S600: screening the best matching feature point pairs to obtain valid feature point pairs;
S700: selecting the valid feature point pairs by random sample consensus to obtain robust valid feature point pairs;
S800: mapping the robust valid feature point pairs to the source and target three-dimensional point clouds to obtain three-dimensional point pairs, solving the rotation-translation matrix between the source and target three-dimensional point clouds from these point pairs, and completing registration of the source and target three-dimensional point clouds according to the rotation-translation matrix.
2. The method according to claim 1, wherein in step S200 the source and target two-dimensional images are preferably cropped as follows: the two-dimensional coordinates of each point's corresponding point on the two-dimensional image are obtained from the point's three-dimensional coordinates, and the two-dimensional image is cropped according to the distribution range of the whole point cloud's projected coordinates on the image.
3. The method of claim 1, wherein step S400 comprises the following steps:
S401: comparing all feature points in the source two-dimensional image with all feature points in the target two-dimensional image one by one;
S402: matching the compared feature points according to the similarity measure.
4. The method according to claim 3, wherein in step S402 the feature points are matched by computing the Hamming distance between each feature point in the source two-dimensional image and all feature points in the target two-dimensional image; for each source feature point, the distance to its closest target feature point is recorded as dist1, and that pair of feature points is regarded as a fuzzy matching feature point pair.
5. The method of claim 4, wherein step S500 comprises the following steps:
S501: for each feature point in the source two-dimensional image, retaining its second-closest feature point in the target two-dimensional image and recording the distance between them as dist2;
S502: computing the ratio of dist1 to dist2 and comparing it with the nearest neighbor distance ratio, i.e.,

$$\frac{dist_1}{dist_2} < \text{nearest neighbor distance ratio}$$

wherein when the ratio of dist1 to dist2 is smaller than the nearest neighbor distance ratio, the feature point pair is a best matching feature point pair; otherwise, it is not.
6. The method according to claim 1, wherein in step S600 the best matching feature point pairs are screened as follows: if a best matching feature point pair has corresponding three-dimensional points in both the source and target three-dimensional point clouds, it is a valid feature point pair; otherwise, it is an invalid feature point pair.
7. The method of claim 1, wherein step S700 comprises the following steps:
S701: randomly sampling four valid feature point pairs to estimate an initial basic matrix, using the initial basic matrix to verify and judge inliers and outliers over the whole point cloud, recording the inlier count as N_MAX, and taking this group's inlier data set as data set A;
S702: executing step S701 again to obtain a second basic matrix; when the inlier count N judged with the second basic matrix is greater than N_MAX, assigning N to N_MAX and updating data set A with this group's inlier set; otherwise, not updating;
S703: repeatedly executing step S701 and stopping when the execution count n exceeds the initialized iteration count K, i.e. n > K, where K is determined from N_valid, the number of valid feature point pairs (the defining expression appears only as an image in the original document); the valid feature point pairs in data set A are taken as the robust valid feature point pairs.
8. The method according to claim 7, wherein in step S701 inliers and outliers are verified and judged as follows: each point of the source point cloud is transformed by the basic matrix, and the point closest to the transformed point is searched in the target point cloud; if the distance is smaller than a threshold, the point is an inlier; otherwise, it is an outlier.
9. The method of claim 1, wherein in step S300 the keypoint detection operator comprises any one of: HARRIS, FAST, BRISK, ORB, SIFT, SURF; and the image feature descriptor comprises any one of: BRIEF, ORB, FREAK, BRISK, SIFT, SURF.
10. The method of claim 1, wherein in step S800 the rotation-translation matrix between the source and target three-dimensional point clouds is solved by singular value decomposition.
CN202011006715.XA, priority 2020-09-22, filed 2020-09-22: Three-dimensional point cloud registration method combined with image information. Status: Pending. Publication: CN112184783A.

Priority Applications (1)

Application Number: CN202011006715.XA; Priority Date: 2020-09-22; Filing Date: 2020-09-22; Title: Three-dimensional point cloud registration method combined with image information

Publications (1)

Publication Number: CN112184783A; Publication Date: 2021-01-05

Family ID: 73956578

Family Applications (1): CN202011006715.XA (pending)

Country Status (1): CN (China)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851094A (en) * 2015-05-14 2015-08-19 西安电子科技大学 Improved method of RGB-D-based SLAM algorithm
CN105469388A (en) * 2015-11-16 2016-04-06 集美大学 Building point cloud registration algorithm based on dimension reduction
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN107220995A (en) * 2017-04-21 2017-09-29 西安交通大学 A kind of improved method of the quick point cloud registration algorithms of ICP based on ORB characteristics of image
CN109523501A (en) * 2018-04-28 2019-03-26 江苏理工学院 One kind being based on dimensionality reduction and the matched battery open defect detection method of point cloud data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
张晓 (Zhang Xiao): "基于图像的点云初始配准" [Image-based initial registration of point clouds] *
朱德海 (Zhu Dehai): 《点云库PCL学习教程》 [Point Cloud Library (PCL) Learning Tutorial], Beihang University Press, 31 October 2012 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113793370A (en) * 2021-01-13 2021-12-14 北京京东叁佰陆拾度电子商务有限公司 Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN113793370B (en) * 2021-01-13 2024-04-19 北京京东叁佰陆拾度电子商务有限公司 Three-dimensional point cloud registration method and device, electronic equipment and readable medium
CN112734862A (en) * 2021-02-10 2021-04-30 北京华捷艾米科技有限公司 Depth image processing method and device, computer readable medium and equipment
WO2022267287A1 (en) * 2021-06-25 2022-12-29 浙江商汤科技开发有限公司 Image registration method and related apparatus, and device and storage medium
CN114359353A (en) * 2021-12-03 2022-04-15 北京理工大学 Image registration method and system
CN115100258A (en) * 2022-08-29 2022-09-23 杭州三坛医疗科技有限公司 Hip joint image registration method, device, equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210105)