CN109961463B - View registration method, system, device and storage medium based on dual quaternion - Google Patents

Publication number
CN109961463B
CN109961463B · Application CN201711341011.6A
Authority
CN
China
Prior art keywords
point cloud
cloud data
frame
registration processing
registration
Prior art date
Legal status
Active
Application number
CN201711341011.6A
Other languages
Chinese (zh)
Other versions
CN109961463A (en
Inventor
陈桂芳
Current Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Original Assignee
Beijing Jingdong Century Trading Co Ltd
Beijing Jingdong Shangke Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Century Trading Co Ltd, Beijing Jingdong Shangke Information Technology Co Ltd filed Critical Beijing Jingdong Century Trading Co Ltd
Priority to CN201711341011.6A
Publication of CN109961463A
Application granted
Publication of CN109961463B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/10028: Range image; Depth image; 3D point clouds

Abstract

The invention discloses a view registration method, system, device and storage medium based on dual quaternions, wherein the view registration method comprises the following steps: acquiring multi-frame point cloud data, and selecting the first frame of point cloud in the multi-frame point cloud data as the world coordinate system; registering every two adjacent frames with the ICP (Iterative Closest Point) algorithm to obtain the registered point cloud data of each frame and a registration processing result; then acquiring a rigid transformation with a dual quaternion blending method; acquiring new multi-frame point cloud data; acquiring new multi-frame point cloud data and a registration processing result with the ICP algorithm; and judging whether the registration processing result converges, repeating until it converges. The invention solves the problem in the prior art that, when only the ICP algorithm is used, the data may fail to converge during registration, causing registration failure; and by fusing a dual quaternion blending method with the ICP algorithm, the registration error generated in the pairwise registration of multiple views is reduced, thereby improving registration accuracy.

Description

View registration method, system, device and storage medium based on dual quaternion
Technical Field
The invention relates to the technical field of three-dimensional data reconstruction, and in particular to a view registration method, system, device and storage medium based on dual quaternions.
Background
The essence of three-dimensional environment reconstruction is to fuse and align environment information from different viewing angles so as to restore the entire real three-dimensional scene. Three-dimensional reconstruction technology is widely applied in emerging fields such as autonomous robot navigation, unmanned driving and unmanned aerial vehicles.
The key techniques used in the reconstruction of three-dimensional environments vary greatly depending on the type of sensor used. In the prior art, two reconstruction techniques are generally used:
(1) Feature-based environment modeling: this method uses projective geometric transformations to restore the approximate structure of a three-dimensional scene. The reconstructed 3D (Three-Dimensional) structure is represented by sparse features, but the resulting point cloud is insufficient in number and density, so the reconstructed scene model is incomplete and an intuitive picture of the three-dimensional real scene cannot be obtained.
(2) Dense-point-cloud-based environment reconstruction: in the point cloud reconstruction process, a three-dimensional sensor such as a Kinect (3D somatosensory camera) or a 3D radar is generally used. The sensor directly acquires the depth information of the scene, and information such as changes in the robot's pose is obtained from the correspondence of point cloud data between frames, thereby realizing data fusion. The core algorithm in the inter-frame point cloud registration process is the ICP (Iterative Closest Point) algorithm; this frame-to-frame registration and alignment method is prone to error accumulation and to falling into local minima, so the data may fail to converge, which affects the registration accuracy of three-dimensional scene reconstruction.
Disclosure of Invention
The invention aims to solve the technical problems in the prior art that three-dimensional environment reconstruction based on dense point clouds using only the ICP (Iterative Closest Point) algorithm easily causes registration failure and low registration accuracy, and aims to provide a view registration method, system, device and storage medium based on dual quaternions.
The invention solves the technical problems through the following technical scheme:
the invention provides a multi-view registration method based on dual quaternion, which comprises the following steps:
acquiring multi-frame point cloud data, and selecting a first frame point cloud in the multi-frame point cloud data as a world coordinate system;
carrying out registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data by adopting the ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after the registration processing and a first registration processing result;
judging whether the first registration processing result is converged, and if not, acquiring rigid transformation from each frame of point cloud to a world coordinate system;
performing coordinate transformation on the multi-frame point cloud data subjected to the registration processing according to the rigid transformation to obtain the multi-frame point cloud data subjected to the coordinate transformation;
carrying out registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data after coordinate transformation by adopting the ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after registration processing and a second registration processing result;
and judging whether the second registration processing result is converged, and returning to the step of obtaining rigid transformation from each frame of point cloud to the world coordinate system when the second registration processing result is judged to be converged.
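Assuming 4x4 homogeneous matrices for all transforms, the alternation described in the steps above (pairwise ICP followed by a blend step that re-expresses every frame in the world coordinate system) can be sketched as a control loop. Here `icp_pairwise` and `blend_to_world` are hypothetical caller-supplied routines, not the patent's concrete implementations:

```python
import numpy as np

def apply_rigid(T, pts):
    """Apply a 4x4 homogeneous rigid transform to an (N, 3) point array."""
    return pts @ T[:3, :3].T + T[:3, 3]

def multiview_register(clouds, icp_pairwise, blend_to_world,
                       max_outer_iters=50, tol=1e-6):
    """Outer loop of the method: alternate pairwise ICP between adjacent
    frames with a blend step that pulls every frame toward the world
    coordinate system (frame 0), until the pairwise residuals converge."""
    clouds = [np.asarray(c, dtype=float) for c in clouds]
    errs = []
    for _ in range(max_outer_iters):
        rel, errs = [], []
        for i in range(len(clouds) - 1):
            # T maps frame i+1 into frame i; e is the residual error.
            T, e = icp_pairwise(clouds[i], clouds[i + 1])
            rel.append(T)
            errs.append(e)
        if max(errs) < tol:                 # converged: stop iterating
            break
        world = blend_to_world(rel)         # per-frame poses, frame k -> world
        clouds = [apply_rigid(W, c) for W, c in zip(world, clouds)]
    return clouds, errs
```

This only fixes the control flow; the quality of the result depends entirely on the two supplied routines.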
Preferably, the step of obtaining the rigid transformation from each frame of point cloud to the world coordinate system specifically includes:
recursion is carried out to obtain rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data according to the relative coordinate transformation between each two adjacent frames of point cloud data corresponding to the currently obtained first registration processing result;
and processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system.
Preferably, the step of obtaining the rigid transformation from each frame of point cloud to the world coordinate system by processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by using a dual-quaternion mixing method specifically includes:
and processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion iterative mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system.
Preferably, the step of obtaining the rigid transformation from each frame of point cloud to the world coordinate system by processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by using a dual-quaternion mixing method specifically includes:
and processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual quaternion linear mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system.
Preferably, the step of performing coordinate transformation on the multi-frame point cloud data after the registration processing according to the rigid transformation to obtain the multi-frame point cloud data after the coordinate transformation specifically includes:
and multiplying the multi-frame point cloud data after the registration processing by the rigid transformation to obtain the multi-frame point cloud data after the coordinate transformation.
Preferably, in the step of judging whether the first registration processing result converges and/or the step of judging whether the second registration processing result converges, the final registration processing result of the multi-frame point cloud data is output when the judgment is yes.
Preferably, the registration process specifically includes:
collecting any two adjacent frames of point cloud data by taking the first frame of point cloud as a starting point, and respectively taking the two adjacent frames of point cloud data as a source point cloud and a target point cloud;
acquiring interframe corresponding points of the source point cloud and the target point cloud;
the interframe corresponding points are used for representing the positions of the same object in different frames;
constructing an ICP error function from the inter-frame corresponding points, and obtaining the relative coordinate transformation from the target point cloud to the source point cloud at which the error function is minimized;
carrying out coordinate transformation on the target point cloud according to the corresponding relative coordinate transformation to obtain a new target point cloud, obtaining corresponding new multi-frame point cloud data, and calculating an error value between the new target point cloud and the source point cloud;
and the error value is used for representing the registration processing result of the two adjacent frames of point cloud data.
Preferably, the step of determining whether the first registration processing result converges and/or the step of determining whether the second registration processing result converges specifically includes:
and judging whether the error value is smaller than a first set threshold or whether the iteration number is larger than a second set threshold.
Preferably, the acquisition mode of the corresponding point between frames includes at least one of point-to-point, point-to-projection and point-to-surface.
The invention also provides a multi-view registration system based on dual quaternion, which comprises a first point cloud data acquisition module, a selection unit, a first registration processing module, a first judgment module, a rigid transformation acquisition module, a second point cloud data acquisition module, a second registration processing module and a second judgment module;
the first point cloud data acquisition module is used for acquiring multi-frame point cloud data;
the selecting unit is used for selecting a first frame of point cloud in the multi-frame of point cloud data as a world coordinate system;
the first registration processing module adopts an ICP algorithm to perform registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data, and obtains the multi-frame point cloud data after the registration processing and a first registration processing result;
the first judging module is used for judging whether the first registration processing result obtained by the first registration processing module is converged or not, and if not, the rigid transformation obtaining module is called;
the rigid transformation acquisition module is used for acquiring rigid transformation from each frame of point cloud to a world coordinate system;
the second point cloud data acquisition module is used for carrying out coordinate transformation on the multi-frame point cloud data after registration processing according to the rigid transformation acquired by the second rigid transformation acquisition unit to acquire the multi-frame point cloud data after coordinate transformation;
the second registration processing module is used for performing registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data after coordinate transformation by adopting the ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after registration processing and a second registration processing result;
the second judging module is configured to judge whether the second registration processing result obtained by the second registration processing module converges, and if not, invoke the second rigid transformation obtaining unit until the second registration processing result in the second registration processing module converges.
Preferably, the rigid transformation obtaining module includes a first rigid transformation obtaining unit and a second rigid transformation obtaining unit;
the first rigid transformation obtaining unit is used for recursively obtaining rigid transformation of each frame of point cloud data and a plurality of adjacent frames of point cloud data according to relative coordinate transformation between each two adjacent frames of point cloud data corresponding to the currently obtained first registration processing result;
the second rigid transformation obtaining unit is used for processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion mixing method, and obtaining the rigid transformation from each frame of point cloud to a world coordinate system.
Preferably, the second rigid transformation obtaining unit is configured to process, using a dual quaternion iterative mixing method, the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data, so as to obtain the rigid transformation from each frame of point cloud to the world coordinate system.
Preferably, the second rigid transformation obtaining unit is configured to process, using a dual quaternion linear mixing method, the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data, so as to obtain the rigid transformation from each frame of point cloud to the world coordinate system.
Preferably, the second point cloud data obtaining module is configured to multiply the multi-frame point cloud data after the registration processing by the rigid transformation in the second rigid transformation obtaining unit to obtain the multi-frame point cloud data after the coordinate transformation.
Preferably, the multi-view registration system comprises an output module;
when the first judgment module and/or the second judgment module judge to be yes, the output module is called;
the output module is used for outputting the final registration processing result of the multi-frame point cloud data.
Preferably, the first registration processing module and/or the second registration processing module includes a point cloud data acquisition unit, an inter-frame corresponding point acquisition unit, a rigid transformation acquisition unit, a target point cloud acquisition unit and an error calculation unit;
the point cloud data acquisition unit is used for acquiring any two adjacent frames of point cloud data by taking the first frame of point cloud as a starting point, and respectively taking the two adjacent frames of point cloud data as a source point cloud and a target point cloud;
the inter-frame corresponding point acquisition unit is used for acquiring inter-frame corresponding points of the source point cloud and the target point cloud;
the interframe corresponding points are used for representing the positions of the same object in different frames;
the rigid transformation obtaining unit is used for obtaining an ICP error function according to the corresponding points between frames to obtain the corresponding relative coordinate transformation from the target point cloud to the source point cloud when the minimization is achieved;
the target point cloud obtaining unit is used for carrying out coordinate transformation on the target point cloud according to the corresponding relative coordinate transformation to obtain a new target point cloud and obtain corresponding new multi-frame point cloud data;
the error calculation unit is used for calculating an error value between the new target point cloud and the source point cloud;
and the error value is used for representing the registration processing result of the two adjacent frames of point cloud data.
Preferably, the first judging module and/or the second judging module is configured to judge whether the error value is smaller than a first set threshold or whether the number of iterations is greater than a second set threshold.
Preferably, the acquisition mode of the corresponding point between frames includes at least one of point-to-point, point-to-projection and point-to-surface.
The invention also provides a device for multi-view registration based on dual quaternion, which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the computer program to realize the multi-view registration method based on dual quaternion.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when being executed by a processor, carries out the steps of the dual quaternion-based multi-view registration method described above.
The positive progress effects of the invention are as follows:
By fusing the ICP algorithm with dual quaternions, the invention solves the problem in the prior art that registration using only the ICP algorithm easily falls into a local minimum, so that the data cannot converge and registration fails; and by fusing a dual quaternion blending method with the ICP algorithm, the registration error generated in the pairwise registration of multiple views is reduced, improving registration accuracy.
Drawings
Fig. 1 is a flowchart of a dual quaternion-based multi-view registration method of embodiment 1 of the present invention;
FIG. 2 is a flowchart of a dual quaternion blending method of example 1 of the present invention;
FIG. 3 is a flowchart of a dual quaternion based multi-view registration method of embodiment 2 of the present invention;
fig. 4 is a block schematic diagram of a dual quaternion-based multi-view registration system of embodiment 3 of the present invention;
fig. 5 is a schematic structural diagram of a dual-quaternion-based multi-view registration system according to embodiment 4 of the present invention.
Detailed Description
The invention is further illustrated by the following examples, which are not intended to limit the scope of the invention.
Example 1
As shown in fig. 1, the dual-quaternion-based multi-view registration method of the present embodiment includes:
s101, obtaining multi-frame point cloud data, and selecting a first frame point cloud in the multi-frame point cloud data as a world coordinate system;
s102, carrying out registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data by adopting the ICP (Iterative Closest Point) algorithm, and obtaining the multi-frame point cloud data after the registration processing and a registration processing result; wherein the registration processing result is a first registration processing result;
step S102 specifically includes:
s1021, collecting any two adjacent frames of point cloud data by taking the first frame of point cloud as a starting point, and respectively taking the two adjacent frames of point cloud data as a source point cloud and a target point cloud;
In the iterative process of the ICP algorithm, when the source point cloud and the target point cloud are selected, each frame of point cloud contains too much data, and processes such as corresponding-point search and matching consume a large amount of time. Therefore feature point extraction is generally adopted, and feature points such as NARF, SURF and SIFT (NARF, SURF and SIFT are each feature extraction methods) are taken as key points for matching and alignment.
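As a simpler illustration of the same data-reduction idea (a stand-in, not the NARF/SURF/SIFT feature detectors named above), a voxel-grid downsample that thins each frame before correspondence search can be sketched with NumPy:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel.
    A simple stand-in for keypoint-based thinning: it cuts the per-frame
    data volume before the correspondence search that dominates ICP cost."""
    pts = np.asarray(points, dtype=float)
    # Integer voxel index of every point.
    idx = np.floor(pts / voxel_size).astype(np.int64)
    # Group points by voxel and average each group.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = np.asarray(inverse).ravel()   # robust to NumPy version quirks
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, pts)
    return sums / counts[:, None]
```

Unlike keypoint extraction, this keeps a uniform spatial density rather than the most distinctive points, but the effect on ICP runtime is similar.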
S1022, acquiring interframe corresponding points of the source point cloud and the target point cloud;
the interframe corresponding points are used for representing the positions of the same object in different frames; the acquisition mode of the corresponding point between the frames comprises at least one of point-to-point, point-to-projection and point-to-surface.
For example, preferably, the inter-frame corresponding points of the source point cloud and the target point cloud are obtained with a point-to-surface search method, which reduces the large number of wrong correspondences matched by a point-to-point search and converges with much greater accuracy than a point-to-projection search. Meanwhile, surface geometric features (such as curvature or normal vectors) are used to remove erroneous corresponding points, and the rigid transformation minimizing the ICP error function is obtained from the remaining candidate points.
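A minimal sketch of the normal-based rejection of wrong matches described above, assuming unit-length normals are precomputed and using a brute-force nearest-neighbour search for clarity (the threshold name and value are illustrative, not from the patent):

```python
import numpy as np

def correspondences_with_normal_filter(source, target,
                                       source_normals, target_normals,
                                       max_normal_angle_deg=30.0):
    """Closest-point correspondences from source to target, discarding
    pairs whose surface normals disagree by more than a threshold.
    Returns (kept source indices, matching target indices)."""
    source = np.asarray(source, dtype=float)
    target = np.asarray(target, dtype=float)
    # Pairwise squared distances, shape (n_source, n_target).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
    j = d2.argmin(axis=1)                    # closest target per source point
    cos_thresh = np.cos(np.radians(max_normal_angle_deg))
    dots = np.sum(np.asarray(source_normals) * np.asarray(target_normals)[j],
                  axis=1)
    keep = dots >= cos_thresh                # normals must roughly agree
    src_idx = np.nonzero(keep)[0]
    return src_idx, j[keep]
```

In practice the brute-force search would be replaced by a spatial index (e.g. a k-d tree), but the rejection logic is unchanged.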
S1023, constructing an ICP error function from the inter-frame corresponding points, and obtaining the relative coordinate transformation from the target point cloud to the source point cloud at which the error function is minimized;
Specifically, the relative coordinate transformation is obtained as follows:
For two frames of three-dimensional point sets M and D representing the same scene or object, where M is the model set and D is the data set, the best-aligning rigid-body transformation (R, t), with R a three-dimensional rotation matrix and t a translation vector, is obtained by minimizing the error function

E(R, t) = \sum_{i=1}^{N_m} \sum_{j=1}^{N_d} w_{i,j} \, \| m_i - (R d_j + t) \|^2

where d_j denotes a point in the data set D, m_i denotes a point in the model set M, and w_{i,j} is a weight taking the value 0 or 1: w_{i,j} = 1 when the i-th point of the M point set and the j-th point of the D point set are the same point in space, and w_{i,j} = 0 otherwise. Representing each associated point pair in tuple form (m_i, d_i), that is, (a) associating the point sets, then (b) minimizing the error function over the associated point pairs for pose estimation, the formula above becomes

E(R, t) = \frac{1}{N} \sum_{i=1}^{N} \| m_i - (R d_i + t) \|^2

The main closed-form solution methods for this error function include Singular Value Decomposition (SVD), Orthogonal Matrices (OM) and Unit Quaternions (UQ).
S1024, performing coordinate transformation on the target point cloud according to the corresponding relative coordinate transformation to obtain a new target point cloud, obtaining corresponding new multi-frame point cloud data, and calculating an error value between the new target point cloud and the source point cloud;
and the error value is used for representing the registration processing result of the two adjacent frames of point cloud data.
S103, judging whether the registration processing result in the step S1024 is converged, and if not, continuing to the step S104; if yes, go to step S108;
If any one of the pairwise registration processing results has not converged, the overall result is considered unconverged; and as long as the registration processing result has not converged, all point cloud data must be further registered with the dual quaternion mixing method.
The step S103 of determining whether the registration processing result converges specifically includes:
and judging whether the error value is smaller than a first set threshold or whether the iteration number is larger than a second set threshold.
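A minimal sketch of this dual stopping criterion (the parameter names and default values are assumptions, not from the patent):

```python
def registration_converged(error, n_iters, err_threshold=1e-4, max_iters=50):
    """Stop when the residual error drops below the first set threshold,
    or when the iteration count exceeds the second set threshold."""
    return error < err_threshold or n_iters > max_iters
```

The iteration cap guarantees termination even when the error plateaus above the threshold, which is exactly the failure mode the dual quaternion step is meant to correct.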
S104, obtaining rigid transformation from each frame of point cloud to a world coordinate system, and specifically comprising the following steps:
s1041, recurrently acquiring rigid transformation of each frame of point cloud data and a plurality of adjacent frames of point cloud data according to the relative coordinate transformation between each two adjacent frames of point cloud data corresponding to the currently acquired registration processing result;
s1042, processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion iterative mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system;
Here, the several adjacent frames of point cloud data denote the frames immediately before and after each frame of point cloud data.
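The recursion of step S1041, chaining the pairwise relative transforms into a transform from each frame to the world frame (frame 0), can be sketched as follows, assuming 4x4 homogeneous matrices:

```python
import numpy as np

def chain_to_world(relative):
    """Given relative[i] = 4x4 transform mapping frame i+1 coordinates
    into frame i coordinates, accumulate V[k] = transform from frame k
    to frame 0 (the world frame)."""
    V = [np.eye(4)]                  # frame 0 is the world frame itself
    for T in relative:
        V.append(V[-1] @ T)          # V_{k+1} = V_k · T_{k,k+1}
    return V
```

Chaining like this accumulates pairwise error along the sequence, which is the drift the subsequent dual quaternion blending is intended to spread out and reduce.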
S105, performing coordinate transformation on the multi-frame point cloud data after the registration processing according to the rigid transformation in the step S1042 to obtain the multi-frame point cloud data after the coordinate transformation;
specifically, step S105 specifically includes:
and multiplying the multi-frame point cloud data after the registration processing by the rigid transformation in the step S1042 to obtain the multi-frame point cloud data after the coordinate transformation.
S106, carrying out registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data after coordinate transformation by adopting the ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after registration processing and a registration processing result; wherein the registration processing result is a second registration processing result;
The specific steps for obtaining the corresponding registration processing result with the ICP algorithm in step S106 are the same as in step S102, and are not repeated here.
S107, judging whether the registration processing result of step S106 has converged; if not, returning to step S104 and repeating until the registration processing result of step S106 converges, then continuing to step S108.
And S108, outputting a final registration processing result of the multi-frame point cloud data.
When the point cloud data of each frame are overlapped, the registration problem of the point cloud data of multiple frames is converted into the registration processing problem after rigid transformation diffusion between the images of each frame.
Suppose ViRepresenting the transformation from a frame of point cloud i coordinate system to a global coordinate system, VjRepresenting the transformation from a frame of point cloud picture j coordinate system to a global coordinate system, TijRepresenting the coordinate transformation from the ith frame point cloud to the jth frame point cloud, and if the point pair registration is in a noise-free state, the adjacent views i and j can obtain Tji*Vi=VjI.e. multi-frame registration, may be understood as seeking a rigid transformation from each view to a common reference frame. Pose V of point cloud picture iiAnd the calculated pose Tij*VjThe error between the two is minimized, the multi-frame registration is satisfied, and the error function represents the following formula:
D=argmin∑ij∈N(i)d(TijVj,Vi)
where n (i) represents all views adjacent to the point cloud i. Can be converted into a spiral motion by any rigid transformation, i.e. rotate an angle around any spiral shaft in three-dimensional spaceTranslated a distance along this axis. Defining the helical distance
Figure BDA0001508368360000101
Wherein t represents a translation distance along the screw axis, α represents a rotation angle along the screw axis, γpRepresenting the distance of point p from the helical axis, the error function is expressed as:
DSC=argmin∑ij∈N(i)p∈SdSC(TijVj,Vi,p)2
For any p ∈ R³, when T_ij·V_j = V_i, the screw distance d_SC(T_ij·V_j, V_i, p) takes its minimum value.
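The screw decomposition above can be checked numerically. The following sketch (illustrative values; `rodrigues` and `screw_apply` are helper names, not from the patent) builds a screw motion with angle α, axial translation t and a point at distance γ_p from the axis, and verifies that the displacement of the point decomposes exactly into the axial translation and the in-plane chord 2·γ_p·sin(α/2), which for small α reduces to sqrt(t² + α²·γ_p²):

```python
import numpy as np

def rodrigues(u, alpha):
    """Rotation matrix for angle alpha about unit axis u (Rodrigues' formula)."""
    u = u / np.linalg.norm(u)
    K = np.array([[0.0, -u[2], u[1]],
                  [u[2], 0.0, -u[0]],
                  [-u[1], u[0], 0.0]])
    return np.eye(3) + np.sin(alpha) * K + (1.0 - np.cos(alpha)) * (K @ K)

def screw_apply(p, c, u, alpha, t):
    """Screw motion: rotate p by alpha about the axis through c with direction u,
    then translate by t along that axis."""
    u = u / np.linalg.norm(u)
    return c + rodrigues(u, alpha) @ (p - c) + t * u

c = np.array([1.0, -2.0, 0.5])   # point on the screw axis
u = np.array([0.0, 0.0, 1.0])    # axis direction
alpha, t = 0.3, 0.7              # rotation angle, axial translation
p = np.array([3.0, 1.0, 2.0])

v = p - c
gamma_p = np.linalg.norm(v - (v @ u) * u)   # distance of p from the screw axis

# Exact orthogonal decomposition of the displacement: axial translation t plus
# in-plane chord 2*gamma_p*sin(alpha/2); for small alpha the chord is about
# gamma_p*alpha, giving d_SC ~ sqrt(t**2 + alpha**2 * gamma_p**2).
disp = np.linalg.norm(screw_apply(p, c, u, alpha, t) - p)
exact = np.sqrt(t**2 + (2.0 * gamma_p * np.sin(alpha / 2.0))**2)
```

The exact chord form and the small-angle form agree to a few thousandths here (α = 0.3 rad), which is the regime the approximation in the next paragraph relies on.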
When the angle change between the view angles of the respective frames is not large, d_SC(T_ij·V_j, V_i, p) ≈ ||T_ij·V_j·p − V_i·p||; therefore, the registration of the multiple views can be achieved by solving V_i = SCAVG(±T_ij·V_j). The screw distance mean is obtained by solving with the dual quaternion iterative mixing (DIB) method using the same weight values, and the dual quaternion iterative mixing procedure is as follows:
b̂ ← DLB(w; q̂_1, …, q̂_n)

repeat x̂ ← Σ_i w_i·log(b̂⁻¹·q̂_i), b̂ ← b̂·exp(x̂)

According to the above formulas, when ||x̂|| < p is satisfied, the output b̂ = DIB(w; q̂_1, …, q̂_n) is obtained, wherein q̂_1, …, q̂_n denote dual quaternions, w = (w_1, w_2, …, w_n) represents the weight values, and p represents the desired precision. When the same weight value is taken for every weight, the output b̂ is called the screw distance mean SCAVG(q̂_1, …, q̂_n) of q̂_1, …, q̂_n.
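As a hedged illustration of the iterative mixing idea, the following sketch applies the same fixed-point iteration b ← b·exp(Σ w_i·log(b⁻¹·q_i)) to ordinary unit quaternions, i.e., the rotation part only; the full DIB method operates on dual quaternions, which additionally carry the translation. Function names are illustrative:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qlog(q):
    """Log of a unit quaternion -> rotation vector (half-angle * axis)."""
    v = q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return np.arctan2(n, q[0]) * v / n

def qexp(v):
    """Exp of a rotation vector -> unit quaternion."""
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate([[np.cos(n)], np.sin(n) * v / n])

def iterative_blend(quats, weights, eps=1e-10):
    """Fixed-point iteration b <- b * exp(sum_i w_i log(b^-1 q_i))."""
    b = quats[0].copy()
    for _ in range(100):
        x = sum(w * qlog(qmul(qconj(b), q)) for q, w in zip(quats, weights))
        b = qmul(b, qexp(x))
        if np.linalg.norm(x) < eps:
            break
    return b

q_id = np.array([1.0, 0.0, 0.0, 0.0])                          # identity
q_90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])  # 90 deg about z
b_mid = iterative_blend([q_id, q_90], [0.5, 0.5])
```

With equal weights, the blend of the identity and a 90° rotation about z converges to the 45° rotation, i.e., the geodesic mean of the two rotation parts.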
Specifically, as shown in fig. 2, each frame of point cloud data in the multi-frame point cloud data of step S102, together with its two sequentially adjacent frames, is processed in step S104 using the dual quaternion iterative mixing method. In this embodiment, 4 frames of point cloud data are processed (that is, n in fig. 2 is 3), and the specific content of the dual quaternion-based multi-view registration method of this embodiment is as follows:
1) acquiring 4 frames of point cloud data 0, 1, 2 and 3;
2) selecting a first frame point cloud data 0 in 4 frames of point cloud data as a world coordinate system, and setting i belonging to (0, 3);
3) with the point cloud data 0 as a starting point, respectively registering the i-th frame of point cloud data with the (i-1)-th frame of point cloud data in the 4 frames of point cloud data by adopting an ICP (Iterative Closest Point) algorithm, and respectively acquiring registration processing results of the registered multi-frame point cloud data 0', 1', 2' and 3';
specifically, the point cloud data 0, 1, 2 and 3 are processed by adopting the ICP algorithm, and, taking the first frame of point cloud data 0 as the world coordinate system, T_10, T_21 and T_32 are obtained, wherein T_10 represents the coordinate transformation from the 1st frame point cloud to the 0th frame point cloud, T_21 represents the coordinate transformation from the 2nd frame point cloud to the 1st frame point cloud, and T_32 represents the coordinate transformation from the 3rd frame point cloud to the 2nd frame point cloud;
4) judging whether the registration processing result in the step 3) is converged, and if not, continuing to step 5);
5) taking the point cloud data 0' as a starting point (the point cloud data 0' is namely the point cloud data 0), recursively acquiring the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data through the relative coordinate transformation between every two adjacent frames of point cloud data corresponding to the currently acquired registration processing results;
in particular, from T_10, T_21 and T_32 obtained in step 3), T_31, T_20 and T_30 are acquired recursively, e.g. T_31 = T_32*T_21, T_20 = T_21*T_10, T_30 = T_32*T_20;
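The recursion of relative motions can be sketched with 4×4 homogeneous matrices. Note the composition order: under the column-vector convention used below, "frame 3 → frame 2 → frame 0" is written T_20 @ T_32, whereas the patent writes the factors in the order they are applied (T_30 = T_32*T_20); the two describe the same chaining:

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

rng = np.random.default_rng(0)

def random_T():
    # Random proper rotation via QR decomposition, plus a random translation
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return make_T(Q, rng.normal(size=3))

# Pairwise registration results: T10 maps frame-1 coordinates to frame 0, etc.
T10, T21, T32 = random_T(), random_T(), random_T()

# Recursion (column-vector convention: the right-most factor acts first)
T20 = T10 @ T21   # frame 2 -> frame 1 -> frame 0
T31 = T21 @ T32   # frame 3 -> frame 2 -> frame 1
T30 = T20 @ T32   # frame 3 -> frame 2 -> frame 0

p3 = np.array([0.3, -1.2, 2.0, 1.0])   # a homogeneous point in frame 3
```

Transforming `p3` with the chained `T30` and with the three pairwise transforms in sequence gives the same world-frame point, which is exactly what the recursion relies on.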
Processing the relative coordinate transformation between every two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data by adopting the dual quaternion iterative mixing method to obtain the rigid transformation from each frame of point cloud to the world coordinate system.

When the absolute motions T_10 … T_(n-1)0 contain noise or the initial values are inappropriate, errors inevitably occur; these errors need to be corrected so as to be reduced as much as possible.

Specifically, the relative motion of each frame from the 1st frame to the n-th frame with respect to its adjacent M frames is calculated, i.e., T_01 … T_0[1+(M-1)], T_12 … T_1[2+(M-1)], …, T_i(i+1) … T_i[(i+1)+(M-1)]. When M is set to 2, as shown in fig. 2, each frame performs a diffusion motion along its neighboring 2 frames, and the topology of the image is set to a ring. For T_(i-1)i, the rigid motion when the error function is minimized is expressed by a dual quaternion. The weight values in w = (w_1, w_2, …, w_n) are set to be the same, and the dual quaternion iterative mixing method is applied continuously to obtain the rigid transformations T'_10, T'_20 and T'_30 from each frame of point cloud data to the world coordinate system;
6) multiplying the 4 frames of point cloud data in step 3) by the corresponding rigid transformations T'_10, T'_20 and T'_30 obtained in step 5), carrying out coordinate transformation, and obtaining the multi-frame point cloud data after coordinate transformation;
7) respectively carrying out registration processing on the i-th frame point cloud data and the (i-1)-th frame point cloud data in the 4 frames of point cloud data after coordinate transformation by adopting the ICP algorithm, and respectively obtaining the registration processing results of the corresponding new multi-frame point cloud data 0', 1', 2', 3';
8) judging whether the registration processing result in step 7) is converged; if not, returning to step 5) until the registration processing result in step 7) is converged; if so, executing step 9);
9) outputting the final registration processing result of the 4 frames of point cloud data.
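A minimal runnable skeleton of steps 1)-9), with the pairwise ICP step replaced by a translation-only least-squares fit and the dual quaternion mixing step replaced by simple chaining of relative transforms (helper names `pairwise_icp`, `blend_to_world`, `apply_T` are illustrative, not from the patent):

```python
import numpy as np

def pairwise_icp(src, dst):
    """Stand-in for pairwise ICP (steps 3/7): translation-only least-squares fit.
    Returns a 4x4 transform mapping src coordinates into dst coordinates."""
    T = np.eye(4)
    T[:3, 3] = dst.mean(axis=0) - src.mean(axis=0)
    return T

def blend_to_world(rel):
    """Stand-in for the mixing step (step 5): chain the relative transforms
    rel[i] (frame i+1 -> frame i) into frame -> world transforms."""
    world = [np.eye(4)]
    for T in rel:
        world.append(world[-1] @ T)
    return world

def apply_T(T, pts):
    return pts @ T[:3, :3].T + T[:3, 3]

rng = np.random.default_rng(1)
base = rng.normal(size=(50, 3))
# 4 frames of the same cloud, shifted along x (frame 0 is the world frame)
frames = [base + np.array([0.1 * i, 0.0, 0.0]) for i in range(4)]

for _ in range(10):                                                  # outer loop
    rel = [pairwise_icp(frames[i + 1], frames[i]) for i in range(3)] # step 3)/7)
    world = blend_to_world(rel)                                      # step 5)
    frames = [apply_T(world[i], f) for i, f in enumerate(frames)]    # step 6)
    err = max(np.abs(f - frames[0]).max() for f in frames)
    if err < 1e-9:                                                   # step 8)
        break
# step 9): all frames now share the world coordinate system of frame 0
```

For these translation-only frames the loop converges in the first pass; with real ICP and dual quaternion mixing, the outer loop repeats steps 5)-8) until the registration result converges.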
In this embodiment, the ICP algorithm and dual quaternions are fused, which solves the prior-art problem that performing registration with the ICP algorithm alone easily causes the registration result to fall into a local minimum, so that the data cannot converge and the registration fails; moreover, fusing the dual quaternion iterative mixing method with the ICP algorithm reduces the registration error generated during pairwise registration of multiple views, thereby improving the registration accuracy.
Example 2
As shown in fig. 3, the difference between this embodiment and the dual quaternion-based multi-view registration method of embodiment 1 is that step 1042 is replaced by step 1043, specifically:

1043, processing the relative coordinate transformation between every two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data by adopting the dual quaternion linear mixing method to obtain the rigid transformation from each frame of point cloud to the world coordinate system;
the adjacent frames of point cloud data represent the frames of point cloud data adjacent to the left and right of each frame of point cloud data.
Specifically, as shown in fig. 2, each frame of point cloud data in the multi-frame point cloud data of step S102, together with its two sequentially adjacent frames, is processed in step S104 using the dual quaternion linear mixing method. In this embodiment, 4 frames of point cloud data are processed (that is, n in fig. 2 is 3), and the specific content of the dual quaternion-based multi-view registration method of this embodiment is as follows:
1) acquiring 4 frames of point cloud data 0, 1, 2 and 3;
2) selecting a first frame point cloud data 0 in 4 frames of point cloud data as a world coordinate system, and setting i belonging to (0, 3);
3) with the point cloud data 0 as a starting point, respectively registering the i-th frame of point cloud data with the (i-1)-th frame of point cloud data in the 4 frames of point cloud data by adopting the ICP (Iterative Closest Point) algorithm, and respectively acquiring registration processing results of the registered multi-frame point cloud data 0', 1', 2' and 3';
specifically, the point cloud data 0, 1, 2 and 3 are processed by adopting the ICP algorithm, and, taking the first frame of point cloud data 0 as the world coordinate system, T_10, T_21 and T_32 are obtained, wherein T_10 represents the coordinate transformation from the 1st frame point cloud to the 0th frame point cloud, T_21 represents the coordinate transformation from the 2nd frame point cloud to the 1st frame point cloud, and T_32 represents the coordinate transformation from the 3rd frame point cloud to the 2nd frame point cloud;
4) judging whether the registration processing result in the step 3) is converged, and if not, continuing to step 5);
5) taking the point cloud data 0' as a starting point (the point cloud data 0' is namely the point cloud data 0), recursively acquiring the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data through the relative coordinate transformation between every two adjacent frames of point cloud data corresponding to the currently acquired registration processing results;
in particular, from T_10, T_21 and T_32 obtained in step 3), T_31, T_20 and T_30 are acquired recursively, e.g. T_31 = T_32*T_21, T_20 = T_21*T_10, T_30 = T_32*T_20;
Processing the relative coordinate transformation between every two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data by adopting the dual quaternion linear mixing method to obtain the rigid transformation from each frame of point cloud to the world coordinate system.

When the absolute motions T_10 … T_(n-1)0 contain noise or the initial values are inappropriate, errors inevitably occur; these errors need to be corrected so as to be reduced as much as possible.

Specifically, the relative motion of each frame from the 1st frame to the n-th frame with respect to its adjacent M frames is calculated, i.e., T_01 … T_0[1+(M-1)], T_12 … T_1[2+(M-1)], …, T_i(i+1) … T_i[(i+1)+(M-1)]. When M is set to 2, as shown in fig. 2, each frame performs a diffusion motion along its neighboring 2 frames, and the topology of the image is set to a ring. For T_(i-1)i, the rigid motion when the error function is minimized is expressed by a dual quaternion.
The relative motions of each frame of point cloud data are processed by adopting the dual quaternion linear mixing method, obtaining the absolute motions T'_10, T'_20 and T'_30 from the 1st, 2nd and 3rd frames of point cloud data to the world coordinate system:

T'_10 = w_1·T_10·T_00 + w_2·T_12·T_20 + w_3·T_13·T_30 + w_4·T_31·T_30

T'_20 = w_1·T_20·T_00 + w_2·T_12·T_10 + w_3·T_23·T_30 + w_4·T_02·T_00

T'_30 = w_1·T_30·T_00 + w_2·T_31·T_10 + w_3·T_32·T_20 + w_4·T_13·T_10
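The linear mixing step can be illustrated on the rotation part alone: dual quaternion linear mixing reduces, for rotations, to a weighted sum of unit quaternions followed by renormalization (with sign alignment so that q and −q do not cancel). A sketch, with illustrative names:

```python
import numpy as np

def qlb(quats, weights):
    """Linear blend of unit quaternions: weighted sum, then renormalize.
    This is the rotation part of dual quaternion linear mixing; the full
    method also blends the dual (translation-carrying) part."""
    ref = quats[0]
    acc = np.zeros(4)
    for q, w in zip(quats, weights):
        if np.dot(q, ref) < 0:   # align antipodal representations
            q = -q
        acc += w * q
    return acc / np.linalg.norm(acc)

# Equal-weight blend of the identity and a 90-degree rotation about z
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_mid = qlb([q_id, q_90], [0.5, 0.5])
```

For two rotations with equal weights the normalized linear blend coincides with the geodesic midpoint, here the 45° rotation about z; unlike the iterative mixing of embodiment 1, no fixed-point loop is needed.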
6) multiplying the 4 frames of point cloud data in step 3) by the corresponding rigid transformations T'_10, T'_20 and T'_30 obtained in step 5), carrying out coordinate transformation, and obtaining the multi-frame point cloud data after coordinate transformation;
7) respectively carrying out registration processing on the i-th frame point cloud data and the (i-1)-th frame point cloud data in the 4 frames of point cloud data after coordinate transformation by adopting the ICP algorithm, and respectively obtaining the registration processing results of the corresponding new multi-frame point cloud data 0', 1', 2', 3';
8) judging whether the registration processing result in step 7) is converged; if not, returning to step 5) until the registration processing result in step 7) is converged; if so, executing step 9);
9) outputting the final registration processing result of the 4 frames of point cloud data.
In this embodiment, the ICP algorithm and dual quaternions are fused, which solves the prior-art problem that performing registration with the ICP algorithm alone easily causes the registration result to fall into a local minimum, so that the data cannot converge and the registration fails; moreover, fusing the dual quaternion linear mixing method with the ICP algorithm reduces the registration error generated during pairwise registration of multiple views, thereby improving the registration accuracy.
Example 3
As shown in fig. 4, the dual quaternion-based multi-view registration system of the present embodiment includes a first point cloud data obtaining module 1, a first registration processing module 2, a first determining module 3, a selecting unit 4, a rigid transformation obtaining module 5, a second point cloud data obtaining module 6, a second registration processing module 7, a second determining module 8, and an output module 9.
The first point cloud data acquisition module 1 is used for acquiring multi-frame point cloud data;
the selecting unit 4 is configured to select a first frame of point cloud in the multi-frame of point cloud data as a world coordinate system;
the first registration processing module 2 adopts the ICP algorithm to perform registration processing on every two adjacent frames of point cloud data in the multi-frame point cloud data, and obtains the multi-frame point cloud data after the registration processing and a registration processing result, this registration processing result being the first registration processing result; the first judging module 3 is configured to judge whether the registration processing result obtained by the first registration processing module 2 converges, and if not, to invoke the rigid transformation obtaining module 5;
specifically, the first judging module 3 is configured to judge whether the error value is smaller than a first set threshold or whether the number of iterations is greater than a second set threshold.
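A sketch of this convergence test; the threshold values below are illustrative, not taken from the patent:

```python
def is_converged(error, iterations, error_threshold=1e-4, max_iterations=50):
    """Convergence test used by the judging modules: either the registration
    error falls below the first set threshold, or the iteration count exceeds
    the second set threshold (budget exhausted)."""
    return error < error_threshold or iterations > max_iterations

# e.g. a small error converges early; a large error converges only once the
# iteration budget is exhausted
done_early = is_converged(error=5e-5, iterations=3)
done_budget = is_converged(error=0.1, iterations=51)
```

Note that exceeding the iteration budget also counts as "converged" here, so the outer loop always terminates.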
The rigid transformation obtaining module 5 comprises a first rigid transformation obtaining unit 51 and a second rigid transformation obtaining unit 52, and is used for obtaining rigid transformation from each frame of point cloud to a world coordinate system;
specifically, the first rigid transformation obtaining unit 51 recursively obtains, through the relative coordinate transformation between every two adjacent frames of point cloud data corresponding to the currently obtained registration processing result, the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data;
the second rigid transformation obtaining unit 52 is configured to perform processing on the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by using a dual-quaternion iterative mixture method, so as to obtain a rigid transformation from each frame of point cloud to a world coordinate system;
the second point cloud data obtaining module 6 is configured to multiply the multi-frame point cloud data after the registration processing by the rigid transformation in the second rigid transformation obtaining unit 52 to obtain the multi-frame point cloud data after the coordinate transformation.
The second registration processing module 7 is configured to perform registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data in the second point cloud data obtaining module 6 by using an ICP algorithm, and obtain corresponding new multi-frame point cloud data and a registration processing result respectively.
The first registration processing module 2 and/or the second registration processing module 7 includes a point cloud data acquisition unit 11, an inter-frame corresponding point acquisition unit 12, a rigid transformation acquisition unit 13, a target point cloud acquisition unit 14, and an error calculation unit 15.
The point cloud data acquisition unit 11 is configured to acquire any two adjacent frames of point cloud data with the first frame of point cloud as a starting point, and the two adjacent frames of point cloud data are respectively used as a source point cloud and a target point cloud;
in the ICP algorithm iteration process, when the source point cloud and the target point cloud are selected, the amount of point cloud data in each frame is very large, and a large amount of time is consumed in processes such as corresponding-point searching and matching; therefore, feature points are generally extracted by using NARF, SURF, SIFT and the like as key points to perform matching and alignment.
The inter-frame corresponding point acquiring unit 12 is configured to acquire inter-frame corresponding points of the source point cloud and the target point cloud;
the interframe corresponding points are used for representing the positions of the same object in different frames; the acquisition mode of the corresponding point between the frames comprises at least one of point-to-point, point-to-projection and point-to-surface.
For example, preferably, the inter-frame corresponding points of the candidate point cloud and the matching corresponding point cloud are obtained by adopting a point-to-surface search method, which reduces the problem of matching a large number of wrong corresponding points that arises with a point-to-point search method, and greatly improves convergence precision compared with a point-to-projection search method. Meanwhile, wrong corresponding points are removed by using surface geometric characteristics (such as curvature or normal vectors), and the corresponding relative coordinate transformation when the ICP error function is minimized is obtained from the remaining candidate points.
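The correspondence search with outlier rejection can be sketched as follows; for clarity this uses brute-force point-to-point nearest neighbours plus a normal-compatibility filter standing in for the curvature/normal-based rejection (names and the 0.8 threshold are illustrative):

```python
import numpy as np

def correspondences(src, src_normals, dst, dst_normals, dot_min=0.8):
    """Nearest-neighbour (point-to-point) matching with a simple
    normal-compatibility filter to discard wrong corresponding points."""
    pairs = []
    for i, p in enumerate(src):
        j = int(np.argmin(np.sum((dst - p) ** 2, axis=1)))  # nearest dst point
        if np.dot(src_normals[i], dst_normals[j]) >= dot_min:
            pairs.append((i, j))                            # keep compatible pair
    return pairs

z = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
dst = np.array([[0.1, 0.0, 0.0], [1.1, 0.0, 0.0]])
# second dst normal is incompatible, so only the first pair survives the filter
pairs = correspondences(src, [z, z], dst, [z, x])
```

In practice a KD-tree replaces the brute-force search, and a point-to-surface metric replaces the point-to-point distance, but the rejection logic is the same.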
The rigid transformation obtaining unit 13 is configured to obtain a relative coordinate transformation from the corresponding target point cloud to the source point cloud when the ICP error function is minimized according to the inter-frame corresponding point;
specifically, the relative coordinate transformation is obtained as follows:

for two frames of three-dimensional point sets M and D, respectively obtained and representing the same scene or object, where M is the model set and D is the data set, the best-aligning rigid body transformation (R, t) is obtained by minimizing the error function of the following formula, where R represents a three-dimensional rotation matrix and t represents a translation vector:

E(R, t) = Σ_{i=1..N_m} Σ_{j=1..N_d} w_{i,j}·||m_i − (R·d_j + t)||²

where d_j represents a point in the data set D and m_i represents a point in the model set M; w_{i,j} represents a weight taking the value 0 or 1: w_{i,j} is 1 when the i-th point in the point set M and the j-th point in the point set D are the same point in space, and 0 otherwise. The associated point pairs are represented in tuple form (m_i, d_i) (a represents point-set association; b represents minimizing an error function on the associated point set for pose estimation), and the above formula becomes:

E(R, t) = (1/N_p)·Σ_{i=1..N_p} ||m_i − (R·d_i + t)||²

where N_p represents the number of associated point pairs.
The main approaches for the closed-form solution of the error function in the above formula are singular value decomposition (SVD), orthogonal matrices (OM) and unit quaternions (UQ).
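Of the closed-form solutions named above, the SVD one can be sketched as follows (a minimal Kabsch-style solver for the error function E(R, t); names are illustrative):

```python
import numpy as np

def best_rigid_transform(D, M):
    """Closed-form least-squares (R, t) minimizing sum ||m_i - (R d_i + t)||^2
    over associated point pairs, via singular value decomposition."""
    cd, cm = D.mean(axis=0), M.mean(axis=0)
    H = (D - cd).T @ (M - cm)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = cm - R @ cd
    return R, t

# Recover a known transform from noise-free associated pairs
rng = np.random.default_rng(42)
D = rng.normal(size=(30, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
M = D @ R_true.T + t_true                     # m_i = R d_i + t
R, t = best_rigid_transform(D, M)
```

With exact correspondences the solver recovers (R, t) to machine precision; inside ICP it is re-run on the updated correspondences at every iteration.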
The target point cloud obtaining unit 14 is configured to perform coordinate transformation on the target point cloud according to the corresponding relative coordinate transformation, obtain a new target point cloud, and obtain corresponding new multi-frame point cloud data;
the error calculation unit 15 is configured to calculate an error value between the new target point cloud and the source point cloud;
and the error value is used for representing the registration processing result of the random two adjacent frames of point cloud data.
The second judging module 8 is configured to judge whether the registration processing result obtained by the second registration processing module 7 converges, until the registration processing result in the second registration processing module 7 converges; this registration processing result is the second registration processing result. As long as any one registration processing result has not converged, the situation is regarded as non-convergence; and as long as the registration processing result has not converged, all the point cloud data need to be further registered by adopting the dual quaternion iterative mixing method.
When the first judging module 3 and/or the second judging module 8 judge that the judgment result is yes, the output module 9 is called;
the output module 9 is configured to output a final registration processing result of the multi-frame point cloud data.
When the point cloud data of each frame overlap, the multi-frame registration problem is converted into iteration after rigid transformation diffusion between the frames.
Suppose V_i represents the transformation from the coordinate system of point cloud frame i to the global coordinate system, V_j represents the transformation from the coordinate system of point cloud frame j to the global coordinate system, and T_ij represents the coordinate transformation from the i-th frame point cloud to the j-th frame point cloud. If the point-pair registration is in a noise-free state, adjacent views i and j satisfy T_ji·V_i = V_j; that is, multi-frame registration may be understood as seeking a rigid transformation from each view to a common reference frame. When the error between the pose V_i of point cloud frame i and the calculated pose T_ij·V_j is minimized, the multi-frame registration is satisfied, and the error function is expressed as follows:
D = argmin Σ_i Σ_{j∈N(i)} d(T_ij·V_j, V_i)
where N(i) represents all views adjacent to point cloud i. Any rigid transformation can be converted into a screw motion, i.e., a rotation by an angle around a screw axis in three-dimensional space together with a translation by a distance along that axis. The screw distance is defined as

d_SC(T_ij·V_j, V_i, p) = sqrt(t² + α²·γ_p²)

wherein t represents the translation distance along the screw axis, α represents the rotation angle about the screw axis, and γ_p represents the distance of point p from the screw axis; the error function is then expressed as:
D_SC = argmin Σ_i Σ_{j∈N(i)} Σ_{p∈S} d_SC(T_ij·V_j, V_i, p)²
For any p ∈ R³, when T_ij·V_j = V_i, the screw distance d_SC(T_ij·V_j, V_i, p) takes its minimum value.
When the angle change between the view angles of the respective frames is not large, d_SC(T_ij·V_j, V_i, p) ≈ ||T_ij·V_j·p − V_i·p||; therefore, the registration of the multiple views can be achieved by solving V_i = SCAVG(±T_ij·V_j). The screw distance mean is obtained by solving with the dual quaternion iterative mixing (DIB) method using the same weight values, and the dual quaternion iterative mixing procedure is as follows:
b̂ ← DLB(w; q̂_1, …, q̂_n)

repeat x̂ ← Σ_i w_i·log(b̂⁻¹·q̂_i), b̂ ← b̂·exp(x̂)

According to the above formulas, when ||x̂|| < p is satisfied, the output b̂ = DIB(w; q̂_1, …, q̂_n) is obtained, wherein q̂_1, …, q̂_n denote dual quaternions, w = (w_1, w_2, …, w_n) represents the weight values, and p represents the desired precision. When the same weight value is taken for every weight, the output b̂ is called the screw distance mean SCAVG(q̂_1, …, q̂_n) of q̂_1, …, q̂_n.
Specifically, as shown in fig. 2, each frame of point cloud data in the multi-frame point cloud data of step S102, together with its two sequentially adjacent frames, is processed in step S104 using the dual quaternion iterative mixing method. In this embodiment, 4 frames of point cloud data are processed (that is, n in fig. 2 is 3), and the working principle of the dual quaternion-based multi-view registration system of this embodiment is as follows:
1) acquiring 4 frames of point cloud data 0, 1, 2 and 3 through a first point cloud data acquisition module 1;
2) selecting, through the selecting unit 4, the point cloud data 0 in the 4 frames of point cloud data as the world coordinate system, and setting i ∈ (0, 3);
3) with the point cloud data 0 as a starting point, respectively registering the i-th frame of point cloud data with the (i-1)-th frame of point cloud data in the 4 frames of point cloud data by using the ICP (Iterative Closest Point) algorithm through the first registration processing module 2, and respectively acquiring registration processing results of the registered multi-frame point cloud data 0', 1', 2' and 3';
specifically, the point cloud data 0, 1, 2 and 3 are processed by adopting the ICP algorithm, and, taking the first frame of point cloud data 0 as the world coordinate system, T_10, T_21 and T_32 are obtained, wherein T_10 represents the coordinate transformation from the 1st frame point cloud to the 0th frame point cloud, T_21 represents the coordinate transformation from the 2nd frame point cloud to the 1st frame point cloud, and T_32 represents the coordinate transformation from the 3rd frame point cloud to the 2nd frame point cloud;
4) judging, by the first judging module 3, whether the registration processing result in the first registration processing module 2 is converged, and if not, invoking the rigid transformation obtaining module 5;
5) taking the point cloud data 0' as a starting point (the point cloud data 0' is namely the point cloud data 0), recursively acquiring the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data through the relative coordinate transformation between every two adjacent frames of point cloud data corresponding to the currently acquired registration processing results;
in particular, from T_10, T_21 and T_32 obtained in step 3), T_31, T_20 and T_30 are acquired recursively, e.g. T_31 = T_32*T_21, T_20 = T_21*T_10, T_30 = T_32*T_20;
Processing the relative coordinate transformation between every two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data by adopting the dual quaternion iterative mixing method to obtain the rigid transformation from each frame of point cloud to the world coordinate system.

When the absolute motions T_10 … T_(n-1)0 contain noise or the initial values are inappropriate, errors inevitably occur; these errors need to be corrected so as to be reduced as much as possible.

Specifically, the relative motion of each frame from the 1st frame to the n-th frame with respect to its adjacent M frames is calculated, i.e., T_01 … T_0[1+(M-1)], T_12 … T_1[2+(M-1)], …, T_i(i+1) … T_i[(i+1)+(M-1)]. When M is set to 2, as shown in fig. 2, each frame performs a diffusion motion along its neighboring 2 frames, and the topology of the image is set to a ring. For T_(i-1)i, the rigid motion when the error function is minimized is expressed by a dual quaternion. The weight values in w = (w_1, w_2, …, w_n) are set to be the same, and the dual quaternion iterative mixing method is applied continuously to obtain the rigid transformations T'_10, T'_20 and T'_30 from each frame of point cloud data to the world coordinate system;
6) multiplying, by the second point cloud data obtaining module 6, the 4 frames of point cloud data in the first registration processing module 2 by the corresponding rigid transformations T'_10, T'_20 and T'_30 in the second rigid transformation obtaining unit 52, carrying out coordinate transformation, and obtaining the multi-frame point cloud data after coordinate transformation;
7) respectively carrying out registration processing on the i-th frame point cloud data and the (i-1)-th frame point cloud data in the 4 frames of point cloud data in the second point cloud data obtaining module 6 by using the ICP algorithm through the second registration processing module 7, and respectively acquiring the registration processing results of the registered multi-frame point cloud data 0', 1', 2', 3';
8) judging whether the registration processing result in the second registration processing module 7 is converged; if not, returning to step 5) until the registration processing result in the second registration processing module 7 is judged to be converged; if so, invoking the output module 9;
9) outputting the final registration processing result of the 4 frames of point cloud data by adopting the output module 9.
In this embodiment, the ICP algorithm and dual quaternions are fused, which solves the prior-art problem that performing registration with the ICP algorithm alone easily causes the registration result to fall into a local minimum, so that the data cannot converge and the registration fails; moreover, fusing the dual quaternion iterative mixing method with the ICP algorithm reduces the registration error generated during pairwise registration of multiple views, thereby improving the registration accuracy.
Example 4
As shown in fig. 5, the present embodiment differs from the dual quaternion-based multi-view registration system of embodiment 3 in that the second rigid transformation obtaining unit 52 is replaced with a third rigid transformation obtaining unit 53, specifically:

the third rigid transformation obtaining unit 53 is configured to process the relative coordinate transformation between every two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and its several adjacent frames of point cloud data by adopting the dual quaternion linear mixing method, so as to obtain the rigid transformation from each frame of point cloud to the world coordinate system.
Specifically, as shown in fig. 2, each frame of point cloud data in the multi-frame point cloud data, together with its two sequentially adjacent frames, is processed using the dual quaternion linear mixing method. In this embodiment, 4 frames of point cloud data are processed (that is, n in fig. 2 is 3), and the working principle of the dual quaternion-based multi-view registration system of this embodiment is as follows:
1) acquiring 4 frames of point cloud data 0, 1, 2 and 3 through a first point cloud data acquisition module 1;
2) selecting, through the selecting unit 4, the point cloud data 0 in the 4 frames of point cloud data as the world coordinate system, and setting i ∈ (0, 3);
3) with the point cloud data 0 as a starting point, respectively registering the i-th frame of point cloud data with the (i-1)-th frame of point cloud data in the 4 frames of point cloud data by using the ICP (Iterative Closest Point) algorithm through the first registration processing module 2, and respectively acquiring registration processing results of the registered multi-frame point cloud data 0', 1', 2' and 3';
specifically, the point cloud data 0, 1, 2 and 3 are processed by adopting the ICP algorithm, and, taking the first frame of point cloud data 0 as the world coordinate system, T_10, T_21 and T_32 are obtained, wherein T_10 represents the coordinate transformation from the 1st frame point cloud to the 0th frame point cloud, T_21 represents the coordinate transformation from the 2nd frame point cloud to the 1st frame point cloud, and T_32 represents the coordinate transformation from the 3rd frame point cloud to the 2nd frame point cloud;
4) judging, through the first judging module 3, whether the registration processing result in the first registration processing module 2 has converged, and if not, calling the selecting unit 4;
5) taking point cloud data 0' as a starting point (point cloud data 0' is the same as point cloud data 0), recursively obtaining the rigid transformation between each frame of point cloud data and its several adjacent frames from the relative coordinate transformations between each two adjacent frames corresponding to the currently obtained registration processing result;
in particular, from T10, T21 and T32 obtained in step 3), T31, T20 and T30 are obtained recursively, e.g. T31 = T32·T21, T20 = T21·T10, T30 = T32·T20;
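The recursion in step 5) is plain composition of homogeneous transforms. The sketch below is an illustrative assumption (toy rotation angles and translations, hypothetical helper names); it uses a row-vector convention so the matrix products read in the same order as the text, e.g. T20 = T21·T10:

```python
import numpy as np

def rot_z(a):
    """Rotation about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def make_T(R, t):
    """4x4 transform for row vectors: [x, 1] @ T = [R @ x + t, 1]."""
    T = np.eye(4)
    T[:3, :3] = R.T
    T[3, :3] = t
    return T

# Hypothetical pairwise ICP results from step 3): frame i -> frame i-1.
T10 = make_T(rot_z(0.10), np.array([1.0, 0.0, 0.0]))
T21 = make_T(rot_z(0.20), np.array([0.0, 1.0, 0.0]))
T32 = make_T(rot_z(0.05), np.array([0.0, 0.0, 1.0]))

# Recursion of step 5): longer-range motions as products of adjacent ones.
T20 = T21 @ T10   # frame 2 -> world (frame 0)
T31 = T32 @ T21   # frame 3 -> frame 1
T30 = T32 @ T20   # frame 3 -> world
```

Chaining a point through the pairwise transforms one frame at a time gives the same result as applying the composed long-range transform directly.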
The third rigid transformation obtaining unit 17 processes the relative coordinate transformations between each two adjacent frames of point cloud data and the rigid transformations between each frame of point cloud data and its several adjacent frames by the dual quaternion linear blending method to obtain the rigid transformation from each frame of point cloud to the world coordinate system;
wherein, when the absolute motions T10 … T(n-1)0 are affected by noise or an unsuitable initial value, errors inevitably occur; these errors need to be corrected so as to reduce them as much as possible.
Specifically, the relative motion of each frame from frame 1 to frame n with respect to its adjacent M frames is calculated, i.e. T01 … T0[1+(M-1)], T12 … T1[2+(M-1)], …, Ti(i+1) … Ti[(i+1)+(M-1)]. When M is set to 2, as shown in fig. 2, each frame performs a diffusion motion along its 2 neighbouring frames, and the topology of the graph is set to a ring. For T(i-1)i, the rigid motion is expressed by a dual quaternion when the error function is minimized.
Processing the relative motions of each frame of point cloud data by the dual quaternion linear blending method yields the absolute motions T'10, T'20 and T'30 of frames 1, 2 and 3 to the world coordinate system:
T'10 = w1·T10·T00 + w2·T12·T20 + w3·T13·T30 + w4·T31·T30
T'20 = w1·T20·T00 + w2·T12·T10 + w3·T23·T30 + w4·T02·T00
T'30 = w1·T30·T00 + w2·T31·T10 + w3·T32·T20 + w4·T13·T10
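The blending itself can be sketched with plain dual quaternion arithmetic: each rigid motion becomes a unit dual quaternion, the weighted candidates are summed, and the sum is renormalised by the norm of its real part (the standard dual quaternion linear blending normalisation). The function names are assumptions, and this is a generic DLB sketch rather than the patented weight formula:

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(q, t):
    """Unit dual quaternion (q_r, q_d) from rotation quaternion q and translation t."""
    q_d = 0.5 * quat_mul(np.array([0.0, *t]), q)
    return q, q_d

def dq_blend(dqs, weights):
    """Dual quaternion linear blending: weighted sum, renormalised by |q_r|."""
    qr = sum(w * dq[0] for w, dq in zip(weights, dqs))
    qd = sum(w * dq[1] for w, dq in zip(weights, dqs))
    n = np.linalg.norm(qr)
    return qr / n, qd / n

def dq_to_rt(qr, qd):
    """Recover rotation quaternion and translation from a unit dual quaternion."""
    conj = np.array([qr[0], -qr[1], -qr[2], -qr[3]])
    t = 2.0 * quat_mul(qd, conj)[1:]
    return qr, t
```

Blending several candidate absolute motions for the same frame this way keeps the result a valid rigid motion, which is why the method can repair the accumulated pairwise errors rather than averaging matrices naively.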
6) multiplying, through the second point cloud data obtaining module 5, the 4 frames of point cloud data in the first registration processing module 2 by the corresponding rigid transformations T'10, T'20 and T'30 obtained by the second rigid transformation obtaining unit 52, thereby performing the coordinate transformation and obtaining the coordinate-transformed multi-frame point cloud data;
7) registering, through the second registration processing module 7 by using the ICP (Iterative Closest Point) algorithm, the ith frame and the (i-1)th frame of the 4 frames of point cloud data in the second point cloud data acquisition module 6, and obtaining the registered frames of point cloud data 0', 1', 2' and 3' as the registration processing results;
8) judging whether the registration processing result in the second registration processing module 7 has converged; if not, returning to step 5) until the registration processing result in the second registration processing module 7 has converged; if so, calling the output module 9;
9) and outputting the final registration processing result of the 4 frames of point cloud data by adopting the output module 9.
In this embodiment, the ICP algorithm is fused with dual quaternions, which solves the prior-art problem that registration using the ICP algorithm alone easily falls into a local minimum, so that the data cannot converge and registration fails. Fusing the dual quaternion linear blending method with the ICP algorithm also reduces the registration error produced during pairwise registration of multiple views, thereby improving registration accuracy.
Example 5
An apparatus for dual-quaternion-based multi-view registration, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the dual-quaternion-based multi-view registration method of embodiment 1 when executing the computer program.
Example 6
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the dual quaternion-based multi-view registration method of embodiment 1.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the invention may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps of implementing the dual quaternion based multi-view registration method of embodiment 1, when the program product is run on the terminal device.
Program code for carrying out the invention may be written in any combination of one or more programming languages, and may be executed entirely on the user device, partly on the user device as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device. While specific embodiments of the invention have been described above, those skilled in the art will appreciate that these are by way of example only and that the scope of the invention is defined by the appended claims. Various changes and modifications may be made to these embodiments by those skilled in the art without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention.

Claims (18)

1. A multi-view registration method based on dual quaternion, the multi-view registration method comprising:
acquiring multi-frame point cloud data, and selecting a first frame point cloud in the multi-frame point cloud data as a world coordinate system;
carrying out registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data by adopting an ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after the registration processing and a first registration processing result;
judging whether the first registration processing result is converged, and if not, acquiring rigid transformation from each frame of point cloud to a world coordinate system;
performing coordinate transformation on the multi-frame point cloud data subjected to the registration processing according to the rigid transformation to obtain the multi-frame point cloud data subjected to the coordinate transformation;
carrying out registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data after coordinate transformation by adopting an ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after registration processing and a second registration processing result;
judging whether the second registration processing result is converged, and returning to the step of obtaining the rigid transformation from each frame of point cloud to the world coordinate system when the second registration processing result is judged not to be converged;
the registration process specifically includes:
collecting any two adjacent frames of point cloud data by taking the first frame of point cloud as a starting point, and respectively taking the two frames of point cloud data as a source point cloud and a target point cloud;
acquiring interframe corresponding points of the source point cloud and the target point cloud;
the interframe corresponding points are used for representing the positions of the same object in different frames;
obtaining an ICP error function according to the interframe corresponding points, and obtaining the corresponding relative coordinate transformation from the target point cloud to the source point cloud when the error function is minimized;
carrying out coordinate transformation on the target point cloud according to the corresponding relative coordinate transformation to obtain a new target point cloud, obtaining corresponding new multi-frame point cloud data, and calculating an error value between the new target point cloud and the source point cloud;
and the error value is used for representing the registration processing result of the two adjacent frames of point cloud data.
2. The dual-quaternion-based multi-view registration method of claim 1, wherein the step of obtaining a rigid transformation of each frame point cloud to a world coordinate system specifically comprises: recursion is carried out to obtain rigid transformation of each frame of point cloud data and a plurality of adjacent frames of point cloud data according to the relative coordinate transformation between each two adjacent frames of point cloud data corresponding to the first registration processing result;
and processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system.
3. The dual-quaternion-based multi-view registration method of claim 2, wherein the step of processing the relative coordinate transformation between the two adjacent frames of point cloud data and the rigid transformation between the frame point cloud data and the several adjacent frames of point cloud data by using a dual-quaternion mixing method to obtain the rigid transformation from the frame point cloud to the world coordinate system specifically comprises:
and processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion iterative mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system.
4. The dual-quaternion-based multi-view registration method of claim 2, wherein the step of processing the relative coordinate transformation between the two adjacent frames of point cloud data and the rigid transformation between the frame point cloud data and the several adjacent frames of point cloud data by using a dual-quaternion mixing method to obtain the rigid transformation from the frame point cloud to the world coordinate system specifically comprises:
and processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual quaternion linear mixing method to obtain the rigid transformation from each frame of point cloud to a world coordinate system.
5. The dual-quaternion-based multi-view registration method of claim 1, wherein the step of performing coordinate transformation on the multi-frame point cloud data after the registration processing according to the rigid transformation to obtain the multi-frame point cloud data after the coordinate transformation specifically comprises:
and multiplying the multi-frame point cloud data after the registration processing by the rigid transformation to obtain the multi-frame point cloud data after the coordinate transformation.
6. The dual-quaternion-based multi-view registration method as claimed in claim 1, wherein a final registration processing result of the multi-frame point cloud data is output when the determination in the step of judging whether the first registration processing result converges and/or the step of judging whether the second registration processing result converges is yes.
7. The dual-quaternion-based multi-view registration method as claimed in claim 1, wherein the step of determining whether the first registration processing result converges and/or the step of determining whether the second registration processing result converges specifically comprises:
and judging whether the error value is smaller than a first set threshold or whether the iteration number is larger than a second set threshold.
8. The dual-quaternion-based multiview registration method of claim 1, wherein the inter-frame corresponding point acquisition mode comprises at least one of point-to-point, point-to-projection, and point-to-plane.
9. A multi-view registration system based on dual quaternion is characterized by comprising a first point cloud data acquisition module, a selection unit, a first registration processing module, a first judgment module, a rigid transformation acquisition module, a second point cloud data acquisition module, a second registration processing module and a second judgment module;
the first point cloud data acquisition module is used for acquiring multi-frame point cloud data;
the selecting unit is used for selecting a first frame of point cloud in the multi-frame of point cloud data as a world coordinate system;
the first registration processing module adopts an ICP algorithm to perform registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data, and obtains the multi-frame point cloud data after the registration processing and a first registration processing result;
the first judging module is used for judging whether the first registration processing result obtained by the first registration processing module is converged or not, and if not, the rigid transformation obtaining module is called;
the rigid transformation acquisition module is used for acquiring rigid transformation from each frame of point cloud to a world coordinate system;
the second point cloud data acquisition module is used for carrying out coordinate transformation on the multi-frame point cloud data after registration processing according to the rigid transformation acquired by the second rigid transformation acquisition unit to acquire the multi-frame point cloud data after coordinate transformation;
the second registration processing module is used for performing registration processing on any two adjacent frames of point cloud data in the multi-frame point cloud data after coordinate transformation by adopting an ICP (Iterative Closest Point) algorithm to obtain the multi-frame point cloud data after registration processing and a second registration processing result;
the second judging module is configured to judge whether the second registration processing result obtained by the second registration processing module converges, and if not, invoke the second rigid transformation obtaining unit until the second registration processing result in the second registration processing module converges;
the first registration processing module and/or the second registration processing module comprise a point cloud data acquisition unit, an inter-frame corresponding point acquisition unit, a rigid transformation acquisition unit, a target point cloud acquisition unit and an error calculation unit;
the point cloud data acquisition unit is used for acquiring any two adjacent frames of point cloud data by taking the first frame of point cloud as a starting point, and respectively taking the two adjacent frames of point cloud data as a source point cloud and a target point cloud;
the inter-frame corresponding point acquisition unit is used for acquiring inter-frame corresponding points of the source point cloud and the target point cloud;
the interframe corresponding points are used for representing the positions of the same object in different frames;
the rigid transformation obtaining unit is used for obtaining an ICP error function according to the interframe corresponding points, and obtaining the corresponding relative coordinate transformation from the target point cloud to the source point cloud when the error function is minimized;
the target point cloud obtaining unit is used for carrying out coordinate transformation on the target point cloud according to the corresponding relative coordinate transformation to obtain a new target point cloud and obtain corresponding new multi-frame point cloud data;
the error calculation unit is used for calculating an error value between the new target point cloud and the source point cloud;
and the error value is used for representing the registration processing result of the two adjacent frames of point cloud data.
10. The dual-quaternion-based multiview registration system of claim 9, wherein the rigid transformation obtaining module comprises a first rigid transformation obtaining unit and a second rigid transformation obtaining unit;
the first rigid transformation obtaining unit is used for recursively obtaining rigid transformation of each frame of point cloud data and a plurality of adjacent frames of point cloud data according to relative coordinate transformation between each two adjacent frames of point cloud data corresponding to the currently obtained first registration processing result;
the second rigid transformation obtaining unit is used for processing the relative coordinate transformation between each two adjacent frames of point cloud data and the rigid transformation between each frame of point cloud data and a plurality of adjacent frames of point cloud data by adopting a dual-quaternion mixing method, and obtaining the rigid transformation from each frame of point cloud to a world coordinate system.
11. The dual-quaternion-based multi-view registration system as claimed in claim 10, wherein the second rigid transformation obtaining unit is configured to process the relative coordinate transformation between the two adjacent frames of point cloud data and the rigid transformation between the frames of point cloud data and the adjacent frames of point cloud data by using a dual-quaternion iterative mixture method to obtain a rigid transformation from the frames of point cloud to a world coordinate system.
12. The dual-quaternion-based multi-view registration system as claimed in claim 10, wherein the second rigid transformation obtaining unit is configured to process the relative coordinate transformation between the two adjacent frames of point cloud data and the rigid transformation between the frames of point cloud data and the adjacent frames of point cloud data by using a dual-quaternion linear blending method to obtain a rigid transformation from the frames of point cloud to a world coordinate system.
13. The dual-quaternion-based multiview registration system of claim 9, wherein the second point cloud data acquisition module is configured to multiply the registered multi-frame point cloud data by a rigid transformation in the second rigid transformation acquisition unit to acquire coordinate-transformed multi-frame point cloud data.
14. The dual-quaternion-based multiview registration system of claim 9, comprising an output module;
when the first judgment module and/or the second judgment module judge to be yes, the output module is called;
the output module is used for outputting the final registration processing result of the multi-frame point cloud data.
15. The dual-quaternion-based multiview registration system of claim 9, wherein the first judging module is configured to determine whether the error value is smaller than a first set threshold or whether the number of iterations is larger than a second set threshold.
16. The dual-quaternion-based multiview registration system of claim 9, wherein the inter-frame corresponding point acquisition modality comprises at least one of point-to-point, point-to-projection, and point-to-plane.
17. An apparatus for dual-quaternion based multi-view registration, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the dual-quaternion based multi-view registration method of any of claims 1-8 when executing the computer program.
18. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the dual quaternion-based multi-view registration method of any of claims 1 to 8.
CN201711341011.6A 2017-12-14 2017-12-14 View registration method, system, device and storage medium based on dual quaternion Active CN109961463B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711341011.6A CN109961463B (en) 2017-12-14 2017-12-14 View registration method, system, device and storage medium based on dual quaternion


Publications (2)

Publication Number Publication Date
CN109961463A CN109961463A (en) 2019-07-02
CN109961463B true CN109961463B (en) 2021-12-31

Family

ID=67018226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711341011.6A Active CN109961463B (en) 2017-12-14 2017-12-14 View registration method, system, device and storage medium based on dual quaternion

Country Status (1)

Country Link
CN (1) CN109961463B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236064A (en) * 2013-05-06 2013-08-07 东南大学 Point cloud automatic registration method based on normal vector
CN103279987A (en) * 2013-06-18 2013-09-04 厦门理工学院 Object fast three-dimensional modeling method based on Kinect
CN105787933A (en) * 2016-02-19 2016-07-20 武汉理工大学 Water front three-dimensional reconstruction apparatus and method based on multi-view point cloud registration
CN106643563A (en) * 2016-12-07 2017-05-10 西安知象光电科技有限公司 Table type large visual field three-dimensional scanning device and method

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538973B (en) * 2011-12-31 2014-04-02 南京理工大学 Rapidly converged scene-based non-uniformity correction method
KR101893788B1 (en) * 2012-08-27 2018-08-31 삼성전자주식회사 Apparatus and method of image matching in multi-view camera
CN103942782A (en) * 2014-03-31 2014-07-23 Tcl集团股份有限公司 Image stitching method and device
CN104935909B (en) * 2015-05-14 2017-02-22 清华大学深圳研究生院 Multi-image super-resolution method based on depth information
CN106097324A (en) * 2016-06-07 2016-11-09 中国农业大学 A kind of non-rigid 3D shape corresponding point determine method
CN106846387B (en) * 2017-02-09 2019-07-12 中北大学 Point cloud registration method based on neighborhood rotary volume



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant