CN111862111A - Point cloud registration algorithm based on region segmentation and fusion - Google Patents

Point cloud registration algorithm based on region segmentation and fusion

Info

Publication number
CN111862111A
Authority
CN
China
Prior art keywords
final
region
point cloud
rotation
translation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010627992.6A
Other languages
Chinese (zh)
Other versions
CN111862111B (en)
Inventor
王向伟
沙建军
孙英贺
彭锐晖
钱海宁
律帅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd
Original Assignee
Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd filed Critical Qingdao Jiuwei Huadun Science And Technology Research Institute Co ltd
Priority to CN202010627992.6A priority Critical patent/CN111862111B/en
Publication of CN111862111A publication Critical patent/CN111862111A/en
Application granted granted Critical
Publication of CN111862111B publication Critical patent/CN111862111B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T7/187 Segmentation involving region growing, region merging, or connected component labelling
    • G06T7/194 Segmentation involving foreground-background segmentation
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Image registration using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of three-dimensional point cloud registration, and particularly relates to a point cloud registration algorithm based on region segmentation and fusion. Two point cloud data sets captured from different viewing angles are each divided into two groups according to regional properties: one group is target point cloud data and the other is background point cloud data. The groups are registered separately with the ICP (Iterative Closest Point) algorithm, the results are fused with different weights, and the optimal point cloud rotation-translation matrix is computed, which improves both the overall registration quality and the robustness of the algorithm.

Description

Point cloud registration algorithm based on region segmentation and fusion
The technical field is as follows:
the invention belongs to the technical field of three-dimensional point cloud registration, and particularly relates to a point cloud registration algorithm based on region segmentation and fusion.
Background art:
Point cloud registration is an important research direction of computer vision and image graphics; its goal is to stitch local three-dimensional point cloud data captured from multiple viewing angles into a complete three-dimensional point cloud model. When a three-dimensional target in an actual scene is scanned, the registration result is degraded by background data such as the ground and wall surfaces, by the low quality of the collected point cloud, and by defects of the registration algorithm.
At present, the most widely applied point cloud registration algorithm is the iterative closest point (ICP) algorithm, which is computationally simple and intuitive and achieves high registration accuracy, but easily falls into a local optimum when the initial alignment is poor.
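The classical ICP loop referenced throughout can be sketched as follows. This is an illustrative minimal reconstruction in Python/NumPy, not the patent's implementation; the function names and parameters are the sketch's own choices. It alternates nearest-neighbour correspondence search with a closed-form SVD (Kabsch) rigid alignment:

```python
# Minimal point-to-point ICP sketch (illustrative only):
# nearest-neighbour correspondences + Kabsch/SVD rigid alignment.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30, tol=1e-8):
    """Iteratively align src to dst; returns accumulated R, t and mean residual."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbours (a k-d tree would be used in practice)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:  # converged: residual stopped changing
            break
        prev_err = err
    return R_total, t_total, err
```

With a good initial alignment this converges in a few iterations; with a poor one it can stall in a local optimum, which is the weakness the region-fusion scheme below addresses.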
Summary of the invention:
The invention aims to solve the following technical problem: when a three-dimensional target in an actual scene is scanned, the collected point cloud data is of low quality and contains structurally complex data formed by the ground, wall surfaces, the actual target object, and so on; under these conditions the traditional ICP algorithm produces large registration errors, and the point cloud registration effect is poor.
To solve this problem, the invention provides an improved algorithm based on classical ICP registration: each of the two point cloud data sets is divided into two groups, one containing the target point cloud data and the other the background point cloud data; the groups are registered separately with the ICP algorithm, the results are fused with different weights, and the optimal point cloud rotation-translation matrix is solved, which improves the overall registration quality and increases the robustness of the algorithm.
To achieve the above object, the invention is implemented by the following technical solution: a point cloud registration algorithm based on region segmentation and fusion, comprising the following steps:
(1) Collecting scene point cloud data: two initial point cloud data sets with different viewing angles are obtained;
(2) Region division: each initial point cloud data set is divided into a target region and a background region. Region segmentation is performed on each point cloud with a region-growing method, and the segmented regions are:
R ∈ {R_0, R_1, ..., R_{N-1}}, where N is the number of regions;
The regions R are then grouped according to thresholds th_low and th_high: regions whose point count lies in [th_low, th_high] constitute the target region, and regions whose point count lies in (0, th_low) or (th_high, +∞) constitute the background region. The thresholds th_low and th_high are determined according to the point cloud scale. Performing this division on both initial point clouds yields a pair of target regions (corresponding point clouds Cloud_ob1 and Cloud_ob2) and a pair of background regions (corresponding point clouds Cloud_bg1 and Cloud_bg2).
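The point-count gating of step (2) can be sketched as below. The function name, the representation of each region as an (M, 3) array, and the concrete thresholds are illustrative assumptions; the patent only requires th_low and th_high to be chosen from the point cloud scale:

```python
# Sketch of step (2): group region-grown segments into a "target" cloud and a
# "background" cloud by point count. The segments R_0..R_{N-1} are assumed to
# come from a prior region-growing pass; here each region is an (M_i, 3) array.
import numpy as np

def split_target_background(regions, th_low, th_high):
    """Regions with th_low <= point count <= th_high form the target cloud;
    smaller regions (noise/clutter) and larger ones (ground, walls) form the
    background cloud."""
    target, background = [], []
    for region in regions:
        n = len(region)
        (target if th_low <= n <= th_high else background).append(region)
    stack = lambda parts: np.vstack(parts) if parts else np.empty((0, 3))
    return stack(target), stack(background)
```

Applied to both initial point clouds, this yields the four clouds Cloud_ob1, Cloud_ob2, Cloud_bg1, Cloud_bg2 used in the registration step.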
(3) Region registration: the target regions and the background regions are registered separately with the traditional ICP algorithm. Specifically, the rotation-translation matrix of each region pair is solved: the target-region rotation-translation matrix Mat_object is split into a rotation matrix R_ob and a translation vector V_ob, and the background-region rotation-translation matrix TMat_background is split into a rotation matrix R_bg and a translation vector V_bg.
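The bookkeeping of step (3), splitting an ICP result into its rotation and translation parts, assumes the usual 4x4 homogeneous matrix layout; a minimal sketch (helper names are hypothetical):

```python
# Sketch of step (3)'s bookkeeping: an ICP result is a 4x4 homogeneous
# rotation-translation matrix; it is split into a 3x3 rotation R and a
# 3-vector translation V for the later weighted fusion.
import numpy as np

def split_transform(mat4):
    """Split a 4x4 homogeneous transform into (R, V)."""
    R = mat4[:3, :3]
    V = mat4[:3, 3]
    return R, V

def make_transform(R, V):
    """Inverse operation: pack (R, V) back into a 4x4 homogeneous transform."""
    mat4 = np.eye(4)
    mat4[:3, :3] = R
    mat4[:3, 3] = V
    return mat4
```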
(4) Weight setting: according to the registration errors of the different regions, different weights are assigned to the split rotation matrices and translation vectors. The weight of the translation component of the target-region rotation-translation matrix is set to W_v, so the weight of the background-region translation component is 1 - W_v; similarly, the weight of the rotation component of the target-region rotation-translation matrix is set to W_r, and the weight of the background-region rotation matrix is 1 - W_r. With W_v = W_r = W, W is computed from the following equation:
[Equation rendered as an image in the original: W as a function of ε_ob and ε_bg]
where ε_ob denotes the registration error of the target region and ε_bg denotes the registration error of the background region.
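Since the published expression for W survives only as an image, the sketch below uses an assumed inverse-error form, W = ε_bg / (ε_ob + ε_bg), which is consistent with the surrounding text only in that the region with the smaller registration error receives the larger weight; treat the exact formula as unverified:

```python
# The exact expression for W is rendered as an image in the original patent;
# this inverse-error weighting is a plausible reconstruction, NOT confirmed.
def fusion_weight(err_ob, err_bg, eps=1e-12):
    """Assumed form: W = err_bg / (err_ob + err_bg), so W_v = W_r = W.
    eps guards against division by zero when both errors vanish."""
    return err_bg / (err_ob + err_bg + eps)
```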
(5) The final translation vector V_final is obtained directly from the translation fusion formula:
V_final = V_ob * W_v + V_bg * (1 - W_v)
where V_final denotes the final translation vector between the two initial point clouds, V_ob the translation vector of the target region, and V_bg the translation vector of the background region.
(6) Solving the final rotation matrix: the rotation matrices are converted to the corresponding quaternions and fused:
R_ob → q_ob; R_bg → q_bg; q_final = q_ob * W_r + q_bg * (1 - W_r)
Then q_final is normalized and converted back to a rotation matrix: q_final → R_final.
Here R_ob denotes the rotation matrix of the target region and q_ob its corresponding quaternion; R_bg denotes the rotation matrix of the background region and q_bg its corresponding quaternion; q_final is the quaternion corresponding to the final rotation matrix of the two initial point clouds, and R_final is that final rotation matrix.
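The rotation fusion of step (6) can be sketched with explicit quaternion conversions. The matrix-to-quaternion conversion below assumes rotations well away from 180° (so the scalar part w is not near zero), and the hemisphere check before blending is a standard precaution not spelled out in the text:

```python
# Sketch of step (6): convert each region's rotation matrix to a quaternion,
# blend with the weight W_r, renormalise, and convert back to a matrix.
import numpy as np

def mat_to_quat(R):
    """Rotation matrix -> unit quaternion (w, x, y, z); assumes w not near 0."""
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])

def quat_to_mat(q):
    """Unit quaternion (w, x, y, z) -> rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def fuse_rotations(R_ob, R_bg, W_r):
    """q_final = W_r * q_ob + (1 - W_r) * q_bg, renormalised."""
    q_ob, q_bg = mat_to_quat(R_ob), mat_to_quat(R_bg)
    if np.dot(q_ob, q_bg) < 0.0:        # blend on the same hemisphere
        q_bg = -q_bg
    q_final = W_r * q_ob + (1.0 - W_r) * q_bg
    q_final /= np.linalg.norm(q_final)  # normalise after the linear blend
    return quat_to_mat(q_final)
```

For two rotations about the same axis and W_r = 0.5, this normalised linear blend lands exactly halfway between them, matching the intuition behind the weighted fusion.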
(7) The weighted translation vector V_final and rotation matrix R_final are combined into the final rotation-translation matrix TMat_final registering the two initial point clouds.
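Steps (5) and (7) then reduce to blending the translations and packing the fused components back into a homogeneous matrix; a minimal sketch (the function name is hypothetical):

```python
# Sketch of steps (5) and (7): V_final = V_ob*W_v + V_bg*(1-W_v), then pack
# V_final with the fused rotation R_final into the 4x4 matrix TMat_final.
import numpy as np

def fuse_transform(R_final, V_ob, V_bg, W_v):
    V_final = W_v * V_ob + (1.0 - W_v) * V_bg
    TMat_final = np.eye(4)
    TMat_final[:3, :3] = R_final
    TMat_final[:3, 3] = V_final
    return TMat_final
```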
Beneficial effects of the invention:
Each of the two point cloud data sets is divided into two groups, one containing the target-region data to be registered and the other the background-region data; the groups are registered separately with the ICP algorithm, the results are fused with the corresponding weights, and the optimal point cloud rotation-translation matrix is computed, which improves the registration quality and increases the robustness of the algorithm.
Drawings
FIG. 1 is a flow chart of the split-zone registration algorithm of the present invention;
FIG. 2 is a scene point cloud data acquisition diagram of embodiment 1, in which the target region comprises a cabinet, a flowerpot, etc., and the background region comprises the ground, walls, etc.;
FIG. 3 is a right side view of the scene of embodiment 1;
FIG. 4 is a left side view of the scene of embodiment 1;
FIG. 5 shows the registration result of the fused (region-segmented) ICP algorithm in embodiment 1;
FIG. 6 shows the registration result of the conventional ICP algorithm in embodiment 1.
Detailed description of the embodiments:
in order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are clearly and completely described below, and it is obvious that the described embodiments are a part of the embodiments of the present invention, but not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
A point cloud registration algorithm based on region segmentation and fusion comprises the following steps:
(1) Collecting scene point cloud data: as shown in fig. 1, two point cloud data sets with different viewing angles are obtained;
(2) Region division:
Each point cloud data set is divided into a target region and a background region. Region segmentation is performed on each point cloud with a region-growing method, and the segmented regions are:
R ∈ {R_0, R_1, ..., R_{N-1}}, where N is the number of regions;
The regions R are then grouped according to thresholds th_low and th_high: regions whose point count lies in [th_low, th_high] constitute the target region, and regions whose point count lies in (0, th_low) or (th_high, +∞) constitute the background region. The thresholds th_low and th_high are determined according to the point cloud scale. Performing this division on both initial point clouds yields a pair of target regions (corresponding point clouds Cloud_ob1 and Cloud_ob2) and a pair of background regions (corresponding point clouds Cloud_bg1 and Cloud_bg2).
(3) Region registration: the target regions and the background regions are registered separately with the traditional ICP algorithm, as follows:
1. The two target-region point clouds Cloud_ob1 and Cloud_ob2 are registered with the traditional ICP algorithm, yielding the target-region rotation-translation matrix Mat_object, which is split into a rotation matrix R_ob and a translation vector V_ob.
2. The two background-region point clouds Cloud_bg1 and Cloud_bg2 are registered with the traditional ICP algorithm, yielding the background-region rotation-translation matrix TMat_background, which is split into a rotation matrix R_bg and a translation vector V_bg.
(4) Weight setting: according to the registration errors of the different regions, different weights are assigned to the split rotation matrices and translation vectors. The weight of the translation component of the target-region rotation-translation matrix is set to W_v, so the weight of the background-region translation component is 1 - W_v; similarly, the weight of the rotation component of the target-region rotation-translation matrix is set to W_r, and the weight of the background-region rotation matrix is 1 - W_r. With W_v = W_r = W, W is computed from the following equation:
[Equation rendered as an image in the original: W as a function of ε_ob and ε_bg]
where ε_ob denotes the registration error of the target region and ε_bg denotes the registration error of the background region.
(5) The final translation vector V_final is obtained directly from the translation fusion formula:
V_final = V_ob * W_v + V_bg * (1 - W_v)
(6) Solving the final rotation matrix: the rotation matrices are converted to the corresponding quaternions and fused:
R_ob → q_ob; R_bg → q_bg; q_final = q_ob * W_r + q_bg * (1 - W_r)
Then q_final is normalized and converted back to a rotation matrix: q_final → R_final.
(7) The weighted translation vector V_final and rotation matrix R_final are combined into the final rotation-translation matrix TMat_final registering the two initial point clouds.
As can be seen from FIGS. 2-5, when the invention is applied to registration of point cloud data measured in an actual scene, the point cloud data quality is not high and the data contains structurally complex point clouds composed of the ground, wall surfaces, the actual target object, and so on; the traditional ICP algorithm suffers large errors under these conditions, whereas the algorithm proposed herein registers the two scene data sets well, with higher accuracy than the traditional ICP algorithm.
The algorithm was implemented on a PC with an Intel Xeon E3 3.31 GHz CPU, 8 GB of memory, and a 64-bit Windows 10 operating system, and applied to the registration of point cloud data from an actual complex scene. The scene data contains backgrounds such as the ground and walls and targets such as cabinets and flowerpots, and the collected point cloud contains a large amount of noise, which increases the registration difficulty.
The scene is shown in fig. 2. Using the algorithm proposed herein, scene point cloud data is collected from different viewing angles, as shown in figs. 3 and 4; region division and region registration are performed; the final translation vector and final rotation matrix are then calculated with the respective weights; finally, the weighted translation vector and rotation matrix are combined into the rotation-translation transformation matrix of the two point clouds, which transforms one point cloud onto the other to realize the registration. This demonstrates that the algorithm is feasible.
To verify the actual performance of the registration algorithm described herein, the point cloud data of figs. 3 and 4 was registered with both the present algorithm and the traditional ICP algorithm, with the following results:
[Results table, rendered as an image in the original: registration accuracy and runtime of the proposed algorithm versus the traditional ICP algorithm]
As the table shows, the algorithm herein achieves higher registration accuracy: compared with the traditional ICP algorithm, its speed is about 4% lower while its accuracy is about 48% higher. On low-quality multi-view point cloud data with many complex structures, the traditional ICP algorithm adopts a global registration strategy and is therefore easily affected by the background, noise, coverage rate, and other factors, causing large registration errors. The proposed method registers the target and background regions separately and then fuses the results with weights; it exploits the invariance of local regions and the difference in importance between target and background, and the grouping-then-fusion scheme reduces the overall error. Therefore, the algorithm based on region segmentation and fusion realizes accurate point cloud registration.

Claims (5)

1. A point cloud registration algorithm based on region segmentation and fusion is characterized by comprising the following steps:
(1) collecting scene point cloud data: obtaining two initial point cloud data sets with different viewing angles;
(2) region division: dividing each initial point cloud data set into a target region and a background region;
(3) region registration: registering the target regions and the background regions separately with the traditional ICP algorithm;
(4) weight setting: assigning different weights to the split rotation matrices and translation vectors according to the registration errors of the different regions;
(5) obtaining the final translation vector V_final directly from the translation fusion formula:
V_final = V_ob * W_v + V_bg * (1 - W_v)
wherein V_final denotes the final translation vector of the two initial point clouds, V_ob the translation vector of the target region, and V_bg the translation vector of the background region;
(6) solving the final rotation matrix: the rotation matrices are converted to the corresponding quaternions and fused:
R_ob → q_ob; R_bg → q_bg; q_final = q_ob * W_r + q_bg * (1 - W_r)
then q_final is normalized and converted back to a rotation matrix: q_final → R_final;
wherein R_ob denotes the rotation matrix of the target region and q_ob its corresponding quaternion; R_bg denotes the rotation matrix of the background region and q_bg its corresponding quaternion; q_final is the quaternion corresponding to the final rotation matrix of the two initial point clouds, and R_final is that final rotation matrix;
(7) combining the weighted translation vector V_final and rotation matrix R_final into the final rotation-translation matrix TMat_final of the two initial point clouds.
2. The point cloud registration algorithm based on region segmentation and fusion of claim 1, wherein the region division of step (2) performs region segmentation on each point cloud with a region-growing method, the segmented regions being:
R ∈ {R_0, R_1, ..., R_{N-1}}, where N is the number of regions;
the regions R are grouped according to thresholds th_low and th_high: regions whose point count lies in [th_low, th_high] constitute the target region, and regions whose point count lies in (0, th_low) or (th_high, +∞) constitute the background region; the thresholds th_low and th_high are determined according to the point cloud scale; performing this division on both initial point clouds yields a pair of target regions (corresponding point clouds Cloud_ob1 and Cloud_ob2) and a pair of background regions (corresponding point clouds Cloud_bg1 and Cloud_bg2).
3. The point cloud registration algorithm based on region segmentation and fusion of claim 1, wherein step (3) comprises: solving the corresponding rotation-translation matrix of each region pair, wherein the target-region rotation-translation matrix Mat_object is split into a rotation matrix R_ob and a translation vector V_ob, and the background-region rotation-translation matrix TMat_background is split into a rotation matrix R_bg and a translation vector V_bg.
4. The point cloud registration algorithm based on region segmentation and fusion of claim 1, wherein in step (4), according to the registration errors of the different regions, different weights are assigned to the split rotation matrices and translation vectors: the weight of the translation component of the target-region rotation-translation matrix is set to W_v and the weight of the background-region translation component to 1 - W_v; similarly, the weight of the rotation component of the target-region rotation-translation matrix is set to W_r and the weight of the background-region rotation matrix to 1 - W_r.
5. The point cloud registration algorithm based on region segmentation and fusion of claim 4, wherein W_v = W_r = W, and W is computed from the following equation:
[Equation rendered as an image in the original: W as a function of ε_ob and ε_bg]
wherein ε_ob denotes the registration error of the target region and ε_bg denotes the registration error of the background region.
CN202010627992.6A 2020-07-01 2020-07-01 Point cloud registration method based on region segmentation and fusion Active CN111862111B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010627992.6A CN111862111B (en) 2020-07-01 2020-07-01 Point cloud registration method based on region segmentation and fusion


Publications (2)

Publication Number Publication Date
CN111862111A true CN111862111A (en) 2020-10-30
CN111862111B CN111862111B (en) 2024-06-14

Family

ID=73151847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010627992.6A Active CN111862111B (en) 2020-07-01 2020-07-01 Point cloud registration method based on region segmentation and fusion

Country Status (1)

Country Link
CN (1) CN111862111B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100239144A1 (en) * 2009-02-20 2010-09-23 Gabor Fichtinger Marker Localization Using Intensity-Based Registration of Imaging Modalities
CN103860268A (en) * 2012-12-13 2014-06-18 中国科学院深圳先进技术研究院 Marker point registration method, device and surgical navigation system
CN106204718A (en) * 2016-06-28 2016-12-07 华南理工大学 A kind of simple and efficient 3 D human body method for reconstructing based on single Kinect
CN108801171A (en) * 2018-08-23 2018-11-13 南京航空航天大学 A kind of tunnel cross-section deformation analytical method and device
CN111210466A (en) * 2020-01-14 2020-05-29 华志微创医疗科技(北京)有限公司 Multi-view point cloud registration method and device and computer equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
刘骏捷; 乔文豹; 单卫波; 姚远: "Object segmentation and real-time reconstruction method based on RGB-D data", Computer Applications and Software, no. 04, 15 April 2015 *
李必卿; 蔡勇: "Application of an improved ICP algorithm in multi-view registration", Mechanical Engineer, no. 02, 10 February 2009 *
王欢; 汪同庆; 李阳: "Research on 3D point cloud registration method using Kinect depth information", Computer Engineering and Applications, no. 12, 15 June 2016 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant