CN108090961B - Rapid adjustment method in three-dimensional laser point cloud imaging - Google Patents

Rapid adjustment method in three-dimensional laser point cloud imaging

Info

Publication number
CN108090961B
CN108090961B CN201810021605.7A
Authority
CN
China
Prior art keywords
point
characteristic
point cloud
points
adjustment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810021605.7A
Other languages
Chinese (zh)
Other versions
CN108090961A (en)
Inventor
胡少兴
肖杨
李晓东
肖深
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201810021605.7A priority Critical patent/CN108090961B/en
Publication of CN108090961A publication Critical patent/CN108090961A/en
Application granted granted Critical
Publication of CN108090961B publication Critical patent/CN108090961B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Landscapes

  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a rapid adjustment method for three-dimensional laser point cloud imaging comprising four steps: key frame selection, key frame registration, adjustment detection, and adjustment optimization. Key frame selection picks, from all point cloud data acquired by the lidar, the point clouds used in subsequent registration; key frame registration reduces imaging error by registering each pair of adjacent key frames; adjustment detection judges whether a point cloud matching error occurred during key frame registration; and adjustment optimization corrects any matching errors that did occur. The invention provides a rapid adjustment method for real-time imaging with a mobile lidar, has good algorithmic fault tolerance, and is applicable to three-dimensional point cloud imaging in a wide variety of environments.

Description

Rapid adjustment method in three-dimensional laser point cloud imaging
Technical Field
The invention relates to a rapid adjustment method in three-dimensional laser point cloud imaging, belongs to the fields of remote sensing and surveying, computer vision, and pattern recognition, and is applicable to three-dimensional point cloud data acquired by any three-dimensional digitizer.
Background
Adjustment in three-dimensional laser point cloud imaging has long been a problem of great concern in computer vision and pattern recognition, remote sensing, and surveying and mapping: continuously accumulated data inevitably causes trajectory drift errors that must be optimized away. Existing adjustment algorithms perform a global optimization adjustment at the end, which is slow and not real-time, and therefore cannot support real-time imaging.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: during mobile lidar map-building and imaging, each new frame of point cloud acquired by the lidar is accumulated onto the map, and this continuous accumulation produces accumulated error. The method starts from the distribution characteristics of the point cloud, extracts representative feature points, and uses them to rapidly register and optimize the error while imaging, greatly reducing the accumulated error. The algorithm has good fault tolerance and is applicable to search and matching of three-dimensional point clouds acquired by any three-dimensional digitizer.
The technical scheme adopted by the invention for solving the technical problems is as follows: a rapid adjustment method in three-dimensional laser point cloud imaging comprises the following steps:
determining a fixed time interval, and selecting key frames from all acquired point clouds based on the interval to perform adjustment so as to reduce the computational complexity;
extracting boundary points and plane points from the key frame point cloud, as follows: for each point, compute the norm of the sum of the difference vectors between the point and all points within a certain distance, and divide it by the range from the laser rangefinder to the point to obtain the flatness c of the point, namely
c = \frac{1}{|S| \cdot \| P_i \|} \left\| \sum_{j \in S,\, j \neq i} (P_i - P_j) \right\|
where i is the point being evaluated, j is a neighboring point of i, S is the set composed of the same number of points on each side of i, and P_i, P_j are the coordinates of points i and j respectively. The n points with the largest c value are taken as boundary points, and the n points with the smallest c value (close to 0) as plane points.
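As an illustration, the flatness computation and feature selection above can be sketched in Python (a minimal sketch: the function names `flatness` and `select_features` are assumptions, and the neighborhood is taken along the point index as a stand-in for a scan line, with the sensor at the origin):

```python
import numpy as np

def flatness(points, i, k=5):
    """Flatness c of point i: norm of the sum of difference vectors to the
    k neighbors on each side of i, normalized by the neighborhood size |S|
    and the range ||P_i|| from the rangefinder (at the origin) to the point."""
    S = list(range(i - k, i)) + list(range(i + 1, i + k + 1))
    diff_sum = np.sum(points[i] - points[S], axis=0)
    return np.linalg.norm(diff_sum) / (len(S) * np.linalg.norm(points[i]))

def select_features(points, n=5, k=5):
    """The n points with the largest c become boundary points,
    the n with the smallest c become plane points."""
    idx = np.arange(k, len(points) - k)   # skip ends lacking a full neighborhood
    c = np.array([flatness(points, i, k) for i in idx])
    order = np.argsort(c)
    return idx[order[-n:]], idx[order[:n]]  # (boundary indices, plane indices)
```

On a straight scan line the symmetric difference vectors cancel, so c is near 0 (a plane point); at a corner they add up, so c is large (a boundary point).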
Step (3): after the feature points are determined, feature association is performed for each feature point; feature association computes the distance from the feature point to its associated feature in the previous frame.
Step (4): after feature association, each feature point has a feature distance d relative to the previous frame's data. A pose transformation T_k = [t_x, t_y, t_z, r_x, r_y, r_z] is sought that drives all feature distances toward 0; this transformation is the motion of the lidar between the two frames of data, i.e. f(P_k, T_k) = d_k → 0, where P_k is the feature-point matrix at time k, d_k is the feature-distance matrix at time k, and T_k is the pose transformation from time k−1 to k. T_k is computed by least-squares optimization; the point cloud positions are updated according to T_k, the features are re-associated and the distances d updated, and T_k is recomputed. This loop continues until d is smaller than a set threshold or the number of iterations reaches an upper limit;
Step (5): if the mean of all feature distances after optimization is smaller than the mean before optimization, the adjustment is considered complete; otherwise adjustment optimization is performed: after the point cloud of the current key frame is mapped into the global coordinate system, the position of the lidar in the created three-dimensional point cloud map is determined; all points inside a cube centered on that position are taken from the map, and the current key frame is registered against this partial cloud again according to steps (2), (3) and (4) to complete the adjustment optimization, the size of the cube being related to the measurement range of the lidar.
Wherein, the associated features and registration errors in step (3) are determined as follows:
Step (3.1): if the feature point is a plane point, its associated feature is a plane, and three points must be found in the previous frame's data to determine that plane. First, the point M nearest to the feature point in the previous key frame is found by traversal; then the point L on the same scan line as M and nearest to M is extracted, along with the point N nearest to M on a scan line adjacent to M's;
Step (3.2): if the feature point is a boundary point, its associated feature is a straight line, and two points must be found in the previous frame's data to determine that line: first the point M nearest to the feature point in the previous key frame is found by traversal, and then the point N nearest to M on a scan line adjacent to M's.
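The traversal searches of steps (3.1) and (3.2) can be sketched as follows (assumptions: the previous key frame is an (N, 3) NumPy array, a per-point scan-line index `prev_ring` is available, and the name `associate` is illustrative):

```python
import numpy as np

def associate(feature_pt, prev_pts, prev_ring, planar=True):
    """Find the associated points in the previous key frame.
    Returns (M, L, N) indices for a plane point, (M, N) for a boundary point."""
    d = np.linalg.norm(prev_pts - feature_pt, axis=1)
    m = int(np.argmin(d))                            # M: nearest point, by traversal
    dm = np.linalg.norm(prev_pts - prev_pts[m], axis=1)
    adj = np.where(np.abs(prev_ring - prev_ring[m]) == 1)[0]
    n = int(adj[np.argmin(dm[adj])])                 # N: nearest to M, adjacent scan line
    if not planar:
        return m, n                                  # boundary point: line through M, N
    same = np.where(prev_ring == prev_ring[m])[0]
    same = same[same != m]
    l = int(same[np.argmin(dm[same])])               # L: nearest to M, same scan line
    return m, l, n                                   # plane point: plane through M, L, N
```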
Compared with the prior art, the invention has the advantages that:
(1) The invention performs adjustment while imaging, greatly improving imaging efficiency in real environments.
(2) The method extracts boundary points and plane points of the point cloud as feature points; since such features occur in almost every environment, the method is general and applicable to many different real scenes.
(3) The invention uses a traversal algorithm during feature association, automatically selecting the currently optimal point pairs during computation to obtain the optimal solution. This frees the experimenter from selecting point pairs by hand and improves working efficiency.
(4) When an adjustment fails, the invention detects it in time and automatically compensates, improving the robustness of the algorithm.
(5) The method uses no GPS/IMU equipment, is not limited by the environment, and can be used in GPS-denied environments such as indoors and underground.
Drawings
FIG. 1 is a flow chart of a method implementation of the present invention;
FIG. 2 is a key frame selection diagram according to the present invention;
FIG. 3 is a diagram of a method for associating plane feature points according to the present invention;
FIG. 4 is a diagram of a boundary feature point association method of the present invention;
FIG. 5 is a point cloud plot of the three-dimensional imaging results of the present invention;
FIG. 6 is a plan projection view of the imaging result of the present invention;
Detailed Description
The invention is further described with reference to the following figures and detailed description.
As shown in fig. 1, the present invention comprises the following steps: key frame selection, key frame registration, adjustment detection, and adjustment optimization. Key frame selection picks, from all point cloud data acquired by the lidar, the point clouds used in subsequent registration; key frame registration reduces imaging error by registering each pair of adjacent key frames; adjustment detection judges whether a point cloud matching error occurred during key frame registration; and adjustment optimization corrects any matching errors that did occur. The specific steps are as follows:
A map is created from indoor point clouds acquired by a handheld 16-line lidar, and the adjustment is completed in real time during map creation through four steps: key frame selection, key frame registration, adjustment detection, and adjustment optimization.
(1) Key frame selection
A fixed time interval is determined, and key frames are selected from all acquired point clouds at this interval for adjustment, reducing the computational complexity. As shown in fig. 2, with the interval set to X frames of data, one frame out of every X is selected as a key frame, and the adjustment is performed once per key frame. Taking the (n+1)-th interval as an example: during map creation, the pose transformation obtained for key frame (n+1)X+1 is mapped into the global coordinate system by continuous accumulation; the mapped data is then registered again against the previous key frame nX+1, the mapping result is optimized, and the optimized total pose transformation is obtained, completing one adjustment. The shorter the distance S moved by the lidar between two key frames, the more features the two key frames share and the better the registration. Equation (1) shows that S depends on the number of frames X in each interval, the lidar moving speed v, and the scan frequency f, so a smaller key frame interval should be set, provided it does not affect the program's running speed or the adjustment quality. In this example X = 10.
S = \frac{vX}{f}    (1)
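A minimal sketch of equation (1) (the function name `keyframe_spacing` is an assumption, not from the patent):

```python
def keyframe_spacing(v, X, f):
    """Distance S the lidar moves between two key frames: S = v * X / f.
    v: moving speed (m/s), X: frames per key-frame interval, f: scan frequency (Hz)."""
    return v * X / f
```

For example, at walking speed v = 1 m/s with X = 10 and a 10 Hz scanner, the lidar moves 1 m between key frames, so adjacent key frames still share plenty of repeated features.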
(2) Key frame registration
After a key frame is determined, boundary points and plane points are extracted from its point cloud. For each point, compute the norm of the sum of the difference vectors between the point and all points within a certain distance, and divide it by the range from the laser rangefinder to the point to obtain the flatness c of the point, namely
c = \frac{1}{|S| \cdot \| P_i \|} \left\| \sum_{j \in S,\, j \neq i} (P_i - P_j) \right\|    (2)
The 5 points with the largest c value are taken as boundary points, and the 5 with the smallest c value (close to 0) as plane points. If a point P is a plane point, its associated feature is a plane, and three points must be found in the previous frame's data to determine it. As shown in fig. 3, M is the point nearest to P in the previous frame's data, L is the point on the same scan line as M and nearest to M, and N is the point nearest to M on a scan line adjacent to M's. If P is a boundary point, its associated feature is a straight line, and two points must be found in the previous frame's data to determine it. As shown in fig. 4, M is the point nearest to P in the previous frame's data, and N is the point nearest to M on a scan line adjacent to M's; the distance from P to line MN equals the distance from P to the plane that passes through MN and is perpendicular to plane PMN. Thus both feature types reduce to a distance to some plane. Equation (3) is the plane equation; after association, the feature distance is obtained from equation (4), where (x_0, y_0, z_0) are the coordinates of the feature point.
Ax+By+Cz+D=0 (3)
d = \frac{| A x_0 + B y_0 + C z_0 + D |}{\sqrt{A^2 + B^2 + C^2}}    (4)
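Equations (3) and (4) can be sketched directly (helper names are illustrative; the plane is built from the three associated points M, L, N of a plane feature):

```python
import numpy as np

def plane_from_points(M, L, N):
    """Coefficients A, B, C, D of the plane Ax + By + Cz + D = 0 (eq. (3))
    through three associated points, via the cross product of two edge vectors."""
    normal = np.cross(np.asarray(L) - M, np.asarray(N) - M)
    A, B, C = normal
    D = -np.dot(normal, np.asarray(M))
    return A, B, C, D

def point_to_plane_distance(p, A, B, C, D):
    """Feature distance of eq. (4): |A x0 + B y0 + C z0 + D| / sqrt(A^2+B^2+C^2)."""
    return abs(A * p[0] + B * p[1] + C * p[2] + D) / np.sqrt(A**2 + B**2 + C**2)
```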
Each feature point has a feature distance d relative to the previous frame's data. A pose transformation T_k = [t_x, t_y, t_z, r_x, r_y, r_z] is sought that drives all feature distances toward 0; this transformation is the motion of the lidar between the two frames of data, i.e. f(P_k, T_k) = d_k → 0, where P_k is the feature-point matrix at time k, d_k is the feature-distance matrix at time k, and T_k is the pose transformation from time k−1 to k. T_k is computed by least-squares optimization; the point cloud positions are updated according to T_k, the features are re-associated and the distances d updated, and T_k is recomputed. This loop continues until d is smaller than a set threshold or the number of iterations reaches an upper limit.
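A deliberately simplified sketch of this iterative step: only the translation part of T_k is estimated by least squares against point-to-plane residuals, with rotation omitted for brevity (the patent optimizes the full six-parameter pose, and the name `estimate_translation` is an assumption):

```python
import numpy as np

def estimate_translation(features, planes, iters=20, tol=1e-6):
    """Find translation t minimizing point-to-plane feature distances.
    planes: list of (n, D) pairs with unit normal n, so the residual for a
    feature point p is r = n . (p + t) + D. The Jacobian row dr/dt is just n."""
    t = np.zeros(3)
    for _ in range(iters):
        N = np.array([n for n, _ in planes])                 # stacked Jacobian
        r = np.array([n @ (p + t) + D for p, (n, D) in zip(features, planes)])
        if np.mean(np.abs(r)) < tol:                         # d below threshold: stop
            break
        dt, *_ = np.linalg.lstsq(N, -r, rcond=None)          # least-squares update
        t += dt                                              # re-associate / update d
    return t
```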
(3) Adjustment detection: the mean of all feature distances after optimization is compared with the mean before optimization according to equation (5), where n is the number of feature points.
If the average value of all the characteristic distances after the optimization is finished is smaller than that before the optimization, the adjustment is considered to be finished; otherwise, adjustment optimization is performed.
\bar{d} = \frac{1}{n} \sum_{i=1}^{n} d_i    (5)
(4) Adjustment optimization: a failed adjustment is caused by the two key frames sharing too few similar features, so the point cloud that the current key frame is registered against must be enlarged beyond the previous key frame alone.
During map creation and imaging, once the point cloud of the current key frame has been mapped into the global coordinate system, the position of the lidar in the created three-dimensional point cloud map can be determined. All points inside a cube centered on that position are taken from the map, and the current key frame is registered against this partial cloud again using method (2) to complete the adjustment optimization. The size of the cube is related to the measurement range of the lidar; in this example the 16-line lidar has a 100 m range, so points within a 100 × 100 × 30 m region are selected from the full imaged cloud.
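The cube extraction around the lidar position can be sketched as follows (the 100 × 100 × 30 m box follows the example above; `crop_cube` is an assumed name):

```python
import numpy as np

def crop_cube(cloud, center, size=(100.0, 100.0, 30.0)):
    """Return all points of the global map inside a box centered on the
    lidar position. cloud: (N, 3) array, center: (3,) position, size in meters."""
    half = np.asarray(size) / 2.0
    mask = np.all(np.abs(cloud - np.asarray(center)) <= half, axis=1)
    return cloud[mask]
```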
As described above, the invention selects key frame point clouds using the distribution characteristics of the three-dimensional point cloud, then selects representative feature points from each key frame and optimizes the matching position by iterative computation to complete the adjustment. Because the algorithm selects plane points and boundary points, it is highly general and can be applied to mapping, imaging, and map creation in many complex environments; at the same time, the registration traverses all candidates for the optimal solution, so no operator needs to select fixed point pairs by hand, improving operating efficiency.
The above description is only one embodiment of the rapid adjustment method of the present invention, and the present invention is not limited to the above-described embodiment. The description is intended to be illustrative and does not limit the scope of the claims. Many alternatives, modifications, and variations will be apparent to those skilled in the art, and all technical solutions formed by equivalent substitutions or equivalent transformations fall within the protection scope of the claims of the present invention.

Claims (1)

1. A rapid adjustment method in three-dimensional laser point cloud imaging is characterized in that: the method comprises the following steps:
determining a fixed time interval, and selecting key frames from all acquired point clouds based on the interval to perform adjustment so as to reduce the computational complexity;
extracting boundary points and plane points from the key frame point cloud: for each point, compute the norm of the sum of the difference vectors between the point and all points within a certain distance, and divide it by the range from the laser rangefinder to the point to obtain the flatness c of the point, namely
c = \frac{1}{|S| \cdot \| P_i \|} \left\| \sum_{j \in S,\, j \neq i} (P_i - P_j) \right\|
where i is the point being evaluated, j is a neighboring point of i, S is the set composed of the same number of points on each side of i, and P_i, P_j are the coordinates of points i and j respectively; the n points with the largest c value are taken as boundary points, and the n points with the smallest c value as plane points;
after the characteristic points are determined, performing characteristic association on each characteristic point, wherein the characteristic association is to calculate the distance from the characteristic point to the associated characteristic of the point in the previous frame;
step (4): after feature association, each feature point has a feature distance d relative to the previous frame's data; a pose transformation T_k = [t_x, t_y, t_z, r_x, r_y, r_z] is sought that drives all feature distances toward 0; this transformation is the motion of the lidar between the two frames of data, i.e. f(P_k, T_k) = d_k → 0, where P_k is the feature-point matrix at time k, d_k is the feature-distance matrix at time k, and T_k is the pose transformation from k−1 to k; T_k is computed by least-squares optimization, the point cloud positions are updated according to T_k, the features are re-associated and the distances d updated, and T_k is recomputed, the loop continuing until d is smaller than a set threshold or the number of iterations reaches an upper limit;
step (5): if the mean of all feature distances after optimization is smaller than the mean before optimization, the adjustment is considered complete; otherwise adjustment optimization is performed: after the point cloud of the current key frame is mapped into the global coordinate system, the position of the lidar in the created three-dimensional point cloud map is determined; all points inside a cube centered on that position are taken from the map, and the current key frame is registered against this partial cloud again according to steps (2), (3) and (4) to complete the adjustment optimization, the size of the cube being related to the measurement range of the lidar.
CN201810021605.7A 2018-01-10 2018-01-10 Rapid adjustment method in three-dimensional laser point cloud imaging Active CN108090961B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810021605.7A CN108090961B (en) 2018-01-10 2018-01-10 Rapid adjustment method in three-dimensional laser point cloud imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810021605.7A CN108090961B (en) 2018-01-10 2018-01-10 Rapid adjustment method in three-dimensional laser point cloud imaging

Publications (2)

Publication Number Publication Date
CN108090961A CN108090961A (en) 2018-05-29
CN108090961B true CN108090961B (en) 2021-04-20

Family

ID=62181947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810021605.7A Active CN108090961B (en) 2018-01-10 2018-01-10 Rapid adjustment method in three-dimensional laser point cloud imaging

Country Status (1)

Country Link
CN (1) CN108090961B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017202A (en) * 2019-05-28 2020-12-01 杭州海康威视数字技术股份有限公司 Point cloud labeling method, device and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447908A (en) * 2015-12-04 2016-03-30 山东山大华天软件有限公司 Dentition model generation method based on oral cavity scanning data and CBCT (Cone Beam Computed Tomography) data
CN107038717A (en) * 2017-04-14 2017-08-11 东南大学 A kind of method that 3D point cloud registration error is automatically analyzed based on three-dimensional grid


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
3D shape modeling using a self-developed hand-held 3D laser scanner and an efficient HT-ICP point cloud registration algorithm; Jia Chen et al.; Optics & Laser Technology; 2012-08-03 *
Research on 3D laser point cloud data registration; Zhang Bu; Wanfang Data Enterprise Knowledge Service Platform; 2016-01-29 *
Research on point cloud registration error of terrestrial 3D laser scanning; Xu Yuanqiang et al.; Journal of Geodesy and Geodynamics; 2011-04, Vol. 31, No. 2 *
A point cloud registration algorithm based on ISS feature points and improved ICP; Li Renzhong et al.; Laser & Optoelectronics Progress; 2017-12 *
Improvement based on point cloud boundary feature points; Zhang Mingming; China Master's Theses Full-text Database, Information Science and Technology; 2017-07-15 *
A 3D point cloud registration algorithm based on feature points and improved ICP; Zhang Xiaojuan et al.; Transducer and Microsystem Technologies; 2012-12, Vol. 31, No. 9 *
A point cloud registration algorithm based on feature point extraction and matching; Yan Jianfeng et al.; Bulletin of Surveying and Mapping; 2013-09 *
True three-dimensional digitization method for large ancient artifacts; Hu Shaoxing et al.; Journal of System Simulation; 2006-04, Vol. 18, No. 4 *

Also Published As

Publication number Publication date
CN108090961A (en) 2018-05-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant