CN111553975A - Real-time loop detection method of handheld three-dimensional scanning system - Google Patents

Real-time loop detection method of handheld three-dimensional scanning system

Info

Publication number
CN111553975A
CN111553975A CN202010342113.5A CN202010342113A
Authority
CN
China
Prior art keywords
frame
point cloud
loop
detection
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010342113.5A
Other languages
Chinese (zh)
Other versions
CN111553975B (en)
Inventor
刘洋汉
李荣军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Yiweisheng Medical Technology Co ltd
Original Assignee
Wuhan Yiweisheng Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Yiweisheng Medical Technology Co ltd filed Critical Wuhan Yiweisheng Medical Technology Co ltd
Priority to CN202010342113.5A priority Critical patent/CN111553975B/en
Publication of CN111553975A publication Critical patent/CN111553975A/en
Application granted granted Critical
Publication of CN111553975B publication Critical patent/CN111553975B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/005 Tree description, e.g. octree, quadtree
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a real-time loop detection method of a handheld three-dimensional scanning system: a time-sequence label is added to the key frame data, the data are fused by a Gaussian convolution method, and real-time loop detection over a larger scene range is realized by combining the polygon intersection principle; combined with the bag-of-words model method, the sensitivity of loop detection is greatly improved.

Description

Real-time loop detection method of handheld three-dimensional scanning system
Technical Field
The invention relates to a three-dimensional scanning system, in particular to a real-time loop detection method of a handheld three-dimensional scanning system.
Background
When structured light is used for three-dimensional scanning, data stitching is usually performed frame by frame by a point cloud registration algorithm, and stitching is regarded as successful when the stitching error falls below a given threshold. As scanning proceeds, the per-frame stitching errors accumulate and propagate, growing with the number of scanned frames; if they are not eliminated, the result of the whole scan is severely affected, for example the scanned model becomes distorted or broken and the scan data cannot meet the usage requirements. At present, an effective way to eliminate this error is to determine whether the camera has returned to a position it passed before, i.e., loop detection. If the camera moves back to a previously visited position, the camera motion forms a closed loop; this closed-loop information is fed to a closed-loop optimization algorithm, whose optimization makes the scanning result more accurate.
The bag-of-words model method is currently an effective loop detection method, but it is limited by the size of the scanning window (the bag-of-words algorithm must perform feature detection on every frame of data; the number of feature points depends on the amount of single-frame data, which is in turn determined by the size of the scanning window). Loop detection can therefore only be performed in a small range (it is essentially a fast similarity test between single frames) and not over a larger scene range, so the algorithm is often insensitive during loop detection, especially when the scanning window is far smaller than the object to be scanned.
Disclosure of Invention
The invention aims to provide, in view of the defects of the prior art, a real-time loop detection method of a handheld three-dimensional scanning system, so that real-time loop detection is realized over a larger scene range and the sensitivity of loop detection is greatly improved.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A real-time loop detection method of a handheld three-dimensional scanning system comprises the following steps:
S1, for each frame of point cloud P that has been successfully scanned and stitched and its corresponding camera pose R, where P = {p_1, p_2, ..., p_N}, calculating the transformation matrix O between R and the camera pose R' corresponding to the previous frame point cloud P', wherein
[equation image: definition of the transformation matrix O]
S2, extracting the three-dimensional linear (rotation) part of O and calculating the rotation amount r = arccos(0.5 × (a_11 + a_22 + a_33 - 1)); extracting the three-dimensional translation part of O and calculating the translation amount t,
[equation image: formula for the translation amount t]
S3, when the number of frames in the key frame database D is greater than the frame threshold d_min, starting loop detection;
S4, fusing the key frame point cloud P' at distance d_min from the current key frame by a Gaussian convolution method: taking a weight matrix M of radius 1
[equation image: the radius-1 weight matrix M]
and convolving the point cloud depth values with M as the convolution kernel;
S5, recalculating the coordinates of each point in the point cloud P' by using the camera imaging equation;
S6, obtaining the triangulated topological relation of the frame point cloud by applying the Delaunay triangulation algorithm to the point cloud P';
S7, adding the triangulated topological relation of the point cloud P' and the coordinates of each point into an AABB (Axis-Aligned Bounding Box) binary tree T;
S8, fitting the point cloud P' to a plane by the least squares method: letting the plane equation be z = a_0·x + a_1·y + a_2 and solving the system of equations
[equation image: least-squares normal equations for a_0, a_1, a_2]
to obtain the direction vector of the frame, V = (a_0, a_1, a_2);
S9, taking the d_min frames in D nearest to the current frame as the detection group and the remaining frames as the detected group, and calculating the average value V' of the direction vectors of the key frames in the detection group;
taking the key frame point cloud K corresponding to the median of the detection group's time sequence as the detection frame, and detecting, by a Ray-AABB collision detection algorithm, whether the ray starting at the geometric center Q of the point cloud K with direction V' intersects T.
Further, the method also comprises the following steps:
S10, if no intersection exists, there is no loop;
if an intersection exists, calculating the dot product s of the normal vectors a and b of the detection frame K and the intersected frame K', s = a·b;
if s is less than 0, the detection frame K and the intersected frame K' face in different directions, and no loop exists;
otherwise, taking the d_min/3 frames adjacent to the detection frame K and to the intersected frame K' respectively, performing coarse matching with the Super4PCS (Super 4-Points Congruent Sets) algorithm, and calculating the point cloud overlap rate λ after matching is finished.
Further, the method also comprises the following steps:
S11, if the overlap rate λ is larger than 0.5, calculating x_min, y_min, x_max, y_max of the corresponding point cloud UV coordinates, extracting the points whose UV coordinates lie within this range, and performing ICP (Iterative Closest Point) stitching again.
Further, the method also comprises the following steps:
S12, carrying out step S11 cyclically and recording the overlap rate λ and the stitching error of each stitching;
if the overlap rate λ and the stitching error tend to be stable and the stitching error meets the requirement, a loop exists; step S11 is then interrupted and the loop information is sent to the closed-loop optimization algorithm for optimization.
Further, the steps S1 to S12 are executed in a loop as the scanning proceeds.
Further, step S2 further includes:
if the rotation amount r is larger than the rotation threshold r_min or the translation amount t is larger than the translation threshold t_min, storing the frame point cloud as a key frame point cloud in the key frame database D; the key frames in D are ordered by joining time starting from 1.
The invention has the beneficial effects that: by adding a time sequence label to single-frame data and combining the polygon intersection principle, real-time loop detection in a larger scene range is realized, and the method is combined with a bag-of-words model method, so that the sensitivity of loop detection is greatly improved.
Drawings
FIG. 1 is a schematic diagram of a real-time loop detection method of a handheld three-dimensional scanning system;
FIG. 2 is a schematic diagram of a first embodiment;
fig. 3 is a second schematic diagram of the first embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Please refer to FIG. 1. S1, for each frame of point cloud P that has been successfully scanned and stitched and its corresponding camera pose R, where P = {p_1, p_2, ..., p_N} and p = (x, y, z), calculate the transformation matrix O between R and the camera pose R' corresponding to the previous frame point cloud P', where
[equation image: definition of the transformation matrix O]
S2, extract the three-dimensional linear (rotation) part of O and calculate the rotation amount r = arccos(0.5 × (a_11 + a_22 + a_33 - 1)); extract the three-dimensional translation part of O and calculate the translation amount t,
[equation image: formula for the translation amount t]
If r is greater than the rotation threshold r_min or t is greater than the translation threshold t_min, the frame point cloud is stored as a key frame point cloud in the key frame database D. The key frames in D are ordered by joining time starting from 1.
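A minimal sketch of steps S1-S2 and the key frame test follows, assuming the camera poses are 4×4 homogeneous matrices, that O is the relative transform between consecutive poses, and that the translation amount is the Euclidean norm of the translation column (the exact formulas for O and t appear only in the original equation images); the thresholds r_min and t_min below are illustrative values, not values from the patent.

```python
import numpy as np

def relative_transform(pose_prev, pose_curr):
    """Relative transform O between consecutive camera poses (4x4 homogeneous matrices).
    Assumed convention: O maps the previous pose onto the current one."""
    return np.linalg.inv(pose_prev) @ pose_curr

def rotation_amount(O):
    """r = arccos(0.5 * (a_11 + a_22 + a_33 - 1)), from the trace of the rotation block of O."""
    trace = O[0, 0] + O[1, 1] + O[2, 2]
    return np.arccos(np.clip(0.5 * (trace - 1.0), -1.0, 1.0))

def translation_amount(O):
    """Translation magnitude; assumed here to be the Euclidean norm of the translation column."""
    return np.linalg.norm(O[:3, 3])

def is_key_frame(O, r_min=np.deg2rad(10.0), t_min=20.0):
    """Key frame test of S2: the frame is kept when either motion threshold is exceeded."""
    return rotation_amount(O) > r_min or translation_amount(O) > t_min
```

A frame for which is_key_frame returns True would be appended to the key frame database D.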
S3, when the number of frames in the key frame database D is larger than the frame threshold d_min, loop detection is started.
S4, the key frame point cloud P' at distance d_min from the current key frame is fused by a Gaussian convolution method: take a weight matrix M of radius 1
[equation image: the radius-1 weight matrix M]
and convolve the point cloud depth values with M as the convolution kernel.
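A sketch of this depth smoothing step; the exact radius-1 weight matrix M is given only in the original equation image, so a standard normalized 3×3 Gaussian kernel is assumed here as a stand-in.

```python
import numpy as np
from scipy.signal import convolve2d

# Stand-in for the radius-1 weight matrix M of S4; the patent's exact values appear only in
# the original equation image, so a normalized 3x3 Gaussian kernel is assumed here.
M = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]]) / 16.0

def fuse_depth(depth):
    """Smooth the per-pixel depth values of a key frame by convolving with M."""
    return convolve2d(depth, M, mode="same", boundary="symm")
```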
S5, recalculating the coordinates of each point in the point cloud P' by the camera imaging equation.
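The camera imaging equation itself is not reproduced in the text, so the sketch below re-derives point coordinates from the smoothed depth under a plain pinhole-camera assumption; fx, fy, cx and cy are hypothetical intrinsics.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Recompute 3D coordinates from a smoothed depth image with a pinhole model (S5).
    Pixels with non-positive depth are discarded."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]            # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]
```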
S6, obtaining the triangulated topological relation of the frame point cloud by applying the Delaunay triangulation algorithm to the point cloud P'.
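A minimal sketch of the triangulation step; it operates on the 2D image (UV) coordinates of the points, which is an assumption, since the patent does not state which projection the Delaunay algorithm is applied to.

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate(points_uv):
    """Delaunay triangulation of the frame's 2D (UV) point coordinates (S6).
    Returns an (M, 3) array of vertex indices describing the triangle topology."""
    return Delaunay(np.asarray(points_uv, dtype=float)).simplices
```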
S7, adding the triangulated topological relation of the point cloud P' and the coordinates of each point into an AABB (Axis-Aligned Bounding Box) binary tree T.
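The patent does not describe how T is constructed, so the following is a generic median-split build of a binary AABB tree over one frame's triangles, shown only to illustrate the data structure named in S7.

```python
import numpy as np

class AABBNode:
    def __init__(self, lo, hi, tri_ids, left=None, right=None):
        self.lo, self.hi = lo, hi          # box corners
        self.tri_ids = tri_ids             # triangle indices stored at leaves
        self.left, self.right = left, right

def build_aabb_tree(vertices, triangles, leaf_size=8):
    """Build a binary AABB tree over the triangles (vertex-index triples) of one frame."""
    tri_pts = vertices[triangles]                   # (M, 3, 3)
    lo_all, hi_all = tri_pts.min(axis=1), tri_pts.max(axis=1)
    centroids = tri_pts.mean(axis=1)

    def build(ids):
        lo, hi = lo_all[ids].min(axis=0), hi_all[ids].max(axis=0)
        if len(ids) <= leaf_size:
            return AABBNode(lo, hi, ids)
        axis = int(np.argmax(hi - lo))              # split along the longest box axis
        order = ids[np.argsort(centroids[ids, axis])]
        mid = len(order) // 2
        return AABBNode(lo, hi, None, build(order[:mid]), build(order[mid:]))

    return build(np.arange(len(triangles)))
```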
S8, fitting the point cloud P' to a plane by the least squares method: let the plane equation be z = a_0·x + a_1·y + a_2 and solve the system of equations
[equation image: least-squares normal equations for a_0, a_1, a_2]
to obtain the direction vector of the frame, V = (a_0, a_1, a_2).
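A sketch of this plane fit; the normal-equation system is only shown as an image in the original, so the equivalent linear least-squares problem is solved directly with numpy.

```python
import numpy as np

def fit_plane(points):
    """Least-squares fit of z = a0*x + a1*y + a2 to an (N, 3) point cloud (S8).
    Returns the frame direction vector V = (a0, a1, a2)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs
```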
S9, take the d_min frames in D nearest to the current frame as the detection group and the remaining frames as the detected group, and compute the average value V' of the direction vectors of the key frames in the detection group. Take the key frame point cloud K corresponding to the median of the detection group's time sequence as the detection frame, and use a Ray-AABB collision detection algorithm to detect whether the ray starting at the geometric center Q of the point cloud K with direction V' intersects T.
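The Ray-AABB test that such a traversal relies on is the standard slab method, sketched below in a generic form (the tree traversal itself simply recurses into children whose boxes the ray hits); Q and V_avg in the example are placeholder values.

```python
import numpy as np

def ray_hits_aabb(origin, direction, lo, hi, eps=1e-12):
    """Slab method: does the ray origin + s*direction (s >= 0) hit the box [lo, hi]?"""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    inv = 1.0 / np.where(np.abs(direction) < eps, eps, direction)   # avoid division by zero
    t1 = (lo - origin) * inv
    t2 = (hi - origin) * inv
    t_near = np.minimum(t1, t2).max()
    t_far = np.maximum(t1, t2).min()
    return t_far >= max(t_near, 0.0)

# Example: ray from the frame center Q along the averaged direction V'
Q = np.array([0.0, 0.0, 0.0])
V_avg = np.array([0.0, 0.0, 1.0])
print(ray_hits_aabb(Q, V_avg, lo=np.array([-1, -1, 5]), hi=np.array([1, 1, 6])))  # True
```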
S10, if there is no intersection, there is no loop. If an intersection exists, compute the dot product s of the normal vectors a and b of the detection frame K and the intersected frame K', s = a·b. If s is less than 0, the detection frame K and the intersected frame K' face in different directions, and no loop exists. Otherwise, take the d_min/3 frames adjacent to the detection frame K and to the intersected frame K' respectively, perform coarse matching with the Super4PCS (Super 4-Points Congruent Sets) algorithm, and calculate the point cloud overlap rate λ after matching is finished.
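A sketch of the S10 orientation gate and of one simple way to estimate the overlap rate λ after coarse alignment; Super4PCS itself is assumed to come from an external implementation and is not reproduced, and the distance threshold used for λ is an illustrative parameter, not one stated in the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def same_orientation(normal_k, normal_k2):
    """Orientation gate: frames whose direction vectors point opposite ways (s = a.b < 0) cannot loop."""
    return float(np.dot(normal_k, normal_k2)) >= 0.0

def overlap_rate(cloud_a, cloud_b, dist_thresh=1.0):
    """Overlap rate after coarse matching: fraction of points in cloud_a that have a
    neighbour in cloud_b closer than dist_thresh (threshold is an illustrative assumption)."""
    d, _ = cKDTree(cloud_b).query(cloud_a)
    return float(np.mean(d < dist_thresh))
```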
S11, if the overlap rate λ is larger than 0.5, calculate x_min, y_min, x_max, y_max of the corresponding point cloud UV coordinates, extract the points whose UV coordinates lie within this range, and perform ICP (Iterative Closest Point) stitching again.
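The UV crop and a bare-bones point-to-point ICP iteration are sketched below as generic stand-ins for this re-stitching step; the SVD-based alignment is the textbook formulation and not necessarily the ICP variant used by the system.

```python
import numpy as np
from scipy.spatial import cKDTree

def crop_by_uv(points, uv, uv_min, uv_max):
    """Keep only the points whose UV coordinates lie inside [uv_min, uv_max] (S11)."""
    mask = np.all((uv >= uv_min) & (uv <= uv_max), axis=1)
    return points[mask]

def icp_point_to_point(src, dst, iters=30):
    """Minimal point-to-point ICP: returns (R, t, rms_error) aligning src onto dst."""
    R, t = np.eye(3), np.zeros(3)
    error = float("inf")
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        d, idx = tree.query(moved)
        matched = dst[idx]
        # closed-form rigid alignment of the current correspondences (Kabsch / SVD)
        mu_s, mu_d = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:                 # fix reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
        error = float(np.sqrt(np.mean(d ** 2)))
    return R, t, error
```

The returned rms_error can serve as the stitching error recorded in S12.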
S12, step S11 is carried out cyclically, and the overlap rate λ and the stitching error of each stitching are recorded. If the overlap rate λ and the stitching error tend to be stable and the stitching error meets the requirement, a loop exists; step S11 is then interrupted and the loop information is sent to the closed-loop optimization algorithm for optimization.
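A sketch of this stopping rule with an assumed window size and tolerances, since the patent only requires that λ and the stitching error tend to be stable and that the error meet the requirement.

```python
import numpy as np

def loop_confirmed(overlaps, errors, window=5, tol=1e-3, max_error=0.1):
    """Declare a loop once the last `window` overlap rates and stitching errors are stable
    and the latest error is acceptable (window/tol/max_error are illustrative values)."""
    if len(overlaps) < window or len(errors) < window:
        return False
    stable = (np.std(overlaps[-window:]) < tol) and (np.std(errors[-window:]) < tol)
    return stable and errors[-1] < max_error
```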
S13, steps S1 to S12 are executed in a loop as the scanning proceeds.
The first embodiment is as follows:
the continuous scanning is carried out by installing a handheld three-dimensional scanning system carrying the algorithm on an Intel Core i5 general computing platform, and the splicing errors are accumulated continuously, so that the model is misplaced at the same position, as shown in FIG. 2 (A). At this time, the algorithm detects that loop exists in the scanning process in about 50ms, loop information is transmitted to the closed loop optimization algorithm, the current camera posture is corrected in time, and the abnormity caused by dislocation is eliminated, such as in fig. 3 (B), so that the scanning is smoothly carried out.
The above-mentioned embodiments only express the embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (6)

1. A real-time loop detection method of a handheld three-dimensional scanning system is characterized by comprising the following steps:
S1, for each frame of point cloud P that has been successfully scanned and stitched and its corresponding camera pose R, where P = {p_1, p_2, ..., p_N}, calculating the transformation matrix O between R and the camera pose R' corresponding to the previous frame point cloud P', wherein
[equation image: definition of the transformation matrix O]
S2, extracting the three-dimensional linear (rotation) part of O and calculating the rotation amount r = arccos(0.5 × (a_11 + a_22 + a_33 - 1)); extracting the three-dimensional translation part of O and calculating the translation amount t,
[equation image: formula for the translation amount t]
S3, when the number of frames in the key frame database D is greater than the frame threshold d_min, starting loop detection;
S4, fusing the key frame point cloud P' at distance d_min from the current key frame by a Gaussian convolution method: taking a weight matrix M of radius 1
[equation image: the radius-1 weight matrix M]
and convolving the point cloud depth values with M as the convolution kernel;
S5, recalculating the coordinates of each point in the point cloud P' by using the camera imaging equation;
S6, obtaining the triangulated topological relation of the frame point cloud by applying the Delaunay triangulation algorithm to the point cloud P';
S7, adding the triangulated topological relation of the point cloud P' and the coordinates of each point into an AABB (Axis-Aligned Bounding Box) binary tree T;
S8, fitting the point cloud P' to a plane by the least squares method: letting the plane equation be z = a_0·x + a_1·y + a_2 and solving the system of equations
[equation image: least-squares normal equations for a_0, a_1, a_2]
to obtain the direction vector of the frame, V = (a_0, a_1, a_2);
S9, taking the d_min frames in D nearest to the current frame as the detection group and the remaining frames as the detected group, and calculating the average value V' of the direction vectors of the key frames in the detection group;
taking the key frame point cloud K corresponding to the median of the detection group's time sequence as the detection frame, and detecting, by a Ray-AABB collision detection algorithm, whether the ray starting at the geometric center Q of the point cloud K with direction V' intersects T.
2. The real-time loop detection method of the handheld three-dimensional scanning system according to claim 1, further comprising the steps of:
S10, if no intersection exists, there is no loop;
if an intersection exists, calculating the dot product s of the normal vectors a and b of the detection frame K and the intersected frame K', s = a·b;
if s is less than 0, the detection frame K and the intersected frame K' face in different directions, and no loop exists;
otherwise, taking the d_min/3 frames adjacent to the detection frame K and to the intersected frame K' respectively, performing coarse matching with the Super4PCS (Super 4-Points Congruent Sets) algorithm, and calculating the point cloud overlap rate λ after matching is finished.
3. The real-time loop detection method of the handheld three-dimensional scanning system according to claim 2, further comprising the steps of:
S11, if the overlap rate λ is larger than 0.5, calculating x_min, y_min, x_max, y_max of the corresponding point cloud UV coordinates, extracting the points whose UV coordinates lie within this range, and performing ICP (Iterative Closest Point) stitching again.
4. The real-time loop detection method of the handheld three-dimensional scanning system according to claim 3, further comprising the steps of:
S12, carrying out step S11 cyclically and recording the overlap rate λ and the stitching error of each stitching;
if the overlap rate λ and the stitching error tend to be stable and the stitching error meets the requirement, a loop exists; step S11 is then interrupted and the loop information is sent to the closed-loop optimization algorithm for optimization.
5. The real-time loop detection method of the handheld three-dimensional scanning system according to claim 4, wherein: the steps S1 to S12 are performed in a loop with the progress of the scanning.
6. The real-time loop detection method of the handheld three-dimensional scanning system according to claim 1, wherein step S2 further includes:
if the rotation amount r is larger than the rotation threshold r_min or the translation amount t is larger than the translation threshold t_min, storing the frame point cloud as a key frame point cloud in the key frame database D; the key frames in D are ordered by joining time starting from 1.
CN202010342113.5A 2020-04-27 2020-04-27 Real-time loop detection method of handheld three-dimensional scanning system Active CN111553975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010342113.5A CN111553975B (en) 2020-04-27 2020-04-27 Real-time loop detection method of handheld three-dimensional scanning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010342113.5A CN111553975B (en) 2020-04-27 2020-04-27 Real-time loop detection method of handheld three-dimensional scanning system

Publications (2)

Publication Number Publication Date
CN111553975A true CN111553975A (en) 2020-08-18
CN111553975B CN111553975B (en) 2022-04-22

Family

ID=72004442

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010342113.5A Active CN111553975B (en) 2020-04-27 2020-04-27 Real-time loop detection method of handheld three-dimensional scanning system

Country Status (1)

Country Link
CN (1) CN111553975B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018129715A1 (en) * 2017-01-13 2018-07-19 浙江大学 Simultaneous positioning and dense three-dimensional reconstruction method
US20190371044A1 (en) * 2018-06-04 2019-12-05 Baidu Online Network Technology (Beijing) Co., Ltd Method, apparatus, device and computer readable storage medium for reconstructing three-dimensional scene
CN109409418A (en) * 2018-09-29 2019-03-01 中山大学 A kind of winding detection method based on bag of words
CN109615698A (en) * 2018-12-03 2019-04-12 哈尔滨工业大学(深圳) Multiple no-manned plane SLAM map blending algorithm based on the detection of mutual winding
CN109974721A (en) * 2019-01-08 2019-07-05 武汉中海庭数据技术有限公司 A kind of vision winding detection method and device based on high-precision map
CN110796728A (en) * 2019-09-20 2020-02-14 南京航空航天大学 Non-cooperative spacecraft three-dimensional reconstruction method based on scanning type laser radar
CN110956664A (en) * 2019-12-17 2020-04-03 武汉易维晟医疗科技有限公司 Real-time camera position repositioning method for handheld three-dimensional scanning system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
任健铭: "Research on Real-Time Localization and Mapping Algorithms for Indoor Robots Based on 3D Laser Point Clouds", China Master's Theses Full-Text Database (Information Science and Technology) *
尤邵尉 et al.: "Application of 3D Scanning Technology in Dynamic Measurement of Rotary Kilns", Cement Engineering *
陈世浪 et al.: "A Survey of SLAM Technology Based on RGB-D Cameras", Computer Engineering and Applications *

Also Published As

Publication number Publication date
CN111553975B (en) 2022-04-22

Similar Documents

Publication Publication Date Title
JP5328979B2 (en) Object recognition method, object recognition device, autonomous mobile robot
CN110992356B (en) Target object detection method and device and computer equipment
CN104573614B (en) Apparatus and method for tracking human face
US7894636B2 (en) Apparatus and method for performing facial recognition from arbitrary viewing angles by texturing a 3D model
WO2010147137A1 (en) Pattern processing device, method therefor, and program
CN110688947B (en) Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
WO2012023593A1 (en) Position and orientation measurement apparatus, position and orientation measurement method, and storage medium
CN108381549A (en) A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN112927353B (en) Three-dimensional scene reconstruction method, storage medium and terminal based on two-dimensional target detection and model alignment
US20090232363A1 (en) Information processing apparatus, method, and program
JP5833507B2 (en) Image processing device
US20230015214A1 (en) Planar contour recognition method and apparatus, computer device, and storage medium
CN112200056A (en) Face living body detection method and device, electronic equipment and storage medium
CN110598647B (en) Head posture recognition method based on image recognition
CN116263622A (en) Gesture recognition method, gesture recognition device, electronic equipment, gesture recognition medium and gesture recognition program product
CN111553975B (en) Real-time loop detection method of handheld three-dimensional scanning system
CN111108515B (en) Picture target point correcting method, device and equipment and storage medium
CN111339973A (en) Object identification method, device, equipment and storage medium
Brucker et al. Sequential scene parsing using range and intensity information
CN115855018A (en) Improved synchronous positioning and mapping method based on point-line comprehensive characteristics
CN114549825A (en) Target detection method and device, electronic equipment and storage medium
CN112598736A (en) Map construction based visual positioning method and device
CN114219832B (en) Face tracking method and device and computer readable storage medium
CN116523984B (en) 3D point cloud positioning and registering method, device and medium
Xin et al. A Method for Unseen Object Six Degrees of Freedom Pose Estimation Based on Segment Anything Model and Hybrid Distance Optimization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant