CN109509215B - KinFu point cloud auxiliary registration device and method thereof


Info

Publication number
CN109509215B
Authority
CN
China
Prior art keywords
point cloud
camera
pose
frame
freedom
Prior art date
Legal status: Active (the legal status is an assumption, not a legal conclusion)
Application number
CN201811273380.0A
Other languages
Chinese (zh)
Other versions
CN109509215A (en)
Inventor
吴尧锋
许少锋
蔡汉龙
王骥
刘�文
Current Assignee
Ningbo Institute of Technology of ZJU
Original Assignee
Ningbo Institute of Technology of ZJU
Priority date
Filing date
Publication date
Application filed by Ningbo Institute of Technology of ZJU filed Critical Ningbo Institute of Technology of ZJU
Priority to CN201811273380.0A
Publication of CN109509215A
Application granted
Publication of CN109509215B
Legal status: Active

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85: Stereo camera calibration
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Abstract

The invention discloses a KinFu point cloud auxiliary registration device and a method thereof. The device comprises, from top to bottom, a Kinect camera component, a cushion block, a Kinect microphone component, a first fastening device, a handle, a second fastening device, a head joint, and an articulated arm; the cushion block is mounted between the Kinect camera component and the Kinect microphone component to prevent relative rotation between the camera and the microphone component.

Description

KinFu point cloud auxiliary registration device and method thereof
Technical Field
The invention belongs to the technical field of reverse engineering, and particularly relates to a KinFu point cloud auxiliary registration device and a KinFu point cloud auxiliary registration method.
Background
In reverse engineering, computer vision, cultural-relic digitization, and related fields, point clouds acquired from a single viewpoint are incomplete and suffer rotational and translational misalignment. To obtain a complete data model of the measured object, the local point clouds captured from multiple viewpoints must therefore be registered: a suitable common coordinate system is chosen and the per-view point sets are merged into it, forming one complete point cloud that can then be conveniently visualized and processed. Point cloud registration is now widely used in three-dimensional model reconstruction for reverse engineering, digital assembly of aircraft and automobile panels, VR (virtual reality) games, and similar applications.
Kinect V2 is a motion-sensing device introduced by Microsoft in recent years. It obtains a depth image of the measured object by time-of-flight and, combining the distance values with the camera pose, yields point cloud data. Conventional optical scanning usually attaches marker points to the measured object to obtain accurate point cloud registration. Point cloud data are now often acquired with a Kinect V2 using the Kinect Fusion method, abbreviated KinFu. Because no marker points assist registration, shake or excessive movement of the handheld Kinect V2 leads to a low registration success rate and low registration accuracy, so model reconstruction fails and modeling staff must perform extensive interactive work in reverse-engineering software, such as manual registration and hole repair; the success rate of automatic point cloud registration therefore largely determines the subsequent workload. In addition, during KinFu multi-view registration, if the environment within the camera's field of view consists mainly of parallel planes, the scene does not constrain the pose fully during modeling, the ICP algorithm inside KinFu can fail to converge, accumulated error grows severe as new scenes are continuously created, and the whole modeling process is interrupted. Chinese patent application No. 2014103549129, "Kinect and articulated arm combined measurement and three-dimensional reconstruction method", describes using an articulated arm and a Kinect together, but the two devices measure independently and their three-dimensional point cloud data are fused afterwards; because the measurement accuracy of the Kinect differs greatly from that of the articulated arm, the fused point cloud model has inconsistent accuracy.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a KinFu point cloud auxiliary registration device and a KinFu point cloud auxiliary registration method, so as to solve the problem that KinFu point cloud registration is easy to fail.
In order to achieve the above and other objects, the present invention provides a KinFu point cloud assisted registration apparatus, which includes, from top to bottom, a Kinect camera component, a spacer, a Kinect microphone component, a first fastening device, a handle, a second fastening device, a head joint, and a joint arm, wherein the spacer is installed between the Kinect camera component and the Kinect microphone component to prevent relative rotation between the camera and the microphone component.
Preferably, the handle is fixedly connected with the Kinect camera component and the cushion block through a first fastening device.
Preferably, the lower end of the handle is fixedly connected with the head joint through a second fastening device.
Preferably, the articulated arm is an articulated arm with 6 degrees of freedom, and the head joint is located at the end of the articulated arm.
Preferably, the first fastening device and the second fastening device are screws, and the cushion block is a C-shaped cushion block.
In order to achieve the above object, the present invention further provides a point cloud assisted registration method of a point cloud assisted registration apparatus of KinFu, comprising the following steps:
Step S1, predicting the camera pose S with the LM-ICP method, and calculating the variation δ_1 of the camera pose S between the previous and current frames;
Step S2, comparing the camera pose variation δ_1 with a given first threshold t_1;
Step S3, if the camera pose variation δ_1 is greater than the given first threshold t_1, calculating a new 6-degree-of-freedom pose P with the D-H model of the articulated arm of the point cloud auxiliary registration device, acquiring new point cloud data, and returning to step S1 for the next frame;
Step S4, if the camera pose variation δ_1 is less than the given first threshold t_1, calculating the deviation δ_2 between the current frame's 6-degree-of-freedom camera pose P_{k+1} and camera pose S_{k+1};
Step S5, comparing the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} with a given second threshold t_2, and, according to the comparison result, either acquiring point cloud data with the camera pose S and performing TSDF point cloud data fusion with the reconstructed point cloud model, or determining that the prediction has failed and entering the next cycle.
Preferably, the step S1 further includes:
Step S100, calculating the 6-degree-of-freedom camera pose P_k with the D-H model of the articulated arm, acquiring an initial reconstructed point cloud model using P_k, and initializing the frame number k = 1;
Step S101, calculating the three-dimensional point cloud vertex map V_k and normal map N_k from the frame-k depth data according to the Kinect intrinsic parameters;
Step S102, using the LM-ICP algorithm, registering the reconstructed point cloud model against the frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1} to obtain the frame-(k+1) pose S_{k+1};
Step S103, calculating the variation δ_1 between the frame-(k+1) camera pose S_{k+1} and the frame-k camera pose S_k.
Preferably, in step S3, the 6-degree-of-freedom camera pose P_{k+1} is calculated with the D-H model of the articulated arm, frame-(k+1) point cloud data are acquired using P_{k+1} and coarse-and-fine registered with the reconstructed point cloud model, k is set to k + 1, and the process returns to step S101.
Preferably, the step S5 further includes:
Step S500, if the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is smaller than the given second threshold t_2, using the frame-(k+1) camera pose S_{k+1} to obtain an accurate frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1}, performing TSDF point cloud data fusion, and judging whether scanning is finished: if not, incrementing k by 1, returning to step S101, and entering the next cycle; if finished, outputting the reconstructed point cloud model.
Step S501, if the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is greater than the given second threshold t_2, determining that the camera pose S prediction has failed, and returning to step S101 for the next cycle without incrementing k.
Preferably, the variation δ_1 and the deviation δ_2 are both calculated by the following formula:
δ = |O_kO_{k+1}| + α·(|O_{k+1}X_{k+1} - O_kX_k| + |O_{k+1}Y_{k+1} - O_kY_k| + |O_{k+1}Z_{k+1} - O_kZ_k|)
where α is a weight coefficient used to balance the translation and rotation changes of the coordinate system, O is the coordinate origin, and X, Y, Z mark the camera's coordinate axes.
Compared with the prior art, the KinFu point cloud auxiliary registration device and method fix the Kinect V2 camera component to an articulated arm through a mechanical connection and use LM-ICP to predict the camera pose S, which gives better robustness and convergence accuracy than the ICP algorithm in the original KinFu pipeline. By comparing the camera pose variation and the pose deviation against two separate thresholds, the method lets the Kinect V2 acquire point cloud data normally even under large displacement, view-angle jumps, and jitter.
Drawings
FIG. 1 is a schematic diagram of a point cloud assisted registration apparatus of KinFu according to the present invention;
FIG. 2 is a schematic view of the kinect V2 and the end of the articulated arm according to one embodiment of the present invention;
FIG. 3 is a flowchart illustrating the steps of a point cloud assisted registration method of a point cloud assisted registration apparatus of KinFu according to the present invention;
FIG. 4 is a flow chart of a point cloud assisted registration method in an embodiment of the invention;
FIG. 5 is a diagram illustrating coordinate system variations according to an embodiment of the present invention.
Detailed Description
Other advantages and capabilities of the present invention will be readily apparent to those skilled in the art from the present disclosure by describing the embodiments of the present invention with specific embodiments thereof in conjunction with the accompanying drawings. The invention is capable of other and different embodiments and its several details are capable of modification in various other respects, all without departing from the spirit and scope of the present invention.
Fig. 1 is a general schematic diagram of the KinFu point cloud auxiliary registration device of the present invention, and Fig. 2 shows the Kinect V2 and the end of the articulated arm in an embodiment of the invention. As shown in Figs. 1 and 2, the device comprises, from top to bottom, a Kinect camera component 1, a cushion block 2, a Kinect microphone component 3, a first fastening device 4, a handle 5, a second fastening device 6, a head joint 7, and an articulated arm 8. In this embodiment the Kinect camera component 1 is a Kinect V2 camera and the Kinect microphone component 3 is a Kinect V2 microphone. The cushion block 2 is C-shaped and is installed between the camera component 1 and the microphone component 3 to prevent relative rotation between them. The handle 5 is fixed to the camera component 1 and the cushion block 2 by the first fastening device 4, and its lower end is fixed to the head joint 7 by the second fastening device 6. The handle supporting the Kinect V2 is ergonomic, reduces fatigue during long handheld scanning sessions, and replaces the original practice of holding the Kinect V2 directly. The head joint 7 sits at the end of the articulated arm 8; preferably the arm 8 has 6 degrees of freedom, preserving the flexibility of handheld scanning with the Kinect V2 camera. In this embodiment the first fastening device 4 and the second fastening device 6 are screws, and the lower end of the handle 5 has a matching threaded hole.
For high-precision measurement, the rod lengths, probe, and other parameters of an articulated arm normally must be calibrated before each measurement. Because the measurement accuracy of the Kinect V2 itself is modest, this device needs no parameter calibration before use while still preserving the accuracy of the Kinect V2 point cloud data, reducing the experience demanded of the operator.
Fig. 3 is a flowchart illustrating a method for point cloud assisted registration of a point cloud assisted registration apparatus of KinFu according to the present invention. As shown in fig. 3, the point cloud assisted registration method of the point cloud assisted registration apparatus of the KinFu of the present invention includes the following steps:
Step S1, predicting the camera pose S with the LM-ICP (Levenberg-Marquardt Iterative Closest Point, first proposed by Fitzgibbon) method (in the present invention the pose S comprises coordinate values and unit direction vectors), and calculating the variation δ_1 of the camera pose S between the previous and current frames. In this embodiment the camera is a Kinect V2. Unlike the ICP algorithm in the original KinFu pipeline, the LM-ICP algorithm used here to predict the camera pose S (coordinate values and direction vector) has better robustness and convergence accuracy.
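For illustration only, the sketch below shows one Levenberg-Marquardt alignment step in the spirit of LM-ICP, under simplifying assumptions the patent does not make: point correspondences are already known and a point-to-point error is minimized (KinFu-style systems typically use projective data association and a point-to-plane error). The function names and the choice of `scipy.optimize.least_squares` are this sketch's own.

```python
import numpy as np
from scipy.optimize import least_squares

def rigid_transform(params, pts):
    """Apply a rotation (Rodrigues vector, params[:3]) and translation
    (params[3:]) to an N x 3 point array."""
    rvec, t = params[:3], params[3:]
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = rvec / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K
    return pts @ R.T + t

def lm_icp_step(src, dst):
    """One LM alignment: find the 6-parameter rigid motion that minimizes
    point-to-point residuals src -> dst (correspondences assumed known)."""
    def residuals(params):
        return (rigid_transform(params, src) - dst).ravel()
    sol = least_squares(residuals, np.zeros(6), method='lm')
    return sol.x
```

On exact correspondences this recovers the relative motion directly; in a full LM-ICP loop the correspondence search and this minimization would alternate.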
Specifically, step S1 further includes:
Step S100, calculating the 6-degree-of-freedom pose P_k of the Kinect V2 camera with the D-H (Denavit-Hartenberg) model of the articulated arm of the point cloud auxiliary registration device, acquiring an initial reconstructed point cloud model using P_k, and initializing the frame number k = 1;
Step S101, calculating the three-dimensional point cloud vertex map V_k and normal map N_k from the frame-k depth data according to the Kinect intrinsic parameters;
Step S102, using the LM-ICP algorithm, registering the reconstructed point cloud model against the frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1} to obtain the frame-(k+1) pose S_{k+1};
Step S103, calculating the variation δ_1 between the frame-(k+1) camera pose S_{k+1} and the frame-k camera pose S_k.
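Step S101's back-projection of a depth image into a vertex map, with normals estimated from finite differences, can be sketched as follows. The intrinsic parameters are passed in explicitly; the values used below are illustrative, not calibrated Kinect V2 intrinsics.

```python
import numpy as np

def depth_to_vertices(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres, H x W) to a vertex map
    (H x W x 3) using pinhole intrinsics fx, fy, cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])

def vertex_normals(V):
    """Approximate the normal map by crossing finite differences of the
    vertex map along the image axes."""
    du = np.gradient(V, axis=1)   # change per pixel step in u
    dv = np.gradient(V, axis=0)   # change per pixel step in v
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm > 0, norm, 1.0)
```

For a flat depth plane the normals come out parallel to the optical axis, as expected.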
In a specific embodiment of the invention, the variation δ_1 between the frame-(k+1) camera pose S_{k+1} and the frame-k camera pose S_k can be obtained by the following formula:
δ = |O_kO_{k+1}| + α·(|O_{k+1}X_{k+1} - O_kX_k| + |O_{k+1}Y_{k+1} - O_kY_k| + |O_{k+1}Z_{k+1} - O_kZ_k|)
where α is a weight coefficient used to balance the translation and rotation changes of the coordinate system; its default value is 1.
Step S2, comparing the camera pose variation δ_1 with a given first threshold t_1; from δ_1 it is judged whether the handheld Kinect V2 is moving too fast or too far.
Step S3, if the camera pose variation δ_1 is greater than the given first threshold t_1, the handheld Kinect V2 is moving too fast or too far, so the D-H (Denavit-Hartenberg) model of the articulated arm of the point cloud auxiliary registration device is used to calculate a new 6-degree-of-freedom pose P (in the present invention, the pose P comprises coordinate values and unit direction vectors), new point cloud data are acquired, and the process returns to step S1 for the next frame. Specifically, the D-H model of the articulated arm gives the 6-degree-of-freedom camera pose P_{k+1}; frame-(k+1) point cloud data are acquired using P_{k+1} and coarse-and-fine registered with the reconstructed point cloud model, k is set to k + 1, and the process returns to step S101.
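A minimal sketch of D-H forward kinematics for such an arm, assuming standard Denavit-Hartenberg link parameters (the actual link parameters are not given in the patent and would come from the arm's manufacturer or calibration; the values in the usage below are purely illustrative):

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform for one link:
    joint angle theta, link offset d, link length a, link twist alpha."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def arm_pose(joint_angles, dh_params):
    """Forward kinematics: chain the link transforms to obtain the 4 x 4
    pose P of the head joint (camera mount) of the articulated arm."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_matrix(theta, d, a, alpha)
    return T
```

With six joint-angle readings and six (d, a, alpha) triples, `arm_pose` yields the 6-degree-of-freedom pose P used in steps S100 and S3.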
Step S4, if the camera pose variation δ_1 is less than the given first threshold t_1, the handheld Kinect V2 is moving slowly and the camera pose S is considered reliable, so the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is calculated.
In an embodiment of the invention, the deviation δ_2 between the camera's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is obtained by the same formula:
δ = |O_kO_{k+1}| + α·(|O_{k+1}X_{k+1} - O_kX_k| + |O_{k+1}Y_{k+1} - O_kY_k| + |O_{k+1}Z_{k+1} - O_kZ_k|)
where α is a weight coefficient used to balance the translation and rotation changes of the coordinate system (default value 1), O is the coordinate origin, and X, Y, Z mark the camera's coordinate axes.
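As a concrete reading of this formula, the sketch below (illustrative Python, not from the patent) represents each pose as a 4 x 4 homogeneous matrix: the last column holds the origin O and the rotation columns hold the unit axis vectors, so |O_{k+1}X_{k+1} - O_kX_k| becomes the norm of the difference of the two X-axis columns.

```python
import numpy as np

def pose_change(T_prev, T_curr, alpha=1.0):
    """delta = |O_k O_{k+1}| + alpha * (sum of axis-vector differences),
    with O the camera origin (translation column of a 4 x 4 pose) and
    X, Y, Z the unit axis vectors (rotation columns)."""
    trans = np.linalg.norm(T_curr[:3, 3] - T_prev[:3, 3])
    rot = sum(np.linalg.norm(T_curr[:3, i] - T_prev[:3, i]) for i in range(3))
    return trans + alpha * rot
```

The same function serves for both δ_1 (S_k vs. S_{k+1}) and δ_2 (P_{k+1} vs. S_{k+1}); α trades off how strongly rotation counts against translation.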
Step S5, comparing the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} with a given second threshold t_2, and, according to the comparison result, either acquiring point cloud data with the camera pose S and performing TSDF point cloud data fusion with the reconstructed point cloud model, or determining that the prediction has failed and entering the next cycle.
Specifically, step S5 further includes:
Step S500, if the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is less than the given second threshold t_2, using the frame-(k+1) camera pose S_{k+1} to obtain an accurate frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1}, performing TSDF point cloud data fusion, and judging whether scanning is finished: if not, incrementing k by 1, returning to step S101, and entering the next cycle; if finished, outputting the reconstructed point cloud model.
Step S501, if the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is greater than the given second threshold t_2, the camera pose S prediction has failed (typically caused by handheld shake); k is not incremented, and the process returns to step S101 for the next cycle.
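The TSDF fusion in step S500 is, at its core, a truncated, weighted running average per voxel. A minimal sketch of that update (illustrative only; real KinFu integrates along camera rays into a 3-D voxel grid, whereas here `sdf_obs` is simply an array of this frame's signed distances for the voxels in view):

```python
import numpy as np

def tsdf_update(tsdf, weights, sdf_obs, trunc=0.05):
    """Fuse one frame's signed-distance observations into the running TSDF.
    Distances are truncated to [-1, 1] in units of `trunc`, then averaged
    with the accumulated per-voxel weights."""
    d = np.clip(sdf_obs / trunc, -1.0, 1.0)   # truncate the observation
    new_w = weights + 1.0                      # simple unit frame weight
    tsdf = (tsdf * weights + d) / new_w        # weighted running average
    return tsdf, new_w
```

Frames accepted in step S500 are folded in this way; frames rejected in step S501 simply never reach the update, which is how the dual-threshold scheme keeps shaky data out of the model.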
It can be seen that the present invention sets two thresholds, a first threshold t_1 and a second threshold t_2. When the handheld camera moves too fast or over too large a distance, δ_1 exceeds t_1; when handheld shake is obvious, δ_2 exceeds t_2 and the currently predicted point cloud data are judged invalid. Dual-threshold processing with t_1 and t_2 lets the Kinect V2 acquire point cloud data normally under large displacement, view jumps, and jitter, unlike the original KinFu algorithm's requirement of slowly scanning around the object.
Fig. 4 is a flowchart of a point cloud assisted registration method according to an embodiment of the present invention, and fig. 5 is a schematic diagram of coordinate system variation according to an embodiment of the present invention. As shown in fig. 4 and 5, in the embodiment of the present invention, the point cloud assisted registration process is as follows:
1. Calculate the 6-degree-of-freedom pose P_k of the Kinect V2 camera with the D-H model of the articulated arm, acquire an initial reconstructed point cloud model using P_k, and initialize the frame number k = 1.
2. Calculate the three-dimensional point cloud vertex map V_k and normal map N_k from the frame-k depth data according to the Kinect intrinsic parameters.
3. Using the LM-ICP algorithm, register the reconstructed point cloud model against the frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1} to obtain the frame-(k+1) pose S_{k+1}, and calculate the variation δ_1 between S_{k+1} and the frame-k camera pose S_k.
4. Compare the camera pose variation δ_1 with the given first threshold t_1.
5. If δ_1 is greater than t_1, calculate the 6-degree-of-freedom camera pose P_{k+1} with the D-H model of the articulated arm, acquire frame-(k+1) point cloud data using P_{k+1}, coarse-and-fine register them with the reconstructed point cloud model, set k = k + 1, and return to 2.
6. If δ_1 is less than t_1, calculate the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1}.
7. Compare the deviation δ_2 with the given second threshold t_2.
8. If δ_2 is less than t_2, use the frame-(k+1) camera pose S_{k+1} to obtain an accurate frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1}, perform TSDF point cloud data fusion, and judge whether scanning is finished: if not, increment k by 1, return to 2, and enter the next cycle; if so, output the reconstructed point cloud model.
9. If δ_2 is greater than t_2, the camera pose S prediction has failed (typically caused by handheld shake); without incrementing k, return to 2 and enter the next cycle.
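The branching in steps 4 through 9 reduces to a small decision function. The sketch below (the function and label names are this sketch's own, not the patent's) returns which pose source drives the current frame:

```python
def select_pose(delta1, delta2, t1, t2):
    """Decide how to handle the current frame:
    'arm'    - motion too large (delta1 > t1): fall back to the articulated
               arm's D-H pose P and re-register coarse-and-fine (step 5);
    'camera' - LM-ICP pose S agrees with the arm (delta2 <= t2): fuse the
               frame with S via TSDF (step 8);
    'reject' - visible hand shake (delta2 > t2): drop the frame and keep
               the frame counter k unchanged (step 9)."""
    if delta1 > t1:
        return 'arm'
    return 'camera' if delta2 <= t2 else 'reject'
```

Only the 'camera' branch advances the reconstruction, which matches the flow's rule that k is incremented only after a successful fusion (or after the arm-pose fallback re-registers the frame).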
In summary, the KinFu point cloud auxiliary registration device and method fix the Kinect V2 camera component to an articulated arm through a mechanical connection and use LM-ICP to predict the camera pose S, achieving better robustness and convergence accuracy than the ICP algorithm in the original KinFu pipeline.
The foregoing embodiments are merely illustrative of the principles and utilities of the present invention and are not intended to limit the invention. Modifications and variations can be made to the above-described embodiments by those skilled in the art without departing from the spirit and scope of the present invention. Therefore, the scope of the invention should be determined from the following claims.

Claims (5)

1. A point cloud auxiliary registration method of a KinFu point cloud auxiliary registration device, wherein the point cloud auxiliary registration device comprises a Kinect camera component, a cushion block, a Kinect microphone component, a first fastening device, a handle, a second fastening device, a head joint and a joint arm from top to bottom, the cushion block is installed between the Kinect camera component and the Kinect microphone component to prevent relative rotation of the camera and the microphone component, and the point cloud auxiliary registration method comprises the following steps:
step S1, predicting the camera pose S with the LM-ICP method, and calculating the variation δ_1 of the camera pose S between the previous and current frames;
step S2, comparing the camera pose variation δ_1 with a given first threshold t_1;
step S3, if the camera pose variation δ_1 is greater than the given first threshold t_1, calculating a new 6-degree-of-freedom pose P with the D-H model of the articulated arm of the point cloud auxiliary registration device, acquiring new point cloud data, and returning to step S1 for the next frame;
step S4, if the camera pose variation δ_1 is less than the given first threshold t_1, calculating the deviation δ_2 between the current frame's 6-degree-of-freedom camera pose P_{k+1} and camera pose S_{k+1};
step S5, comparing the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} with a given second threshold t_2, and, according to the comparison result, either acquiring point cloud data with the camera pose S and performing TSDF point cloud data fusion with the reconstructed point cloud model, or determining that the prediction has failed and entering the next cycle.
2. The point cloud assisted registration method of the KinFu point cloud assisted registration apparatus as claimed in claim 1, wherein the step S1 further comprises:
step S100, calculating the 6-degree-of-freedom camera pose P_k with the D-H model of the articulated arm, acquiring an initial reconstructed point cloud model using P_k, and initializing the frame number k = 1;
step S101, calculating the three-dimensional point cloud vertex map V_k and normal map N_k from the frame-k depth data according to the Kinect intrinsic parameters;
step S102, using the LM-ICP algorithm, registering the reconstructed point cloud model against the frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1} to obtain the frame-(k+1) pose S_{k+1};
step S103, calculating the variation δ_1 between the frame-(k+1) camera pose S_{k+1} and the frame-k camera pose S_k.
3. The point cloud assisted registration method of the KinFu point cloud assisted registration apparatus as claimed in claim 2, wherein: in step S3, the 6-degree-of-freedom camera pose P_{k+1} is calculated with the D-H model of the articulated arm, frame-(k+1) point cloud data are acquired using P_{k+1} and coarse-and-fine registered with the reconstructed point cloud model, k is set to k + 1, and the process returns to step S101.
4. The point cloud assisted registration method of the KinFu point cloud assisted registration apparatus as claimed in claim 3, wherein the step S5 further comprises:
step S500, if the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is smaller than the given second threshold t_2, using the frame-(k+1) camera pose to obtain an accurate frame-(k+1) point cloud vertex map V_{k+1} and normal map N_{k+1}, performing TSDF point cloud data fusion, and judging whether scanning is finished: if not, incrementing k by 1, returning to step S101, and entering the next cycle; if finished, outputting the reconstructed point cloud model;
step S501, if the deviation δ_2 between the current frame's 6-degree-of-freedom pose P_{k+1} and camera pose S_{k+1} is greater than the given second threshold t_2, determining that the camera pose S prediction has failed, and returning to step S101 for the next cycle without incrementing k.
5. The point cloud assisted registration method of the KinFu point cloud assisted registration apparatus of claim 4, wherein the variation δ_1 and the deviation δ_2 are both calculated by the following formula:
δ = |O_kO_{k+1}| + α·(|O_{k+1}X_{k+1} - O_kX_k| + |O_{k+1}Y_{k+1} - O_kY_k| + |O_{k+1}Z_{k+1} - O_kZ_k|)
where α is a weight coefficient used to balance the translation and rotation changes of the coordinate system, O is the coordinate origin, and X, Y, Z mark the camera's coordinate axes.
CN201811273380.0A 2018-10-30 2018-10-30 KinFu point cloud auxiliary registration device and method thereof Active CN109509215B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811273380.0A CN109509215B (en) 2018-10-30 2018-10-30 KinFu point cloud auxiliary registration device and method thereof

Publications (2)

Publication Number Publication Date
CN109509215A CN109509215A (en) 2019-03-22
CN109509215B (en) 2022-04-01

Family

ID=65747055

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811273380.0A Active CN109509215B (en) 2018-10-30 2018-10-30 KinFu point cloud auxiliary registration device and method thereof

Country Status (1)

Country Link
CN (1) CN109509215B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111216124B (en) * 2019-12-02 2020-11-06 广东技术师范大学 Robot vision guiding method and device based on integration of global vision and local vision

Citations (6)

Publication number Priority date Publication date Assignee Title
CN101927391A (en) * 2010-08-27 2010-12-29 大连海事大学 Method for performing automatic surfacing repair on damaged metal part
CN103279987A (en) * 2013-06-18 2013-09-04 厦门理工学院 Object fast three-dimensional modeling method based on Kinect
CN103500013A (en) * 2013-10-18 2014-01-08 武汉大学 Real-time three-dimensional mapping system and method based on Kinect and streaming media technology
CN104123751A (en) * 2014-07-24 2014-10-29 福州大学 Combined type measurement and three-dimensional reconstruction method combing Kinect and articulated arm
CN107578400A (en) * 2017-07-26 2018-01-12 西南交通大学 A kind of contact net device parameter detection method of BIM and three-dimensional point cloud fusion
CN108389260A (en) * 2018-03-19 2018-08-10 中国计量大学 A kind of three-dimensional rebuilding method based on Kinect sensor

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9235928B2 (en) * 2012-01-24 2016-01-12 University Of Southern California 3D body modeling, from a single or multiple 3D cameras, in the presence of motion
US9514522B2 (en) * 2012-08-24 2016-12-06 Microsoft Technology Licensing, Llc Depth data processing and compression

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Robot Assisted 3D Point Cloud Object Registration; Bojan Jerbić et al.; 25th DAAAM International Symposium on Intelligent Manufacturing and Automation; 2015-12-31; 847-852 *
A fine reconstruction method of true three-dimensional models for statue-type cultural relics; Xia Guofang et al.; Dunhuang Research; 2018-06-15 (No. 3); 131-140 *

Also Published As

Publication number Publication date
CN109509215A (en) 2019-03-22

Similar Documents

Publication Publication Date Title
CN107255476B (en) Indoor positioning method and device based on inertial data and visual features
US10643347B2 (en) Device for measuring position and orientation of imaging apparatus and method therefor
JP5624394B2 (en) Position / orientation measurement apparatus, measurement processing method thereof, and program
EP1893942B9 (en) Apparatus and method for relocating an articulating-arm coordinate measuring machine
US7542872B2 (en) Form measuring instrument, form measuring method and form measuring program
JP6324025B2 (en) Information processing apparatus and information processing method
CN109544630B (en) Pose information determination method and device and visual point cloud construction method and device
WO2020238346A1 (en) Method for optimizing orientation of drill bit in robot drilling
CN109475386B (en) Internal device tracking system and method of operating the same
EP2499616B1 (en) Three-dimensional measurement method
CN101116101A (en) Position posture measuring method and device
EP3727158B1 (en) Combining image based and inertial probe tracking
CN107941212B (en) Vision and inertia combined positioning method
CN114993608B (en) Wind tunnel model three-dimensional attitude angle measuring method
CN112258583B (en) Distortion calibration method for close-range image based on equal distortion partition
CN109509215B (en) KinFu point cloud auxiliary registration device and method thereof
CN103322984A (en) Distance measuring and speed measuring methods and devices based on video images
CN109764805A (en) Mechanical arm positioning device and method based on laser scanning
CN113763479B (en) Calibration method of refraction and reflection panoramic camera and IMU sensor
JP2010256276A (en) Three-dimensional shape measuring apparatus and measuring method
JP2010145231A (en) Apparatus and method for measuring displacement of object
JP2000205821A (en) Instrument and method for three-dimensional shape measurement
JP2004108836A (en) Azimuth angle computing method of imaging apparatus and system, attitude sensor of imaging apparatus, tilt sensor of imaging device and computer program and three-dimentional model configuration device
CN113935904A (en) Laser odometer error correction method, system, storage medium and computing equipment
CN109900301A (en) Binocular stereo orientation angle compensation method in a dynamic environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant