CN113865506B - Automatic three-dimensional measurement method and system without mark point splicing - Google Patents


Info

Publication number
CN113865506B
CN113865506B (application CN202111057005.4A)
Authority
CN
China
Prior art keywords
coordinate system
point cloud
pose
measurement
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111057005.4A
Other languages
Chinese (zh)
Other versions
CN113865506A (en)
Inventor
李中伟 (Li Zhongwei)
刘玉宝 (Liu Yubao)
钟凯 (Zhong Kai)
袁超飞 (Yuan Chaofei)
杨柳 (Yang Liu)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN POWER3D TECHNOLOGY Ltd
Original Assignee
WUHAN POWER3D TECHNOLOGY Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN POWER3D TECHNOLOGY Ltd filed Critical WUHAN POWER3D TECHNOLOGY Ltd
Priority to CN202111057005.4A
Publication of CN113865506A
Application granted
Publication of CN113865506B

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

The invention discloses an automatic three-dimensional measurement method and system without mark point splicing. The method comprises: converting the points in the measurement point cloud of the object to be measured, acquired by a measuring device, from the measurement coordinate system to the robot base coordinate system by means of a transformation matrix X from the robot end coordinate system to the measuring device coordinate system computed through hand-eye calibration, and performing rough point cloud stitching; setting a distance threshold criterion, traversing all points in the measurement point cloud based on the distance thresholds, and automatically removing redundant background data; setting a multi-view point cloud registration optimization target based on a pose graph optimization framework and solving it to obtain an optimized global pose relation; and rigidly transforming the data under the optimized global pose relation into a unified coordinate system to obtain complete measurement data. On the basis of rough stitching, the invention automatically removes irrelevant background data, further optimizes the global pose based on a graph optimization technique, and achieves high-precision stitching of multi-view point cloud data.

Description

Automatic three-dimensional measurement method and system without mark point splicing
Technical Field
The invention belongs to the technical field of surface structured light automatic three-dimensional measurement, and particularly relates to an automatic three-dimensional measurement method and system without marking point splicing.
Background
The surface structured light three-dimensional measurement method is widely applied to the fields of model digitization, product quality control, cultural relics digitization, biological medicine, industrial production and the like by virtue of the advantages of high measurement speed, high precision, non-contact and the like.
In practical industrial measurement, point clouds usually have to be stitched by pasting mark points, which requires additional auxiliary devices and a well-controlled environment in order to obtain complete point cloud data. In environments such as thermoforming and production lines, the measured object is a high-temperature cast or forged piece to which mark points cannot be adhered; moreover, a complex background exists, and during data acquisition the background also enters the measurement space and is tightly connected with the target data, so it must be removed automatically. Meanwhile, because the measured object may be occluded by the environment, or the measurement range is limited, a single measurement can obtain three-dimensional data of only part of the surface, and the object must be measured multiple times from different viewing angles and poses to obtain a complete three-dimensional data model. In summary, for automated measurement systems for high-temperature workpieces, the following problems exist:
1) The conventional measurement mode requires waiting for the workpiece to cool before measuring it manually, giving a long measurement cycle and low efficiency.
2) The inspection result depends entirely on workers' visual acceptance checks; slight negligence easily causes misjudgment, and measurement accuracy cannot be guaranteed.
3) The measurement of the high temperature thermal state cannot be performed by sticking mark points.
4) Irrelevant data cannot be automatically removed.
Therefore, it is necessary to adopt a more practical and effective automated measurement method to overcome these defects, achieving high-temperature measurement without pasted mark points, automatic background removal, high measurement precision and fully automated operation.
Disclosure of Invention
In view of the above, the invention provides an automatic three-dimensional measurement method and system without mark point splicing, which are used for solving the problem that the measurement precision in the three-dimensional measurement of a high-temperature workpiece cannot be ensured.
The invention discloses an automatic three-dimensional measurement method without mark point splicing, which comprises the following steps:
calculating a transformation matrix X from the robot end coordinate system to the measuring device coordinate system through hand-eye calibration, the measuring device being mounted at the end of the robot; and acquiring a transformation matrix T from the robot base coordinate system to the robot end coordinate system;
converting points in the measurement point cloud of the object to be measured obtained by the measuring device from the measuring coordinate system to the robot base coordinate system according to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, and performing point cloud rough stitching;
setting a distance threshold criterion, traversing all points in a measurement point cloud of an object to be measured based on the distance threshold, and automatically removing redundant background data;
setting a multi-view point cloud registration optimization target based on a pose graph optimization framework, and solving the optimization target to obtain an optimized global pose relation;
and rigidly transforming the optimized global pose relationship to a unified coordinate system, and performing multi-view point cloud fine splicing to obtain complete measurement data.
Preferably, the formula for converting the points in the measurement point cloud of the object to be measured, obtained by the measuring device, from the measurement coordinate system to the robot base coordinate system is:

(x′_i, y′_i, z′_i, 1)^T = T·X·(x_i, y_i, z_i, 1)^T

wherein (x_i, y_i, z_i) are the coordinates, in the measuring device coordinate system, of a point in the measurement point cloud of the object to be measured, and (x′_i, y′_i, z′_i) are the coordinates of that point converted into the robot base coordinate system.
Preferably, the distance threshold criterion is:
{x′_i | dx_min ≤ x′_i ≤ dx_max}
{y′_i | dy_min ≤ y′_i ≤ dy_max}
{z′_i | dz_min ≤ z′_i ≤ dz_max}
wherein dx_min, dy_min, dz_min, dx_max, dy_max, dz_max are respectively the lower and upper distance-threshold limits of the points of the measurement point cloud along the x, y and z axes in the robot base coordinate system; all points in the measurement point cloud are traversed with the distance threshold criterion and all data outside the fields are deleted, so that the background data are removed.
Preferably, the setting of the multi-view point cloud registration optimization target based on the pose graph optimization framework specifically includes:
taking the global pose of each measurement viewpoint as a node to be optimized and the directed edge connecting two nodes as the relative pose T_{i,j} between them, and modeling the pose graph;
representing the corresponding-point distance error information between two associated point clouds by a pre-computed covariance matrix Ω_{i,j};
calculating the geometric registration error between two associated point clouds f_i and f_j:

E_{i,j} = Σ_k || T_i p_i^(k) − T_j p_j^(k) ||²

wherein T_j, T_i are respectively the global poses of the two measurement viewpoints j, i, and T_{i,j} is the relative pose;
ignoring the distance error of the corresponding three-dimensional points of the two associated point clouds f_i and f_j after the registration transformation, and transforming the geometric registration error E_{i,j}, so that the registration error between any two pieces of associated point cloud data is represented by a pose error term δξ_{i,j} and the covariance matrix Ω_{i,j}, namely:

E_{i,j} ≈ δξ_{i,j}^T Ω_{i,j} δξ_{i,j}

wherein δ denotes the small perturbation coefficient left-multiplied onto the pose error term δξ_{i,j}, G_i = [−[p_i]_× I], Ω_{i,j} = Σ_i G_i^T G_i, and [·]_× is the antisymmetric operator;
according to the geometric registration error E_{i,j}, constructing the multi-view point cloud registration optimization target and performing global pose optimization.
Preferably, the three-dimensional point p_i in the point cloud f_i and its corresponding point p_j in the point cloud f_j satisfy, after the registration transformation T_{i,j}, the distance error:

||T_{i,j} p_j − p_i|| < ε

The global poses T_j, T_i of the two measurement viewpoints j, i and the relative pose T_{i,j} satisfy:

T_i^{-1} T_j = (I + δξ^_{i,j}) T_{i,j}

wherein I is the identity matrix, δξ^_{i,j} is the pose error term, and ^ denotes the conversion from a vector to a matrix.
Preferably, the formula of the multi-view point cloud registration optimization target is:

E(T) = Σ_{(i,j)} δξ_{i,j}^T Ω_{i,j} δξ_{i,j}

wherein the sum runs over the set of associated measurement-viewpoint pairs; the optimization objective of the multi-view point cloud registration is to minimize E(T).
Preferably, the optimization target is solved by the Levenberg-Marquardt algorithm to obtain the optimized global pose relation.
In a second aspect of the present invention, an automated three-dimensional measurement system for marker-free stitching is disclosed, the system comprising:
parameter calibration module: the method comprises the steps of calculating a transformation matrix X from a robot tail end coordinate system to a measuring device coordinate system through hand-eye calibration, wherein the measuring device is arranged at the tail end of the robot; acquiring a transformation matrix T from a robot base coordinate system to a robot tail end coordinate system;
coarse registration module: the method comprises the steps of converting points in a measurement point cloud of an object to be measured, which is acquired by a measuring device, from a measuring coordinate system to a robot base coordinate system according to a transformation matrix X from the robot end coordinate system to a measuring device coordinate system and a transformation matrix T from the robot base coordinate system to the robot end coordinate system, and performing point cloud rough registration between the measurement point cloud of the object to be measured and a model point cloud;
background removal module: the method comprises the steps of setting a distance threshold criterion, traversing all points in a measurement point cloud of an object to be measured based on the distance threshold, and automatically removing redundant background data;
pose optimization module: the method comprises the steps of setting a multi-view point cloud registration optimization target based on a pose diagram optimization framework, and solving the optimization target to obtain an optimized global pose relation; and rigidly transforming the optimized global pose relationship to a unified coordinate system, and performing multi-view point cloud fine splicing to obtain complete measurement data.
In a third aspect of the present invention, an electronic device is disclosed, comprising: at least one processor, at least one memory, a communication interface, and a bus;
the processor, the memory and the communication interface complete communication with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to implement the method of any of claims 1-7.
In a fourth aspect of the present invention, a computer-readable storage medium is disclosed, the computer-readable storage medium storing computer instructions that cause the computer to implement the method of any one of claims 1 to 7.
Compared with the prior art, the invention has the following beneficial effects:
1) According to the invention, three-dimensional data under different visual angles are preliminarily unified to a global coordinate system through hand-eye calibration and coordinate system transformation, coarse splicing is realized, background irrelevant data is automatically removed through multi-field filtering based on a threshold value, global pose is further optimized based on a graph optimization technology, and high-precision splicing of multi-view point cloud data is realized.
2) The invention can realize automated measurement of high-temperature workpieces without pasting mark points or additional auxiliary devices and without being affected by a positioning device; irrelevant background data can be removed automatically, the measurement precision is high, the requirements on the environment are low, the efficiency is higher, and the method has high robustness in actual measurement.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a measurement system according to the present invention;
FIG. 2 is a flow chart of an automated three-dimensional measurement method without landmark stitching according to the present invention;
FIG. 3 is a schematic diagram of point cloud data obtained by rough stitching according to the present invention;
fig. 4 is a schematic diagram of a result of the present invention using multi-field automatic background removal based on a distance threshold for the point cloud data of fig. 3.
Detailed Description
The following description of the embodiments of the present invention will clearly and fully describe the technical aspects of the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Referring to fig. 1, in the present invention a measuring device is mounted at the end of a robot, and the moving robot end drives the measuring device to perform three-dimensional measurement of the object to be measured; the measuring device may be a three-dimensional scanner. Specifically, O_bX_bY_bZ_b is the robot base coordinate system, O_tX_tY_tZ_t is the robot end coordinate system, O_sX_sY_sZ_s is the measuring device coordinate system, and P is the object to be measured.
Referring to fig. 2, the present invention provides an automated three-dimensional measurement method without landmark splicing, which includes:
s1, parameter calibration
The parameter calibration comprises camera calibration of the three-dimensional scanner and eye-in-hand hand-eye calibration, and specifically comprises the following sub-steps:
s11, camera calibration based on binocular stereo vision
The method mainly aims at calibrating the internal and external parameters of each camera and the conversion relation between the two cameras in the binocular camera of the three-dimensional scanner.
S12, hand-eye calibration based on "eye-in-hand"
The conversion relation A from the robot base coordinate system to the robot end coordinate system and the conversion relation B from the sensor coordinate system to the target coordinate system are known; the conversion relation X between the measurement coordinate system and the robot end coordinate system needs to be determined through hand-eye calibration.
The robot end drives the measuring device to move from the initial position to the first measuring point, yielding the transformation matrix A_1 of the robot end coordinate system from the initial position to the first measuring point and the transformation matrix B_1 of the measuring device coordinate system from the initial position to the first measuring point.
The robot end then drives the measuring device to move from the first measuring point to the second measuring point, yielding the transformation matrix A_2 of the robot end coordinate system from the first measuring point to the second measuring point and the transformation matrix B_2 of the measuring device coordinate system from the first measuring point to the second measuring point.
According to the hand-eye calibration equation:

A_1·X = X·B_1,  A_2·X = X·B_2  (1)

In particular, R_X, R_{A1}, R_{B1}, R_{A2}, R_{B2} are the corresponding rotation matrices, which are 3×3 unit orthogonal matrices, and T_X, T_{A1}, T_{B1}, T_{A2}, T_{B2} are the corresponding translation vectors, which are 3×1 matrices.
Then formula (1) can be converted into:

R_{Ak}·R_X = R_X·R_{Bk},  R_{Ak}·T_X + T_{Ak} = R_X·T_{Bk} + T_X,  k = 1, 2  (2)

Solving formula (2) by the matrix direct-product (Kronecker product) method gives the transformation matrix from the robot end coordinate system to the measuring device coordinate system:

X = (A^T A)^{-1} A^T b  (3)

wherein A and b stack the vectorized rotation and translation constraints of formula (2) for the two motions, the rotation rows taking the form [I_3⊗R_{Ak} − R_{Bk}^T⊗I_3  0_{9×3}] with right-hand side 0_9; 0_{9×3} denotes the zero matrix of 9 rows and 3 columns and 0_9 the zero vector of 9 rows and 1 column. X is the hand-eye relation.
S2, three-dimensional data rough stitching under different pose visual angles of robot
After a transformation matrix X from a robot terminal coordinate system to a measuring device coordinate system is calculated through hand-eye calibration, a transformation matrix T from a robot base coordinate system to the robot terminal coordinate system is obtained;
According to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, the points in the measurement point cloud of the object to be measured acquired by the measuring device are converted from the measurement coordinate system to the robot base coordinate system; the transformation formula is:

(x′_i, y′_i, z′_i, 1)^T = T·X·(x_i, y_i, z_i, 1)^T  (4)

wherein (x_i, y_i, z_i) are the coordinates, in the measuring device coordinate system, of a point in the measurement point cloud of the object to be measured, and (x′_i, y′_i, z′_i) are the coordinates of that point converted into the robot base coordinate system. Rough point cloud registration between the measurement point cloud and the model point cloud of the object to be measured is then carried out under the robot base coordinate system, together with rough stitching of the measurement point clouds acquired at different robot poses and viewing angles.
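As a minimal sketch of formula (4), assuming the usual eye-in-hand reading in which the composed product T·X maps measuring-device coordinates into base coordinates (the function name is illustrative):

```python
import numpy as np

def to_base_frame(points, T, X):
    """Map an N x 3 measurement point cloud from the measuring-device
    coordinate system to the robot base coordinate system, eq. (4):
    [x', y', z', 1]^T = T @ X @ [x, y, z, 1]^T."""
    P = np.hstack([points, np.ones((len(points), 1))])  # homogeneous, N x 4
    return (T @ X @ P.T).T[:, :3]
```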
S3, automatic multi-field background removal based on distance threshold
Due to the influence of the background of the measuring environment, a lot of irrelevant data are collected at the same time, and as shown in fig. 3, a lot of irrelevant background is around the measured target object. To achieve efficient automated measurement, automatic background removal is required. The invention adopts a multi-field filtering technology based on a distance threshold value to realize automatic removal of background irrelevant data. The distance threshold criterion is:
{x′_i | dx_min ≤ x′_i ≤ dx_max}  (5)
{y′_i | dy_min ≤ y′_i ≤ dy_max}  (6)
{z′_i | dz_min ≤ z′_i ≤ dz_max}  (7)
wherein dx_min, dy_min, dz_min, dx_max, dy_max, dz_max are respectively the lower and upper distance-threshold limits of the points of the measurement point cloud along the x, y and z axes in the robot base coordinate system, obtained through a simulation model. All points in the measurement point cloud are traversed with the distance threshold criterion and all data outside the fields are deleted to remove the background data; fig. 4 is a schematic diagram of the result of the distance-threshold-based multi-field automatic background removal applied to the point cloud data of fig. 3.
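The multi-field filtering of criteria (5)-(7) amounts to an axis-aligned box test in the base frame; a minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def remove_background(points, lower, upper):
    """Keep only the points whose x, y, z coordinates (robot base frame)
    lie inside the threshold box [lower, upper] of criteria (5)-(7);
    everything outside the fields is treated as background and deleted."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    mask = np.all((points >= lower) & (points <= upper), axis=1)
    return points[mask]
```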
S4, multi-view point cloud fine splicing based on graph optimization framework
On the basis of the conversion relation obtained through hand-eye calibration, the global pose is further optimized under a pose graph optimization framework: the multi-view point cloud registration problem is modeled under the framework, the corresponding-point distance error information between associated point clouds is represented by a pre-computed covariance matrix, and the registration problem is then optimized rapidly.
The step S4 specifically comprises the following sub-steps:
s41, pose graph modeling
The method is based on a pose graph optimization approach: the distance information between corresponding three-dimensional point pairs is ignored, and only the inconsistency between the global poses and the relative poses is optimized. The poses obtained by the hand-eye stitching method are taken as initial values of the spatial poses of the measurement viewpoints, the global poses of the multi-view point cloud data are taken as the nodes to be optimized, and the directed edge connecting two nodes is taken as the relative pose T_{i,j} between them, modeling the pose graph. The directed connection relationships can be stored and queried in the pose graph with a corresponding adjacency matrix, and the resulting adjacency matrix is an information matrix.
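The pose graph bookkeeping described above (nodes holding global poses, directed edges holding relative poses and their information matrices, adjacency matrix for connectivity) can be sketched as a small container; the class and attribute names are illustrative:

```python
import numpy as np

class PoseGraph:
    """Minimal pose-graph container: node i stores the global pose T_i
    (initialised from the hand-eye rough stitching); a directed edge
    (i, j) stores the relative pose T_ij and its information/covariance
    matrix Omega_ij; the boolean adjacency matrix records connectivity."""

    def __init__(self, n_views):
        self.poses = [np.eye(4) for _ in range(n_views)]
        self.adjacency = np.zeros((n_views, n_views), dtype=bool)
        self.edges = {}  # (i, j) -> (T_ij, Omega_ij)

    def add_edge(self, i, j, T_ij, Omega_ij):
        self.adjacency[i, j] = True
        self.edges[(i, j)] = (T_ij, Omega_ij)

    def neighbours(self, i):
        return np.flatnonzero(self.adjacency[i])
```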
S42, multi-view point cloud fine splicing based on pose diagram optimization frame
Given two associated point cloud data f_i and f_j from different viewpoints i, j, the shape inconsistency between them can be measured as:

E_{i,j} = Σ_k || T_i p_i^(k) − T_j p_j^(k) ||²  (8)

Since a spatial rigid transformation does not affect the distance between three-dimensional points before and after the transformation, the above equation is equivalent to:

E_{i,j} = Σ_k || p_i^(k) − T_i^{-1} T_j p_j^(k) ||²  (9)

The three-dimensional point p_i in point cloud f_i and its corresponding point p_j in point cloud f_j have a small distance error after the registration transformation T_{i,j}, i.e. they satisfy:

||T_{i,j} p_j − p_i|| < ε  (10)

Then equation (9) may be approximated as:

E_{i,j} ≈ Σ_k || T_{i,j} p_j^(k) − T_i^{-1} T_j p_j^(k) ||²  (11)

The global poses T_i, T_j of the two measurement viewpoints and the relative pose T_{i,j} satisfy:

T_i^{-1} T_j = (I + δξ^_{i,j}) T_{i,j}  (12)

Thus formula (11) can be written as:

E_{i,j} ≈ Σ_k || δξ^_{i,j} p_i^(k) ||²  (13)

wherein [·]_× is the antisymmetric (skew-symmetric) operator.
A rigid transformation matrix T representing a rigid transformation in three-dimensional space belongs to the special Euclidean group, defined as:

SE(3) = { T = [ R  t ; 0^T  1 ] ∈ R^{4×4} | R ∈ SO(3), t ∈ R³ }

The Lie algebra corresponding to SE(3) is defined as:

se(3) = { ξ^ = [ φ^  ρ ; 0^T  0 ] ∈ R^{4×4} | ξ = (φ, ρ) ∈ R⁶ }

wherein ξ is a six-dimensional vector whose first three components are the rotation part, denoted φ, and whose last three components are the translation part, denoted ρ; ^ denotes the conversion from a vector to a matrix. In the invention, ξ_{i,j} denotes the six-dimensional pose error vector ξ.
Defining, for each corresponding point p_i, G_i = [−[p_i]_× I], equation (13) above can be written as:

E_{i,j} ≈ δξ_{i,j}^T ( Σ_i G_i^T G_i ) δξ_{i,j} = δξ_{i,j}^T Ω_{i,j} δξ_{i,j}  (14)

The above shows that the registration error between any two pieces of point cloud data can be represented by a pose error term δξ_{i,j} and a covariance matrix Ω_{i,j} = Σ_i G_i^T G_i, wherein δ denotes the small perturbation coefficient left-multiplied onto the pose error term δξ_{i,j}. The covariance matrix Ω_{i,j} is calculated in advance during the preceding coarse registration and does not need to be changed in the iterative optimization process.
After calculating the geometric registration error representation between two point clouds, the multi-view point cloud registration optimization target is:

E(T) = Σ_{(i,j)} δξ_{i,j}^T Ω_{i,j} δξ_{i,j}, summed over all associated viewpoint pairs (i, j)

The multi-view point cloud registration optimization target is modeled under the pose graph optimization framework through formula (14), and the target is optimized with the Levenberg-Marquardt algorithm, finally obtaining the optimized global pose relation.
S43, applying the rigid transformation matrices to obtain the measurement point coordinates
The global pose relation obtained by the optimization is applied through matrix multiplication to transform the data into the final unified coordinate system, and fine stitching is performed to obtain a complete measurement data model, completing the rigid transformation of the measurement point cloud.
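The final fine-stitching step is a plain rigid transform and concatenation; sketched as (function name illustrative):

```python
import numpy as np

def merge_clouds(clouds, poses):
    """Rigidly transform every point cloud f_i by its optimised global
    pose T_i (matrix multiplication in homogeneous coordinates) and
    concatenate the results into one complete measurement model."""
    merged = []
    for P, T in zip(clouds, poses):
        Ph = np.hstack([P, np.ones((len(P), 1))])
        merged.append((T @ Ph.T).T[:, :3])
    return np.vstack(merged)
```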
Compared with existing methods, the method of the invention needs neither pasted mark points nor additional auxiliary devices, is not affected by a positioning device, can automatically remove irrelevant background data, achieves high measurement precision with low requirements on the environment and high efficiency, and shows high robustness in actual measurement.
The invention also discloses an electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus; the processor, the memory and the communication interface complete communication with each other through the bus; the memory stores program instructions executable by the processor that the processor invokes to implement the aforementioned methods of the present invention.
The invention also discloses a computer readable storage medium storing computer instructions for causing a computer to implement all or part of the steps of the methods of the embodiments of the invention. The storage medium includes: a usb disk, a removable hard disk, a ROM, a RAM, a magnetic or optical disk, or other various media capable of storing program code.
The system embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, i.e. they may be distributed over a plurality of network elements. One of ordinary skill in the art may select some or all of the modules according to actual needs, without inventive effort, to achieve the purpose of this embodiment.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (9)

1. An automated three-dimensional measurement method without landmark splicing, the method comprising:
calculating a transformation matrix X from a robot tail end coordinate system to a measuring device coordinate system through hand-eye calibration, wherein the measuring device is arranged at the tail end of the robot; acquiring a transformation matrix T from a robot base coordinate system to a robot tail end coordinate system;
converting points in the measurement point cloud of the object to be measured obtained by the measuring device from the measuring coordinate system to the robot base coordinate system according to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, and performing point cloud rough stitching;
setting a distance threshold criterion, traversing all points in a measurement point cloud of an object to be measured based on the distance threshold, and automatically removing redundant background data;
setting a multi-view point cloud registration optimization target based on a pose map optimization framework, and solving the optimization target to obtain an optimized global pose relationship; rigidly transforming the optimized global pose relationship to a unified coordinate system, and performing multi-view point cloud fine splicing to obtain complete measurement data;
the setting of the multi-view point cloud registration optimization target based on the pose diagram optimization framework specifically comprises the following steps:
taking the global pose of each measurement viewpoint as a node to be optimized, and taking the directed edge connecting two nodes as the relative pose T_{i,j} between the nodes, modeling the pose graph;
representing the corresponding-point distance error information between two associated point clouds by a pre-computed covariance matrix Ω_{i,j};
calculating the geometric registration error between the associated point clouds f_i and f_j corresponding to the measurement viewpoints i and j:

E_{i,j} = Σ_k || T_i p_i^(k) − T_j p_j^(k) ||²

wherein T_j, T_i are respectively the global poses of the two measurement viewpoints j, i, and T_{i,j} is the relative pose; p_i is a three-dimensional point in the point cloud f_i, and p_j is the corresponding three-dimensional point in the point cloud f_j;

ignoring the distance error of the corresponding three-dimensional points of the two associated point clouds f_i and f_j after the registration transformation, and transforming the geometric registration error E_{i,j}, so that the registration error between any two pieces of associated point cloud data is represented by a pose error term δξ_{i,j} and a covariance matrix Ω_{i,j}, namely:

E_{i,j} ≈ δξ_{i,j}^T Ω_{i,j} δξ_{i,j}

wherein δ denotes the small perturbation coefficient left-multiplied onto the pose error term δξ_{i,j}; ξ_{i,j} and ξ_{j,i} uniformly denote the pose error vector; Ω_{i,j} = Σ_i G_i^T G_i with G_i = [−[p_i]_× I]; [·]_× is the antisymmetric operator and I is the identity matrix;
according to geometrical registration error E i,j Constructing multi-view point cloudRegistering the optimization target, and performing global pose optimization.
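The geometric registration error above can be illustrated with a short sketch. This is not the patent's implementation; it is a minimal numpy illustration (helper names `to_global` and `registration_error` are mine) in which the corresponding points of two associated clouds are mapped into a common global frame by the viewpoint poses and the squared distances are summed.

```python
import numpy as np

def to_global(T, pts):
    """Apply a 4x4 homogeneous pose T to an (N, 3) array of points."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])   # (N, 4)
    return (T @ homo.T).T[:, :3]

def registration_error(T_i, T_j, pts_i, pts_j):
    """Sum of squared distances between corresponding points of two
    associated point clouds after both are mapped into the global
    frame by the viewpoint poses T_i and T_j (row k of pts_j is the
    point corresponding to row k of pts_i)."""
    d = to_global(T_i, pts_i) - to_global(T_j, pts_j)
    return float(np.sum(d ** 2))
```

With perfectly consistent poses the error is zero; the pose graph optimization perturbs the global poses so as to minimize the sum of these errors over all associated viewpoint pairs.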
2. The automated three-dimensional measurement method without mark point splicing according to claim 1, wherein the formula for converting the points in the measurement point cloud of the object to be measured obtained by the measuring device from the measuring coordinate system to the robot base coordinate system is:
(x′_i, y′_i, z′_i, 1)^T = T·X·(x_i, y_i, z_i, 1)^T
wherein (x_i, y_i, z_i) are the coordinates, in the measuring device coordinate system, of a point in the measurement point cloud of the object to be measured obtained by the measuring device, and (x′_i, y′_i, z′_i) are the coordinates of that point in the robot base coordinate system.
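The conversion in claim 2 is a single homogeneous-matrix product. A minimal numpy sketch (the function name `measurement_to_base` is illustrative, and the composition p_base = T·X·p assumes the matrix conventions stated in claim 1):

```python
import numpy as np

def measurement_to_base(points, X, T):
    """Convert (N, 3) points from the measuring-device coordinate
    system to the robot base coordinate system, given the hand-eye
    matrix X (robot end -> measuring device) and the forward
    kinematics T (robot base -> robot end), both 4x4 homogeneous."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
    return (T @ X @ homo.T).T[:, :3]
```

Because T comes from the robot controller at each viewpoint while X is fixed after hand-eye calibration, every scan lands in the same base frame, which is what makes the coarse stitching marker-free.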
3. The automated three-dimensional measurement method without mark point splicing according to claim 2, wherein the distance threshold criterion is:
{x′_i | dx_min ≤ x′_i ≤ dx_max}
{y′_i | dy_min ≤ y′_i ≤ dy_max}
{z′_i | dz_min ≤ z′_i ≤ dz_max}
wherein (x′_i, y′_i, z′_i) ∈ R³, and dx_min, dy_min, dz_min, dx_max, dy_max, dz_max are the distance thresholds bounding the measurement region along each axis of the robot base coordinate system; all points in the measurement point cloud are traversed with the distance threshold criterion, and all data outside this region are deleted so as to remove the background data.
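The criterion of claim 3 amounts to an axis-aligned box filter in the robot base frame. A minimal numpy sketch (the helper name `remove_background` is illustrative):

```python
import numpy as np

def remove_background(points, lower, upper):
    """Keep only the points inside the axis-aligned box
    [dx_min, dx_max] x [dy_min, dy_max] x [dz_min, dz_max];
    everything outside the measurement region is treated as
    background. points: (N, 3); lower/upper: length-3 vectors."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    mask = np.all((points >= lower) & (points <= upper), axis=1)
    return points[mask]
```

Because the cloud has already been coarsely stitched into the base frame, one fixed set of thresholds around the fixture removes the background for every viewpoint automatically.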
4. The automated three-dimensional measurement method without mark point splicing according to claim 1, wherein a three-dimensional point p_i in the point cloud f_i and its corresponding point p_j in the point cloud f_j satisfy, under the registration transformation T_i,j, the distance error:
‖T_i,j·p_j − p_i‖ < ε
and the global poses T_j, T_i of the two measurement viewpoints j, i and the relative pose T_j,i satisfy:
T_j,i⁻¹·T_j⁻¹·T_i ≈ I + Δξ_j,i^
wherein I is the identity matrix, Δξ_j,i^ is the pose error term, and ^ represents the conversion from a vector to a matrix.
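The `^` operator and the first-order approximation I + Δξ^ used in claim 4 can be sketched as follows. This is an illustrative numpy version (function names are mine, not the patent's), showing that for a small pose error vector the matrix exponential of its `^` image is close to the identity plus that image:

```python
import numpy as np

def hat3(w):
    """^ on a 3-vector: the antisymmetric (skew) matrix [w]x."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def hat6(xi):
    """^ on a 6-vector (rho, phi) in se(3): a 4x4 matrix with the
    skew of phi in the rotation block and rho in the translation."""
    H = np.zeros((4, 4))
    H[:3, :3] = hat3(xi[3:])
    H[:3, 3] = xi[:3]
    return H

def expm_series(A, terms=20):
    """Matrix exponential via truncated power series (small inputs)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out
```

For small ‖Δξ‖, exp(Δξ^) ≈ I + Δξ^; this linearization is what lets the registration error be written as a quadratic form in the pose error term.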
5. The automated three-dimensional measurement method without mark point splicing according to claim 1, wherein the formula of the multi-view point cloud registration optimization target is:
E(T) = Σ_(i,j ∈ V) Δξ_i,j^T · Ω_i,j · Δξ_i,j
wherein V is the set of measurement viewpoints, and the optimization objective of the multi-view point cloud registration is to minimize E(T).
6. The automated three-dimensional measurement method without mark point splicing according to claim 5, wherein the optimization target is solved through the Levenberg-Marquardt algorithm to obtain the optimized global pose relationship.
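Claim 6 names the Levenberg-Marquardt algorithm as the solver. As a sketch of how that solver behaves (not the patent's implementation, and applied here to a toy line-fitting problem rather than the pose graph objective), a bare-bones damped Gauss-Newton loop:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, iters=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: solve (J^T J + lam*I) step = -J^T r,
    accept the step if it reduces the residual norm (then relax the
    damping), otherwise increase the damping and keep the old x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        A = J.T @ J + lam * np.eye(len(x))
        step = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                     # reject: move toward gradient descent
    return x
```

The damping parameter interpolates between Gauss-Newton (fast near the optimum) and gradient descent (robust far from it), which is why the method is a common choice for pose graph objectives of the form in claim 5.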
7. An automated three-dimensional measurement system without mark point splicing, using the method of any one of claims 1 to 6, the system comprising:
parameter calibration module: used for calculating the transformation matrix X from the robot end coordinate system to the measuring device coordinate system through hand-eye calibration, the measuring device being mounted at the end of the robot, and for acquiring the transformation matrix T from the robot base coordinate system to the robot end coordinate system;
coarse stitching module: used for converting the points in the measurement point cloud of the object to be measured acquired by the measuring device from the measuring coordinate system to the robot base coordinate system according to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, and performing coarse point cloud stitching;
background removal module: used for setting a distance threshold criterion, traversing all points in the measurement point cloud of the object to be measured based on the distance threshold, and automatically removing redundant background data;
pose optimization module: used for setting the multi-view point cloud registration optimization target based on the pose graph optimization framework and solving the optimization target to obtain the optimized global pose relationship;
fine stitching module: used for rigidly transforming the optimized global pose relationship to a unified coordinate system and performing multi-view fine point cloud stitching to obtain complete measurement data.
8. An electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus;
the processor, the memory and the communication interface complete communication with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to implement the method of any of claims 1-6.
9. A computer readable storage medium storing computer instructions for causing a computer to implement the method of any one of claims 1 to 6.
CN202111057005.4A 2021-09-09 2021-09-09 Automatic three-dimensional measurement method and system without mark point splicing Active CN113865506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111057005.4A CN113865506B (en) 2021-09-09 2021-09-09 Automatic three-dimensional measurement method and system without mark point splicing


Publications (2)

Publication Number Publication Date
CN113865506A CN113865506A (en) 2021-12-31
CN113865506B true CN113865506B (en) 2023-11-24

Family

ID=78995190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111057005.4A Active CN113865506B (en) 2021-09-09 2021-09-09 Automatic three-dimensional measurement method and system without mark point splicing

Country Status (1)

Country Link
CN (1) CN113865506B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115631227A (en) * 2022-10-28 2023-01-20 中山大学 High-precision measurement method and system for object surface rotation angle

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104463894A (en) * 2014-12-26 2015-03-25 山东理工大学 Overall registering method for global optimization of multi-view three-dimensional laser point clouds
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN108375382A (en) * 2018-02-22 2018-08-07 北京航空航天大学 Position and attitude measuring system precision calibration method based on monocular vision and device
CN108986149A (en) * 2018-07-16 2018-12-11 武汉惟景三维科技有限公司 A kind of point cloud Precision Registration based on adaptive threshold
CN110227876A (en) * 2019-07-15 2019-09-13 西华大学 Robot welding autonomous path planning method based on 3D point cloud data
CN110335296A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on hand and eye calibrating
CN112232319A (en) * 2020-12-14 2021-01-15 成都飞机工业(集团)有限责任公司 Scanning splicing method based on monocular vision positioning
US11037346B1 (en) * 2020-04-29 2021-06-15 Nanjing University Of Aeronautics And Astronautics Multi-station scanning global point cloud registration method based on graph optimization
CN112964196A (en) * 2021-02-05 2021-06-15 杭州思锐迪科技有限公司 Three-dimensional scanning method, system, electronic device and computer equipment
CN113205466A (en) * 2021-05-10 2021-08-03 南京航空航天大学 Incomplete point cloud completion method based on hidden space topological structure constraint

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7865316B2 (en) * 2008-03-28 2011-01-04 Lockheed Martin Corporation System, program product, and related methods for registering three-dimensional models to point data representing the pose of a part
CN109655024B (en) * 2019-01-24 2020-05-19 大连理工大学 Method for calibrating external parameters of displacement sensor by adopting space transformation technology


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Structural parameter calibration algorithm for a combined large-scale three-dimensional measurement system; Zhong Kai; Li Zhongwei; Shi Yusheng; Wang Congjun; Zhang Lichao; Huang Kui; Journal of Tianjin University (05); full text *

Also Published As

Publication number Publication date
CN113865506A (en) 2021-12-31

Similar Documents

Publication Publication Date Title
CN110426051B (en) Lane line drawing method and device and storage medium
CN109658457B (en) Method for calibrating arbitrary relative pose relationship between laser and camera
CN111089569B (en) Large box body measuring method based on monocular vision
JP6324025B2 (en) Information processing apparatus and information processing method
CN111735439B (en) Map construction method, map construction device and computer-readable storage medium
CN105021124A (en) Planar component three-dimensional position and normal vector calculation method based on depth map
CN102472612A (en) Three-dimensional object recognizing device and three-dimensional object recognizing method
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
WO2021036587A1 (en) Positioning method and system for electric power patrol scenario
CN112017248B (en) 2D laser radar camera multi-frame single-step calibration method based on dotted line characteristics
CN113865506B (en) Automatic three-dimensional measurement method and system without mark point splicing
CN112197773B (en) Visual and laser positioning mapping method based on plane information
CN114474056A (en) Grabbing operation-oriented monocular vision high-precision target positioning method
CN114677435A (en) Point cloud panoramic fusion element extraction method and system
CN114001651B (en) Large-scale slender barrel type component pose in-situ measurement method based on binocular vision measurement and priori detection data
CN109636897B (en) Octmap optimization method based on improved RGB-D SLAM
CN112508933B (en) Flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning
CN110853103B (en) Data set manufacturing method for deep learning attitude estimation
CN111340884B (en) Dual-target positioning and identity identification method for binocular heterogeneous camera and RFID
CN116358517B (en) Height map construction method, system and storage medium for robot
CN111815684A (en) Space multivariate feature registration optimization method and device based on unified residual error model
CN113920191B (en) 6D data set construction method based on depth camera
CN112734842B (en) Auxiliary positioning method and system for centering installation of large ship equipment
Zhu et al. Multi-camera System Calibration of Indoor Mobile Robot Based on SLAM
Zhang et al. Point cloud registration with 2D and 3D fusion information on mobile robot integrated vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Zhongwei

Inventor after: Liu Yubao

Inventor after: Zhong Kai

Inventor after: Yuan Chaofei

Inventor after: Yang Liu

Inventor before: Li Zhongwei

Inventor before: Zhong Kai

Inventor before: Liu Yubao

Inventor before: Yuan Chaofei

Inventor before: Yang Liu

GR01 Patent grant