CN113865506A - Automatic three-dimensional measurement method and system for non-mark point splicing - Google Patents

Automatic three-dimensional measurement method and system for non-mark point splicing

Info

Publication number
CN113865506A
CN113865506A (application CN202111057005.4A)
Authority
CN
China
Prior art keywords
coordinate system
point cloud
pose
measurement
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111057005.4A
Other languages
Chinese (zh)
Other versions
CN113865506B (en)
Inventor
李中伟
钟凯
刘玉宝
袁超飞
杨柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WUHAN POWER3D TECHNOLOGY Ltd
Original Assignee
WUHAN POWER3D TECHNOLOGY Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WUHAN POWER3D TECHNOLOGY Ltd filed Critical WUHAN POWER3D TECHNOLOGY Ltd
Priority to CN202111057005.4A priority Critical patent/CN113865506B/en
Publication of CN113865506A publication Critical patent/CN113865506A/en
Application granted granted Critical
Publication of CN113865506B publication Critical patent/CN113865506B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

The invention discloses an automated three-dimensional measurement method and system with mark-point-free splicing, wherein the method comprises the following steps: calculating a transformation matrix X from the robot end coordinate system to the measuring device coordinate system through hand-eye calibration, converting the points in the measurement point cloud of the object to be measured acquired by the measuring device from the measurement coordinate system to the robot base coordinate system, and performing coarse point cloud splicing; setting a distance threshold criterion, traversing all points in the measurement point cloud of the object to be measured based on the distance thresholds, and automatically removing redundant background data; setting a multi-view point cloud registration optimization objective based on a pose graph optimization framework, and solving the optimization objective to obtain an optimized global pose relationship; and rigidly transforming the point clouds under the optimized global pose relationship to a unified coordinate system to obtain complete measurement data. On the basis of the coarse splicing, the method automatically removes irrelevant background data, further optimizes the global poses based on graph optimization technology, and realizes high-precision splicing of multi-view point cloud data.

Description

Automatic three-dimensional measurement method and system for non-mark point splicing
Technical Field
The invention belongs to the technical field of automated three-dimensional measurement with surface structured light, and particularly relates to an automated three-dimensional measurement method and system with mark-point-free splicing.
Background
The surface structured light three-dimensional measurement method has the advantages of high measurement speed, high precision, non-contact and the like, and is widely applied to the fields of model digitization, product quality control, cultural relic digitization, biological medicine, industrial production and the like.
For actual industrial measurement, in most cases marker points must be pasted on the object for splicing in order to obtain complete point cloud data; additional auxiliary devices are required, and the environmental requirements are high. In environments such as thermoforming and production lines, the measured object is a high-temperature casting or forging, so marker points cannot be pasted; at the same time a complex background exists, which also enters the measurement space during data acquisition and is closely connected with the target data, so it needs to be removed automatically. Meanwhile, due to occlusion between the measured object and the environment, or limitations of the measurement range, a single measurement can only obtain three-dimensional data of part of the surface, so the measured object needs to be measured multiple times from different viewing poses to obtain a complete three-dimensional data model. In summary, an automatic measurement system for high-temperature workpieces faces the following problems:
1) In the conventional measurement mode, the workpiece must be measured manually after it has cooled; the measurement cycle is long and the efficiency is low.
2) The detection result is entirely determined by the worker's acceptance judgment; negligence easily causes misjudgment, and the measurement accuracy cannot be guaranteed.
3) High-temperature thermal-state measurement cannot be performed by pasting marker points.
4) Irrelevant data cannot be removed automatically.
Therefore, a more practical and effective automatic measurement method is needed to overcome the above disadvantages, so as to achieve mark-point-free measurement at high temperature, automatic background removal, high measurement accuracy, automated measurement, and the like.
Disclosure of Invention
In view of this, the invention provides an automated three-dimensional measurement method and system with mark-point-free splicing, to solve the problem that measurement accuracy cannot be guaranteed in the three-dimensional measurement of high-temperature workpieces.
A first aspect of the invention discloses an automated three-dimensional measurement method with mark-point-free splicing, comprising the following steps:
calculating a transformation matrix X from a robot tail end coordinate system to a measuring device coordinate system through hand-eye calibration, wherein the measuring device is arranged at the tail end of the robot; acquiring a transformation matrix T from a robot base coordinate system to a robot tail end coordinate system;
converting points in the measurement point cloud of the object to be measured acquired by the measuring device from the measurement coordinate system to the robot base coordinate system according to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, and performing point cloud rough splicing;
setting a distance threshold criterion, traversing all points in the measurement point cloud of the object to be measured based on the distance threshold, and automatically removing redundant background data;
setting a multi-view point cloud registration optimization target based on a pose graph optimization framework, and solving the optimization target to obtain an optimized global pose relationship;
and rigidly transforming the optimized global pose relationship to a unified coordinate system, and carrying out multi-view point cloud precise splicing to obtain complete measurement data.
Preferably, the formula for converting the points in the measured point cloud of the object to be measured acquired by the measuring device from the measuring coordinate system to the robot base coordinate system is as follows:
[x′_i, y′_i, z′_i, 1]ᵀ = T · X · [x_i, y_i, z_i, 1]ᵀ
wherein (x_i, y_i, z_i) are the coordinates, in the measuring device coordinate system, of a point in the measurement point cloud of the object to be measured acquired by the measuring device, and (x′_i, y′_i, z′_i) are the coordinates of the point after conversion to the robot base coordinate system.
Preferably, the distance threshold criterion is:
{x′_i | dx_min ≤ x′_i ≤ dx_max, x′_i ∈ R³}
{y′_i | dy_min ≤ y′_i ≤ dy_max, y′_i ∈ R³}
{z′_i | dz_min ≤ z′_i ≤ dz_max, z′_i ∈ R³}
wherein dx_min, dy_min, dz_min, dx_max, dy_max, dz_max are respectively the lower and upper distance thresholds, in the robot base coordinate system, of points in the measurement point cloud along the x, y and z axes; all points in the measurement point cloud are traversed with this distance threshold criterion, and all data outside this field are deleted to remove the background data.
Preferably, the setting of the multi-view point cloud registration optimization target based on the pose graph optimization framework specifically includes:
taking the global poses of the measurement viewpoints as the nodes to be optimized and the directed edges connecting the nodes as the relative poses T_ij between the nodes, modeling the pose graph;
representing the corresponding-point distance error information between two associated point clouds by a pre-computed covariance matrix Ω_ij;
calculating the geometric registration error between two associated point clouds f_i and f_j:
E_ij = Σ_{(p_i,p_j)} ||T_i · p_i − T_j · p_j||²
wherein T_j, T_i are the global poses of the two measurement viewpoints j, i respectively, and T_ij is their relative pose;
neglecting the distance errors of the corresponding three-dimensional points of the two associated point clouds f_i and f_j after the registration transformation, the geometric registration error E_ij is transformed, and the registration error between any two associated point cloud data is expressed by a pose error term Δξ_ij and a covariance matrix Ω_ij, namely:
E_ij ≈ Δξ_ijᵀ · Ω_ij · Δξ_ij
wherein Δξ_ij is the pose error term applied as a small left-multiplied perturbation,
Ω_ij = Σ_{(p_i,p_j)} G_iᵀ · G_i, G_i = [−[p_i]_× I],
and [·]_× is the antisymmetric (skew-symmetric) operator;
according to geometric registration error Ei,jAnd constructing a multi-view point cloud registration optimization target, and performing global pose optimization.
Preferably, the three-dimensional point p_i in the point cloud f_i and its corresponding point p_j in the point cloud f_j satisfy, after the registration transformation T_ij, the distance error:
||T_ij · p_j − p_i|| < ε
The global poses T_j, T_i of the two measurement viewpoints j, i and their relative pose T_ij satisfy:
T_i⁻¹ · T_j ≈ (I + Δξ_ij^) · T_ij
wherein I is the identity matrix, Δξ_ij is the pose error term, and ^ represents the transformation from a vector to a matrix.
Preferably, the formula of the multi-view point cloud registration optimization target is as follows:
E(T) = Σ_{(i,j)} Δξ_ijᵀ · Ω_ij · Δξ_ij
wherein T is the set of global poses of the measurement viewpoints; the optimization goal of the multi-view point cloud registration is to minimize E(T).
Preferably, the optimization target is solved through a Levenberg Marquardt algorithm, and the optimized global pose relationship is obtained.
In a second aspect of the present invention, an automated three-dimensional measurement system with mark-point-free splicing is disclosed, the system comprising:
a parameter calibration module, configured to calculate the transformation matrix X from the robot end coordinate system to the measuring device coordinate system through hand-eye calibration, the measuring device being mounted at the robot end, and to acquire the transformation matrix T from the robot base coordinate system to the robot end coordinate system;
a coarse registration module, configured to convert the points in the measurement point cloud of the object to be measured acquired by the measuring device from the measurement coordinate system to the robot base coordinate system according to the transformation matrices X and T, and to perform coarse point cloud registration between the measurement point cloud of the object to be measured and the model point cloud;
a background removal module, configured to set a distance threshold criterion, traverse all points in the measurement point cloud of the object to be measured based on the distance thresholds, and automatically remove redundant background data;
a pose optimization module, configured to set a multi-view point cloud registration optimization objective based on a pose graph optimization framework, solve the optimization objective to obtain the optimized global pose relationship, rigidly transform the point clouds under the optimized global pose relationship to a unified coordinate system, and perform fine multi-view point cloud splicing to obtain complete measurement data.
In a third aspect of the present invention, an electronic device is disclosed, comprising: at least one processor, at least one memory, a communication interface, and a bus;
the processor, the memory and the communication interface complete mutual communication through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to implement the method of any one of claims 1-7.
In a fourth aspect of the invention, a computer-readable storage medium is disclosed, which stores computer instructions for causing a computer to implement a method as claimed in any one of claims 1 to 7.
Compared with the prior art, the invention has the following beneficial effects:
1) According to the invention, three-dimensional data under different view angles are preliminarily unified into a global coordinate system through hand-eye calibration and coordinate system transformation to realize coarse splicing; background-irrelevant data are automatically removed through threshold-based multi-field filtering; and the global poses are further optimized based on graph optimization technology, realizing high-precision splicing of multi-view point cloud data.
2) The invention can realize automated measurement of high-temperature workpieces without pasting marker points or additional auxiliary devices, is not affected by a positioning device, can automatically remove irrelevant background data, has high measurement accuracy and low environmental requirements, and has high efficiency and robustness in actual measurement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of a measurement system according to the present invention;
FIG. 2 is a flow chart of an automated three-dimensional measurement method of the present invention without landmark splicing;
FIG. 3 is a schematic diagram of point cloud data obtained by rough stitching according to the present invention;
FIG. 4 is a diagram illustrating the result of the present invention using multi-field automatic background removal based on distance threshold for the point cloud data of FIG. 3.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without any inventive step based on the embodiments of the present invention, are within the scope of the present invention.
Referring to fig. 1, the measuring device is mounted at the end of the robot; moving the robot end drives the measuring device so as to perform three-dimensional measurement of the object to be measured, and the measuring device may be a three-dimensional scanner. Specifically, O_bX_bY_bZ_b is the robot base coordinate system, O_tX_tY_tZ_t is the robot end coordinate system, O_sX_sY_sZ_s is the measuring device coordinate system, and P is the object to be measured.
Referring to fig. 2, the present invention provides an automated three-dimensional measurement method without mark point splicing, including:
s1, calibrating parameters
The parameter calibration comprises camera calibration of the three-dimensional scanner and "eye-in-hand" hand-eye calibration, and specifically comprises the following steps:
S11, camera calibration based on binocular stereo vision
This step mainly calibrates the internal and external parameters of each camera in the binocular camera pair of the three-dimensional scanner and the transformation relationship between the two cameras.
S12, hand-eye calibration based on "eye in hand"
The transformation A from the robot base coordinate system to the robot end coordinate system and the transformation B from the sensor coordinate system to the target coordinate system are known; the transformation relationship X between the measurement coordinate system and the robot end coordinate system needs to be determined through hand-eye calibration.
The robot end drives the measuring device to move from the initial position to the first measuring point, giving the transformation matrix A_1 of the robot end coordinate system from the initial position to the first measuring point and the transformation matrix B_1 of the measuring device coordinate system from the initial position to the first measuring point.
The robot end then drives the measuring device to move from the first measuring point to the second measuring point, giving the transformation matrix A_2 of the robot end coordinate system from the first measuring point to the second measuring point and the transformation matrix B_2 of the measuring device coordinate system from the first measuring point to the second measuring point.
According to the hand-eye calibration equation:
A_1 · X = X · B_1, A_2 · X = X · B_2 (1)
In particular,
R_A1 · R_X = R_X · R_B1, R_A1 · T_X + T_A1 = R_X · T_B1 + T_X
R_A2 · R_X = R_X · R_B2, R_A2 · T_X + T_A2 = R_X · T_B2 + T_X
wherein R_X, R_A1, R_B1, R_A2, R_B2 are the corresponding rotation matrices, all of which are 3×3 orthonormal matrices; T_X, T_A1, T_B1, T_A2, T_B2 are the corresponding translation vectors, all of which are 3×1 matrices;
Equation (1) can be converted into a linear system:
A · x = b (2)
in an unknown x stacking the entries of R_X and T_X. Solving equation (2) by the matrix direct product (Kronecker product) method gives the transformation matrix from the robot end coordinate system to the measuring device coordinate system:
X = (AᵀA)⁻¹ · Aᵀ · b (3)
wherein the coefficient matrix A and the vector b are assembled from the rotation and translation blocks of the equations above, 0_{9×3} denoting the zero matrix of 9 rows and 3 columns and 0_9 the zero vector of 9 rows and 1 column. X is the required hand-eye relationship.
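As an illustrative sketch (not the patent's exact block assembly of equation (2)), the two motion pairs (A_1, B_1) and (A_2, B_2) can be solved for X in two least-squares stages using the Kronecker-product vectorization; the helper names `rot` and `solve_hand_eye` are assumptions of this sketch:

```python
import numpy as np

def rot(axis, angle):
    # Rodrigues' formula: rotation matrix about a unit axis.
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def solve_hand_eye(As, Bs):
    """Solve A_k X = X B_k (k = 1, 2, ...) for the 4x4 hand-eye transform X.

    Rotation first: R_Ak R_X = R_X R_Bk is vectorized via Kronecker products
    as (I3 kron R_Ak - R_Bk^T kron I3) vec(R_X) = 0 and solved by SVD.
    Translation second: (R_Ak - I) T_X = R_X T_Bk - T_Ak, by least squares.
    """
    M = np.vstack([np.kron(np.eye(3), A[:3, :3]) - np.kron(B[:3, :3].T, np.eye(3))
                   for A, B in zip(As, Bs)])
    v = np.linalg.svd(M)[2][-1]        # null-space direction ~ vec(R_X)
    R = v.reshape(3, 3, order='F')     # undo column-stacking vec convention
    if np.linalg.det(R) < 0:           # resolve the sign ambiguity of the null vector
        R = -R
    U, _, Vt = np.linalg.svd(R)
    R = U @ Vt                          # project onto the nearest rotation matrix
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    t = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X
```

With two motions whose rotation axes are not parallel, the stacked rotation system has a one-dimensional null space and X is recovered uniquely, mirroring why the patent uses two measuring-point motions.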
S2, roughly splicing three-dimensional data under different pose view angles of the robot
After a transformation matrix X from a robot tail end coordinate system to a measuring device coordinate system is calculated through hand-eye calibration, a transformation matrix T from a robot base coordinate system to the robot tail end coordinate system is obtained;
converting points in the measurement point cloud of the object to be measured acquired by the measuring device from the measurement coordinate system to the robot base coordinate system according to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, wherein the transformation formula is as follows:
[x′_i, y′_i, z′_i, 1]ᵀ = T · X · [x_i, y_i, z_i, 1]ᵀ (4)
wherein (x_i, y_i, z_i) are the coordinates, in the measuring device coordinate system, of a point in the measurement point cloud of the object to be measured acquired by the measuring device, and (x′_i, y′_i, z′_i) are the coordinates of the point after conversion to the robot base coordinate system. Coarse point cloud registration between the measurement point cloud of the object to be measured and the model point cloud, and coarse splicing of the measurement point clouds under the different robot pose view angles, are then performed under the robot base frame.
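A minimal sketch of the coarse-splicing conversion above, assuming the frame convention p_base = T · X · p_sensor (the function name is hypothetical):

```python
import numpy as np

def coarse_stitch(points_sensor, T_base_end, X_end_sensor):
    """Map an N x 3 point cloud from the sensor frame into the robot base frame.

    T_base_end: 4x4 pose of the robot end in the base frame (robot kinematics).
    X_end_sensor: 4x4 hand-eye transform (sensor pose in the end frame).
    """
    pts = np.asarray(points_sensor, dtype=float)
    hom = np.hstack([pts, np.ones((pts.shape[0], 1))])  # homogeneous coordinates
    return (T_base_end @ X_end_sensor @ hom.T).T[:, :3]
```

Applying this per viewpoint with the robot pose T reported at each capture yields the coarsely spliced cloud in the base frame.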
S3, multi-field automatic background removal based on distance threshold
Due to the influence of the background of the measurement environment, much irrelevant data is collected at the same time; as shown in fig. 3, there is much irrelevant background around the measured target object. To achieve efficient automated measurement, automatic background removal is required. The invention adopts a multi-field filtering technique based on distance thresholds to automatically remove irrelevant background data. The distance threshold criterion is:
{x′_i | dx_min ≤ x′_i ≤ dx_max, x′_i ∈ R³} (5)
{y′_i | dy_min ≤ y′_i ≤ dy_max, y′_i ∈ R³} (6)
{z′_i | dz_min ≤ z′_i ≤ dz_max, z′_i ∈ R³} (7)
wherein dx_min, dy_min, dz_min, dx_max, dy_max, dz_max are respectively the lower and upper distance thresholds, in the robot base coordinate system, of points in the measurement point cloud along the x, y and z axes; these six values can be obtained from a simulation model. All points in the measurement point cloud are traversed with this distance threshold criterion, and all data outside this field are deleted to remove the background data. Fig. 4 is a schematic diagram of the result of applying the distance-threshold-based multi-field automatic background removal to the point cloud data of fig. 3.
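The criterion of equations (5)-(7) reduces to an axis-aligned box filter; a minimal sketch (function name and argument layout are assumptions):

```python
import numpy as np

def remove_background(points, lo, hi):
    """Keep only points inside the axis-aligned field [lo, hi] in the base frame.

    lo = (dx_min, dy_min, dz_min) and hi = (dx_max, dy_max, dz_max) are the six
    thresholds of equations (5)-(7), e.g. taken from the simulation model.
    """
    points = np.asarray(points, dtype=float)
    mask = np.all((points >= np.asarray(lo)) & (points <= np.asarray(hi)), axis=1)
    return points[mask]
```

Because the test is a vectorized boolean mask, "traversing all points" costs a single pass over the array.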
S4, fine multi-view point cloud splicing based on graph optimization framework
On the basis of the transformation relationship obtained from the hand-eye calibration, the invention further optimizes the global poses based on a graph optimization framework: the multi-view point cloud registration problem is modeled under a pose graph optimization framework, the corresponding-point distance error information between associated point clouds is expressed by pre-computed covariance matrices, and the multi-view point cloud registration problem is then optimized rapidly.
Step S4 specifically includes the following sub-steps:
s41 pose graph modeling
The invention is based on a pose graph optimization method, ignores the distance information between corresponding three-dimensional point pairs, and only optimizes the inconsistency between the global poses and the relative poses. The poses obtained by the hand-eye splicing method are used as initial values of the spatial poses of the measurement viewpoints; the global poses of the multi-view point cloud data are the nodes to be optimized, and the directed edges connecting the nodes are the relative poses T_ij between the nodes, modeling the pose graph. The directed connection relationship can be stored and queried in the pose graph with a corresponding adjacency matrix, and the obtained adjacency matrix is also the information matrix.
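A toy sketch of the pose-graph bookkeeping described above (the view count, edge set, and identity poses are placeholder values of this sketch):

```python
import numpy as np

# Hypothetical 4-viewpoint pose graph: each node stores a global pose (4x4),
# each directed edge (i, j) stores the relative pose T_ij obtained from
# pairwise registration of the overlapping scans.
n_views = 4
edges = {(0, 1): np.eye(4), (1, 2): np.eye(4), (2, 3): np.eye(4), (3, 0): np.eye(4)}

# The directed connections are stored/queried via an adjacency matrix,
# which doubles as the boolean information matrix of the graph.
adjacency = np.zeros((n_views, n_views), dtype=int)
for i, j in edges:
    adjacency[i, j] = 1
```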
S42, fine multi-view point cloud splicing based on pose graph optimization framework
Given two associated point cloud data f_i and f_j of different viewpoints i, j, the shape inconsistency between them can be measured as:
E_ij = Σ_{(p_i,p_j)} ||T_i · p_i − T_j · p_j||² (8)
Since a spatial rigid transformation does not affect the distance between three-dimensional points before and after the transformation, the above equation is equivalent to:
E_ij = Σ_{(p_i,p_j)} ||T_i⁻¹ · T_j · p_j − p_i||² (9)
point cloud fiThree-dimensional point p in (1)iAnd it is at point cloud fjCorresponding point p in (1)jAfter being subjected to registration transformation Ti,jThe rear distance error is small, namely:
||Ti,jpj-pi||<∈ (10)
Then, substituting p_j ≈ T_ij⁻¹ · p_i, equation (9) can be approximated as:
E_ij ≈ Σ_{(p_i,p_j)} ||T_i⁻¹ · T_j · T_ij⁻¹ · p_i − p_i||² (11)
global position T of two measuring viewpointsj,TiAnd its relative pose Tj,iThe relationship between them satisfies:
Figure BDA0003254954710000102
thus, equation (11) can be written as:
Figure BDA0003254954710000103
wherein,
Figure BDA0003254954710000104
is an antisymmetric (Sview symmetry) operator.
A rigid transformation matrix T representing a rigid transformation in three-dimensional space belongs to the special Euclidean group (Special Euclidean group), defined as:
SE(3) = { T = [R, t; 0ᵀ, 1] ∈ R^{4×4} | R ∈ SO(3), t ∈ R³ }
The Lie algebra corresponding to SE(3) is defined as:
se(3) = { ξ = [φ; ρ] ∈ R⁶, ξ^ = [[φ]_×, ρ; 0ᵀ, 0] ∈ R^{4×4} }
ξ is a six-dimensional vector: the first three dimensions are the rotation, denoted φ, and the last three dimensions are the translation, denoted ρ; ^ represents the conversion from the vector to the matrix. In the invention, Δξ_ij is a six-dimensional pose error vector of the form ξ.
Defining G_i = [−[p_i]_× I], equation (13) above can be written as:
E_ij ≈ Σ_{(p_i,p_j)} ||G_i · Δξ_ij||² = Δξ_ijᵀ · Ω_ij · Δξ_ij, Ω_ij = Σ_{(p_i,p_j)} G_iᵀ · G_i
The above formula shows that the registration error between any two point cloud data can be expressed by a pose error term Δξ_ij and a covariance matrix Ω_ij, where Δξ_ij is applied as a small left-multiplied perturbation. The covariance matrix Ω_ij is related only to the local three-dimensional coordinates of the corresponding points and not to the global poses of the point clouds, so it can be calculated in advance during the earlier coarse registration process and does not need to change during the iterative optimization.
After calculating the geometric registration error representation between two point clouds, the multi-view point cloud registration optimization objective is:
E(T) = Σ_{(i,j)} Δξ_ijᵀ · Ω_ij · Δξ_ij (14)
The multi-view point cloud registration optimization objective is modeled under the pose graph optimization framework through equation (14); the objective is optimized with the Levenberg-Marquardt algorithm, finally giving the optimized global pose relationship.
S43, obtaining coordinates of measuring points by applying rigid body transformation matrix
The optimized global pose relationship is applied through matrix multiplication to transform the point clouds to the final unified coordinate system, and fine splicing yields a complete measurement data model, completing the rigid transformation of the measurement point clouds.
Compared with the prior art, the method does not require pasting marker points, is not affected by a positioning device, requires no additional auxiliary device, can automatically remove irrelevant background data, has high measurement accuracy and modest environmental requirements, is efficient, and has high robustness in actual measurement.
The present invention also discloses an electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus; the processor, the memory and the communication interface complete mutual communication through the bus; the memory stores program instructions executable by the processor, which invokes the program instructions to implement the methods of the invention described above.
The invention also discloses a computer readable storage medium which stores computer instructions for causing the computer to implement all or part of the steps of the method of the embodiment of the invention. The storage medium includes: u disk, removable hard disk, ROM, RAM, magnetic disk or optical disk, etc.
The above-described system embodiments are merely illustrative, wherein the units described as separate parts may or may not be physically separate, and the parts shown as units may or may not be physical units, i.e. may be distributed over a plurality of network units. Without creative labor, a person skilled in the art can select some or all of the modules according to actual needs to achieve the purpose of the solution of the embodiment.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An automated three-dimensional measurement method for landmark-free point stitching, the method comprising:
calculating a transformation matrix X from a robot tail end coordinate system to a measuring device coordinate system through hand-eye calibration, wherein the measuring device is arranged at the tail end of the robot; acquiring a transformation matrix T from a robot base coordinate system to a robot tail end coordinate system;
converting points in the measurement point cloud of the object to be measured acquired by the measuring device from the measurement coordinate system to the robot base coordinate system according to the transformation matrix X from the robot end coordinate system to the measuring device coordinate system and the transformation matrix T from the robot base coordinate system to the robot end coordinate system, and performing point cloud rough splicing;
setting a distance threshold criterion, traversing all points in the measurement point cloud of the object to be measured based on the distance threshold, and automatically removing redundant background data;
setting a multi-view point cloud registration optimization target based on a pose graph optimization framework, and solving the optimization target to obtain an optimized global pose relationship; and rigidly transforming the optimized global pose relationship to a unified coordinate system, and carrying out multi-view point cloud precise splicing to obtain complete measurement data.
2. The automated three-dimensional measurement method for mark-point-free splicing according to claim 1, wherein the points in the measurement point cloud of the object to be measured acquired by the measuring device are converted from the measuring device coordinate system to the robot base coordinate system by the formula:
[x′_i, y′_i, z′_i, 1]^T = T · X · [x_i, y_i, z_i, 1]^T
wherein (x_i, y_i, z_i) are the coordinates, in the measuring device coordinate system, of a point in the measurement point cloud of the object to be measured acquired by the measuring device, and (x′_i, y′_i, z′_i) are the coordinates of that point after conversion into the robot base coordinate system.
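For illustration only (this sketch is not part of the claims), the conversion of claim 2 can be written with homogeneous coordinates; the composition order T·X and all names (`to_base_frame`, etc.) are assumptions inferred from the claim wording:

```python
import numpy as np

def to_base_frame(points_dev, T, X):
    """Map an (N, 3) array of points from the measuring-device coordinate
    system to the robot base coordinate system using 4x4 homogeneous
    matrices T (base <- end) and X (end <- device).

    The composition order T @ X is an assumption inferred from the claim.
    """
    n = points_dev.shape[0]
    homo = np.hstack([points_dev, np.ones((n, 1))])  # homogeneous coordinates
    out = (T @ X @ homo.T).T                          # apply both transforms
    return out[:, :3]                                 # back to Cartesian
```

With T = X = I the points are unchanged; a pure translation in T shifts every point by the same offset.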
3. The automated three-dimensional measurement method for mark-point-free splicing according to claim 2, wherein the distance threshold criterion is:
{x′_i | dx_min ≤ x′_i ≤ dx_max, x′_i ∈ R}
{y′_i | dy_min ≤ y′_i ≤ dy_max, y′_i ∈ R}
{z′_i | dz_min ≤ z′_i ≤ dz_max, z′_i ∈ R}
wherein dx_min, dy_min, dz_min and dx_max, dy_max, dz_max are respectively the lower and upper distance thresholds of the points in the measurement point cloud along the x, y and z axes of the robot base coordinate system; using this distance threshold criterion, all points in the measurement point cloud are traversed, and all data outside this range are deleted so as to remove the background data.
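As a hypothetical sketch of the background removal in claim 3 (not the patented implementation; the function name and argument layout are assumptions), a vectorized per-axis threshold test might look like:

```python
import numpy as np

def remove_background(points, lo, hi):
    """Keep only the points whose x, y and z coordinates (already expressed
    in the robot base coordinate system) lie inside [lo, hi] per axis;
    everything outside this box is treated as background and dropped.

    lo = (dx_min, dy_min, dz_min), hi = (dx_max, dy_max, dz_max).
    """
    lo, hi = np.asarray(lo), np.asarray(hi)
    mask = np.all((points >= lo) & (points <= hi), axis=1)  # per-point box test
    return points[mask]
```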
4. The automated three-dimensional measurement method for mark-point-free splicing according to claim 1, wherein setting the multi-view point cloud registration optimization objective based on the pose graph optimization framework specifically comprises:
taking the global poses of the measurement viewpoints as the nodes to be optimized and the directed edges connecting the nodes as the relative poses T_{i,j} between nodes, modeling the pose graph;
representing the distance error information of the corresponding points between two associated point clouds by a pre-computed covariance matrix Ω_{i,j};
calculating the geometric registration error between the associated point clouds f_i and f_j corresponding to the measurement viewpoints i and j:

E_{i,j} = Σ_{(p_i, p_j)} ||T_i p_i − T_j p_j||²
wherein T_j and T_i are the global poses of the two measurement viewpoints j and i respectively, and T_{j,i} is their relative pose;
neglecting the distance error of the corresponding three-dimensional points of the two associated point clouds f_i and f_j after the registration transformation, the geometric registration error E_{i,j} is transformed, and the registration error between any two associated point cloud data is expressed by the pose error term Δξ_{i,j} and the covariance matrix Ω_{i,j}, namely:

E_{i,j} ≈ Δξ_{i,j}^T Ω_{i,j} Δξ_{i,j}
wherein Δξ_{i,j} is the pose error term obtained by left-multiplying a small perturbation,

Ω_{i,j} = Σ_i G_i^T G_i,   G_i = [−[p_i]_× I],

and [·]_× is the antisymmetric (cross-product) operator;
constructing the multi-view point cloud registration optimization objective according to the geometric registration errors E_{i,j}, and performing global pose optimization.
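The accumulation Ω_{i,j} = Σ_i G_iᵀG_i with G_i = [−[p_i]_× I] described in claim 4 can be sketched as follows; this is an illustrative reading of the claim, and the function names are assumptions:

```python
import numpy as np

def skew(p):
    """Antisymmetric (cross-product) matrix [p]x of a 3-vector p."""
    x, y, z = p
    return np.array([[0.0, -z,   y],
                     [z,   0.0, -x],
                     [-y,  x,   0.0]])

def information_matrix(points):
    """Accumulate Omega = sum_i G_i^T G_i over the corresponding points of
    one associated point-cloud pair, with G_i = [-[p_i]x | I] (3x6), giving
    a 6x6 matrix that weights the pose error term of that edge."""
    omega = np.zeros((6, 6))
    for p in points:
        G = np.hstack([-skew(p), np.eye(3)])  # 3x6 block per point
        omega += G.T @ G
    return omega
```

By construction Ω is symmetric positive semi-definite, which is what makes the quadratic form Δξᵀ Ω Δξ a valid error weighting.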
5. The method according to claim 4, wherein a three-dimensional point p_i in the point cloud f_i and its corresponding point p_j in the point cloud f_j satisfy, after the registration transformation T_{i,j}, the distance error bound:

||T_{i,j} p_j − p_i|| < ε
the global poses T_j and T_i of the two measurement viewpoints j and i and their relative pose T_{j,i} satisfy:

T_j T_{j,i} T_i^{−1} = exp(Δξ̂_{i,j}) ≈ I + Δξ̂_{i,j}

wherein I is the identity matrix, Δξ̂_{i,j} is the pose error term, and ^ denotes the mapping from a vector to a matrix.
6. The automated three-dimensional measurement method for mark-point-free splicing according to claim 4, wherein the multi-view point cloud registration optimization objective is:

E(T) = Σ_{(i,j)} Δξ_{i,j}^T Ω_{i,j} Δξ_{i,j}

wherein T = {T_1, T_2, …, T_n} is the set of global poses of the measurement viewpoints, and the optimization goal of the multi-view point cloud registration is to minimize E(T).
7. The automated three-dimensional measurement method for mark-point-free splicing according to claim 6, wherein the optimization objective is solved by the Levenberg-Marquardt algorithm to obtain the optimized global pose relationship.
8. An automated three-dimensional measurement system for mark-point-free splicing, the system comprising:
a parameter calibration module, configured to calculate a transformation matrix X from the robot end coordinate system to the measuring device coordinate system through hand-eye calibration, the measuring device being mounted at the end of the robot, and to acquire a transformation matrix T from the robot base coordinate system to the robot end coordinate system;
a coarse splicing module, configured to convert points in the measurement point cloud of the object to be measured, acquired by the measuring device, from the measuring device coordinate system to the robot base coordinate system according to the transformation matrices X and T, and to perform coarse point cloud splicing;
a background removal module, configured to set a distance threshold criterion, traverse all points in the measurement point cloud of the object to be measured based on the distance thresholds, and automatically remove redundant background data;
a pose optimization module, configured to set a multi-view point cloud registration optimization objective based on a pose graph optimization framework and solve the optimization objective to obtain an optimized global pose relationship;
a fine splicing module, configured to rigidly transform the optimized global pose relationship to a unified coordinate system and perform multi-view fine point cloud splicing to obtain complete measurement data.
9. An electronic device, comprising: at least one processor, at least one memory, a communication interface, and a bus;
wherein the processor, the memory and the communication interface communicate with each other via the bus;
the memory stores program instructions executable by the processor, and the processor invokes the program instructions to implement the method of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions that cause a computer to implement the method of any one of claims 1-7.
CN202111057005.4A 2021-09-09 2021-09-09 Automatic three-dimensional measurement method and system without mark point splicing Active CN113865506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111057005.4A CN113865506B (en) 2021-09-09 2021-09-09 Automatic three-dimensional measurement method and system without mark point splicing


Publications (2)

Publication Number Publication Date
CN113865506A true CN113865506A (en) 2021-12-31
CN113865506B CN113865506B (en) 2023-11-24

Family

ID=78995190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111057005.4A Active CN113865506B (en) 2021-09-09 2021-09-09 Automatic three-dimensional measurement method and system without mark point splicing

Country Status (1)

Country Link
CN (1) CN113865506B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024087448A1 (en) * 2022-10-28 2024-05-02 中山大学 High-precision measurement method and system for surface rotation angle of object

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090248323A1 (en) * 2008-03-28 2009-10-01 Lockheed Martin Corporation System, program product, and related methods for registering three-dimensional models to point data representing the pose of a part
CN104463894A (en) * 2014-12-26 2015-03-25 山东理工大学 Overall registering method for global optimization of multi-view three-dimensional laser point clouds
CN106780459A (en) * 2016-12-12 2017-05-31 华中科技大学 A kind of three dimensional point cloud autoegistration method
CN108375382A (en) * 2018-02-22 2018-08-07 北京航空航天大学 Position and attitude measuring system precision calibration method based on monocular vision and device
CN108986149A (en) * 2018-07-16 2018-12-11 武汉惟景三维科技有限公司 A kind of point cloud Precision Registration based on adaptive threshold
CN110227876A (en) * 2019-07-15 2019-09-13 西华大学 Robot welding autonomous path planning method based on 3D point cloud data
CN110335296A (en) * 2019-06-21 2019-10-15 华中科技大学 A kind of point cloud registration method based on hand and eye calibrating
CN112232319A (en) * 2020-12-14 2021-01-15 成都飞机工业(集团)有限责任公司 Scanning splicing method based on monocular vision positioning
US20210095959A1 (en) * 2019-01-24 2021-04-01 Dalian University Of Technology 3D measurement model and spatial calibration method based on 1D displacement sensor
US11037346B1 (en) * 2020-04-29 2021-06-15 Nanjing University Of Aeronautics And Astronautics Multi-station scanning global point cloud registration method based on graph optimization
CN112964196A (en) * 2021-02-05 2021-06-15 杭州思锐迪科技有限公司 Three-dimensional scanning method, system, electronic device and computer equipment
CN113205466A (en) * 2021-05-10 2021-08-03 南京航空航天大学 Incomplete point cloud completion method based on hidden space topological structure constraint


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhong Kai; Li Zhongwei; Shi Yusheng; Wang Congjun; Zhang Lichao; Huang Kui: "Structural parameter calibration algorithm in a combined large-scale three-dimensional measurement system", Journal of Tianjin University, no. 05 *


Also Published As

Publication number Publication date
CN113865506B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN110426051B (en) Lane line drawing method and device and storage medium
CN109658457B (en) Method for calibrating arbitrary relative pose relationship between laser and camera
WO2021004416A1 (en) Method and apparatus for establishing beacon map on basis of visual beacons
CN105021124B (en) A kind of planar part three-dimensional position and normal vector computational methods based on depth map
JP5839971B2 (en) Information processing apparatus, information processing method, and program
CN109579695B (en) Part measuring method based on heterogeneous stereoscopic vision
CN114474056B (en) Monocular vision high-precision target positioning method for grabbing operation
CN111524194A (en) Positioning method and terminal for mutual fusion of laser radar and binocular vision
CN113777593B (en) Multi-laser radar external parameter calibration method and device based on servo motor auxiliary motion
CN114677435A (en) Point cloud panoramic fusion element extraction method and system
CN114022542A (en) Three-dimensional reconstruction-based 3D database manufacturing method
CN115457130A (en) Electric vehicle charging port detection and positioning method based on depth key point regression
CN116309882A (en) Tray detection and positioning method and system for unmanned forklift application
CN113865506B (en) Automatic three-dimensional measurement method and system without mark point splicing
CN113920191B (en) 6D data set construction method based on depth camera
CN111583342A (en) Target rapid positioning method and device based on binocular vision
CN109636897B (en) Octmap optimization method based on improved RGB-D SLAM
CN112508933B (en) Flexible mechanical arm movement obstacle avoidance method based on complex space obstacle positioning
CN110853103B (en) Data set manufacturing method for deep learning attitude estimation
CN116358517B (en) Height map construction method, system and storage medium for robot
CN111415384B (en) Industrial image component accurate positioning system based on deep learning
CN115100287B (en) External parameter calibration method and robot
CN112819900B (en) Method for calibrating internal azimuth, relative orientation and distortion coefficient of intelligent stereography
CN112734842B (en) Auxiliary positioning method and system for centering installation of large ship equipment
CN110232715B (en) Method, device and system for self calibration of multi-depth camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Li Zhongwei; Liu Yubao; Zhong Kai; Yuan Chaofei; Yang Liu

Inventor before: Li Zhongwei; Zhong Kai; Liu Yubao; Yuan Chaofei; Yang Liu

GR01 Patent grant