CN111975756A - Hand-eye calibration system and method of 3D vision measurement system - Google Patents


Publication number
CN111975756A
Authority
CN
China
Prior art keywords
coordinate
coordinate system
robot
vision sensor
point
Prior art date
Legal status
Granted
Application number
CN202010448062.4A
Other languages
Chinese (zh)
Other versions
CN111975756B (en)
Inventor
李文亮
曾恺田
陈海亮
李佳昌
王平江
郑康
吴梓鸿
洪东方
苏惠阳
何钊滨
Current Assignee
Quanzhou-Hust Intelligent Manufacturing Future
Huazhong University of Science and Technology
Original Assignee
Quanzhou-Hust Intelligent Manufacturing Future
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Quanzhou-Hust Intelligent Manufacturing Future and Huazhong University of Science and Technology
Priority to CN202010448062.4A
Publication of CN111975756A
Application granted
Publication of CN111975756B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/02 Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J9/023 Cartesian coordinate type
    • B25J9/08 Programme-controlled manipulators characterised by modular constructions
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J13/00 Controls for manipulators
    • B25J13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/087 Controls for manipulators by means of sensing devices for sensing other physical parameters, e.g. electrical or chemical properties

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to a hand-eye calibration system and a hand-eye calibration method of a 3D vision measurement system. The method comprises the following steps: mounting a 3D vision sensor at the tail end of a robot, and setting three non-coplanar straight lines in space that are mutually perpendicular and intersect at the same point; measuring three first feature points on the three straight lines with the 3D vision sensor to obtain their coordinate positions; translating the 3D vision sensor with the robot and measuring three second feature points on the three straight lines to obtain their coordinate positions, recording the pose of the robot after the 3D vision sensor is translated; calculating the translation vector of the 3D vision sensor and, from it, the pose of the intersection point of the three straight lines in the second measurement coordinate system; and calibrating the robot base coordinate system and performing coordinate conversion to obtain the hand-eye matrix. Compared with the prior art, the method has simple steps, reduces workload and working time, and achieves high hand-eye calibration stability.

Description

Hand-eye calibration system and method of 3D vision measurement system
Technical Field
The invention relates to the technical field of robot vision sensing, in particular to a hand-eye calibration system and a calibration method of a 3D vision measurement system.
Background
In the prior art, application No. 201910446270.8 discloses an on-line hand-eye calibration and grasping-pose calculation method for the stereoscopic-vision hand-eye system of a four-degree-of-freedom 4-R(2-SS) parallel robot, comprising the following steps. Stereoscopic Eye-to-hand model improvement with motion-error compensation: an Eye-to-hand basic model for cameras fixed outside the robot body and a stereoscopic vision model accounting for nonlinear distortion are constructed; a hand-eye model group relating each camera and the robot is then built from the pose relations between the cameras to improve the single-camera Eye-to-hand basic model, and the improved Eye-to-hand model is compensated for robot motion error. Solving the Eye-to-hand model based on vertical-component correction: using calibration data from multiple robot motions acquired by each camera, the vertical components of the hand-eye calibration pose parameters are corrected based on the vertical constraint between the calibration plate and the tail-end clamping mechanism, so that all poses and motion errors in the hand-eye calibration of the rotation-constrained four-degree-of-freedom 4-R(2-SS) parallel robot are solved accurately. Calibration motion planning based on the non-trivial-solution constraint of the Eye-to-hand model: a non-trivial-solution constraint is established from the pose relations between the calibration motions of the tail-end clamping mechanism, and invalid poses are eliminated when planning the hand-eye calibration motions, achieving high-precision, high-efficiency on-line hand-eye calibration of the parallel robot. Grasping-pose calculation based on stereoscopic vision and the 4-R(2-SS) parallel robot: a parallel-robot grasping model with error compensation is built from the robot motion errors obtained in hand-eye calibration; the optimal grasping pose of the object is computed in the camera coordinate system from the stereoscopic vision model, the current pose of the tail-end clamping mechanism in the parallel-robot base coordinate system is computed from the parallel-robot kinematic equations, and the conversion matrix between the current pose and the optimal grasping pose is computed by combining the grasping model with the camera base pose obtained from on-line hand-eye calibration. As can be seen from the above, the existing on-line hand-eye calibration method for robots has complicated steps, requires a large amount of calibration workload and time, places high demands on the professional level of technicians, and has poor hand-eye calibration stability. These problems need to be solved.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the above-mentioned problems in the prior art. Therefore, an object of the present invention is to provide a hand-eye calibration system and method for a 3D vision measurement system with simple steps, reduced workload and working time, and high stability of hand-eye calibration.
The technical scheme adopted by the invention to solve the above technical problems is as follows. A hand-eye calibration method of a 3D vision measurement system comprises the following steps:
Step 1, mounting a 3D vision sensor at the tail end of a robot, and setting three non-coplanar straight lines in space, the three straight lines being mutually perpendicular and intersecting at the same point; measuring three first feature points on the three straight lines with the 3D vision sensor, the three first feature points lying in the same laser plane, and acquiring the coordinate positions of the three first feature points;
Step 2, translating the 3D vision sensor with the robot and measuring three second feature points on the three straight lines, the three second feature points lying in the same laser plane; acquiring the coordinate positions of the three second feature points, and recording the pose of the robot after the 3D vision sensor is translated;
Step 3, calculating the translation vector of the 3D vision sensor from the coordinate positions of the three first feature points and the three second feature points, and using the translation vector to calculate the pose of the intersection point of the three straight lines in the second measurement coordinate system {C} of the 3D vision sensor;
Step 4, calibrating the robot base coordinate system at the intersection point O of the three straight lines with the robot, performing coordinate conversion, and calculating the pose of the second measurement coordinate system {C} of the 3D vision sensor in the robot tool coordinate system {T}, i.e. the hand-eye matrix.
On the basis of the technical scheme, the invention can be further improved as follows.
Further, in step 1, the three straight lines are a straight line OA, a straight line OB and a straight line OC, which intersect at a point O; the coordinate position of the first feature point P1 on the straight line OA is $[X_1\;\;Y_1\;\;Z_1]^T$; the coordinate position of the first feature point P2 on the straight line OB is $[X_2\;\;Y_2\;\;Z_2]^T$; the coordinate position of the first feature point P3 on the straight line OC is $[X_3\;\;Y_3\;\;Z_3]^T$.
In step 2, the coordinate position of the second feature point P4 on the straight line OA is $[X_4\;\;Y_4\;\;Z_4]^T$; the coordinate position of the second feature point P5 on the straight line OB is $[X_5\;\;Y_5\;\;Z_5]^T$; the coordinate position of the second feature point P6 on the straight line OC is $[X_6\;\;Y_6\;\;Z_6]^T$. The pose of the robot after the 3D vision sensor is translated is the homogeneous transformation matrix ${}^{B}_{T}T$, where {T} denotes the robot tool coordinate system and {B} denotes the base coordinate system of the robot.
Further, the algorithm for calculating the translation vector of the 3D vision sensor in step 3 is:
Let the translation vector of the 3D vision sensor be $\vec{t}=(x,\;y,\;z)$. The coordinate positions of the three first feature points are converted into the coordinate system of the three second feature points, i.e. the second measurement coordinate system {C} of the 3D vision sensor: the coordinate position of point P1 becomes $[X_1+x\;\;Y_1+y\;\;Z_1+z]^T$, that of point P2 becomes $[X_2+x\;\;Y_2+y\;\;Z_2+z]^T$, and that of point P3 becomes $[X_3+x\;\;Y_3+y\;\;Z_3+z]^T$.
In this common coordinate system, point P1 and point P4 form the vector $\vec{P_1P_4}=(X_4-X_1-x,\;Y_4-Y_1-y,\;Z_4-Z_1-z)$; P2 and P5 form the vector $\vec{P_2P_5}=(X_5-X_2-x,\;Y_5-Y_2-y,\;Z_5-Z_2-z)$; P3 and P6 form the vector $\vec{P_3P_6}=(X_6-X_3-x,\;Y_6-Y_3-y,\;Z_6-Z_3-z)$.
A system of equations is constructed from the included angles between the straight lines OA, OB and OC:

$$\cos\angle AOB=\frac{\vec{P_1P_4}\cdot\vec{P_2P_5}}{|\vec{P_1P_4}|\,|\vec{P_2P_5}|},\qquad
\cos\angle BOC=\frac{\vec{P_2P_5}\cdot\vec{P_3P_6}}{|\vec{P_2P_5}|\,|\vec{P_3P_6}|},\qquad
\cos\angle AOC=\frac{\vec{P_1P_4}\cdot\vec{P_3P_6}}{|\vec{P_1P_4}|\,|\vec{P_3P_6}|}.$$
Since the straight lines OA, OB and OC are pairwise perpendicular, the above simplifies to

$$\vec{P_1P_4}\cdot\vec{P_2P_5}=0,\qquad \vec{P_2P_5}\cdot\vec{P_3P_6}=0,\qquad \vec{P_1P_4}\cdot\vec{P_3P_6}=0,$$
that is,

$$\begin{cases}
(X_4-X_1-x)(X_5-X_2-x)+(Y_4-Y_1-y)(Y_5-Y_2-y)+(Z_4-Z_1-z)(Z_5-Z_2-z)=0\\
(X_5-X_2-x)(X_6-X_3-x)+(Y_5-Y_2-y)(Y_6-Y_3-y)+(Z_5-Z_2-z)(Z_6-Z_3-z)=0\\
(X_4-X_1-x)(X_6-X_3-x)+(Y_4-Y_1-y)(Y_6-Y_3-y)+(Z_4-Z_1-z)(Z_6-Z_3-z)=0
\end{cases}$$
Expanding, each equation contains the same quadratic part $x^2+y^2+z^2$:

$$\begin{cases}
x^2+y^2+z^2-(X_4-X_1+X_5-X_2)\,x-(Y_4-Y_1+Y_5-Y_2)\,y-(Z_4-Z_1+Z_5-Z_2)\,z+(X_4-X_1)(X_5-X_2)+(Y_4-Y_1)(Y_5-Y_2)+(Z_4-Z_1)(Z_5-Z_2)=0\\
x^2+y^2+z^2-(X_5-X_2+X_6-X_3)\,x-(Y_5-Y_2+Y_6-Y_3)\,y-(Z_5-Z_2+Z_6-Z_3)\,z+(X_5-X_2)(X_6-X_3)+(Y_5-Y_2)(Y_6-Y_3)+(Z_5-Z_2)(Z_6-Z_3)=0\\
x^2+y^2+z^2-(X_4-X_1+X_6-X_3)\,x-(Y_4-Y_1+Y_6-Y_3)\,y-(Z_4-Z_1+Z_6-Z_3)\,z+(X_4-X_1)(X_6-X_3)+(Y_4-Y_1)(Y_6-Y_3)+(Z_4-Z_1)(Z_6-Z_3)=0
\end{cases}$$
Taking the difference of the equations two at a time cancels the quadratic terms and leaves linear equations:

$$\begin{cases}
[(X_4-X_1)-(X_6-X_3)]\,x+[(Y_4-Y_1)-(Y_6-Y_3)]\,y+[(Z_4-Z_1)-(Z_6-Z_3)]\,z=(X_5-X_2)[(X_4-X_1)-(X_6-X_3)]+(Y_5-Y_2)[(Y_4-Y_1)-(Y_6-Y_3)]+(Z_5-Z_2)[(Z_4-Z_1)-(Z_6-Z_3)]\\
[(X_4-X_1)-(X_5-X_2)]\,x+[(Y_4-Y_1)-(Y_5-Y_2)]\,y+[(Z_4-Z_1)-(Z_5-Z_2)]\,z=(X_6-X_3)[(X_4-X_1)-(X_5-X_2)]+(Y_6-Y_3)[(Y_4-Y_1)-(Y_5-Y_2)]+(Z_6-Z_3)[(Z_4-Z_1)-(Z_5-Z_2)]\\
[(X_6-X_3)-(X_5-X_2)]\,x+[(Y_6-Y_3)-(Y_5-Y_2)]\,y+[(Z_6-Z_3)-(Z_5-Z_2)]\,z=(X_4-X_1)[(X_6-X_3)-(X_5-X_2)]+(Y_4-Y_1)[(Y_6-Y_3)-(Y_5-Y_2)]+(Z_4-Z_1)[(Z_6-Z_3)-(Z_5-Z_2)]
\end{cases}$$
Writing the linear system in matrix form $A\,[x\;\;y\;\;z]^T=\beta$, the solution is

$$[x\;\;y\;\;z]^T=A^{+}\beta,$$

where the superscript $+$ denotes the generalized (Moore-Penrose) inverse: $A^{+}$ is the generalized inverse of the matrix $A$.
The values of x, y and z can then be solved; the resulting translation vector of the 3D vision sensor is recorded as $\vec{t}=(a,\;b,\;c)$.
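The differencing step above can be checked numerically. The sketch below (plain Python, function name illustrative) follows the patent's idea of differencing the perpendicularity constraints, and makes explicit two properties the text leaves implicit, at least in this reconstruction: the three differenced linear equations have rank 2, so one of the original quadratic constraints is needed to fix the remaining unknown, and the three constraints admit two mirror-image solutions, which the sketch resolves by assuming the measured OA-OB-OC triad is right-handed.

```python
import math

def solve_translation(P1, P2, P3, P4, P5, P6):
    """Recover the sensor translation t = (x, y, z) between two measurements.

    P1..P3: first feature points on lines OA, OB, OC (first frame);
    P4..P6: second feature points on the same lines (second frame {C}).
    A point with first-frame coordinates p has second-frame coordinates
    p + t, so the line directions in {C} are a - t, b - t, c - t with
    a = P4 - P1, b = P5 - P2, c = P6 - P3; pairwise perpendicularity
    gives three quadratic equations in t.
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    sub = lambda u, v: [ui - vi for ui, vi in zip(u, v)]
    cross = lambda u, v: [u[1]*v[2] - u[2]*v[1],
                          u[2]*v[0] - u[0]*v[2],
                          u[0]*v[1] - u[1]*v[0]]

    a, b, c = sub(P4, P1), sub(P5, P2), sub(P6, P3)
    r1, r2 = sub(a, c), sub(b, a)        # rows of the differenced linear system
    s1, s2 = dot(r1, b), dot(r2, c)      # corresponding right-hand sides
    # minimum-norm particular solution p of the rank-2 linear system
    g11, g12, g22 = dot(r1, r1), dot(r1, r2), dot(r2, r2)
    det = g11 * g22 - g12 * g12
    l1 = (g22 * s1 - g12 * s2) / det
    l2 = (g11 * s2 - g12 * s1) / det
    p = [l1 * r1[i] + l2 * r2[i] for i in range(3)]
    n = cross(r1, r2)                    # null-space direction of the system
    nn = math.sqrt(dot(n, n))
    n = [ni / nn for ni in n]
    # substitute t = p + lam*n into (a - t).(b - t) = 0: lam^2 + B*lam + C = 0
    ap, bp = sub(a, p), sub(b, p)
    B = -(dot(n, ap) + dot(n, bp))
    C = dot(ap, bp)
    disc = math.sqrt(B * B - 4.0 * C)

    def triple(lam):
        # orientation (signed volume) of the solved direction triad
        t = [p[i] + lam * n[i] for i in range(3)]
        u, v, w = sub(a, t), sub(b, t), sub(c, t)
        return dot(u, cross(v, w))

    # the two roots are mirror solutions; keep the right-handed one
    lam = max(((-B + disc) / 2.0, (-B - disc) / 2.0), key=triple)
    return [p[i] + lam * n[i] for i in range(3)]
```

With synthetic points generated from a known translation, the function returns that translation up to floating-point error.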
Further, in step 3 the coordinate position of the intersection point O of the three straight lines is obtained from plane equations in point-normal form. The plane OAB has the direction of $\vec{P_3P_6}$ as its normal and passes through the point P4, giving

$$(X_6-X_3-a)(x-X_4)+(Y_6-Y_3-b)(y-Y_4)+(Z_6-Z_3-c)(z-Z_4)=0,$$

where (x, y, z) here denotes a point on the plane.
The equations of the planes OBC and OAC are obtained in the same way, giving the system of equations for the coordinates of the intersection point O of the three straight lines:

$$\begin{cases}
(X_6-X_3-a)(x-X_4)+(Y_6-Y_3-b)(y-Y_4)+(Z_6-Z_3-c)(z-Z_4)=0\\
(X_4-X_1-a)(x-X_5)+(Y_4-Y_1-b)(y-Y_5)+(Z_4-Z_1-c)(z-Z_5)=0\\
(X_5-X_2-a)(x-X_6)+(Y_5-Y_2-b)(y-Y_6)+(Z_5-Z_2-c)(z-Z_6)=0
\end{cases}$$
Expanding the system of equations:

$$\begin{cases}
(X_6-X_3-a)\,x+(Y_6-Y_3-b)\,y+(Z_6-Z_3-c)\,z=(X_6-X_3-a)X_4+(Y_6-Y_3-b)Y_4+(Z_6-Z_3-c)Z_4\\
(X_4-X_1-a)\,x+(Y_4-Y_1-b)\,y+(Z_4-Z_1-c)\,z=(X_4-X_1-a)X_5+(Y_4-Y_1-b)Y_5+(Z_4-Z_1-c)Z_5\\
(X_5-X_2-a)\,x+(Y_5-Y_2-b)\,y+(Z_5-Z_2-c)\,z=(X_5-X_2-a)X_6+(Y_5-Y_2-b)Y_6+(Z_5-Z_2-c)Z_6
\end{cases}$$

Writing this as $A\,[x\;\;y\;\;z]^T=\beta$ and solving:

$$[x\;\;y\;\;z]^T=A^{+}\beta,$$

where $A^{+}$ again denotes the generalized inverse of the matrix $A$.
The values of x, y and z thus obtained are the coordinates of the intersection point O of the three straight lines, recorded as $[i\;\;j\;\;k]^T$.
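As a sketch of the plane-intersection step (plain Python, helper name illustrative), the following builds the three point-normal plane equations, with normals along $\vec{P_1P_4}$, $\vec{P_2P_5}$, $\vec{P_3P_6}$ and each plane passing through one measured point lying in it, and solves the resulting 3x3 system by Cramer's rule:

```python
def solve_intersection(P1, P2, P3, P4, P5, P6, t):
    """Intersect the planes OAB, OBC and OAC to find the corner point O.

    Plane OAB is normal to line OC and contains P4 (a point on OA);
    plane OBC is normal to OA and contains P5; plane OAC is normal to
    OB and contains P6.  t is the solved translation vector (a, b, c).
    """
    sub = lambda u, v: [u[i] - v[i] for i in range(3)]
    dot = lambda u, v: sum(u[i] * v[i] for i in range(3))
    u = sub(sub(P4, P1), t)   # direction of OA in the second frame {C}
    v = sub(sub(P5, P2), t)   # direction of OB
    w = sub(sub(P6, P3), t)   # direction of OC
    N = [w, u, v]                               # plane normals
    d = [dot(w, P4), dot(u, P5), dot(v, P6)]    # plane offsets

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(N)
    O = []
    for col in range(3):                        # Cramer's rule, column by column
        Nc = [row[:] for row in N]
        for r in range(3):
            Nc[r][col] = d[r]
        O.append(det3(Nc) / D)
    return O                                    # [i, j, k]
```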
Further, in step 3 a Cartesian rectangular coordinate system {O} is established with the intersection point O of the three straight lines as the origin of the coordinate system; the X axis of {O} coincides with the straight line OA, the Y axis with the straight line OB, and the Z axis with the straight line OC. Using the coordinate values of the vectors $\vec{P_1P_4}$, $\vec{P_2P_5}$ and $\vec{P_3P_6}$, the pose ${}^{C}_{O}T$ of the Cartesian rectangular coordinate system {O} in the second measurement coordinate system {C} of the 3D vision sensor is calculated, specifically:

$${}^{C}_{O}T=\begin{bmatrix}\dfrac{\vec{P_1P_4}}{|\vec{P_1P_4}|} & \dfrac{\vec{P_2P_5}}{|\vec{P_2P_5}|} & \dfrac{\vec{P_3P_6}}{|\vec{P_3P_6}|} & \begin{matrix}i\\ j\\ k\end{matrix}\\[4pt] 0\;\;\;0\;\;\;0 & & & 1\end{bmatrix}$$
where the column vector $[i\;\;j\;\;k]^T$ represents the position of the origin of the Cartesian rectangular coordinate system {O} in the second measurement coordinate system {C} of the 3D vision sensor, i.e. the solved coordinates of the intersection point O of the three straight lines.
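A minimal sketch of assembling the homogeneous pose ${}^{C}_{O}T$ from the normalized direction vectors and the solved corner point (plain Python, function name illustrative):

```python
import math

def pose_O_in_C(P1, P2, P3, P4, P5, P6, t, O):
    """Build the 4x4 homogeneous pose of frame {O} in the second frame {C}.

    The rotation columns are the unit direction vectors of OA, OB, OC
    (the X, Y, Z axes of {O}); the last column is the solved corner
    point O = [i, j, k].
    """
    def unit(q2, q1):
        d = [q2[i] - q1[i] - t[i] for i in range(3)]
        norm = math.sqrt(sum(x * x for x in d))
        return [x / norm for x in d]

    x_axis = unit(P4, P1)   # unit vector along OA
    y_axis = unit(P5, P2)   # unit vector along OB
    z_axis = unit(P6, P3)   # unit vector along OC
    rows = [[x_axis[i], y_axis[i], z_axis[i], O[i]] for i in range(3)]
    return rows + [[0.0, 0.0, 0.0, 1.0]]
```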
Further, in step 4:

The robot calibrates the robot base coordinate system at the intersection point O of the three straight lines, obtaining the matrix ${}^{B}_{O}T$. Coordinate conversion is then performed, i.e.

$${}^{B}_{O}T={}^{B}_{T}T\;{}^{T}_{C}T\;{}^{C}_{O}T,$$

from which, by the rules of matrix operations,

$${}^{T}_{C}T=\big({}^{B}_{T}T\big)^{-1}\,{}^{B}_{O}T\,\big({}^{C}_{O}T\big)^{-1},$$

where the matrix ${}^{T}_{C}T$ describes the pose of the second measurement coordinate system {C} of the 3D vision sensor in the robot tool coordinate system {T}, i.e. the hand-eye matrix to be solved; ${}^{B}_{T}T$ is the pose data of the robot when it controls the 3D vision sensor to perform the second measurement; ${}^{B}_{O}T$ is obtained by the robot calibrating the robot base coordinate system at the intersection point O of the three straight lines; and ${}^{C}_{O}T$ is the pose of the Cartesian rectangular coordinate system {O} in the second measurement coordinate system {C} of the 3D vision sensor.
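The coordinate conversion of step 4 is a short matrix identity once 4x4 multiplication and inversion are available. The sketch below (plain Python, function names illustrative) hand-rolls both for homogeneous transforms, using the closed form inverse of [R, p; 0, 1], namely [R^T, -R^T p; 0, 1]:

```python
def matmul4(A, B):
    """Product of two 4x4 matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv_se3(T):
    """Inverse of a homogeneous transform [R, p; 0, 1] = [R^T, -R^T p; 0, 1]."""
    Rt = [[T[j][i] for j in range(3)] for i in range(3)]   # R transposed
    p = [T[i][3] for i in range(3)]
    return ([Rt[i] + [-sum(Rt[i][k] * p[k] for k in range(3))] for i in range(3)]
            + [[0.0, 0.0, 0.0, 1.0]])

def hand_eye(T_B_T, T_B_O, T_C_O):
    """Solve  T_B_O = T_B_T * T_T_C * T_C_O  for the hand-eye matrix T_T_C."""
    return matmul4(matmul4(inv_se3(T_B_T), T_B_O), inv_se3(T_C_O))
```

Composing a known hand-eye matrix into a synthetic ${}^{B}_{O}T$ and solving recovers it exactly, which is a quick sanity check of the identity.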
The invention has the beneficial effects that: the method has the advantages of simple steps, capability of greatly reducing the workload and the working time of the hand-eye calibration of the robot vision system, low requirement on the professional level of technicians and high stability of the hand-eye calibration.
Another technical solution of the present invention for solving the above technical problems is as follows: a hand-eye calibration system for a 3D vision measurement system, comprising:
the first coordinate acquisition module is used for installing the 3D vision sensor at the tail end of the robot, setting three non-coplanar straight lines in a space, wherein the three straight lines are mutually perpendicular and intersect at the same point; measuring three first characteristic points on three straight lines by using a 3D vision sensor, wherein the three first characteristic points are positioned on the same laser plane, and acquiring coordinate positions of the three first characteristic points;
the second coordinate acquisition module measures three second feature points on the three straight lines by translating the 3D vision sensor with the robot, the three second feature points lying in the same laser plane, and acquires the coordinate positions of the three second feature points; the pose of the robot after the 3D vision sensor is translated is recorded;
the calculation module calculates the translation vector of the 3D vision sensor from the coordinate positions of the three first feature points and the three second feature points, and uses the translation vector to calculate the pose of the coordinate system at the intersection point of the three straight lines in the second measurement coordinate system {C} of the 3D vision sensor;
and the coordinate conversion module calibrates a robot base coordinate system at the point O of the intersection point of the three straight lines through the robot, performs coordinate conversion, and calculates the pose of a second measurement coordinate system { C } of the 3D vision sensor in a robot tool coordinate system { T }, namely a hand-eye matrix.
The invention has the beneficial effects that: the workload and the working time of calibrating the hands and the eyes of the robot vision system can be greatly reduced, the requirement on the professional level of technicians is low, and the stability of calibrating the hands and the eyes is high.
Drawings
FIG. 1 is a flow chart of a hand-eye calibration method of a 3D vision measurement system according to the present invention;
FIG. 2 is a schematic view of the second measurement performed after translating the 3D vision sensor according to the present invention;
FIG. 3 is a schematic diagram of a Cartesian rectangular coordinate system { O } established in accordance with the present invention;
fig. 4 is a block diagram of a hand-eye calibration system of a 3D vision measurement system according to the present invention.
In the drawings, the components represented by the respective reference numerals are listed below:
1. first coordinate acquisition module; 2. second coordinate acquisition module; 3. calculation module; 4. coordinate conversion module.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
Example 1:
as shown in fig. 1 to 3, a hand-eye calibration method of a 3D vision measurement system includes the following steps:
Step 1, mounting a 3D vision sensor at the tail end of a robot, and setting three non-coplanar straight lines in space, the three straight lines being mutually perpendicular and intersecting at the same point; measuring three first feature points on the three straight lines with the 3D vision sensor, the three first feature points lying in the same laser plane, and acquiring the coordinate positions of the three first feature points;
Step 2, translating the 3D vision sensor with the robot and measuring three second feature points on the three straight lines, the three second feature points lying in the same laser plane; acquiring the coordinate positions of the three second feature points, and recording the pose of the robot after the 3D vision sensor is translated;
Step 3, calculating the translation vector of the 3D vision sensor from the coordinate positions of the three first feature points and the three second feature points, and using the translation vector to calculate the pose of the intersection point of the three straight lines in the second measurement coordinate system {C} of the 3D vision sensor;
Step 4, calibrating the robot base coordinate system at the intersection point O of the three straight lines with the robot, performing coordinate conversion, and calculating the pose of the second measurement coordinate system {C} of the 3D vision sensor in the robot tool coordinate system {T}, i.e. the hand-eye matrix.
In the above embodiment, the three straight lines in step 1 are a straight line OA, a straight line OB and a straight line OC, which intersect at a point O; the coordinate position of the first feature point P1 on the straight line OA is $[X_1\;\;Y_1\;\;Z_1]^T$; the coordinate position of the first feature point P2 on the straight line OB is $[X_2\;\;Y_2\;\;Z_2]^T$; the coordinate position of the first feature point P3 on the straight line OC is $[X_3\;\;Y_3\;\;Z_3]^T$.
In step 2, the coordinate position of the second feature point P4 on the straight line OA is $[X_4\;\;Y_4\;\;Z_4]^T$; the coordinate position of the second feature point P5 on the straight line OB is $[X_5\;\;Y_5\;\;Z_5]^T$; the coordinate position of the second feature point P6 on the straight line OC is $[X_6\;\;Y_6\;\;Z_6]^T$. The pose of the robot after the 3D vision sensor is translated is the homogeneous transformation matrix ${}^{B}_{T}T$, where {T} denotes the robot tool coordinate system and {B} denotes the base coordinate system of the robot.
In the above embodiment, the algorithm for calculating the translation vector of the 3D vision sensor in step 3 is as follows.

Let the translation vector of the 3D vision sensor be $\vec{t}=(x,\;y,\;z)$. The coordinate positions of the three first feature points are converted into the coordinate system of the three second feature points, i.e. the second measurement coordinate system {C} of the 3D vision sensor: the coordinate position of point P1 becomes $[X_1+x\;\;Y_1+y\;\;Z_1+z]^T$, that of point P2 becomes $[X_2+x\;\;Y_2+y\;\;Z_2+z]^T$, and that of point P3 becomes $[X_3+x\;\;Y_3+y\;\;Z_3+z]^T$.

In this common coordinate system, point P1 and point P4 form the vector $\vec{P_1P_4}=(X_4-X_1-x,\;Y_4-Y_1-y,\;Z_4-Z_1-z)$; P2 and P5 form the vector $\vec{P_2P_5}=(X_5-X_2-x,\;Y_5-Y_2-y,\;Z_5-Z_2-z)$; P3 and P6 form the vector $\vec{P_3P_6}=(X_6-X_3-x,\;Y_6-Y_3-y,\;Z_6-Z_3-z)$.

A system of equations is constructed from the included angles between the straight lines OA, OB and OC:

$$\cos\angle AOB=\frac{\vec{P_1P_4}\cdot\vec{P_2P_5}}{|\vec{P_1P_4}|\,|\vec{P_2P_5}|},\qquad
\cos\angle BOC=\frac{\vec{P_2P_5}\cdot\vec{P_3P_6}}{|\vec{P_2P_5}|\,|\vec{P_3P_6}|},\qquad
\cos\angle AOC=\frac{\vec{P_1P_4}\cdot\vec{P_3P_6}}{|\vec{P_1P_4}|\,|\vec{P_3P_6}|}.$$

Since the straight lines OA, OB and OC are pairwise perpendicular, the above simplifies to

$$\vec{P_1P_4}\cdot\vec{P_2P_5}=0,\qquad \vec{P_2P_5}\cdot\vec{P_3P_6}=0,\qquad \vec{P_1P_4}\cdot\vec{P_3P_6}=0,$$

that is,

$$\begin{cases}
(X_4-X_1-x)(X_5-X_2-x)+(Y_4-Y_1-y)(Y_5-Y_2-y)+(Z_4-Z_1-z)(Z_5-Z_2-z)=0\\
(X_5-X_2-x)(X_6-X_3-x)+(Y_5-Y_2-y)(Y_6-Y_3-y)+(Z_5-Z_2-z)(Z_6-Z_3-z)=0\\
(X_4-X_1-x)(X_6-X_3-x)+(Y_4-Y_1-y)(Y_6-Y_3-y)+(Z_4-Z_1-z)(Z_6-Z_3-z)=0
\end{cases}$$

Expanding, each equation contains the same quadratic part $x^2+y^2+z^2$; taking the difference of the equations two at a time cancels the quadratic terms and leaves linear equations:

$$\begin{cases}
[(X_4-X_1)-(X_6-X_3)]\,x+[(Y_4-Y_1)-(Y_6-Y_3)]\,y+[(Z_4-Z_1)-(Z_6-Z_3)]\,z=(X_5-X_2)[(X_4-X_1)-(X_6-X_3)]+(Y_5-Y_2)[(Y_4-Y_1)-(Y_6-Y_3)]+(Z_5-Z_2)[(Z_4-Z_1)-(Z_6-Z_3)]\\
[(X_4-X_1)-(X_5-X_2)]\,x+[(Y_4-Y_1)-(Y_5-Y_2)]\,y+[(Z_4-Z_1)-(Z_5-Z_2)]\,z=(X_6-X_3)[(X_4-X_1)-(X_5-X_2)]+(Y_6-Y_3)[(Y_4-Y_1)-(Y_5-Y_2)]+(Z_6-Z_3)[(Z_4-Z_1)-(Z_5-Z_2)]\\
[(X_6-X_3)-(X_5-X_2)]\,x+[(Y_6-Y_3)-(Y_5-Y_2)]\,y+[(Z_6-Z_3)-(Z_5-Z_2)]\,z=(X_4-X_1)[(X_6-X_3)-(X_5-X_2)]+(Y_4-Y_1)[(Y_6-Y_3)-(Y_5-Y_2)]+(Z_4-Z_1)[(Z_6-Z_3)-(Z_5-Z_2)]
\end{cases}$$

Writing the linear system in matrix form $A\,[x\;\;y\;\;z]^T=\beta$, the solution is

$$[x\;\;y\;\;z]^T=A^{+}\beta,$$

where the superscript $+$ denotes the generalized (Moore-Penrose) inverse: $A^{+}$ is the generalized inverse of the matrix $A$. The values of x, y and z can then be solved; the resulting translation vector of the 3D vision sensor is recorded as $\vec{t}=(a,\;b,\;c)$.
In the above embodiment, in step 3 the coordinate position of the intersection point O of the three straight lines is obtained from plane equations in point-normal form. The plane OAB has the direction of $\vec{P_3P_6}$ as its normal and passes through the point P4, giving

$$(X_6-X_3-a)(x-X_4)+(Y_6-Y_3-b)(y-Y_4)+(Z_6-Z_3-c)(z-Z_4)=0,$$

where (x, y, z) here denotes a point on the plane. The equations of the planes OBC and OAC are obtained in the same way, giving the system of equations for the coordinates of the intersection point O of the three straight lines:

$$\begin{cases}
(X_6-X_3-a)(x-X_4)+(Y_6-Y_3-b)(y-Y_4)+(Z_6-Z_3-c)(z-Z_4)=0\\
(X_4-X_1-a)(x-X_5)+(Y_4-Y_1-b)(y-Y_5)+(Z_4-Z_1-c)(z-Z_5)=0\\
(X_5-X_2-a)(x-X_6)+(Y_5-Y_2-b)(y-Y_6)+(Z_5-Z_2-c)(z-Z_6)=0
\end{cases}$$

Expanding the system of equations:

$$\begin{cases}
(X_6-X_3-a)\,x+(Y_6-Y_3-b)\,y+(Z_6-Z_3-c)\,z=(X_6-X_3-a)X_4+(Y_6-Y_3-b)Y_4+(Z_6-Z_3-c)Z_4\\
(X_4-X_1-a)\,x+(Y_4-Y_1-b)\,y+(Z_4-Z_1-c)\,z=(X_4-X_1-a)X_5+(Y_4-Y_1-b)Y_5+(Z_4-Z_1-c)Z_5\\
(X_5-X_2-a)\,x+(Y_5-Y_2-b)\,y+(Z_5-Z_2-c)\,z=(X_5-X_2-a)X_6+(Y_5-Y_2-b)Y_6+(Z_5-Z_2-c)Z_6
\end{cases}$$

Writing this as $A\,[x\;\;y\;\;z]^T=\beta$ and solving:

$$[x\;\;y\;\;z]^T=A^{+}\beta,$$

where $A^{+}$ denotes the generalized inverse of the matrix $A$. The values of x, y and z thus obtained are the coordinates of the intersection point O of the three straight lines, recorded as $[i\;\;j\;\;k]^T$.
In the above embodiment, in step 3 a Cartesian rectangular coordinate system {O} is established with the intersection point O of the three straight lines as the origin of the coordinate system; the X axis of {O} coincides with the straight line OA, the Y axis with the straight line OB, and the Z axis with the straight line OC. Using the coordinate values of the vectors $\vec{P_1P_4}$, $\vec{P_2P_5}$ and $\vec{P_3P_6}$, the pose ${}^{C}_{O}T$ of the Cartesian rectangular coordinate system {O} in the second measurement coordinate system {C} of the 3D vision sensor is calculated, specifically:

$${}^{C}_{O}T=\begin{bmatrix}\dfrac{\vec{P_1P_4}}{|\vec{P_1P_4}|} & \dfrac{\vec{P_2P_5}}{|\vec{P_2P_5}|} & \dfrac{\vec{P_3P_6}}{|\vec{P_3P_6}|} & \begin{matrix}i\\ j\\ k\end{matrix}\\[4pt] 0\;\;\;0\;\;\;0 & & & 1\end{bmatrix}$$

Unfolding into components:

$${}^{C}_{O}T=\begin{bmatrix}n_x & o_x & a_x & i\\ n_y & o_y & a_y & j\\ n_z & o_z & a_z & k\\ 0 & 0 & 0 & 1\end{bmatrix}$$

where $[n_x\;n_y\;n_z]^T$, $[o_x\;o_y\;o_z]^T$ and $[a_x\;a_y\;a_z]^T$ are the unit vectors of $\vec{P_1P_4}$, $\vec{P_2P_5}$ and $\vec{P_3P_6}$ respectively, and the column vector $[i\;\;j\;\;k]^T$ represents the position of the origin of the Cartesian rectangular coordinate system {O} in the second measurement coordinate system {C}, i.e. the solved coordinates of the intersection point O of the three straight lines.
In the above embodiment, in step 4:

The robot calibrates the robot base coordinate system at the intersection point O of the three straight lines, obtaining the matrix ${}^{B}_{O}T$. Coordinate conversion is then performed, i.e.

$${}^{B}_{O}T={}^{B}_{T}T\;{}^{T}_{C}T\;{}^{C}_{O}T,$$

from which, by the rules of matrix operations,

$${}^{T}_{C}T=\big({}^{B}_{T}T\big)^{-1}\,{}^{B}_{O}T\,\big({}^{C}_{O}T\big)^{-1},$$

where the matrix ${}^{T}_{C}T$ describes the pose of the second measurement coordinate system {C} of the 3D vision sensor in the robot tool coordinate system {T}, i.e. the hand-eye matrix to be solved; ${}^{B}_{T}T$ is the pose data of the robot when it controls the 3D vision sensor to perform the second measurement; ${}^{B}_{O}T$ is obtained by the robot calibrating the robot base coordinate system at the intersection point O of the three straight lines; and ${}^{C}_{O}T$ is the pose of the Cartesian rectangular coordinate system {O} in the second measurement coordinate system {C} of the 3D vision sensor.
The hand-eye calibration method of the 3D vision measurement system has simple steps, greatly reduces the workload and working time of hand-eye calibration of a robot vision system, places low demands on the professional level of technicians, and achieves high hand-eye calibration stability.
Example 2:
as shown in fig. 2 to 4, a hand-eye calibration system of a 3D vision measurement system includes:
the first coordinate acquisition module 1 is used for installing the 3D vision sensor at the tail end of the robot, setting three non-coplanar straight lines in a space, wherein the three straight lines are mutually perpendicular and intersect at the same point; measuring three first characteristic points on three straight lines by using a 3D vision sensor, wherein the three first characteristic points are positioned on the same laser plane, and acquiring coordinate positions of the three first characteristic points;
the second coordinate acquisition module 2 measures three second feature points on the three straight lines by translating the 3D vision sensor with the robot, the three second feature points lying in the same laser plane, and acquires the coordinate positions of the three second feature points; the pose of the robot after the 3D vision sensor is translated is recorded;
the calculation module 3 calculates the translation vector of the 3D vision sensor from the coordinate positions of the three first feature points and the three second feature points, and uses the translation vector to calculate the pose of the coordinate system at the intersection point of the three straight lines in the second measurement coordinate system {C} of the 3D vision sensor;
the coordinate conversion module 4 calibrates the robot base coordinate system at the intersection point O of the three straight lines through the robot, performs coordinate conversion, and calculates the pose of the second measurement coordinate system {C} of the 3D vision sensor in the robot tool coordinate system {T}, i.e. the hand-eye matrix.
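The four modules above can be wired together as a small class. In the sketch below (plain Python), the numerical solvers of step 3 are injected as callables, and all names are illustrative rather than taken from the patent text; the point is only to make the data flow between the modules explicit.

```python
class HandEyeCalibrationSystem:
    """Skeleton mirroring the four modules of the calibration system.

    Module 1 records the first feature points, module 2 the second
    feature points plus the robot pose T_B_T, module 3 runs the injected
    solvers, and module 4 performs the final coordinate conversion.
    """

    def __init__(self, solve_translation, solve_corner, build_pose):
        self._solve_translation = solve_translation
        self._solve_corner = solve_corner
        self._build_pose = build_pose
        self._first = self._second = self._robot_pose = None

    def acquire_first(self, P1, P2, P3):          # module 1
        self._first = (P1, P2, P3)

    def acquire_second(self, P4, P5, P6, T_B_T):  # module 2
        self._second = (P4, P5, P6)
        self._robot_pose = T_B_T

    def calibrate(self, T_B_O):                   # modules 3 and 4
        pts = self._first + self._second
        t = self._solve_translation(*pts)
        O = self._solve_corner(*pts, t)
        T_C_O = self._build_pose(*pts, t, O)
        # hand-eye matrix: inv(T_B_T) * T_B_O * inv(T_C_O)
        return self._mul(self._mul(self._inv(self._robot_pose), T_B_O),
                         self._inv(T_C_O))

    @staticmethod
    def _mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
                for i in range(4)]

    @staticmethod
    def _inv(T):
        # inverse of a homogeneous transform [R, p; 0, 1]
        Rt = [[T[j][i] for j in range(3)] for i in range(3)]
        p = [T[i][3] for i in range(3)]
        return ([Rt[i] + [-sum(Rt[i][k] * p[k] for k in range(3))]
                 for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]])
```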
In the above embodiment, the three straight lines of the first coordinate acquisition module 1 are a straight line OA, a straight line OB and a straight line OC, which intersect at the point O; the coordinate position of the first feature point P1 on the straight line OA is $[X_1\;\;Y_1\;\;Z_1]^T$; the coordinate position of the first feature point P2 on the straight line OB is $[X_2\;\;Y_2\;\;Z_2]^T$; the coordinate position of the first feature point P3 on the straight line OC is $[X_3\;\;Y_3\;\;Z_3]^T$.
For the second coordinate acquisition module 2, the coordinate position of the second feature point P4 on the straight line OA is $[X_4\;\;Y_4\;\;Z_4]^T$; the coordinate position of the second feature point P5 on the straight line OB is $[X_5\;\;Y_5\;\;Z_5]^T$; the coordinate position of the second feature point P6 on the straight line OC is $[X_6\;\;Y_6\;\;Z_6]^T$. The pose of the robot after the 3D vision sensor is translated is the homogeneous transformation matrix ${}^{B}_{T}T$, where {T} denotes the robot tool coordinate system and {B} denotes the base coordinate system of the robot.
In the above embodiment, the algorithm used by the calculation module 3 to calculate the translation vector of the 3D vision sensor is as follows. Let the translation vector of the 3D vision sensor be t = (x, y, z). The coordinate positions of the three first feature points are converted into the coordinate system of the three second feature points, namely the second measurement coordinate system { C } of the 3D vision sensor: the coordinate position of point P1 becomes [X1+x Y1+y Z1+z]^T, that of point P2 becomes [X2+x Y2+y Z2+z]^T, and that of point P3 becomes [X3+x Y3+y Z3+z]^T.
In the coordinate system of the second feature points, point P1 and point P4 form the vector P1P4 = (X4-X1-x, Y4-Y1-y, Z4-Z1-z); point P2 and point P5 form the vector P2P5 = (X5-X2-x, Y5-Y2-y, Z5-Z2-z); point P3 and point P6 form the vector P3P6 = (X6-X3-x, Y6-Y3-y, Z6-Z3-z).
A system of equations is constructed from the included angles between the straight lines OA, OB and OC:

cos∠(OA, OB) = (P1P4 · P2P5) / (|P1P4| |P2P5|)
cos∠(OB, OC) = (P2P5 · P3P6) / (|P2P5| |P3P6|)
cos∠(OA, OC) = (P1P4 · P3P6) / (|P1P4| |P3P6|)
since the straight lines OA, OB and OC are pairwise perpendicular, the above system simplifies to:

P1P4 · P2P5 = 0,  P2P5 · P3P6 = 0,  P1P4 · P3P6 = 0
that is:

(X4-X1-x)(X5-X2-x) + (Y4-Y1-y)(Y5-Y2-y) + (Z4-Z1-z)(Z5-Z2-z) = 0
(X5-X2-x)(X6-X3-x) + (Y5-Y2-y)(Y6-Y3-y) + (Z5-Z2-z)(Z6-Z3-z) = 0
(X4-X1-x)(X6-X3-x) + (Y4-Y1-y)(Y6-Y3-y) + (Z4-Z1-z)(Z6-Z3-z) = 0
Expanding these equations, with the shorthand u1 = X4-X1, v1 = Y4-Y1, w1 = Z4-Z1, u2 = X5-X2, v2 = Y5-Y2, w2 = Z5-Z2, u3 = X6-X3, v3 = Y6-Y3, w3 = Z6-Z3, each takes the form

x^2 + y^2 + z^2 - (ui+uj)x - (vi+vj)y - (wi+wj)z + (ui*uj + vi*vj + wi*wj) = 0

for the index pairs (i,j) = (1,2), (2,3) and (1,3).

Taking the difference of every two equations eliminates the quadratic term x^2 + y^2 + z^2:

(u3-u1)x + (v3-v1)y + (w3-w1)z = (u2*u3 + v2*v3 + w2*w3) - (u1*u2 + v1*v2 + w1*w2)
(u1-u2)x + (v1-v2)y + (w1-w2)z = (u1*u3 + v1*v3 + w1*w3) - (u2*u3 + v2*v3 + w2*w3)
(u3-u2)x + (v3-v2)y + (w3-w2)z = (u1*u3 + v1*v3 + w1*w3) - (u1*u2 + v1*v2 + w1*w2)

Written in matrix form A [x y z]^T = b, the solution is obtained as

[x y z]^T = A^+ b,

where the superscript + at the upper right of the matrix denotes the generalized (Moore-Penrose) inverse A^+ of the matrix A. The values of x, y and z so obtained form the translation vector of the 3D vision sensor, recorded as t = (a, b, c).
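The translation-vector solve described above can be sketched in a few lines of NumPy. The helper name `solve_translation` and the simulated point coordinates are illustrative assumptions, not from the patent; note that the three difference equations are linearly dependent (the first two rows sum to the third), so the generalized inverse returns a least-norm solution of the linear system.

```python
import numpy as np

def solve_translation(p1, p2, p3, p4, p5, p6):
    """Sketch of the linearized translation solve; hypothetical helper."""
    # Differences between matched feature points: d1 = P4 - P1 (line OA),
    # d2 = P5 - P2 (line OB), d3 = P6 - P3 (line OC).
    d1 = np.asarray(p4, float) - np.asarray(p1, float)
    d2 = np.asarray(p5, float) - np.asarray(p2, float)
    d3 = np.asarray(p6, float) - np.asarray(p3, float)
    # Perpendicularity gives (d_i - t).(d_j - t) = 0 for (1,2), (2,3), (1,3);
    # pairwise differences cancel the common |t|^2 term, leaving A t = b.
    A = np.vstack([d3 - d1, d1 - d2, d3 - d2])
    b = np.array([d2 @ (d3 - d1), d3 @ (d1 - d2), d1 @ (d3 - d2)])
    # A is rank-deficient (row1 + row2 = row3), so A+ yields the
    # minimum-norm solution of the underdetermined linear system.
    return np.linalg.pinv(A) @ b
```

The returned vector satisfies the three difference equations exactly when the measurements are noise-free.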
In the above embodiment, the calculation module 3 obtains the coordinate position of the intersection point O of the three straight lines from plane equations in point-normal form. The plane OAB has the direction of the straight line OC, namely the vector (X6-X3-a, Y6-Y3-b, Z6-Z3-c), as its normal and passes through the point P4, giving:

(X6-X3-a)(x-X4) + (Y6-Y3-b)(y-Y4) + (Z6-Z3-c)(z-Z4) = 0;

the plane equations of the planes OBC and OAC are obtained in the same way, yielding the system of equations for the coordinates of the intersection point O:

(X4-X1-a)(x-X5) + (Y4-Y1-b)(y-Y5) + (Z4-Z1-c)(z-Z5) = 0
(X5-X2-a)(x-X6) + (Y5-Y2-b)(y-Y6) + (Z5-Z2-c)(z-Z6) = 0
(X6-X3-a)(x-X4) + (Y6-Y3-b)(y-Y4) + (Z6-Z3-c)(z-Z4) = 0
Expanding the system gives the linear form A [x y z]^T = b with

A = [ X4-X1-a  Y4-Y1-b  Z4-Z1-c ]      b = [ (X4-X1-a)X5 + (Y4-Y1-b)Y5 + (Z4-Z1-c)Z5 ]
    [ X5-X2-a  Y5-Y2-b  Z5-Z2-c ]          [ (X5-X2-a)X6 + (Y5-Y2-b)Y6 + (Z5-Z2-c)Z6 ]
    [ X6-X3-a  Y6-Y3-b  Z6-Z3-c ]          [ (X6-X3-a)X4 + (Y6-Y3-b)Y4 + (Z6-Z3-c)Z4 ]

which is solved as

[x y z]^T = A^+ b,

where the superscript + at the upper right of the matrix denotes the generalized inverse A^+ of the matrix A. The values of x, y and z so obtained are the coordinates of the intersection point O of the three straight lines, recorded as [i j k]^T.
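The intersection-point solve can be sketched the same way. The helper name `intersect_o` and its argument layout are hypothetical; `d1`, `d2`, `d3` are the feature-point differences P4-P1, P5-P2, P6-P3 and `t` is the translation vector from the previous step.

```python
import numpy as np

def intersect_o(p4, p5, p6, d1, d2, d3, t):
    """Sketch of the point-normal plane intersection; hypothetical helper."""
    # Plane OBC is normal to line OA (d1 - t) and passes through P5;
    # plane OAC is normal to line OB (d2 - t) and passes through P6;
    # plane OAB is normal to line OC (d3 - t) and passes through P4.
    n1, n2, n3 = d1 - t, d2 - t, d3 - t
    A = np.vstack([n1, n2, n3])
    b = np.array([n1 @ p5, n2 @ p6, n3 @ p4])
    return np.linalg.pinv(A) @ b  # coordinates [i j k]^T of O in {C}
```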
In the above embodiment, the calculation module 3 establishes a Cartesian rectangular coordinate system { O } with the intersection point O of the three straight lines as the origin; the X axis of { O } coincides with the straight line OA, the Y axis with the straight line OB, and the Z axis with the straight line OC. Using the unit vectors of P1P4, P2P5 and P3P6 as the coordinate axes, the pose ^C_O T of the Cartesian rectangular coordinate system { O } in the second measurement coordinate system { C } of the 3D vision sensor (the pose of { O } expressed in { C }) is:

^C_O T = [ P1P4/|P1P4|   P2P5/|P2P5|   P3P6/|P3P6|   [i j k]^T ]
         [      0              0             0            1    ]

where the first three columns are the unit direction vectors of the straight lines OA, OB and OC expressed in { C }, and the last column vector [i j k]^T represents the position of the origin of { O } in { C }, namely the previously obtained coordinates of the intersection point O of the three straight lines.
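Assembling the pose of { O } in { C } is then direct. The helper name `pose_o_in_c` is illustrative; its inputs are the feature-point differences, the translation vector and the intersection point from the previous steps.

```python
import numpy as np

def pose_o_in_c(d1, d2, d3, t, o):
    """Sketch: 4x4 pose of {O} in {C}; hypothetical helper."""
    T = np.eye(4)
    # Columns 0..2: unit direction vectors of OA, OB, OC expressed in {C}.
    for col, d in enumerate((d1, d2, d3)):
        axis = np.asarray(d, float) - np.asarray(t, float)
        T[:3, col] = axis / np.linalg.norm(axis)
    # Column 3: position of the origin O = [i j k]^T in {C}.
    T[:3, 3] = o
    return T
```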
In the above embodiment, the coordinate conversion module 4 operates as follows: a robot base coordinate system is calibrated at the intersection point O of the three straight lines through the robot, giving the matrix ^B_O T (the pose of { O } expressed in the robot base coordinate system { B }). Coordinate conversion then uses the closed kinematic chain

^B_T T · ^T_C T · ^C_O T = ^B_O T,

from which, by the rules of matrix operations, it follows that

^T_C T = (^B_T T)^-1 · ^B_O T · (^C_O T)^-1.

Here the matrix ^T_C T describes the pose of the second measurement coordinate system { C } of the 3D vision sensor in the robot tool coordinate system { T }, namely the hand-eye matrix to be solved; the matrix ^B_T T is the pose data of the robot when it controls the 3D vision sensor to perform the second measurement; the matrix ^B_O T is obtained by calibrating the robot base coordinate system at the intersection point O of the three straight lines; and the matrix ^C_O T is the pose of the Cartesian rectangular coordinate system { O } in the second measurement coordinate system { C } of the 3D vision sensor.
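Assuming the three poses above are available as 4x4 homogeneous NumPy arrays, the final matrix chain can be sketched as follows (the helper name `hand_eye_matrix` is illustrative, not from the patent):

```python
import numpy as np

def hand_eye_matrix(T_B_T, T_B_O, T_C_O):
    """From T_B_T @ T_T_C @ T_C_O = T_B_O, solve for the hand-eye matrix
    T_T_C = inv(T_B_T) @ T_B_O @ inv(T_C_O)."""
    return np.linalg.inv(T_B_T) @ T_B_O @ np.linalg.inv(T_C_O)
```

Because the chain is exact, composing a known hand-eye matrix into ^B_O T and solving recovers it to numerical precision.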
The hand-eye calibration system of the 3D vision measurement system greatly reduces the workload and working time of hand-eye calibration for a robot vision system, places low demands on the professional level of technicians, and yields highly stable hand-eye calibration results.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A hand-eye calibration method of a 3D vision measurement system is characterized by comprising the following steps:
step 1, mounting a 3D vision sensor at the tail end of a robot, and setting three non-coplanar straight lines in space, the three straight lines being mutually perpendicular and intersecting at the same point; measuring three first feature points on the three straight lines by using the 3D vision sensor, the three first feature points being located on the same laser plane, and acquiring the coordinate positions of the three first feature points;
step 2, translating the 3D vision sensor by the robot and measuring three second feature points on the three straight lines, the three second feature points being located on the same laser plane, and acquiring the coordinate positions of the three second feature points; recording the pose of the robot after the 3D vision sensor is translated;
step 3, calculating the translation vector of the 3D vision sensor from the coordinate positions of the three first feature points and the three second feature points, and calculating, with this translation vector, the pose of the coordinate system at the intersection point of the three straight lines in the second measurement coordinate system { C } of the 3D vision sensor;
step 4, calibrating a robot base coordinate system at the intersection point O of the three straight lines by the robot, performing coordinate conversion, and calculating the pose of the second measurement coordinate system { C } of the 3D vision sensor in the robot tool coordinate system { T }, namely the hand-eye matrix.
2. The hand-eye calibration method of the 3D vision measurement system according to claim 1, wherein the three straight lines in step 1 are a straight line OA, a straight line OB and a straight line OC, which intersect at a point O; the coordinate position of the first feature point P1 on the straight line OA is [X1 Y1 Z1]^T; the coordinate position of the first feature point P2 on the straight line OB is [X2 Y2 Z2]^T; the coordinate position of the first feature point P3 on the straight line OC is [X3 Y3 Z3]^T;
the coordinate position of the second feature point P4 on the straight line OA in step 2 is [X4 Y4 Z4]^T; the coordinate position of the second feature point P5 on the straight line OB is [X5 Y5 Z5]^T; the coordinate position of the second feature point P6 on the straight line OC is [X6 Y6 Z6]^T; and the pose of the robot after translating the 3D vision sensor is recorded as the homogeneous transform ^B_T T, where { T } represents the robot tool coordinate system, { B } represents the base coordinate system of the robot, and the notation ^X_Y T denotes the pose of frame { Y } expressed in frame { X }.
3. The hand-eye calibration method of the 3D vision measurement system according to claim 2, wherein the algorithm for calculating the translation vector of the 3D vision sensor in step 3 is as follows:
let the translation vector of the 3D vision sensor be t = (x, y, z); the coordinate positions of the three first feature points are converted into the coordinate system of the three second feature points, namely the second measurement coordinate system { C } of the 3D vision sensor: the coordinate position of point P1 becomes [X1+x Y1+y Z1+z]^T, that of point P2 becomes [X2+x Y2+y Z2+z]^T, and that of point P3 becomes [X3+x Y3+y Z3+z]^T;
in the coordinate system of the second feature points, point P1 and point P4 form the vector P1P4 = (X4-X1-x, Y4-Y1-y, Z4-Z1-z); point P2 and point P5 form the vector P2P5 = (X5-X2-x, Y5-Y2-y, Z5-Z2-z); point P3 and point P6 form the vector P3P6 = (X6-X3-x, Y6-Y3-y, Z6-Z3-z);
a system of equations is constructed from the included angles between the straight lines OA, OB and OC:

cos∠(OA, OB) = (P1P4 · P2P5) / (|P1P4| |P2P5|)
cos∠(OB, OC) = (P2P5 · P3P6) / (|P2P5| |P3P6|)
cos∠(OA, OC) = (P1P4 · P3P6) / (|P1P4| |P3P6|)

since the straight lines OA, OB and OC are pairwise perpendicular, the above system simplifies to:

P1P4 · P2P5 = 0,  P2P5 · P3P6 = 0,  P1P4 · P3P6 = 0

that is:

(X4-X1-x)(X5-X2-x) + (Y4-Y1-y)(Y5-Y2-y) + (Z4-Z1-z)(Z5-Z2-z) = 0
(X5-X2-x)(X6-X3-x) + (Y5-Y2-y)(Y6-Y3-y) + (Z5-Z2-z)(Z6-Z3-z) = 0
(X4-X1-x)(X6-X3-x) + (Y4-Y1-y)(Y6-Y3-y) + (Z4-Z1-z)(Z6-Z3-z) = 0

expanding, with the shorthand u1 = X4-X1, v1 = Y4-Y1, w1 = Z4-Z1, u2 = X5-X2, v2 = Y5-Y2, w2 = Z5-Z2, u3 = X6-X3, v3 = Y6-Y3, w3 = Z6-Z3, each equation takes the form

x^2 + y^2 + z^2 - (ui+uj)x - (vi+vj)y - (wi+wj)z + (ui*uj + vi*vj + wi*wj) = 0

for the index pairs (i,j) = (1,2), (2,3) and (1,3); taking the difference of every two equations eliminates the quadratic term x^2 + y^2 + z^2 and gives:

(u3-u1)x + (v3-v1)y + (w3-w1)z = (u2*u3 + v2*v3 + w2*w3) - (u1*u2 + v1*v2 + w1*w2)
(u1-u2)x + (v1-v2)y + (w1-w2)z = (u1*u3 + v1*v3 + w1*w3) - (u2*u3 + v2*v3 + w2*w3)
(u3-u2)x + (v3-v2)y + (w3-w2)z = (u1*u3 + v1*v3 + w1*w3) - (u1*u2 + v1*v2 + w1*w2)

writing this in matrix form A [x y z]^T = b, the solution is [x y z]^T = A^+ b, where the superscript + at the upper right of the matrix denotes the generalized (Moore-Penrose) inverse A^+ of the matrix A; the values of x, y and z so obtained form the translation vector of the 3D vision sensor, recorded as t = (a, b, c).
4. The hand-eye calibration method of the 3D vision measurement system according to claim 3, wherein in step 3 the coordinate position of the intersection point O of the three straight lines is obtained from plane equations in point-normal form: the plane OAB has the direction of the straight line OC, namely the vector (X6-X3-a, Y6-Y3-b, Z6-Z3-c), as its normal and passes through the point P4, giving:

(X6-X3-a)(x-X4) + (Y6-Y3-b)(y-Y4) + (Z6-Z3-c)(z-Z4) = 0;

the plane equations of the planes OBC and OAC are obtained in the same way, yielding the system of equations for the coordinates of the intersection point O:

(X4-X1-a)(x-X5) + (Y4-Y1-b)(y-Y5) + (Z4-Z1-c)(z-Z5) = 0
(X5-X2-a)(x-X6) + (Y5-Y2-b)(y-Y6) + (Z5-Z2-c)(z-Z6) = 0
(X6-X3-a)(x-X4) + (Y6-Y3-b)(y-Y4) + (Z6-Z3-c)(z-Z4) = 0

expanding the system gives the linear form A [x y z]^T = b with

A = [ X4-X1-a  Y4-Y1-b  Z4-Z1-c ]      b = [ (X4-X1-a)X5 + (Y4-Y1-b)Y5 + (Z4-Z1-c)Z5 ]
    [ X5-X2-a  Y5-Y2-b  Z5-Z2-c ]          [ (X5-X2-a)X6 + (Y5-Y2-b)Y6 + (Z5-Z2-c)Z6 ]
    [ X6-X3-a  Y6-Y3-b  Z6-Z3-c ]          [ (X6-X3-a)X4 + (Y6-Y3-b)Y4 + (Z6-Z3-c)Z4 ]

which is solved as [x y z]^T = A^+ b, where the superscript + at the upper right of the matrix denotes the generalized inverse A^+ of the matrix A; the values of x, y and z so obtained are the coordinates of the intersection point O of the three straight lines, recorded as [i j k]^T.
5. The hand-eye calibration method of the 3D vision measurement system according to claim 4, wherein in step 3 a Cartesian rectangular coordinate system { O } is established with the intersection point O of the three straight lines as the origin; the X axis of { O } coincides with the straight line OA, the Y axis with the straight line OB, and the Z axis with the straight line OC; using the unit vectors of P1P4, P2P5 and P3P6 as the coordinate axes, the pose ^C_O T of the Cartesian rectangular coordinate system { O } in the second measurement coordinate system { C } of the 3D vision sensor is:

^C_O T = [ P1P4/|P1P4|   P2P5/|P2P5|   P3P6/|P3P6|   [i j k]^T ]
         [      0              0             0            1    ]

wherein the column vector [i j k]^T represents the position of the origin of the Cartesian rectangular coordinate system { O } in the second measurement coordinate system { C } of the 3D vision sensor, namely the obtained coordinates of the intersection point O of the three straight lines.
6. The hand-eye calibration method of the 3D vision measurement system according to claim 4, wherein in step 4: a robot base coordinate system is calibrated at the intersection point O of the three straight lines by the robot, giving the matrix ^B_O T; coordinate conversion is performed using the closed kinematic chain

^B_T T · ^T_C T · ^C_O T = ^B_O T,

from which, by the rules of matrix operations, it follows that

^T_C T = (^B_T T)^-1 · ^B_O T · (^C_O T)^-1;

wherein the matrix ^T_C T describes the pose of the second measurement coordinate system { C } of the 3D vision sensor in the robot tool coordinate system { T }, namely the hand-eye matrix to be solved; the matrix ^B_T T is the pose data of the robot when it controls the 3D vision sensor to perform the second measurement; the matrix ^B_O T is obtained by calibrating the robot base coordinate system at the intersection point O of the three straight lines; and the matrix ^C_O T is the pose of the Cartesian rectangular coordinate system { O } in the second measurement coordinate system { C } of the 3D vision sensor.
7. A hand-eye calibration system for a 3D vision measurement system, comprising:
the first coordinate acquisition module (1) is used for installing the 3D vision sensor at the tail end of the robot, setting three non-coplanar straight lines in a space, and enabling the three straight lines to be mutually perpendicular and intersect at the same point; measuring three first characteristic points on three straight lines by using a 3D vision sensor, wherein the three first characteristic points are positioned on the same laser plane, and acquiring coordinate positions of the three first characteristic points;
the second coordinate acquisition module (2) measures three second feature points on the three straight lines by translating the 3D vision sensor with the robot, the three second feature points being located on the same laser plane, acquires the coordinate positions of the three second feature points, and records the pose of the robot after the 3D vision sensor is translated;
the calculation module (3) is used for calculating the translation vector of the 3D vision sensor from the coordinate positions of the three first feature points and the three second feature points, and for calculating, with this translation vector, the pose of the coordinate system at the intersection point of the three straight lines in the second measurement coordinate system { C } of the 3D vision sensor;
and the coordinate conversion module (4) calibrates a robot base coordinate system at the intersection point O of the three straight lines through the robot, performs coordinate conversion, and calculates the pose of the second measurement coordinate system { C } of the 3D vision sensor in the robot tool coordinate system { T }, namely the hand-eye matrix.
CN202010448062.4A 2020-05-25 2020-05-25 Hand-eye calibration system and method of 3D vision measurement system Active CN111975756B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010448062.4A CN111975756B (en) 2020-05-25 2020-05-25 Hand-eye calibration system and method of 3D vision measurement system


Publications (2)

Publication Number Publication Date
CN111975756A true CN111975756A (en) 2020-11-24
CN111975756B CN111975756B (en) 2022-02-15

Family

ID=73442019

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010448062.4A Active CN111975756B (en) 2020-05-25 2020-05-25 Hand-eye calibration system and method of 3D vision measurement system

Country Status (1)

Country Link
CN (1) CN111975756B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101377404A (en) * 2008-07-11 2009-03-04 北京航空航天大学 Method for disambiguating space round gesture recognition ambiguity based on angle restriction
US20090157226A1 (en) * 2004-11-19 2009-06-18 Dynalog ,Inc. Robot-cell calibration
CN102460065A (en) * 2009-06-25 2012-05-16 佳能株式会社 Information processing apparatus, information processing method, and program
CN104006825A (en) * 2013-02-25 2014-08-27 康耐视公司 System and method for calibration of machine vision cameras along at least three discrete planes
CN106940894A (en) * 2017-04-12 2017-07-11 无锡职业技术学院 A kind of hand-eye system self-calibrating method based on active vision
US9821466B2 (en) * 2014-12-21 2017-11-21 X Development Llc Devices and methods for encoder calibration
WO2019019136A1 (en) * 2017-07-28 2019-01-31 Qualcomm Incorporated Systems and methods for utilizing semantic information for navigation of a robotic device
CN209820416U (en) * 2019-04-29 2019-12-20 泉州华数机器人有限公司 Calibration device for monocular vision system
CN110645956A (en) * 2019-09-24 2020-01-03 南通大学 Multi-channel visual ranging method imitating stereoscopic vision of insect compound eye
CN110827359A (en) * 2019-10-29 2020-02-21 武汉大学 Checkerboard trihedron-based camera and laser external reference checking and correcting method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PINGJIANG WANG et al.: "Calibration Method of Roof Weld Coating Robot System Based", International Journal of Precision Engineering and Manufacturing *
GUO Xinnian et al.: "Hand-eye matrix and light plane calibration method based on active vision", Computer Engineering and Applications *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113514017A (en) * 2021-05-06 2021-10-19 南京理工大学 Parallel driving mechanism moving platform pose measuring method
CN113500584A (en) * 2021-07-15 2021-10-15 西北工业大学 Tail end error correction system and method of three-degree-of-freedom parallel robot
CN113500584B (en) * 2021-07-15 2022-06-28 西北工业大学 Tail end error correction system and method of three-degree-of-freedom parallel robot

Also Published As

Publication number Publication date
CN111975756B (en) 2022-02-15

Similar Documents

Publication Publication Date Title
CN110640745B (en) Vision-based robot automatic calibration method, equipment and storage medium
EP3091405B1 (en) Method, device and system for improving system accuracy of x-y motion platform
JP4021413B2 (en) Measuring device
CN106777656B (en) Industrial robot absolute accuracy calibration method based on PMPSD
CN110108208B (en) Error compensation method of five-axis non-contact measuring machine
CN110757504B (en) Positioning error compensation method of high-precision movable robot
JP5618770B2 (en) Robot calibration apparatus and calibration method
CN111975756B (en) Hand-eye calibration system and method of 3D vision measurement system
CN113386136B (en) Robot posture correction method and system based on standard spherical array target estimation
CN110136204B (en) Sound film dome assembly system based on calibration of machine tool position of bilateral telecentric lens camera
CN110666798A (en) Robot vision calibration method based on perspective transformation model
JP2009534681A (en) Differential calibration
CN113211431B (en) Pose estimation method based on two-dimensional code correction robot system
CN112873213B (en) Method for improving coordinate system calibration precision of six-joint robot tool
CN112454366A (en) Hand-eye calibration method
CN114347013A (en) Method for assembling printed circuit board and FPC flexible cable and related equipment
CN115284292A (en) Mechanical arm hand-eye calibration method and device based on laser camera
CN115179323A (en) Machine end pose measuring device based on telecentric vision constraint and precision improving method
CN114029982A (en) Hand-eye calibration device and calibration method of camera outside robot arm
CN112975959B (en) Machine vision-based radiator assembling and positioning method, system and medium
CN112762822B (en) Mechanical arm calibration method and system based on laser tracker
JP6912529B2 (en) How to correct the visual guidance robot arm
CN108109179B (en) Camera attitude correction method based on pinhole camera model
CN116394254A (en) Zero calibration method and device for robot and computer storage medium
CN115205390A (en) Industrial robot surface structured light stereo camera pose online calibration method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20201124

Assignee: QUANZHOU HUASHU ROBOT CO.,LTD.

Assignor: QUANZHOU-HUST INTELLIGENT MANUFACTURING FUTURE

Contract record no.: X2024350000062

Denomination of invention: A Hand Eye Calibration System and Method for 3D Vision Measurement System

Granted publication date: 20220215

License type: Common License

Record date: 20240429