CN110555889B - CALTag and point cloud information-based depth camera hand-eye calibration method

Info

Publication number
CN110555889B
Authority
CN
China
Prior art keywords
calibration
matrix
hand
eye
camera
Prior art date
2019-08-27
Legal status
Active
Application number
CN201910793218.XA
Other languages
Chinese (zh)
Other versions
CN110555889A (en)
Inventor
陶唐飞
王伟
徐佳宇
杨兴宇
徐光华
郑翔
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
2019-08-27
Filing date
2019-08-27
Publication date
2021-01-15
Application filed by Xian Jiaotong University
Priority to CN201910793218.XA
Publication of CN110555889A
Application granted
Publication of CN110555889B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/85: Stereo camera calibration

Abstract

A depth camera hand-eye calibration method based on CALTag and point cloud information first establishes a mathematical model of hand-eye calibration to obtain the hand-eye calibration equation A·X = X·B. A CALTag calibration plate is then used in place of the traditional checkerboard calibration plate, which improves the accuracy of calibration-plate pose identification, and matrix A of the hand-eye calibration equation is computed. Matrix B is then solved by combining the obtained matrix A with the forward kinematics of the mechanical arm, and the hand-eye calibration equation A·X = X·B is solved based on Lie group theory. Finally, using the acquired point cloud depth information, a calibration matrix better suited to three-dimensional vision scenes is obtained with a trust-region reflective iterative optimization algorithm. The method accurately determines the coordinate transformation between the point cloud coordinate system and the robot coordinate system, achieves high grabbing precision, and is suitable for application scenarios in which a mechanical arm grabs objects in three-dimensional vision.

Description

CALTag and point cloud information-based depth camera hand-eye calibration method
Technical Field
The invention relates to the technical field of machine vision, in particular to a CALTag and point cloud information-based hand-eye calibration method for a depth camera.
Background
In industrial robot applications, identifying and grabbing target objects is the most common task for industrial robots in production. A machine-vision-based robot system comprises a vision part, a mechanical-arm part and the working environment; its calibration problem is that of solving the coordinate transformation between the vision part and the mechanical-arm part, and solving the transformation between the robot base coordinate system and the camera coordinate system is known academically as the hand-eye calibration problem. The robot must determine its grabbing pose from the target-object pose obtained by the camera; this pose lies in the camera coordinate system and must be converted into the robot tool coordinate system before the grabbing pose of the robot end effector can be determined. Hand-eye calibration is the prerequisite for completing this conversion and achieving hand-eye coordination. Moreover, the hand-eye calibration accuracy directly affects the final grabbing accuracy of the robot.
Many hand-eye calibration methods exist at present; most target robot systems consisting of a mechanical arm and a monocular industrial camera, while some target systems consisting of a mechanical arm and a depth camera. Because the point cloud coordinate system and the two-dimensional image coordinate system of a depth camera do not coincide exactly, directly applying the traditional monocular-camera hand-eye calibration method to a depth camera introduces a certain deviation. Grabbing objects in arbitrary postures requires the mechanical arm to position itself accurately anywhere in the workspace; existing hand-eye calibration methods cannot accurately determine the coordinate transformation between the point cloud coordinate system and the robot coordinate system, so the grabbing precision is low.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention aims to provide a CALTag and point cloud information-based depth camera hand-eye calibration method, which can accurately determine the coordinate transformation between a point cloud coordinate system and a robot coordinate system, has high capture precision, and is suitable for an application scene in which a mechanical arm captures an object in three-dimensional vision.
In order to achieve the purpose, the invention adopts the technical scheme that:
a CALTag and point cloud information-based hand-eye calibration method for a depth camera comprises the following steps:
step 1), establishing a mathematical model of a hand-eye calibration system:
in a hand-eye calibration system, the hand-eye relationship is classified, according to the relative mounting of the camera and the robot, into the Eye-in-Hand model and the Eye-to-Hand model; in the Eye-in-Hand model the camera is mounted at the end of the robot and moves with the end while the robot works; in the Eye-to-Hand model the camera is fixedly mounted and remains stationary relative to the target while the robot moves; for the Eye-to-Hand model, a calibration plate is mounted at the end of the mechanical arm and its pose is estimated by the camera, realizing automatic hand-eye calibration;
the Eye-to-Hand model comprises four coordinate systems: the robot base coordinate system Δbase, the robot end-joint coordinate system Δtool, the calibration-plate coordinate system Δcal, and the depth-camera coordinate system Δcam,
the transformations between the coordinate systems are defined as follows:
TbaseHtool(i): coordinate transformation from Δbase to Δtool in the i-th calibration;
TtoolHcal: coordinate transformation from Δtool to Δcal;
TcamHcal(i): coordinate transformation from Δcam to Δcal in the i-th calibration;
TcamHbase: coordinate transformation from Δcam to Δbase,
in hand-eye calibration, the pose of the mechanical arm is changed, an image of the calibration plate on the arm is then captured by the camera, the position values of each arm joint are recorded at that moment, and the calibration is repeated more than 12 times; from the relationships among the coordinate systems, the coordinate transformation relation in the i-th calibration is obtained as:
TcamHcal(i) = TcamHbase · TbaseHtool(i) · TtoolHcal
combining this with the coordinate transformation relation of the (i+1)-th calibration and eliminating TtoolHcal yields:
TcamHcal(i+1) · TcamHcal(i)⁻¹ · TcamHbase = TcamHbase · TbaseHtool(i+1) · TbaseHtool(i)⁻¹
denote
A = TcamHcal(i+1) · TcamHcal(i)⁻¹,  B = TbaseHtool(i+1) · TbaseHtool(i)⁻¹
and write the solution target TcamHbase as X, which gives the equation:
A·X = X·B
the physical meaning of matrix A is the coordinate transformation of the calibration plate in the camera coordinate system between the arm poses before and after the motion; the physical meaning of matrix B is the coordinate transformation of the robot end-joint coordinate system Δtool in the robot base coordinate system between the arm poses before and after the motion; once matrices A and B are computed, the hand-eye calibration problem reduces to solving the classical equation A·X = X·B;
step 2), identifying the pose of the CALTag calibration plate:
solving the calibration matrix first requires matrix A of the classical equation A·X = X·B, which is obtained by determining the pose of the calibration plate in the camera coordinate system; the pose of the CALTag calibration plate is identified by the following steps:
2.1) computing connected regions: convert the original image to grayscale, detect its edges with a Sobel filter, remove non-edge pixels with morphological operations, then apply adaptive thresholding to the image, and finally compute the connected regions in the image;
2.2) removing invalid connected regions: background regions contained in the connected regions generated in step 2.1) are filtered out by two criteria: first, regions whose area is smaller than 256 pixels or larger than 1/8 of the input image; second, regions whose Euler number is smaller than 0 or greater than 3, the Euler number being defined as the total number of objects in the picture minus the number of holes in them;
2.3) quadrilateral fitting: compute the gradient of each pixel on the edges of the connected regions remaining after step 2.2), obtain the four dominant edge directions with Lloyd's K-Means clustering algorithm, obtain the boundary lines by least squares, determine the parallel relations between the boundary lines, and finally fit a quadrilateral from the two pairs of parallel lines found;
2.4) corner detection: for the quadrilateral fitted in step 2.3), let p be a pixel in the neighborhood of the corner x; the product of the image gradient at p and the vector from p to the corner along the edge direction is 0, i.e. ∇I(p)ᵀ·(x − p) = 0; because ∇I(p)ᵀ·(x − p) = 0 is a linear system, the precise corner position can be computed iteratively, with the quadrilateral vertex obtained in step 2.3) as the initial value;
2.5) marker verification: determine the position of the binary code at the center of each square from the corner positions obtained in step 2.4), then verify the binary code, and keep only the markers whose binary code verifies correctly;
2.6) locating missing corners: find corners that are visible in the input image but were missed during detection; if at least one binary code has been correctly identified, the theoretical positions of the remaining corners in the image can be deduced, since their positions in the checkerboard pattern are known in advance;
2.7) corner verification: verify the authenticity of the corners determined in step 2.6); a corner must pass two tests to be considered genuine: the first test checks whether the pixel intensities in the corner's local neighborhood follow a beta distribution; the second tests whether a small circle around the corner shows the alternating black-white pattern;
step 3), solving the calibration matrix based on Lie group theory:
from the hand-eye calibration equation A·X = X·B, matrices A and B must be determined before the calibration matrix X can be solved, where matrix A was obtained in step 2); matrix B is obtained by collecting the arm joint values at each pose and evaluating the forward kinematics of the mechanical arm; by changing the arm pose several times, multiple groups of observations {(A₁,B₁), (A₂,B₂), …, (A_k,B_k)} are obtained, and the solution of A·X = X·B is converted into the minimization problem:
X = argmin_X Σ_{i=1..k} d(A_i·X, X·B_i)²
where d denotes a distance metric on the Euclidean group;
expanding A·X = X·B in homogeneous matrix form:
[R_A t_A; 0 1] · [R_X t_X; 0 1] = [R_X t_X; 0 1] · [R_B t_B; 0 1]
The simplification is as follows:
R_A·R_X = R_X·R_B
R_A·t_X + t_A = R_X·t_B + t_X
denote log R_A = [α] and log R_B = [β], where [v] is the skew-symmetric matrix of the vector v; Lie group theory then gives
α = R_X·β
When there are more than two groups of observed values, the solution is
Rx=(MTM)-0.5MT
tx=(RA-I)-1(RxtB-tA)
Wherein:
Figure BDA0002180094670000052
step 4), optimizing the calibration matrix obtained in step 3) using the point cloud depth information: a marker point is introduced on the mechanical-arm end effector and its coordinates in the robot coordinate system and in the camera coordinate system are obtained; in the neighborhood of the result computed in step 3), the calibration matrix is optimized through multiple iterations so that the error between the calculated and measured values is minimized;
when the mechanical arm is at pose i, denote the coordinates of the marker point in the camera coordinate system by P_cam(i) and in the robot coordinate system by P_base(i), which gives:
P_base(i) = X·P_cam(i)
the equation is solved by minimizing
Σ_i ‖P_base(i) − X·P_cam(i)‖²
with the trust-region reflective (Trust Region Reflective) algorithm, solving for the six parameters of X: the Euler angles R, P, Y and the displacement x, y, z;
the trust-region reflective algorithm is very sensitive to the initial value; the result computed in step 3) is used as the initial value, and the calibration matrix between the camera and mechanical-arm coordinate systems is computed through multiple iterations.
To obtain P_base(i), the precise position of the marker point on the mechanical-arm end effector is first calibrated by the four-point method, and P_base(i) is then read from the robot's hand-held teach pendant.
To obtain P_cam(i), the mechanical arm is moved below the camera, a point cloud of the arm end effector is acquired and visualized, the marker point is selected manually in the point-cloud visualization interface, and its coordinates are recorded as P_cam(i).
The invention has the beneficial effects that:
1. A CALTag calibration plate is used instead of the traditional checkerboard calibration plate, improving the accuracy of calibration-plate pose identification.
2. Using the acquired point cloud depth information, a calibration matrix better suited to three-dimensional vision scenes is obtained with a trust-region reflective iterative optimization algorithm.
3. The CALTag and point cloud information-based depth camera hand-eye calibration technique can be successfully applied to the hand-eye calibration of robot systems in which a depth camera guides a mechanical arm to grab.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a diagram of an Eye-to-Hand calibration model according to an embodiment.
FIG. 3 is a graph of the residual modulus as a function of iteration number for an embodiment.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
Referring to FIG. 1, a CALTag and point cloud information-based depth camera hand-eye calibration method includes the following steps:
step 1), establishing a mathematical model of a hand-eye calibration system:
In a hand-eye calibration system, the hand-eye relationship is classified, according to the relative mounting of the camera and the robot, into the Eye-in-Hand model and the Eye-to-Hand model. In the Eye-in-Hand model the camera is mounted at the end of the robot and moves with the end during robot operation. In the Eye-to-Hand model the camera is fixedly mounted and remains stationary relative to the target while the robot moves.
For a hand-eye system whose eye is fixed externally, i.e. the Eye-to-Hand model, a calibration plate mounted on the end of the mechanical arm lets the camera estimate the plate's pose, realizing automatic hand-eye calibration. As shown in FIG. 2, the Eye-to-Hand model includes four coordinate systems: the robot base coordinate system Δbase, the robot end-joint (sixth-joint) coordinate system Δtool, the calibration-plate coordinate system Δcal, and the depth-camera coordinate system Δcam.
The transformations between the coordinate systems are defined as follows:
TbaseHtool(i): coordinate transformation from Δbase to Δtool in the i-th experiment;
TtoolHcal: coordinate transformation from Δtool to Δcal;
TcamHcal(i): coordinate transformation from Δcam to Δcal in the i-th experiment;
TcamHbase: coordinate transformation from Δcam to Δbase.
In the hand-eye calibration experiment, the pose of the mechanical arm is changed, an image of the calibration plate on the arm is captured by the camera, and the position values of each arm joint at that moment are recorded. To ensure sufficiently accurate results, the experiment is generally repeated more than 12 times. From the relationships among the coordinate systems, the coordinate transformation relation in the i-th experiment is obtained as follows.
TcamHcal(i) = TcamHbase · TbaseHtool(i) · TtoolHcal
Combining this with the coordinate transformation relation of the (i+1)-th experiment and eliminating TtoolHcal yields the following formula.
TcamHcal(i+1) · TcamHcal(i)⁻¹ · TcamHbase = TcamHbase · TbaseHtool(i+1) · TbaseHtool(i)⁻¹
Denote
A = TcamHcal(i+1) · TcamHcal(i)⁻¹,  B = TbaseHtool(i+1) · TbaseHtool(i)⁻¹
and write the solution target TcamHbase as X, which gives the equation:
A·X = X·B
From the definitions of matrices A and B, the physical meaning of matrix A is the coordinate transformation of the calibration plate in the camera coordinate system between the arm poses before and after the motion; the physical meaning of matrix B is the coordinate transformation of the robot end-joint coordinate system Δtool in the robot base coordinate system between the arm poses before and after the motion. Once matrices A and B are computed, the hand-eye calibration problem reduces to solving the classical equation A·X = X·B;
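As an illustration of this step (a minimal sketch, not part of the patent text), the per-motion pairs (A_i, B_i) of A·X = X·B can be assembled from consecutive poses as follows; camHcal and baseHtool are assumed to be lists of 4×4 homogeneous matrices obtained from the camera and from forward kinematics respectively:

```python
# Illustrative sketch (not from the patent): assemble the per-motion pairs
# (A_i, B_i) of A·X = X·B from k recorded poses. camHcal[i] is the
# calibration-plate pose estimated by the camera at pose i, baseHtool[i]
# the end-joint pose from forward kinematics; both are 4x4 numpy arrays.
import numpy as np

def motion_pairs(camHcal, baseHtool):
    """Return lists of A_i and B_i built from consecutive pose pairs."""
    As, Bs = [], []
    for i in range(len(camHcal) - 1):
        # A_i: motion of the calibration plate in the camera frame
        As.append(camHcal[i + 1] @ np.linalg.inv(camHcal[i]))
        # B_i: motion of the end joint in the robot base frame
        Bs.append(baseHtool[i + 1] @ np.linalg.inv(baseHtool[i]))
    return As, Bs
```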
step 2, identifying the pose of the CALTag calibration plate:
Before solving the calibration matrix, a key step is to solve matrix A of the classical equation A·X = X·B; since A is obtained by determining the pose of the calibration plate in the camera coordinate system, this section mainly addresses the pose-recognition problem of the calibration plate.
CALTag (CALibration Tag) is a planar self-identifying marker that can be used for automatic camera calibration. Compared with the traditional checkerboard calibration plate, CALTag adds a marker inside each square, and higher precision can be obtained through relocation; the plate can be identified successfully even when it is partially occluded, which greatly relaxes the quality requirements on experimental samples; furthermore, corners in the image need not be selected manually and no complicated parameters need to be set, so calibration based on the CALTag plate can be fully automatic.
The method for identifying the pose of the CALTag calibration plate comprises the following steps:
2.1) computing connected regions: convert the original image to grayscale, detect its edges with a Sobel filter, remove non-edge pixels with morphological operations, then apply adaptive thresholding to the image, and finally compute the connected regions in the image;
2.2) removing invalid connected regions: background regions contained in the connected regions generated in step 2.1) must be filtered out by two criteria: first, regions whose area is smaller than 256 pixels or larger than 1/8 of the input image; second, regions whose Euler number is smaller than 0 or greater than 3, the Euler number being defined as the total number of objects in the picture minus the number of holes in them; the advantage of Euler-number filtering is that it is resolution-independent and requires no parameter tuning;
2.3) quadrilateral fitting: compute the gradient of each pixel on the edges of the connected regions remaining after step 2.2), obtain the four dominant edge directions with Lloyd's K-Means clustering algorithm, obtain the boundary lines by least squares, determine the parallel relations between the boundary lines, and finally fit a quadrilateral from the two pairs of parallel lines found;
2.4) corner detection: for the quadrilateral fitted in step 2.3), denote a pixel in the neighborhood of the corner x by p; except at the corner itself, the image gradient along an edge is non-zero, so the product of the gradient at p and the vector from p to the corner along the edge direction is 0, i.e. ∇I(p)ᵀ·(x − p) = 0; because ∇I(p)ᵀ·(x − p) = 0 is a linear system, the precise corner position can be computed iteratively, with the quadrilateral vertex obtained in step 2.3) as the initial value (a sketch of this refinement follows the list below);
2.5) marker verification: determine the position of the binary code at the center of each square from the corner positions obtained in step 2.4), then verify the binary code, and keep only the markers whose binary code verifies correctly;
2.6) locating missing corners: an attempt is made to find corners that are visible in the input image but were missed during detection; if at least one binary code has been correctly identified, the theoretical positions of the remaining corners in the image can be deduced, since their positions in the checkerboard pattern are known in advance;
2.7) corner verification: occlusion, reflection and similar conditions may produce false corner detections, so the authenticity of the corners determined in step 2.6) must be verified; a corner must pass two tests to be considered genuine: the first test checks whether the pixel intensities in the corner's local neighborhood follow a beta distribution; the second tests whether a small circle around the corner shows the alternating black-white pattern;
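The subpixel refinement of step 2.4) can be sketched as follows (an illustrative sketch only; the function and parameter names are hypothetical, and a float grayscale image with a window that stays inside the image is assumed). Each neighborhood pixel p contributes the constraint ∇I(p)ᵀ·(x − p) = 0, and stacking the constraints gives a small linear system for the corner x:

```python
# Illustrative sketch of the step-2.4) subpixel refinement: every pixel p in
# a window around the corner x contributes grad(p)·(x - p) = 0; stacking
# them gives (sum G_p)·x = sum G_p·p with G_p = g·g^T.
import numpy as np

def refine_corner(img, x0, half_win=5, iters=100):
    gy, gx = np.gradient(img)                # image gradients (rows, cols)
    x = np.asarray(x0, dtype=float)          # initial corner (col, row)
    for _ in range(iters):
        G = np.zeros((2, 2))
        b = np.zeros(2)
        c0, r0 = int(round(x[0])), int(round(x[1]))
        for r in range(r0 - half_win, r0 + half_win + 1):
            for c in range(c0 - half_win, c0 + half_win + 1):
                g = np.array([gx[r, c], gy[r, c]])
                Gp = np.outer(g, g)          # G_p = g·g^T
                G += Gp
                b += Gp @ np.array([c, r], dtype=float)
        x_new = np.linalg.lstsq(G, b, rcond=None)[0]
        if np.linalg.norm(x_new - x) < 1e-4:  # converged
            break
        x = x_new
    return x
```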
In the hand-eye calibration experiment, the CALTag calibration plate is fixed on the end effector of the mechanical arm, and the Kinect camera is fixed on the work table. Kinect, an Xbox 360 motion-sensing peripheral released by Microsoft Corporation in 2010, is an active 3D measuring device widely used today in object recognition, three-dimensional reconstruction, indoor three-dimensional measurement and similar fields thanks to its low cost, high speed and high reliability. The Kinect integrates an infrared camera, a color camera and a depth camera, and can acquire color images, depth images and audio signals synchronously. The infrared camera has a maximum resolution of 1280 × 1024 pixels and a 57° × 45° field of view; the color camera has a maximum resolution of 1280 × 1024 pixels and a 63° × 50° field of view; the depth image is acquired at 640 × 480 pixels, the depth and color images can be output synchronously at 640 × 480 pixels, the depth measurement range is 0.5 m to 4 m, and the measurement accuracy and depth resolution decrease as the measurement distance increases.
The arm pose was changed multiple times while the Kinect captured pictures of the calibration plate on the arm end effector at the different poses. In the experiment, the sampling resolution of the camera was 640 × 480. The CALTag calibration plate has 8 rows and 4 columns of squares with a side length of 25.4 mm, giving 45 corners. The number of iterations for corner computation was set to 100, and the input picture sample size was 15.
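Given the detected corner image coordinates and the plate geometry above, the plate pose TcamHcal, and from it matrix A, can be estimated with a standard PnP solver. The sketch below uses OpenCV's solvePnP and assumes a 9 × 5 grid of corner points for the 8 × 4 plate; the grid layout and helper names are assumptions, not the patent's code:

```python
# Illustrative sketch (hypothetical helpers): estimate the plate pose
# TcamHcal from the detected corners. Assumes a 9 x 5 corner grid for the
# 8 x 4 plate of 25.4 mm squares, plus intrinsics K, dist from a prior
# intrinsic calibration.
import numpy as np
import cv2

# plate corner coordinates in the plate frame (metres), z = 0
obj_pts = np.array([[c * 0.0254, r * 0.0254, 0.0]
                    for r in range(9) for c in range(5)], dtype=np.float32)

def plate_pose(img_pts, K, dist):
    """img_pts: 45x2 detected corner pixels, ordered like obj_pts."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts, np.asarray(img_pts, np.float32),
                                  K, dist)
    assert ok
    R, _ = cv2.Rodrigues(rvec)                # rotation vector -> 3x3 matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                                  # TcamHcal at this arm pose
```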
To measure the recognition accuracy of the calibration, the residual modulus is used to reflect the recognition effect; FIG. 3 shows the residual modulus as a function of the iteration number. A residual is the difference between the actual observed value and the estimated value, and the residual modulus is the norm of the residual vector, which reflects the effect of random error. As FIG. 3 shows, after about 20 iterations the residual modulus converges and finally settles around 0.000657, and the average pixel error of the corners is 0.1254 pixels.
Step 3, solving the calibration matrix based on Lie group theory:
From the hand-eye calibration equation A·X = X·B, matrices A and B must be determined before the calibration matrix X can be solved, where matrix A was obtained in step 2. Matrix B is obtained by collecting the arm joint values at each pose and evaluating the forward kinematics of the mechanical arm. By changing the arm pose several times, multiple groups of observations {(A₁,B₁), (A₂,B₂), …, (A_k,B_k)} can be obtained, and the solution of A·X = X·B is converted into a minimization problem, described mathematically as follows.
X = argmin_X Σ_{i=1..k} d(A_i·X, X·B_i)²
where d denotes a distance metric on the Euclidean group;
Expanding A·X = X·B in homogeneous matrix form:
[R_A t_A; 0 1] · [R_X t_X; 0 1] = [R_X t_X; 0 1] · [R_B t_B; 0 1]
The simplification is as follows:
R_A·R_X = R_X·R_B
R_A·t_X + t_A = R_X·t_B + t_X
Denote log R_A = [α] and log R_B = [β], where [v] is the skew-symmetric matrix of the vector v; Lie group theory then gives
α = R_X·β
When there are more than two groups of observed values, the solution is
Rx=(MTM)-0.5MT
tx=(RA-I)-1(RxtB-tA)
Wherein:
Figure BDA0002180094670000111
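A compact sketch of this closed-form solution (in the style of Park and Martin; the function names are illustrative, and the translation part is solved as one stacked least-squares system, a standard generalization of the per-pair formula above):

```python
# Illustrative sketch of the step-3 closed-form solution; As, Bs are the
# lists of 4x4 motion matrices from step 2.
import numpy as np
from scipy.linalg import logm, sqrtm, inv

def vee(S):
    """Skew-symmetric matrix -> 3-vector."""
    return np.array([S[2, 1], S[0, 2], S[1, 0]])

def solve_AX_XB(As, Bs):
    M = np.zeros((3, 3))
    for A, B in zip(As, Bs):
        alpha = vee(np.real(logm(A[:3, :3])))   # alpha from log R_A
        beta = vee(np.real(logm(B[:3, :3])))    # beta from log R_B
        M += np.outer(beta, alpha)               # M = sum beta_i·alpha_i^T
    Rx = np.real(inv(sqrtm(M.T @ M)) @ M.T)      # Rx = (M^T·M)^(-1/2)·M^T
    # stack (R_Ai - I)·t_x = Rx·t_Bi - t_Ai over all observation pairs
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X                                     # estimate of TcamHbase
```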
step 4, optimizing the calibration matrix obtained in the step 3) by using the point cloud depth information:
Because the calibration result so far cannot accurately determine the coordinate transformation between the point cloud coordinate system and the robot coordinate system, grabbing precision suffers, and the calibration matrix obtained in step 3 must be optimized. The main idea of the algorithm is to introduce a marker point on the mechanical-arm end effector, obtain its coordinates in the robot coordinate system and in the camera coordinate system respectively, and optimize the calibration matrix through multiple iterations in the neighborhood of the step-3 result, minimizing the error between the calculated and measured values.
When the marking mechanical arm is at the position i, the coordinate of the marking point in the camera coordinate system is pcam(i)The coordinate in the robot coordinate system is Pbase(i)The formula can be obtained as follows.
Figure BDA0002180094670000112
To obtain P_base(i), the precise position of the marker point on the mechanical-arm end effector is first calibrated by the four-point method, and P_base(i) is then read from the robot's hand-held teach pendant.
To obtain P_cam(i), the mechanical arm is moved below the Kinect camera, a point cloud of the arm end effector is acquired and visualized, the marker point is selected manually in the point-cloud visualization interface, and its coordinates are recorded as P_cam(i).
Finally, the objective
Σ_i ‖P_base(i) − X·P_cam(i)‖²
is minimized with the trust-region reflective (Trust Region Reflective) algorithm, solving for the six parameters of X (Euler angles R, P, Y and displacement x, y, z). The trust-region algorithm is a numerical method for nonlinear optimization problems: starting from a given initial solution, it refines the solution step by step until a satisfactory near-optimal one is obtained, its basic idea being to transform the optimization problem into a series of simple local optimization problems. The trust-region reflective algorithm is very sensitive to the initial value, which must therefore lie in a neighborhood of the optimal solution; the result computed in step 3) is a suboptimal solution that satisfies this condition, so it is used as the initial value, and the calibration matrix between the camera and mechanical-arm coordinate systems is obtained through multiple iterations.
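A minimal sketch of this refinement using SciPy's trust-region-reflective solver; the "xyz" Euler convention, the helper names and the data layout are assumptions of this sketch, not specified by the patent:

```python
# Illustrative sketch of the step-4 refinement. P_cam, P_base are Nx3 arrays
# of marker coordinates in the two frames; X0 is the step-3 calibration
# matrix used as the initial value.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def refine_calibration(X0, P_cam, P_base):
    rpy0 = Rotation.from_matrix(X0[:3, :3]).as_euler("xyz")
    p0 = np.concatenate([rpy0, X0[:3, 3]])    # [R, P, Y, x, y, z]

    def residuals(p):
        R = Rotation.from_euler("xyz", p[:3]).as_matrix()
        # error between measured P_base and P_cam mapped through candidate X
        return (P_cam @ R.T + p[3:] - P_base).ravel()

    sol = least_squares(residuals, p0, method="trf")  # trust-region reflective
    X = np.eye(4)
    X[:3, :3] = Rotation.from_euler("xyz", sol.x[:3]).as_matrix()
    X[:3, 3] = sol.x[3:]
    return X
```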

Claims (3)

1. A CALTag and point cloud information-based depth camera hand-eye calibration method is characterized by comprising the following steps:
step 1), establishing a mathematical model of a hand-eye calibration system:
in a hand-eye calibration system, the hand-eye relationship is classified, according to the relative mounting of the camera and the robot, into the Eye-in-Hand model and the Eye-to-Hand model; in the Eye-in-Hand model the camera is mounted at the end of the robot and moves with the end while the robot works; in the Eye-to-Hand model the camera is fixedly mounted and remains stationary relative to the target while the robot moves; for the Eye-to-Hand model, a calibration plate is mounted at the end of the mechanical arm and its pose is estimated by the camera, realizing automatic hand-eye calibration;
the Eye-to-Hand model comprises four coordinate systems: the robot base coordinate system Δbase, the robot end-joint coordinate system Δtool, the calibration-plate coordinate system Δcal, and the depth-camera coordinate system Δcam,
the transformations between the coordinate systems are defined as follows:
TbaseHtool(i): coordinate transformation from Δbase to Δtool in the i-th experiment;
TtoolHcal: coordinate transformation from Δtool to Δcal;
TcamHcal(i): coordinate transformation from Δcam to Δcal in the i-th experiment;
TcamHbase: coordinate transformation from Δcam to Δbase,
in hand-eye calibration, the pose of the mechanical arm is changed, an image of the calibration plate on the arm is then captured by the camera, the position values of each arm joint are recorded at that moment, and the calibration is repeated more than 12 times; from the relationships among the coordinate systems, the coordinate transformation relation in the i-th calibration is obtained as:
TcamHcal(i) = TcamHbase · TbaseHtool(i) · TtoolHcal
combining this with the coordinate transformation relation of the (i+1)-th calibration and eliminating TtoolHcal yields:
TcamHcal(i+1) · TcamHcal(i)⁻¹ · TcamHbase = TcamHbase · TbaseHtool(i+1) · TbaseHtool(i)⁻¹
denote
A = TcamHcal(i+1) · TcamHcal(i)⁻¹,  B = TbaseHtool(i+1) · TbaseHtool(i)⁻¹
and write the solution target TcamHbase as X, which gives the equation:
A·X = X·B
the physical meaning of matrix A is the coordinate transformation of the calibration plate in the camera coordinate system between the arm poses before and after the motion; the physical meaning of matrix B is the coordinate transformation of the robot end-joint coordinate system Δtool in the robot base coordinate system between the arm poses before and after the motion; once matrices A and B are computed, the hand-eye calibration problem reduces to solving the classical equation A·X = X·B;
step 2), identifying the pose of the CALTag calibration plate:
solving the calibration matrix first requires matrix A of the classical equation A·X = X·B, which is obtained by determining the pose of the calibration plate in the camera coordinate system; the pose of the CALTag calibration plate is identified by the following steps:
2.1) computing connected regions: convert the original image to grayscale, detect its edges with a Sobel filter, remove non-edge pixels with morphological operations, then apply adaptive thresholding to the image, and finally compute the connected regions in the image;
2.2) removing invalid connected regions: background regions contained in the connected regions generated in step 2.1) are filtered out by two criteria: first, regions whose area is smaller than 256 pixels or larger than 1/8 of the input image; second, regions whose Euler number is smaller than 0 or greater than 3, the Euler number being defined as the total number of objects in the picture minus the number of holes in them;
2.3) quadrilateral fitting: compute the gradient of each pixel on the edges of the connected regions remaining after step 2.2), obtain the four dominant edge directions with Lloyd's K-Means clustering algorithm, obtain the boundary lines by least squares, determine the parallel relations between the boundary lines, and finally fit a quadrilateral from the two pairs of parallel lines found;
2.4) corner detection: for the quadrilateral fitted in step 2.3), let p be a pixel in the neighborhood of the corner x; except at the corner itself, the image gradient along an edge is non-zero, so the product of the gradient at p and the vector from p to the corner along the edge direction is 0, i.e. ∇I(p)ᵀ·(x − p) = 0; because ∇I(p)ᵀ·(x − p) = 0 is a linear system, the precise corner position can be computed iteratively, with the quadrilateral vertex obtained in step 2.3) as the initial value;
2.5) marker verification: determine the position of the binary code at the center of each square from the corner positions obtained in step 2.4), then verify the binary code, and keep only the markers whose binary code verifies correctly;
2.6) locating missing corners: find corners that are visible in the input image but were missed during detection; if at least one binary code has been correctly identified, the theoretical positions of the remaining corners in the image can be deduced, since their positions in the checkerboard pattern are known in advance;
2.7) corner verification: verify the authenticity of the corners determined in step 2.6); a corner must pass two tests to be considered genuine: the first test checks whether the pixel intensities in the corner's local neighborhood follow a beta distribution; the second tests whether a small circle around the corner shows the alternating black-white pattern;
step 3), solving the calibration matrix based on Lie group theory:
from the hand-eye calibration equation A·X = X·B, matrices A and B must be determined before the calibration matrix X can be solved, where matrix A was obtained in step 2); matrix B is obtained by collecting the arm joint values at each pose and evaluating the forward kinematics of the mechanical arm; by changing the arm pose several times, multiple groups of observations {(A₁,B₁), (A₂,B₂), …, (A_k,B_k)} are obtained, and the solution of A·X = X·B is converted into the minimization problem:
X = argmin_X Σ_{i=1..k} d(A_i·X, X·B_i)²
where d denotes a distance metric on the Euclidean group;
expanding A·X = X·B in homogeneous matrix form:
[R_A t_A; 0 1] · [R_X t_X; 0 1] = [R_X t_X; 0 1] · [R_B t_B; 0 1]
The simplification is as follows:
R_A·R_X = R_X·R_B
R_A·t_X + t_A = R_X·t_B + t_X
denote log R_A = [α] and log R_B = [β], where [v] is the skew-symmetric matrix of the vector v; Lie group theory then gives
α = R_X·β
When there are more than two groups of observed values, the solution is
Rx=(MTM)-0.5MT
tx=(RA-I)-1(RxtB-tA)
Wherein:
Figure FDA0002826972290000041
step 4), optimizing the calibration matrix obtained in step 3) using the point cloud depth information: a marker point is introduced on the mechanical-arm end effector and its coordinates in the robot coordinate system and in the camera coordinate system are obtained; in the neighborhood of the result computed in step 3), the calibration matrix is optimized through multiple iterations so that the error between the calculated and measured values is minimized;
when the mechanical arm is at pose i, denote the coordinates of the marker point in the camera coordinate system by P_cam(i) and in the robot coordinate system by P_base(i), which gives:
P_base(i) = X·P_cam(i)
the equation is solved by minimizing
Σ_i ‖P_base(i) − X·P_cam(i)‖²
with the trust-region reflective (Trust Region Reflective) algorithm, solving for the six parameters of X: the Euler angles R, P, Y and the displacement x, y, z;
the trust-region reflective algorithm is very sensitive to the initial value; the result computed in step 3) is used as the initial value, and the calibration matrix between the camera and mechanical-arm coordinate systems is computed through multiple iterations.
2. The CALTag and point cloud information-based depth camera hand-eye calibration method according to claim 1, wherein: to obtain P_base(i), the precise position of the marker point on the mechanical-arm end effector is first calibrated by the four-point method, and P_base(i) is then read from the robot's hand-held teach pendant.
3. The CALTag and point cloud information-based depth camera hand-eye calibration method according to claim 1, wherein: to obtain P_cam(i), the mechanical arm is moved below the camera, a point cloud of the arm end effector is acquired and visualized, the marker point is selected manually in the point-cloud visualization interface, and its coordinates are recorded as P_cam(i).
CN201910793218.XA 2019-08-27 2019-08-27 CALTag and point cloud information-based depth camera hand-eye calibration method Active CN110555889B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910793218.XA CN110555889B (en) 2019-08-27 2019-08-27 CALTag and point cloud information-based depth camera hand-eye calibration method


Publications (2)

Publication Number Publication Date
CN110555889A CN110555889A (en) 2019-12-10
CN110555889B true CN110555889B (en) 2021-01-15

Family

ID=68738387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910793218.XA Active CN110555889B (en) 2019-08-27 2019-08-27 CALTag and point cloud information-based depth camera hand-eye calibration method

Country Status (1)

Country Link
CN (1) CN110555889B (en)


Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111775146B (en) * 2020-06-08 2022-07-12 南京航空航天大学 Visual alignment method under industrial mechanical arm multi-station operation
CN112223285B (en) * 2020-09-30 2022-02-01 南京航空航天大学 Robot hand-eye calibration method based on combined measurement
CN112465898B (en) * 2020-11-20 2023-01-03 上海交通大学 Object 3D pose tag acquisition method based on checkerboard calibration plate
CN112525074B (en) * 2020-11-24 2022-04-12 杭州素问九州医疗科技有限公司 Calibration method, calibration system, robot, computer device and navigation system
CN112720458B (en) * 2020-12-04 2022-08-12 上海航天设备制造总厂有限公司 System and method for online real-time correction of robot tool coordinate system
CN112634435A (en) * 2020-12-17 2021-04-09 中国地质大学(武汉) Ceramic product three-dimensional point cloud reconstruction method based on Eye in Hand model
CN112790786A (en) * 2020-12-30 2021-05-14 无锡祥生医疗科技股份有限公司 Point cloud data registration method and device, ultrasonic equipment and storage medium
CN112971984B (en) * 2021-02-05 2022-05-31 上海阅行医疗科技有限公司 Coordinate registration method based on integrated surgical robot
CN113043334B (en) * 2021-02-23 2022-12-06 上海埃奇机器人技术有限公司 Robot-based photovoltaic cell string positioning method
CN113276106B (en) * 2021-04-06 2022-06-03 广东工业大学 Climbing robot space positioning method and space positioning system
CN113400301B (en) * 2021-05-28 2023-03-21 深圳市智能机器人研究院 Robot 3D hand-eye calibration method, system, device and medium
CN113635311B (en) * 2021-10-18 2021-12-31 杭州灵西机器人智能科技有限公司 Method and system for out-of-hand calibration of eye for fixing calibration plate
CN113771096A (en) * 2021-11-09 2021-12-10 北京壹点灵动科技有限公司 Method and device for processing pose information of mechanical arm
CN116117785A (en) * 2021-11-12 2023-05-16 华为技术有限公司 Method and device for calibrating kinematic parameters of a robot
CN114953548B (en) * 2022-05-31 2022-12-20 武汉金顿激光科技有限公司 Tire mold cleaning method, system and storage medium
CN115139283B (en) * 2022-07-18 2023-10-24 中船重工鹏力(南京)智能装备系统有限公司 Robot hand-eye calibration method based on random mark dot matrix
CN115345943B (en) * 2022-08-08 2024-04-16 恩纳基智能装备(无锡)股份有限公司 Calibration method based on differential mode concept
CN115070779B (en) * 2022-08-22 2023-03-24 菲特(天津)检测技术有限公司 Robot grabbing control method and system and electronic equipment
CN115227398B (en) * 2022-09-19 2023-03-03 杭州三坛医疗科技有限公司 Automatic positioning method and device for registration plate
CN115861445B (en) * 2022-12-23 2023-07-04 广东工业大学 Hand-eye calibration method based on three-dimensional point cloud of calibration plate
CN116423498A (en) * 2023-03-08 2023-07-14 湖北普罗格科技股份有限公司 Calibration method, device, medium and equipment based on tail end of mechanical arm
CN116160454A (en) * 2023-03-28 2023-05-26 重庆智能机器人研究院 Robot tail end plane vision hand-eye calibration algorithm model
CN116038720B (en) * 2023-04-03 2023-08-11 广东工业大学 Hand-eye calibration method, device and equipment based on point cloud registration
CN117853579A (en) * 2023-10-07 2024-04-09 湖州丽天智能科技有限公司 Photovoltaic panel pose correction method, photovoltaic robot and storage medium
CN117428777A (en) * 2023-11-28 2024-01-23 北华航天工业学院 Hand-eye calibration method of bag-removing robot

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100468857B1 (en) * 2002-11-21 2005-01-29 삼성전자주식회사 Method for calibrating hand/eye using projective invariant shape descriptor for 2-dimensional shape
CN105096317B (en) * 2015-07-03 2018-05-08 吴晓军 A kind of high-performance camera full automatic calibration method in complex background
CN108198223B (en) * 2018-01-29 2020-04-07 清华大学 Method for quickly and accurately calibrating mapping relation between laser point cloud and visual image
CN109102547A (en) * 2018-07-20 2018-12-28 上海节卡机器人科技有限公司 Robot based on object identification deep learning model grabs position and orientation estimation method
CN109702738B (en) * 2018-11-06 2021-12-07 深圳大学 Mechanical arm hand-eye calibration method and device based on three-dimensional object recognition

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4101604A1 (en) * 2021-06-09 2022-12-14 Ebots, Inc. System and method for improving accuracy of 3d eye-to-hand coordination of a robotic system

Also Published As

Publication number Publication date
CN110555889A (en) 2019-12-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant