CN110009689B - Image data set rapid construction method for collaborative robot pose estimation - Google Patents

Image data set rapid construction method for collaborative robot pose estimation

Info

Publication number
CN110009689B
CN110009689B (application CN201910215968.9A)
Authority
CN
China
Prior art keywords
robot
coordinate system
camera
point
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910215968.9A
Other languages
Chinese (zh)
Other versions
CN110009689A (en)
Inventor
庄春刚
朱向阳
周凡
沈逸超
赵恒
朱磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201910215968.9A priority Critical patent/CN110009689B/en
Publication of CN110009689A publication Critical patent/CN110009689A/en
Application granted granted Critical
Publication of CN110009689B publication Critical patent/CN110009689B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a method for quickly constructing an image data set for collaborative robot pose estimation, which comprises the following steps: with the robot base and the camera relatively fixed, the motion range of the robot is kept within the field of view of the camera; a calibration plate is fixedly connected to the tail end of the robot, the uniqueness of the calibration-plate coordinate system is ensured, and the tool-center-point pose data of the robot tail end corresponding to each calibration-plate picture are stored. According to the multiple groups of calibration-plate pictures and the corresponding tool-center-point pose data thus obtained, a calibration algorithm is designed to solve the first transformation matrix from the robot base coordinate system to the camera coordinate system together with the camera intrinsic parameters; the robot key points are then mapped to image pixels according to the DH parameters and configuration of the robot, and the related data information is generated. The method has the advantages of a high degree of automation, high speed, high accuracy and rich information, and ensures the reliability of subsequent work.

Description

Image data set rapid construction method for collaborative robot pose estimation
Technical Field
The invention relates to the field of robots, in particular to a method for quickly constructing an image data set for collaborative robot pose estimation.
Background
In recent years, with major breakthroughs of novel machine learning algorithms such as deep learning in the image field, migrating these algorithms to the robot field has gradually become a research hotspot. The most successful combination of deep learning and robotics to date is Bin-Picking, i.e., recognizing the pose of scattered objects and grasping them, which usually uses two-dimensional image or three-dimensional point-cloud data sets. Other tasks, such as robot obstacle-avoidance motion planning, do not necessarily use visual information such as images; although some progress has been made there, it is usually achieved in experimental or simulation environments and is difficult to apply to actual production work. Constructing data sets from visual information is therefore the mainstream direction for deep learning algorithms in robotics.
Machine learning algorithms such as deep learning often require large-scale data sets to obtain ideal results, and the manual labeling methods used in the image field are too time-consuming and labor-intensive to construct large-scale data sets in a short time. Using image information as a data set in the robot field has a distinct advantage: the robot can be modeled accurately from key body parameters such as its DH (Denavit-Hartenberg) parameters to obtain detailed information about any position on the robot, while Zhang Zhengyou's checkerboard method provides a simple camera calibration approach, so the labeling process can be automated.
The collaborative robot has been a hotspot of the robot field in recent years. To guarantee intrinsic safety, the collaborative robot differs from the traditional industrial robot in design, structure and use of sensors, so that human-robot collaborative tasks in the same working area can be realized. At the present stage, however, the data from the sensors inside the robot are limited: for example, whether the robot has collided can often be judged only from the magnitude of the external torque applied to it, making the safety of the whole task difficult to guarantee. In a real human-robot collaboration scene, an external sensor must therefore be added to monitor the pose of the collaborative robot in real time in combination with a safety detection algorithm, as another layer of guarantee for the safety of the whole system.
Therefore, those skilled in the art are dedicated to developing a method for quickly constructing an image data set for collaborative robot pose estimation. Compared with manual labeling in the image field, the method has the advantages of high speed, high accuracy and rich information, places no specific requirements on the type of collaborative robot used, and can be quickly applied to different collaborative robot scenes.
Disclosure of Invention
In view of the above defects in the prior art, the technical problem to be solved by the present invention is the data-set acquisition problem of collaborative robot pose estimation using machine learning algorithms, namely greatly improving the efficiency of data-set acquisition while ensuring the accuracy and stability of the data set.
In order to achieve the above object, the present invention provides a method for quickly constructing an image data set for collaborative robot pose estimation, which comprises the following steps:
step 1: under the condition that a robot base and a camera are relatively fixed, ensuring that the motion range of the robot is within the visual field range of the camera, fixedly connecting a calibration plate with the tail end of the robot, ensuring the uniqueness of a coordinate system of the calibration plate, and simultaneously storing tool center point pose data of the tail end of the robot corresponding to each calibration plate picture;
step 2: calibrating the internal and external parameters of the camera based on the Zhang Zhengyou checkerboard calibration method according to the pictures of the plurality of groups of calibration plates obtained in step 1 and the corresponding tool-center-point pose data of the tail end of the robot, and designing a calibration algorithm to solve a first transformation matrix from the base coordinate system of the robot to the coordinate system of the camera (a sketch of this step follows step 3 below);
step 3: according to the joint-axis (DH) parameters and the configuration of the robot, calculating the positions of the key points of the robot in the robot base coordinate system, and combining the first transformation matrix obtained in step 2 with the camera intrinsic matrix to complete the mapping of the robot key points onto image pixels.
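The calibration of step 2 can be prototyped with OpenCV's checkerboard tools. The Python sketch below is illustrative only: the board dimensions, square size and file paths are assumptions, not values from the patent.

```python
# Hedged sketch of step 2's intrinsic/extrinsic calibration (Zhang Zhengyou
# checkerboard method) using OpenCV. Board size, square size and file paths
# are illustrative assumptions, not values taken from the patent.
import glob
import cv2
import numpy as np

BOARD = (8, 7)        # inner corners per row/column (9x8 squares: odd x even)
SQUARE = 0.02         # square edge length in meters (assumed)

# 3D corner positions in the calibration-plate coordinate system (z = 0 plane)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for fname in sorted(glob.glob("calib/*.png")):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, BOARD)
    if not ok:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

# K is the intrinsic matrix I_I; rvecs/tvecs give the per-view extrinsics,
# i.e. the calibration-plate-to-camera transforms used later for AX = XB.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS:", rms, "\nK =\n", K)
```

The per-view rvecs and tvecs are the calibration-plate-to-camera extrinsics (the patent's $T_{G2C}$) consumed by the AX = XB construction described further below.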
Furthermore, the numbers of squares along the two sides of the calibration plate in step 1 are respectively odd and even, so as to ensure the invariance of the origin of the checkerboard coordinate system and the stability of the calibration-plate coordinate system during acquisition.
Further, the tool center point pose of the robot in step 1 is set to change in six degrees of freedom, so as to obtain multiple sets of images of the calibration plate and corresponding tool center point pose data of the tail end of the robot.
Further, acquiring tool center point pose data corresponding to a plurality of sets of images of the calibration plate comprises the steps of:
step 1.1: recording the initial pose of the central point of the tool at the tail end of the robot as (x, y, z, rx, ry, rz), wherein x, y and z are initial coordinate data of the central point of the tool, and rx, ry and rz are rotation vector representations of the central point of the tool;
step 1.2: converting the tool center point pose of the step 1.1 into a rotation matrix form, as shown in formula (1):
$$R = \cos\theta\, I + (1-\cos\theta)\,\hat{r}\,\hat{r}^{\mathsf T} + \sin\theta\,[\hat{r}]_{\times} \tag{1}$$

in formula (1), $\hat{r} = (rx, ry, rz)^{\mathsf T}/\theta$ is the unit rotation axis and $[\hat{r}]_{\times}$ its skew-symmetric matrix, $\theta = \operatorname{norm}((rx, ry, rz))$, and $I$ is the 3×3 identity matrix;
step 1.3: converting the rotation matrix obtained in step 1.2 into a homogeneous matrix $T_0$:

$$T_0 = \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix}$$

wherein $p = (x, y, z)^{\mathsf T}$ is the initial position of the tool center point;
step 1.4: let the second transformation matrix be

$$T_f = \begin{bmatrix} r_1 & r_2 & r_3 & t \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

wherein $r_1$, $r_2$, $r_3$ and $t$ are each 3×1 column vectors; the transformed tool-center-point pose is calculated as shown in formula (2):

$$T_n = T_f\, T_0 \tag{2}$$

step 1.5: by changing the values of $r_1$, $r_2$, $r_3$ and the $t$ vector in turn, the tool-center-point poses after changes in the six degrees of freedom are obtained.
Further, the solving process of the first transformation matrix in step 2 is as follows:
step 2.1: an equation is constructed: $T_{G2T} = T_{G2C}\, T_{C2B}\, T_{B2T}$

wherein $T_{G2C}$ is the camera extrinsic parameter, i.e. the transformation matrix from the calibration-plate coordinate system to the camera coordinate system; $T_{G2T}$ is the transformation matrix from the calibration-plate coordinate system to the tool-center-point coordinate system, which is fixed and unchanged; $T_{T2B}$ is the robot pose data, i.e. the transformation matrix from the tool-center-point coordinate system to the robot base coordinate system; $T_{B2C}$ is the transformation matrix from the robot base coordinate system to the camera coordinate system; $T_{C2B}$ and $T_{B2T}$ are the inverse matrices of $T_{B2C}$ and $T_{T2B}$ respectively;
step 2.2: denote $T_{G2C} = D$, $T_{B2T} = E$, $T_{C2B} = X$ and $T_{G2T} = C$, and construct matrices $A$ and $B$ satisfying formula (3):

$$C = D_1 X E_1 = D_2 X E_2 = \cdots = D_n X E_n,\qquad A = D_{n+1}^{-1} D_n,\quad B = E_{n+1} E_n^{-1},\quad A X = X B \tag{3}$$

wherein the subscript $n$ of a matrix denotes the data collected at the $n$-th acquisition;
step 2.3: solving for the matrix $X$ in step 2.2; the inverse matrix of $X$ is the first transformation matrix $T_{B2C}$.
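The AX = XB system of steps 2.1-2.3 can be solved with standard hand-eye calibration routines, for example in OpenCV as sketched below. Because the camera here is fixed and the board rides on the flange (eye-to-hand), base-to-gripper poses are passed where the eye-in-hand API expects gripper-to-base ones; this adaptation is common practice and an assumption here, not a step stated in the patent.

```python
# Hedged sketch of steps 2.1-2.3: solve AX = XB with OpenCV's hand-eye
# calibration. We pass base->gripper poses (the patent's E = T_B2T) together
# with the board->camera extrinsics (the patent's D = T_G2C); with this
# eye-to-hand trick the solver returns T_C2B, whose inverse is the first
# transformation matrix T_B2C.
import cv2
import numpy as np

def solve_first_transform(T_B2T_list, T_G2C_list):
    R_base2grip = [T[:3, :3] for T in T_B2T_list]
    t_base2grip = [T[:3, 3] for T in T_B2T_list]
    R_board2cam = [T[:3, :3] for T in T_G2C_list]
    t_board2cam = [T[:3, 3] for T in T_G2C_list]
    R, t = cv2.calibrateHandEye(R_base2grip, t_base2grip,
                                R_board2cam, t_board2cam,
                                method=cv2.CALIB_HAND_EYE_TSAI)
    T_C2B = np.eye(4)
    T_C2B[:3, :3], T_C2B[:3, 3] = R, t.ravel()
    return np.linalg.inv(T_C2B)   # T_B2C: robot base -> camera
```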
Further, the camera intrinsic matrix in step 2 is the transformation matrix from the camera coordinate system to the image coordinate system, denoted $I_I$:

$$I_I = \begin{bmatrix} f_x & 0 & u \\ 0 & f_y & v \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $f_x$, $f_y$, $u$ and $v$ are calibration parameters (the focal lengths in pixels and the principal-point coordinates).
Further, the step 3 of calculating the positions of the key points of the robot under the robot base coordinate system specifically includes the following steps:
step 3.1: selecting a key point that does not move relative to the robot base as the key-point reference origin, recorded as $p_1 = (0, 0, 0)^{\mathsf T}$;

step 3.2: taking $p_1$ of step 3.1 as the starting point, the positions of the other key points of the robot in the robot base coordinate system are calculated in turn by the DH method, ordered from near to far from the reference origin according to their degrees of freedom of motion; their coordinates are recorded as $p_i$ ($i = 2, 3, 4, \ldots$), the coordinate vectors of the other key points.
Further, the step 3 of obtaining the mapping from the key points of the robot to the image pixel points comprises the following steps:
step 8.1: constructing the homogeneous coordinates

$$\tilde{p}_i = \begin{bmatrix} p_i \\ 1 \end{bmatrix}$$

wherein $p_i$ is the spatial coordinate of key point $i$ ($i = 1, 2, 3, 4, \ldots$) in the robot base coordinate system;
step 8.2: denote

$$(x, y, z)^{\mathsf T} = I_I\,\bigl(T\,\tilde{p}_i\bigr)_{1:3}$$

where $x$, $y$ are the actual pixel location multiplied by a scale factor, $z$ is the scale factor, $T$ is the first transformation matrix, and $(\cdot)_{1:3}$ takes the first three components of the homogeneous product;
step 8.3: the actual pixel position of key point $i$ is $\bigl(x/z,\; y/z\bigr)$.
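Steps 8.1-8.3 amount to a standard pinhole projection. A minimal NumPy sketch follows; all numeric values in it are illustrative assumptions.

```python
# Hedged sketch of steps 8.1-8.3: project a key point from the robot base
# frame to image pixels via the first transformation matrix T (base->camera)
# and the intrinsic matrix I_I. All numbers are illustrative assumptions.
import numpy as np

def keypoint_to_pixel(p_i, T_B2C, K):
    p_h = np.append(p_i, 1.0)          # step 8.1: homogeneous coordinates
    cam = (T_B2C @ p_h)[:3]            # point in the camera frame
    x, y, z = K @ cam                  # step 8.2: scaled pixel coordinates
    return np.array([x / z, y / z])    # step 8.3: actual pixel position

K = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])  # assumed I_I
T_B2C = np.eye(4); T_B2C[:3, 3] = [0, 0, 1.5]                # assumed extrinsic
print(keypoint_to_pixel(np.array([0.1, -0.2, 0.3]), T_B2C, K))
```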
Further, the method also comprises the step of generating a bounding box of the robot in the image based on the actual pixel position of the key point and the resolution of the camera image, and the specific steps are as follows:
step 9.1: constructing a point set $\langle p_i \rangle$ containing the actual pixel positions of all the key points;

step 9.2: sorting the point set $\langle p_i \rangle$ by the $x$ and $y$ directions respectively to obtain $p_{min}(x_{min}, y_{min})$ and $p_{max}(x_{max}, y_{max})$, composed of the minimum and maximum horizontal and vertical coordinates over the point set;
step 9.3: on the basis of step 9.2, generating the bounding box of the robot in the image according to the resolution of the actual camera image, the coordinates of the upper-left and lower-right corner points of the bounding box being:

$$p_{box1} = (x_{min} - 0.05h,\; y_{min} - 0.05h)$$
$$p_{box2} = (x_{max} + 0.05h,\; y_{max} + 0.05h)$$

wherein $h$ is the pixel height of the image, $p_{box1}$ is the upper-left corner point of the bounding box, and $p_{box2}$ is the lower-right corner point.
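A minimal sketch of steps 9.1-9.3 follows; clamping the box to the image bounds is an added assumption, since the patent only states the 0.05h margin.

```python
# Hedged sketch of steps 9.1-9.3: bounding box from projected key points with
# a 0.05*h margin. Clamping to the image bounds is an added assumption.
import numpy as np

def bounding_box(pixels, width, height):
    pts = np.asarray(pixels)                      # step 9.1: point set <p_i>
    x_min, y_min = pts.min(axis=0)                # step 9.2: extremes
    x_max, y_max = pts.max(axis=0)
    m = 0.05 * height                             # step 9.3: margin
    box1 = (max(x_min - m, 0), max(y_min - m, 0))                   # upper-left
    box2 = (min(x_max + m, width - 1), min(y_max + m, height - 1))  # lower-right
    return box1, box2

print(bounding_box([[320, 200], [500, 450], [410, 390]], 1280, 720))
```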
Further, step 3 is followed by a storing step, wherein the stored contents include the image acquired by the camera, the horizontal and vertical pixel coordinates of each key point in the image, and the three-dimensional coordinates of each key point in the camera coordinate system.
Compared with the prior art, the present invention addresses the defect that current image data sets rely on manual labeling by combining the robot's own configuration characteristics into a method for quickly constructing an image data set for collaborative robot pose estimation. The method has the advantages of a high degree of automation, high speed, high accuracy and rich information, and ensures the reliability of subsequent work.
The conception, specific structure and technical effects of the present invention will be further described in conjunction with the accompanying drawings to fully understand the purpose, characteristics and effects of the present invention.
Drawings
FIG. 1 is a flow chart of the image dataset fast construction system for collaborative robot pose estimation according to a preferred embodiment of the present invention;
FIG. 2 is a schematic diagram of the distribution of the 6 joint positions of the UR5 robot of an embodiment of the present invention;
FIG. 3 is a schematic diagram of 9 key point locations of the UR5 robot of an embodiment of the present invention;
FIG. 4 is a schematic diagram of DH parameters on UR5 robot of the embodiment shown in FIG. 2;
FIG. 5 is a diagram of the results of the first transformation matrix and camera reference matrix calculations for the embodiment shown in FIG. 2;
FIG. 6 is a diagram of a specific storage form of the actual data set of the embodiment shown in FIG. 2;
fig. 7 is a schematic diagram of a bounding box of the UR5 robot in images generated by the RGB camera of the embodiment shown in fig. 2 under different viewing angles.
Detailed Description
The technical contents of the preferred embodiments of the present invention will be more clearly and easily understood by referring to the drawings attached to the specification. The present invention may be embodied in many different forms of embodiments and the scope of the invention is not limited to the embodiments set forth herein.
In the drawings, elements that are structurally identical are represented by like reference numerals, and elements that are structurally or functionally similar in each instance are represented by like reference numerals. The size and thickness of each component shown in the drawings are arbitrarily illustrated, and the present invention is not limited to the size and thickness of each component. The thickness of the components may be exaggerated where appropriate in the figures to improve clarity.
Example one
Fig. 1 shows a specific process of the image dataset rapid construction system for collaborative robot pose estimation according to this embodiment:
the method comprises the following steps: a hardware platform is built, the robot base and the camera are kept relatively fixed, and the movement range of the robot is ensured to be within the visual field range of the camera;
step two: adjusting the initial pose of the robot with the calibration plate fixedly connected to the tail end of the robot, then superimposing changes in the six degrees of freedom to vary the pose of the tool center point (TCP) at the tail end of the robot, collecting calibration data, and saving the TCP pose data of the robot tail end corresponding to each calibration-plate picture;
step three: designing a corresponding calibration algorithm, and using the multiple groups of calibration-plate pictures collected in step two together with the corresponding robot TCP pose data to solve the first transformation matrix from the robot base coordinate system to the camera coordinate system, so that the TCP pose data of the robot space can conveniently be unified into the camera coordinate system;
step four: obtaining the DH parameters of the robot through robot modeling and configuration analysis, analyzing the specific configuration of the robot, calculating the positions of the robot key points in the robot coordinate system, and combining the first transformation matrix obtained in step three with the camera intrinsic matrix to complete the mapping of the spatial robot key points to image pixels, further generating a series of related data;
step five: the processed image dataset data is saved in the form of a text file.
In order to ensure the stability of the coordinate system of the calibration plate, the numbers of squares along the two sides of the calibration plate are respectively odd and even.
In this embodiment, the initial pose of the robot is adjusted using a teach pendant.
In this embodiment, the corresponding calibration algorithm calibrates the internal and external parameters of the camera using the Zhang Zhengyou checkerboard calibration method, converts the corresponding robot TCP pose data into homogeneous-matrix form, and calculates the first transformation matrix from the robot base coordinate system to the camera coordinate system by combining the external parameters of the calibration plate.
Example two
In this embodiment, a UR5 robot and an RGB camera are used.
The following describes the method for quickly constructing an image data set in detail with reference to the UR5 robot and the RGB camera employed in this embodiment.
To solve for the fixed transformation matrix from the robot base to the camera coordinate system, a corresponding calibration algorithm is designed. The initial pose of the tool center point at the tail end of the robot is recorded as (x, y, z, rx, ry, rz), wherein x, y, z are the tool-center-point coordinate data and rx, ry, rz are the rotation-vector representation of the pose; the pose is converted into a rotation matrix as shown in formula (1):

$$R = \cos\theta\, I + (1-\cos\theta)\,\hat{r}\,\hat{r}^{\mathsf T} + \sin\theta\,[\hat{r}]_{\times} \tag{1}$$

in formula (1), $\hat{r} = (rx, ry, rz)^{\mathsf T}/\theta$, $\theta = \operatorname{norm}((rx, ry, rz))$, and $I$ is the identity matrix;
the rotation matrix of formula (1) is converted into homogeneous-matrix form:

$$T_0 = \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix}$$

wherein $p = (x, y, z)^{\mathsf T}$;
the second transformation matrix is denoted

$$T_f = \begin{bmatrix} r_1 & r_2 & r_3 & t \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

wherein $r_1$, $r_2$, $r_3$ and $t$ are 3×1 column vectors; by changing the values of $r_1$, $r_2$, $r_3$ and the $t$ vector in turn, the TCP can be varied in all six degrees of freedom, and the transformed TCP pose coordinates are given by formula (2):

$$T_n = T_f\, T_0 \tag{2}$$
In order to solve for the fixed transformation matrix from the robot base to the camera coordinate system, data sets A and B are constructed, and the first transformation matrix from the robot base coordinate system to the camera coordinate system is solved with the classic AX = XB algorithm; the essence of camera-robot calibration is solving for X given multiple sets AX = XB, where A and B are constructed as follows:
According to the principle of camera calibration, the camera intrinsic parameter is the transformation matrix from the camera coordinate system to the image coordinate system, denoted $I_I$; the camera extrinsic parameter represents the transformation matrix from the calibration-plate coordinate system to the camera coordinate system, denoted $T_{G2C}$; the robot pose data represent the transformation matrix from the tool-center-point coordinate system to the robot base coordinate system, denoted $T_{T2B}$; because the calibration plate is fixedly connected to the tail end of the robot, the transformation matrix from the calibration-plate coordinate system to the tool-center-point coordinate system is fixed and unchanged, denoted $T_{G2T}$; the transformation from the robot base coordinate system to the camera coordinate system is denoted $T_{B2C}$.
From the above transformation relationships, the transformation from the checkerboard coordinate system to the tool-center-point coordinate system satisfies the equation

$$T_{G2T} = T_{G2C}\, T_{C2B}\, T_{B2T}$$

wherein $T_{C2B}$ and $T_{B2T}$ are the inverse matrices of $T_{B2C}$ and $T_{T2B}$ respectively. Denoting the collected $T_{G2C}$ and $T_{B2T}$ as $D$ and $E$ respectively, and the constant $T_{C2B}$ and $T_{G2T}$ as $X$ and $C$, the formula becomes $C = D_1 X E_1 = D_2 X E_2 = \cdots = D_n X E_n$, where the subscript indicates the $n$-th acquisition.
Then

$$D_{n+1}^{-1} D_n\, X = X\, E_{n+1} E_n^{-1}.$$

Letting

$$A = D_{n+1}^{-1} D_n, \qquad B = E_{n+1} E_n^{-1},$$

we have $AX = XB$, completing the construction of the data $A$ and $B$. The $X$ obtained is the transformation matrix from the camera coordinate system to the robot base coordinate system; it only needs to be inverted to give the first transformation matrix $T_{B2C}$ from the robot base coordinate system to the camera coordinate system.
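As one concrete reading of the "classic AX = XB solving algorithm" referred to above, the closed form of Park and Martin can be sketched as follows; the patent does not name its solver, so this choice, and the helper names, are assumptions. At least two motion pairs with non-parallel rotation axes are needed.

```python
# Hedged sketch of a classic AX = XB solution (Park & Martin closed form).
# A_list/B_list hold 4x4 pairs A_i = D_{i+1}^{-1} D_i, B_i = E_{i+1} E_i^{-1}.
import numpy as np
from scipy.spatial.transform import Rotation

def solve_ax_xb(A_list, B_list):
    # Rotation: minimize sum ||log(R_Ai) - R log(R_Bi)||^2 over rotations R
    M = np.zeros((3, 3))
    for A, B in zip(A_list, B_list):
        alpha = Rotation.from_matrix(A[:3, :3]).as_rotvec()
        beta = Rotation.from_matrix(B[:3, :3]).as_rotvec()
        M += np.outer(beta, alpha)
    w, V = np.linalg.eigh(M.T @ M)            # symmetric positive definite
    R = V @ np.diag(w ** -0.5) @ V.T @ M.T    # R = (M^T M)^(-1/2) M^T
    # Translation: stack (R_Ai - I) t = R t_Bi - t_Ai, solve least squares
    C = np.vstack([A[:3, :3] - np.eye(3) for A in A_list])
    d = np.concatenate([R @ B[:3, 3] - A[:3, 3]
                        for A, B in zip(A_list, B_list)])
    t, *_ = np.linalg.lstsq(C, d, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R, t
    return X   # here X estimates T_C2B; invert it to obtain T_B2C
```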
Further, the camera intrinsic matrix is denoted $I_I$, as shown in formula (4):

$$I_I = \begin{bmatrix} f_x & 0 & u \\ 0 & f_y & v \\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$

in formula (4), $f_x$, $f_y$, $u$, $v$ are calibration parameters;
In order to solve for the positions of the robot key points in actual camera pixels, the robot is kinematically modeled by the DH method; the spatial position expression of each key point is solved in combination with the specific configuration of the robot, achieving accurate localization of the robot key points, which are then accurately mapped onto image pixels according to the first transformation matrix and the camera intrinsic matrix, further generating a series of related data information. It should be noted that, when modeling by the DH method, the spatial positions of the key points of the actual robot do not exactly coincide with the origins of the DH coordinate frames, so the analysis must be combined with the specific configuration of the robot.
Fig. 2 is a schematic diagram of the joint distribution of the UR5 in this embodiment, Fig. 3 a schematic diagram of its key point locations, and Fig. 4 a diagram of its DH parameters. According to the UR5 data of this embodiment, the DH parameters of the robot are known; the specific parameters are:

[DH parameter table of the UR5 robot, given as an image in the original]
the coordinate calculation process of each key point of the robot in the robot base coordinate system is as follows:
1) The coordinates of the base (key point 1) of the robot are recorded as $p_1 = (0, 0, 0)^{\mathsf T}$.
2) The coordinates of key point 2 are readily obtained from the DH parameters as $p_2 = (0, 0, d_1)^{\mathsf T}$.
3) Considering that the motion of joint 1 changes the coordinates of key point 3, the coordinates of key point 3 are

[equation for $p_3$, given as an image in the original]

wherein $\delta_1$ is a constant obtained by actual measurement in addition to the DH parameters.
4) Because joints 2, 3 and 4 form a planar arm, key point 4 can be regarded as key point 3 with the influence of the motion of joint 2 superimposed, so that

[intermediate expression, given as an image in the original]

where abs(x) denotes the absolute value of a real number x. Therefore, the coordinates of key point 4 are

[equation for $p_4$, given as an image in the original]
5) In the same way, key point 5 can be considered to differ from key point 4 by the radius of rotation about joint 1 during motion; for key point 5 this radius is measured as $\delta_2$, a constant of the same nature as $\delta_1$. The coordinates of key point 5 are

[equation for $p_5$, given as an image in the original]
6) Key point 6 is affected by joints 1, 2 and 3; its coordinates are

[equation for $p_6$, given as an image in the original]
7) Similarly, key point 7 differs from key point 6 by the radius of rotation about joint 1 during motion, measured as $\delta_3 = d_4$. Therefore, the coordinates of key point 7 are

[equation for $p_7$, given as an image in the original]
8) Key point 8 is affected by joints 1, 2, 3 and 4, so its coordinates are

[equation for $p_8$, given as an image in the original]
9) The coordinates of key point 9 are the end-flange coordinates $p_9$, obtained directly from DH-parameter modeling. The spatial positions of all 9 key points in the robot base coordinate system are thus calculated.
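The per-joint chaining above is ordinary DH forward kinematics. A minimal Python sketch follows, assuming the widely published UR5 DH values; the measured offsets δ1-δ3 used by the patent are not reproduced, so only the chaining principle is shown.

```python
# Hedged sketch of the keypoint computation: standard DH forward kinematics.
# The DH table below uses the widely published UR5 values as an assumption;
# the patent additionally uses measured offsets (delta_1..delta_3) that are
# not reproduced here.
import numpy as np

# (theta_offset, d, a, alpha) per joint, standard DH convention (assumed)
UR5_DH = [(0, 0.089159, 0,        np.pi / 2),
          (0, 0,        -0.425,   0),
          (0, 0,        -0.39225, 0),
          (0, 0.10915,  0,        np.pi / 2),
          (0, 0.09465,  0,       -np.pi / 2),
          (0, 0.0823,   0,        0)]

def dh_transform(theta, d, a, alpha):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0,   sa,       ca,      d],
                     [0,   0,        0,       1]])

def joint_origins(q):
    """Origins of each DH frame in the base frame for joint angles q."""
    T, origins = np.eye(4), []
    for (off, d, a, alpha), qi in zip(UR5_DH, q):
        T = T @ dh_transform(qi + off, d, a, alpha)
        origins.append(T[:3, 3].copy())
    return origins  # key points are derived from these plus measured offsets

for p in joint_origins([0, -np.pi / 2, np.pi / 2, 0, 0, 0]):
    print(np.round(p, 4))
```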
The homogeneous coordinates of a key point are constructed as

$$\tilde{p}_i = \begin{bmatrix} p_i \\ 1 \end{bmatrix}$$

wherein $p_i$ is the spatial coordinate of key point $i$ in the robot base coordinate system. Denoting

$$(x, y, z)^{\mathsf T} = I_I\,\bigl(T\,\tilde{p}_i\bigr)_{1:3}$$

where $x$, $y$ are the actual pixel location multiplied by a scale factor and $z$ is the scale factor, the actual pixel position of key point $i$ is $\bigl(x/z,\; y/z\bigr)$. After the 9 key points are processed in this way, a point set $\langle p_i \rangle$ containing 9 elements is obtained.
To further refine the extent of the robot in the image, the elements of the point set $\langle p_i \rangle$ are sorted by their $x$ and $y$ coordinates to obtain $p_{min}(x_{min}, y_{min})$ and $p_{max}(x_{max}, y_{max})$, composed of the minimum and maximum horizontal and vertical coordinates over the point set; on this basis, the bounding box of the robot in the image is generated according to the actual image resolution:

$$p_{box1} = (x_{min} - 0.05h,\; y_{min} - 0.05h)$$
$$p_{box2} = (x_{max} + 0.05h,\; y_{max} + 0.05h)$$

wherein $p_{box1}$ is the upper-left corner point of the bounding box, $p_{box2}$ the lower-right corner point, and $h$ the pixel height of the image.
In order to further establish a large-scale data set for deep learning, the embodiment is further provided with a storage function, and the processed posture data of the cooperative robot is stored in a document form.
FIG. 5 is a diagram showing the calculation results of the first transformation matrix and the camera reference matrix according to this embodiment;
Fig. 6 shows the storage form of the data set. The content comprises the images acquired by the camera and a text file automatically labeled with key information such as the positions of the robot key points in the image, the corner coordinates of the bounding box, and the spatial coordinates of the key points in the camera coordinate system. Specifically, each line represents one sample; the stored information comprises the local storage name of the image, the pixel coordinates of the upper-left corner of the bounding box together with its pixel width and height, the pixel coordinates of each key point on the image, and the three-dimensional coordinates of each key point in the camera coordinate system, with the information types separated by "/".
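A minimal sketch of such a "/"-separated sample line follows, with the field layout inferred from the description above; the exact formatting of Fig. 6 is not reproduced here, so field order and number formats are assumptions.

```python
# Hedged sketch of the storage step: one '/'-separated line per sample, with
# the field order inferred from the description (image name / bounding box /
# keypoint pixels / keypoint camera-frame 3D coordinates). Illustrative only.
def sample_line(image_name, box1, width, height, pixels, points_cam):
    fields = [
        image_name,
        " ".join(f"{v:.1f}" for v in (*box1, width, height)),
        " ".join(f"{u:.1f} {v:.1f}" for u, v in pixels),
        " ".join(f"{x:.4f} {y:.4f} {z:.4f}" for x, y, z in points_cam),
    ]
    return "/".join(fields)

line = sample_line("img_0001.png", (310.0, 180.0), 260.0, 320.0,
                   [(320.5, 200.1), (500.2, 450.9)],
                   [(0.1, -0.2, 1.3), (0.15, 0.05, 1.1)])
print(line)
```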
Fig. 7 shows the bounding boxes of the robot computed in images of the UR5 under different postures and viewing angles in this embodiment.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.

Claims (5)

1. A method for quickly constructing an image data set for collaborative robot pose estimation is characterized by comprising the following steps:
step 1: under the condition that a robot base and a camera are relatively fixed, ensuring that the motion range of the robot is within the visual field range of the camera, fixedly connecting a calibration plate with the tail end of the robot, ensuring the uniqueness of a coordinate system of the calibration plate, and simultaneously storing tool center point position and pose data of the tail end of the robot corresponding to each calibration plate picture; the tool center point pose of the robot is set to change in six degrees of freedom to obtain a plurality of sets of images of the calibration plate and corresponding tool center point pose data of the tail end of the robot, and the step of saving the tool center point pose data corresponding to the images of the calibration plate comprises the following steps:
step 1.1: recording the initial pose of the tool center point of the robot end as (x, y, z, rx, ry, rz), wherein x, y and z are initial coordinate data of the tool center point, and rx, ry and rz are rotation vector representations of the tool center point;
step 1.2: converting the tool center point pose of the step 1.1 into a rotation matrix form, as shown in formula (1):
$$R = \cos\theta\, I + (1-\cos\theta)\,\hat{r}\,\hat{r}^{\mathsf T} + \sin\theta\,[\hat{r}]_{\times} \tag{1}$$

in formula (1), $\hat{r} = (rx, ry, rz)^{\mathsf T}/\theta$, $\theta = \operatorname{norm}((rx, ry, rz))$, and $I$ is the identity matrix;
step 1.3: converting the rotation matrix obtained in step 1.2 into a homogeneous matrix $T_0$:

$$T_0 = \begin{bmatrix} R & p \\ 0 & 1 \end{bmatrix}$$

wherein $p = (x, y, z)^{\mathsf T}$;
step 1.4: let the second transformation matrix be

$$T_f = \begin{bmatrix} r_1 & r_2 & r_3 & t \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

wherein $r_1$, $r_2$, $r_3$ and $t$ are each 3×1 column vectors; the transformed tool-center-point pose is calculated as shown in formula (2):

$$T_n = T_f\, T_0 \tag{2}$$
step 1.5: by changing the values of $r_1$, $r_2$, $r_3$ and the $t$ vector in turn, obtaining the tool-center-point poses after the changes in the six degrees of freedom;
step 2: calibrating the internal and external parameters of the camera based on the Zhang Zhengyou checkerboard calibration method according to the pictures of the plurality of groups of calibration plates obtained in step 1 and the corresponding tool-center-point pose data of the tail end of the robot, and solving a first transformation matrix from the base coordinate system of the robot to the coordinate system of the camera; the camera intrinsic matrix is the transformation matrix from the camera coordinate system to the image coordinate system, denoted $I_I$:

$$I_I = \begin{bmatrix} f_x & 0 & u \\ 0 & f_y & v \\ 0 & 0 & 1 \end{bmatrix}$$

wherein $f_x$, $f_y$, $u$, $v$ are calibration parameters;
step 3: calculating the positions of the key points of the robot in the robot base coordinate system according to the joint-axis (DH) parameters and the configuration of the robot, specifically comprising the following steps:
step 3.1: selecting a key point that does not move relative to the robot base as the key-point reference origin, recorded as $p_1 = (0, 0, 0)^{\mathsf T}$;

step 3.2: taking $p_1$ of step 3.1 as the starting point, calculating in turn the positions of the other key points of the robot in the robot base coordinate system by the DH method, ordered from near to far from the reference origin according to their degrees of freedom of motion, their coordinates being recorded as $p_i$ ($i = 2, 3, 4, \ldots$), the coordinate vectors of the other key points;
and simultaneously combining the first transformation matrix obtained in the step 2 and the camera internal reference matrix to complete the mapping of the key points of the robot on image pixel points, and specifically comprising the following steps:
step 8.1: constructing the homogeneous coordinates

$$\tilde{p}_i = \begin{bmatrix} p_i \\ 1 \end{bmatrix}$$

wherein $p_i$ is the spatial coordinate of key point $i$ ($i = 1, 2, 3, 4, \ldots$) in the robot base coordinate system;
step 8.2: denote

$$(x, y, z)^{\mathsf T} = I_I\,\bigl(T\,\tilde{p}_i\bigr)_{1:3}$$

wherein $x$, $y$ are the actual pixel location multiplied by a scale factor, $z$ is the scale factor, and $T$ is the first transformation matrix;
step 8.3: the actual pixel position of key point $i$ is $\bigl(x/z,\; y/z\bigr)$.
2. The method as claimed in claim 1, wherein the numbers of squares along the two sides of the calibration plate in step 1 are respectively odd and even, so as to ensure the invariance of the origin of the checkerboard coordinate system and the stability of the coordinate system of the calibration plate during the acquisition process.
3. The method for rapidly constructing the image dataset for collaborative robot pose estimation according to claim 1, wherein the solving process of the first transformation matrix in the step 2 is as follows:
step 2.1: the equation is constructed as follows: $T_{G2T} = T_{G2C}\, T_{C2B}\, T_{B2T}$

wherein $T_{G2C}$ is the camera extrinsic parameter, i.e. the transformation matrix from the calibration-plate coordinate system to the camera coordinate system; $T_{G2T}$ is the transformation matrix from the calibration-plate coordinate system to the tool-center-point coordinate system, which is fixed and unchanged; $T_{T2B}$ is the robot pose data, i.e. the transformation matrix from the tool-center-point coordinate system to the robot base coordinate system; $T_{B2C}$ is the transformation matrix from the robot base coordinate system to the camera coordinate system; $T_{C2B}$ and $T_{B2T}$ are the inverse matrices of $T_{B2C}$ and $T_{T2B}$ respectively;
step 2.2: denote $T_{G2C} = D$, $T_{B2T} = E$, $T_{C2B} = X$ and $T_{G2T} = C$, and construct matrices $A$ and $B$ satisfying formula (3):

$$C = D_1 X E_1 = \cdots = D_n X E_n,\qquad A = D_{n+1}^{-1} D_n,\quad B = E_{n+1} E_n^{-1},\quad A X = X B \tag{3}$$

wherein the subscript $n$ of a matrix denotes the data acquired at the $n$-th acquisition;
step 2.3: solving for the matrix $X$ in step 2.2; the inverse matrix of $X$ is the first transformation matrix $T_{B2C}$.
4. The method for rapidly constructing the image dataset for collaborative robot pose estimation according to claim 1, further comprising generating a bounding box of the robot in the image based on the actual pixel position of the key point and the resolution size of the camera image, the specific steps being as follows:
step 9.1: constructing a point set $\langle p_i \rangle$ containing the actual pixel positions of all the key points;

step 9.2: sorting the point set $\langle p_i \rangle$ by the $x$ and $y$ directions respectively to obtain $p_{min}(x_{min}, y_{min})$ and $p_{max}(x_{max}, y_{max})$, composed of the minimum and maximum horizontal and vertical coordinates over the point set;
step 9.3: on the basis of step 9.2, generating the bounding box of the robot in the image according to the actual resolution of the camera image, the coordinates of the upper-left and lower-right corner points of the bounding box being:

$$p_{box1} = (x_{min} - 0.05h,\; y_{min} - 0.05h)$$
$$p_{box2} = (x_{max} + 0.05h,\; y_{max} + 0.05h)$$

wherein $h$ is the pixel height of the image, $p_{box1}$ is the upper-left corner point of the bounding box, and $p_{box2}$ is the lower-right corner point.
5. The method for rapidly constructing the image dataset for collaborative robot pose estimation according to claim 1, wherein the step 3 further comprises a saving step, and the saving content comprises the image acquired by the camera, the horizontal and vertical coordinates of each key point in the image pixel, and the three-dimensional coordinates of each key point in the camera coordinate system.
CN201910215968.9A 2019-03-21 2019-03-21 Image data set rapid construction method for collaborative robot pose estimation Active CN110009689B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910215968.9A CN110009689B (en) 2019-03-21 2019-03-21 Image data set rapid construction method for collaborative robot pose estimation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910215968.9A CN110009689B (en) 2019-03-21 2019-03-21 Image data set rapid construction method for collaborative robot pose estimation

Publications (2)

Publication Number Publication Date
CN110009689A CN110009689A (en) 2019-07-12
CN110009689B true CN110009689B (en) 2023-02-28

Family

ID=67167566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910215968.9A Active CN110009689B (en) 2019-03-21 2019-03-21 Image data set rapid construction method for collaborative robot pose estimation

Country Status (1)

Country Link
CN (1) CN110009689B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127568B (en) * 2019-12-31 2023-07-04 南京埃克里得视觉技术有限公司 Camera pose calibration method based on spatial point location information
CN113662669A (en) * 2021-08-30 2021-11-19 华南理工大学 Optical power fusion tail end clamp holder and positioning control method thereof
CN114211483A (en) * 2021-11-17 2022-03-22 合肥联宝信息技术有限公司 Robot tool center point calibration method, device and storage medium
CN115611009B (en) * 2022-12-01 2023-03-21 中煤科工西安研究院(集团)有限公司 Coal mine underground stacking type rod box and drill rod separation system and method


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9188973B2 (en) * 2011-07-08 2015-11-17 Restoration Robotics, Inc. Calibration and transformation of a camera system's coordinate system

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105014667A (en) * 2015-08-06 2015-11-04 浙江大学 Camera and robot relative pose calibration method based on pixel space optimization
CN106767393A (en) * 2015-11-20 2017-05-31 沈阳新松机器人自动化股份有限公司 The hand and eye calibrating apparatus and method of robot
WO2017161608A1 (en) * 2016-03-21 2017-09-28 完美幻境(北京)科技有限公司 Geometric calibration processing method and device for camera
CN105716525A (en) * 2016-03-30 2016-06-29 西北工业大学 Robot end effector coordinate system calibration method based on laser tracker
CN107256567A (en) * 2017-01-22 2017-10-17 梅卡曼德(北京)机器人科技有限公司 A kind of automatic calibration device and scaling method for industrial robot trick camera
CN108198223A (en) * 2018-01-29 2018-06-22 清华大学 A kind of laser point cloud and the quick method for precisely marking of visual pattern mapping relations
CN108346165A (en) * 2018-01-30 2018-07-31 深圳市易尚展示股份有限公司 Robot and three-dimensional sensing components in combination scaling method and device
CN108748146A (en) * 2018-05-30 2018-11-06 武汉库柏特科技有限公司 A kind of Robotic Hand-Eye Calibration method and system
CN108972544A (en) * 2018-06-21 2018-12-11 华南理工大学 A kind of vision laser sensor is fixed on the hand and eye calibrating method of robot
CN109454634A (en) * 2018-09-20 2019-03-12 广东工业大学 A kind of Robotic Hand-Eye Calibration method based on flat image identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fu Huaqiang, "Application and Software Development of an Assembly Robot Vision System", China Master's Theses Full-text Database, Information Science and Technology, No. 3, 2017-03-15, full text *
Jiang Shixiong, "Research on Camera System Calibration Methods for Pose Estimation", China Doctoral Dissertations Full-text Database, Information Science and Technology, No. 8, 2017-08-15, full text *

Also Published As

Publication number Publication date
CN110009689A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110009689B (en) Image data set rapid construction method for collaborative robot pose estimation
CN108555908B (en) Stacked workpiece posture recognition and pickup method based on RGBD camera
US9616569B2 (en) Method for calibrating an articulated end effector employing a remote digital camera
JP5897624B2 (en) Robot simulation device for simulating workpiece removal process
JP4021413B2 (en) Measuring device
JP6324025B2 (en) Information processing apparatus and information processing method
CN111476841B (en) Point cloud and image-based identification and positioning method and system
CN106104198A (en) Messaging device, information processing method and program
JP2012141962A (en) Position and orientation measurement device and position and orientation measurement method
CN103302666A (en) Information processing apparatus and information processing method
CN106323286B (en) A kind of robot coordinate system and the transform method of three-dimensional measurement coordinate system
WO2019059343A1 (en) Workpiece information processing device and recognition method of workpiece
CN114310901B (en) Coordinate system calibration method, device, system and medium for robot
Zhao et al. Image-based visual servoing using improved image moments in 6-DOF robot systems
CN115351482A (en) Welding robot control method, welding robot control device, welding robot, and storage medium
Niu et al. A stereoscopic eye-in-hand vision system for remote handling in ITER
Sentenac et al. Automated thermal 3D reconstruction based on a robot equipped with uncalibrated infrared stereovision cameras
CN117103277A (en) Mechanical arm sensing method based on multi-mode data fusion
US20230150142A1 (en) Device and method for training a machine learning model for generating descriptor images for images of objects
CN116309879A (en) Robot-assisted multi-view three-dimensional scanning measurement method
Liang et al. An integrated camera parameters calibration approach for robotic monocular vision guidance
JP2014238687A (en) Image processing apparatus, robot control system, robot, image processing method, and image processing program
JP2718678B2 (en) Coordinate system alignment method
Chowdhury et al. Neural Network-Based Pose Estimation Approaches for Mobile Manipulation
Rousopoulou et al. Automated mechanical multi-sensorial scanning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant