CN110009689A - Rapid image-dataset construction method for collaborative robot pose estimation - Google Patents

Rapid image-dataset construction method for collaborative robot pose estimation

Publication number
CN110009689A
CN110009689A (application CN201910215968.9A); granted as CN110009689B
Authority
CN
China
Prior art keywords
robot
camera
coordinate system
point
coordinate
Prior art date
Legal status
Granted
Application number
CN201910215968.9A
Other languages
Chinese (zh)
Other versions
CN110009689B (en)
Inventor
庄春刚
朱向阳
周凡
沈逸超
赵恒
朱磊
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN201910215968.9A
Publication of CN110009689A
Application granted
Publication of CN110009689B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention discloses a rapid image-dataset construction method for collaborative robot pose estimation, comprising the steps of: with the robot base and the camera fixed relative to each other, ensuring that the robot's range of motion lies within the camera's field of view; fixedly connecting a calibration board to the end of the robot so that the board's coordinate system is unique, while saving the tool-center-point pose data of the robot end for every calibration-board picture; from the resulting groups of calibration-board pictures and corresponding tool-center-point poses, running a designed calibration algorithm to solve for the first transformation matrix from the robot base coordinate system to the camera coordinate system together with the camera intrinsics; and completing the mapping of the robot's key points to image pixels from the robot's DH parameters and configuration, then generating the associated data. The method offers a high degree of automation, high speed, high accuracy, and rich information, ensuring the reliability of follow-up work.

Description

Rapid image-dataset construction method for collaborative robot pose estimation
Technical field
The present invention relates to the field of robotics, and more particularly to a rapid image-dataset construction method for collaborative robot pose estimation.
Background art
In recent years, new machine-learning algorithms such as deep learning have achieved important breakthroughs in the image domain, and migrating them to robotics has become a research hotspot in the field. The most successful combination of robotics and deep learning to date is Bin-Picking, i.e., pose recognition and grasping of randomly piled objects, which typically uses two-dimensional images or three-dimensional point clouds as the dataset. Other tasks, such as robot obstacle-avoidance motion planning, do not necessarily use visual information such as images; although some progress has been made there, it is usually achieved in laboratory or simulated environments and is difficult to apply in actual production. Building datasets from visual information is therefore the mainstream direction for deep-learning algorithms.
Machine-learning algorithms such as deep learning generally require large-scale datasets to achieve good results, and manually annotating datasets as done in the image domain is too time-consuming and laborious to build a large dataset in a short time. The advantage of using image information as a dataset in robotics is that the robot can be modeled accurately from key parameters such as its DH parameters, yielding the details of any position on the robot body. Combined with the simple camera-calibration method proposed by Zhang Zhengyou, accurate camera intrinsic and extrinsic parameters can be obtained by shooting several calibration-board images and processing them; together with the robot end-effector pose data, the transformation between the robot base and the camera coordinate system can be calibrated, completing a precise mapping from the robot base coordinate system through the camera coordinate system to the image pixel coordinate system.
Collaborative robots have been a hotspot in robotics in recent years. To guarantee intrinsic safety, they differ from conventional industrial robots in design, structure, and sensing, which makes human-robot collaboration tasks in a shared workspace possible. At this stage, however, the robot's internal sensor data are one-dimensional: collisions are often detected only from the magnitude of the measured external torque, which makes it hard to guarantee the safety of an entire task. Real human-robot collaboration scenarios therefore need an external sensor combined with a safety-detection algorithm to monitor the collaborative robot's pose in real time, as another layer of assurance for overall system safety.
Those skilled in the art are therefore working to develop a rapid image-dataset construction method for collaborative robot pose estimation. Compared with manual annotation in the image domain, such a method offers high speed, high accuracy, and rich information; it places no particular requirement on the type of collaborative robot used and can be applied quickly to different collaborative-robot scenarios.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is the dataset-acquisition problem of collaborative robot pose estimation with machine-learning algorithms: greatly improving acquisition efficiency while ensuring the stability and accuracy of the dataset.
To achieve the above object, the present invention provides a rapid image-dataset construction method for collaborative robot pose estimation, comprising the following steps:
Step 1: with the robot base and the camera fixed relative to each other, ensure that the robot's range of motion lies within the camera's field of view; fixedly connect a calibration board to the end of the robot, guaranteeing the uniqueness of the calibration board's coordinate system; and save the tool-center-point pose data of the robot's end for every picture of the calibration board;
Step 2: from the groups of calibration-board pictures and corresponding robot-end tool-center-point pose data obtained in step 1, calibrate the camera intrinsic and extrinsic parameters with Zhang Zhengyou's checkerboard method, and design a calibration algorithm to solve for the first transformation matrix, from the robot base coordinate system to the camera coordinate system;
Step 3: compute the positions of the robot's key points in the robot base coordinate system from the robot's joint-axis parameters and configuration, then combine the first transformation matrix obtained in step 2 with the camera intrinsic matrix to complete the mapping of the robot's key points to image pixels.
Further, the two sides of the calibration board's checkerboard in step 1 carry an odd and an even number of squares respectively, to guarantee the invariance of the checkerboard coordinate origin and the stability of the calibration-board coordinate system during acquisition.
Further, the tool-center-point pose of the robot in step 1 is varied over all six degrees of freedom, to obtain the groups of calibration-board pictures and the tool-center-point pose data of the corresponding robot end.
Further, obtaining the tool-center-point pose data corresponding to the groups of calibration-board pictures comprises the steps:
Step 1.1: denote the initial pose of the robot-end tool center point by (x, y, z, rx, ry, rz), where x, y, z are the tool center point's initial coordinate data and rx, ry, rz are its rotation-vector representation;
Step 1.2: convert the tool-center-point pose of step 1.1 into rotation-matrix form, as in formula (1):
R = cosθ·I + (1 − cosθ)·r̂·r̂ᵀ + sinθ·[r̂]×   (1)
In formula (1), r = (rx, ry, rz)ᵀ, r̂ = r/θ, θ = norm(r), I is the identity matrix, and [r̂]× is the skew-symmetric matrix of r̂;
Step 1.3: convert the rotation matrix obtained in step 1.2 into the homogeneous matrix T0:
T0 = [ R  p ]
     [ 0  1 ]
where R is the rotation matrix of formula (1) and p = (x, y, z)ᵀ;
Step 1.4: let the second transformation matrix be
Tf = [ r1  r2  r3  t ]
     [ 0   0   0   1 ]
where r1, r2, r3, t are each 3×1 column vectors, and compute the transformed tool-center-point pose by formula (2):
Tn = Tf·T0   (2)
Step 1.5: varying the values of the vectors r1, r2, r3, t in turn yields the tool-center-point pose after variation over the six degrees of freedom.
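For reference, steps 1.1 to 1.5 can be written in a few lines of Python. The following is a minimal sketch, assuming NumPy and OpenCV are available; the function names are illustrative and not part of the patent:

```python
import numpy as np
import cv2

def tcp_pose_to_homogeneous(pose):
    """Steps 1.1-1.3: convert an (x, y, z, rx, ry, rz) TCP pose into the
    homogeneous matrix T0.  (rx, ry, rz) is an axis-angle rotation vector;
    cv2.Rodrigues implements the rotation-matrix conversion of formula (1)."""
    x, y, z, rx, ry, rz = pose
    R, _ = cv2.Rodrigues(np.array([rx, ry, rz], dtype=np.float64).reshape(3, 1))
    T0 = np.eye(4)
    T0[:3, :3] = R
    T0[:3, 3] = [x, y, z]
    return T0

def perturb_pose(T0, r1, r2, r3, t):
    """Steps 1.4-1.5: assemble the second transformation matrix Tf from the
    3x1 column vectors r1, r2, r3, t and return Tn = Tf @ T0 (formula (2))."""
    Tf = np.eye(4)
    Tf[:3, :3] = np.column_stack([r1, r2, r3])  # rotation part of Tf
    Tf[:3, 3] = np.asarray(t).ravel()           # translation part of Tf
    return Tf @ T0
```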
Further, the first transformation matrix in step 2 is solved as follows:
Step 2.1: construct the equation T_G2T = T_G2C·T_C2B·T_B2T,
where T_G2C is the camera extrinsic, i.e. the transformation matrix from the coordinate system of the calibration board to the coordinate system of the camera; T_G2T is the fixed, invariant transformation matrix from the calibration-board coordinate system to the tool-center-point coordinate system; T_T2B is the robot pose data, i.e. the transformation matrix from the tool-center-point coordinate system to the coordinate system of the robot base; T_B2C is the transformation matrix from the coordinate system of the robot base to the coordinate system of the camera; and T_C2B, T_B2T are the inverses of T_B2C, T_T2B respectively;
Step 2.2: write T_G2C = D, T_B2T = E, T_C2B = X, and construct the matrices A and B satisfying formula (3):
A = D_(n+1)⁻¹·D_n,  B = E_(n+1)·E_n⁻¹,  A·X = X·B   (3)
where the matrix subscript n denotes the n-th acquisition;
Step 2.3: solve for the matrix X of step 2.2; the inverse of X is the first transformation matrix T_B2C.
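One way to solve the AX = XB system of steps 2.1 to 2.3 is with an off-the-shelf hand-eye solver. The sketch below is a non-authoritative illustration using OpenCV's cv2.calibrateHandEye; since the camera here is fixed while the board moves with the robot (an eye-to-hand setup), it relies on the commonly used trick of feeding base-to-TCP transforms in place of TCP-to-base ones, so the returned transform is camera-to-base:

```python
import numpy as np
import cv2

def solve_first_transformation(T_tcp2base_list, T_board2cam_list):
    """Solve for the first transformation matrix T_B2C (base -> camera).
    T_tcp2base_list: per-shot TCP->base homogeneous matrices (robot poses);
    T_board2cam_list: per-shot board->camera extrinsics T_G2C."""
    R_base2tcp, t_base2tcp = [], []
    for T in T_tcp2base_list:
        Tinv = np.linalg.inv(T)              # base -> TCP (eye-to-hand trick)
        R_base2tcp.append(Tinv[:3, :3])
        t_base2tcp.append(Tinv[:3, 3])
    R_board2cam = [T[:3, :3] for T in T_board2cam_list]
    t_board2cam = [T[:3, 3] for T in T_board2cam_list]
    R_cam2base, t_cam2base = cv2.calibrateHandEye(
        R_base2tcp, t_base2tcp, R_board2cam, t_board2cam)
    T_cam2base = np.eye(4)
    T_cam2base[:3, :3] = R_cam2base
    T_cam2base[:3, 3] = t_cam2base.ravel()
    return np.linalg.inv(T_cam2base)         # invert, as in step 2.3
```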
Further, the camera intrinsic matrix in step 2 is the transformation matrix from the camera coordinate system to the image coordinate system, denoted I_I:
I_I = [ fx  0  u ]
      [ 0  fy  v ]
      [ 0   0  1 ]
where fx, fy, u, v are calibrated parameters.
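A sketch of the checkerboard calibration that produces I_I, in the style of Zhang's method and assuming OpenCV; the pattern and square sizes are placeholder values chosen to match the odd-by-even square count of step 1:

```python
import numpy as np
import cv2

def calibrate_intrinsics(image_paths, pattern=(8, 7), square=0.02):
    """Checkerboard intrinsic calibration (sketch).  `pattern` counts inner
    corners per side (8x7 corners correspond to a 9x8, odd-by-even square
    grid); `square` is the cell size in metres.  Both are assumed values."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    # K is the intrinsic matrix I_I = [[fx, 0, u], [0, fy, v], [0, 0, 1]];
    # each (rvec, tvec) pair is one board-to-camera extrinsic T_G2C of step 2.
    _, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, size, None, None)
    return K, dist, rvecs, tvecs
```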
Further, computing the positions of the robot's key points in the robot base coordinate system in step 3 comprises the following steps:
Step 3.1: select the key point that is stationary relative to the robot base and denote it the key-point reference origin p_1;
Step 3.2: taking p_1 of step 3.1 as the starting point and ordering the other key points by their kinematic distance from the reference origin, compute each key point's position in the robot base coordinate system in turn by the DH method; the coordinates are denoted p_i (i = 2, 3, 4, ...), where p_i is the coordinate vector of the i-th key point.
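A minimal sketch of step 3.2, assuming standard DH conventions; it treats the key points as origins of the successive DH frames, to which measured offsets would be added wherever a physical key point does not coincide with a frame origin (as in the embodiment below):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one link from standard DH parameters."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.,       sa,       ca,      d],
                     [0.,       0.,       0.,     1.]])

def keypoints_in_base(dh_rows, joint_angles):
    """Chain the link transforms and return each frame origin expressed in
    the robot base coordinate system (the step-3.2 key-point positions)."""
    T = np.eye(4)
    points = [T[:3, 3].copy()]            # key-point reference origin p1
    for (d, a, alpha), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(theta, d, a, alpha)
        points.append(T[:3, 3].copy())
    return points
```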
Further, solving the mapping of the robot's key points to image pixels in step 3 comprises the following steps:
Step 8.1: construct the homogeneous coordinates p̃_i = (p_iᵀ, 1)ᵀ, where p_i is the space coordinate of key point i (i = 1, 2, 3, 4, ...) in the robot base coordinate system;
Step 8.2: write (x, y, z)ᵀ = I_I·T·p̃_i,
where x, y are the actual pixel position multiplied by a scale factor, z is that scale factor, and T is the first transformation matrix;
Step 8.3: the actual pixel position of key point i is p_i′ = (x/z, y/z).
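Steps 8.1 to 8.3 amount to a pinhole projection; a minimal sketch, assuming NumPy and the matrices defined above:

```python
import numpy as np

def project_keypoint(p_base, T_b2c, K):
    """Map a key point from the robot base frame to image pixels.
    p_base: 3-vector in base coordinates; T_b2c: the 4x4 first
    transformation matrix; K: the 3x3 intrinsic matrix I_I."""
    p_h = np.append(p_base, 1.0)       # step 8.1: homogeneous coordinates
    p_cam = (T_b2c @ p_h)[:3]          # key point in the camera frame
    x, y, z = K @ p_cam                # step 8.2: scaled pixel coordinates
    return np.array([x / z, y / z])    # step 8.3: actual pixel position
```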
Further, the method includes generating the bounding box of the robot in the image from the key points' actual pixel positions and the camera image resolution, with the following steps:
Step 9.1: construct the point set <p_i> containing all key points' actual pixel positions;
Step 9.2: sort the coordinates of the elements of the point set <p_i> by minimum and maximum along the x and y directions respectively, obtaining p_min(x_min, y_min) and p_max(x_max, y_max), the points with the smallest and largest abscissa and ordinate in the set;
Step 9.3: on the basis of step 9.2, generate the bounding box of the robot in the image according to the actual camera image resolution, the top-left and bottom-right corners of the box being
p_box1 = (x_min − 0.05h, y_min − 0.05h)
p_box2 = (x_max + 0.05h, y_max + 0.05h)
where h is the pixel height of the image, p_box1 is the box's top-left corner, and p_box2 its bottom-right corner.
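The box construction of steps 9.1 to 9.3 in sketch form; the padding of 0.05h follows the formulas above:

```python
import numpy as np

def robot_bbox(pixel_points, h):
    """Axis-aligned bounding box around all projected key points, padded
    by 5% of the image pixel height h on every side."""
    pts = np.asarray(pixel_points)       # step 9.1: the point set <p_i>
    x_min, y_min = pts.min(axis=0)       # step 9.2: coordinate extremes
    x_max, y_max = pts.max(axis=0)
    pad = 0.05 * h                       # step 9.3: padded corners
    return (x_min - pad, y_min - pad), (x_max + pad, y_max + pad)
```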
Further, a saving step follows step 3; the saved content includes the image acquired by the camera, the pixel abscissa and ordinate of each key point in the image, and the three-dimensional coordinates of each key point in the camera coordinate system.
Compared with the prior art, the present invention addresses the drawback that current image datasets are built by manual annotation: combining the configuration features of the robot itself, it proposes a rapid image-dataset construction method for collaborative robot pose estimation that offers a high degree of automation, high speed, high accuracy, and rich information, ensuring the reliability of follow-up work.
The concept of the invention, its specific structure, and the technical effects produced are further described below with reference to the accompanying drawings, so that the purpose, features, and effects of the invention can be fully understood.
Brief description of the drawings
Fig. 1 is a flowchart of the rapid image-dataset construction system for collaborative robot pose estimation of a preferred embodiment of the invention;
Fig. 2 is a schematic diagram of the distribution of the six joint positions on the UR5 robot;
Fig. 3 is a schematic diagram of the nine key-point positions of the UR5 robot of a preferred embodiment of the invention;
Fig. 4 is a schematic diagram of the DH parameters of the UR5 robot of the embodiment shown in Fig. 2;
Fig. 5 shows the computed first transformation matrix and camera intrinsic matrix of the embodiment shown in Fig. 2;
Fig. 6 is a schematic diagram of the concrete save format of the real dataset of the embodiment shown in Fig. 2;
Fig. 7 is a schematic diagram of the bounding boxes of the UR5 robot in images generated by the RGB camera from different viewpoints in the embodiment shown in Fig. 2.
Detailed description of the embodiments
Preferred embodiments of the invention are introduced below with reference to the accompanying drawings, to make the technical content clearer and easier to understand. The invention can be embodied in many different forms, and its scope of protection is not limited to the embodiments mentioned herein.
In the drawings, components with identical structure are denoted by the same numeral, and components with similar structure or function are denoted by similar numerals. The size and thickness of each component shown are drawn arbitrarily; the invention does not limit them. For clarity of illustration, the thickness of components is exaggerated in places.
Embodiment one
Fig. 1 shows the concrete flow of this embodiment's rapid image-dataset construction system for collaborative robot pose estimation:
Step 1: build the hardware platform, keeping the robot base and the camera fixed relative to each other and ensuring that the robot's range of motion lies within the camera's field of view;
Step 2: adjust the robot's initial pose and fix the calibration board to the robot end, then superimpose variations over the six degrees of freedom to change the robot-end tool-center-point (TCP for short) pose; collect the calibration data, saving the robot-end TCP pose data corresponding to every calibration-board picture;
Step 3: design the corresponding calibration algorithm and, from the groups of calibration-board pictures and corresponding robot TCP poses acquired in step 2, compute the first transformation matrix from the robot base coordinate system to the camera coordinate system, so that TCP poses in robot space can be unified into the camera coordinate system;
Step 4: obtain the robot's DH parameters through robot modeling and configuration analysis, analyse the robot's specific configuration, and compute the positions of the robot key points in the robot coordinate system; combining these with the first transformation matrix obtained in step 3 and the camera intrinsic matrix completes the mapping of the robot key points in space to image pixels and further generates a series of related data;
Step 5: save the processed image-dataset data as text files.
To guarantee the stability of the calibration-board coordinate system, the two sides of the calibration board's checkerboard carry an odd and an even number of squares respectively.
In this embodiment, the initial attitude of the robot is adjusted with the teach pendant.
In this embodiment, the corresponding calibration algorithm calibrates the camera intrinsic and extrinsic parameters with Zhang Zhengyou's checkerboard method, converts the corresponding robot TCP pose data into homogeneous-matrix form, and, combining the external parameters of the calibration board, computes the first transformation matrix from the robot base coordinate system to the camera coordinate system.
Embodiment two
In this embodiment, a UR5 robot and an RGB camera are used.
The rapid image-dataset construction method of the invention is described in detail below with reference to the UR5 robot and RGB camera of this embodiment.
To solve for the fixed transformation matrix from the robot base to the camera coordinate system, the corresponding calibration algorithm is designed. Denote the initial pose of the robot-end tool center point by (x, y, z, rx, ry, rz), where x, y, z are the tool-center-point coordinate data and rx, ry, rz are the rotation-vector representation of the pose; it is converted into a rotation matrix by formula (1):
R = cosθ·I + (1 − cosθ)·r̂·r̂ᵀ + sinθ·[r̂]×   (1)
In formula (1), r = (rx, ry, rz)ᵀ, r̂ = r/θ, θ = norm(r), and I is the identity matrix.
The rotation matrix of formula (1) is converted into homogeneous form:
T0 = [ R  p ]
     [ 0  1 ]
where p = (x, y, z)ᵀ.
Denote the second transformation matrix
Tf = [ r1  r2  r3  t ]
     [ 0   0   0   1 ]
where r1, r2, r3, t are each 3×1 column vectors; varying the values of r1, r2, r3, t in turn changes the TCP over the six degrees of freedom, and the transformed TCP pose is given by formula (2):
Tn = Tf·T0   (2)
To solve for the fixed transformation matrix from the robot base to the camera coordinate system, the data sets A and B are constructed and the classic AX = XB solution algorithm is used to compute the first transformation matrix from the robot base coordinate system to the camera coordinate system. The essence of camera-robot calibration is solving for X given multiple instances of AX = XB, where A and B are built as follows.
From the principles of camera calibration, the camera intrinsics form the transformation matrix from the camera coordinate system to the image coordinate system, denoted I_I; the camera extrinsics express the transformation matrix from the calibration-board coordinate system to the camera coordinate system, denoted T_G2C; the robot pose data express the transformation matrix from the tool-center-point coordinate system to the robot base coordinate system, denoted T_T2B; since the calibration board is fixed to the robot end, the transformation matrix from the calibration-board coordinate system to the tool-center-point coordinate system is fixed and invariant, denoted T_G2T; and the transformation from the coordinate system of the robot base to the camera coordinate system is denoted T_B2C.
From the above transformation relations, the transformation from the checkerboard coordinate system to the tool-center-point coordinate system satisfies the equation
T_G2T = T_G2C·T_C2B·T_B2T, where T_C2B, T_B2T are the inverses of T_B2C, T_T2B respectively.
In this equation, the acquired T_G2C and T_B2T are denoted D and E respectively, and the fixed T_C2B and T_G2T are denoted X and C; the equation then reads C = D_1·X·E_1 = D_2·X·E_2 = ... = D_n·X·E_n, where the subscript denotes the n-th acquisition.
Hence D_(n+1)·X·E_(n+1) = D_n·X·E_n; letting A = D_(n+1)⁻¹·D_n and B = E_(n+1)·E_n⁻¹ gives A·X = X·B, which completes the construction of the data A and B. What is solved at this point is the transformation matrix from the camera coordinate system to the robot base coordinate system; inverting it yields the first transformation matrix T_B2C from the robot base coordinate system to the camera coordinate system.
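The pairwise construction of A and B from consecutive acquisitions follows directly from this derivation; a minimal sketch, assuming the D_n and E_n are stored as 4x4 NumPy arrays:

```python
import numpy as np

def build_AB_pairs(D_list, E_list):
    """Form the AX = XB pairs A = D_(n+1)^-1 D_n, B = E_(n+1) E_n^-1
    from consecutive acquisitions, as derived above."""
    A, B = [], []
    for n in range(len(D_list) - 1):
        A.append(np.linalg.inv(D_list[n + 1]) @ D_list[n])
        B.append(E_list[n + 1] @ np.linalg.inv(E_list[n]))
    return A, B
```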
Further, the camera intrinsic matrix is denoted I_I, as in formula (4):
I_I = [ fx  0  u ]
      [ 0  fy  v ]
      [ 0   0  1 ]   (4)
In formula (4), fx, fy, u, v are calibrated parameters.
To solve for the key points' positions in actual camera pixels, the robot is kinematically modeled by the DH method and the spatial-position expressions of all key points are solved in combination with the robot's specific configuration; this locates the robot key points accurately, and, via the first transformation matrix and the camera intrinsic matrix, maps them precisely onto image pixels, further generating a series of related data. Note that when modeling with the DH method, the spatial positions of the actual robot key points do not entirely coincide with the origins of the DH coordinate frames; the robot's specific configuration must be analysed at this point.
Fig. 2 shows the joint positions of the UR5 of this embodiment, Fig. 3 its nine key points, and Fig. 4 a schematic of its DH parameters. The robot's DH parameters are known from the UR5 data of this embodiment; the specific values are given in the table of Fig. 4 (not reproduced in this text).
The coordinate of each robot key point in the robot base coordinate system is computed as follows:
1) Denote the coordinate of the robot base (key point 1) as p_1 = (0, 0, 0)ᵀ.
2) The coordinate p_2 of key point 2 is known directly from the DH parameters.
3) Since motion of joint 1 changes the coordinate of key point 3, the coordinate of key point 3 is a function of the joint-1 angle and of delta1, where delta1 is an actual measured value outside the DH parameters and is a constant.
4) Since joints 2, 3, 4 form a planar manipulator, key point 4 can be regarded as key point 3 with the influence of joint 2's motion superimposed, where abs(x) in the corresponding expression denotes the absolute value of the real number x; the coordinate of key point 4 follows.
5) Similarly, key point 5 differs from key point 4 only in the radius of gyration when joint 1 moves; by measurement, the radius of gyration of key point 5 is delta2, of the same nature as delta1, from which the coordinate of key point 5 follows.
6) Key point 6 is influenced by joints 1, 2 and 3, from which its coordinate follows.
7) Similarly, key point 7 differs from key point 6 in the radius of gyration when joint 1 moves, with measured value delta3 = d_4; the coordinate of key point 7 follows.
8) Key point 8 is influenced by joints 1, 2, 3 and 4, from which its coordinate follows.
9) The coordinate of key point 9 is the end-flange coordinate p_9, obtained directly from the DH parameter model. The spatial positions of all nine key points in the robot base coordinate system are thus computed.
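For illustration only, the nominal, publicly documented UR5 DH values can stand in for the table of Fig. 4, which is not reproduced in this text; the measured offsets delta1 to delta3 of this embodiment are likewise not reproduced and would be applied per key point. The snippet reuses the keypoints_in_base sketch from step 3:

```python
import numpy as np

# Nominal published UR5 DH rows (d, a, alpha), one per joint: assumed
# values standing in for the table of Fig. 4.
UR5_DH = [(0.089159,  0.0,      np.pi / 2),
          (0.0,      -0.425,    0.0),
          (0.0,      -0.39225,  0.0),
          (0.10915,   0.0,      np.pi / 2),
          (0.09465,   0.0,     -np.pi / 2),
          (0.0823,    0.0,      0.0)]

q = np.zeros(6)                        # example joint configuration
pts = keypoints_in_base(UR5_DH, q)     # frame origins as first-cut key points
# The lateral offsets delta1..delta3 measured in this embodiment would be
# added to these origins to obtain the nine physical key points.
```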
Construct the homogeneous coordinates p̃_i = (p_iᵀ, 1)ᵀ of each key point, where p_i is the space coordinate of key point i in the robot base coordinate system; then
(x, y, z)ᵀ = I_I·T_B2C·p̃_i
where x, y are the actual pixel position multiplied by a scale factor and z is that scale factor, so the actual pixel position of key point i is p_i′ = (x/z, y/z). After processing all nine key points in this way, a point set <p_i> of nine elements is obtained.
To further pin down the robot's extent in the picture, the elements of the point set <p_i> are sorted by minimum and maximum along the x and y directions, giving p_min(x_min, y_min) and p_max(x_max, y_max), the points with the smallest and largest abscissa and ordinate in the set; on this basis, the bounding box of the robot in the image is generated according to the actual image resolution:
p_box1 = (x_min − 0.05h, y_min − 0.05h)
p_box2 = (x_max + 0.05h, y_max + 0.05h)
where p_box1 is the box's top-left corner, p_box2 its bottom-right corner, and h the pixel height of the image.
To further establish a large-scale dataset for deep learning, this embodiment also provides a save function: the processed collaborative-robot pose data are saved in document form.
Fig. 5 shows the computed first transformation matrix and camera intrinsic matrix of this embodiment.
Fig. 6 shows the save format of the dataset. The content comprises the images acquired by the camera and text files in which the robot key-point positions in each image are automatically annotated, together with key information such as the bounding-box corner coordinates and the key points' space coordinates in the camera coordinate system. Specifically, each line represents one sample; the saved information is, in order: the image's local save name, the pixel coordinates of the bounding box's top-left corner, the box's pixel width and height, the pixel abscissa and ordinate of each key point in the image, and each key point's three-dimensional coordinates in the camera coordinate system, with the fields separated by "/".
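A sketch of a writer for this "/"-separated layout; the field order follows the text, while the exact numeric formatting is an assumption:

```python
def save_sample(path, img_name, box_tl, box_w, box_h, pixel_pts, cam_pts):
    """Append one dataset line: image save name, bounding-box top-left
    pixel, box pixel width and height, each key point's pixel coordinates,
    then its 3-D coordinates in the camera frame, separated by '/'."""
    fields = [img_name,
              f"{box_tl[0]:.1f}", f"{box_tl[1]:.1f}",
              f"{box_w:.1f}", f"{box_h:.1f}"]
    for (u, v) in pixel_pts:
        fields += [f"{u:.1f}", f"{v:.1f}"]
    for (x, y, z) in cam_pts:
        fields += [f"{x:.4f}", f"{y:.4f}", f"{z:.4f}"]
    with open(path, "a", encoding="utf-8") as f:
        f.write("/".join(fields) + "\n")
```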
Fig. 7 shows the bounding boxes of the robot computed for images of the UR5 generated at different positions and from different viewpoints in this embodiment.
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that a person of ordinary skill in the art can make many modifications and variations according to the concept of the invention without creative labour. Hence any technical solution that a person skilled in the art can obtain from the prior art through logical analysis, reasoning, or limited experiment under the concept of this invention shall fall within the scope of protection determined by the claims.

Claims (10)

1. A rapid image-dataset construction method for collaborative robot pose estimation, characterized by comprising the following steps:
Step 1: with the robot base and a camera fixed relative to each other, ensure that the robot's range of motion lies within the camera's field of view, fixedly connect a calibration board to the end of the robot, guaranteeing the uniqueness of the calibration board's coordinate system, and save the tool-center-point pose data of the robot's end for every picture of the calibration board;
Step 2: from the groups of calibration-board pictures and corresponding robot-end tool-center-point pose data obtained in step 1, calibrate the camera intrinsic and extrinsic parameters with Zhang Zhengyou's checkerboard method and solve for the first transformation matrix from the robot base coordinate system to the camera coordinate system;
Step 3: compute the positions of the robot's key points in the robot base coordinate system from the robot's joint-axis parameters and configuration, and, combining the first transformation matrix obtained in step 2 with the camera intrinsic matrix, complete the mapping of the robot's key points to image pixels.
2. The rapid image-dataset construction method for collaborative robot pose estimation of claim 1, characterized in that the two sides of the calibration board's checkerboard in step 1 carry an odd and an even number of squares respectively, to guarantee the invariance of the checkerboard coordinate origin and the stability of the calibration-board coordinate system during acquisition.
3. The rapid image-dataset construction method for collaborative robot pose estimation of claim 1, characterized in that the tool-center-point pose of the robot in step 1 is varied over all six degrees of freedom, to obtain the groups of calibration-board pictures and the tool-center-point pose data of the corresponding robot end.
4. The rapid image-dataset construction method for collaborative robot pose estimation of claim 3, characterized in that obtaining the tool-center-point pose data corresponding to the groups of calibration-board pictures in step 1 comprises the following steps:
Step 1.1: denote the initial pose of the robot-end tool center point by (x, y, z, rx, ry, rz), where x, y, z are the tool center point's initial coordinate data and rx, ry, rz are its rotation-vector representation;
Step 1.2: convert the tool-center-point pose of step 1.1 into rotation-matrix form, as in formula (1):
R = cosθ·I + (1 − cosθ)·r̂·r̂ᵀ + sinθ·[r̂]×   (1)
In formula (1), r = (rx, ry, rz)ᵀ, r̂ = r/θ, θ = norm(r), and I is the identity matrix;
Step 1.3: convert the rotation matrix obtained in step 1.2 into the homogeneous matrix T0:
T0 = [ R  p ]
     [ 0  1 ]
where p = (x, y, z)ᵀ;
Step 1.4: let the second transformation matrix be
Tf = [ r1  r2  r3  t ]
     [ 0   0   0   1 ]
where r1, r2, r3, t are each 3×1 column vectors, and compute the transformed tool-center-point pose by formula (2):
Tn = Tf·T0   (2)
Step 1.5: varying the values of the vectors r1, r2, r3, t in turn yields the tool-center-point pose after variation over the six degrees of freedom.
5. The rapid image-dataset construction method for collaborative robot pose estimation of claim 1, characterized in that the first transformation matrix in step 2 is solved as follows:
Step 2.1: construct the equation T_G2T = T_G2C·T_C2B·T_B2T,
where T_G2C is the camera extrinsic, i.e. the transformation matrix from the coordinate system of the calibration board to the coordinate system of the camera; T_G2T is the fixed, invariant transformation matrix from the calibration-board coordinate system to the tool-center-point coordinate system; T_T2B is the robot pose data, i.e. the transformation matrix from the tool-center-point coordinate system to the coordinate system of the robot base; T_B2C is the transformation matrix from the coordinate system of the robot base to the coordinate system of the camera; and T_C2B, T_B2T are the inverses of T_B2C, T_T2B respectively;
Step 2.2: write T_G2C = D, T_B2T = E, T_C2B = X, and construct the matrices A and B satisfying formula (3):
A = D_(n+1)⁻¹·D_n,  B = E_(n+1)·E_n⁻¹,  A·X = X·B   (3)
where the matrix subscript n denotes the n-th acquisition;
Step 2.3: solve for the matrix X of step 2.2; the inverse of X is the first transformation matrix T_B2C.
6. The rapid image-dataset construction method for collaborative robot pose estimation of claim 1, characterized in that the camera intrinsic matrix in step 2 is the transformation matrix from the camera coordinate system to the image coordinate system, denoted I_I:
I_I = [ fx  0  u ]
      [ 0  fy  v ]
      [ 0   0  1 ]
where fx, fy, u, v are calibrated parameters.
7. The rapid image-dataset construction method for collaborative robot pose estimation of claim 6, characterized in that computing the positions of the robot's key points in the robot base coordinate system in step 3 comprises the following steps:
Step 3.1: select the key point that is stationary relative to the robot base and denote it the key-point reference origin p_1;
Step 3.2: taking p_1 of step 3.1 as the starting point and ordering the other key points by their kinematic distance from the reference origin, compute each key point's position in the robot base coordinate system in turn by the DH method; the coordinates are denoted p_i (i = 2, 3, 4, ...), where p_i is the coordinate vector of the i-th key point.
8. The rapid image-dataset construction method for collaborative robot pose estimation of claim 7, characterized in that solving the mapping of the robot's key points to image pixels in step 3 comprises the following steps:
Step 8.1: construct the homogeneous coordinates p̃_i = (p_iᵀ, 1)ᵀ, where p_i is the space coordinate of key point i (i = 1, 2, 3, 4, ...) in the robot base coordinate system;
Step 8.2: write (x, y, z)ᵀ = I_I·T·p̃_i,
where x, y are the actual pixel position multiplied by a scale factor, z is that scale factor, and T is the first transformation matrix;
Step 8.3: the actual pixel position of key point i is p_i′ = (x/z, y/z).
9. The rapid image-dataset construction method for collaborative robot pose estimation of claim 8, characterized by further comprising generating the bounding box of the robot in the image from the key points' actual pixel positions and the camera image resolution, with the following steps:
Step 9.1: construct the point set <p_i> containing all key points' actual pixel positions;
Step 9.2: sort the coordinates of the elements of the point set <p_i> by minimum and maximum along the x and y directions respectively, obtaining p_min(x_min, y_min) and p_max(x_max, y_max), the points with the smallest and largest abscissa and ordinate in the set;
Step 9.3: on the basis of step 9.2, generate the bounding box of the robot in the image according to the true camera image resolution, the top-left and bottom-right corners of the box being
p_box1 = (x_min − 0.05h, y_min − 0.05h)
p_box2 = (x_max + 0.05h, y_max + 0.05h)
where h is the pixel height of the image, p_box1 is the box's top-left corner, and p_box2 its bottom-right corner.
10. The rapid image-dataset construction method for collaborative robot pose estimation of claim 1, characterized by further comprising a saving step after step 3, the saved content including the image acquired by the camera, the pixel abscissa and ordinate of each key point in the image, and the three-dimensional coordinates of each key point in the camera coordinate system.
CN201910215968.9A, filed 2019-03-21 (priority 2019-03-21): Image data set rapid construction method for collaborative robot pose estimation; granted as CN110009689B; status Active.

Priority Applications (1)

Application Number: CN201910215968.9A; Priority/Filing Date: 2019-03-21; Title: Image data set rapid construction method for collaborative robot pose estimation; granted as CN110009689B

Publications (2)

Publication Number Publication Date
CN110009689A 2019-07-12
CN110009689B 2023-02-28

Family ID: 67167566

Family Applications (1)

Application Number: CN201910215968.9A; Priority Date: 2019-03-21; Filing Date: 2019-03-21; Title: Image data set rapid construction method for collaborative robot pose estimation (Active, granted as CN110009689B)

Country Status (1)

Country: CN; Publication: CN110009689B

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130010081A1 (en) * 2011-07-08 2013-01-10 Tenney John A Calibration and transformation of a camera system's coordinate system
CN105014667A (en) * 2015-08-06 2015-11-04 浙江大学 Camera and robot relative pose calibration method based on pixel space optimization
CN106767393A (en) * 2015-11-20 2017-05-31 沈阳新松机器人自动化股份有限公司 The hand and eye calibrating apparatus and method of robot
WO2017161608A1 (en) * 2016-03-21 2017-09-28 完美幻境(北京)科技有限公司 Geometric calibration processing method and device for camera
CN105716525A (en) * 2016-03-30 2016-06-29 西北工业大学 Robot end effector coordinate system calibration method based on laser tracker
CN107256567A (en) * 2017-01-22 2017-10-17 梅卡曼德(北京)机器人科技有限公司 A kind of automatic calibration device and scaling method for industrial robot trick camera
CN108198223A (en) * 2018-01-29 2018-06-22 清华大学 A kind of laser point cloud and the quick method for precisely marking of visual pattern mapping relations
CN108346165A (en) * 2018-01-30 2018-07-31 深圳市易尚展示股份有限公司 Robot and three-dimensional sensing components in combination scaling method and device
CN108748146A (en) * 2018-05-30 2018-11-06 武汉库柏特科技有限公司 A kind of Robotic Hand-Eye Calibration method and system
CN108972544A (en) * 2018-06-21 2018-12-11 华南理工大学 A kind of vision laser sensor is fixed on the hand and eye calibrating method of robot
CN109454634A (en) * 2018-09-20 2019-03-12 广东工业大学 A kind of Robotic Hand-Eye Calibration method based on flat image identification

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FU Huaqiang, "Application and software development of an assembly robot vision system," China Master's Theses Full-text Database, Information Science and Technology *
JIANG Shixiong, "Research on camera system calibration methods for pose estimation," China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127568A (en) * 2019-12-31 2020-05-08 南京埃克里得视觉技术有限公司 Camera pose calibration method based on space point location information
CN113662669A (en) * 2021-08-30 2021-11-19 华南理工大学 Optical power fusion tail end clamp holder and positioning control method thereof
CN114211483A (en) * 2021-11-17 2022-03-22 合肥联宝信息技术有限公司 Robot tool center point calibration method, device and storage medium
CN115611009A (en) * 2022-12-01 2023-01-17 中煤科工西安研究院(集团)有限公司 Coal mine underground stacking type rod box and drill rod separation system and method
CN115611009B (en) * 2022-12-01 2023-03-21 中煤科工西安研究院(集团)有限公司 Coal mine underground stacking type rod box and drill rod separation system and method

Also Published As

Publication number Publication date
CN110009689B (en) 2023-02-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant