A robot hand-eye system based on an RGB-D camera and a self-calibration method therefor
Technical field
The present invention relates to the field of robotics, and in particular to a robot hand-eye system based on an RGB-D camera and a self-calibration method therefor.
Background technique
Against the background of the "Made in China 2025" plan, Chinese industry's demand for flexible production lines and intelligent robots is becoming more and more urgent. An important means of making robots intelligent is to give them the ability to "see" through machine vision. Machine vision refers to acquiring image information about the environment through a visual sensor, so that a machine has the function of visual perception. Through machine vision, a robot can recognise objects and determine their positions.
The kinematic coordinate system of the robot and the coordinate system of the camera are related through "hand-eye calibration": the robot's end effector can be regarded as a person's hand, and the visual sensor as the person's eyes. Robot hand-eye systems are generally divided into two kinds, eye-in-hand and eye-to-hand. In the former, the visual sensor is fixed on the robot's end effector and moves with the robot end; in the latter, the visual sensor is fixed in the environment and does not move with the robot end. The former usually offers greater flexibility and allows the robot's movement to reach higher precision.
A traditional robot hand-eye system uses a high-precision marker, such as a cube or a checkerboard device. Corner points are extracted in the image, the relative position of the camera with respect to the marker is computed by projective geometry, and the position of the robot end is recorded at the same time; the relative pose between the robot end and the camera is then calculated from the multiple groups of data obtained by repeated shots. However, this kind of method generally requires off-line calibration, and the need for a high-precision marker greatly limits the range of application of such calibration methods.
Therefore, those skilled in the art are dedicated to developing a robot hand-eye system based on an RGB-D camera and a self-calibration method therefor. The method needs no special marker; as long as the robot's surroundings have a certain spatial complexity, the hand-eye system can be calibrated off-line or on-line by the algorithm alone.
Summary of the invention
In view of the above drawbacks of the prior art, the technical problem to be solved by the present invention is to overcome the prior art's need for a high-precision marker, which limits the range of application of traditional calibration methods and, because such markers are required, restricts them to off-line calibration only.
To achieve the above object, the present invention provides a self-calibration method for a robot hand-eye system based on an RGB-D camera, comprising the following steps:
Step S1: drive the RGB-D camera mounted on the robot end by means of the robot, obtain a series of depth maps of the surrounding environment, and record the pose information of the robot at the same moments;
Step S2: perform three-dimensional reconstruction of the captured scene using the series of consecutive depth maps, thereby obtaining the camera pose at the time each depth map was shot;
Step S3: combine the camera pose and the robot pose of the same moment into one data pair, and combine the data pairs of any two different moments to form a constraint equation on the relative pose between the robot end and the camera;
Step S4: combine the poses of all moments two by two to construct one large equation group on the relative pose between the robot end and the camera;
Step S5: solve the equation group using the Tsai-Lenz algorithm to obtain the relative pose between the robot end and the camera.
The present invention also provides a robot hand-eye system based on an RGB-D camera, comprising a robot, a workbench, a robot base, a rigid connector, an RGB-D camera and a robot end effector. The robot base is arranged on the workbench; the robot is arranged on the robot base; the rigid connector is arranged at the robot end; the RGB-D camera is arranged on the rigid connector; and the robot end effector is arranged on the rigid connector.
Further, obtaining the robot pose and the depth map of the RGB-D camera at the same moment is achieved by hardware triggering or software triggering: the RGB-D camera is triggered to capture one frame while the robot pose is obtained at the same time through a socket connection or the robot API.
Further, performing three-dimensional reconstruction of the captured scene from the series of consecutive depth maps means fusing the multiple depth-map frames through a TSDF volume model, reconstructing the scene in three dimensions and estimating the RGB-D camera pose corresponding to each depth map.
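The fusion this clause describes can be sketched in miniature (a simplified one-dimensional illustration; the truncation distance, voxel spacing and weighting scheme below are assumed values, not taken from the patent): each voxel stores a truncated signed distance and a weight, and each new depth measurement is merged by a weighted running average, so the zero crossing of the fused field converges to the observed surface.

```python
import numpy as np

TRUNC = 0.05  # truncation distance in metres (assumed value)

def fuse_depth(tsdf, weight, voxel_depths, measured_depth, max_weight=64.0):
    """Merge one depth measurement into a 1-D TSDF along a single camera ray.

    voxel_depths: depth of each voxel centre along the ray.
    measured_depth: depth of the observed surface on that ray.
    """
    # Signed distance from each voxel to the observed surface, truncated.
    sdf = np.clip(measured_depth - voxel_depths, -TRUNC, TRUNC)
    # Only voxels in front of (or just behind) the surface are updated.
    valid = measured_depth - voxel_depths >= -TRUNC
    new_w = np.where(valid, np.minimum(weight + 1.0, max_weight), weight)
    fused = np.where(valid, (tsdf * weight + sdf) / np.maximum(new_w, 1.0), tsdf)
    return fused, new_w

# One ray with voxels every 1 cm; true surface at 0.50 m, two noisy frames.
voxels = np.arange(0.0, 1.0, 0.01)
tsdf, w = np.zeros_like(voxels), np.zeros_like(voxels)
for z in (0.502, 0.498):
    tsdf, w = fuse_depth(tsdf, w, voxels, z)

# The zero crossing of the fused TSDF sits near the true surface depth.
mask = w > 0
crossing = voxels[mask][np.argmin(np.abs(tsdf[mask]))]
```

A real TSDF volume does the same update per 3-D voxel, projecting each voxel into the depth map to obtain its measured depth.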
Further, combining the data of two different moments to form the constraint equation on the relative pose between the robot end and the camera means arbitrarily choosing and combining the data of two moments out of the data of many consecutive moments; the constraint equation thus established is the most basic pose-matrix transformation, without any other prior assumptions.
Further, solving the equation group by the Tsai-Lenz method to obtain the relative pose between the robot end and the camera means separating the rotation and translation parts of the relative transformation matrix between the robot end and the camera, representing the change of pose by Rodrigues parameters, and first solving for the rotation vector and then for the rotation matrix.
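As a hedged sketch of the Rodrigues parameterisation this clause refers to (a generic axis-angle conversion in numpy; the exact linear system of the Tsai-Lenz solver is not reproduced here), a rotation matrix can be mapped to a rotation vector and back, which is the representation in which the rotation is solved before the translation:

```python
import numpy as np

def rodrigues_to_matrix(rvec):
    """Axis-angle (Rodrigues) vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    # Rodrigues' rotation formula.
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def matrix_to_rodrigues(R):
    """3x3 rotation matrix -> axis-angle (Rodrigues) vector."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2 * np.sin(theta))
    return theta * axis

rvec = np.array([0.1, -0.2, 0.3])
R = rodrigues_to_matrix(rvec)
recovered = matrix_to_rodrigues(R)
```

The rotation vector is the quantity the clause calls "solving the rotation vector first"; converting it back yields the rotation matrix.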
Further, in step S1 the position of the RGB-D camera is set as the origin of the reference coordinate system, and its orientation matrix is the identity matrix.
Further, in the course of its motion the robot satisfies the following formula between any two poses:

H_gjgi H_gc = H_gc H_cjci   (1)

wherein the subscript g denotes the robot end coordinate system, c denotes the camera coordinate system, and i, j denote the serial numbers of the recorded poses. For example, c_i denotes the camera coordinate system at the i-th recorded pose, and H_cjci denotes the transformation that converts the coordinates of a point in space from the camera coordinate system at pose i into its coordinates in the camera coordinate system at pose j.
Further, the robot base is rigidly connected to the workbench, the RGB-D camera is rigidly connected to the robot end effector, and the robot can move together with the RGB-D camera and the end effector.
Further, the workbench refers to all environmental information around the visual sensor, including the work surface.
Compared with the prior art, the present invention needs no special marker, uses a simple algorithm and can be used for on-line calibration, which substantially increases the efficiency of robot hand-eye calibration.
The concept of the present invention, its specific structure and the technical effects it produces are further described below with reference to the accompanying drawings, so that the purpose, features and effects of the present invention can be fully understood.
Description of the drawings
Fig. 1 is a schematic diagram of the basic hand-eye calibration model of a preferred embodiment of the present invention;
Fig. 2 is a schematic structural diagram of the hand-eye calibration system of a preferred embodiment of the present invention.
Specific embodiment
Several preferred embodiments of the present invention are described below with reference to the accompanying drawings, to make the technical content clearer and easier to understand. The present invention can be embodied in many different forms, and its scope of protection is not limited to the embodiments mentioned herein.
In the accompanying drawings, components with the same structure are denoted by the same numeral, and components with similar structure or function are denoted by similar numerals. The size and thickness of each component shown in the drawings are drawn arbitrarily; the present invention does not limit the size or thickness of any component. For clarity of illustration, the thickness of some components is suitably exaggerated in places.
The invention adopts the following technical scheme:
The RGB-D camera is fixed on the robot end effector, so that the RGB-D camera moves with the robot end; the connection between the two is kept rigid, with no relative movement. The robot is operated to move while depth-map data of the environment are acquired by the depth sensor. The pose of the camera when the first depth map is collected serves as the reference coordinate system; the environment around the robot is reconstructed in three dimensions in real time, and the robot pose is recorded each time depth-map data are sampled.
While the reconstruction runs, the pose of the camera with respect to the reference coordinate system is estimated each time a depth map is collected.
For a robot hand-eye system of this form, the basic model is shown in Fig. 1.
In the course of the robot's motion, the following formula is satisfied between any two poses:

H_gjgi H_gc = H_gc H_cjci   (1)

wherein the subscript g denotes the robot end coordinate system, c denotes the camera coordinate system, and i, j denote the serial numbers of the recorded poses. For example, c_i denotes the camera coordinate system at the i-th recorded pose, and H_cjci denotes the transformation that converts the coordinates of a point in space from the camera coordinate system at pose i into its coordinates in the camera coordinate system at pose j. Since the relative pose between the robot end and the camera is fixed, the subscripts i, j of H_gc are omitted. Let A = H_gjgi, B = H_cjci and X = H_gc; then formula (1) can be abbreviated as:
AX=XB (2)
Splitting the above formula and writing it in block-matrix form gives:

[ R_A  t_A ] [ R_X  t_X ]   [ R_X  t_X ] [ R_B  t_B ]
[  0    1  ] [  0    1  ] = [  0    1  ] [  0    1  ]   (3)

wherein R and t are respectively the rotation and translation components of the transformation matrices. Comparing the two sides of the equation yields the equation group:

R_A R_X = R_X R_B   (4)
R_A t_X + t_A = R_X t_B + t_X   (5)
To solve the equation group (4)-(5), the present invention uses Tsai's two-step solution, commonly known as the Tsai-Lenz method, which is one of the most widely used robot hand-eye calibration methods.
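Once the rotation component R_X of X = H_gc has been found, the translation component t_X can be recovered by stacking, for every pose pair, the linear relation (R_A - I) t_X = R_X t_B - t_A into one least-squares system; this is the translation half of the two-step solution. A hedged numpy sketch with synthetic data (not the patent's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    # QR of a random matrix gives a rotation after fixing the determinant sign.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    return Q * np.sign(np.linalg.det(Q))

# Ground-truth hand-eye rotation and translation (X = H_gc).
R_X = random_rotation()
t_X = np.array([0.05, -0.02, 0.10])

# Synthetic pose pairs (R_A, t_A) and matching (R_B, t_B) constructed so that
# R_A R_X = R_X R_B and R_A t_X + t_A = R_X t_B + t_X hold exactly.
rows, rhs = [], []
for _ in range(5):
    R_A = random_rotation()
    t_B = rng.normal(size=3)
    R_B = R_X.T @ R_A @ R_X
    t_A = R_X @ t_B + t_X - R_A @ t_X
    # One block of the stacked system (R_A - I) t_X = R_X t_B - t_A.
    rows.append(R_A - np.eye(3))
    rhs.append(R_X @ t_B - t_A)

t_solved, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
```

With noise-free synthetic data the least-squares solution reproduces t_X exactly; with real data the same stacked system gives the best-fit translation.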
As shown in Fig. 2, the robot hand-eye system of the present invention comprises: a robot base 1; a robot 2; a rigid connecting device 3 between the robot end and the visual sensor; a robot end effector 4, which is the actuating mechanism with which the robot performs concrete tasks; an RGB-D visual sensor 5; and a workbench environment 6, where the workbench environment referred to in this patent is all environmental information around the visual sensor, including the work surface.
(1) The robot 2 is operated to return to its initial position, so that the major part of the workbench environment 6 lies within the field of view of the RGB-D camera 5. The camera pose at this time is set as the origin of the reference coordinate system, with the orientation matrix being the identity matrix. The working space of the three-dimensional reconstruction is initialised; the reconstruction model used by the present invention is the TSDF (Truncated Signed Distance Function) model.
(2) The robot 2 is operated to move together with the RGB-D camera 5 along a set trajectory (for example a zigzag route), continuously shooting the scene 6 during the movement to obtain depth maps I_k, while recording the poses T_bgk of the robot end 4.
(3) For each depth map I_k obtained, the data in the TSDF volume are projected according to the pose of moment k-1 by the ray-casting method, yielding I'_{k-1}; the relative pose T_ck,ck-1 between I'_{k-1} and I_k is then found by the ICP algorithm. The camera pose at moment k is therefore

T_ck = T_c,k-1 T_ck,ck-1^(-1)   (6)
(4) According to the camera pose T_ck at moment k, the depth map I_k is projected into three-dimensional space and then fused into the TSDF volume in the manner defined by the TSDF model.
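A hedged numpy sketch of how steps (3) and (4) accumulate poses (synthetic trajectory; not the patent's implementation): assuming T_ck,ck-1 maps coordinates from the camera frame at moment k-1 into the frame at moment k, consistent with the H_cjci convention above, the absolute pose chains as T_ck = T_c,k-1 composed with the inverse of the incremental transform.

```python
import numpy as np

def pose(theta, x):
    """Camera pose in the reference frame: rotation about z plus x-translation."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[0, 3] = x
    return T

# Ground-truth absolute camera poses T_ck (frame 0 is the reference frame).
truth = [pose(0.1 * k, 0.05 * k) for k in range(5)]

# What ICP would report per step: T_ck,ck-1 maps frame k-1 coords to frame k.
increments = [np.linalg.inv(truth[k]) @ truth[k - 1] for k in range(1, 5)]

# Chain the increments into absolute poses: T_ck = T_c,k-1 @ inv(T_ck,ck-1).
est = [np.eye(4)]
for T_rel in increments:
    est.append(est[-1] @ np.linalg.inv(T_rel))
```

With exact increments the chained poses match the ground truth; with real ICP output, drift accumulates, which is why the TSDF fusion of step (4) matters.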
(5) Repeating steps (2) to (4) in a loop yields a series of camera poses T_ck and corresponding robot end poses T_bgk. These data are grouped two by two; for example, for the group formed by the data of moments i and j, there should be

(T_bgj^(-1) T_bgi) T_gc = T_gc (T_cj^(-1) T_ci)   (7)
(6) Assuming that data for N poses in total have been obtained, N(N-1)/2 groups of data are available, and each group of data can be written as an equation of the form of formula (7). These N(N-1)/2 equations constitute one large equation group.
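The counting in step (6) is plain combinatorics; a minimal sketch (illustrative only):

```python
from itertools import combinations

N = 5  # assumed number of recorded pose pairs (robot + camera)
pairs = list(combinations(range(N), 2))

# Each pair (i, j) contributes one constraint H_gjgi X = X H_cjci,
# so the stacked system has N*(N-1)//2 equations.
assert len(pairs) == N * (N - 1) // 2
```

For N = 5 recorded poses this gives 10 constraint equations, more than enough to over-determine the single unknown transform X.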
(7) After the multiple groups of data have been obtained, the equation group is solved by the Tsai-Lenz method, finally obtaining the relative pose T_gc between the camera and the robot end.
The preferred embodiments of the present invention have been described in detail above. It should be appreciated that those of ordinary skill in the art can make many modifications and variations according to the concept of the present invention without creative labour. Therefore, any technical scheme that a person skilled in the art can obtain, on the basis of the prior art and under the concept of the present invention, through logical analysis, reasoning or a limited number of experiments shall fall within the scope of protection determined by the claims.