A robotic workstation enhanced by multiple depth cameras
Technical field
The invention belongs to the field of computer intelligence technology, and specifically relates to a robotic workstation enhanced by multiple depth cameras.
Background technology
With the rapid development of intelligent robots, they are being widely applied in industries such as manufacturing, medical treatment, and services. The manipulator plays a vital role in completing robot tasks: given the features of its structure and its degrees of freedom, it can complete specific tasks, such as moving to the position of a target object and performing operations such as grasping. To make manipulators more intelligent, external sensors are installed on them; non-contact sensors such as cameras and laser scanners play an important role in the robot's perception of the external environment. Vision sensors enable the manipulator to better perceive its surroundings, so that it can interact with people better and provide more services.
Vision already has mature applications in object recognition, detection, and tracking. Vision systems are widely used in intelligent robots and mobile robots; they enable a robot to better perceive the surrounding environment and provide positional information for the robot. For example, the Willow Garage PR2 robot can recognize objects through its binocular cameras and perform grasping operations on them.
Monocular and binocular cameras are widely used in robot vision systems. In actual indoor environments, a Pioneer 3 robot based on monocular vision can achieve global localization of a mobile robot. A monocular camera is simple in structure, but it cannot obtain three-dimensional information about an object. A binocular camera can obtain the three-dimensional position of an object, providing positional information for matching target objects and for mobile-robot path planning. A home-service robot using the Bumblebee2 from Point Grey (Canada) as a binocular vision sensor performs HSV (hue-saturation-value) threshold segmentation on the color information of a target object to obtain the object's three-dimensional coordinates. However, when a binocular camera acquires three-dimensional information, the cameras must be calibrated and rectified; calibration error affects subsequent object matching and in turn the accuracy of the object's three-dimensional coordinates. Moreover, in binocular or monocular vision systems the camera is mounted on the robot and perceives only the environment the robot faces; it cannot observe the environment behind or around the robot.
Summary of the invention
In view of the above technical problems in the prior art, the invention provides a robotic workstation enhanced by multiple depth cameras. It is capable of automatic calibration between the depth cameras and the robot; the robot recognizes and locates the accurate position of a target object through multiple depth cameras arranged at different angles around the workstation, moves to the position of the target object, and grasps it.
A robotic workstation enhanced by multiple depth cameras includes: a work platform, a manipulator arranged on the work platform, multiple depth cameras arranged around the work platform, and a control processor. The manipulator carries a gripper, and a two-dimensional code is provided at the palm center of the gripper; this code encodes the positions, in the robot coordinate system, of four fixed points that lie within the space reachable by the manipulator and are not in the same plane.
The depth cameras perform image acquisition of the work platform and supply the collected images to the control processor. For any depth camera, the control processor processes its images to recognize the target object on the work platform and determine its three-dimensional position in that camera's coordinate system, automatically calibrates the camera coordinate system against the robot coordinate system, converts the target object's three-dimensional position into the robot coordinate system, and then moves the manipulator near the target object and controls the gripper to grasp it.
The number of depth cameras is greater than or equal to 3, and each depth camera is a Kinect camera.
The control processor recognizes the target object and determines its three-dimensional position in the camera coordinate system as follows:
First, multiple templates of the target object are obtained;
Then, the image collected by the depth camera is cropped to obtain an ROI (region of interest);
Finally, the target object is searched for and matched against the templates within the ROI, and its three-dimensional position in the camera coordinate system is determined.
The cropping of the image collected by the depth camera removes the region of the image that the manipulator cannot reach.
The control processor uses the Affine-SIFT algorithm to search for and match the target object within the ROI according to the templates.
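As an illustrative aside (not part of the patent text): once a template match localizes the object at a pixel in the ROI, the depth value at that pixel yields a three-dimensional point in the camera coordinate system through the pinhole camera model. A minimal sketch, with assumed Kinect-like intrinsics (the values of FX, FY, CX, CY are illustrative, not from the patent):

```python
import numpy as np

# Assumed Kinect-like pinhole intrinsics (illustrative values)
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point in pixels

def pixel_to_camera(u, v, depth_m):
    """Back-project image pixel (u, v) with depth (meters) to a 3-D
    point in the camera coordinate system via the pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Example: matched template center at pixel (400, 300), 0.8 m away
p_cam = pixel_to_camera(400.0, 300.0, 0.8)
```

The same back-projection is what the registered color-to-depth mapping of a depth camera provides, so in practice the camera SDK's own mapping would be used instead of hand-coded intrinsics.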
Automatic calibration of a camera coordinate system against the robot coordinate system means that the control processor computes the rotation matrix and translation vector between the two.
The detailed process by which the control processor automatically calibrates a camera coordinate system against the robot coordinate system is as follows:
First, the control processor decodes the two-dimensional code appearing in the image, obtains the positions of the four fixed points in the robot coordinate system, and then commands the manipulator to move to the four fixed points one by one;
After the manipulator arrives at a fixed point, the control processor acquires, through image acquisition by the depth camera, the position of the center of the two-dimensional code in the camera coordinate system and, combined with the position of that fixed point in the robot coordinate system, obtains the position of the code center in the robot coordinate system; the four fixed points are traversed in this way;
Finally, from the four corresponding pairs of code-center positions in the camera coordinate system and the robot coordinate system, the rotation matrix and translation vector between the two coordinate systems are calculated.
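The last step above is a rigid registration problem. A minimal sketch of one standard way to solve it, the SVD-based (Kabsch) method, under the convention p_cam = R(p_robot - T) used later in the embodiment; the function name and the synthetic self-check are illustrative, not from the patent:

```python
import numpy as np

def calibrate(robot_pts, cam_pts):
    """Estimate R and T such that p_cam = R @ (p_robot - T), from
    corresponding 3-D point pairs (the patent uses the four code-center
    positions observed at the four non-coplanar fixed points)."""
    A = robot_pts - robot_pts.mean(axis=0)   # centered robot-frame points
    B = cam_pts - cam_pts.mean(axis=0)       # centered camera-frame points
    H = A.T @ B                              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # repair an improper reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    # From c_cam = R (c_robot - T):  T = c_robot - R^T c_cam
    T = robot_pts.mean(axis=0) - R.T @ cam_pts.mean(axis=0)
    return R, T

# Synthetic self-check: invent a ground-truth pose and recover it
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                       # force a proper rotation
T_true = np.array([0.1, -0.2, 0.3])
robot_pts = rng.normal(size=(4, 3))          # four fixed points
cam_pts = (R_true @ (robot_pts - T_true).T).T
R_est, T_est = calibrate(robot_pts, cam_pts)
```

Four non-coplanar points over-determine the six pose parameters, which makes the least-squares SVD solution robust to small measurement noise in the detected code centers.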
The criterion for the manipulator having arrived at a fixed point is that the connection point between the manipulator and the gripper coincides with that fixed point.
With multiple depth cameras, the control processor obtains several estimates of the target object's three-dimensional position in the robot coordinate system. It compares the estimates, rejects those whose error exceeds an acceptable range, takes any one of the remaining estimates or their average as the final three-dimensional position of the target object in the robot coordinate system, and then moves the manipulator near the target object and controls the gripper to grasp it.
The invention enhances the visual perception of the robotic workstation through multiple depth cameras, and the automatic calibration method between the depth cameras and the robot makes calibration more convenient and faster. By using multiple depth cameras, the invention can recognize and locate the three-dimensional coordinates of a target object more accurately even when the object is partially occluded. During target recognition, the invention greatly improves the efficiency of image recognition by shrinking the image region to be matched.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the robotic workstation of the invention.
Fig. 2 is a schematic diagram of the workflow of the robotic workstation of the invention.
Fig. 3 is a schematic flowchart of the automatic calibration of the robotic workstation of the invention.
Fig. 4 is a schematic diagram of the two-dimensional code on the gripper.
Detailed description of the invention
To describe the invention more specifically, its technical scheme is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the robotic workstation enhanced by multiple depth cameras according to the invention includes multiple depth cameras, a work platform, a manipulator, a gripper, and a control processor, wherein:
The depth cameras are fixed around the robot; in this embodiment there are 3, located respectively in front of the robot and on its left and right sides. The depth cameras collect image information of the robot's surrounding environment in real time.
The control processor recognizes the target object from the images collected by the depth cameras, converts its three-dimensional coordinates from the camera coordinate system into the robot coordinate system, controls the manipulator to move to the position of the target object, and adjusts the direction, angle, and mode of the gripper to grasp the target object.
The depth cameras adopted in this embodiment are Kinect cameras, the manipulator is an EPSON robot, and the gripper is a Robotiq 3-Finger Adaptive Gripper.
As shown in Fig. 2, the working method of the robotic workstation of this embodiment, enhanced by multiple depth cameras, is as follows:
First, the control processor performs automatic calibration between the depth cameras and the robot; the specific steps are shown in Fig. 3.
A two-dimensional code (shown in Fig. 4) identifying the center point of the gripper is attached to the center of the gripper at the end of the robot. The content of the code is the set of index points to which the manipulator automatically moves; 4 points that are not coplanar in the robot coordinate system are chosen as index points. The robot controls the orientation of the gripper by adjusting the end angles (U, V, W), where U, V, and W are the rotation angles of the coordinate system about the Z axis, Y axis, and X axis respectively; the total rotation matrix is R = Rz*Ry*Rx.
Wherein Rx, Ry, and Rz are the elementary rotation matrices:
Rx(W) = [[1, 0, 0], [0, cosW, -sinW], [0, sinW, cosW]]
Ry(V) = [[cosV, 0, sinV], [0, 1, 0], [-sinV, 0, cosV]]
Rz(U) = [[cosU, -sinU, 0], [sinU, cosU, 0], [0, 0, 1]]
Therefore, from the 4 index points set in the robot coordinate system, the index-point coordinates in the robot coordinate system can be obtained as C_robot = M*R + C_tool, where C_robot is the index-point coordinate in the robot coordinate system, M is the vector from the manipulator end to the index point, and C_tool is the coordinate of the manipulator end. The positions of the 4 index points are encoded into the two-dimensional code.
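The end-angle rotation and the index-point formula above can be sketched as follows (illustrative, not from the patent; the patent's M*R + C_tool uses row vectors, while the sketch uses the equivalent column-vector form R @ M + C_tool):

```python
import numpy as np

def rot_x(w):
    c, s = np.cos(w), np.sin(w)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(v):
    c, s = np.cos(v), np.sin(v)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(u):
    c, s = np.cos(u), np.sin(u)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def index_point(M, u, v, w, C_tool):
    """Index-point coordinate in the robot frame for end angles
    (U, V, W), with total rotation R = Rz*Ry*Rx as in the text."""
    R = rot_z(u) @ rot_y(v) @ rot_x(w)
    return R @ np.asarray(M, float) + np.asarray(C_tool, float)

# Example: tool at the origin; a 90-degree rotation about Z carries
# the offset (0.1, 0, 0) to approximately (0, 0.1, 0)
p = index_point([0.1, 0.0, 0.0], np.pi / 2, 0.0, 0.0, [0.0, 0.0, 0.0])
```

The multiplication order Rz*Ry*Rx matters: swapping it yields a different orientation for the same three angles, so the code content must be generated with the same convention the robot controller uses.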
The control processor recognizes the two-dimensional code in the images collected by the depth cameras and decodes the positions of the 4 index points. The robot is then automatically moved to the positions of the 4 index points in turn. The control processor obtains the position of the code's center point from the images collected in real time by the depth camera; the recognized center point serves as the image position of the index point, and the two-dimensional coordinates of the center point are then converted into its three-dimensional coordinates in the camera coordinate system, denoted C_camera. According to C_camera = R(C_robot - T), the rotation matrix R and translation vector T relating the camera coordinate system to the robot coordinate system are obtained. The 3 Kinects are each calibrated separately, giving for each Kinect the rotation matrix R and translation vector T relating its camera coordinate system to the robot coordinate system.
So that an object can still be recognized by a Kinect when it is occluded by other objects, the multiple Kinects of the workstation are fixed at multiple angles around the robot. In the experiment, 3 Kinects were mounted in front of the robot and on its left and right sides, recognizing and locating the target object from 3 angles. To match the target object more accurately, for each Kinect the control processor selects multiple images of the target object at different angles as matching templates, and during object recognition uses the Affine-SIFT algorithm to match them in the images collected by the Kinect.
To improve the efficiency of the matching algorithm, the collected image is cropped to the region the manipulator can reach; this reduces the extraction and matching of redundant feature points and greatly improves the efficiency of object recognition. The control processor takes the center of the recognized feature points as the position of the target object; using the Kinect's registration between the color image and the depth image, the two-dimensional coordinates of the center point yield its three-dimensional coordinates in the camera coordinate system. Through the coordinate conversion between the camera coordinate system and the robot coordinate system, the control processor obtains the three-dimensional coordinates of the target object in the robot coordinate system, and then cross-checks the three object coordinates obtained by the 3 Kinects: if the error among the three coordinate values is within the accepted threshold, the point is accepted as the coordinate of the object; if the error is not within the accepted threshold, the coordinate point is discarded and exception handling is performed.
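The cross-check among the per-Kinect estimates can be sketched as follows. The median-based rejection rule and the 2 cm threshold are assumptions for illustration only; the patent requires only that estimates whose error exceeds an accepted threshold be discarded and the remainder averaged or one of them chosen:

```python
import numpy as np

def fuse_estimates(points, threshold=0.02):
    """Cross-check per-camera object coordinates (one 3-D point per
    Kinect, already in the robot frame). Points farther than
    `threshold` meters (assumed value) from the median estimate are
    rejected; the survivors are averaged. Returns None when nothing
    survives, signalling the exception-handling branch."""
    pts = np.asarray(points, float)
    median = np.median(pts, axis=0)              # robust reference point
    dist = np.linalg.norm(pts - median, axis=1)
    inliers = pts[dist <= threshold]
    if len(inliers) == 0:
        return None                              # abnormal case
    return inliers.mean(axis=0)

# Two cameras agree; the third (occluded view) is about 10 cm off
estimates = [[0.50, 0.20, 0.10],
             [0.51, 0.20, 0.10],
             [0.60, 0.25, 0.10]]
fused = fuse_estimates(estimates)
```

With only 3 cameras the median is a simple robust reference; with more cameras a RANSAC-style consensus over the estimates would serve the same purpose.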
The above description of the embodiments is intended to help those skilled in the art understand and apply the invention. Persons skilled in the art can obviously make various modifications to the above embodiments and apply the general principles described herein to other embodiments without creative labor. Therefore, the invention is not limited to the above embodiments; improvements and modifications made by those skilled in the art according to the disclosure of the invention shall all fall within the protection scope of the invention.