CN108247635B - Method for grabbing object by depth vision robot - Google Patents
Method for grabbing object by depth vision robot
- Publication number
- CN108247635B, CN201810034599.9A
- Authority
- CN
- China
- Prior art keywords
- points
- point
- depth
- robot
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/40—Robotics, robotics mapping to robotics vision
- G05B2219/40113—Task planning
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a method for a depth vision robot to grab objects. The method runs on a hand-eye experimental platform built from a Mitsubishi manipulator carrying a RealSense depth camera. The depth-vision-based robot grabbing method mainly comprises the following steps: (1) acquiring object point cloud data and a depth image; (2) removing the point cloud data of the plane on which the object rests; (3) segmenting the objects in the plane-removed point cloud with Euclidean clustering, the LCCP (Locally Convex Connected Patches) algorithm and the CPC (Constrained Planar Cuts) method; (4) selecting a region of interest according to the segmentation result; (5) calculating a gradient map of the depth image corresponding to the region of interest; (6) selecting an optimal grabbing point on the depth map from the contour of the obtained gradient image; (7) calculating the motion trajectory of the robot according to robot inverse kinematics and controlling the robot to grab through serial port instructions.
Description
Technical Field
The invention relates to the technical field of intelligent robots, in particular to a method for grabbing articles by a robot.
Background
With the development of artificial intelligence, the demands on robot autonomy are steadily increasing: a robot is expected to perform operations such as grasping and transferring objects autonomously according to human instructions.
At present, most robots perceive their surroundings through cameras and locate and grasp objects by means of image processing; in most cases RGB cameras are used.
In developing the technical solution of the present invention, the inventor found that the prior art has at least the following problems:
existing methods that search for grasp points with a two-dimensional camera are slow and depend heavily on ambient lighting. Existing depth-camera methods, on the other hand, require training on large data sets and cannot effectively handle object occlusion.
In real environments, objects of unknown structure and partially occluded objects are common, so grasping unknown objects is an important problem.
Disclosure of Invention
In view of the above-mentioned defects of the prior art, the problem to be solved by the present invention is to provide a robot grasping method which can solve the occlusion problem and can stably grasp an object of unknown structure.
In order to achieve the above object, the technical solution provided by this patent comprises the following steps:
(1) acquiring object point cloud data and a depth image;
(2) removing the point cloud data of the plane on which the object rests (a plane-removal sketch is given after this list);
(3) performing object segmentation by using the point cloud data after the plane is removed;
(4) selecting a region of interest according to the segmentation result;
(5) calculating a gradient map of the depth image corresponding to the region of interest by using an edge detection operator;
(6) selecting an optimal grabbing point on the depth map corresponding to the obtained contour of the gradient image;
(7) calculating the motion trajectory of the robot according to robot inverse kinematics and controlling the robot to grab through serial port instructions.
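Step (2) amounts to fitting and removing the dominant supporting plane. A minimal sketch of one possible implementation is given below, using the open-source Open3D library with RANSAC plane fitting; the file names and threshold values are illustrative assumptions and not part of the claimed method.

```python
import open3d as o3d

# Captured scene point cloud (the file name is an illustrative assumption).
cloud = o3d.io.read_point_cloud("scene.pcd")

# Fit the dominant plane (the table surface) with RANSAC.
plane_model, inlier_idx = cloud.segment_plane(distance_threshold=0.01,
                                              ransac_n=3,
                                              num_iterations=1000)

# Keep only the points that do NOT lie on the plane, i.e. the objects on the table.
objects_cloud = cloud.select_by_index(inlier_idx, invert=True)
o3d.io.write_point_cloud("objects_only.pcd", objects_cloud)
```

The distance threshold controls how thick a slab around the fitted plane is discarded; about 1 cm is a common choice for tabletop scenes.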
Further, the specific steps of point cloud segmentation and up-sampling in step (3) are as follows (a clustering sketch follows these steps):
1) performing clustering analysis on the point cloud after the plane is removed by using Euclidean clustering, and preliminarily dividing the point cloud of the object into a plurality of parts;
2) partitioning each clustered part of the point cloud with the LCCP algorithm, so that different objects merged into one cluster because of occlusion are separated into independent point clouds;
3) after LCCP segmentation, up-sampling each object point cloud with MLS (moving least squares) fitting so that the points are uniformly distributed;
4) further segmenting the point cloud of a single object into different parts based on geometric features with the CPC method, so that effective grasp points can be searched without traversing the point cloud of the whole object, reducing the search time as much as possible.
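As a rough illustration of sub-step 1), the sketch below clusters the plane-free point cloud with Open3D's density-based clustering, which behaves much like Euclidean (radius-based) clustering when min_points is small; the LCCP, MLS and CPC stages named above are PCL algorithms and are not reproduced here. All parameter values are assumptions.

```python
import numpy as np
import open3d as o3d

# Point cloud with the supporting plane already removed (see the previous sketch).
objects_cloud = o3d.io.read_point_cloud("objects_only.pcd")

# Density-based clustering; with a small min_points it behaves much like
# Euclidean clustering and splits the scene into preliminary object clusters.
labels = np.asarray(objects_cloud.cluster_dbscan(eps=0.02, min_points=10))

num_clusters = int(labels.max()) + 1 if labels.size else 0
clusters = [objects_cloud.select_by_index(np.where(labels == k)[0].tolist())
            for k in range(num_clusters)]
print(f"found {num_clusters} preliminary object clusters")
```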
Further, in step (4), gradient processing is performed on the depth map corresponding to the region of interest; the specific steps are as follows (a gradient-map sketch follows these steps):
1) respectively extracting point cloud centers of the objects segmented by the LCCP;
2) selecting, as the point cloud for further processing, the point cloud whose center (denoted as M) is closest to the camera in Euclidean distance;
3) respectively calculating the centers of the sub-point clouds divided by the CPC method, and sequencing the sub-point clouds from near to far according to the distance from M;
4) taking the sorted sub-point clouds in turn as the region of interest and calculating the gradient image in the corresponding depth map.
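A minimal sketch of the gradient computation in sub-step 4) is shown below. It assumes OpenCV's Sobel operator as the edge-detection operator and uses a hypothetical ROI and file name; the invention itself only specifies "an edge detection operator".

```python
import cv2
import numpy as np

# Depth image aligned with the point cloud (the file name is an assumption).
depth = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Region of interest obtained by projecting the selected sub-point-cloud back
# into the image plane; the coordinates below are placeholders.
x, y, w, h = 180, 120, 160, 140
roi = depth[y:y + h, x:x + w]

# Horizontal and vertical derivatives with the Sobel edge-detection operator,
# combined into a single gradient-magnitude map.
gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(roi, cv2.CV_32F, 0, 1, ksize=3)
gradient = cv2.magnitude(gx, gy)

# Threshold the gradient map to obtain the contour used when selecting grasp points.
_, contour_mask = cv2.threshold(gradient, 30.0, 255.0, cv2.THRESH_BINARY)
```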
Further, in step (6), an optimal grasp point is selected on the depth map corresponding to the contour of the obtained gradient image; the specific steps are as follows (a feasibility-check sketch follows these steps):
1) selecting any two points on the gradient map contour, and calculating the depth value and normal vector of each point;
2) taking the two points as reference points, and for each reference point taking as a contact line the series of points whose normal vectors are essentially the same as that of the reference point and whose Euclidean distances to the reference point do not exceed a threshold (the threshold used herein is 3; the value can be reduced if the object is small);
3) the reliability (denoted as P_grasp) of the two reference points as grasping points is determined based on the following three conditions:
a. the difference between the depth value of the point closest to the camera within the area bounded by the two contact lines and the depth values of the two reference points does not exceed the length of the manipulator fingers;
b. the vertical distance between line segments formed by fitting the two contact lines does not exceed the maximum opening size of the manipulator;
c. after the above two requirements are met, calculating the reliability of the grasp point according to the following formula (given as an image in the original):
where j denotes the serial number of the contact line, N_j denotes the total number of points on the j-th contact line, m_ji denotes the normal vector value of the i-th point on the j-th contact line, L_1 and L_2 denote the lengths of the two contact lines, L denotes the width of the gripper clamping plate, v_12 denotes the vector connecting the midpoints of the two contact lines, and m_1 denotes the normal vector value of the first reference point. P_grasp is set to 0.8 herein; if the final result is greater than 0.8, the two points can be used as contact points for grasping, and the value can be increased appropriately if higher accuracy is required.
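The two geometric conditions a. and b. can be expressed as simple checks on the contact-line points. The sketch below illustrates them with NumPy; the gripper finger length and maximum opening are illustrative assumptions, and the reliability formula itself (shown as an image in the original) is not reproduced.

```python
import numpy as np

def grasp_feasible(line1, line2, region_depths, ref_depths,
                   finger_length=0.05, max_opening=0.08):
    """Check conditions a. and b. for a candidate pair of contact lines.

    line1, line2  : (N, 3) arrays of contact-line points in the camera frame (metres)
    region_depths : depth values of the points lying between the two contact lines
    ref_depths    : depth values of the two reference points
    finger_length, max_opening : assumed gripper dimensions (metres)
    """
    # Condition a: the point closest to the camera inside the grasp region must not
    # stand higher above the reference points than the finger length, otherwise the
    # gripper body would collide before the fingers reach the object.
    closest = float(np.min(region_depths))
    if max(ref_depths) - closest > finger_length:
        return False

    # Condition b: the separation of the two fitted contact lines (approximated here
    # by the distance between their midpoints) must not exceed the gripper's
    # maximum opening.
    opening = float(np.linalg.norm(line1.mean(axis=0) - line2.mean(axis=0)))
    if opening > max_opening:
        return False

    return True
```

Only candidate pairs that pass both checks would then be scored with the reliability formula and accepted when the score exceeds 0.8.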
Drawings
FIG. 1 is a flow chart of a depth vision based object capture method in accordance with an embodiment of the present invention;
FIG. 2 is a graphical illustration of the parameters used in the grasp-point reliability calculation;
FIG. 3 is a graph showing the effect of the experimental process in each step;
Detailed Description
Table 1 shows the results of the grasping experiments using the present invention.
In FIG. 2, the two clamping plates of the gripper and the two contact lines are labelled (their symbols appear as images in the original); G_grasp is the depth of the finger grip, W is the maximum opening size of the manipulator, n_1 and n_2 respectively denote the mean normal vectors of the points on the two contact lines, L_1 and L_2 respectively denote the lengths of the shorter and longer contact lines, and v_12 denotes the vector connecting the midpoints of the two contact lines.
TABLE 1
Object | Success rate | Mean time |
---|---|---|
Dish with a cover | 9/10 | 2.324 |
Adhesive tape | 10/10 | 2.012 |
Cup with elastic band | 10/10 | 2.435 |
Spoon | 10/10 | 2.044 |
Basin | 10/10 | 2.145 |
Total | 98% | 2.192 |
Claims (1)
1. A method for robot grabbing based on depth vision is characterized by comprising the following steps:
(1) acquiring object point cloud data and a depth image;
(2) removing point cloud data of a plane where the object is located;
(3) performing object segmentation by using the point cloud data after the plane is removed;
(4) selecting a region of interest according to the segmentation result;
(5) calculating a gradient map of the depth image corresponding to the region of interest by using an edge detection operator;
(6) selecting an optimal grabbing point on the depth map corresponding to the obtained contour of the gradient image;
(7) calculating the motion track of the robot according to inverse kinematics of the robot and controlling the robot to grab through a serial port instruction;
the step (6) comprises the following specific steps:
1) selecting any two points on the gradient map contour, and calculating the depth value and normal vector of each point;
2) taking the two points as reference points, and recording as contact lines the series of points whose normal vectors are essentially the same as that of the reference point and whose Euclidean distances to the reference point in space do not exceed a threshold value; the threshold value is 3;
3) the reliability of the two reference points as grasping points, denoted as P_grasp, is judged according to the following three conditions:
a. The difference of the depth values between the point closest to the camera and the two reference points in the area defined by the two contact lines does not exceed the length of the fingers of the manipulator;
b. the vertical distance between line segments formed by fitting the two contact lines does not exceed the maximum opening size of the manipulator;
c. after the above two requirements are met, calculating the reliability of the grasp point according to the following formula (given as an image in the original):
where j represents the serial number of the contact line, N_j represents the total number of points on the j-th contact line, m_ji represents the normal vector value of the i-th point on the j-th contact line, L_1 and L_2 respectively represent the lengths of the two contact lines, L represents the width of the gripper clamping plate, v_12 represents the vector connecting the midpoints of the two contact lines, and m_1 represents the normal vector value of the first reference point; P_grasp is set to 0.8 herein, and if the final result is greater than 0.8 the two points can be used as contact points for grasping.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810034599.9A CN108247635B (en) | 2018-01-15 | 2018-01-15 | Method for grabbing object by depth vision robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810034599.9A CN108247635B (en) | 2018-01-15 | 2018-01-15 | Method for grabbing object by depth vision robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108247635A CN108247635A (en) | 2018-07-06 |
CN108247635B (en) | 2021-03-26
Family
ID=62726997
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810034599.9A Active CN108247635B (en) | 2018-01-15 | 2018-01-15 | Method for grabbing object by depth vision robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108247635B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109101967A (en) * | 2018-08-02 | 2018-12-28 | 苏州中德睿博智能科技有限公司 | The recongnition of objects and localization method, terminal and storage medium of view-based access control model |
CN109579698B (en) * | 2018-12-05 | 2020-11-27 | 普达迪泰(天津)智能装备科技有限公司 | Intelligent cargo detection system and detection method thereof |
CN109859208A (en) * | 2019-01-03 | 2019-06-07 | 北京化工大学 | Scene cut and Target Modeling method based on concavity and convexity and RSD feature |
CN110264441A (en) * | 2019-05-15 | 2019-09-20 | 北京化工大学 | Optimum contact line detecting method between robot parallel plate fixtures and target object |
CN110275153B (en) * | 2019-07-05 | 2021-04-27 | 上海大学 | Water surface target detection and tracking method based on laser radar |
CN112991356B (en) * | 2019-12-12 | 2023-08-01 | 中国科学院沈阳自动化研究所 | Rapid segmentation method of mechanical arm in complex environment |
CN111906782B (en) * | 2020-07-08 | 2021-07-13 | 西安交通大学 | Intelligent robot grabbing method based on three-dimensional vision |
CN112171664B (en) * | 2020-09-10 | 2021-10-08 | 敬科(深圳)机器人科技有限公司 | Production line robot track compensation method, device and system based on visual identification |
CN112605986B (en) * | 2020-11-09 | 2022-04-19 | 深圳先进技术研究院 | Method, device and equipment for automatically picking up goods and computer readable storage medium |
CN113011486A (en) * | 2021-03-12 | 2021-06-22 | 重庆理工大学 | Chicken claw classification and positioning model construction method and system and chicken claw sorting method |
CN114454168B (en) * | 2022-02-14 | 2024-03-22 | 赛那德数字技术(上海)有限公司 | Dynamic vision mechanical arm grabbing method and system and electronic equipment |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8095237B2 (en) * | 2002-01-31 | 2012-01-10 | Roboticvisiontech Llc | Method and apparatus for single image 3D vision guided robotics |
DE102009007024A1 (en) * | 2009-01-31 | 2010-08-05 | Daimler Ag | Method and device for separating components |
CN106570903A (en) * | 2016-10-13 | 2017-04-19 | 华南理工大学 | Visual identification and positioning method based on RGB-D camera |
CN107053173A (en) * | 2016-12-29 | 2017-08-18 | 芜湖哈特机器人产业技术研究院有限公司 | The method of robot grasping system and grabbing workpiece |
CN106737692A (en) * | 2017-02-10 | 2017-05-31 | 杭州迦智科技有限公司 | A kind of mechanical paw Grasp Planning method and control device based on depth projection |
CN107186708A (en) * | 2017-04-25 | 2017-09-22 | 江苏安格尔机器人有限公司 | Trick servo robot grasping system and method based on deep learning image Segmentation Technology |
Non-Patent Citations (1)
Title |
---|
Research on Robot Automatic Grasping Technology Based on Depth Vision; 罗锦聪 (Luo Jincong); China Master's Theses Full-text Database, Information Science & Technology; 2017-05-15; pp. 10-59 *
Also Published As
Publication number | Publication date |
---|---|
CN108247635A (en) | 2018-07-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108247635B (en) | Method for grabbing object by depth vision robot | |
CN108280856B (en) | Unknown object grabbing pose estimation method based on mixed information input network model | |
US11144787B2 (en) | Object location method, device and storage medium based on image segmentation | |
CN108171748B (en) | Visual identification and positioning method for intelligent robot grabbing application | |
CN107186708B (en) | Hand-eye servo robot grabbing system and method based on deep learning image segmentation technology | |
CN109986560B (en) | Mechanical arm self-adaptive grabbing method for multiple target types | |
CN111080693A (en) | Robot autonomous classification grabbing method based on YOLOv3 | |
CN110509273B (en) | Robot manipulator detection and grabbing method based on visual deep learning features | |
CN112518748B (en) | Automatic grabbing method and system for visual mechanical arm for moving object | |
CN110298886B (en) | Dexterous hand grabbing planning method based on four-stage convolutional neural network | |
CN111553949B (en) | Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning | |
CN107705322A (en) | Motion estimate tracking and system | |
CN109461184B (en) | Automatic positioning method for grabbing point for grabbing object by robot mechanical arm | |
CN115553132A (en) | Litchi recognition method based on visual algorithm and bionic litchi picking robot | |
CN110298885B (en) | Stereoscopic vision recognition method and positioning clamping detection device for non-smooth spheroid target and application of stereoscopic vision recognition method and positioning clamping detection device | |
CN110969660A (en) | Robot feeding system based on three-dimensional stereoscopic vision and point cloud depth learning | |
CN113762159B (en) | Target grabbing detection method and system based on directional arrow model | |
CN111598172A (en) | Dynamic target grabbing posture rapid detection method based on heterogeneous deep network fusion | |
CN113420746A (en) | Robot visual sorting method and device, electronic equipment and storage medium | |
JP2022047508A (en) | Three-dimensional detection of multiple transparent objects | |
CN115393696A (en) | Object bin picking with rotation compensation | |
CN115861999A (en) | Robot grabbing detection method based on multi-mode visual information fusion | |
CN114029941B (en) | Robot grabbing method and device, electronic equipment and computer medium | |
CN114998573A (en) | Grabbing pose detection method based on RGB-D feature depth fusion | |
CN113894058A (en) | Quality detection and sorting method and system based on deep learning and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||