CN113034526B - Grabbing method, grabbing device and robot - Google Patents
- Publication number
- CN113034526B (application number CN202110336211.2A)
- Authority
- CN
- China
- Prior art keywords
- grabbing
- point
- reachable
- contour
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application discloses a grabbing method, a grabbing device, a robot and a computer readable storage medium. The method is applied to a robot and comprises the following steps: acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed; extracting at least one alternative grabbing point based on the object contour point set; detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points; and if a reachable grabbing point exists, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point. Through this scheme, the robot can grab randomly placed, unmodeled objects, and the grabbing efficiency of the robot is improved.
Description
Technical Field
The application belongs to the field of robot control, and particularly relates to a grabbing method, a grabbing device, a robot and a computer readable storage medium.
Background
With the further development of robotics and artificial intelligence, the requirements for autonomous operation of robots in general environments are also increasing. However, conventional grabbing schemes currently applied to robots are often only suitable for fixed environments, and the object to be grabbed needs to be modeled in advance; that is, conventional grabbing schemes can hardly achieve grabbing of randomly placed, unmodeled objects.
Disclosure of Invention
The application provides a grabbing method, a grabbing device, a robot and a computer readable storage medium, which enable a robot to grab randomly placed, unmodeled objects and improve the grabbing efficiency for such objects.
In a first aspect, the present application provides a gripping method, where the gripping method is applied to a robot, including:
acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
extracting at least one alternative grabbing point based on the object contour point set;
detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points;
and if a reachable grabbing point exists, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
In a second aspect, the present application provides a gripping device, where the gripping device is applied to a robot, including:
the acquisition unit is used for acquiring an object contour point set from the environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
the extraction unit is used for extracting at least one alternative grabbing point based on the object contour point set;
the detection unit is used for detecting whether the reachable grabbing points exist or not through reachability analysis of the alternative grabbing points;
and the determining unit is used for determining a target grabbing point based on grabbing quality in the reachable grabbing points if the reachable grabbing points exist, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
In a third aspect, the present application provides a robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which, when executed by a processor, performs the steps of the method of the first aspect described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by one or more processors, implements the steps of the method of the first aspect described above.
Compared with the prior art, the present application has the following beneficial effects. Firstly, an object contour point set is obtained from an environment depth map, the object contour point set being composed of contour points of an object to be grabbed; then at least one alternative grabbing point is extracted based on the object contour point set; whether a reachable grabbing point exists is detected through reachability analysis of the alternative grabbing points; and if a reachable grabbing point exists, a target grabbing point is determined among the reachable grabbing points based on grabbing quality, so that the robot is instructed to grab the object to be grabbed based on the target grabbing point. This process extracts the alternative grabbing points from the object contour point set formed by the contour points of the object to be grabbed in the environment depth map, screens out the reachable grabbing points on that basis, and finally determines the target grabbing point by grabbing quality so as to instruct the robot to grab the object to be grabbed. Thus, the object to be grabbed does not need to be modeled in advance, the application scenarios are wider, and the grabbing efficiency of the robot can be improved to a certain extent. It will be appreciated that the advantages of the second to fifth aspects may be found in the relevant description of the first aspect, and are not repeated here.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings may be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic implementation flow chart of a grabbing method provided in an embodiment of the present application;
FIG. 2 is an exemplary diagram of a region of interest provided by an embodiment of the present application;
fig. 3 is a block diagram of a gripping device according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a robot according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to illustrate the technical solutions proposed in the present application, the following description is made by specific embodiments.
The following describes the grabbing method provided in the embodiment of the present application. The grabbing method is applied to a robot. Referring to fig. 1, the grabbing method includes:
step 101, acquiring an object contour point set from an environment depth map.
In the embodiment of the application, the robot body is equipped with a depth camera with known intrinsic parameters, through which an RGB-D image of the environment where the robot is located can be captured; this RGB-D image is the environment depth map. For example only, the depth camera may be a RealSense camera, a ZED camera, a Mynt Eye camera, etc., which is not limited herein.
In some embodiments, after obtaining the environmental depth map, the robot may define a region of interest (region of interest, ROI) in the environmental depth map, and identify the object in the region of interest, specifically identify the outline of the object to be grabbed, so that a plurality of pixel points forming the outline of the object to be grabbed in the environmental depth map may be obtained, where the pixel points are outline points of the object to be grabbed. The robot can construct a point set based on the contour points of the object to be grabbed, and the point set is recorded as a contour point set of the object; that is, the object contour point set is formed by contour points of an object to be grasped, and the robot can subsequently perform a series of operations related to grasping points based on the object contour point set.
In some embodiments, in view of practical application scenarios, the object to be grabbed is usually placed on a raised plane such as a tabletop or table top. Even in the rare case where the object to be grabbed is placed under such a surface, its position is so tight and hidden that the robot can hardly grab it, so this case is not considered in the embodiment of the present application. Based on this, the robot can delineate the region of interest in the environment depth map as follows: first, line detection is performed on the environment depth map to determine the edge of the target placement surface, and then the region of interest in the environment depth map is determined according to the edge of the target placement surface. The target placement surface refers to a plane used for placing objects, such as a tabletop or table top. Specifically, the line detection may be Hough transform line detection (HoughLines).
Referring to fig. 2, an example of a region of interest is given in fig. 2. The quadrilateral ABCD is the edge of the target placement surface obtained by the robot through linear detection; the area 1 shown in the shaded portion is the area of interest determined by the robot based on the quadrilateral ABCD.
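Once line detection yields the four corners of the placement surface, deciding whether a pixel belongs to the region of interest reduces to a point-in-convex-quadrilateral test. The helper below is a minimal sketch of that membership test; it is an assumed implementation detail, not code from the patent.

```python
def point_in_quad(p, quad):
    """Return True if pixel p=(x, y) lies inside the convex quadrilateral
    quad=[A, B, C, D], whose corners are given in a consistent winding order."""
    sign = 0
    for i in range(4):
        ax, ay = quad[i]
        bx, by = quad[(i + 1) % 4]
        # z-component of the cross product (B - A) x (P - A)
        cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False  # p lies on the outer side of this edge
    return True

# Toy quadrilateral ABCD in image coordinates
quad = [(0, 0), (100, 0), (100, 60), (0, 60)]
```

Checking the sign of the cross product against every edge works for any convex quadrilateral, regardless of whether the detected corners come back clockwise or counterclockwise.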
In some embodiments, after obtaining the environment depth map, in order to improve the efficiency of the subsequent processing, the robot may first preprocess the environment depth map, and then perform the delineation of the region of interest and the contour recognition of the object to be grabbed on the preprocessed environment depth map. The preprocessing may include, for example, noise processing and edge filtering. The noise processing may average every n frames to eliminate noise; the edge filtering may be implemented with a temporal filter. The specific implementations of noise processing and edge filtering are not limited herein.
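The n-frame averaging mentioned above can be sketched as follows. This is an illustrative implementation only; the assumption that invalid depth pixels hold 0 and are excluded from the mean is mine, not the patent's.

```python
def average_depth_frames(frames):
    """Average corresponding pixels over n depth frames, ignoring zero
    (invalid) readings, to suppress per-frame sensor noise."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [f[y][x] for f in frames if f[y][x] > 0]
            out[y][x] = sum(vals) / len(vals) if vals else 0.0
    return out

frames = [[[1.0, 0.0]], [[3.0, 2.0]]]  # two 1x2 depth frames, one dropout
avg = average_depth_frames(frames)     # [[2.0, 2.0]]
```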
Step 102, extracting at least one alternative grabbing point based on the object contour point set.
In the embodiment of the application, the robot can obtain a plurality of contour point pairs from the object contour point set. A contour point pair can be constructed as follows: randomly select one contour point in the object contour point set, determine the normal of that contour point on the contour of the object to be grabbed in the environment depth map, and follow the normal direction to map another contour point on the contour; these two contour points form a contour point pair. Obviously, a contour point pair is formed by two different contour points in the object contour point set. After the contour point pairs are constructed, whether each contour point pair meets a preset force closed-loop condition can be detected. The force closed-loop condition means that when the robot grabs the object to be grabbed based on a contour point pair, the object will not shake; that is, a contour point pair suitable for the grabbing operation will not let the object slip during grabbing. Subsequently, the robot may extract a corresponding alternative grabbing point from each contour point pair meeting the force closed-loop condition, so as to obtain at least one alternative grabbing point. The extraction process is specifically as follows: for each contour point pair meeting the force closed-loop condition, determine the midpoint of its two contour points based on the mean of their horizontal coordinates and the mean of their vertical coordinates; this midpoint is the alternative grabbing point corresponding to the contour point pair.
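The patent does not spell out how the force closed-loop condition is tested; a common way to approximate it for a two-finger antipodal grasp is to require each contour normal to lie within the friction cone around the line connecting the two points. The sketch below uses that standard criterion under an assumed friction coefficient; it is an illustration of the idea, not the patent's exact test.

```python
import math

def satisfies_force_closure(p1, n1, p2, n2, mu=0.4):
    """Antipodal force-closure check for a contour point pair.
    p1, p2: 2D contour points; n1, n2: their unit inward normals.
    mu: assumed friction coefficient (illustrative value)."""
    half_angle = math.atan(mu)                 # friction cone half-angle
    axis = (p2[0] - p1[0], p2[1] - p1[1])
    norm = math.hypot(*axis)
    axis = (axis[0] / norm, axis[1] / norm)    # unit vector from p1 to p2

    def within_cone(n, a):
        dot = n[0] * a[0] + n[1] * a[1]
        return math.acos(max(-1.0, min(1.0, dot))) <= half_angle

    # n1 must point toward p2 and n2 toward p1, each inside the cone
    return within_cone(n1, axis) and within_cone(n2, (-axis[0], -axis[1]))
```

With perfectly opposed normals along the connecting line the pair passes; a normal perpendicular to that line fails, matching the intuition that such a grasp would let the object slip.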
It should be noted that the depth value of an alternative grabbing point can be obtained from the environment depth map. For example, suppose a contour point pair includes a first contour point A1 and a second contour point A2, whose coordinates in the image coordinate system are (x_A1, y_A1) and (x_A2, y_A2) respectively. Denote the midpoint of A1 and A2 as A0; A0 is the alternative grabbing point obtained from A1 and A2, and its parameters are obtained as follows: the abscissa of the midpoint is x_A0 = (x_A1 + x_A2)/2 and its ordinate is y_A0 = (y_A1 + y_A2)/2; then, in the environment depth map, the depth value of the pixel whose abscissa is x_A0 and whose ordinate is y_A0 is taken as the depth value of the midpoint A0.
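The midpoint-and-depth-lookup step described above can be sketched in a few lines. The rounding of the midpoint to the nearest pixel before indexing the depth map is an assumed detail.

```python
def candidate_grasp_point(a1, a2, depth_map):
    """Midpoint A0 of contour pair (A1, A2), with its depth read from the
    environment depth map at the (rounded) midpoint pixel."""
    x0 = (a1[0] + a2[0]) / 2
    y0 = (a1[1] + a2[1]) / 2
    d0 = depth_map[round(y0)][round(x0)]  # depth_map indexed as [row][col]
    return x0, y0, d0

depth_map = [[0.0, 0.0, 0.0],
             [0.0, 0.8, 0.0],
             [0.0, 0.0, 0.0]]
x0, y0, d0 = candidate_grasp_point((0, 0), (2, 2), depth_map)  # (1.0, 1.0, 0.8)
```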
Step 103, detecting whether the reachable grabbing point exists or not through reachability analysis of the alternative grabbing point.
In this embodiment of the present application, the robot performing reachability analysis on an alternative grabbing point means the robot determining whether the alternative grabbing point is within its reach, that is, whether the mechanical arm (or other part used for grabbing objects) of the robot can reach the position of the alternative grabbing point. Through reachability analysis of the alternative grabbing points, the reachable grabbing points can be separated from the unreachable ones.
In some embodiments, when performing the reachability analysis, the robot can convert the first coordinate of each alternative grabbing point in the image coordinate system into a second coordinate in the robot coordinate system through hand-eye calibration, so that the position of each alternative grabbing point relative to the robot is known. The first coordinate of an alternative grabbing point is expressed in the form (u, v, d, theta), where u is the abscissa of the alternative grabbing point in the image coordinate system, v is its ordinate, d is its depth value in the environment depth map, and theta is the deflection angle of the line from the origin of the image coordinate system to the alternative grabbing point relative to a preset coordinate axis (such as the horizontal axis) of the image coordinate system. Subsequently, for each alternative grabbing point, the robot may solve the inverse kinematics (IK) based on its second coordinate; if an IK solution exists, the alternative grabbing point is considered reachable.
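The coordinate conversion step can be sketched as follows: back-project the pixel (u, v) with depth d through the pinhole camera model using the known intrinsics, then map the camera-frame point into the robot base frame with the 4x4 hand-eye calibration matrix. All numeric values below are illustrative toys, not calibration results.

```python
def pixel_to_robot(u, v, d, fx, fy, cx, cy, T_cam_to_base):
    """First coordinate (u, v, d) -> second coordinate (x, y, z) in the
    robot base frame, via intrinsics and a hand-eye transform."""
    # camera-frame 3D point from the pinhole model
    xc = (u - cx) * d / fx
    yc = (v - cy) * d / fy
    pc = (xc, yc, d, 1.0)  # homogeneous coordinates
    # apply the 4x4 hand-eye transform, keep the first three rows
    return tuple(sum(T_cam_to_base[r][c] * pc[c] for c in range(4))
                 for r in range(3))

T = [[1, 0, 0, 0.1],   # toy hand-eye matrix: identity rotation,
     [0, 1, 0, 0.0],   # camera offset (0.1, 0.0, 0.5) m from the base
     [0, 0, 1, 0.5],
     [0, 0, 0, 1.0]]
p = pixel_to_robot(320, 240, 1.0, 600, 600, 320, 240, T)  # (0.1, 0.0, 1.5)
```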
However, in view of the high requirement on the operation performance of the robot in this way, in order to improve the efficiency of the reachability analysis, the robot may also create a reachability coordinate database in advance, where a large number of reachable second coordinates are stored in advance, so that the robot may quickly implement the reachability analysis by detecting whether the second coordinates of the candidate grabbing points are in the reachability coordinate database, specifically: and matching the second coordinates of each alternative grabbing point with each reachable second coordinate stored in the reachable coordinate database, and determining all the alternative grabbing points of the second coordinates in the reachable coordinate database as reachable grabbing points.
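One simple way to realize such a pre-built reachability database is to quantize robot-frame coordinates to a voxel grid and test membership in a precomputed set of reachable voxels, replacing the per-point IK solve with a constant-time lookup. The grid resolution and database entries below are assumed for illustration.

```python
RES = 0.05  # assumed voxel size: 5 cm

def voxel(p):
    """Quantize a robot-frame coordinate (x, y, z) to a voxel index."""
    return tuple(round(c / RES) for c in p)

# Toy database: in practice this would be filled offline by sampling the
# workspace and storing every coordinate for which IK has a solution.
reachable_db = {voxel((0.4, 0.0, 0.3)), voxel((0.4, 0.05, 0.3))}

def is_reachable(p):
    return voxel(p) in reachable_db

candidates = [(0.41, 0.01, 0.29), (1.5, 0.0, 0.3)]
reachable = [p for p in candidates if is_reachable(p)]  # keeps only the first
```

The lookup trades a little spatial precision (one voxel) for speed, which matches the motivation stated above of avoiding a full IK solve per candidate point.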
It should be noted that the environment depth map acquired by the depth camera only expresses the depth of the surface of the object to be grabbed, whereas the object to be grabbed is actually a three-dimensional object. Based on this, after the abscissa, ordinate and depth value of an alternative grabbing point in the image coordinate system are determined, several further depth values can be sampled, for example by adding 1 cm, 3 cm, 5 cm and so on to the original depth value, to obtain new depth values of the alternative grabbing point and thus express its actual depth information more accurately.
And 104, if the reachable grabbing point exists, determining a target grabbing point based on grabbing quality in the reachable grabbing point to instruct the robot to grab the object to be grabbed based on the target grabbing point.
In this embodiment of the present application, when there are reachable gripping points, the robot may determine a unique target gripping point from the reachable gripping points, and then control its own mechanical arm (or other parts for gripping an object) based on the target gripping point to perform a gripping operation on the object to be gripped. Specifically, the target grabbing point is determined as follows:
in case there is only one reachable grip point, the robot has no more choice but to determine this one reachable grip point directly as the target grip point.
In the case where there are two or more reachable grabbing points, the robot may calculate a grabbing quality score for each reachable grabbing point, and determine the reachable grabbing point with the highest grabbing quality score as the target grabbing point. For example, suppose that after reachability analysis of the alternative grabbing points, three reachable grabbing points are obtained: reachable grabbing point A, reachable grabbing point B and reachable grabbing point C. The robot calculates the grabbing quality scores of the three reachable grabbing points, obtaining score a for A, score b for B and score c for C; assuming b > a > c, the grabbing quality score of reachable grabbing point B is the highest, and B is determined as the target grabbing point.
In some embodiments, the grabbing quality score of each reachable grabbing point may be calculated by a preset grabbing quality function, specifically: calculate a first score, a second score and a third score for each reachable grabbing point according to the grabbing quality function, and perform a weighted average of the three scores to obtain the grabbing quality score of the reachable grabbing point. The first score is a normalized score calculated based on the surface friction at the reachable grabbing point, the second score is a normalized score calculated based on the control accuracy with which the robot reaches the reachable grabbing point, and the third score is a normalized score calculated based on the difference between the deflection angle theta of the reachable grabbing point and the deflection angle of the main axis of the object to be grabbed.
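The weighted average described above can be sketched as follows. The patent does not give the weights, so the values used here are assumptions for illustration; the three inputs stand for the friction, control-accuracy and axis-alignment scores, each already normalized to [0, 1].

```python
def grasp_quality(s_friction, s_control, s_axis, weights=(0.4, 0.3, 0.3)):
    """Weighted average of the three normalized component scores.
    The weight triple is an assumed example, not a value from the patent."""
    w1, w2, w3 = weights
    return (w1 * s_friction + w2 * s_control + w3 * s_axis) / (w1 + w2 + w3)

# Score three reachable grabbing points and pick the best one
scores = {"A": grasp_quality(0.9, 0.8, 0.7),
          "B": grasp_quality(0.6, 0.9, 0.95),
          "C": grasp_quality(0.5, 0.5, 0.5)}
best = max(scores, key=scores.get)
```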
In some embodiments, if no reachable grabbing point is detected after the reachability analysis of the alternative grabbing points (that is, the number of reachable grabbing points is 0), the current robot is too far from the object to be grabbed: the robot cannot yet reach the object and must first move toward it before a grabbing operation can be performed. Therefore, the robot can calculate the distance between each alternative grabbing point and itself, determine the target moving direction based on the direction, relative to the robot, of the alternative grabbing point with the minimum distance, and finally move in the target moving direction; that is, the alternative grabbing point with the minimum distance is taken as the moving target, and the robot moves in the direction of that point. Alternatively, the direction of the center point of the object to be grabbed relative to the robot may be calculated directly and used as the target moving direction. Whichever way the target moving direction is determined, the final aim is to let the robot gradually approach the object to be grabbed and reach, as soon as possible, a range from which it can grab the object. It should be noted that during this movement the robot needs to avoid interference from other obstacles (for example, a table top).
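The first fallback strategy, moving toward the nearest alternative grabbing point, can be sketched as below. A planar (x, y) robot frame is assumed for simplicity; obstacle avoidance, which the text notes is also required, is out of scope here.

```python
import math

def target_move_direction(candidates, robot_xy=(0.0, 0.0)):
    """Pick the alternative grabbing point nearest to the robot and return
    the unit direction vector toward it, plus the chosen point."""
    nearest = min(candidates,
                  key=lambda p: math.hypot(p[0] - robot_xy[0],
                                           p[1] - robot_xy[1]))
    dx, dy = nearest[0] - robot_xy[0], nearest[1] - robot_xy[1]
    norm = math.hypot(dx, dy)
    return (dx / norm, dy / norm), nearest

# Two candidate points at distances 5 m and 2 m from the robot
direction, nearest = target_move_direction([(3.0, 4.0), (0.0, 2.0)])
```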
From the above, in the embodiment of the application, the alternative grabbing points are extracted based on the object contour point set formed by the contour points of the object to be grabbed in the environment depth map, the reachable grabbing points are screened out on that basis, and the target grabbing point is finally determined by grabbing quality so as to instruct the robot to grab the object to be grabbed. Thus, the object to be grabbed does not need to be modeled in advance, the application scenarios are wider, and the grabbing efficiency of the robot can be improved to a certain extent.
Corresponding to the gripping method proposed in the foregoing, the embodiment of the present application provides a gripping device, which is applied to a robot. Referring to fig. 3, a gripping device 300 in an embodiment of the present application includes:
an obtaining unit 301, configured to obtain an object contour point set from an environmental depth map, where the object contour point set is formed by contour points of an object to be grabbed;
an extracting unit 302, configured to extract at least one candidate grabbing point based on the object contour point set;
a detecting unit 303, configured to detect whether an reachable grabbing point exists through reachability analysis on the candidate grabbing point;
and the determining unit 304 is configured to determine, if there is an reachable grabbing point, a target grabbing point based on grabbing quality in the reachable grabbing point, so as to instruct the robot to perform grabbing operation on the object to be grabbed based on the target grabbing point.
Optionally, the acquiring unit 301 includes:
the contour point acquisition subunit is used for carrying out object contour identification in the interested area of the environment depth map so as to obtain contour points of the object to be grabbed;
and the set construction subunit is used for constructing the object contour point set based on the contour points of the object to be grabbed.
Optionally, the acquiring unit 301 further includes:
the linear detection subunit is used for carrying out linear detection on the environment depth map so as to determine the edge of the target placement surface;
and the interested region determining subunit is used for determining the interested region in the environment depth map according to the edge of the target placement surface.
Optionally, the extracting unit 302 includes:
a force closed-loop condition detection subunit, configured to detect whether each contour point pair meets a preset force closed-loop condition, where the contour point pair is formed by two different contour points in the object contour point set;
and the alternative grabbing point extraction subunit is used for extracting corresponding alternative grabbing points based on the profile point pairs meeting the force closed-loop condition so as to obtain at least one alternative grabbing point.
Optionally, the detecting unit 303 includes:
a coordinate conversion subunit, configured to convert a first coordinate of the candidate capture point in the image coordinate system into a second coordinate in the robot coordinate system;
the accessibility detection subunit is used for detecting whether the second coordinates of each alternative grabbing point are in a preset accessibility coordinate database or not according to each alternative grabbing point;
and the reachable determination subunit is used for determining the alternative grabbing point of the second coordinate in the reachable coordinate database as a reachable grabbing point.
Optionally, the determining unit 304 includes:
a first determining subunit, configured to determine, if there is one reachable grabbing point, the one reachable grabbing point as a target grabbing point;
and the second determination subunit is used for calculating the grabbing quality score of each reachable grabbing point if more than two reachable grabbing points exist, and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point.
Optionally, the gripping device 300 further includes:
a distance calculating unit, configured to calculate a distance between each candidate gripping point and the robot if no reachable gripping point exists after the detecting unit 303 detects whether the reachable gripping point exists;
a direction determining unit, configured to determine a target moving direction based on a direction of the candidate gripping point corresponding to the minimum value of the distance relative to the robot;
and a movement control unit configured to control the robot to move in the target movement direction.
From the above, in the embodiment of the application, the alternative grabbing points are extracted based on the object contour point set formed by the contour points of the object to be grabbed in the environment depth map, the reachable grabbing points are screened out on that basis, and the target grabbing point is finally determined by grabbing quality so as to instruct the robot to grab the object to be grabbed. Thus, the object to be grabbed does not need to be modeled in advance, the application scenarios are wider, and the grabbing efficiency of the robot can be improved to a certain extent.
The embodiment of the application further provides a robot; please refer to fig. 4. The robot 4 in the embodiment of the application includes: a memory 401, one or more processors 402 (only one is shown in fig. 4), and a computer program stored on the memory 401 and executable on the processors. The memory 401 is used for storing software programs and units, and the processor 402 executes various functional applications and performs data processing by running the software programs and units stored in the memory 401. Specifically, the processor 402 implements the following steps by running the computer program stored in the memory 401:
acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
extracting at least one candidate grabbing point based on the object contour point set;
detecting whether a reachable grabbing point exists through reachability analysis of the candidate grabbing points;
and if a reachable grabbing point exists, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
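The four steps above can be sketched as a single pipeline. All helper names below (`extract_contour_points`, `candidate_grasps`, `is_reachable`, `grasp_quality`) are hypothetical stand-ins for the steps of the disclosure, not names from the patent itself:

```python
def plan_grasp(depth_map, helpers):
    """Sketch of the claimed four-step flow; each entry of `helpers`
    is a hypothetical callable implementing one step."""
    contour_pts = helpers["extract_contour_points"](depth_map)         # step 1
    candidates = helpers["candidate_grasps"](contour_pts)              # step 2
    reachable = [g for g in candidates if helpers["is_reachable"](g)]  # step 3
    if not reachable:
        return None  # caller falls back to moving toward the nearest candidate
    return max(reachable, key=helpers["grasp_quality"])                # step 4
```

Returning `None` here corresponds to the "no reachable grabbing point" branch, which the later embodiments handle by moving the robot toward the nearest candidate.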
Taking the above as a first possible implementation, in a second possible implementation provided on the basis of the first possible implementation, the acquiring of the object contour point set from the environment depth map includes:
performing object contour recognition in the region of interest of the environment depth map to obtain contour points of the object to be grabbed;
and constructing the object contour point set based on the contour points of the object to be grabbed.
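One plausible way to obtain such contour points from a depth region of interest is to threshold against the placement-surface depth and keep object pixels that border the background. This is an illustrative sketch, not the recognition method the patent specifies; the depth tolerance `tol` is an assumption:

```python
import numpy as np

def contour_points(depth_roi, surface_depth, tol=5.0):
    """Contour points of the object in a depth ROI: pixels closer to the
    camera than the placement surface that have at least one
    non-object 4-neighbour."""
    obj = depth_roi < (surface_depth - tol)          # object mask
    pad = np.pad(obj, 1, constant_values=False)
    # a contour pixel is an object pixel with a background 4-neighbour
    nb = (~pad[:-2, 1:-1] | ~pad[2:, 1:-1] |
          ~pad[1:-1, :-2] | ~pad[1:-1, 2:])
    return np.argwhere(obj & nb)                     # (row, col) contour points
```

In practice a library contour extractor (e.g. an OpenCV-style `findContours` on the thresholded mask) would likely replace this hand-rolled neighbour test.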
In a third possible implementation provided on the basis of the second possible implementation, before the object contour recognition in the region of interest of the environment depth map, the processor 402 further implements the following steps by running the computer program stored in the memory 401:
performing line detection on the environment depth map to determine an edge of the target placement surface;
and determining the region of interest in the environment depth map according to the edge of the target placement surface.
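A minimal stand-in for this step: fit a straight line to pixels believed to lie on the placement-surface edge, then keep everything above the fitted line as the region of interest. A full implementation would more likely use a Hough transform on the depth map; the least-squares fit here is only illustrative:

```python
import numpy as np

def roi_above_edge(depth_map, edge_pixels):
    """edge_pixels: (row, col) samples assumed to lie on the table edge.
    Fits row = a*col + b by least squares and returns a boolean mask
    covering the image region above the fitted edge line."""
    rows, cols = edge_pixels[:, 0], edge_pixels[:, 1]
    a, b = np.polyfit(cols, rows, 1)           # straight-line fit
    r_idx = np.arange(depth_map.shape[0])[:, None]
    c_idx = np.arange(depth_map.shape[1])[None, :]
    return r_idx < (a * c_idx + b)             # True = inside region of interest
```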
In a fourth possible implementation provided on the basis of the first possible implementation, the extracting of at least one candidate grabbing point based on the object contour point set includes:
detecting whether each contour point pair meets a preset force closure condition, wherein each contour point pair is formed by two different contour points in the object contour point set;
and extracting corresponding candidate grabbing points based on the contour point pairs meeting the force closure condition to obtain at least one candidate grabbing point.
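The "force closed-loop" (force closure) condition for a two-finger contact pair is commonly implemented as an antipodal check: the line joining the two contour points must lie inside the friction cone of the inward surface normal at each contact. A sketch under that standard formulation — the friction coefficient `mu` and the availability of surface normals are assumptions, not values from the patent:

```python
import numpy as np

def force_closure_pair(p1, n1, p2, n2, mu=0.4):
    """Antipodal force-closure test for a two-finger grasp.
    p1, p2: contact points; n1, n2: inward unit surface normals.
    The joining line must fall within the friction cone of
    half-angle arctan(mu) at both contacts."""
    d = np.asarray(p2, float) - np.asarray(p1, float)
    d = d / np.linalg.norm(d)
    half_angle = np.arctan(mu)
    a1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))    # cone angle at contact 1
    a2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))   # cone angle at contact 2
    return bool(a1 <= half_angle and a2 <= half_angle)
```

Contour point pairs that pass this test would then yield candidate grabbing points, e.g. the midpoint of the pair together with the grasp axis.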
In a fifth possible implementation provided on the basis of the first possible implementation, the detecting of whether a reachable grabbing point exists through reachability analysis of the candidate grabbing points includes:
converting a first coordinate of each candidate grabbing point under the image coordinate system into a second coordinate under the robot coordinate system;
for each candidate grabbing point, detecting whether the second coordinate of the candidate grabbing point is in a preset reachable coordinate database;
and determining a candidate grabbing point whose second coordinate is in the reachable coordinate database as a reachable grabbing point.
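A sketch of the conversion-and-lookup: the first (image-frame) coordinates are mapped through a calibrated homogeneous transform to second (robot-frame) coordinates, which are then tested against a discretized reachable set standing in for the "preset reachable coordinate database". The transform, the grid resolution, and the set representation are all assumptions:

```python
import numpy as np

def reachable_points(candidates_img, T_cam_to_robot, reachable_db, res=0.01):
    """candidates_img: Nx3 metric points in the camera/image frame.
    T_cam_to_robot: 4x4 homogeneous transform (assumed pre-calibrated).
    reachable_db: set of integer grid coordinates at resolution `res`,
    a stand-in for the preset reachable coordinate database."""
    pts = np.c_[candidates_img, np.ones(len(candidates_img))]   # homogeneous
    robot_pts = (T_cam_to_robot @ pts.T).T[:, :3]               # second coordinates
    return [p for p in robot_pts
            if tuple(np.round(p / res).astype(int)) in reachable_db]
```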
In a sixth possible implementation provided on the basis of the first possible implementation, the determining of the target grabbing point among the reachable grabbing points based on grabbing quality includes:
if one reachable grabbing point exists, determining this reachable grabbing point as the target grabbing point;
if two or more reachable grabbing points exist, calculating a grabbing quality score for each reachable grabbing point, and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point.
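The selection rule can be written directly; the patent leaves the quality function open, so the `score` argument and `toy_score` heuristic below (preferring grasp widths near a nominal gripper opening) are hypothetical, not the patent's scoring method:

```python
def pick_target(reachable, score):
    """Claimed rule: a single reachable grabbing point is taken as the
    target directly; with two or more, the highest quality score wins."""
    if len(reachable) == 1:
        return reachable[0]
    return max(reachable, key=score)

def toy_score(grasp):
    """Hypothetical quality heuristic: penalize deviation of the grasp
    width from a 6 cm nominal gripper opening."""
    return -abs(grasp["width"] - 0.06)
```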
In a seventh possible implementation provided on the basis of any one of the first to sixth possible implementations, after the detecting of whether a reachable grabbing point exists, the processor 402 further implements the following steps by running the computer program stored in the memory 401:
if no reachable grabbing point exists, calculating the distance between each candidate grabbing point and the robot;
determining a target moving direction based on the direction, relative to the robot, of the candidate grabbing point at the minimum distance;
and controlling the robot to move in the target moving direction.
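This fallback can be sketched as computing a unit direction toward the nearest candidate in the robot frame; turning that direction into base velocity commands is assumed to be handled by a separate controller outside this sketch:

```python
import numpy as np

def fallback_move(candidates_robot, robot_pos):
    """No candidate was reachable: return a unit vector pointing from
    the robot toward the candidate grabbing point at minimum distance."""
    pts = np.asarray(candidates_robot, float)
    d = np.linalg.norm(pts - robot_pos, axis=1)   # distance to each candidate
    v = pts[np.argmin(d)] - robot_pos             # toward the nearest one
    return v / np.linalg.norm(v)
```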
It should be appreciated that in the embodiments of the present application, the processor 402 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 401 may include a read-only memory and a random access memory, and provides instructions and data to the processor 402. Part or all of the memory 401 may also include a non-volatile random access memory. For example, the memory 401 may also store device-type information.
From the above, in the embodiments of the present application, candidate grabbing points are extracted based on an object contour point set formed by contour points of the object to be grabbed in an environment depth map; reachable grabbing points are screened from these candidates; and a target grabbing point is finally determined by grabbing quality, so that the robot is instructed to grab the object to be grabbed. Since the object to be grabbed does not need to be modeled in advance, the application scenario is wider, and the efficiency with which the robot grabs the object to be grabbed can be improved to a certain extent.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other, and are not intended to limit the protection scope of the present application. For the specific working process of the units and modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the foregoing embodiments, the description of each embodiment has its own emphasis. For parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as going beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative, e.g., the division of modules or units described above is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such an understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing associated hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, the computer-readable storage medium does not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.
Claims (8)
1. A grabbing method, characterized in that the grabbing method is applied to a robot and comprises:
acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
detecting whether each contour point pair meets a preset force closure condition, wherein each contour point pair is formed by two different contour points in the object contour point set;
extracting corresponding candidate grabbing points based on the contour point pairs meeting the force closure condition to obtain at least one candidate grabbing point;
detecting whether a reachable grabbing point exists through reachability analysis of the candidate grabbing points;
if one reachable grabbing point exists, determining this reachable grabbing point as the target grabbing point; if two or more reachable grabbing points exist, calculating a grabbing quality score for each reachable grabbing point and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
2. The method of claim 1, wherein the acquiring the object contour point set from the environmental depth map comprises:
performing object contour recognition in the region of interest of the environment depth map to obtain contour points of the object to be grabbed;
and constructing the object contour point set based on the contour points of the object to be grabbed.
3. The grabbing method according to claim 2, wherein before the object contour recognition in the region of interest of the environment depth map, the grabbing method further comprises:
performing line detection on the environment depth map to determine an edge of a target placement surface;
and determining the region of interest in the environment depth map according to the edge of the target placement surface.
4. The grabbing method according to claim 1, wherein the detecting of whether a reachable grabbing point exists through reachability analysis of the candidate grabbing points comprises:
converting a first coordinate of each candidate grabbing point under an image coordinate system into a second coordinate under a robot coordinate system;
for each candidate grabbing point, detecting whether the second coordinate of the candidate grabbing point is in a preset reachable coordinate database;
and determining a candidate grabbing point whose second coordinate is in the reachable coordinate database as a reachable grabbing point.
5. The grabbing method according to any one of claims 1 to 4, wherein after the detecting of whether a reachable grabbing point exists, the grabbing method further comprises:
if no reachable grabbing point exists, calculating the distance between each candidate grabbing point and the robot;
determining a target moving direction based on the direction, relative to the robot, of the candidate grabbing point at the minimum distance;
and controlling the robot to move towards the target moving direction.
6. A grabbing device, characterized in that the grabbing device is applied to a robot and comprises:
the acquisition unit is used for acquiring an object contour point set from the environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
the extraction unit is used for extracting at least one candidate grabbing point based on the object contour point set;
the detection unit is used for detecting whether a reachable grabbing point exists through reachability analysis of the candidate grabbing points;
the determining unit is used for determining, if a reachable grabbing point exists, a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point;
wherein the extraction unit includes:
a force closure condition detection subunit, configured to detect whether each contour point pair meets a preset force closure condition, wherein each contour point pair is formed by two different contour points in the object contour point set;
a candidate grabbing point extraction subunit, configured to extract corresponding candidate grabbing points based on the contour point pairs meeting the force closure condition to obtain at least one candidate grabbing point;
wherein the determining unit includes:
a first determining subunit, configured to determine, if there is one reachable grabbing point, the one reachable grabbing point as a target grabbing point;
and the second determining subunit is used for calculating a grabbing quality score for each reachable grabbing point if two or more reachable grabbing points exist, and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point.
7. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 5 when executing the computer program.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110336211.2A CN113034526B (en) | 2021-03-29 | 2021-03-29 | Grabbing method, grabbing device and robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113034526A CN113034526A (en) | 2021-06-25 |
CN113034526B true CN113034526B (en) | 2024-01-16 |
Family
ID=76452760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110336211.2A Active CN113034526B (en) | 2021-03-29 | 2021-03-29 | Grabbing method, grabbing device and robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113034526B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113344307B (en) * | 2021-08-09 | 2021-11-26 | 常州唯实智能物联创新中心有限公司 | Disordered grabbing multi-target optimization method and system based on deep reinforcement learning |
CN116416444B (en) * | 2021-12-29 | 2024-04-16 | 广东美的白色家电技术创新中心有限公司 | Object grabbing point estimation, model training and data generation method, device and system |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108381549A (en) * | 2018-01-26 | 2018-08-10 | 广东三三智能科技有限公司 | A kind of quick grasping means of binocular vision guided robot, device and storage medium |
CN109508707A (en) * | 2019-01-08 | 2019-03-22 | 中国科学院自动化研究所 | The crawl point acquisition methods of robot stabilized crawl object based on monocular vision |
CN110660104A (en) * | 2019-09-29 | 2020-01-07 | 珠海格力电器股份有限公司 | Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium |
CN111015655A (en) * | 2019-12-18 | 2020-04-17 | 深圳市优必选科技股份有限公司 | Mechanical arm grabbing method and device, computer readable storage medium and robot |
CN111844019A (en) * | 2020-06-10 | 2020-10-30 | 安徽鸿程光电有限公司 | Method and device for determining grabbing position of machine, electronic device and storage medium |
CN111932490A (en) * | 2020-06-05 | 2020-11-13 | 浙江大学 | Method for extracting grabbing information of visual system of industrial robot |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9889564B2 (en) * | 2015-07-08 | 2018-02-13 | Empire Technology Development Llc | Stable grasp point selection for robotic grippers with machine vision and ultrasound beam forming |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106709950B (en) | Binocular vision-based inspection robot obstacle crossing wire positioning method | |
CN108044627B (en) | Method and device for detecting grabbing position and mechanical arm | |
CN110866903B (en) | Ping-pong ball identification method based on Hough circle transformation technology | |
CN110660104A (en) | Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium | |
CN109978925B (en) | Robot pose recognition method and robot thereof | |
WO2016055031A1 (en) | Straight line detection and image processing method and relevant device | |
CN111627072B (en) | Method, device and storage medium for calibrating multiple sensors | |
CN110648367A (en) | Geometric object positioning method based on multilayer depth and color visual information | |
CN110555889A (en) | CALTag and point cloud information-based depth camera hand-eye calibration method | |
CN108335331B (en) | Binocular vision positioning method and equipment for steel coil | |
CN113034526B (en) | Grabbing method, grabbing device and robot | |
CN109308718B (en) | Space personnel positioning device and method based on multiple depth cameras | |
CN110928301A (en) | Method, device and medium for detecting tiny obstacles | |
KR100823549B1 (en) | Recognition method of welding line position in shipbuilding subassembly stage | |
CN104268519B (en) | Image recognition terminal and its recognition methods based on pattern match | |
CN111402330B (en) | Laser line key point extraction method based on planar target | |
CN115205286B (en) | Method for identifying and positioning bolts of mechanical arm of tower-climbing robot, storage medium and terminal | |
CN111292376B (en) | Visual target tracking method of bionic retina | |
CN113822810A (en) | Method for positioning workpiece in three-dimensional space based on machine vision | |
Han et al. | Target positioning method in binocular vision manipulator control based on improved canny operator | |
CN109313708B (en) | Image matching method and vision system | |
CN114310887A (en) | 3D human leg recognition method and device, computer equipment and storage medium | |
CN108388854A (en) | A kind of localization method based on improvement FAST-SURF algorithms | |
Ogas et al. | A robotic grasping method using convnets | |
Gui et al. | Visual image processing of humanoid go game robot based on OpenCV |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||