CN113034526A - Grabbing method, grabbing device and robot - Google Patents

Grabbing method, grabbing device and robot

Info

Publication number
CN113034526A
Authority
CN
China
Prior art keywords
grabbing
point
reachable
points
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110336211.2A
Other languages
Chinese (zh)
Other versions
CN113034526B (en)
Inventor
曾英夫
刘益彰
陈金亮
熊友军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ubtech Technology Co ltd
Original Assignee
Shenzhen Ubtech Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ubtech Technology Co ltd filed Critical Shenzhen Ubtech Technology Co ltd
Priority to CN202110336211.2A priority Critical patent/CN113034526B/en
Publication of CN113034526A publication Critical patent/CN113034526A/en
Application granted granted Critical
Publication of CN113034526B publication Critical patent/CN113034526B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)

Abstract

The application discloses a grabbing method, a grabbing device, a robot and a computer-readable storage medium. The method is applied to a robot and comprises the following steps: acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed; extracting and obtaining at least one alternative grabbing point based on the object contour point set; detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points; and if reachable grabbing points exist, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point. Through this scheme, a robot can grab unmodeled objects placed in disorder, and the grabbing efficiency for such objects is improved.

Description

Grabbing method, grabbing device and robot
Technical Field
The present application relates to the field of robot control, and in particular, to a grasping method, a grasping apparatus, a robot, and a computer-readable storage medium.
Background
With the further development of robot technology and artificial intelligence, the demand for autonomous robot operation in general environments keeps increasing. However, the conventional grabbing schemes currently applied to robots are usually only suitable for fixed environments and require the object to be grabbed to be modeled in advance; that is, traditional grabbing schemes can hardly achieve robot grabbing of unmodeled objects placed in disorder.
Disclosure of Invention
The application provides a grabbing method, a grabbing device, a robot and a computer-readable storage medium, which enable a robot to grab unmodeled objects placed in disorder and improve the grabbing efficiency for such objects.
In a first aspect, the present application provides a grasping method applied to a robot, including:
acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
extracting and obtaining at least one alternative grabbing point based on the object contour point set;
detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points;
and if reachable grabbing points exist, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
In a second aspect, the present application provides a grasping apparatus applied to a robot, including:
the acquisition unit is used for acquiring an object contour point set from the environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
the extraction unit is used for extracting and obtaining at least one alternative grabbing point based on the object contour point set;
the detection unit is used for detecting whether the reachable grabbing point exists or not through the reachability analysis of the alternative grabbing point;
and the determining unit is used for determining a target grabbing point based on the grabbing quality in the reachable grabbing points if the reachable grabbing points exist so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
In a third aspect, the present application provides a robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to the first aspect when executing the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by one or more processors, performs the steps of the method of the first aspect as described above.
Compared with the prior art, the application has the following beneficial effects. An object contour point set is first acquired from an environment depth map, the set being formed by contour points of the object to be grabbed; at least one alternative grabbing point is then extracted based on the object contour point set; whether a reachable grabbing point exists is detected through reachability analysis of the alternative grabbing points; and if reachable grabbing points exist, a target grabbing point is determined among them based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point. In this process, alternative grabbing points are extracted from the set of contour points of the object to be grabbed in the environment depth map, reachable grabbing points are screened out on that basis, and the target grabbing point is finally determined by grabbing quality to instruct the robot to grab. Since the object to be grabbed does not need to be modeled in advance, the applicable scenarios are wider, and the robot's efficiency in grabbing the object can be improved to a certain extent. It is understood that, for the beneficial effects of the second to fifth aspects, reference may be made to the related description of the first aspect, which is not repeated here.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from them without creative effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a grabbing method provided in an embodiment of the present application;
FIG. 2 is an exemplary diagram of a region of interest provided by an embodiment of the present application;
fig. 3 is a block diagram of a grasping apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a robot provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution proposed in the present application, the following description will be given by way of specific examples.
A description is given below of a grabbing method provided in an embodiment of the present application. The grabbing method is applied to a robot. Referring to fig. 1, the grabbing method includes:
Step 101: acquiring an object contour point set from an environment depth map.
In the embodiment of the application, a depth camera with known intrinsic parameters is installed on the robot, and an RGB-D image of the environment where the robot is located can be captured by the depth camera; this RGB-D image is the environment depth map. By way of example only, the depth camera may be a RealSense camera, a ZED camera, a Mynt Eye camera, or the like; no limitation is imposed here.
In some embodiments, after obtaining the environment depth map, the robot may define a region of interest (ROI) in the environment depth map, and perform object identification in the region of interest, specifically, identify the outline of the object to be grasped, so as to obtain a plurality of pixel points constituting the outline of the object to be grasped in the environment depth map, where the pixel points are outline points of the object to be grasped. The robot can construct a point set based on the contour points of the object to be grabbed, and record the point set as the contour point set of the object; that is, the set of object contour points is formed by contour points of the object to be grabbed, and the robot can subsequently perform a series of operations related to the grabbing points based on the set of object contour points.
In some embodiments, considering practical application scenarios, the object to be grabbed is usually placed on a raised plane such as a desktop or table top; in the rare case that an object is placed underneath such a surface, its position is narrow and hidden and the robot can hardly grab it, so this situation is generally not considered in the embodiments of the application. Based on this, the robot can define the region of interest in the environment depth map as follows: first, line detection is performed on the environment depth map to determine the edge of a target placing surface, and then the region of interest is determined in the environment depth map according to the edge of the target placing surface. The target placing surface refers to a plane on which objects can be placed, such as a desktop or table top. Specifically, the line detection may be Hough transform line detection (HoughLines).
Referring to fig. 2, fig. 2 illustrates an example of a region of interest. The quadrilateral ABCD is the edge of a target placing surface obtained by the robot through line detection; area 1, shown as the shaded portion, is the region of interest determined by the robot based on the quadrilateral ABCD.
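By way of example only, the following Python sketch (using OpenCV and NumPy) shows how a region of interest of this kind could be obtained from the environment depth map through Hough transform line detection. The Canny thresholds, the Hough parameters and the convex-hull rule mapping the detected edges to the region, as well as all names, are illustrative assumptions; the embodiment does not prescribe them.

    import cv2
    import numpy as np

    def region_of_interest_mask(depth_map):
        # Normalize the depth image to 8 bits so Canny/Hough can run on it.
        depth_8u = cv2.normalize(depth_map, None, 0, 255,
                                 cv2.NORM_MINMAX).astype(np.uint8)
        edges = cv2.Canny(depth_8u, 50, 150)
        # Probabilistic Hough transform: each row is a segment (x1, y1, x2, y2).
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                                minLineLength=60, maxLineGap=10)
        mask = np.zeros(depth_map.shape[:2], dtype=np.uint8)
        if lines is None:
            return mask
        # Assumed rule: treat the convex hull of the detected segment
        # endpoints as the placing surface (quadrilateral ABCD in fig. 2)
        # and use it as the region of interest (area 1 in fig. 2).
        hull = cv2.convexHull(lines.reshape(-1, 2))
        cv2.fillConvexPoly(mask, hull, 255)
        return mask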
In some embodiments, after obtaining the environment depth map, in order to improve the efficiency of the subsequent processing, the robot may first preprocess the environment depth map and then perform the operations of defining the region of interest, identifying the contour of the object to be grabbed, and so on, on the preprocessed map. Illustratively, the preprocessing may include noise processing and edge filtering. The noise processing may be averaging once every n frames to eliminate noise points; the edge filtering may be implemented with a temporal filter. No limitation is placed on the specific implementation of the noise processing and edge filtering.
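As a minimal sketch of this preprocessing, assuming a mean over every n frames for the noise processing and an exponential blend as the temporal filter (both concrete choices, and the names used, are assumptions; the embodiment leaves the implementations open):

    import numpy as np

    def mean_denoise(frames):
        # Average a window of n consecutive depth frames to suppress
        # random noise points (one mean per n frames).
        return np.mean(np.stack(frames, axis=0), axis=0)

    class TemporalEdgeFilter:
        # Illustrative temporal filter for edge stabilization: blends
        # each new frame with a running estimate. The blending factor
        # alpha is an assumption; the text only names "temporal filter".
        def __init__(self, alpha=0.4):
            self.alpha = alpha
            self.state = None

        def update(self, frame):
            f = frame.astype(np.float32)
            if self.state is None:
                self.state = f
            else:
                self.state = self.alpha * f + (1.0 - self.alpha) * self.state
            return self.state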
Step 102: extracting and obtaining at least one alternative grabbing point based on the object contour point set.
In the embodiment of the present application, the robot may first obtain a plurality of contour point pairs from the object contour point set. One contour point pair can be constructed as follows: a contour point is randomly selected from the object contour point set; the normal of that contour point is then determined based on the contour of the object to be grabbed in the environment depth map, and the contour is mapped along the direction of the normal to obtain another contour point; these two contour points form one contour point pair. Obviously, a contour point pair is composed of two different contour points in the object contour point set. After the contour point pairs are constructed, whether each contour point pair meets a preset force closure condition can be detected. The force closure condition means that when the robot grabs the object to be grabbed based on one contour point pair, the object to be grabbed does not shake; that is, a contour point pair suitable for the grabbing operation will not let the object slip during grabbing. Subsequently, the robot may extract a corresponding alternative grabbing point from each contour point pair meeting the force closure condition, so as to obtain at least one alternative grabbing point. The extraction process is specifically as follows: for each contour point pair meeting the force closure condition, the midpoint of its two contour points is determined from the mean of their abscissas and the mean of their ordinates, and this midpoint is the alternative grabbing point corresponding to the contour point pair. It should be noted that the depth value of the alternative grabbing point can be obtained from the environment depth map. For example, suppose one contour point pair includes a first contour point A1 and a second contour point A2; the abscissa of A1 in the image coordinate system is x_A1 and that of A2 is x_A2, and the ordinate of A1 is y_A1 and that of A2 is y_A2. The midpoint of A1 and A2 is A0, which is the alternative grabbing point obtained from this pair, and its parameters are obtained as follows: the abscissa of A0 is x_A0 = (x_A1 + x_A2) / 2 and its ordinate is y_A0 = (y_A1 + y_A2) / 2; in the environment depth map, the depth value of the pixel whose abscissa is (x_A1 + x_A2) / 2 and whose ordinate is (y_A1 + y_A2) / 2 is the depth value of the midpoint A0.
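By way of example only, the following Python sketch (using NumPy) illustrates this extraction: contour points are sampled, paired along their normals, tested against a friction-cone (antipodal) interpretation of the force closure condition, and the midpoints of the surviving pairs are returned as alternative grabbing points together with their depth values. The normal estimation, the friction coefficient mu, the cone test and all names are assumptions of the sketch; the embodiment only requires that a preset force closure condition be met.

    import numpy as np

    def estimate_normals(contour):
        # Unit contour normals from central-difference tangents,
        # rotated 90 degrees (the sign convention is not fixed here).
        t = np.roll(contour, -1, axis=0) - np.roll(contour, 1, axis=0)
        n = np.stack([-t[:, 1], t[:, 0]], axis=1).astype(np.float32)
        return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-9)

    def opposite_point(contour, i, normal, cos_tol=0.95):
        # Map along the normal of contour point i to find the paired
        # contour point on the other side of the object.
        vecs = (contour - contour[i]).astype(np.float32)
        dists = np.linalg.norm(vecs, axis=1)
        dists[i] = np.inf
        align = (vecs / (dists[:, None] + 1e-9)) @ normal
        ok = np.abs(align) > cos_tol         # aligned with the normal line
        return int(np.argmin(np.where(ok, dists, np.inf))) if ok.any() else None

    def candidate_grasp_points(contour, depth_map, mu=0.5, n_samples=100):
        # contour: (N, 2) array of object contour pixels (x, y).
        normals = estimate_normals(contour)
        half_angle = np.arctan(mu)           # friction cone half-angle
        rng = np.random.default_rng(0)
        candidates = []
        for i in rng.choice(len(contour), size=n_samples):
            j = opposite_point(contour, i, normals[i])
            if j is None:
                continue
            p1 = contour[i].astype(np.float32)
            p2 = contour[j].astype(np.float32)
            axis = (p2 - p1) / (np.linalg.norm(p2 - p1) + 1e-9)
            # Antipodal force-closure check: both contact normals must lie
            # inside the friction cone around the grasp axis (abs() hedges
            # the unknown normal sign convention).
            a1 = np.arccos(np.clip(abs(normals[i] @ axis), 0.0, 1.0))
            a2 = np.arccos(np.clip(abs(normals[j] @ axis), 0.0, 1.0))
            if a1 > half_angle or a2 > half_angle:
                continue
            mid = (p1 + p2) / 2.0            # x_A0 = (x_A1+x_A2)/2, etc.
            d = float(depth_map[int(round(mid[1])), int(round(mid[0]))])
            candidates.append((mid[0], mid[1], d))
        return candidates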
Step 103: detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points.
In the embodiment of the present application, reachability analysis of the alternative grabbing points means the robot judging whether each alternative grabbing point is within its reach, that is, whether the mechanical arm (or other part used for grabbing) of the robot can reach the position of the alternative grabbing point. Through reachability analysis of the alternative grabbing points, the reachable grabbing points can be separated from the unreachable ones.
In some embodiments, when performing reachability analysis, the robot may first apply hand-eye calibration to convert the first coordinate of each alternative grabbing point in the image coordinate system into a second coordinate in the robot coordinate system, so that the robot knows the position of each alternative grabbing point relative to itself. The first coordinate of an alternative grabbing point is expressed in the form (u, v, d, θ), where u is the abscissa of the point in the image coordinate system, v is its ordinate, d is its depth value in the environment depth map, and θ is the deflection angle of the line from the origin of the image coordinate system to the point with respect to a preset coordinate axis (for example, the abscissa axis) of the image coordinate system. Subsequently, for each alternative grabbing point, the robot may solve the inverse kinematics (IK) for its second coordinate; if a solution exists, the point is considered reachable. However, since this approach places high demands on the robot's computing performance, the robot may instead create a reachable coordinate database in advance, in which a large number of reachable second coordinates are stored, so that the reachability analysis can be performed quickly by checking whether the second coordinate of an alternative grabbing point is in the database. Specifically: the second coordinate of each alternative grabbing point is matched against the reachable second coordinates stored in the reachable coordinate database, and all alternative grabbing points whose second coordinates are found in the database are determined to be reachable grabbing points.
It should be noted that the environment depth map acquired by the depth camera can only express the depth value of the surface of the object to be grabbed, whereas the object is actually three-dimensional. Based on this, after the abscissa, ordinate and depth value of an alternative grabbing point in the image coordinate system are determined, several additional depth values may be sampled, for example by adding 1 cm, 3 cm or 5 cm to the original depth value to obtain new depth values for the alternative grabbing point, so as to express the actual depth information of the alternative grabbing point more accurately.
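A minimal sketch of this screening, assuming the hand-eye calibration result is available as a homogeneous transform and modeling the reachable coordinate database as a set of quantized robot-frame grid cells; the cell size, the depth offsets and all names are illustrative assumptions:

    import numpy as np

    def image_to_robot(first_coord, K_inv, T_cam_to_robot):
        # Back-project an image-frame grabbing point (u, v, d, theta)
        # into the robot frame, given the camera intrinsics (K_inv) and
        # the hand-eye calibration result (both assumed known).
        u, v, d, _theta = first_coord
        p_cam = d * (K_inv @ np.array([u, v, 1.0]))      # camera frame
        p_hom = T_cam_to_robot @ np.append(p_cam, 1.0)   # homogeneous transform
        return p_hom[:3]

    def reachable_points(first_coords, K_inv, T, database, cell=0.02):
        # Screen alternative grabbing points against a precomputed
        # reachable-coordinate database instead of solving IK per point.
        # The database is modeled as a set of quantized grid-cell keys.
        reachable = []
        for fc in first_coords:
            u, v, d, theta = fc
            # Try a few extra depth samples to account for object
            # thickness, as the description suggests (offsets assumed).
            for dd in (0.0, 0.01, 0.03, 0.05):
                p = image_to_robot((u, v, d + dd, theta), K_inv, T)
                key = tuple(np.floor(p / cell).astype(int))
                if key in database:                      # fast lookup, no IK
                    reachable.append(fc)
                    break
        return reachable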
Step 104: if reachable grabbing points exist, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
In the embodiment of the application, when reachable grabbing points exist, the robot may determine a single target grabbing point from among them and then control its own mechanical arm (or other part used for grabbing) to perform the grabbing operation on the object to be grabbed based on the target grabbing point. Specifically, the target grabbing point is determined as follows:
In the case of only one reachable grabbing point, the robot has no other choice and directly determines that reachable grabbing point as the target grabbing point.
In the case of two or more reachable grabbing points, the robot may calculate the grabbing quality score of each reachable grabbing point and determine the one with the highest score as the target grabbing point. For example, after reachability analysis of the alternative grabbing points, three reachable grabbing points are obtained: a reachable grabbing point A, a reachable grabbing point B and a reachable grabbing point C. The robot calculates their grabbing quality scores, obtaining a score a for point A, a score b for point B and a score c for point C; assuming b > a > c, the grabbing quality score of the reachable grabbing point B is the highest, so B is determined as the target grabbing point.
In some embodiments, the grabbing quality score of each reachable grabbing point may be calculated through a preset grabbing quality function. Specifically, for each reachable grabbing point, a first score, a second score and a third score are calculated through the grabbing quality function, and a weighted average of the three is taken as the grabbing quality score of that reachable grabbing point. The first score is a normalized score calculated based on the surface friction at the reachable grabbing point; the second score is a normalized score calculated based on the control accuracy with which the robot reaches the reachable grabbing point; and the third score is a normalized score calculated based on the distance between the deflection angle θ of the reachable grabbing point and the deflection angle of the main axis of the object to be grabbed.
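By way of example only, the following Python sketch shows a grabbing quality function of this form: a weighted average of the three normalized scores, with the first two assumed to be precomputed in [0, 1] and the third derived from the angular distance between θ and the main-axis deflection angle. The equal weights and the normalization of the angular term are assumptions of the sketch.

    import numpy as np

    def grasp_quality_score(s_friction, s_accuracy, theta, theta_axis,
                            weights=(1 / 3, 1 / 3, 1 / 3)):
        # Angular distance between the grabbing point's deflection angle
        # and the object's main-axis angle, wrapped modulo pi so that
        # diff lies in [0, pi/2]; mapped to a score in [0, 1].
        diff = abs((theta - theta_axis + np.pi / 2) % np.pi - np.pi / 2)
        s_angle = 1.0 - diff / (np.pi / 2)   # 1 when aligned, 0 when orthogonal
        w1, w2, w3 = weights
        return w1 * s_friction + w2 * s_accuracy + w3 * s_angle

    def pick_target(points, scores):
        # Target grabbing point = reachable point with the highest score.
        return points[int(np.argmax(scores))]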
In some embodiments, if after the reachability analysis no reachable grabbing point is detected (that is, the number of reachable grabbing points is 0), the robot is currently too far from the object to be grabbed: it cannot reach the object for the moment and must first move toward it before grabbing. The robot can therefore calculate the distance between each alternative grabbing point and itself, determine the target moving direction from the direction, relative to the robot, of the alternative grabbing point with the minimum distance, and finally move in that target moving direction; that is, the alternative grabbing point with the minimum distance is taken as the movement target, and the robot moves toward the direction in which it lies. Alternatively, the direction of the center point of the object to be grabbed relative to the robot may be calculated directly and used as the target moving direction. Whichever way the target moving direction is determined, the ultimate goal is for the robot to gradually approach the object to be grabbed and enter the range in which it can be grabbed as quickly as possible. It should be noted that during the movement the robot also needs to avoid interference from other obstacles (such as the table top).
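A minimal sketch of the first strategy described above, assuming the alternative grabbing points have already been converted to robot-frame coordinates; obstacle avoidance during the movement is outside the scope of the sketch.

    import numpy as np

    def target_move_direction(robot_pos, candidate_points_robot_frame):
        # If no grabbing point is reachable, head toward the nearest
        # alternative grabbing point (minimum-distance candidate).
        pts = np.asarray(candidate_points_robot_frame, dtype=np.float32)
        dists = np.linalg.norm(pts - robot_pos, axis=1)
        nearest = pts[int(np.argmin(dists))]
        direction = nearest - robot_pos
        return direction / (np.linalg.norm(direction) + 1e-9)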
Therefore, according to the embodiment of the application, the alternative grabbing points are extracted based on the object contour point set formed by the contour points of the object to be grabbed in the environment depth map, the reachable grabbing points are screened out on the basis, the target grabbing points are finally determined through grabbing quality, the robot is instructed to grab the object to be grabbed, the object to be grabbed does not need to be modeled in advance, the application scene is wider, and the grabbing efficiency of the robot to the object to be grabbed can be improved to a certain extent.
Corresponding to the grabbing method proposed in the foregoing, the embodiment of the present application provides a grabbing device, which is applied to a robot. Referring to fig. 3, a grabbing apparatus 300 according to the embodiment of the present application includes:
an obtaining unit 301, configured to obtain an object contour point set from an environmental depth map, where the object contour point set is formed by contour points of an object to be grabbed;
an extracting unit 302, configured to extract and obtain at least one candidate grabbing point based on the object contour point set;
a detecting unit 303, configured to detect whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points;
a determining unit 304, configured to determine, if reachable grabbing points exist, a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
Optionally, the obtaining unit 301 includes:
a contour point acquisition subunit, configured to perform object contour identification in the region of interest of the environmental depth map to obtain contour points of the object to be grabbed;
and the set constructing subunit is used for constructing and obtaining the object contour point set based on the contour points of the object to be grabbed.
Optionally, the obtaining unit 301 further includes:
the linear detection subunit is used for carrying out linear detection on the environment depth map so as to determine the edge of the target placing surface;
and the interested region determining subunit is used for determining the interested region in the environment depth map according to the edge of the target placing surface.
Optionally, the extracting unit 302 includes:
a force closure condition detecting subunit, configured to detect whether each contour point pair meets a preset force closure condition, where a contour point pair is formed by two different contour points in the object contour point set;
and an alternative grabbing point extraction subunit, configured to extract corresponding alternative grabbing points based on the contour point pairs meeting the force closure condition, so as to obtain at least one alternative grabbing point.
Optionally, the detecting unit 303 includes:
the coordinate conversion subunit is used for converting the first coordinate of the alternative grabbing point in the image coordinate system into a second coordinate in the robot coordinate system;
the reachable detection subunit is used for detecting whether the second coordinate of each alternative grabbing point is in a preset reachable coordinate database or not;
and a reachable determining subunit, configured to determine the alternative grabbing points whose second coordinates are in the reachable coordinate database as reachable grabbing points.
Optionally, the determining unit 304 includes:
the first determining subunit is used for determining, if there is only one reachable grabbing point, that reachable grabbing point as the target grabbing point;
and the second determining subunit is used for calculating, if there are two or more reachable grabbing points, the grabbing quality score of each reachable grabbing point, and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point.
Optionally, the grasping apparatus 300 further includes:
a distance calculating unit, configured to calculate, if the detecting unit 303 finds no reachable grabbing point, the distance between each alternative grabbing point and the robot;
a direction determining unit, configured to determine a target moving direction based on the direction, relative to the robot, of the alternative grabbing point corresponding to the minimum distance;
and a movement control unit for controlling the robot to move in the target movement direction.
Therefore, according to the embodiment of the application, the alternative grabbing points are extracted based on the object contour point set formed by the contour points of the object to be grabbed in the environment depth map, the reachable grabbing points are screened out on the basis, the target grabbing points are finally determined through grabbing quality, the robot is instructed to grab the object to be grabbed, the object to be grabbed does not need to be modeled in advance, the application scene is wider, and the grabbing efficiency of the robot to the object to be grabbed can be improved to a certain extent.
An embodiment of the present application further provides a robot, please refer to fig. 4, where the robot 4 in the embodiment of the present application includes: a memory 401, one or more processors 402 (only one shown in fig. 4), and computer programs stored on the memory 401 and executable on the processors. The memory 401 is used for storing software programs and units, and the processor 402 executes various functional applications and data processing by operating the software programs and units stored in the memory 401, so as to obtain resources corresponding to the preset events. Specifically, the processor 402, by running the above-mentioned computer program stored in the memory 401, implements the steps of:
acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
extracting and obtaining at least one alternative grabbing point based on the object contour point set;
detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points;
and if reachable grabbing points exist, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the acquiring the set of object contour points from the environmental depth map includes:
carrying out object contour recognition in the region of interest of the environment depth map to obtain contour points of the object to be grabbed;
and constructing and obtaining the object contour point set based on the contour points of the object to be grabbed.
In a third possible implementation manner provided on the basis of the second possible implementation manner, before performing object contour recognition in the region of interest of the environmental depth map, the processor 402 further implements the following steps when running the computer program stored in the memory 401:
performing linear detection on the environment depth map to determine the edge of a target placing surface;
and determining the region of interest in the environmental depth map according to the edge of the target placing surface.
In a fourth possible implementation manner provided on the basis of the first possible implementation manner, the extracting and obtaining at least one candidate grabbing point based on the object contour point set includes:
detecting whether each contour point pair meets a preset force closure condition, wherein a contour point pair is formed by two different contour points in the object contour point set;
and extracting corresponding alternative grabbing points based on the contour point pairs meeting the force closure condition, so as to obtain at least one alternative grabbing point.
In a fifth possible implementation manner provided on the basis of the first possible implementation manner, the detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points includes:
converting a first coordinate of the alternative grabbing point in an image coordinate system into a second coordinate in a robot coordinate system;
for each alternative grabbing point, detecting whether its second coordinate is in a preset reachable coordinate database;
and determining the alternative grabbing points whose second coordinates are in the reachable coordinate database as reachable grabbing points.
In a sixth possible implementation manner provided on the basis of the first possible implementation manner, the determining a target grabbing point among the reachable grabbing points based on grabbing quality includes:
if there is only one reachable grabbing point, determining that reachable grabbing point as the target grabbing point;
and if there are two or more reachable grabbing points, calculating the grabbing quality score of each reachable grabbing point, and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point.
In a seventh possible implementation manner provided on the basis of the first to sixth possible implementation manners, after the detecting whether a reachable grabbing point exists, the processor 402 further implements the following steps when running the computer program stored in the memory 401:
if no reachable grabbing point exists, calculating the distance between each alternative grabbing point and the robot;
determining a target moving direction based on the direction, relative to the robot, of the alternative grabbing point corresponding to the minimum distance;
and controlling the robot to move towards the target moving direction.
It should be understood that, in the embodiments of the present application, the processor 402 may be a central processing unit (CPU); the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Memory 401 may include both read-only memory and random-access memory, and provides instructions and data to processor 402. Some or all of memory 401 may also include non-volatile random access memory. For example, the memory 401 may also store information of device classes.
Therefore, according to the embodiment of the application, the alternative grabbing points are extracted based on the object contour point set formed by the contour points of the object to be grabbed in the environment depth map, the reachable grabbing points are screened out on the basis, the target grabbing points are finally determined through grabbing quality, the robot is instructed to grab the object to be grabbed, the object to be grabbed does not need to be modeled in advance, the application scene is wider, and the grabbing efficiency of the robot to the object to be grabbed can be improved to a certain extent.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or as a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the above-described modules or units is only one logical functional division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, all or part of the flow in the methods of the above embodiments may be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file or some intermediate form. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer-readable memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the contents of the computer-readable storage medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, a computer-readable storage medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A grasping method, applied to a robot, comprising:
acquiring an object contour point set from an environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
extracting and obtaining at least one alternative grabbing point based on the object contour point set;
detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points;
and if reachable grabbing points exist, determining a target grabbing point among the reachable grabbing points based on grabbing quality, so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
2. The method of claim 1, wherein the obtaining the set of object contour points from the environmental depth map comprises:
carrying out object contour recognition in the region of interest of the environment depth map to obtain contour points of the object to be grabbed;
and constructing and obtaining the object contour point set based on the contour points of the object to be grabbed.
3. The grasping method according to claim 2, wherein before the object contour identification in the region of interest of the environmental depth map, the grasping method further includes:
performing linear detection on the environment depth map to determine the edge of a target placing surface;
and determining the region of interest in the environment depth map according to the edge of the target placing surface.
4. The grabbing method of claim 1, wherein the extracting at least one candidate grabbing point based on the set of object contour points comprises:
detecting whether each contour point pair meets a preset force closure condition, wherein a contour point pair is formed by two different contour points in the object contour point set;
and extracting corresponding alternative grabbing points based on the contour point pairs meeting the force closure condition, so as to obtain at least one alternative grabbing point.
5. The grabbing method according to claim 1, wherein the detecting whether a reachable grabbing point exists through reachability analysis of the alternative grabbing points comprises:
converting a first coordinate of the alternative grabbing point in an image coordinate system into a second coordinate in a robot coordinate system;
for each alternative grabbing point, detecting whether its second coordinate is in a preset reachable coordinate database;
and determining the alternative grabbing points whose second coordinates are in the reachable coordinate database as reachable grabbing points.
6. The grabbing method according to claim 1, wherein the determining a target grabbing point among the reachable grabbing points based on grabbing quality comprises:
if there is only one reachable grabbing point, determining that reachable grabbing point as the target grabbing point;
and if there are two or more reachable grabbing points, calculating the grabbing quality score of each reachable grabbing point, and determining the reachable grabbing point with the highest grabbing quality score as the target grabbing point.
7. The grasping method according to any one of claims 1 to 6, wherein after the detecting whether there is a reachable grasping point, the grasping method further includes:
if no reachable grabbing point exists, calculating the distance between each alternative grabbing point and the robot;
determining a target moving direction based on the direction, relative to the robot, of the alternative grabbing point corresponding to the minimum distance;
and controlling the robot to move towards the target moving direction.
8. A gripping device, applied to a robot, comprising:
the acquisition unit is used for acquiring an object contour point set from the environment depth map, wherein the object contour point set is formed by contour points of an object to be grabbed;
the extraction unit is used for extracting and obtaining at least one alternative grabbing point based on the object contour point set;
the detection unit is used for detecting whether the reachable grabbing point exists or not through reachability analysis of the alternative grabbing point;
and the determining unit is used for determining a target grabbing point based on the grabbing quality in the reachable grabbing points if the reachable grabbing points exist so as to instruct the robot to grab the object to be grabbed based on the target grabbing point.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 7.
CN202110336211.2A 2021-03-29 2021-03-29 Grabbing method, grabbing device and robot Active CN113034526B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110336211.2A CN113034526B (en) 2021-03-29 2021-03-29 Grabbing method, grabbing device and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110336211.2A CN113034526B (en) 2021-03-29 2021-03-29 Grabbing method, grabbing device and robot

Publications (2)

Publication Number Publication Date
CN113034526A (en) 2021-06-25
CN113034526B CN113034526B (en) 2024-01-16

Family

ID=76452760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110336211.2A Active CN113034526B (en) 2021-03-29 2021-03-29 Grabbing method, grabbing device and robot

Country Status (1)

Country Link
CN (1) CN113034526B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344307A (en) * 2021-08-09 2021-09-03 常州唯实智能物联创新中心有限公司 Disordered grabbing multi-target optimization method and system based on deep reinforcement learning
CN116416444A (en) * 2021-12-29 2023-07-11 广东美的白色家电技术创新中心有限公司 Object grabbing point estimation, model training and data generation method, device and system

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170008172A1 (en) * 2015-07-08 2017-01-12 Empire Technology Development Llc Stable grasp point selection for robotic grippers with machine vision and ultrasound beam forming
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN109508707A (en) * 2019-01-08 2019-03-22 中国科学院自动化研究所 The crawl point acquisition methods of robot stabilized crawl object based on monocular vision
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
CN111015655A (en) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
CN111844019A (en) * 2020-06-10 2020-10-30 安徽鸿程光电有限公司 Method and device for determining grabbing position of machine, electronic device and storage medium
CN111932490A (en) * 2020-06-05 2020-11-13 浙江大学 Method for extracting grabbing information of visual system of industrial robot

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170008172A1 (en) * 2015-07-08 2017-01-12 Empire Technology Development Llc Stable grasp point selection for robotic grippers with machine vision and ultrasound beam forming
CN108381549A (en) * 2018-01-26 2018-08-10 广东三三智能科技有限公司 A kind of quick grasping means of binocular vision guided robot, device and storage medium
CN109508707A (en) * 2019-01-08 2019-03-22 中国科学院自动化研究所 The crawl point acquisition methods of robot stabilized crawl object based on monocular vision
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
CN111015655A (en) * 2019-12-18 2020-04-17 深圳市优必选科技股份有限公司 Mechanical arm grabbing method and device, computer readable storage medium and robot
CN111932490A (en) * 2020-06-05 2020-11-13 浙江大学 Method for extracting grabbing information of visual system of industrial robot
CN111844019A (en) * 2020-06-10 2020-10-30 安徽鸿程光电有限公司 Method and device for determining grabbing position of machine, electronic device and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344307A (en) * 2021-08-09 2021-09-03 常州唯实智能物联创新中心有限公司 Disordered grabbing multi-target optimization method and system based on deep reinforcement learning
CN113344307B (en) * 2021-08-09 2021-11-26 常州唯实智能物联创新中心有限公司 Disordered grabbing multi-target optimization method and system based on deep reinforcement learning
CN116416444A (en) * 2021-12-29 2023-07-11 广东美的白色家电技术创新中心有限公司 Object grabbing point estimation, model training and data generation method, device and system
CN116416444B (en) * 2021-12-29 2024-04-16 广东美的白色家电技术创新中心有限公司 Object grabbing point estimation, model training and data generation method, device and system

Also Published As

Publication number Publication date
CN113034526B (en) 2024-01-16

Similar Documents

Publication Publication Date Title
CN112070818B (en) Robot disordered grabbing method and system based on machine vision and storage medium
CN110866903B (en) Ping-pong ball identification method based on Hough circle transformation technology
CN106709950B (en) Binocular vision-based inspection robot obstacle crossing wire positioning method
CN111627072B (en) Method, device and storage medium for calibrating multiple sensors
TWI395145B (en) Hand gesture recognition system and method
CN106981077B (en) Infrared image and visible light image registration method based on DCE and LSS
JP6007682B2 (en) Image processing apparatus, image processing method, and program
CN113284179B (en) Robot multi-object sorting method based on deep learning
WO2022042304A1 (en) Method and apparatus for identifying scene contour, and computer-readable medium and electronic device
CN113034526B (en) Grabbing method, grabbing device and robot
CN108647597B (en) Wrist identification method, gesture identification method and device and electronic equipment
KR20080032856A (en) Recognition method of welding line position in shipbuilding subassembly stage
Cinaroglu et al. A direct approach for human detection with catadioptric omnidirectional cameras
JP2018169660A (en) Object attitude detection apparatus, control apparatus, robot and robot system
CN115205286B (en) Method for identifying and positioning bolts of mechanical arm of tower-climbing robot, storage medium and terminal
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN112070837A (en) Part positioning and grabbing method and system based on visual analysis
WO2020014913A1 (en) Method for measuring volume of object, related device, and computer readable storage medium
CN114310892B (en) Object grabbing method, device and equipment based on point cloud data collision detection
WO2021056501A1 (en) Feature point extraction method, movable platform and storage medium
CN117689716B (en) Plate visual positioning, identifying and grabbing method, control system and plate production line
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN114638891A (en) Target detection positioning method and system based on image and point cloud fusion
CN113012181B (en) Novel quasi-circular detection method based on Hough transformation
CN114310887A (en) 3D human leg recognition method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant