CN113894774A - Robot grabbing control method and device, storage medium and robot

Robot grabbing control method and device, storage medium and robot

Info

Publication number
CN113894774A
Authority
CN
China
Prior art keywords
pose
grabbing
teaching
robot
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111248281.9A
Other languages
Chinese (zh)
Inventor
高翔
温志庆
周德成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ji Hua Laboratory
Original Assignee
Ji Hua Laboratory
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ji Hua Laboratory
Priority to CN202111248281.9A
Publication of CN113894774A
Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/0081 Programme-controlled manipulators with master teach-in means
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1612 Programme controls characterised by the hand, wrist, grip control
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by task planning, object-oriented languages
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J13/00 Controls for manipulators
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The embodiments of the present disclosure relate to a robot grabbing control method and device, a storage medium, and a robot. The method includes: acquiring an image of a teaching object; determining the pose of the teaching object according to the image of the teaching object; acquiring a hand grabbing image when a human hand grabs the teaching object in different poses; determining a robot grabbing pose according to the hand grabbing image; and establishing and storing a teaching one-to-one correspondence between teaching object poses and robot grabbing poses. The teaching object poses include the poses of different teaching objects and different 6D poses of the same teaching object. The embodiments of the present disclosure simplify the teaching process of the robot, improve efficiency, allow a suitable grabbing pose to be assigned to an object, and ensure grabbing precision and reliability.

Description

Robot grabbing control method and device, storage medium and robot
Technical Field
The disclosure relates to the technical field of robots, in particular to a robot grabbing control method and device, a storage medium and a robot.
Background
With the development of robot technology, robots are increasingly applied in industrial production and need to adapt to product changes on production lines.
To improve production efficiency, robots are used to grab target products. At present, robot grabbing actions are mainly configured manually, for example by repeated debugging with a teach pendant or by drag teaching, which requires operators with rich experience in operating robots; moreover, existing teach-pendant and drag-teaching operations are cumbersome and inefficient. In addition, in the prior art that automatically generates a robot grabbing pose through artificial intelligence and machine vision, it is difficult to assign a suitable grabbing pose to the grabbed object.
Disclosure of Invention
In order to solve the technical problems described above, or at least partially solve them, the present disclosure provides a robot grabbing control method and device, a storage medium, and a robot.
In a first aspect, the present disclosure provides a robot grabbing control method, including:
acquiring an image of a teaching object;
determining the pose of the teaching object according to the image of the teaching object;
acquiring a hand grabbing image when a human hand grabs the teaching object in different poses;
determining a robot grabbing pose according to the hand grabbing image;
establishing a teaching one-to-one correspondence relationship between the pose of the teaching object and the grabbing pose of the robot and storing the teaching one-to-one correspondence relationship;
the teaching object poses comprise teaching object poses of different teaching objects and different 6D poses of the same teaching object.
In some embodiments, further comprising:
acquiring an image of an object to be grabbed;
determining the pose of the object to be grabbed according to the image of the object to be grabbed;
determining, according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose, the robot grabbing pose corresponding to the pose of the object to be grabbed;
and controlling the robot to grab the object to be grabbed according to the grabbing pose of the robot.
In some embodiments, before determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose, the method further includes:
determining, according to the image of the object to be grabbed, whether the object to be grabbed belongs to preset objects to be grabbed, and, when it does, executing the operation of determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose.
In some embodiments, the determining of a robot grabbing pose from the hand grabbing image includes:
determining a hand grabbing pose according to the hand grabbing image;
and converting the hand grabbing pose into a robot grabbing pose.
In some embodiments, the determining of a hand grabbing pose from the hand grabbing image includes:
determining a hand grabbing gesture according to the hand grabbing image, and determining key point coordinates in the hand grabbing gesture;
determining the hand grabbing direction according to the key point coordinates;
and determining the hand grabbing pose according to the key point coordinates and the hand grabbing direction.
In some embodiments, converting the hand grabbing pose into a robot grabbing pose includes:
and converting the hand grabbing pose into a robot grabbing pose through a preset hand-eye calibration relation.
In a second aspect, the present disclosure also provides a robot grabbing control device, including:
the first image acquisition module is used for acquiring an image of a teaching object;
the object pose determining module is used for determining the pose of the teaching object according to the image of the teaching object;
the second image acquisition module is used for acquiring a hand grabbing image when a human hand grabs the teaching object in different poses;
the grabbing pose determining module is used for determining a grabbing pose of the robot according to the hand grabbing image;
the teaching one-to-one correspondence determining module is used for establishing and storing teaching one-to-one correspondence between the pose of the teaching object and the grabbing pose of the robot;
the teaching object poses comprise teaching object poses of different teaching objects and different 6D poses of the same teaching object.
In some embodiments, further comprising:
the third image acquisition module is used for acquiring an image of an object to be grabbed;
the to-be-grabbed object pose determining module is used for determining the pose of the to-be-grabbed object according to the to-be-grabbed object image;
the robot grabbing pose determining module is used for determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose;
and the object to be grabbed grabbing module is used for controlling the robot to grab the object to be grabbed according to the grabbing pose of the robot.
In a third aspect, the present disclosure also provides a storage medium having a computer program stored thereon, which when executed by a processor, performs the steps of the method of any one of the first aspect.
In a fourth aspect, the present disclosure also provides a robot, comprising:
the system comprises an image acquisition device, a mechanical arm, an end effector and a controller;
the image acquisition device and the end effector are positioned on the mechanical arm; the controller is electrically connected with the image acquisition device, the mechanical arm and the end effector respectively; the image acquisition device is used for acquiring images of a teaching object and hand-grabbed images; the end effector is used for grabbing a teaching object or an object to be grabbed under the control of the controller; the controller is configured to implement the steps of the robot gripping control method according to any one of the embodiments of the first aspect.
According to the robot grabbing control method provided by the embodiments of the present disclosure, an image of a teaching object is acquired and the pose of the teaching object is determined from it; hand grabbing images are acquired while a human hand grabs the teaching object in different poses, the robot grabbing pose is determined from the hand grabbing images, and a teaching one-to-one correspondence between teaching object poses and robot grabbing poses is established and stored. The teaching object poses include the poses of different teaching objects and different 6D poses of the same teaching object. Because the grabbing pose can be taught by a human hand, a suitable grabbing pose can be specified for an object. In addition, the embodiments of the present disclosure complete the robot's grabbing teaching automatically: the grabbing pose does not need to be computed in real time during grabbing, no repeated teach-pendant debugging is required, and the method does not depend on the operator's experience. Furthermore, when an object's center of gravity is unchanged but its three-dimensional coordinates and/or three-dimensional rotation angle change, it can still be matched to a suitable grabbing pose, so efficiency can be improved while grabbing precision and reliability are ensured.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below; it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic flow chart of the teaching process of a robot grabbing control method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of the grabbing process of a robot grabbing control method according to an embodiment of the present disclosure;
fig. 3 is a block diagram of a robot gripping control device according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a robot according to an embodiment of the present disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, aspects of the present disclosure will be further described below. It should be noted that the embodiments and features of the embodiments of the present disclosure may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced in other ways than those described herein; it is to be understood that the embodiments disclosed in the specification are only a few embodiments of the present disclosure, and not all embodiments.
In industrial production, existing robot grabbing is mainly realized manually, for example by repeated teach-pendant debugging or drag teaching, which requires operators with rich experience in operating robots. Another approach uses artificial intelligence and machine vision to generate the grabbing pose automatically; however, because the grabbing pose is generated automatically, it is difficult to produce a designated grabbing pose, so grabbing is hard to complete when a suitable grabbing pose needs to be assigned to an object.
In view of this, the embodiments of the present disclosure provide a robot grabbing control method, which may be executed by a robot grabbing control device. The robot grabbing control device may be implemented in software and/or hardware and may, for example, be integrated in the controller of the robot. To describe the robot grabbing control method in detail, the embodiments of the present disclosure provide an exemplary applicable scenario, which may include, for example, a robot and a teaching object. The robot includes an image acquisition device, a mechanical arm, an end effector, and a controller. The image acquisition device and the end effector may be located on the mechanical arm, and the controller is electrically connected with the image acquisition device, the mechanical arm, and the end effector, respectively. The image acquisition device is used to acquire the image of the teaching object and the hand grabbing image, and the end effector is used to grab the teaching object or the object to be grabbed under the control of the controller. Fig. 1 is a schematic flow chart of the teaching process of the robot grabbing control method according to an embodiment of the present disclosure. As shown in fig. 1, the teaching process includes S101 to S105:
s101, acquiring an image of a teaching object.
For example, a teaching object is placed on a table, and an image of the teaching object is acquired by an image capturing device on a robot arm of the robot. The image capturing device may be, for example, a camera or the like. The teaching object can be placed at different angles, so that the image acquisition device can acquire images of the teaching object with different teaching object poses.
In the teaching process, images of teaching objects of various types can be acquired, so that teaching can subsequently be completed for various types of objects.
And S102, determining the pose of the teaching object according to the image of the teaching object.
The pose of the teaching object can be determined according to the image of the teaching object. That is, by acquiring the image of the teaching object and applying machine vision techniques, the pose of the teaching object is determined directly from the image, without establishing a three-dimensional model of the teaching object in advance.
S103, acquiring hand grabbing images when a human hand grabs the teaching object in different poses.
After the teaching object is placed on the workbench, a human hand can imitate the robot's end effector to grab the teaching object, and the image acquisition device on the mechanical arm captures the hand grabbing image. The teaching object is placed on the workbench multiple times, and its pose can be changed between placements: for example, when the three-dimensional coordinates of the teaching object change, its pose changes correspondingly, and likewise when its three-dimensional rotation angle changes. In essence, this step acquires hand grabbing images while the human hand grabs the teaching object in different poses. During hand grabbing, different hand grabbing poses can be demonstrated for the teaching object in different poses.
And S104, determining the grabbing pose of the robot according to the hand grabbing image.
The robot grabbing pose can be determined according to the hand grabbing image. That is, by acquiring the hand grabbing image through machine vision, the robot grabbing pose is determined from the hand grabbing image, without establishing a three-dimensional model in advance.
And S105, establishing a teaching one-to-one corresponding relation between the pose of the teaching object and the grabbing pose of the robot and storing the teaching one-to-one corresponding relation.
The teaching object poses comprise teaching object poses of different teaching objects and different 6D poses of the same teaching object. The 6D pose includes a three-dimensional position and a three-dimensional rotation angle. The teaching object poses comprise teaching object poses of different teaching objects and also comprise 6D poses of the same teaching object when three-dimensional positions and/or three-dimensional rotation angles change.
Through the above steps, the teaching object pose and the robot grabbing pose can be obtained, and the teaching one-to-one correspondence between each teaching object pose and the robot grabbing pose used when grabbing it is established and stored. In the embodiments of the present disclosure, the teaching object pose is determined from the teaching object image and the robot grabbing pose is determined from the hand grabbing image, so neither a three-dimensional model of the teaching object nor a model of the robot grabbing pose needs to be obtained in advance; the correspondence between the teaching object pose and the corresponding robot grabbing pose is established and stored, and the whole teaching process is thus completed automatically. No repeated teach-pendant debugging is needed during grabbing teaching, which improves teaching efficiency, and because the teaching process does not depend on the operator's experience with the robot, grabbing precision and reliability can be ensured. In addition, compared with the traditional approach of computing the grabbing pose through machine vision and artificial intelligence, the embodiments of the present disclosure do not need to compute the grabbing pose in real time during grabbing, which reduces computational complexity. Because teaching one-to-one correspondences can be established for the poses of different teaching objects and for different 6D poses of the same teaching object, the embodiments of the present disclosure support grabbing when various types of objects are presented on the production line and flexible grabbing when objects are placed arbitrarily. Moreover, when the object to be grabbed changes, the teaching or grabbing procedure does not need to be rewritten: it suffices to acquire a new teaching object image and teach a new hand grabbing image, for example by teaching the new object again following S101 to S105, which gives high flexibility when product models change on a production line.
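By way of illustration only, the stored one-to-one correspondence can be realized as a lookup table keyed by a discretized 6D pose. The following Python sketch shows one possible realization and is not part of the disclosure; the pose inputs are assumed to have already been extracted from the images in S102 and S104, and the quantization step sizes, the object_id label, and all helper names are illustrative assumptions.

```python
import numpy as np

# Resolution used to bucket 6D poses so nearby teaching poses share one entry
# (assumed values: 5 mm in position, 2 degrees in rotation).
POSE_STEP = np.array([0.005, 0.005, 0.005,
                      np.deg2rad(2), np.deg2rad(2), np.deg2rad(2)])

def quantize_pose(pose_6d):
    """Discretize a 6D pose (x, y, z, rx, ry, rz) into a hashable table key."""
    return tuple(np.round(np.asarray(pose_6d, dtype=float) / POSE_STEP).astype(int))

# (object id, quantized teaching object pose) -> robot grabbing pose (S105)
teaching_table = {}

def teach(object_id, teaching_object_pose, robot_grabbing_pose):
    """Store one taught one-to-one correspondence."""
    key = (object_id, quantize_pose(teaching_object_pose))
    teaching_table[key] = np.asarray(robot_grabbing_pose, dtype=float)
```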
The embodiments of the present disclosure further provide the grabbing process of the robot grabbing control method; fig. 2 is a schematic flow chart of the robot grabbing control method during the grabbing process according to an embodiment of the present disclosure. As shown in fig. 2, the grabbing process includes S201 to S204:
s201, obtaining an image of an object to be grabbed.
And S202, determining the pose of the object to be grabbed according to the image of the object to be grabbed.
After teaching is finished, an image of the object to be grabbed can be acquired by the image acquisition device on the mechanical arm, and the pose of the object to be grabbed is determined from the acquired image.
S203, determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose.
And S204, controlling the robot to grab the object to be grabbed according to the grabbing pose of the robot.
Because the teaching one-to-one correspondence between teaching object poses and robot grabbing poses was established during teaching, once the pose of the object to be grabbed is determined, the robot grabbing pose corresponding to it can be looked up in the stored teaching one-to-one correspondence. After the robot grabbing pose corresponding to the pose of the object to be grabbed is determined, the robot can be controlled to grab the object to be grabbed according to that grabbing pose.
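Continuing the illustrative lookup-table sketch from the teaching section (same hypothetical helpers), the lookup of S203 then reduces to a dictionary access:

```python
def find_grabbing_pose(object_id, observed_pose):
    """S203: look up the taught robot grabbing pose for an observed object pose.
    Returns None when no correspondence was taught for this pose."""
    return teaching_table.get((object_id, quantize_pose(observed_pose)))

# Example usage (identifiers are hypothetical):
# grasp = find_grabbing_pose("part_a", observed_pose)
# if grasp is not None: command the robot to grab at this pose (S204)
```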
According to the robot grabbing control method provided by the embodiments of the present disclosure, in the grabbing work after teaching, the image of the object to be grabbed is acquired and its pose is determined from the image; the robot grabbing pose corresponding to that pose is then determined from the teaching one-to-one correspondence established during teaching, and the robot is controlled to grab the object according to the robot grabbing pose, achieving accurate grabbing of the object to be grabbed. In the embodiments of the present disclosure, different teaching object poses correspond to robot grabbing poses, so in the grabbing work after teaching, the placing pose of the object on the workbench does not need to be fixed, and the object can still be grabbed accurately without a positioning fixture.
In some embodiments, before determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the pose of the teaching object and the robot grabbing pose, the method further includes:
and determining whether the object to be grabbed belongs to a preset object to be grabbed or not according to the image of the object to be grabbed, and executing the operation of determining the grabbing pose of the robot corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the pose of the teaching object and the grabbing pose of the robot when the object to be grabbed belongs to the preset object to be grabbed.
Before the robot grabs an object, it first judges whether the object to be grabbed belongs to the preset objects to be grabbed. If so, the object is grabbed: the robot grabbing pose corresponding to the pose of the object to be grabbed is determined according to the teaching one-to-one correspondence between teaching object poses and robot grabbing poses. If the object does not belong to the preset objects to be grabbed, it is not grabbed. It may also happen that the object belongs to the preset objects to be grabbed but, because of how it is placed, the robot cannot form a preset grabbing pose; in that case the robot does not grab it, and the hand teaching can be performed again or the object can be adjusted to a suitable grabbing pose. Judging the object image before grabbing effectively avoids grabbing failures caused by objects that do not match a stored grabbing pose, improving grabbing efficiency.
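The pre-grab check described above can be placed in front of that lookup. A minimal sketch, assuming a hypothetical set of preset object identifiers and the find_grabbing_pose helper from the earlier sketch:

```python
PRESET_OBJECT_IDS = {"part_a", "part_b"}   # hypothetical preset objects to be grabbed

def decide_grab(object_id, observed_pose):
    """Grab only preset objects whose observed pose matches a taught grabbing pose."""
    if object_id not in PRESET_OBJECT_IDS:
        return None                          # not a preset object: do not grab
    grasp = find_grabbing_pose(object_id, observed_pose)
    if grasp is None:
        # Preset object, but placed so that no taught grabbing pose matches:
        # teach again by hand or adjust the object's placement, then retry.
        return None
    return grasp
```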
In some embodiments, S104, determining the robot grabbing pose from the hand grabbing image, includes:
s1041, determining a hand grabbing pose according to the hand grabbing image.
And S1042, converting the hand grabbing pose into a robot grabbing pose.
In the embodiments of the present disclosure, the hand grabbing pose is determined from the hand grabbing image by gesture recognition, and the hand grabbing pose is then converted into the robot grabbing pose. The embodiments of the present disclosure do not limit the specific gesture recognition algorithm, nor the specific algorithm used to convert the hand grabbing pose into the robot grabbing pose.
In some embodiments, S1041, determining the hand grabbing pose from the hand grabbing image, includes:
s10411, determining a hand grabbing gesture according to the hand grabbing image, and determining key point coordinates in the hand grabbing gesture.
In the hand teaching process, the hand grabbing gesture is first recognized from the hand grabbing image, and the key point coordinates in the hand grabbing gesture are determined. For example, if the key point in the hand grabbing gesture is A, the three-dimensional coordinates (x, y, z) of key point A are determined. Key points in the hand grabbing gesture may be, for example, contact points between the hand and the object to be grabbed, hand joints, or the web between the thumb and index finger. They are not limited here and can be set according to the end effector, the object to be grabbed, and the hand grabbing situation.
And S10412, determining the hand grabbing direction according to the key point coordinates.
And S10413, determining a hand grabbing pose according to the key point coordinates and the hand grabbing direction.
Because the teaching object is three-dimensional and the human hand grabs the object along a certain direction, the hand grabbing direction (rx, ry, rz) also needs to be determined after the three-dimensional coordinates of the key points are obtained. The three-dimensional coordinates (x, y, z) of key point A together with the hand grabbing direction (rx, ry, rz) then uniquely determine the hand grabbing pose (x, y, z, rx, ry, rz).
In some embodiments, the end effector on the mechanical arm may be, for example, a suction cup, and the corresponding human hand teaching action may be pointing the index finger at a contact point on the surface of the object to be grabbed. An image acquisition device on the robot mechanical arm (such as an RGBD camera) captures the teaching action; a gesture recognition algorithm recognizes the hand grabbing gesture and locates the index-finger key points in the RGBD camera coordinate system, the three-dimensional coordinates (x, y, z) of the contact point between the index finger and the object are determined in that coordinate system, and the grabbing direction (rx, ry, rz) is determined from the three-dimensional coordinates of contact point A and the index-finger key points.
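Purely as an illustration of this suction-cup case (the disclosure does not fix the mathematics), the grabbing direction can be taken as the unit vector from an index-finger key point to contact point A and encoded as a rotation vector; treating the tool z-axis as the approach axis, and the availability of scipy, are assumptions:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def suction_grabbing_pose(contact_a, finger_keypoint):
    """6D grabbing pose (x, y, z, rx, ry, rz) in the RGBD camera frame:
    position at contact point A, approach along the pointing finger."""
    a = np.asarray(contact_a, dtype=float)
    approach = a - np.asarray(finger_keypoint, dtype=float)  # finger pointing direction
    approach /= np.linalg.norm(approach)
    # Minimal rotation taking the tool z-axis onto the approach direction.
    rot, _ = R.align_vectors([approach], [[0.0, 0.0, 1.0]])
    return np.concatenate([a, rot.as_rotvec()])
```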
In some embodiments, the end effector on the mechanical arm may be, for example, a two-finger gripper, and the human hand teaching action may be, for example, holding the four fingers together and pinching the object to be grabbed with the thumb. The image acquisition device on the robot mechanical arm (such as an RGBD camera) captures the teaching action; a gesture recognition algorithm locates hand key points such as the thumb and middle finger in the RGBD camera coordinate system, and then determines, in that coordinate system, the three-dimensional coordinates of contact point A (x1, y1, z1) between the thumb and the object, contact point B (x2, y2, z2) between the middle finger and the object, and the web position C (x3, y3, z3). The grabbing direction (rx, ry, rz) can be determined from the three-dimensional coordinates of points A, B, and C. Points A and B are the contact points and point C is the center point of the gripper jaws, giving the grabbing pose (x3, y3, z3, rx, ry, rz) in the camera coordinate system.
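One possible way, assumed here rather than stated in the disclosure, to turn the three recognized points into the grabbing pose (x3, y3, z3, rx, ry, rz) is to take the jaw closing axis from A to B and the approach axis from the web position C toward the midpoint of A and B:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def gripper_grabbing_pose(a, b, c):
    """Grabbing pose (x3, y3, z3, rx, ry, rz) in the camera frame from contact
    points A and B and jaw-center point C (all 3D camera-frame coordinates)."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    x_axis = b - a                                    # jaw closing direction
    x_axis /= np.linalg.norm(x_axis)
    approach = (a + b) / 2.0 - c                      # from the web toward the fingertips
    z_axis = approach - x_axis * (approach @ x_axis)  # orthogonalize against x
    z_axis /= np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)                 # completes a right-handed frame
    rot = R.from_matrix(np.column_stack([x_axis, y_axis, z_axis]))
    return np.concatenate([c, rot.as_rotvec()])
```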
By adopting gesture recognition, the embodiments of the present disclosure can recognize the hand grabbing gesture directly from the hand grabbing image, and the key points in the gesture can be recognized without attaching markers to the hand (for example, to the back of the hand). The back of the hand therefore does not need to be kept facing the image acquisition device, and a more flexible 6D hand grabbing pose can be obtained during teaching, which suits the grabbing requirements of various products.
Optionally, determining the hand grabbing gesture according to the hand grabbing image and determining the key point coordinates in the hand grabbing gesture may, for example, proceed as follows:
The hand grabbing gesture is determined from the hand grabbing image; a fully convolutional neural network generation model performs feature extraction on the hand grabbing gesture and generates a dense hand feature representation image, a representation that preserves the spatial relationships among the dense hand features. A supervised learning model then converts the dense feature representation image into the key point coordinates of the hand grabbing gesture.
In this way, the teaching grabbing points of multiple fingers of the hand on the object can be recognized directly by the gesture recognition algorithm, which adapts to different teaching persons, as illustrated by the sketch below.
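As a concrete illustration of the dense-feature idea (the architecture, layer sizes, and the soft-argmax readout are assumptions; the disclosure does not specify them), a fully convolutional network can output one heatmap per key point and read coordinates off in a way that preserves spatial relationships end to end:

```python
import torch
import torch.nn as nn

class HandKeypointFCN(nn.Module):
    """Toy fully convolutional model: image -> per-keypoint heatmaps -> coordinates."""
    def __init__(self, num_keypoints=21):
        super().__init__()
        self.features = nn.Sequential(                # dense feature extraction
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_keypoints, 1)   # one heatmap per key point

    def forward(self, img):                           # img: (B, 3, H, W)
        heatmaps = self.head(self.features(img))      # (B, K, H, W)
        b, k, h, w = heatmaps.shape
        probs = heatmaps.flatten(2).softmax(-1).view(b, k, h, w)
        xs = torch.arange(w, dtype=img.dtype, device=img.device)
        ys = torch.arange(h, dtype=img.dtype, device=img.device)
        u = (probs.sum(dim=2) * xs).sum(dim=-1)       # expected column per key point
        v = (probs.sum(dim=3) * ys).sum(dim=-1)       # expected row per key point
        return torch.stack([u, v], dim=-1)            # (B, K, 2) pixel coordinates
```

The resulting 2D pixel coordinates can then be lifted to 3D key points using the depth channel of the RGBD camera, supplying the 3D inputs assumed in the pose computations above.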
It should be noted that the end effector of the robot can be a suction cup, a parallel two-finger clamping jaw, a three-finger clamping jaw, etc., and the form of the end effector for grabbing objects can be flexibly selected according to the characteristics of different objects.
In some embodiments, S1042, converting the hand grabbing pose into the robot grabbing pose, includes: converting the hand grabbing pose into the robot grabbing pose through a preset hand-eye calibration relationship.
The hand-eye calibration relationship, i.e., the coordinate transformation between the image acquisition device coordinate system and the robot base coordinate system, can be calibrated in advance. Because the hand grabbing pose determined by machine vision is expressed in the image acquisition device coordinate system, after the hand grabbing pose is determined it needs to be converted into the robot grabbing pose in the robot base coordinate system, so that the motion the end effector must perform to grab can be computed from the robot grabbing pose in the base frame.
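For illustration, assuming the hand-eye calibration result is available as a 4x4 homogeneous matrix T_base_cam (with an eye-in-hand camera, this matrix is itself the product of the current end-effector pose and the calibrated flange-to-camera transform; both names are assumptions), the conversion can be sketched as:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def camera_pose_to_base(pose_cam, T_base_cam):
    """Convert a 6D grabbing pose (x, y, z, rx, ry, rz) from the image
    acquisition device (camera) frame to the robot base frame."""
    T_cam_grasp = np.eye(4)
    T_cam_grasp[:3, :3] = R.from_rotvec(np.asarray(pose_cam[3:], dtype=float)).as_matrix()
    T_cam_grasp[:3, 3] = pose_cam[:3]
    T_base_grasp = T_base_cam @ T_cam_grasp           # chain the transforms
    rotvec = R.from_matrix(T_base_grasp[:3, :3]).as_rotvec()
    return np.concatenate([T_base_grasp[:3, 3], rotvec])
```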
The embodiments of the present disclosure further provide a robot grabbing control device; fig. 3 is a structural block diagram of a robot grabbing control device according to an embodiment of the present disclosure. As shown in fig. 3, the device includes a first image acquisition module 11, an object pose determination module 12, a second image acquisition module 13, a grabbing pose determination module 14, and a teaching one-to-one correspondence determination module 15.
The first image acquisition module 11 is configured to acquire the image of the teaching object. The object pose determination module 12 is configured to determine the pose of the teaching object according to the image of the teaching object. The second image acquisition module 13 is configured to acquire the hand grabbing image when a human hand grabs the teaching object in different poses. The grabbing pose determination module 14 is configured to determine the robot grabbing pose according to the hand grabbing image. The teaching one-to-one correspondence determination module 15 is configured to establish and store the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose.
Because the grabbing pose can be taught by a human hand, a suitable grabbing pose can be specified for the object. In addition, the robot's grabbing teaching can be completed automatically: no repeated teach-pendant debugging is needed, the grabbing pose does not need to be computed in real time during grabbing, and the method does not rely on the operator's experience, so efficiency can be improved and grabbing precision and reliability can be ensured.
It should be noted that the explanation of the robot grabbing control method embodiments above also applies to the robot grabbing control device of this embodiment. The specific manner in which the modules of the device embodiments perform operations has been described in detail in the method embodiments and is not elaborated here.
In some embodiments, the robot grabbing control device may further include a third image acquisition module, an object-to-be-grabbed pose determining module, a robot grabbing pose determining module, and an object-to-be-grabbed grabbing module.
The third image acquisition module is used to acquire the image of the object to be grabbed. The object-to-be-grabbed pose determining module is used to determine the pose of the object to be grabbed according to the image of the object to be grabbed. The robot grabbing pose determining module is used to determine the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose. The object-to-be-grabbed grabbing module is used to control the robot to grab the object to be grabbed according to the robot grabbing pose.
The embodiments of the present disclosure further provide a storage medium storing a computer program; when the computer program is executed by a processor, the steps of the robot grabbing control method in any of the embodiments above can be implemented, which are not described here again.
It should be noted that examples of the storage medium include, but are not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the storage medium include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The storage medium provided by the above embodiments of the present disclosure shares the beneficial effects of the robot grabbing control method provided by the embodiments of the present disclosure, since the application programs or instructions stored on it adopt, execute, or implement that method.
The embodiment of the disclosure also provides a robot. As shown in fig. 4, it includes: an image capture device 21, a robotic arm 22, an end effector 23, and a controller (not shown). The image acquisition device 21 and the end effector 23 are located on the robot arm 22, and the controller is electrically connected to the image acquisition device 21, the robot arm 22, and the end effector 23, respectively. The image acquisition device 21 is used for acquiring an image of a teaching object and a hand grabbing image to acquire teaching one-to-one correspondence between the pose of the teaching object and the grabbing pose of the robot, and is used for acquiring an image of an object to be grabbed to acquire the pose of the object to be grabbed. The end effector 23 is used for grabbing a teaching object or an object to be grabbed under the control of the controller, and when the teaching object or the object to be grabbed appears, the end effector 23 correspondingly grabs according to the pose of the teaching object or the pose of the object to be grabbed. The controller is used for realizing the steps of the robot gripping control method in any one of the above embodiments.
When grabbing control is performed by this robot, neither a three-dimensional model of the teaching object nor a model of the robot grabbing pose needs to be obtained in advance: the image of the teaching object and the hand grabbing image can be acquired by the image acquisition device 21 on the mechanical arm 22, from which the teaching object pose and the corresponding robot grabbing pose are obtained and their teaching one-to-one correspondence is established and stored, so that the whole teaching process is completed automatically. No repeated teach-pendant debugging is needed during grabbing teaching and the grabbing pose does not need to be computed in real time during grabbing, so teaching efficiency can be improved; and because the teaching process does not depend on the operator's experience with the robot, grabbing precision and reliability can be ensured.
In some embodiments, the image acquisition device 21 may include, for example, at least one camera that photographs the teaching object placed on the workbench to acquire its image; the image acquisition device 21 can also acquire the hand grabbing image. It should be noted that the same image acquisition device may be used to acquire both the teaching object image and the hand grabbing image. In other embodiments, a first image acquisition device can be arranged to acquire the teaching object image and a second image acquisition device to acquire the hand grabbing image.
In some embodiments, the end effector may include a suction cup, a parallel two-finger grip, a three-finger grip, etc., and the form of the end effector may be flexibly selected according to the characteristics of the object to be gripped.
In some embodiments, the mechanical arm may be configured as a three-degree-of-freedom arm, a six-degree-of-freedom arm, or the like; the type of mechanical arm can be chosen according to actual needs. Under program instructions, the mechanical arm can be accurately positioned at a given point to carry out the operation, which ensures the precision of the grabbing process and saves manpower.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A robot grabbing control method, characterized by comprising the following steps:
acquiring an image of a teaching object;
determining the pose of the teaching object according to the image of the teaching object;
acquiring a hand grabbing image when a human hand grabs the teaching object in different poses;
determining a robot grabbing pose according to the hand grabbing image;
establishing a teaching one-to-one correspondence relationship between the pose of the teaching object and the grabbing pose of the robot and storing the teaching one-to-one correspondence relationship;
the teaching object poses comprise teaching object poses of different teaching objects and different 6D poses of the same teaching object.
2. The robot grabbing control method according to claim 1, further comprising:
acquiring an image of an object to be grabbed;
determining the pose of the object to be grabbed according to the image of the object to be grabbed;
determining, according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose, the robot grabbing pose corresponding to the pose of the object to be grabbed;
and controlling the robot to grab the object to be grabbed according to the grabbing pose of the robot.
3. The robot grabbing control method according to claim 2, wherein before determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose, the method further comprises:
and determining whether the object to be grabbed belongs to a preset object to be grabbed or not according to the image of the object to be grabbed, and executing the operation of determining the grabbing pose of the robot corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the pose of the teaching object and the grabbing pose of the robot when the object to be grabbed belongs to the preset object to be grabbed.
4. The robot grabbing control method according to claim 1, wherein the determining of a robot grabbing pose from the hand grabbing image comprises:
determining a hand grabbing pose according to the hand grabbing image;
and converting the hand grabbing pose into a robot grabbing pose.
5. The robot grabbing control method according to claim 4, wherein the determining of a hand grabbing pose from the hand grabbing image comprises:
determining a hand grabbing gesture according to the hand grabbing image, and determining key point coordinates in the hand grabbing gesture;
determining the hand grabbing direction according to the key point coordinates;
and determining the hand grabbing pose according to the key point coordinates and the hand grabbing direction.
6. The robot grabbing control method according to claim 4, wherein converting the hand grabbing pose into a robot grabbing pose comprises:
and converting the hand grabbing pose into a robot grabbing pose through a preset hand-eye calibration relation.
7. A robot grabbing control device, characterized by comprising:
the first image acquisition module is used for acquiring an image of a teaching object;
the object pose determining module is used for determining the pose of the teaching object according to the image of the teaching object;
the second image acquisition module is used for acquiring a hand grabbing image when a human hand grabs the teaching object in different poses;
the grabbing pose determining module is used for determining a grabbing pose of the robot according to the hand grabbing image;
the teaching one-to-one correspondence determining module is used for establishing and storing teaching one-to-one correspondence between the pose of the teaching object and the grabbing pose of the robot;
the teaching object poses comprise teaching object poses of different teaching objects and different 6D poses of the same teaching object.
8. The robot grabbing control device according to claim 7, further comprising:
the third image acquisition module is used for acquiring an image of an object to be grabbed;
the to-be-grabbed object pose determining module is used for determining the pose of the to-be-grabbed object according to the to-be-grabbed object image;
the robot grabbing pose determining module is used for determining the robot grabbing pose corresponding to the pose of the object to be grabbed according to the teaching one-to-one correspondence between the teaching object pose and the robot grabbing pose;
and the object to be grabbed grabbing module is used for controlling the robot to grab the object to be grabbed according to the grabbing pose of the robot.
9. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1-6.
10. A robot, comprising:
the system comprises an image acquisition device, a mechanical arm, an end effector and a controller;
the image acquisition device and the end effector are positioned on the mechanical arm; the controller is electrically connected with the image acquisition device, the mechanical arm and the end effector respectively; the image acquisition device is used for acquiring images of a teaching object and hand grabbing images; the end effector is used for grabbing a teaching object or an object to be grabbed under the control of the controller; and the controller is adapted to implement the steps of the robot grabbing control method according to any one of claims 1 to 6.
CN202111248281.9A 2021-10-26 2021-10-26 Robot grabbing control method and device, storage medium and robot Pending CN113894774A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111248281.9A CN113894774A (en) 2021-10-26 2021-10-26 Robot grabbing control method and device, storage medium and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111248281.9A CN113894774A (en) 2021-10-26 2021-10-26 Robot grabbing control method and device, storage medium and robot

Publications (1)

Publication Number Publication Date
CN113894774A (en) 2022-01-07

Family

ID=79026367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111248281.9A Pending CN113894774A (en) 2021-10-26 2021-10-26 Robot grabbing control method and device, storage medium and robot

Country Status (1)

Country Link
CN (1) CN113894774A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011065035A1 (en) * 2009-11-24 2011-06-03 株式会社豊田自動織機 Method of creating teaching data for robot, and teaching system for robot
US10166676B1 (en) * 2016-06-08 2019-01-01 X Development Llc Kinesthetic teaching of grasp parameters for grasping of objects by a grasping end effector of a robot
CN111002295A (en) * 2019-12-30 2020-04-14 中国地质大学(武汉) Teaching glove and teaching system of two-finger grabbing robot
CN113492393A (en) * 2020-04-08 2021-10-12 发那科株式会社 Robot teaching demonstration by human
CN111958604A (en) * 2020-08-20 2020-11-20 扬州蓝邦数控制刷设备有限公司 Efficient special-shaped brush monocular vision teaching grabbing method based on CAD model

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115570562A (en) * 2022-09-05 2023-01-06 梅卡曼德(北京)机器人科技有限公司 Robot assembly pose determining method and device, robot and storage medium
CN115401698A (en) * 2022-10-17 2022-11-29 福州大学 Grabbing gesture detection-based manipulator dexterous grabbing planning method and system

Similar Documents

Publication Publication Date Title
CN108399639B (en) Rapid automatic grabbing and placing method based on deep learning
CN105598987B (en) Determination of a gripping space for an object by means of a robot
JP5685027B2 (en) Information processing apparatus, object gripping system, robot system, information processing method, object gripping method, and program
CN108858193B (en) Mechanical arm grabbing method and system
CN113894774A (en) Robot grabbing control method and device, storage medium and robot
CN113492393A (en) Robot teaching demonstration by human
CN113785303A (en) Machine learning object recognition by means of a robot-guided camera
CN112775959A (en) Method and system for determining grabbing pose of manipulator and storage medium
US20220331964A1 (en) Device and method for controlling a robot to insert an object into an insertion
US20220335622A1 (en) Device and method for training a neural network for controlling a robot for an inserting task
CN110605711A (en) Method, device and system for controlling cooperative robot to grab object
CN114025928A (en) End effector control system and end effector control method
CN114347008A (en) Industrial robot-based method and device for grabbing workpieces out of order and intelligent terminal
Çoban et al. Wireless teleoperation of an industrial robot by using myo arm band
CN112372641A (en) Family service robot figure article grabbing method based on visual feedforward and visual feedback
Kita et al. A method for handling a specific part of clothing by dual arms
CN112338922B (en) Five-axis mechanical arm grabbing and placing method and related device
CN114463244A (en) Vision robot grabbing system and control method thereof
CN115635482B (en) Vision-based robot-to-person body transfer method, device, medium and terminal
Liu et al. Mapping human hand motion to dexterous robotic hand
US20220203517A1 (en) Non-transitory storage medium and method and system of creating control program for robot
JP2022133256A (en) Device and method for controlling robot for picking up object
CN114494426A (en) Apparatus and method for controlling a robot to pick up an object in different orientations
Zhu et al. A robotic semantic grasping method for pick-and-place tasks
Vatsal et al. Augmenting vision-based grasp plans for soft robotic grippers using reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220107