CN114851187A - Obstacle avoidance mechanical arm grabbing method, system and device and storage medium - Google Patents

Obstacle avoidance mechanical arm grabbing method, system and device and storage medium

Info

Publication number
CN114851187A
CN114851187A (application CN202210300135.4A); granted as CN114851187B
Authority
CN
China
Prior art keywords
mechanical arm
mechanical
grabbing
clamping jaw
collision
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210300135.4A
Other languages
Chinese (zh)
Other versions
CN114851187B (en)
Inventor
陈建 (Chen Jian)
贾奎 (Jia Kui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cross Dimension Shenzhen Intelligent Digital Technology Co ltd
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN202210300135.4A priority Critical patent/CN114851187B/en
Publication of CN114851187A publication Critical patent/CN114851187A/en
Application granted granted Critical
Publication of CN114851187B publication Critical patent/CN114851187B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/04 Viewing devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an obstacle avoidance mechanical arm grabbing method, system, device and storage medium, wherein the method comprises: establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm; calibrating the spatial transformation between the camera and the mechanical arm base; matching the three-dimensional model of the object with the point cloud acquired by a depth camera, and solving the pose of the object to be grabbed in the actual scene; acquiring a collision-avoiding grabbing pose of the mechanical arm; and establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the grabbing pose, and solving a grabbing path. During grabbing, the invention can exploit the whole scene information captured by the camera together with the signed distance field model, and thereby obtains, in complex actual scenes, grabbing poses that are more robust and can resist stronger external disturbance forces after the object is grabbed. The invention can be widely applied in the technical field of robot control.

Description

Obstacle avoidance mechanical arm grabbing method, system and device and storage medium
Technical Field
The invention relates to the technical field of robot control, and in particular to an obstacle avoidance mechanical arm grabbing method, system, device and storage medium.
Background
An automatic mechanical arm grabbing system drives the mechanical arm, after a camera has observed an object, to move to a specific pose, grab the object and place it at a specified position. Such a system can replace humans in highly repetitive and heavy work such as part assembly in manufacturing plants and conveyor-belt sorting, and can facilitate package sorting in the logistics field. However, traditional automatic mechanical arm grabbing relies on manual teaching of grasp poses, which is time-consuming and labor-intensive; a stable and accurate grasp pose estimation method can eliminate this step and greatly improve the degree of factory automation.
A grasp pose estimation method ultimately outputs one or more grasp poses for the mechanical clamping jaw. Traditional grasp pose estimation methods focus on making the inner side of the clamping jaw fit the surface of the object to be grabbed as closely as possible, ensuring that the object is less likely to slip after being grabbed. However, before the mechanical clamping jaw actually closes and lifts the object, the grasp pose reached by the clamping jaw must not collide with other objects in the scene, which would easily damage the mechanical arm; nor may the arm collide with the object to be grabbed in advance, which would change the object's position and finally lead to an empty grasp or to the object slipping off. At present, research on how to compute collision-avoiding grasp poses is still insufficient, and existing methods easily estimate grasp poses that cause the mechanical clamping jaw to collide with the object to be grabbed, with other objects in the scene, or with the ground before the grasping action is performed. Moreover, the mechanical arm collision models established by traditional methods, such as convex hull decomposition schemes, mainly judge collisions against a set of convex hulls obtained by decomposing the object; their face counts and accuracy are low, and their computation speed is also slow.
Disclosure of Invention
In order to solve, at least to a certain extent, one of the technical problems existing in the prior art, the invention aims to provide an obstacle avoidance mechanical arm grabbing method, system, device and storage medium.
The technical solution adopted by the invention is as follows:
an obstacle avoidance mechanical arm grabbing method comprises the following steps:
establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm;
acquiring a color image and a depth image of a depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving the pose of the object to be grabbed in the actual scene;
acquiring a collision-avoiding grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the grabbing pose of the mechanical arm, and solving a grabbing path, so that the object is grabbed while scene obstacles are avoided.
Further, the establishing of the signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm includes:
obtaining model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on them, removing their internal structures to obtain the outermost collision structure of the mechanical clamping jaw, and obtaining the signed distance field model.
Further, obtaining model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on the mechanical arm and the mechanical clamping jaw, removing internal structures of the mechanical arm and the mechanical clamping jaw, obtaining an outermost collision structure of the mechanical clamping jaw, and obtaining a signed distance field model includes:
importing model files of the mechanical arm and the mechanical clamping jaw, and acquiring physical models of the mechanical arm and the mechanical clamping jaw;
each mechanical link of the mechanical arm is processed as follows:
establishing a cuboid bounding box for each mechanical connecting rod, and dividing a preset number of square voxels in the cuboid bounding box;
calculating the distance from the center point of each voxel to the surface of the mechanical connecting rod, and establishing an unsigned distance field of each mechanical connecting rod according to the distance;
subtracting a specific value a from the unsigned distance field so that the values stored on some voxels become smaller than 0, and performing isosurface extraction with the marching cubes method to extract a plurality of mutually disjoint isosurfaces;
establishing bounding boxes of all the isosurfaces, keeping every isosurface that is not enclosed by the bounding box of any other isosurface and discarding the rest, the non-enclosed isosurfaces representing the outermost structure of the mechanical link;
performing an s-shaped traversal of the vertices of the cubic voxels, and negating the stored distance values whenever the traversal crosses the outermost isosurface, so as to obtain a signed distance field;
recalculating the distance values stored in the interior voxels against the outermost isosurface to obtain the final signed distance field model.
Further, the acquiring of the color image and the depth image of the depth camera and the pose of the mechanical clamping jaw, and the calibrating of the spatial transformation between the camera and the mechanical arm base, includes:
placing an asymmetric dot calibration plate at a preset fixed position, and mounting a depth camera at the end of the mechanical arm;
moving the end of the mechanical arm multiple times, and recording a plurality of groups of color images and depth images of the calibration plate captured by the depth camera, together with the poses of the mechanical arm end effector at the same moments;
matching with an iterative closest point algorithm, according to the color images and depth images captured by the depth camera, to obtain the pose of the calibration plate at each corresponding moment;
and solving an equation to obtain the coordinate transformation between the depth camera and the mechanical arm end effector.
Further, the matching of the three-dimensional model with the point cloud acquired by the depth camera and the solving of the pose of the object to be grabbed in the actual scene include:
acquiring the point cloud of the object to be grabbed;
matching the three-dimensional model of the object to be grabbed with the point cloud acquired by the depth camera by using a point-to-plane iterative closest point algorithm, and solving the pose of the object to be grabbed in the actual scene; wherein the optimization formula of a single iteration is as follows:
x* = argmin_x sum_{i=1}^{m} ( n_{p_i}^T ( R q_i + t - p_i ) )^2

wherein x = [r^T, t^T]^T comprises the three-dimensional rotation r and the three-dimensional translation t and thus expresses a three-dimensional pose, and R is the rotation matrix corresponding to r; q_i is a point on the surface of the three-dimensional model of the object to be grabbed, p_i is the corresponding point in the scene point cloud, n_{p_i} is the normal vector of the scene surface at p_i, and m is the number of corresponding point pairs.
Further, the acquiring of the collision-avoiding mechanical arm grabbing pose according to the signed distance field model and the point cloud acquired by the depth camera includes:
solving an optimal grabbing pose according to the signed distance field model, the point cloud near the object acquired by the depth camera, and the point cloud of the object to be grabbed after the object pose transformation is applied; wherein the optimization formula of a single iteration is as follows:
x* = argmin_x E_c(x) + E_m(x) + E_n(x)

wherein x = [r^T, t^T]^T, r is the rotation vector of the mechanical clamping jaw pose, t is the translation vector of the mechanical clamping jaw pose, and R is the rotation matrix corresponding to r; the objective comprises three parts: a collision avoidance part E_c, a matching part E_m and a normal vector fitting part E_n.

The collision avoidance part is of the form:

E_c(x) = sum_{i=1}^{k} ( phi_i + g_{c_i}^T ( R f_{c_i} + t - f_{c_i} ) )^2

wherein f_{c_i} are the points of the point cloud on the inner side of the mechanical clamping jaw that collide with the scene, k in total; phi_i and g_{c_i} are the value and the gradient of the mechanical clamping jaw collision model (the signed distance field) at the point f_{c_i}; and n_{f_{c_i}} is the normal vector of the mechanical clamping jaw surface at f_{c_i}, with which g_{c_i} coincides near the surface.

The matching part is of the form:

E_m(x) = sum_{i=1}^{l} ( n_{f_i}^T ( R f_i + t - p_{f_i} ) )^2

wherein f_i are the points on the inner side of the mechanical clamping jaw that are close to the object, l in total; p_{f_i} is the corresponding point on the object close to the inner side of the clamping jaw; and n_{f_i} is the normal vector of the mechanical clamping jaw surface at f_i.

The normal vector fitting part is of the form:

E_n(x) = sum_{i=1}^{l} || R n_{f_i} + n_{p_{f_i}} ||^2

wherein n_{p_{f_i}} is the normal vector of the object surface at p_{f_i} (the jaw normal and the object normal are fitted to be anti-parallel).
Further, the establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the grabbing posture of the mechanical arm, and solving a grabbing path includes:
acquiring a kinematic model of the mechanical arm;
establishing a forward kinematics function and an inverse kinematics function by using the kinematic model of the mechanical arm;
establishing a collision judgment function of the mechanical arm by using the forward kinematics function and the signed distance field model;
inputting the obtained mechanical arm grabbing pose into the inverse kinematics function to obtain the target joint pose of the mechanical arm;
and planning an obstacle avoidance path in the joint pose space according to the collision judgment function and the target joint pose, driving the mechanical arm to bypass colliding objects in the scene, reach the target joint pose and grab.
Another technical solution adopted by the invention is as follows:
an obstacle avoidance robot grasping system comprising:
the model building module is used for building a signed distance field model of the mechanical arm and a mechanical clamping jaw mounted at the tail end of the mechanical arm;
the hand-eye calibration module is used for acquiring a color image and a depth image of the depth camera and the gesture of the mechanical clamping jaw and calibrating the space conversion relation between the camera and the mechanical arm base;
the point cloud matching module is used for acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving the posture of the object to be grabbed in the actual scene;
the gesture obtaining module is used for obtaining a mechanical arm grabbing gesture for avoiding collision according to the signed distance field model and the point cloud collected by the depth camera;
and the path planning module is used for establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the grabbing posture of the mechanical arm, and solving a grabbing path so as to avoid a scene obstacle to grab the object.
Another technical solution adopted by the invention is as follows:
an obstacle avoidance mechanical arm grabbing device comprises:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
Another technical solution adopted by the invention is as follows:
a computer-readable storage medium in which a processor-executable program is stored, wherein the program, when executed by a processor, is used to perform the method described above.
The invention has the following beneficial effects: during grabbing, the invention can exploit the whole scene information captured by the camera together with the signed distance field model, thereby obtaining, in complex actual scenes, grabbing poses that are more robust and can resist stronger external disturbance forces after the object is grabbed; collision judgment is also faster and more accurate, so the invention has good application prospects in the logistics and intelligent manufacturing fields.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below. It should be understood that the following drawings only illustrate some embodiments of the invention, and that those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the steps of a three-dimensional-vision-based obstacle avoidance mechanical arm grabbing method in an embodiment of the present invention;
FIG. 2 is a schematic representation of a mechanical jaw model in an embodiment of the present invention;
FIG. 3 is a graph of a reconstructed visualization of a signed distance field of the model of the mechanical jaw of FIG. 2 according to an embodiment of the invention;
FIG. 4 is a graph of a reconstructed visualization of a signed distance field for each mechanical link of the global manipulator model in an embodiment of the invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that the orientation or positional relationship referred to in the description of the orientation, such as the upper, lower, front, rear, left, right, etc., is based on the orientation or positional relationship shown in the drawings, and is only for convenience of description and simplification of description, and does not indicate or imply that the device or element referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more, and "a plurality of" means two or more; "greater than", "less than", "exceeding", etc. are understood as excluding the stated number, while "above", "below", "within", etc. are understood as including the stated number. If "first" and "second" are described, they are used only for the purpose of distinguishing technical features, and are not to be understood as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating the precedence of the indicated technical features.
In the description of the present invention, unless otherwise explicitly limited, terms such as arrangement, installation, connection and the like should be understood in a broad sense, and those skilled in the art can reasonably determine the specific meanings of the above terms in the present invention in combination with the specific contents of the technical solutions.
As shown in fig. 1, this embodiment provides a three-dimensional-vision-based obstacle avoidance mechanical arm grabbing method, which specifically includes the following steps:
S1, establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm.
Import the model files of the mechanical arm and the mechanical clamping jaw, perform collision modeling on them, remove their internal structures so that only the outermost collision structure is considered, and compute the signed distance field model.
Specifically, the physical models of the mechanical arm and the mechanical clamping jaw are imported, and each mechanical link is processed as follows:
Establish for each mechanical link a cuboid bounding box slightly larger than the link, and divide it into cubic voxels numbering no fewer than 32 × 32 × 32. The side length of the bounding box is usually about 1.2 times that of the mechanical link: too large a bounding box wastes memory and slows computation, while too small a bounding box fragments the reconstructed surface.
Calculate the distance from the center of each voxel to the surface of the mechanical link to establish the unsigned distance field of each link. A k-dimensional tree is used to accelerate the distance computation, which would otherwise be slow.
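As an illustrative sketch of this step (the sampling of the link surface into a point array, the array names, and the 32-per-axis resolution are assumptions, not part of the original disclosure), the voxel-center distances can be computed as follows:

```python
import numpy as np
from scipy.spatial import cKDTree

def unsigned_distance_field(surface_pts, bbox_min, bbox_max, res=32):
    """Distance from every voxel center to the nearest sampled surface point.

    surface_pts: (N, 3) points densely sampled on the mechanical link surface;
    with dense enough sampling this approximates the point-to-surface distance.
    """
    tree = cKDTree(surface_pts)  # the k-d tree that accelerates the queries
    axes = [np.linspace(bbox_min[i], bbox_max[i], res) for i in range(3)]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    dist, _ = tree.query(centers)  # nearest-neighbour distance per voxel center
    return dist.reshape(res, res, res)
```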
Isosurface extraction is then performed with the marching cubes method after subtracting a specific value a from the unsigned distance field, so that the values stored on some voxels become less than 0. The value of a is taken as 10% of the cuboid bounding box size. In this way a plurality of mutually disjoint, self-closed isosurfaces are extracted.
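For illustration only, this extraction might be sketched with scikit-image's marching cubes and trimesh for splitting the result into disjoint components; both library choices are assumptions (the patent only names the marching cubes algorithm), and `udf`, `bbox_min`, `bbox_max` come from the previous sketch:

```python
import trimesh
from skimage import measure

a = 0.10 * (bbox_max - bbox_min).max()  # the specific value a: 10% of the box size
verts, faces, _, _ = measure.marching_cubes(udf - a, level=0.0)  # vertices in voxel units
# Split the extracted zero level set into the mutually disjoint, self-closed isosurfaces.
components = trimesh.Trimesh(verts, faces, process=False).split(only_watertight=False)
```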
Bounding boxes are established for all isosurfaces; every isosurface that is not enclosed by the bounding box of any other isosurface is kept, and the rest are discarded. Since the isosurfaces do not intersect each other, the non-enclosed isosurfaces represent the outermost structure of the mechanical link.
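A sketch of this enclosure test using axis-aligned bounding boxes; `components` is the list of isosurface meshes from the previous sketch:

```python
import numpy as np

def outermost_isosurfaces(components):
    """Keep every isosurface whose bounding box no other isosurface's box encloses."""
    boxes = [(m.vertices.min(axis=0), m.vertices.max(axis=0)) for m in components]
    enclosed = lambda a, b: np.all(a[0] >= b[0]) and np.all(a[1] <= b[1])
    return [m for i, m in enumerate(components)
            if not any(i != j and enclosed(boxes[i], boxes[j])
                       for j in range(len(components)))]
```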
An s-shaped traversal is performed over the voxel vertices; whenever the traversal crosses the outermost isosurface, the sign of the stored distance values is flipped. This makes the distance values stored in voxels inside the object negative and those in voxels outside the object positive, creating a signed distance field.
The distance values stored in the voxels inside the object are then recalculated. Since the inner isosurfaces have been removed, these values must be recomputed against the outermost isosurface.
Finally, to compensate for the value a subtracted earlier, the specific value a is added back to the signed distance field.
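A sketch of the signing and recomputation; note that it substitutes an inside/outside parity test (`trimesh.Trimesh.contains`) for the s-shaped traversal described above, which is an implementation shortcut, not the patent's procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_distance_field(outer_mesh, centers, grid_shape, a):
    """Sign the field against the outermost isosurface and restore the offset a."""
    inside = outer_mesh.contains(centers)  # True for voxel centers inside the surface
    dist, _ = cKDTree(outer_mesh.vertices).query(centers)  # recompute vs. outer surface
    sdf = np.where(inside, -dist, dist) + a  # negative inside, then add back a
    return sdf.reshape(grid_shape)
```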
Fig. 2 shows the original model of the mechanical clamping jaw, and fig. 3 shows the reconstruction visualization of its signed distance field. The reconstruction visualization of the signed distance field of each mechanical link of the whole mechanical arm model is shown in fig. 4. It can be seen that the established collision model seals off small holes and dimples, keeping the collision geometry as simple as possible while remaining accurate overall.
S2, acquiring a color image and a depth image of the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base.
The asymmetric dot calibration plate is placed at a fixed position and the depth camera is mounted at the end of the mechanical arm.
The end of the mechanical arm is moved multiple times, and several groups of color images and depth images of the calibration plate captured by the depth camera, together with the poses of the mechanical arm end effector at the same moments, are recorded. In this embodiment, at least 5 sets of data are acquired.
Using the color and depth images captured by the depth camera, the pose of the calibration plate at each moment is obtained by matching with the iterative closest point algorithm.
From these data an overdetermined equation of the form AX = XB is constructed and solved, yielding the coordinate transformation between the depth camera and the mechanical arm end effector.
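As an aside, OpenCV ships a solver for exactly this AX = XB hand-eye problem; the sketch below is one possible realization (the use of OpenCV and of Tsai's method are assumptions), with the per-shot rotations and translations collected as lists of 3×3 and 3×1 arrays:

```python
import cv2

# R_g2b, t_g2b: end-effector poses in the arm-base frame (from the robot controller);
# R_t2c, t_t2c: calibration-plate poses in the camera frame (from the ICP matching).
R_cam2ee, t_cam2ee = cv2.calibrateHandEye(
    R_gripper2base=R_g2b, t_gripper2base=t_g2b,
    R_target2cam=R_t2c, t_target2cam=t_t2c,
    method=cv2.CALIB_HAND_EYE_TSAI)  # classic solver for the AX = XB system
```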
S3, acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving the pose of the object to be grabbed in the actual scene.
The scene containing the object is captured.
A bounding box with three-dimensional rotation (an oriented bounding box) is built so that the object is guaranteed to lie inside it; the subsequent algorithms only consider the point cloud inside this box.
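A sketch of the oriented-bounding-box crop; `R` (box axes as columns), `center` and `half_extents` describe the user-specified box and are assumed inputs:

```python
import numpy as np

def crop_to_obb(points, R, center, half_extents):
    """Keep only the scene points that fall inside the rotated bounding box."""
    local = (points - center) @ R  # point coordinates expressed in the box frame
    return points[np.all(np.abs(local) <= half_extents, axis=1)]
```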
The three-dimensional model of the object to be grabbed is matched with the point cloud acquired by the depth camera using the point-to-plane iterative closest point algorithm, and the pose of the object to be grabbed in the actual scene is solved. The optimization formula for a single iteration is as follows:
x* = argmin_x sum_{i=1}^{m} ( n_{p_i}^T ( R q_i + t - p_i ) )^2

where x = [r^T, t^T]^T, r ∈ R^3 is the rotation vector of the pose, t ∈ R^3 is the translation vector of the pose, and R is the rotation matrix corresponding to r; q_i is a point on the surface of the three-dimensional model of the object to be grabbed, p_i is the corresponding point in the scene point cloud, n_{p_i} is the normal vector of the scene surface at p_i, and m is the number of corresponding point pairs.
The above formula can be solved by the least squares method, each iteration computing x = (A^T A)^{-1} A^T b. After repeated iterations, the result converges to a stable pose.
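One Gauss-Newton step of this point-to-plane objective follows directly from the linearized residual n_i^T(q_i + r × q_i + t − p_i); `q`, `p`, `n` are (m, 3) arrays of model points, scene points and scene normals (a sketch, not the patent's exact implementation):

```python
import numpy as np

def point_to_plane_step(q, p, n):
    """Solve A x = b in least squares, i.e. x = (A^T A)^{-1} A^T b with x = [r; t]."""
    A = np.hstack([np.cross(q, n), n])   # row i: [(q_i x n_i)^T, n_i^T]
    b = np.einsum("ij,ij->i", n, p - q)  # row i: n_i . (p_i - q_i)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                  # rotation vector r and translation t
```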
S4, acquiring the collision-avoiding grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera.
Grasp pose estimation: using the signed distance field of the mechanical clamping jaw obtained in step S1 together with the color image and depth image acquired by the depth camera, a robust, collision-avoiding grabbing pose of the mechanical arm is obtained by optimization.
The optimal grabbing pose is solved using the mechanical clamping jaw collision model from step S1, the point cloud near the object acquired by the depth camera, and the point cloud of the object model to be grabbed after the object pose transformation is applied. The optimization formula for a single iteration is as follows:
x* = argmin_x E_c(x) + E_m(x) + E_n(x)

where x = [r^T, t^T]^T, r ∈ R^3 is the rotation vector of the mechanical clamping jaw pose, t ∈ R^3 is its translation vector, and R is the rotation matrix corresponding to r. The objective is composed of three parts: a collision avoidance part, a matching part and a normal vector fitting part.

The collision avoidance part is of the form:

E_c(x) = sum_{i=1}^{k} ( phi_i + g_{c_i}^T ( R f_{c_i} + t - f_{c_i} ) )^2

where f_{c_i} are the points of the point cloud on the inner side of the mechanical clamping jaw that collide with the scene, k in total; phi_i and g_{c_i} are the value and the gradient of the mechanical clamping jaw collision model (the signed distance field) at the point f_{c_i}; and n_{f_{c_i}}, the normal vector of the clamping jaw surface at f_{c_i}, coincides with g_{c_i} near the surface. The object points that collide with the mechanical clamping jaw are obtained from the coordinates of the object point cloud by trilinear interpolation in the signed distance field of the mechanical clamping jaw.
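The trilinear collision query might be sketched as follows; `sdf`, `origin` and `voxel` describe the clamping jaw's signed distance field grid, and the object points are assumed to be already expressed in the jaw frame:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def colliding_points(points, sdf, origin, voxel):
    """Return the object points with negative signed distance (i.e., in collision)."""
    idx = ((points - origin) / voxel).T  # continuous voxel coordinates, shape (3, N)
    d = map_coordinates(sdf, idx, order=1, mode="nearest")  # trilinear interpolation
    hit = d < 0  # negative distance means the point is inside the jaw
    return points[hit], d[hit]
```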
The matching part is of the form:

E_m(x) = sum_{i=1}^{l} ( n_{f_i}^T ( R f_i + t - p_{f_i} ) )^2

where f_i are the points on the inner side of the mechanical clamping jaw that are close to the object, l in total; p_{f_i} is the corresponding point on the object close to the inner side of the clamping jaw; and n_{f_i} is the normal vector of the clamping jaw surface at f_i.

The normal vector fitting part is of the form:

E_n(x) = sum_{i=1}^{l} || R n_{f_i} + n_{p_{f_i}} ||^2

where n_{p_{f_i}} is the normal vector of the object surface at p_{f_i} (the jaw normal and the object normal are fitted to be anti-parallel).

The above formula can be solved by the least squares method, each iteration computing x = (A^T A)^{-1} A^T b.
After repeated iterations, the optimization finally converges to a robust grabbing pose of the mechanical arm, so that collision is avoided.
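Because every term is a sum of squared linear residuals, each iteration can stack the three blocks and reuse the same least-squares step; the per-term Jacobian/residual builders are placeholders here (an assumed decomposition, for illustration):

```python
import numpy as np

def grasp_refinement_step(blocks):
    """blocks: [(A_c, b_c), (A_m, b_m), (A_n, b_n)] from the three energy terms."""
    A = np.vstack([Ab[0] for Ab in blocks])
    b = np.concatenate([Ab[1] for Ab in blocks])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)  # x = (A^T A)^{-1} A^T b
    return x[:3], x[3:]                        # jaw pose update [r; t]
```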
S5, establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the grabbing pose of the mechanical arm, and solving a grabbing path, so that the object is grabbed while scene obstacles are avoided.
Step S5 specifically includes steps S51-S55:
S51, acquire the global kinematic model of the mechanical arm together with the collision model established in step S1.
S52, establish a forward kinematics function and an inverse kinematics function using the mechanical arm kinematic model.
S53, establish a collision judgment function for the mechanical arm using the forward kinematics function and the collision model.
S54, input the finally optimized grabbing pose obtained in step S4 into the inverse kinematics function to obtain the target joint pose of the mechanical arm.
S55, perform obstacle avoidance path planning in the joint pose space with an improved rapidly-exploring random tree (RRT) algorithm, driving the mechanical arm to bypass other colliding objects in the scene and reach the target joint pose for grabbing.
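For illustration, the collision judgment function of step S53 can be sketched by chaining forward kinematics with the per-link signed distance fields; `fk` (returning one 4×4 pose per link) and `link_sdfs` are assumed interfaces, and the RRT planner would call this predicate on every sampled joint pose:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def in_collision(joints, scene_pts, link_sdfs, fk):
    """True if any scene point lies inside any link's signed distance field."""
    for T, (sdf, origin, voxel) in zip(fk(joints), link_sdfs):
        R, t = T[:3, :3], T[:3, 3]
        local = (scene_pts - t) @ R  # scene points expressed in the link frame
        idx = ((local - origin) / voxel).T
        d = map_coordinates(sdf, idx, order=1, mode="nearest")
        if np.any(d < 0):  # penetration of the link's outermost surface
            return True
    return False
```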
In summary, this embodiment provides an obstacle avoidance mechanical arm grabbing method based on three-dimensional vision. Collision modeling with a signed distance field whose internal structure has been removed greatly improves precision over conventional collision modeling such as convex hull decomposition, and also speeds up collision judgment compared with convex polyhedron decomposition methods. By directly optimizing over information such as the gradient of the signed distance field and the point cloud matched to the object, a robust, collision-avoiding grabbing pose of the mechanical clamping jaw is obtained; manual teaching is eliminated, grabbing stability is improved, and the automation level of the production line is raised.
This embodiment further provides an obstacle avoidance mechanical arm grabbing system, comprising:
the model building module is used for establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm;
the hand-eye calibration module is used for acquiring a color image and a depth image of the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
the point cloud matching module is used for acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving the pose of the object to be grabbed in the actual scene;
the pose acquisition module is used for acquiring a collision-avoiding grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and the path planning module is used for establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the grabbing pose, and solving a grabbing path, so that the object is grabbed while scene obstacles are avoided.
The obstacle avoidance robot grasping system of the embodiment can execute the obstacle avoidance robot grasping method provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
This embodiment further provides an obstacle avoidance mechanical arm grabbing device, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method shown in fig. 1.
The obstacle avoidance robot gripping device of the embodiment can execute the obstacle avoidance robot gripping method provided by the method embodiment of the invention, can execute any combination of the implementation steps of the method embodiment, and has corresponding functions and beneficial effects of the method.
The embodiment of the application also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
The embodiment also provides a storage medium, which stores an instruction or a program capable of executing the obstacle avoidance robot grabbing method provided by the embodiment of the method of the invention, and when the instruction or the program is run, the optional combined implementation steps of the embodiment of the method can be executed, so that the corresponding functions and beneficial effects of the method are achieved.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the described functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An obstacle avoidance mechanical arm grabbing method, characterized by comprising the following steps:
establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm;
acquiring a color image and a depth image of a depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving the pose of the object to be grabbed in the actual scene;
acquiring a mechanical arm grabbing pose for avoiding collision according to the signed distance field model and the point cloud acquired by the depth camera; and
establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematics planning solver according to the collision function and the mechanical arm grabbing pose, and solving a grabbing path, so that the object is grabbed while scene obstacles are avoided.
2. The obstacle avoidance mechanical arm grabbing method according to claim 1, wherein the establishing of the signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm comprises:
obtaining model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on them, removing their internal structures to obtain the outermost collision structure of the mechanical clamping jaw, and obtaining the signed distance field model.
3. The obstacle avoidance mechanical arm grabbing method according to claim 2, wherein the obtaining of the model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on them, removing their internal structures, obtaining the outermost collision structure of the mechanical clamping jaw, and obtaining the signed distance field model, comprises:
importing model files of the mechanical arm and the mechanical clamping jaw, and acquiring physical models of the mechanical arm and the mechanical clamping jaw;
each mechanical link of the mechanical arm is processed as follows:
establishing a cuboid bounding box for each mechanical connecting rod, and dividing a preset number of square voxels in the cuboid bounding box;
calculating the distance from the center point of each voxel to the surface of the mechanical connecting rod, and establishing an unsigned distance field of each mechanical connecting rod according to the distance;
subtracting a specific value a from the unsigned distance field so that the values stored on some voxels become smaller than 0, and performing isosurface extraction with the marching cubes method to extract a plurality of mutually disjoint isosurfaces;
establishing bounding boxes of all the isosurfaces, keeping every isosurface that is not enclosed by the bounding box of any other isosurface and discarding the rest, the non-enclosed isosurfaces representing the outermost structure of the mechanical link;
performing s-shaped traversal on the vertex of the square voxel, if the vertex passes through the outermost layer isosurface in the traversal process, negating the distance value stored in the voxel to obtain a signed distance field;
recalculating the distance values stored in the interior voxels against the outermost isosurface to obtain the final signed distance field model.
4. The obstacle avoidance mechanical arm grabbing method according to claim 1, wherein the acquiring of the color image and the depth image of the depth camera and the pose of the mechanical clamping jaw, and the calibrating of the spatial transformation between the camera and the mechanical arm base, comprises:
placing an asymmetric dot calibration plate at a preset fixed position, and mounting a depth camera at the end of the mechanical arm;
moving the end of the mechanical arm multiple times, and recording a plurality of groups of color images and depth images of the calibration plate captured by the depth camera, together with the poses of the mechanical arm end effector at the same moments;
matching with an iterative closest point algorithm, according to the color images and depth images captured by the depth camera, to obtain the pose of the calibration plate at each corresponding moment;
and solving an equation to obtain the coordinate transformation between the depth camera and the mechanical arm end effector.
5. The obstacle avoidance mechanical arm grabbing method according to claim 1, wherein the step of matching the three-dimensional model with the point cloud collected by the depth camera and solving the posture of the object to be grabbed in the actual scene comprises the steps of:
acquiring point cloud of an object to be grabbed;
matching the three-dimensional model of the object to be grabbed with the point cloud acquired by the depth camera by using a point-to-plane iterative closest point algorithm, and solving the pose of the object to be grabbed in the actual scene; wherein the optimization formula of a single iteration is as follows:
x* = argmin_x sum_{i=1}^{m} ( n_{p_i}^T ( R q_i + t - p_i ) )^2

wherein x = [r^T, t^T]^T, r is the rotation vector and t is the translation vector of the pose being solved, and R is the rotation matrix corresponding to r; q_i is a point on the surface of the three-dimensional model of the object to be grabbed, p_i is the corresponding point in the scene point cloud, n_{p_i} is the normal vector of the scene surface at p_i, and m is the number of corresponding point pairs.
6. The obstacle avoidance robot grabbing method of claim 1, wherein the obtaining of the robot grabbing pose avoiding collision according to the signed distance field model and the point cloud collected by the depth camera comprises:
solving an optimal grabbing pose according to the signed distance field model, the point cloud near the object acquired by the depth camera, and the point cloud of the object to be grabbed after the object pose transformation is applied; wherein the optimization formula of a single iteration is as follows:
x* = argmin_x E_c(x) + E_m(x) + E_n(x)

wherein x = [r^T, t^T]^T, r is the rotation vector of the mechanical clamping jaw pose, t is the translation vector of the mechanical clamping jaw pose, and R is the rotation matrix corresponding to r; the objective comprises three parts: a collision avoidance part E_c, a matching part E_m and a normal vector fitting part E_n;

the collision avoidance part is of the form:

E_c(x) = sum_{i=1}^{k} ( phi_i + g_{c_i}^T ( R f_{c_i} + t - f_{c_i} ) )^2

wherein f_{c_i} are the points of the point cloud on the inner side of the mechanical clamping jaw that collide with the scene, k in total; phi_i and g_{c_i} are the value and the gradient of the mechanical clamping jaw collision model at the point f_{c_i}; and n_{f_{c_i}} is the normal vector of the mechanical clamping jaw surface at f_{c_i};

the matching part is of the form:

E_m(x) = sum_{i=1}^{l} ( n_{f_i}^T ( R f_i + t - p_{f_i} ) )^2

wherein f_i are the points on the inner side of the mechanical clamping jaw that are close to the object, l in total; p_{f_i} is the corresponding point on the object close to the inner side of the clamping jaw; and n_{f_i} is the normal vector of the mechanical clamping jaw surface at f_i;

the normal vector fitting part is of the form:

E_n(x) = sum_{i=1}^{l} || R n_{f_i} + n_{p_{f_i}} ||^2

wherein n_{p_{f_i}} is the normal vector of the object surface at p_{f_i}.
7. The obstacle avoidance robot grabbing method of claim 1, wherein the establishing of the collision function according to the signed distance field model, the designing of the obstacle avoidance kinematics planning solver according to the collision function and the robot grabbing posture, and the solving of the grabbing path comprise:
acquiring a kinematic model of the mechanical arm;
establishing a forward kinematics function and an inverse kinematics function by using the kinematic model of the mechanical arm;
establishing a collision judgment function of the mechanical arm by using the forward kinematics function and the signed distance field model;
inputting the obtained mechanical arm grabbing pose into the inverse kinematics function to obtain the target joint pose of the mechanical arm;
and planning an obstacle avoidance path in the joint pose space according to the collision judgment function and the target joint pose, driving the mechanical arm to bypass colliding objects in the scene, reach the target joint pose and grab.
8. An obstacle avoidance mechanical arm grabbing system, characterized by comprising:
a model building module, configured to establish a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm;
a hand-eye calibration module, configured to acquire a color image and a depth image of the depth camera together with the pose of the mechanical clamping jaw, and to calibrate the spatial transformation between the camera and the mechanical arm base;
a point cloud matching module, configured to acquire a three-dimensional model of the object to be grabbed, match the three-dimensional model with the point cloud acquired by the depth camera, and solve the pose of the object to be grabbed in the actual scene;
a pose acquisition module, configured to acquire a collision-avoiding grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and a path planning module, configured to establish a collision function according to the signed distance field model, design an obstacle avoidance kinematics planning solver according to the collision function and the grabbing pose, and solve a grabbing path, so that the object is grabbed while scene obstacles are avoided.
9. An obstacle avoidance mechanical arm grabbing device, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method according to any one of claims 1-7.
10. A computer-readable storage medium in which a processor-executable program is stored, wherein the program, when executed by a processor, is used to perform the method according to any one of claims 1 to 7.
CN202210300135.4A 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium Active CN114851187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210300135.4A CN114851187B (en) 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210300135.4A CN114851187B (en) 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Publications (2)

Publication Number Publication Date
CN114851187A true CN114851187A (en) 2022-08-05
CN114851187B CN114851187B (en) 2023-07-07

Family

ID=82629604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210300135.4A Active CN114851187B (en) 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN114851187B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110295576A1 (en) * 2009-01-15 2011-12-01 Mitsubishi Electric Corporation Collision determination device and collision determination program
CN109767495A (en) * 2017-11-09 2019-05-17 达索系统公司 The increasing material manufacturing of 3D component
CN107907593A (en) * 2017-11-22 2018-04-13 中南大学 Manipulator collision-proof method in a kind of ultrasound detection
CN113492402A (en) * 2020-04-03 2021-10-12 发那科株式会社 Fast robot motion optimization with distance field
CN112873205A (en) * 2021-01-15 2021-06-01 陕西工业职业技术学院 Industrial robot disordered grabbing method based on real-time switching of double clamps
CN113192128A (en) * 2021-05-21 2021-07-30 华中科技大学 Mechanical arm grabbing planning method and system combined with self-supervision learning
CN114140508A (en) * 2021-11-26 2022-03-04 浪潮电子信息产业股份有限公司 Method, system and equipment for generating three-dimensional reconstruction model and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Zhuoming; Zhu Xiaoxiao; Sun Mingjing; Cao Qixin: "Autonomous grasping robot system for logistics sorting tasks", Machine Design and Research, No. 06

Also Published As

Publication number Publication date
CN114851187B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN109986560B (en) Mechanical arm self-adaptive grabbing method for multiple target types
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
CN113409384B (en) Pose estimation method and system of target object and robot
EP3948782B1 (en) Robotic control based on 3d bounding shape, for an object, generated using edge-depth values for the object
CN108818530B (en) Mechanical arm grabbing scattered stacking piston motion planning method based on improved RRT algorithm
CN113511503B (en) Independent intelligent method for collecting, collecting and collecting uncertain objects by extraterrestrial detection
CN112605983B (en) Mechanical arm pushing and grabbing system suitable for intensive environment
CN111085997A (en) Capturing training method and system based on point cloud acquisition and processing
JP2023059828A (en) Grasp generation for machine tending
Tang et al. Learning collaborative pushing and grasping policies in dense clutter
Aleotti et al. Perception and grasping of object parts from active robot exploration
CN110097599B (en) Workpiece pose estimation method based on component model expression
Abbeloos et al. Point pair feature based object detection for random bin picking
EP3790710A1 (en) Robotic manipulation using domain-invariant 3d representations predicted from 2.5d vision data
CN114882109A (en) Robot grabbing detection method and system for sheltering and disordered scenes
CN116018599A (en) Apparatus and method for training a machine learning model to identify an object topology of an object from an image of the object
CN117001675A (en) Double-arm cooperative control non-cooperative target obstacle avoidance trajectory planning method
Harada et al. Project on development of a robot system for random picking-grasp/manipulation planner for a dual-arm manipulator
JP2022187983A (en) Network modularization to learn high dimensional robot tasks
CN114700949B (en) Mechanical arm smart grabbing planning method based on voxel grabbing network
CN116276973A (en) Visual perception grabbing training method based on deep learning
Gao et al. Iterative interactive modeling for knotting plastic bags
CN114851187B (en) Obstacle avoidance mechanical arm grabbing method, system, device and storage medium
JP7373700B2 (en) Image processing device, bin picking system, image processing method, image processing program, control method and control program
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240221

Address after: 510641 Industrial Building, Wushan South China University of Technology, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou South China University of Technology Asset Management Co.,Ltd.

Country or region after: China

Address before: 510641 No. five, 381 mountain road, Guangzhou, Guangdong, Tianhe District

Patentee before: SOUTH CHINA University OF TECHNOLOGY

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240401

Address after: 518057, Building 4, 512, Software Industry Base, No. 19, 17, and 18 Haitian Road, Binhai Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Cross dimension (Shenzhen) Intelligent Digital Technology Co.,Ltd.

Country or region after: China

Address before: 510641 Industrial Building, Wushan South China University of Technology, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: Guangzhou South China University of Technology Asset Management Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right