CN114851187B - Obstacle avoidance mechanical arm grabbing method, system, device and storage medium - Google Patents

Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Info

Publication number
CN114851187B
Authority
CN
China
Prior art keywords
mechanical arm
mechanical
pose
grabbing
clamping jaw
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210300135.4A
Other languages
Chinese (zh)
Other versions
CN114851187A (en)
Inventor
Chen Jian (陈建)
Jia Kui (贾奎)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cross Dimension (Shenzhen) Intelligent Digital Technology Co., Ltd.
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology (SCUT)
Priority to CN202210300135.4A
Publication of CN114851187A
Application granted
Publication of CN114851187B

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/04Viewing devices
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a grabbing method, system, device and storage medium for an obstacle avoidance mechanical arm, wherein the method comprises the following steps: establishing a signed distance field model of the mechanical arm and of a mechanical clamping jaw mounted at the end of the arm; calibrating the spatial transformation between the camera and the mechanical arm base; matching a three-dimensional model with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene; acquiring a collision-free grabbing pose of the mechanical arm; and establishing a collision function from the signed distance field model, designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose, and solving for a grabbing path. During grabbing, the invention can exploit the whole-scene information captured by the overhead camera together with the signed distance field model, and therefore obtains, in complex real scenes, grabbing poses that are more robust and more resistant to external disturbance forces after the object is grabbed. The invention can be widely applied in the technical field of robot control.

Description

Obstacle avoidance mechanical arm grabbing method, system, device and storage medium
Technical Field
The invention relates to the technical field of robot control, and in particular to a grabbing method, system, device and storage medium for an obstacle avoidance mechanical arm.
Background
An automatic mechanical arm grabbing system drives the arm, after a camera observes an object, to move to a specific pose, grab the object, and place it at a designated position. Such a system can replace humans in highly repetitive and heavy work, such as part assembly and conveyor-belt sorting in manufacturing plants, and can likewise speed up parcel sorting in the logistics field. Traditional automatic grabbing, however, relies on manually taught grabbing poses, which is time-consuming and labor-intensive; a stable and accurate grabbing pose estimation method can remove this step and thus greatly raise the degree of factory automation.
A grabbing pose estimation method ultimately outputs one or more grabbing poses for the mechanical clamping jaw. Traditional methods focus mainly on making the inner surfaces of the jaws fit the surface of the object to be grabbed as closely as possible, so that the object is less likely to slip after being grabbed. Before the jaw actually closes and lifts the object, however, the pose it moves to must not collide with other objects in the scene, otherwise the mechanical arm is easily damaged; nor may the jaw touch the object to be grabbed in advance, otherwise the object is displaced and the grab ultimately fails or the object slips. Research on computing collision-free grabbing poses remains insufficient at present: existing estimators readily output jaw poses that collide with the object to be grabbed, with other objects in the scene, or with the ground before the grabbing action is executed. In addition, traditional collision models of the mechanical arm, such as convex hull decomposition schemes, mainly judge collisions against the several convex hulls obtained by decomposing the object; their face count and accuracy are limited, and their computation speed is low.
Disclosure of Invention
In order to solve, at least to some extent, at least one of the technical problems existing in the prior art, the invention aims to provide a grabbing method, system, device and storage medium for an obstacle avoidance mechanical arm.
The technical scheme adopted by the invention is as follows:
A grabbing method for an obstacle avoidance mechanical arm comprises the following steps:
establishing a signed distance field model of the mechanical arm and of a mechanical clamping jaw mounted at the end of the mechanical arm;
acquiring a color image and a depth image from a depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene;
acquiring a collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and establishing a collision function from the signed distance field model, designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose of the mechanical arm, and solving for a grabbing path that bypasses scene obstacles to grab the object.
Further, establishing the signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm comprises the following steps:
acquiring model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on the mechanical arm and the mechanical clamping jaw, removing their internal structures, obtaining the outermost collision structure of the arm and jaw, and obtaining the signed distance field model.
Further, acquiring the model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on them, removing their internal structures, obtaining the outermost collision structure, and obtaining the signed distance field model comprises:
importing the model files of the mechanical arm and the mechanical clamping jaw, and acquiring physical models of both;
processing each mechanical link of the mechanical arm as follows:
establishing a cuboid bounding box for each mechanical link, and dividing a preset number of square voxels within the bounding box;
calculating the distance from the center point of each voxel to the surface of the mechanical link, and establishing an unsigned distance field of each mechanical link from these distances;
subtracting a specific value a from the unsigned distance field so that the values stored on some voxels become smaller than 0, and performing isosurface extraction with the marching cubes method to extract a plurality of mutually disjoint isosurfaces;
establishing bounding boxes for all the isosurfaces, selecting every isosurface that is not enclosed by another isosurface's bounding box and discarding the rest, the non-enclosed isosurfaces expressing the structure of the outermost layer of the mechanical link;
performing an s-shaped traversal over the vertices of the square voxels and, whenever the traversal passes through the outermost isosurface, inverting the sign of the distance values stored in the voxels, thereby obtaining a signed distance field;
and recalculating the distance values stored in the internal voxels against the outermost isosurface to obtain the final signed distance field model.
Further, acquiring the color image and the depth image of the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base, comprises:
placing an asymmetric dot calibration plate at a preset fixed position, and mounting a depth camera on the end of the mechanical arm;
moving the end of the mechanical arm multiple times, and recording several groups of color images and depth images of the calibration plate captured by the depth camera together with the simultaneous pose of the mechanical arm end effector;
matching the color image and depth image captured by the depth camera using the iterative closest point (ICP) algorithm to obtain the pose of the calibration plate at the corresponding moment;
and solving the resulting equation to obtain the coordinate transformation between the depth camera and the mechanical arm end effector.
Further, matching the three-dimensional model with the point cloud acquired by the depth camera and solving for the pose of the object to be grabbed in the actual scene comprises:
acquiring the point cloud of the object to be grabbed;
matching the three-dimensional model of the object to be grabbed with the point cloud acquired by the depth camera using a point-to-plane iterative closest point (ICP) algorithm, and solving for the pose of the object in the actual scene, wherein the optimization objective of a single iteration is:

$$x^{*}=\underset{x}{\arg\min}\sum_{i=1}^{m}\left[\hat{n}_{q_{i}}^{\top}\left(R(r)\,q_{i}+t-p_{i}\right)\right]^{2}$$

wherein $x=[r^{\top},t^{\top}]^{\top}$ comprises the three-dimensional rotation vector $r$ and three-dimensional translation vector $t$ and expresses a three-dimensional pose; $q_i$ is a point on the surface of the three-dimensional model of the object to be grabbed; $p_i$ is the corresponding point of the point cloud in the captured scene; $\hat{n}_{q_i}$ is the normal vector of the model-surface point cloud at $q_i$; and $m$ is the number of corresponding point pairs.
Further, acquiring the collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera comprises:
solving for the optimal grabbing pose according to the signed distance field model, the point cloud near the object acquired by the depth camera, and the point cloud of the object to be grabbed after the object pose transformation has been applied, wherein the optimization objective of a single iteration is:

$$x^{*}=\underset{x}{\arg\min}\;E(x),\qquad E(x)=E_{c}(x)+E_{m}(x)+E_{n}(x)$$

wherein $x=[r^{\top},t^{\top}]^{\top}$, $r$ is the rotation vector of the mechanical clamping jaw pose, and $t$ is the translation vector of the mechanical clamping jaw pose; the objective $E(x)$ consists of three parts: a collision avoidance part $E_c(x)$, a matching part $E_m(x)$ and a normal vector fitting part $E_n(x)$;
the collision avoidance part is:

$$E_{c}(x)=\sum_{i=1}^{k}\left[s\!\left(f_{c_{i}}\right)+g_{c_{i}}^{\top}\left(R(r)\,f_{c_{i}}+t-f_{c_{i}}\right)\right]^{2}$$

wherein $f_{c_i}$ are the points of the point cloud that collide with the inner side of the mechanical clamping jaw, $k$ in total; $s(\cdot)$ is the signed distance field of the jaw collision model; $g_{c_i}$ is the gradient of the jaw collision model at $f_{c_i}$; and $\hat{n}_{f_{c_i}}$ is the normal vector of the mechanical clamping jaw surface at $f_{c_i}$;
the matching part is:

$$E_{m}(x)=\sum_{i=1}^{l}\left[\hat{n}_{f_{i}}^{\top}\left(R(r)\,f_{i}+t-p_{f_{i}}\right)\right]^{2}$$

wherein $f_i$ are the points on the inner side of the mechanical clamping jaw that lie near the object, $l$ in total; $p_{f_i}$ is the corresponding point on the object near the inner side of the jaw; and $\hat{n}_{f_i}$ is the normal vector of the mechanical clamping jaw surface at $f_i$;
the normal vector fitting part is:

$$E_{n}(x)=\sum_{i=1}^{l}\left\|R(r)\,\hat{n}_{f_{i}}+\hat{n}_{p_{f_{i}}}\right\|^{2}$$

wherein $\hat{n}_{p_{f_i}}$ is the normal vector of the object surface at $p_{f_i}$.
Further, establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematic planning solver according to the collision function and the grabbing pose of the mechanical arm, and solving for the grabbing path comprises the following steps:
acquiring a kinematic model of the mechanical arm;
establishing forward and inverse kinematic functions from the kinematic model of the mechanical arm;
establishing a collision judgment function for the mechanical arm using the forward kinematic function and the signed distance field model;
feeding the obtained grabbing pose to the inverse kinematic function to obtain the target joint configuration of the mechanical arm;
and performing obstacle avoidance path planning in joint space according to the collision judgment function and the target joint configuration, driving the mechanical arm around colliding objects in the scene to reach the target joint configuration and perform the grab.
Another technical scheme adopted by the invention is as follows:
An obstacle avoidance mechanical arm grabbing system, comprising:
a model building module for establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm;
a hand-eye calibration module for acquiring a color image and a depth image from the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
a point cloud matching module for acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene;
a pose acquisition module for acquiring a collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and a path planning module for establishing a collision function from the signed distance field model, designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose, and solving for a grabbing path that bypasses scene obstacles to grab the object.
Another technical scheme adopted by the invention is as follows:
An obstacle avoidance mechanical arm grabbing device, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method described above.
Another technical scheme adopted by the invention is as follows:
a computer readable storage medium, in which a processor executable program is stored, which when executed by a processor is adapted to carry out the method as described above.
The beneficial effects of the invention are as follows: during grabbing, the invention can exploit the whole-scene information captured by the overhead camera together with the signed distance field model, and therefore obtains, in complex real scenes, grabbing poses that are more robust and more resistant to external disturbance forces after the object is grabbed; its collision judgment is faster and more accurate, and it has good application prospects in the logistics and intelligent manufacturing fields.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description refers to the accompanying drawings required by the embodiments of the present invention or the related prior art. It should be understood that the drawings described below cover only some embodiments of the technical solutions of the present invention, and that those skilled in the art can obtain other drawings from them without inventive labor.
FIG. 1 is a flow chart of the steps of a three-dimensional vision obstacle avoidance mechanical arm grabbing method in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a mechanical jaw model in an embodiment of the present invention;
FIG. 3 is a visualization of the signed distance field reconstruction of the mechanical jaw model of FIG. 2 in an embodiment of the present invention;
FIG. 4 is a visualization of the signed distance field reconstruction of each mechanical link of the whole mechanical arm model in an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; greater than, less than, exceeding, etc. are understood to exclude the stated number, while above, below, within, etc. are understood to include it. Descriptions of "first" and "second" serve only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
As shown in FIG. 1, this embodiment provides a three-dimensional vision obstacle avoidance mechanical arm grabbing method, which specifically includes the following steps:
s1, establishing a signed distance field model of the mechanical arm and a mechanical clamping jaw mounted at the tail end of the mechanical arm.
And importing a mechanical arm and mechanical clamping jaw model file, performing collision modeling on the mechanical arm and the mechanical clamping jaw, removing the internal structures of the mechanical arm and the mechanical clamping jaw, focusing on only the collision structure of the outermost layer, and calculating a signed distance field model.
Specifically, a physical model of the mechanical arm and the mechanical jaw is imported, and the following treatment is performed for the mechanical link of each of them:
for each mechanical link, a cuboid bounding box slightly larger than the mechanical link is established, and square voxels with the number not smaller than 32 x 32 are divided in the cuboid bounding box. The length of the cuboid enclosure is usually about 1.2 times of the size of the cuboid enclosure, so that the memory space is wasted to reduce the speed, and the reconstruction surface is broken when the length of the cuboid enclosure is too small.
The distance from the center point of each voxel to the surface of the mechanical link is calculated to establish an unsigned distance field for each link. A k-d tree is used to accelerate the distance queries, which are otherwise slow.
A specific value a is subtracted from the unsigned distance field so that the values stored on some voxels become smaller than 0, and isosurface extraction is performed using the marching cubes method; the value of a is 10% of the cuboid bounding box size. This extracts a plurality of mutually disjoint, self-closed isosurfaces.
Bounding boxes of all the isosurfaces are established, every isosurface not enclosed by another isosurface's bounding box is selected, and the rest are discarded. Since the isosurfaces do not intersect each other, these non-enclosed isosurfaces represent the structure of the outermost layer of the mechanical link.
The vertices of the square voxels are traversed in an s-shaped order; each time the traversal passes through the outermost isosurface, the sign of the distance values stored in the voxels is inverted. This ultimately gives the distance values stored in voxels inside the object a negative sign and those outside a positive sign, establishing a signed distance field.
The distance values stored in the voxels inside the object are then recalculated: since the internal isosurfaces have been removed, these values must be recomputed against the outermost isosurface.
Finally, the previously subtracted specific value a is compensated by adding a back to the signed distance field; the sketch below illustrates the construction up to this point.
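By way of illustration, the following Python sketch (numpy/scipy; all function and variable names are ours, not the patent's) covers the unsigned-distance stage of this construction (the 1.2x bounding box, the voxel grid of at least 32^3, the k-d-tree-accelerated distances and the subtraction of a) and indicates the remaining steps in comments:

```python
# Minimal sketch of the per-link distance-field construction described above.
# `surface_points` is a dense sample of one mechanical link's surface; the
# resolution and margin follow the text, everything else is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def link_distance_field(surface_points, res=32, margin=1.2):
    lo, hi = surface_points.min(axis=0), surface_points.max(axis=0)
    center, half = (lo + hi) / 2.0, (hi - lo) / 2.0 * margin  # ~1.2x box
    lo, hi = center - half, center + half
    axes = [np.linspace(lo[d], hi[d], res) for d in range(3)]
    centers = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    # k-d tree accelerates the voxel-center-to-surface distance queries
    dist, _ = cKDTree(surface_points).query(centers.reshape(-1, 3))
    udf = dist.reshape(res, res, res)            # unsigned distance field
    a = 0.10 * float(np.linalg.norm(hi - lo))    # the "specific value a"
    shifted = udf - a                            # some voxels now fall below 0
    # Remaining steps (sketched): extract isosurfaces of `shifted` at level 0
    # (e.g. skimage.measure.marching_cubes); keep only isosurfaces whose
    # bounding boxes are not contained in another isosurface's box; flip the
    # sign of voxels enclosed by the kept outermost surface; recompute the
    # interior distances against that surface; finally add `a` back.
    return shifted, (lo, hi)
```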
An original model of the mechanical clamping jaw is shown in FIG. 2, and the signed distance field reconstruction visualization of this jaw model is shown in FIG. 3. The signed distance field reconstruction visualizations for each mechanical link of the whole mechanical arm model are shown in FIG. 4. It can be seen that the established collision model seals off small holes and pits, keeping the collision geometry as simple as possible while preserving its essential accuracy.
S2, acquiring a color image and a depth image from the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base.
The asymmetric dot calibration plate is placed at a fixed position, and the depth camera is mounted on the end of the mechanical arm.
The end of the mechanical arm is moved multiple times, and the color image and depth image of the calibration plate captured by the depth camera are recorded together with the simultaneous pose of the mechanical arm end effector; in this embodiment, at least 5 sets of data are acquired.
The color and depth images captured by the depth camera are matched using the iterative closest point (ICP) algorithm to obtain the pose of the calibration plate at each moment.
From these data, an overdetermined equation of the form AX = XB can be constructed and solved, yielding the coordinate transformation between the depth camera and the mechanical arm end effector.
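The patent does not prescribe a particular AX = XB solver; as one possibility, OpenCV's hand-eye routine handles exactly this eye-in-hand configuration. The sketch below assumes the five or more recorded pose pairs are already split into rotation and translation lists:

```python
# Hedged sketch: solve AX = XB for the camera pose in the end-effector frame.
import cv2
import numpy as np

def calibrate_eye_in_hand(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Each argument is a list (one entry per recorded arm pose) of 3x3
    rotations / 3x1 translations; returns a 4x4 camera-to-gripper transform."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base, R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)  # one of several solvers OpenCV offers
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = R_cam2gripper, t_cam2gripper.ravel()
    return X
```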
S3, acquiring a three-dimensional model of the object to be grabbed, matching it with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene.
The scene containing the object is captured.
A bounding box with three-dimensional rotation is established so that the object lies within it; subsequent algorithms consider only the point cloud inside this bounding box.
The three-dimensional model of the object to be grabbed is matched with the point cloud acquired by the depth camera using the point-to-plane ICP algorithm, and the pose of the object in the actual scene is solved. The optimization objective of a single iteration is:
$$x^{*}=\underset{x}{\arg\min}\sum_{i=1}^{m}\left[\hat{n}_{q_{i}}^{\top}\left(R(r)\,q_{i}+t-p_{i}\right)\right]^{2}$$

wherein $x=[r^{\top},t^{\top}]^{\top}$, $r\in\mathbb{R}^{3}$ is the rotation vector and $t\in\mathbb{R}^{3}$ the translation vector of the pose being solved; $q_i$ is a point on the surface of the three-dimensional model of the object to be grabbed; $p_i$ is the corresponding point of the point cloud in the captured scene; $\hat{n}_{q_i}$ is the normal vector of the model-surface point cloud at $q_i$; and $m$ is the number of corresponding point pairs.
The above problem can be solved by least squares, each iteration giving $x=(A^{\top}A)^{-1}A^{\top}b$; repeating the iteration several times converges to a stable result, as the sketch below illustrates.
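As a concrete reading of this step, a single least-squares iteration can be written in a few lines of numpy; the sketch below uses the standard small-angle linearization R(r) ≈ I + [r]x of the point-to-plane objective, and its helper name and interfaces are ours, not the patent's:

```python
# One point-to-plane ICP iteration: residual_i = n_i . (R(r) q_i + t - p_i),
# linearized with R(r) ~ I + [r]_x so that the problem becomes A x = b.
import numpy as np

def point_to_plane_step(q, p, n):
    """q: (m,3) model points, p: (m,3) matched scene points,
    n: (m,3) unit normals of the model surface; returns (r, t)."""
    A = np.hstack([np.cross(q, n), n])          # rows [(q_i x n_i)^T, n_i^T]
    b = np.einsum("ij,ij->i", n, p - q)         # n_i . (p_i - q_i)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)   # x = (A^T A)^{-1} A^T b
    return x[:3], x[3:]                         # rotation vector r, translation t
```

Applying the recovered (r, t) to the model points, re-establishing correspondences, and calling the step again reproduces the repeated iteration described above.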
S4, acquiring a collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera.
Grabbing pose estimation: the signed distance field of the mechanical clamping jaw obtained in step S1 and the color and depth images acquired by the depth camera are used in an optimization that yields a robust, collision-free grabbing pose for the mechanical arm.
The optimal grabbing pose is solved using the jaw collision model from step S1, the point cloud near the object acquired by the depth camera, and the point cloud of the object model to be grabbed after the object pose transformation has been applied. The optimization objective of a single iteration is:
$$x^{*}=\underset{x}{\arg\min}\;E(x),\qquad E(x)=E_{c}(x)+E_{m}(x)+E_{n}(x)$$

wherein $x=[r^{\top},t^{\top}]^{\top}$, with $r$ the rotation vector and $t$ the translation vector of the mechanical clamping jaw pose; the objective consists of three parts: a collision avoidance part $E_c(x)$, a matching part $E_m(x)$ and a normal vector fitting part $E_n(x)$.
The collision avoidance part is:

$$E_{c}(x)=\sum_{i=1}^{k}\left[s\!\left(f_{c_{i}}\right)+g_{c_{i}}^{\top}\left(R(r)\,f_{c_{i}}+t-f_{c_{i}}\right)\right]^{2}$$

wherein $f_{c_i}$ are the points of the point cloud that collide with the inner side of the mechanical clamping jaw, $k$ in total; $s(\cdot)$ is the jaw collision model (the signed distance field); $g_{c_i}$ is its gradient at $f_{c_i}$; and $\hat{n}_{f_{c_i}}$ is the normal vector of the jaw surface at $f_{c_i}$. Trilinearly interpolating the coordinates of the object point cloud in the signed distance field of the jaw gives each point's signed distance, from which the object points that collide with the jaw are identified.
The matching part is:

$$E_{m}(x)=\sum_{i=1}^{l}\left[\hat{n}_{f_{i}}^{\top}\left(R(r)\,f_{i}+t-p_{f_{i}}\right)\right]^{2}$$

wherein $f_i$ are the points on the inner side of the mechanical clamping jaw that lie near the object, $l$ in total; $p_{f_i}$ is the corresponding point on the object near the inner side of the jaw; and $\hat{n}_{f_i}$ is the normal vector of the jaw surface at $f_i$.
The normal vector fitting part is:

$$E_{n}(x)=\sum_{i=1}^{l}\left\|R(r)\,\hat{n}_{f_{i}}+\hat{n}_{p_{f_{i}}}\right\|^{2}$$

wherein $\hat{n}_{p_{f_i}}$ is the normal vector of the object surface at $p_{f_i}$.
The above problem can likewise be solved by least squares, each iteration giving $x=(A^{\top}A)^{-1}A^{\top}b$; after several iterations the optimization converges to a robust, collision-free grabbing pose for the mechanical arm. The sketch below illustrates the trilinear signed-distance lookup used for the collision test.
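The trilinear signed-distance lookup can be sketched as follows; the grid `origin` and spacing `h` are assumed to come from the step-S1 construction, the points are assumed to be expressed in the jaw frame, and the nearest-cell gradient is a simplification of a full interpolated gradient:

```python
# Trilinear lookup in the jaw SDF: negative distances mark colliding points
# (the f_ci above); the gradient field supplies g_ci for the collision term.
import numpy as np

def sdf_query(sdf, origin, h, pts):
    """sdf: (N,N,N) signed distances; pts: (k,3) points in the jaw frame."""
    u = (pts - origin) / h
    i0 = np.clip(np.floor(u).astype(int), 0, np.array(sdf.shape) - 2)
    f = np.clip(u - i0, 0.0, 1.0)               # fractional position in the cell
    d = np.zeros(len(pts))
    for dx in (0, 1):                           # accumulate the 8 corner weights
        for dy in (0, 1):
            for dz in (0, 1):
                w = (np.where(dx, f[:, 0], 1 - f[:, 0]) *
                     np.where(dy, f[:, 1], 1 - f[:, 1]) *
                     np.where(dz, f[:, 2], 1 - f[:, 2]))
                d += w * sdf[i0[:, 0] + dx, i0[:, 1] + dy, i0[:, 2] + dz]
    grad = np.stack(np.gradient(sdf, h), axis=-1)   # precomputable (N,N,N,3)
    g = grad[i0[:, 0], i0[:, 1], i0[:, 2]]          # coarse per-point gradient
    return d, g                                  # d < 0: point is in collision
```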
S5, establishing a collision function from the signed distance field model, and designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose, to solve for a grabbing path that bypasses scene obstacles and grabs the object.
Step S5 specifically includes steps S51-S55:
S51, acquiring the overall kinematic model of the mechanical arm and the collision model built in step S1.
S52, establishing forward and inverse kinematic functions from the mechanical arm kinematic model.
S53, establishing a collision judgment function for the mechanical arm from the forward kinematic function and the collision model.
S54, feeding the grabbing pose finally obtained by the optimization in step S4 to the inverse kinematic function to obtain the target joint configuration of the mechanical arm.
S55, performing obstacle avoidance path planning in joint space with an improved rapidly-exploring random tree (RRT) algorithm, driving the mechanical arm around other colliding objects in the scene to reach the target joint configuration and execute the grab; a baseline sketch follows.
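The patent's improved rapidly-exploring random tree is not specified in detail, so the sketch below shows only the baseline joint-space RRT structure into which the collision judgment function plugs; `collision_free`, the joint `bounds` and all tuning constants are assumptions:

```python
# Baseline joint-space RRT: grow a tree from the start configuration toward
# random (goal-biased) samples, keeping only collision-free configurations.
import numpy as np

def rrt_plan(q_start, q_goal, collision_free, bounds, step=0.1,
             iters=5000, goal_bias=0.1, seed=0):
    rng = np.random.default_rng(seed)
    q_start, q_goal = np.asarray(q_start, float), np.asarray(q_goal, float)
    nodes, parents = [q_start], [-1]
    for _ in range(iters):
        target = q_goal if rng.random() < goal_bias else rng.uniform(*bounds)
        i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))
        d = target - nodes[i]
        q_new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-9)
        if not collision_free(q_new):   # edge checking omitted for brevity
            continue
        nodes.append(q_new); parents.append(i)
        if np.linalg.norm(q_new - q_goal) < step:   # goal region reached
            path, j = [q_goal], len(nodes) - 1
            while j >= 0:
                path.append(nodes[j]); j = parents[j]
            return path[::-1]
    return None  # no path within the iteration budget
```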
In summary, this embodiment provides a three-dimensional-vision-based obstacle avoidance mechanical arm grabbing method that performs collision modeling with a signed distance field from which the internal structure has been removed. Compared with traditional collision modeling such as convex hull decomposition, the accuracy is greatly improved, and collision judgment is also faster than with convex polyhedron decomposition methods. Moreover, by directly optimizing over information such as the gradient of the signed distance field and the point cloud matched to the object, a robust, collision-free jaw grabbing pose is obtained, which removes the manual teaching step, improves grabbing stability, and raises the automation level of production lines.
This embodiment also provides an obstacle avoidance mechanical arm grabbing system, which comprises:
a model building module for establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm;
a hand-eye calibration module for acquiring a color image and a depth image from the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
a point cloud matching module for acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene;
a pose acquisition module for acquiring a collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and a path planning module for establishing a collision function from the signed distance field model, designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose, and solving for a grabbing path that bypasses scene obstacles to grab the object.
The obstacle avoidance mechanical arm grabbing system of this embodiment can execute the obstacle avoidance mechanical arm grabbing method provided by the method embodiment of the invention, can perform any combination of the implementation steps of that method embodiment, and has the corresponding functions and beneficial effects of the method.
The embodiment also provides an obstacle avoidance mechanical arm grabbing device, which comprises:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method shown in FIG. 1.
The obstacle avoidance mechanical arm grabbing device of this embodiment can execute the obstacle avoidance mechanical arm grabbing method provided by the method embodiment of the invention, can perform any combination of the implementation steps of that method embodiment, and has the corresponding functions and beneficial effects of the method.
The present application also discloses a computer program product or a computer program comprising computer instructions stored in a computer-readable storage medium. A processor of a computer device may read the computer instructions from the computer-readable storage medium and execute them, causing the computer device to perform the method shown in FIG. 1.
This embodiment also provides a storage medium storing instructions or a program for executing the obstacle avoidance mechanical arm grabbing method provided by the method embodiment of the invention; when the instructions or the program are run, any combination of the implementation steps of that method embodiment can be performed, with the corresponding functions and beneficial effects of the method.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the invention is described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the described functions and/or features may be integrated in a single physical device and/or software module or one or more functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer readable medium may even be paper or other suitable medium on which the program is printed, as the program may be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, may be implemented using any one or combination of the following techniques, as is well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having suitable combinational logic gates, programmable Gate Arrays (PGAs), field Programmable Gate Arrays (FPGAs), and the like.
In the foregoing description of this specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A grabbing method for an obstacle avoidance mechanical arm, characterized by comprising the following steps:
establishing a signed distance field model of the mechanical arm and of a mechanical clamping jaw mounted at the end of the mechanical arm;
acquiring a color image and a depth image from a depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene;
acquiring a collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera; and establishing a collision function from the signed distance field model, designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose of the mechanical arm, and solving for a grabbing path that bypasses scene obstacles to grab the object.
2. The grabbing method for an obstacle avoidance mechanical arm according to claim 1, characterized in that establishing the signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm comprises the following steps:
acquiring model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on the mechanical arm and the mechanical clamping jaw, removing their internal structures, obtaining the outermost collision structure of the arm and jaw, and obtaining the signed distance field model.
3. The grabbing method for an obstacle avoidance mechanical arm according to claim 2, characterized in that acquiring the model files of the mechanical arm and the mechanical clamping jaw, performing collision modeling on them, removing their internal structures, obtaining the outermost collision structure of the arm and jaw, and obtaining the signed distance field model comprises:
importing the model files of the mechanical arm and the mechanical clamping jaw, and acquiring physical models of both;
processing each mechanical link of the mechanical arm as follows:
establishing a cuboid bounding box for each mechanical link, and dividing a preset number of square voxels within the bounding box;
calculating the distance from the center point of each voxel to the surface of the mechanical link, and establishing an unsigned distance field of each mechanical link from these distances;
subtracting a specific value a from the unsigned distance field so that the values stored on some voxels become smaller than 0, and performing isosurface extraction with the marching cubes method to extract a plurality of mutually disjoint isosurfaces;
establishing bounding boxes for all the isosurfaces, selecting every isosurface that is not enclosed by another isosurface's bounding box and discarding the rest, the non-enclosed isosurfaces expressing the structure of the outermost layer of the mechanical link;
performing an s-shaped traversal over the vertices of the square voxels and, whenever the traversal passes through the outermost isosurface, inverting the sign of the distance values stored in the voxels, thereby obtaining a signed distance field;
and recalculating the distance values stored in the internal voxels against the outermost isosurface to obtain the final signed distance field model.
4. The grabbing method for an obstacle avoidance mechanical arm according to claim 1, characterized in that acquiring the color image and the depth image of the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base, comprises:
placing an asymmetric dot calibration plate at a preset fixed position, and mounting a depth camera on the end of the mechanical arm;
moving the end of the mechanical arm multiple times, and recording several groups of color images and depth images of the calibration plate captured by the depth camera together with the simultaneous pose of the mechanical arm end effector;
matching the color image and depth image captured by the depth camera using the iterative closest point (ICP) algorithm to obtain the pose of the calibration plate at the corresponding moment;
and solving the resulting equation to obtain the coordinate transformation between the depth camera and the mechanical arm end effector.
5. The grabbing method for an obstacle avoidance mechanical arm according to claim 1, characterized in that matching the three-dimensional model with the point cloud acquired by the depth camera and solving for the pose of the object to be grabbed in the actual scene comprises:
acquiring the point cloud of the object to be grabbed;
matching the three-dimensional model of the object to be grabbed with the point cloud acquired by the depth camera using a point-to-plane iterative closest point (ICP) algorithm, and solving for the pose of the object in the actual scene, wherein the optimization objective of a single iteration is:
$$x^{*}=\underset{x}{\arg\min}\sum_{i=1}^{m}\left[\hat{n}_{q_{i}}^{\top}\left(R(r)\,q_{i}+t-p_{i}\right)\right]^{2}$$

wherein $x=[r^{\top},t^{\top}]^{\top}$, $r$ is the rotation vector and $t$ the translation vector of the pose being solved; $q_i$ is a point on the surface of the three-dimensional model of the object to be grabbed; $p_i$ is the corresponding point of the point cloud in the captured scene; $\hat{n}_{q_i}$ is the normal vector of the model-surface point cloud at $q_i$; and $m$ is the number of corresponding point pairs.
6. The grabbing method for an obstacle avoidance mechanical arm according to claim 1, characterized in that acquiring the collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera comprises:
solving for the optimal grabbing pose according to the signed distance field model, the point cloud near the object acquired by the depth camera, and the point cloud of the object to be grabbed after the object pose transformation has been applied, wherein the optimization objective of a single iteration is:
$$x^{*}=\underset{x}{\arg\min}\;E(x),\qquad E(x)=E_{c}(x)+E_{m}(x)+E_{n}(x)$$

wherein $x=[r^{\top},t^{\top}]^{\top}$, $r$ is the rotation vector of the mechanical clamping jaw pose, and $t$ is the translation vector of the mechanical clamping jaw pose; the objective $E(x)$ consists of three parts: a collision avoidance part $E_c(x)$, a matching part $E_m(x)$ and a normal vector fitting part $E_n(x)$;
the collision avoidance portion is:
Figure FDA0003565312160000032
wherein f ci K points and g are points in the point cloud on the inner side of the mechanical clamping jaw, wherein the points collide with a scene ci At point f for mechanical jaw collision model ci A gradient is provided at the point of the gradient,
Figure FDA0003565312160000033
is f ci Normal vector of the mechanical clamping jaw surface;
the matching part is as follows:
Figure FDA0003565312160000034
wherein f i For the point inside the mechanical jaw nearer to the object, there are a total of l, p fi For the point on the corresponding object that is nearer to the inside of the jaws,
Figure FDA0003565312160000035
is f i Normal vector of the mechanical clamping jaw surface;
the normal vector fitting part is as follows:
Figure FDA0003565312160000036
wherein,,
Figure FDA0003565312160000037
for the object surface p fi Normal vector at (a).
7. The grabbing method for an obstacle avoidance mechanical arm according to claim 1, characterized in that establishing a collision function according to the signed distance field model, designing an obstacle avoidance kinematic planning solver according to the collision function and the grabbing pose of the mechanical arm, and solving for the grabbing path comprises the following steps:
acquiring a kinematic model of the mechanical arm;
establishing forward and inverse kinematic functions from the kinematic model of the mechanical arm;
establishing a collision judgment function for the mechanical arm using the forward kinematic function and the signed distance field model;
feeding the obtained grabbing pose to the inverse kinematic function to obtain the target joint configuration of the mechanical arm;
and performing obstacle avoidance path planning in joint space according to the collision judgment function and the target joint configuration, driving the mechanical arm around colliding objects in the scene to reach the target joint configuration and perform the grab.
8. An obstacle avoidance mechanical arm grabbing system, characterized by comprising:
a model building module for establishing a signed distance field model of the mechanical arm and of the mechanical clamping jaw mounted at the end of the mechanical arm; a hand-eye calibration module for acquiring a color image and a depth image from the depth camera together with the pose of the mechanical clamping jaw, and calibrating the spatial transformation between the camera and the mechanical arm base;
a point cloud matching module for acquiring a three-dimensional model of the object to be grabbed, matching the three-dimensional model with the point cloud acquired by the depth camera, and solving for the pose of the object to be grabbed in the actual scene;
a pose acquisition module for acquiring a collision-free grabbing pose of the mechanical arm according to the signed distance field model and the point cloud acquired by the depth camera;
and a path planning module for establishing a collision function from the signed distance field model, designing an obstacle avoidance kinematic planning solver from the collision function and the grabbing pose, and solving for a grabbing path that bypasses scene obstacles to grab the object.
9. An obstacle avoidance mechanical arm grabbing device, characterized by comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1-7.
10. A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program, when executed by a processor, is used to perform the method according to any one of claims 1-7.
CN202210300135.4A 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium Active CN114851187B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210300135.4A CN114851187B (en) 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210300135.4A CN114851187B (en) 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Publications (2)

Publication Number Publication Date
CN114851187A CN114851187A (en) 2022-08-05
CN114851187B 2023-07-07

Family

ID=82629604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210300135.4A Active CN114851187B (en) 2022-03-25 2022-03-25 Obstacle avoidance mechanical arm grabbing method, system, device and storage medium

Country Status (1)

Country Link
CN (1) CN114851187B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102282561B (en) * 2009-01-15 2014-11-12 三菱电机株式会社 Collision determination device and collision determination program

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767495A (en) * 2017-11-09 2019-05-17 达索系统公司 The increasing material manufacturing of 3D component
CN107907593A (en) * 2017-11-22 2018-04-13 中南大学 Manipulator collision-proof method in a kind of ultrasound detection
CN113492402A (en) * 2020-04-03 2021-10-12 发那科株式会社 Fast robot motion optimization with distance field
CN112873205A (en) * 2021-01-15 2021-06-01 陕西工业职业技术学院 Industrial robot disordered grabbing method based on real-time switching of double clamps
CN113192128A (en) * 2021-05-21 2021-07-30 华中科技大学 Mechanical arm grabbing planning method and system combined with self-supervision learning
CN114140508A (en) * 2021-11-26 2022-03-04 浪潮电子信息产业股份有限公司 Method, system and equipment for generating three-dimensional reconstruction model and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Autonomous grasping robot system for logistics sorting tasks; Ma Zhuoming, Zhu Xiaoxiao, Sun Mingjing, Cao Qixin; Machine Design & Research, No. 06; full text *

Also Published As

Publication number Publication date
CN114851187A (en) 2022-08-05

Similar Documents

Publication Publication Date Title
CN109986560B (en) Mechanical arm self-adaptive grabbing method for multiple target types
CN111251295B (en) Visual mechanical arm grabbing method and device applied to parameterized parts
JP5558585B2 (en) Work picking device
Shao et al. Suction grasp region prediction using self-supervised learning for object picking in dense clutter
Song et al. CAD-based pose estimation design for random bin picking using a RGB-D camera
CN113409384B (en) Pose estimation method and system of target object and robot
CN107138432B (en) Method and apparatus for sorting non-rigid objects
CN111085997A (en) Capturing training method and system based on point cloud acquisition and processing
CN105740899A (en) Machine vision image characteristic point detection and matching combination optimization method
CN108818530A (en) Stacking piston motion planing method at random is grabbed based on the mechanical arm for improving RRT algorithm
Valencia et al. A 3D vision based approach for optimal grasp of vacuum grippers
JP2023059828A (en) Grasp generation for machine tending
Tang et al. Learning collaborative pushing and grasping policies in dense clutter
CN112819135A (en) Sorting method for guiding mechanical arm to grab materials in different poses based on ConvPoint model
CN110097599B (en) Workpiece pose estimation method based on component model expression
CN113034600A (en) Non-texture planar structure industrial part identification and 6D pose estimation method based on template matching
JP2022187983A (en) Network modularization to learn high dimensional robot tasks
CN114851187B (en) Obstacle avoidance mechanical arm grabbing method, system, device and storage medium
CN113538576A (en) Grabbing method and device based on double-arm robot and double-arm robot
JP7373700B2 (en) Image processing device, bin picking system, image processing method, image processing program, control method and control program
CN115861780B (en) Robot arm detection grabbing method based on YOLO-GGCNN
CN116276973A (en) Visual perception grabbing training method based on deep learning
CN116000966A (en) Workpiece grabbing method, device, equipment and storage medium
CN115194774A (en) Binocular vision-based control method for double-mechanical-arm gripping system
Zhang et al. Object detection and grabbing based on machine vision for service robot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240221

Address after: 510641 Industrial Building, Wushan South China University of Technology, Tianhe District, Guangzhou City, Guangdong Province

Patentee after: Guangzhou South China University of Technology Asset Management Co.,Ltd.

Country or region after: China

Address before: 510641 No. five, 381 mountain road, Guangzhou, Guangdong, Tianhe District

Patentee before: SOUTH CHINA University OF TECHNOLOGY

Country or region before: China

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240401

Address after: 518057, Building 4, 512, Software Industry Base, No. 19, 17, and 18 Haitian Road, Binhai Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province

Patentee after: Cross dimension (Shenzhen) Intelligent Digital Technology Co.,Ltd.

Country or region after: China

Address before: 510641 Industrial Building, Wushan South China University of Technology, Tianhe District, Guangzhou City, Guangdong Province

Patentee before: Guangzhou South China University of Technology Asset Management Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right