CN112464410B - Method and device for determining workpiece grabbing sequence, computer equipment and medium - Google Patents

Method and device for determining workpiece grabbing sequence, computer equipment and medium

Info

Publication number
CN112464410B
Authority
CN
China
Prior art keywords
workpiece
collision detection
type
point clouds
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011393632.0A
Other languages
Chinese (zh)
Other versions
CN112464410A (en)
Inventor
高磊
秦继昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seizet Technology Shenzhen Co Ltd
Original Assignee
Seizet Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Seizet Technology Shenzhen Co Ltd filed Critical Seizet Technology Shenzhen Co Ltd
Priority to CN202011393632.0A priority Critical patent/CN112464410B/en
Publication of CN112464410A publication Critical patent/CN112464410A/en
Application granted granted Critical
Publication of CN112464410B publication Critical patent/CN112464410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/10Geometric CAD
    • G06F30/17Mechanical parametric or variational design
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2111/00Details relating to CAD techniques
    • G06F2111/18Details relating to CAD techniques using virtual or augmented reality

Landscapes

  • Physics & Mathematics (AREA)
  • Geometry (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a method and a device for determining a workpiece grabbing sequence, computer equipment and a storage medium. The method comprises: acquiring and identifying a scene point cloud of workpieces to be sorted, in which successfully matched workpiece point clouds are classified as first type of workpiece point clouds and unsuccessfully matched workpiece point clouds as second type of workpiece point clouds; sorting the first type of workpiece point clouds by height to obtain a collision detection sequence; defining a CAD model of the grabbing mechanism and performing collision detection with the CAD model against the first type of workpiece point clouds in that order; if the first type of workpiece point cloud currently under detection passes the collision detection, judging that the corresponding workpiece is the workpiece to be grabbed in the current round; if it fails, continuing collision detection with the next first type of workpiece point cloud in the sequence until one first type of workpiece point cloud passes or all of them fail, whereupon the current round of collision detection ends. A reasonable grabbing sequence is thereby determined for the workpieces.

Description

Method and device for determining workpiece grabbing sequence, computer equipment and medium
Technical Field
The invention relates to the technical field of robots, and in particular to a method and a device for grabbing planning and collision detection, computer equipment and a storage medium.
Background
With the development of the times, 3D vision technology is increasingly used in industrial automated sorting scenarios for sorting workpieces. As shown in fig. 1, the application of 3D vision in this process can be summarized in two stages: (1) an identification stage: identifying one or more workpieces in the scene point cloud of workpieces to be sorted; (2) a grabbing planning stage: performing collision detection with the CAD model of the grabbing mechanism and the identified workpiece point clouds, and selecting a collision-free workpiece to grab. The current method for determining the workpiece gripping sequence is to sort the detected workpieces from the highest position to the lowest and to start gripping from the highest workpiece.
Although the prior art can solve most problems, in an industrial automated sorting scene the workpieces are generally piled together in disorder and their positions overlap. If part of a workpiece is pressed by other workpieces, the workpieces pressing on it will be taken up along with it when it is grabbed; if the workpieces are fragile or soft, the taken-up workpiece may be crushed or deformed by collision as it falls. In the scene shown in fig. 2, the workpiece at the higher position is not detected at all; in the scene shown in fig. 3, the workpiece at the higher position is itself pressed by other workpieces. Similar scenes are common when grabbing thin-plate parts, where the position of a workpiece is determined by the position of its centroid. Neither situation can be detected accurately by the prior art, so workpieces still get taken up unintentionally; to avoid this, a reasonable grabbing sequence needs to be determined for the workpieces.
Disclosure of Invention
The invention aims to provide a method for determining a workpiece grabbing sequence, so as to solve the above problems in the prior art.
In order to achieve the above object, the present invention provides a method for determining a workpiece gripping sequence, comprising the steps of:
s1, acquiring and identifying scene point clouds of workpieces to be sorted, wherein in the scene point clouds of the workpieces to be sorted, the successfully matched workpiece point clouds are defined as first type of workpiece point clouds, and the unsuccessfully matched workpiece point clouds are defined as second type of workpiece point clouds;
s2, sorting the first workpiece point clouds according to the high and low positions to serve as a collision detection sequence;
s3, defining a CAD model of a grabbing mechanism, wherein the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and sequentially performing collision detection by using the CAD model and a plurality of first-class workpiece point clouds according to collision detection sequencing;
s4, if the first type of workpiece point cloud currently performing collision detection passes the collision detection, judging that the workpiece corresponding to the first type of workpiece point cloud is the workpiece to be grabbed in the current round and finishing the current round of collision detection; if the current first-class workpiece point clouds do not pass the collision detection, continuing the collision detection of the next first-class workpiece point cloud according to the collision detection sequence until one first-class workpiece point cloud passes the collision detection or all the first-class workpiece point clouds do not pass the collision detection, and ending the current round of collision detection.
Preferably, the virtual detection mechanism is arranged on the traveling path of the first type of workpiece point cloud currently undergoing collision detection.
Further, whether the collision detection is passed is determined as follows: judging whether the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud; if so, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
Preferably, the collision detection includes the following steps:
S31, determining the robot grabbing posture corresponding to the current first type of workpiece point cloud;
S32, virtually grasping the current first type of workpiece using the grasping posture: if the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
Preferably, the virtual detection mechanism is arranged at the tail end of the robot gripping clamping jaw, and the outer contour of the virtual detection mechanism is not smaller than that of the workpiece.
Preferably, a certain distance D is reserved between the bottom of the virtual detection mechanism and the current first type of workpiece during each collision detection.
Further, 0 mm < D ≤ H, where H is the height of the workpiece.
Preferably, the centroid position of the first type of workpiece point clouds is obtained, and the plurality of first type of workpiece point clouds are sorted based on the height of the centroid position.
Preferably, in step S1, if no first type of workpiece point cloud is identified, the current round of collision detection ends; steps S1 to S4 are then executed again, and collision detection is performed based on a newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round;
or, in step S4, if one of the first type of workpiece point clouds passes the collision detection, the current round of collision detection ends; after the workpiece to be grabbed in the current round has been grabbed, steps S1 to S4 are executed again, and collision detection is performed based on a newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round;
or, in step S4, if all the first type of workpiece point clouds fail the collision detection, the current round of collision detection ends; steps S1 to S4 are then executed again, and collision detection is performed based on a newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round.
The invention also provides a device for determining the workpiece grabbing sequence, which comprises:
the point cloud identification module is used for acquiring and identifying scene point clouds of workpieces to be sorted, wherein the successfully matched workpiece point clouds are defined as first type workpiece point clouds, and the unsuccessfully matched workpiece point clouds are defined as second type workpiece point clouds;
the collision detection sequence acquisition module is used for sequencing a plurality of first workpiece point clouds according to the high and low positions to serve as a collision detection sequence;
the collision detection module is used for defining a CAD model of the grabbing mechanism, wherein the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and for sequentially performing collision detection with the CAD model and the plurality of first type of workpiece point clouds according to the collision detection sequence;
the collision judgment module is used for judging whether the workpiece corresponding to the current first type of workpiece point cloud is the workpiece to be grabbed in the current round and finishing the current round of collision detection if the first type of workpiece point cloud currently subjected to collision detection passes through the collision detection; if the current first-class workpiece point clouds do not pass the collision detection, continuing the collision detection of the next first-class workpiece point cloud according to the collision detection sequence until one first-class workpiece point cloud passes the collision detection or all the first-class workpiece point clouds do not pass the collision detection, and ending the current round of collision detection.
Preferably, in the collision detection module, the virtual detection mechanism is arranged on the traveling path of the first type of workpiece point cloud currently undergoing collision detection.
Preferably, in the collision judging module, whether the collision detection is passed is determined as follows: judging whether the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud; if so, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
Preferably, in the collision detection module, the CAD model includes a robot gripping jaw and a virtual detection mechanism provided at a distal end of the robot gripping jaw, and an outer contour of the virtual detection mechanism is not smaller than an outer contour of the workpiece.
Preferably, the collision detection module comprises:
the grabbing attitude determining module is used for determining the grabbing attitude of the robot corresponding to the current first-class workpiece point cloud;
the collision detection submodule is used for virtually grabbing the current first type of workpiece using the grabbing posture: if the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
Furthermore, a certain distance D is reserved between the bottom of the virtual detection mechanism and the current first type of workpiece during each collision detection. Further, 0 mm < D ≤ H.
Preferably, in the collision detection sequence acquisition module, the centroid position of the first type of workpiece point clouds is acquired, and the plurality of first type of workpiece point clouds are sorted based on the height of the centroid position.
Preferably, in the point cloud identification module, if the point cloud of the first type of workpiece is not identified, the collision detection of the current round is finished, the point cloud identification module is switched to, and based on the newly acquired scene point cloud of the workpiece to be sorted, the collision detection is carried out to acquire the workpiece grabbing sequence of the next round;
or in the collision judgment module, if one first-class workpiece point cloud passes through collision detection, the current round of collision detection is finished, the current round of workpieces to be grabbed is grabbed, then the current round of workpieces to be grabbed is shifted to the point cloud identification module, and collision detection is carried out based on the newly acquired scene point cloud of the workpieces to be sorted, so that the next round of workpiece grabbing sequence is acquired;
or in the collision judgment module, if all the first-class workpiece point clouds do not pass through the collision detection, ending the collision detection of the current round, switching to the point cloud identification module, performing the collision detection based on the newly acquired scene point clouds of the workpieces to be sorted, and acquiring the workpiece grabbing sequence of the next round.
The invention also provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of any of the methods described above when executing the computer program.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of any of the methods described above.
According to the method, the device, the computer equipment and the storage medium for determining the workpiece grabbing sequence, the height of the position of a workpiece is firstly identified, the collision detection sequence of the workpiece is determined, then the virtual collision detection mechanism is added on an original grabbing mechanism CAD model and is used for replacing the current workpiece to perform collision detection, if the virtual collision detection mechanism does not collide with other workpieces around the current workpiece, the current workpiece is judged to be the workpiece to be grabbed in the current round, the grabbing sequence determination in the current round is finished, and then the next round of workpiece to be grabbed can be determined according to actual requirements, so that the workpiece grabbing sequence determination is conveniently and quickly realized.
Drawings
FIG. 1 is a schematic diagram of a conventional workpiece grabbing plan;
FIG. 2 is a schematic view of a structure where the uppermost workpiece is not identified;
FIG. 3 is a schematic view of a structure in which the highest workpiece is pressed;
FIG. 4 is a schematic structural diagram illustrating an embodiment of a method for determining a workpiece grabbing sequence according to the present invention;
FIG. 5 is a schematic diagram of a CAD model capture;
FIG. 6 is a schematic diagram of an embodiment of a scene point cloud of a workpiece to be sorted;
FIG. 7 is a schematic diagram of one embodiment of collision detection;
FIG. 8 is a schematic structural diagram of an embodiment of an apparatus for determining a workpiece capture sequence according to the present invention;
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the scope of the invention in any way.
Like reference numerals refer to like elements throughout the specification. The expression "and/or" includes any and all combinations of one or more of the associated listed items. In the drawings, the thickness, size, and shape of an object have been slightly exaggerated for convenience of explanation. The figures are purely diagrammatic and not drawn to scale.
It will be further understood that the terms "comprises," "comprising," "includes," "including," "has," and/or "having," when used in this specification, specify the presence of stated features, steps, integers, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, steps, integers, operations, elements, components, and/or groups thereof.
The terms "substantially", "about" and the like as used in the specification are used as terms of approximation and not as terms of degree, and are intended to account for inherent deviations in measured or calculated values that would be recognized by one of ordinary skill in the art.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
As shown in fig. 4, the present invention discloses a method for determining a workpiece grabbing sequence, comprising the following steps:
s1, acquiring and identifying scene point clouds of workpieces to be sorted, wherein in the scene point clouds of the workpieces to be sorted, the successfully matched workpiece point clouds are defined as first type of workpiece point clouds, and the unsuccessfully matched workpiece point clouds are defined as second type of workpiece point clouds;
s2, sorting the first workpiece point clouds according to the high and low positions to serve as a collision detection sequence;
S3, defining a CAD model of a grabbing mechanism, wherein the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and sequentially performing collision detection with the CAD model and the plurality of first type of workpiece point clouds according to the collision detection sequence;
s4, if the current first type of workpiece point cloud passes through collision detection, judging that the workpiece corresponding to the current first type of workpiece point cloud is the workpiece to be grabbed in the current round and finishing the current round of collision detection; if the current first-class workpiece point clouds do not pass the collision detection, continuing to use the CAD model and the next first-class workpiece point clouds to perform the collision detection according to the collision detection sequence until one first-class workpiece point cloud passes the collision detection or all the first-class workpiece point clouds do not pass the collision detection, and ending the current round of collision detection.
The method for determining the workpiece grabbing sequence comprises the steps of firstly identifying the height of a workpiece, determining the collision detection sequence of the workpiece, then adding a virtual collision detection mechanism on an original grabbing mechanism CAD model, using the virtual collision detection mechanism to replace the current workpiece for collision detection, judging the current workpiece to be grabbed in the current round if the virtual collision detection mechanism does not collide with other workpieces around the current workpiece, finishing the grabbing sequence determination in the current round, and then entering the next round of workpiece grabbing determination according to actual requirements, so that the workpiece grabbing sequence determination is conveniently and quickly realized.
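Purely as an illustration (not part of the disclosed software), the per-round flow of steps S2 to S4 can be sketched in Python roughly as follows, assuming each point cloud is an N×3 numpy array and `check_collision` is a placeholder for the CAD-model-based test described below:

```python
import numpy as np

def run_detection_round(matched_clouds, unmatched_clouds, check_collision):
    """One round of steps S2-S4: order first-type (matched) point clouds from
    highest to lowest centroid and return the first one whose virtual detection
    mechanism is collision-free, or None if every candidate fails."""
    ordered = sorted(matched_clouds,
                     key=lambda pc: np.asarray(pc).mean(axis=0)[2],
                     reverse=True)                       # step S2: height ordering
    for candidate in ordered:                            # steps S3/S4
        if check_collision(candidate, unmatched_clouds):
            continue                                     # collided: try the next candidate
        return candidate                                 # collision-free: grab this one
    return None                                          # all candidates failed this round
```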
As a preferred scheme, in step S1, a 3D camera is used to photograph the workpieces to be sorted, so as to obtain a scene point cloud of the workpieces to be sorted; the point cloud is then identified based on a template matching technique. In this embodiment, the 3D template matching function of the SeizetPicking software is used for identification, and the workpiece point clouds in the scene point cloud acquired in this round are classified according to whether the matching succeeds: point clouds that match successfully are the first type of workpiece point clouds, and point clouds that fail to match are the second type of workpiece point clouds.
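A minimal sketch of this classification step, with `match_template` standing in for whatever 3D template-matching routine is available (the matching function named above is proprietary, so the call shown here is purely hypothetical):

```python
def classify_point_clouds(segments, match_template, score_threshold=0.8):
    """Step S1: split segmented workpiece point clouds into first-type
    (successfully matched) and second-type (match failed) point clouds."""
    first_type, second_type = [], []
    for segment in segments:
        score = match_template(segment)        # hypothetical: returns a match score in [0, 1]
        if score >= score_threshold:
            first_type.append(segment)         # complete enough to match the template
        else:
            second_type.append(segment)        # partial / occluded point cloud
    return first_type, second_type
```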
Preferably, in step S2, the centroid position of each first type of workpiece point cloud is obtained, and the plurality of first type of workpiece point clouds obtained in step S1 are then sorted based on the height of the centroid position. In this embodiment, as described above, the centroid position of each identified first type of workpiece point cloud is calculated using the 3D template matching function of the SeizetPicking software.
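For a raw point cloud the centroid can simply be taken as the mean of its points; a small numpy sketch of the height ordering (an illustration only — the actual implementation uses the matching software's output):

```python
import numpy as np

def sort_by_centroid_height(clouds):
    """Step S2: order first-type point clouds from highest to lowest centroid Z.
    Each cloud is assumed to be an (N, 3) array in a frame whose Z axis points up."""
    heights = [np.asarray(pc).mean(axis=0)[2] for pc in clouds]
    order = np.argsort(heights)[::-1]          # descending height
    return [clouds[i] for i in order]
```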
Preferably, in step S3, the virtual detection mechanism is disposed on the traveling path of the first type of workpiece point cloud currently undergoing collision detection. Correspondingly, in step S4, whether the collision detection is passed is determined as follows: judging whether the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud; if so, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection. Because the virtual detection mechanism is placed on the traveling path of the first type of workpiece point cloud currently under detection and performs the collision detection in place of that workpiece, a collision between the virtual detection mechanism and a second type of workpiece point cloud means that a collision with surrounding workpieces may occur during the actual grab, and the current first type of workpiece point cloud is therefore judged to fail the collision detection.
Further, in step S3, the CAD model includes a robot gripping jaw 2 and a virtual detection mechanism 1 disposed at the distal end of the robot gripping jaw 2, and the outer contour of the virtual detection mechanism 1 is not smaller than the outer contour of the workpiece. The virtual detection mechanism 1 is located at the bottom of the gripping jaw 2 and takes the place of the current first type of workpiece 3 to be grabbed in the simulation. Since the outer contour of the virtual detection mechanism 1 is not smaller than the outer contour of the workpiece 3 to be grabbed, if the outer contour of the virtual detection mechanism 1 does not collide with any surrounding point cloud during collision detection, the corresponding current first type of workpiece will certainly not collide during the actual grab.
Correspondingly, each time the CAD model and a first type of workpiece point cloud are used for collision detection, the following steps are carried out:
S31, determining the robot grabbing posture corresponding to the current first type of workpiece point cloud; in this embodiment, the grasp planning function of the SeizetPicking software is used to determine the robot grabbing posture corresponding to each workpiece, the corresponding grabbing posture being determined according to the position of each workpiece.
S32, virtually grasping the current first type of workpiece using the grasping posture: if the virtual detection mechanism 1 collides with a second type of workpiece point cloud around the current first type of workpiece point cloud, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
For the first type of workpiece undergoing collision detection, after the corresponding robot grabbing posture is determined, whether that grabbing posture is usable is further checked, namely whether the virtual detection mechanism 1 collides with the surrounding point clouds when the posture is used for the virtual grab. The virtual detection mechanism 1 is located at the bottom of the gripping jaw 2 and takes the place of the current first type of workpiece to be grabbed in the simulation; since its outer contour is not smaller than that of the workpiece to be grabbed, if it does not collide with any surrounding point cloud during collision detection, the corresponding current first type of workpiece will certainly not collide during the actual grab. In other words, if the virtual detection mechanism 1 does not collide with the second type of workpiece point clouds around the current first type of workpiece point cloud, the first type of workpiece point cloud passes the collision detection, the collision detection ends, and, according to the detection result, the workpiece corresponding to the current first type of workpiece point cloud is taken as the workpiece to be grabbed.
Otherwise, if the virtual detection mechanism 1 at the end of the gripping jaw collides with any point cloud around the current first type of workpiece point cloud, the current first type of workpiece may collide with other surrounding workpieces, so the current first type of workpiece point cloud is judged to fail the collision detection; the process then proceeds to step S31 according to the predetermined collision detection sequence and continues with the collision detection of the next first type of workpiece point cloud, until one of the first type of workpiece point clouds passes the collision detection or all of them fail, whereupon the collision detection ends.
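The CAD-based check itself is carried out by the grasp-planning software in this embodiment; purely to make the idea concrete, the following hedged approximation models the virtual detection mechanism as an axis-aligned box whose footprint covers the candidate's XY bounding box, hovering a distance D above the candidate and extending upward along a vertical approach corridor, and reports a collision whenever any second-type point falls inside that volume (all names and the box simplification are assumptions, not the patented implementation):

```python
import numpy as np

def virtual_mechanism_collides(candidate, second_type_clouds,
                               clearance_d, corridor_height, pad=0.0):
    """Return True if any second-type point intrudes into the box occupied by the
    virtual detection mechanism above the candidate workpiece."""
    cand = np.asarray(candidate)
    xy_min = cand[:, :2].min(axis=0) - pad           # footprint >= workpiece outline
    xy_max = cand[:, :2].max(axis=0) + pad
    z_bottom = cand[:, 2].max() + clearance_d        # bottom face sits D above the workpiece
    z_top = z_bottom + corridor_height               # checked approach corridor
    for cloud in second_type_clouds:
        p = np.asarray(cloud)
        inside = ((p[:, 0] >= xy_min[0]) & (p[:, 0] <= xy_max[0]) &
                  (p[:, 1] >= xy_min[1]) & (p[:, 1] <= xy_max[1]) &
                  (p[:, 2] >= z_bottom) & (p[:, 2] <= z_top))
        if inside.any():
            return True                              # a surrounding point is in the way
    return False
```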
In step S3, further, to achieve better collision detection effect, the outer contour of the virtual detection mechanism 1 may be set to be the same as the outer contour of the workpiece.
In addition, a certain distance D is set between the bottom of the virtual detection mechanism 1 and the first type of workpiece currently undergoing collision detection at each collision detection. The workpiece to be grabbed may be a solid part or a hollow part. When the workpiece is hollow, the gripper usually enters the hollow region during the actual grab and grips the workpiece by suction on its inner wall, so a space for the gripper needs to be reserved between the virtual detection mechanism 1 and the workpiece during the simulated grab. If the workpiece is solid, the gripper usually grips it by suction on its surface during the actual grab; in that case a certain space also needs to be reserved between the virtual detection mechanism 1 and the workpiece, so as to avoid the gripper possibly colliding with the workpiece during the actual grab.
In this embodiment, 0 mm < D ≤ H, where H is the height of the workpiece to be detected. That is, the gap between the virtual detection mechanism 1 and the current first type of workpiece does not exceed the height of one workpiece, which effectively guards against collision detection failing because workpieces actually stacked above the current workpiece were missed by the camera or missed in recognition.
As a preferable scheme, in step S1, if no first type of workpiece point cloud is identified, the current round of collision detection ends; steps S1 to S4 are then executed again, and collision detection is performed based on a newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round.
Or, in step S4, if one of the first type of workpiece point clouds passes the collision detection, the current round of collision detection ends; after the workpiece to be grabbed in the current round has been grabbed, steps S1 to S4 are executed again, and collision detection is performed based on a newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round.
Or, in step S4, if all the first type of workpiece point clouds fail the collision detection, the current round of collision detection ends; steps S1 to S4 are then executed again, and collision detection is performed based on a newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round.
When the 3D camera photographs the scene to acquire the point cloud of the workpieces to be sorted, the acquired point cloud data may be unsatisfactory so that no workpiece can be identified in step S1, i.e. no first type of workpiece point cloud can be identified from the scene point cloud of the workpieces to be sorted. In that case the current round of collision detection ends, the shooting angle or lighting angle of the camera is adjusted, steps S1 to S4 are executed again, and collision detection is performed based on the newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round.
As described above, owing to factors such as the shooting angle and the ambient lighting, some workpiece point clouds are captured only partially, i.e. as second type of workpiece point clouds. Therefore, in step S4, if all the first type of workpiece point clouds fail the collision detection after being tested in the collision detection sequence, the current round of collision detection ends, the shooting angle or lighting angle of the camera is adjusted, and steps S1 to S4 are executed again; new first type of workpiece point clouds are identified from the newly acquired scene point cloud of workpieces to be sorted and collision detection is performed, so as to obtain the workpiece grabbing sequence of the next round.
In addition, if one of the first type of workpiece point clouds passes the collision detection in step S4, the current round of collision detection ends; after the workpiece to be grabbed in the current round has been grabbed, steps S1 to S4 are executed again and collision detection is performed based on the newly acquired scene point cloud of workpieces to be sorted, so as to obtain the workpiece grabbing sequence of the next round.
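The round-to-round control flow described in the branches above can be summarized as a simple driver loop; again a non-authoritative sketch in which `acquire_scene`, `identify`, `pick_candidate` and `grab` are placeholders for the camera, matching, collision-detection and robot interfaces:

```python
def sorting_driver(acquire_scene, identify, pick_candidate, grab, max_rounds=1000):
    """Repeat rounds until the round budget is exhausted. Whenever nothing is
    identified or every candidate fails collision detection, the scene is simply
    re-acquired (after adjusting camera or lighting) and a new round starts."""
    for _ in range(max_rounds):
        scene = acquire_scene()                            # 3D camera shot of the bin
        first_type, second_type = identify(scene)          # step S1
        if not first_type:
            continue                                       # nothing identified: new round
        target = pick_candidate(first_type, second_type)   # steps S2-S4
        if target is None:
            continue                                       # all candidates failed: new round
        grab(target)                                       # grab, then start the next round
```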
The present invention will be further described below by taking an example of determining the thin-walled workpiece gripping order by using a method for determining the workpiece gripping order according to the present invention.
As shown in fig. 5, the virtual collision detection mechanism does not exist on the actual grasping mechanism; it is used only in the CAD model of the mechanism to increase the collision detection capability of the model as desired. In this embodiment, a virtual detection mechanism 1 is constructed according to the size and profile of the thin-walled workpiece. The virtual detection mechanism is a thin-walled structure whose size corresponds to that of the workpiece: assuming the workpiece has length L, width W and height H, the thin-walled structure has length L1 = L, width W1 = W, wall thickness T < W/2 and T < L/2 (which can also be adjusted according to the actual situation), and height H1 ≥ H, as shown in fig. 5. The virtual detection mechanism 1 is combined with the gripping jaw 2 of the grabbing mechanism; at the collision detection position after combination, the distance from the bottom surface of the virtual detection mechanism 1 to the surface of the workpiece 3 is D, where 0 mm < D ≤ H.
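Written out as code, the thin-walled virtual detection mechanism of this example is fully determined by the workpiece dimensions; the sketch below only restates the dimensional constraints given above (the default choices of T and D are arbitrary examples within the stated ranges):

```python
from dataclasses import dataclass

@dataclass
class ThinWallDetector:
    """Thin-walled virtual detection mechanism sized from a workpiece of length L,
    width W and height H: L1 = L, W1 = W, T < min(L, W)/2, H1 >= H, with its
    bottom face a distance D above the workpiece, 0 < D <= H."""
    L1: float
    W1: float
    T: float
    H1: float
    D: float

def build_thin_wall_detector(L, W, H, T=None, D=None):
    T = 0.25 * min(L, W) if T is None else T      # any wall thickness below min(L, W)/2
    D = 0.5 * H if D is None else D               # any clearance in (0, H]
    assert H > 0 and 0 < T < min(L, W) / 2 and 0 < D <= H
    return ThinWallDetector(L1=L, W1=W, T=T, H1=H, D=D)
```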
The method for determining the workpiece grabbing sequence comprises the following steps:
s101, acquiring and identifying scene point clouds of workpieces to be sorted, wherein complete workpiece point clouds in the scene point clouds of the workpieces to be sorted are first-class workpiece point clouds, and missing workpiece point clouds are second-class workpiece point clouds;
in the implementation process, the workpieces are usually placed in a mess, and the grabbing unit mainly comprises an industrial serial robot and a tail end grabbing clamping jaw 2. As shown in fig. 6, after a workpiece to be detected is photographed by a 3D camera, scene point clouds of the workpiece to be sorted are obtained and identified, wherein the detected workpieces are respectively G2, G4 and G5, that is, three first-type workpiece point clouds, and only part of the point clouds are identified from the G1 and G3 workpieces, which are then used as second-type workpiece point clouds.
S102, sequencing a plurality of first workpiece point clouds according to the high and low positions to serve as a collision detection sequence;
in this embodiment, according to the centroid positions of G2, G4, and G5, the three first-type workpiece point clouds are sorted in order from high to low, and a collision detection order is obtained, that is, the collision detection order of the current round is G2, G4, and G5.
S103, defining a CAD model of the grabbing mechanism, wherein the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and sequentially performing collision detection with the CAD model and the plurality of first type of workpiece point clouds according to the collision detection sequence;
s104, if the current first type of workpiece point cloud passes through collision detection, judging that the workpiece corresponding to the current first type of workpiece point cloud is the workpiece to be grabbed in the current round, and finishing the current round of collision detection; if the current first-class workpiece point clouds do not pass the collision detection, continuing to use the CAD model and the next first-class workpiece point clouds to perform the collision detection according to the collision detection sequence until one first-class workpiece point cloud passes the collision detection or all the first-class workpiece point clouds do not pass the collision detection, and ending the current round of collision detection.
Specifically, as shown in fig. 7, at position 1 the workpiece G1 at the higher position is not detected while the next-highest workpiece G2 is; collision detection is performed on G2, the virtual detection mechanism is found to collide with G1, and G2 therefore does not meet the grasping condition. At position 2, the centroid of G4 is higher; collision detection is performed on G4, a collision with G3 is detected, and G4 does not meet the grasping condition. At position 3, collision detection is performed on G5 and no collision is found, so G5 is judged to be graspable.
If the collision detection technique is not used and workpieces are grasped from high to low purely according to their height relationship, the workpiece G2 would be grasped first in the situation shown in fig. 7, although G2 is clearly pressed by G1 and does not meet the grasping condition. With the collision detection method proposed by the present invention, it can be correctly determined, on top of the height information, that the workpiece G5 should be grasped first.
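The outcome of this example can be reproduced with the round sketch given earlier by mocking the per-candidate collision results (illustrative only; the G1–G5 labels and the blocked flags below simply encode the scenario of fig. 7):

```python
# Collision-detection order of the round: highest centroid first.
candidates = ["G2", "G4", "G5"]
blocked = {"G2": True,    # virtual detection mechanism hits G1
           "G4": True,    # virtual detection mechanism hits G3
           "G5": False}   # no collision

target = next((g for g in candidates if not blocked[g]), None)
print(target)             # -> G5: the workpiece to grab first in this round
```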
Besides thin-walled workpieces, a virtual detection mechanism matching the contour of other kinds of workpieces can likewise be added to the CAD model of the grabbing mechanism, and the grabbing sequence determined with this method.
Example two
As shown in fig. 8, the present invention also provides a workpiece gripping sequence determining apparatus 10, including:
the point cloud identification module 11 is used for acquiring and identifying workpiece scene point clouds to be sorted, wherein the workpiece point clouds which are successfully matched are defined as first-class workpiece point clouds, and the workpiece point clouds which are failed to be matched are defined as second-class workpiece point clouds;
a collision detection sequence obtaining module 12, configured to sort the plurality of first workpiece point clouds according to high and low positions, and use the sorted first workpiece point clouds as a collision detection sequence;
the collision detection module 13 is used for defining a CAD model of the grabbing mechanism, the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and the CAD model and the plurality of first-class workpiece point clouds are used for sequentially performing collision detection according to the collision detection sequence;
the collision judgment module 14 is configured to judge that the workpiece corresponding to the current first type of workpiece point cloud is the workpiece to be captured in the current round and end the current round of collision detection if the first type of workpiece point cloud currently performing collision detection passes the collision detection; if the current first-class workpiece point clouds do not pass the collision detection, continuing the collision detection of the next first-class workpiece point cloud according to the collision detection sequence until one first-class workpiece point cloud passes the collision detection or all the first-class workpiece point clouds do not pass the collision detection, and ending the current round of collision detection.
According to the workpiece grabbing sequence determining apparatus 10, the height of the position of each workpiece is first recognized and the collision detection sequence of the workpieces is determined; a virtual collision detection mechanism is then added to the original CAD model of the grabbing mechanism and used in place of the current workpiece for collision detection. If the virtual collision detection mechanism does not collide with the other workpieces around the current workpiece, the current workpiece is judged to be the workpiece to be grabbed in the current round and the determination of the grabbing sequence for the current round ends; the workpiece to be grabbed in the next round can then be determined according to actual requirements, so that the workpiece grabbing sequence is determined conveniently and quickly.
Preferably, in the collision detection module 13, the virtual detection mechanism is disposed on the traveling path of the first type of workpiece point cloud currently undergoing collision detection.
Preferably, in the collision judgment module 14, whether the collision detection is passed is determined as follows: judging whether the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud; if so, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
Preferably, in the collision detection module, the CAD model includes a robot gripping jaw and a virtual detection mechanism provided at a distal end of the robot gripping jaw, and an outer contour of the virtual detection mechanism is not smaller than an outer contour of the workpiece.
Preferably, the collision detection module comprises:
the grabbing attitude determining module is used for determining the grabbing attitude of the robot corresponding to the current first-class workpiece point cloud;
the collision detection submodule is used for virtually grabbing the current first type of workpiece using the grabbing posture: if the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
Furthermore, a certain distance D is reserved between the bottom of the virtual detection mechanism and the current first type of workpiece during each collision detection. Further, 0 mm < D ≤ H.
Preferably, in the collision detection sequence acquisition module, the centroid position of the first type of workpiece point clouds is acquired, and the plurality of first type of workpiece point clouds are sorted based on the height of the centroid position.
Preferably, in the point cloud identification module, if the point cloud of the first type of workpiece is not identified, the collision detection of the current round is finished, the point cloud identification module is switched to, and based on the newly acquired scene point cloud of the workpiece to be sorted, the collision detection is carried out to acquire the workpiece grabbing sequence of the next round;
or in the collision judgment module, if one first-class workpiece point cloud passes through collision detection, the current round of collision detection is finished, the current round of workpieces to be grabbed is grabbed, then the current round of workpieces to be grabbed is shifted to the point cloud identification module, and collision detection is carried out based on the newly acquired scene point cloud of the workpieces to be sorted, so that the next round of workpiece grabbing sequence is acquired;
or in the collision judgment module, if all the first-class workpiece point clouds do not pass through the collision detection, ending the collision detection of the current round, switching to the point cloud identification module, performing the collision detection based on the newly acquired scene point clouds of the workpieces to be sorted, and acquiring the workpiece grabbing sequence of the next round.
EXAMPLE III
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present invention, which may be any device capable of executing programs, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack-mounted server, a blade server or a tower server (including an independent server or a server cluster formed by multiple servers). The computer device 20 of the present embodiment at least includes, but is not limited to, a memory 21 and a processor 22 that can be communicatively coupled to each other via a system bus, as shown in fig. 9. It is noted that fig. 9 only shows the computer device 20 with components 21-22, but it is to be understood that not all of the shown components are required to be implemented and that more or fewer components may be implemented instead.
In this embodiment, the memory 21 (i.e., the readable storage medium) includes a Flash memory, a hard disk, a multimedia Card, a Card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), and a Programmable Read Only Memory (PROM), and the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), and the like provided on the computer device 20. Of course, the memory 21 may also include both internal and external storage devices of the computer device 20. In the present embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 20 and various types of application software, such as program codes of a method for determining a workpiece grabbing sequence in the method embodiment. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 20. In the present embodiment, the processor 22 is configured to execute the program code stored in the memory 21 or process data, for example, to execute the workpiece capture sequence determination apparatus 10, so as to implement the method for determining the workpiece capture sequence in the method embodiment.
Example four
The present application also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of the present embodiment is for storing program code of a workpiece capture order determination apparatus, which when executed by a processor, implements a method of determining a workpiece capture order in a method embodiment. The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (9)

1. A method for determining a workpiece gripping sequence, comprising the steps of:
S1, acquiring and identifying a scene point cloud of workpieces to be sorted, wherein in the scene point cloud of the workpieces to be sorted, successfully matched workpiece point clouds are defined as first type of workpiece point clouds, and unsuccessfully matched workpiece point clouds are defined as second type of workpiece point clouds;
S2, sorting the first type of workpiece point clouds by height to serve as a collision detection sequence;
S3, defining a CAD model of a grabbing mechanism, wherein the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and sequentially performing collision detection with the CAD model and a plurality of first type of workpiece point clouds according to the collision detection sequence; judging whether the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud; if so, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection;
S4, if the first type of workpiece point cloud currently undergoing collision detection passes the collision detection, judging that the workpiece corresponding to the first type of workpiece point cloud is the workpiece to be grabbed in the current round and ending the current round of collision detection; if the current first type of workpiece point cloud fails the collision detection, continuing the collision detection with the next first type of workpiece point cloud according to the collision detection sequence, until one first type of workpiece point cloud passes the collision detection or all the first type of workpiece point clouds fail the collision detection, and ending the current round of collision detection.
2. A method for determining a workpiece-gripping sequence according to claim 1, characterized in that: the virtual detection mechanism is arranged on a virtual travel path of the first workpiece point cloud currently performing collision detection.
3. A method for determining a workpiece-gripping sequence according to claim 1, characterized in that: the collision detection comprises the steps of:
S31, determining the robot grabbing posture corresponding to the current first type of workpiece point cloud;
S32, virtually grasping the current first type of workpiece using the grasping posture: if the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection.
4. A method for determining a workpiece-gripping sequence according to claim 1, characterized in that: the virtual detection mechanism is arranged at the tail end of the robot grabbing clamping jaw, and the outer contour of the virtual detection mechanism is not smaller than that of the workpiece.
5. The method for determining the workpiece grabbing sequence as claimed in claim 1, wherein a certain distance D is reserved between the bottom of the virtual detection mechanism and the current workpiece of the first type at each collision detection.
6. The method of claim 1, wherein in step S1, if the first type of workpiece point cloud is not identified, the collision detection of the round is ended, and the method continues to step S1 to step S4, and performs collision detection based on the newly acquired scene point cloud of the workpiece to be sorted to acquire the workpiece capturing sequence of the next round;
or, in the step S4, if one of the point clouds of the first type of workpiece passes through collision detection, the collision detection of the current round is finished, and after the workpiece to be captured of the current round is captured, the steps S1 to S4 are continuously performed, and based on the newly acquired point clouds of the scene of the workpiece to be sorted, collision detection is performed to acquire the capturing sequence of the workpiece of the next round;
or, in the step S4, if all the first type workpiece point clouds fail to pass the collision detection, ending the collision detection of the current round, continuing to execute the steps S1 to S4, and performing the collision detection based on the newly acquired scene point clouds of the workpieces to be sorted to acquire the workpiece grabbing sequence of the next round.
7. An apparatus for determining a workpiece gripping sequence, comprising:
the point cloud identification module is used for acquiring and identifying scene point clouds of workpieces to be sorted, wherein the successfully matched workpiece point clouds are defined as first type workpiece point clouds, and the unsuccessfully matched workpiece point clouds are defined as second type workpiece point clouds;
the collision detection sequence acquisition module is used for sequencing a plurality of first workpiece point clouds according to the high and low positions to serve as a collision detection sequence;
the collision detection module is used for defining a CAD model of the grabbing mechanism, wherein the CAD model comprises a robot grabbing clamping jaw and a virtual detection mechanism arranged on the robot grabbing clamping jaw, and the CAD model and the plurality of first type of workpiece point clouds are used for sequentially carrying out collision detection according to the collision detection sequence; judging whether the virtual detection mechanism collides with a second type of workpiece point cloud around the current first type of workpiece point cloud; if so, judging that the current first type of workpiece point cloud fails the collision detection; otherwise, it passes the collision detection;
the collision judgment module is used for judging whether the workpiece corresponding to the current first type of workpiece point cloud is the workpiece to be grabbed in the current round and finishing the current round of collision detection if the first type of workpiece point cloud currently subjected to collision detection passes through the collision detection; if the current first-class workpiece point clouds do not pass the collision detection, continuing the collision detection of the next first-class workpiece point cloud according to the collision detection sequence until one first-class workpiece point cloud passes the collision detection or all the first-class workpiece point clouds do not pass the collision detection, and ending the current round of collision detection.
8. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein: the processor, when executing the computer program, realizes the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium having stored thereon a computer program, characterized in that: the computer program when executed by a processor implements the steps of the method of any one of claims 1 to 6.
CN202011393632.0A 2020-12-02 2020-12-02 Method and device for determining workpiece grabbing sequence, computer equipment and medium Active CN112464410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011393632.0A CN112464410B (en) 2020-12-02 2020-12-02 Method and device for determining workpiece grabbing sequence, computer equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011393632.0A CN112464410B (en) 2020-12-02 2020-12-02 Method and device for determining workpiece grabbing sequence, computer equipment and medium

Publications (2)

Publication Number Publication Date
CN112464410A CN112464410A (en) 2021-03-09
CN112464410B true CN112464410B (en) 2021-11-16

Family

ID=74805309

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011393632.0A Active CN112464410B (en) 2020-12-02 2020-12-02 Method and device for determining workpiece grabbing sequence, computer equipment and medium

Country Status (1)

Country Link
CN (1) CN112464410B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113524187B (en) * 2021-07-20 2022-12-13 熵智科技(深圳)有限公司 Method and device for determining workpiece grabbing sequence, computer equipment and medium
CN113538582B (en) * 2021-07-20 2024-06-07 熵智科技(深圳)有限公司 Method, device, computer equipment and medium for determining workpiece grabbing sequence
CN114310892B (en) * 2021-12-31 2024-05-03 梅卡曼德(北京)机器人科技有限公司 Object grabbing method, device and equipment based on point cloud data collision detection
CN115056215A (en) * 2022-05-20 2022-09-16 梅卡曼德(北京)机器人科技有限公司 Collision detection method, control method, capture system and computer storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091062A (en) * 2019-11-21 2020-05-01 东南大学 Robot out-of-order target sorting method based on 3D visual clustering and matching
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11407111B2 (en) * 2018-06-27 2022-08-09 Abb Schweiz Ag Method and system to generate a 3D model for a robot scene
CN109816730B (en) * 2018-12-20 2021-08-17 先临三维科技股份有限公司 Workpiece grabbing method and device, computer equipment and storage medium
CN111754515B (en) * 2019-12-17 2024-03-01 北京京东乾石科技有限公司 Sequential gripping method and device for stacked articles
CN111508066B (en) * 2020-04-16 2023-05-26 北京迁移科技有限公司 Unordered stacking workpiece grabbing system based on 3D vision and interaction method
CN114061580B (en) * 2020-05-22 2023-12-29 梅卡曼德(北京)机器人科技有限公司 Robot grabbing method and device based on symmetry degree, electronic equipment and medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091062A (en) * 2019-11-21 2020-05-01 东南大学 Robot out-of-order target sorting method based on 3D visual clustering and matching
CN111558940A (en) * 2020-05-27 2020-08-21 佛山隆深机器人有限公司 Robot material frame grabbing planning and collision detection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a multi-target grasp planning method based on depth information; Yan Peiqing et al.; Journal of Electronic Measurement and Instrumentation; 2016-09-15 (No. 09); full text *

Also Published As

Publication number Publication date
CN112464410A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN112464410B (en) Method and device for determining workpiece grabbing sequence, computer equipment and medium
CN108044627B (en) Method and device for detecting grabbing position and mechanical arm
CN112828892B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN112847375B (en) Workpiece grabbing method and device, computer equipment and storage medium
JP3768174B2 (en) Work take-out device
CN112109086B (en) Grabbing method for industrial stacked parts, terminal equipment and readable storage medium
CN113524187B (en) Method and device for determining workpiece grabbing sequence, computer equipment and medium
WO2023035832A1 (en) Robot sorting method based on visual recognition and storage medium
CN113538459B (en) Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection
CN112192577A (en) One-beat multi-grab method applied to robot grabbing scene
CN113858188A (en) Industrial robot gripping method and apparatus, computer storage medium, and industrial robot
CN112802107A (en) Robot-based control method and device for clamp group
CN114310892B (en) Object grabbing method, device and equipment based on point cloud data collision detection
CN115713547A (en) Motion trail generation method and device and processing equipment
CN112338922B (en) Five-axis mechanical arm grabbing and placing method and related device
CN108145712B (en) Method and device for sorting articles by robot and robot
CN113538582B (en) Method, device, computer equipment and medium for determining workpiece grabbing sequence
JPH05134731A (en) High-speed picking equipment for piled parts
JP2014174628A (en) Image recognition method
CN110533681B (en) Article grabbing method, device and equipment and computer readable storage medium
CN111702761B (en) Control method and device of palletizing robot, processor and sorting system
CN115082795A (en) Virtual image generation method, device, equipment, medium and product
JP2555823B2 (en) High-speed picking device for piled parts
CN111687829B (en) Anti-collision control method, device, medium and terminal based on depth vision
CN115246124A (en) Object grabbing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant