CN114241286B - Object grabbing method and device, storage medium and electronic device - Google Patents


Info

Publication number
CN114241286B
CN114241286B
Authority
CN
China
Prior art keywords
target, point, determining, matrix, matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111496594.6A
Other languages
Chinese (zh)
Other versions
CN114241286A
Inventor
庄涵
汪鹏飞
刘羽
张博
Current Assignee
Zhejiang Huaray Technology Co Ltd
Original Assignee
Zhejiang Huaray Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Huaray Technology Co Ltd filed Critical Zhejiang Huaray Technology Co Ltd
Priority to CN202111496594.6A
Publication of CN114241286A
Application granted
Publication of CN114241286B
Status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures


Abstract

An embodiment of the invention provides an object grabbing method and device, a storage medium, and an electronic device. The method includes: acquiring a scene point cloud obtained by photographing a target area with a camera device, wherein the target area contains a target object; determining, based on scene features of the scene point cloud and model features of a model point cloud, a set of initial matching matrices for converting from a model coordinate system to a scene coordinate system; determining, for each initial matching matrix in the set, the target number of points successfully matched with the model point cloud, and determining a target matching matrix from the set based on the target number; determining, based on the target matching matrix, a target pose for grabbing a first object among the target objects; and controlling a target device to grab the first object according to the target pose. The invention solves the problem of low object-grabbing efficiency in the related art and improves grabbing efficiency.

Description

Object grabbing method and device, storage medium and electronic device
Technical Field
Embodiments of the invention relate to the field of communications, and in particular to an object grabbing method and device, a storage medium, and an electronic device.
Background
Sorting individual, randomly stacked workpieces out of a material frame is an important challenge in production-line automation. On actual production lines, workpieces are mainly sorted by mechanical vibration or by hand. Vibration sorting devices suffer from high noise, poor flexibility, and complex design, while manual sorting is inefficient and costly. With the development of the machine-vision industry, automated sorting of stacked parts can be achieved by introducing machine vision. When grabbing with a two-dimensional vision system, however, grabbing often fails because stacked parts occlude one another.
It can be seen that the related art suffers from low object-grabbing efficiency.
No effective solution to this problem has yet been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for grabbing an object, a storage medium and an electronic device, which are used for at least solving the problem of low object grabbing efficiency in the related art.
According to an embodiment of the present invention, there is provided an object grabbing method, including: acquiring a scene point cloud obtained by photographing a target area with a camera device, wherein the target area contains a target object; determining, based on scene features of the scene point cloud and model features of a model point cloud, a set of initial matching matrices for converting from a model coordinate system to a scene coordinate system, wherein the model point cloud is the point cloud of a target model of the target object, the model coordinate system is the coordinate system of the model point cloud, the scene coordinate system is the coordinate system of the scene point cloud, and the scene point cloud of one target object corresponds to one initial matching matrix in the set; determining, for each initial matching matrix in the set, the target number of points successfully matched with the model point cloud, and determining a target matching matrix from the set based on the target number; determining, based on the target matching matrix, a target pose for grabbing a first object among the target objects, wherein the scene point cloud of the first object corresponds to the target matching matrix; and controlling a target device to grab the first object according to the target pose.
According to another embodiment of the present invention, there is provided an object grabbing device, including: an acquisition module, configured to acquire a scene point cloud obtained by photographing a target area with a camera device, wherein the target area contains a target object; a first determining module, configured to determine, based on scene features of the scene point cloud and model features of a model point cloud, a set of initial matching matrices for converting from a model coordinate system to a scene coordinate system, wherein the model point cloud is the point cloud of a target model of the target object, the model coordinate system is the coordinate system of the model point cloud, the scene coordinate system is the coordinate system of the scene point cloud, and the scene point cloud of one target object corresponds to one initial matching matrix in the set; a second determining module, configured to determine, for each initial matching matrix in the set, the target number of points successfully matched with the model point cloud, and to determine a target matching matrix from the set based on the target number; a third determining module, configured to determine, based on the target matching matrix, a target pose for grabbing a first object among the target objects, wherein the scene point cloud of the first object corresponds to the target matching matrix; and a grabbing module, configured to control a target device to grab the first object according to the target pose.
According to yet another embodiment of the present invention, there is also provided a computer-readable storage medium having stored therein a computer program, wherein the computer program when executed by a processor implements the steps of the method as described in any of the above.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the invention, a scene point cloud obtained by photographing the target area with the camera device is acquired; a set of initial matching matrices for converting from the model coordinate system to the scene coordinate system is determined from the scene features of the scene point cloud and the model features of the model point cloud; for each initial matching matrix in the set, the target number of points successfully matched with the model point cloud is determined, and a target matching matrix is determined from the set based on that target number; the target pose for grabbing a first object among the target objects is determined from the target matching matrix; and the target device is controlled to grab the first object according to the target pose. Because, after multiple initial matching matrices are obtained, the target matching matrix is chosen according to the number of points successfully matched with the model point cloud, and the object corresponding to the target matching matrix is grabbed, grabbing occluded objects is avoided; the problem of low object-grabbing efficiency in the related art is thereby solved, and grabbing efficiency is improved.
Drawings
FIG. 1 is a block diagram of the hardware structure of a mobile terminal running an object grabbing method according to an embodiment of the invention;
FIG. 2 is a flowchart of an object grabbing method according to an embodiment of the invention;
FIG. 3 is a schematic diagram of an application scenario of an object grabbing method according to an exemplary embodiment of the invention;
FIG. 4 is a flowchart of a method of determining an initial set of matching matrices according to an exemplary embodiment of the invention;
FIG. 5 is a flowchart of determining the first order according to an exemplary embodiment of the invention;
FIG. 6 is a schematic diagram of the first region according to an exemplary embodiment of the invention;
FIG. 7 is a flowchart of determining the target pose according to an exemplary embodiment of the invention;
FIG. 8 is a flowchart of a method of grabbing an object according to an embodiment of the invention;
FIG. 9 is a block diagram of the structure of an object grabbing device according to an embodiment of the invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal according to an embodiment of the present invention. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for capturing an object in an embodiment of the present invention, and the processor 102 executes the computer program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, there is provided a method for capturing an object, and fig. 2 is a flowchart of a method for capturing an object according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, acquiring a scene point cloud obtained by photographing a target area with a camera device, wherein the target area contains a target object;
step S204, determining an initial matching matrix set converted from a model coordinate system to a scene coordinate system based on scene characteristics of the scene point cloud and model characteristics of the model point cloud, wherein the model point cloud is a point cloud of a target model of the target object, the model coordinate system is a coordinate system of the model point cloud, the scene coordinate system is a coordinate system of the scene point cloud, and one scene point cloud of the target object corresponds to one initial matching matrix included in the initial matching matrix set;
Step S206, determining the target number of successfully matched points in the model point cloud in each initial matching matrix included in the initial matching matrix set, and determining a target matching matrix from the initial matching matrix set based on the target number;
step S208, determining a target pose for grabbing a first object included in the target object based on the target matching matrix, wherein a scene point cloud of the first object corresponds to the target matching matrix;
and step S210, controlling the target equipment to grasp the first object according to the target pose.
In the above embodiment, the target object may be a part, and the target area may be an area holding the target object, for example an area on a conveyor belt. The target device may be a gripper or another device with a grabbing function, such as a six-axis mechanical arm with clamping jaws, and the imaging device may be a 3D camera or the like. A schematic view of the application scenario of the object grabbing method is shown in fig. 3: a 3D camera photographs the target area to obtain a scene point cloud and sends it to an industrial personal computer; the industrial personal computer computes a reliable grabbing pose through the algorithm and sends it to the mechanical arm (i.e., the target device); the mechanical arm grabs a part along the path to the grabbing pose and moves it to the placement area.
In the above embodiment, the target area holds target objects of the same kind; a model of the target object can be obtained in advance, and the model point cloud of the target object determined. A point cloud is a massive set of points expressing the spatial distribution and surface characteristics of a target under a common spatial reference frame, and can be written as P = {(xᵢ, yᵢ, zᵢ) | i = 1, 2, …, n}.
In the above embodiment, when determining the initial matching matrix set converted from the model coordinate system to the scene coordinate system based on the scene features of the scene point cloud and the model features of the model point cloud, the acquired scene point cloud and model point cloud may first undergo a background-removal operation: given the value ranges of the x, y and z coordinates of the grabbing area (e.g., the target area) in the actual scene, points beyond those limits are removed. The point cloud is then filtered to remove noise points; the filtering method may be mean filtering, bilateral filtering, or another method. After filtering, if the density of the scene point cloud is too high, the point cloud may be downsampled: the model point cloud and the scene point cloud can be downsampled directly by keeping one point out of every two. This step need not be performed if the point-cloud density is low. After the processed scene point cloud and model point cloud are obtained, features are computed for each, in forms such as PFH descriptors, FPFH descriptors, or SHOT descriptors. Coarse feature matching then yields the candidate poses: a kd-tree is constructed from the feature descriptors of the model point cloud; for each feature descriptor in the scene point cloud, the most similar descriptor in the model point cloud is searched; and the initial matching matrix set from model to scene is computed from the resulting correspondences.
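The preprocessing and coarse-matching steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are invented, `scipy.spatial.cKDTree` stands in for the kd-tree over feature descriptors, and the descriptors themselves (PFH/FPFH/SHOT) are assumed to be precomputed arrays.

```python
import numpy as np
from scipy.spatial import cKDTree

def crop_to_workspace(cloud, x_range, y_range, z_range):
    """Background removal: keep only points whose x, y, z coordinates fall
    inside the given grabbing-area ranges."""
    lo = np.array([x_range[0], y_range[0], z_range[0]])
    hi = np.array([x_range[1], y_range[1], z_range[1]])
    mask = np.all((cloud >= lo) & (cloud <= hi), axis=1)
    return cloud[mask]

def downsample_every_other(cloud):
    """'Keep one point out of every two' downsampling for overly dense clouds."""
    return cloud[::2]

def coarse_match(scene_descriptors, model_descriptors):
    """For each scene feature descriptor, return the index of the most similar
    model descriptor, using a kd-tree built over the model descriptors."""
    tree = cKDTree(model_descriptors)
    _, idx = tree.query(scene_descriptors, k=1)
    return idx
```

From the resulting scene-to-model descriptor correspondences, candidate rotation-translation matrices would then be estimated (that estimation step is omitted here).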
A flowchart of the method for determining the initial matching matrix set is shown in fig. 4. Each initial matching matrix in the set may include a rotation matrix, a translation matrix, or both.
After the initial matching matrix set is determined, the target number of points successfully matched with the model point cloud can be determined for each initial matching matrix in the set, and the target matching matrix determined from the set according to that number of successfully matched points. The target pose is then determined from the target matching matrix, and the target device is controlled to grab the first object corresponding to the target matching matrix according to the target pose.
Alternatively, the executing body of the above steps may be an industrial personal computer, a processor, or another device with similar processing capabilities, or a machine integrating at least an image acquisition device and a data processing device, where the image acquisition device may include an image acquisition module such as a camera, and the data processing device may include a terminal such as a computer or a mobile phone, but is not limited thereto.
According to the above method, a scene point cloud obtained by photographing the target area with the camera device is acquired; a set of initial matching matrices for converting from the model coordinate system to the scene coordinate system is determined from the scene features of the scene point cloud and the model features of the model point cloud; for each initial matching matrix in the set, the target number of points successfully matched with the model point cloud is determined, and a target matching matrix is determined from the set based on that target number; the target pose for grabbing a first object among the target objects is determined from the target matching matrix; and the target device is controlled to grab the first object according to the target pose. Because, after multiple initial matching matrices are obtained, the target matching matrix is chosen according to the number of points successfully matched with the model point cloud, and the object corresponding to the target matching matrix is grabbed, grabbing occluded objects is avoided; the problem of low object-grabbing efficiency in the related art is thereby solved, and grabbing efficiency is improved.
In an exemplary embodiment, determining the target number of points successfully matching points in the model point cloud for each initial matching matrix included in the initial matching matrix set includes: determining a first bounding box of the target model; converting each initial matching matrix into the model coordinate system to obtain a first matrix set; determining, for each matrix included in the first matrix set, the first points within the first bounding box; converting the first points into the model coordinate system based on the first matrix set to obtain second points; determining a first number of points among the second points that successfully match points in a K-dimensional tree of the target model; and determining the first number as the target number. In this embodiment, bounding-box cutting may be performed, the number of successfully matched points calculated, and the matrices sorted by that number to determine the target matching matrix.
In the above-described embodiment, the initial matching matrices calculated from features have the following drawbacks: (1) the rotation matrix is unreliable: influenced by noise points and part stacking, a large number of mismatches occur; (2) the common approach of obtaining an optimal matching matrix via RANSAC (random sample consensus) search carries a degree of randomness. Since a large number of initial matching matrices can be obtained, a reliable index can be introduced to measure how well the parts match, and the initial matching matrices reliability-ordered by this index to obtain an accurate target matching matrix. The number of successfully matched points (i.e., the target number) can serve as this measure and achieves a good ordering effect; that is, each initial matching matrix in the set is ordered according to the reliability of its match.
In the above embodiment, let the points in the part model point cloud be P = {pᵢ | i = 1, 2, …, n}, where n is the number of model points, and let the initial rotation-translation matrices (i.e., initial matching matrices) from the model to the scene be RTⱼ, j = 1, 2, …, k, where k is the initial number of rotation-translation matrices. From each initial matching matrix, the rotation-translation matrix from the scene point cloud to the model point cloud (i.e., an element of the first matrix set) is determined as its inverse RTⱼ⁻¹, and it is judged whether each point in the scene (i.e., a first point) lies within the first bounding box containing the part model. If so, the point is saved in a container vector1ⱼ belonging to the RTⱼ matrix.
In the above embodiment, the first points may be converted into the model coordinate system based on the first matrix set to obtain the second points. A second point qᵤ converted from a first point pᵤ into the model coordinate system can be calculated as qᵤ = RTⱼ⁻¹ · pᵤ, where u ranges from 1 to the size of vector1ⱼ; after conversion, a container vector2ⱼ stores the transformed points.
In the above embodiment, after the second point is determined, the first number of points included in the second point that successfully match the points in the K-dimensional tree of the target model may be determined; the first number is determined as the target number. Wherein the point of successful matching may be the closest point to the second point.
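The scoring step above can be sketched as follows, under the assumption that an initial matching matrix is a rotation R plus translation t mapping model points into the scene as p_scene = R·p_model + t; the function name, distance threshold, and distance-only match criterion are illustrative simplifications (the patent also allows normal-angle and local-frame checks).

```python
import numpy as np
from scipy.spatial import cKDTree

def count_matched_points(scene_cloud, model_cloud, R, t, dist_thresh=0.01):
    """Score one candidate matching matrix (R, t):
    1) map scene points back into the model frame with the inverse transform,
    2) keep only points inside the model's axis-aligned bounding box,
    3) count how many kept points have a model point within dist_thresh."""
    # Inverse transform: p_model = R.T @ (p_scene - t); per row this is (p - t) @ R.
    back = (scene_cloud - t) @ R
    lo, hi = model_cloud.min(axis=0), model_cloud.max(axis=0)
    inside = back[np.all((back >= lo) & (back <= hi), axis=1)]
    if len(inside) == 0:
        return 0
    dists, _ = cKDTree(model_cloud).query(inside, k=1)
    return int(np.sum(dists < dist_thresh))
```

Running this for every candidate (R, t) yields the target number used to rank the initial matching matrices.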
In one exemplary embodiment, determining the first number of points among the second points that successfully match points in the K-dimensional tree of the target model includes performing the following for each sub-point included in the second points: determining a first distance between the sub-point and each point included in the K-dimensional tree; determining a first vector of the sub-point in the scene coordinate system; converting the first vector into the model coordinate system to obtain a second vector; determining a third vector of each point included in the K-dimensional tree in the model coordinate system; respectively determining the cosine of the angle between the second vector and each third vector; and determining the successfully matched points based on each determined cosine value and first distance. The successfully matched points are counted to obtain the first number. In this embodiment, for each point in the container vector2ⱼ, the closest point can be found in the kd-tree built from the object model. One or more indices, such as distance, normal-vector angle, or difference of local coordinate systems, may be adopted according to the actual situation to judge whether a point is successfully matched. When distance and normal-vector angle are used, the first vector and the second vector are normal vectors.
In the above embodiment, the first distance dₜ between a transformed point qₜ and a model point oₜ can be calculated as the Euclidean distance dₜ = ‖qₜ − oₜ‖. For the normal-vector angle, the normal vector n(oₜ) of the point oₜ in the model coordinate system (i.e., the third vector) and the normal vector n(pₜ) of the corresponding point in the scene coordinate system (i.e., the first vector) can be calculated; applying the rotation of RTⱼ⁻¹ to n(pₜ) gives the normal vector n′(pₜ) after the rotation-translation transformation (i.e., the second vector). The cosine of the angle between the two vectors is then calculated as cos θₜ = (n(oₜ) · n′(pₜ)) / (‖n(oₜ)‖ · ‖n′(pₜ)‖), and the number of successfully matched points is determined from the cosine value and the first distance.
In an exemplary embodiment, determining the successfully matched points based on each determined cosine value and first distance includes: determining as successfully matched those points whose first distance is smaller than a first predetermined distance and whose cosine value is larger than a predetermined cosine value. In this embodiment, a point whose first distance dₜ is smaller than the set first predetermined distance and whose cos θₜ is larger than the predetermined cosine value is determined to be successfully matched; the count of successfully matched points is incremented by 1, and the number of successfully matched points stored in the container vector2ⱼ is recorded as n.
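The per-point match criterion can be sketched as a small predicate; the threshold values below are illustrative placeholders, not values from the patent.

```python
import numpy as np

def is_match(first_distance, scene_normal_rotated, model_normal,
             max_distance=0.005, min_cosine=0.9):
    """Judge one candidate correspondence: the nearest-neighbor distance must
    be below max_distance AND the cosine of the angle between the rotated
    scene normal (second vector) and the model normal (third vector) must
    exceed min_cosine."""
    cos_theta = np.dot(scene_normal_rotated, model_normal) / (
        np.linalg.norm(scene_normal_rotated) * np.linalg.norm(model_normal))
    return bool(first_distance < max_distance and cos_theta > min_cosine)
```

Counting the points for which this predicate holds gives n, the number of successfully matched points for one candidate matrix.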
In one exemplary embodiment, determining a target matching matrix from the initial matching matrix set based on the target number includes: sorting the initial matching matrices included in the set based on the target number to obtain a first order; sequentially performing the following operations on every initial matching matrix except the first one in the first order, to obtain a second number of process matching matrices: determining, based on the first order, the earlier initial matching matrices preceding the current one, determining a first translation vector of the current initial matching matrix and a second translation vector of each earlier matrix, and deleting the current initial matching matrix when the distance between the first translation vector and a second translation vector is smaller than a second predetermined distance; and performing iterative closest point matching on the second number of process matching matrices to determine the target matching matrix. In this embodiment, after the number of successful matches of each matrix has been determined, the initial matching matrices included in the set can be sorted by it, yielding a reliability-ordered rotation-translation matrix sequence, i.e., the first order. The first order may arrange the initial matching matrices from the highest number of successfully matched points to the lowest; the higher the number of successfully matched points, the higher the priority of the matrix. A flowchart of determining the first order is shown in fig. 5.
In the above embodiment, the distance between the first translation vector and the second translation vector may be a Euclidean distance, and both translation vectors are translations from the model coordinate system to the scene coordinate system. Traversing the matrices in order of priority from high to low, for each translation vector tⱼ the Euclidean distance d = ‖tⱼ − tᵢ‖ to the translation vector tᵢ of every RT matrix of higher priority is calculated. If d is smaller than the second predetermined distance, the RTⱼ matrix is deleted; the remaining matrices are determined to be the process matching matrices and kept in order of priority from high to low. The first n matrices of highest priority can then be selected as preliminary matching matrices according to user demand, and an accurate matching matrix from the model to the point cloud, i.e., the target matching matrix, obtained by the ICP matching method. ICP, in full Iterative Closest Point, is an algorithm that, for the two point clouds to be registered, determines corresponding point-pair sets in the point sets P and Q according to a certain criterion and computes the optimal coordinate transformation, i.e., the rotation matrix R and translation vector t, by least-squares iteration, such that an error function E(R, t) = (1/N) Σᵢ ‖qᵢ − (R·pᵢ + t)‖² is minimized.
In the above embodiment, for the coarse matching result obtained by feature matching, sorting by the coarse-matching score effectively avoids grabbing failures caused by matching errors, and accurate ICP matching refines the rotation-translation matrices so that parts are matched precisely.
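The priority-ordered deduplication by translation distance can be sketched as follows; `min_separation` is an illustrative placeholder for the "second predetermined distance", and the ICP refinement step itself is omitted.

```python
import numpy as np

def deduplicate_by_translation(candidates, match_counts, min_separation=0.02):
    """Walk candidate (R, t) matrices in descending match-count order and drop
    any whose translation vector lies within min_separation (Euclidean) of an
    already-kept, higher-priority candidate. Returns kept indices, highest
    priority first."""
    order = np.argsort(-np.asarray(match_counts))
    kept = []
    for j in order:
        t_j = candidates[j][1]
        if all(np.linalg.norm(t_j - candidates[i][1]) >= min_separation
               for i in kept):
            kept.append(int(j))
    return kept
```

The surviving matrices would each then be refined by ICP against the model point cloud to obtain the target matching matrices.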
In one exemplary embodiment, determining a target pose for grabbing a first object among the target objects based on the target matching matrix includes: determining, for each matching matrix included in the target matching matrix, a third number of third points located in the first area of the target device; determining, for each matching matrix, a fourth number of points located in a second area of the target device, wherein the first area is the area of the target device other than the second area; deleting the matrices whose fourth number is smaller than a predetermined number to obtain the remaining matrices; sorting the remaining matrices from the smallest third number to the largest to obtain a second order; and determining the target pose from the matching matrices according to the second order. In this embodiment, after the target matching matrix is determined, the target pose for grabbing the first object among the target objects can be determined from it: the third number of points in the first area and the fourth number of points in the second area are counted for each matching matrix, matrices with too few points in the second area are deleted, and the rest are sorted by ascending third number. That is, the object corresponding to the matching matrix with the smallest third number is grabbed first.
Wherein the predetermined number may be 1 (this value is only an exemplary illustration, and the invention is not limited thereto; it may also be 3, 5, etc.). The first area may be the area of the target device outside a second area, where the second area is the area in which the grabbing success rate of the target device is highest. The distance between the edge of the second area and the side edge of the jaw of the target device is smaller than a predetermined distance, and the predetermined distance can be set as needed according to the jaw performance of the target device and the target object, which is not limited in the invention.
In the above embodiment, when the target object is in the second area, the grabbing success rate of the target device is the highest; therefore, the number of points of a matching matrix falling in the second area must not be 0, that is, it must be ensured that part of the object lies in the second area.
In the above embodiment, a schematic view of the first area can be seen in fig. 6. As shown in fig. 6, the first area is the area other than area D, and the second area is area D. The hatched area is the clamping jaw; in the jaw coordinate system, unlike in the camera coordinate system, the region that a point falls in can be judged simply by comparing its x, y and z axis coordinates. The region of each point is therefore judged from the ranges of the x, y and z axes: if the point is in region A, B or C, the point collides with the clamping jaw, and the collision count is increased by 1; if the point is in region E, the jaw is most likely to pick up two objects, or the grasp fails due to interference from other parts, so the collision count is likewise increased by 1. If there is no point in region D, the clamping jaw would close on empty space, and the grasping pose is directly discarded; if the number of collision points exceeds a certain threshold, the grasping pose is also discarded. The grasping poses below the threshold are sorted by the number of collision points, and poses with fewer collision points are grasped preferentially. The grasping pose with the highest priority is transformed to the robot base coordinate system through the hand-eye calibration matrix, the pose signal in the robot base coordinate system is sent to the mechanical arm, and the mechanical arm executes the grasping path. Hand-eye calibration refers to calibrating the rotation-translation transformation relation between the robot base coordinate system and the camera coordinate system; a flowchart of determining the target pose can be seen in fig. 7.
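The partition logic above can be sketched as a small scoring routine. The axis-aligned region bounds below are hypothetical placeholders (the real extents of regions A to E depend on the jaw geometry shown in fig. 6), and the whole routine is only an illustrative reading of this embodiment, not the patented implementation.

```python
import numpy as np

# Hypothetical axis-aligned bounds (lo, hi) in the jaw coordinate system for
# the partitions of fig. 6; the real bounds depend on the jaw geometry.
REGIONS = {
    "A": (np.array([-0.08, -0.04, 0.0]),  np.array([-0.04, 0.04, 0.06])),  # left finger
    "C": (np.array([0.04, -0.04, 0.0]),   np.array([0.08, 0.04, 0.06])),   # right finger
    "B": (np.array([-0.08, -0.04, 0.06]), np.array([0.08, 0.04, 0.09])),   # jaw body
    "D": (np.array([-0.02, -0.02, 0.0]),  np.array([0.02, 0.02, 0.05])),   # best-grasp area
    "E": (np.array([-0.04, -0.04, 0.0]),  np.array([0.04, 0.04, 0.06])),   # margin area
}

def in_box(points, lo, hi):
    """Boolean mask of points inside an axis-aligned box (coordinate compare only)."""
    return np.all((points >= lo) & (points <= hi), axis=1)

def score_pose(points_jaw, max_collisions=10):
    """points_jaw: (N, 3) scene points already transformed into the jaw frame.
    Returns the collision count, or None if the pose must be discarded."""
    d_mask = in_box(points_jaw, *REGIONS["D"])
    if not d_mask.any():            # no point in D: jaw would close on empty space
        return None
    collisions = 0
    for name in ("A", "B", "C"):    # direct collision with the clamping jaw
        collisions += int(in_box(points_jaw, *REGIONS[name]).sum())
    # E counts points in the margin but outside the best-grasp area D
    e_mask = in_box(points_jaw, *REGIONS["E"]) & ~d_mask
    collisions += int(e_mask.sum())
    if collisions > max_collisions:  # too many collision points: discard pose
        return None
    return collisions
```

Poses that receive a score are then sorted ascending by collision count, and the pose with the fewest collision points is attempted first.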
In this embodiment, collision analysis between the point cloud and the clamping jaw is carried out in the jaw coordinate system, so whether a point collides with the jaw can be judged directly from its x, y and z axis coordinates without complex computation. Counting the scene points in the different partitions reasonably and effectively avoids grasping two objects, closing on empty space, and grasp failures caused by collision, and sorting the poses by the number of collision points reduces the occurrence of collisions as much as possible. A reliable grasping pose can be obtained using only the point cloud information, without RGB information.
In an exemplary embodiment, before determining the third number of third points located in the first area of the target device in each of the matching matrices included in the target matching matrix, the method further includes: acquiring a first grabbing point of the target model, a first rotation matrix and a grabbing translation matrix which are predetermined and correspond to the target model; the following operations are performed for each of the matching matrices included in the target matching matrix, and the third point in the each matching matrix is determined: determining a second grabbing point of the first grabbing point in an imaging coordinate system of the imaging equipment based on the matching matrix, the first grabbing point and the grabbing translation matrix; and determining the third point of a fourth point included in the matching matrix in a grabbing coordinate system of the target equipment based on the second grabbing point and a second rotation matrix, wherein the second rotation matrix is a matrix determined according to the first rotation matrix and the matching matrix. In this embodiment, before determining the number of third points in the first area of the target device in each of the matching matrices, the points in the matching matrix may first be converted into the grabbing coordinate system. The first grabbing point of the predetermined target model, the first rotation matrix corresponding to the target model and the grabbing translation matrix can be acquired first. The first rotation matrix and the grabbing translation matrix are matrices converted from a model coordinate system to a scene coordinate system.
In the above embodiment, the second grabbing point of the first grabbing point in the imaging coordinate system may be determined from the matching matrix, the first grabbing point and the grabbing translation matrix, and the third points of the points in the matching matrix in the grabbing coordinate system may be determined from the second grabbing point and the second rotation matrix.
In one exemplary embodiment, determining a second capture point of the capture points in an imaging coordinate system of the imaging apparatus based on the matching matrix, the first capture point, and the capture translation matrix includes: determining a first product of the first grabbing point and the matching matrix, determining the sum of the first product and the grabbing translation matrix as a first coordinate, and determining a point corresponding to the first coordinate as the second grabbing point; determining, based on the second grabbing point and a second rotation matrix, the third point of the fourth point included in the matching matrix in the grabbing coordinate system of the target device includes: determining the coordinate difference between the fourth point and the second grabbing point, determining the product of the matching matrix and the first rotation matrix to obtain the second rotation matrix, determining the second product of the transpose of the second rotation matrix and the coordinate difference as a second coordinate, and determining the point corresponding to the second coordinate as the third point. In the present embodiment, the grabbing point (corresponding to the first grabbing point) in the target model is denoted x_0, and the grabbing pose consists of a rotation matrix R_x (corresponding to the first rotation matrix) and a translation matrix T_x (corresponding to the grabbing translation matrix). With M denoting the matching matrix, the grabbing point (corresponding to the second grabbing point) in the camera coordinate system (corresponding to the imaging coordinate system) is calculated as x_s = M*x_0 + T_x, and the rotation matrix (corresponding to the second rotation matrix) is R_2 = M*R_x. The coordinates of a fourth point p_s in the jaw coordinate system (corresponding to the grabbing coordinate system) are then p_j = R_2^T*(p_s - x_s). After the coordinates are determined, the points corresponding to the coordinates are determined to be the second grabbing point and the third point, respectively.
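The two coordinate transforms of this embodiment, mapping the model grabbing point into the camera frame and the scene points into the jaw frame, can be sketched with NumPy. Here `M` is assumed to be the 3x3 rotation part of the matching matrix, and the function and variable names are illustrative, not from the patent.

```python
import numpy as np

def to_jaw_frame(M, R_x, T_x, x0, scene_points):
    """Transform scene points into the jaw (grabbing) coordinate system.

    M: (3, 3) rotation part of the matching matrix (model -> camera frame)
    R_x, T_x: grasp rotation / translation defined on the model
              (first rotation matrix and grabbing translation matrix)
    x0: (3,) grabbing point on the target model (first grabbing point)
    scene_points: (N, 3) fourth points in the camera coordinate system
    """
    x_s = M @ x0 + T_x   # second grabbing point: x_s = M*x0 + T_x
    R2 = M @ R_x         # second rotation matrix: R2 = M*R_x
    # third points: p_j = R2^T * (p_s - x_s), written in row-vector form
    return (scene_points - x_s) @ R2
```

With identity rotations the routine reduces to a pure translation by the grabbing point, which is a quick sanity check on the sign conventions.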
The following describes a method for capturing an object in conjunction with the specific embodiment:
fig. 8 is a flowchart of a method for capturing an object according to an embodiment of the present invention, as shown in fig. 8, the method includes:
In step S802, candidate poses (corresponding to the initial matching matrix set) are calculated by point cloud preprocessing, feature calculation and feature coarse matching. The aim is to obtain, from the model point cloud and the scene point cloud, a large number of initial matching results that are not yet necessarily reliable. This step comprises five parts: point cloud background removal, point cloud downsampling, point cloud filtering, feature calculation and feature matching.
The main content is as follows: for the scene point cloud, background removal delimits the part region to be matched and deletes points that do not belong to the part; point cloud downsampling reduces the density of the scene and model point clouds, thereby reducing the time consumption of the algorithm; point cloud filtering reduces the noise points of the scene point cloud; and feature calculation and feature matching yield a series of initial matching poses.
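As one illustration of the downsampling part, a minimal voxel-grid downsampler can be written with NumPy alone; this is a stand-in sketch, not the point cloud library the implementation actually uses.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel.
    points: (N, 3) array; voxel_size: edge length of the cubic voxels."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()  # guard against NumPy versions that keep dims
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```

Two points falling in the same voxel are merged into their centroid, which is what reduces the density of the scene and model clouds before feature matching.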
Step S804, crop by the bounding box, count the successfully matched points, and sort the candidates by the number of successfully matched points.
In step S806, non-maximum suppression + ICP fine matching is performed: non-maximum suppression is applied by distance, and ICP fine matching is applied to the coarse matching results.
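A distance-based non-maximum suppression over candidate poses might look like the following sketch. The names and the descending-score convention are assumptions, since the patent only states that suppression is performed by the distance between translation vectors.

```python
import numpy as np

def nms_by_translation(poses, scores, min_dist):
    """poses: list of (R, t) candidates; scores: matched-point counts
    (higher is better). Walk candidates in descending score order and drop
    any pose whose translation lies within min_dist of an already kept pose."""
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        t = poses[i][1]
        if all(np.linalg.norm(t - poses[j][1]) >= min_dist for j in kept):
            kept.append(i)
    return kept  # indices of surviving candidates, best first
```

Survivors of this suppression would then be passed to ICP fine matching.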
Step S808, calculate the grabbing points, calculate the collision points, eliminate empty-grasp cases, and sort the grasping poses by the number of collision points. The purpose of this step is to generate a grasping pose in the camera coordinate system.
And step S810, obtain, through hand-eye calibration, the grasping pose of the clamping jaw in the robot coordinate system, and control the mechanical arm to grasp the part.
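This last step can be illustrated as chaining homogeneous transforms. The matrix names are assumptions, and the actual calibration procedure that produces the hand-eye matrix is outside this sketch.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def grasp_in_base(T_base_cam, T_cam_grasp):
    """T_base_cam: hand-eye calibration result (camera pose in the robot base
    frame); T_cam_grasp: grasp pose selected in the camera frame. Chaining the
    transforms yields the grasp pose in the robot base frame."""
    return T_base_cam @ T_cam_grasp
```

The resulting 4x4 matrix is what would be sent to the arm controller as the target pose.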
In the foregoing embodiment, the points near each coarse match are partitioned and inverse rotation-translation transformed, and the initial matches are sorted by the number of points satisfying conditions such as the distance being smaller than a given threshold and the normal-vector angle being smaller than a given threshold. By converting points into the jaw coordinate system, it is very convenient to determine whether a point collides, and an empty-jaw rejection mechanism is introduced. Sorting the grasping poses by the number of collision points reduces collisions as much as possible and yields a good grasping effect.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present invention.
The embodiment also provides an object capturing device, which is used for implementing the above embodiment and the preferred implementation manner, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a block diagram of a structure of an object gripping apparatus according to an embodiment of the present invention, as shown in fig. 9, the apparatus including:
an obtaining module 902, configured to obtain a scene point cloud obtained by shooting a target area by an image capturing device, where the target area includes a target object;
a first determining module 904, configured to determine an initial matching matrix set converted from a model coordinate system to a scene coordinate system based on scene features of the scene point cloud and model features of the model point cloud, where the model point cloud is a point cloud of a target model of the target object, the model coordinate system is a coordinate system of the model point cloud, the scene coordinate system is a coordinate system of the scene point cloud, and one scene point cloud of the target object corresponds to one initial matching matrix included in the initial matching matrix set;
a second determining module 906, configured to determine a target number of points successfully matched with the model point cloud in each of the initial matching matrices included in the initial matching matrix set, and determine a target matching matrix from the initial matching matrix set based on the target number;
a third determining module 908, configured to determine, based on the target matching matrix, a target pose for capturing a first object included in the target object, where a scene point cloud of the first object corresponds to the target matching matrix;
and the grabbing module 910 is configured to control a target device to grab the first object according to the target pose.
In an exemplary embodiment, the second determining module 906 may determine the target number of points in each of the initial matching matrices included in the set of initial matching matrices that successfully match the points in the model point cloud by: determining a first bounding box of the target model; converting each initial matching matrix into the model coordinate system to obtain a first matrix set; determining a first point within the first bounding box in each matrix included in the first set of matrices; converting the first point into the model coordinate system based on the first matrix set to obtain a second point; determining a first number of points included in the second point that successfully match points in a K-dimensional tree of the target model; the first number is determined as the target number.
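The bounding-box part of this module, keeping only the first points, i.e. scene points that fall inside the model's first bounding box, can be sketched as follows. An axis-aligned box is assumed, and the points are presumed already transformed into the model coordinate system.

```python
import numpy as np

def crop_to_bbox(points, bbox_min, bbox_max):
    """Return the subset of points lying inside the axis-aligned bounding box
    [bbox_min, bbox_max] of the target model."""
    mask = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    return points[mask]
```

Only the cropped subset is then checked against the model's K-dimensional tree, which keeps the match counting cheap.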
In one exemplary embodiment, the second determining module 906 may determine the first number of points included in the second point that successfully match points in the K-dimensional tree of the target model by: the following is performed for each sub-point included in the second point: determining a first distance between the sub-point and each point included in the K-dimensional tree, determining a first vector of the sub-point in the scene coordinate system, converting the first vector into the model coordinate system to obtain a second vector, determining a third vector of each point included in the K-dimensional tree in the model coordinate system, respectively determining cosine values of included angles between the second vector and each third vector, and determining a successfully matched point based on each cosine value and the first distance which are respectively determined; and counting the successful points of the matching to obtain the first quantity.
In an exemplary embodiment, the second determining module 906 may determine the point at which the matching is successful based on each of the cosine values and the first distances determined separately by: determining the point with the first distance smaller than a first preset distance and the cosine value larger than a preset cosine value as the successfully matched point.
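A brute-force sketch of the per-point criterion follows. It takes matching success to mean nearest-neighbor distance below a threshold and normal angle below a threshold (i.e. cosine above a threshold), in line with the sorting criterion described earlier; the linear scan merely stands in for the K-dimensional tree lookup, and the parameter values are illustrative.

```python
import numpy as np

def count_matches(scene_pts, scene_normals, model_pts, model_normals,
                  max_dist=0.005, min_cos=0.9):
    """Count scene points that have a model point within max_dist whose
    normal agrees (cosine of the included angle above min_cos)."""
    matched = 0
    for p, n in zip(scene_pts, scene_normals):
        d = np.linalg.norm(model_pts - p, axis=1)  # distances to all model points
        j = int(np.argmin(d))                      # nearest model point
        if d[j] < max_dist and np.dot(n, model_normals[j]) > min_cos:
            matched += 1
    return matched
```

The returned count is what the module uses as the target number when ranking the initial matching matrices.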
In one exemplary embodiment, the second determining module 906 may determine the target matching matrix from the initial set of matching matrices based on the target number by: sorting the initial matching matrixes included in the initial matching matrix set based on the target number to obtain a first order; the following operations are sequentially executed on all the initial matching matrixes except for the first initial matching matrix in the first sequence, so as to obtain a second number of process matching matrixes: determining a first initial matching matrix positioned before the initial matching matrix based on the first sequence, determining a first translation vector of the initial matching matrix and a second translation vector of the first initial matching matrix, and deleting the initial matching matrix when the distance between the first translation vector and the second translation vector is smaller than a second preset distance; and carrying out iterative nearby point matching on the second number of process matching matrixes to determine the target matching matrix.
In an exemplary embodiment, the third determining module 908 may determine, based on the target matching matrix, a target pose for grabbing the first object included in the target object by: determining a third number of third points located in the first area of the target device in each of the matching matrices included in the target matching matrix; determining a fourth number of points located in a second area of the target device in each of the matching matrices included in the target matching matrix, wherein the second area is an area of the target device other than the first area; deleting the matrixes with the fourth number smaller than the preset number included in the target matching matrixes to obtain residual matrixes; sorting the residual matrixes according to the order from small to large based on the third quantity to obtain a second order; and determining the target pose based on the matching matrix according to the second sequence.
In an exemplary embodiment, the apparatus may be configured to obtain a predetermined first grabbing point of the target model and a first rotation matrix and a grabbing translation matrix corresponding to the target model before determining a third number of third points located in the first area of the target device in each of the matching matrices included in the target matching matrix; the following operations are performed for each of the matching matrices included in the target matching matrix, and the third point in the each matching matrix is determined: determining a second grabbing point of the first grabbing point in an imaging coordinate system of the imaging equipment based on the matching matrix, the first grabbing point and the grabbing translation matrix; and determining the third point of a fourth point included in the matching matrix in a grabbing coordinate system of the target equipment based on the second grabbing point and a second rotation matrix, wherein the second rotation matrix is a matrix determined according to the first rotation matrix and the matching matrix.
In an exemplary embodiment, the apparatus may determine the second capture point of the capture point in the imaging coordinate system of the imaging device based on the matching matrix, the first capture point, and the capture translation matrix by: determining a first product of the first grabbing point and the matching matrix, determining the sum of the first product and the grabbing translation matrix as a first coordinate, and determining a point corresponding to the first coordinate as the second grabbing point; the apparatus may determine, based on the second grabbing point and a second rotation matrix, the third point of a fourth point included in the matching matrix in a grabbing coordinate system of the target device by: and determining the coordinate difference between the fourth point and the second grabbing point, determining the product of the matching matrix and the first rotation matrix to obtain the second rotation matrix, determining the second product of the transpose of the second rotation matrix and the coordinate difference as a second coordinate, and determining the point corresponding to the second coordinate as the third point.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, wherein the computer program when executed by a processor implements the steps of the method described in any of the above.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic apparatus may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be concentrated on a single computing device, or distributed across a network of computing devices, they may be implemented in program code executable by computing devices, so that they may be stored in a storage device for execution by computing devices, and in some cases, the steps shown or described may be performed in a different order than that shown or described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple modules or steps of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method of capturing an object, comprising:
acquiring a scene point cloud obtained by shooting a target area by using camera equipment, wherein the target area comprises a target object;
determining an initial matching matrix set converted from a model coordinate system to a scene coordinate system based on scene characteristics of the scene point cloud and model characteristics of the model point cloud, wherein the model point cloud is a point cloud of a target model of the target object, the model coordinate system is a coordinate system of the model point cloud, the scene coordinate system is a coordinate system of the scene point cloud, and the scene point cloud of one target object corresponds to one initial matching matrix included in the initial matching matrix set;
determining the target number of successfully matched points in the model point cloud in each initial matching matrix included in the initial matching matrix set, and determining a target matching matrix from the initial matching matrix set based on the target number;
determining, based on the target matching matrix, a target pose for capturing a first object included in the target object, wherein a scene point cloud of the first object corresponds to the target matching matrix;
and controlling the target equipment to grasp the first object according to the target pose.
2. The method of claim 1, wherein determining a target number of points in each of the initial matching matrices included in the set of initial matching matrices that successfully match points in the model point cloud comprises:
determining a first bounding box of the target model;
converting each initial matching matrix into the model coordinate system to obtain a first matrix set;
determining a first point within the first bounding box in each matrix included in the first set of matrices;
converting the first point into the model coordinate system based on the first matrix set to obtain a second point;
determining a first number of points included in the second point that successfully match points in a K-dimensional tree of the target model;
the first number is determined as the target number.
3. The method of claim 2, wherein determining a first number of points included in the second point that successfully match points in a K-dimensional tree of the target model comprises:
The following is performed for each sub-point included in the second point: determining a first distance between the sub-point and each point included in the K-dimensional tree, determining a first vector of the sub-point in the scene coordinate system, converting the first vector into the model coordinate system to obtain a second vector, determining a third vector of each point included in the K-dimensional tree in the model coordinate system, respectively determining cosine values of included angles between the second vector and each third vector, and determining a successfully matched point based on each cosine value and the first distance which are respectively determined;
and counting the successful points of the matching to obtain the first quantity.
4. A method according to claim 3, wherein determining the point at which the match was successful based on each of the cosine values and the first distances determined separately comprises:
and determining the point with the first distance smaller than a first preset distance and the cosine value larger than a preset cosine value as the successful matching point.
5. The method of claim 1, wherein determining a target match matrix from the set of initial match matrices based on the target number comprises:
sorting the initial matching matrixes included in the initial matching matrix set based on the target number to obtain a first order;
the following operations are sequentially executed on all the initial matching matrixes except for the first initial matching matrix in the first sequence, so as to obtain a second number of process matching matrixes: determining a first initial matching matrix positioned before the initial matching matrix based on the first sequence, determining a first translation vector of the initial matching matrix and a second translation vector of the first initial matching matrix, and deleting the initial matching matrix when the distance between the first translation vector and the second translation vector is smaller than a second preset distance;
and carrying out iterative nearby point matching on the second number of process matching matrixes to determine the target matching matrix.
6. The method of claim 1, wherein determining a target pose for grabbing a first object included in the target object based on the target matching matrix comprises:
determining a third number of third points located in the first area of the target device in each of the matching matrices included in the target matching matrix;
determining a fourth number of points located in a second area of the target device in each of the matching matrices included in the target matching matrix, wherein the second area is an area of the target device other than the first area;
deleting the matrixes with the fourth number smaller than the preset number included in the target matching matrixes to obtain residual matrixes;
sorting the residual matrixes according to the order from small to large based on the third quantity to obtain a second order;
and determining the target pose based on the matching matrix according to the second sequence.
7. The method of claim 6, wherein prior to determining a third number of third points in each of the matching matrices included in the target matching matrix that are located in the first region of the target device, the method further comprises:
acquiring a first grabbing point of the target model, a first rotation matrix and a grabbing translation matrix which are predetermined and correspond to the target model;
the following operations are performed for each of the matching matrices included in the target matching matrix, and the third point in the each matching matrix is determined:
determining a second grabbing point of the first grabbing point in an imaging coordinate system of the imaging equipment based on the matching matrix, the first grabbing point and the grabbing translation matrix;
And determining the third point of a fourth point included in the matching matrix in a grabbing coordinate system of the target equipment based on the second grabbing point and a second rotation matrix, wherein the second rotation matrix is a matrix determined according to the first rotation matrix and the matching matrix.
8. The method of claim 7, wherein the step of determining the position of the probe is performed,
determining a second capture point of the capture point in an imaging coordinate system of the imaging apparatus based on the matching matrix, the first capture point, and the capture translation matrix includes: determining a first product of the first grabbing point and the matching matrix, determining the sum of the first product and the grabbing translation matrix as a first coordinate, and determining a point corresponding to the first coordinate as the second grabbing point;
determining, based on the second grabbing point and a second rotation matrix, the third point of a fourth point included in the matching matrix in the grabbing coordinate system of the target device includes: and determining the coordinate difference between the fourth point and the second grabbing point, determining the product of the matching matrix and the first rotation matrix to obtain the second rotation matrix, determining the second product of the transpose of the second rotation matrix and the coordinate difference as a second coordinate, and determining the point corresponding to the second coordinate as the third point.
9. An object gripping device, comprising:
the acquisition module is used for acquiring a scene point cloud obtained by shooting a target area by the camera equipment, wherein the target area comprises a target object;
the first determining module is configured to determine an initial matching matrix set converted from a model coordinate system to a scene coordinate system based on scene features of the scene point cloud and model features of the model point cloud, where the model point cloud is a point cloud of a target model of the target object, the model coordinate system is a coordinate system of the model point cloud, the scene coordinate system is a coordinate system of the scene point cloud, and one scene point cloud of the target object corresponds to one initial matching matrix included in the initial matching matrix set;
the second determining module is used for determining the target number of successfully matched points in the model point cloud in each initial matching matrix included in the initial matching matrix set, and determining a target matching matrix from the initial matching matrix set based on the target number;
a third determining module, configured to determine, based on the target matching matrix, a target pose for capturing a first object included in the target object, where a scene point cloud of the first object corresponds to the target matching matrix;
And the grabbing module is used for controlling the target equipment to grab the first object according to the target pose.
10. A computer-readable storage medium, wherein a computer program is stored in the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
11. An electronic device, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to perform the method according to any one of claims 1 to 8.
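The matrix-selection step performed by the second determining module can be illustrated with a minimal sketch: each candidate matching matrix transforms the model points into the scene coordinate system, the model points that land within a tolerance of some scene point are counted, and the candidate with the largest count is taken as the target matching matrix. All names, the tolerance value, and the 4×4 homogeneous-matrix convention are assumptions made for illustration, not details from the patent.

```python
# Hypothetical sketch of selecting the target matching matrix by matched-point count.
import math

def transform(matrix, point):
    """Apply a 4x4 homogeneous matching matrix to a 3-D point."""
    x, y, z = point
    return [matrix[i][0] * x + matrix[i][1] * y + matrix[i][2] * z + matrix[i][3]
            for i in range(3)]

def count_matched(matrix, model_points, scene_points, tol=0.01):
    """Count model points that land within `tol` of some scene point."""
    n = 0
    for p in model_points:
        q = transform(matrix, p)
        if any(math.dist(q, s) <= tol for s in scene_points):
            n += 1
    return n

def target_matching_matrix(candidates, model_points, scene_points, tol=0.01):
    """Select the candidate matrix with the largest number of matched points."""
    return max(candidates, key=lambda m: count_matched(m, model_points, scene_points, tol))
```

A brute-force nearest-point search is used here for clarity; a practical system would use a spatial index such as a k-d tree for the scene point cloud.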
CN202111496594.6A 2021-12-08 2021-12-08 Object grabbing method and device, storage medium and electronic device Active CN114241286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111496594.6A CN114241286B (en) 2021-12-08 2021-12-08 Object grabbing method and device, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN114241286A CN114241286A (en) 2022-03-25
CN114241286B true CN114241286B (en) 2024-04-12

Family

ID=80754133

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111496594.6A Active CN114241286B (en) 2021-12-08 2021-12-08 Object grabbing method and device, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN114241286B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115284279A (en) * 2022-06-21 2022-11-04 福建(泉州)哈工大工程技术研究院 Mechanical arm grabbing method and device based on aliasing workpiece and readable medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN110340891A (en) * 2019-07-11 2019-10-18 河海大学常州校区 Mechanical arm positioning grasping system and method based on cloud template matching technique
CN112476434A (en) * 2020-11-24 2021-03-12 新拓三维技术(深圳)有限公司 Visual 3D pick-and-place method and system based on cooperative robot
WO2021082229A1 (en) * 2019-10-31 2021-05-06 深圳市商汤科技有限公司 Data processing method and related device
CN113610921A (en) * 2021-08-06 2021-11-05 沈阳风驰软件股份有限公司 Hybrid workpiece grabbing method, device and computer-readable storage medium

Non-Patent Citations (1)

Title
Research on a high-precision 3D vision-guided grasping system based on time-domain coded structured light; Kong Lingsheng; Cui Xining; Guo Junguang; Song Zhan; Sun Hongyu; Journal of Integration Technology (集成技术); 30 April 2020 (Issue 02); pp. 38-49 *

Also Published As

Publication number Publication date
CN114241286A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN109483573B (en) Machine learning device, robot system, and machine learning method
DE102019009206B4 (en) Robot system with dynamic packing mechanism
CN112837371B (en) Object grabbing method and device based on 3D matching and computing equipment
CN108044627B (en) Method and device for detecting grabbing position and mechanical arm
JP5787642B2 (en) Object holding device, method for controlling object holding device, and program
CN112109086B (en) Grabbing method for industrial stacked parts, terminal equipment and readable storage medium
CN112802105A (en) Object grabbing method and device
US20170151672A1 (en) Workpiece position/posture calculation system and handling system
CN113610921A (en) Hybrid workpiece grabbing method, device and computer-readable storage medium
Herakovic Robot vision in industrial assembly and quality control processes
CN112828892B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN112847375B (en) Workpiece grabbing method and device, computer equipment and storage medium
CN110395515B (en) Cargo identification and grabbing method and equipment and storage medium
CN114241286B (en) Object grabbing method and device, storage medium and electronic device
CN113524187B (en) Method and device for determining workpiece grabbing sequence, computer equipment and medium
CN114310892B (en) Object grabbing method, device and equipment based on point cloud data collision detection
CN112936257A (en) Workpiece grabbing method and device, computer equipment and storage medium
CN115321090B (en) Method, device, equipment, system and medium for automatically receiving and taking luggage in airport
CN113894058A (en) Quality detection and sorting method and system based on deep learning and storage medium
CN113538576A (en) Grabbing method and device based on double-arm robot and double-arm robot
CN114800533B (en) Sorting control method and system for industrial robot
CN116175542B (en) Method, device, electronic equipment and storage medium for determining clamp grabbing sequence
Luo et al. Vision-based 3-D object pick-and-place tasks of industrial manipulator
CN110253575B (en) Robot grabbing method, terminal and computer readable storage medium
CN114972495A (en) Grabbing method and device for object with pure plane structure and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant