CN112060087A - Point cloud collision detection method for robot to grab scene
- Publication number
- CN112060087A (application number CN202010885649.1A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- vertex
- point
- robot
- bounding box
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/70 — Image analysis; determining position or orientation of objects or cameras
- B25J9/163 — Programme-controlled manipulators; programme controls characterised by the control loop: learning, adaptive, model-based, rule-based expert control
- B25J9/1676 — Programme controls characterised by safety, monitoring, diagnostics; avoiding collision or forbidden zones
- G06T1/0014 — General purpose image data processing; image feed-back for automatic industrial control, e.g. robot with camera
- G06T17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
- G06T2207/30108, G06T2207/30164 — Subject of image: industrial image inspection; workpiece; machine component
- G06T2210/12 — Indexing scheme for image generation or computer graphics: bounding box
- G06T2210/21 — Indexing scheme for image generation or computer graphics: collision detection, intersection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Manipulator (AREA)
Abstract
The invention provides a point cloud collision detection method for robot grabbing scenes, comprising the following steps. S1: construct a bounding box model of the robot clamping jaw and acquire point cloud data of the workpiece area. S2: establish a robot coordinate system, vertex coordinate systems and point coordinate systems, and obtain the corresponding homogeneous transformation matrices. S3: from the homogeneous transformation matrices, obtain the coordinates of each vertex of the bounding box model and each point of the point cloud in the robot coordinate system. S4: judge the relation between each point of the point cloud and the bounding box model to obtain the number of cloud points inside the bounding box model. S5: compare that number with a preset threshold, thereby detecting in advance whether the clamping jaw would collide with an actual object. The method solves the problem that, when an existing robot grabs workpieces, the clamping jaw can touch other workpieces and the resulting collision makes the grab unstable.
Description
Technical Field
The invention relates to the technical field of robot vision, and in particular to a point cloud collision detection method for robot grabbing scenes.
Background
With rising labour costs and the development of robotics and computer vision, robots are used in a growing proportion of production processes. 3D-vision-guided robot grabbing is a key technology for intelligent robotic production. At present, owing to the complexity of production environments and the instability of 3D visual recognition, the vision system may select workpieces at the bottom of a pile or grab positions that interfere with neighbouring workpieces; the clamping jaw then touches other workpieces during grabbing, causing collisions and unstable grabs.
In the prior art, Chinese patent CN110000793A, published on 12 July 2019, discloses a robot motion control method and device, a storage medium and a robot: a three-dimensional model of the workpiece is built from a set of three-dimensional point cloud images of the workpiece; a motion trajectory is planned online from this model; and while the robot moves along the trajectory, it is detected whether the robot collides. It does not, however, perform collision detection by counting the point cloud points within a given region.
Disclosure of Invention
The invention provides a point cloud collision detection method for robot grabbing scenes, aiming at overcoming the technical defect that, when an existing robot grabs workpieces, the clamping jaw can touch other workpieces and the resulting collision makes the grab unstable.
In order to solve this technical problem, the technical scheme of the invention is as follows:
A point cloud collision detection method for robot grabbing scenes comprises the following steps:
S1: constructing a bounding box model of the robot clamping jaw, and collecting point cloud data of the workpiece area through the robot camera;
S2: establishing a robot coordinate system, a vertex coordinate system for each vertex of the bounding box model and a point coordinate system for each point of the point cloud, and acquiring the homogeneous transformation matrices of each vertex coordinate system and each point coordinate system in the robot coordinate system;
S3: acquiring the coordinates of each vertex of the bounding box model and of each point of the point cloud in the robot coordinate system from the homogeneous transformation matrices;
S4: judging the relation between each point of the point cloud and the bounding box model from the coordinates of the vertices and points in the robot coordinate system, to obtain the number of points of the point cloud inside the bounding box model;
S5: comparing the number of points of the point cloud inside the bounding box model with a preset threshold:
if the number of points of the point cloud inside the bounding box model is smaller than the preset threshold, the clamping jaw will not collide with an actual object; otherwise, the clamping jaw may collide with an actual object.
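The following NumPy sketch strings steps S1-S5 together; it is illustrative only, not code from the patent, and the function names, array layouts, frame labels T12/T13 and the default threshold are all assumptions:

```python
import numpy as np

def transform_points(T, pts):
    """Apply a 4x4 homogeneous transform to an (M, 3) array of points (S3)."""
    return pts @ T[:3, :3].T + T[:3, 3]

def detect_grasp_collision(cloud_cam, boxes_jaw, T13, T12, threshold=10):
    """S1-S5 end to end.
    cloud_cam : (M, 3) cloud points in the camera frame (S1);
    boxes_jaw : list of (4, 3) arrays holding the first vertex of each finger
                box and its three neighbours p2, p3, p4, in the jaw frame (S1);
    T13, T12  : camera frame and jaw frame expressed in the robot frame (S2).
    """
    cloud = transform_points(T13, cloud_cam)           # S3: robot frame
    for box in boxes_jaw:
        p1, p2, p3, p4 = transform_points(T12, box)    # S3: robot frame
        v12, v13, v14 = p2 - p1, p3 - p1, p4 - p1
        inside = np.ones(len(cloud), dtype=bool)       # S4: point-in-box test
        for n, q in ((np.cross(v12, v13), p4),         # x planes
                     (np.cross(v12, v14), p3),         # y planes
                     (np.cross(v13, v14), p2)):        # z planes
            inside &= ((cloud - p1) @ n) * ((cloud - q) @ n) < 0
        if inside.sum() >= threshold:                  # S5: threshold test
            return True                                # possible collision
    return False                                       # grasp looks safe
```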
Preferably, in step S1, the bounding box model of the robot clamping jaw is constructed as follows: the N clamping fingers of the robot clamping jaw are simplified into N cuboid models by the bounding box method, correspondingly yielding N bounding box models, where N ≥ 2.
Preferably, step S2 further includes establishing a jaw coordinate system and a camera coordinate system.
Preferably, in step S2, the method further includes the steps of:
acquiring a homogeneous transformation matrix of a clamping jaw coordinate system under a robot coordinate system through a robot demonstrator;
acquiring a homogeneous transformation matrix of a camera coordinate system in a robot coordinate system through a hand-eye calibration matrix of the robot;
acquiring a homogeneous transformation matrix of each vertex coordinate system under a clamping jaw coordinate system through actual measurement;
and photographing by a robot camera to obtain a homogeneous transformation matrix of each point coordinate system in a camera coordinate system.
Preferably, in step S2, a homogeneous transformation matrix of each vertex coordinate system in the robot coordinate system is obtained according to the homogeneous transformation matrix of each vertex coordinate system in the jaw coordinate system and the homogeneous transformation matrix of the jaw coordinate system in the robot coordinate system;
and obtaining a homogeneous transformation matrix of each point coordinate system in the robot coordinate system according to the homogeneous transformation matrix of each point coordinate system in the camera coordinate system and the homogeneous transformation matrix of the camera coordinate system in the robot coordinate system.
Preferably, step S3 specifically includes: obtaining the coordinates of each vertex of the bounding box model in the robot coordinate system according to the homogeneous transformation matrix of each vertex coordinate system in the robot coordinate system; and acquiring the coordinates of each point of the point cloud in the robot coordinate system according to the homogeneous transformation matrix of each point coordinate system in the robot coordinate system.
Preferably, in step S4, the relation between a point of the point cloud and the bounding box model is judged from the relation between the point and the three pairs of parallel planes of the bounding box model;
if the point lies between all three pairs of parallel planes of the bounding box model, the point is inside the bounding box model; otherwise, the point is outside the bounding box model.
Preferably, the relation between a point of the point cloud and a pair of parallel planes is obtained by computing the angles between the normal of the planes and the vectors formed by the point and two vertices of the bounding box model;
if both of these angles are acute, or both are obtuse, the point lies outside the pair of planes; otherwise the point lies between the pair of planes.
Preferably, the steps of determining the relation between a point of the point cloud and the parallel planes, by computing the angles between the plane normals of the bounding box model and the vectors formed by the point and the vertices of the bounding box model, are as follows:
S4.1: select a vertex of the bounding box model as the first vertex, and let the three vertices connected to it be the second vertex, the third vertex and the fourth vertex respectively;
the three pairs of parallel planes of the bounding box model are a pair of parallel x planes, a pair of parallel y planes and a pair of parallel z planes; the first and second vertices lie on different z planes, the first and third vertices lie on different y planes, and the first and fourth vertices lie on different x planes;
S4.2: obtain the coordinates of the first, second, third and fourth vertices in the robot coordinate system, denoted p1, p2, p3 and p4 respectively;
S4.3: compute the vector from the first vertex to the second vertex, v12 = p2 − p1; the vector from the first vertex to the third vertex, v13 = p3 − p1; and the vector from the first vertex to the fourth vertex, v14 = p4 − p1;
S4.4: cross-multiply v12, v13 and v14 pairwise to obtain, in the robot coordinate system, the normal vector vx of the x planes, the normal vector vy of the y planes and the normal vector vz of the z planes of the bounding box model:
vx = v12 × v13;
vy = v12 × v14;
vz = v13 × v14;
S4.5: let the coordinate of a point A of the point cloud in the robot coordinate system be pA, and compute the vectors from the four vertices to point A: vp1 = pA − p1, vp2 = pA − p2, vp3 = pA − p3 and vp4 = pA − p4;
S4.6: compute the dot products of vx with vp1 and vp4: if vx·vp1 and vx·vp4 have the same sign, point A lies outside the parallel x planes; if they have opposite signs, point A lies between the parallel x planes;
compute the dot products of vy with vp1 and vp3: if vy·vp1 and vy·vp3 have the same sign, point A lies outside the parallel y planes; if they have opposite signs, point A lies between the parallel y planes;
compute the dot products of vz with vp1 and vp2: if vz·vp1 and vz·vp2 have the same sign, point A lies outside the parallel z planes; if they have opposite signs, point A lies between the parallel z planes.
Preferably, in step S5, if the number of points of the point cloud inside each of the N bounding box models is less than the preset threshold, the clamping jaw will not collide with an actual object; otherwise, the clamping jaw may collide with an actual object.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
the invention provides a point cloud collision detection method for a robot to grab a scene, which is characterized in that the number of points of a point cloud in an enclosing box model is obtained by judging the relation between the points of the point cloud and the enclosing box model under a robot coordinate system, so that whether a clamping jaw collides with an actual object or not is detected in advance.
Drawings
FIG. 1 is a flow chart of the steps for implementing the technical solution of the present invention;
fig. 2 is a schematic diagram of how the normal angles of step S4 determine the relation between a point and a pair of parallel planes.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
Example 1
As shown in fig. 1, a point cloud collision detection method for robot grabbing scenes comprises the following steps:
S1: constructing a bounding box model of the robot clamping jaw, and collecting point cloud data of the workpiece area through the robot camera;
S2: establishing a robot coordinate system, a vertex coordinate system for each vertex of the bounding box model and a point coordinate system for each point of the point cloud, and acquiring the homogeneous transformation matrices of each vertex coordinate system and each point coordinate system in the robot coordinate system;
S3: acquiring the coordinates of each vertex of the bounding box model and of each point of the point cloud in the robot coordinate system from the homogeneous transformation matrices;
S4: judging the relation between each point of the point cloud and the bounding box model from the coordinates of the vertices and points in the robot coordinate system, to obtain the number of points of the point cloud inside the bounding box model;
S5: comparing the number of points of the point cloud inside the bounding box model with a preset threshold:
if the number of points of the point cloud inside the bounding box model is smaller than the preset threshold, the clamping jaw will not collide with an actual object; otherwise, the clamping jaw may collide with an actual object.
In a specific implementation, a grab point is selected for testing; the number of cloud points inside the bounding box model is obtained by judging the relation between the points of the point cloud and the bounding box model in the robot coordinate system, so that whether the clamping jaw would collide with an actual object when grabbing at this point is detected in advance.
More specifically, in step S1, the bounding box model of the robot clamping jaw is constructed as follows: the N clamping fingers of the robot clamping jaw are simplified into N cuboid models by the bounding box method, correspondingly yielding N bounding box models, where N ≥ 2.
In a practical implementation, the robot clamping jaw is generally a two-finger jaw, and its two clamping fingers are parallel.
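As a minimal sketch of this construction, the eight vertices of each finger's cuboid can be generated in the clamping jaw coordinate system as below; the centre offsets and edge lengths are invented for illustration and would come from the actual jaw geometry:

```python
import numpy as np

def finger_box(center, size):
    """Eight corner points of an axis-aligned cuboid in the jaw frame,
    centred at `center` with edge lengths `size`; returns an (8, 3) array."""
    c = np.asarray(center, dtype=float)
    half = np.asarray(size, dtype=float) / 2.0
    signs = np.array([(sx, sy, sz) for sx in (-1, 1)
                                   for sy in (-1, 1)
                                   for sz in (-1, 1)], dtype=float)
    return c + signs * half

# Two parallel fingers of a two-finger jaw, offset along the jaw x axis
# (all dimensions here are assumptions, not values from the patent).
boxes = [finger_box(( 0.04, 0.0, 0.05), (0.02, 0.03, 0.10)),
         finger_box((-0.04, 0.0, 0.05), (0.02, 0.03, 0.10))]
```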
More specifically, step S2 further includes establishing a clamping jaw coordinate system and a camera coordinate system.
In a specific implementation, the clamping jaw coordinate system and the camera coordinate system are both established according to the robot's own configuration.
More specifically, in step S2, the method further includes the steps of:
acquiring a homogeneous transformation matrix of a clamping jaw coordinate system under a robot coordinate system through a robot demonstrator;
acquiring a homogeneous transformation matrix of a camera coordinate system in a robot coordinate system through a hand-eye calibration matrix of the robot;
acquiring a homogeneous transformation matrix of each vertex coordinate system under a clamping jaw coordinate system through actual measurement;
and photographing by a robot camera to obtain a homogeneous transformation matrix of each point coordinate system in a camera coordinate system.
In a practical implementation, the rotation part of the homogeneous transformation matrix of each vertex coordinate system in the clamping jaw coordinate system is an identity matrix, and the rotation part of the homogeneous transformation matrix of each point coordinate system in the camera coordinate system is likewise an identity matrix, since a frame attached to a bare point carries no orientation of its own.
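A one-function sketch of that observation: the homogeneous transform of a point frame reduces to a pure translation with an identity rotation block (the offset value below is an assumption):

```python
import numpy as np

def point_frame(translation):
    """Homogeneous transform of a point frame: identity rotation block,
    translation in the last column."""
    T = np.eye(4)
    T[:3, 3] = translation
    return T

T_vertex_in_jaw = point_frame([0.04, 0.0, 0.05])  # assumed vertex offset
```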
More specifically, in step S2, a homogeneous transformation matrix of each vertex coordinate system in the robot coordinate system is obtained according to the homogeneous transformation matrix of each vertex coordinate system in the jaw coordinate system and the homogeneous transformation matrix of the jaw coordinate system in the robot coordinate system;
and obtaining a homogeneous transformation matrix of each point coordinate system in the robot coordinate system according to the homogeneous transformation matrix of each point coordinate system in the camera coordinate system and the homogeneous transformation matrix of the camera coordinate system in the robot coordinate system.
In a specific implementation, let the homogeneous transformation matrix of a vertex coordinate system in the clamping jaw coordinate system be T24 and the homogeneous transformation matrix of the clamping jaw coordinate system in the robot coordinate system be T12; the homogeneous transformation matrix of the vertex coordinate system in the robot coordinate system is then T14 = T12·T24. Similarly, let the homogeneous transformation matrix of a point coordinate system in the camera coordinate system be T35 and the homogeneous transformation matrix of the camera coordinate system in the robot coordinate system be T13; the homogeneous transformation matrix of the point coordinate system in the robot coordinate system is then T15 = T13·T35.
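In matrix form the chaining above is a single multiplication; the sketch below uses made-up values for T12 and T24 and reads the vertex position in the robot frame out of the translation column, as used in step S3:

```python
import numpy as np

def rot_z(theta):
    """Rotation about z, used to give the jaw frame some orientation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

T12 = np.eye(4)                 # jaw frame expressed in the robot frame
T12[:3, :3] = rot_z(0.5)
T12[:3, 3] = [0.4, 0.1, 0.3]

T24 = np.eye(4)                 # vertex frame expressed in the jaw frame
T24[:3, 3] = [0.04, 0.0, 0.05]  # identity rotation, as noted above

T14 = T12 @ T24                 # vertex frame expressed in the robot frame
vertex_in_robot = T14[:3, 3]    # the coordinates used in steps S3-S4
```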
More specifically, step S3 specifically includes: obtaining the coordinates of each vertex of the bounding box model in the robot coordinate system according to the homogeneous transformation matrix of each vertex coordinate system in the robot coordinate system; and acquiring the coordinates of each point of the point cloud in the robot coordinate system according to the homogeneous transformation matrix of each point coordinate system in the robot coordinate system.
More specifically, in step S4, the relation between a point of the point cloud and the bounding box model is judged from the relation between the point and the three pairs of parallel planes of the bounding box model;
if the point lies between all three pairs of parallel planes of the bounding box model, the point is inside the bounding box model; otherwise, the point is outside the bounding box model.
In a specific implementation, the six faces enclosing the bounding box model are grouped into three pairs of parallel planes.
More specifically, the relation between a point of the point cloud and a pair of parallel planes is obtained by computing the angles between the normal of the planes and the vectors formed by the point and two vertices of the bounding box model;
if both of these angles are acute, or both are obtuse, the point lies outside the pair of planes; otherwise the point lies between the pair of planes.
In a specific implementation, judging a point against the parallel planes through these normal angles (in practice, the signs of dot products) simplifies the computation and improves detection efficiency.
More specifically, the steps of determining the relation between a point of the point cloud and the parallel planes, by computing the angles between the plane normals of the bounding box model and the vectors formed by the point and the vertices of the bounding box model, are as follows:
S4.1: select a vertex of the bounding box model as the first vertex, and let the three vertices connected to it be the second vertex, the third vertex and the fourth vertex respectively;
the three pairs of parallel planes of the bounding box model are a pair of parallel x planes, a pair of parallel y planes and a pair of parallel z planes; the first and second vertices lie on different z planes, the first and third vertices lie on different y planes, and the first and fourth vertices lie on different x planes;
as shown in fig. 2, the cuboid is the bounding box model; vertices 1, 2, 3 and 4 correspond to the first, second, third and fourth vertices of the bounding box model respectively; point A1 lies between the parallel y planes and point A2 lies outside the parallel y planes;
S4.2: obtain the coordinates of the first, second, third and fourth vertices in the robot coordinate system, denoted p1, p2, p3 and p4 respectively;
S4.3: compute the vector from the first vertex to the second vertex, v12 = p2 − p1; the vector from the first vertex to the third vertex, v13 = p3 − p1; and the vector from the first vertex to the fourth vertex, v14 = p4 − p1;
S4.4: cross-multiply v12, v13 and v14 pairwise to obtain, in the robot coordinate system, the normal vector vx of the x planes, the normal vector vy of the y planes and the normal vector vz of the z planes of the bounding box model:
vx = v12 × v13;
vy = v12 × v14;
vz = v13 × v14;
S4.5: let the coordinate of a point A of the point cloud in the robot coordinate system be pA, and compute the vectors from the four vertices to point A: vp1 = pA − p1, vp2 = pA − p2, vp3 = pA − p3 and vp4 = pA − p4;
S4.6: compute the dot products of vx with vp1 and vp4: if vx·vp1 and vx·vp4 have the same sign, point A lies outside the parallel x planes; if they have opposite signs, point A lies between the parallel x planes;
compute the dot products of vy with vp1 and vp3: if vy·vp1 and vy·vp3 have the same sign, point A lies outside the parallel y planes; if they have opposite signs, point A lies between the parallel y planes;
compute the dot products of vz with vp1 and vp2: if vz·vp1 and vz·vp2 have the same sign, point A lies outside the parallel z planes; if they have opposite signs, point A lies between the parallel z planes.
In a specific implementation, if vx·vp1 and vx·vp4, vy·vp1 and vy·vp3, and vz·vp1 and vz·vp2 all have opposite signs, point A lies between all three pairs of parallel planes, i.e. point A is inside the bounding box model; otherwise, point A is outside the bounding box model.
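Steps S4.1-S4.6 translate almost line for line into code. The sketch below mirrors them one-to-one; note that a dot product of exactly zero (a point lying on a face) is treated as outside here, a boundary choice the text leaves open:

```python
import numpy as np

def point_in_bounding_box(pA, p1, p2, p3, p4):
    # S4.3: edge vectors from the first vertex
    v12, v13, v14 = p2 - p1, p3 - p1, p4 - p1
    # S4.4: pairwise cross products give the three plane normals
    vx = np.cross(v12, v13)
    vy = np.cross(v12, v14)
    vz = np.cross(v13, v14)
    # S4.5: vectors from the four vertices to point A
    vp1, vp2, vp3, vp4 = pA - p1, pA - p2, pA - p3, pA - p4
    # S4.6: opposite signs of the paired dot products mean "between"
    between_x = np.dot(vx, vp1) * np.dot(vx, vp4) < 0
    between_y = np.dot(vy, vp1) * np.dot(vy, vp3) < 0
    between_z = np.dot(vz, vp1) * np.dot(vz, vp2) < 0
    return between_x and between_y and between_z

# Unit-cube check: vertex 1 at the origin; 2, 3, 4 across the z, y, x planes.
p1, p2, p3, p4 = (np.array(v, float) for v in
                  [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)])
assert point_in_bounding_box(np.array([0.5, 0.5, 0.5]), p1, p2, p3, p4)
assert not point_in_bounding_box(np.array([1.5, 0.5, 0.5]), p1, p2, p3, p4)
```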
More specifically, in step S5, if the number of points of the point cloud inside each of the N bounding box models is less than the preset threshold, the clamping jaw will not collide with an actual object; otherwise, the clamping jaw may collide with an actual object.
In a specific implementation, if the number of cloud points in at least one bounding box model is greater than or equal to the preset threshold, the corresponding clamping finger would collide with an actual object, i.e. the clamping jaw would collide with an actual object; only when the number of cloud points in every bounding box model is below the preset threshold will none of the clamping fingers collide, and the clamping jaw will not collide with an actual object.
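Reusing point_in_bounding_box from the sketch above, the decision rule of this paragraph — any single over-threshold finger box rejects the grasp — can be sketched as:

```python
def jaw_would_collide(cloud, boxes, threshold):
    """cloud: iterable of (3,) points in the robot frame;
    boxes: N tuples (p1, p2, p3, p4); True if any box is too full."""
    for p1, p2, p3, p4 in boxes:
        hits = sum(point_in_bounding_box(pA, p1, p2, p3, p4) for pA in cloud)
        if hits >= threshold:   # this finger would press into real material
            return True
    return False                # all N boxes below threshold: safe to grab
```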
It should be understood that the above embodiments are merely examples for clearly illustrating the invention and do not limit its embodiments. Other variations and modifications will be apparent to those skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the invention shall fall within the protection scope of the claims of the invention.
Claims (10)
1. A point cloud collision detection method for robot grabbing scenes, characterized by comprising the following steps:
S1: constructing a bounding box model of the robot clamping jaw, and collecting point cloud data of the workpiece area through the robot camera;
S2: establishing a robot coordinate system, a vertex coordinate system for each vertex of the bounding box model and a point coordinate system for each point of the point cloud, and acquiring the homogeneous transformation matrices of each vertex coordinate system and each point coordinate system in the robot coordinate system;
S3: acquiring the coordinates of each vertex of the bounding box model and of each point of the point cloud in the robot coordinate system from the homogeneous transformation matrices;
S4: judging the relation between each point of the point cloud and the bounding box model from the coordinates of the vertices and points in the robot coordinate system, to obtain the number of points of the point cloud inside the bounding box model;
S5: comparing the number of points of the point cloud inside the bounding box model with a preset threshold:
if the number of points of the point cloud inside the bounding box model is smaller than the preset threshold, the clamping jaw will not collide with an actual object; otherwise, the clamping jaw may collide with an actual object.
2. The point cloud collision detection method for robot grabbing scenes of claim 1, characterized in that, in step S1, the bounding box model of the robot clamping jaw is constructed as follows: the N clamping fingers of the robot clamping jaw are simplified into N cuboid models by the bounding box method, correspondingly yielding N bounding box models, where N ≥ 2.
3. The method of claim 2, further comprising establishing a jaw coordinate system and a camera coordinate system in step S2.
4. The method of claim 3, further comprising the step of, in step S2:
acquiring a homogeneous transformation matrix of a clamping jaw coordinate system under a robot coordinate system through a robot demonstrator;
acquiring a homogeneous transformation matrix of a camera coordinate system in a robot coordinate system through a hand-eye calibration matrix of the robot;
acquiring a homogeneous transformation matrix of each vertex coordinate system under a clamping jaw coordinate system through actual measurement;
and photographing by a robot camera to obtain a homogeneous transformation matrix of each point coordinate system in a camera coordinate system.
5. The method for detecting point cloud collision of robot grabbing scene as claimed in claim 4, wherein in step S2, a homogeneous transformation matrix of each vertex coordinate system in the robot coordinate system is obtained according to the homogeneous transformation matrix of each vertex coordinate system in the jaw coordinate system and the homogeneous transformation matrix of the jaw coordinate system in the robot coordinate system;
and obtaining a homogeneous transformation matrix of each point coordinate system in the robot coordinate system according to the homogeneous transformation matrix of each point coordinate system in the camera coordinate system and the homogeneous transformation matrix of the camera coordinate system in the robot coordinate system.
6. The point cloud collision detection method for robot grabbing scenes of claim 5, characterized in that step S3 specifically comprises: obtaining the coordinates of each vertex of the bounding box model in the robot coordinate system from the homogeneous transformation matrix of each vertex coordinate system in the robot coordinate system; and obtaining the coordinates of each point of the point cloud in the robot coordinate system from the homogeneous transformation matrix of each point coordinate system in the robot coordinate system.
7. The method of claim 1, characterized in that, in step S4, the relation between a point of the point cloud and the bounding box model is judged from the relation between the point and the three pairs of parallel planes of the bounding box model;
if the point lies between all three pairs of parallel planes of the bounding box model, the point is inside the bounding box model; otherwise, the point is outside the bounding box model.
8. The method of claim 7, characterized in that the relation between a point of the point cloud and a pair of parallel planes is obtained by computing the angles between the normal of the planes and the vectors formed by the point and two vertices of the bounding box model;
if both of these angles are acute, or both are obtuse, the point lies outside the pair of planes; otherwise the point lies between the pair of planes.
9. The method of claim 8, characterized in that the steps of determining the relation between a point of the point cloud and the parallel planes, by computing the angles between the plane normals of the bounding box model and the vectors formed by the point and the vertices of the bounding box model, are as follows:
S4.1: selecting a vertex of the bounding box model as the first vertex, and letting the three vertices connected to it be the second vertex, the third vertex and the fourth vertex respectively;
the three pairs of parallel planes of the bounding box model being a pair of parallel x planes, a pair of parallel y planes and a pair of parallel z planes, the first and second vertices lying on different z planes, the first and third vertices lying on different y planes, and the first and fourth vertices lying on different x planes;
S4.2: acquiring the coordinates of the first, second, third and fourth vertices in the robot coordinate system, denoted p1, p2, p3 and p4 respectively;
S4.3: calculating the vector from the first vertex to the second vertex, v12 = p2 − p1; the vector from the first vertex to the third vertex, v13 = p3 − p1; and the vector from the first vertex to the fourth vertex, v14 = p4 − p1;
S4.4: cross-multiplying v12, v13 and v14 pairwise to obtain, in the robot coordinate system, the normal vector vx of the x planes, the normal vector vy of the y planes and the normal vector vz of the z planes of the bounding box model:
vx = v12 × v13;
vy = v12 × v14;
vz = v13 × v14;
S4.5: letting the coordinate of a point A of the point cloud in the robot coordinate system be pA, and calculating the vectors from the four vertices to point A: vp1 = pA − p1, vp2 = pA − p2, vp3 = pA − p3 and vp4 = pA − p4;
S4.6: calculating the dot products of vx with vp1 and vp4: if vx·vp1 and vx·vp4 have the same sign, point A lies outside the parallel x planes; if they have opposite signs, point A lies between the parallel x planes;
calculating the dot products of vy with vp1 and vp3: if vy·vp1 and vy·vp3 have the same sign, point A lies outside the parallel y planes; if they have opposite signs, point A lies between the parallel y planes;
calculating the dot products of vz with vp1 and vp2: if vz·vp1 and vz·vp2 have the same sign, point A lies outside the parallel z planes; if they have opposite signs, point A lies between the parallel z planes.
10. The point cloud collision detection method for robot grabbing scenes of claim 2, characterized in that, in step S5, if the number of points of the point cloud inside each of the N bounding box models is less than the preset threshold, the clamping jaw will not collide with an actual object; otherwise, the clamping jaw may collide with an actual object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010885649.1A CN112060087B (en) | 2020-08-28 | 2020-08-28 | Point cloud collision detection method for robot to grab scene |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112060087A true CN112060087A (en) | 2020-12-11 |
CN112060087B CN112060087B (en) | 2021-08-03 |
Family
ID=73659645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010885649.1A Active CN112060087B (en) | 2020-08-28 | 2020-08-28 | Point cloud collision detection method for robot to grab scene |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112060087B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003331398A (en) * | 2002-05-15 | 2003-11-21 | Toshiba Corp | Region setting method, region setting program, collision determining method and collision determining program |
CN104156520A (en) * | 2014-07-31 | 2014-11-19 | 哈尔滨工程大学 | Linear projection based convex-polyhedron collision detection method |
US10379620B2 (en) * | 2015-06-25 | 2019-08-13 | Fujitsu Limited | Finger model verification method and information processing apparatus |
CN107803831A (en) * | 2017-09-27 | 2018-03-16 | 杭州新松机器人自动化有限公司 | A kind of AOAAE bounding volume hierarchy (BVH)s collision checking method |
CN108858199A (en) * | 2018-07-27 | 2018-11-23 | 中国科学院自动化研究所 | The method of the service robot grasp target object of view-based access control model |
CN109816730A (en) * | 2018-12-20 | 2019-05-28 | 先临三维科技股份有限公司 | Workpiece grabbing method, apparatus, computer equipment and storage medium |
CN110000793A (en) * | 2019-04-29 | 2019-07-12 | 武汉库柏特科技有限公司 | A kind of motion planning and robot control method, apparatus, storage medium and robot |
CN111113428A (en) * | 2019-12-31 | 2020-05-08 | 深圳市优必选科技股份有限公司 | Robot control method, robot control device and terminal equipment |
CN111360824A (en) * | 2020-02-27 | 2020-07-03 | 中科新松有限公司 | Double-arm self-collision detection method and computer-readable storage medium |
CN111558940A (en) * | 2020-05-27 | 2020-08-21 | 佛山隆深机器人有限公司 | Robot material frame grabbing planning and collision detection method |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112720477A (en) * | 2020-12-22 | 2021-04-30 | 泉州装备制造研究所 | Object optimal grabbing and identifying method based on local point cloud model |
CN112720477B (en) * | 2020-12-22 | 2024-01-30 | 泉州装备制造研究所 | Object optimal grabbing and identifying method based on local point cloud model |
CN112802093A (en) * | 2021-02-05 | 2021-05-14 | 梅卡曼德(北京)机器人科技有限公司 | Object grabbing method and device |
CN112802093B (en) * | 2021-02-05 | 2023-09-12 | 梅卡曼德(北京)机器人科技有限公司 | Object grabbing method and device |
CN113232021A (en) * | 2021-05-19 | 2021-08-10 | 中国科学院自动化研究所苏州研究院 | Mechanical arm grabbing path collision detection method |
CN113284129A (en) * | 2021-06-11 | 2021-08-20 | 梅卡曼德(北京)机器人科技有限公司 | Box pressing detection method and device based on 3D bounding box |
CN113284129B (en) * | 2021-06-11 | 2024-06-18 | 梅卡曼德(北京)机器人科技有限公司 | 3D bounding box-based press box detection method and device |
CN113538459B (en) * | 2021-07-07 | 2023-08-11 | 重庆大学 | Multimode grabbing obstacle avoidance detection optimization method based on drop point area detection |
CN113538459A (en) * | 2021-07-07 | 2021-10-22 | 重庆大学 | Multi-mode grabbing obstacle avoidance detection optimization method based on drop point area detection |
CN113763436A (en) * | 2021-07-29 | 2021-12-07 | 广州泽亨实业有限公司 | Workpiece collision detection method based on image registration and spraying system |
CN113610921B (en) * | 2021-08-06 | 2023-12-15 | 沈阳风驰软件股份有限公司 | Hybrid workpiece gripping method, apparatus, and computer readable storage medium |
CN113610921A (en) * | 2021-08-06 | 2021-11-05 | 沈阳风驰软件股份有限公司 | Hybrid workpiece grabbing method, device and computer-readable storage medium |
CN113910235A (en) * | 2021-10-29 | 2022-01-11 | 珠海格力智能装备有限公司 | Collision detection method, device and equipment for robot to grab materials and storage medium |
CN114310892B (en) * | 2021-12-31 | 2024-05-03 | 梅卡曼德(北京)机器人科技有限公司 | Object grabbing method, device and equipment based on point cloud data collision detection |
CN114310892A (en) * | 2021-12-31 | 2022-04-12 | 梅卡曼德(北京)机器人科技有限公司 | Object grabbing method, device and equipment based on point cloud data collision detection |
CN115056215A (en) * | 2022-05-20 | 2022-09-16 | 梅卡曼德(北京)机器人科技有限公司 | Collision detection method, control method, capture system and computer storage medium |
CN114862875A (en) * | 2022-05-20 | 2022-08-05 | 中国工商银行股份有限公司 | Method and device for determining moving path of robot and electronic equipment |
CN115042171A (en) * | 2022-06-01 | 2022-09-13 | 上海交通大学 | Multi-finger under-actuated clamping jaw grabbing data set generation method |
CN115179326A (en) * | 2022-08-24 | 2022-10-14 | 广东工业大学 | Continuous collision detection method for articulated robot |
CN115179326B (en) * | 2022-08-24 | 2023-03-14 | 广东工业大学 | Continuous collision detection method for articulated robot |
CN115463845B (en) * | 2022-09-02 | 2023-10-31 | 赛那德科技有限公司 | Identification grabbing method based on dynamic package |
CN115463845A (en) * | 2022-09-02 | 2022-12-13 | 赛那德科技有限公司 | Identification and grabbing method based on dynamic wrapping |
CN115237056B (en) * | 2022-09-23 | 2022-12-13 | 佛山智能装备技术研究院 | Multi-tool rapid deviation rectifying method for industrial robot |
CN115237056A (en) * | 2022-09-23 | 2022-10-25 | 佛山智能装备技术研究院 | Multi-tool rapid deviation rectifying method for industrial robot |
Also Published As
Publication number | Publication date |
---|---|
CN112060087B (en) | 2021-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112060087B (en) | Point cloud collision detection method for robot to grab scene | |
CN113450408B (en) | Irregular object pose estimation method and device based on depth camera | |
CN113610921B (en) | Hybrid workpiece gripping method, apparatus, and computer readable storage medium | |
CN109015640B (en) | Grabbing method, grabbing system, computer device and readable storage medium | |
CN111144426B (en) | Sorting method, sorting device, sorting equipment and storage medium | |
CN114952809B (en) | Workpiece identification and pose detection method, system and mechanical arm grabbing control method | |
CN113232021B (en) | Mechanical arm grabbing path collision detection method | |
CN113379849A (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
CN113269723B (en) | Unordered grabbing system for parts with three-dimensional visual positioning and manipulator cooperative work | |
WO2022021156A1 (en) | Method and apparatus for robot to grab three-dimensional object | |
CN109213202A (en) | Cargo arrangement method, device, equipment and storage medium based on optical servo | |
CN115070781B (en) | Object grabbing method and two-mechanical-arm cooperation system | |
Wang et al. | A virtual end-effector pointing system in point-and-direct robotics for inspection of surface flaws using a neural network based skeleton transform | |
Lin et al. | Vision based object grasping of industrial manipulator | |
Chang | Binocular vision-based 3-D trajectory following for autonomous robotic manipulation | |
CN115661592B (en) | Weld joint identification method, device, computer equipment and storage medium | |
Nakano | Stereo vision based single-shot 6d object pose estimation for bin-picking by a robot manipulator | |
Sahu et al. | Shape features for image-based servo-control using image moments | |
CN115035492A (en) | Vehicle identification method, device, equipment and storage medium | |
Zhu et al. | Occlusion handling for industrial robots | |
JP2023158273A (en) | Method for recognizing position/attitude of target object held by robot hand, system and computer program | |
Sun et al. | Precise grabbing of overlapping objects system based on end-to-end deep neural network | |
JP7161857B2 (en) | Information processing device, information processing method, and program | |
Xin et al. | Real-time dynamic system to path tracking and collision avoidance for redundant robotic arms | |
Lee et al. | A CAD-Free Random Bin Picking System for Fast Changeover on Multiple Objects |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |