CN111267095B - Mechanical arm grabbing control method based on binocular vision - Google Patents

Mechanical arm grabbing control method based on binocular vision

Info

Publication number
CN111267095B
CN111267095B (application CN202010037021.6A)
Authority
CN
China
Prior art keywords
mechanical arm
coordinate system
target
target object
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010037021.6A
Other languages
Chinese (zh)
Other versions
CN111267095A (en)
Inventor
王东
杨冬
董永祥
连捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology
Priority to CN202010037021.6A
Publication of CN111267095A
Application granted
Publication of CN111267095B
Legal status: Active (current)
Anticipated expiration


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1612 Programme controls characterised by the hand, wrist, grip control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/1648 Programme controls characterised by the control loop non-linear control combined or not with linear control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Abstract

The invention relates to the technical field of kinova mechanical arms and ZED vision, and discloses a mechanical arm grabbing control method based on binocular vision, which comprises the following steps: (1) building an experimental platform; (2) recognizing the position and posture of a target object placed obliquely; (3) recognizing the position and posture of a target object placed horizontally; (4) recognizing and grabbing the target object with the mechanical arm end effector; and (5) moving the mechanical arm to a specified joint-angle state after grabbing. The invention has the following advantages. First, the attitude angle of the target object is estimated, with the deviation of the angle between the direction vector and the coordinate-axis plane within −5 to +5 degrees, greatly improving detection accuracy. Second, recognizing targets by both color and shape avoids interference from objects of the same or similar color. Third, two mechanical arms are set to grab the left and right target objects, each finally grabbing its target object in the corresponding posture.

Description

Mechanical arm grabbing control method based on binocular vision
Technical Field
The invention relates to a mechanical arm grabbing control method based on binocular vision, and belongs to the technical field of kinova mechanical arms and ZED vision.
Background
Binocular vision pose measurement is an important means of obtaining a target's pose, but measuring the pose of an obliquely placed target object remains difficult. Existing methods include the circular-section extraction method and the feature point extraction method.
Circular-section extraction: in the existing method, three industrial CCD cameras are used to extract the centers of the upper and lower circular sections of a cylindrical target object, a coordinate system is defined, and the pose is described by the angle between the line connecting the two section centers and the coordinate system.
Feature point extraction: two cases are distinguished: first, manually setting feature points; second, acquiring feature points by laser irradiation. In the first case, a pyramid prism is fixed on the surface of the target object as a feature point, and the position and posture are obtained by irradiating the prism with laser light of a certain wavelength. In the second case, feature points are created artificially by irradiating the part surface with laser rays; the accuracy of this method depends heavily on the precision and stability of the emitted laser, which increases the detection cost.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a mechanical arm grabbing control method based on binocular vision. Because the contour of a circular section cannot be extracted with high precision, circle-center extraction suffers large deviations, and in practice the circular section may also be partially occluded; these factors make the center-line method impractical for estimating the posture of a cylinder. To overcome the difficulty and low precision of section-circle extraction, the invention extracts the foreground to obtain the arc vertices of the target object's sections, estimates the target's posture from the coordinate difference of the vertices, and finally controls the mechanical arm to grab the target object as flexibly as a human hand.
In order to achieve the purpose of the invention and solve the problems existing in the prior art, the invention adopts the technical scheme that: a mechanical arm grabbing control method based on binocular vision comprises the following steps:
Step 1, an experimental platform is built: a binocular vision camera is installed 1-1.5 m above the platform, the binocular vision camera is started, the kinova mechanical arms are started, and the urdf models of the left and right mechanical arms are loaded; the models describe the position and posture relation of each joint of the kinova mechanical arm. The left mechanical arm is installed on the left-hand side of a horizontal desktop and its base coordinate system is defined as left; the right mechanical arm is installed on the right-hand side of the desktop and its base coordinate system is defined as right; the distance between the left and right mechanical arms is 0.8-1.2 m. A root coordinate system root is defined at the middle position between the left and right mechanical arms, and the world coordinate system of the binocular vision camera is defined as map. With root as the root coordinate system, the coordinate positions and postures of left, right and map are determined by translation and rotation, taking the origin of root as the central point: the rotation matrix to the left mechanical arm base coordinate system is R1 with translation matrix T1, the rotation matrix to the right mechanical arm base coordinate system is R2 with translation matrix T2, and the rotation matrix to the binocular vision camera coordinate system is R3 with translation matrix T3, described by equations (1) to (3),
left=R1*root+T1 (1)
right=R2*root+T2 (2)
map=R3*root+T3 (3)
Step 2, recognizing the position and posture of the target object when it is placed obliquely, specifically comprising the following substeps:
(a) subscribing to the RGB image published by the binocular vision camera and binarizing it; because many irrelevant objects lie outside the region of interest, a pixel frame is set manually, and the foreground inside the frame is extracted with the grabCut algorithm in opencv, with the iteration count set to 7-12;
(b) matching each extracted foreground pixel p_i(u, v), i ∈ N (N is the number of pixel points extracted from the foreground), with the depth image to obtain the corresponding world coordinate P_i = [X_W, Y_W, Z_W]^T, i ∈ N, in the world coordinate system; the N points P_i are stored in the 1st container, and the mapping from the pixel coordinate system to the world coordinates of the binocular vision camera is described by equation (4),
z·[u, v, 1]^T = [f/dx, 0, u_0; 0, f/dy, v_0; 0, 0, 1]·[X_W, Y_W, Z_W]^T (4)
wherein z is the binocular vision camera depth value, from which the world coordinates follow as X_W = (u − u_0)·dx·z/f, Y_W = (v − v_0)·dy·z/f, Z_W = z; f is the focal length of the binocular vision camera, dx is the physical distance corresponding to one horizontal pixel in mm, dy is the physical distance corresponding to one vertical pixel in mm, u_0 is the horizontal midpoint of the image pixels, and v_0 is the vertical midpoint of the image pixels. The converted world coordinates are stored in the 2nd container, the whole 2nd container is traversed, and the element whose x value is minimum among all elements is found, P1 = [X, Y, Z], X = min{X_W}; this element represents the highest vertex P1 of the two circular arcs, and P1 is taken as the central point;
(c) traversing the 2nd container with the bottle length L as the constraint condition, finding all points whose distance from the central point P1 satisfies the length-L constraint, and storing them in the 3rd container; then searching the 3rd container for the target P2 corresponding to the minimum x-axis coordinate, i.e. the highest vertex P2 of the rear arc; the direction vector v is calculated by equation (5),
v = P1 − P2 (5)
(d) calculating the center of gravity Q3 of the target object. Let point Q2 be the midpoint of P1 and P2 from substep (c), let r be the radius of the bottle cross-section, and let h be the distance from Q2 to the point Q directly below it; Q2, Q and Q3 form a right-triangle geometric relationship and satisfy Q3 − Q = k·v, k ≠ 0; vx, vy, vz are the projections of the direction vector v on the x-axis, y-axis and z-axis of the binocular vision camera world coordinate system, described by equations (6) to (8),
[Equations (6) to (8) are shown as images in the original document]
(e) the reference frame of the direction vector v is the binocular vision camera world coordinate system map; the direction vector v of the right target object is converted into the right mechanical arm base coordinate system right, giving the direction vector v2, with the conversion described by equation (9),
v2 = R_mr · v (9)
wherein R_mr denotes the rotation matrix converting the binocular vision camera world coordinate system map into the right mechanical arm base coordinate system right (from equations (2) and (3), R_mr = R2·R3^(−1)); the angles between the direction vector v2 and each coordinate plane of right are then calculated, described by equation (10),
α = arcsin(v2z/|v2|), β = arcsin(v2y/|v2|), χ = arcsin(v2x/|v2|), with |v2| = sqrt(v2x² + v2y² + v2z²) (10)
in the formula, v2x, v2y, v2z are respectively the projections of v2 on the x-axis, y-axis and z-axis under the right mechanical arm base coordinate system, α is the angle between the direction vector v2 and the x-y plane, β is the angle between the direction vector v2 and the x-z plane, and χ is the angle between the direction vector v2 and the y-z plane;
Step 3, recognizing the position and posture of the target object when it is placed horizontally, specifically comprising the following substeps:
(a) extracting a binary image of the left target object by color segmentation; because irrelevant objects of the same color are present, they are extracted into the binary image as well;
(b) in order to solve the problem of same-color objects interfering with identification of the target object, taking the shape of the target object as an auxiliary recognition condition: the extracted image is screened by shape features, and only binary regions with quadrilateral characteristics are retained;
(c) applying the polygon-approximation contour function cv::approxPolyDP() in opencv to obtain a two-dimensional array M × N, where M represents the number of polygons and N represents the set of boundary pixel points of each polygon;
(d) judging whether the target object is a quadrangle: the target object to be extracted is a quadrangle, but a certain deviation may exist in actual engineering, so the number of edges is allowed to be 3-5; contours within this range are considered to conform to the shape characteristics of the target object and are retained;
(e) obtaining all shape information with the findContours() function and storing the contours in a vector;
(f) obtaining the number of edges of each contour with the approxPolyDP() function, traversing the whole 1st container, deleting interfering objects whose edge count is not 3-5, keeping the approximately quadrilateral elements in the 1st container, drawing their frames with the polylines() function, obtaining the pixel coordinate of the center-of-gravity point, and calculating the position coordinate of the center-of-gravity point in the binocular vision camera world coordinate system by depth-image matching;
(g) after the contour information of the target object is determined, setting the four vertexes of the rectangular frame as A, B, C and D; if AB is the longest side, taking the difference of the coordinates of the vertexes of side AB as the direction vector; converting the direction vector v of the left target object into v1 under the left mechanical arm base coordinate system, and calculating the angle δ between v1 and the x-z plane of the left mechanical arm base coordinate system, described by equation (11),
δ = arcsin(v1y/|v1|), with |v1| = sqrt(v1x² + v1y² + v1z²) (11)
wherein v1x, v1y, v1z are respectively the projections of v1 on the x-axis, y-axis and z-axis under the left mechanical arm base coordinate system;
Step 4, the mechanical arm end effector recognizes and grabs the target object. In the experimental state the left target object is placed horizontally, so its angle with the horizontal plane is zero. The angle δ between the direction vector v1 and the x-z plane of the left mechanical arm base coordinate system is obtained from equation (11); the angle between the z-axis of the left mechanical arm end effector and the z-axis of the left mechanical arm base coordinate system is controlled to 170-180 degrees, and the rotation angle Gr of the left mechanical arm end effector around its joint axis is controlled as described by equation (12),
[Equation (12) is shown as an image in the original document]
The right target object in the experimental state is placed obliquely on the table top; its direction vector under the right mechanical arm base coordinate system is v2, and the angle between v2 and the x-y plane is α. The angle σ between the z-axis of the right mechanical arm end effector and the z-axis of the right mechanical arm base coordinate system is obtained by solving equations (13) and (14),
[Equations (13) and (14) are shown as images in the original document]
wherein δ is the latitude value in spherical coordinates and β is the angle between v2 and the z-x plane of the right mechanical arm base coordinate system; the rotation angle Gr of the right mechanical arm end effector around its joint axis is described by equations (15) and (16),
[Equations (15) and (16) are shown as images in the original document]
After attitude control is finished, the mechanical arm end effector must be controlled to reach the target position. The azimuth angle is defined as the horizontal angle, measured clockwise, from the positive direction of the x-axis to the target direction; since the azimuth θ ∈ [0, 2π], the quadrant in which (x, y) lies must be determined, and the azimuth θ is calculated by equation (17);
[Equation (17) is shown as an image in the original document]
In order to avoid collision with the target object during grasping, a pre-grasping process is designed in which the grasping position and posture are set in advance, described by equation (18),
[Equation (18) is shown as an image in the original document]
in the formula, (x_goal, y_goal, z_goal)^T is the position coordinate of the target's center-of-gravity point relative to the mechanical arm base coordinate system, (x', y', z')^T is the coordinate of the pre-grasping position relative to the mechanical arm base coordinate system, and L is a manually set distance; θ is the angle between the x-axis and the projection of the target point onto the x-y plane of the mechanical arm coordinate system, and σ is the angle between the z-axis of the mechanical arm end effector and the z-axis of the mechanical arm base coordinate system. The preset target position and posture are obtained by equations (11) to (18), and the mechanical arm end effector is controlled to grab the target object in the optimal posture;
Step 5, after the mechanical arm grabs the target, the mechanical arm action client sends the specified joint angles to the action server, and the action finishes after execution.
The invention has the beneficial effects that: a mechanical arm grabbing control method based on binocular vision comprises the following steps:
(1) building an experimental platform; (2) recognizing the position and posture of a target object placed obliquely; (3) recognizing the position and posture of a target object placed horizontally; (4) recognizing and grabbing the target object with the mechanical arm end effector; and (5) moving the mechanical arm to a specified joint-angle state after grabbing. Compared with the prior art, the invention has the following advantages. First, the vertex coordinates of the front and rear circular arcs are extracted from the target foreground and the direction vector of the target object is estimated from their coordinate difference, from which the attitude angle is estimated; the deviation of the angle between the direction vector and the coordinate-axis plane is within −5 to +5 degrees, greatly improving detection accuracy. Second, using color together with shape avoids interference from objects of the same or similar color, overcomes the influence of such interference on target detection, keeps the detected center of gravity stable, and yields stable position and posture information. Third, two mechanical arms are set to grab the left and right target objects: the pose of the left target object guides the left mechanical arm, and the pose of the right target object guides the right mechanical arm, so that each grabs its target object in the optimal pose.
Drawings
FIG. 1 is a flow chart of the method steps of the present invention.
Fig. 2 is a diagram of the detection effect of the posture of the right target object.
In the figure: (a) posture of the right target object; (b) manually selected rectangular frame, with the elliptical foreground of the target extracted only inside the frame; (c) geometric relation of the right target under the binocular vision camera world coordinate system; (d) result of the right target object direction vector.
Fig. 3 is a diagram of the effect of detecting the posture of the left target object.
In the figure: (a) posture of the left target object; (b) color + shape recognition effect; (c) geometric relation of the left target under the left mechanical arm base coordinate system; (d) result of the left target object direction vector.
Fig. 4 is a coordinate system conversion diagram of the left and right robot arms.
Fig. 5 is an effect diagram of the left and right mechanical arms grabbing the target object in a special posture.
In the figure: (a) effect of the left mechanical arm grabbing the target object in a special posture; (b) effect of the right mechanical arm grabbing the target object in a special posture.
Fig. 6 is an effect diagram of the left and right robot arms after the completion of gripping two target objects.
Detailed Description
The invention will be further explained with reference to the drawings.
As shown in fig. 1, a mechanical arm grabbing control method based on binocular vision includes the following steps:
Step 1, an experimental platform is built: a binocular vision camera is installed 1-1.5 m above the platform, the binocular vision camera is started, the kinova mechanical arms are started, and the urdf models of the left and right mechanical arms are loaded; the models describe the position and posture relation of each joint of the kinova mechanical arm. The left mechanical arm is installed on the left-hand side of a horizontal desktop and its base coordinate system is defined as left; the right mechanical arm is installed on the right-hand side of the desktop and its base coordinate system is defined as right; the distance between the left and right mechanical arms is 0.8-1.2 m. A root coordinate system root is defined at the middle position between the left and right mechanical arms, and the world coordinate system of the binocular vision camera is defined as map. With root as the root coordinate system, the coordinate positions and postures of left, right and map are determined by translation and rotation, taking the origin of root as the central point: the rotation matrix to the left mechanical arm base coordinate system is R1 with translation matrix T1, the rotation matrix to the right mechanical arm base coordinate system is R2 with translation matrix T2, and the rotation matrix to the binocular vision camera coordinate system is R3 with translation matrix T3, described by equations (1) to (3),
left=R1*root+T1 (1)
right=R2*root+T2 (2)
map=R3*root+T3 (3)
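
The frame relations (1)-(3) can be applied directly in software. The following C++ fragment is a minimal sketch, assuming the Eigen library and calibrated values for R1-R3 and T1-T3; the identifiers are illustrative and do not appear in the patent:

#include <Eigen/Dense>

// Frame relations (1)-(3): a point expressed in the root frame is mapped into
// the left, right or map frame by the corresponding calibrated rotation and
// translation.
struct Frame { Eigen::Matrix3d R; Eigen::Vector3d T; };

Eigen::Vector3d toFrame(const Frame& f, const Eigen::Vector3d& p_root) {
    return f.R * p_root + f.T;   // e.g. left = R1*root + T1
}

// A direction vector carries no translation, so a camera-frame (map) vector is
// rotated into the right-arm frame with R2*R3^(-1); for rotation matrices the
// inverse equals the transpose.
Eigen::Vector3d mapDirToRight(const Eigen::Matrix3d& R2, const Eigen::Matrix3d& R3,
                              const Eigen::Vector3d& v_map) {
    return R2 * R3.transpose() * v_map;
}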
Step 2, recognizing the position and posture of the target object when it is placed obliquely, specifically comprising the following substeps (an illustrative code sketch follows after substep (e)):
(a) subscribing to the RGB image published by the binocular vision camera and binarizing it; because many irrelevant objects lie outside the region of interest, a pixel frame is set manually, and the foreground inside the frame is extracted with the grabCut algorithm in opencv, with the iteration count set to 7-12;
(b) matching each extracted foreground pixel p_i(u, v), i ∈ N (N is the number of pixel points extracted from the foreground), with the depth image to obtain the corresponding world coordinate P_i = [X_W, Y_W, Z_W]^T, i ∈ N, in the world coordinate system; the N points P_i are stored in the 1st container, and the mapping from the pixel coordinate system to the world coordinates of the binocular vision camera is described by equation (4),
z·[u, v, 1]^T = [f/dx, 0, u_0; 0, f/dy, v_0; 0, 0, 1]·[X_W, Y_W, Z_W]^T (4)
wherein z is the binocular vision camera depth value, from which the world coordinates follow as X_W = (u − u_0)·dx·z/f, Y_W = (v − v_0)·dy·z/f, Z_W = z; f is the focal length of the binocular vision camera, dx is the physical distance corresponding to one horizontal pixel in mm, dy is the physical distance corresponding to one vertical pixel in mm, u_0 is the horizontal midpoint of the image pixels, and v_0 is the vertical midpoint of the image pixels. The converted world coordinates are stored in the 2nd container, the whole 2nd container is traversed, and the element whose x value is minimum among all elements is found, P1 = [X, Y, Z], X = min{X_W}; this element represents the highest vertex P1 of the two circular arcs, and P1 is taken as the central point;
(c) traversing the 2nd container with the bottle length L as the constraint condition, finding all points whose distance from the central point P1 satisfies the length-L constraint, and storing them in the 3rd container; then searching the 3rd container for the target P2 corresponding to the minimum x-axis coordinate, i.e. the highest vertex P2 of the rear arc; the direction vector v is calculated by equation (5),
v = P1 − P2 (5)
(d) calculating the center of gravity Q3 of the target object. Let point Q2 be the midpoint of P1 and P2 from substep (c), let r be the radius of the bottle cross-section, and let h be the distance from Q2 to the point Q directly below it; Q2, Q and Q3 form a right-triangle geometric relationship and satisfy Q3 − Q = k·v, k ≠ 0; vx, vy, vz are the projections of the direction vector v on the x-axis, y-axis and z-axis of the binocular vision camera world coordinate system, described by equations (6) to (8),
[Equations (6) to (8) are shown as images in the original document]
(e) the reference frame of the direction vector v is the binocular vision camera world coordinate system map; the direction vector v of the right target object is converted into the right mechanical arm base coordinate system right, giving the direction vector v2, with the conversion described by equation (9),
v2 = R_mr · v (9)
wherein R_mr denotes the rotation matrix converting the binocular vision camera world coordinate system map into the right mechanical arm base coordinate system right (from equations (2) and (3), R_mr = R2·R3^(−1)); the angles between the direction vector v2 and each coordinate plane of right are then calculated, described by equation (10),
α = arcsin(v2z/|v2|), β = arcsin(v2y/|v2|), χ = arcsin(v2x/|v2|), with |v2| = sqrt(v2x² + v2y² + v2z²) (10)
in the formula, v2x, v2y, v2z are respectively the projections of v2 on the x-axis, y-axis and z-axis under the right mechanical arm base coordinate system, α is the angle between the direction vector v2 and the x-y plane, β is the angle between the direction vector v2 and the x-z plane, and χ is the angle between the direction vector v2 and the y-z plane;
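
The substeps above chain together into a short vision pipeline. The following C++ sketch is offered only as an illustration under stated assumptions, not as the patented implementation: it uses OpenCV's grabCut for substep (a), inverts equation (4) for substep (b), finds a minimum-x vertex for substeps (b)-(c), and applies the plane-angle formula reconstructed as equation (10) for substep (e); the iteration count, data layouts and depth units are assumptions:

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

// Camera intrinsics of equation (4): fx = f/dx, fy = f/dy, centre (u0, v0).
struct Intrinsics { double fx, fy, u0, v0; };

// Substep (a): grabCut foreground extraction inside a manually set pixel frame.
cv::Mat extractForeground(const cv::Mat& rgb, const cv::Rect& box) {
    cv::Mat mask, bgModel, fgModel;
    cv::grabCut(rgb, mask, box, bgModel, fgModel, 10,
                cv::GC_INIT_WITH_RECT);                 // 10 iterations (7-12 range)
    return (mask == cv::GC_FGD) | (mask == cv::GC_PR_FGD);
}

// Substep (b): back-project each foreground pixel with its depth z into the
// camera world frame (the inverse of equation (4)); fills the "2nd container".
std::vector<cv::Point3d> backProject(const cv::Mat& fg, const cv::Mat& depth,
                                     const Intrinsics& K) {
    std::vector<cv::Point3d> pts;
    for (int v = 0; v < fg.rows; ++v)
        for (int u = 0; u < fg.cols; ++u) {
            if (!fg.at<uchar>(v, u)) continue;
            double z = depth.at<float>(v, u);           // depth value (CV_32F assumed)
            if (z <= 0) continue;                       // skip invalid depth
            pts.emplace_back((u - K.u0) * z / K.fx,
                             (v - K.v0) * z / K.fy, z);
        }
    return pts;
}

// Substeps (b)-(c): the arc vertices are found as minimum-x points.
cv::Point3d minXPoint(const std::vector<cv::Point3d>& pts) {
    return *std::min_element(pts.begin(), pts.end(),
        [](const cv::Point3d& a, const cv::Point3d& b) { return a.x < b.x; });
}

// Substep (e), reconstructed equation (10): angles between a direction vector
// and the x-y, x-z and y-z coordinate planes of the arm base frame.
void planeAngles(const cv::Point3d& v, double& alpha, double& beta, double& chi) {
    const double n = cv::norm(v);
    alpha = std::asin(v.z / n);   // angle with the x-y plane
    beta  = std::asin(v.y / n);   // angle with the x-z plane
    chi   = std::asin(v.x / n);   // angle with the y-z plane
}

The direction vector of equation (5) is then simply the difference of the two vertices returned by minXPoint over the corresponding containers.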
Step 3, recognizing the position and posture of the target object when it is placed horizontally, specifically comprising the following substeps (an illustrative code sketch follows this step):
(a) extracting a binary image of the left target object by color segmentation; because irrelevant objects of the same color are present, they are extracted into the binary image as well;
(b) in order to solve the problem of same-color objects interfering with identification of the target object, taking the shape of the target object as an auxiliary recognition condition: the extracted image is screened by shape features, and only binary regions with quadrilateral characteristics are retained;
(c) applying the polygon-approximation contour function cv::approxPolyDP() in opencv to obtain a two-dimensional array M × N, where M represents the number of polygons and N represents the set of boundary pixel points of each polygon;
(d) judging whether the target object is a quadrangle: the target object to be extracted is a quadrangle, but a certain deviation may exist in actual engineering, so the number of edges is allowed to be 3-5; contours within this range are considered to conform to the shape characteristics of the target object and are retained;
(e) obtaining all shape information with the findContours() function and storing the contours in a vector;
(f) obtaining the number of edges of each contour with the approxPolyDP() function, traversing the whole 1st container, deleting interfering objects whose edge count is not 3-5, keeping the approximately quadrilateral elements in the 1st container, drawing their frames with the polylines() function, obtaining the pixel coordinate of the center-of-gravity point, and calculating the position coordinate of the center-of-gravity point in the binocular vision camera world coordinate system by depth-image matching;
(g) after the contour information of the target object is determined, setting the four vertexes of the rectangular frame as A, B, C and D; if AB is the longest side, taking the difference of the coordinates of the vertexes of side AB as the direction vector; converting the direction vector v of the left target object into v1 under the left mechanical arm base coordinate system, and calculating the angle δ between v1 and the x-z plane of the left mechanical arm base coordinate system, described by equation (11),
δ = arcsin(v1y/|v1|), with |v1| = sqrt(v1x² + v1y² + v1z²) (11)
wherein v1x, v1y, v1z are respectively the projections of v1 on the x-axis, y-axis and z-axis under the left mechanical arm base coordinate system;
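
Substeps (b)-(f) of step 3 amount to shape-based screening of a colour-segmented binary image. The following C++ sketch illustrates this with OpenCV; the 2% polygon-approximation tolerance is an assumed parameter, not taken from the patent:

#include <opencv2/opencv.hpp>
#include <vector>

// Screen a colour-segmented binary image by shape: keep contours whose polygon
// approximation has 3-5 edges (treated as approximately quadrilateral) and
// return the pixel centre of gravity of each survivor.
std::vector<cv::Point2d> quadCentroids(const cv::Mat& binary) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Point2d> centers;
    for (const auto& c : contours) {
        std::vector<cv::Point> poly;
        // approximation tolerance of 2% of the contour perimeter (assumed value)
        cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);
        if (poly.size() < 3 || poly.size() > 5) continue;   // reject interfering shapes
        cv::Moments m = cv::moments(c);
        if (m.m00 > 0)
            centers.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
    }
    return centers;
}

Each centre of gravity can then be matched against the depth image, exactly as in substep (f), to obtain its position in the camera world coordinate system.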
Step 4, the mechanical arm end effector recognizes and grabs the target object. In the experimental state the left target object is placed horizontally, so its angle with the horizontal plane is zero. The angle δ between the direction vector v1 and the x-z plane of the left mechanical arm base coordinate system is obtained from equation (11); the angle between the z-axis of the left mechanical arm end effector and the z-axis of the left mechanical arm base coordinate system is controlled to 170-180 degrees, and the rotation angle Gr of the left mechanical arm end effector around its joint axis is controlled as described by equation (12),
[Equation (12) is shown as an image in the original document]
The right target object in the experimental state is placed obliquely on the table top; its direction vector under the right mechanical arm base coordinate system is v2, and the angle between v2 and the x-y plane is α. The angle σ between the z-axis of the right mechanical arm end effector and the z-axis of the right mechanical arm base coordinate system is obtained by solving equations (13) and (14),
[Equations (13) and (14) are shown as images in the original document]
wherein δ is the latitude value in spherical coordinates and β is the angle between v2 and the z-x plane of the right mechanical arm base coordinate system; the rotation angle Gr of the right mechanical arm end effector around its joint axis is described by equations (15) and (16),
[Equations (15) and (16) are shown as images in the original document]
After attitude control is finished, the mechanical arm end effector must be controlled to reach the target position. The azimuth angle is defined as the horizontal angle, measured clockwise, from the positive direction of the x-axis to the target direction; since the azimuth θ ∈ [0, 2π], the quadrant in which (x, y) lies must be determined, and the azimuth θ is calculated by equation (17);
[Equation (17) is shown as an image in the original document]
In order to avoid collision with the target object during grasping, a pre-grasping process is designed in which the grasping position and posture are set in advance, described by equation (18),
[Equation (18) is shown as an image in the original document]
in the formula, (x_goal, y_goal, z_goal)^T is the position coordinate of the target's center-of-gravity point relative to the mechanical arm base coordinate system, (x', y', z')^T is the coordinate of the pre-grasping position relative to the mechanical arm base coordinate system, and L is a manually set distance; θ is the angle between the x-axis and the projection of the target point onto the x-y plane of the mechanical arm coordinate system, and σ is the angle between the z-axis of the mechanical arm end effector and the z-axis of the mechanical arm base coordinate system. The preset target position and posture are obtained by equations (11) to (18), and the mechanical arm end effector is controlled to grab the target object in the optimal posture;
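
Since equations (12)-(18) are reproduced only as images in the original, the following C++ fragment is a hedged sketch of the azimuth and pre-grasp idea rather than the patent's exact formulas: the azimuth θ of equation (17) is computed with std::atan2 wrapped into [0, 2π), and the pre-grasp point backs the end effector off the grasp point by the manually set distance L along an approach direction assumed to be given by θ and σ:

#include <cmath>

struct Vec3 { double x, y, z; };

// Pre-grasp point: wrap atan2 into [0, 2*pi) for the azimuth, then retreat
// from the grasp point by distance L along the approach direction (the
// retreat direction is an assumed reading of equation (18)).
Vec3 preGraspPoint(const Vec3& goal, double L, double sigma) {
    const double kTwoPi = 6.283185307179586;
    double theta = std::atan2(goal.y, goal.x);   // quadrant-aware angle of (x, y)
    if (theta < 0.0) theta += kTwoPi;            // azimuth theta in [0, 2*pi)
    return { goal.x - L * std::sin(sigma) * std::cos(theta),
             goal.y - L * std::sin(sigma) * std::sin(theta),
             goal.z + L * std::cos(sigma) };
}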
Step 5, after the mechanical arm grabs the target, the mechanical arm action client sends the specified joint angles to the action server, and the action finishes after execution.
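
Step 5 maps naturally onto a ROS action interface. The sketch below assumes the kinova-ros driver's ArmJointAnglesAction; the action name and joint values are illustrative assumptions, not taken from the patent:

#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <kinova_msgs/ArmJointAnglesAction.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "post_grasp_pose_client");
    // Action name follows the kinova-ros convention; adjust to the driver in use.
    actionlib::SimpleActionClient<kinova_msgs::ArmJointAnglesAction>
        client("/left_driver/joints_action/joint_angles", true);
    client.waitForServer();

    kinova_msgs::ArmJointAnglesGoal goal;   // the specified joint angles
    goal.angles.joint1 = 0.0;               // illustrative values, in degrees
    goal.angles.joint2 = 180.0;
    goal.angles.joint3 = 90.0;
    goal.angles.joint4 = 0.0;
    goal.angles.joint5 = 0.0;
    goal.angles.joint6 = 0.0;

    client.sendGoal(goal);                       // the client sends the goal
    client.waitForResult(ros::Duration(15.0));   // the action finishes after execution
    return 0;
}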

Claims (1)

1. A mechanical arm grabbing control method based on binocular vision is characterized by comprising the following steps:
step 1, building an experimental platform: a binocular vision camera is installed 1-1.5 m above the platform, the binocular vision camera is started, the kinova mechanical arms are started, and the urdf models of the left and right mechanical arms are loaded, the models describing the position and posture relation of each joint of the kinova mechanical arm; the left mechanical arm is installed on the left-hand side of a horizontal desktop and its base coordinate system is defined as left; the right mechanical arm is installed on the right-hand side of the desktop and its base coordinate system is defined as right; the distance between the left and right mechanical arms is 0.8-1.2 m; a root coordinate system root is defined at the middle position between the left and right mechanical arms, and the world coordinate system of the binocular vision camera is defined as map; with root as the root coordinate system, the coordinate positions and postures of left, right and map are determined by translation and rotation, taking the origin of root as the central point: the rotation matrix rotated to the left mechanical arm base coordinate system is R1 with translation matrix T1, the rotation matrix rotated to the right mechanical arm base coordinate system is R2 with translation matrix T2, and the rotation matrix rotated to the binocular vision camera coordinate system is R3 with translation matrix T3, described by equations (1) to (3),
left=R1*root+T1 (1)
right=R2*root+T2 (2)
map=R3*root+T3 (3)
step 2, recognizing the position and posture of the right target object when it is placed obliquely, specifically comprising the following substeps:
(a) subscribing to the RGB image published by the binocular vision camera and binarizing it; because many irrelevant objects lie outside the region of interest, a pixel frame is set manually, and the foreground inside the frame is extracted with the grabCut algorithm in opencv, with the iteration count set to 7-12;
(b) matching each extracted foreground pixel p_i(u, v), i ∈ N, with the depth image to obtain the corresponding world coordinate P_i = [X_W, Y_W, Z_W]^T, i ∈ N, where N is the number of pixel points extracted from the foreground; the N points P_i are stored in the 1st container, and the mapping from the pixel coordinate system to the world coordinates of the binocular vision camera is described by equation (4),
z·[u, v, 1]^T = [f/dx, 0, u_0; 0, f/dy, v_0; 0, 0, 1]·[X_W, Y_W, Z_W]^T (4)
wherein z is the binocular vision camera depth value, from which the world coordinates follow as X_W = (u − u_0)·dx·z/f, Y_W = (v − v_0)·dy·z/f, Z_W = z; f is the focal length of the binocular vision camera, dx is the physical distance corresponding to one horizontal pixel in mm, dy is the physical distance corresponding to one vertical pixel in mm, u_0 is the horizontal midpoint of the image pixels, and v_0 is the vertical midpoint of the image pixels; the converted world coordinates are stored in the 2nd container, the whole 2nd container is traversed, and the element whose x value is minimum among all elements is found, P1 = [X, Y, Z], X = min{X_W}, which represents the highest vertex P1 of the two circular arcs; P1 is taken as the central point;
(c) traversing the 2nd container with the length Y of the right target object as the constraint condition, finding all points whose distance from the central point P1 satisfies the length-Y constraint, and storing them in the 3rd container; then searching the 3rd container for the target P2 corresponding to the minimum x-axis coordinate, i.e. the highest vertex P2 of the rear arc; the direction vector v is calculated by equation (5),
v = P1 − P2 (5)
(d) calculating the center of gravity Q3 of the right target object. Let point Q2 be the midpoint of P1 and P2 from substep (c), let r be the radius of the right target cross-section, and let h be the distance from Q2 to the point Q directly below it; Q2, Q and Q3 form a right-triangle geometric relationship and satisfy Q3 − Q = k·v, k ≠ 0; vx, vy, vz are the projections of the direction vector v on the x-axis, y-axis and z-axis of the binocular vision camera world coordinate system, described by equations (6) to (8),
[Equations (6) to (8) are shown as images in the original document]
(e) the reference frame of the direction vector v is the binocular vision camera world coordinate system map; the direction vector v of the right target object is converted into the right mechanical arm base coordinate system right, giving the direction vector v2, with the conversion described by equation (9),
v2 = R_mr · v (9)
wherein R_mr denotes the rotation matrix converting the binocular vision camera world coordinate system map into the right mechanical arm base coordinate system right (from equations (2) and (3), R_mr = R2·R3^(−1)); the angles between the direction vector v2 and each coordinate plane of right are then calculated, described by equation (10),
α = arcsin(v2z/|v2|), β = arcsin(v2y/|v2|), χ = arcsin(v2x/|v2|), with |v2| = sqrt(v2x² + v2y² + v2z²) (10)
in the formula, v2x, v2y, v2z are respectively the projections of v2 on the x-axis, y-axis and z-axis under the right mechanical arm base coordinate system, α is the angle between the direction vector v2 and the x-y plane, β is the angle between the direction vector v2 and the x-z plane, and χ is the angle between the direction vector v2 and the y-z plane;
step 3, recognizing the position and posture of the left target object when it is placed horizontally, specifically comprising the following substeps:
(a) extracting a binary image of the left target object by color segmentation; because irrelevant objects of the same color are present, they are extracted into the binary image as well;
(b) in order to solve the problem of same-color objects interfering with identification of the left target object, taking the shape of the left target object as an auxiliary recognition condition: the extracted image is screened by shape features, and only binary regions with quadrilateral characteristics are retained;
(c) applying the polygon-approximation contour function cv::approxPolyDP() in opencv to obtain a two-dimensional array M × N1, where M represents the number of polygons and N1 represents the set of boundary pixel points of each polygon;
(d) judging whether the object is a quadrangle: the left target object to be extracted is a quadrangle, but a certain deviation may exist, so the number of edges is allowed to be 3-5; contours within this range are considered to conform to the shape characteristics of the left target object and are retained;
(e) obtaining all shape information with the findContours() function and storing the contours in a vector;
(f) obtaining the number of edges of each contour with the approxPolyDP() function, traversing the whole 1st container, deleting interfering objects whose edge count is not 3-5, keeping the approximately quadrilateral elements in the 1st container, drawing their frames with the polylines() function, obtaining the pixel coordinate of the center-of-gravity point, and calculating the position coordinate of the center-of-gravity point in the binocular vision camera world coordinate system by depth-image matching;
(g) after the contour information of the left target object is determined, setting the four vertexes of the rectangular frame as A, B, C and D; if AB is the longest side, taking the difference of the coordinates of the vertexes of side AB as the direction vector; converting the direction vector v of the left target object into v1 under the left mechanical arm base coordinate system, and calculating the angle δ between v1 and the x-z plane of the left mechanical arm base coordinate system, described by equation (11),
δ = arcsin(v1y/|v1|), with |v1| = sqrt(v1x² + v1y² + v1z²) (11)
wherein v1x, v1y, v1z are respectively the projections of v1 on the x-axis, y-axis and z-axis under the left mechanical arm base coordinate system;
step 4, the left mechanical arm end effector identifies and grabs the left target object and the right mechanical arm end effector identifies and grabs the right target object. In the experimental state the left target object is placed horizontally, so its angle with the horizontal plane is zero. The angle δ between the direction vector v1 and the x-z plane of the left mechanical arm base coordinate system is obtained from equation (11); the angle σ between the z-axis of the left mechanical arm end effector and the z-axis of the left mechanical arm base coordinate system is 170-180 degrees, and the rotation angle Gr of the left mechanical arm end effector around its joint axis is controlled as described by equation (12),
[Equation (12) is shown as an image in the original document]
The right target object in the experimental state is placed obliquely on the table top; its direction vector under the right mechanical arm base coordinate system is v2, and the angle between v2 and the x-y plane is α. The angle σ between the z-axis of the right mechanical arm end effector and the z-axis of the right mechanical arm base coordinate system is obtained by solving equations (13) and (14),
[Equations (13) and (14) are shown as images in the original document]
wherein δ is the latitude value in spherical coordinates and β is the angle between v2 and the z-x plane of the right mechanical arm base coordinate system; the rotation angle Gr of the right mechanical arm end effector around its joint axis is described by equations (15) and (16),
[Equations (15) and (16) are shown as images in the original document]
After the attitude control is finished, the left mechanical arm end effector must be controlled to reach the left target position and the right mechanical arm end effector to reach the right target position. The azimuth angle is defined as the horizontal angle, measured clockwise, from the positive direction of the x-axis to the target direction; since the azimuth θ ∈ [0, 2π], the quadrant in which (x, y) lies must be determined, and the azimuth θ is calculated by equation (17);
[Equation (17) is shown as an image in the original document]
in order to avoid collision with the left and right target objects during grasping, a pre-grasping process is designed for the left mechanical arm grabbing the left target object and for the right mechanical arm grabbing the right target object, in which the grasping position and posture are set in advance, described by equation (18),
[Equation (18) is shown as an image in the original document]
in the formula, (x_goal, y_goal, z_goal)^T is the position coordinate of the left target's center-of-gravity point relative to the left mechanical arm base coordinate system, or of the right target's center-of-gravity point relative to the right mechanical arm base coordinate system; (x', y', z')^T is the coordinate of the pre-grasping position of the left target relative to the left mechanical arm base coordinate system, or of the right target relative to the right mechanical arm base coordinate system; L is a manually set distance; and σ is the angle between the z-axis of the left mechanical arm end effector and the z-axis of the left mechanical arm base coordinate system, or between the z-axis of the right mechanical arm end effector and the z-axis of the right mechanical arm base coordinate system. The preset left and right target positions and postures are obtained by equations (11) to (18), the left mechanical arm end effector is controlled to grab the left target object in the optimal posture, and the right mechanical arm end effector is controlled to grab the right target object in the optimal posture;
and step 5, the left mechanical arm reaches the specified joint-angle state after grabbing the left target object, and the right mechanical arm reaches the specified joint-angle state after grabbing the right target object: the left mechanical arm action client sends the specified joint angles to the action server, the right mechanical arm action client sends the specified joint angles to the action server, and the action finishes after execution.
CN202010037021.6A 2020-01-14 2020-01-14 Mechanical arm grabbing control method based on binocular vision Active CN111267095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010037021.6A CN111267095B (en) 2020-01-14 2020-01-14 Mechanical arm grabbing control method based on binocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010037021.6A CN111267095B (en) 2020-01-14 2020-01-14 Mechanical arm grabbing control method based on binocular vision

Publications (2)

Publication Number Publication Date
CN111267095A (en) 2020-06-12
CN111267095B (en) 2022-03-01

Family

ID=70994170

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010037021.6A Active CN111267095B (en) 2020-01-14 2020-01-14 Mechanical arm grabbing control method based on binocular vision

Country Status (1)

Country Link
CN (1) CN111267095B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111814634B (en) * 2020-06-29 2023-09-08 北京百度网讯科技有限公司 Real-time distance determining method, device, equipment and medium
CN111751136A (en) * 2020-06-29 2020-10-09 伯肯森自动化技术(上海)有限公司 POS machine test system based on binocular vision subassembly
CN112667823B (en) * 2020-12-24 2022-11-01 西安电子科技大学 Semantic analysis method and system for task execution sequence of mechanical arm and computer readable medium
CN112894815B (en) * 2021-01-25 2022-09-27 西安工业大学 Method for detecting optimal position and posture for article grabbing by visual servo mechanical arm
CN113768640B (en) * 2021-11-09 2022-02-08 极限人工智能有限公司 Method and device for determining working pose of mechanical arm
CN114516045B (en) * 2021-11-25 2023-01-20 苏州永固智能科技有限公司 Unmanned storehouse mechanical arm control method and system based on Internet of things
CN115256019B (en) * 2022-06-25 2023-07-14 北京建工集团有限责任公司 Automatic assembling and aligning device for support plates
CN117163421B (en) * 2023-11-03 2024-01-23 山东新华医疗器械股份有限公司 Multi-arm cooperation intelligent packaging robot for disinfection supply center
CN117464692B (en) * 2023-12-27 2024-03-08 中信重工机械股份有限公司 Lining plate grabbing mechanical arm control method based on structured light vision system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05288884A (en) * 1992-04-13 1993-11-05 Toshiba Corp Robot operated plant
US5887121A (en) * 1995-04-21 1999-03-23 International Business Machines Corporation Method of constrained Cartesian control of robotic mechanisms with active and passive joints
CN102902271A (en) * 2012-10-23 2013-01-30 上海大学 Binocular vision-based robot target identifying and gripping system and method
CN108582075A (en) * 2018-05-10 2018-09-28 江门市思远信息科技有限公司 A kind of intelligent robot vision automation grasping system
CN109102525A (en) * 2018-07-19 2018-12-28 浙江工业大学 A kind of mobile robot follow-up control method based on the estimation of adaptive pose

Also Published As

Publication number Publication date
CN111267095A (en) 2020-06-12

Similar Documents

Publication Publication Date Title
CN111267095B (en) Mechanical arm grabbing control method based on binocular vision
CN108555908B (en) Stacked workpiece posture recognition and pickup method based on RGBD camera
CN107767423B (en) mechanical arm target positioning and grabbing method based on binocular vision
CN107590836B (en) Kinect-based charging pile dynamic identification and positioning method and system
CN111775152B (en) Method and system for guiding mechanical arm to grab scattered stacked workpieces based on three-dimensional measurement
CN112396664B (en) Monocular camera and three-dimensional laser radar combined calibration and online optimization method
CN110648367A (en) Geometric object positioning method based on multilayer depth and color visual information
CN111721259B (en) Underwater robot recovery positioning method based on binocular vision
CN111862201A (en) Deep learning-based spatial non-cooperative target relative pose estimation method
CN111178138B (en) Distribution network wire operating point detection method and device based on laser point cloud and binocular vision
CN112907735B (en) Flexible cable identification and three-dimensional reconstruction method based on point cloud
CN113177983B (en) Fillet weld positioning method based on point cloud geometric features
CN114241269B Vision-fusion positioning system for container trucks for automatic quay crane control
CN110796700A (en) Multi-object grabbing area positioning method based on convolutional neural network
CN111784655A (en) Underwater robot recovery positioning method
CN111127613B (en) Image sequence three-dimensional reconstruction method and system based on scanning electron microscope
CN111198563A (en) Terrain recognition method and system for dynamic motion of foot type robot
CN116021519A (en) TOF camera-based picking robot hand-eye calibration method and device
CN116664622A (en) Visual movement control method and device
JPH1151611A (en) Device and method for recognizing position and posture of object to be recognized
CN112767481B (en) High-precision positioning and mapping method based on visual edge features
CN114882108A (en) Method for estimating grabbing pose of automobile engine cover under two-dimensional image
CN112233176A (en) Target posture measurement method based on calibration object
CN112734843B (en) Monocular 6D pose estimation method based on regular dodecahedron
CN116579955B (en) New energy battery cell weld reflection point denoising and point cloud complement method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant