CN116100562B - Visual guiding method and system for multi-robot cooperative feeding and discharging - Google Patents

Visual guiding method and system for multi-robot cooperative feeding and discharging

Info

Publication number
CN116100562B
Authority
CN
China
Prior art keywords
agv
mechanical arm
robot
agv robot
pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310377571.6A
Other languages
Chinese (zh)
Other versions
CN116100562A (en)
Inventor
陈海军
樊虹岐
吕丞干
胡晓兵
殷鸣
雷永志
谢龙德
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute Of Yibin Sichuan University
Sichuan Cpt Precision Industry Science & Technology Co ltd
Sichuan University
Original Assignee
Industrial Technology Research Institute Of Yibin Sichuan University
Sichuan Cpt Precision Industry Science & Technology Co ltd
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute Of Yibin Sichuan University, Sichuan Cpt Precision Industry Science & Technology Co ltd, Sichuan University filed Critical Industrial Technology Research Institute Of Yibin Sichuan University
Priority to CN202310377571.6A priority Critical patent/CN116100562B/en
Publication of CN116100562A publication Critical patent/CN116100562A/en
Application granted granted Critical
Publication of CN116100562B publication Critical patent/CN116100562B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1682 Dual arm manipulator; Coordination of several manipulators
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a visual guiding method and system for multi-robot collaborative feeding and discharging. In the method, an image processing unit sends the recognized current pose of the AGV robot to the cooperation module, the mechanical arm control module and the AGV robot control module respectively; the cooperation module obtains the mechanical arm target pose and the AGV robot target pose; after receiving the current pose and the target pose of the AGV robot, the AGV robot control module calculates the running speeds of the left and right wheels of the AGV robot, sends them to the AGV robot, and controls the AGV robot to move to its target pose; and the mechanical arm control module controls the mechanical arm to move to the mechanical arm target pose according to the current pose of the mechanical arm and the received mechanical arm target pose. The invention realizes cooperative coordination of the mechanical arm and the AGV and improves the flexibility of the working space of the mechanical arm.

Description

Visual guiding method and system for multi-robot cooperative feeding and discharging
Technical Field
The invention relates to the field of control, in particular to a visual guiding method and a visual guiding system for multi-robot cooperative feeding and discharging.
Background
The ROS system, widely used for robot development, is taken as the platform, which reduces development difficulty and improves system expansibility. Monocular global vision is adopted as the sensor and the ArUco Tag as the marker, and low-cost, high-stability visual identification of a mechanical arm and an AGV within the same field of view is studied. A multi-robot machine tool loading and unloading visual cooperation system is developed based on ROS, realizing loading and unloading handover at any specified position in the working area, avoiding the accurate-positioning problem of the traditional fixed loading and unloading tray and AGV, and providing a low-cost, easy-to-maintain and highly expansible solution for machine tool loading and unloading.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a visual guiding method for multi-robot collaborative feeding and discharging, comprising the following steps:
the image processing unit adopts an ArUco Tag recognition algorithm to recognize the current pose of the AGV robot in the image acquired by the camera, and the recognized current pose of the AGV robot is respectively sent to the cooperative module, the mechanical arm control module and the AGV robot control module;
the cooperative module calculates the target pose of the AGV robot through a cooperative algorithm according to the set working area position and the work task identified from the AGV robot ID, and then obtains the mechanical arm target pose through the cooperative algorithm according to the AGV robot target pose or the acquired current pose of the designated workpiece;
after receiving the current pose of the AGV robot and the target pose of the AGV robot, the AGV robot control module calculates the running speeds of the left wheel and the right wheel of the AGV robot respectively and sends the running speeds to the AGV robot, and controls the AGV robot to move to the target pose of the AGV robot;
the mechanical arm control module sends a motion instruction to MoveIt according to the current pose of the mechanical arm and the received target pose of the mechanical arm; MoveIt obtains the mechanical arm joint angles through an inverse kinematics solution; after receiving the mechanical arm joint angles, the communication node converts them into motion instructions, sends them to the mechanical arm through TCP/IP communication, and controls the mechanical arm to move to the mechanical arm target pose; wherein MoveIt is a robot-related tool set contained in the ROS system;
the AGV robot reaches the target pose of the AGV robot, and the mechanical arm reaches the target pose of the mechanical arm, so that the guidance is completed.
Further, the cooperation module calculates the target pose of the AGV robot through a cooperation algorithm according to the set working area position and the work task identified from the AGV robot ID, and then obtains the mechanical arm target pose through the cooperation algorithm according to the AGV robot target pose or the acquired current pose of the designated workpiece, comprising:

The target point pixel coordinates $(u_b, v_b)$ of the AGV robot target pose are calculated as

$$u_b = u_0 + \frac{H}{H - h}\,(u_a - u_0), \qquad v_b = v_0 + \frac{H}{H - h}\,(v_a - v_0),$$

where $(u_a, v_a)$ are the pixel coordinates of the AGV robot center point (the working-area center as projected at ground level), $(u_0, v_0)$ are the pixel coordinates corresponding to the camera optical axis, $H$ is the height of the camera lens above the ground, and $h$ is the AGV height.

Mechanical arm target pose

The target point pixel coordinates are converted to obtain the mechanical arm target pose. The position $(X, Y, Z)$ of the target point in the camera coordinate system is

$$X = \frac{(u - u_0)\,Z}{f_x}, \qquad Y = \frac{(v - v_0)\,Z}{f_y},$$

where $Z$ is a preset value; $f_x$ and $f_y$ are internal reference (intrinsic) values; and $(u, v)$ are the target point pixel coordinates, coordinated so that the tail end of the mechanical arm and the center of the AGV robot are positioned at the same position in the image, namely

$$(u, v) = (u_b, v_b).$$

The orientation is

$$q = q_z \otimes q_y \otimes q_x,$$

where $q$ is the quaternion of the target point under the camera coordinate system, $q_x$ is the quaternion of a rotation by angle $\alpha$ about the X axis, $q_y$ is the quaternion of a rotation by angle $\beta$ about the Y axis, and $q_z$ is the quaternion of a rotation by angle $\gamma$ about the Z axis.
Further, after receiving the current pose of the AGV robot and the target pose of the AGV robot, the AGV robot control module calculates the running speeds of the left and right wheels of the AGV robot, sends them to the AGV robot, and controls the AGV robot to move to the target pose of the AGV robot, comprising:

The motion equation of the AGV robot is

$$v = \frac{v_R + v_L}{2}, \qquad \omega = \frac{v_R - v_L}{d} = \frac{\theta_t - \theta_c}{T},$$

where $v_R$ and $v_L$ are the speeds of the right and left wheels of the AGV respectively, $d$ is the wheel track, $\theta_t$ is the orientation of the AGV robot target pose, $\theta_c$ is the orientation of the current pose of the AGV robot, and $T$ is the time required to move from the current pose to the target pose. Setting $v$ to a fixed value $u$, the left and right wheel speeds are

$$v_R = u + \frac{d\,(\theta_t - \theta_c)}{2T}, \qquad v_L = u - \frac{d\,(\theta_t - \theta_c)}{2T}.$$
The current pose and the target pose of the AGV robot determine two position points, $P_1$ with orientation $\theta_1$ and $P_2$ with orientation $\theta_2$. Extending lines from the two position points $P_1$, $P_2$ along their respective orientations until they intersect yields the point $P_0$, with

$$P_0 = P_1 + t_1(\cos\theta_1, \sin\theta_1) = P_2 + t_2(\cos\theta_2, \sin\theta_2).$$

Taking $P_1$, $P_0$, $P_2$ as control points, a quadratic Bezier curve is constructed so that the AGV robot moves from the current pose to the target pose; the curve is

$$B(t) = (1 - t)^2 P_1 + 2t(1 - t)\,P_0 + t^2 P_2, \qquad t = kh,\ k = 0, 1, \ldots, 1/h,$$

where $h$ is the step length. The orientation angle of the tangent at any point of the quadratic Bezier curve $B(t)$ is

$$\theta(t) = \arctan\frac{B_y'(t)}{B_x'(t)},$$

giving the intermediate points $B(t_k)$ and the corresponding tangent directions $\theta(t_k)$. The pose of each intermediate point is used as the target point to obtain the speeds of the left and right wheels; $B'(t)$ is the derivative of the quadratic Bezier curve $B(t)$.
A visual guiding system for multi-robot collaborative loading and unloading applying the above visual guiding method comprises a camera, a truss, a mechanical arm, an AGV and an ArUco Tag marker; the ArUco Tag marker is arranged on the AGV, and the camera is arranged on a beam of the truss; the system comprises a mechanical arm control module, an AGV control module, an image processing unit, a cooperation module and an identification module;
The camera is connected with the image processing unit, the image processing unit is connected with the identification module, the cooperation module, the mechanical arm control module and the AGV control module are respectively connected with the identification module, and the mechanical arm control module and the AGV control module are also respectively connected with the cooperation module.
The beneficial effects of the invention are as follows: with the technical scheme provided by the invention, the mechanical arm is effectively controlled by the control system, cooperative coordination of the mechanical arm and the AGV is realized, and the flexibility of the working space of the mechanical arm is improved.
Drawings
FIG. 1 is a flow diagram of a visual guidance method for multi-robot collaborative loading and unloading;
FIG. 2 is a schematic diagram of a visual guidance system with multiple robots cooperating with loading and unloading;
FIG. 3 is a schematic diagram of an ArUco Tag recognition flow;
FIG. 4 is a schematic diagram illustrating a coordinate transformation relationship between pixel coordinates and world coordinates;
FIG. 5 is a schematic diagram of a robotic arm control flow;
FIG. 6 is a schematic diagram of the camera imaging principle and the target point offset;
FIG. 7 is a schematic view of an AGV motion model.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to the accompanying drawings, but the scope of the present invention is not limited to the following description.
For the purpose of making the technical solution and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the particular embodiments described herein are illustrative only and are not intended to limit the invention, i.e., the embodiments described are merely some, but not all, of the embodiments of the invention. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be made by a person skilled in the art without making any inventive effort, are intended to be within the scope of the present invention. It is noted that relational terms such as "first" and "second", and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
The features and capabilities of the present invention are described in further detail below in connection with the examples.
As shown in fig. 1, the visual guiding method for the multi-robot collaborative loading and unloading comprises the following steps:
the image processing unit adopts an ArUco Tag recognition algorithm to recognize the pose of the AGV robot in the image acquired by the camera, and sends the recognized current pose of the AGV robot to the cooperative module, the mechanical arm control module and the AGV robot control module respectively;
the cooperative module is used for respectively obtaining a target pose of the mechanical arm and a target pose of the AGV according to the current pose of the AGV through a cooperative algorithm, sending the target pose of the mechanical arm to the mechanical arm control module, and sending the target pose of the AGV to the AGV control module;
after receiving the current pose of the AGV robot and the target pose of the AGV robot, the AGV robot control module calculates the running speeds of the left wheel and the right wheel of the AGV robot and sends the running speeds to the AGV robot, and controls the AGV robot to move to the target pose of the AGV robot;
the mechanical arm control module sends a motion instruction to MoveIt according to the current pose of the mechanical arm and the received target pose of the mechanical arm; MoveIt obtains the mechanical arm joint angles through an inverse kinematics solution; after receiving the mechanical arm joint angles, the communication node converts them into motion instructions, sends them to the mechanical arm through TCP/IP communication, and controls the mechanical arm to move to the mechanical arm target pose (a sketch of this step follows these steps);
the AGV robot reaches the target pose of the AGV robot, and the mechanical arm reaches the target pose of the mechanical arm, so that the guidance is completed.
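For illustration only, the arm-control step above could be sketched in Python as follows, assuming ROS 1 (Noetic) with the moveit_commander interface; the planning group name "manipulator" is an assumption of this sketch, and the TCP/IP forwarding performed by the communication node is indicated only by a comment:

```python
# Minimal sketch of the MoveIt step, assuming ROS 1 (Noetic) and the
# moveit_commander Python interface. The group name "manipulator" is an
# illustrative assumption, not taken from the patent.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("arm_control_sketch")
arm = moveit_commander.MoveGroupCommander("manipulator")

def move_arm_to(xyz, quat):
    """Ask MoveIt to solve inverse kinematics and plan to the target pose."""
    pose = Pose()
    pose.position.x, pose.position.y, pose.position.z = xyz
    (pose.orientation.x, pose.orientation.y,
     pose.orientation.z, pose.orientation.w) = quat
    arm.set_pose_target(pose)
    ok, plan, _, _ = arm.plan()  # Noetic: (success, trajectory, time, error)
    if ok:
        # A communication node would convert plan.joint_trajectory into
        # vendor motion commands and send them to the arm over TCP/IP here.
        arm.execute(plan, wait=True)
```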
The image processing unit adopts ArUco Tag recognition algorithm to recognize the pose of the AGV robot in the image acquired by the camera, and comprises the following steps:
and recognizing pixel coordinates of four corner points of the ArUco Tag marker in the image through an ArUco Tag recognition algorithm, converting the pixel coordinates into world coordinates, and obtaining the pose of the AGV robot according to the world coordinates.
Specifically, the ArUco Tag is a square marker for robot positioning; the 6-degree-of-freedom information of each marker can be calculated, and each marker corresponds to a unique ID number, so different robots can be distinguished by their IDs. This makes it suitable for identifying and positioning multiple robots in the working space of this system. Due to the influence of noise and other factors, the square outline of a single marker can be detected twice, producing two identification results with the same ID. To avoid this, one of the results is removed when two identical ID results are recognized. The improved ArUco Tag recognition algorithm flow is shown in FIG. 3; its core is to detect whether two results intersect: only when the distance between the center points of the two results is larger than any side length of their rectangular outlines are the two adjacent results not intersecting, in which case they come from two distinct markers with the same ID. Based on the improved ArUco Tag recognition algorithm, multiple markers with the same ID in the working space can be correctly recognized without a single marker being detected repeatedly. Through the recognition algorithm, the pixel coordinates of the four corner points of the marker in the image are obtained; the conversion relationship between pixel coordinates and the world coordinate system is shown in FIG. 4, and the current pose of the AGV robot is obtained from the world coordinates.
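For illustration, a minimal Python sketch of the duplicate-ID filter described above, assuming OpenCV 4.7+ and an arbitrarily chosen marker dictionary:

```python
# Sketch of the improved ArUco Tag recognition: detect markers, then drop one
# of any two same-ID results whose outlines intersect (repeat detections).
# The dictionary choice DICT_4X4_50 is an illustrative assumption.
import cv2
import numpy as np

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def detect_markers(image):
    corners, ids, _ = detector.detectMarkers(image)
    results = list(zip(ids.flatten(), corners)) if ids is not None else []
    kept = []
    for mid, quad in results:
        quad = quad.reshape(4, 2)
        duplicate = False
        for kid, kquad in kept:
            if kid != mid:
                continue
            # Two same-ID results intersect when the distance between their
            # center points is not larger than every side length of the two
            # rectangular outlines.
            dist = np.linalg.norm(quad.mean(axis=0) - kquad.mean(axis=0))
            sides = [np.linalg.norm(q[i] - q[(i + 1) % 4])
                     for q in (quad, kquad) for i in range(4)]
            if dist <= max(sides):  # repeat detection of one physical marker
                duplicate = True
                break
        if not duplicate:
            kept.append((mid, quad))
    return kept  # one (id, 4x2 corner pixel array) per physical marker
```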
The cooperation module respectively obtains the mechanical arm target pose and the AGV target pose from the current pose of the AGV through a cooperation algorithm, comprising the following steps:

The target point pixel coordinates $(u_b, v_b)$ of the AGV robot target pose are calculated as

$$u_b = u_0 + \frac{H}{H - h}\,(u_a - u_0), \qquad v_b = v_0 + \frac{H}{H - h}\,(v_a - v_0).$$

Mechanical arm target pose

The target point pixel coordinates are converted to obtain the mechanical arm target pose:

$$X = \frac{(u - u_0)\,Z}{f_x}, \qquad Y = \frac{(v - v_0)\,Z}{f_y},$$

where $(X, Y, Z)$ are the coordinates of the target point in the camera coordinate system, $Z$ is a preset value, $f_x$ and $f_y$ are internal reference values, and $(u, v)$ are the target point pixel coordinates, coordinated so that the tail end of the mechanical arm and the center of the AGV robot are positioned at the same position in the image, namely

$$(u, v) = (u_b, v_b);$$

$$q = q_z \otimes q_y \otimes q_x,$$

where $q$ is the target point quaternion under the camera coordinate system.
Specifically, in actual control, if the working area information were used directly as the target pose of the AGV, the AGV could not run correctly into the working area: owing to the pinhole imaging principle of the camera, the same real-world point at different heights is projected to different positions in the image, as shown in FIG. 6.
The center point A of the working area is projected at position a in the image; if this point were taken as the target pose of the AGV, the point B' the AGV actually moves to would be inconsistent with expectation. The true target pose of the AGV is at point B, which is projected at image position b. Therefore, for the AGV to operate correctly in the work area, position b must be calculated to obtain the target pose of the AGV; the calculation formula is

$$u_b = u_0 + \frac{H}{H - h}\,(u_a - u_0), \qquad v_b = v_0 + \frac{H}{H - h}\,(v_a - v_0).$$
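For illustration, a minimal Python sketch of this height compensation, assuming the similar-triangles form given above (function and symbol names are illustrative):

```python
# Sketch of the target-point correction: a ground point projecting at pixel
# (u_a, v_a) is shifted to pixel (u_b, v_b), where the marker on top of the
# AGV (height h) must project. Derived from the pinhole model above.
def compensate_height(u_a, v_a, u_0, v_0, H, h):
    """(u_0, v_0): optical-axis pixel; H: lens height; h: AGV height."""
    scale = H / (H - h)  # similar triangles along the viewing ray
    u_b = u_0 + scale * (u_a - u_0)
    v_b = v_0 + scale * (v_a - v_0)
    return u_b, v_b
```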
mechanical arm target pose calculation
The pixel coordinates of the target pose are obtained through the identification of the working area and the calculation of the target point in the previous process
Figure SMS_61
Because the control of the mechanical arm requires six-degree-of-freedom information of the target pose, the pixel coordinates are required to be converted, and the target pose of the mechanical arm is obtained>
Figure SMS_62
Figure SMS_63
,
Figure SMS_64
Is the XYZ coordinates of the target point in the camera coordinate system, wherein +.>
Figure SMS_65
Is a preset value; />
Figure SMS_66
Is an internal reference value; />
Figure SMS_67
The pixel coordinates of the target point may be modified according to a coordinated manner, where the coordinated manner is that the end of the robotic arm and the center of the AGV are located at the same position in the image.
Figure SMS_68
,
Figure SMS_69
For the target point quaternion in the camera coordinate system, the order of multiplication on the right side of the equation depends on the order of the actual pivoting, in this context pivoting is sequential along the XYZ axis, thus +.>
Figure SMS_70
To the right of (2); />
Figure SMS_71
The XYZ axis rotation angles are respectively.
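For illustration, a minimal Python sketch of the pixel-to-camera-frame conversion and the quaternion composition $q = q_z \otimes q_y \otimes q_x$ described above (helper names are illustrative):

```python
# Sketch of the arm target-pose conversion: back-project the target pixel to
# camera-frame XYZ at a preset depth Z, then compose the orientation as
# q = q_z * q_y * q_x (rotations applied in X, Y, Z order, q_x rightmost).
import math

def quat_about(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` about `axis`."""
    w = math.cos(angle / 2.0)
    s = math.sin(angle / 2.0)
    x, y, z = (s * a for a in axis)
    return (w, x, y, z)

def quat_mul(a, b):
    """Hamilton product a * b."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def arm_target_pose(u, v, u_0, v_0, fx, fy, Z, alpha, beta, gamma):
    X = (u - u_0) * Z / fx  # back-projection at the preset depth Z
    Y = (v - v_0) * Z / fy
    q = quat_mul(quat_about((0, 0, 1), gamma),       # q_z
        quat_mul(quat_about((0, 1, 0), beta),        # q_y
                 quat_about((1, 0, 0), alpha)))      # q_x rightmost
    return (X, Y, Z), q
```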
The machine tool loading and unloading area involves several tasks, such as handover (connection) and loading/unloading. The connection task requires the mechanical arm to follow the current pose of the AGV as its target pose, so the arm's motion target and speed must be adjusted in real time; the loading and unloading task requires the arm to take the position above a stationary workpiece as its target pose, in which case only the arm moves and it should reach the target as fast as possible. The cooperation module therefore calculates the robot's motion speed according to the work task being executed.
After receiving the current pose of the AGV robot and the target pose of the AGV robot, the AGV robot control module calculates the running speeds of the left and right wheels of the AGV robot, sends them to the AGV robot, and controls the AGV robot to move to the target pose of the AGV robot, as follows:
the AGV motion model is shown in FIG. 7, and the motion equation of the AGV robot is:
Figure SMS_72
,
wherein the method comprises the steps of
Figure SMS_73
The rotation speeds of a right wheel and a left wheel of the AGV are respectively +.>
Figure SMS_74
For the target pose>
Figure SMS_75
As for the current pose, the position and the orientation of the user,
Figure SMS_76
for the time required to move from the current pose to the target pose, v is set to a fixed value u, and the left and right wheel speeds are:
Figure SMS_77
,
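For illustration, a minimal Python sketch of the wheel-speed computation above, assuming the standard differential-drive model with wheel track $d$:

```python
# Sketch of the differential-drive wheel speeds: fixed forward speed u and a
# yaw rate that closes the heading error within time T. The wheel track d is
# an assumed parameter of the AGV.
def wheel_speeds(theta_c, theta_t, T, u, d):
    omega = (theta_t - theta_c) / T  # required yaw rate over the interval
    v_r = u + 0.5 * d * omega        # right wheel
    v_l = u - 0.5 * d * omega        # left wheel
    return v_l, v_r
```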
the current pose and the target pose of the AGV robot determine two position points
Figure SMS_78
And (3) orientation->
Figure SMS_79
Figure SMS_80
And (3) orientation->
Figure SMS_81
From two position points->
Figure SMS_82
Intersecting the two directions at a point along the extension line to obtain
Figure SMS_83
The method comprises the following steps:
Figure SMS_84
,
Taking $P_1$, $P_0$, $P_2$ as control points, a quadratic Bezier curve is constructed so that the AGV moves from the current pose to the target pose; the curve is

$$B(t) = (1 - t)^2 P_1 + 2t(1 - t)\,P_0 + t^2 P_2, \qquad t = kh,\ k = 0, 1, \ldots, 1/h,$$

where $h$ is the step length. The orientation angle of the tangent at any point on the quadratic Bezier curve is

$$\theta(t) = \arctan\frac{B_y'(t)}{B_x'(t)},$$

giving the intermediate points $B(t_k)$ and the corresponding tangent directions $\theta(t_k)$. The pose of each intermediate point is used as the target point to obtain the speeds of the left and right wheels.
The intermediate points are obtained through $B(t)$. Regarding the argument $t$: assuming a step length $h = 0.1$, then as $k$ increases, $t$ takes the values 0, 0.1, 0.2, …, 1. Substituting each value of $t$ into the formula $B(t)$ yields the points $B(t_0), B(t_1), \ldots$, i.e. the intermediate points. In other words, the continuous Bezier curve is discretized into a finite number of points, and these points are the intermediate points.
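For illustration, a minimal Python sketch that builds the control point $P_0$ from the two heading lines, discretizes the quadratic Bezier curve with step length $h$, and takes the tangent angle at each intermediate point (function names are illustrative):

```python
# Sketch of the path generation: intersect the heading lines of P1 and P2 to
# get control point P0, then discretize the quadratic Bezier curve and take
# each point's tangent angle as its orientation.
import math

def control_point(p1, th1, p2, th2):
    """Intersection of the lines through p1 (heading th1) and p2 (heading th2)."""
    d1 = (math.cos(th1), math.sin(th1))
    d2 = (math.cos(th2), math.sin(th2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]  # assumes headings not parallel
    t1 = ((p2[0] - p1[0]) * (-d2[1]) - (-d2[0]) * (p2[1] - p1[1])) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

def bezier_waypoints(p1, p0, p2, h=0.1):
    """Intermediate poses (x, y, theta) for t = 0, h, 2h, ..., 1."""
    pts = []
    k = 0
    while (t := k * h) <= 1.0 + 1e-9:
        x, y = [(1 - t)**2 * a + 2*t*(1 - t) * m + t**2 * c
                for a, m, c in zip(p1, p0, p2)]
        dx, dy = [2*(1 - t) * (m - a) + 2*t * (c - m)  # derivative B'(t)
                  for a, m, c in zip(p1, p0, p2)]
        pts.append((x, y, math.atan2(dy, dx)))
        k += 1
    return pts
```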
The orientation is obtained from the ArUco Tag identification: the identification yields a coordinate vector representing the direction, of the form $(x, y)$, from which the direction angle is obtained as $\arctan(y/x)$.
The visual guiding system for the multi-robot collaborative loading and unloading by applying the visual guiding method for the multi-robot collaborative loading and unloading comprises a camera, a truss, a mechanical arm, an AGV and an ArUco Tag marker; the ArUco Tag marker is arranged on an AGV, and the camera is arranged on a beam of a truss, and is characterized by comprising a mechanical arm control module, an AGV control module, an image processing unit, a cooperation module and an identification module;
the camera is connected with the image processing unit, the image unit is connected with the identification module, the cooperation module, the mechanical arm control module and the AGV control module are respectively connected with the identification module, and the mechanical arm control module and the AGV control module are also respectively connected with the cooperation module.
The foregoing is merely a preferred embodiment of the invention. It is to be understood that the invention is not limited to the form disclosed herein and is not to be construed as excluding other embodiments; it is capable of use in various other combinations, modifications and environments, and of changes within the scope of the inventive concept, whether through the teachings above or the skill and knowledge of the relevant art. Modifications and variations which do not depart from the spirit and scope of the invention are intended to fall within the scope of the appended claims.

Claims (2)

1. A visual guiding method for multi-robot collaborative loading and unloading, characterized by comprising the following steps:
the image processing unit adopts an ArUco Tag recognition algorithm to recognize the current pose of the AGV robot in the image acquired by the camera, and the recognized current pose of the AGV robot is respectively sent to the cooperative module, the mechanical arm control module and the AGV robot control module;
the cooperative module calculates the target pose of the AGV robot through a cooperative algorithm according to the set working area position and the work task obtained through recognition according to the ID of the AGV robot, and then obtains the target pose of the mechanical arm through the cooperative algorithm according to the target pose of the AGV robot or the acquired current pose of the designated workpiece;
after receiving the current pose of the AGV robot and the target pose of the AGV robot, the AGV robot control module calculates the running speeds of the left wheel and the right wheel of the AGV robot respectively and sends the running speeds to the AGV robot, and controls the AGV robot to move to the target pose of the AGV robot;
the mechanical arm control module sends a motion instruction to MoveIt according to the current pose of the mechanical arm and the received target pose of the mechanical arm; MoveIt obtains the mechanical arm joint angles through an inverse kinematics solution; after receiving the mechanical arm joint angles, the communication node converts them into motion instructions, sends them to the mechanical arm through TCP/IP communication, and controls the mechanical arm to move to the mechanical arm target pose; wherein MoveIt is a robot-related tool set contained in the ROS system;
the AGV robot reaches the target pose of the AGV robot, and the mechanical arm reaches the target pose of the mechanical arm, so that guidance is completed;
wherein the cooperation module calculating the target pose of the AGV robot through the cooperation algorithm according to the set working area position and the work task identified from the AGV robot ID, and then obtaining the mechanical arm target pose through the cooperation algorithm according to the AGV robot target pose or the acquired current pose of the designated workpiece, comprises:
the target point pixel coordinates $(u_b, v_b)$ of the AGV robot target pose are calculated as

$$u_b = u_0 + \frac{H}{H - h}\,(u_a - u_0), \qquad v_b = v_0 + \frac{H}{H - h}\,(v_a - v_0),$$

wherein $(u_a, v_a)$ are the pixel coordinates of the AGV robot center point; $(u_0, v_0)$ are the pixel coordinates corresponding to the camera optical axis; $H$ is the height of the camera lens above the ground; and $h$ is the AGV height;

mechanical arm target pose: the target point pixel coordinates are converted to obtain the mechanical arm target pose,

$$X = \frac{(u - u_0)\,Z}{f_x}, \qquad Y = \frac{(v - v_0)\,Z}{f_y},$$

wherein $(X, Y, Z)$ are the coordinates of the target point in the camera coordinate system; $Z$ is a preset value; $f_x$ and $f_y$ are internal reference values; and $(u, v)$ are the target point pixel coordinates, coordinated so that the tail end of the mechanical arm and the center of the AGV robot are positioned at the same position in the image, namely

$$(u, v) = (u_b, v_b);$$

the orientation is

$$q = q_z \otimes q_y \otimes q_x,$$

wherein $q$ is the quaternion of the target point under the camera coordinate system; $q_x$ is the quaternion of a rotation by angle $\alpha$ about the X axis; $q_y$ is the quaternion of a rotation by angle $\beta$ about the Y axis; and $q_z$ is the quaternion of a rotation by angle $\gamma$ about the Z axis;
wherein, after receiving the current pose of the AGV robot and the target pose of the AGV robot, the AGV robot control module calculating the running speeds of the left and right wheels of the AGV robot, sending them to the AGV robot, and controlling the AGV robot to move to the target pose of the AGV robot, comprises:
the motion equation of the AGV robot is

$$v = \frac{v_R + v_L}{2}, \qquad \omega = \frac{v_R - v_L}{d} = \frac{\theta_t - \theta_c}{T},$$

wherein $v_R$ and $v_L$ are the speeds of the right and left wheels of the AGV respectively; $d$ is the wheel track; $\theta_t$ is the orientation of the AGV robot target pose; $\theta_c$ is the orientation of the current pose of the AGV robot; and $T$ is the time required to move from the current pose to the target pose; $v$ is set to a fixed value $u$, and the left and right wheel speeds are

$$v_R = u + \frac{d\,(\theta_t - \theta_c)}{2T}, \qquad v_L = u - \frac{d\,(\theta_t - \theta_c)}{2T};$$
the current pose and the target pose of the AGV robot determine two position points, $P_1$ with orientation $\theta_1$ and $P_2$ with orientation $\theta_2$; extending lines from the two position points $P_1$, $P_2$ along their respective orientations until they intersect yields the point $P_0$, with

$$P_0 = P_1 + t_1(\cos\theta_1, \sin\theta_1) = P_2 + t_2(\cos\theta_2, \sin\theta_2);$$

taking $P_1$, $P_0$, $P_2$ as control points, a quadratic Bezier curve is constructed so that the AGV robot moves from the current pose to the target pose, the curve being

$$B(t) = (1 - t)^2 P_1 + 2t(1 - t)\,P_0 + t^2 P_2, \qquad t = kh,\ k = 0, 1, \ldots, 1/h,$$

wherein $h$ is the step length; the orientation angle of the tangent at any point of the quadratic Bezier curve $B(t)$ is

$$\theta(t) = \arctan\frac{B_y'(t)}{B_x'(t)},$$

giving the intermediate points $B(t_k)$ and the corresponding tangent directions $\theta(t_k)$; the pose of each intermediate point is used as the target point to obtain the speeds of the left and right wheels, and $B'(t)$ is the derivative of the quadratic Bezier curve $B(t)$.
2. A visual guiding system for multi-robot collaborative loading and unloading applying the visual guiding method for multi-robot collaborative loading and unloading according to claim 1, comprising a camera, a truss, a mechanical arm, an AGV and an ArUco Tag marker, the ArUco Tag marker being arranged on the AGV and the camera being arranged on a beam of the truss, characterized by further comprising a mechanical arm control module, an AGV control module, an image processing unit, a cooperation module and an identification module;
the camera is connected with the image processing unit, the image processing unit is connected with the identification module, the cooperation module, the mechanical arm control module and the AGV control module are respectively connected with the identification module, and the mechanical arm control module and the AGV control module are also respectively connected with the cooperation module.
CN202310377571.6A 2023-04-11 2023-04-11 Visual guiding method and system for multi-robot cooperative feeding and discharging Active CN116100562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310377571.6A CN116100562B (en) 2023-04-11 2023-04-11 Visual guiding method and system for multi-robot cooperative feeding and discharging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310377571.6A CN116100562B (en) 2023-04-11 2023-04-11 Visual guiding method and system for multi-robot cooperative feeding and discharging

Publications (2)

Publication Number Publication Date
CN116100562A CN116100562A (en) 2023-05-12
CN116100562B (en) 2023-06-09

Family

ID=86258256

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310377571.6A Active CN116100562B (en) 2023-04-11 2023-04-11 Visual guiding method and system for multi-robot cooperative feeding and discharging

Country Status (1)

Country Link
CN (1) CN116100562B (en)


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9533418B2 (en) * 2009-05-29 2017-01-03 Cognex Corporation Methods and apparatus for practical 3D vision system
JP5507595B2 (en) * 2012-02-17 2014-05-28 ファナック株式会社 Article assembling apparatus using robot
DE102017005194C5 (en) * 2017-05-31 2022-05-19 Kuka Deutschland Gmbh Controlling a robot assembly

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009285816A (en) * 2008-05-30 2009-12-10 Toyota Motor Corp Leg type robot and control method of the same
CN102288192A (en) * 2011-07-01 2011-12-21 重庆邮电大学 Multi-robot path planning method based on Ad-Hoc network
CN106737868A (en) * 2017-01-16 2017-05-31 深圳汇创联合自动化控制有限公司 A kind of mobile-robot system
CN110472698A (en) * 2019-08-22 2019-11-19 四川大学 Increase material based on the metal of depth and transfer learning and shapes fusion penetration real-time predicting method
CN111360818A (en) * 2020-01-15 2020-07-03 上海锵玫人工智能科技有限公司 Mechanical arm control system through visual positioning
CN115648200A (en) * 2022-09-08 2023-01-31 杭州景吾智能科技有限公司 Cooperative control method and system for composite robot

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Dual quaternion-based visual servoing for grasping moving objects; De Farias, C.; IEEE Xplore; pp. 151-158 *
Path Planning and Tracking Control Method Based on Bessel Curve; Zhiben Zhang; IEEE Xplore; pp. 511-514 *
Dual-robot collaborative precision fiber placement technology; Chen Lixiao; Aeronautical Manufacturing Technology; Vol. 65, No. 13; pp. 70-77 *
Research on a speed inference system for a groove-cutting robot based on interval type-2 fuzzy systems; Zhang Xuejian; Hot Working Technology; No. 15 (2023); pp. 116-122 *
A survey of multi-robot visual simultaneous localization and mapping; Yin Hesheng; Journal of Mechanical Engineering; Vol. 58, No. 11; pp. 11-36 *

Also Published As

Publication number Publication date
CN116100562A (en) 2023-05-12

Similar Documents

Publication Publication Date Title
CN111775146B (en) Visual alignment method under industrial mechanical arm multi-station operation
US10894324B2 (en) Information processing apparatus, measuring apparatus, system, interference determination method, and article manufacturing method
CN107901041B (en) Robot vision servo control method based on image mixing moment
JP6180087B2 (en) Information processing apparatus and information processing method
CN105014677B (en) Vision Mechanical arm control method based on Camshift visual tracking and D-H modeling algorithm
Taylor et al. Robust vision-based pose control
JP2004508954A (en) Positioning device and system
CN104786226A (en) Posture and moving track positioning system and method of robot grabbing online workpiece
CN104552341B (en) Mobile industrial robot single-point various visual angles pocket watch position and attitude error detection method
US20220390954A1 (en) Topology Processing for Waypoint-based Navigation Maps
CN113246142B (en) Measuring path planning method based on laser guidance
WO2022014312A1 (en) Robot control device and robot control method, and program
Gratal et al. Virtual visual servoing for real-time robot pose estimation
CN116766194A (en) Binocular vision-based disc workpiece positioning and grabbing system and method
JP2003311670A (en) Positioning control method of robot arm and robot equipment
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
CN116100562B (en) Visual guiding method and system for multi-robot cooperative feeding and discharging
CN109048911B (en) Robot vision control method based on rectangular features
JPH0847881A (en) Method of remotely controlling robot
KR102452315B1 (en) Apparatus and method of robot control through vision recognition using deep learning and marker
JPH02110788A (en) Method for recognizing shape of three-dimensional object
Zhang et al. High-precision pose estimation method of the 3C parts by combining 2D and 3D vision for robotic grasping in assembly applications
KR100991194B1 (en) System and method for transporting object of mobing robot
Qingda et al. Workpiece posture measurement and intelligent robot grasping based on monocular vision
Cheng et al. A study of using 2D vision system for enhanced industrial robot intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant