CN113771042A - Vision-based method and system for clamping tool by mobile robot - Google Patents

Vision-based method and system for clamping tool by mobile robot

Info

Publication number
CN113771042A
CN113771042A (application CN202111161341.3A; granted as CN113771042B)
Authority
CN
China
Prior art keywords
tool
robot
pose
coordinate system
positioning
Prior art date
Legal status
Granted
Application number
CN202111161341.3A
Other languages
Chinese (zh)
Other versions
CN113771042B (en)
Inventor
夏子涛 (Xia Zitao)
郭震 (Guo Zhen)
Current Assignee
Shanghai Jingwu Intelligent Technology Co Ltd
Original Assignee
Shanghai Jingwu Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jingwu Intelligent Technology Co Ltd
Priority: CN202111161341.3A
Publication of application CN113771042A
Application granted
Publication of grant CN113771042B
Legal status: Active

Classifications

    • B25J 9/1697 — Vision controlled systems (programme controls characterised by use of sensors other than normal servo-feedback; perception control; sensor fusion)
    • B25J 19/023 — Optical sensing devices including video camera means (accessories fitted to manipulators)
    • B25J 9/1612 — Programme controls characterised by the hand, wrist, grip control
    • B25J 9/1679 — Programme controls characterised by the tasks executed
    • Y02P 90/02 — Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention provides a vision-based method and system for clamping a tool with a mobile robot. The method comprises the following steps: step S1: positioning and calibrating the tool; step S2: positioning and clamping the tool. Four tools are arranged on the tool holder, a flat plate is arranged on its upper portion, and a positioning code is arranged at each end of the plate. The invention enables the robot to dynamically adapt its precise clamping action at different positions, ensures the accuracy of the robot's position and posture when clamping an external tool, and solves the difficulty of directly positioning the clamp.

Description

Vision-based method and system for clamping tool by mobile robot
Technical Field
The invention relates to the technical field of robot image processing, in particular to a method and a system for clamping a tool by a mobile robot based on vision.
Background
Robots require different end tools when performing different tasks, which requires the robot to have the ability to switch tools. Unlike randomly picking up an article, however, switching tools places high demands on the accuracy of the position and posture of the robot end. Conventionally, a robot with a fixed base can determine the clamping position and posture by teaching, since the relationship between the robot coordinate system and the tool coordinate system remains fixed.
Patent document CN112008767A discloses a gripping device for an AGV robot and a method of using it. The device comprises an AGV robot, a monitoring device, a gripping device and a positioning device. The monitoring device is fixed to the left end of the AGV robot; a controller, a positioning device and a mechanical arm are fixed to its top from left to right; and a clamping device comprising a first electric telescopic rod is fixed to the top of the mechanical arm. Through the arrangement of a fixing frame and a pressure sensor, in cooperation with the rotating connection between the rotating rod and the support rod, the fixed connection between the support rod and the clamping jaw, the elastic force of the first spring against the pressure plate, and the fixed connection among the fixing frame, the pressure plate and the pressure sensor, the device can grip articles and adjust the clamping force when in use.
With regard to these related technologies, the inventor considers that a robot clamping a tool has high requirements on position and posture and is difficult to position accurately; when the robot is mobile, the relationship between the robot and the tool coordinate system changes dynamically, which increases the clamping difficulty; and point cloud information of the tool to be clamped is difficult to obtain, so auxiliary positioning is needed. A technical solution to these problems is therefore required.
Disclosure of Invention
In view of the defects in the prior art, the present invention provides a vision-based method and system for clamping a tool with a mobile robot.
According to the invention, the vision-based method for clamping the tool by the mobile robot comprises the following steps:
step S1: positioning and calibrating the tool;
step S2: positioning and clamping a tool; four tools are arranged on the tool clamp, a flat plate is arranged on the upper portion of the tool clamp, and positioning codes are arranged at two ends of the flat plate respectively.
Preferably, the step S1 includes the steps of:
step S1.1: constructing and converting a coordinate system of a positioning code under a photographing position;
step S1.2: teaching a tool clamping pose;
step S1.3: and calculating the clamping pose of the tool under the coordinate system of the positioning code.
Preferably, said step S1.1 comprises the steps of:
step S1.1.1: placing the external tool holder, moving the robot to a specified position, and adjusting the posture of the robot so that the tool holder is fully imaged by the RGBD camera; recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M_S;
step S1.1.2: collecting an RGBD image of the tool holder, calculating the coordinates of the center point of each of the two positioning codes, and calculating the equation of the plane in which the two-dimensional codes lie; constructing a coordinate system from the two positioning codes: the origin is the center of the right positioning code, the X axis points from this origin toward the center of the left positioning code, the Z axis is the downward-pointing plane normal, and the Y axis is determined by the right-hand rule;
step S1.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M_L2Cam; the calibration matrix from the camera to the robot end is M_Cam2End; calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M_L:
M_L = M_S * M_Cam2End * M_L2Cam
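For illustration, a minimal numpy sketch of steps S1.1.2–S1.1.3 follows. It assumes the two tag centers and the plate normal have already been extracted from the RGBD image and that the camera looks down at the plate; all function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def build_tag_frame(right_center, left_center, plane_normal):
    """Positioning-code frame of step S1.1.2 as a 4x4 homogeneous
    matrix, i.e. M_L2Cam (tag frame expressed in camera coordinates)."""
    x = left_center - right_center            # X: right tag center -> left tag center
    x = x / np.linalg.norm(x)
    z = plane_normal / np.linalg.norm(plane_normal)
    if z[2] < 0:                              # assumption: the camera looks down at the
        z = -z                                # plate, so the downward normal has +Z here
    x = x - np.dot(x, z) * z                  # re-orthogonalize X against Z
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)                        # Y completes a right-handed frame
    m = np.eye(4)
    m[:3, 0], m[:3, 1], m[:3, 2] = x, y, z    # columns are the axis directions
    m[:3, 3] = right_center                   # origin at the right tag center
    return m

def tag_in_robot_frame(M_S, M_Cam2End, M_L2Cam):
    """Step S1.1.3: M_L = M_S * M_Cam2End * M_L2Cam."""
    return M_S @ M_Cam2End @ M_L2Cam
```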
Preferably, said step S1.2 comprises the steps of:
step S1.2.1: keeping the robot chassis stationary, teaching the tool clamping action, i.e. moving the mechanical arm to the pose for clamping the tool, and recording the pose transformation matrix of the arm end in this state, denoted M_claming;
step S1.2.2: there may be a plurality of tool clamping poses; the nth clamping pose is recorded as M_claming(n).
Preferably, said step S1.3 comprises the steps of:
step S1.3.1: because the position of the positioning code and the clamping pose of each tool are fixed relative to each other, the transformation matrix of the nth clamping pose relative to the positioning code coordinate system is denoted M_claming2L(n), giving the equation:
M_claming(n) = M_L * M_claming2L(n);
step S1.3.2: solving from this formula the transformation M_claming2L(n) of each clamping pose relative to the positioning code coordinate system:
M_claming2L(n) = M_L^(-1) * M_claming(n)
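A sketch of this solving step under the same assumptions (`grip_poses` holds the taught matrices M_claming(n) in the robot base frame; the name is illustrative):

```python
import numpy as np

def calibrate_grip_offsets(M_L, grip_poses):
    """Step S1.3.2: M_claming2L(n) = inv(M_L) @ M_claming(n).
    The returned offsets are fixed and can be stored for reuse."""
    M_L_inv = np.linalg.inv(M_L)
    return [M_L_inv @ M_n for M_n in grip_poses]
```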
Preferably, the step S2 includes the steps of:
step S2.1: positioning preparation work in real time;
step S2.2: and calculating and executing the clamping pose of the tool in real time.
Preferably, said step S2.1 comprises the steps of:
step S2.1.1: when clamping a tool in real time, the mobile robot first moves to the vicinity of the tool holder, and the robot end is moved to the previously recorded pose matrix M_S;
step S2.1.2: judging whether the two positioning codes are fully imaged in the image acquired by the camera; if not, finely adjusting the mechanical arm until both positioning codes are fully imaged, and recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M'_S;
step S2.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M'_L2Cam;
step S2.1.4: calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M'_L:
M'_L = M'_S * M_Cam2End * M'_L2Cam
Preferably, said step S2.2 comprises the steps of:
step S2.2.1: selecting the tool to be clamped according to the clamping command; the pose of the nth clamping is denoted M'_claming(n), giving the following formula, where M_claming2L(n) is the fixed value calculated during calibration:
M'_claming(n) = M'_L * M_claming2L(n);
step S2.2.2: calculating the pose for clamping the tool from the current position of the mobile robot; the end of the mechanical arm is moved to M'_claming(n), and the robot clamping jaw is closed, completing the robot clamping action.
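A corresponding runtime sketch, continuing the assumptions above; `robot.move_end_to` and `robot.close_jaw` are hypothetical SDK calls standing in for the actual arm and gripper commands, which the patent does not name:

```python
def clamp_tool(robot, M_L_new, offsets, n):
    """Steps S2.2.1-S2.2.2: re-anchor the stored offset to the freshly
    observed tag frame M'_L and execute the clamping motion."""
    M_grip = M_L_new @ offsets[n]     # M'_claming(n) = M'_L * M_claming2L(n)
    robot.move_end_to(M_grip)         # hypothetical: drive the arm end to the pose
    robot.close_jaw()                 # hypothetical: close the jaw to clamp
```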
The invention also provides a system for clamping the tool by the mobile robot based on vision, which comprises the following modules:
module M1: positioning and calibrating the tool;
module M2: positioning and clamping a tool; the tool clamp is provided with four tools, the upper part of the tool clamp is provided with a flat plate, and two ends of the flat plate are respectively provided with a positioning code;
module M1.1: constructing and converting a coordinate system of a positioning code under a photographing position;
module M1.2: teaching a tool clamping pose;
module M1.3: calculating the clamping pose of the tool under the positioning code coordinate system;
module M1.1.1: placing the external tool holder, moving the robot to a specified position, and adjusting the posture of the robot so that the tool holder is fully imaged by the RGBD camera; recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M_S;
module M1.1.2: collecting an RGBD image of the tool holder, calculating the coordinates of the center point of each of the two positioning codes, and calculating the equation of the plane in which the two-dimensional codes lie; constructing a coordinate system from the two positioning codes: the origin is the center of the right positioning code, the X axis points from this origin toward the center of the left positioning code, the Z axis is the downward-pointing plane normal, and the Y axis is determined by the right-hand rule;
module M1.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M_L2Cam; the calibration matrix from the camera to the robot end is M_Cam2End; calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M_L:
M_L = M_S * M_Cam2End * M_L2Cam
Module M1.2.1: keeping the robot chassis stationary, teaching the tool clamping action, i.e. moving the mechanical arm to the pose for clamping the tool, and recording the pose transformation matrix of the arm end in this state, denoted M_claming;
module M1.2.2: there may be a plurality of tool clamping poses; the nth clamping pose is recorded as M_claming(n);
module M1.3.1: because the position of the positioning code and the clamping pose of each tool are fixed relative to each other, the transformation matrix of the nth clamping pose relative to the positioning code coordinate system is denoted M_claming2L(n), giving the equation:
M_claming(n) = M_L * M_claming2L(n);
module M1.3.2: solving from this formula the transformation M_claming2L(n) of each clamping pose relative to the positioning code coordinate system:
M_claming2L(n) = M_L^(-1) * M_claming(n)
Preferably, the module M2 includes the following modules:
module M2.1: positioning preparation work in real time;
module M2.2: calculating and executing a tool clamping pose in real time;
module M2.1.1: when clamping a tool in real time, the mobile robot first moves to the vicinity of the tool holder, and the robot end is moved to the previously recorded pose matrix M_S;
module M2.1.2: judging whether the two positioning codes are fully imaged in the image acquired by the camera; if not, finely adjusting the mechanical arm until both positioning codes are fully imaged, and recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M'_S;
module M2.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M'_L2Cam;
module M2.1.4: calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M'_L:
M'_L = M'_S * M_Cam2End * M'_L2Cam
Module M2.2.1: selecting the tool to be clamped according to the clamping command; the pose of the nth clamping is denoted M'_claming(n), giving the following formula, where M_claming2L(n) is the fixed value calculated during calibration:
M'_claming(n) = M'_L * M_claming2L(n);
module M2.2.2: calculating the pose for clamping the tool from the current position of the mobile robot; the end of the mechanical arm is moved to M'_claming(n), and the robot clamping jaw is closed, completing the robot clamping action.
Compared with the prior art, the invention has the following beneficial effects:
1. a mirror is an important object in an indoor environment; accurately recognizing and positioning it provides important support for the work of an indoor robot;
2. the method is widely applicable to mirror positioning in various scenes, and the mirror algorithm has good universality;
3. the mirror algorithm has good anti-interference performance and can adapt to complex background walls.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic view of a mobile robot and a tool holder;
FIG. 2 is a schematic view of a robot body;
FIG. 3 is a schematic view of an external tool holder;
FIG. 4 is a schematic view of a positioning code on the tool holder;
fig. 5 is a schematic diagram of two positioning code-based coordinate systems.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications could be made by those skilled in the art without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Referring to fig. 1, the present invention provides a vision-based apparatus for gripping a tool by a mobile robot, including a mobile robot body and an external tool gripper.
Referring to fig. 2, the mobile robot body comprises a movable chassis, a 6-axis mechanical arm above the chassis, and an RGBD camera at the end of the mechanical arm.
Referring to fig. 3, the external toolholder comprises a toolholder body with 4 tools thereon.
Referring to fig. 4, the upper part of the tool holder body is a flat plate, and a positioning code is provided at each of the left and right ends of the plate.
The invention also provides a vision-based method for clamping the tool by the mobile robot, which comprises the following steps:
step S1: positioning and calibrating the tool; step 1.1: constructing and converting the coordinate system of the positioning codes at the photographing position. The external tool holder is placed, the robot moves to a suitable position, and its posture is adjusted so that the tool holder is fully imaged by the RGBD camera. The pose matrix of the robot end in the robot coordinate system at this moment is recorded as M_S. An RGBD image of the tool holder is collected, the coordinates of the center points of the two positioning codes are calculated, and the equation of the plane in which the two-dimensional codes lie is computed. A coordinate system consisting of the two positioning codes is then constructed: the origin is the center of the right positioning code, the X axis points from this origin toward the center of the left positioning code, the Z axis is the downward-pointing normal of the plane, and the Y axis is determined by the right-hand rule. The pose matrix of the positioning code coordinate system in the camera coordinate system at this moment is recorded as M_L2Cam; the known calibration matrix from the camera to the robot end is recorded as M_Cam2End; and the pose matrix of the positioning code coordinate system in the robot coordinate system is calculated and recorded as M_L:
M_L = M_S * M_Cam2End * M_L2Cam
step 1.2: teaching the tool clamping poses. Keeping the robot chassis stationary, the tool clamping action is taught: the mechanical arm is moved to the clamping pose of the tool, and the pose transformation matrix of the arm end in this state is recorded as M_claming. There may be multiple tool clamping poses; the nth clamping pose is recorded as M_claming(n).
step 1.3: calculating the clamping pose of the tool in the positioning code coordinate system. Since the position of the positioning code and the clamping pose of each tool are fixed relative to each other, the transformation matrix of the nth clamping pose relative to the positioning code coordinate system is written as M_claming2L(n), and the equation is as follows:
M_claming(n) = M_L * M_claming2L(n)
From this formula, the transformation of each clamping pose relative to the positioning code coordinate system can be solved:
M_claming2L(n) = M_L^(-1) * M_claming(n)
Step S2: positioning and clamping a tool; step 2.1: positioning preparation work in real time; when a tool is clamped in real time, the mobile robot firstly moves to the position near the robot, and the tail end of the robot is moved to the previous position matrix MS; judging whether two positioning codes in an image acquired by a camera are imaged completely, if not, finely adjusting the mechanical arm until the two positioning codes are imaged completely in the image, and recording a pose matrix MS' of the tail end of the robot under a robot coordinate system; similar to the calibration process, calculating a position matrix of the positioning code coordinate system at the moment in a camera coordinate system, and recording the position matrix as ML2 Cam'; and calculating a pose matrix of the positioning code coordinate system in the robot coordinate system, and recording the pose matrix as ML':
M'L=M′s*MCam 2 End*M'L2 Cam
step 2.2: calculating and executing the tool clamping pose in real time. The tool to be clamped is selected according to the clamping command; the pose of the nth clamping is expressed as M'_claming(n), and the formula is as follows (where M_claming2L(n) is the fixed value calculated in the calibration process):
M'_claming(n) = M'_L * M_claming2L(n)
From the current position of the mobile robot, the pose for clamping the tool is thus calculated; the end of the mechanical arm is moved to M'_claming(n), and the robot clamping jaw is closed, completing the robot clamping action.
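Putting the two phases together, the following condensed sketch shows how the calibration products (the photo pose M_S and the offsets M_claming2L(n)) persist between runs. `detect_tag_frame`, `robot.end_pose`, `robot.move_end_to` and `robot.close_jaw` are hypothetical placeholders for the RGBD tag detection and the robot SDK, not APIs defined by the patent:

```python
import numpy as np

def calibrate(robot, camera, M_Cam2End, taught_grip_poses, detect_tag_frame):
    """One-time calibration (step S1): record the photo pose and the
    fixed per-tool offsets relative to the positioning-code frame."""
    M_S = robot.end_pose()                              # photo pose in the robot base frame
    M_L = M_S @ M_Cam2End @ detect_tag_frame(camera)    # tag frame in the robot base frame
    offsets = [np.linalg.inv(M_L) @ M for M in taught_grip_poses]
    return M_S, offsets

def grip(robot, camera, M_Cam2End, M_S, offsets, n, detect_tag_frame):
    """Runtime clamping (step S2): return to the photo pose, re-observe
    the tags, and clamp tool n from wherever the chassis actually stopped."""
    robot.move_end_to(M_S)                              # coarse approach to the photo pose
    M_S_new = robot.end_pose()                          # pose after fine adjustment
    M_L_new = M_S_new @ M_Cam2End @ detect_tag_frame(camera)
    robot.move_end_to(M_L_new @ offsets[n])             # M'_claming(n)
    robot.close_jaw()
```

This mirrors the design choice of the method: only the tag frame is re-measured at run time, while every per-tool offset stays a constant from calibration, so the positioning error of the chassis cancels out.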
The invention also provides a system for clamping the tool by the mobile robot based on vision, which comprises the following modules:
module M1: positioning and calibrating the tool; module M1.1: constructing and converting the coordinate system of the positioning codes at the photographing position; module M1.1.1: placing the external tool holder, moving the robot to a specified position, and adjusting the posture of the robot so that the tool holder is fully imaged by the RGBD camera, then recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M_S; module M1.1.2: collecting an RGBD image of the tool holder, calculating the coordinates of the center point of each of the two positioning codes, and calculating the equation of the plane in which the two-dimensional codes lie, then constructing a coordinate system from the two positioning codes: the origin is the center of the right positioning code, the X axis points from this origin toward the center of the left positioning code, the Z axis is the downward-pointing plane normal, and the Y axis is determined by the right-hand rule; module M1.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M_L2Cam; the calibration matrix from the camera to the robot end is M_Cam2End; calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M_L:
M_L = M_S * M_Cam2End * M_L2Cam
Module M1.2: teaching the tool clamping poses; module M1.2.1: keeping the robot chassis stationary, teaching the tool clamping action, i.e. moving the mechanical arm to the pose for clamping the tool, and recording the pose transformation matrix of the arm end in this state, denoted M_claming; module M1.2.2: there may be a plurality of tool clamping poses, the nth clamping pose being recorded as M_claming(n).
Module M1.3: calculating the clamping pose of the tool in the positioning code coordinate system; module M1.3.1: because the position of the positioning code and the clamping pose of each tool are fixed relative to each other, the transformation matrix of the nth clamping pose relative to the positioning code coordinate system is denoted M_claming2L(n), giving the equation:
M_claming(n) = M_L * M_claming2L(n).
Module M1.3.2: solving from this formula the transformation M_claming2L(n) of each clamping pose relative to the positioning code coordinate system:
M_claming2L(n) = M_L^(-1) * M_claming(n)
Module M2: positioning and clamping the tool; four tools are arranged on the tool holder, a flat plate is arranged on its upper portion, and a positioning code is arranged at each end of the plate; module M2.1: real-time positioning preparation; module M2.1.1: when clamping a tool in real time, the mobile robot first moves to the vicinity of the tool holder, and the robot end is moved to the previously recorded pose matrix M_S; module M2.1.2: judging whether the two positioning codes are fully imaged in the image acquired by the camera; if not, finely adjusting the mechanical arm until both positioning codes are fully imaged, and recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M'_S; module M2.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M'_L2Cam; module M2.1.4: calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M'_L:
M'_L = M'_S * M_Cam2End * M'_L2Cam
Module M2.2: calculating and executing the tool clamping pose in real time; module M2.2.1: selecting the tool to be clamped according to the clamping command; the pose of the nth clamping is denoted M'_claming(n), giving the following formula, where M_claming2L(n) is the fixed value calculated during calibration:
M'_claming(n) = M'_L * M_claming2L(n).
Module M2.2.2: calculating the pose for clamping the tool from the current position of the mobile robot; the end of the mechanical arm is moved to M'_claming(n), and the robot clamping jaw is closed, completing the robot clamping action.
A mirror is an important object in an indoor environment, and accurately recognizing and positioning it provides important support for the work of an indoor robot. The method is widely applicable to mirror positioning in various scenes, and the mirror algorithm has good universality; it also has good anti-interference performance and can adapt to complex background walls.
Those skilled in the art will appreciate that, in addition to implementing the system and its various devices, modules, units provided by the present invention as pure computer readable program code, the system and its various devices, modules, units provided by the present invention can be fully implemented by logically programming method steps in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units included in the system for realizing various functions can also be regarded as structures in the hardware component; means, modules, units for performing the various functions may also be regarded as structures within both software modules and hardware components for performing the method.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A vision-based method for a mobile robot to grip a tool, the method comprising the steps of:
step S1: positioning and calibrating the tool;
step S2: positioning and clamping a tool; four tools are arranged on the tool clamp, a flat plate is arranged on the upper portion of the tool clamp, and positioning codes are arranged at two ends of the flat plate respectively.
2. The vision-based mobile robot gripping tool method of claim 1, wherein the step S1 comprises the steps of:
step S1.1: constructing and converting a coordinate system of a positioning code under a photographing position;
step S1.2: teaching a tool clamping pose;
step S1.3: and calculating the clamping pose of the tool under the coordinate system of the positioning code.
3. The vision-based mobile robotic gripper tool method of claim 2, wherein said step S1.1 comprises the steps of:
step S1.1.1: placing the external tool holder, moving the robot to a specified position, and adjusting the posture of the robot so that the tool holder is fully imaged by the RGBD camera; recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M_S;
step S1.1.2: collecting an RGBD image of the tool holder, calculating the coordinates of the center point of each of the two positioning codes, and calculating the equation of the plane in which the two-dimensional codes lie; constructing a coordinate system from the two positioning codes: the origin is the center of the right positioning code, the X axis points from this origin toward the center of the left positioning code, the Z axis is the downward-pointing plane normal, and the Y axis is determined by the right-hand rule;
step S1.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M_L2Cam; the calibration matrix from the camera to the robot end is M_Cam2End; calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M_L:
M_L = M_S * M_Cam2End * M_L2Cam
4. The vision-based mobile robotic gripper tool method of claim 2, wherein said step S1.2 comprises the steps of:
step S1.2.1: keeping the robot chassis stationary, teaching the tool clamping action, i.e. moving the mechanical arm to the pose for clamping the tool, and recording the pose transformation matrix of the arm end in this state, denoted M_claming;
step S1.2.2: there are a plurality of tool clamping poses; the nth clamping pose is recorded as M_claming(n).
5. The vision-based mobile robotic gripper tool method of claim 2, wherein said step S1.3 comprises the steps of:
step S1.3.1: because the position of the positioning code and the clamping pose of each tool are fixed relative to each other, the transformation matrix of the nth clamping pose relative to the positioning code coordinate system is denoted M_claming2L(n), giving the equation:
M_claming(n) = M_L * M_claming2L(n);
step S1.3.2: solving from this formula the transformation M_claming2L(n) of each clamping pose relative to the positioning code coordinate system:
M_claming2L(n) = M_L^(-1) * M_claming(n)
6. The vision-based mobile robot gripping tool method of claim 1, wherein the step S2 comprises the steps of:
step S2.1: positioning preparation work in real time;
step S2.2: and calculating and executing the clamping pose of the tool in real time.
7. The vision-based mobile robotic gripper tool method of claim 6, wherein said step S2.1 comprises the steps of:
step S2.1.1: when clamping a tool in real time, the mobile robot first moves to the vicinity of the tool holder, and the robot end is moved to the previously recorded pose matrix M_S;
step S2.1.2: judging whether the two positioning codes are fully imaged in the image acquired by the camera; if not, finely adjusting the mechanical arm until both positioning codes are fully imaged, and recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M'_S;
step S2.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M'_L2Cam;
step S2.1.4: calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M'_L:
M'_L = M'_S * M_Cam2End * M'_L2Cam
8. The vision-based mobile robotic gripper tool method of claim 6, wherein said step S2.2 comprises the steps of:
step S2.2.1: selecting the tool to be clamped according to the clamping command; the pose of the nth clamping is denoted M'_claming(n), giving the following formula, where M_claming2L(n) is the fixed value calculated during calibration:
M'_claming(n) = M'_L * M_claming2L(n);
step S2.2.2: calculating the pose for clamping the tool from the current position of the mobile robot; the end of the mechanical arm is moved to M'_claming(n), and the robot clamping jaw is closed, completing the robot clamping action.
9. A vision-based system for a mobile robot to grip a tool, the system comprising:
module M1: positioning and calibrating the tool;
module M2: positioning and clamping a tool; the tool clamp is provided with four tools, the upper part of the tool clamp is provided with a flat plate, and two ends of the flat plate are respectively provided with a positioning code;
module M1.1: constructing and converting a coordinate system of a positioning code under a photographing position;
module M1.2: teaching a tool clamping pose;
module M1.3: calculating the clamping pose of the tool under the positioning code coordinate system;
module M1.1.1: placing the external tool holder, moving the robot to a specified position, and adjusting the posture of the robot so that the tool holder is fully imaged by the RGBD camera; recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M_S;
module M1.1.2: collecting an RGBD image of the tool holder, calculating the coordinates of the center point of each of the two positioning codes, and calculating the equation of the plane in which the two-dimensional codes lie; constructing a coordinate system from the two positioning codes: the origin is the center of the right positioning code, the X axis points from this origin toward the center of the left positioning code, the Z axis is the downward-pointing plane normal, and the Y axis is determined by the right-hand rule;
module M1.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M_L2Cam; the calibration matrix from the camera to the robot end is M_Cam2End; calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M_L:
M_L = M_S * M_Cam2End * M_L2Cam
Module M1.2.1: keeping the robot chassis stationary, teaching the tool clamping action, i.e. moving the mechanical arm to the pose for clamping the tool, and recording the pose transformation matrix of the arm end in this state, denoted M_claming;
module M1.2.2: there are a plurality of tool clamping poses; the nth clamping pose is recorded as M_claming(n);
module M1.3.1: because the position of the positioning code and the clamping pose of each tool are fixed relative to each other, the transformation matrix of the nth clamping pose relative to the positioning code coordinate system is denoted M_claming2L(n), giving the equation:
M_claming(n) = M_L * M_claming2L(n);
module M1.3.2: solving from this formula the transformation M_claming2L(n) of each clamping pose relative to the positioning code coordinate system:
M_claming2L(n) = M_L^(-1) * M_claming(n)
10. The vision-based system of claim 9, wherein the module M2 comprises the following modules:
module M2.1: positioning preparation work in real time;
module M2.2: calculating and executing a tool clamping pose in real time;
module M2.1.1: when clamping a tool in real time, the mobile robot first moves to the vicinity of the tool holder, and the robot end is moved to the previously recorded pose matrix M_S;
module M2.1.2: judging whether the two positioning codes are fully imaged in the image acquired by the camera; if not, finely adjusting the mechanical arm until both positioning codes are fully imaged, and recording the pose matrix of the robot end in the robot coordinate system at this moment, denoted M'_S;
module M2.1.3: calculating the pose matrix of the positioning code coordinate system in the camera coordinate system at this moment, denoted M'_L2Cam;
module M2.1.4: calculating the pose matrix of the positioning code coordinate system in the robot coordinate system, denoted M'_L:
M'_L = M'_S * M_Cam2End * M'_L2Cam
module M2.2.1: selecting the tool to be clamped according to the clamping command; the pose of the nth clamping is denoted M'_claming(n), giving the following formula, where M_claming2L(n) is the fixed value calculated during calibration:
M'_claming(n) = M'_L * M_claming2L(n);
module M2.2.2: calculating the pose for clamping the tool from the current position of the mobile robot; the end of the mechanical arm is moved to M'_claming(n), and the robot clamping jaw is closed, completing the robot clamping action.
CN202111161341.3A — priority/filing date 2021-09-30 — Vision-based method and system for clamping tool by mobile robot — Active — granted as CN113771042B (en)

Priority Applications (1)

CN202111161341.3A (granted as CN113771042B) — priority date 2021-09-30 — filing date 2021-09-30 — Vision-based method and system for clamping tool by mobile robot

Applications Claiming Priority (1)

CN202111161341.3A (granted as CN113771042B) — priority date 2021-09-30 — filing date 2021-09-30 — Vision-based method and system for clamping tool by mobile robot

Publications (2)

CN113771042A (en) — 2021-12-10
CN113771042B (en) — 2023-03-24

Family

ID=78854892

Family Applications (1)

CN202111161341.3A — Active — granted as CN113771042B (en) — priority/filing date 2021-09-30 — Vision-based method and system for clamping tool by mobile robot

Country Status (1)

Country Link
CN (1) CN113771042B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10150213B1 (en) * 2016-07-27 2018-12-11 X Development Llc Guide placement by a robotic device
CN110017852A (en) * 2019-04-25 2019-07-16 广东省智能机器人研究院 A kind of navigation positioning error measurement method
CN110163912A (en) * 2019-04-29 2019-08-23 达泊(东莞)智能科技有限公司 Two dimensional code pose scaling method, apparatus and system
US20210023694A1 (en) * 2019-07-23 2021-01-28 Qingdao university of technology System and method for robot teaching based on rgb-d images and teach pendant
CN111091587A (en) * 2019-11-25 2020-05-01 武汉大学 Low-cost motion capture method based on visual markers
CN113211431A (en) * 2021-04-16 2021-08-06 中铁第一勘察设计院集团有限公司 Pose estimation method based on two-dimensional code correction robot system
CN113352345A (en) * 2021-08-09 2021-09-07 季华实验室 System, method and device for replacing quick-change device, electronic equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114986522A (en) * 2022-08-01 2022-09-02 季华实验室 Mechanical arm positioning method, mechanical arm grabbing method, electronic equipment and storage medium
CN114986522B (en) * 2022-08-01 2022-11-08 季华实验室 Mechanical arm positioning method, mechanical arm grabbing method, electronic equipment and storage medium

Also Published As

CN113771042B (en) — 2023-03-24


Legal Events

PB01 — Publication
SE01 — Entry into force of request for substantive examination
CB02 — Change of applicant information
Address after: 311231 building 3, No. 477, Hongxing Road, Qiaonan block, economic and Technological Development Zone, Xiaoshan District, Hangzhou City, Zhejiang Province
Applicant after: Hangzhou Jingwu Intelligent Technology Co.,Ltd.
Address before: Room 12, 3rd floor, No.2 Lane 1446, Yunguan Road, Lingang New District, Pudong New Area pilot Free Trade Zone, Shanghai, 201306
Applicant before: Shanghai Jingwu Intelligent Technology Co.,Ltd.
GR01 — Patent grant