CN115042184A - Robot hand-eye coordinate conversion method and device, computer equipment and storage medium
- Publication number: CN115042184A (application CN202210787924.5A)
- Authority: CN (China)
- Prior art keywords: calibration plate, end effector, robot, scanner, poses
- Legal status: Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1692—Calibration of manipulator
Abstract
The application relates to a robot hand-eye coordinate conversion method and apparatus, a computer device, a storage medium, and a computer program product. The method includes the following steps: acquiring images, through a scanner of the robot, of an end effector that is mounted on the robot and fitted with a calibration plate, to obtain calibration plate images of the end effector at a plurality of poses; when the difference values among the calibration plate images of the plurality of poses do not fall within a difference interval, controlling the end effector to move in the direction corresponding to the difference values, and acquiring images of the end effector with the scanner during this motion to obtain a target calibration plate image; determining the tool origin position of the robot according to the marker point positions corresponding to the marker points in the target calibration plate image; and calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position. With this method, the tool origin position in the robot coordinate system can be obtained accurately, and the coordinate conversion relationship between the end effector and the scanner can be calibrated accurately.
Description
Technical Field
The present application relates to the field of robotics, and in particular to a robot hand-eye coordinate conversion method and apparatus, a computer device, a storage medium, and a computer program product.
Background
With the development of artificial intelligence, robots have come into wide use across industries. In industrial applications, a robot carries a visual perception system and uses the three-dimensional information it acquires to control an end effector to perform actions such as machining and assembly. In short, the three-dimensional perception system plays the role of the human eye, the end effector plays the role of the human hand, and preset tasks are completed through hand-eye cooperation.
To ensure that the robot moves a spatial object accurately to the target position, the conversion relationship between the vision-system coordinate system and the manipulator coordinate system must be determined. Traditional methods for determining this relationship are inaccurate, and their results deviate considerably from the true value.
Disclosure of Invention
In view of the above, it is necessary to provide a robot hand-eye coordinate conversion method, apparatus, computer device, computer-readable storage medium, and computer program product that can improve calibration accuracy.
In a first aspect, the application provides a robot hand-eye coordinate conversion method. The method includes the following steps:
acquiring images, through a scanner of the robot, of an end effector that is mounted on the robot and fitted with a calibration plate, to obtain calibration plate images of the end effector at a plurality of poses;
when the difference values between the calibration plate images at the plurality of poses do not fall within a difference interval, controlling the end effector to move in the direction corresponding to the difference values, and acquiring images of the end effector with the scanner during this motion to obtain a target calibration plate image;
determining the tool origin position of the robot according to the scan value corresponding to the marker points in the target calibration plate image;
and calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position.
In one embodiment, controlling the end effector to move in the direction corresponding to the difference values and acquiring images of the end effector with the scanner during the motion to obtain the target calibration plate image includes:
generating a scan value for each of the calibration plate images of the plurality of poses based on the marker point positions in each image;
generating a difference vector based on the scan values of the calibration plate images of the poses;
controlling the motion of the end effector along the reverse direction of the difference vector;
and acquiring images of the end effector with the scanner during the motion to obtain the target calibration plate image.
In one embodiment, generating the scan values of the calibration plate images of the plurality of poses based on their respective marker point positions includes:
acquiring the marker point positions corresponding to the marker points in the calibration plate images of the plurality of poses;
and averaging the marker point positions under each pose to obtain the scan value of the calibration plate image of that pose.
In one embodiment, generating a difference vector based on the scan values of the calibration plate images of the poses includes:
computing an initial coordinate conversion relationship between the end effector and the scanner from the homogeneous transformation matrices of the poses in the tool coordinate system and the homogeneous transformation matrices of the poses in the scanner coordinate system;
and converting the scan values of calibration plate images at adjacent poses, grouped in pairs, into the robot coordinate system according to the initial coordinate conversion relationship, and generating the difference vector of each group from the converted adjacent-pose scan values.
In one embodiment, determining the tool origin position of the robot according to the scan value corresponding to the marker points in the target calibration plate image includes:
taking the scan value corresponding to the marker points in the target calibration plate image as an initial working origin position;
and converting the initial working origin position into the robot coordinate system to obtain a target tool origin position in the robot coordinate system.
Calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position then includes:
calibrating the coordinate conversion relationship between the end effector and the scanner based on the target tool origin position.
In one embodiment, calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position includes:
acquiring, for each of several different poses, the calibration plate image matching that pose based on the tool origin position under the pose;
and computing the coordinate rotation and translation conversion relationships between the end effector and the scanner based on the marker point sets corresponding to the calibration plate images of the different poses.
In one embodiment, acquiring images, through the scanner of the robot, of the end effector on which the calibration plate is mounted includes:
keeping the change of the calibration plate's position in the robot coordinate system within a preset range while controlling the end effector to move so that the calibration plate changes pose multiple times;
and acquiring an image of the end effector through the scanner of the robot at each pose change.
In a second aspect, the application further provides a robot hand-eye coordinate conversion apparatus. The apparatus includes:
an image acquisition module, configured to acquire images, through a scanner of the robot, of an end effector that is mounted on the robot and fitted with a calibration plate, to obtain calibration plate images of the end effector at a plurality of poses;
an adjusting module, configured to control the end effector to move in the direction corresponding to the difference values when the difference values between the calibration plate images of the plurality of poses do not fall within a difference interval, and to acquire images of the end effector with the scanner during the motion to obtain a target calibration plate image;
an origin calibration module, configured to determine the tool origin position of the robot according to the scan value corresponding to the marker points in the target calibration plate image;
and a conversion relationship determining module, configured to calibrate the coordinate conversion relationship between the end effector and the scanner based on the tool origin position.
In a third aspect, the present application also provides a computer device. The computer device includes a memory and a processor, the memory stores a computer program, and the processor, when executing the computer program, implements the steps of the robot hand-eye coordinate conversion method in any of the above embodiments.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the robot hand-eye coordinate conversion method in any of the above embodiments.
In a fifth aspect, the present application further provides a computer program product. The computer program product includes a computer program which, when executed by a processor, implements the steps of the robot hand-eye coordinate conversion method in any of the above embodiments.
According to the robot hand-eye coordinate conversion method, apparatus, computer device, storage medium, and computer program product, when the difference values between the calibration plate images of the plurality of poses do not fall within the difference interval, the end effector is controlled to move in the direction corresponding to the difference values, and the scanner acquires images of the end effector during this motion to obtain the target calibration plate image. The tool origin position of the robot is then determined according to the scan value corresponding to the marker points in the target calibration plate image. This replaces the traditional teach-pendant procedure of setting the robot's tool origin coordinates by needle-tip alignment: manual alignment errors are avoided, as is the limitation that tool coordinates can only be set at features recognizable by the human eye. The tool origin position in the robot coordinate system is therefore obtained more accurately, and the coordinate conversion relationship between the end effector and the scanner can in turn be calibrated more accurately.
Drawings
FIG. 1 is a diagram of an application environment of the robot hand-eye coordinate conversion method in one embodiment;
FIG. 2 is a schematic flowchart of a robot hand-eye coordinate conversion method in one embodiment;
FIG. 3 is a diagram showing the distribution of marker points on the calibration plate in one embodiment;
FIG. 4 is a schematic flowchart of a robot hand-eye coordinate conversion method in another embodiment;
FIG. 5 is a schematic flow chart of generating an image of a target calibration plate in another embodiment;
FIG. 6 is a block diagram showing the structure of a robot hand-eye coordinate transformation apparatus according to an embodiment;
FIG. 7 is a diagram of the internal structure of a computer device in one embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the application and do not limit it.
The robot hand-eye coordinate conversion method provided by the embodiments of the application can be applied in the environment shown in FIG. 1, where the terminal 102 communicates with the server 104 via a network. A data storage system may store the data that the server 104 needs to process; it may be integrated on the server 104 or located on the cloud or another network server. The terminal 102 acquires images, through a scanner of the robot, of an end effector that is mounted on the robot and fitted with a calibration plate, obtaining calibration plate images of the end effector at a plurality of poses; when the difference values between the calibration plate images of the plurality of poses do not fall within a difference interval, it controls the end effector to move in the direction corresponding to the difference values and acquires images of the end effector with the scanner during the motion to obtain a target calibration plate image; it determines the tool origin position of the robot according to the scan value corresponding to the marker points in the target calibration plate image; and it calibrates the coordinate conversion relationship between the end effector and the scanner based on the tool origin position.
The terminal 102 may be, but is not limited to, a robot, personal computer, notebook computer, smartphone, tablet computer, Internet-of-Things device, or portable wearable device. Internet-of-Things devices include smart speakers, smart televisions, smart air conditioners, smart in-vehicle devices, and the like; portable wearable devices include smart watches, smart bracelets, head-mounted devices, and the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In one embodiment, as shown in FIG. 2, a robot hand-eye coordinate conversion method is provided. Taking its application to the terminal 102 in FIG. 1 as an example, the method includes the following steps:
step 202, acquiring images of the end effector provided with the calibration plate on the robot through a scanner of the robot to obtain calibration plate images of the end effector at a plurality of poses.
The robot includes a robot body, a scanner, and an end effector. The scanner is mounted on the robot body, so the conversion relationship between the robot coordinate system and the scanner coordinate system is fixed; the scanner captures depth maps and grayscale images. The end effector is mounted on one axis of the robot body and has its own tool coordinate system. To calibrate the position of the tool origin of this coordinate system in the robot coordinate system, a calibration plate is mounted on the end effector of the manipulator controlled by the robot; marker points are distributed on the plate, with the distribution shown in FIG. 3.
In one embodiment, acquiring images of the end effector with the mounted calibration plate through the robot's scanner includes: the robot controls the end effector to move multiple times within its working range, and at each movement the scanner images the calibration plate mounted on the end effector, yielding calibration plate images at multiple poses. The larger the range of motion of the end effector within the working range, the higher the accuracy of the resulting tool origin.
In one embodiment, controlling the multiple movements of the end effector within the working range includes: the robot keeps the change of the calibration plate's position in the robot coordinate system within a preset range while controlling the end effector to move so that the plate changes pose multiple times. Correspondingly, imaging the calibration plate at each movement includes: acquiring an image of the end effector through the robot's scanner at each pose change.
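By way of illustration only, the following minimal Python sketch shows one form such an acquisition loop could take. The robot and scanner driver objects and their methods (move_to, wait_until_settled, capture) are hypothetical placeholders, not part of this application.

```python
def acquire_calibration_images(robot, scanner, poses):
    """Move the end effector through `poses` and grab one scan per pose.

    Sketch only: `poses` vary the plate's orientation widely while keeping
    its position inside the preset window, as described above.
    """
    images = []
    for pose in poses:                    # e.g. (x, y, z, rx, ry, rz) in the base frame
        robot.move_to(pose)               # hypothetical motion command
        robot.wait_until_settled()        # let vibration die out before scanning
        images.append(scanner.capture())  # one depth + grayscale frame
    return images
```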
Step 204, when the difference values between the calibration plate images of the plurality of poses do not fall within the difference interval, controlling the end effector to move in the direction corresponding to the difference values, and acquiring images of the end effector through the scanner during the motion to obtain the target calibration plate image.
The difference values between the calibration plate images of the plurality of poses are obtained by a difference calculation performed in the robot coordinate system. The calculation may operate on the positions of a preset point of each image in the robot coordinate system, and the difference values may be the differences between the scan values of the calibration plate images of the plurality of poses.
In one embodiment, when a difference value does not fall within the difference interval, the position of the preset point of each pose's calibration plate image in the robot coordinate system is computed from the marker point positions in that image; the end effector is then moved repeatedly based on the computed result, with the scanner imaging the end effector during each motion, and so on until the difference value falls within the difference interval, yielding the target calibration plate image.
The scan value of the target calibration plate image is generated from the marker point positions in the calibration plate image of each pose. When the marker point positions are a set of three-dimensional marker coordinates, the robot applies stability control to the coordinate set of a given pose, and the stabilized coordinate set of that pose is averaged to obtain the scan value.
When the calibration plate carries a single marker point, so that each pose's calibration plate image contains one marker point position, the scanner scans the end effector at each pose, and the scanned marker point position is itself the scan value of the corresponding pose's calibration plate image. The marker point positions are the coordinates, in the scanner coordinate system, of the marker points in the calibration plate images of the plurality of poses, and they are used to generate the scan value of each pose.
The scan value can serve as an input for calibrating the robot's tool origin, where it is used to compute the difference values between calibration plate images; it can also serve as an input for calibrating the relationship between the three-dimensional scanner and the robot body, where it is used to calibrate the coordinate conversion relationship between the end effector's tool coordinate system and the scanner coordinate system.
In one embodiment, controlling the end effector to move in the direction corresponding to the difference values and imaging it with the scanner during the motion to obtain the target calibration plate image includes: generating a scan value for each of the calibration plate images of the plurality of poses from the marker point positions in each image; generating a difference vector based on the scan values of the calibration plate images of the poses; controlling the motion of the end effector along the reverse direction of the difference vector; and acquiring images of the end effector with the scanner during the motion to obtain the target calibration plate image.
To further improve the accuracy of the tool origin position, the calibration plate carries a plurality of marker points distributed in a ring, so the calibration plate image of each pose contains a plurality of marker points. Correspondingly, generating the scan values of the calibration plate images of the plurality of poses from their respective marker point positions includes: acquiring the marker point positions corresponding to the marker points in the calibration plate images of the plurality of poses; and averaging the marker point positions under each pose to obtain the scan value of that pose's calibration plate image.
In one embodiment, averaging the marker point positions under each pose includes: for each of the plurality of poses, scanning the end effector at that pose with the scanner and averaging the scanned marker point positions of that pose; the averaged result for each pose is the scan value of the corresponding pose's calibration plate image.
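A minimal numpy sketch of this averaging step follows; the function names and the (N, 3) input layout are assumptions made for illustration.

```python
import numpy as np

def scan_value(marker_points):
    """Average the ring of detected 3D marker centers of one pose.

    marker_points: (N, 3) array in the scanner coordinate system.
    Returns the (3,) mean point used as that pose's scan value.
    """
    return np.asarray(marker_points, dtype=float).mean(axis=0)

def stable_scan_value(repeated_scans):
    """Average scan values over repeated scans of the same pose to
    suppress sensor noise (the stability control described above)."""
    return np.mean([scan_value(s) for s in repeated_scans], axis=0)
```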
Step 206, determining the tool origin position of the robot according to the marker point positions corresponding to the marker points in the target calibration plate image.
In one embodiment, determining the tool origin position of the robot according to the marker point positions in the target calibration plate image includes: taking the scan value corresponding to the marker points in the target calibration plate image as the initial working origin position; and converting the initial working origin position into the robot coordinate system to obtain the target tool origin position in the robot coordinate system. Correspondingly, calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position includes: calibrating the coordinate conversion relationship between the end effector and the scanner based on the target tool origin position.
The scan value corresponding to the marker point positions is the scan value determined from the marker point positions of each target calibration image. It can be understood that whenever a group of difference values falls within the difference interval, a group of target calibration images exists; from the different groups of target calibration images, the target tool origin position in the robot coordinate system is obtained using the functions of the robot teach pendant, and the coordinate conversion relationship between the end effector and the scanner is then calibrated.
Specifically, obtaining the target tool origin position in the robot coordinate system proceeds as follows: using the teach pendant's functions, the end effector is moved according to the difference values; once the positions of the scan values (the marker point averages) determined from the different groups of target calibration images coincide, that common position can be taken as the tool coordinate origin, giving the tool's coordinate position in the robot coordinate system.
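The sketch below illustrates this coincidence test, assuming a scanner-to-robot transform (R_sr, t_sr) estimated earlier; the function name and the tolerance value are illustrative assumptions, not part of the application.

```python
import numpy as np

def tool_origin(group_scan_values, R_sr, t_sr, tol=0.1):
    """Return the tool origin once the groups' scan values coincide.

    group_scan_values: one (3,) scan value per target group, scanner frame.
    R_sr, t_sr: assumed scanner-to-robot-base rotation and translation.
    tol: coincidence tolerance in robot-frame units (e.g. mm).
    """
    pts = np.array([R_sr @ np.asarray(p) + t_sr for p in group_scan_values])
    if np.ptp(pts, axis=0).max() >= tol:
        raise ValueError("scan values have not coincided yet; keep adjusting")
    return pts.mean(axis=0)  # common position = tool coordinate origin
```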
Step 208, calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position.
In one embodiment, calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin position includes: acquiring, for each of several different poses, the calibration plate image matched to that pose based on the tool origin position under the pose; and computing the coordinate rotation and translation conversion relationships between the end effector and the scanner from the marker point sets corresponding to the calibration plate images of the different poses.
Specifically, when calibrating the coordinates of the robot against the three-dimensional scanner, the angle between the tool coordinate system and the robot body coordinate system is kept unchanged during calibration. After each robot movement, the robot's XYZ position is recorded together with the three-dimensional average of the scanner's marker points. At least eight groups of data are collected, with the positions covering as much of the robot's working range as possible to reduce calibration error. Finally, the rotation and translation values between the robot and the three-dimensional scanner are obtained through PnP (Perspective-n-Point) calculation; since the tool origin position is expressed in the robot coordinate system, the corresponding tool origin position is obtained.
PnP (Perspective-n-Point) solves for the motion of 3D-to-2D point pairs: given 3D spatial points and their projected image positions, it estimates the pose of the camera.
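Since the recorded pairs in this step are both three-dimensional (robot XYZ positions and scanner marker averages), a closed-form Kabsch/SVD rigid fit is one common way to realize the rotation-and-translation calculation; the sketch below shows that 3D-3D variant rather than the 3D-to-2D PnP solver named above, and all names are illustrative.

```python
import numpy as np

def fit_rigid_transform(scanner_pts, robot_pts):
    """Kabsch/SVD fit of R, t such that robot_pts ~ R @ scanner_pts + t.

    scanner_pts, robot_pts: matched (N, 3) arrays, N >= 8 groups as above.
    Returns the rotation value R (3x3, det = +1) and translation value t (3,).
    """
    P = np.asarray(scanner_pts, float)
    Q = np.asarray(robot_pts, float)
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)   # center both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)               # 3x3 cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # reflection-safe rotation
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```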
In this robot hand-eye coordinate conversion method, when the difference values between the calibration plate images of the plurality of poses do not fall within the difference interval, the end effector is controlled to move in the direction corresponding to the difference values, and the scanner images the end effector during the motion to obtain the target calibration plate image. The tool origin position of the robot is determined from the marker point positions corresponding to the marker points in the target calibration plate image. This replaces the traditional teach-pendant procedure of setting the tool origin coordinates by needle-tip alignment: manual alignment errors are avoided, as is the limitation that tool coordinates can only be set at features recognizable by the human eye. The tool origin position in the robot coordinate system is therefore obtained more accurately, and the coordinate conversion relationship between the end effector and the scanner can in turn be calibrated accurately.
Moreover, because the tool origin position is determined from the marker point positions in the target calibration plate image, the solving process does not involve the RT relationship between the tool coordinate origin and the target coordinate system origin, and the coordinate conversion relationship between the end effector and the scanner can be calibrated directly without the Tsai-Lenz algorithm. Compared with the classical Tsai-Lenz approach, this embodiment omits the AX = XB matrix computation, whose inputs are pose-to-pose increments and therefore not very intuitive; simulation shows that adding 0.1 mm of noise to those inputs produces roughly 1 mm of error, the noise being introduced by the three-dimensional scanner or the manipulator. The present method can instead enlarge the calibration working range directly, spreading a fixed error over a larger interval.
In one embodiment, to clarify how the target tool origin position is generated, a more specific embodiment is described. The method includes:
Step 402, acquiring images, through the scanner of the robot, of the end effector fitted with the calibration plate to obtain calibration plate images of the end effector at a plurality of poses; combining the calibration plate images of the plurality of poses into a first group according to the acquisition order of the poses (for example, two poses with adjacent acquisition orders form a first group); and judging whether the difference value of the first group's scan values falls within the difference interval.
Step 404, when the difference value of the first group's scan values does not fall within the difference interval, controlling the end effector to move in the direction opposite to the difference value and imaging the end effector with the scanner during the motion, repeating the adjustment until the difference value of the first group's scan values falls within the difference interval, and obtaining the first group of target calibration plate images.
Step 406, determining the first group's tool origin position from the marker point positions corresponding to the marker points in the first group of target calibration plate images; obtaining the second group's tool origin position by repeating steps 402 to 406; continuing until several groups of tool origin positions (for example four or six groups) are obtained; and calibrating the tool origin position of each group through the teach pendant function of the robot.
Step 408, calibrating the coordinate conversion relationship between the end effector and the scanner based on the tool origin positions.
Steps 402 to 408 thus show more clearly how the tool origin position is determined from several groups of paired poses.
In one embodiment, as shown in FIG. 5, any of the above embodiments is further refined so that hand-eye calibration can be performed more quickly. Correspondingly, controlling the end effector to move opposite to the difference value and imaging the end effector with the scanner during the motion includes the following steps:
Step 502, when the difference values between the calibration plate images of the plurality of poses do not fall within the difference interval, generating a scan value for each of the calibration plate images of the plurality of poses from the marker point positions in each image.
Step 504, computing an initial coordinate conversion relationship between the end effector and the scanner from the homogeneous transformation matrices of the poses in the tool coordinate system and the homogeneous transformation matrices of the poses in the scanner coordinate system.
The homogeneous transformation matrix of each pose in the tool coordinate system represents the conversion relationship between different poses in the tool coordinate system when the position between the calibration plate and the robot body is unchanged; the homogeneous transformation matrix of each pose in the scanner coordinate system represents the conversion relationship between different poses in the scanner coordinate system under the same condition.
It can be understood that even an initial coordinate conversion relationship with a large error provides a useful reference: it reduces the number of iterations needed to generate the difference vectors, and in turn the number of iterations converting the scan values of adjacent-pose calibration plate images, so the difference values fall within the difference interval more easily and the difference vectors are generated faster. Moreover, since the initial coordinate conversion relationship only mediates the conversion between the tool coordinate system and the robot coordinate system, and this step covers many poses, the difference vectors of the poses compensate for any inaccuracy in the initial relationship.
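As an illustration of these relative homogeneous transformations, the sketch below packs a pose (R, t) into a 4x4 matrix and forms the pose-i-to-pose-j relative matrix; the function names and the frame convention chosen here are assumptions for illustration.

```python
import numpy as np

def homogeneous(R, t):
    """Pack a 3x3 rotation and 3-vector translation into a 4x4 matrix."""
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = np.asarray(t).ravel()
    return H

def relative_transform(H_i, H_j):
    """Conversion between pose i and pose j expressed in one frame,
    e.g. the tool-frame H_gij or scanner-frame H_cij used below."""
    return np.linalg.inv(H_j) @ H_i
```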
In one embodiment, the initial coordinate conversion relationship is computed with the Tsai-Lenz algorithm. With the position between the calibration plate and the robot body unchanged, the initial coordinate conversion relationship satisfies:
H_gij H_gc = H_gc H_cij
where H_gc is the initial coordinate conversion relationship between the tool coordinate system and the scanner coordinate system; H_gij is the homogeneous transformation matrix representing the conversion between the tool coordinate systems of poses i and j; and H_cij is the homogeneous transformation matrix representing the conversion between the scanner coordinate systems of poses i and j.
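OpenCV's hand-eye solver implements the Tsai-Lenz method behind this AX = XB relation, so an initial estimate could be obtained as sketched below. The function and constant (cv2.calibrateHandEye, cv2.CALIB_HAND_EYE_TSAI) exist in OpenCV; the pose-list arguments and the eye-to-hand inversion reflect one reading of this setup (scanner fixed to the robot body, plate on the tool) and are assumptions.

```python
import cv2
import numpy as np

def initial_hand_eye(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Initial tool/scanner relation via Tsai-Lenz (sketch only).

    R_gripper2base, t_gripper2base: per-pose tool poses from the controller.
    R_target2cam, t_target2cam: per-pose plate poses seen by the scanner.
    For the eye-to-hand setup assumed here, tool poses are inverted before
    calling the solver, per the usual OpenCV convention.
    """
    R_base2gripper = [np.asarray(R).T for R in R_gripper2base]
    t_base2gripper = [-np.asarray(R).T @ np.asarray(t)
                      for R, t in zip(R_gripper2base, t_gripper2base)]
    return cv2.calibrateHandEye(
        R_base2gripper, t_base2gripper,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,  # Tsai-Lenz
    )
```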
Step 506, converting the scan values of calibration plate images at adjacent poses, grouped in pairs, into the robot coordinate system according to the initial coordinate conversion relationship, and generating the difference vector of each group from the converted adjacent-pose scan values.
A group's adjacent-pose scan values may be the scan values of at least two adjacent poses. Using several such groups allows the tool origin position to be fixed more accurately with the teach pendant function of the robot.
In one embodiment, step 506 includes: converting the first and second scan values of each group's adjacent-pose calibration plate images into the robot coordinate system in turn according to the initial coordinate conversion relationship, and computing, for each group, the difference vector between its first and second scan values in the robot coordinate system.
Step 508, controlling the end effector to move repeatedly along the reverse direction of each group's difference vector, until the difference vector of each group falls within the difference interval.
It can be understood that, after a group's first scan value is obtained, if the difference between the group's second scan value and its first scan value is too large, the end effector is moved once along the reverse of the current difference vector, producing a third scan value that replaces the second; if the difference between the third and first scan values is still too large, the effector is again moved against the difference between them, and so on, until repeated motions reduce the group's difference vector into the difference interval.
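A minimal sketch of this correction loop follows, reusing the scan_value helper sketched earlier and an assumed initial conversion (R_init, t_init) from the Tsai-Lenz step; robot.jog_by, the tolerance, and the iteration cap are hypothetical.

```python
import numpy as np

def converge_to(robot, scanner, p_ref, R_init, t_init, tol=0.05, max_iter=20):
    """Jog the effector until its scan value coincides with p_ref.

    p_ref: reference (first) scan value, already in the robot frame.
    R_init, t_init: initial scanner-to-robot conversion relationship.
    """
    for _ in range(max_iter):
        p_cur = R_init @ scan_value(scanner.capture()) + t_init
        diff = p_cur - p_ref              # difference vector, robot frame
        if np.linalg.norm(diff) < tol:    # fell within the difference interval
            return p_cur                  # scan value of the target image
        robot.jog_by(-diff)               # move along the reverse direction
    raise RuntimeError("difference vector did not fall within the interval")
```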
Step 510, acquiring an image of the end effector with the scanner during the motion to obtain the target calibration plate image.
In this embodiment, the initial coordinate conversion relationship between the end effector and the scanner is computed from the homogeneous transformation matrices of the poses in the tool coordinate system and in the scanner coordinate system. Although this initial relationship cannot by itself calibrate the coordinate conversion between end effector and scanner accurately, it provides a rough correction direction, reducing the number of loop iterations needed to compute the difference vectors and improving calibration speed without sacrificing precision.
Moreover, although the initial coordinate conversion relationship is obtained with the Tsai-Lenz algorithm, it is used only to assist in calibrating the tool origin; controlling the end effector to move repeatedly against each group's difference vector until the vector falls within the difference interval eliminates the error introduced by the Tsai-Lenz algorithm.
It should be understood that, although the steps in the flowcharts of the above embodiments are displayed in the order indicated by the arrows, they are not necessarily executed in that order; unless explicitly stated otherwise, no strict ordering is imposed, and the steps may be executed in other orders. Moreover, at least some of the steps in these flowcharts may comprise sub-steps or stages that are not necessarily executed at the same moment but may be executed at different times, and their execution order is not necessarily sequential: they may be executed in turn or alternately with other steps or with sub-steps or stages of other steps.
Based on the same inventive concept, an embodiment of the application also provides a robot hand-eye coordinate conversion apparatus implementing the robot hand-eye coordinate conversion method above. The implementation is similar to that described for the method, so for specific limitations of the apparatus embodiments below, reference may be made to the limitations of the method above, which are not repeated here.
In one embodiment, as shown in fig. 6, there is provided a robot hand-eye coordinate transformation apparatus including: an image acquisition module 602, an adjustment module 604, an origin calibration module 606, and a conversion relation determination module 608, wherein:
an image acquisition module 602, configured to acquire, by a scanner of a robot, an image of an end effector on the robot, where a calibration plate is installed, to obtain calibration plate images of the end effector at multiple poses;
the adjusting module 604 is configured to control the end effector to operate in a direction corresponding to a difference value when the difference value between the calibration plate images of the multiple poses does not fall into a difference value interval, and perform image acquisition on the end effector by using the scanner during operation to obtain a target calibration plate image;
an origin calibration module 606, configured to determine a tool origin position of the robot according to a scan value corresponding to a marker point in the target calibration plate image;
a transformation relation determining module 608, configured to calibrate a coordinate transformation relation between the end effector and the scanner based on the tool origin position.
In one embodiment, the image capturing module 602 includes:
the pose change unit is used for keeping the change of the position of the calibration plate in the robot coordinate system within a preset range and controlling the end effector to move so as to enable the calibration plate to change the pose for multiple times;
and the image acquisition unit is used for acquiring images of the end effector with each pose change through a scanner of the robot.
In one embodiment, the adjusting module 604 includes:
a scan value generation unit, configured to generate a scan value for each of the calibration plate images of the plurality of poses based on the marker point positions in each image;
a difference vector calculation unit, configured to generate a difference vector based on the scan values of the calibration plate images of the poses;
an adjusting unit, configured to control the motion of the end effector along the reverse direction of the difference vector;
and a target calibration plate image determining unit, configured to acquire images of the end effector with the scanner during the motion to obtain the target calibration plate image.
In one embodiment, the scan value generation unit includes:
a marker point position acquiring subunit, configured to acquire the marker point positions corresponding to the marker points in the calibration plate images of the plurality of poses;
and a scan value calculation subunit, configured to average the marker point positions under each pose to obtain the scan value of each pose's calibration plate image.
In one embodiment, the difference vector calculation unit includes:
an initial calibration subunit, configured to compute an initial coordinate conversion relationship between the end effector and the scanner from the homogeneous transformation matrices of the poses in the tool coordinate system and in the scanner coordinate system;
and a loop calculation subunit, configured to convert the scan values of calibration plate images at adjacent poses, grouped in pairs, into the robot coordinate system according to the initial coordinate conversion relationship, and to generate the difference vector of each group from the converted adjacent-pose scan values.
In one embodiment, the origin calibration module 606 includes:
an initial working origin determining unit, configured to use a scanning value corresponding to a mark point in the target calibration plate image as an initial working origin position;
the tool origin position calculating unit is used for converting the initial working origin position into a robot coordinate system to obtain a target tool origin position in the robot coordinate system;
correspondingly, the conversion relation determining module 608 is configured to calibrate a coordinate conversion relation between the end effector and the scanner based on the target tool origin position.
In one embodiment, the conversion relation determining module 608 includes:
the calibration plate image matching unit is used for acquiring calibration plate images matched with poses based on the tool origin positions under different poses respectively;
and the conversion relation calibration unit is used for calculating a coordinate rotation conversion relation and a translation conversion relation between the end effector and the scanner based on the marker point sets corresponding to the calibration plate images with different poses.
Each module in the robot hand-eye coordinate conversion apparatus above may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in or independent of a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure is shown in FIG. 7. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input apparatus. The processor, the memory, and the input/output interface are connected by a system bus, and the communication interface, the display unit, and the input apparatus are connected to the system bus through the input/output interface. The processor provides computing and control capability. The memory includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running them. The input/output interface exchanges information between the processor and external devices. The communication interface performs wired or wireless communication with external terminals, the wireless communication being realizable through WIFI, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a robot hand-eye coordinate conversion method. The display unit forms a visually perceivable picture and may be a display screen, a projection apparatus, or a virtual-reality imaging apparatus; the display screen may be a liquid crystal or electronic-ink display; and the input apparatus may be a touch layer covering the display screen, a key, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art will appreciate that the architecture shown in FIG. 7 is merely a block diagram of part of the structure relevant to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In an embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It should be noted that the user information (including but not limited to user device information and personal information) and data (including but not limited to data used for analysis, stored data, and displayed data) referred to in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, database, or other media used in the embodiments provided herein may include at least one of non-volatile and volatile memory. Non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. Volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases; non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, data processing logic based on quantum computing, and the like, without limitation.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of these technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.
Claims (10)
1. A robot hand-eye coordinate transformation method, characterized in that the method comprises:
acquiring images of an end effector which is provided with a calibration plate and arranged on a robot through a scanner of the robot to obtain calibration plate images of the end effector at a plurality of poses;
when the difference values between the calibration plate images of the plurality of poses do not fall into a difference value interval, controlling the end effector to move in the direction corresponding to the difference values, and acquiring images of the end effector through the scanner during the motion to obtain a target calibration plate image;
determining the tool origin position of the robot according to the scanning value corresponding to the mark point in the target calibration plate image;
and calibrating the coordinate conversion relation between the end effector and the scanner based on the tool origin position.
2. The method according to claim 1, wherein controlling the end effector to move in the direction corresponding to the difference values and acquiring an image of the end effector through the scanner during the motion to obtain the target calibration plate image comprises:
generating respective scanning values of the calibration plate images of the plurality of poses based on respective mark point positions of the calibration plate images of the plurality of poses respectively;
generating a difference vector based on the scanning values of the calibration plate images of the poses;
controlling the operation of the end effector according to the reverse direction of the difference vector;
and in the operation process, the scanner is used for carrying out image acquisition on the end effector to obtain an image of the target calibration plate.
3. The method according to claim 2, wherein generating the scanning values of the calibration plate images of the plurality of poses based on their respective marker point positions comprises:
acquiring the positions of the mark points corresponding to the mark points in the calibration plate images of the plurality of poses;
and averaging the positions of the mark points under each pose respectively to obtain respective scanning values of the calibration plate images of each pose.
4. The method of claim 2, wherein generating a difference vector based on the scanning values of the calibration plate images of each pose comprises:
calculating based on the homogeneous transformation matrix of each pose in a tool coordinate system and the homogeneous transformation matrix of each pose in a scanner coordinate system to obtain an initial coordinate transformation relation between the end effector and the scanner;
and converting the scanning values of calibration plate images at adjacent poses, grouped in pairs, into the robot coordinate system according to the initial coordinate conversion relation, and generating the difference vector of each group based on the converted adjacent-pose scanning values.
5. The method of claim 1, wherein determining the tool origin position of the robot according to the scanning values corresponding to the marker points in the target calibration plate image comprises:
taking a scanning value corresponding to a mark point in the target calibration plate image as an initial working origin position;
converting the initial working origin position into a robot coordinate system to obtain a target tool origin position in the robot coordinate system;
the calibrating a coordinate transformation relationship between the end effector and the scanner based on the tool origin position comprises:
and calibrating the coordinate conversion relation between the end effector and the scanner based on the origin position of the target tool.
6. The method of claim 1, wherein calibrating the coordinate transformation relationship between the end effector and the scanner based on the tool origin position comprises:
acquiring calibration plate images matched with poses based on the tool origin positions under different poses respectively;
and calculating a coordinate rotation conversion relation and a translation conversion relation between the end effector and the scanner based on the marker point sets corresponding to the calibration plate images with different poses.
7. The method of claim 1, wherein said image capturing, by a scanner of the robot, an end effector on the robot to which the calibration plate is mounted comprises:
keeping the position change of the calibration plate in the robot coordinate system within a preset range, and controlling the end effector to move so as to change the pose of the calibration plate for multiple times;
and acquiring images of the end effector with each pose change through a scanner of the robot.
8. A robot hand-eye coordinate conversion apparatus, characterized by comprising:
the image acquisition module is used for acquiring images of an end effector which is arranged on the robot and is provided with a calibration plate through a scanner of the robot to obtain calibration plate images of the end effector at a plurality of poses;
the adjusting module is used for controlling the end effector to operate towards the direction corresponding to the difference value when the difference value between the calibration plate images of the plurality of poses does not fall into the difference value interval, and acquiring an image of the end effector by the scanner in the operation process to obtain a target calibration plate image;
the origin calibration module is used for determining the tool origin position of the robot according to the mark point position corresponding to the mark point in the target calibration plate image;
and the conversion relation determining module is used for calibrating the coordinate conversion relation between the end effector and the scanner based on the tool origin position.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202210787924.5A | 2022-07-06 | 2022-07-06 | Robot hand-eye coordinate conversion method and device, computer equipment and storage medium
Publications (1)

Publication Number | Publication Date
---|---
CN115042184A | 2022-09-13
Family
ID=83164770
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202210787924.5A | Robot hand-eye coordinate conversion method and device, computer equipment and storage medium | 2022-07-06 | 2022-07-06

Country Status (1)

Country | Link
---|---
CN | CN115042184A
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115249267A (en) * | 2022-09-22 | 2022-10-28 | 海克斯康制造智能技术(青岛)有限公司 | Automatic detection method and device based on turntable and robot position and attitude calculation |
CN117817671A (en) * | 2024-02-21 | 2024-04-05 | 北京迁移科技有限公司 | Robot system based on visual guidance and robot system calibration method |
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination