Disclosure of Invention
An object of embodiments of the present invention is to provide a method for determining a joint positioning error of a robot, and a storage medium, which can accurately determine a joint positioning error and reduce the cost for determining a joint positioning error.
To solve the above technical problem, an embodiment of the present invention provides a method for determining a joint positioning error of a robot, the method being applied to the robot, the robot including joints and an end effector, the method including: controlling each joint of the robot to move according to the target pose of the end effector so as to drive the end effector to move; after each joint of the robot finishes moving, acquiring image data of a first positioning tag, wherein the first positioning tag is arranged on the surface of the end effector or the surface of the joint; determining the actual pose of the end effector according to the image data of the first positioning tag; and determining the joint positioning error of the robot according to the actual pose and the target pose.
Embodiments of the present invention also provide a robot, including: a joint whose surface is provided with a first positioning tag, or an end effector provided with a first positioning tag; at least one processor, connected to the drivers of the respective joints so as to control the movement of the joints; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the above method of determining a joint positioning error of a robot.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method of determining joint positioning errors of a robot.
Compared with the prior art, in embodiments of the present invention a first positioning tag is arranged on a joint surface of the robot or on the surface of the end effector; the robot determines the actual pose of the end effector by acquiring image data of the first positioning tag, and then determines the joint positioning error according to the actual pose and the target pose. Because the joint positioning error is obtained from image data of a positioning tag rather than from expensive additional hardware, it can be determined accurately and at low cost.
Additionally, determining the actual pose of the end effector from the image data of the first positioning tag comprises: determining a first pose of the first positioning tag according to the image data of the first positioning tag; and determining the actual pose of the end effector according to the first pose and a prestored positional relation between the first positioning tag and the end effector. The first pose of the first positioning tag is obtained directly from its image data, which is a simple positioning method; from the first pose and the positional relation between the first positioning tag and the end effector, the actual pose of the end effector can then be determined quickly and accurately, in a manner that is simple and low in cost.
In addition, the first positioning tag comprises a plurality of pieces of positioning feature information; determining the first pose of the first positioning tag according to the image data of the first positioning tag comprises: identifying the positioning feature information in the image data, and determining two-dimensional coordinate information of the positioning feature information in the image data; constructing three-dimensional coordinate information of the positioning feature information according to the size information of the first positioning tag and the two-dimensional coordinate information; and determining the first pose of the first positioning tag according to the three-dimensional coordinate information, the two-dimensional coordinate information, and the projection relation between the three-dimensional coordinate system and the two-dimensional coordinate system. Because the first positioning tag comprises a plurality of pieces of positioning feature information, it can be identified conveniently and accurately, and by locating the positioning feature information, the position and attitude of the first positioning tag can be determined quickly and accurately.
In addition, a first positioning tag is arranged on the surface of the joint, and a second positioning tag is arranged on the surface of the end effector; before controlling the movement of the joints of the robot according to the acquired target pose of the end effector, the method further comprises: collecting a calibration image, wherein the calibration image comprises the first positioning tag and the second positioning tag; respectively determining a first calibration pose of the first positioning tag and a second calibration pose of the second positioning tag according to the calibration image; and calibrating the positional relation between the first positioning tag and the second positioning tag according to the first calibration pose and the second calibration pose, and taking the calibrated positional relation as the positional relation between the first positioning tag and the end effector. The positional relation between the first positioning tag and the end effector is determined before the robot is controlled to move, which ensures the accuracy of the positional relation and further improves the accuracy of the subsequently determined joint positioning error.
In addition, after determining the joint positioning error of the robot according to the actual pose and the target pose, the method further comprises: adjusting the target pose according to the joint positioning error; and controlling the movement of each joint of the robot again according to the re-determined target pose. Adjusting the target pose according to the joint positioning error improves the accuracy of the robot's motion.
In addition, there are a plurality of first positioning tags, and the plurality of first positioning tags are arranged around the joint. Arranging a plurality of first positioning tags around the joint avoids the problem of the first positioning tag being occluded while the robot moves.
In addition, the first positioning tag includes: two-dimensional codes and/or bar codes.
In addition, the joint provided with the first positioning tag is the end joint. The end joint and the end effector belong to the same rigid body, and no other joint lies between them, so errors of other joints are excluded, which further improves the accuracy of the determined joint positioning error.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. However, it will be appreciated by those of ordinary skill in the art that numerous technical details are set forth in order to provide a better understanding of the present application in various embodiments of the present invention. However, the technical solution claimed in the present application can be implemented without these technical details and various changes and modifications based on the following embodiments.
The inventor has found that during fine operations of a robot, because the movement of each joint has an error, after the movements of a plurality of joints are superposed the actual pose of the robot's end effector differs greatly from the target pose. At present, a high-precision joint code disc is usually installed to reduce the error of each joint; however, the error between the target pose and the actual pose gradually increases as the number of robot movements grows, and a high-precision code disc is very expensive, so the cost of the robot is high. Another approach is to determine in advance, with additional equipment, the joint positioning error between the actual pose of the end effector and the target pose, and to compensate the target pose with this error when the target pose is determined. However, because the joint positioning error is determined in advance by additional equipment, and the error changes over time, the end effector of the robot still cannot reach the target pose, which affects the fine operation of the robot.
A first embodiment of the invention relates to a method of determining a joint positioning error of a robot. The method is applied to a robot, and the robot comprises joints and an end effector. The specific flow of the method is shown in fig. 1:
step 101: and controlling each joint of the robot to move according to the target pose of the end effector so as to drive the end effector to move.
Specifically, the robot comprises a plurality of joints and an end effector arranged on the end joint. The end joint may be a wrist joint, an ankle joint, or the like of the robot, and the end effector may be a palm, a sole, or the like of the robot. The target pose may include a target position and attitude of the end effector. The target pose can be determined from the information of the target position and the specified operation; for example, if the robot is required to grab a cola on the table, the target pose can be determined from the position information of the cola and the grabbing posture.
According to the target pose, the motion parameters of each joint of the robot can be determined, so that the movement of each joint can be controlled according to the determined motion parameters. For example, denote the target pose RT_goal; the position to which the robot hand needs to move is calculated from the target pose, the linkage parameters of each joint are planned according to that position, and the rotation of each joint is driven by controlling the code disc parameters of the joint, thereby driving the palm connected to the end joint to move.
Step 102: after each joint of the robot finishes moving, image data of a first positioning label is obtained, and the first positioning label is arranged on the surface of the end effector or the surface of the joint.
Specifically, the first positioning tag may be an image containing positioning feature information, for example, a two-dimensional code, a barcode, or the like.
In this embodiment, the first positioning tag is described by taking a two-dimensional code as an example. The two-dimensional code comprises four vertexes, each vertex comprises identification information used for identification, and the identification information of the four vertexes can be used as the positioning feature information of the first positioning label.
The first positioning tag may be provided on a surface of an end effector or a surface of a joint of the robot. For example, the two-dimensional code is provided on the palm surface or the back surface of the robot hand, the two-dimensional code is provided on the instep surface, or the like. The first positioning label may also be printed directly on the surface of the end effector or the joint surface, or the image printed with the first positioning label may be affixed to the surface of the end effector or the joint surface. In addition, the first positioning label can be embedded on the robot end effector or the joint, a transparent layer made of transparent materials is covered on the surface of the first positioning label, so that image data of the first positioning label can be acquired, and the covered transparent surface can protect the first positioning label from being damaged. The joint provided with the first positioning tag can be a terminal joint or other joints.
The image data of the first positioning tag can be acquired by an acquisition device on the robot. The acquisition device may be a camera; it may be arranged at the position of the robot's eyes, or at another position.
Step 103: and determining the actual pose of the end effector according to the image data of the first positioning label.
In one example, the process of determining the actual pose of the end effector may employ sub-steps as shown in fig. 2.
Substep S11: and determining a first pose of the first positioning label according to the image data of the first positioning label.
Specifically, the first positioning tag includes a plurality of pieces of positioning feature information so that it can be accurately identified. If the first positioning tag is a two-dimensional code, the information contained in the four vertexes of the two-dimensional code is the positioning feature information.
In one example, the process of determining the first pose of the first positioning tag may be: identifying each positioning characteristic information in the image data, and determining two-dimensional coordinate information of each positioning characteristic information in the image data; according to the size information and the two-dimensional coordinate information of the first positioning label, three-dimensional coordinate information of each piece of positioning feature information is constructed; and determining the first pose of the first positioning label according to the three-dimensional coordinate information, the two-dimensional coordinate information and the projection relation between the three-dimensional coordinate system and the two-dimensional coordinate system.
The following describes the process of determining the first pose, taking as an example a first positioning tag that is a two-dimensional code arranged at the palm center of the robot:
the method comprises the steps of identifying the positioning characteristic information of the two-dimensional code, calculating the positions of four vertexes of the two-dimensional code, rapidly identifying the positioning characteristic information by adopting a visual reference library AprilTag, calculating the position information of the positioning characteristic information relative to image data, obtaining coordinates of the four vertexes of the two-dimensional code due to the fact that the two-dimensional code has the four vertexes, and recording the coordinates of the four vertexes as: p2D[4](ii) a The two-dimensional code is arranged in the palm of the robot, the shape of the shot two-dimensional code is unchanged, and the side length of the two-dimensional code is L; according to the size information of the two-dimensional code, determining the 3D coordinate P of four vertexes of the two-dimensional code3D[4]The 3D coordinates of the two-dimensional code are as shown in equations (1) to (4):
P3D[0] = Point3d(-0.5L, -0.5L, 0)   formula (1);
P3D[1] = Point3d(0.5L, -0.5L, 0)   formula (2);
P3D[2] = Point3d(0.5L, 0.5L, 0)   formula (3);
P3D[3] = Point3d(-0.5L, 0.5L, 0)   formula (4);
After the 3D coordinates of the four vertexes of the two-dimensional code are determined, the first pose RT is solved according to the projection relation, which can be expressed as formula (5):

Zc · [u, v, 1]^T = K · RT · [X, Y, Z, 1]^T,   K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]   formula (5);

wherein, in formula (5), Zc is a constant, fx, fy, cx and cy are internal parameters of the acquisition device (for example, a camera), (u, v) and (X, Y, Z) are the two-dimensional and three-dimensional coordinates of a vertex, and RT is the transformation matrix of the two-dimensional code in the camera coordinate system, that is, the first pose of the two-dimensional code.
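For a flat tag, all four vertexes lie in the z = 0 plane, so the projection relation can be solved for RT by planar pose estimation. The sketch below is an illustration only (not the patent's implementation; the function names are hypothetical): it estimates the plane-to-image homography by DLT and decomposes it into a rotation and translation using the camera intrinsics.

```python
import numpy as np

def tag_corners_3d(L):
    # 3D vertex coordinates of a flat tag of side length L, as in formulas (1)-(4)
    return np.array([[-0.5 * L, -0.5 * L, 0.0],
                     [ 0.5 * L, -0.5 * L, 0.0],
                     [ 0.5 * L,  0.5 * L, 0.0],
                     [-0.5 * L,  0.5 * L, 0.0]])

def planar_pose(p2d, p3d, K):
    """Solve the projection relation of formula (5) for RT when all 3D points
    lie in the z=0 plane: estimate the homography by DLT, then decompose
    H = K [r1 r2 t] into a rotation R and translation t."""
    A = []
    for (X, Y, _), (u, v) in zip(p3d, p2d):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    B = np.linalg.inv(K) @ H              # proportional to [r1 r2 t]
    lam = 1.0 / np.linalg.norm(B[:, 0])   # fix the unknown scale
    if B[2, 2] < 0:                       # the tag must lie in front of the camera
        lam = -lam
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt2 = np.linalg.svd(R)          # project onto the nearest rotation matrix
    R = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt2)]) @ Vt2
    return R, t
```

With noisy detections, a least-squares PnP solver (for example, OpenCV's solvePnP) would typically be preferred; the DLT sketch above is exact only for ideal correspondences.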
Substep S12: and determining the actual pose of the end effector according to the first pose and the prestored position relation between the first positioning label and the end effector.
Specifically, when the first positioning tag is disposed on the end effector, the positional relation between the first positioning tag and the end effector may be set so that the pose of the first positioning tag is equal to the pose of the end effector. On this basis, according to the first pose and the prestored positional relation, the actual pose of the end effector is determined to be the first pose.
Step 104: and determining joint positioning errors of the robot according to the actual pose and the target pose.
Specifically, the deviation between the actual pose and the target pose is calculated, and this deviation is taken as the joint positioning error. It should be noted that, when calculating the joint positioning error, the coordinate systems of the actual pose and the target pose need to be unified. Since the target pose is expressed in the coordinate system of the robot, the actual pose can be converted from the coordinate system of the acquisition device to the coordinate system of the robot; and since the position of the acquisition device on the robot is fixed, the conversion relation is fixed. For example, if H is the conversion relation between the camera coordinate system and the robot coordinate system, the converted actual pose is expressed as RT_real = H · RT, where RT is the actual pose before the coordinate-system conversion.
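As one possible way to quantify the deviation (a sketch; the patent does not prescribe a specific metric), the converted actual pose RT_real = H · RT can be compared with RT_goal by splitting the relative transform into a translation distance and a rotation angle:

```python
import numpy as np

def joint_positioning_error(RT_goal, RT, H):
    """Unify coordinate systems (RT_real = H @ RT), then measure the deviation
    between target and actual pose. All poses are 4x4 homogeneous matrices."""
    RT_real = H @ RT                       # camera frame -> robot frame
    E = np.linalg.inv(RT_goal) @ RT_real   # relative transform, target to actual
    trans_err = np.linalg.norm(E[:3, 3])   # translation deviation
    cos_angle = (np.trace(E[:3, :3]) - 1.0) / 2.0
    rot_err = np.arccos(np.clip(cos_angle, -1.0, 1.0))  # rotation deviation, radians
    return trans_err, rot_err
```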
It is worth mentioning that a plurality of first positioning tags can be arranged, so that the first positioning tag is not occluded while the robot moves.
Compared with the prior art, in this embodiment a first positioning tag is arranged on a joint surface of the robot or on the surface of the end effector; the robot determines the actual pose of the end effector by acquiring image data of the first positioning tag, and then determines the joint positioning error according to the actual pose and the target pose. Because the joint positioning error is obtained from image data of a positioning tag rather than from expensive additional hardware, it can be determined accurately and at low cost.
A second embodiment of the invention relates to a method of determining a joint positioning error of a robot. The second embodiment is substantially the same as the first embodiment, and mainly differs from it in that, in the second embodiment, after step 104 of determining the joint positioning error of the robot according to the actual pose and the target pose, the method further comprises: adjusting the target pose according to the joint positioning error; and controlling the movement of each joint of the robot again according to the re-determined target pose. The specific flow of the method is shown in fig. 3.
Step 201: and controlling each joint of the robot to move according to the target pose of the end effector so as to drive the end effector to move.
Step 202: after each joint of the robot finishes moving, image data of a first positioning label is obtained, and the first positioning label is arranged on the surface of the end effector or the surface of the joint.
Step 203: and determining the actual pose of the end effector according to the image data of the first positioning label.
In one example, the first positioning tag is disposed on the surface of a joint, which may be any joint. There may be a plurality of first positioning tags disposed around the joint, in the manner shown in fig. 4, where A1 to A3 are the first positioning tags and B is the joint.
It is worth mentioning that arranging the first positioning tag on the joint can effectively avoid occlusion. For example, while the robot hand moves a held object to a designated position, the hand is holding the object, so a first positioning tag arranged on the palm would be occluded.
The following describes the process of determining the first pose, taking as an example first positioning tags that are two-dimensional codes, a plurality of which are arranged around the end joint of the robot:
the method comprises the steps of identifying the positioning characteristic information of the two-dimensional code, calculating the positions of four vertexes of the two-dimensional code, rapidly identifying the positioning characteristic information by adopting a visual reference library AprilTag, calculating the position information of the positioning characteristic information relative to image data, obtaining coordinates of the four vertexes of the two-dimensional code due to the fact that the two-dimensional code has the four vertexes, and recording the coordinates of the four vertexes as: p2D[4](ii) a The two-dimensional code is arranged on a joint at the tail end of the robot, the two-dimensional code is arranged in a mode shown in fig. 4, the shot two-dimensional code is in a circular arc shape, namely a plane square when the two-dimensional code in the real world is no longer in time, as shown in fig. 5, fig. 5 is an annular top view formed by a plurality of two-dimensional codes, the side length of a single two-dimensional code is L, and the annular radius formed by the two-dimensional codes is R; the transverse length of the two-dimensional code is the chord length instead of the arc length, the chord length xL is calculated, as shown in formula (6),
determining the 3D of four vertexes of the two-dimensional code according to the determined chord length xL and the side length L of the two-dimensional codeCoordinate P3D[4]The 3D coordinates of the two-dimensional code are as shown in equations (7) to (10):
P3D[0] = Point3d(-0.5 × xL, -0.5 × L, 0)   formula (7);
P3D[1] = Point3d(0.5 × xL, -0.5 × L, 0)   formula (8);
P3D[2] = Point3d(0.5 × xL, 0.5 × L, 0)   formula (9);
P3D[3] = Point3d(-0.5 × xL, 0.5 × L, 0)   formula (10);
After the 3D coordinates of the four vertexes of the two-dimensional code are determined, the first pose RT is solved according to the projection relation, which can be expressed as formula (11):

Zc · [u, v, 1]^T = K · RT · [X, Y, Z, 1]^T,   K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]]   formula (11);

wherein, in formula (11), Zc is a constant, fx, fy, cx and cy are internal parameters of the acquisition device (for example, a camera), and RT is the transformation matrix of the two-dimensional code in the camera coordinate system, that is, the first pose of the two-dimensional code.
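The chord-length construction of formulas (6) to (10) can be sketched as follows, assuming the chord of an arc of length L on a circle of radius R (the function name is illustrative):

```python
import numpy as np

def wrapped_tag_corners(L, R):
    """3D vertex coordinates for a tag of side length L wrapped on a joint of
    radius R: the transverse extent is the chord xL of an arc of length L,
    as in formulas (6)-(10)."""
    xL = 2.0 * R * np.sin(L / (2.0 * R))   # formula (6): chord length
    corners = np.array([[-0.5 * xL, -0.5 * L, 0.0],
                        [ 0.5 * xL, -0.5 * L, 0.0],
                        [ 0.5 * xL,  0.5 * L, 0.0],
                        [-0.5 * xL,  0.5 * L, 0.0]])
    return xL, corners
```

The chord is always shorter than the side length; as R grows, the tag flattens and xL approaches L, recovering the flat-tag coordinates of the first embodiment.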
And determining the actual pose of the end effector according to the first pose and the predetermined position relation between the first positioning label and the end effector.
In one example, before controlling the movements of the joints of the robot according to the acquired target poses of the end effector of the robot, the position relationship between the first positioning tag and the end effector may be determined in advance, and the process of determining the position relationship between the first positioning tag and the end effector may be: collecting a calibration image, wherein the calibration image comprises a first positioning label and a second positioning label; respectively determining a first calibration pose of the first positioning label and a second calibration pose of the second positioning label according to the calibration image; and calibrating the position relation between the first positioning label and the second positioning label according to the first calibration pose and the second calibration pose, and taking the calibrated position relation as the position relation between the first positioning label and the end effector.
Specifically, as shown in fig. 6, the second positioning tag h is arranged at the palm center of the robot, and first positioning tags 0, 1, 2 … N are arranged on the end joint (for example, the wrist). The acquisition device acquires a calibration image containing two-dimensional code 1 and two-dimensional code 2 at the same time, and the poses of the two-dimensional codes are calculated from the calibration image. The pose conversion relation from two-dimensional code 2 to two-dimensional code 1, that is, the pose of code 1 relative to code 2, is:

RT_21 = RT_2^(-1) · RT_1;

similarly, the pose conversion relation between two-dimensional code 1 and two-dimensional code 0 can be obtained:

RT_10 = RT_1^(-1) · RT_0;

and the pose conversion relation from two-dimensional code 0 to the two-dimensional code h is:

RT_0h = RT_0^(-1) · RT_h.

Therefore, even if only two-dimensional code 2 is photographed from the side, with observed pose RT_2', the actual pose RT_h' of the palm center can be calculated as:

RT_h' = RT_2' · RT_21 · RT_10 · RT_0h.
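Under this composition convention (the relative pose of tag b in the frame of tag a is RT_a^(-1) · RT_b), the calibration chain and the run-time recovery of the palm pose can be sketched as follows; the helper names are illustrative:

```python
import numpy as np

def relative_pose(RT_a, RT_b):
    # pose of tag b expressed in tag a's frame (both seen in one calibration image)
    return np.linalg.inv(RT_a) @ RT_b

def palm_pose_from_tag2(RT_2_obs, RT_2, RT_1, RT_0, RT_h):
    """Chain the calibrated pairwise relations 2 -> 1 -> 0 -> h so the palm
    pose RT_h' can be recovered when only tag 2 is visible at run time."""
    RT_2h = (relative_pose(RT_2, RT_1)
             @ relative_pose(RT_1, RT_0)
             @ relative_pose(RT_0, RT_h))   # collapses to RT_2^(-1) @ RT_h
    return RT_2_obs @ RT_2h
```

A useful sanity check on the convention: if tag 2 is observed exactly where it was during calibration, the chain returns the calibrated palm pose, and a rigid motion of the whole hand moves the recovered palm pose by the same motion.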
step 204: and determining joint positioning errors of the robot according to the actual pose and the target pose.
This step is substantially the same as step 104 in the first embodiment, and will not be described herein.
Step 205: and adjusting the target pose according to the joint positioning error.
Specifically, the joint positioning error may be returned to the control end of the robot to compensate the target pose determined in step 201, so as to obtain an adjusted target pose, and steps 201 to 204 may be executed again according to the adjusted target pose until the determined joint positioning error is within the preset range.
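The repetition of steps 201 to 204 with an adjusted target pose amounts to a simple closed-loop correction. The sketch below illustrates only the loop structure, with the pose reduced to a vector and the robot modeled as a hypothetical `move` function with a systematic bias:

```python
import numpy as np

def compensate(target, move, tol=1e-4, max_iters=10):
    """Repeat steps 201-204: move, measure the actual pose, compute the joint
    positioning error, and adjust the commanded pose (step 205) until the
    error is within the preset range."""
    target = np.asarray(target, dtype=float)
    cmd = target.copy()
    err = np.full_like(cmd, np.inf)
    for _ in range(max_iters):
        actual = move(cmd)          # steps 201-203: move and measure the actual pose
        err = actual - target       # step 204: joint positioning error
        if np.linalg.norm(err) < tol:
            break
        cmd = cmd - err             # step 205: compensate the commanded pose
    return actual, err
```

For a repeatable systematic error, a single compensation step already brings the end effector onto the target; in general, several iterations may be needed.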
With the method for determining the joint positioning error of the robot provided by this embodiment, the target pose is adjusted according to the joint positioning error, which improves the accuracy of the robot's motion.
The steps of the above methods are divided for clarity of description; in implementation, steps may be combined into one step, or a step may be split into multiple steps, and all such variants fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes, without changing the core design of the algorithm or process, also falls within the protection scope of this patent.
A third embodiment of the present invention relates to a robot. As shown in fig. 7, the robot 30 comprises: a joint 301 and an end effector 302, wherein a first positioning tag is provided on the surface of the joint 301 or on the end effector 302; at least one processor 303, wherein the processor 303 is connected to the drivers of the respective joints to control the motion of the joints; and a memory 304 communicatively coupled to the at least one processor 303; wherein the memory 304 stores instructions executable by the at least one processor 303, the instructions being executed by the at least one processor 303 to enable the at least one processor 303 to perform the method of determining a joint positioning error of a robot of the first or second embodiment.
Where the memory 304 and the processor 303 are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses linking one or more of the various circuits of the processor 303 and the memory 304 together. The bus may also link various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor 303 is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor 303.
The processor 303 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of determining joint positioning errors of a robot according to the first or second embodiment.
Those skilled in the art can understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.