WO2023083056A1 - Method and apparatus for calibrating kinematic parameters of a robot - Google Patents
Method and apparatus for calibrating kinematic parameters of a robot
- Publication number
- WO2023083056A1; PCT application PCT/CN2022/128991 (CN2022128991W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- robot
- displacement
- image
- joint variable
- calibration object
- Prior art date
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
- B25J19/0095—Means or methods for testing manipulators
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1628—Programme controls characterised by the control loop
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Definitions
- The embodiments of the present application relate to the technical field of robot kinematics calibration and, more specifically, to a method and apparatus for calibrating the kinematic parameters of a robot.
- The embodiments of the present application provide a method for robot calibration that can calibrate the kinematic parameters of a robot while reducing the calibration cost.
- A method for calibrating the kinematic parameters of a robot includes: first, acquiring a displacement pair including a first displacement and a second displacement; then, determining an error value for calibrating the kinematic parameters of the robot according to the displacement pair.
- The first displacement is the actual displacement of the end of the robot moving from a first position to a second position.
- The second displacement is the nominal displacement of the end of the robot moving from the first position to the second position.
- The first position and the second position are two different points in the operating space of the robot, and the attitude of the end of the robot at the first position is the same as its attitude at the second position.
- The actual movement displacement is determined from the size of the calibration object in the operating space, the size of the first image and the size of the second image.
- The first image is an image of the calibration object captured by the actuator at the end of the robot when the end of the robot is at the first position.
- The second image is an image of the calibration object captured by the actuator at the end of the robot when the end of the robot is at the second position.
- the nominal movement displacement is determined by the robot kinematics model, the first joint variable and the second joint variable.
- the first joint variable is the joint variable of the robot when the end of the robot is at the first position.
- the second joint variable is the joint variable of the robot when the end of the robot is at the second position.
- the robot kinematics model is used to represent the relationship between the joint variables of the robot and the pose of the robot end.
- The method for calibrating the kinematic parameters of the robot according to the present application can obtain a displacement pair including the actual movement displacement and the nominal movement displacement of the end of the robot, and determine the error value for calibrating the kinematic parameters of the robot according to the displacement pair.
- The nominal movement displacement can be determined from the robot kinematics model and the joint variables of the robot at the first and second positions, while the actual movement displacement can be determined from the size of the calibration object and the images obtained by the actuator at the end of the robot.
- Since the size of the calibration object is known, the kinematic parameters of the robot can be calibrated without expensive measuring instruments for measuring the actual displacement; the calibration is therefore realized while reducing the calibration cost.
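The comparison described above can be sketched in a few lines. The function name and the numeric values below are illustrative, not taken from the patent:

```python
# Illustrative sketch: the calibration error contributed by one displacement
# pair is the difference between the measured actual displacement d_R and the
# model-predicted nominal displacement d_C.
def displacement_pair_error(d_actual, d_nominal):
    """Error value contributed by one displacement pair."""
    return d_actual - d_nominal

# Example: the actual displacement measured from images is 0.52 m, while the
# nominal kinematics model predicts 0.50 m, leaving 0.02 m to compensate.
err = displacement_pair_error(0.52, 0.50)
```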
- The method further includes: determining the kinematics model of the robot according to the factory parameters of the robot, where the factory parameters include the translation and rotation between the joints of the robot.
- The robot kinematics model can thus be determined from the factory parameters, which are relatively easy to obtain; this provides a simple way to establish the robot kinematics model.
- The acquiring of displacement pairs includes: acquiring multiple displacement pairs.
- Determining the error value based on the displacement pairs includes: constructing an error equation group from the multiple displacement pairs, where each error equation in the group is constructed from a first displacement and a second displacement, and the error equation group is solved to obtain an error matrix that includes multiple error values.
- The error values can thus be obtained by constructing and solving a system of equations through mathematical calculation; since there are many ways to solve a system of equations, this improves the flexibility of the scheme.
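One common way to solve such an over-determined error equation group is linear least squares. The sketch below assumes a linearized system J · δ = e; the sensitivity matrix J and the per-pair displacement errors e are illustrative values, not the patent's:

```python
import numpy as np

# Hypothetical linearized error system: each displacement pair i contributes
# one equation J_i . delta = e_i, where delta stacks the kinematic-parameter
# errors, J_i is a row of sensitivities, and e_i = d_actual_i - d_nominal_i.
J = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, 0.1]])            # sensitivity rows, one per pair
e = np.array([0.021, 0.017, 0.009])   # displacement errors per pair

# Least-squares solve of the (over-determined) error equation group.
delta, residuals, rank, _ = np.linalg.lstsq(J, e, rcond=None)
```

With more displacement pairs than parameters, the solve averages out measurement noise, which is why acquiring more pairs than parameters is one of the options described above.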
- The method further includes: when the end of the robot is at the first position, acquiring the first motor encoder value of the robot used to calculate the first joint variable.
- When the end of the robot is at the second position, acquiring the second motor encoder value of the robot used to calculate the second joint variable.
- the method further includes: determining a first command joint variable according to the kinematics model of the robot and the pose of the end of the robot at the first position.
- a second command joint variable is determined according to the robot kinematics model and the pose of the end of the robot at the second position.
- a command is determined according to the first command joint variable and the second command joint variable, and the command is used to control the end of the robot to move from the first position to the second position.
- The established robot kinematics model can be solved inversely to obtain the command joint variables at different positions; the command to control the robot is then determined from these command joint variables, so that the end of the robot moves from the first position to the second position.
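As an illustration of inversely solving a kinematics model for command joint variables, here is the closed-form inverse kinematics of a toy 2-link planar arm. The planar model and the link lengths are assumptions for the sketch, not the patent's robot:

```python
import math

def planar_ik(x, y, l1, l2):
    """Inverse kinematics of a 2-link planar arm (elbow-down branch)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def planar_fk(theta1, theta2, l1, l2):
    """Forward kinematics, used here to verify the inverse solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

t1, t2 = planar_ik(1.0, 1.0, 1.0, 1.0)   # command joint variables for (1, 1)
x, y = planar_fk(t1, t2, 1.0, 1.0)       # round-trip check
```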
- Moving the end of the robot from the first position to the second position includes: moving the end of the robot from the first position to the second position along a first path, wherein the first position and the second position are on the first path, and the first path is a line connecting the center of the end of the robot and a point on the surface of the calibration object.
- the moving path of the robot end can be provided, so that the robot end can move along a prescribed path.
- the acquiring displacement pairs includes: acquiring multiple displacement pairs, where the multiple displacement pairs include a first displacement pair and a second displacement pair.
- the method further includes: controlling the robot so that the robot end effector is parallel to the first surface of the calibration object.
- The method further includes: controlling the robot so that the end of the robot is parallel to a second surface of the calibration object, wherein the first surface and the second surface are two different sides of the calibration object's surface.
- The end of the robot can thus be moved in different orientations, so that the acquired multiple displacement pairs include displacement pairs in different orientations, thereby calibrating the kinematic parameters of the robot more accurately.
- Before acquiring the displacement pair, the method further includes: determining that an error of a kinematic parameter of the robot is greater than a preset threshold.
- the process of calibrating the kinematic parameters of the robot can be started, so as to ensure the accuracy of the kinematic parameters of the robot as much as possible and improve the motion accuracy of the robot.
- The actual displacement, the size of the calibration object in the robot's operating space, the size of the first image, and the size of the second image satisfy the following relationship:
- d_R is the actual movement displacement;
- H is the height of the calibration object;
- h1 and h2 are the heights of the first image and the second image, respectively;
- V′ is the distance between the center point of the actuator at the end of the robot and the center point of the first image when the robot end is at the first position;
- V″ is the distance between the center point of the actuator at the end of the robot and the center point of the second image when the robot end is at the second position.
- The actual displacement can be calculated from the above relationship, and the parameters H, h1, h2, V′ and V″ can be obtained simply (for example, by direct measurement), which keeps the scheme simple.
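Under a similar-triangles (pinhole) assumption, the object distance at each position is d = H·V/h, and the actual displacement is the change in that distance between the two positions. This is a plausible reading of the parameters H, h1, h2, V′ and V″ defined above, not the patent's verbatim formula:

```python
def actual_displacement(H, h1, h2, V1, V2):
    """Similar-triangles sketch (an assumption, not the patent's exact
    formula): with object height H, image height h and image distance V,
    the object distance is d = H * V / h; the actual movement displacement
    is the change in object distance between the two positions."""
    d1 = H * V1 / h1  # object distance at the first position
    d2 = H * V2 / h2  # object distance at the second position
    return abs(d1 - d2)

# Example: the same object appears twice as tall in the second image,
# so the camera has moved to half the original distance.
d_R = actual_displacement(2.0, 4.0, 2.0, 1.0, 1.0)
```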
- The nominal displacement, the robot kinematics model, the first joint variable and the second joint variable satisfy the relationship d_C = ‖f(q_i) − f(q_j)‖, where:
- d_C is the nominal displacement;
- f is the kinematics model of the robot;
- q_i is the first joint variable;
- q_j is the second joint variable;
- ‖·‖ represents the modulus (norm) operation.
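The nominal displacement d_C = ‖f(q_i) − f(q_j)‖ can be sketched with a toy 2-link planar model standing in for f. The real f comes from the robot's factory parameters; the link lengths l1 and l2 here are illustrative:

```python
import math

def f(q, l1=1.0, l2=1.0):
    """Toy 2-link planar forward kinematics standing in for the robot's
    kinematics model (illustrative link lengths, not the patent's)."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return (x, y)

def nominal_displacement(q_i, q_j):
    """d_C = || f(q_i) - f(q_j) ||, the norm of the pose difference."""
    pi, pj = f(q_i), f(q_j)
    return math.hypot(pi[0] - pj[0], pi[1] - pj[1])

d_C = nominal_displacement((0.0, 0.0), (math.pi / 2, 0.0))
```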
- a device for calibrating kinematic parameters of a robot is provided, and the device is used to implement the method provided in the first aspect above.
- The device for calibrating the kinematic parameters of the robot may include units and/or modules for performing the method provided by the first aspect or any of its implementation manners, such as a processing unit and an acquisition unit.
- the device for calibrating the kinematic parameters of the robot is a robot.
- the acquisition unit may be a transceiver, or an input/output interface;
- the processing unit may be at least one processor.
- the transceiver may be a transceiver circuit.
- the input/output interface may be an input/output circuit.
- the device for calibrating the kinematic parameters of the robot is a chip, a chip system or a circuit in the robot.
- The acquisition unit can be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit; the processing unit may be at least one processor, processing circuit or logic circuit, etc.
- A device for calibrating the kinematic parameters of a robot includes: at least one processor coupled with at least one memory. The at least one memory stores computer programs or instructions, and the at least one processor calls and runs these programs or instructions from the memory, so that the device performs the method provided by the first aspect or any possible implementation thereof.
- the device is a robot. In another implementation, the device is a chip, a chip system or a circuit in a robot.
- the present application provides a processor configured to execute the methods provided in the foregoing aspects.
- The processor's output, reception, input and other operations may also be understood as sending and receiving operations performed by a radio-frequency circuit and an antenna; this is not limited in this application.
- A computer-readable storage medium stores program code for execution by a device, and the program code includes instructions for performing the method provided by the first aspect or any one of its implementation manners.
- a computer program product containing instructions is provided, and when the computer program product is run on a computer, the computer is made to execute the method provided by the above first aspect or any one of the above implementation manners of the first aspect.
- In a seventh aspect, a chip is provided, which includes a processor and a communication interface.
- the processor reads the instructions stored in the memory through the communication interface, and executes the method provided by the first aspect or any one of the above implementations of the first aspect.
- Optionally, the chip further includes a memory in which computer programs or instructions are stored; the processor is configured to execute the computer programs or instructions stored in the memory, and when they are executed, the processor performs the method provided by the first aspect or any one of its implementation manners.
- A system for calibrating the kinematic parameters of a robot includes: a robot and an actuator at the end of the robot. The robot is used to obtain a displacement pair including a first displacement and a second displacement, and to determine, according to the displacement pair, the error value for calibrating the kinematic parameters of the robot.
- the actuator at the end of the robot is used to: acquire a first image of the calibration object when the end of the robot is at the first position; acquire a second image of the calibration object when the end of the robot is at the second position.
- the first displacement is the actual movement displacement of the robot end from the first position to the second position
- the second displacement is the nominal movement displacement of the robot end from the first position to the second position.
- the first position and the second position are positions of two different points in the operating space of the robot, and the attitude of the end of the robot at the first position is the same as that at the second position.
- the actual movement displacement is determined by the size of the calibration object in the operating space of the robot, the size of the first image and the size of the second image.
- the nominal movement displacement is determined by the robot kinematics model, the first joint variable and the second joint variable, the first joint variable is the joint variable of the robot when the end of the robot is at the first position; the second joint variable is the joint variable of the robot when the end of the robot is at the second position; the kinematics model of the robot is used to represent the relationship between the joint variable of the robot and the pose of the end of the robot.
- the system further includes: the calibration object.
- FIG. 1 is a schematic diagram of a scene where the embodiment of the present application can be applied.
- Fig. 2 is a schematic flowchart of a method for calibrating kinematic parameters of a robot provided by an embodiment of the present application.
- Fig. 3 is a schematic flow chart of another method for calibrating kinematic parameters of a robot provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of camera movement provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of another camera movement provided by an embodiment of the present application.
- Fig. 6 is a schematic diagram of calculating an actual displacement provided by an embodiment of the present application.
- Fig. 7 is a schematic block diagram of an apparatus 700 for calibrating kinematic parameters of a robot provided by an embodiment of the present application.
- Fig. 8 is a schematic block diagram of an apparatus 800 for calibrating kinematic parameters of a robot provided by an embodiment of the present application.
- the technical solution of the embodiment of the present application can be applied to the calibration of the kinematic parameters of the robot, such as the calibration of the kinematic parameters of the mechanical arm, the calibration of the kinematic parameters of the smart car, and the calibration of the kinematic parameters of the drone.
- FIG. 1 is a schematic diagram of a scene where the embodiment of the present application can be applied. It includes the following components: a robot body 110 , an image acquisition module 120 and an object 130 of known size.
- the robot body 110 is a robot to be calibrated, including but not limited to robots such as robotic arms, smart cars, and drones;
- The image acquisition module 120 is used to collect images, including but not limited to cameras, video cameras, etc.
- The object 130 of known size is any object with definite dimensions, including but not limited to cubes, cuboids, polyhedrons, etc. of known size.
- The size of the object 130 may be determined in any manner, which is not limited in this application; for example, it may be known from the object's parameter description, or it may be obtained by measurement.
- Figure 1 is only intended to aid understanding of a scene where the method for calibrating the kinematic parameters of a robot provided by this application can be applied, and does not constitute any limitation to the protection scope of this application.
- The method for calibrating the kinematic parameters of a robot provided by this application can also be applied to other scenarios.
- For example, the robot body may have a built-in image acquisition module, in which case the robot body 110 and the image acquisition module 120 above can be understood as a whole; for another example, the robot body may also have another shape. The applicable scenarios of this application will not be enumerated further here.
- The first is the most widely used: parameter calibration based on the position error model.
- In this method, the actual position of the end of the robot is measured by an external measuring instrument and compared with the theoretical position; a position-error differential equation is established from the actual and theoretical positions of multiple points, and the error parameters are then solved.
- Commonly used measuring instruments for this model include laser trackers and three-coordinate measuring machines. Although these have high measurement accuracy, they are expensive, complicated to operate, and low in calibration efficiency.
- The second is parameter calibration based on the distance error model.
- This method uses the property that the distance between any two points of the robot is equal in the robot coordinate system and the measurement coordinate system to establish an error model, and then solves for the kinematic parameter errors.
- Commonly used instruments for this method, such as calibration devices based on draw-wire sensors, are expensive and complicated to operate.
- The third is the use of sensors, such as inertial sensors plus position sensors, or laser sensors plus phase-sensitive detector (PSD) calibration devices, as well as image-processing methods based on image sensors; however, these approaches likewise depend on additional dedicated sensing hardware.
- This application provides a method for calibrating the kinematic parameters of a robot: the actual movement displacement of the robot end is determined from the size information of an object in the robot's operating space and the size information of the images obtained by the actuator at the end of the robot.
- The actual movement displacement and the nominal movement displacement of the robot end are used to construct an error equation, and the error equation is solved to complete the calibration, without the need for expensive measuring instruments, which reduces the calibration cost.
- Geometric error: the error in the geometry of an object, such as the deviation of the object's actual shape, orientation and position from the ideal shape, orientation and position.
- Calibration of kinematic parameters: obtaining higher absolute positioning accuracy by identifying the robot's geometric errors and compensating for them. Calibration of kinematic parameters is an effective way to improve the absolute positioning accuracy of the robot.
- Robot end: the extremity of the robot, i.e., the last joint of the robot, or the part of the robot connected to the actuator at the end of the robot.
- Actuator at the end of the robot: any tool connected to the end of the robot that performs a certain function, including but not limited to: robot grippers, robot tool quick-change devices, robot collision sensors, robot rotary connectors, robot pressure tools, compliance devices, robot spray guns, robot deburring tools, robot arc-welding torches, robot electric-welding torches, etc.
- The actuator at the end of the robot is usually regarded as a peripheral device of the robot, an accessory of the robot, a robot tool, or an end-of-arm tool.
- In this application, the actuator at the end of the robot may be an image acquisition module (such as a camera or video camera); it only needs to be able to acquire images, and its specific form is not limited.
- Operating space of the robot: the set of spatial points that the actuator at the end of the robot can reach through its motion, generally represented by projections onto the horizontal and vertical planes.
- The shape and size of the robot's operating space are important: when performing a job, the robot may be unable to complete it because of dead zones that the actuator at the end of the robot cannot reach.
- In this application, the set of spatial points reachable by the motion of the actuator at the end of the robot is referred to as the operating space of the robot by way of example; this does not constitute any limitation on the protection scope of the present application.
- For example, the operating space of the robot may also be called the working space of the robot, or simply the space of the robot.
- The position and posture of the robot can refer to the position and posture of the end of the robot in space, or to the position and posture of another movable link of the robot in space. The position can be described by a position matrix, i.e., a column vector P = [p_x, p_y, p_z]^T of the coordinates of the point.
- The posture can be represented by a posture matrix: a 3×3 matrix whose elements are the cosines of the angles between pairs of coordinate axes of the two coordinate systems (a direction-cosine matrix).
- Robot kinematics: includes forward kinematics and inverse kinematics. Forward kinematics calculates the position and posture of the end of the robot given the joint variables; inverse kinematics calculates all joint variables corresponding to a given position and posture of the end of the robot.
- The robot kinematics equation involves the establishment of the robot kinematics model, which can be expressed as M = f(q_1, q_2, …, q_n), where:
- M is the pose of the end of the robot;
- q_i is the variable of each joint of the robot.
- Solving the inverse kinematics problem enables path planning, robot control, and so on.
- The establishment of the robot kinematic equation is illustrated below by taking a three-degree-of-freedom planar joint robot as an example.
- Consider a three-degree-of-freedom planar joint robot whose links 1, 2 and 3 have lengths l1, l2 and l3, respectively. The process of establishing its kinematic equation includes:
- The coordinate systems of the robot include the hand coordinate system, the base coordinate system, the link coordinate system, and the absolute coordinate system.
- Hand coordinate system: the coordinate system of the robot hand, also known as the robot pose coordinate system, which indicates the position and posture of the robot hand in a specified coordinate system.
- Base coordinate system: the coordinate system of the robot base, which is the common reference coordinate system of each movable link and the hand of the robot.
- Link coordinate system: the coordinate system of each robot link, fixed on each movable link of the robot and moving with it.
- Absolute coordinate system: the coordinate system attached to the ground of the work site, which is the common reference coordinate system of all components of the robot.
- These are denoted: hand coordinate system {h}; base coordinate system {0}; link coordinate system {i} (i = 1, 2, …, n); absolute coordinate system {B}.
- The coordinate systems may be established as: the base coordinate system {0}; the link coordinate systems {i}; and the hand coordinate system {h}, which coincides with the end coordinate system {n}.
- The pose matrices of adjacent links are composed from the link parameters.
- The link pose matrix M_01 is:
- The link pose matrix M_02 is:
- The link pose matrix M_03 is:
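The chained link pose matrices for the three-degree-of-freedom planar arm can be sketched with 3×3 planar homogeneous transforms. This is a standard construction (rotation by the joint angle, then translation along the link); the joint angles and link lengths below are illustrative:

```python
import numpy as np

def link_transform(theta, l):
    """Planar homogeneous transform of one link: rotate by the joint angle
    theta, then translate by the link length l along the rotated x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, l * c],
                     [s,  c, l * s],
                     [0.0, 0.0, 1.0]])

def end_pose(thetas, lengths):
    """Chain the per-link transforms: M_03 = M_01 @ M_12 @ M_23."""
    M = np.eye(3)
    for theta, l in zip(thetas, lengths):
        M = M @ link_transform(theta, l)
    return M

# Fully stretched arm: the end lies at (l1 + l2 + l3, 0).
M = end_pose([0.0, 0.0, 0.0], [1.0, 2.0, 3.0])
```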
- Joint coordinate system: a coordinate system used to describe the motion of each independent joint of the robot. For example, for a six-axis serial robot arm, all joint types are rotary joints. In the joint coordinate system, the end of the robot can be moved to a desired position by driving each joint in turn, so that the end reaches the specified position.
- Transformation matrix: the matrix that transforms between the coordinate systems of different joints of the robot.
- the coordinate system corresponding to joint #1 of the robot is coordinate system #1
- the coordinate system corresponding to joint #2 of the robot is coordinate system #2
- coordinate system #2 can be described by coordinate system #1 and the transformation matrix.
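Describing coordinate system #2 through coordinate system #1 and a transformation matrix can be sketched as follows. The planar 2D case and the numeric values are illustrative:

```python
import numpy as np

# T_12 maps homogeneous points expressed in frame #2 into frame #1:
# here, frame #2 is rotated +90 degrees and translated by (2, 1) in frame #1.
T_12 = np.array([[0.0, -1.0, 2.0],
                 [1.0,  0.0, 1.0],
                 [0.0,  0.0, 1.0]])

p_in_2 = np.array([1.0, 0.0, 1.0])  # a point on frame #2's x-axis
p_in_1 = T_12 @ p_in_2              # the same point expressed in frame #1
```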
- An object has six degrees of freedom in space, namely the degrees of freedom of translation along the three rectangular coordinate axes x, y and z, and the degrees of freedom of rotation around these three axes.
- Visual servoing: a concept common in robotics research. It generally refers to automatically acquiring and processing images of a real object through optical devices and non-contact sensors, and using the image feedback information to further control the machine or to make corresponding adaptive adjustments.
- Nominal position: the inaccurate end position of the robot calculated from kinematic parameters that contain errors.
- Perspective-n-Point (PnP) algorithm: a method for solving the motion between three-dimensional (3D) points and their two-dimensional (2D) image projections. For example, knowing the relative coordinates of at least four determined points in 3D space and their positions in a picture, one can estimate the pose of the camera with respect to these points, or the pose of these points in the camera coordinate system.
- Fig. 2 is a schematic flowchart of a method for calibrating kinematic parameters of a robot provided by an embodiment of the present application.
- the method may be executed by the device for calibrating the kinematic parameters of the robot, or may be executed by an internal module of the device for calibrating the kinematic parameters of the robot.
- the method includes the following steps.
- the displacement pair includes a first displacement and a second displacement
- the first displacement is the actual movement displacement of the end of the robot moving from the first position to the second position
- the second displacement is the nominal movement displacement of the end of the robot moving from the first position to the second position
- the first position and the second position are the positions of two different points in the operating space of the robot, and the attitude of the end of the robot at the first position is the same as its attitude at the second position.
- the robot may be mechanical equipment such as a mechanical arm, a smart car, or a drone.
- a robot is used as a mechanical arm for description.
- the above-mentioned end of the robot may be the last joint of the robot.
- acquiring displacement pairs includes: acquiring multiple displacement pairs, wherein the specific number of displacement pairs may be determined in the following manner.
- the number of displacement pairs is equal to the number of kinematic parameters of the robot to be calibrated.
- the kinematic parameters of the robot to be calibrated include two link parameters, and the number of displacement pairs is two.
- the number of displacement pairs is greater than the number of kinematic parameters of the robot to be calibrated.
- the kinematic parameters of the robot to be calibrated include two link parameters, and the number of displacement pairs can be greater than two.
- the above displacement pair can be one.
- the kinematic parameters of the robot to be calibrated include two link parameters (for example, link parameter #1 and link parameter #2); one of the link parameters (for example, link parameter #1) can be calibrated first, and then the other link parameter (for example, link parameter #2) can be calibrated.
- in the case of calibrating link parameter #1, the above displacement pair can be one; similarly, in the case of calibrating link parameter #2, the above displacement pair can also be one.
- the kinematics parameter of the robot to be calibrated is one link parameter, and the number of displacement pairs may be one.
- acquiring a plurality of displacement pairs includes acquiring a first displacement pair and a second displacement pair.
- the end of the robot can move multiple times along one direction, for example, move along the first direction for the first time to obtain the first displacement pair, and move along the first direction for the second time to obtain the second displacement pair .
- For example, before obtaining the first displacement pair, control the robot so that the end of the robot is parallel to the first surface of the calibration object; before obtaining the second displacement pair, again control the robot so that the end of the robot is parallel to the first surface of the calibration object.
- the end of the robot can move multiple times along multiple directions, for example, move along the first direction for the first time to obtain the first displacement pair, and move along the second direction for the second time to obtain the second displacement pair.
- For example, before obtaining the first displacement pair, control the robot so that the end of the robot is parallel to the first surface of the calibration object; before obtaining the second displacement pair, control the robot so that the end of the robot is parallel to the second surface of the calibration object, where the first surface and the second surface are two different surfaces of the calibration object.
- displacement pair #1 includes actual movement displacement #1 and nominal movement displacement #1
- actual movement displacement #1 may be the actual movement displacement of the end of the robot from first position #1 to second position #1 (first position #1 and second position #1 are the positions of two different points in the operating space of the robot)
- nominal movement displacement #1 may be the nominal movement displacement of the end of the robot moving from first position #1 to second position #1; there is an error between actual movement displacement #1 and nominal movement displacement #1, and the postures of the end of the robot at first position #1 and second position #1 are the same.
- Displacement pair #2 includes actual movement displacement #2 and nominal movement displacement #2.
- Actual movement displacement #2 may be the actual movement displacement of the end of the robot from first position #2 to second position #2
- nominal movement displacement #2 may be the nominal movement displacement of the end of the robot moving from first position #2 to second position #2, and the attitude of the end of the robot at first position #2 is the same as at second position #2.
- Displacement pair #3 includes actual movement displacement #3 and nominal movement displacement #3. Actual movement displacement #3 may be the actual movement displacement of the end of the robot from first position #3 to second position #3, and nominal movement displacement #3 may be the nominal movement displacement of the end of the robot moving from first position #3 to second position #3; the attitude of the end of the robot at first position #3 is the same as at second position #3.
- the first position #2 and the second position #1 can be the same position
- the first position #3 and the second position #2 can be the same position. For example, the end of the robot moves from first position #1 to second position #1, then from second position #1 to second position #2, then from second position #2 to second position #3, and the postures of the end of the robot at first position #1, second position #1, second position #2, and second position #3 are all the same.
- the posture of the end of the robot at the first position is the same as that at the second position, including the following two possible ways.
- In the first way, it is enough for the end of the robot to maintain the same attitude before and after moving; the attitude of the end of the robot can change during the movement. That is, the attitude matrix of the end of the robot is kept unchanged before and after moving.
- the attitude of the robot end at the first position can be represented by the attitude matrix R1 composed of the pairwise cosine values of the angles between the three coordinate axes of the coordinate system, and the attitude of the robot end at the second position can be represented by the attitude matrix R2 composed in the same way, where R1 and R2 are the same.
- the attitude of the robot end at the second position is adjusted to satisfy the attitude matrix R1.
- the end of the robot maintains the same attitude before and after the movement and during the movement, and the attitude of the end of the robot remains unchanged during the movement.
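The "same attitude before and after moving" constraint can be checked numerically by comparing the direction-cosine (attitude) matrices R1 and R2; a minimal sketch, with an assumed rotation about the z axis standing in for the end attitude:

```python
import numpy as np

def rot_z(theta):
    """Attitude (direction-cosine) matrix for a rotation of theta about z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R1 = rot_z(np.radians(30.0))  # attitude matrix at the first position
R2 = rot_z(np.radians(30.0))  # attitude matrix at the second position
same_attitude = bool(np.allclose(R1, R2))  # must hold before and after the move
```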
- the moving process of the end of the robot can be controlled by instructions.
- the first instruction joint variable and the second instruction joint variable determine an instruction (or called a control instruction), and the instruction is used to control the end of the robot to move from the first position to the second position.
- the pose of the end of the robot at the first position is pose #1, where pose #1 includes position #1 and attitude #1; position #1 is the first position, and attitude #1 can be determined based on the attitude before the movement (for example, attitude #1 before the movement may be the factory attitude). Command joint variable #1 is obtained by inverse solution according to the robot kinematics model and pose #1.
- the pose of the end of the robot at the second position is pose #2, where pose #2 includes position #2 and attitude #2; position #2 is the second position, and attitude #2 is attitude #1. Command joint variable #2 is obtained by inverse solution according to the robot kinematics model and pose #2. Based on command joint variable #1 and command joint variable #2, the command can be obtained. For example, if the translation of command joint variable #2 compared with command joint variable #1 is 5 cm and the rotation is plus 30 degrees, the command can be: plus 5 cm of translation and plus 30 degrees of rotation.
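The worked example above (a 5 cm translation and a 30 degree rotation between the two command joint variables) can be sketched as a simple difference; the two-element joint-variable layout below is an assumption for illustration only:

```python
import numpy as np

# Illustrative layout: each command joint variable is [translation_m, rotation_rad].
q1 = np.array([0.00, 0.0])               # command joint variable #1
q2 = np.array([0.05, np.radians(30.0)])  # command joint variable #2
command = q2 - q1                        # move +5 cm and rotate +30 degrees
```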
- the movement of the end of the robot from the first position to the second position may take a point on the end of the robot as a reference point, and the point moves from the first position to the second position.
- the center point of the robot tip is moved from said first position to said second position.
- the robot tip can move from a first position to said second position along a certain path.
- the end of the robot moves from the first position to the second position along a first path; the first position and the second position are on the first path, and the first path is a line connecting the center of the robot tip to a point on the surface of the calibration object.
- the movement of the end of the robot can drive the movement of the actuator at the end of the robot.
- the robot end moves from said first position to said second position, and the actuator of the robot end moves from a first position' to a second position'.
- first position' is different from the first position and the second position' is different from the second position.
- the distance between the first position' and the first position can be understood as the distance between the center point of the end of the robot and the center point of the actuator at the end of the robot; the distance between the second position' and the second position can likewise be understood as the distance between the center point of the end of the robot and the center point of the actuator at the end of the robot.
- the nominal movement displacement of the end of the robot from the first position to the second position can be regarded as the nominal movement displacement of the actuator at the end of the robot from the first position' to the second position'; the actual movement displacement of the end of the robot from the first position to the second position can be regarded as the actual movement displacement of the actuator at the end of the robot from the first position' to the second position'.
- the online kinematic parameter calibration can be performed regularly according to the frequency of use and the degree of wear of the robot on the production line. For example, regularly detect the distance error between the end of the robot and the actual target after reaching the specified position, and if the error exceeds the allowable range, recalibrate the kinematic parameters.
- the online calibration system with closed-loop feedback helps to find out whether the absolute positioning accuracy of the robot is degraded in time.
- the robot calibration process can be retriggered. This realizes re-calibration of the robot after it has left the factory, timely eliminates the cumulative error caused by long-term operation of the robot, and does not require stopping the line for calibration, which improves the production efficiency of the industrial production line.
- the flow of the method shown in FIG. 2 further includes the following steps.
- the preset threshold may be a preset value.
- the kinematic parameters of the robot may be calibrated when the robot leaves the factory.
- the actual movement displacement is determined by the size of the calibration object in the operation space, the size of the first image, and the size of the second image.
- the first image is the image of the calibration object acquired by the actuator of the robot end when the robot end is at the first position
- the second image is the image of the calibration object acquired by the actuator at the end of the robot when the end of the robot is at the second position.
- the above-mentioned actual displacement can be determined based on the size of the calibration object in the operating space of the robot and the sizes of the images of the calibration object acquired by the actuator at the end of the robot. This realizes the calculation of the actual displacement without using expensive measuring instruments, and therefore the calibration cost of the kinematic parameters of the robot can be reduced.
- the calibration object in the operating space of the robot in the embodiment of the present application may be any object of known size (eg, the object 130 of known size shown in FIG. 1 ). That is to say, there is no need to use a specific calibration plate for calibration, and the existing workpieces with known dimensions on the production line can be simply used.
- the size of the above-mentioned calibration object can be measured before calculating the actual displacement, or obtained from the parameter specification of the calibration object before calculating the actual displacement, and stored in the memory of the robot; it is then read from memory when calculating the actual movement displacement.
- the size of the above-mentioned calibration object can be measured when calculating the actual displacement, or obtained from the parameter specification of the calibration object when calculating the actual displacement. In this implementation, there is no need to store the size of the calibration object; it can be obtained when calculating the actual displacement.
- In the process of determining the movement displacement, the size of the calibration object can be acquired in the first possible implementation above, that is, acquired and stored before calculating the actual movement displacement, and read from memory and used when the size of the calibration object is needed.
- the required parameters include, for example, the size of the calibration object, the size of the first image, and the size of the second image.
- when the end of the robot is at the first position, the calibration object is located in the operating space of the robot, and the actuator at the end of the robot can acquire the first image of the calibration object; similarly, when the end of the robot is at the second position, the calibration object is also located in the operating space of the robot, and the actuator at the end of the robot acquires the second image of the calibration object.
- when the end of the robot is at the first position and at the second position, the calibration object is located in the operating space of the robot; this may cover the following two situations.
- In one situation, the calibration object is located in the operating space of the robot before and after the end of the robot moves, but may not be located in the operating space during the movement.
- In the other situation, the calibration object is located in the operating space of the robot before, after, and during the movement of the end of the robot.
- there is no limitation on how to obtain the size of the image acquired by the actuator at the end of the robot; the size of the image may be obtained by measuring the acquired image.
- the above-mentioned calibration object in the embodiment of the present application may be an object with regular edges, for example, an object of known size such as a cuboid or a polyhedron.
- the actual displacement, the size of the calibration object in the operating space of the robot, the size of the first image, and the size of the second image satisfy the following relationship:
- d_R is the actual displacement
- H is the height of the calibration object
- h_1 is the height of the first image
- h_2 is the height of the second image
- V' is the distance between the center point of the actuator at the end of the robot and the center point of the first image when the end of the robot is at the first position
- V'' is the distance between the center point of the actuator at the end of the robot and the center point of the second image when the end of the robot is at the second position.
- the above relational expression among the actual movement displacement, the size of the calibration object in the operating space of the robot, the size of the first image, and the size of the second image is only an example of how to calculate the actual movement displacement and does not constitute any limitation to the protection scope of the present application; the actual movement displacement can also be calculated from the size of the calibration object, the size of the first image, and the size of the second image through other mathematical methods.
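The relational expression itself is not reproduced in this text. As a hedged sketch only, one plausible pinhole-camera reading is: by similar triangles H/h = V/f, the object distance is V = f·H/h, and a displacement along the optical axis is the change in V between the two images. The focal length and image heights below are assumed values:

```python
# Hypothetical pinhole-model sketch (an assumption, not the patent's formula).
f_pix = 1000.0          # focal length in pixels (assumed)
H = 0.10                # height of the calibration object, metres
h1, h2 = 200.0, 250.0   # image heights at the first/second position, pixels

V1 = f_pix * H / h1     # object distance at the first position  -> 0.5 m
V2 = f_pix * H / h2     # object distance at the second position -> 0.4 m
d_R = abs(V1 - V2)      # actual movement displacement along the axis -> 0.1 m
```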
- when the size of the calibration object is known, first use the PnP algorithm to calculate the spatial position p1 of the calibration object in the camera coordinate system when the end of the robot is at the first position; after the end of the robot moves, use the same PnP algorithm to calculate the spatial position p2 of the calibration object in the camera coordinate system when the end of the robot is at the second position; the actual displacement of the camera in space can then be calculated equivalently:
- d_R = ||p2 - p1||, where d_R represents the displacement distance and || · || represents the modulo operation.
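The equivalent calculation d_R = ||p2 - p1|| can be sketched with NumPy; the two camera-frame positions are illustrative numbers, not values from the patent:

```python
import numpy as np

# p1, p2: calibration-object positions in the camera frame from the two PnP solves
p1 = np.array([0.10, 0.00, 0.50])   # illustrative numbers (metres)
p2 = np.array([0.10, 0.00, 0.40])
d_R = np.linalg.norm(p2 - p1)       # actual displacement length
```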
- the nominal movement displacement is determined by the robot kinematics model, the first joint variable and the second joint variable.
- the first joint variable is the joint variable of the robot when the end of the robot is at the first position
- the second joint variable is the joint variable of the robot when the end of the robot is at the second position
- the robot kinematics model is used to represent the relationship between the joint variables of the robot and the pose of the end of the robot.
- a first motor encoder value of the robot is acquired, and the first motor encoder value is used to calculate the first joint variable.
- a second motor encoder value of the robot is obtained, and the second motor encoder value is used to calculate the second joint variable.
- the value of the motor encoder at a certain joint read from the robot is encoder1, the initial value of the encoder is encoder0, and the resolution of the encoder is bit1;
- the reduction ratio of the harmonic reducer is a fixed value ration1;
- the joint variable can be calculated by the following formula: (encoder1-encoder0)/(ration1*bit1/2/pi).
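The encoder-to-joint-variable formula above can be sketched directly; treating bit1 as the number of encoder counts per motor revolution is an assumption made for this illustration:

```python
import math

def joint_variable_rad(encoder1, encoder0, ration1, bit1):
    """Joint variable from motor-encoder counts, per the formula in the text.

    Assumes bit1 is the number of encoder counts per motor revolution and
    ration1 is the reduction ratio of the harmonic reducer.
    """
    return (encoder1 - encoder0) / (ration1 * bit1 / 2 / math.pi)

# One full motor revolution through a 100:1 reducer -> 2*pi/100 rad at the joint.
angle = joint_variable_rad(encoder1=4096, encoder0=0, ration1=100, bit1=4096)
```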
- the first nominal position of the robot end can be obtained based on the first joint variable and the robot kinematics model;
- the second nominal position of the robot end can be obtained through a forward solution based on the second joint variable and the robot kinematics model.
- the distance between the first nominal position and the second nominal position can be understood as the nominal movement displacement.
- the nominal displacement, the robot kinematics model, the first joint variable and the second joint variable satisfy the following relationship: d_C = ||f(q_j) - f(q_i)||
- d_C is the nominal displacement
- f is the kinematics model of the robot
- q_i is the first joint variable
- q_j is the second joint variable
- || · || represents a modulo operation.
- the calculation of the nominal displacement of the end of the robot moving from the first position to the second position may refer to the introduction of current related technologies, and details will not be repeated here.
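As a hedged sketch of the nominal-displacement computation d_C = ||f(q_j) - f(q_i)|| (not the patent's actual kinematics model), a toy two-link planar arm stands in for the forward-kinematics function f:

```python
import numpy as np

# Toy two-link planar arm standing in for the kinematics model f (assumed).
L1, L2 = 0.5, 0.4  # nominal link lengths, i.e. the kinematic parameters

def f(q):
    """Forward solution: joint variables -> nominal end position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

q_i = np.array([0.0, 0.0])             # first joint variable
q_j = np.array([0.0, np.pi / 2])       # second joint variable
d_C = np.linalg.norm(f(q_j) - f(q_i))  # nominal movement displacement
```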
- an error value for calibrating the kinematic parameters of the robot can be determined based on the displacement pair, and the method flow shown in FIG. 2 further includes the following steps.
- the above displacement pair includes one displacement pair.
- An error equation is constructed for the first displacement and the second displacement included in the one displacement pair, and an error value is obtained by solving the error equation.
- the kinematic parameter of the robot to be calibrated is displacement, and the displacement is compensated and corrected based on the obtained error value to improve the absolute positioning accuracy of the robot.
- the above displacement pair includes multiple displacement pairs.
- An error equation system is constructed from the plurality of displacement pairs; each error equation in the system is constructed from a first displacement and a second displacement, and solving the error equation system yields an error matrix that includes a plurality of error values.
- the kinematic parameters of the robot to be calibrated include two link parameters, and the two link parameters are compensated and corrected based on the two error values included in the obtained error matrix to improve the absolute positioning accuracy of the robot.
- a displacement error model is constructed by using the actual displacement and the nominal displacement of the end of the robot.
- the basic idea of the displacement error model is: if the kinematic parameters of the robot are accurate enough, the actual displacement of the end of the robot should be equal to the nominal displacement. However, due to the error between the theoretical kinematic parameters and the actual kinematic parameters, the actual displacement is not equal to the nominal displacement, so the error equation can be constructed. The error equation is described as follows:
- P_C(i) and P_C(j) are the nominal starting position and nominal end position of the end of the robot calculated using the robot kinematics model;
- P_R(i) and P_R(j) are the actual starting position and the actual end position of the end of the robot obtained by the measurement device (e.g., camera).
- x_i, y_i, z_i represent the nominal starting position components of the robot end in the x, y, and z directions respectively; dx_i, dy_i, dz_i represent the error components between the actual starting position and the nominal starting position of the robot end in the x, y, and z directions respectively; x_j, y_j, z_j represent the nominal end position components of the robot end in the x, y, and z directions respectively; dx_j, dy_j, dz_j represent the error components between the actual end position and the nominal end position in the x, y, and z directions respectively.
- d_R(i, j) = Δd(i, j) + d_C(i, j)
- d_C(i, j) and d_R(i, j) are the nominal displacement length and the actual displacement length (i.e., the moduli of the displacement vectors), and Δd(i, j) is the difference between the two displacement lengths; after obtaining the displacement length difference, the following error equation can be constructed:
- the error equation can be written as Δd = J·ΔX, where the displacement error Δd is the difference between the actual movement displacement d_R of the robot end and the nominal movement displacement d_C of the robot end; J is the Jacobian matrix of the link parameters calculated based on the original kinematic parameters; therefore, only the link-parameter error ΔX remains as the unknown quantity in the displacement error expression.
- to solve for ΔX, multiple sets of motion data can be combined to construct an error equation system; mathematical solution methods (such as the least squares method or an iterative solution method) are used to obtain the error matrix ΔX of the link parameters, and the kinematic parameters are compensated and corrected to improve the absolute positioning accuracy of the robot.
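The least-squares solution of the error equation system Δd = J·ΔX can be sketched as follows; the Jacobian rows and the link-parameter errors below are synthetic numbers, not values from the patent:

```python
import numpy as np

# Synthetic error equation system  dd = J @ dX  solved by least squares.
J = np.array([[1.0, 0.5],
              [0.3, 1.2],
              [0.8, 0.1]])            # one Jacobian row per displacement pair
true_dX = np.array([0.002, -0.001])   # link-parameter errors to be recovered
dd = J @ true_dX                      # displacement errors d_R - d_C

dX, *_ = np.linalg.lstsq(J, dd, rcond=None)  # least-squares estimate of dX
```

With more displacement pairs than unknown parameters, the overdetermined system averages out measurement noise, which is why the text suggests acquiring more pairs than parameters.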
- Fig. 3 is a schematic flowchart of another method for calibrating kinematic parameters of a robot provided by an embodiment of the present application. The method includes the following steps.
- the robot kinematics model is a function of the joint variables of the robot, which is used to represent the relationship between the joint variables of the robot and the pose of the end of the robot.
- M is the pose of the end of the robot
- q i is each joint variable of the robot
- function f represents the kinematics model of the robot.
- the joint variables of the robot include the angle information of the joints of the robot, the position information of the joints of the robot, the translation amount between different joints of the robot, the rotation amount between different joints of the robot, or the height information of the joints of the robot, etc.
- the robot kinematics model is used to determine the pose of the end of the robot according to the joint variables of the robot.
- the pose M of the end of the robot can be calculated by combining the robot kinematics model with the joint variables q i of each joint of the robot, that is, the forward kinematics solution process.
- the kinematics model of the robot is used to determine command joint variable values of the robot according to the pose of the end of the robot.
- the instruction joint variable is a joint variable used to determine an instruction to control the robot.
- the robot kinematics model can be combined with the pose M of the end of the robot to calculate the command joint variable q i of each joint of the robot, which is the inverse kinematics solution process.
- the kinematics model of the robot is established according to the original parameters provided by the robot manufacturer (for example, translation and rotation between each joint of the robot).
- the robot to be calibrated is a multi-joint robot arm
- the total number of joints of the multi-joint robot is n
- the sequence from the motor outward is the nth joint, the (n-1)th joint, ..., the 1st joint
- n is a positive integer.
- the transformation matrix from the joint coordinate system of the (i-1)th joint to the joint coordinate system of the ith joint is denoted i-1 T i
- the transformation matrix i-1 T i is determined by the relative translation and rotation relationship between the joint axes of the (i-1)th joint and the ith joint
- i is a positive integer less than or equal to n.
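A common way to realize the chained transformation matrices i-1 T i is the Denavit-Hartenberg (DH) convention; the sketch below assumes standard DH parameters, which the patent does not specify, and chains them into 0 T n:

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Standard DH matrix (i-1)T(i) from link parameters (assumed convention)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ ct, -st * ca,  st * sa, a * ct],
                     [ st,  ct * ca, -ct * sa, a * st],
                     [0.0,       sa,       ca,      d],
                     [0.0,      0.0,      0.0,    1.0]])

# Chain the per-joint matrices:  0T_n = 0T1 . 1T2 . ... . (n-1)T_n
dh_rows = [(0.5, 0.0, 0.0, 0.0),   # hypothetical 2-joint arm, all theta = 0
           (0.4, 0.0, 0.0, 0.0)]
T = np.eye(4)
for a, alpha, d, theta in dh_rows:
    T = T @ dh_transform(a, alpha, d, theta)
# The end pose M is T; the end position is the translation column T[:3, 3].
```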
- the kinematics model of the robot is determined according to the original parameters provided by the robot manufacturer.
- the robot kinematics model is included in the factory parameters of the robot.
- the kinematics model of the robot is acquired through other devices.
- a device capable of establishing a robot kinematics model establishes the robot kinematics model, and sends the established robot kinematics model to the device for calibrating the robot through a message.
- command joint variables before and after the end of the robot moves can be determined according to the robot kinematics model.
- the method flow shown in FIG. 3 also includes the following steps.
- the joint variables include joint angle values.
- the end of the robot may be moved from a first position to a second position, and the attitude of the end of the robot at the first position is the same as that at the second position.
- the first position and the second position are positions of two different points in the operation space of the robot.
- the first position is the current position of the end of the robot.
- the robot is controlled to drive the end of the robot parallel to a certain surface (which can be called the first surface) of an object of known size in the operating space, and then a point in the operating space of the robot is designated as the target point; the position of the target point is the second position mentioned above.
- the first position is the current position of the end of the robot.
- Before determining the above-mentioned target point, it is not necessary to make the end of the robot parallel to a certain surface of an object of known size in the operation space.
- a point in the operation space is arbitrarily selected as the target point, and the position of the target point is the above-mentioned second position.
- the robot kinematics model is a function of the robot joint variables. After the robot kinematics model is established through the above step S210, the first command joint variable corresponding to the first position and the second command joint variable corresponding to the second position can be determined based on the robot kinematics model.
- the first command joint variable corresponding to the first position is calculated inversely according to the kinematics model of the robot.
- the pose parameter M1 of the end of the robot at the first position is used as the input of the kinematics model, and the first command joint variable q i1 corresponding to the first position is output.
- the pose parameter M2 of the end of the robot at the second position is used as the input of the kinematics model, and the second command joint variable q i2 corresponding to the second position is output.
- the instruction is determined according to the first instruction joint variable and the second instruction joint variable, and the method flow shown in FIG. 3 further includes the following steps.
- the relationship between the first command joint variable and the second command joint variable may determine the command.
- the instruction may be an instruction to control the movement and/or rotation of each joint so that the angle value of each joint is updated from the first command joint variable to the second command joint variable.
- This instruction is used to control the robot to drive the end of the robot to move a certain distance in the robot's operating space to the above second position, while ensuring that the attitude of the end of the robot always satisfies certain constraints before and after the movement (for example, the attitude of the end of the robot remains unchanged before and after the movement), and that the object placed in the operating space is within the field of view of the actuator (e.g., image acquisition module) at the end of the robot before and after the robot moves.
- moving the end of the robot for a certain distance in the operating space of the robot includes: moving the end of the robot for a certain distance along the line connecting the feature point and the center of the end of the robot, And ensure that the end posture of the robot remains unchanged throughout the movement process.
- the feature point is any point on an object of known size (for example, any point on the above-mentioned first surface).
- the movement of the end of the robot drives the movement of the actuator at the end of the robot.
- the image acquisition module at the end of the robot is taken as an example for illustration below.
- FIG. 4 is a schematic diagram of camera movement provided by an embodiment of the present application.
- It can be seen from Fig. 4 that the camera moves from initial position #1 to target position #1 under the control of the instruction, and initial position #1 and target position #1 lie on the line connecting the center of the camera and the feature point on the first surface. The pose of the end of the robot does not change before and after the camera moves.
- FIG. 4 is only an example of a single movement of the camera from the initial position #1 to the target position #1.
- the camera can move multiple times. For example, after the camera moves from initial position #1 to target position #1, a point in the operating space of the robot can be re-designated as the target point.
- the position of this target point is target position #2, and the current camera position can be used as initial position #2.
- the relationship between the command joint variable corresponding to target position #2 and the command joint variable corresponding to the current initial position #2 can determine another command, which is used to control the camera to move from initial position #2 to target position #2.
- the movement of the camera from initial position #2 to target position #2 can be as follows: the camera moves a certain distance along the line connecting the feature point and the center of the camera, ensuring that the end posture of the robot remains unchanged before and after the movement (during the movement, the end posture of the robot may change).
- the feature point is any point on the object of known size (for example, any point on the second surface, the second surface is different from the first surface).
- FIG. 4 only exemplarily shows the moving manner of the camera, which does not constitute any limitation to the protection scope of the present application.
- the moving path of the camera may not move along the line connecting the feature point and the center of the camera.
- the camera can move multiple times in different directions.
- (a) and (b) in FIG. 5 are schematic diagrams of another camera movement provided by an embodiment of the present application.
- the second surface and the first surface are different surfaces of an object of known size, and the initial position #2 may be the target position #1 after the first movement.
- the use of polyhedral structural parts can increase the range of motion of the robot and traverse different configurations as much as possible, which helps to improve the calibration accuracy.
- the polyhedral structure has low manufacturing cost, strong applicability, and easy promotion.
- the embodiment of the present application does not limit the moving path of the camera, as long as the camera can capture images of objects of known size at the initial position and the target position.
- after the robot drives the camera to move a certain distance under the control of the instruction, the actual movement displacement (or actual movement distance) of the robot end can be determined from the actual size of the object of known size and the imaging size in the image collected by the camera.
- the method flow shown in FIG. 3 also includes the following steps.
- the spatial movement displacement of the camera is calculated by combining the actual size information of the object with the visual measurement values. Since the robot end keeps its posture unchanged during the movement, this distance is also the actual movement displacement of the robot end.
- FIG. 6 is a schematic diagram of calculating the actual moving displacement provided by the embodiment of the present application. It can be seen from Fig. 6 that the displacement of the camera in space is deduced according to the actual size information of the object and the imaging size information on the imaging plane of the camera.
- h1 is the imaging height of the object in the image collected at the initial position (such as the above-mentioned initial position #1) before the camera moves;
- h2 is the imaging height of the object in the image collected at the target position (such as the above-mentioned target position #1) after the camera moves;
- V is the image distance of the camera (for example, a factory parameter of the camera);
- H is the actual height of the workpiece calibration object;
- U is the object distance between the workpiece calibration object and the camera;
- C′ is the distance between the image center point and the center point of the imaging plane before the camera moves, and V′ is the distance between the image center point and the lens center point before the camera moves;
- C″ is the distance between the image center point and the center point of the imaging plane after the camera moves, and V″ is the distance between the image center point and the lens center point after the camera moves;
- from these quantities, the distance dR that the camera moves in space can be calculated. Since the posture of the robot end remains unchanged before and after the robot moves, the distance dR is equal to the actual movement displacement of the robot end.
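The similar-triangles relation above can be sketched in code as follows. This is a minimal illustration (function and variable names are assumptions, and it assumes the image distances V′ and V″ before and after the move are known): each camera pose gives an object distance U = H·V/h, and the camera travel is the difference of the two object distances.

```python
def actual_displacement(H, h1, h2, v1, v2):
    """Camera travel along the viewing line, from two images of an object
    of known height H (pinhole model, similar triangles).

    h1, h2 : imaging heights of the object before / after the move
    v1, v2 : image distances (lens centre to image plane) before / after
    """
    u1 = H * v1 / h1  # object distance before the move
    u2 = H * v2 / h2  # object distance after the move
    return abs(u1 - u2)

# Example: a 100 mm calibration object imaged at heights 5 mm and 8 mm,
# with a 50 mm image distance in both poses
d_r = actual_displacement(100.0, 5.0, 8.0, 50.0, 50.0)
# -> 375.0 (u1 = 1000 mm, u2 = 625 mm)
```

Because the end posture is unchanged before and after the move, this distance can be taken as the actual movement displacement of the robot end.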
- the manner of acquiring the actual movement displacement of the camera is not limited to the method shown in FIG. 6 .
- the method flow shown in FIG. 3 also includes the following steps.
- there is no limitation on the calculation method of the nominal movement displacement of the robot end.
- the nominal position of the robot end before and after the movement is calculated according to the robot kinematics model and the angle information of each joint before and after the movement of the robot end, and the nominal displacement between the two positions is calculated.
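The step above can be sketched as follows, using a toy planar serial arm as the kinematics model f (an illustrative stand-in for the application's actual model; names are assumptions): compute the nominal end position for each joint configuration and take the distance between the two positions.

```python
import math

def fk_planar(q, lengths):
    """Forward kinematics of a planar serial arm: joint angles -> end position."""
    x = y = 0.0
    acc = 0.0
    for angle, link_len in zip(q, lengths):
        acc += angle                  # absolute orientation of this link
        x += link_len * math.cos(acc)
        y += link_len * math.sin(acc)
    return (x, y)

def nominal_displacement(q_i, q_j, lengths):
    """d_C = |f(q_i) - f(q_j)| under the (possibly erroneous) nominal lengths."""
    xi, yi = fk_planar(q_i, lengths)
    xj, yj = fk_planar(q_j, lengths)
    return math.hypot(xi - xj, yi - yj)

# Two joint configurations of a two-link arm with nominal lengths 0.5 m and 0.4 m
d_c = nominal_displacement([0.0, 0.0], [math.pi / 2, 0.0], [0.5, 0.4])
```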
- an error equation can be constructed based on the actual displacement of the robot end and the nominal displacement of the actuator at the robot end, and solved for the error values.
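As a sketch of how such an error equation might be solved, the following reduces the least-squares step to a single scalar parameter (the names and the scalar-sensitivity simplification are assumptions; a full calibration stacks one equation per displacement pair and solves for the whole error vector):

```python
def solve_parameter_error(pairs, jacobians):
    """Least-squares solution of J * delta = e for one scalar parameter.

    pairs     : list of (actual_displacement, nominal_displacement) tuples
    jacobians : sensitivity d(nominal displacement)/d(parameter) per pair
    Each pair contributes one error equation; the normal equation
    (J^T J) * delta = J^T * e reduces to a scalar division here.
    """
    residuals = [dr - dc for dr, dc in pairs]      # e_k = dR,k - dC,k
    num = sum(j * e for j, e in zip(jacobians, residuals))
    den = sum(j * j for j in jacobians)
    return num / den

# Two displacement pairs whose residuals are consistent with delta = 2
delta = solve_parameter_error([(3.0, 1.0), (5.0, 1.0)], [1.0, 2.0])
# -> 2.0
```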
- the method flow shown in FIG. 3 also includes the following steps.
- the above processes S310 to S360 can realize the first calibration of the kinematic parameters after the robot is assembled and leaves the factory. After the calibration is completed, the absolute positioning accuracy of the robot meets the requirements and the robot starts to work online. Then, the absolute positioning accuracy is checked periodically relying on visual feedback information. When the accuracy deteriorates beyond the allowable range (for example, when the error of the robot's kinematic parameters is greater than the preset threshold), the online recalibration step is started.
- the online calibration system is assisted by visual feedback information to regularly detect the distance error between the end tool of the robot and the actual target after reaching the specified position. If the error exceeds the allowable range, repeat S320 to S360 to recalibrate the kinematic parameters.
- sequence numbers of the above processes do not mean the order of execution, and the execution order of each process should be determined by its functions and internal logic, and should not constitute any limitation on the implementation process of the embodiment of the present application.
- the robot arm is used as an example for illustration, and it should be understood that the specific form of the robot is not limited in this embodiment of the present application.
- the kinematic parameters of other types of robots can be calibrated based on the method provided by the embodiment of the present application.
- the methods and operations implemented by the device for calibrating the kinematic parameters of the robot may also be implemented by components of the device (eg, a processor).
- the method for calibrating the kinematic parameters of the robot described above based on FIGS. 2-3 is mainly introduced from the perspective of how the device for calibrating the kinematic parameters of the robot realizes calibration. It should be understood that, in order to realize the above functions, the device for calibrating the kinematic parameters of the robot includes corresponding hardware structures and/or software modules for performing various functions.
- the device for calibrating the kinematic parameters of the robot provided by the embodiment of the present application will be described in detail with reference to FIGS. 7-8 . It should be understood that the description of the device embodiment corresponds to the description of the method embodiment. Therefore, for content that is not described in detail, reference may be made to the above method embodiments, and part of the content will not be repeated for brevity.
- the embodiment of the present application can divide the functional modules of the device for calibrating the kinematic parameters of the robot according to the above method example, for example, each functional module can be divided corresponding to each function, or two or more functions can be integrated in one in a processing module.
- the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation. In the following, description will be made by taking the division of each functional module corresponding to each function as an example.
- Fig. 7 is a schematic block diagram of an apparatus 700 for calibrating kinematic parameters of a robot provided by an embodiment of the present application.
- the apparatus 700 includes an acquisition unit 710 and a processing unit 720 .
- the acquisition unit 710 may implement a corresponding acquisition function, and the processing unit 720 is configured to perform data processing.
- the acquisition unit 710 may be called a communication interface or a communication unit.
- part of the functions of the acquiring unit 710 may also be implemented by the processing unit 720 .
- for example, the calculation by which the acquisition unit 710 obtains the displacement pair including the actual displacement and the nominal displacement can be performed by the processing unit 720.
- the device 700 may further include a storage unit, which may be used to store instructions and/or data, and the processing unit 720 may read the instructions and/or data in the storage unit, so that the device implements the aforementioned method embodiments.
- the device 700 can be used to execute the actions performed by the device for calibrating the kinematic parameters of the robot in the above method embodiments.
- the device 700 can be the device for calibrating the kinematic parameters of the robot, or can be a component configured in the device for calibrating the kinematic parameters of the robot. The acquisition unit 710 is used to perform the operations related to acquiring displacement pairs performed by the device in the above method embodiments, and the processing unit 720 is used to perform the operations related to processing displacement pairs performed by the device in the above method embodiments.
- the acquiring unit 710 is configured to acquire a displacement pair, the displacement pair includes a first displacement and a second displacement, the first displacement is the actual movement displacement of the end of the robot from the first position to the second position, and the second displacement is the The nominal movement displacement of the end of the robot moving from the first position to the second position.
- the first position and the second position are the positions of two different points in the operating space of the robot, and the posture of the end of the robot at the first position is the same as that at the second position;
- the actual displacement is determined by the size of the calibration object in the operating space, the size of the first image, and the size of the second image; the first image is the image of the calibration object acquired by the actuator at the end of the robot when the end of the robot is at the first position;
- the second image is the image of the calibration object acquired by the actuator at the end of the robot when the end of the robot is at the second position;
- the nominal displacement is determined by the robot kinematics model, the first joint variable, and the second joint variable; the first joint variable is the joint variable of the robot when the end of the robot is at the first position;
- the second joint variable is the joint variable of the robot when the end of the robot is at the second position;
- the robot kinematics model is used to represent the relationship between the joint variables of the robot and the pose of the end of the robot;
- the processing unit 720 is configured to determine an error value according to the displacement pair, and the error value is used to calibrate the kinematic parameters of the robot.
- the processing unit 720 is further configured to determine a first command joint variable according to the robot kinematics model and the pose of the end of the robot at the first position.
- the processing unit 720 is further configured to determine a second command joint variable according to the kinematics model of the robot and the pose of the end of the robot at the second position.
- the processing unit 720 is further configured to determine an instruction according to the first instruction joint variable and the second instruction joint variable, and the instruction is used to control the end of the robot to move from the first position to the second position.
- the acquiring unit 710, configured to acquire displacement pairs, includes: the acquiring unit 710 being configured to acquire a plurality of displacement pairs, the plurality of displacement pairs including a first displacement pair and a second displacement pair. Before the acquisition unit 710 acquires the first displacement pair, the processing unit 720 is also used to control the robot so that the end of the robot is parallel to the first surface of the calibration object; before the acquiring unit 710 acquires the second displacement pair, the processing unit 720 is further used to control the robot so that the end of the robot is parallel to the second surface of the calibration object, wherein the first surface and the second surface are two different surfaces of the calibration object.
- the processing unit 720 is further configured to determine that the error of the kinematic parameters of the robot is greater than a preset threshold.
- when the end of the robot is at the first position, the acquiring unit 710 is further configured to acquire a first motor encoder value of the robot, and the first motor encoder value is used to calculate the first joint variable; when the end of the robot is at the second position, the acquisition unit 710 is also used to acquire a second motor encoder value of the robot, and the second motor encoder value is used to calculate the second joint variable.
- the device 700 can implement the steps or processes corresponding to the device for calibrating the kinematic parameters of the robot in the method embodiments according to the embodiments of the present application, and the device 700 can include units for performing the method performed by the device for calibrating the kinematic parameters of the robot in the method embodiments. Moreover, each unit in the apparatus 700 and the other operations and/or functions mentioned above are respectively intended to realize the corresponding procedures performed by the apparatus for calibrating the kinematic parameters of a robot in the method embodiments.
- the acquiring unit 710 can be used to execute the step of acquiring displacement pairs in the method, such as step S210; the processing unit 720 can be used to execute the processing steps in the method, such as steps S211 and S220.
- the processing unit 720 in the above embodiments may be implemented by at least one processor or processor-related circuits.
- the acquiring unit 710 may be implemented by a transceiver or a transceiver-related circuit.
- the storage unit can be realized by at least one memory.
- the embodiment of the present application also provides an apparatus 800 for calibrating kinematic parameters of a robot.
- the apparatus 800 includes a processor 810 and may further include one or more memories 820 .
- the processor 810 is coupled with the memory 820; the memory 820 is used to store computer programs or instructions and/or data, and the processor 810 is used to execute the computer programs or instructions and/or data stored in the memory 820, so that the methods in the above method embodiments are executed.
- the apparatus 800 includes one or more processors 810 .
- the memory 820 may be integrated with the processor 810, or set separately.
- the apparatus 800 may further include a transceiver 830 , and the transceiver 830 is used for receiving and/or sending signals.
- the processor 810 is configured to control the transceiver 830 to receive and/or send signals.
- the apparatus 800 is used to implement the operations performed by the apparatus for calibrating the kinematic parameters of the robot in the above method embodiments.
- An embodiment of the present application further provides a computer-readable storage medium, on which computer instructions for implementing the method performed by the device for calibrating the kinematic parameters of a robot in the above method embodiments are stored.
- the embodiment of the present application also provides a computer program product including instructions, which, when executed by a computer, enable the computer to implement the method performed by the device for calibrating the kinematic parameters of the robot in the above method embodiment.
- the embodiment of the present application also provides a system for calibrating the kinematic parameters of the robot, the system for calibrating the kinematic parameters of the robot includes the apparatus for calibrating the kinematic parameters of the robot in the above embodiments.
- the processors mentioned in the embodiments of the present application may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
- the memory mentioned in the embodiments of the present application may be a volatile memory and/or a nonvolatile memory.
- the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
- the volatile memory may be random access memory (RAM).
- RAM can be used as an external cache.
- RAM may include the following forms: static random access memory (SRAM), dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchlink dynamic random access memory (SLDRAM), and direct rambus random access memory (DR RAM).
- the processor when the processor is a general-purpose processor, DSP, ASIC, FPGA or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components, the memory (storage module) may be integrated in the processor. It should also be noted that the memories described herein are intended to include, but are not limited to, these and any other suitable types of memories.
- the disclosed devices and methods may be implemented in other ways.
- the device embodiments described above are illustrative only.
- the division of the units is only a logical function division, and there may be other division methods in actual implementation.
- several units or components may be combined or integrated into another system, or some features may be omitted, or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to implement the solutions provided in this application.
- each functional unit in each embodiment of the present application may be integrated into one unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
- the computer may be a personal computer, a server, or a network device.
- the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
- the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)), etc.
- the aforementioned available medium may include but is not limited to: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
- the term "and/or" in this application merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate the three cases of: A exists alone, A and B exist simultaneously, and B exists alone.
- the character "/" in this document generally indicates an "or" relationship between the associated objects. The term "at least one" in this application can mean "one" or "two or more"; for example, at least one of A, B, and C can mean the seven cases of: A exists alone, B exists alone, C exists alone, A and B exist simultaneously, A and C exist simultaneously, C and B exist simultaneously, and A, B, and C exist simultaneously.
Abstract
A method for calibrating kinematic parameters of a robot. The method comprises: acquiring a displacement pair, and determining, according to the displacement pair, an error value used for calibrating the kinematic parameters of the robot, the displacement pair comprising an actual movement displacement and a nominal movement displacement. The actual movement displacement is determined from the size of a calibration object and the sizes of images of the calibration object acquired by an actuator at the robot end; the nominal movement displacement is determined from a robot kinematics model and the joint variables of the robot, the posture of the robot end at the first position being the same as its posture at the second position. By determining the actual movement displacement of the robot end from the size of the calibration object and the sizes of the images acquired by the actuator at the robot end, calibration of the kinematic parameters of the robot can be achieved while reducing the calibration cost. Also provided are an apparatus for calibrating kinematic parameters of a robot, a computer-readable storage medium, and a system for calibrating kinematic parameters of a robot.
Description
This application claims priority to Chinese Patent Application No. 202111340403.7, entitled "Method and Apparatus for Calibrating Kinematic Parameters of a Robot", filed with the China National Intellectual Property Administration on November 12, 2021, the entire contents of which are incorporated herein by reference.
The embodiments of this application relate to the technical field of robot kinematic calibration, and more specifically, to a method and apparatus for calibrating kinematic parameters of a robot.
After a robot is manufactured and assembled, due to manufacturing and assembly errors, the actual values of its kinematic geometric parameters differ from the theoretical design values. When motion control is performed according to the theoretical geometric parameters, an error arises between the actual pose reached by the robot end and the commanded pose. Identifying the inaccurate geometric parameters through kinematic calibration and updating the parameters of the robot kinematics model is a feasible way to guarantee robot accuracy.
Current robot calibration methods rely on expensive measuring instruments, and the calibration cost is high. How to calibrate a robot while reducing the calibration cost has therefore become an urgent problem to be solved.
Summary
The embodiments of this application provide a method for robot calibration, which can calibrate the kinematic parameters of a robot while reducing the calibration cost.
According to a first aspect, a method for calibrating kinematic parameters of a robot is provided. The method includes: first, acquiring a displacement pair including a first displacement and a second displacement; and then determining, according to the displacement pair, an error value used for calibrating the kinematic parameters of the robot.
The first displacement is the actual movement displacement of the robot end from a first position to a second position, and the second displacement is the nominal movement displacement of the robot end from the first position to the second position. The first position and the second position are the positions of two different points in the operating space of the robot, and the posture of the robot end at the first position is the same as its posture at the second position.
The actual movement displacement is determined from the size of a calibration object in the operating space, the size of a first image, and the size of a second image. The first image is an image of the calibration object acquired by the actuator at the robot end when the robot end is at the first position. The second image is an image of the calibration object acquired by the actuator at the robot end when the robot end is at the second position.
The nominal movement displacement is determined from a robot kinematics model, a first joint variable, and a second joint variable. The first joint variable is the joint variable of the robot when the robot end is at the first position. The second joint variable is the joint variable of the robot when the robot end is at the second position. The robot kinematics model represents the relationship between the joint variables of the robot and the pose of the robot end.
With the method for calibrating kinematic parameters of a robot of this application, a displacement pair including the actual movement displacement and the nominal movement displacement of the robot end can be acquired, and an error value used for calibrating the kinematic parameters of the robot can be determined from the displacement pair. The nominal movement displacement can be determined from the robot kinematics model and the joint variables of the robot at the first and second positions, while the actual movement displacement can be determined from the size of the calibration object and the sizes of the images acquired by the actuator at the robot end, so the kinematic parameters of the robot can be calibrated without expensive instruments for measuring the actual movement displacement. Therefore, the kinematic parameters of the robot can be calibrated while reducing the calibration cost.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining the robot kinematics model according to factory parameters of the robot, the factory parameters including the amounts of translation and rotation between the joints of the robot. Since the factory parameters of a robot are relatively easy to obtain, this provides a simple scheme for establishing the robot kinematics model.
With reference to the first aspect, in some implementations of the first aspect, acquiring the displacement pair includes: acquiring a plurality of displacement pairs. Determining the error value according to the displacement pairs includes: constructing a system of error equations from the plurality of displacement pairs, each error equation in the system being constructed from the first displacement and the second displacement, the system being solved to obtain an error matrix including a plurality of the error values. The error values are thus obtained by constructing and solving a system of equations through mathematical calculation, and since a system of equations can be solved in many ways, the flexibility of the scheme is improved.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: when the robot end is at the first position, acquiring a first motor encoder value of the robot used to calculate the first joint variable; and when the robot end is at the second position, acquiring a second motor encoder value of the robot used to calculate the second joint variable. The joint variables of the robot with the robot end at different positions can be calculated from the acquired motor encoder values, and the motor encoder values can be obtained by referring to existing schemes, improving the forward compatibility of the method provided in this application.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining a first command joint variable according to the robot kinematics model and the pose of the robot end at the first position; determining a second command joint variable according to the robot kinematics model and the pose of the robot end at the second position; and determining, according to the first command joint variable and the second command joint variable, a command used to control the robot end to move from the first position to the second position. The command joint variables at different positions can be obtained by inverse solution of the established robot kinematics model, so that a command for controlling the robot can be determined from the command joint variables at the different positions, causing the robot end to move from the first position to the second position.
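As an illustration of the inverse solution mentioned above, the command joint variables for a target pose can be computed in closed form for a toy planar two-link arm (an assumed stand-in for the application's kinematics model; names are illustrative):

```python
import math

def ik_two_link(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar two-link arm
    (elbow-down branch): target position (x, y) -> joint angles (q1, q2)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))          # clamp for safety
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

# Reachable target for unit-length links
q1, q2 = ik_two_link(1.0, 1.0, 1.0, 1.0)
# -> q1 = 0, q2 = pi/2
```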
With reference to the first aspect, in some implementations of the first aspect, the movement of the robot end from the first position to the second position includes: the robot end moving from the first position to the second position along a first path, where the first position and the second position are on the first path, and the first path is the line connecting the center of the robot end and a point on the surface of the calibration object. A movement path can thus be provided so that the robot end moves along a prescribed path.
With reference to the first aspect, in some implementations of the first aspect, acquiring the displacement pair includes: acquiring a plurality of displacement pairs including a first displacement pair and a second displacement pair. Before acquiring the first displacement pair, the method further includes: controlling the robot so that the robot end is parallel to a first surface of the calibration object. Before acquiring the second displacement pair, the method further includes: controlling the robot so that the robot end is parallel to a second surface of the calibration object, where the first surface and the second surface are two different surfaces of the calibration object. The robot end can be moved in different directions so that the acquired displacement pairs include pairs in different directions, thereby calibrating the kinematic parameters of the robot more accurately.
With reference to the first aspect, in some implementations of the first aspect, before acquiring the displacement pair, the method further includes: determining that the error of the kinematic parameters of the robot is greater than a preset threshold. When the error of the kinematic parameters of the robot is greater than the preset threshold, the process of calibrating the kinematic parameters can be started, thereby guaranteeing as far as possible the accuracy of the kinematic parameters and improving the motion accuracy of the robot.
With reference to the first aspect, in some implementations of the first aspect, the actual movement displacement, the size of the calibration object in the operating space of the robot, the size of the first image, and the size of the second image satisfy the following relationship:

dR = |H·V′/h1 − H·V″/h2|

where dR is the actual movement displacement, H is the height of the calibration object, h1 and h2 are the heights of the first image and the second image respectively, V′ is the distance between the center point of the actuator at the robot end and the center point of the first image when the robot end is at the first position, and V″ is the distance between the center point of the actuator at the robot end and the center point of the second image when the robot end is at the second position. The actual movement displacement can be calculated by the above formula, and the parameters H, h1, h2, V′ and V″ are simple to obtain (for example, by direct measurement), which improves the simplicity of the scheme.
With reference to the first aspect, in some implementations of the first aspect, the nominal movement displacement, the robot kinematics model, the first joint variable, and the second joint variable satisfy the following relationship:

dC = |f(qi) − f(qj)|

where dC is the nominal movement displacement, f is the robot kinematics model, qi is the first joint variable, qj is the second joint variable, and |·| denotes the modulus operation.
According to a second aspect, an apparatus for calibrating kinematic parameters of a robot is provided, the apparatus being configured to perform the method provided in the first aspect. Specifically, the apparatus may include units and/or modules, such as a processing unit and an acquisition unit, for performing the method provided in the first aspect or any one of the above implementations of the first aspect.
In one implementation, the apparatus for calibrating kinematic parameters of a robot is a robot. In this case, the acquisition unit may be a transceiver or an input/output interface, and the processing unit may be at least one processor. Optionally, the transceiver may be a transceiver circuit. Optionally, the input/output interface may be an input/output circuit.
In another implementation, the apparatus for calibrating kinematic parameters of a robot is a chip, chip system, or circuit in a robot. In this case, the acquisition unit may be an input/output interface, interface circuit, output circuit, input circuit, pin, or related circuit on the chip, chip system, or circuit, and the processing unit may be at least one processor, processing circuit, logic circuit, or the like.
For the beneficial effects of the method shown in the second aspect and its possible designs, reference may be made to the beneficial effects of the first aspect and its possible designs.
According to a third aspect, an apparatus for calibrating kinematic parameters of a robot is provided. The apparatus includes at least one processor coupled with at least one memory. The at least one memory is used to store a computer program or instructions, and the at least one processor is used to call and run the computer program or instructions from the at least one memory, so that the apparatus for calibrating kinematic parameters of a robot performs the method in the first aspect or any possible implementation thereof.
In one implementation, the apparatus is a robot. In another implementation, the apparatus is a chip, chip system, or circuit in a robot.
For the beneficial effects of the method shown in the third aspect and its possible designs, reference may be made to the beneficial effects of the first aspect and its possible designs.
According to a fourth aspect, this application provides a processor configured to perform the methods provided in the above aspects.
For operations such as sending and acquiring/receiving involved in the processor, unless otherwise specified, or unless contradicted by their actual function or internal logic in the relevant description, they may be understood as output, receiving, and input operations of the processor, or as sending and receiving operations performed by a radio-frequency circuit and an antenna; this application is not limited in this respect.
According to a fifth aspect, a computer-readable storage medium is provided, storing program code for execution by a device, the program code including instructions for performing the method provided in the first aspect or any one of the above implementations of the first aspect.
According to a sixth aspect, a computer program product including instructions is provided, which, when run on a computer, causes the computer to perform the method provided in the first aspect or any one of the above implementations of the first aspect.
According to a seventh aspect, a chip is provided, including a processor and a communication interface. The processor reads, through the communication interface, instructions stored in a memory, and performs the method provided in the first aspect or any one of the above implementations of the first aspect.
Optionally, as an implementation, the chip further includes a memory storing a computer program or instructions, and the processor is configured to execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the processor is configured to perform the method provided in the first aspect or any one of the above implementations of the first aspect.
According to an eighth aspect, a system for calibrating kinematic parameters of a robot is provided. The system includes a robot and an actuator at the robot end. The robot is configured to acquire a displacement pair including a first displacement and a second displacement, and to determine, according to the displacement pairs, an error value used for calibrating the kinematic parameters of the robot. The actuator at the robot end is configured to: acquire a first image of a calibration object when the robot end is at a first position, and acquire a second image of the calibration object when the robot end is at a second position.
The first displacement is the actual movement displacement of the robot end from the first position to the second position, and the second displacement is the nominal movement displacement of the robot end from the first position to the second position. The first position and the second position are the positions of two different points in the operating space of the robot, and the posture of the robot end at the first position is the same as its posture at the second position.
The actual movement displacement is determined from the size of the calibration object in the operating space of the robot, the size of the first image, and the size of the second image. The nominal movement displacement is determined from a robot kinematics model, a first joint variable, and a second joint variable; the first joint variable is the joint variable of the robot when the robot end is at the first position; the second joint variable is the joint variable of the robot when the robot end is at the second position; and the robot kinematics model represents the relationship between the joint variables of the robot and the pose of the robot end.
With reference to the eighth aspect, in some implementations of the eighth aspect, the system further includes the calibration object.
- FIG. 1 is a schematic diagram of a scenario to which the embodiments of this application can be applied.
- FIG. 2 is a schematic flowchart of a method for calibrating kinematic parameters of a robot provided by an embodiment of this application.
- FIG. 3 is a schematic flowchart of another method for calibrating kinematic parameters of a robot provided by an embodiment of this application.
- FIG. 4 is a schematic diagram of camera movement provided by an embodiment of this application.
- (a) and (b) in FIG. 5 are schematic diagrams of another camera movement provided by an embodiment of this application.
- FIG. 6 is a schematic diagram of calculating the actual movement displacement provided by an embodiment of this application.
- FIG. 7 is a schematic block diagram of an apparatus 700 for calibrating kinematic parameters of a robot provided by an embodiment of this application.
- FIG. 8 is a schematic block diagram of an apparatus 800 for calibrating kinematic parameters of a robot provided by an embodiment of this application.
The technical solutions in the embodiments of this application will be described below with reference to the accompanying drawings.
The technical solutions of the embodiments of this application can be applied to the calibration of kinematic parameters of robots, for example, calibration of the kinematic parameters of a robot arm, of a smart vehicle, of an unmanned aerial vehicle, and the like.
As shown in FIG. 1, FIG. 1 is a schematic diagram of a scenario to which the embodiments of this application can be applied, including the following components: a robot body 110, an image acquisition module 120, and an object 130 of known size. The robot body 110 is the robot to be calibrated, including but not limited to a robot arm, a smart vehicle, an unmanned aerial vehicle, and other robots; the image acquisition module 120 is used to acquire images, including but not limited to a camera; the object 130 of known size is any object with definite dimensions, including but not limited to a cube, a cuboid, or a polyhedron of known size.
Exemplarily, the size of the object 130 of known size may be obtained by measurement. It should be noted that this application does not limit the way of determining the size of the object 130 of known size; for example, it may be obtained from the parameter specification of the object 130, or, for another example, it may be obtained by measurement.
FIG. 1 is only an exemplary illustration, shown to facilitate understanding of the scenarios to which the method for calibrating kinematic parameters of a robot provided by this application can be applied, and does not constitute any limitation on the protection scope of this application. The method provided by this application can also be applied to other scenarios; for example, the robot body may carry an image acquisition module, which may be understood as the above-mentioned robot body 110 and image acquisition module 120 forming a whole; for another example, the structure of the robot body may have other shapes. The scenarios to which this application can be applied are not described in further detail here.
With the development of the robotics field, researchers at home and abroad have proposed many methods for calibrating the kinematic parameters of robots in order to improve robot accuracy, mainly including the following calibration methods.
The first and most widely used is parameter calibration based on a position error model: the actual position of the robot end is measured with an external measuring instrument and compared with the theoretical position of the robot; a differential equation of position error is established from the actual and theoretical positions of multiple sets of points, and the error parameters are then solved. Commonly used measuring instruments such as laser trackers and coordinate measuring machines perform calibration based on this model; although laser trackers and coordinate measuring machines have high measurement accuracy, they are expensive, complicated to operate, and inefficient for calibration.
The second is parameter calibration based on a distance error model. This method establishes an error model using the fact that the distance between any two points of the robot in space is equal in the robot coordinate system and the measurement coordinate system, and then solves for the kinematic parameter errors. The instruments commonly used in this method, such as calibration devices based on draw-wire sensors, are expensive and complicated to operate.
The third is the use of sensors, such as an inertial sensor plus a position sensor, a laser sensor plus a phase-sensitive detector (PSD) calibration device, and image processing methods based on image sensors. However, the calibration devices used in this approach are complicated to operate, expensive, and not commercialized on a large scale.
In view of the need to improve on the above deficiencies of current robot calibration technology, this application provides a method for calibrating kinematic parameters of a robot, in which the actual movement displacement of the robot end is determined from the size information of an object in the operating space of the robot and the image size information acquired by the actuator at the robot end. The actual movement displacement and the nominal movement displacement of the robot end are used to construct error equations, and solving the error equations completes the calibration without the aid of expensive measuring instruments, reducing the calibration cost.
To facilitate understanding of the technical solutions of the embodiments of this application, some terms or concepts involved in the embodiments of this application are first briefly described.
1. Geometric error: an error in the geometry of an object, for example, the deviation of the actual shape, orientation, and position of the object from its ideal shape, orientation, and position.
2. Calibration of kinematic parameters: obtaining higher absolute positioning accuracy by identifying the geometric errors of the robot and compensating for them. Kinematic parameter calibration is an effective way to improve the absolute positioning accuracy of a robot.
3. Robot end: the edge of the robot, or the last joint of the robot, or the part of the robot to which the actuator at the robot end is connected.
4. Actuator at the robot end: any tool with a certain function connected at the end of the robot, including but not limited to: robot grippers, robot tool quick changers, robot collision sensors, robot rotary connectors, robot pressure tools, compliance devices, robot spray guns, robot deburring tools, robot arc-welding guns, robot spot-welding guns, and so on. The actuator at the robot end is usually considered a peripheral device of the robot, an accessory of the robot, a robot tool, or an end-of-arm tool. In this application, the actuator at the robot end may be an image acquisition module (for example, a camera); any form capable of image acquisition is possible, and the specific form is not limited.
5. Operating space of the robot: the set of spatial points that the actuator at the robot end can reach when moving, generally represented by projections onto the horizontal and vertical planes. The shape and size of the operating space are very important for a robot. When performing a task, the robot may fail to complete it because of a dead zone that the actuator at the robot end cannot reach. It should be noted that calling the set of spatial points reachable by the actuator at the robot end the operating space of the robot is only an example and does not constitute any limitation on the protection scope of this application; for example, the operating space of the robot may also be called the workspace of the robot, or, for another example, the space of the robot.
6. Pose of the robot: may refer to the position and posture of the robot end in space, or may also represent the position and posture of the other movable links of the robot in space. The position can be described by a position matrix of the form P = [px, py, pz]^T, and the posture can be represented by a posture matrix composed of the cosines of the pairwise angles between the three coordinate axes of the two coordinate systems:

R = [ cos(x,x′) cos(x,y′) cos(x,z′) ; cos(y,x′) cos(y,y′) cos(y,z′) ; cos(z,x′) cos(z,y′) cos(z,z′) ]
7. Robot kinematics: includes forward kinematics and inverse kinematics. Forward kinematics: given the joint variables of the robot, compute the position and posture of the robot end. Inverse kinematics: given the position and posture of the robot end, compute all joint variables of the robot for the corresponding position.
8. Robot kinematics equation: concerns the establishment of the robot kinematics model, which can be expressed as:

M = f(qi)

where M is the pose of the robot end and qi are the joint variables of the robot.
Exemplarily, when qi is known and the corresponding M is to be determined from the robot kinematics model M = f(qi) and the known qi, this is called the forward kinematics problem; solving the forward kinematics problem enables checking and calibrating the robot, computing the workspace, and so on. Exemplarily, when the pose M of the robot end is known and the corresponding joint variables qi are to be solved from the robot kinematics model M = f(qi) and the known M, this is called the inverse kinematics problem; solving the inverse kinematics problem enables path planning, robot control, and so on.
For ease of understanding, the process of establishing a robot kinematics equation is illustrated below by taking a three-degree-of-freedom planar articulated robot as an example.
Exemplarily, for a three-degree-of-freedom planar articulated robot whose links 1, 2, 3 have lengths l1, l2, l3 respectively, the process of establishing the kinematics equation of the robot includes:
(1) Establishing coordinate systems: the coordinate systems of the robot include the hand coordinate system, the base coordinate system, the link coordinate systems, and the absolute coordinate system.
Hand coordinate system: the coordinate system referenced to the robot hand, also called the robot pose coordinate system; it represents the position and posture of the robot hand in a specified coordinate system.
Base coordinate system: the coordinate system referenced to the robot base; it is the common reference coordinate system for the movable links and the hand of the robot.
Link coordinate system: the coordinate system referenced to a robot link; it is fixed on each movable link of the robot and moves with the link.
Absolute coordinate system: the coordinate system referenced to the ground of the work site; it is the common reference coordinate system for all components of the robot.
Hand coordinate system {h}; base coordinate system {0}; link coordinate systems {i} (i = 1, 2, ..., n); absolute coordinate system {B}.
Specifically, the coordinate systems may be established as: base coordinate system {0}; link coordinate systems {i}; the hand coordinate system {h} coinciding with the end coordinate system {n}.
(2) Determining the parameters: the joint axes are parallel to each other and the links lie in the same plane; the parameters are as shown in Table 1 below:
Table 1

| Link i | d_i | θ_i | l_i | α_i |
|--------|-----|-----|-----|-----|
| 1 | 0 | θ_1 | l_1 | 0 |
| 2 | 0 | θ_2 | l_2 | 0 |
| 3 | 0 | θ_3 | l_3 | 0 |
相邻杆件位姿矩阵是:
将相邻杆件位姿方程依次相乘,则有:
其中,cθ_123=cos(θ_1+θ_2+θ_3),sθ_123=sin(θ_1+θ_2+θ_3),cθ_12=cos(θ_1+θ_2),sθ_12=sin(θ_1+θ_2)。
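为便于理解,上述相邻杆件位姿方程连乘得到的末端位置,可以用一个极简的正向运动学示例来验证。以下Python代码仅为示意,杆长与关节角均为假设值:

```python
import math

def planar_fk(l, q):
    """三自由度平面关节机器人的正向运动学(示意):
    l 为三个杆长 [l1, l2, l3],q 为三个关节角 [θ1, θ2, θ3](弧度),
    返回末端位置 (x, y) 与末端姿态角 θ1+θ2+θ3。"""
    t1 = q[0]
    t12 = q[0] + q[1]
    t123 = q[0] + q[1] + q[2]
    x = l[0] * math.cos(t1) + l[1] * math.cos(t12) + l[2] * math.cos(t123)
    y = l[0] * math.sin(t1) + l[1] * math.sin(t12) + l[2] * math.sin(t123)
    return x, y, t123

# 假设的杆长与关节角
x, y, phi = planar_fk([1.0, 0.8, 0.5], [0.3, 0.2, 0.1])
```

当三个关节角均为0时,末端应位于(l_1+l_2+l_3, 0),可作为快速自检。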
连杆参数雅可比矩阵M_03为:
连杆参数雅可比矩阵M_01为:
连杆参数雅可比矩阵M_02为:
9、关节坐标系:该坐标系可以用来描述机器人每个独立关节的运动,例如,对于六轴串联型机械臂,关节类型均为转动关节。在关节坐标系下,将机器人末端移动到期望位置,可以依次驱动各关节运动,从而让机器人末端到达指定位置。
10、变换矩阵:为机器人的不同关节之间的坐标系之间的转换矩阵。
例如,机器人的关节#1对应的坐标系为坐标系#1,机器人的关节#2对应的坐标系为坐标系#2,坐标系#2能够通过坐标系#1和转换矩阵描述。
11、六自由度:物体在空间具有六个自由度,即沿x、y、z三个直角坐标轴方向的移动自由度和绕这三个坐标轴的转动自由度。
12、视觉伺服:该概念常见于机器人技术方面的研究,一般指的是,通过光学的装置和非接触的传感器自动地接收和处理一个真实物体的图像,通过图像反馈的信息,来让机器系统对机器做进一步控制或相应的自适应调整的行为。
13、名义位置:由带误差的运动学参数计算出的不精确的机器人末端位置。
14、多点透视成像(Perspective-n-Point,PnP)算法:该算法是求解三维(three dimensional,3D)到二维(two dimensional,2D)点对运动的方法。例如,在一幅图中,已知其中至少四个确定的点在3D空间下的相对坐标位置,即可以估计出相机相对于这些点的姿态,或者说估计出这些点在相机坐标系下的姿态。
上文中结合图1说明了本申请能够应用的场景,并介绍本申请涉及的一些概念,下面将结合附图,详细介绍本申请提供的用于标定机器人的运动学参数的方法。
应理解,下文示出的实施例并未对本申请实施例提供的方法的执行主体的具体结构特别限定,执行主体只要能够通过运行记录有本申请实施例的提供的方法的代码的程序即可。
图2是本申请实施例提供的用于标定机器人的运动学参数的方法的示意性流程图。该方法可以由用于标定机器人的运动学参数的装置执行,也可以由用于标定机器人的运动学参数的装置内部模块执行。该方法包括以下步骤。
S210,获取位移对。
所述位移对包括第一位移和第二位移,所述第一位移为机器人末端从第一位置移动至第二位置的实际移动位移,所述第二位移为机器人末端从所述第一位置移动至所述第二位置的名义移动位移,其中,所述第一位置和所述第二位置为机器人的操作空间中不同的两个点的位置,所述机器人末端在所述第一位置的姿态和在所述第二位置的姿态相同。
本申请实施例中,机器人可以是机械臂、智能车或无人机等机器设备。为了便于描述,本申请中以机器人为机械臂进行说明。在机器人为机械臂的情况下,上述的机器人末端可以是机器人最后一节关节。
应理解,本申请实施例中对于待标定的对象(机器人的类型、形状、功能等)不做限制,可以是任何需要标定的机器人。
示例性地,获取位移对包括:获取多个位移对,其中,位移对的具体个数可以通过如下方式确定。作为一种可能的实现方式,位移对的个数等于待标定的机器人的运动学参数的个数。例如,待标定的机器人的运动学参数包括两个连杆参数,位移对的个数为两个。作为另一种可能的实现方式,位移对的个数大于待标定的机器人的运动学参数的个数。例如,待标定的机器人的运动学参数包括两个连杆参数,位移对的个数可以大于两个。作为又一种可能的实现方式,如果待标定的机器人的运动学参数的个数是一个,或者可以分别多次标定机器人的多个运动学参数,上述的位移对可以是一个。例如,待标定的机器人的运动学参数包括两个连杆参数(如,连杆参数#1和连杆参数#2),可以先标定其中一个连杆参数(如,连杆参数#1),然后再标定另外一个连杆参数(如,连杆参数#2),在标定连杆参数#1的情况下上述的位移对可以是一个;同理,在标定连杆参数#2的情况下上述的位移对也可以是一个。还例如,待标定的机器人的运动学参数为一个连杆参数,位移对的个数可以是一个。
示例性地,获取多个位移对包括获取第一位移对和第二位移对。作为一种可能的实现方式,机器人末端可以沿着一个方向多次移动,例如,沿着第一方向第一次移动获取第一位移对,沿着第一方向第二次移动获取第二位移对。例如,在获取所述第一位移对之前,控制所述机器人使得所述机器人末端与标定物的第一面平行;在获取所述第二位移对之前,控制所述机器人使得所述机器人末端与所述标定物的第一面平行。作为另一种可能的实现方式,机器人末端可以沿着多个方向多次移动,例如,沿着第一方向第一次移动获取第一位移对,沿着第二方向第二次移动获取第二位移对。例如,在获取所述第一位移对之前,控制所述机器人使得所述机器人末端与标定物的第一面平行;在获取所述第二位移对之前,控制所述机器人使得所述机器人末端与所述标定物的第二面平行,其中,所述第一面和所述第二面为所述标定物不同的两个表面。
应理解,机器人末端可以沿着多个方向多次移动的情况下,可使机器人的运动范围增大,尽可能遍历不同的姿态,有助于提升标定精度。
为了便于理解,举例说明获取多个位移对可能的情况。
例如,获取3个位移对(如,位移对#1、位移对#2和位移对#3)。其中,位移对#1包括实际移动位移#1和名义移动位移#1,实际移动位移#1可以是机器人末端从第一位置#1移动至第二位置#1的实际移动位移(如,第一位置#1和第二位置#1为机器人的操作空间中不同的两个点的位置),名义移动位移#1可以是机器人末端从第一位置#1移动至第二位置#1的名义移动位移,该实际移动位移#1和名义移动位移#1之间存在误差,且机器人末端在第一位置#1和第二位置#1的姿态相同。
位移对#2包括实际移动位移#2和名义移动位移#2,实际移动位移#2可以是机器人末端从第一位置#2移动至第二位置#2的实际移动位移,名义移动位移#2可以是机器人末端从第一位置#2移动至第二位置#2的名义移动位移,且机器人末端在第一位置#2和第二位置#2的姿态相同。
位移对#3包括实际移动位移#3和名义移动位移#3,实际移动位移#3可以是机器人末端从第一位置#3移动至第二位置#3的实际移动位移,名义移动位移#3可以是机器人末端从第一位置#3移动至第二位置#3的名义移动位移,且机器人末端在第一位置#3和第二位置#3的姿态相同。
可选地,第一位置#2和第二位置#1可以是同一个位置,第一位置#3和第二位置#2可以是同一个位置,如,机器人末端从第一位置#1移动至第二位置#1,再从第二位置#1移动至第二位置#2,然后从第二位置#2移动至第二位置#3,且机器人末端在第一位置#1、第二位置#1、第二位置#2和第二位置#3的姿态相同。
示例性地,机器人末端在第一位置的姿态和在第二位置的姿态相同包括以下两种可能的方式。
作为一种可能的实现方式,机器人末端在移动前后保持姿态相同即可,机器人末端在移动过程中姿态可以发生变化,即保持机器人末端在移动前后的姿态矩阵不变。例如,机器人末端在第一位置的姿态可以用坐标系三个坐标轴两两夹角的余弦值组成的姿态矩阵R1来表示,机器人末端在第二位置的姿态可以用坐标系三个坐标轴两两夹角的余弦值组成的姿态矩阵R2来表示,其中,R1和R2相同。可选地,通过记录机器人末端在第一位置的姿态矩阵R1,在机器人末端发生移动,到达第二位置之后将机器人末端在第二位置的姿态调整为满足姿态矩阵R1的姿态。
作为另一种可能的实现方式,机器人末端在移动前后,以及移动过程中保持姿态相同,机器人末端在移动过程中姿态不变。
示例性地,机器人末端的移动过程可以由指令控制。根据机器人运动学模型和机器人末端在所述第一位置的位姿确定第一指令关节变量;根据机器人运动学模型和机器人末端在所述第二位置的位姿确定第二指令关节变量;根据所述第一指令关节变量和所述第二指令关节变量确定指令(或者称为控制指令),所述指令用于控制所述机器人末端从所述第一位置移动至所述第二位置。
为了便于理解,举例说明机器人末端的移动过程。
例如,机器人末端在所述第一位置的位姿为位姿#1,其中,位姿#1包括位置#1和姿态#1,位置#1为第一位置,而姿态#1可以根据未移动之前的姿态确定(如,姿态#1为未移动之前的,可以是出厂的姿态),根据机器人运动学模型和位姿#1逆向求解得到指令关节变量#1;机器人末端在所述第二位置的位姿为位姿#2,其中,位姿#2包括位置#2和姿态#2,位置#2为第二位置,而姿态#2与姿态#1相同,根据机器人运动学模型和位姿#2逆向求解得到指令关节变量#2,基于指令关节变量#1和指令关节变量#2即可获得指令,如,指令关节变量#2相比于指令关节变量#1平移正5cm、旋转正30度,则指令可以为平移正5cm、旋转正30度。
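上述"由位姿逆解出指令关节变量,再由两组指令关节变量之差得到指令"的过程,可以用一个两连杆平面机械臂的解析逆解来示意。以下代码中的杆长与目标位置均为假设值,逆解公式取其中一侧的解,仅为示意而非本申请方法的限定实现:

```python
import math

def two_link_ik(x, y, l1=1.0, l2=0.8):
    """两连杆平面机械臂的解析逆运动学(取其中一组解,仅为示意):
    给定末端位置 (x, y),返回关节角 (q1, q2)(弧度)。"""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))          # 数值裁剪,避免acos越界
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2), l1 + l2 * math.cos(q2))
    return q1, q2

# 第一位置、第二位置分别逆解得到指令关节变量#1、#2(位置为假设值)
q_cmd1 = two_link_ik(1.2, 0.5)
q_cmd2 = two_link_ik(1.0, 0.8)

# 指令即各关节需要转动的角度(指令关节变量#2与#1之差)
command = [b - a for a, b in zip(q_cmd1, q_cmd2)]
```

可用正向运动学回代检验逆解是否正确:将逆解得到的关节角代回末端位置公式,应与给定的目标位置一致。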
示例性地,机器人末端从所述第一位置移动至所述第二位置可以是以机器人末端上的一点为参考点,该点从所述第一位置移动至所述第二位置。
例如,机器人末端的中心点从所述第一位置移动至所述第二位置。示例性地,机器人末端可以沿一定的路径从第一位置移动至所述第二位置。例如,机器人末端沿第一路径从所述第一位置移动至所述第二位置,所述第一位置和所述第二位置在所述第一路径上,所述第一路径为所述机器人末端中心和标定物的表面的一点的连线。
可选地,机器人末端的移动可以带动机器人末端的执行器移动。例如,机器人末端从所述第一位置移动至所述第二位置,机器人末端的执行器从第一位置’移动至第二位置’。
应理解,第一位置’和第一位置不同,第二位置’和第二位置不同。
例如,在机器人末端的中心点从所述第一位置移动至所述第二位置,机器人末端的执行器的中心点从第一位置'移动至第二位置'的情况下,第一位置'和第一位置之间的距离可以理解为机器人末端的中心点和机器人末端的执行器的中心点之间的距离;第二位置'和第二位置之间的距离也可以理解为机器人末端的中心点和机器人末端的执行器的中心点之间的距离。
本申请实施例中,由于机器人末端在所述第一位置的姿态和在所述第二位置的姿态相同,则机器人末端从所述第一位置移动至所述第二位置的名义移动位移可以看做是机器人末端的执行器从第一位置’移动至第二位置’的名义移动位移;机器人末端从所述第一位置移动至所述第二位置的实际移动位移可以看做是机器人末端的执行器从第一位置’移动至第二位置’的实际移动位移。
作为一种可能的实现方式,本申请实施例中,可以是完成一次机器人的运动学参数的标定任务后,根据机器人在生产线上的使用频率及磨损程度定期进行在线运动学参数的标定。如,定期检测机器人末端在到达指定位置后与实际目标的距离误差,若该误差超过允许范围,进行运动学参数重标定。
在该实现方式下,带闭环反馈的在线标定系统有助于及时发现机器人的绝对定位精度是否发生劣化,当劣化到允许范围之外时,可重新触发机器人标定流程。实现机器人出厂免标定,及时消除机器人长时间运作产生的累积误差,且不需要停线进行标定,提升了工业产线的生产效率。
在该实现方式下,获取位移对之前,图2所示的方法流程还包括如下步骤。
S211,确定所述机器人的运动学参数的误差大于预设阈值。
其中,预设阈值可以是预先设定的某个值。作为另一种可能的实现方式,本申请实施例中,可以是在机器人出厂的时候进行机器人的运动学参数的标定。
具体地,所述实际移动位移由所述操作空间中的标定物的尺寸、第一图像的尺寸和第二图像的尺寸确定。其中,所述第一图像为所述机器人末端在所述第一位置的情况下,所述机器人末端的执行器获取的所述标定物的图像;所述第二图像为所述机器人末端在所述第二位置的情况下,所述机器人末端的执行器获取的所述标定物的图像。
本申请实施例中,上述实际移动位移可以基于机器人的操作空间中的标定物的尺寸和机器人末端的执行器获取的该标定物的图像的尺寸确定,无需借助昂贵的测量仪器测量实际移动位移的前提下实现实际移动位移的计算。因此,能够降低机器人的运动学参数的标定成本。
需要说明的是,本申请实施例中机器人的操作空间中的标定物可以是任意的尺寸已知的物体(如,图1中所示的已知尺寸的物体130)。也就是说不需要借助特定的标定板进行标定,可简单利用产线上已有的已知尺寸工件。
作为一种可能的实现方式,上述的标定物的尺寸可是在计算实际移动位移之前测量得到的,或者在计算实际移动位移之前从该标定物的参数说明书获取得到的,存储在机器人的存储器中,在计算实际移动位移时从存储器中读取使用。作为另一种可能的实现方式,上述的标定物的尺寸可是在计算实际移动位移时测量得到的,或者在计算实际移动位移时从该标定物的参数说明书获取得到的,在该实现方式下可以无需存储该标定物的尺寸,计算实际移动位移时获取即可。
需要说明的是,本申请实施例中可能需要获取多个位移对,而每个位移对中包括的实际移动位移均需要基于标定物的尺寸确定,为了简化多个位移对所包括的多个实际移动位移的确定流程,上述的标定物的尺寸的获取方式可以是上述的第一种可能的实现方式,即在计算实际移动位移之前获取并存储,需要用到该标定物的尺寸的时候,从存储器中读取使用即可。
由上述的实际移动位移确定所需的参数(如,标定物的尺寸、第一图像的尺寸和第二图像的尺寸)可知:本申请实施例中,机器人末端在第一位置的情况下,标定物位于机器人的操作空间中,机器人末端的执行器可以获取该标定物的第一图像;同理,机器人末端在第二位置的情况下,标定物也位于机器人的操作空间中,机器人末端的执行器可以获取该标定物的第二图像。
本申请实施例中,机器人末端在第一位置和在第二位置时,标定物均位于机器人的操作空间中,可以是以下两种情况。作为一种可能的实现方式,机器人末端在移动前后,标定物均位于机器人的操作空间中,但是在移动过程中标定物可以不位于机器人的操作空间中。作为另一种可能的实现方式,机器人末端在移动前后和移动过程中,标定物均位于机器人的操作空间中。
进一步地,本申请实施例中对于机器人末端的执行器获取的图像的尺寸的获得方式不做限定,可以是测量机器人末端的执行器获取的图像以获得该图像的尺寸。
可以理解,为了使得测量图像的尺寸更为精确,本申请实施例中上述的标定物可以是边缘规则的物体,例如,可以是长方体、多面体等已知尺寸的物体。
作为一种可能的实现方式,所述实际移动位移、所述机器人操作空间中的标定物的尺寸、所述第一图像的尺寸和所述第二图像的尺寸满足以下关系:
d_R=|H·V′/h_1-H·V″/h_2|
其中,d_R为所述实际移动位移,H为所述标定物的高度,h_1为所述第一图像的高度,h_2为所述第二图像的高度,V′为所述机器人末端在所述第一位置时,所述机器人末端的执行器的中心点和所述第一图像的中心点之间的距离,V″为所述机器人末端在所述第二位置时,所述机器人末端的执行器的中心点和所述第二图像的中心点之间的距离。
应理解,上述的实际移动位移、所述机器人操作空间中的标定物的尺寸、所述第一图像的尺寸和所述第二图像的尺寸满足的关系式,只是举例说明如何计算得到实际移动位移,对本申请的保护范围不构成任何的限定,还可以通过其他的数学计算方式基于标定物的尺寸、所述第一图像的尺寸和所述第二图像的尺寸计算得到实际移动位移。
例如,在已知标定物尺寸的情况下,首先使用PnP算法,计算出机器人末端在第一位置时,标定物在相机坐标系下的空间位置p1;待机器人末端移动之后,使用相同的PnP算法计算机械臂末端在第二位置时,标定物在相机坐标系下的空间位置p2;然后可等效计算出相机在空间中的实际移动位移:
d_R=|p1-p2|
其中,d_R表示实际移动位移,||表示取模运算。
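该计算可以用如下几行代码示意,其中p1、p2为假设的PnP解算结果:

```python
import numpy as np

# 假设PnP分别解出移动前后标定物在相机坐标系下的空间位置(单位:米,数值为假设)
p1 = np.array([0.10, -0.05, 0.60])
p2 = np.array([0.10, -0.05, 0.45])

# 实际移动位移 d_R = |p1 - p2|(取模运算即向量范数)
d_R = float(np.linalg.norm(p1 - p2))
```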
具体地,所述名义移动位移由机器人运动学模型、第一关节变量和第二关节变量确定。其中,所述第一关节变量为所述机器人末端在所述第一位置的情况下,所述机器人的关节变量;所述第二关节变量为所述机器人末端在所述第二位置的情况下,所述机器人的关节变量,所述机器人运动学模型用于表示机器人的关节变量和机器人末端的位姿之间的关系。
示例性地,在所述机器人末端位于所述第一位置的情况下,获取所述机器人的第一电机编码器值,所述第一电机编码器值用于计算所述第一关节变量。在所述机器人末端位于所述第二位置的情况下,获取所述机器人的第二电机编码器值,所述第二电机编码器值用于计算所述第二关节变量。例如,从机器人中读取到某个关节处的电机编码器值为encoder1,该编码器的初始值为encoder0,编码器的分辨率为bit1;另外,与该电机配套使用的用于提高电机力矩的谐波减速器的减速比为固定值ration1;则关节变量可以用以下公式计算获得:(encoder1-encoder0)/(ration1*bit1/2/pi)。
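上述由电机编码器值换算关节变量的公式可以直接翻译为代码。以下的减速比与分辨率均为假设值,仅为示意:

```python
import math

def joint_variable(encoder1, encoder0, ration1, bit1):
    """由电机编码器值计算关节变量(弧度):
    encoder1为当前编码器值,encoder0为编码器初始值,
    bit1为编码器分辨率(每圈计数),ration1为谐波减速器减速比。
    对应文中公式 (encoder1-encoder0)/(ration1*bit1/2/pi)。"""
    return (encoder1 - encoder0) / (ration1 * bit1 / 2 / math.pi)

# 假设减速比101、分辨率2^17:编码器走过半圈(电机转过π)时,关节转过π/101
q = joint_variable(2 ** 16, 0, 101, 2 ** 17)
```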
进一步地,在基于电机编码器值计算得到第一关节变量之后,可以基于第一关节变量和机器人运动学模型正向求解得到机器人末端的第一名义位置;同理,在基于电机编码器值计算得到第二关节变量之后,可以基于第二关节变量和机器人运动学模型正向求解得到机器人末端的第二名义位置。第一名义位置与第二名义位置之间的距离即可以理解为名义移动位移。
示例性地,该名义移动位移、该机器人运动学模型、该第一关节变量和该第二关节变量满足以下关系:
d_C=|f(q_i)-f(q_j)|
其中,d_C为该名义移动位移,f为该机器人运动学模型,q_i为该第一关节变量,q_j为该第二关节变量,||表示取模运算。
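结合一个假设的两连杆平面运动学模型f,名义移动位移的计算可以示意如下(杆长与关节变量均为假设值):

```python
import numpy as np

def fk(q, l=(1.0, 0.8)):
    """假设的两连杆平面机器人运动学模型f:由关节变量q计算末端名义位置。"""
    x = l[0] * np.cos(q[0]) + l[1] * np.cos(q[0] + q[1])
    y = l[0] * np.sin(q[0]) + l[1] * np.sin(q[0] + q[1])
    return np.array([x, y])

# 移动前后的关节变量 q_i、q_j(假设值,实际由电机编码器值换算得到)
q_i = np.array([0.0, 0.0])
q_j = np.array([np.pi / 2, 0.0])

# 名义移动位移 d_C = |f(q_i) - f(q_j)|
d_C = float(np.linalg.norm(fk(q_i) - fk(q_j)))
```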
应理解,本申请实施例中,机器人末端从所述第一位置移动至所述第二位置的名义移动位移的计算可以参考目前相关技术的介绍,这里不再赘述。
进一步地,上述的位移对确定之后,能够基于该位移对确定用于标定所述机器人的运动学参数的误差值,图2所示的方法流程还包括如下步骤。
S220,根据位移对确定误差值。
作为一种可能的实现方式,上述的位移对包括一个位移对。由该一个位移对包括的第一位移和第二位移构建误差方程,求解该误差方程得到一个误差值。
例如,待标定的机器人的运动学参数为位移,基于得到的误差值对位移进行补偿校正,提升机器人的绝对定位精度。
作为另一种可能的实现方式,上述的位移对包括多个位移对。根据该多个位移对构建误差方程组,该误差方程组中每个误差方程由该第一位移和该第二位移构建,该误差方程组用于求解得到误差矩阵,该误差矩阵包括多个误差值。
例如,待标定的机器人的运动学参数包括两个连杆参数,基于得到的误差矩阵中包括的两个误差值分别对两个连杆参数进行补偿校正,提升机器人的绝对定位精度。
示例性地,利用机器人末端的实际移动位移与名义移动位移构建位移误差模型。其中,位移误差模型的基本思想是:如果机器人的运动学参数足够准确,则机器人末端的实际移动位移与名义移动位移应该相等。但由于理论运动学参数与实际运动学参数存在误差,导致实际移动位移与名义移动位移并不相等,由此可以构建误差方程,误差方程具体描述如下:
P_C(i)→P_C(j)
P_R(i)→P_R(j)
P_C(j)=(x_j,y_j,z_j)
P_C(i)=(x_i,y_i,z_i)
P_R(j)=(x_j+dx_j,y_j+dy_j,z_j+dz_j)
P_R(i)=(x_i+dx_i,y_i+dy_i,z_i+dz_i)
其中,P_C(i)和P_C(j)分别为利用机器人运动学模型计算出的机器人末端的名义起点位置和名义终点位置;P_R(i)和P_R(j)指利用外部测量设备(如,摄像头)获取的机器人末端的实际起点位置和实际终点位置。x_i,y_i,z_i分别表示机器人末端在x、y和z方向的起始名义位置分量,dx_i,dy_i,dz_i分别为机器人末端实际起始位置与名义起始位置在x、y和z方向的误差分量;x_j,y_j,z_j分别表示机器人末端在x、y和z方向的终点名义位置分量,dx_j,dy_j,dz_j分别为机器人末端实际终点位置与名义终点位置在x、y和z方向的误差分量。
Δd(i,j)=d_R(i,j)-d_C(i,j)
d_R(i,j)=Δd(i,j)+d_C(i,j)
其中,d_C(i,j)和d_R(i,j)分别为名义移动位移长度和实际移动位移长度(即名义移动位移向量和实际移动位移向量的模值),Δd(i,j)为两位移长度的差值,获取该位移长度差后方可构建以下的误差方程:
(d_R(i,j))^2=(Δd(i,j)+d_C(i,j))^2=(x_j-x_i+dx_j-dx_i)^2+(y_j-y_i+dy_j-dy_i)^2+(z_j-z_i+dz_j-dz_i)^2
以上误差方程中,位移误差Δd为机器人末端实际移动位移d_R与机器人末端名义移动位移d_C之间的差值;对误差方程线性化后可得Δd=J·ΔX的形式,其中J为基于原始运动学参数计算获取的连杆参数雅可比矩阵,ΔX为连杆参数误差;因此,位移误差表达式中的未知量仅剩ΔX。
ΔX的求解可结合多组运动数据,构建误差方程组,利用数学求解方式(如,最小二乘法或迭代求解法)求取连杆参数误差矩阵ΔX,并对运动学参数进行补偿校正,提升机器人的绝对定位精度。
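以最小二乘法求解连杆参数误差矩阵ΔX的过程可以用numpy示意。以下的雅可比J与位移误差Δd均为随机构造的假设数据,仅说明求解步骤本身:

```python
import numpy as np

rng = np.random.default_rng(0)

# 假设:20组运动数据、4个待辨识的连杆参数误差,
# 每组数据满足线性化后的误差方程 Δd ≈ J @ ΔX
J = rng.standard_normal((20, 4))                 # 连杆参数雅可比(假设值)
dX_true = np.array([0.01, -0.02, 0.005, 0.0])    # "真实"参数误差,仅用于构造数据
dd = J @ dX_true                                 # 位移误差向量 Δd

# 最小二乘求解 ΔX,随后可用其对运动学参数进行补偿校正
dX, *_ = np.linalg.lstsq(J, dd, rcond=None)
```

由于构造的数据无噪声且方程数多于未知量,最小二乘解应精确恢复出参数误差;实际标定中数据含噪,多组数据可抑制噪声影响。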
由上述的标定过程可知,无需进行机器人末端执行器与机器人坐标系的坐标关系标定,标定效率高,也减少了坐标转换计算误差。
基于上述方法介绍,为了便于理解下面结合一个具体的例子,进一步说明本申请提供的用于标定机器人的运动学参数的方法的应用。
图3是本申请实施例提供的另一种用于标定机器人的运动学参数的方法的示意性流程图。包括以下步骤。
S310,确定机器人运动学模型。
其中,机器人运动学模型为该机器人的关节变量的函数,用于表示机器人的关节变量和机器人末端的位姿之间的关系。
例如,
M=f(q_i)
其中,M为机器人末端的位姿,q_i为机器人的各个关节变量,函数f表示机器人运动学模型。关节变量可以理解为描述机器人各关节运动状态的变量。
示例性地,机器人的关节变量包括机器人的关节的角度信息、机器人的关节的位置信息、机器人不同的关节之间的平移量、机器人不同的关节之间的旋转量或机器人的关节的高度信息等。
作为一种可能的实现方式,机器人运动学模型用于根据机器人的关节变量确定机器人末端的位姿。例如,该机器人运动学模型结合机器人各关节的关节变量q_i即可计算得到机器人末端的位姿M,也就是正向运动学解算过程。作为另一种可能的实现方式,机器人运动学模型用于根据机器人末端的位姿确定机器人的指令关节变量值。其中,指令关节变量为用于确定控制机器人的指令的关节变量。例如,该机器人运动学模型结合机器人末端的位姿M即可计算得到机器人各关节的指令关节变量q_i,也就是逆向运动学解算过程。
为了便于理解下面举例说明如何确定机器人运动学模型。
作为一种可能的实现方式,根据机器人生产厂家提供的原始参数(如,机器人各关节之间的平移和旋转量),建立机器人运动学模型。
例如,假设待标定的机器人为多关节机械臂,该多关节机械臂的总关节数为n,从电机往外依次为第n节,第n-1节…第1节,n为正整数。其中,第i-1节关节的关节坐标系,到第i节关节的关节坐标系的转换矩阵描述为^(i-1)T_i,该转换矩阵^(i-1)T_i由第i-1节关节和第i节关节轴间的相对平移和旋转关系确定,i为小于或者等于n的正整数。该多关节机械臂对应的运动学模型可以为T=^0T_1×^1T_2×^2T_3×…×^(i-1)T_i×…×^(n-1)T_n。
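上述转换矩阵依次连乘的运动学模型可以用齐次变换矩阵示意。以下相邻关节变换采用"绕z轴旋转加沿x轴平移"的简化形式,角度与平移量均为假设值,仅为示意:

```python
import numpy as np

def adjacent_T(theta, dx):
    """相邻关节间的齐次变换(简化形式):绕z轴旋转theta,并含沿x轴的平移dx。"""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, dx],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

# T = 0T1 × 1T2 × … × (n-1)Tn:依次连乘各相邻关节的变换矩阵
Ts = [adjacent_T(0.3, 0.0), adjacent_T(0.2, 1.0), adjacent_T(0.1, 0.8)]
T = np.eye(4)
for Ti in Ts:
    T = T @ Ti
```

连乘结果T的左上3×3子块为末端姿态(此处应等于绕z轴旋转0.6弧度),第四列为末端位置。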
作为另一种可能的实现方式,根据机器人生产厂家提供的原始参数确定机器人运动学模型。例如,机器人出厂参数中包括了机器人运动学模型。作为又一种可能的实现方式,通过其他设备获取机器人运动学模型。例如,某个具备建立机器人运动学模型功能的设备建立该机器人运动学模型,并将建立好的机器人运动学模型通过消息发送给标定机器人的设备。
需要说明的是,上述的确定机器人运动学模型的方法只是举例,对本申请的保护范围不构成任何的限定,其他建立用于确定机器人末端位姿的机器人运动学模型的方法也在本申请保护范围之内。本申请实施例中对于如何建立机器人运动学模型不做限制,可以参考目前相关技术中的介绍。
进一步地,在建立机器人运动学模型之后,能够根据该机器人运动学模型确定机器人末端移动前后的指令关节变量。图3所示的方法流程还包括如下步骤。
S320,根据机器人运动学模型确定指令关节变量。
可选地,关节变量包括关节角度值。
示例性地,为了标定机器人的运动学参数,可以使得机器人末端从第一位置移动至第二位置,且机器人末端在所述第一位置的姿态和在所述第二位置的姿态相同。其中,所述第一位置和所述第二位置为所述机器人的操作空间中不同的两个点的位置。示例性地,第一位置为机器人末端的当前位置。通过视觉伺服,控制机器人带动机器人末端与操作空间中的已知尺寸的物体的某一个表面(可以称为第一面)平行,然后指定机器人的操作空间中的一个点为目标点,该目标点的位置为上述的第二位置。
示例性地,第一位置为机器人末端的当前位置。确定上述的目标点之前无需使得机器人末端与操作空间中的已知尺寸的物体的某一个表面平行,任意选取操作空间中的一个点为目标点,该目标点的位置为上述的第二位置。
由上述可知,机器人运动学模型为该机器人关节变量的函数,通过上述步骤S310确定机器人运动学模型之后,能够基于机器人运动学模型确定第一位置对应的第一指令关节变量,以及第二位置对应的第二指令关节变量。
具体地,根据机器人运动学模型逆向解算出第一位置对应的第一指令关节变量。
例如,对于
M=f(q_i)
将第一位置的机器人末端的位姿参数M1作为运动学模型的输入,输出该第一位置对应的第一指令关节变量q_i1;将第二位置的机器人末端的位姿参数M2作为运动学模型的输入,输出该第二位置对应的第二指令关节变量q_i2。
进一步地,根据第一指令关节变量和所述第二指令关节变量确定指令,图3所示的方法流程还包括如下步骤。
S330,根据指令关节变量确定指令。
示例性地,第一指令关节变量和所述第二指令关节变量之间的关系可以确定指令。例如,指令可以是控制各个关节移动和/或旋转以使得各个关节的角度值由第一指令关节变量更新为第二指令关节变量的指令。
该指令用于控制机器人带动机器人末端在机器人的操作空间中移动一段距离,移动到上述的第二位置,移动前后确保机器人末端的姿态始终符合一定的约束条件(如,移动前后机器人末端姿态不变),且操作空间中放置的物体在机器人移动前后,均在机器人末端的执行器(如,图像采集模块)视野范围内。
示例性地,在机器人末端与已知尺寸的物体的第一面平行的情况下,该机器人末端在机器人的操作空间中移动一段距离包括:机器人末端沿特征点与机器人末端中心连线移动一段距离,并确保整个运动过程中机器人末端姿态不变。其中,特征点为已知尺寸的物体上的任意一点(如,上述的第一面上任意一点)。
应理解,本申请实施例中,机器人末端移动带动机器人末端的执行器移动。
为了便于描述,下文中以机器人末端的图像采集模块为摄像头为例进行说明。
为了便于理解,结合图4说明摄像头在机器人的操作空间中移动一段距离。图4是本申请实施例提供的一种摄像头移动的示意图。
从图4中可以看出,摄像头在指令的控制下由初始位置#1移动到目标位置#1,该初始位置#1和目标位置#1在摄像头中心和第一面上的特征点的连线上。在摄像头移动前后,机器人末端的姿态未发生变化。
需要说明的是,图4只是示例性地示出摄像头单次由初始位置#1移动到目标位置#1的过程,本申请实施例中摄像头可以发生多次移动,例如,摄像头从初始位置#1移动到目标位置#1之后,可以重新指定机器人的操作空间中的一个点为目标点,该目标点的位置为目标位置#2,而当前摄像头所在的目标位置#1可以作为初始位置#2。
具体地,目标位置#2对应的指令关节变量与当前初始位置#2对应的指令关节变量之间的关系可以确定另一个指令,该另一个指令用于控制摄像头从初始位置#2移动到目标位置#2。
示例性地,摄像头从初始位置#2移动到目标位置#2可以是:摄像头沿特征点与摄像头中心连线移动一段距离,并确保运动前后机器人末端姿态不变(中间过程,机器人末端姿态可以发生改变)。其中,特征点为已知尺寸的物体上的任意一点(如,第二面上任意一点,第二面与第一面不同)。
另外,需要说明的是,图4只是示例性地示出摄像头的移动方式,对本申请的保护范围不构成任何的限定。例如,摄像头移动的路径可以不是沿特征点与摄像头中心连线移动。还例如,摄像头可以向不同的方向进行多次移动。如图5所示,图5中的(a)和(b)是本申请实施例提供的另一种摄像头移动的示意图。
从图5中的(a)可以看出,摄像头在指令的控制下由初始位置#1移动到目标位置#1,该初始位置#1和目标位置#1在摄像头中心和第一面上的特征点的连线上。从图5中的(b)可以看出,摄像头在指令的控制下由初始位置#2移动到目标位置#2,该初始位置#2和目标位置#2在摄像头中心和第二面上的特征点的连线上。在摄像头移动前后,机器人末端的姿态未发生变化。
第二面和第一面为已知尺寸的物体的不同的表面,初始位置#2可以是第一次移动后的目标位置#1。
在机器人的运动学参数标定过程中,使用多面体结构件可使机器人的运动范围增大,尽可能遍历不同的构型,有助于提升标定精度。此外,多面体结构件制作成本低,适用性强,易于推广。
本申请实施例对于摄像头移动的路径不做限定,在初始位置和目标位置摄像头能够采集到已知尺寸的物体的图像即可。
机器人在指令的控制下带动摄像头移动一段距离之后,能够通过已知尺寸的物体的实际尺寸和摄像头采集的图像的成像尺寸确定机器人末端移动的实际移动位移(或者称为实际移动距离)。
图3所示的方法流程还包括如下步骤。
S340,确定机器人末端移动的实际移动位移。
摄像头移动完毕后,结合物体的实际尺寸信息及视觉测量值计算摄像头的空间移动位移,由于移动前后机器人末端保持姿态不变,该位移也就是机器人末端的实际移动位移。
示例性地,机器人移动完毕后,可按照图6所示的方法计算实际移动位移,图6是本申请实施例提供的一种计算实际移动位移的示意图。从图6中可以看出,可根据物体实际尺寸信息及摄像头成像平面上的成像尺寸信息推导摄像头在空间中的移动位移。
通过物体实际尺寸与空间几何关系推导摄像头在空间中的移动位移:
其中,h_1为摄像头移动前在初始位置(如,上述的初始位置#1)采集的图像中,物体的成像高度;h_2为摄像头移动后在目标位置(如,上述的目标位置#1)采集的图像中,物体的成像高度;V为摄像头像距(如,该摄像头的出厂参数);H为工件标定物实际高度;U为工件标定物距摄像头的物距;C′为摄像头移动前成像中心点与成像平面中心点的距离,V′为摄像头移动前成像中心点与镜头中心点的距离;C″为摄像头移动后成像中心点与成像平面中心点的距离,V″为摄像头移动后成像中心点与镜头中心点的距离;最终可计算出摄像头在空间中移动的距离d_R。由于在机器人运动前后机器人末端姿态不变,因此距离d_R可等同于机器人末端的实际移动位移。
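按小孔成像的放大关系,物距可由U=H·V/h估算,移动前后物距之差即摄像头沿光轴的移动距离。以下代码仅为对该几何关系的示意,所有数值均为假设值,并忽略了C′、C″等中心点偏移:

```python
def camera_displacement(H, h1, V1, h2, V2):
    """小孔成像近似下,由标定物实际高度H、移动前后的成像高度h1/h2
    与像距V1/V2,估算摄像头沿光轴的移动距离 |H*V1/h1 - H*V2/h2|。"""
    U1 = H * V1 / h1   # 移动前的物距
    U2 = H * V2 / h2   # 移动后的物距
    return abs(U1 - U2)

# 假设:标定物高0.2m,像距固定为8mm,移动前后成像高度分别为2mm和4mm
d_R = camera_displacement(0.2, 0.002, 0.008, 0.004, 0.008)
```

成像高度从2mm增大到4mm说明摄像头靠近了标定物,物距由0.8m缩短为0.4m,对应0.4m的实际移动位移。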
需要说明的是,获取摄像头实际移动位移的方式不局限于图6所示的方法。
进一步地,可以确定机器人末端移动的名义移动位移。图3所示的方法流程还包括如下步骤。
S350,确定机器人末端移动的名义移动位移。
本申请实施例中对于机器人末端移动的名义移动位移的计算方式不做限定,可以参考目前相关技术中的描述,包括但不限于:根据机器人末端移动前后的机器人的关节变量,以及机器人运动学模型确定。例如,根据机器人运动学模型及机器人末端移动前后各关节的角度信息计算机器人末端在移动前后的名义位置,并计算得两位置间的名义移动位移。
在确定了机器人末端移动的实际移动位移和名义移动位移之后,能够基于该实际移动位移和名义移动位移构建误差方程,求解误差方程。图3所示的方法流程还包括如下步骤。
S360,根据机器人末端移动的实际移动位移和名义移动位移构建误差方程,并求解。
参考上述的S220的描述,这里不再赘述。
上述流程S310至S360可实现机器人组装完成出厂后的首次运动学参数的标定。标定完成后,机器人的绝对定位精度达到要求开始上线工作,后续依赖视觉反馈信息定期进行绝对定位精度检测,当精度劣化至允许范围外时(如,机器人的运动学参数的误差大于预设阈值),启动在线重标定步骤。
例如,在线标定系统辅助以视觉反馈信息,定期检测机器人末端工具在到达指定位置后与实际目标的距离误差,若该误差超过允许范围,则重复S320至S360,进行运动学参数重标定。
应理解,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
还应理解,在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
还应理解,在上述一些实施例中,主要以机械臂为例进行了示例性说明,应理解,对于机器人的具体形式本申请实施例不作限定。例如,可以基于本申请实施例提供的方法标定其他类型的机器人的运动学参数。
可以理解的是,上述各个方法实施例中,由标定机器人的运动学参数的装置(如,机器人)实现的方法和操作,也可以由装置的部件(如,处理器)实现。
上述基于图2-3介绍的用于标定机器人的运动学参数的方法主要从用于标定机器人的运动学参数的装置如何实现标定的角度进行了介绍。应理解,用于标定机器人的运动学参数的装置,为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。
本领域技术人员应该可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
以下,结合图7-8详细说明本申请实施例提供的用于标定机器人的运动学参数的装置。应理解,装置实施例的描述与方法实施例的描述相互对应。因此,未详细描述的内容可以参见上文方法实施例,为了简洁,部分内容不再赘述。
本申请实施例可以根据上述方法示例对用于标定机器人的运动学参数的装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。下面以采用对应各个功能划分各个功能模块为例进行说明。
图7是本申请实施例提供的用于标定机器人的运动学参数的装置700的示意性框图。该装置700包括获取单元710和处理单元720。获取单元710可以实现相应的获取功能,处理单元720用于进行数据处理。获取单元710可以称为通信接口或通信单元。
应理解,获取单元710的部分功能也可以由处理单元720实现。例如,获取单元710获取的位移对所包括的实际移动位移和名义移动位移的计算功能可以由处理单元720实现。
可选地,该装置700还可以包括存储单元,该存储单元可以用于存储指令和/或数据,处理单元720可以读取存储单元中的指令和/或数据,以使得装置实现前述方法实施例。
该装置700可以用于执行上文方法实施例中用于标定机器人的运动学参数的装置所执行的动作,这时,该装置700可以为用于标定机器人的运动学参数的装置或者可配置于用于标定机器人的运动学参数的装置的部件,获取单元710用于执行上文方法实施例中用于标定机器人的运动学参数的装置的获取位移对相关的操作,处理单元720用于执行上文方法实施例中用于标定机器人的运动学参数的装置处理位移对相关的操作。
获取单元710,用于获取位移对,该位移对包括第一位移和第二位移,该第一位移为该机器人末端从第一位置移动至第二位置的实际移动位移,该第二位移为该机器人末端从该第一位置移动至该第二位置的名义移动位移。
其中,该第一位置和该第二位置为该机器人的操作空间中不同的两个点的位置,该机器人末端在该第一位置的姿态和在该第二位置的姿态相同;该实际移动位移由该操作空间中的标定物的尺寸、第一图像的尺寸和第二图像的尺寸确定,该第一图像为该机器人末端在该第一位置的情况下,该机器人末端的执行器获取的该标定物的图像,该第二图像为该机器人末端在该第二位置的情况下,该机器人末端的执行器获取的该标定物的图像;该名义移动位移 由机器人运动学模型、第一关节变量和第二关节变量确定,该第一关节变量为该机器人末端在该第一位置的情况下,该机器人的关节变量;该第二关节变量为该机器人末端在该第二位置的情况下,该机器人的关节变量;该机器人运动学模型用于表示机器人的关节变量和机器人末端的位姿之间的关系。
处理单元720,用于根据该位移对确定误差值,该误差值用于标定该机器人的运动学参数。可选地,该处理单元720,还用于根据该机器人运动学模型和该机器人末端在该第一位置的位姿确定第一指令关节变量。该处理单元720,还用于根据该机器人运动学模型和该机器人末端在该第二位置的位姿确定第二指令关节变量。该处理单元720,还用于根据该第一指令关节变量和该第二指令关节变量确定指令,该指令用于控制该机器人末端从该第一位置移动至该第二位置。
可选地,该获取单元710,用于获取位移对,包括:该获取单元710,用于获取多个位移对,该多个位移对包括第一位移对和第二位移对;在该获取单元710获取该第一位移对之前,该处理单元720,还用于控制该机器人使得该机器人末端与该标定物的第一面平行;在该获取单元710获取该第二位移对之前,该处理单元720,还用于控制该机器人使得该机器人末端与该标定物的第二面平行,其中,该第一面和该第二面为该标定物不同的两个表面。
可选地,在该获取单元710获取该位移对之前,该处理单元720,还用于确定该机器人的运动学参数的误差大于预设阈值。
可选地,在该机器人末端位于该第一位置的情况下,该获取单元710,还用于获取该机器人的第一电机编码器值,该第一电机编码器值用于计算该第一关节变量;在该机器人末端位于该第二位置的情况下,该获取单元710,还用于获取该机器人的第二电机编码器值,该第二电机编码器值用于计算该第二关节变量。
该装置700可实现对应于根据本申请实施例的方法实施例中的用于标定机器人的运动学参数的装置执行的步骤或者流程,该装置700可以包括用于执行方法实施例中的用于标定机器人的运动学参数的装置执行的方法的单元。并且,该装置700中的各单元和上述其他操作和/或功能分别为了实现方法实施例中的用于标定机器人的运动学参数的装置中的方法实施例的相应流程。
其中,当该装置700用于执行图2中的方法时,获取单元710可用于执行方法中的获取位移对的步骤,如步骤S210;处理单元720可用于执行方法中的处理步骤,如步骤S211和S220。
应理解,各单元执行上述相应步骤的具体过程在上述方法实施例中已经详细说明,为了简洁,在此不再赘述。
上文实施例中的处理单元720可以由至少一个处理器或处理器相关电路实现。获取单元710可以由收发器或收发器相关电路实现。存储单元可以通过至少一个存储器实现。
如图8所示,本申请实施例还提供一种用于标定机器人的运动学参数的装置800。该装置800包括处理器810,还可以包括一个或多个存储器820。处理器810与存储器820耦合,存储器820用于存储计算机程序或指令和/或数据,处理器810用于执行存储器820存储的计算机程序或指令和/或数据,使得上文方法实施例中的方法被执行。可选地,该装置800包括的处理器810为一个或多个。
可选地,该存储器820可以与该处理器810集成在一起,或者分离设置。
可选地,如图8所示,该装置800还可以包括收发器830,收发器830用于信号的接收和/或发送。例如,处理器810用于控制收发器830进行信号的接收和/或发送。
作为一种方案,该装置800用于实现上文方法实施例中由用于标定机器人的运动学参数的装置执行的操作。
本申请实施例还提供一种计算机可读存储介质,其上存储有用于实现上述方法实施例中由用于标定机器人的运动学参数的装置执行的方法的计算机指令。
本申请实施例还提供一种包含指令的计算机程序产品,该指令被计算机执行时使得该计算机实现上述方法实施例中由用于标定机器人的运动学参数的装置执行的方法。
本申请实施例还提供一种用于标定机器人的运动学参数的系统,该用于标定机器人的运动学参数的系统包括上文实施例中的用于标定机器人的运动学参数的装置。
上述提供的任一种装置中相关内容的解释及有益效果均可参考上文提供的对应的方法实施例,此处不再赘述。
应理解,本申请实施例中提及的处理器可以是中央处理单元(central processing unit,CPU),还可以是其他通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。
还应理解,本申请实施例中提及的存储器可以是易失性存储器和/或非易失性存储器。其中,非易失性存储器可以是只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)或闪存。易失性存储器可以是随机存取存储器(random access memory,RAM)。例如,RAM可以用作外部高速缓存。作为示例而非限定,RAM可以包括如下多种形式:静态随机存取存储器(static RAM,SRAM)、动态随机存取存储器(dynamic RAM,DRAM)、同步动态随机存取存储器(synchronous DRAM,SDRAM)、双倍数据速率同步动态随机存取存储器(double data rate SDRAM,DDR SDRAM)、增强型同步动态随机存取存储器(enhanced SDRAM,ESDRAM)、同步连接动态随机存取存储器(synchlink DRAM,SLDRAM)和直接内存总线随机存取存储器(direct rambus RAM,DR RAM)。
需要说明的是,当处理器为通用处理器、DSP、ASIC、FPGA或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件时,存储器(存储模块)可以集成在处理器中。还需要说明的是,本文描述的存储器旨在包括但不限于这些和任意其它适合类型的存储器。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及步骤,能够以电子硬件、或计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用使用不同方法来实现所描述的功能,这种实现不应认为超出本申请的保护范围。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅是示意性的。例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。此外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元实现本申请提供的方案。
另外,在本申请各个实施例中的各功能单元可以集成在一个单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。例如,所述计算机可以是个人计算机,服务器,或者网络设备等。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。例如,前述的可用介质可以包括但不限于:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
另外,本申请中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。另外,本文中字符“/”,一般表示前后关联对象是一种“或”的关系;本申请中术语“至少一个”,可以表示“一个”和“两个或两个以上”,例如,A、B和C中至少一个,可以表示:单独存在A,单独存在B,单独存在C、同时存在A和B,同时存在A和C,同时存在C和B,同时存在A和B和C,这七种情况。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
Claims (19)
- 一种用于标定机器人的运动学参数的方法,其特征在于,包括:获取位移对,所述位移对包括第一位移和第二位移,所述第一位移为机器人末端从第一位置移动至第二位置的实际移动位移,所述第二位移为所述机器人末端从所述第一位置移动至所述第二位置的名义移动位移;根据所述位移对确定误差值,所述误差值用于标定所述机器人的运动学参数;其中,所述第一位置和所述第二位置为所述机器人的操作空间中不同的两个点的位置,所述机器人末端在所述第一位置的姿态和在所述第二位置的姿态相同;所述实际移动位移由所述操作空间中的标定物的尺寸、第一图像的尺寸和第二图像的尺寸确定,所述第一图像为所述机器人末端在所述第一位置的情况下,所述机器人末端的执行器获取的所述标定物的图像,所述第二图像为所述机器人末端在所述第二位置的情况下,所述机器人末端的执行器获取的所述标定物的图像;所述名义移动位移由机器人运动学模型、第一关节变量和第二关节变量确定,所述第一关节变量为所述机器人末端在所述第一位置的情况下,所述机器人的关节变量;所述第二关节变量为所述机器人末端在所述第二位置的情况下,所述机器人的关节变量;所述机器人运动学模型用于表示机器人的关节变量和机器人末端的位姿之间的关系。
- 根据权利要求1所述的方法,其特征在于,所述方法还包括:根据所述机器人运动学模型和所述机器人末端在所述第一位置的位姿确定第一指令关节变量;根据所述机器人运动学模型和所述机器人末端在所述第二位置的位姿确定第二指令关节变量;根据所述第一指令关节变量和所述第二指令关节变量确定指令,所述指令用于控制所述机器人末端从所述第一位置移动至所述第二位置。
- 根据权利要求2所述的方法,其特征在于,所述机器人末端从所述第一位置移动至所述第二位置,包括:所述机器人末端沿第一路径从所述第一位置移动至所述第二位置,其中,所述第一位置和所述第二位置在所述第一路径上,所述第一路径为所述机器人末端中心和所述标定物的表面的一点的连线。
- 根据权利要求1至3中任一项所述的方法,其特征在于,所述获取位移对,包括:获取多个位移对,其中,多个位移对包括第一位移对和第二位移对;在获取所述第一位移对之前,所述方法还包括:控制所述机器人使得所述机器人末端与所述标定物的第一面平行;在获取所述第二位移对之前,所述方法还包括:控制所述机器人使得所述机器人末端与所述标定物的第二面平行,其中,所述第一面和所述第二面为所述标定物不同的两个表面。
- 根据权利要求1至4中任一项所述的方法,其特征在于,在获取所述位移对之前,所述方法还包括:确定所述机器人的运动学参数的误差大于预设阈值。
- 根据权利要求1至6中任一项所述的方法,其特征在于,所述方法还包括:在所述机器人末端位于所述第一位置的情况下,获取所述机器人的第一电机编码器值,所述第一电机编码器值用于计算所述第一关节变量;在所述机器人末端位于所述第二位置的情况下,获取所述机器人的第二电机编码器值,所述第二电机编码器值用于计算所述第二关节变量。
- 根据权利要求1至7中任一项所述的方法,其特征在于,所述机器人包括:机械臂、无人机或智能车。
- 一种用于标定机器人的运动学参数的装置,其特征在于,包括:获取单元,用于获取位移对,所述位移对包括第一位移和第二位移,所述第一位移为所述机器人末端从第一位置移动至第二位置的实际移动位移,所述第二位移为所述机器人末端从所述第一位置移动至所述第二位置的名义移动位移;处理单元,用于根据所述位移对确定误差值,所述误差值用于标定所述机器人的运动学参数;其中,所述第一位置和所述第二位置为所述机器人的操作空间中不同的两个点的位置,所述机器人末端在所述第一位置的姿态和在所述第二位置的姿态相同;所述实际移动位移由所述操作空间中的标定物的尺寸、第一图像的尺寸和第二图像的尺寸确定,所述第一图像为所述机器人末端在所述第一位置的情况下,所述机器人末端的执行器获取的所述标定物的图像,所述第二图像为所述机器人末端在所述第二位置的情况下,所述机器人末端的执行器获取的所述标定物的图像;所述名义移动位移由机器人运动学模型、第一关节变量和第二关节变量确定,所述第一关节变量为所述机器人末端在所述第一位置的情况下,所述机器人的关节变量;所述第二关节变量为所述机器人末端在所述第二位置的情况下,所述机器人的关节变量;所述机器人运动学模型用于表示机器人的关节变量和机器人末端的位姿之间的关系。
- 根据权利要求9所述的装置,其特征在于,所述处理单元,还用于根据所述机器人运动学模型和所述机器人末端在所述第一位置的位姿确定第一指令关节变量;所述处理单元,还用于根据所述机器人运动学模型和所述机器人末端在所述第二位置的位姿确定第二指令关节变量;所述处理单元,还用于根据所述第一指令关节变量和所述第二指令关节变量确定指令,所述指令用于控制所述机器人末端从所述第一位置移动至所述第二位置。
- 根据权利要求10所述的装置,其特征在于,所述机器人末端从所述第一位置移动至所述第二位置,包括:所述机器人末端沿第一路径从所述第一位置移动至所述第二位置,其中,所述第一位置和所述第二位置在所述第一路径上,所述第一路径为所述机器人末端中心和所述标定物的表面的一点的连线。
- 根据权利要求9至11中任一项所述的装置,其特征在于,所述获取单元,用于获取位移对,包括:所述获取单元,用于获取多个位移对,所述多个位移对包括第一位移对和第二位移对;在所述获取单元获取所述第一位移对之前,所述处理单元,还用于控制所述机器人使得所述机器人末端与所述标定物的第一面平行;在所述获取单元获取所述第二位移对之前,所述处理单元,还用于控制所述机器人使得所述机器人末端与所述标定物的第二面平行,其中,所述第一面和所述第二面为所述标定物不同的两个表面。
- 根据权利要求9至12中任一项所述的装置,其特征在于,在所述获取单元获取所述位移对之前,所述处理单元,还用于确定所述机器人的运动学参数的误差大于预设阈值。
- 根据权利要求9至14中任一项所述的装置,其特征在于,在所述机器人末端位于所述第一位置的情况下,所述获取单元,还用于获取所述机器人的第一电机编码器值,所述第一电机编码器值用于计算所述第一关节变量;在所述机器人末端位于所述第二位置的情况下,所述获取单元,还用于获取所述机器人的第二电机编码器值,所述第二电机编码器值用于计算所述第二关节变量。
- 一种用于标定机器人的运动学参数的装置,其特征在于,包括:存储器,用于存储计算机程序;处理器,用于执行所述存储器中存储的计算机程序,以使得所述用于标定机器人的运动学参数的装置执行权利要求1至8中任一项所述的方法。
- 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机指令,当所述计算机指令在计算机上运行时,如权利要求1至8中任一项所述的方法被执行。
- 一种用于标定机器人的运动学参数的系统,其特征在于,包括:机器人和机器人末端的执行器,所述机器人用于:获取位移对,所述位移对包括第一位移和第二位移,所述第一位移为所述机器人末端从第一位置移动至第二位置的实际移动位移,所述第二位移为所述机器人末端从所述第一位置移动至所述第二位置的名义移动位移;根据所述位移对确定误差值,所述误差值用于标定所述机器人的运动学参数;所述机器人末端的执行器用于:在所述机器人末端位于所述第一位置的情况下,获取标定物的第一图像;在所述机器人末端位于所述第二位置的情况下,获取所述标定物的第二图像;其中,所述第一位置和所述第二位置为所述机器人的操作空间中不同的两个点的位置,所述机器人末端在所述第一位置的姿态和在所述第二位置的姿态相同;所述实际移动位移由所述机器人的操作空间中的标定物的尺寸、所述第一图像的尺寸和所述第二图像的尺寸确定;所述名义移动位移由机器人运动学模型、第一关节变量和第二关节变量确定,所述第一关节变量为所述机器人末端在所述第一位置的情况下,所述机器人的关节变量;所述第二关节变量为所述机器人末端在所述第二位置的情况下,所述机器人的关节变量;所述机器人运动学模型用于表示机器人的关节变量和机器人末端的位姿之间的关系。
- 根据权利要求18所述的系统,其特征在于,所述系统还包括:所述标定物。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111340403.7A CN116117785A (zh) | 2021-11-12 | 2021-11-12 | 用于标定机器人的运动学参数的方法和装置 |
CN202111340403.7 | 2021-11-12 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023083056A1 true WO2023083056A1 (zh) | 2023-05-19 |
Family
ID=86294278
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/128991 WO2023083056A1 (zh) | 2021-11-12 | 2022-11-01 | 用于标定机器人的运动学参数的方法和装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN116117785A (zh) |
WO (1) | WO2023083056A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116423526A (zh) * | 2023-06-12 | 2023-07-14 | 上海仙工智能科技有限公司 | 一种机械臂工具坐标的自动标定方法及系统、存储介质 |
CN116817815A (zh) * | 2023-08-29 | 2023-09-29 | 聊城大学 | 一种基于三拉线位移传感器的位姿测量装置及方法 |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105021144A (zh) * | 2015-07-08 | 2015-11-04 | 合肥泰禾光电科技股份有限公司 | 一种工业机器人运动学参数标定装置及标定方法 |
CN106493708A (zh) * | 2016-12-09 | 2017-03-15 | 南京理工大学 | 一种基于双机械臂和辅助臂的带电作业机器人控制系统 |
CN108724190A (zh) * | 2018-06-27 | 2018-11-02 | 西安交通大学 | 一种工业机器人数字孪生系统仿真方法及装置 |
CN110555889A (zh) * | 2019-08-27 | 2019-12-10 | 西安交通大学 | 一种基于CALTag和点云信息的深度相机手眼标定方法 |
US20200039075A1 (en) * | 2017-04-26 | 2020-02-06 | Hewlett-Packard Development Company, L.P. | Robotic structure calibrations |
CN111923049A (zh) * | 2020-08-21 | 2020-11-13 | 福州大学 | 基于球面模型的飞行机械臂视觉伺服与多任务控制方法 |
CN112132894A (zh) * | 2020-09-08 | 2020-12-25 | 大连理工大学 | 一种基于双目视觉引导的机械臂实时跟踪方法 |
CN113101584A (zh) * | 2021-03-17 | 2021-07-13 | 国网江西省电力有限公司电力科学研究院 | 一种基于三维点云模型的智能消防机器人控制方法 |
Also Published As
Publication number | Publication date |
---|---|
CN116117785A (zh) | 2023-05-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22891852 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |