CN114131611A - Joint error offline compensation method, system and terminal for robot gravity pose decomposition - Google Patents

Joint error offline compensation method, system and terminal for robot gravity pose decomposition

Info

Publication number
CN114131611A
Authority
CN
China
Prior art keywords
joint
robot
track
motion
error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111544653.2A
Other languages
Chinese (zh)
Other versions
CN114131611B (en)
Inventor
杨吉祥
谭世忠
丁汉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN202111544653.2A (granted as CN114131611B)
Publication of CN114131611A
Application granted
Publication of CN114131611B
Active legal status
Anticipated expiration legal status

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/02 - Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian coordinate type
    • B25J9/04 - Programme-controlled manipulators characterised by movement of the arms, by rotating at least one arm, excluding the head movement itself, e.g. cylindrical coordinate type or polar coordinate type
    • B25J9/046 - Revolute coordinate type
    • B25J9/16 - Programme controls
    • B25J9/1612 - Programme controls characterised by the hand, wrist, grip control
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Abstract

The invention belongs to the technical field of robots, and discloses a joint error offline compensation method, system and terminal for robot gravity pose decomposition. Uniform random points are generated in the working space, a training-set trajectory is then obtained with an asymmetric spline interpolation method, and the corresponding actual joint positions during motion are obtained by actually running the trajectory on the robot. The motion features of the theoretical joint positions of the robot are extracted by nonlinear processing, the end load is decomposed onto each joint, and the joint errors produced when the trajectory is run are calculated. The joint tracking errors are trained with the constructed deep learning model. The trained model then predicts, offline, the motion error of the machining trajectory that needs compensation, and the trajectory is compensated offline. The method collects the robot's joint motion errors on the training-set trajectory and fits the joint errors, including nonlinear errors, with the deep learning model; compared with existing offline compensation methods, the compensation process is simple and the compensation accuracy is high.

Description

Joint error offline compensation method, system and terminal for robot gravity pose decomposition
Technical Field
The invention belongs to the technical field of robots, and particularly relates to a joint error offline compensation method, system and terminal for robot gravity pose decomposition.
Background
At present, in the field of robot machining, the joint errors of a robot affect the trajectory error of the robot's tool center point and directly enlarge the contour errors of machined products, thereby directly reducing machining accuracy. Reducing the joint errors of the robot therefore improves its machining quality. Joint errors are currently reduced mainly by compensation, and existing compensation methods generally fall into two types: online compensation, in which the actual position of the robot end effector is determined by adding a sensor to the end effector or by using a vision system such as a binocular camera or a laser tracker, and the deviation is fed into the controller for closed-loop control; and offline compensation, in which the error of the robot in motion is predicted by constructing a mathematical-physical model of the robot, and the predicted error is then used for open-loop control of the robot. These methods have the following problems. During machining, the actual machining environment may occlude the camera, or the sensor cannot be installed, so real-time measurement of the robot's pose in motion is challenging. An accurate error model of the robot is difficult to construct; nonlinear errors have a large influence on the robot's error, and the nonlinear error component is difficult to identify. Existing machine learning methods cannot accurately account for the influence of the robot's pose, and the model training set is difficult to construct. In general, existing robot joint error compensation methods have certain limitations, so that in actual machining the robot's joint motion errors are large, the tool tip deviates from the designed trajectory, and the design requirements cannot be met.
Through the above analysis, the problems and defects of the prior art are as follows: existing robot joint error compensation methods have certain limitations. An accurate kinematic model of the robot must be constructed, the nonlinear errors of the robot joints during motion are often difficult to model, and existing methods do not consider the influence of the robot's pose on the joint tracking errors. As a result, compensating the robot's joint tracking errors is difficult and the compensation accuracy is low, which degrades the motion accuracy of the robot end effector and fails to meet the design requirements.
The difficulty in solving the above problems and defects is:
the accurate parameter identification process for constructing the robot kinematic model is complicated, the motion nonlinear errors of the joints are many in sources and difficult to accurately model, and the change of the pose of the robot in the motion process can change the rigidity of the robot to influence the joint errors, so that the joint tracking errors in the motion process cannot be accurately predicted.
The significance of solving the problems and the defects is as follows:
the problems that a robot joint accurate kinematics model is difficult to construct and nonlinear errors are difficult to model are solved through a neural network model, the load at the tail end is decomposed to each joint, and the influence of the robot load and the space pose on joint tracking errors is fully considered; the tracking error of the robot joint is accurately predicted, the joint motion precision is improved by pre-compensating the input instruction, and the processing quality of the robot is improved.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a joint error offline compensation method, system and terminal for robot gravity pose decomposition. After the machining trajectory of the robot is obtained, the influence of the end load and the robot's pose is fully considered, nonlinear processing is then carried out, and the joint errors of the robot on the machining trajectory are predicted with the deep learning model, so that the joint errors of the robot are compensated offline and the machining accuracy of the robot is improved.
The invention is realized in this way: a joint error offline compensation method for robot gravity pose decomposition, comprising:
step one, generating uniform random points in the working space, then obtaining a training-set trajectory with an asymmetric spline interpolation method, and obtaining the corresponding actual joint positions during motion by actually running the trajectory on the robot;
step two, extracting the motion features of the theoretical joint positions of the robot with nonlinear processing, and calculating the joint errors when the trajectory is run;
step three, training the extracted motion features and joint motion errors with the constructed deep learning model;
step four, predicting offline, with the trained model, the motion error of the machining trajectory that needs compensation, and thereby compensating the trajectory offline.
Further, in step four, the trajectory is compensated offline; the specific process is as follows:
step A, generating a motion track covering the working space as much as possible in the working space, and operating on the robot to obtain the actual position X of the joint of the robota=[xa1,xa2...xan];
Step B, passing the theoretical position X of the robot joint1=[x1,x2...xn]Obtaining the velocity X of the joint2=[v1,v2...vn]And calculating a Jacobian matrix J under the current pose, converting the end load to each joint according to the Jacobian matrix, and obtaining the converted load tau [ tau ] of each joint12,...τ6];
C, converting continuous time sequence data into a motion state for retaining historical information through data processing;
step D, constructing a neural network model, taking joint displacement, speed and conversion load as input characteristics, and setting an actual error Y as Xa-X1Training as a label;
step E, calculating the speed and the conversion load of the track to be compensated, and putting the track into a trained model after nonlinear conversion for prediction to obtain the prediction error of the given track;
and F, performing off-line compensation on the machining track by using the predicted track error, and reducing the joint motion error during machining.
Further, step A specifically comprises the following process:
generating uniformly distributed points in the x, y, z position space and the α, β, γ orientation space through a pseudorandom sequence within the working space of the robot tool center point;
to prevent any stage of linear motion of the robot tool center point from being too long and degrading the training set, taking the origin of the working space as the initial motion point, and taking the point closest to the current point as the next point of the motion trajectory, until all random points generated in the space have been traversed;
interpolating the obtained trajectory points to obtain the theoretical position of the robot tool center point in each motion period;
converting the tool center point trajectory into the joint angles of each joint by inverse kinematics, and feeding the obtained joint trajectory command into the robot to obtain the actual joint positions X_a = [x_a1, x_a2, ..., x_an] at the current moment.
Further, step B specifically comprises the following process:
calculating the theoretical velocity of the trajectory at each command point from the theoretical position of the given joint trajectory, with the formula:

$$v_i = \frac{x_i - x_{i-1}}{\Delta t}$$

where v_i is the velocity at the current point, x_i and x_{i-1} are the theoretical positions at the current and previous moments, and Δt is the interpolation period;
according to the current robot joint angles θ = [θ_1, θ_2, ..., θ_6], calculating the Jacobian matrix J of the robot in the current pose, which satisfies:

$$J = \begin{bmatrix} z_1 \times p_{1e} & z_2 \times p_{2e} & \cdots & z_6 \times p_{6e} \\ z_1 & z_2 & \cdots & z_6 \end{bmatrix}$$

where p_ie is the position vector from the origin of coordinate system {i} to the coordinate origin of the end effector, expressed in the base coordinate system {0}, and z_i is the z-axis unit vector of coordinate system {i} expressed in the base coordinate system {0};
calculating the Jacobian matrix at all points of the trajectory, and from it the converted joint torque τ = [τ_1, τ_2, ..., τ_6] of the robot in each pose, which satisfies:

$$\tau_k = J_k^{\mathrm{T}} F_k$$

where τ_k is the converted load on each joint at time k, J_k^T is the transpose of the Jacobian matrix in the robot pose at time k, and F_k is the external load on the robot at time k; F_k is a six-dimensional vector composed of an external force f and a moment m.
Further, step C specifically comprises:
combining the reference displacement, the reference velocity and the converted torque into 3 × n time-series data, and then segmenting the time-series data.
Further, the segmentation method is as follows: combining the data at time t and the data at the L-1 preceding moments into a 3 × L matrix that describes the motion state of the current point.
Further, step D specifically comprises:
constructing a neural network model consisting of a temporal convolutional network layer, a cropping layer and an activation layer, where the temporal convolutional network layer is computed as:

$$F(s) = \sum_{i=0}^{k-1} f(i)\, x_{s-d\cdot i}$$

where f is the convolution kernel, k is the kernel size of the current convolutional layer, d is the dilation (expansion) coefficient of the temporal convolutional network, and s - d·i indexes the historical information extracted by the convolutional network;
the cropping layer ensures that the input information keeps the same size as the output prediction error and crops away the padded zero data; the activation layer uses ReLU as the activation function, computed as:

$$\mathrm{ReLU}(x) = \max(0, x)$$

to better extract the motion features of the robot in different states by building a deeper network, while ensuring the model does not suffer gradient explosion that degrades its prediction performance, a residual layer is added to the model, which satisfies:

$$o = \mathrm{Activation}\big(x + F(x)\big)$$

where F(x) is the output of the preceding (residual) network layers, x is the input to those layers, and Activation(x) is the activation function of the model;
the data collected in step A are used to calculate the robot's motion errors in different motion states during actual machining; these errors serve as the labels of the data set, i.e. the values to be learned; the labels, together with the training-set data constructed in step C, are fed into the constructed deep network model for training, and after the model is fitted, the structure and parameters of the model are saved.
Further, step F specifically comprises:
constructing a new network model with the same model structure and loading the training parameters saved in step D; feeding the data obtained in step E into the model to obtain the prediction error of the machining trajectory; and using the obtained prediction error to compensate the machining command offline, yielding the compensated machining command trajectory.
Another object of the present invention is to provide a program storage medium storing a computer program that causes an electronic device to execute the joint error offline compensation method for robot gravity pose decomposition, comprising the steps of:
step one, generating uniform random points in the working space, then obtaining a training-set trajectory with an asymmetric spline interpolation method, and obtaining the corresponding actual joint positions during motion by actually running the trajectory on the robot;
step two, extracting the motion features of the theoretical joint positions of the robot with nonlinear processing, and calculating the joint errors when the trajectory is run;
step three, training the extracted motion features and joint motion errors with the constructed deep learning model;
step four, predicting offline, with the trained model, the motion error of the machining trajectory that needs compensation, and thereby compensating the trajectory offline.
Another object of the present invention is to provide an information data processing terminal including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to execute the joint error offline compensation method of robot gravity pose decomposition.
Combining all the above technical solutions, the invention has the following advantages and positive effects:
A deep learning model replaces the robot kinematic model, which simplifies the prediction of the robot's joint tracking errors; the joint errors caused by the robot's end load under different poses are taken into account, which improves the prediction accuracy of the joint errors. Accurate prediction of the joint tracking errors makes it possible to pre-compensate the input commands, improving joint motion accuracy, effectively improving the motion accuracy of the robot end effector, and improving machining quality.
The joint motion errors of the robot are compensated by constructing a deep learning model, and only the actual joint angles of the robot under a given motion trajectory need to be collected, so the proposed method requires neither a physical model of the robot nor robot modeling and parameter identification. In addition, the proposed method is applicable to tools and actuators within any load range of the robot and requires no further modification, so it has high applicability and can be widely used for robot joint error compensation under various loads. The method collects the robot's joint motion errors on the training-set trajectory and fits the joint errors, including nonlinear errors, with the deep learning model; compared with existing offline compensation methods, the compensation process is simple and the compensation accuracy is high. The invention greatly improves the motion accuracy of the robot joints and can be applied to the field of robotic finish machining. Moreover, the proposed joint error compensation method is applicable not only to six-axis robots but also to robots with different numbers of joints. The method is applicable to, but not limited to, machining tools; error compensation can be performed for any tool and load that can be attached to the robot end through a connector.
Drawings
Fig. 1 is a flowchart of a joint error offline compensation method for robot gravity pose decomposition according to an embodiment of the present invention.
FIG. 2 is a schematic structural diagram of a six-degree-of-freedom robot provided in an embodiment of the present invention;
in fig. 2: 1. a six-degree-of-freedom serial industrial robot; 2. a high-speed motorized spindle; 3. a machining tool; 4. a workpiece to be machined; 5. a tooling table for mounting the workpiece to be machined.
FIG. 3 is a diagram illustrating a preferred training set trajectory provided by an embodiment of the present invention;
in fig. 3: fig. a, trace 1; fig. b, trace 2; fig. c, trace 3.
FIG. 4 is a diagram illustrating non-linear processing of data according to a preferred embodiment of the present invention.
Fig. 5 is a schematic diagram of a deep learning model of a preferred example provided by an embodiment of the present invention.
FIG. 6 is a comparison of joint tracking errors before and after compensation for a preferred example provided by an embodiment of the present invention.
Fig. 7 is a comparison of end effector error before and after compensation according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a joint error offline compensation method for robot gravity pose decomposition, which is described in detail below with reference to the accompanying drawings.
A person skilled in the art can also implement the joint error offline compensation method for robot gravity pose decomposition provided by the present invention with other steps; the method shown in fig. 1 is only one specific embodiment.
As shown in fig. 1, a joint error offline compensation method for robot gravity pose decomposition according to an embodiment of the present invention includes:
S101: generating uniform random points in the working space, then obtaining a training-set trajectory with an asymmetric spline interpolation method, and obtaining the corresponding actual joint positions during motion by actually running the trajectory on the robot.
S102: extracting the motion features of the obtained theoretical joint positions of the robot with nonlinear processing, decomposing the end load onto each joint, and calculating the joint errors when the trajectory is run.
S103: training the extracted motion features and joint motion errors with the constructed deep learning model.
S104: predicting offline, with the trained model, the motion error of the machining trajectory that needs compensation, so as to compensate the trajectory offline.
In S104 provided by the embodiment of the present invention, the trajectory is compensated offline; the specific process is as follows:
Step A, generating a motion trajectory that covers the working space as much as possible, and running it on the robot to obtain the actual joint positions X_a = [x_a1, x_a2, ..., x_an];
Step B, from the theoretical joint positions X_1 = [x_1, x_2, ..., x_n], obtaining the joint velocities X_2 = [v_1, v_2, ..., v_n], calculating the Jacobian matrix J in the current pose, and converting the end load onto each joint according to the Jacobian matrix to obtain the converted load τ = [τ_1, τ_2, ..., τ_6] of each joint;
Step C, converting the continuous time-series data, through data processing, into motion states that retain historical information;
Step D, constructing a neural network model, taking the joint displacement, velocity and converted load as input features, and taking the actual error Y = X_a - X_1 as the label for training;
Step E, calculating the velocity and converted load of the trajectory to be compensated, and, after nonlinear conversion, feeding them into the trained model for prediction to obtain the prediction error of the given trajectory;
Step F, compensating the machining trajectory offline with the predicted trajectory error, reducing the joint motion errors during machining.
Step A provided by the embodiment of the present invention specifically comprises the following process:
generating uniformly distributed points in the x, y, z position space and the α, β, γ orientation space through a pseudorandom sequence within the working space of the robot tool center point;
to prevent any stage of linear motion of the robot tool center point from being too long and degrading the training set, taking the origin of the working space as the initial motion point, and taking the point closest to the current point as the next point of the motion trajectory, until all random points generated in the space have been traversed;
interpolating the obtained trajectory points to obtain the theoretical position of the robot tool center point in each motion period;
converting the tool center point trajectory into the joint angles of each joint by inverse kinematics, and feeding the obtained joint trajectory command into the robot to obtain the actual joint positions X_a = [x_a1, x_a2, ..., x_an] at the current moment.
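For illustration, the following Python sketch shows one way step A could be implemented with NumPy and SciPy. It is not the patented implementation: the workspace bounds, number of points, segment time and interpolation period are made-up placeholders, an ordinary cubic spline stands in for the patent's asymmetric spline interpolation, and the inverse-kinematics conversion to joint commands is left out.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.distance import cdist

def generate_training_trajectory(n_points=200, dt=0.004, seg_time=2.0, seed=0,
                                 lower=(-0.3, -0.3, 0.2, -0.2, -0.2, -0.2),
                                 upper=( 0.3,  0.3, 0.6,  0.2,  0.2,  0.2)):
    """Step A sketch: uniform pseudorandom TCP poses, greedy nearest-neighbour
    ordering from the workspace origin, then spline interpolation per period dt.
    Workspace bounds, point count and timing are illustrative only."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lower, upper, size=(n_points, 6))   # (x, y, z, alpha, beta, gamma)

    # Greedy ordering: start at the workspace origin and always visit the closest
    # remaining random point next, so no single linear segment becomes too long.
    ordered = [np.zeros(6)]
    remaining = pts.copy()
    while len(remaining):
        d = cdist(ordered[-1][None, :], remaining)[0]
        k = int(np.argmin(d))
        ordered.append(remaining[k])
        remaining = np.delete(remaining, k, axis=0)
    ordered = np.asarray(ordered)

    # Spline-interpolate each pose coordinate over time to obtain the reference
    # TCP pose at every motion (interpolation) period.
    t_key = np.arange(len(ordered)) * seg_time
    t_cmd = np.arange(0.0, t_key[-1], dt)
    spline = CubicSpline(t_key, ordered, axis=0)
    return spline(t_cmd)        # shape: (n_samples, 6) reference TCP poses

# The reference TCP poses would then be converted to joint commands with the
# robot's inverse kinematics (not shown) and executed to record X_a.
```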
Step B provided by the embodiment of the present invention specifically comprises the following process:
calculating the theoretical velocity of the trajectory at each command point from the theoretical position of the given joint trajectory, with the formula:

$$v_i = \frac{x_i - x_{i-1}}{\Delta t}$$

where v_i is the velocity at the current point, x_i and x_{i-1} are the theoretical positions at the current and previous moments, and Δt is the interpolation period;
according to the current robot joint angles θ = [θ_1, θ_2, ..., θ_6], calculating the Jacobian matrix J of the robot in the current pose, which satisfies:

$$J = \begin{bmatrix} z_1 \times p_{1e} & z_2 \times p_{2e} & \cdots & z_6 \times p_{6e} \\ z_1 & z_2 & \cdots & z_6 \end{bmatrix}$$

where p_ie is the position vector from the origin of coordinate system {i} to the coordinate origin of the end effector, expressed in the base coordinate system {0}, and z_i is the z-axis unit vector of coordinate system {i} expressed in the base coordinate system {0};
calculating the Jacobian matrix at all points of the trajectory, and from it the converted joint torque τ = [τ_1, τ_2, ..., τ_6] of the robot in each pose, which satisfies:

$$\tau_k = J_k^{\mathrm{T}} F_k$$

where τ_k is the converted load on each joint at time k, J_k^T is the transpose of the Jacobian matrix in the robot pose at time k, and F_k is the external load on the robot at time k; F_k is a six-dimensional vector composed of an external force f and a moment m.
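As a hedged illustration of step B, the short NumPy sketch below evaluates the finite-difference velocity v_i = (x_i - x_{i-1})/Δt and the load decomposition τ_k = J_k^T F_k. The `jacobian` argument is a placeholder for a robot-specific routine returning the 6x6 geometric Jacobian in the base frame; it is assumed here, not provided by the patent text.

```python
import numpy as np

def joint_velocities(x_ref, dt):
    """v_i = (x_i - x_{i-1}) / dt for a reference joint trajectory of shape (n, 6)."""
    v = np.zeros_like(x_ref)
    v[1:] = (x_ref[1:] - x_ref[:-1]) / dt
    return v

def decomposed_joint_loads(thetas, wrench, jacobian):
    """tau_k = J_k^T F_k: map the end-effector wrench F = [f, m] (6-vector)
    onto the six joints for every commanded pose theta_k.
    `jacobian(theta)` must return the 6x6 geometric Jacobian in the base frame
    for the given joint angles (robot-specific, assumed available)."""
    return np.stack([jacobian(th).T @ wrench for th in thetas])   # shape (n, 6)
```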
Step C provided by the embodiment of the present invention specifically comprises:
combining the reference displacement, the reference velocity and the converted torque into 3 × n time-series data, and then segmenting the time-series data;
the segmentation method is as follows: combining the data at time t and the data at the L-1 preceding moments into a 3 × L matrix that describes the motion state of the current point.
Step D provided by the embodiment of the present invention specifically comprises:
constructing a neural network model consisting of a temporal convolutional network layer, a cropping layer and an activation layer, where the temporal convolutional network layer is computed as:

$$F(s) = \sum_{i=0}^{k-1} f(i)\, x_{s-d\cdot i}$$

where f is the convolution kernel, k is the kernel size of the current convolutional layer, d is the dilation (expansion) coefficient of the temporal convolutional network, and s - d·i indexes the historical information extracted by the convolutional network;
the cropping layer ensures that the input information keeps the same size as the output prediction error and crops away the padded zero data; the activation layer uses ReLU as the activation function, computed as:

$$\mathrm{ReLU}(x) = \max(0, x)$$

to better extract the motion features of the robot in different states by building a deeper network, while ensuring the model does not suffer gradient explosion that degrades its prediction performance, a residual layer is added to the model, which satisfies:

$$o = \mathrm{Activation}\big(x + F(x)\big)$$

where F(x) is the output of the preceding (residual) network layers, x is the input to those layers, and Activation(x) is the activation function of the model.
The data collected in step A are used to calculate the robot's motion errors in different motion states during actual machining; these errors serve as the labels of the data set, i.e. the values to be learned; the labels, together with the training-set data constructed in step C, are fed into the constructed deep network model for training, and after the model is fitted, the structure and parameters of the model are saved.
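To make the three formulas above concrete, here is a plain NumPy sketch of a single TCN-style building block: the dilated causal convolution F(s) = Σ f(i)·x_{s-d·i}, the ReLU activation, and the residual connection o = Activation(x + F(x)). It is a toy, single-channel illustration; the kernel values, dilation and input sequence are invented, and a practical model would stack many such blocks and be trained in a deep-learning framework.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dilated_causal_conv(x, f, d):
    """F(s) = sum_{i=0}^{k-1} f(i) * x[s - d*i]; x is a 1-D sequence, f a kernel
    of size k, d the dilation factor. Indices before the start of the sequence
    are treated as zero, which is the padding the cropping layer removes."""
    k, n = len(f), len(x)
    y = np.zeros(n)
    for s in range(n):
        for i in range(k):
            j = s - d * i
            if j >= 0:
                y[s] += f[i] * x[j]
    return y

def residual_block(x, f, d):
    """o = Activation(x + F(x)): one residual block with ReLU activation."""
    return relu(x + dilated_causal_conv(x, f, d))

# Example (invented data): filter one motion-state feature with kernel size 3, dilation 2.
x = np.sin(np.linspace(0, 4 * np.pi, 64))
print(residual_block(x, f=np.array([0.5, 0.3, 0.2]), d=2)[:5])
```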
Step F provided by the embodiment of the present invention specifically comprises the following process:
constructing a new network model with the same model structure and loading the training parameters saved in step D; feeding the data obtained in step E into the model to obtain the prediction error of the machining trajectory; and using the obtained prediction error to compensate the machining command offline, yielding the compensated machining command trajectory.
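Steps E and F then amount to running the features of the trajectory to be compensated through the trained model and subtracting the predicted tracking error from the commanded joint trajectory before it is sent to the controller. A minimal sketch follows; `model.predict` and the feature pipeline are hypothetical names assumed to come from steps B to D.

```python
import numpy as np

def compensate_offline(joint_cmd, predicted_error):
    """Step F sketch: pre-compensate the commanded joint trajectory with the
    predicted tracking error so the executed motion lands on the design path.
    joint_cmd, predicted_error: arrays of shape (n_samples, n_joints)."""
    return joint_cmd - predicted_error

# Usage (hypothetical names): features built as in steps B-C for the machining
# trajectory are fed to the trained model, whose output pre-compensates the command.
# err_pred = model.predict(features)                 # trained model from step D
# cmd_comp = compensate_offline(joint_cmd, err_pred)
```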
The technical solution of the present invention will be described in detail with reference to the following specific examples.
As shown in fig. 2, taking a six-degree-of-freedom robot as an example, a tooling table 5 for mounting the workpiece to be machined is provided, and the workpiece 4 to be machined is placed on top of the tooling table 5; above the workpiece, the machining tool 3 is mounted on the high-speed motorized spindle 2, which is in turn attached to the six-degree-of-freedom serial industrial robot 1. This constitutes the equipment setup used in the embodiment of the present invention.
First, uniform random points are generated in the working space, then the training-set trajectory shown in fig. 3 is obtained with the asymmetric spline interpolation method, and the corresponding actual joint positions during motion are obtained by actually running the trajectory on the robot. The obtained theoretical joint positions are then processed with the nonlinear processing shown in fig. 4 to extract the motion features of the robot joints, and the joint errors when the trajectory is run are calculated. A deep learning model is constructed as shown in fig. 5, and the extracted motion features and joint motion errors are used for training. Finally, the trained model predicts, offline, the motion error of the machining trajectory that needs compensation, so that the trajectory can be compensated offline.
FIG. 6 compares the joint tracking errors before and after compensation of an actual blade grinding and polishing trajectory using the method of the present invention; after the trajectory is pre-compensated with the method, the tracking errors of the six joints are reduced by 90.33%, 92.06%, 85.91%, 89.49%, 86.19% and 89.72%, respectively. FIG. 7 compares the end motion errors before and after compensation of the actual blade polishing trajectory; after pre-compensation, the motion errors of the end-effector position X, Y, Z are reduced by 87.0%, 85.6% and 83.3%, respectively, and the motion errors of the end-effector Euler angles α, β, γ are reduced by 83.9%, 82.7% and 87.3%, respectively.
It should be noted that the embodiments of the present invention can be realized by hardware, software, or a combination of software and hardware. The hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory and executed by a suitable instruction execution system, such as a microprocessor or specially designed hardware. Those skilled in the art will appreciate that the apparatus and methods described above may be implemented using computer-executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a disk, CD- or DVD-ROM, programmable memory such as read-only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The apparatus and its modules of the present invention may be implemented by hardware circuits such as very large scale integrated circuits or gate arrays, semiconductors such as logic chips and transistors, or programmable hardware devices such as field programmable gate arrays and programmable logic devices, or by software executed by various types of processors, or by a combination of hardware circuits and software, e.g., firmware.
The above description is only for the purpose of illustrating the present invention and is not intended to limit its scope; the invention is intended to cover all modifications, equivalents and improvements that fall within the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A joint error offline compensation method for robot gravity pose decomposition, characterized by comprising the following steps:
step one, generating uniform random points in the working space, then obtaining a training-set trajectory with an asymmetric spline interpolation method, and obtaining the corresponding actual joint positions during motion by actually running the trajectory on the robot;
step two, extracting the motion features of the theoretical joint positions of the robot with nonlinear processing, and calculating the joint errors when the trajectory is run;
step three, training the extracted motion features and joint motion errors with the constructed deep learning model;
step four, predicting offline, with the trained model, the motion error of the machining trajectory that needs compensation, and thereby compensating the trajectory offline.
2. The joint error offline compensation method for robot gravity pose decomposition according to claim 1, wherein in step four the trajectory is compensated offline by the following specific process:
Step A, generating a motion trajectory that covers the working space as much as possible, and running it on the robot to obtain the actual joint positions X_a = [x_a1, x_a2, ..., x_an];
Step B, from the theoretical joint positions X_1 = [x_1, x_2, ..., x_n], obtaining the joint velocities X_2 = [v_1, v_2, ..., v_n], calculating the Jacobian matrix J in the current pose, and converting the end load onto each joint according to the Jacobian matrix to obtain the converted load τ = [τ_1, τ_2, ..., τ_6] of each joint;
Step C, converting the continuous time-series data, through data processing, into motion states that retain historical information;
Step D, constructing a neural network model, taking the joint displacement, velocity and converted load as input features, and taking the actual error Y = X_a - X_1 as the label for training;
Step E, calculating the velocity and converted load of the trajectory to be compensated, and, after nonlinear conversion, feeding them into the trained model for prediction to obtain the prediction error of the given trajectory;
Step F, compensating the machining trajectory offline with the predicted trajectory error, reducing the joint motion errors during machining.
3. The joint error offline compensation method for robot gravity pose decomposition according to claim 2, wherein step A comprises the following specific process:
generating uniformly distributed points in the x, y, z position space and the α, β, γ orientation space through a pseudorandom sequence within the working space of the robot tool center point;
to prevent any stage of linear motion of the robot tool center point from being too long and degrading the training set, taking the origin of the working space as the initial motion point, and taking the point closest to the current point as the next point of the motion trajectory, until all random points generated in the space have been traversed;
interpolating the obtained trajectory points to obtain the theoretical position of the robot tool center point in each motion period;
converting the tool center point trajectory into the joint angles of each joint by inverse kinematics, and feeding the obtained joint trajectory command into the robot to obtain the actual joint positions X_a = [x_a1, x_a2, ..., x_an] at the current moment.
4. The joint error offline compensation method for robot gravity pose decomposition according to claim 2, wherein step B comprises the following specific process:
calculating the theoretical velocity of the trajectory at each command point from the theoretical position of the given joint trajectory, with the formula:

$$v_i = \frac{x_i - x_{i-1}}{\Delta t}$$

where v_i is the velocity at the current point, x_i and x_{i-1} are the theoretical positions at the current and previous moments, and Δt is the interpolation period;
according to the current robot joint angles θ = [θ_1, θ_2, ..., θ_6], calculating the Jacobian matrix J of the robot in the current pose, which satisfies:

$$J = \begin{bmatrix} z_1 \times p_{1e} & z_2 \times p_{2e} & \cdots & z_6 \times p_{6e} \\ z_1 & z_2 & \cdots & z_6 \end{bmatrix}$$

where p_ie is the position vector from the origin of coordinate system {i} to the coordinate origin of the end effector, expressed in the base coordinate system {0}, and z_i is the z-axis unit vector of coordinate system {i} expressed in the base coordinate system {0};
calculating the Jacobian matrix at all points of the trajectory, and from it the converted joint torque τ = [τ_1, τ_2, ..., τ_6] of the robot in each pose, which satisfies:

$$\tau_k = J_k^{\mathrm{T}} F_k$$

where τ_k is the converted load on each joint at time k, J_k^T is the transpose of the Jacobian matrix in the robot pose at time k, and F_k is the external load on the robot at time k; F_k is a six-dimensional vector composed of an external force f and a moment m.
5. The joint error offline compensation method for robot gravity pose decomposition according to claim 2, wherein step C comprises the following specific process:
combining the reference displacement, the reference velocity and the converted torque into 3 × n time-series data, and then segmenting the time-series data.
6. The joint error offline compensation method for robot gravity pose decomposition according to claim 5, wherein the segmentation method is as follows: combining the data at time t and the data at the L-1 preceding moments into a 3 × L matrix that describes the motion state of the current point.
7. The joint error offline compensation method for robot gravity pose decomposition according to claim 2, wherein step D comprises the following specific process:
constructing a neural network model consisting of a temporal convolutional network layer, a cropping layer and an activation layer, where the temporal convolutional network layer is computed as:

$$F(s) = \sum_{i=0}^{k-1} f(i)\, x_{s-d\cdot i}$$

where f is the convolution kernel, k is the kernel size of the current convolutional layer, d is the dilation (expansion) coefficient of the temporal convolutional network, and s - d·i indexes the historical information extracted by the convolutional network;
the cropping layer ensures that the input information keeps the same size as the output prediction error and crops away the padded zero data; the activation layer uses ReLU as the activation function, computed as:

$$\mathrm{ReLU}(x) = \max(0, x)$$

to better extract the motion features of the robot in different states by building a deeper network, while ensuring the model does not suffer gradient explosion that degrades its prediction performance, a residual layer is added to the model, which satisfies:

$$o = \mathrm{Activation}\big(x + F(x)\big)$$

where F(x) is the output of the preceding (residual) network layers, x is the input to those layers, and Activation(x) is the activation function of the model;
the data collected in step A are used to calculate the robot's motion errors in different motion states during actual machining; these errors serve as the labels of the data set, i.e. the values to be learned; the labels, together with the training-set data constructed in step C, are fed into the constructed deep network model for training, and after the model is fitted, the structure and parameters of the model are saved.
8. The joint error offline compensation method for robot gravity pose decomposition according to claim 2, wherein step F comprises the following specific process:
constructing a new network model with the same model structure and loading the training parameters saved in step D; feeding the data obtained in step E into the model to obtain the prediction error of the machining trajectory; and using the obtained prediction error to compensate the machining command offline, yielding the compensated machining command trajectory.
9. A program storage medium for receiving user input, the stored computer program causing an electronic device to execute the joint error offline compensation method for robot gravity pose decomposition according to any one of claims 1 to 8, comprising the steps of:
step one, generating uniform random points in the working space, then obtaining a training-set trajectory with an asymmetric spline interpolation method, and obtaining the corresponding actual joint positions during motion by actually running the trajectory on the robot;
step two, extracting the motion features of the theoretical joint positions of the robot with nonlinear processing, and calculating the joint errors when the trajectory is run;
step three, training the extracted motion features and joint motion errors with the constructed deep learning model;
step four, predicting offline, with the trained model, the motion error of the machining trajectory that needs compensation, and thereby compensating the trajectory offline.
10. An information data processing terminal, comprising a memory and a processor, wherein the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the joint error offline compensation method for robot gravity pose decomposition according to any one of claims 1 to 8.
CN202111544653.2A 2021-12-16 2021-12-16 Off-line compensation method, system and terminal for joint errors of robot gravity pose decomposition Active CN114131611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111544653.2A CN114131611B (en) 2021-12-16 2021-12-16 Off-line compensation method, system and terminal for joint errors of robot gravity pose decomposition

Publications (2)

Publication Number   Publication Date
CN114131611A (en)    2022-03-04
CN114131611B (en)    2023-10-24

Family

ID=80382703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111544653.2A Active CN114131611B (en) 2021-12-16 2021-12-16 Off-line compensation method, system and terminal for joint errors of robot gravity pose decomposition

Country Status (1)

Country Link
CN (1) CN114131611B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200223069A1 (en) * 2019-01-10 2020-07-16 General Electric Company Utilizing optical data to dynamically control operation of a snake-arm robot
CN110385720A (en) * 2019-07-26 2019-10-29 南京航空航天大学 A kind of robot localization error compensating method based on deep neural network
US20210209788A1 (en) * 2020-01-03 2021-07-08 Naver Corporation Method and apparatus for generating data for estimating three-dimensional (3d) pose of object included in input image, and prediction model for estimating 3d pose of object
CN111203890A (en) * 2020-02-28 2020-05-29 中国科学技术大学 Position error compensation method of robot
CN112497216A (en) * 2020-12-01 2021-03-16 南京航空航天大学 Industrial robot pose precision compensation method based on deep learning
CN112518753A (en) * 2020-12-04 2021-03-19 浙江理工大学 Industrial robot trajectory tracking system and method based on neural network iterative compensation
CN112643669A (en) * 2020-12-04 2021-04-13 广州机械科学研究院有限公司 Robot position deviation compensation method, system, device and storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114769800A (en) * 2022-06-20 2022-07-22 中建五洲工程装备有限公司 Intelligent operation control system and method for welding process
CN114769800B (en) * 2022-06-20 2022-09-27 中建五洲工程装备有限公司 Intelligent operation control system and method for welding process
CN115648228A (en) * 2022-12-28 2023-01-31 广东隆崎机器人有限公司 Industrial robot multi-source error compensation method, device, equipment and storage medium
CN116000935A (en) * 2023-01-17 2023-04-25 山东大学 Method and system for constructing robot integrated joint friction characteristic model
CN117086886B (en) * 2023-10-18 2023-12-22 山东建筑大学 Robot dynamic error prediction method and system based on mechanism data hybrid driving

Also Published As

Publication number Publication date
CN114131611B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
CN114131611B (en) Off-line compensation method, system and terminal for joint errors of robot gravity pose decomposition
JP4271232B2 (en) Apparatus, method, program, and recording medium for executing offline programming of robot
Wang et al. Nonparametric statistical learning control of robot manipulators for trajectory or contour tracking
CN110640747B (en) Hand-eye calibration method and system for robot, electronic equipment and storage medium
Peng et al. Total differential methods based universal post processing algorithm considering geometric error for multi-axis NC machine tool
Liao et al. Optimization of robot posture and workpiece setup in robotic milling with stiffness threshold
US11292130B2 (en) Method for motion simulation of a manipulator
Nagata et al. Development of CAM system based on industrial robotic servo controller without using robot language
Schnoes et al. Model-based planning of machining operations for industrial robots
Tan et al. A prediction and compensation method of robot tracking error considering pose-dependent load decomposition
Celikag et al. Cartesian stiffness optimization for serial arm robots
CN111775145A (en) Control system of series-parallel robot
Uzunovic et al. A novel hybrid contouring control method for 3-DOF robotic manipulators
Theissen et al. Quasi-static compliance calibration of serial articulated industrial manipulators
CN116652939A (en) Calibration-free visual servo compliant control method for parallel robot
Mayer et al. Global kinematic calibration of a Stewart platform
Li et al. A spatial vector projection based error sensitivity analysis method for industrial robots
Li et al. Pose accuracy improvement in robotic machining by visually-guided method and experimental investigation
JPH10225885A (en) Multi-collaboration work method, and system device
Berselli et al. Engineering methods and tools enabling reconfigurable and adaptive robotic deburring
Schneider et al. Combining holistic programming with kinematic parameter optimisation for robot machining
CN109773581B (en) Method for applying robot to reappear machining
JP2021186929A (en) Control method for multi-axis robot
Kainrath et al. Accuracy improvement and process flow adaption for robot machining
Berselli et al. Design optimisation of cutting parameters for a class of radially-compliant spindles via virtual prototyping tools

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant