CN110815190A - Industrial robot dragging demonstration method and system - Google Patents
- Publication number: CN110815190A
- Application number: CN201911144507.3A
- Authority
- CN
- China
- Prior art keywords
- robot
- joint
- matrix
- friction
- formula
- Prior art date
- Legal status: Granted
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/0081—Programme-controlled manipulators with master teach-in means
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1661—Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
- B25J9/1679—Programme controls characterised by the tasks executed
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Feedback Control In General (AREA)
- Manipulator (AREA)
Abstract
The invention relates to an industrial robot dragging teaching method and system. A six-dimensional force sensor collects force information from the robot to identify the robot's dynamic parameters and establish a frictionless robot dynamics model; the actual and theoretical motion control quantities of the robot are then calculated according to an impedance control law, and a reinforced iterative learning process identifies the corresponding friction model parameters in the robot dynamics model containing friction. Dragging teaching is implemented, and errors arising during teaching are compensated, on the basis of the friction-inclusive dynamics model and the reinforced iterative learning process. The method and system have the advantages of simplicity, accurate control, and stability.
Description
Technical Field
The invention relates to the technical field of robot teaching, in particular to an industrial robot dragging teaching method and system.
Background
With the continuous development of robot technology, robot applications have become widespread, and robots play an important role in many fields. Industrial robots have long been used in industrial production to perform tasks such as welding, assembly, and handling, improving production efficiency. For a robot to accomplish a given task, it must first be taught.
Robot teaching techniques include direct teaching, teach-pendant teaching, and offline teaching. Teach-pendant and offline teaching are time-consuming, inefficient, complex to operate, and demanding on operators, and they struggle to meet the teaching-efficiency requirements of industrial production. Direct teaching guides the robot to the target pose by pulling it directly; compared with the other two approaches, it is more efficient and places lower demands on operators.
Direct teaching divides into power-off direct teaching and servo-on direct teaching. In power-off teaching, the operator pulls the robot against its gravity and joint friction, which demands considerable physical effort. In servo-on teaching, the robot is pulled indirectly with the assistance of the joint actuators. Direct teaching is therefore generally realized with the servo-on approach.
Servo-on teaching falls into three categories according to the sensors used. In the first, a six-dimensional force sensor is mounted at the robot end; the controller collects the external force applied by the operator through the sensor and realizes direct teaching combined with impedance control. This is simple to implement, but teaching is only possible at the sensor mounting location, contact on the robot body cannot be sensed, and six-dimensional force sensors that meet the relevant requirements are expensive. In the second, a torque sensor or a double encoder at each joint forms a flexible joint. This structure simplifies the robot's dynamic model, so a relatively accurate model can be established and contact anywhere on the body can be sensed via a generalized-momentum method, but the cost is too high for broad market acceptance. In the third, without external torque sensors, the external force and the body contact information are estimated from the current of each joint, and the controller uses this information to drive the corresponding motion. This approach is inexpensive, but the operating feel is inferior to direct teaching with a torque sensor on each joint, and it requires accurate identification of the robot's dynamic parameters and friction model.
Disclosure of Invention
In view of this, the invention aims to provide an industrial robot dragging teaching method and system, which have the advantages of simplicity, accurate control, stability and the like.
The invention is realized by adopting the following scheme: an industrial robot dragging teaching method comprises the following steps:
step S1: establishing a robot dynamics model containing friction, calculating in real time the actual motion control quantity C_R and the theoretical motion control quantity C_T, and performing parameter identification on the friction term in the robot dynamics model containing friction;
step S2: based on the robot dynamics model containing friction, observing the joint torque currently borne by the robot, calculating the motion control quantity C_M from the observed joint torque to control the robot's movement, simultaneously collecting the actual motion quantity Q of the robot, and, at a preset interval, performing error compensation on the robot dynamics model containing friction according to the motion control quantity C_M and the actual motion quantity Q.
Further, step S1 specifically includes the following steps:
step S11: selecting an excitation trajectory to drive the robot to move, collecting force information at the robot end to identify the robot's dynamic parameters, obtaining the dynamic parameters of each joint (joint mass, joint centroid coordinates, joint moments of inertia, and products of inertia), and establishing a frictionless robot dynamic equation;
step S12: the theoretical joint moment τ_T is calculated from the frictionless robot dynamic equation:

τ_T = τ_i - M(q)·q̈ - C(q, q̇)·q̇ - G(q)

wherein M(q) represents the inertia matrix, q̈ the joint acceleration vector, C(q, q̇) the velocity-term matrix associated with centrifugal and Coriolis forces, q̇ the joint angular velocity vector, and G(q) the gravity vector; τ_i represents the total joint moment, calculated as τ_i = K_i·i, where K_i and i denote the moment coefficient and the joint current respectively; the joint current is collected by a current feedback module mounted on each robot joint;
at the same time, the actual joint moment τ_R is calculated using the following formula:

τ_R = J_nᵀ·(F - (ᵇT_tool)⁻¹·G_tool)

wherein J_n represents the Jacobian matrix relative to the robot tool coordinate system, F the force vector acting on the robot end (obtained by a six-dimensional force sensor mounted at the robot end), ᵇT_tool the pose matrix of the robot tool coordinate system in the base coordinate system, and G_tool the gravity vector of the robot end tool in the base coordinate system; the tool gravity is mapped from the base frame into the tool frame and subtracted so that only the externally applied force remains;
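As a minimal numerical sketch of the two torque estimates above (the function names are illustrative, the dynamics terms M, C, G would come from the identified parameters, and the exact gravity-compensation mapping is an assumption about the image-only formula):

```python
import numpy as np

def theoretical_joint_moment(K_i, i, M, C, G, dq, ddq):
    """tau_T: motor torque from joint currents (tau_i = K_i * i) minus the
    frictionless rigid-body torque M(q)@ddq + C(q,dq)@dq + G(q)."""
    tau_i = K_i * i                        # per-joint moment coefficient times current
    return tau_i - M @ ddq - C @ dq - G

def actual_joint_moment(Jn, F, R_tool, G_tool):
    """tau_R: end wrench F (sensor frame) with the end-tool gravity wrench
    G_tool (base frame) mapped into the tool frame and removed, then
    projected onto the joints through the tool-frame Jacobian Jn."""
    F_ext = F - np.linalg.inv(R_tool) @ G_tool
    return Jn.T @ F_ext
```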
step S13: according to the impedance control law, the actual joint torque τ_R and the theoretical joint torque τ_T are used to calculate the actual motion control quantity C_R and the theoretical motion control quantity C_T of the robot respectively, wherein the impedance control law is:

q̇(k+1) = q̇(k) + T·M_d⁺·(τ(k) - B_d·q̇(k))

wherein τ(k) represents the joint moment sequence, q̇(k) the joint velocity sequence, B_d the damping parameter matrix, M_d the mass parameter matrix, T the sampling period, and the superscript + the generalized inverse; the calculated joint velocity sequence q̇(k+1) is the motion control quantity: when τ(k) = τ_R is substituted, the result is the actual motion control quantity C_R; when τ(k) = τ_T is substituted, the result is the theoretical motion control quantity C_T;
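The discrete impedance (admittance) update can be sketched as follows; `admittance_step` is an illustrative name, and NumPy's Moore-Penrose pseudoinverse stands in for the generalized inverse M_d⁺:

```python
import numpy as np

def admittance_step(dq, tau, Bd, Md, T):
    """dq(k+1) = dq(k) + T * Md^+ @ (tau(k) - Bd @ dq(k)).
    Feeding tau_R yields C_R; feeding tau_T yields C_T."""
    return dq + T * np.linalg.pinv(Md) @ (tau - Bd @ dq)
```

With the embodiment's parameters (B_d = diag(30, 30, 30, 20, 20, 20), M_d = I, T = 0.1), a 1 N·m torque on a joint at rest produces a 0.1 rad/s velocity command on that joint.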
Step S14: collecting the robot's actual motion control quantity C_R and theoretical motion control quantity C_T for a preset number of samples, and performing parameter identification on the friction model with a reinforced iterative learning process to obtain the robot dynamics model containing friction.
Further, the parameter identification of the friction force model in step S14 specifically includes the following steps:
step S141: determining a gain matrix G and a shrinkage coefficient λ, defining a parameter matrix F_p based on the selected friction model, and assigning the initial value F_p = 0, wherein λ < 1 and F_p uniquely determines the friction torque τ_f;
Step S142: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:wherein the deviation term Δ C ═ CR-CT;
Step S143: the enhancement signal R is calculated using the following formula:
step S144: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S145: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
step S146: updating the parameter matrix F_p using the following equation:

F_p = F_p - L;
Step S147: judging whether the deviation term ΔC converges to zero; when the deviation term diverges, the parameter matrix and the gain matrix are updated as F_p = F_p + L and G = λ·G, and the process proceeds to step S148; otherwise, the parameter identification of the friction term in the robot dynamics model containing friction is complete;
step S148: calculating the friction torque τ_f from the friction model and the parameter matrix F_p, compensating the original theoretical joint moment with τ_f so that it becomes τ_T - τ_f, and calculating a new theoretical motion control quantity C_T from the compensated theoretical joint moment according to the impedance control law; then returning to step S142.
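The identification loop of steps S141 to S148 can be sketched as below. The formulas for the error matrix, the enhancement signal, and the divergence update appear only as images in the original, so the forms used here (E = ΔC, R_i = Σ_j E_ij², roll back the update and shrink the gain on divergence) are assumptions chosen to be dimensionally consistent with the stated matrix sizes:

```python
import numpy as np

def reinforced_ilc_identify(C_R, compensated_C_T, G, lam=0.8, tol=1e-6,
                            max_iter=200):
    """Reinforced iterative learning identification of the friction
    parameter matrix Fp. compensated_C_T(Fp) recomputes the theoretical
    motion control quantity with tau_f(Fp) compensated (step S148)."""
    Fp = np.zeros_like(G)                 # initial value Fp = 0 (step S141)
    R_prev, err_prev, L_prev = None, np.inf, None
    for _ in range(max_iter):
        E = C_R - compensated_C_T(Fp)     # deviation term dC (step S142)
        R = (E ** 2).sum(axis=1)          # enhancement signal per joint (S143)
        err = R.sum()
        if err < tol:                     # dC converged to zero (step S147)
            break
        if err > err_prev and L_prev is not None:
            Fp = Fp + L_prev              # diverging: roll back last update
            G = lam * G                   # and shrink the gain matrix
        # learning direction (step S144); -1 by convention on the first pass
        S = np.sign(R - R_prev) if R_prev is not None else -np.ones_like(R)
        L = np.diag(R) @ np.diag(S) @ G   # learning matrix (step S145)
        Fp = Fp - L                       # parameter update (step S146)
        R_prev, err_prev, L_prev = R, err, L
    return Fp
```

A one-joint toy problem, where the compensated theoretical control quantity depends linearly on a single parameter, converges to the true value under this scheme.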
Further, step S2 specifically includes the following steps:
step S21: based on the robot dynamics model containing friction, a momentum deviation observer is used to detect the joint torque τ_e borne by the robot, wherein the discretized momentum deviation observer is:

r(k+1) = r(k) + K·[P(k+1) - P(k) - T_s·(τ_i(k) + Cᵀ(q, q̇)·q̇(k) - g(k) - τ_f(k) + r(k))]

wherein r is the observed joint moment applied to the robot, K represents the gain coefficient, T_s the sampling period, Cᵀ(q, q̇) the transposed velocity-term matrix sequence associated with centrifugal and Coriolis forces, and q̇(k), g(k), τ_f(k) the joint velocity, gravity vector, and friction torque sequences respectively; τ_i(k) represents the total joint moment sequence, and P is the generalized momentum, calculated as P = M(q)·q̇.
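One observer step can be sketched as follows, following the standard generalized-momentum residual observer; the function name and argument layout are illustrative:

```python
import numpy as np

def momentum_observer_step(r, P_new, P_old, tau_i, C, dq, g, tau_f, K, Ts):
    """r(k+1) = r(k) + K*(P(k+1) - P(k)
                          - Ts*(tau_i + C.T@dq - g - tau_f + r(k))),
    with generalized momentum P = M(q)@dq. When the model terms match the
    true dynamics, r converges to the external joint torque tau_e."""
    return r + K * (P_new - P_old - Ts * (tau_i + C.T @ dq - g - tau_f + r))
```

With an exact model, the residual r tracks the external joint torque like a first-order filter whose bandwidth is set by the gain K.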
step S22: the motion control quantity C_M is calculated with the impedance control law formula: when τ(k) = τ_e is substituted, the calculated joint velocity sequence q̇(k+1) is the motion control quantity C_M of the robot, which is used to control the robot's movement;
step S23: collecting the robot motion control quantity C_M in real time while also collecting the actual robot motion quantity Q provided by the robot controller, and storing both in a buffer; when the buffer is full, the friction model parameters are re-identified with the reinforced iterative learning process and the error is compensated.
Further, the parameter re-identification of the friction force model in step S23 specifically includes the following steps:
step S231: determining a gain matrix G and a shrinkage coefficient λ, defining a parameter matrix F_p based on the selected friction model, and assigning the initial value F_p = 0, wherein λ < 1 and F_p uniquely determines the friction torque τ_f;
Step S232: obtaining an error matrix E of l rows and t columns, wherein l represents the number of robot joints and t the number of samples; the error matrix is calculated as E = ΔC, wherein the deviation term ΔC = C_M - Q;
Step S233: the enhancement signal R is calculated using the following formula:
step S234: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S235: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
step S236: updating the parameter matrix F_p using the following equation:

F_p = F_p - L;
Step S237: judging whether the deviation term ΔC converges to zero; when the deviation term diverges, the parameter matrix and the gain matrix are updated as F_p = F_p + L and G = λ·G, and the process proceeds to step S238; otherwise, the parameter re-identification of the friction term in the robot dynamics model containing friction is complete;
step S238: calculating the friction torque τ_f from the friction model and the parameter matrix F_p, substituting it for the original τ_f in the discretized momentum deviation observer formula, and calculating the motion control quantity C_M with the updated observer formula; then returning to step S232 when the buffer is next full.
The invention also provides an industrial robot dragging teaching system which comprises an industrial robot body, a current feedback module, a six-dimensional force sensor, a computer and a robot controller; the current feedback module is arranged on each joint of the robot, the six-dimensional force sensor is arranged at the tail end of the robot, the current feedback module, the six-dimensional force sensor and the robot controller are all in communication connection with the computer, and the computer executes the method steps as described in any one of the above items when the computer runs.
The invention also provides a computer-readable storage medium having stored thereon a computer program capable of being executed by a processor, which, when executing the computer program, performs the method steps as set forth in any of the above.
The robot dynamic parameter identification method based on the six-dimensional force sensor has the advantages of simplicity, accurate control, and stability.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention identifies the friction model parameters with a reinforced iterative learning process; the result is accurate, the implementation is simple, and no complex theoretical derivation is required.
2. The invention collects relevant robot data while the dragging teaching is in progress and uses the collected data for model-parameter re-identification and error compensation, ensuring the precision of the dragging teaching.
3. The invention adopts an impedance control law; with suitable control parameters, the dragging teaching process proceeds stably.
Drawings
Fig. 1 is a flowchart of step S1 according to an embodiment of the present invention.
Fig. 2 is a control block diagram of step S2 according to the embodiment of the present invention.
Fig. 3 is a flowchart illustrating the friction parameter identification in step S14 according to the embodiment of the invention.
Fig. 4 is a flowchart illustrating the friction parameter re-identification in step S23 according to the embodiment of the invention.
Fig. 5 is a system composition diagram according to an embodiment of the invention.
In the figure, 1 is an industrial robot body, 2 is a robot controller, 3 is an ethernet cable, 4 is a computer, 5 is a six-dimensional force sensor, and 6 is a current feedback module.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiment provides an industrial robot dragging teaching method which comprises the following steps:
step S1: establishing a robot dynamics model containing friction, calculating in real time the actual motion control quantity C_R and the theoretical motion control quantity C_T, and performing parameter identification on the friction term in the robot dynamics model containing friction; the specific flow is shown in fig. 1;
step S2: based on the robot dynamics model containing friction, observing the joint torque currently borne by the robot, calculating the motion control quantity C_M from the observed joint torque to control the robot's movement, simultaneously collecting the actual motion quantity Q of the robot, and, at a preset interval, performing error compensation on the robot dynamics model containing friction according to the motion control quantity C_M and the actual motion quantity Q; the specific control block diagram is shown in fig. 2;
in this embodiment, step S1 specifically includes the following steps:
step S11: selecting an excitation trajectory to drive the robot to move, collecting force information at the robot end to identify the robot's dynamic parameters, obtaining the dynamic parameters of each joint (joint mass, joint centroid coordinates, joint moments of inertia, and products of inertia), and establishing a frictionless robot dynamic equation; in this embodiment, a Fourier-series joint trajectory is selected as the excitation trajectory;
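A finite Fourier-series joint trajectory of the kind typically used as an excitation trajectory can be sketched per joint as follows; the base frequency and coefficients are placeholders, not values from the patent:

```python
import numpy as np

def fourier_excitation(t, q0, a, b, wf=2 * np.pi * 0.1):
    """q(t) = q0 + sum_k [ a_k/(wf*k)*sin(wf*k*t) - b_k/(wf*k)*cos(wf*k*t) ]
    for one joint; a, b are Fourier coefficients and wf the base angular
    frequency. The trajectory is periodic, so cycles can be averaged to
    reduce noise before dynamic parameter identification."""
    k = np.arange(1, len(a) + 1)
    return q0 + np.sum(a / (wf * k) * np.sin(wf * k * t)
                       - b / (wf * k) * np.cos(wf * k * t))
```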
step S12: the theoretical joint moment τ_T is calculated from the frictionless robot dynamic equation:

τ_T = τ_i - M(q)·q̈ - C(q, q̇)·q̇ - G(q)

wherein M(q) represents the inertia matrix, q̈ the joint acceleration vector, C(q, q̇) the velocity-term matrix associated with centrifugal and Coriolis forces, q̇ the joint angular velocity vector, and G(q) the gravity vector; τ_i represents the total joint moment, calculated as τ_i = K_i·i, where K_i and i denote the moment coefficient and the joint current respectively; the joint current is collected by a current feedback module mounted on each robot joint;
at the same time, the actual joint moment τ_R is calculated using the following formula:

τ_R = J_nᵀ·(F - (ᵇT_tool)⁻¹·G_tool)

wherein J_n represents the Jacobian matrix relative to the robot tool coordinate system, F the force vector acting on the robot end (obtained by a six-dimensional force sensor mounted at the robot end), ᵇT_tool the pose matrix of the robot tool coordinate system in the base coordinate system, and G_tool the gravity vector of the robot end tool in the base coordinate system; the tool gravity is mapped from the base frame into the tool frame and subtracted so that only the externally applied force remains;
step S13: according to the impedance control law, the actual joint torque τ_R and the theoretical joint torque τ_T are used to calculate the actual motion control quantity C_R and the theoretical motion control quantity C_T of the robot respectively, wherein the impedance control law is:

q̇(k+1) = q̇(k) + T·M_d⁺·(τ(k) - B_d·q̇(k))

wherein τ(k) represents the joint moment sequence, q̇(k) the joint velocity sequence, B_d the damping parameter matrix, M_d the mass parameter matrix, T the sampling period, and the superscript + the generalized inverse; the calculated joint velocity sequence q̇(k+1) is the motion control quantity: when τ(k) = τ_R is substituted, the result is the actual motion control quantity C_R; when τ(k) = τ_T is substituted, the result is the theoretical motion control quantity C_T; in this embodiment, the parameters to be selected are B_d, M_d, and T, with values B_d = diag(30, 30, 30, 20, 20, 20), M_d = diag(1, 1, 1, 1, 1, 1), and T = 0.1;
Step S14: collecting the robot's actual motion control quantity C_R and theoretical motion control quantity C_T for a preset number of samples, and performing parameter identification on the friction model with a reinforced iterative learning process to obtain the robot dynamics model containing friction; in this embodiment, the friction model is a Coulomb-viscous friction model, and the number of samples is set to 1000.
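The embodiment's friction parameter matrix has four coefficients per joint, but the patent does not spell out what the four columns are; one plausible reading, separate Coulomb and viscous coefficients for positive and negative joint velocity, can be sketched as:

```python
import numpy as np

def coulomb_viscous_friction(Fp, dq):
    """Per-joint friction torque from an l x 4 parameter matrix. The column
    meaning is an assumption: [fc+, fv+, fc-, fv-] are Coulomb and viscous
    coefficients for positive and negative joint velocity:
      dq > 0: tau_f =  fc+ + fv+ * dq
      dq < 0: tau_f = -fc- + fv- * dq
      dq = 0: tau_f = 0 (no static term modeled here)."""
    fc_p, fv_p, fc_n, fv_n = Fp.T
    return np.where(dq > 0, fc_p + fv_p * dq,
                    np.where(dq < 0, -fc_n + fv_n * dq, 0.0))
```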
In this embodiment, as shown in fig. 3, the parameter identification of the friction force model in step S14 specifically includes the following steps:
step S141: determining a gain matrix G and a shrinkage coefficient λ, defining a parameter matrix F_p based on the selected friction model, and assigning the initial value F_p = 0, wherein λ < 1 and F_p uniquely determines the friction torque τ_f; in this embodiment, the parameter matrix corresponding to the selected Coulomb-viscous friction model has 6 rows and 4 columns, and λ is set to 0.8;
step S142: obtaining an error matrix E of l rows and t columns, wherein l represents the number of robot joints and t the number of samples; the error matrix is calculated as E = ΔC, wherein the deviation term ΔC = C_R - C_T; the robot body in this embodiment has six joints, so l = 6, and the number of samples t = 1000;
step S143: the enhancement signal R is calculated from the error matrix, one entry per joint per iteration:

R(i, k) = Σ_{j=1…t} E(i, j)²;
step S144: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S145: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
step S146: by using a lower partFormula update parameter matrix Fp:
Fp=Fp-L;
Step S147: judging whether the deviation term ΔC converges to zero; when the deviation term diverges, the parameter matrix and the gain matrix are updated as F_p = F_p + L and G = λ·G, and the process proceeds to step S148; otherwise, the parameter identification of the friction term in the robot dynamics model containing friction is complete;
step S148: calculating the friction torque τ_f from the friction model and the parameter matrix F_p, compensating the original theoretical joint moment with τ_f so that it becomes τ_T - τ_f, and calculating a new theoretical motion control quantity C_T from the compensated theoretical joint moment according to the impedance control law; then returning to step S142.
In this embodiment, steps S142 to S148 are repeated until the deviation term ΔC converges to zero; the corresponding F_p then gives the final friction model parameters.
In this embodiment, step S2 specifically includes the following steps:
step S21: based on the robot dynamics model containing friction, a momentum deviation observer is used to detect the joint torque τ_e borne by the robot, wherein the discretized momentum deviation observer is:

r(k+1) = r(k) + K·[P(k+1) - P(k) - T_s·(τ_i(k) + Cᵀ(q, q̇)·q̇(k) - g(k) - τ_f(k) + r(k))]

wherein r is the observed joint moment applied to the robot, K represents the gain coefficient, T_s the sampling period, Cᵀ(q, q̇) the transposed velocity-term matrix sequence associated with centrifugal and Coriolis forces, and q̇(k), g(k), τ_f(k) the joint velocity, gravity vector, and friction torque sequences respectively; τ_i(k) represents the total joint moment sequence, and P is the generalized momentum, calculated as P = M(q)·q̇.
step S22: the motion control quantity C_M is calculated with the impedance control law formula: when τ(k) = τ_e is substituted, the calculated joint velocity sequence q̇(k+1) is the motion control quantity C_M of the robot, which is used to control the robot's movement;
step S23: collecting the robot motion control quantity C_M in real time while also collecting the actual robot motion quantity Q provided by the robot controller, and storing both in a buffer; when the buffer is full, the friction model parameters are re-identified with the reinforced iterative learning process and the error is compensated; in this embodiment, the buffer capacity is 100 samples.
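The buffer-and-reidentify cycle of step S23 can be sketched as below; the class name and the re-identification callback are illustrative:

```python
import numpy as np

class TeachingBuffer:
    """Fixed-capacity store of (C_M, Q) samples (100 in the embodiment).
    When full, the stacked l x t arrays are handed to a re-identification
    callback (the reinforced iterative learning process) and the buffer
    is cleared for the next batch."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.cm, self.q = [], []

    def push(self, c_m, q, reidentify):
        self.cm.append(np.asarray(c_m))
        self.q.append(np.asarray(q))
        if len(self.cm) >= self.capacity:      # buffer full
            C_M = np.stack(self.cm, axis=1)    # l x t
            Q = np.stack(self.q, axis=1)
            reidentify(C_M, Q)                 # dC = C_M - Q drives the ILC
            self.cm.clear()
            self.q.clear()
```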
In this embodiment, as shown in fig. 4, the parameter re-identification of the friction force model in step S23 specifically includes the following steps:
step S231: determining a gain matrix G and a shrinkage coefficient λ, defining a parameter matrix F_p based on the selected friction model, and assigning the initial value F_p = 0, wherein λ < 1 and F_p uniquely determines the friction torque τ_f; in this embodiment, the parameter matrix corresponding to the selected Coulomb-viscous friction model has 6 rows and 4 columns, and, to increase the identification speed, λ is set to 0.6;
step S232: obtaining an error matrix E of l rows and t columns, wherein l represents the number of robot joints and t the number of samples; the error matrix is calculated as E = ΔC, wherein the deviation term ΔC = C_M - Q; the robot in this embodiment has six joints, so l = 6, and because the buffer holds 100 samples, the number of samples t = 100;
step S233: the enhancement signal R is calculated from the error matrix, one entry per joint per iteration:

R(i, k) = Σ_{j=1…t} E(i, j)²;
step S234: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S235: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
step S236: updating the parameter matrix F_p using the following equation:

F_p = F_p - L;
Step S237: judging whether the deviation term ΔC converges to zero; when the deviation term diverges, the parameter matrix and the gain matrix are updated as F_p = F_p + L and G = λ·G, and the process proceeds to step S238; otherwise, the parameter re-identification of the friction term in the robot dynamics model containing friction is complete;
step S238: calculating the friction torque τ_f from the friction model and the parameter matrix F_p, substituting it for the original τ_f in the discretized momentum deviation observer formula, and calculating the motion control quantity C_M with the updated observer formula; then returning to step S232 when the buffer is next full.
In this embodiment, steps S232 to S238 are repeated until the deviation term ΔC converges to zero; the corresponding F_p then replaces the original friction model parameters, realizing the error compensation.
The embodiment also provides an industrial robot dragging teaching system, as shown in fig. 5, which includes an industrial robot body, a current feedback module, a six-dimensional force sensor, a computer and a robot controller; the current feedback module is arranged on each joint of the robot, the six-dimensional force sensor is arranged at the tail end of the robot, the current feedback module, the six-dimensional force sensor and the robot controller are all in communication connection with the computer, and the computer executes the method steps as described in any one of the above items when the computer runs.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program capable of being executed by a processor, which, when executing the computer program, performs the method steps as set forth in any of the above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention; other and further embodiments may be devised without departing from its basic scope, which is determined by the claims that follow. Any simple modification or equivalent change to the above embodiments made according to the technical essence of the present invention falls within the protection scope of the technical solution of the present invention.
Claims (7)
1. An industrial robot dragging teaching method is characterized by comprising the following steps:
step S1: establishing a robot dynamic model containing friction force, calculating the actual motion control quantity C_R and the theoretical motion control quantity C_T in real time, and performing parameter identification on the friction term in the robot dynamic model containing friction force;
step S2: based on the robot dynamic model containing friction force, observing the joint torque currently exerted on the robot, and calculating the motion control quantity C_M from the observed joint torque to control the movement of the robot; simultaneously, at a preset interval, collecting the actual motion quantity Q of the robot, and performing error compensation on the robot dynamic model containing friction force using the motion control quantity C_M and the actual motion quantity Q.
2. An industrial robot drag teaching method according to claim 1 wherein the step S1 comprises the following steps:
step S11: selecting an excitation trajectory to drive the robot to move, collecting force information at the robot end to identify the dynamic parameters of the robot, acquiring the dynamic parameters of each joint of the robot, including joint mass, joint centroid coordinates, joint moments of inertia and products of inertia, and establishing a frictionless robot dynamic equation;
step S12: calculating the theoretical joint torque τ_T from the frictionless robot dynamic equation:

τ_T = τ_i − M(q)·q̈ − C(q, q̇)·q̇ − G(q)

wherein M(q) represents the inertia matrix, q̈ the joint acceleration vector, C(q, q̇) the velocity-term matrix arising from centrifugal and Coriolis forces, q̇ the joint angular velocity vector, and G(q) the gravity vector; τ_i represents the total joint torque, calculated as τ_i = K_i·i, wherein K_i and i represent the torque coefficient and the joint current respectively; the joint current is collected by the current feedback module arranged on the robot joint;
simultaneously, the actual joint torque τ_R is calculated by the following formula:

τ_R = J_n^T·(R·F − G_tool)

wherein J_n represents the Jacobian matrix relative to the robot tool coordinate system, F the force vector acting on the robot end, obtained from the six-dimensional force sensor mounted at the robot end, R the pose matrix of the robot tool coordinate system in the base coordinate system, and G_tool the gravity vector of the robot end tool in the base coordinate system;
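The sensor-side computation of the actual joint torque can be sketched in the same spirit. The Jacobian, rotation matrix and tool-gravity wrench below are toy values; the measured wrench is rotated block-wise into the base frame, and the moment-translation (lever-arm) term is omitted as a simplifying assumption:

```python
import numpy as np

def actual_joint_torque(J_n, R_base_tool, wrench_tool, G_tool):
    """tau_R = J_n^T * (rotated measured wrench - tool gravity wrench)."""
    R6 = np.zeros((6, 6))            # block-diagonal rotation for [force; moment]
    R6[:3, :3] = R_base_tool
    R6[3:, 3:] = R_base_tool
    wrench_base = R6 @ wrench_tool - G_tool
    return J_n.T @ wrench_base

J_n = np.ones((6, 2)) * 0.1          # toy 6 x 2 Jacobian for a 2-joint arm
R = np.eye(3)                        # tool frame aligned with base frame
wrench = np.array([0.0, 0.0, 10.0, 0.0, 0.0, 0.0])   # 10 N push along z
G_tool = np.array([0.0, 0.0, -4.9, 0.0, 0.0, 0.0])   # assumed 0.5 kg tool weight
tau_R = actual_joint_torque(J_n, R, wrench, G_tool)
```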
step S13: according to the impedance control law, calculating the actual motion control quantity C_R and the theoretical motion control quantity C_T of the robot from the actual joint torque τ_R and the theoretical joint torque τ_T respectively, wherein the impedance control law is discretized as:

q̇(k) = (M_d + B_d·T)^+·(M_d·q̇(k−1) + T·τ(k))

wherein τ(k) represents the joint torque sequence, q̇(k) the joint velocity sequence, B_d the damping parameter matrix, M_d the mass parameter matrix, T the sampling period, and the superscript + the generalized inverse; the joint velocity sequence q̇(k) is the motion control quantity: the sequence calculated with τ_R substituted for τ(k) is the actual motion control quantity C_R, and the sequence calculated with τ_T substituted is the theoretical motion control quantity C_T;
Step S14: collecting the actual motion control quantity C_R and the theoretical motion control quantity C_T of the robot according to a preset number of samples, and performing parameter identification on the friction model by a reinforced iterative learning process to obtain the robot dynamic model containing friction force.
3. An industrial robot drag teaching method according to claim 2 wherein the parameter identification of the friction force model in step S14 comprises the following steps:
step S141: determining a gain matrix G and a shrinkage factor λ, defining a parameter matrix F_p based on the selected friction model, and assigning it an initial value F_p0; wherein λ < 1, and F_p uniquely determines the friction torque τ_f;
Step S142: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:wherein the deviation term Δ C ═ CR-CT;
Step S143: the enhancement signal R is calculated using the following formula:
step S144: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S145: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
step S146: updating the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S147: judging whether the deviation item delta C converges to zero or not, and when the deviation item diverges, updating the parameter matrix and the gain matrix by adopting the following formula, and entering the step S148; otherwise, completing parameter identification of the friction item in the robot dynamic model containing the friction force;
step S148: calculating the friction torque τ_f from the friction model and the parameter matrix F_p, compensating the original theoretical joint torque with τ_f so that the compensated theoretical joint torque becomes τ_T − τ_f, calculating a new theoretical motion control quantity C_T from the compensated theoretical joint torque according to the impedance control law, and returning to step S142.
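Steps S143–S146 can be sketched as a single update pass per iteration. Because the enhancement-signal formula of step S143 is an image in the source, a shrunk running error magnitude R_k = λ·R_{k−1} + |E_k| is used below purely as an assumed stand-in, with per-joint scalar friction parameters and a diagonal gain:

```python
import numpy as np

def reinforced_update(F_p, R_prev, E_k, g_gain, lam):
    R_k = lam * R_prev + np.abs(E_k)   # ASSUMED enhancement signal (step S143)
    S = np.sign(R_k - R_prev)          # learning direction: S = sign[R(i,k)-R(i,k-1)]
    L = R_k * S * g_gain               # L = diag(R(:,k)) diag(S) G, for diagonal G
    return F_p - L, R_k                # parameter update F_p <- F_p - L

F_p = np.array([1.0, 1.0])             # e.g. per-joint Coulomb friction coefficients
R_prev = np.array([0.5, 0.5])
E_k = np.array([0.2, -0.1])            # current deviation term per joint
F_p, R_k = reinforced_update(F_p, R_prev, E_k,
                             g_gain=np.array([0.1, 0.1]), lam=0.9)
```

The sign term makes the step direction follow whether the enhancement signal grew or shrank, so a joint whose error keeps growing receives a larger corrective step.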
4. An industrial robot drag teaching method according to claim 1 wherein the step S2 comprises the following steps:
step S21: based on the robot dynamic model containing friction force, detecting the joint torque τ_e exerted on the robot by a momentum deviation observer, wherein the discretized formula of the momentum deviation observer is:

r(k) = K·[P(k) − Σ_{j=1}^{k} T_s·(τ_i(j) + C(q, q̇)^T·q̇(j) − g(j) − τ_f(j) + r(j−1)) − P(0)]

wherein r is the observed joint torque exerted on the robot, K represents the gain coefficient, T_s the sampling period, C(q, q̇) the velocity-term matrix sequence arising from centrifugal and Coriolis forces, q̇(k), g(k) and τ_f(k) the joint velocity, gravity vector and friction torque sequences respectively, and τ_i(k) the total joint torque sequence; P is the generalized momentum, calculated as P = M(q)·q̇;
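The discretized momentum deviation observer of step S21 can be sketched as follows, accumulating the integral term sample by sample with the generalized momentum P = M(q)·q̇. The static toy scenario only checks the consistency property: when the motor torque exactly balances the model terms and no external torque acts, the observed torque stays at zero:

```python
import numpy as np

class MomentumObserver:
    def __init__(self, K, Ts, P0):
        self.K, self.Ts = K, Ts
        self.P0 = P0
        self.integral = np.zeros_like(P0)  # running sum of Ts*(tau_i + C^T qd - g - tau_f + r)
        self.r = np.zeros_like(P0)         # observed external joint torque

    def step(self, M, C, g, tau_f, tau_i, qd):
        self.integral = self.integral + self.Ts * (tau_i + C.T @ qd - g - tau_f + self.r)
        P = M @ qd                          # generalized momentum P = M(q) qd
        self.r = self.K * (P - self.P0 - self.integral)
        return self.r

# Toy static 2-joint case: motor torque exactly holds gravity, no drag force.
obs = MomentumObserver(K=10.0, Ts=0.001, P0=np.zeros(2))
M, C = np.eye(2), np.zeros((2, 2))
g, tau_f, qd = np.array([1.0, 0.5]), np.zeros(2), np.zeros(2)
r = obs.r
for _ in range(100):
    r = obs.step(M, C, g, tau_f, tau_i=g, qd=qd)
```

When a human drags the robot, the unmodeled external torque makes P drift away from the integral, and r converges to that external joint torque without needing joint acceleration measurements.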
step S22: calculating the motion control quantity C_M using the impedance control law formula, wherein the joint velocity sequence calculated with τ_e substituted for τ(k) is the motion control quantity C_M of the robot; using the motion control quantity C_M to control the movement of the robot;
step S23: acquiring the robot motion control quantity C_M in real time while collecting the actual motion quantity Q of the robot provided by the robot controller, and storing both in a buffer; when the buffer is full, re-identifying the friction model parameters by the reinforced iterative learning process and compensating the error.
5. An industrial robot dragging teaching method according to claim 4 wherein the parameter re-identification of the friction force model in step S23 comprises the following steps:
step S231: determining a gain matrix G and a shrinkage factor λ, defining a parameter matrix F_p based on the selected friction model, and assigning it an initial value F_p0; wherein λ < 1, and F_p uniquely determines the friction torque τ_f;
Step S232: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:wherein the deviation term Δ C ═ CM-Q;
Step S233: the enhancement signal R is calculated using the following formula:
step S234: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S235: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
step S236: updating the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S237: judging whether the deviation term ΔC converges to zero; if it does not converge, updating the parameter matrix and the gain matrix and proceeding to step S238; otherwise, the parameter re-identification of the friction term in the robot dynamic model containing friction force is complete;
step S238: calculating the friction torque τ_f from the friction model and the parameter matrix F_p, replacing the original τ_f in the discretized momentum deviation observer formula with it, updating the momentum deviation observer formula, and calculating the motion control quantity C_M using the updated formula; and returning to step S232 when the buffer is full.
6. An industrial robot dragging teaching system, characterized by comprising an industrial robot body, a current feedback module, a six-dimensional force sensor, a computer and a robot controller; the current feedback module is arranged on each joint of the robot, the six-dimensional force sensor is arranged at the end of the robot, and the current feedback module, the six-dimensional force sensor and the robot controller are all in communication connection with the computer; the computer, when running, executes the method steps according to any one of claims 1-5.
7. A computer-readable storage medium, on which a computer program is stored which can be executed by a processor, characterized in that the processor, when executing the computer program, executes the method steps of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911144507.3A CN110815190B (en) | 2019-11-20 | 2019-11-20 | Industrial robot dragging demonstration method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110815190A true CN110815190A (en) | 2020-02-21 |
CN110815190B CN110815190B (en) | 2021-07-27 |
Family
ID=69557792
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911144507.3A Active CN110815190B (en) | 2019-11-20 | 2019-11-20 | Industrial robot dragging demonstration method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110815190B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110774269A (en) * | 2019-11-26 | 2020-02-11 | 上海节卡机器人科技有限公司 | Robot drag teaching method and device, electronic device and storage medium |
CN112454333A (en) * | 2020-11-26 | 2021-03-09 | 青岛理工大学 | Robot teaching system and method based on image segmentation and surface electromyogram signals |
CN112847345A (en) * | 2020-12-30 | 2021-05-28 | 上海节卡机器人科技有限公司 | Method and device for determining robot dragging teaching mode |
CN112894821A (en) * | 2021-01-30 | 2021-06-04 | 同济大学 | Current method based collaborative robot dragging teaching control method, device and equipment |
CN114310851A (en) * | 2022-01-27 | 2022-04-12 | 华南理工大学 | Robot dragging-free teaching method for torque sensor |
CN114516052A (en) * | 2022-03-23 | 2022-05-20 | 杭州湖西云百生科技有限公司 | Dynamics control method and system of parallel real-time high-performance multi-axis mechanical arm |
CN114619440A (en) * | 2020-12-10 | 2022-06-14 | 北京配天技术有限公司 | Method for correcting friction model, robot and computer readable storage medium |
CN116442240A (en) * | 2023-05-26 | 2023-07-18 | 中山大学 | Robot zero-force control method and device based on high-pass filtering decoupling |
CN116512225A (en) * | 2023-06-16 | 2023-08-01 | 北京敏锐达致机器人科技有限责任公司 | Robot dragging teaching method and device, electronic equipment and readable storage medium |
CN117706931A (en) * | 2023-12-15 | 2024-03-15 | 苏州康多机器人有限公司 | Lifting teaching control method, device, equipment and medium for surgical robot |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105479459A (en) * | 2015-12-29 | 2016-04-13 | 深圳市汇川技术股份有限公司 | Zero-force control method and system for robot |
CN106426174A (en) * | 2016-11-05 | 2017-02-22 | 上海大学 | Robot contact force detecting method based on torque observation and friction identification |
CN108582069A (en) * | 2018-04-17 | 2018-09-28 | 上海达野智能科技有限公司 | Robot drags teaching system and method, storage medium, operating system |
CN108582078A (en) * | 2018-05-15 | 2018-09-28 | 清华大学深圳研究生院 | A kind of mechanical arm zero-force control method towards direct teaching |
CN108839023A (en) * | 2018-07-03 | 2018-11-20 | 上海节卡机器人科技有限公司 | Drag teaching system and method |
CN109397265A (en) * | 2018-11-13 | 2019-03-01 | 华中科技大学 | A kind of joint type industrial robot dragging teaching method based on kinetic model |
CN109454625A (en) * | 2018-09-12 | 2019-03-12 | 华中科技大学 | A kind of non-moment sensor industrial robot dragging teaching method |
CN109676607A (en) * | 2018-12-30 | 2019-04-26 | 江苏集萃智能制造技术研究所有限公司 | A kind of zero-g control method of non-moment sensing |
Non-Patent Citations (1)
Title |
---|
Kang Yongli, "Research on Drag Teaching and Collision Detection of a Seven-Degree-of-Freedom Collaborative Robot", China Master's Theses Full-text Database (Information Science and Technology Series) *
Also Published As
Publication number | Publication date |
---|---|
CN110815190B (en) | 2021-07-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110815190B (en) | Industrial robot dragging demonstration method and system | |
CN108582078A (en) | A kind of mechanical arm zero-force control method towards direct teaching | |
Sharifi et al. | Nonlinear model reference adaptive impedance control for human–robot interactions | |
CN108656112B (en) | Mechanical arm zero-force control experiment system for direct teaching | |
CN110355751B (en) | Control device and machine learning device | |
Steinmetz et al. | Simultaneous kinesthetic teaching of positional and force requirements for sequential in-contact tasks | |
Fang et al. | Skill learning for human-robot interaction using wearable device | |
CN112218744A (en) | System and method for learning agile movement of multi-legged robot | |
CN115122325A (en) | Robust visual servo control method for anthropomorphic manipulator with view field constraint | |
CN109352656A (en) | A kind of multi-joint mechanical arm control method with time-varying output constraint | |
Mohammed et al. | Energy-efficient robot configuration for assembly | |
Li et al. | Neural learning and kalman filtering enhanced teaching by demonstration for a baxter robot | |
Zhang et al. | Model-based design of the vehicle dynamics control for an omnidirectional automated guided vehicle (AGV) | |
Khanesar et al. | A Neural Network Separation Approach for the Inclusion of Static Friction in Nonlinear Static Models of Industrial Robots | |
CN108227493A (en) | A kind of robot trace tracking method | |
CN111203883B (en) | Self-learning model prediction control method for robot electronic component assembly | |
Papp et al. | Navigation of differential drive mobile robot on predefined, software designed path | |
Renawi et al. | ROS validation for non-holonomic differential robot modeling and control: Case study: Kobuki robot trajectory tracking controller | |
CN116038697A (en) | Jeans automatic spraying method and system based on manual teaching | |
CN114800523A (en) | Mechanical arm track correction method, system, computer and readable storage medium | |
CN106292678B (en) | A kind of robot for space pedestal decoupling control method for object run | |
Wang et al. | Step-by-step identification of industrial robot dynamics model parameters and force-free control for robot teaching | |
CN114840947A (en) | Three-degree-of-freedom mechanical arm dynamic model with constraint | |
CN114454150A (en) | Arm type robot control method based on composite learning and robot system | |
CN113043269B (en) | Robot contact force observation system based on robot model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||