CN110815190B - Industrial robot dragging demonstration method and system - Google Patents

Industrial robot dragging demonstration method and system

Info

Publication number
CN110815190B
Authority
CN
China
Prior art keywords
robot
joint
matrix
friction
formula
Prior art date
Legal status
Active
Application number
CN201911144507.3A
Other languages
Chinese (zh)
Other versions
CN110815190A (en)
Inventor
吴海彬
黄兴平
许锡阳
Current Assignee
Fuzhou University
Original Assignee
Fuzhou University
Priority date
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201911144507.3A
Publication of CN110815190A
Application granted
Publication of CN110815190B
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0081Programme-controlled manipulators with master teach-in means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed

Abstract

The invention relates to a drag-teaching method and system for an industrial robot. A six-dimensional force sensor collects force information at the robot end, from which the robot's dynamic parameters are identified and a frictionless robot dynamics model is established. The actual and theoretical motion control quantities of the robot are then calculated according to an impedance control law, and a reinforced iterative learning process identifies the corresponding friction-model parameters, giving a robot dynamics model that includes friction. Drag teaching is carried out, and errors arising during teaching are compensated, on the basis of this friction-inclusive dynamics model and the reinforced iterative learning process. The method and system are simple, accurate in control, and stable.

Description

Industrial robot dragging demonstration method and system
Technical Field
The invention relates to the technical field of robot teaching, in particular to an industrial robot dragging teaching method and system.
Background
With the continuous development of robot technology, robot applications have become widespread, and robots now play an important role in many fields. Industrial robots in particular have long been used in industrial production to perform welding, assembly, handling, and similar tasks and to improve production efficiency. For a robot to accomplish a given task, it must first be taught.
Robot teaching techniques include direct teaching, teach-pendant teaching, and offline teaching. Teach-pendant teaching and offline teaching are time-consuming, inefficient, complicated to operate, and demanding of the operator, so they can hardly meet the teaching-efficiency requirements of industrial production. Direct teaching teaches the robot by pulling it directly to the target pose; compared with teach-pendant and offline teaching it is more efficient and places lower demands on the operator.
Direct teaching is divided into power-stage-off teaching and servo-stage-on teaching. In power-stage-off teaching the operator must overcome the robot's gravity and joint friction to pull it, which requires considerable physical effort. In servo-stage-on teaching the robot is pulled indirectly through its joint actuators. Direct teaching is therefore generally realized with the servo-stage-on approach.
Servo-stage-on teaching can be divided into three categories according to the sensors used in the teaching process. In the first category, a six-dimensional force sensor is mounted at the robot end; the controller collects the external force applied by the operator through this sensor and realizes direct teaching in combination with impedance control. This implementation is simple, but teaching is only possible at the sensor mounting location, contact on the robot body cannot be sensed, and six-dimensional force sensors that meet the requirements are expensive. In the second category, a torque sensor or a double encoder is mounted at each joint to form a flexible joint. This structure simplifies the robot's dynamics model, so a relatively accurate model can be established and contact anywhere on the robot body can be sensed through a generalized-momentum method, but the cost is too high and market acceptance is low. In the third category, no external torque sensor is used: the external force and the contact information on the robot body are estimated from the current of each joint, and the controller uses this information to command the corresponding motion. This approach is low in cost, but the operating feel is not as good as direct teaching with a torque sensor at each joint, and it requires accurate identification of the robot's dynamic parameters and friction model.
Disclosure of Invention
In view of this, the invention aims to provide an industrial robot dragging teaching method and system, which have the advantages of simplicity, accurate control, stability and the like.
The invention is realized by adopting the following scheme: an industrial robot dragging teaching method comprises the following steps:
Step S1: establish a robot dynamics model containing friction: calculate the actual motion control quantity C_R and the theoretical motion control quantity C_T in real time, and perform parameter identification of the friction term in the friction-inclusive robot dynamics model;
Step S2: based on the robot dynamics model containing friction, observe the joint torque currently acting on the robot and use the observed joint torque to calculate the motion control quantity C_M that controls the robot's motion; at the same time collect the robot's actual motion quantity Q, and at preset intervals use the motion control quantity C_M and the actual motion quantity Q to perform error compensation on the friction-inclusive robot dynamics model.
Further, step S1 specifically includes the following steps:
Step S11: select an excitation trajectory to drive the robot, collect the force information at the robot end to identify the robot's dynamic parameters, obtain the dynamic parameters of each joint (joint mass, joint centroid coordinates, joint moments of inertia, and products of inertia), and establish a frictionless robot dynamics equation;
Step S12: calculate the theoretical joint moment τ_T from the frictionless robot dynamics equation:
τ_T = M(q)·q̈ + C(q,q̇)·q̇ + G(q) − τ_i
where M(q) is the inertia matrix, q̈ the joint acceleration vector, C(q,q̇) the velocity-term matrix accounting for centrifugal and Coriolis forces, q̇ the joint angular-velocity vector, and G(q) the gravity vector; τ_i is the total joint moment, calculated as τ_i = K_i·i, where K_i and i denote the torque coefficient and the joint current respectively, the joint current being collected by the current feedback module mounted on each robot joint;
At the same time, calculate the actual joint moment τ_R with the following formula:
τ_R = J_n^T·(F − R_bt^T·G_tool)
where J_n is the Jacobian matrix relative to the robot tool coordinate system, F is the force vector acting on the robot end, obtained from the six-dimensional force sensor mounted at the robot end, R_bt is the rotation part of the pose matrix of the robot tool coordinate system in the base coordinate system, and G_tool is the gravity vector of the robot end tool in the base coordinate system (so that R_bt^T·G_tool removes the tool's weight, expressed in the tool frame, from the measured force);
Step S13: according to the impedance control law, use the actual joint moment τ_R and the theoretical joint moment τ_T respectively to calculate the robot's actual motion control quantity C_R and theoretical motion control quantity C_T, where the impedance control law is:
q̇(k+1) = (M_d/T + B_d)⁺·(τ(k) + (M_d/T)·q̇(k))
where τ(k) is the joint moment sequence, q̇(k) the joint velocity sequence, B_d the damping parameter matrix, M_d the mass parameter matrix, T the sampling period, and the superscript + denotes the generalized inverse. The motion control quantity is the joint velocity sequence q̇(k+1): substituting τ_R for τ(k) gives the actual motion control quantity C_R, and substituting τ_T gives the theoretical motion control quantity C_T;
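A minimal numerical sketch of this discrete impedance (admittance) update follows; it assumes the form of the law as reconstructed above, and the function name and the use of NumPy's pseudo-inverse for the generalized inverse are choices of this sketch rather than the patent's implementation.

```python
import numpy as np

def impedance_step(tau_k, qd_k, B_d, M_d, T):
    """qd(k+1) = (M_d/T + B_d)^+ (tau(k) + (M_d/T) qd(k)).
    Substituting tau_R for tau(k) yields C_R; substituting tau_T yields C_T."""
    A = M_d / T + B_d
    return np.linalg.pinv(A) @ (tau_k + (M_d / T) @ qd_k)

# e.g. C_R = impedance_step(tau_R, qd_prev, B_d, M_d, T)
#      C_T = impedance_step(tau_T, qd_prev, B_d, M_d, T)
```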
Step S14: collect the robot's actual motion control quantity C_R and theoretical motion control quantity C_T for a preset number of samples, and identify the parameters of the friction model with a reinforced iterative learning process to obtain the robot dynamics model containing friction.
Further, the parameter identification of the friction force model in step S14 specifically includes the following steps:
Step S141: determine the gain matrix G and the contraction coefficient λ, define the parameter matrix F_p according to the selected friction model, and assign it the initial value F_p = 0; here λ < 1, and F_p uniquely determines the friction torque τ_f;
Step S142: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:
Figure BDA0002281803990000043
where the deviation term ΔC = C_R − C_T;
Step S143: the enhancement signal R is calculated using the following formula:
Figure BDA0002281803990000044
step S144: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S145: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
Step S146: update the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S147: judge whether the deviation term ΔC converges to zero; if it diverges, update the parameter matrix and the gain matrix with the following formula and go to step S148; otherwise, the parameter identification of the friction term in the friction-inclusive robot dynamics model is complete;
Figure BDA0002281803990000051
Step S148: calculate the friction torque τ_f from the friction model and the parameter matrix F_p, use τ_f to compensate the original theoretical joint moment so that the compensated theoretical joint moment becomes τ_T + τ_f, calculate a new theoretical motion control quantity C_T from the compensated theoretical joint moment according to the impedance control law, and return to step S142.
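The loop below sketches steps S141 to S148 in Python. The patent gives the error-matrix, enhancement-signal, and divergence-update formulas only as images, so this sketch assumes a per-joint RMS enhancement signal and a back-off rule that shrinks the gain matrix by λ; the three callables (collect_C_R, compute_C_T, friction_torque) are hypothetical stand-ins for the data collection and model evaluation described above.

```python
import numpy as np

def identify_friction(collect_C_R, compute_C_T, friction_torque, G,
                      lam=0.8, n_joints=6, n_params=4, max_iter=200, tol=1e-4):
    """Reinforced iterative learning of the friction parameter matrix F_p."""
    F_p = np.zeros((n_joints, n_params))                    # step S141: F_p0 = 0
    C_R = collect_C_R()                                     # l x t actual motion control samples
    C_T = compute_C_T(np.zeros((n_joints, C_R.shape[1])))   # theoretical samples, no friction yet
    R_prev = None
    for _ in range(max_iter):
        E = C_R - C_T                                # step S142: error matrix (deviation dC)
        R = np.sqrt(np.mean(E ** 2, axis=1))         # step S143: enhancement signal (assumed RMS form)
        S = np.ones(n_joints) if R_prev is None else np.sign(R - R_prev)   # step S144
        L = np.diag(R) @ np.diag(S) @ G              # step S145: learning matrix
        F_p = F_p - L                                # step S146: parameter update
        if np.max(np.abs(E)) < tol:                  # step S147: deviation converged -> done
            return F_p
        if R_prev is not None and np.all(R > R_prev):   # diverging -> back off (assumed rule)
            F_p = F_p + L
            G = lam * G
        tau_f = friction_torque(F_p)                 # step S148: friction torque from current F_p
        C_T = compute_C_T(tau_f)                     # recompute C_T with tau_T + tau_f compensated
        R_prev = R
    return F_p
```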
Further, step S2 specifically includes the following steps:
Step S21: based on the robot dynamics model containing friction, use a momentum deviation observer to detect the joint torque τ_e acting on the robot, where the discretized form of the momentum deviation observer is:
r(k) = K·[P(k) − Σ_{j=0..k} (τ_i(j) + C^T(j)·q̇(j) − g(j) − τ_f(j) + r(j−1))·T_s − P(0)]
where r is the observed joint moment acting on the robot, K is the gain coefficient, T_s is the sampling period, C^T(k) is the sequence of velocity-term matrices accounting for centrifugal and Coriolis forces, q̇(k), g(k) and τ_f(k) are the joint velocity, gravity vector, and friction torque sequences respectively, τ_i(k) is the total joint moment sequence, and P is the generalized momentum, calculated as:
P = M(q)·q̇;
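A compact Python sketch of such a discrete momentum observer is given below. The exact discretization in the patent is shown only as an image, so the accumulation form used here (a running sum of the bracketed integrand) and the scalar gain K are assumptions; the caller supplies the dynamics terms from the friction-inclusive model.

```python
import numpy as np

class MomentumObserver:
    """Discrete generalized-momentum observer for the external joint torque (step S21)."""
    def __init__(self, K, Ts, n_joints=6):
        self.K = K                       # observer gain (scalar in this sketch)
        self.Ts = Ts                     # sampling period
        self.acc = np.zeros(n_joints)    # running sum approximating the integral term
        self.r = np.zeros(n_joints)      # observed external joint torque tau_e
        self.P0 = None                   # initial generalized momentum P(0)

    def update(self, M, C, qd, g, tau_f, tau_i):
        P = M @ qd                                   # generalized momentum P = M(q) * qdot
        if self.P0 is None:
            self.P0 = P
        self.acc += (tau_i + C.T @ qd - g - tau_f + self.r) * self.Ts
        self.r = self.K * (P - self.acc - self.P0)   # residual = external-torque estimate
        return self.r
```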
Step S22: calculate the motion control quantity C_M with the impedance control law:
q̇(k+1) = (M_d/T + B_d)⁺·(τ(k) + (M_d/T)·q̇(k))
where substituting τ_e for τ(k) gives the joint velocity sequence q̇(k+1), which is the robot's motion control quantity C_M; use C_M to control the robot's motion;
Step S23: collect the robot motion control quantity C_M in real time and, at the same time, collect the actual motion quantity Q provided by the robot controller, storing both in a buffer; when the buffer is full, re-identify the friction-model parameters with the reinforced iterative learning process and compensate the error.
Further, the parameter re-identification of the friction force model in step S23 specifically includes the following steps:
Step S231: determine the gain matrix G and the contraction coefficient λ, define the parameter matrix F_p according to the selected friction model, and assign it the initial value F_p = 0; here λ < 1, and F_p uniquely determines the friction torque τ_f;
Step S232: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:
Figure BDA0002281803990000062
where the deviation term ΔC = C_M − Q;
Step S233: the enhancement signal R is calculated using the following formula:
Figure BDA0002281803990000063
step S234: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S235: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
Step S236: update the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S237: judge whether the deviation term ΔC converges to zero; if it diverges, update the parameter matrix and the gain matrix with the following formula and go to step S238; otherwise, the parameter re-identification of the friction term in the friction-inclusive robot dynamics model is complete;
Figure BDA0002281803990000071
Step S238: calculate the friction torque τ_f from the friction model and the parameter matrix F_p, substitute this τ_f for the original τ_f in the discretized momentum deviation observer formula to update the observer, use the updated observer formula to calculate the motion control quantity C_M, and return to step S232 when the buffer is next full.
The invention also provides an industrial robot dragging teaching system which comprises an industrial robot body, a current feedback module, a six-dimensional force sensor, a computer and a robot controller; the current feedback module is arranged on each joint of the robot, the six-dimensional force sensor is arranged at the tail end of the robot, the current feedback module, the six-dimensional force sensor and the robot controller are all in communication connection with the computer, and the computer executes the method steps as described in any one of the above items when the computer runs.
The invention also provides a computer-readable storage medium having stored thereon a computer program capable of being executed by a processor, which, when executing the computer program, performs the method steps as set forth in any of the above.
By identifying the robot's dynamic parameters with a six-dimensional force sensor, the method and system are simple, accurate in control, and stable.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention uses a reinforced iterative learning process to identify the friction-model parameters; the result is accurate, the implementation is simple, and no complex theoretical derivation is required.
2. The invention collects the relevant robot data while drag teaching is being performed and uses the collected data to re-identify the model parameters and compensate errors, which guarantees the precision of drag teaching.
3. The invention adopts an impedance control law; with suitable control parameters the drag-teaching process runs stably.
Drawings
Fig. 1 is a flowchart of step S1 according to an embodiment of the present invention.
Fig. 2 is a control block diagram of step S2 according to the embodiment of the present invention.
Fig. 3 is a flowchart illustrating the friction parameter identification in step S14 according to the embodiment of the invention.
Fig. 4 is a flowchart illustrating the friction parameter re-identification in step S23 according to the embodiment of the invention.
Fig. 5 is a system composition diagram according to an embodiment of the invention.
In the figure, 1 is an industrial robot body, 2 is a robot controller, 3 is an ethernet cable, 4 is a computer, 5 is a six-dimensional force sensor, and 6 is a current feedback module.
Detailed Description
The invention is further explained below with reference to the drawings and the embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The embodiment provides an industrial robot dragging teaching method which comprises the following steps:
Step S1: establish a robot dynamics model containing friction: calculate the actual motion control quantity C_R and the theoretical motion control quantity C_T in real time, and perform parameter identification of the friction term in the friction-inclusive robot dynamics model; the specific flow is shown in fig. 1;
Step S2: based on the robot dynamics model containing friction, observe the joint torque currently acting on the robot and use the observed joint torque to calculate the motion control quantity C_M that controls the robot's motion; at the same time collect the robot's actual motion quantity Q, and at preset intervals use the motion control quantity C_M and the actual motion quantity Q to perform error compensation on the friction-inclusive robot dynamics model; the corresponding control block diagram is shown in fig. 2;
in this embodiment, step S1 specifically includes the following steps:
Step S11: select an excitation trajectory to drive the robot, collect the force information at the robot end to identify the robot's dynamic parameters, obtain the dynamic parameters of each joint (joint mass, joint centroid coordinates, joint moments of inertia, and products of inertia), and establish a frictionless robot dynamics equation; in this embodiment a Fourier-series joint trajectory is selected as the excitation trajectory;
Step S12: calculate the theoretical joint moment τ_T from the frictionless robot dynamics equation:
τ_T = M(q)·q̈ + C(q,q̇)·q̇ + G(q) − τ_i
where M(q) is the inertia matrix, q̈ the joint acceleration vector, C(q,q̇) the velocity-term matrix accounting for centrifugal and Coriolis forces, q̇ the joint angular-velocity vector, and G(q) the gravity vector; τ_i is the total joint moment, calculated as τ_i = K_i·i, where K_i and i denote the torque coefficient and the joint current respectively, the joint current being collected by the current feedback module mounted on each robot joint;
At the same time, calculate the actual joint moment τ_R with the following formula:
τ_R = J_n^T·(F − R_bt^T·G_tool)
where J_n is the Jacobian matrix relative to the robot tool coordinate system, F is the force vector acting on the robot end, obtained from the six-dimensional force sensor mounted at the robot end, R_bt is the rotation part of the pose matrix of the robot tool coordinate system in the base coordinate system, and G_tool is the gravity vector of the robot end tool in the base coordinate system (so that R_bt^T·G_tool removes the tool's weight, expressed in the tool frame, from the measured force);
Step S13: according to the impedance control law, use the actual joint moment τ_R and the theoretical joint moment τ_T respectively to calculate the robot's actual motion control quantity C_R and theoretical motion control quantity C_T, where the impedance control law is:
q̇(k+1) = (M_d/T + B_d)⁺·(τ(k) + (M_d/T)·q̇(k))
where τ(k) is the joint moment sequence, q̇(k) the joint velocity sequence, B_d the damping parameter matrix, M_d the mass parameter matrix, T the sampling period, and the superscript + denotes the generalized inverse. The motion control quantity is the joint velocity sequence q̇(k+1): substituting τ_R for τ(k) gives the actual motion control quantity C_R, and substituting τ_T gives the theoretical motion control quantity C_T. In this embodiment the parameters to be selected are B_d, M_d and T; the chosen values are B_d = diag(30, 30, 30, 20, 20, 20), M_d = diag(1, 1, 1, 1, 1, 1), and T = 0.1;
Step S14: collect the robot's actual motion control quantity C_R and theoretical motion control quantity C_T for a preset number of samples, and identify the parameters of the friction model with a reinforced iterative learning process to obtain the robot dynamics model containing friction. In this embodiment the friction model is a Coulomb-viscous friction model and the number of samples is set to 1000.
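The embodiment does not spell out which four parameters per joint its Coulomb-viscous model uses, so the sketch below assumes separate Coulomb and viscous coefficients for the positive and negative velocity directions, which matches the stated 6-row, 4-column parameter matrix; the impedance-law parameter values quoted above are repeated for reference.

```python
import numpy as np

def coulomb_viscous_friction(F_p, qd):
    """Hypothetical 4-parameter-per-joint Coulomb-viscous model: separate Coulomb (fc)
    and viscous (fv) coefficients for each velocity direction; F_p has shape (6, 4)."""
    fc_pos, fv_pos, fc_neg, fv_neg = F_p.T            # each of shape (6,)
    tau_f = np.where(qd > 0, fc_pos + fv_pos * qd, -fc_neg + fv_neg * qd)
    tau_f[qd == 0] = 0.0                              # no Coulomb term at standstill
    return tau_f

# Impedance-law parameters chosen in this embodiment:
B_d = np.diag([30.0, 30, 30, 20, 20, 20])   # damping parameter matrix
M_d = np.eye(6)                             # mass parameter matrix, diag(1, ..., 1)
T = 0.1                                     # sampling period
```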
In this embodiment, as shown in fig. 3, the parameter identification of the friction force model in step S14 specifically includes the following steps:
Step S141: determine the gain matrix G and the contraction coefficient λ, define the parameter matrix F_p according to the selected friction model, and assign it the initial value F_p = 0; here λ < 1, and F_p uniquely determines the friction torque τ_f. In this embodiment the parameter matrix corresponding to the selected Coulomb-viscous friction model has 6 rows and 4 columns, and λ is set to 0.8;
step S142: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:
Figure BDA0002281803990000111
where the deviation term ΔC = C_R − C_T; the robot body in this embodiment has six joints, so l = 6, and the number of samples is 1000;
step S143: the enhancement signal R is calculated using the following formula:
Figure BDA0002281803990000112
step S144: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S145: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
Step S146: update the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S147: judge whether the deviation term ΔC converges to zero; if it diverges, update the parameter matrix and the gain matrix with the following formula and go to step S148; otherwise, the parameter identification of the friction term in the friction-inclusive robot dynamics model is complete;
Figure BDA0002281803990000121
Step S148: calculate the friction torque τ_f from the friction model and the parameter matrix F_p, use τ_f to compensate the original theoretical joint moment so that the compensated theoretical joint moment becomes τ_T + τ_f, calculate a new theoretical motion control quantity C_T from the compensated theoretical joint moment according to the impedance control law, and return to step S142.
In this embodiment, steps S142 to S148 are repeated until the deviation term ΔC converges to zero; the corresponding F_p then gives the final friction-model parameters.
In this embodiment, step S2 specifically includes the following steps:
Step S21: based on the robot dynamics model containing friction, use a momentum deviation observer to detect the joint torque τ_e acting on the robot, where the discretized form of the momentum deviation observer is:
r(k) = K·[P(k) − Σ_{j=0..k} (τ_i(j) + C^T(j)·q̇(j) − g(j) − τ_f(j) + r(j−1))·T_s − P(0)]
where r is the observed joint moment acting on the robot, K is the gain coefficient, T_s is the sampling period, C^T(k) is the sequence of velocity-term matrices accounting for centrifugal and Coriolis forces, q̇(k), g(k) and τ_f(k) are the joint velocity, gravity vector, and friction torque sequences respectively, τ_i(k) is the total joint moment sequence, and P is the generalized momentum, calculated as:
P = M(q)·q̇;
Step S22: calculate the motion control quantity C_M with the impedance control law:
q̇(k+1) = (M_d/T + B_d)⁺·(τ(k) + (M_d/T)·q̇(k))
where substituting τ_e for τ(k) gives the joint velocity sequence q̇(k+1), which is the robot's motion control quantity C_M; use C_M to control the robot's motion;
Step S23: collect the robot motion control quantity C_M in real time and, at the same time, collect the actual motion quantity Q provided by the robot controller, storing both in a buffer; when the buffer is full, re-identify the friction-model parameters with the reinforced iterative learning process and compensate the error. In this embodiment the buffer holds 100 samples.
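The structure of this on-line phase can be summarized by the following sketch, in which the observer, the impedance law, the controller I/O, and the re-identification routine are passed in as placeholder callables; only the buffering and the re-identification trigger follow the text, everything else is an assumption of this sketch.

```python
import numpy as np

BUFFER_SIZE = 100   # buffer capacity used in this embodiment (samples)

def drag_teaching_loop(observe_external_torque, impedance_step,
                       read_robot_state, send_velocity, reidentify_friction):
    """On-line drag teaching (steps S21 to S23): observe tau_e, command C_M,
    and re-identify the friction parameters whenever the buffer fills up."""
    buf_CM, buf_Q = [], []
    qd = np.zeros(6)
    while True:
        state = read_robot_state()                 # joint positions, velocities, currents, ...
        tau_e = observe_external_torque(state)     # momentum-observer estimate of the drag torque
        C_M = impedance_step(tau_e, qd)            # motion control quantity C_M
        send_velocity(C_M)                         # command the robot
        qd = C_M
        buf_CM.append(C_M)
        buf_Q.append(state["actual_motion"])       # actual motion quantity Q from the controller
        if len(buf_CM) >= BUFFER_SIZE:             # buffer full: re-identify and compensate
            reidentify_friction(np.array(buf_CM).T, np.array(buf_Q).T)
            buf_CM.clear()
            buf_Q.clear()
```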
In this embodiment, as shown in fig. 4, the parameter re-identification of the friction force model in step S23 specifically includes the following steps:
Step S231: determine the gain matrix G and the contraction coefficient λ, define the parameter matrix F_p according to the selected friction model, and assign it the initial value F_p = 0; here λ < 1, and F_p uniquely determines the friction torque τ_f. In this embodiment the parameter matrix corresponding to the selected Coulomb-viscous friction model has 6 rows and 4 columns, and λ is set to 0.6 to speed up the identification;
step S232: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:
Figure BDA0002281803990000133
where the deviation term ΔC = C_M − Q; the robot in this embodiment has six joints, so l = 6, and because the buffer holds 100 samples, the number of samples t = 100;
step S233: the enhancement signal R is calculated using the following formula:
Figure BDA0002281803990000134
step S234: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S235: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
Step S236: update the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S237: judge whether the deviation term ΔC converges to zero; if it diverges, update the parameter matrix and the gain matrix with the following formula and go to step S238; otherwise, the parameter re-identification of the friction term in the friction-inclusive robot dynamics model is complete;
Figure BDA0002281803990000141
Step S238: calculate the friction torque τ_f from the friction model and the parameter matrix F_p, substitute this τ_f for the original τ_f in the discretized momentum deviation observer formula to update the observer, use the updated observer formula to calculate the motion control quantity C_M, and return to step S232 when the buffer is next full.
This embodiment repeats steps S232 to S238 until the deviation term ΔC converges to zero, then replaces the original friction-model parameters with the corresponding F_p, realizing error compensation.
The embodiment also provides an industrial robot dragging teaching system, as shown in fig. 5, which includes an industrial robot body, a current feedback module, a six-dimensional force sensor, a computer and a robot controller; the current feedback module is arranged on each joint of the robot, the six-dimensional force sensor is arranged at the tail end of the robot, the current feedback module, the six-dimensional force sensor and the robot controller are all in communication connection with the computer, and the computer executes the method steps as described in any one of the above items when the computer runs.
The present embodiment also provides a computer-readable storage medium having stored thereon a computer program capable of being executed by a processor, which, when executing the computer program, performs the method steps as set forth in any of the above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is directed to preferred embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow. However, any simple modification, equivalent change and modification of the above embodiments according to the technical essence of the present invention are within the protection scope of the technical solution of the present invention.

Claims (6)

1. An industrial robot dragging teaching method is characterized by comprising the following steps:
Step S1: establish a robot dynamics model containing friction: calculate the actual motion control quantity C_R and the theoretical motion control quantity C_T in real time, and perform parameter identification of the friction term in the friction-inclusive robot dynamics model;
Step S2: based on the robot dynamics model containing friction, observe the joint torque currently acting on the robot and use the observed joint torque to calculate the motion control quantity C_M that controls the robot's motion; at the same time collect the robot's actual motion quantity Q, and at preset intervals use the motion control quantity C_M and the actual motion quantity Q to perform error compensation on the friction-inclusive robot dynamics model;
wherein, step S1 specifically includes the following steps:
Step S11: select an excitation trajectory to drive the robot, collect the force information at the robot end to identify the robot's dynamic parameters, obtain the dynamic parameters of each joint (joint mass, joint centroid coordinates, joint moments of inertia, and products of inertia), and establish a frictionless robot dynamics equation;
Step S12: calculate the theoretical joint moment τ_T from the frictionless robot dynamics equation:
τ_T = M(q)·q̈ + C(q,q̇)·q̇ + G(q) − τ_i
where M(q) is the inertia matrix, q̈ the joint acceleration vector, C(q,q̇) the velocity-term matrix accounting for centrifugal and Coriolis forces, q̇ the joint angular-velocity vector, and G(q) the gravity vector; τ_i is the total joint moment, calculated as τ_i = K_i·i, where K_i and i denote the torque coefficient and the joint current respectively, the joint current being collected by the current feedback module mounted on each robot joint;
At the same time, calculate the actual joint moment τ_R with the following formula:
τ_R = J_n^T·(F − R_bt^T·G_tool)
where J_n is the Jacobian matrix relative to the robot tool coordinate system, F is the force vector acting on the robot end, obtained from the six-dimensional force sensor mounted at the robot end, R_bt is the rotation part of the pose matrix of the robot tool coordinate system in the base coordinate system, and G_tool is the gravity vector of the robot end tool in the base coordinate system (so that R_bt^T·G_tool removes the tool's weight, expressed in the tool frame, from the measured force);
Step S13: according to the impedance control law, use the actual joint moment τ_R and the theoretical joint moment τ_T respectively to calculate the robot's actual motion control quantity C_R and theoretical motion control quantity C_T, where the impedance control law is:
q̇(k+1) = (M_d/T + B_d)⁺·(τ(k) + (M_d/T)·q̇(k))
where τ(k) is the joint moment sequence, q̇(k) the joint velocity sequence, B_d the damping parameter matrix, M_d the mass parameter matrix, T the sampling period, and the superscript + denotes the generalized inverse. The motion control quantity is the joint velocity sequence q̇(k+1): substituting τ_R for τ(k) gives the actual motion control quantity C_R, and substituting τ_T gives the theoretical motion control quantity C_T;
Step S14: collect the robot's actual motion control quantity C_R and theoretical motion control quantity C_T for a preset number of samples, and identify the parameters of the friction model with a reinforced iterative learning process to obtain the robot dynamics model containing friction.
2. An industrial robot drag teaching method according to claim 1 wherein the parameter identification of the friction force model in step S14 comprises the following steps:
Step S141: determine the gain matrix G and the contraction coefficient λ, define the parameter matrix F_p according to the selected friction model, and assign it the initial value F_p = 0; here λ < 1, and F_p uniquely determines the friction torque τ_f;
Step S142: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:
Figure FDA0003103766130000026
where the deviation term ΔC = C_R − C_T;
Step S143: the enhancement signal R is calculated using the following formula:
Figure FDA0003103766130000031
step S144: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S145: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
Step S146: update the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S147: judge whether the deviation term ΔC converges to zero; if it diverges, update the parameter matrix and the gain matrix with the following formula and go to step S148; otherwise, the parameter identification of the friction term in the friction-inclusive robot dynamics model is complete;
Figure FDA0003103766130000032
Step S148: calculate the friction torque τ_f from the friction model and the parameter matrix F_p, use τ_f to compensate the original theoretical joint moment so that the compensated theoretical joint moment becomes τ_T + τ_f, calculate a new theoretical motion control quantity C_T from the compensated theoretical joint moment according to the impedance control law, and return to step S142.
3. An industrial robot drag teaching method according to claim 1 wherein the step S2 comprises the following steps:
Step S21: based on the robot dynamics model containing friction, use a momentum deviation observer to detect the joint torque τ_e acting on the robot, where the discretized form of the momentum deviation observer is:
r(k) = K·[P(k) − Σ_{j=0..k} (τ_i(j) + C^T(j)·q̇(j) − g(j) − τ_f(j) + r(j−1))·T_s − P(0)]
where r is the observed joint moment acting on the robot, K is the gain coefficient, T_s is the sampling period, C^T(k) is the sequence of velocity-term matrices accounting for centrifugal and Coriolis forces, q̇(k), g(k) and τ_f(k) are the joint velocity, gravity vector, and friction torque sequences respectively, τ_i(k) is the total joint moment sequence, and P is the generalized momentum, calculated as:
P = M(q)·q̇;
Step S22: calculate the motion control quantity C_M with the impedance control law:
q̇(k+1) = (M_d/T + B_d)⁺·(τ(k) + (M_d/T)·q̇(k))
where substituting τ_e for τ(k) gives the joint velocity sequence q̇(k+1), which is the robot's motion control quantity C_M; use C_M to control the robot's motion;
Step S23: collect the robot motion control quantity C_M in real time and, at the same time, collect the actual motion quantity Q provided by the robot controller, storing both in a buffer; when the buffer is full, re-identify the friction-model parameters with the reinforced iterative learning process and compensate the error.
4. A method for teaching industrial robot dragging according to claim 3, wherein the parameter re-identification of the friction force model in step S23 comprises the following steps:
Step S231: determine the gain matrix G and the contraction coefficient λ, define the parameter matrix F_p according to the selected friction model, and assign it the initial value F_p = 0; here λ < 1, and F_p uniquely determines the friction torque τ_f;
Step S232: obtaining an error matrix E with the size of l rows and t columns, wherein l represents the number of joints of the robot, t represents the number of samples, and the specific calculation formula of the error matrix is as follows:
Figure FDA0003103766130000051
where the deviation term ΔC = C_M − Q;
Step S233: the enhancement signal R is calculated using the following formula:
Figure FDA0003103766130000052
Step S234: the learning direction matrix S is calculated using the following equation:
S=sign[R(i,k)-R(i,k-1)];
in the formula, sign represents a sign taking function;
step S235: the learning matrix L is calculated using the following equation:
L=diag(R(:,k))diag(S)G;
Step S236: update the parameter matrix F_p using the following equation:
F_p = F_p − L;
Step S237: judge whether the deviation term ΔC converges to zero; if it diverges, update the parameter matrix and the gain matrix with the following formula and go to step S238; otherwise, the parameter re-identification of the friction term in the friction-inclusive robot dynamics model is complete;
Figure FDA0003103766130000053
Step S238: calculate the friction torque τ_f from the friction model and the parameter matrix F_p, substitute this τ_f for the original τ_f in the discretized momentum deviation observer formula to update the observer, use the updated observer formula to calculate the motion control quantity C_M, and return to step S232 when the buffer is next full.
5. A dragging teaching system of an industrial robot is characterized by comprising an industrial robot body, a current feedback module, a six-dimensional force sensor, a computer and a robot controller; the current feedback module is arranged on each joint of the robot, the six-dimensional force sensor is arranged at the tail end of the robot, and the current feedback module, the six-dimensional force sensor and the robot controller are all in communication connection with the computer, and the computer executes the method steps according to any one of claims 1-4 when running.
6. A computer-readable storage medium, on which a computer program is stored which can be executed by a processor, characterized in that the processor, when executing the computer program, performs the method steps of any of claims 1-4.
CN201911144507.3A 2019-11-20 2019-11-20 Industrial robot dragging demonstration method and system Active CN110815190B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911144507.3A CN110815190B (en) 2019-11-20 2019-11-20 Industrial robot dragging demonstration method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911144507.3A CN110815190B (en) 2019-11-20 2019-11-20 Industrial robot dragging demonstration method and system

Publications (2)

Publication Number Publication Date
CN110815190A CN110815190A (en) 2020-02-21
CN110815190B true CN110815190B (en) 2021-07-27

Family

ID=69557792

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911144507.3A Active CN110815190B (en) 2019-11-20 2019-11-20 Industrial robot dragging demonstration method and system

Country Status (1)

Country Link
CN (1) CN110815190B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110774269B (en) * 2019-11-26 2020-11-20 上海节卡机器人科技有限公司 Robot drag teaching method and device, electronic device and storage medium
CN112454333B (en) * 2020-11-26 2022-02-11 青岛理工大学 Robot teaching system and method based on image segmentation and surface electromyogram signals
CN114619440B (en) * 2020-12-10 2024-02-09 北京配天技术有限公司 Method for correcting friction model, robot and computer readable storage medium
CN112847345B (en) * 2020-12-30 2022-04-15 上海节卡机器人科技有限公司 Method and device for determining robot dragging teaching mode
CN112894821B (en) * 2021-01-30 2022-06-28 同济大学 Current method based collaborative robot dragging teaching control method, device and equipment
CN114310851B (en) * 2022-01-27 2023-06-16 华南理工大学 Dragging teaching method of robot moment-free sensor
CN114516052B (en) * 2022-03-23 2023-12-22 杭州湖西云百生科技有限公司 Dynamics control method and system for parallel real-time high-performance multi-axis mechanical arm
CN116442240B (en) * 2023-05-26 2023-11-14 中山大学 Robot zero-force control method and device based on high-pass filtering decoupling

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105479459B (en) * 2015-12-29 2017-11-14 汇川技术(东莞)有限公司 Robot zero-force control method and system
CN106426174B (en) * 2016-11-05 2019-01-11 上海大学 A kind of robotic contact power detection method based on torque observation and Friction identification
CN108582069A (en) * 2018-04-17 2018-09-28 上海达野智能科技有限公司 Robot drags teaching system and method, storage medium, operating system
CN108582078A (en) * 2018-05-15 2018-09-28 清华大学深圳研究生院 A kind of mechanical arm zero-force control method towards direct teaching
CN108839023B (en) * 2018-07-03 2021-12-07 上海节卡机器人科技有限公司 Drag teaching system and method
CN109454625B (en) * 2018-09-12 2021-04-06 华中科技大学 Dragging demonstration method for industrial robot without torque sensor
CN109397265B (en) * 2018-11-13 2020-10-16 华中科技大学 Joint type industrial robot dragging teaching method based on dynamic model
CN109676607B (en) * 2018-12-30 2021-10-29 江苏集萃智能制造技术研究所有限公司 Zero gravity control method without torque sensing

Also Published As

Publication number Publication date
CN110815190A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110815190B (en) Industrial robot dragging demonstration method and system
CN108656112B (en) Mechanical arm zero-force control experiment system for direct teaching
Sharifi et al. Nonlinear model reference adaptive impedance control for human–robot interactions
CN109249397B (en) Six-degree-of-freedom robot dynamics parameter identification method and system
CN108582078A (en) A kind of mechanical arm zero-force control method towards direct teaching
Liu et al. Sensorless haptic control for human-robot collaborative assembly
Fang et al. Skill learning for human-robot interaction using wearable device
CN112218744A (en) System and method for learning agile movement of multi-legged robot
CN108638070A (en) Robot based on dynamic equilibrium loads weight parameter discrimination method
CN112743541A (en) Soft floating control method for mechanical arm of powerless/torque sensor
Shan et al. Structural error and friction compensation control of a 2 (3PUS+ S) parallel manipulator
Sun Kinematics model identification and motion control of robot based on fast learning neural network
Zhang et al. Model-based design of the vehicle dynamics control for an omnidirectional automated guided vehicle (AGV)
CN115122325A (en) Robust visual servo control method for anthropomorphic manipulator with view field constraint
WO2020133881A1 (en) Learning control method for mechanical apparatus, and mechanical apparatus learning control system having learning function
CN116038697A (en) Jeans automatic spraying method and system based on manual teaching
Renawi et al. ROS validation for non-holonomic differential robot modeling and control: Case study: Kobuki robot trajectory tracking controller
CN114800523A (en) Mechanical arm track correction method, system, computer and readable storage medium
Khanesar et al. A Neural Network Separation Approach for the Inclusion of Static Friction in Nonlinear Static Models of Industrial Robots
CN114840947A (en) Three-degree-of-freedom mechanical arm dynamic model with constraint
Wang et al. The simulation of nonlinear model predictive control for a human-following mobile robot
CN113043269B (en) Robot contact force observation system based on robot model
CN114211478A (en) Optimal control method and system for coordinated operation of modular mechanical arm
Rivera et al. Discrete-time modeling and control of an under-actuated robotic system
Lee et al. A decentralized model identification scheme by random-walk rls process for robot manipulators: Experimental studies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant