Background
With the development of industrial automation technology, industrial robots play an increasingly important role in production tasks. Limited by technical maturity and implementation cost, however, some complex operations still have to be completed manually, which has driven the emergence of cooperative robots capable of operating in a human-machine shared environment.
Compared with a traditional industrial robot, a cooperative robot does not need an isolated workspace and can work closely with human beings to complete production tasks. For example, on a 3C-product assembly line, the human operator completes the complex assembly tasks while the cooperative robot rapidly and accurately picks and places parts; this division of labor greatly improves production efficiency and reduces production cost. To achieve this cooperation, a safe human-machine interaction environment must be ensured, and the requirements on the accuracy and flexibility of cooperative-robot control are far higher than those of a traditional robot.
At present, industrial robots generally adopt a distributed control mode with a central motion controller and a plurality of servo drivers; this mode is convenient to lay out and simple to apply. Most traditional industrial robots work in a position control mode, in which each joint uses a driver to realize accurate position-loop PID control and receives the commands of the motion controller through a bus. A cooperative robot, however, needs to realize complex algorithms such as feedforward control and compliance control; a distributed architecture suffers from limited signal transmission rate and synchronization problems, and its real-time performance and responsiveness can hardly meet the requirements of a cooperative robot. To solve this problem, drive-control integrated controllers for cooperative robots have been proposed, which feature a compact structure, fast response, high control precision and low cost.
However, the existing integrated control devices still have the following problems in application. Firstly, algorithm implementation still requires an upper-layer controller, which introduces data transmission, real-time performance and synchronization problems between different systems. Secondly, most existing dynamic-model parameter identification algorithms are realized by iterative estimation based on a traditional excitation trajectory and the least-squares method; the modeling is complex, the estimation precision is not high, and parameters that cannot be modeled cannot be identified. Thirdly, the control information for active compliance is mainly obtained through force/torque sensors on the joints, but such sensors are large and unsuitable for a cooperative robot, while smaller force/torque sensors are expensive. Fourthly, the collision detection capability is limited, and the existing safety protection strategies are not suitable for a cooperative robot.
Disclosure of Invention
In order to solve the above problems, the invention provides an intelligent drive-control integrated control system which can identify parameters that cannot be modeled, can carry out sensorless active compliance control, has good collision detection capability and safety protection strategies, and is suitable for use on a cooperative robot.
The technical scheme of the invention is as follows: provided is a drive-control integrated control system for controlling a cooperative robot (6), the drive-control integrated control system including: a control module (1), an intelligent dynamic parameter identification module (2), a sensorless active compliance control module (3), a force feedback man-machine cooperation anti-collision control module (4) and a multi-axis driving module (5). The control module (1) is used for controlling the multi-axis driving module (5) in real time according to preset instructions and/or the signals fed back, in response to the state of the cooperative robot (6), by the intelligent dynamic parameter identification module (2), the sensorless active compliance control module (3) and the force feedback man-machine cooperation anti-collision control module (4), so that the multi-axis driving module (5) controls the movement of the cooperative robot (6). The intelligent dynamic parameter identification module (2) is used for feeding back a mechanical parameter identification signal of the cooperative robot (6) to the control module (1) according to the motion state of the cooperative robot (6). The sensorless active compliance control module (3) is used for feeding back a position signal, a force signal and an environment signal of the cooperative robot (6) to the control module (1) according to the motion state of the cooperative robot (6). The force feedback man-machine cooperation anti-collision control module (4) is used for feeding back a safety state signal of the cooperative robot (6) to the control module (1) according to the motion state of the cooperative robot (6).
Wherein, the intelligent dynamic parameter identification module (2) comprises: a nominal model (21), the nominal model (21) being based on a Lagrangian dynamics model and used for: obtaining, according to the motion state of the cooperative robot (6), the motion speed of any point on each link of the cooperative robot; calculating the kinetic energy of each link during motion and the total kinetic energy of the cooperative robot in motion; calculating the potential energy of each link during motion and the total potential energy of the links relative to a reference potential energy plane; constructing a Lagrange function of the cooperative robot from the total kinetic energy and the total potential energy; and differentiating the Lagrange function to obtain a nominal dynamic equation of the cooperative robot. An actual dynamic model (22) is used for obtaining an actual dynamic equation of the cooperative robot according to preset parameters. A neural network training sample acquisition module (23) is used for setting the cooperative robot into a torque working mode, selecting a smooth torque curve between the minimum and maximum joint torque as the input of the cooperative robot, and acquiring the angular displacement, angular velocity and angular acceleration of each joint through the encoder disc of each joint; the sampling time t is set within a sampling period T, and N groups of data containing torque, angular displacement, angular velocity and angular acceleration are taken as one set of training sample data. A learning optimization module (24) obtains, through the nominal model, a theoretical output value for the torque τ(k) in the sample data; the torque τ(k) and the actual output value in the samples are input into the parameter identification neural network to obtain an output correction value; the theoretical output value and the output correction value are combined to obtain an identification output value; the difference between the actual output value and the identification output value gives the output error; the output error is used to establish the loss function of the parameter identification neural network, and the network is trained to complete the correction of the dynamic model.
Wherein, the nominal dynamic equation is:

$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$

wherein $D(q) \in R^{n \times n}$ is a symmetric positive definite inertia matrix; $C(q,\dot{q}) \in R^{n \times n}$ is the Coriolis and centrifugal force matrix; $G(q) \in R^{n \times 1}$ is the gravity term vector; $q$, $\dot{q}$ and $\ddot{q}$ are the angular displacement, angular velocity and angular acceleration vectors of the mechanical arm joints; and $\tau \in R^{n}$ is the control torque vector of the joints of the mechanical arm.
Wherein, the actual dynamic equation is:

$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + F(\dot{q}) + \tau_d = \tau$

wherein $F(\dot{q})$ represents the friction of the joint movement and $\tau_d$ represents the disturbances in the motion of the mechanical arm.
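The difference between the nominal and actual dynamic equations can be made concrete with a minimal single-link sketch. The link parameters and the viscous-plus-Coulomb friction form below are illustrative assumptions, not values from the invention; the sketch evaluates both models and prints the unmodeled residual torque that the identification stage must account for:

```python
import math

# Assumed single-link arm: D(q) = m*l^2 (inertia), G(q) = m*g*l*sin(q).
m, l, g = 1.0, 0.5, 9.81

def nominal_torque(q, dq, ddq):
    """Nominal model: D(q)*ddq + C*dq + G(q); C = 0 for a single link."""
    return m * l**2 * ddq + m * g * l * math.sin(q)

def actual_torque(q, dq, ddq, b=0.08, fc=0.05):
    """Actual model adds friction F(dq) = b*dq + fc*sign(dq) (assumed form)."""
    friction = b * dq + fc * math.copysign(1.0, dq) if dq != 0 else 0.0
    return nominal_torque(q, dq, ddq) + friction

q, dq, ddq = 0.3, 1.2, 0.5
residual = actual_torque(q, dq, ddq) - nominal_torque(q, dq, ddq)
print(f"unmodeled residual torque: {residual:.4f} N*m")
```

Here the residual equals the friction term exactly; in the invention it is this residual that the parameter identification neural network learns from sampled data.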
The sensorless active compliance control module (3) comprises a position loop (31), a force loop (32) and a force-position hybrid control law output module (33). The position loop (31) comprises an end position input (311), a position selection matrix (312) and a position control law module (313); the end position input (311) is used for inputting an end position signal to the position selection matrix (312), and the position signal processed by the position selection matrix (312) and the position control law module (313) is input to the force-position hybrid control law output module (33). The force loop (32) comprises an end force input (321), a force selection matrix (322), a force control law module (323) and a motor-current-based joint torque estimation module (324); the end force input (321) is used for inputting an end force signal to the force selection matrix (322), the force signal processed by the force selection matrix (322) and the force control law module (323) is input to the force-position hybrid control law output module (33), and the joint torque estimation module (324) feeds back the real-time current of the joint motor (35) to the end force input (321). The force-position hybrid control law output module (33) outputs a force-position hybrid control law signal (G) to the joint motor (35).
The sensorless active compliance control module (3) further comprises a robot kinematics model (314), which feeds back the joint angles and angular velocities of the cooperative robot (6) to the end position input (311).
The sensorless active compliance control module (3) further comprises a compensation module (34), and the compensation module (34) is arranged between the force-position hybrid control law output module (33) and the joint motor (35).
The joint torque estimation module is constructed from the complete robot dynamic equation:

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$

wherein $M \in R^{n \times n}$ is the joint-space inertia matrix; $C \in R^{n \times n}$ is the Coriolis and centripetal force matrix; $G \in R^{n \times 1}$ is the gravity term vector; $q \in R^{n \times 1}$ is the drive joint angle vector; and $\tau \in R^{n \times 1}$ is the drive joint torque. The motor torque $\tau_m$ is related to the drive torque by $\tau = N\tau_m$, wherein $N \in R^{n \times n}$ is a diagonal matrix of the joint reduction ratios and $J_m$ is the motor rotor inertia. In the derivation, the friction term at the motor rotor is substituted into the relation between motor torque and joint torque to obtain the motor-current-based joint torque estimation module, and thereby the force detection output.
Wherein, the force feedback man-machine cooperation anti-collision control module (4) comprises: a dynamic equation establishing module (41) for establishing a link coordinate system on a preset robot platform by the D-H parameter method and establishing the robot dynamic equation according to the Lagrangian dynamics formulation; a collision detection operator and disturbance observer establishing module (42), which constructs a collision detection operator based on the invariant energy of the robot and a disturbance observer based on the variation of the generalized momentum, according to the robot dynamic equation and the momentum equation; a data analysis module (43), which determines the relation between each joint torque and the collision force based on real-time current feedback of the robot system, provides a Jacobian matrix solving method for the robot, and analyzes the effectiveness of collision detection; a safety protection strategy making module (44) for making different safety protection strategies for different collision situations, in combination with actual working conditions, based on the detection result of the collision detection model; a simulation verification and optimization module (45) for verifying and optimizing, by simulation, the effectiveness of the robot collision detection operator and the rationality of the safety protection strategy on an ADAMS-Simulink co-simulation platform; and an actual effect verification module (46) for verifying and evaluating the actual effect of the force-feedback-based obstacle avoidance and protection safety strategy on a preset robot platform.
Wherein, the force feedback man-machine cooperation anti-collision control module (4) further comprises: a monocular-binocular stereo matching module (47) for constructing an SVS-based monocular-binocular stereo matching model, optimizing the geometric constraint conditions in the loss function, and realizing accurate depth estimation of the detected target in a monocular image through a left-right view synthesis process and binocular stereo matching; a convolution feature extraction module (48) for extracting deep convolutional features with a ResNet model from the RGB images collected by the monocular camera; a human skeleton key point processing module (49), which optimizes the structure of a two-branch deep convolutional neural network according to the geometric prior knowledge of human skeleton joints and the association relations among the joints, and processes the joint points and their associations synchronously: one branch performs human skeleton key point regression by combining a probability heat map with an offset, while the other branch detects the joint association information of multiple people in an image and forms human skeleton sequence data through bipartite graph matching; and a human skeleton image data processing module (410) for reconstructing a human skeleton image data set in combination with the characteristics of an industrial man-machine cooperation scene, labeling the joint point data, and obtaining a pose data set oriented to the industrial cooperation scene in combination with manual adjustment.
The invention has the advantages that it can identify parameters which cannot be modeled, carry out active compliance control without a sensor, and provides good collision detection capability and safety protection strategies, and is therefore suitable for use on a cooperative robot.
Detailed Description
Referring to fig. 1, fig. 1 discloses an intelligent drive and control integrated control system for a multi-axis cooperative industrial robot, which comprises: the system comprises a control module 1, an intelligent dynamic parameter identification module 2, a sensorless active compliance control module 3, a force feedback human-computer cooperation anti-collision control module 4 and a multi-axis driving module 5.
The control module 1 is used for controlling the multi-axis driving module 5 in real time according to preset instructions and/or the signals fed back, in response to the state of the cooperative robot 6, by the intelligent dynamic parameter identification module 2, the sensorless active compliance control module 3 and the force feedback man-machine cooperation anti-collision control module 4.
The intelligent kinetic parameter identification module 2 provides a mechanical parameter identification signal of the cooperative robot 6 to the control module 1 in time according to the motion state of the cooperative robot 6.
The sensorless active compliance control module 3 provides the position signal, the force signal and the environment signal of the cooperative robot 6 to the control module 1 in time according to the motion state of the cooperative robot 6.
And the force feedback man-machine cooperation anti-collision control module 4 is used for providing a safety state signal of the cooperation robot 6 to the control module 1 in time according to the motion state of the cooperation robot 6.
And the multi-axis driving module 5 controls the movement of the cooperative robot 6 in real time according to the instruction of the control module 1.
The utility power input interface 7 in fig. 1 supplies power to the control module 1 and the multi-axis driving module 5, and since the control module 1 needs low-voltage direct current, a power adapter 8 is provided between the utility power input interface 7 and the control module 1.
Referring to fig. 2, the intelligent dynamic parameter identification module 2 of the present invention includes a nominal model 21 based on a Lagrangian dynamics model, which obtains the motion speed of any point on each link of the cooperative robot according to the motion state of the cooperative robot 6; calculates the kinetic energy of each link during motion and the total kinetic energy of the whole cooperative robot; calculates the potential energy of each link during motion and the total potential energy of the whole cooperative robot relative to a reference potential energy plane; constructs the Lagrange function of the cooperative robot system from the obtained total kinetic energy and total potential energy; and differentiates the Lagrange function to obtain the nominal dynamic equation of the cooperative robot system.
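The energy bookkeeping behind the nominal model 21 can be sketched numerically. The following illustrative example (a two-link planar arm with point masses at the link ends is an assumption, not the invention's robot) forms the Lagrange function L = T − V from the total kinetic and potential energy, and recovers the gravity torque by differentiating L with respect to the joint angles via finite differences:

```python
import math

# Assumed two-link planar arm with point masses at the link ends.
m1, m2, l1, l2, g = 1.0, 0.8, 0.5, 0.4, 9.81

def lagrangian(q1, q2, dq1, dq2):
    """L = T - V for the assumed two-link arm."""
    v1_sq = (l1 * dq1) ** 2                           # speed^2 of mass 1
    # Mass 2 velocity includes both joint contributions (planar kinematics).
    x2d = -l1 * math.sin(q1) * dq1 - l2 * math.sin(q1 + q2) * (dq1 + dq2)
    y2d =  l1 * math.cos(q1) * dq1 + l2 * math.cos(q1 + q2) * (dq1 + dq2)
    v2_sq = x2d ** 2 + y2d ** 2
    T = 0.5 * m1 * v1_sq + 0.5 * m2 * v2_sq           # total kinetic energy
    V = m1 * g * l1 * math.sin(q1) + m2 * g * (       # total potential energy
        l1 * math.sin(q1) + l2 * math.sin(q1 + q2))
    return T - V

def gravity_torque(q1, q2, eps=1e-6):
    """G(q) = -dL/dq at zero velocity, recovered by finite differences."""
    L0 = lagrangian(q1, q2, 0.0, 0.0)
    g1 = -(lagrangian(q1 + eps, q2, 0.0, 0.0) - L0) / eps
    g2 = -(lagrangian(q1, q2 + eps, 0.0, 0.0) - L0) / eps
    return g1, g2
```

At zero velocity the kinetic energy vanishes, so −∂L/∂q reduces to the gravity term of the nominal dynamic equation; the full equation additionally requires d/dt(∂L/∂q̇), which a symbolic tool would carry out in closed form.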
The actual dynamic model 22 is established on the basis of the nominal model by adding preset parameters that are difficult to model in the actual use of the cooperative robot system, thereby obtaining the actual dynamic equation of the cooperative robot.
The neural network training sample acquisition module 23 is used for acquiring neural network training sample data: the cooperative robot is set into a torque working mode, a smooth torque curve between the minimum and maximum joint torque is selected as the input of the cooperative robot, and the angular displacement, angular velocity and angular acceleration of each joint are acquired through the encoder disc of each joint; the sampling time is set as t within a sampling period T, and N groups of data containing torque, angular displacement, angular velocity and angular acceleration are adopted as one set of training sample data.
The parameter identification neural network training module 24 obtains, through the nominal model, a theoretical output value for the torque τ(k) in the sample data; the torque τ(k) and the actual output value in the samples are input into the parameter identification neural network to obtain an output correction value; the theoretical output value and the output correction value are combined to obtain an identification output value; the difference between the actual output value and the identification output value gives the output error; the output error is used to establish the loss function of the parameter identification neural network; a self-learning evolutionary optimization strategy is adopted; and the neural network is trained to complete the correction of the dynamic model.
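The residual-learning scheme of module 24 (identification output = nominal output + learned correction, trained on the output error) can be sketched with the correction network reduced to a single weight. Everything below is an illustrative assumption: the single-joint nominal model, the synthetic viscous friction acting as the "unmodeled" parameter, and the plain gradient-descent update standing in for the self-learning evolutionary strategy:

```python
import math
import random

random.seed(0)

# Assumed nominal model for one joint: tau_nom = I*ddq + m*g*l*sin(q).
I_nom, mgl = 0.25, 4.9

def nominal(q, dq, ddq):
    return I_nom * ddq + mgl * math.sin(q)

# "True" plant adds unmodeled viscous friction b*dq (the quantity to identify).
b_true = 0.12
def plant(q, dq, ddq):
    return nominal(q, dq, ddq) + b_true * dq

# Training samples: N groups of (q, dq, ddq, actual torque), as in the text.
samples = []
for _ in range(200):
    q, dq, ddq = random.uniform(-1, 1), random.uniform(-2, 2), random.uniform(-3, 3)
    samples.append((q, dq, ddq, plant(q, dq, ddq)))

# Correction "network" reduced to one weight w on dq for illustration:
# identification output = nominal + w*dq; the loss is the squared output
# error between the identification output and the actual output.
w, lr = 0.0, 0.05
for epoch in range(200):
    for q, dq, ddq, tau_actual in samples:
        err = (nominal(q, dq, ddq) + w * dq) - tau_actual
        w -= lr * err * dq   # d(loss)/dw = err * dq

print(f"identified friction coefficient: {w:.4f} (true {b_true})")
```

After training, the correction term reproduces the unmodeled friction, so the corrected dynamic model matches the plant; a real implementation would use a multi-layer network so that corrections need not have a preassigned functional form.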
Preferably, the nominal dynamic equation is:

$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$

wherein $D(q) \in R^{n \times n}$ is a symmetric positive definite inertia matrix; $C(q,\dot{q}) \in R^{n \times n}$ is the Coriolis and centrifugal force matrix; $G(q) \in R^{n \times 1}$ is the gravity term vector; $q$, $\dot{q}$ and $\ddot{q}$ are the angular displacement, angular velocity and angular acceleration vectors of the mechanical arm joints; and $\tau \in R^{n}$ is the control torque vector of the joints of the mechanical arm.
Preferably, the actual dynamic equation is:

$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) + F(\dot{q}) + \tau_d = \tau$

wherein $F(\dot{q})$ represents the friction of the joint movement and $\tau_d$ represents the disturbances in the motion of the mechanical arm.
Preferably, the disturbance includes load variation, modeling error, or/and electrical interference.
Preferably, the parameters difficult to model include friction parameters, clearance parameters or/and deformation parameters of the cooperative robot.
Referring to fig. 3, fig. 3 shows the sensorless active compliance control module 3, which includes a position loop 31 and a force loop 32. The position loop 31 includes an end position input 311, a position selection matrix 312 and a position control law module 313; the end position input 311 is used for inputting an end position signal to the position selection matrix 312, and the position signal processed by the position selection matrix 312 and the position control law module 313 is input to the force-position hybrid control law output module 33. The force loop 32 includes an end force input 321, a force selection matrix 322, a force control law module 323 and a motor-current-based joint torque estimation module 324; the end force input 321 is used for inputting an end force signal to the force selection matrix 322, the force signal processed by the force selection matrix 322 and the force control law module 323 is input to the force-position hybrid control law output module 33, and the joint torque estimation module 324 feeds back the real-time current of the joint motor 35 to the end force input 321. The force-position hybrid control law output module 33 outputs the force-position hybrid control law signal G to the joint motor 35, and the joint motor 35 drives the cooperative robot 6 through the transmission mechanism 36.
In fig. 3, the two reference inputs are respectively the desired end position and the desired contact force $f_d$.
Preferably, the present invention further includes a robot kinematics model 314, which feeds back the joint angles and angular velocities of the cooperative robot 6 to the end position input 311.
Preferably, the present invention further comprises a compensation module 34, wherein the compensation module 34 is interposed between the force position hybrid control law output module 33 and the joint motor 35.
Preferably, the position selection matrix 312 and the force selection matrix 322 are combined into one, referred to as the compliance selection matrix S.
Preferably, the expression of the compliance selection matrix S is:
$S = \mathrm{diag}(s_1, s_2, \ldots, s_n)$;

when the $i$-th degree of freedom at the end of the mechanical arm is under position control, $s_i = 1$; when it is under force control, $s_i = 0$. The compliance selection matrix $S$ is adjusted online according to the force sense information.
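The action of the compliance selection matrix S can be sketched per end-effector axis. The simple proportional position and force laws below are illustrative assumptions (the invention's control laws are produced by modules 313 and 323); only the selection logic s_i ∈ {0, 1} follows the text:

```python
# Force-position hybrid law: S selects position control (s_i = 1) or
# force control (s_i = 0) for each end-effector degree of freedom.
def hybrid_control(s, x, xd, f, fd, kp=2.0, kf=0.5):
    u = []
    for si, xi, xdi, fi, fdi in zip(s, x, xd, f, fd):
        u_pos = kp * (xdi - xi)      # position control law (assumed P-law)
        u_force = kf * (fdi - fi)    # force control law (assumed P-law)
        u.append(si * u_pos + (1 - si) * u_force)
    return u

# DOF 0-1 position-controlled, DOF 2 force-controlled (e.g. pressing on a
# surface while tracking a path in the plane).
S = [1, 1, 0]
u = hybrid_control(S, x=[0.0, 0.0, 0.1], xd=[0.1, 0.0, 0.0],
                   f=[0.0, 0.0, 2.0], fd=[0.0, 0.0, 5.0])
print(u)  # position errors drive axes 0-1; the force error drives axis 2
```

Adjusting S online, as the text describes, amounts to flipping individual s_i between 0 and 1 when the sensed force indicates that an axis has made or lost contact.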
Preferably, the motor-current-based joint torque estimation module 324 is constructed from the complete robot dynamic equation:

$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = \tau$

wherein $M \in R^{n \times n}$ is the joint-space inertia matrix; $C \in R^{n \times n}$ is the Coriolis and centripetal force matrix; $G \in R^{n \times 1}$ is the gravity term vector; $q \in R^{n \times 1}$ is the drive joint angle vector; and $\tau \in R^{n \times 1}$ is the drive joint torque.

The motor torque $\tau_m$ is related to the drive torque by:

$\tau = N\tau_m$;

wherein $N \in R^{n \times n}$ is a diagonal matrix of the joint reduction ratios and $J_m$ is the motor rotor inertia. In the derivation, the friction term at the motor rotor is substituted into the relation between motor torque and joint torque to obtain the motor-current-based joint torque estimation module, thereby obtaining the force detection output:

$\tau_m = \Psi(i)$

wherein $\Psi(i)$ is the mapping module between motor output torque and current.
Referring to fig. 4 and 5, in the present invention, the force feedback man-machine cooperation anti-collision control module 4 includes:
the dynamic equation establishing module 41 is used for establishing a connecting rod coordinate system on a preset robot platform by adopting a D-H parameter method and establishing a robot dynamic equation according to a Lagrange dynamic formula;
the collision detection operator and disturbance observer establishing module 42 is used for constructing a collision detection operator based on the unchanged energy of the robot and a disturbance observer based on the variable quantity of the generalized momentum according to a robot dynamic equation and a momentum equation;
the data analysis module 43 determines the relation between the torque and the collision force of each joint based on the real-time feedback of the current of the robot system, provides a Jacobian matrix solving method for the robot, and analyzes the effectiveness of the detection collision;
the safety protection strategy making module 44 is used for making different safety protection strategies according to different collision situations and by combining actual working conditions based on the detection result of the collision detection model;
the simulation verification and optimization module 45 is used for performing simulation verification and optimization on the effectiveness of the robot collision detection operator and the rationality of the safety protection strategy based on the ADAMS-Simulink combined simulation platform;
and the actual effect verification module 46 is used for verifying and evaluating the actual effect of the obstacle avoidance and protection safety strategy based on force feedback based on a preset robot platform.
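The disturbance observer based on the variation of the generalized momentum (module 42 above) can be sketched for a single joint. The pure-inertia plant, zero commanded torque, observer gain and time step below are illustrative assumptions; the full observer additionally carries the Coriolis and gravity terms of the dynamic equation:

```python
# 1-DOF discrete generalized-momentum disturbance observer (minimal sketch):
# r = K * (p - integral of (tau_cmd + r) dt), with momentum p = M*dq.
M, K, dt = 0.25, 50.0, 1e-3   # inertia, observer gain, time step (assumed)

def simulate(tau_ext_fn, steps=2000):
    dq, integral, r = 0.0, 0.0, 0.0
    history = []
    for k in range(steps):
        tau_ext = tau_ext_fn(k * dt)   # unknown collision torque
        dq += (tau_ext / M) * dt       # plant: M*ddq = tau_ext (assumed)
        integral += r * dt             # commanded torque is zero here
        r = K * (M * dq - integral)    # observer residual
        history.append(r)
    return history

# A 1.0 N*m collision torque applied after t = 1 s; the residual r
# converges to it, so thresholding |r| detects the collision without
# any joint torque sensor.
residual = simulate(lambda t: 1.0 if t >= 1.0 else 0.0)
```

The residual behaves as a first-order estimate of the external torque with time constant 1/K, which is why the observer gain trades detection speed against noise sensitivity.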
As an improvement of the present invention, the safety protection strategy includes: 1. stopping after collision: after the robot control system detects a collision signal, the control system immediately disables the servo driver; or 2. switching the control mode after collision: the robot control system switches from position mode to torque mode; or 3. after collision, the robot changes its original motion trajectory and leaves the collision area.
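Selecting among the three protection strategies can be sketched as a severity-based dispatcher. The thresholds and the rule that severity picks the strategy are hypothetical illustrations; the invention only specifies that different strategies are made for different collision situations in combination with working conditions:

```python
from enum import Enum

class Strategy(Enum):
    STOP = 1          # 1. disable the servo drivers immediately
    TORQUE_MODE = 2   # 2. switch from position mode to torque mode
    RETREAT = 3       # 3. leave the collision area along a new trajectory

def choose_strategy(collision_torque, hard_limit=8.0, soft_limit=3.0):
    """Hypothetical dispatcher: severe impacts stop the robot, moderate
    ones make it compliant, light ones trigger a retreat motion.
    The limits (N*m) are assumed values, not from the invention."""
    severity = abs(collision_torque)
    if severity >= hard_limit:
        return Strategy.STOP
    if severity >= soft_limit:
        return Strategy.TORQUE_MODE
    return Strategy.RETREAT
```

The dispatcher input would typically be the disturbance-observer residual described above, so detection and protection share the same sensorless signal path.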
Preferably, the present invention further comprises:
the monocular binocular stereo matching module 47 is used for constructing a monocular binocular stereo matching model based on SVS, optimizing geometric constraint conditions on a loss function, and realizing accurate estimation of the depth of a detection target in a monocular image through a left-right view synthesis process and binocular stereo matching;
the convolution feature extraction module 48 is used for performing deep convolution feature extraction by adopting a ResNet model based on the RGB image acquired by the monocular camera;
the human skeleton key point processing module 49 optimizes the structure design of the double-branch depth convolution neural network according to the geometric priori knowledge of the human skeleton joints and the correlation relationship among the joints, realizes the synchronous processing of the joint points and the correlation relationship among the joints, one branch performs human skeleton key point regression in a mode of combining a probability heat map and an offset, and one branch detects the joint correlation information of a plurality of people in an image and forms human skeleton sequence data through bipartite map matching;
the human body skeleton image data processing module 410 is used for reconstructing a human body skeleton image data set by combining the characteristics of an industrial man-machine cooperation scene based on a Microsoft COCO data set, carrying out joint point data annotation by adopting open source alphaphase of Shanghai university of transportation, and obtaining an attitude data set facing the industrial cooperation scene by combining manual adjustment.