CN116551670A - Compliant control method based on training robot and readable storage medium - Google Patents

Compliant control method based on training robot and readable storage medium

Info

Publication number
CN116551670A
CN116551670A (application CN202310236480.0A)
Authority
CN
China
Prior art keywords
sliding mode
determining
law
representing
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310236480.0A
Other languages
Chinese (zh)
Inventor
施长城
李国宁
张佳楫
左国坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Institute of Material Technology and Engineering of CAS
Cixi Institute of Biomedical Engineering CIBE of CAS
Original Assignee
Ningbo Institute of Material Technology and Engineering of CAS
Cixi Institute of Biomedical Engineering CIBE of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Institute of Material Technology and Engineering of CAS, Cixi Institute of Biomedical Engineering CIBE of CAS filed Critical Ningbo Institute of Material Technology and Engineering of CAS
Priority to CN202310236480.0A priority Critical patent/CN116551670A/en
Publication of CN116551670A publication Critical patent/CN116551670A/en
Pending legal-status Critical Current

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/1633Programme controls characterised by the control loop compliant, force, torque control, e.g. combined with position control
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention provides a compliant control method based on a training robot and a readable storage medium, wherein the method comprises the following steps: determining a desired trajectory; acquiring a joint angle of a training robot, and establishing a dynamic model according to the joint angle; obtaining a sliding mode surface based on the desired trajectory and the joint angle, and obtaining a sliding mode item according to the sliding mode surface, wherein the sliding mode item is used to compensate for errors of the unmodeled part of the training robot; an interactive force learning law is obtained based on the sliding mode surface, wherein the interactive force learning law is used for estimating man-machine interactive force; obtaining an inertial parameter learning law based on the dynamics model and the sliding mode surface, wherein the inertial parameter learning law is used for compensating errors of modeled parts of the training robot; determining a control law of the training robot according to the sliding mode item, the interactive force learning law and the inertia parameter learning law; and determining a flexible control moment according to the control law, thereby realizing real-time flexible control.

Description

Compliant control method based on training robot and readable storage medium
Technical Field
The invention relates to the technical field of robot control, in particular to a compliant control method based on a training robot and a readable storage medium.
Background
As the number of patients with brain injury and degenerative diseases increases year by year, approximately 80% of stroke patients suffer from motor dysfunction, and nearly half of these experience loss of upper-limb function. At present, passive exercise of the patient's limb is typically delivered by rehabilitation therapists, who repeatedly flex the affected limb to relieve muscle spasm and stimulate neural plasticity. However, therapists are in serious shortage, and this traditional approach is time-consuming and labor-intensive and cannot provide one-to-one, adaptive training for every patient; training robots have therefore emerged as training assistance equipment.
Some training assistance equipment in the prior art can pull the patient's affected limb along a preset trajectory, with a mechanical arm actively drawing the limb at a preset speed along that trajectory to simulate the flexion treatment of a rehabilitation therapist. However, it is difficult for such equipment to determine the appropriate traction force to apply to the patient's upper limb during actual training: if the variation of the traction force is too large, the user's upper limb may be injured by the pulling; if the variation is too small, the training is ineffective and the user's training outcome suffers.
Disclosure of Invention
The problem to be solved by the invention is how to improve the training effect of the training aid.
In order to solve the above problems, the present invention provides a compliant control method based on a training robot, including:
determining a desired trajectory;
acquiring a joint angle of a training robot, and establishing a dynamic model according to the joint angle;
obtaining a sliding mode surface based on the expected track and the joint angle, and obtaining a sliding mode item according to the sliding mode surface, wherein the sliding mode item is used for compensating errors of an unmodeled part of the training robot;
an interactive force learning law is obtained based on the sliding mode surface, wherein the interactive force learning law is used for estimating man-machine interactive force;
obtaining an inertial parameter learning law based on the dynamics model and the sliding mode surface, wherein the inertial parameter learning law is used for compensating errors of modeled parts of the training robot;
determining a control law of the training robot according to the sliding mode item, the interactive force learning law and the inertia parameter learning law;
and determining the flexible control moment according to the control law.
Compared with the prior art, the present invention obtains the dynamics model by performing dynamic modeling on the joint angles of the training robot, and determines the compliance control strategy of the training robot based on the dynamics model. Specifically, a sliding mode surface is determined from the joint angles and the desired trajectory, and the sliding mode item constructed from the sliding mode surface is used to compensate for errors of the part that is not dynamically modeled; because the man-machine interaction force cannot be obtained directly, the interactive force learning law is determined from the sliding mode surface and is used to estimate the man-machine interaction force; because the dynamically modeled part also contains a certain error, the inertia parameter learning law is determined from the dynamics model and the sliding mode surface to estimate the error of the modeled part. After the sliding mode item, the interactive force learning law and the inertia parameter learning law are obtained, the modeling error, the unmodeled error and the man-machine interaction force are compensated on the basis of the dynamics model, the control law of the training robot is obtained, and real-time compliant control of the training robot over the training trajectory is realized.
Optionally, the determining the desired trajectory includes:
and setting a repeated motion path of a preset geometric figure in a Cartesian space as the expected track of the training robot.
Optionally, the obtaining a sliding mode surface based on the desired track and the joint angle, and obtaining a sliding mode item according to the sliding mode surface includes:
determining an actual track of the training robot according to the joint angle;
determining a tracking error according to the expected track and the actual track;
determining a sliding mode surface according to the tracking error;
and determining the sliding mode item according to a preset gain parameter and the sliding mode surface.
Optionally, the obtaining the interactive force learning law based on the sliding mode surface includes:
and determining the interactive force learning law according to the sliding mode surface and a positive diagonal matrix, wherein the positive diagonal matrix is used for influencing the magnitude of the interactive force learning law.
Optionally, the obtaining an inertia parameter learning law based on the dynamics model and the sliding mode surface includes:
linearizing the dynamics model, and expressing a robot dynamics equation under general description through a first matrix and inertia parameters;
solving a minimum inertial parameter combination of the first matrix and the inertial parameters, wherein the minimum inertial parameter combination comprises a minimum inertial parameter set and a second matrix;
the inertial parameter learning law is determined based on the minimum inertial parameter set and the second matrix.
Optionally, the determining the sliding mode item according to the preset gain parameter and the sliding mode surface includes:
determining the tracking error from the difference between the desired trajectory and the actual trajectory, expressed as:
$\tilde{q} = q_d - q$
wherein $\tilde{q}$ represents the tracking error, $q_d$ represents the desired trajectory, and $q$ represents the actual trajectory;
determining the sliding mode surface from the derivative of the tracking error, the positive-definite diagonal matrix and the tracking error, expressed as:
$s = \dot{\tilde{q}} + \Lambda \tilde{q}$
wherein $s$ represents the sliding mode surface, $\dot{\tilde{q}}$ represents the derivative of the tracking error, and $\Lambda$ represents an $n \times n$ positive-definite diagonal matrix;
and determining the sliding mode term $K_d s$ from the sliding mode surface $s$ and the gain parameter $K_d$.
Optionally, the determining the interactive force learning law according to the sliding mode surface and the positive diagonal matrix includes:
determining the interactive force learning law from the sliding mode surface and the positive-definite diagonal matrices $N$ and $\Psi$, wherein $\dot{\hat{\tau}}_{hum}$ represents the derivative of the learning law of the man-machine interaction force and $s$ represents the sliding mode surface.
Optionally, the determining the inertia parameter learning law based on the minimum inertia parameter set and the second matrix includes:
representing the kinetic equation as the product of the first matrix and the inertial parameters, and further as the product of the minimum inertial parameter set and the second matrix:
$\tau_{con} = D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = Y p = \Theta a$
and determining the inertia parameter learning law as
$\dot{\hat{a}} = \Gamma \Theta^{T} s$
wherein $D(q)$ represents the symmetric positive-definite inertia matrix, $C(q,\dot{q})$ represents the Coriolis and centrifugal force matrix, $G(q)$ represents the gravity term matrix, $Y$ represents the first matrix, $p$ represents the inertial parameters, $\tau_{con}$ represents the control law, $a$ represents the minimum inertial parameter set, $\dot{\hat{a}}$ represents the inertia parameter learning law, $\Gamma$ represents a positive-definite diagonal matrix, $\Theta$ represents the second matrix, and $\Theta^{T}$ represents the transpose of the second matrix.
Optionally, the determining the control law of the training robot according to the sliding mode item, the interactive force learning law and the inertia parameter learning law includes:
determining a human-computer interaction force estimated value according to the interaction force learning law;
compensating the unmodeled part of the kinetic model according to the sliding mode term;
determining an error of the modeled part of the dynamic model according to the inertial parameter learning law, and updating the inertial parameter of the training robot through the error;
and obtaining the control law by making differences between the inertia parameter and the human-computer interaction force estimated value and between the inertia parameter and the unmodeled part of the dynamics model.
In another aspect, the present invention also provides a computer readable storage medium having a computer program stored thereon, which when executed by a processor, implements a compliance control method based on a training robot as described above.
The beneficial effects of the computer readable storage medium compared with the prior art are the same as those of the compliant control method based on the training robot, and are not described in detail herein.
Drawings
FIG. 1 is a flow chart of a compliance control method based on a training robot according to an embodiment of the present invention;
FIG. 2 is a strategy control block diagram of a compliant control method based on a training robot in accordance with an embodiment of the present invention;
FIG. 3 is an exemplary diagram of a desired trajectory for a training robot-based compliance control method in accordance with an embodiment of the present invention;
FIG. 4 is a schematic flow chart of a training robot-based compliance control method according to an embodiment of the present invention after refinement of step S300;
fig. 5 is a schematic flow chart of the training robot-based compliance control method according to the embodiment of the present invention after refinement of step S500.
Detailed Description
In order that the above objects, features and advantages of the invention may be more readily understood, a more particular description of the invention is given below with reference to specific embodiments illustrated in the appended drawings. While certain embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that the invention will be understood more thoroughly and completely. It should be understood that the drawings and embodiments of the invention are for illustration purposes only and are not intended to limit the scope of the present invention.
It should be understood that the various steps recited in the method embodiments of the present invention may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the invention is not limited in this respect.
The term "including" and variations thereof as used herein are intended to be open-ended, i.e., including, but not limited to. The term "based on" is based at least in part on. The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments"; the term "optionally" means "alternative embodiments". Related definitions of other terms will be given in the description below. It should be noted that the terms "first," "second," and the like herein are merely used for distinguishing between different devices, modules, or units and not for limiting the order or interdependence of the functions performed by such devices, modules, or units.
It should be noted that references to "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
As shown in fig. 1 and fig. 2, a compliance control method based on a training robot according to an embodiment of the present invention includes:
step S100, determining a desired trajectory.
In an embodiment, the preset training track is used as a desired track, and after the desired track is determined, the mechanical arm is actively displaced by a user to move along a path defined by the desired track.
In an embodiment, the training robot comprises a display device, through which a prescribed desired trajectory is displayed for guiding a user to reproduce a path of the desired trajectory through the robotic arm.
In another embodiment, the training robot has at least two mechanical arms, and the movement track of the first mechanical arm is used as a desired track by actively controlling the movement of the first mechanical arm, and then the desired track specified by the first mechanical arm is reproduced by controlling the second mechanical arm, so that the training of one side limb is realized.
Step S200, acquiring joint angles of the training robot, and establishing a dynamic model according to the joint angles.
In an embodiment, an inertia matrix, a coriolis and centrifugal force matrix, a gravity term matrix and a friction matrix are respectively established through joint angular displacement vectors, and a relation between a control law and an interaction force is established according to the matrices.
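As an illustration of how such a joint-space dynamics model can be assembled, the following sketch builds the inertia, Coriolis/centrifugal and gravity matrices for a planar two-link arm with point masses at the link ends; the masses, lengths and the two-link structure itself are assumed example values, not parameters of the training robot described here.

```python
import numpy as np

def two_link_dynamics(q, dq, m=(1.0, 1.0), l=(0.4, 0.3), g=9.81):
    """Illustrative D(q), C(q, dq), G(q) for a planar 2-link arm with point
    masses at the link ends; not the specific training robot of this patent."""
    m1, m2 = m
    l1, l2 = l
    q1, q2 = q
    dq1, dq2 = dq

    # Inertia matrix D(q)
    d11 = (m1 + m2) * l1**2 + m2 * l2**2 + 2.0 * m2 * l1 * l2 * np.cos(q2)
    d12 = m2 * l2**2 + m2 * l1 * l2 * np.cos(q2)
    d22 = m2 * l2**2
    D = np.array([[d11, d12],
                  [d12, d22]])

    # Coriolis and centrifugal matrix C(q, dq)
    h = -m2 * l1 * l2 * np.sin(q2)
    C = np.array([[h * dq2, h * (dq1 + dq2)],
                  [-h * dq1, 0.0]])

    # Gravity term G(q)
    G = np.array([
        (m1 + m2) * g * l1 * np.cos(q1) + m2 * g * l2 * np.cos(q1 + q2),
        m2 * g * l2 * np.cos(q1 + q2),
    ])
    return D, C, G

D, C, G = two_link_dynamics(q=np.array([0.3, 0.5]), dq=np.array([0.1, -0.2]))
```

A friction matrix would be added in the same way; it is omitted here because, as noted later in the description, friction is offset by pre-compensation.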
And step S300, a sliding mode surface is obtained based on the expected track and the joint angle, and a sliding mode item is obtained according to the sliding mode surface, wherein the sliding mode item is used for compensating errors of an unmodeled part of the training robot.
A sliding mode term is determined by comparing the desired trajectory and the joint angle, wherein the sliding mode term is used to compensate for errors of the unmodeled part of the training robot.
Sliding mode control is a robust control method. Because the dynamics model of the training robot cannot completely represent the physical robot, there is a certain error between the constructed dynamics model and the real model; the sliding mode term is calculated to offset this error in the control law, ensuring that the control produced by the control law is more accurate.
And step S400, obtaining an interactive force learning law based on the sliding mode surface, wherein the interactive force learning law is used for estimating human-computer interaction force.
Human-machine interaction force means force applied by a user to a training robot when performing an interaction task of a specific task object.
In the invention, since no force sensor is used for control, real interaction force data cannot be obtained, and the man-machine interaction force is instead estimated approximately from other known quantities such as the sliding mode surface. In one embodiment, the human-machine interaction force is represented by stiffness, damping, a feed-forward force, and the error between the actual and target positions of the mechanical arm.
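A minimal sketch of such an impedance-style representation is given below, in which the interaction force is built from stiffness, damping, a feed-forward force and the Cartesian position/velocity error and then mapped to joint torques through the arm Jacobian; the parameter values, the function name and the Jacobian are illustrative assumptions rather than the patent's formulation.

```python
import numpy as np

def interaction_force_model(K_h, B_h, f_ff, x, x_target, dx, dx_target, J):
    """Hypothetical impedance-style model of the human-machine interaction
    force: stiffness and damping acting on the Cartesian position/velocity
    error plus a feed-forward force, mapped to joint torques through the
    Jacobian J. Names and values are illustrative assumptions."""
    f_cartesian = K_h @ (x_target - x) + B_h @ (dx_target - dx) + f_ff
    return J.T @ f_cartesian   # equivalent joint-space torques

# Example with assumed stiffness/damping and an identity Jacobian
K_h = np.diag([200.0, 200.0])   # assumed stiffness [N/m]
B_h = np.diag([10.0, 10.0])     # assumed damping [N*s/m]
tau_hum = interaction_force_model(K_h, B_h, f_ff=np.array([0.0, 5.0]),
                                  x=np.zeros(2), x_target=np.array([0.05, 0.0]),
                                  dx=np.zeros(2), dx_target=np.zeros(2),
                                  J=np.eye(2))
```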
The interactive force learning law updates the estimate of the human-computer interaction force according to the actual training condition. Since the human-computer interaction force is a parameter used in determining the control law, updating it in real time through the interactive force learning law allows the control law to produce the most suitable compliant control strategy in real time according to the user's training condition.
And S500, obtaining an inertia parameter learning law based on the dynamics model and the sliding mode surface, wherein the inertia parameter learning law is used for compensating errors of the modeled part of the training robot.
In the dynamic model, a certain error exists between a measured value and an actual value of an inertial parameter of the training robot, the inertial parameter is required to be estimated by calculating an inertial parameter learning law, and a control strategy of the training robot is adjusted in real time according to the track error so as to achieve an ideal control effect.
And step S600, determining the control law of the training robot according to the sliding mode item, the interactive force learning law and the inertia parameter learning law.
In an embodiment, errors of the unmodeled part of the training robot are compensated by the sliding mode item; the human-computer interaction force, which cannot be measured directly, is updated through the interactive force learning law, and the estimated human-computer interaction force is introduced into the control law to be counteracted, so that the trajectory tracking error of the training robot is reduced; the measurement error of the fixed parameters of the modeled part is estimated according to the inertia parameter learning law and then introduced into the control law to be counteracted. The sliding mode item, the interactive force learning law and the inertia parameter learning law are introduced into the dynamics model, and the control law of the training robot is determined on this basis, ensuring that the control torque applied by the training robot to the trainee's body is updated in real time according to the actual training condition, that a compliant control strategy is obtained, and that an ideal control effect is achieved.
And step S700, determining the compliant control moment according to the control law.
Optionally, as shown in fig. 3, the determining the desired trajectory includes:
the repeated motion path of the preset geometric figure is set in the Cartesian space.
In the figure, the X-axis represents time, the Y-axis represents the X-axis coordinate of the trajectory in Cartesian space, and the Z-axis represents the Y-axis coordinate.
In one embodiment, the repetitive motion path comprises a circular path. The preset expected track is set into a circular path, so that the upper limbs of the trainer repeatedly move on the circular path to exercise the movement capacity of the upper limbs.
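For example, a circular desired trajectory of the kind shown in Fig. 3 can be generated as follows; the radius, period and center are assumed example values.

```python
import numpy as np

def circular_desired_trajectory(t, radius=0.1, period=10.0, center=(0.3, 0.0)):
    """Desired Cartesian position and velocity on a repeatedly traced circle;
    radius, period and center are assumed example values."""
    w = 2.0 * np.pi / period
    pos = np.array([center[0] + radius * np.cos(w * t),
                    center[1] + radius * np.sin(w * t)])
    vel = np.array([-radius * w * np.sin(w * t),
                    radius * w * np.cos(w * t)])
    return pos, vel

# Sample the desired path at 100 Hz over one period
ts = np.arange(0.0, 10.0, 0.01)
path = np.array([circular_desired_trajectory(t)[0] for t in ts])
```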
Optionally, as shown in fig. 4, the obtaining a sliding surface based on the desired trajectory and the joint angle includes:
step S310, determining the actual track of the training robot according to the joint angle;
step S320, determining tracking error according to the expected track and the actual track;
step S330, determining a sliding mode surface according to the tracking error;
and S340, determining the sliding mode item according to a preset gain parameter and the sliding mode surface.
In an embodiment, the actual track of the tail end of the mechanical arm of the training robot is determined through the joint angle of the training robot, so that the tracking error is determined according to the difference between the actual track and the expected track, the sliding mode surface and the sliding mode item are further determined, and the error of the unmodeled part is compensated through the sliding mode item.
Optionally, the determining the sliding mode item according to the preset gain parameter and the sliding mode surface includes:
determining the tracking error from the difference between the desired trajectory and the actual trajectory, expressed as:
$\tilde{q} = q_d - q$
wherein $\tilde{q}$ represents the tracking error, $q_d$ represents the desired trajectory, and $q$ represents the actual trajectory;
determining the sliding mode surface from the derivative of the tracking error, the positive-definite diagonal matrix and the tracking error, expressed as:
$s = \dot{\tilde{q}} + \Lambda \tilde{q}$
wherein $s$ represents the sliding mode surface, $\dot{\tilde{q}}$ represents the derivative of the tracking error, and $\Lambda$ represents an $n \times n$ positive-definite diagonal matrix;
and determining the sliding mode term $K_d s$ from the sliding mode surface $s$ and the gain parameter $K_d$.
In one embodiment, $\Lambda$ adjusts the weight of the tracking error $\tilde{q}$ in the sliding mode surface. The larger $\Lambda$ is, the smaller the trajectory error under the action of an external disturbance force (namely the force exerted by the user on the training robot) and the more "rigid" the resulting control law; when $\Lambda$ is too large, the whole control system may become unstable. The smaller $\Lambda$ is, the more compliant the resulting control law under the action of the external disturbance. The gain parameter $K_d$ is a preset value: the larger $K_d$ is, the better the robustness of the sliding mode term $K_d s$, but the greater the torque chattering of the control strategy obtained from the control law and the less accurate the estimation of the human-computer interaction force; too large a value of $K_d$ may also cause system instability.
In another embodiment, $\Lambda$ and $K_d$ are adjusted according to the trajectory tracking error and the interaction force estimation error.
Optionally, the relationship between the sliding mode surface and the reference velocity is:
$s = \dot{q}_r - \dot{q}$
wherein $s$ represents the sliding mode surface, $\dot{q}$ represents the joint velocity vector, and $\dot{q}_r$ represents the reference velocity.
The desired velocity $\dot{q}_d$ is modified by the position error $\tilde{q}$ to form the reference velocity $\dot{q}_r = \dot{q}_d + \Lambda \tilde{q}$. Through the reference velocity, the target velocity and target position can be introduced into the control law to realize trajectory tracking control, and the compliant control strategy is formulated according to the difference between the actual training condition and the desired trajectory.
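These quantities can be computed directly; the sketch below follows the conventions just given ($\tilde{q} = q_d - q$, $\dot{q}_r = \dot{q}_d + \Lambda\tilde{q}$, $s = \dot{q}_r - \dot{q}$), with the numeric joint states chosen only for illustration.

```python
import numpy as np

def sliding_mode_terms(q, dq, qd, dqd, Lam, Kd):
    """Tracking error, reference velocity, sliding mode surface and sliding
    mode term, following the conventions above: q_tilde = qd - q,
    dq_r = dqd + Lam @ q_tilde, s = dq_r - dq, sliding mode term = Kd * s."""
    q_tilde = qd - q                  # tracking error
    dq_r = dqd + Lam @ q_tilde        # reference velocity
    s = dq_r - dq                     # sliding mode surface
    return q_tilde, dq_r, s, Kd * s   # Kd * s is the sliding mode term

# Example with the gains suggested later in the description (Lambda = 8I, Kd = 2)
Lam = 8.0 * np.eye(2)
q_tilde, dq_r, s, smc_term = sliding_mode_terms(
    q=np.array([0.10, 0.20]), dq=np.zeros(2),
    qd=np.array([0.12, 0.18]), dqd=np.array([0.05, -0.05]),
    Lam=Lam, Kd=2.0)
```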
Optionally, the obtaining the interactive force learning law based on the sliding mode surface includes:
and determining the interactive force learning law according to the sliding mode surface and a positive diagonal matrix, wherein the positive diagonal matrix is used for influencing the magnitude of the interactive force learning law.
Optionally, the determining the interactive force learning law according to the sliding mode surface and the positive diagonal matrix includes:
determining the interactive force learning law from the sliding mode surface and the positive-definite diagonal matrices $N$ and $\Psi$, wherein $\dot{\hat{\tau}}_{hum}$ represents the derivative of the learning law of the man-machine interaction force and $s$ represents the sliding mode surface.
In one embodiment, $\tau_{hum}$ represents the interaction force received at each joint of the training robot; since it cannot be measured directly, the human-computer interaction force is estimated through the interactive force learning law. $N$ influences the learning speed of the estimate $\hat{\tau}_{hum}$: the larger $N$ is, the faster the learning and the more "rigid" the system, and when $N$ is too large the system becomes unstable. $\Psi$ also affects the error between the estimated value and the true value: the larger $\Psi$ is, the faster the learning speed but the larger the steady-state error between the estimated interaction force and the actual interaction force; the smaller $\Psi$ is, the slower the learning speed but the smaller the steady-state error between the estimated value and the actual value of the man-machine interaction force.
Optionally, $\Lambda = 8I$, $K_d = 2$, $\Gamma = 1.5I$, $N = 15I$, $\Psi = 0.03I$, where $I$ represents the identity matrix.
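One way to realize such a learning law in discrete time is sketched below; the specific form used, a sliding-surface-driven update with a leakage term through $\Psi$, is an assumption chosen to reproduce the qualitative roles described above for $N$ and $\Psi$, not the patent's verbatim expression.

```python
import numpy as np

def update_tau_hum_estimate(tau_hum_hat, s, N, Psi, dt):
    """Assumed discrete-time form of the interactive force learning law:
    the estimate is driven by the sliding mode surface through N and pulled
    back by a leakage term through Psi, then integrated with a forward Euler
    step. This matches the qualitative behaviour described for N and Psi
    but is not the patent's verbatim expression."""
    dtau_hum_hat = N @ (s - Psi @ tau_hum_hat)   # assumed learning-law derivative
    return tau_hum_hat + dt * dtau_hum_hat

# Example with the gains given above: N = 15I, Psi = 0.03I, 1 kHz control rate
n = 2
tau_hum_hat = np.zeros(n)
tau_hum_hat = update_tau_hum_estimate(tau_hum_hat,
                                      s=np.array([0.2, -0.1]),
                                      N=15.0 * np.eye(n),
                                      Psi=0.03 * np.eye(n),
                                      dt=0.001)
```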
Optionally, as shown in fig. 5, the obtaining an inertia parameter learning law based on the dynamics model and the sliding mode surface includes:
step S510, linearizing the dynamics model, and expressing a robot dynamics equation under general description through a first matrix and inertia parameters;
step S520, solving a minimum inertial parameter combination of the first matrix and the inertial parameters, wherein the minimum inertial parameter combination comprises a minimum inertial parameter set and a second matrix;
step S530, determining the inertia parameter learning law based on the minimum inertia parameter set and the second matrix.
In an embodiment, inertial parameter identification of the mechanical arm of the training robot is realized by linearizing the dynamics equation of the training robot: the dynamics model is converted, in equation form, into torque = first matrix × inertial parameters, where the first matrix contains only quantities related to the joint angles and does not contain the inertial parameters of the mechanical arm. The inertial parameters can therefore be solved using the generalized inverse of the first matrix, and the inertial parameters of each joint of the mechanical arm are identified by a parameter identification method. Because some columns of the first matrix (and the corresponding inertial parameters) are linearly dependent, the linearly dependent columns can be combined to obtain the minimum inertial parameter combination, from which the inertia parameter learning law is obtained; the inertial parameters are then estimated through the inertia parameter learning law.
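A minimal sketch of an online update for the minimum inertial parameter set is given below, assuming a Slotine-Li-style adaptation $\dot{\hat{a}} = \Gamma\Theta^{T}s$ consistent with the symbols used in this description; the regressor values are placeholders, since the true second matrix depends on the specific mechanical arm.

```python
import numpy as np

def update_min_inertial_params(a_hat, Theta, s, Gamma, dt):
    """Assumed Slotine-Li-style update for the minimum inertial parameter set:
    da_hat/dt = Gamma @ Theta.T @ s, integrated with a forward Euler step.
    The sign follows the tracking-error convention q_tilde = qd - q used above."""
    return a_hat + dt * (Gamma @ Theta.T @ s)

# Example: 3 minimum parameters, 2 joints, Gamma = 1.5I as suggested above
Theta = np.array([[0.5, 0.1, 0.0],
                  [0.0, 0.2, 1.0]])        # placeholder second matrix Theta
a_hat = update_min_inertial_params(a_hat=np.zeros(3), Theta=Theta,
                                   s=np.array([0.2, -0.1]),
                                   Gamma=1.5 * np.eye(3), dt=0.001)
```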
Optionally, the friction term is offset by pre-compensation; since the pre-compensation has already offset the friction term of the training robot, the influence of friction is not considered in the compliant control algorithm of the present invention.
Optionally, the determining the inertia parameter learning law based on the minimum inertia parameter set and the second matrix includes:
representing the kinetic equation as the product of the first matrix and the inertial parameters, and further as the product of the minimum inertial parameter set and the second matrix:
$\tau_{con} = D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = Y p = \Theta a$
and determining the inertia parameter learning law as
$\dot{\hat{a}} = \Gamma \Theta^{T} s$
wherein $D(q)$ represents the symmetric positive-definite inertia matrix, $C(q,\dot{q})$ represents the Coriolis and centrifugal force matrix, $G(q)$ represents the gravity term matrix, $Y$ represents the first matrix, $p$ represents the inertial parameters, $\tau_{con}$ represents the control law, $a$ represents the minimum inertial parameter set, $\dot{\hat{a}}$ represents the inertia parameter learning law, $\Gamma$ represents a positive-definite diagonal matrix, $\Theta$ represents the second matrix, and $\Theta^{T}$ represents the transpose of the second matrix.
Here $\Theta$ is shorthand for $Y(q,\dot{q},\dot{q}_r,\ddot{q}_r)$, where $\ddot{q}_r$ is the derivative of the reference velocity $\dot{q}_r$.
The control law is expressed as:
$\tau_{con} = \hat{D}(q)\ddot{q}_r + \hat{C}(q,\dot{q})\dot{q}_r + \hat{G}(q) + K_d s - \hat{\tau}_{hum}$
wherein $\tau_{con}$ represents the control law, $\hat{D}(q)$ represents the estimated value of the inertia matrix, $\hat{C}(q,\dot{q})$ represents the estimated value of the Coriolis and centrifugal force matrix, $\hat{G}(q)$ represents the estimated value of the gravity term matrix, $K_d s$ represents the sliding mode term, and $\hat{\tau}_{hum}$ represents the estimated man-machine interaction force.
Optionally, the determining the control law of the training robot according to the sliding mode item, the interactive force learning law and the inertia parameter learning law includes:
determining a human-computer interaction force estimated value according to the interaction force learning law;
compensating the unmodeled part of the kinetic model according to the sliding mode term;
determining an error of the modeled part of the dynamic model according to the inertial parameter learning law, and updating the inertial parameter of the training robot through the error;
and obtaining the control law by making differences between the inertia parameter and the human-computer interaction force estimated value and between the inertia parameter and the unmodeled part of the dynamics model.
The quantities $\dot{\hat{\tau}}_{hum}$ and $\dot{\hat{a}}$ calculated in the previous steps are integrated and substituted into the formula of the control law; the force applied by the user to the training robot is counteracted through the estimated interaction force, the influence of the modeling inaccuracy of the dynamics model is counteracted through the sliding mode item, and the compliant control strategy in the current state is finally determined from the actual joint angles of the mechanical arm and their difference from the preset desired trajectory.
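Putting the pieces together, one control cycle might look as follows; the regressor, the estimated dynamics matrices and the exact forms and signs of the learning laws are assumptions consistent with the sketches above rather than the patent's verbatim formulas.

```python
import numpy as np

def regressor(q, dq, dq_r, ddq_r):
    """Placeholder second matrix Theta(q, dq, dq_r, ddq_r); the real regressor
    is derived from the linearized dynamics of the specific mechanical arm."""
    return np.column_stack([ddq_r, dq_r, np.cos(q)])

def estimated_dynamics(q, a_hat):
    """Placeholder estimates of D, C, G built from the current parameter
    estimate, chosen so that D_hat @ ddq_r + C_hat @ dq_r + G_hat equals
    regressor(...) @ a_hat for this toy parameterization."""
    n = q.size
    return a_hat[0] * np.eye(n), a_hat[1] * np.eye(n), a_hat[2] * np.cos(q)

def compliant_control_step(q, dq, qd, dqd, ddqd, a_hat, tau_hum_hat,
                           Lam, Kd, Gamma, N, Psi, dt):
    """One control cycle: sliding mode surface, interactive force learning law,
    inertia parameter learning law and the model-based control law."""
    q_tilde = qd - q                    # tracking error
    dq_r = dqd + Lam @ q_tilde          # reference velocity
    ddq_r = ddqd + Lam @ (dqd - dq)     # reference acceleration
    s = dq_r - dq                       # sliding mode surface

    Theta = regressor(q, dq, dq_r, ddq_r)
    D_hat, C_hat, G_hat = estimated_dynamics(q, a_hat)

    # Learning laws (assumed forms), integrated with a forward Euler step
    a_hat = a_hat + dt * (Gamma @ Theta.T @ s)
    tau_hum_hat = tau_hum_hat + dt * (N @ (s - Psi @ tau_hum_hat))

    # Control law: estimated model feedforward + sliding mode term
    # minus the estimated human-machine interaction force
    tau_con = D_hat @ ddq_r + C_hat @ dq_r + G_hat + Kd * s - tau_hum_hat
    return tau_con, a_hat, tau_hum_hat

# One step with the example gains suggested in the description
n = 2
tau_con, a_hat, tau_hum_hat = compliant_control_step(
    q=np.zeros(n), dq=np.zeros(n),
    qd=np.array([0.1, -0.1]), dqd=np.zeros(n), ddqd=np.zeros(n),
    a_hat=np.array([1.0, 0.5, 0.2]), tau_hum_hat=np.zeros(n),
    Lam=8.0 * np.eye(n), Kd=2.0, Gamma=1.5 * np.eye(3),
    N=15.0 * np.eye(n), Psi=0.03 * np.eye(n), dt=0.001)
```

In practice the placeholder regressor and estimated dynamics would be replaced by the robot-specific minimum-parameter regressor obtained from the linearized dynamics model.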
According to another embodiment of the invention, a training robot is provided for implementing the compliance control method based on the training robot, and the training robot comprises a driving arm and a driven arm, wherein the driving arm and the driven arm respectively comprise at least three joints and at least three connecting rods, each joint of the driving arm comprises at least one driving motor, and each joint of the driven arm comprises at least one angle sensor.
An electronic device provided in another embodiment of the present invention includes a memory and a processor; the memory is used for storing a computer program; the processor is configured to implement the compliance control method based on the training robot as described above when executing the computer program.
A further embodiment of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a compliance control method based on a training robot as described above.
An electronic device that can be a server or a client of the present invention will now be described, which is an example of a hardware device that can be applied to aspects of the present invention. Electronic devices are intended to represent various forms of digital electronic computer devices, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
The electronic device includes a computing unit that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) or a computer program loaded from a storage unit into a Random Access Memory (RAM). In the RAM, various programs and data required for the operation of the device may also be stored. The computing unit, ROM and RAM are connected to each other by a bus. An input/output (I/O) interface is also connected to the bus.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
Those skilled in the art will appreciate that implementing all or part of the above-described methods in accordance with the embodiments may be accomplished by way of a computer program stored on a computer readable storage medium, which when executed may comprise the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like. In this application, the units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment of the present invention. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
Although the present disclosure is described above, the scope of protection of the present disclosure is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the disclosure, and these changes and modifications will fall within the scope of the invention.

Claims (10)

1. A compliant control method based on a training robot is characterized by comprising the following steps:
determining a desired trajectory;
acquiring a joint angle of a training robot, and establishing a dynamic model according to the joint angle;
obtaining a sliding mode surface based on the expected track and the joint angle, and obtaining a sliding mode item according to the sliding mode surface, wherein the sliding mode item is used for compensating errors of an unmodeled part of the training robot;
an interactive force learning law is obtained based on the sliding mode surface, wherein the interactive force learning law is used for estimating man-machine interactive force;
obtaining an inertial parameter learning law based on the dynamics model and the sliding mode surface, wherein the inertial parameter learning law is used for compensating errors of modeled parts of the training robot;
determining a control law of the training robot according to the sliding mode item, the interactive force learning law and the inertia parameter learning law;
and determining the flexible control moment according to the control law.
2. The method of training robot-based compliance control of claim 1, wherein the determining the desired trajectory comprises:
and setting a repeated motion path of a preset geometric figure in a Cartesian space as the expected track of the training robot.
3. The method of claim 1, wherein the obtaining a slip form surface based on the desired trajectory and the joint angle, the obtaining a slip form term from the slip form surface comprises:
determining an actual track of the training robot according to the joint angle;
determining a tracking error according to the expected track and the actual track;
determining a sliding mode surface according to the tracking error;
and determining the sliding mode item according to a preset gain parameter and the sliding mode surface.
4. The method of claim 3, wherein determining the slip form term according to the preset gain parameter and the slip form surface comprises:
determining the tracking error from the difference between the desired trajectory and the actual trajectory, expressed as:
$\tilde{q} = q_d - q$
wherein $\tilde{q}$ represents the tracking error, $q_d$ represents the desired trajectory, and $q$ represents the actual trajectory;
determining the sliding mode surface from the derivative of the tracking error, the positive-definite diagonal matrix and the tracking error, expressed as:
$s = \dot{\tilde{q}} + \Lambda \tilde{q}$
wherein $s$ represents the sliding mode surface, $\dot{\tilde{q}}$ represents the derivative of the tracking error, and $\Lambda$ represents an $n \times n$ positive-definite diagonal matrix;
and determining the sliding mode term $K_d s$ from the sliding mode surface $s$ and the gain parameter $K_d$.
5. The method for compliant control over a training robot of claim 1, wherein said obtaining an interactive force learning law based on said slip-form surface comprises:
and determining the interactive force learning law according to the sliding mode surface and a positive diagonal matrix, wherein the positive diagonal matrix is used for influencing the magnitude of the interactive force learning law.
6. The method of claim 5, wherein determining the interactive force learning law from the slip plane and positive diagonal matrix comprises:
determining the interactive force learning law from the sliding mode surface and the positive-definite diagonal matrices $N$ and $\Psi$, wherein $\dot{\hat{\tau}}_{hum}$ represents the derivative of the learning law of the man-machine interaction force and $s$ represents the sliding mode surface.
7. The method of claim 1, wherein the obtaining an inertial parameter learning law based on the dynamics model and the sliding surface comprises:
linearizing the dynamics model, and expressing a robot dynamics equation under general description through a first matrix and inertia parameters;
solving a minimum inertial parameter combination of the first matrix and the inertial parameters, wherein the minimum inertial parameter combination comprises a minimum inertial parameter set and a second matrix;
the inertial parameter learning law is determined based on the minimum inertial parameter set and the second matrix.
8. The method of claim 7, wherein the determining the inertial parameter learning law based on the minimum inertial parameter set and the second matrix comprises:
representing the kinetic equation as the product of the first matrix and the inertial parameters, and further as the product of the minimum inertial parameter set and the second matrix:
$\tau_{con} = D(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = Y p = \Theta a$
and determining the inertia parameter learning law as
$\dot{\hat{a}} = \Gamma \Theta^{T} s$
wherein $D(q)$ represents the symmetric positive-definite inertia matrix, $C(q,\dot{q})$ represents the Coriolis and centrifugal force matrix, $G(q)$ represents the gravity term matrix, $Y$ represents the first matrix, $p$ represents the inertial parameters, $\tau_{con}$ represents the control law, $a$ represents the minimum inertial parameter set, $\dot{\hat{a}}$ represents the inertia parameter learning law, $\Gamma$ represents a positive-definite diagonal matrix, $\Theta$ represents the second matrix, and $\Theta^{T}$ represents the transpose of the second matrix.
9. The method of claim 1, wherein determining the control law of the training robot based on the slip-form term, the interactive force learning law, and the inertial parameter learning law comprises:
determining a human-computer interaction force estimated value according to the interaction force learning law;
compensating the unmodeled part of the kinetic model according to the sliding mode term;
determining an error of the modeled part of the dynamic model according to the inertial parameter learning law, and updating the inertial parameter of the training robot through the error;
and obtaining the control law by making differences between the inertia parameter and the human-computer interaction force estimated value and between the inertia parameter and the unmodeled part of the dynamics model.
10. A computer readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, implements a compliance control method based on a training robot according to any of claims 1-9.
CN202310236480.0A 2023-03-13 2023-03-13 Compliant control method based on training robot and readable storage medium Pending CN116551670A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310236480.0A CN116551670A (en) 2023-03-13 2023-03-13 Compliant control method based on training robot and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310236480.0A CN116551670A (en) 2023-03-13 2023-03-13 Compliant control method based on training robot and readable storage medium

Publications (1)

Publication Number Publication Date
CN116551670A true CN116551670A (en) 2023-08-08

Family

ID=87486803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310236480.0A Pending CN116551670A (en) 2023-03-13 2023-03-13 Compliant control method based on training robot and readable storage medium

Country Status (1)

Country Link
CN (1) CN116551670A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220009104A1 (en) * 2018-10-26 2022-01-13 Franka Emika Gmbh Robot
WO2022007358A1 (en) * 2020-07-08 2022-01-13 深圳市优必选科技股份有限公司 Impedance control method and apparatus, impedance controller, and robot
CN114355771A (en) * 2021-12-15 2022-04-15 荆楚理工学院 Cooperative robot force and position hybrid control method and system
CN114800489A (en) * 2022-03-22 2022-07-29 华南理工大学 Mechanical arm compliance control method based on combination of definite learning and composite learning, storage medium and robot
CN114770478A (en) * 2022-05-18 2022-07-22 南京航空航天大学 Remote variable-stiffness reconfigurable modular exoskeleton and control system and control method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Xiaofeng et al., "Active interaction training control method for exoskeleton-type upper-limb rehabilitation robot based on model-free adaptive control", Acta Automatica Sinica, vol. 42, no. 12, 2 December 2016 (2016-12-02), pages 1899-1914 *

Similar Documents

Publication Publication Date Title
US11790587B2 (en) Animation processing method and apparatus, computer storage medium, and electronic device
US11179855B2 (en) Acceleration compensation method for humanoid robot and apparatus and humanoid robot using the same
CN111546315B (en) Robot flexible teaching and reproducing method based on human-computer cooperation
Brahmi et al. Passive and active rehabilitation control of human upper-limb exoskeleton robot with dynamic uncertainties
Hashemzadeh et al. Nonlinear trilateral teleoperation stability analysis subjected to time-varying delays
JP2010011926A (en) Method for simulating golf club swing
CN110412866A (en) Ectoskeleton list leg cooperative control method based on adaptive iteration study
WO2020118730A1 (en) Compliance control method and apparatus for robot, device, and storage medium
US10967505B1 (en) Determining robot inertial properties
JP2013180380A (en) Control device, control method, and robot apparatus
CN114102600B (en) Multi-space fusion human-machine skill migration and parameter compensation method and system
CN114191791B (en) Rehabilitation robot active control method and device and rehabilitation robot
Yang et al. Neural learning impedance control of lower limb rehabilitation exoskeleton with flexible joints in the presence of input constraints
Han et al. Visual servoing control of robotics with a neural network estimator based on spectral adaptive law
CN113858201A (en) Intention-driven adaptive impedance control method, system, device, storage medium and robot
CN116551670A (en) Compliant control method based on training robot and readable storage medium
Bauer et al. Telerehabilitation with exoskeletons using adaptive robust integral RBF-neural-network impedance control under variable time delays
CN114851171B (en) Gait track tracking control method of lower limb exoskeleton rehabilitation robot
Kim et al. Adaptation of human motion capture data to humanoid robots for motion imitation using optimization
Cao et al. Adaptive sliding mode impedance control in lower limbs rehabilitation robotic
CN114434452B (en) Mirror image mechanical arm control method based on potential energy field and mirror image mechanical arm equipment
CN114905514B (en) Human skill learning method and system for outer limb grasping control
Aloulou et al. A minimum jerk-impedance controller for planning stable and safe walking patterns of biped robots
CN117260718B (en) Self-adaptive load compensation control method and system for four-legged robot
Lima et al. Realistic behaviour simulation of a humanoid robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination