CN116604532A - Intelligent control method for upper limb rehabilitation robot - Google Patents

Intelligent control method for upper limb rehabilitation robot

Info

Publication number
CN116604532A
CN116604532A (application CN202310533522.7A)
Authority
CN
China
Prior art keywords
upper limb
rehabilitation robot
limb rehabilitation
learning
tracks
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310533522.7A
Other languages
Chinese (zh)
Inventor
李可
张娜
张付凯
王聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN202310533522.7A priority Critical patent/CN116604532A/en
Publication of CN116604532A publication Critical patent/CN116604532A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H1/00Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
    • A61H1/02Stretching or bending or torsioning apparatus for exercising
    • A61H1/0274Stretching or bending or torsioning apparatus for exercising for the upper limbs
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/0081Programme-controlled manipulators with master teach-in means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/161Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/18Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form
    • G05B19/408Numerical control [NC], i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by data handling or data format, e.g. reading, buffering or conversion of data
    • G05B19/4086Coordinate conversions; Other special calculations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5007Control means thereof computer controlled
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/35Nc in input of data, input till input file format
    • G05B2219/35356Data handling
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Robotics (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Primary Health Care (AREA)
  • Manufacturing & Machinery (AREA)
  • Pain & Pain Management (AREA)
  • Rehabilitation Therapy (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Fuzzy Systems (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides an intelligent control method and system for an upper limb rehabilitation robot. A tracking error is designed for the uncertain discrete-time dynamic model of an actual mechanical arm, and an adaptive neural network controller is then constructed based on discrete-time deterministic learning theory, which can accurately model/learn the unknown internal dynamics along a periodic track; an experience-based learning controller is constructed from the learned knowledge. Combined with a human-robot skill transfer method, the control performance of the rehabilitation robot in uncertain environments is improved. The invention achieves fast convergence, high accuracy and good transient dynamic performance, and is of great significance for improving the efficiency of rehabilitation training with the upper limb rehabilitation robot.

Description

Intelligent control method for upper limb rehabilitation robot
Technical Field
The invention belongs to the technical field of robot control, and relates to an intelligent control method of an upper limb rehabilitation robot.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Upper limb motor dysfunction is common after aging, stroke, motor injury or spinal cord injury. According to neurorehabilitation theory, repetitive and task-oriented training has a positive effect on rebuilding upper limb nerve and muscle function. Compared with conventional manual therapy, which suffers from high labor intensity, long duration, high treatment cost, poor persistence and poor repeatability, robot-assisted rehabilitation has proved to be an effective treatment for promoting neural plasticity and motor function recovery in patients.
The control strategy is one of the cores of an upper limb rehabilitation robot, but most current control strategies for upper limb rehabilitation robots are coarse: they cannot learn human behavior, and their control accuracy and transient performance are severely limited in an uncertain discrete-time system environment.
Disclosure of Invention
The invention provides an intelligent control method for an upper limb rehabilitation robot to solve the above problems. The method is based on discrete-time deterministic learning (DTDL) theory and consists of an adaptive neural network controller (ANNC) and an experience-based learning controller (LC), which are used for knowledge acquisition and knowledge utilization, respectively. A tracking error is designed for the uncertain discrete-time dynamic model of the actual mechanical arm, and a suitable ANNC satisfying the persistent excitation (PE) condition is constructed. Combined with a human-robot skill transfer (HRST) method, the unknown internal dynamics can be accurately modeled/learned along a periodic track, and the learned knowledge is then used to construct the LC, improving the control performance of the rehabilitation robot in uncertain environments. The invention achieves fast convergence, high accuracy and good transient dynamic performance, and is of great significance for improving the efficiency of rehabilitation training with the upper limb rehabilitation robot.
According to some embodiments, the present invention employs the following technical solutions:
an intelligent control method of an upper limb rehabilitation robot comprises the following steps:
acquiring a plurality of teaching tracks, wherein the teaching tracks are obtained according to rehabilitation training tracks customized according to rehabilitation requirements;
determining periodic reference tracks based on the teaching tracks, performing motion characterization and skill modeling on each periodic reference track, and fitting a final reference track;
constructing a discrete-time adaptive radial basis function neural network controller, learning the unknown dynamics of the upper limb rehabilitation robot system during tracking control, approximating/learning the interaction force between the rehabilitation demander and the upper limb rehabilitation robot system, and constructing an experience-based learning controller from the learned knowledge to further improve control performance.
As an alternative embodiment, the specific process of customizing the rehabilitation training track according to the rehabilitation requirement comprises: determining corresponding rehabilitation training actions according to the actual needs of the rehabilitation demander, and obtaining a plurality of periodic reference tracks through multiple teachings by the demonstrator.
As an alternative embodiment, the motion characterization and skill modeling are performed on each periodic reference track, and the specific process of fitting the final reference track includes:
performing time-length alignment on the teaching tracks with inconsistent durations by adopting a spline interpolation algorithm;
aligning the multiple teaching tracks with inconsistent starting positions by using a generalized time warping algorithm;
and integrating the multiple teaching tracks, after duration and starting-position alignment, into a final reference track by using a Gaussian mixture model and Gaussian mixture regression.
Further, when the Gaussian mixture model is used, the parameters adopted by the model are the model parameters of the Gaussian mixture model estimated by the expectation-maximization algorithm.
As an alternative embodiment, the tracking learning process of the discrete time adaptive radial basis function neural network controller includes:
acquiring a discrete time dynamic model of the upper limb rehabilitation robot;
transforming the discrete time dynamic model into an output feedback control form, and establishing a dynamic model in a standard system form;
defining a tracking error by using a back-stepping method, and obtaining an ideal form of the controller by using a dynamic equation of a system error;
the method comprises the steps that a radial basis function neural network is utilized to approximate/learn unknown dynamics of a robot and uncertainty generated by mutual interaction between a rehabilitation demander and an upper limb rehabilitation robot system, and a self-adaptive radial basis function neural network controller is obtained;
and carrying out stability analysis and weight updating on the discrete time self-adaptive radial basis function neural network controller according to the Lyapunov stability theory.
Further, in order to prevent the affine term of the system from interfering with learning, the learning error is converted into a discrete linear time-varying disturbance through a state transformation, and closed-loop learning is performed.
as an alternative embodiment, the process of building an experience-based learning controller includes:
and storing the neural network weight after stable convergence in the tracking control process of the discrete time self-adaptive radial basis function neural network controller as a constant value.
An experience-based learning controller is built using constant neural networks to further improve control performance in the same or similar tasks.
An upper limb rehabilitation robot intelligent control system, comprising:
the fitting module is configured to determine periodic reference tracks based on the acquired teaching tracks, perform motion characterization and skill modeling on each periodic reference track, and fit a final reference track;
the training module is configured as a discrete time self-adaptive radial basis function neural network controller, identifies unknown dynamics in the upper limb rehabilitation robot system in the tracking control process, and approximates/learns and stores interaction force between a rehabilitation demander and the upper limb rehabilitation robot system;
and a learning control module configured as an experience-based learning controller that further improves control performance using knowledge learned from the adaptive radial basis function neural network controller.
A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to perform the steps in the method.
A terminal device comprising a processor and a computer readable storage medium, the processor configured to implement instructions; the computer readable storage medium is for storing a plurality of instructions adapted to be loaded by a processor and to perform the steps in the method.
Compared with the prior art, the invention has the beneficial effects that:
compared with the control strategy in the existing passive training process of the upper limb rehabilitation robot, the control method provided by the invention not only considers the discrete time characteristic of the actual upper limb rehabilitation robot, but also can realize that the tracking error of any periodic track in the passive training finally tends to zero in the neighborhood, and has higher tracking precision and better transient performance.
The training track is formed by integrating and optimizing multiple drag-teaching demonstrations performed by the patient's healthy side according to the patient's needs, so that the patient's requirements and the fine differences between repeated human movements are fully considered.
According to the method, the learning error system is converted into a discrete linear time-varying disturbance system through a state transformation, which solves the problem that learning is impossible due to the uncertain affine term in the discrete-time system of the upper limb rehabilitation robot.
The method can locally and accurately model the unknown dynamics and unpredictable disturbances in the nonlinear system, store the learned experiential knowledge in the form of a constant neural network, and, for the same or similar control tasks, directly invoke the stored knowledge for control without computing the controller parameters online, thereby saving the energy and control time of the rehabilitation training system and further improving the dynamic control performance during transients.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
FIG. 1 is a block diagram of the overall control of an upper limb rehabilitation robot of the present invention;
FIG. 2 is an overall block diagram of the HRST technique of the present invention;
FIG. 3 is a flowchart of an implementation of the intelligent control method of the upper limb rehabilitation robot based on DTDL and HRST in the invention;
fig. 4 is a schematic diagram of an actual implementation of the intelligent control method of the upper limb rehabilitation robot based on DTDL and HRST in the invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is illustrative and is intended to provide further explanation of the invention. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the present invention. As used herein, the singular is also intended to include the plural unless the context clearly indicates otherwise, and furthermore, it is to be understood that the terms "comprises" and/or "comprising" when used in this specification are taken to specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
The intelligent control method of the upper limb rehabilitation robot addresses the uncertainty in the discrete-time system dynamic model of an actual mechanical arm and in the human-robot interaction process. First, an adaptive neural network controller (ANNC) is constructed using the backstepping method and discrete-time deterministic learning (DTDL) theory, which can accurately model/learn the unknown internal dynamics along a periodic track; an experience-based learning controller (LC) is then constructed from the learned knowledge. The control method designed by the invention achieves fast convergence, high accuracy and good transient performance. In addition, for the same or similar control tasks, the learned knowledge can be used directly to achieve fast control without computing the controller parameters online, which shortens the control time and saves system energy.
To address the problems that the training tracks of current rehabilitation robots are mostly preset and poorly personalized, the DTDL-based intelligent control method of the rehabilitation robot is combined with a human-robot skill transfer (HRST) method. The basic idea is to define a teaching-based training mode according to the patient's needs and the requirements of daily life, transfer the patient-specific movement pattern to the rehabilitation robot using the HRST method, and improve the control performance of the rehabilitation robot in an uncertain environment with a control method centered on discrete-time deterministic learning theory. The intelligent control method is of great significance for improving the rehabilitation training efficiency of the upper limb rehabilitation robot.
Firstly, to facilitate understanding of the skilled person, the key technical points related to the present invention are described:
the first part is the construction and learning of the controller.
This part mainly comprises designing the ANNC and the LC according to the discrete time dynamic model of the upper limb rehabilitation robot and DTDL theory.
The upper limb rehabilitation robot is described by a discrete time dynamic model in which $k$ is the discrete time point and $T$ is the discrete time interval; $j_k$ and $v_k$ denote the joint position and joint velocity, with $J_k = j_k + T v_k$; $M(j_k)$ and $M(J_k)$ are inertia matrices; $F(j_k, v_k)$ is the Coriolis, centrifugal and gravity torque term; $\tau_k$ is the control input torque; and $\tau^h_k$ denotes the human-robot interaction torque.
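The explicit model formula is not reproduced in this text. As a minimal sketch, assuming the model is a standard Euler-type discretization of two-link manipulator dynamics with the symbols defined above, one simulation step could look as follows; the link parameters, function names and the exact discretization are illustrative assumptions rather than the patent's equations.

```python
import numpy as np

# Hypothetical planar two-link arm parameters (illustrative, not from the patent).
M1, M2, L1, L2, G = 1.0, 0.8, 0.3, 0.25, 9.81

def M(q):
    """Inertia matrix of a planar two-link arm (standard textbook form)."""
    c2 = np.cos(q[1])
    a = M1 * L1**2 + M2 * (L1**2 + L2**2 + 2 * L1 * L2 * c2)
    b = M2 * (L2**2 + L1 * L2 * c2)
    return np.array([[a, b], [b, M2 * L2**2]])

def F(q, v):
    """Coriolis/centrifugal plus gravity torque vector for the same arm."""
    s2 = np.sin(q[1])
    C = M2 * L1 * L2 * s2 * np.array([[-v[1], -(v[0] + v[1])], [v[0], 0.0]])
    g1 = (M1 + M2) * G * L1 * np.cos(q[0]) + M2 * G * L2 * np.cos(q[0] + q[1])
    g2 = M2 * G * L2 * np.cos(q[0] + q[1])
    return C @ v + np.array([g1, g2])

def step(j_k, v_k, tau_k, tau_h_k, T=0.01):
    """One step of an assumed Euler-type discrete model:
    j_{k+1} = J_k = j_k + T * v_k,
    v_{k+1} = v_k + T * M(J_k)^{-1} (tau_k + tau_h_k - F(j_k, v_k))."""
    J_k = j_k + T * v_k
    v_next = v_k + T * np.linalg.solve(M(J_k), tau_k + tau_h_k - F(j_k, v_k))
    return J_k, v_next
```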
The discrete time dynamics of the upper limb rehabilitation robot are then transformed into an output feedback control form, establishing a dynamic model in the standard system form, where $U(X_k) = M^{-1}(J_k)$ and $X_k = [j_k, v_k]^T$.
design error variable:
wherein z is 1 k ,z 2 k For joint position and velocity tracking errors,a joint position reference trajectory that is periodic or cycle-like; θ k The virtual controller is in the form of:
where p is the designed controller gain constant,
from equation (2) and equation (3), we can get:
to ensure closed loop system stability, the required control inputs are selected as:
where m is the designed controller gain constant,is an unknown dynamic in the upper limb rehabilitation robot system.
The discrete time adaptive neural network controller is designed using the discrete dynamics model of the upper limb rehabilitation robot and DTDL, so as to accurately identify (learn) the unknown dynamics of the upper limb rehabilitation robot system during tracking control. Specifically, the neural network is a radial basis function neural network (RBFNN): its output is $\hat{W}_k^T \phi(Y_k)$, where $\hat{W}_k$ is the RBFNN weight estimate, $\phi(Y_k)$ is the vector of radial basis functions, and $Y_k$ is the RBFNN input. Unlike a conventional RBFNN, the RBFNN of the present invention needs to satisfy a persistent excitation (PE) condition to achieve learning.
The neural network weight update law is designed according to Lyapunov stability theory and DTDL theory, where $\Gamma = \Gamma^T > 0$ is a positive diagonal matrix and $\sigma > 0$ is a constant.
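Because the explicit control law and weight update formulas appear only as display equations in the original filing, the following sketch shows a representative discrete-time adaptive RBFNN controller of the kind described: backstepping errors $z_{1,k}$ and $z_{2,k}$, gain constants $p$ and $m$, an RBFNN term compensating the unknown dynamics, and a $\sigma$-modified gradient-type weight update with a (here scalar) gain $\Gamma$. The class names, the choice of NN input, the virtual-control expression and all numerical gains are illustrative assumptions, not the patent's exact formulas.

```python
import numpy as np

class RBF:
    """Gaussian radial basis functions phi(Y_k) with fixed centers and width."""
    def __init__(self, centers, width):
        self.c = np.asarray(centers)              # shape (N, dim)
        self.two_s2 = 2.0 * width ** 2
    def __call__(self, y):
        d2 = np.sum((self.c - y) ** 2, axis=1)
        return np.exp(-d2 / self.two_s2)          # phi(Y_k), shape (N,)

class ANNC:
    """Sketch of a discrete-time adaptive RBFNN controller (gains illustrative)."""
    def __init__(self, rbf, n_joints, gamma=0.05, sigma=1e-3, p=0.2, m=5.0, T=0.01):
        self.rbf, self.T = rbf, T
        self.W = np.zeros((len(rbf.c), n_joints))  # weight estimate W_hat
        self.gamma, self.sigma = gamma, sigma      # Gamma (scalar here), sigma > 0
        self.p, self.m = p, m                      # designed controller gain constants

    def control(self, j_k, v_k, jd_k, jd_next):
        z1 = j_k - jd_k                            # joint position tracking error
        # virtual control: with j_{k+1} = j_k + T*v_k this yields z1_{k+1} = (1 - p)*z1_k
        theta = (jd_next - jd_k) / self.T - self.p * z1 / self.T
        z2 = v_k - theta                           # joint velocity tracking error
        phi = self.rbf(np.concatenate([j_k, v_k, jd_k]))   # assumed NN input Y_k
        tau = -self.m * z2 - self.W.T @ phi        # NN term compensates unknown dynamics
        # sigma-modified gradient-type weight update
        self.W += self.T * self.gamma * (np.outer(phi, z2) - self.sigma * self.W)
        return tau
```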
Learning of the closed-loop system with the adaptive RBFNN is then analyzed.
according to the DTDL theory, when neurons of RBFNN along a periodic training trajectory satisfy the PE condition, both state errors and weight estimates are bounded and exponentially converged. But due to the uncertain affine term U (X in the upper limb rehabilitation robot system k )=M -1 (J k ) The presence of (2) can amplify errors, resulting in situations where learning is impossible. In order to solve the problem, the learning error system is converted into a discrete linear time-varying (DLTV) disturbance system through state conversion, and the system is enabled to be exponentially stable, so that the learning effect is achieved. The method comprises the following steps: order theWherein the method comprises the steps ofAnd->The DLTV system can be expressed as:
in the method, in the process of the invention,D k =T 2 Γφ(Y k )U(X kT (Y k ),/>δ ζ for neural network approximation errors, the ζ subscript indicates that the neurons of RBFNN satisfy the PE condition.
An experience-based learning controller is then designed using the learned knowledge. According to DTDL theory, the neural network weights obtained after stable convergence are stored as a constant value over a time interval $[k_\alpha, k_\beta]$ taken after the system has stably converged.
The obtained constant-weight neural network is then used in the controller in place of the online-updated RBFNN.
for the same control task, the learning controller based on experience has better transient performance and higher control precision, and saves system energy and control time because on-line calculation of controller parameters is not needed, which has extremely important value for rehabilitation training.
The second part is the reproduction of the customized track based on the HRST technique, as shown in fig. 2, and mainly comprises the following steps:
step (1): acquiring teaching data;
Corresponding rehabilitation training actions are selected according to the actual requirements of the patient, and multiple periodic reference tracks are obtained through repeated teaching by the demonstrator.
step (2): action skill representation and modeling;
after the teaching is completed, the acquired plurality of track data are subjected to motion characterization and skill modeling, so that a reference track rich in patient individuation is finally fitted. In this process, the problem of alignment of the teaching trajectories has to be considered. Even if the same person performs a plurality of repeated operations, the teaching trajectory does not have the same length of time and the same starting position. Aiming at the problem that the time length of the teaching track is inconsistent, a spline interpolation algorithm is adopted for filling. Aiming at the problem of inconsistent starting positions, a generalized time warping (Generalized time warping, GTW) algorithm is adopted to align multiple teaching tracks. Consider a time series of m teaching trajectories { U ] 1 ,…,U m GTW minimization cost function is:
wherein W is i And V i Nonlinear time transformation and low-dimensional spatial embedding, respectively, phi (V i ) Andis a regularization function. For each U i The GTW may find a W i And V i So that the sequence V i T U i W i Well aligned with other sequences in the least squares sense. To optimize the cost function, the GTW uses a gaussian-newton algorithm with linear complexity in sequence length to optimize the time-warping function, uses a multi-set canonical correlation analysis to account for differences in dimensions, and uses a more flexible warping model parameterized by a set of monotonic bases to compensate for changes in time space.
The multiple teaching tracks are then integrated into a final reference track by using a Gaussian mixture model and Gaussian mixture regression, and the final control variable is expressed as a weighted combination of the Gaussian components, where $h_i(x)$ are the normalized weights and the remaining parameters are the model parameters of the Gaussian mixture model estimated by the expectation-maximization algorithm.
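A minimal sketch of this integration step, assuming scikit-learn's GaussianMixture as the EM estimator and a one-dimensional time input; the function name, number of components and data layout are illustrative choices rather than the patent's specification.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmr_reference(aligned_demos, n_components=6):
    """Fit a GMM on pooled (time, position) samples from all aligned teaching
    tracks, then use Gaussian mixture regression to output one reference track."""
    n_demos, n_samples, dim = aligned_demos.shape
    t = np.tile(np.linspace(0.0, 1.0, n_samples), n_demos)[:, None]
    X = np.hstack([t, aligned_demos.reshape(-1, dim)])           # rows: [t, xi]
    gmm = GaussianMixture(n_components=n_components, covariance_type="full").fit(X)

    t_query = np.linspace(0.0, 1.0, n_samples)
    ref = np.zeros((n_samples, dim))
    for k, tq in enumerate(t_query):
        # normalized weights h_i(t): component responsibilities under the time marginal
        h = np.array([w * np.exp(-0.5 * (tq - mu[0]) ** 2 / cov[0, 0]) / np.sqrt(cov[0, 0])
                      for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
        h /= h.sum()
        for i in range(n_components):
            mu, cov = gmm.means_[i], gmm.covariances_[i]
            cond_mean = mu[1:] + cov[1:, 0] / cov[0, 0] * (tq - mu[0])
            ref[k] += h[i] * cond_mean
    return ref
```

For example, calling fit_gmr_reference(align_demos(demos)) would produce a fused reference track that could serve as the periodic reference for the controller sketches above.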
Step (3): skill transfer.
After obtaining skill characteristics, the learned movement strategy control variables can be mapped into a controller of the robotic arm, and the robot can reproduce the motor skills of the healthy side of the patient to complete rehabilitation training of the affected side.
In some embodiments, in the task reproduction stage, a suitable upper limb rehabilitation robot controller needs to be selected; the intelligent control method based on discrete-time deterministic learning described in the previous part can be chosen.
As a specific application, as shown in fig. 3 and 4, includes:
1) First, a rehabilitation training track is customized according to the patient's needs, and the patient's healthy side performs multiple teachings (a therapist performs the teaching instead if the healthy side cannot complete it); the final rehabilitation training reference track is obtained after integration and optimization with the algorithm of the second part.
2) The upper limb rehabilitation robot embedded with the ANNC is used to drive the patient's movement, and the neural network approximates/learns the internal dynamics of the upper limb rehabilitation robot and the interaction force between the patient and the robot.
3) An experience-based learning controller is built using the knowledge learned in 2). The experience-based learning controller can act quickly and further improve control performance when the patient later performs the same or similar rehabilitation training tasks.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
While the foregoing description of the embodiments of the present invention has been presented in conjunction with the drawings, it should be understood that it is not intended to limit the scope of the invention, but rather, it is intended to cover all modifications or variations within the scope of the invention as defined by the claims of the present invention.

Claims (10)

1. An intelligent control method of an upper limb rehabilitation robot is characterized by comprising the following steps:
acquiring a plurality of teaching tracks, wherein the teaching tracks are obtained according to rehabilitation training tracks customized according to rehabilitation requirements;
determining periodic reference tracks based on the teaching tracks, performing motion characterization and skill modeling on each periodic reference track, and fitting a final reference track;
constructing a discrete time self-adaptive neural network controller, identifying unknown dynamics in the upper limb rehabilitation robot system in the tracking control process, and approximating/learning and storing interaction forces between a rehabilitation demander and the upper limb rehabilitation robot system; and
constructing an experience-based learning controller using the knowledge learned from the discrete time adaptive neural network controller to further enhance control performance in the same or similar tasks.
2. The intelligent control method of the upper limb rehabilitation robot according to claim 1, wherein the specific process of customizing the rehabilitation training track according to the rehabilitation requirement comprises the following steps: according to the actual demand of the rehabilitation demander, corresponding rehabilitation training actions are determined, and a plurality of periodic reference tracks are obtained through multiple teaching of the demonstrator.
3. The intelligent control method of an upper limb rehabilitation robot according to claim 1, wherein the specific process of performing motion characterization and skill modeling on each periodic reference track and fitting out a final reference track comprises the following steps:
performing alignment on inconsistent time lengths of teaching tracks by adopting a spline interpolation algorithm;
aligning the multiple teaching tracks by using a generalized time warping algorithm according to inconsistent starting positions of the teaching tracks;
and integrating the multiple teaching tracks, after duration and starting-position alignment, into a final reference track by using a Gaussian mixture model and Gaussian mixture regression.
4. The intelligent control method for an upper limb rehabilitation robot according to claim 3, wherein when using a Gaussian mixture model, the parameters adopted by the model are the model parameters of the Gaussian mixture model estimated by an expectation maximization algorithm.
5. The intelligent control method of an upper limb rehabilitation robot according to claim 1, wherein the discrete time adaptive neural network controller is required to use a radial basis function neural network, and the neuron vector is required to satisfy a persistent excitation condition.
6. The intelligent control method of the upper limb rehabilitation robot according to claim 1, wherein the learning process of the discrete time adaptive neural network controller is to perform closed loop learning by converting learning errors into discrete linear time-varying disturbances through state conversion so as to overcome the interference of affine terms.
7. The intelligent control method of an upper limb rehabilitation robot according to claim 1, wherein the neural network weight after stable convergence after learning is stored as a constant value, and an experience-based learning controller is constructed by using the constant value.
8. An upper limb rehabilitation robot intelligent control system, which is characterized by comprising:
the fitting module is configured to determine periodic reference tracks based on the acquired teaching tracks, perform motion characterization and skill modeling on each periodic reference track, and fit a final reference track;
the training module is configured as a discrete time self-adaptive radial basis function neural network controller, identifies unknown dynamics in the upper limb rehabilitation robot system in the tracking control process, and approximates/learns and stores interaction forces between a rehabilitation demander and the upper limb rehabilitation robot system;
and a learning control module configured as an experience-based learning controller that further improves control performance using knowledge learned from the adaptive radial basis function neural network controller.
9. A computer readable storage medium, characterized in that a plurality of instructions are stored, which instructions are adapted to be loaded by a processor of a terminal device and to perform the steps in the method of any of claims 1-7.
10. A terminal device, comprising a processor and a computer readable storage medium, the processor configured to implement instructions; a computer readable storage medium for storing a plurality of instructions adapted to be loaded by a processor and to perform the steps of the method of any of claims 1-7.
CN202310533522.7A 2023-05-09 2023-05-09 Intelligent control method for upper limb rehabilitation robot Pending CN116604532A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310533522.7A CN116604532A (en) 2023-05-09 2023-05-09 Intelligent control method for upper limb rehabilitation robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310533522.7A CN116604532A (en) 2023-05-09 2023-05-09 Intelligent control method for upper limb rehabilitation robot

Publications (1)

Publication Number Publication Date
CN116604532A true CN116604532A (en) 2023-08-18

Family

ID=87677466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310533522.7A Pending CN116604532A (en) 2023-05-09 2023-05-09 Intelligent control method for upper limb rehabilitation robot

Country Status (1)

Country Link
CN (1) CN116604532A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117558406A (en) * 2023-11-01 2024-02-13 河南翔宇医疗设备股份有限公司 Active training method and system based on upper limb rehabilitation
CN117539153A (en) * 2023-11-21 2024-02-09 山东大学 Upper limb rehabilitation robot self-adaptive control method and system based on definite learning
CN117539153B (en) * 2023-11-21 2024-05-28 山东大学 Upper limb rehabilitation robot self-adaptive control method and system based on definite learning


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination