CN113995629B - Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system - Google Patents

Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system Download PDF

Info

Publication number
CN113995629B
CN113995629B CN202111295849.2A CN202111295849A CN113995629B CN 113995629 B CN113995629 B CN 113995629B CN 202111295849 A CN202111295849 A CN 202111295849A CN 113995629 B CN113995629 B CN 113995629B
Authority
CN
China
Prior art keywords
representing
subject
intention
matrix
healthy side
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111295849.2A
Other languages
Chinese (zh)
Other versions
CN113995629A (en
Inventor
李智军
苏航
李国欣
康宇
刘碧珊
王昶茹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Advanced Technology University of Science and Technology of China
Shanghai Robot Industrial Technology Research Institute Co Ltd
Original Assignee
Institute of Advanced Technology University of Science and Technology of China
Shanghai Robot Industrial Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Advanced Technology University of Science and Technology of China, Shanghai Robot Industrial Technology Research Institute Co Ltd filed Critical Institute of Advanced Technology University of Science and Technology of China
Priority to CN202111295849.2A priority Critical patent/CN113995629B/en
Publication of CN113995629A publication Critical patent/CN113995629A/en
Application granted granted Critical
Publication of CN113995629B publication Critical patent/CN113995629B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H1/00Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
    • A61H1/02Stretching or bending or torsioning apparatus for exercising
    • A61H1/0274Stretching or bending or torsioning apparatus for exercising for the upper limbs
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50Control means thereof
    • A61H2201/5007Control means thereof computer controlled
    • A61H2201/501Control means thereof computer controlled connected to external computer devices or networks
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61HPHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2230/00Measuring physical parameters of the user
    • A61H2230/08Other bio-electrical signals
    • A61H2230/085Other bio-electrical signals used as a control parameter for the apparatus
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides an admittance control method and system of an upper limb double-arm rehabilitation robot based on a mirror image force field, wherein the method comprises the following steps: step 1, modeling a human-computer tight coupling healthy side force field based on multi-sensor signal fusion to obtain the exercise intention of the healthy side of a subject; step 2: performing physiological signal and force field mapping of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject; step 3: and synchronous coupling control of the healthy side based on the force field mirror image is performed according to the motion track and intention of the affected side of the subject, so that the motion of the exoskeleton is controlled. Aiming at the important clinical requirement of reconstructing the upper limb movement function after the clinical nerve shift operation, the invention combines the force field control strategy of human-computer tight coupling with the mirror image rehabilitation strategy, explores a new mirror image force field rehabilitation strategy for guiding the action of the affected side based on the information of the force field of the affected side of the patient, is more natural, and improves the participation feeling and the active rehabilitation capability of the patient.

Description

Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system
Technical Field
The invention relates to the technical field of admittance control of upper limb double-arm rehabilitation robots, in particular to an admittance control method and system of an upper limb double-arm rehabilitation robot based on a mirror image force field.
Background
The current peripheral nerve injury is a clinical multiple disease, tens of millions of patients with new trauma are annually, wherein the most serious peripheral nerve injury, such as brachial plexus injury, can cause complete paralysis of one side upper limb, seriously affect the life quality of the patients, and the treatment is a worldwide difficult problem. The currently accepted optimal treatment for brachial plexus avulsion is nerve shift. Rehabilitation is the key to the functional recovery of the upper limb after operation, and the cerebral cortex can be widely remodeled in the recovery process, and the result of the remodelling is critical to clinical prognosis. Studies have shown that following nerve shift surgery, the original functional areas of the affected limbs in the brain are reactivated by remodeling and effective control of the affected limbs is achieved. However, in clinical rehabilitation, many patients have the problems of poor limb movement control, wrong movement pattern and the like, and the root cause is that peripheral nerve innervation and access of the affected limb are greatly changed after nerve shift operation.
Research proves that compared with single unilateral rehabilitation training, the rehabilitation training is completed by guiding the affected side through the healthy side, so that the rehabilitation training is more in accordance with the natural movement mode of the upper limb of the human body, and the rehabilitation training is favorable for the neural plasticity of the semi-brain of the affected side and is more favorable for improving the rehabilitation effect of the movement function of the affected limb of the patient. Mirror therapy is the most mainly used therapy means for guiding the patient side movement by using the health side information in the traditional clinic. Mirror therapy is also called mirror vision feedback therapy, which uses the principle of plane mirror imaging to copy the motion picture of the healthy side to the affected side, so that the patient can imagine the motion of the affected side, and a rehabilitation training therapy means combining optical illusion, vision feedback and virtual reality is used. In the mirror image treatment, after the patient sees the mirror image of the exercise on the healthy side, mirror image neurons of the corresponding cerebral cortex are activated, which helps to restore the exercise function on the affected side. However, the mirror is not immersed in the mirror as a mirror carrier, and the stability and improvement of the clinical research result of the mirror therapy are directly affected. In addition, since the rehabilitation training of the traditional mirror image therapy can only realize the control of the movement track of the upper limb, and neglect the stress state of the muscle group of the affected limb, the improvement of the clinical research effect of the mirror image therapy is directly affected.
The Chinese patent document with publication number of CN109091818A discloses a training method and a training system of a rope traction upper limb rehabilitation robot based on admittance control, wherein when a user performs upper limb joint rehabilitation training exercise, interactive force signals applied to the rope traction upper limb rehabilitation robot by the user and kinematic signals of the upper limb are collected in real time; converting the interaction force signal into a motion parameter of an expected motion track through an admittance model, and determining the motion parameter of a target motion track according to the motion parameter of the expected motion track and the motion signal of the upper limb; and the determined motion parameters are used as control quantity and are converted into motor control quantity of the rope traction rehabilitation robot, and the corresponding motor output is controlled, so that the user can autonomously control the rehabilitation training action, and the active participation of the user is improved.
Aiming at the related technology, the inventor considers that the method is to use a mirror to perform visual feedback for rehabilitation of the affected side after the nerve repair operation, and the patient has poor participation feeling and general rehabilitation effect.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide an upper limb double-arm rehabilitation robot method and system based on a mirror image force field.
The invention provides an admittance control method of an upper limb double-arm rehabilitation robot based on a mirror image force field, which comprises the following steps:
step 1: human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion is adopted to obtain the exercise intention of the healthy side of the subject;
step 2: performing physiological signal and force field mapping of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject;
step 3: and synchronous coupling control of the healthy side based on the force field mirror image is performed according to the motion track and intention of the affected side of the subject, so that the motion of the exoskeleton is controlled.
Preferably, the step 1 includes predicting the movement intention of the subject in real time by the healthy side myoelectric sensor, modeling the acting force in the interaction process as an impedance model, and predicting the joint state of the subject by the impedance model, wherein the impedance model is shown in formula (1):
Figure BDA0003336561920000021
wherein u is h Acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject; x is the actual position of the tail end of the upper limb double-arm rehabilitation robot; x is x r The method is characterized in that the method is a desired position of the tail end of an upper limb double-arm rehabilitation robot; superscript notation represents the derivative of the corresponding state quantity with respect to time; l (L) h,1 Gain for position error; l (L) h,2 Is the speed gain;
estimating the movement intention of the subject by the formula (1), as shown in the formula (2):
Figure BDA0003336561920000022
wherein,,
Figure BDA0003336561920000023
an estimated value representing the exercise intention of the healthy side of the subject, and the superscript symbol is an estimated value of the corresponding quantity;
Figure BDA0003336561920000024
an initial value representing the error gain at any virtual target position; />
Figure BDA0003336561920000025
An initial value representing the velocity gain at any virtual target; the superscript v indicates that the value is based on any initial value given by the virtual target.
Preferably, the step 2 includes the steps of:
step 2.1: according to the steps1 modeling the subject's healthy side force field and physiological electromyographic signals to obtain the exercise intention of the subject's healthy side
Figure BDA0003336561920000031
Step 2.2: the double arms of the healthy side of the subject perform rehabilitation actions along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle.
Preferably, the step 3 includes combining the interaction force generated by the affected side during the interaction with the established affected side motion trajectory and intention, and controlling the motion of the exoskeleton through admittance control, wherein the affected side intention is expressed as:
Figure BDA0003336561920000032
wherein,,
Figure BDA0003336561920000033
subject intent predicted for robust model; τ r Original exercise intention for the affected side model; lambda is a super parameter that adjusts the weight ratio of the two.
Preferably, the admittance control in step 3 includes:
the dynamic equation of the interaction process of the upper limb double-arm rehabilitation robot and the subject is shown in a formula (4):
Figure BDA0003336561920000034
wherein M and G respectively represent an inertial matrix and a gravity matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system; c represents a Coriolis force and centrifugal force matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system; f (f) dis Is a disturbance in the interactive system; u is the control input of the system; the superscript · represents the second derivative of the actual position of the rehabilitation robot tip with respect to time, i.e. the acceleration;
supposing upper limb double arm rehabilitation machineThe derivative of the actual position x of the human end and the actual position of the upper arm rehabilitation robot end with respect to time
Figure BDA0003336561920000035
Is obtained by measurement; let x be 1 =[q 1 ,q 2 ,…,q n ] T ,/>
Figure BDA0003336561920000036
Wherein q i And->
Figure BDA0003336561920000037
Respectively representing the rotation angle and the angular velocity of the ith joint, i is more than or equal to 1 and less than or equal to n; x is x 1 A position matrix formed by the rotation angles of all joints of the robot is shown; x is x 2 A velocity matrix including angular velocities of the joints; the superscript T denotes a transpose; the dynamics of the interaction task is expressed as follows:
Figure BDA0003336561920000038
defining position error z 1 =x 1 -x r Error of speed z 2 =x 21 ,α 1 To z 1 Virtual control of (c) to obtain:
Figure BDA0003336561920000039
using lyapunov function
Figure BDA00033365619200000310
V 1 Representing the constructed function in the form of a lyapunov function; symbol represents matrix multiplication; and (3) deriving time:
Figure BDA00033365619200000311
order the
Figure BDA00033365619200000312
Wherein K is 1 For the gain matrix, reset equation (7) to get:
Figure BDA00033365619200000313
the expression (8) is used for obtaining:
Figure BDA0003336561920000041
definition of Lyapunov function
Figure BDA0003336561920000042
V 2 Representing the constructed function in the form of a lyapunov function; and (3) deriving time:
Figure BDA0003336561920000043
when the parameters of the dynamics are known, the control is expressed in the form:
Figure BDA0003336561920000044
wherein K is 2 Representing a gain matrix;
g, C and M terms of the robot dynamics are approximated using a radial basis neural network; the external disturbance is compensated by a disturbance observer; the self-adaptive control law is as follows:
Figure BDA0003336561920000045
wherein,,
Figure BDA0003336561920000046
is a radial basis function neural network, W is a weight coefficient,y (Z) is a dynamic regression matrix, namely the distance between a sample point and each radial base center, and Z represents the input of a radial function network; let->
Figure BDA0003336561920000047
The form of the high-order disturbance observer is as follows:
Figure BDA0003336561920000048
wherein K is d Representing a gain matrix in the disturbance observation process;
Figure BDA0003336561920000049
representing an estimation error; y is Y d (Z d ) Representing a dynamic regression matrix, Y d (Z d ) Representing a dynamic regression matrix; z is Z d Representing the actual sampling point; w (W) d Representing the weight coefficient; />
Figure BDA00033365619200000410
Figure BDA00033365619200000411
The update of the weight matrix is as follows:
Figure BDA00033365619200000412
Figure BDA00033365619200000413
Figure BDA00033365619200000414
Y d (Z d )W d =M -1 (u+u h -C(x 1 ,x 2 )x 2 -G(x 1 ))-∈ d
wherein Y is i (Z) represents an updated value of the dynamic regression quantity matrix; z 2i An update indicative of a speed error; w (W) i Representing an update of the estimated value; superscript symbol
Figure BDA00033365619200000415
Representing the expected value of the weight derivative; w (W) di An updated value representing the physical parameter; epsilon represents the estimation error; e-shaped article d Representing the expected estimation error; y (Z) W represents the output of the radial basis function; Γ -shaped structure i And Γ di For update rate, θ i And theta di Is the weight.
The invention provides an admittance control system of an upper limb double-arm rehabilitation robot based on a mirror image force field, which comprises the following modules:
module M1: human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion is adopted to obtain the exercise intention of the healthy side of the subject;
module M2: performing physiological signal and force field mapping of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject;
module M3: and synchronous coupling control of the healthy side based on the force field mirror image is performed according to the motion track and intention of the affected side of the subject, so that the motion of the exoskeleton is controlled.
Preferably, the module M1 includes predicting the movement intention of the subject in real time by the healthy side myoelectric sensor, modeling the acting force in the interaction process as an impedance model, and predicting the joint state of the subject by the impedance model, wherein the impedance model is shown in formula (1):
Figure BDA0003336561920000051
wherein u is h Acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject; x is the actual position of the tail end of the upper limb double-arm rehabilitation robot; x is x r The method is characterized in that the method is a desired position of the tail end of an upper limb double-arm rehabilitation robot; superscript notation represents the derivative of the corresponding state quantity with respect to time; l (L) h,1 Is bitSetting an error gain; l (L) h,2 Is the speed gain;
estimating the movement intention of the subject by the formula (1), as shown in the formula (2):
Figure BDA0003336561920000052
wherein,,
Figure BDA0003336561920000053
an estimated value representing the exercise intention of the healthy side of the subject, and the superscript symbol is an estimated value of the corresponding quantity;
Figure BDA0003336561920000054
an initial value representing the error gain at any virtual target position; />
Figure BDA0003336561920000055
An initial value representing the velocity gain at any virtual target; the superscript v indicates that the value is based on any initial value given by the virtual target.
Preferably, the module M2 includes the following modules:
module M2.1: modeling a subject's exercise stress field and physiological myoelectric signals according to the module M1 to obtain the exercise intention of the subject's exercise
Figure BDA0003336561920000056
Module M2.2: the double arms of the healthy side of the subject perform rehabilitation actions along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle.
Preferably, the module M3 includes combining the interaction force generated by the patient side during the interaction with the established patient side motion trajectory and intent to control the movement of the exoskeleton through admittance control, wherein the patient side intent is expressed as:
Figure BDA0003336561920000057
wherein,,
Figure BDA0003336561920000058
subject intent predicted for robust model; τ r Original exercise intention for the affected side model; lambda is a super parameter that adjusts the weight ratio of the two.
Preferably, the admittance control in the module M3 includes:
the dynamic equation of the interaction process of the upper limb double-arm rehabilitation robot and the subject is shown in a formula (4):
Figure BDA0003336561920000061
wherein M and G respectively represent an inertial matrix and a gravity matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system; c represents a Coriolis force and centrifugal force matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system; f (f) dis Is a disturbance in the interactive system; u is the control input of the system; the superscript · represents the second derivative of the actual position of the rehabilitation robot tip with respect to time, i.e. the acceleration;
assume that the actual position x of the upper arm rehabilitation robot tip and the derivative of the actual position of the upper arm rehabilitation robot tip with respect to time
Figure BDA0003336561920000062
Is obtained by measurement; let x be 1 =[q 1 ,q 2 ,…,q n ] T ,/>
Figure BDA0003336561920000063
Wherein q i And->
Figure BDA0003336561920000064
Respectively representing the rotation angle and the angular velocity of the ith joint, i is more than or equal to 1 and less than or equal to n; x is x 1 A position matrix formed by the rotation angles of all joints of the robot is shown; x is x 2 A velocity matrix including angular velocities of the joints; upper partThe label T denotes transposition; the dynamics of the interaction task is expressed as follows:
Figure BDA0003336561920000065
defining position error z 1 =x 1 -x r Error of speed z 2 =x 21 ,α 1 To z 1 Virtual control of (c) to obtain:
Figure BDA0003336561920000066
using lyapunov function
Figure BDA0003336561920000067
V 1 Representing the constructed function in the form of a lyapunov function; symbol represents matrix multiplication; and (3) deriving time:
Figure BDA0003336561920000068
order the
Figure BDA0003336561920000069
Wherein K is 1 For the gain matrix, reset equation (7) to get:
Figure BDA00033365619200000610
the expression (8) is used for obtaining:
Figure BDA00033365619200000611
definition of Lyapunov function
Figure BDA00033365619200000612
V 2 Representation structureA function in the form of a built lyapunov function; and (3) deriving time:
Figure BDA00033365619200000613
when the parameters of the dynamics are known, the control is expressed in the form:
Figure BDA00033365619200000614
wherein K is 2 Representing a gain matrix;
g, C and M terms of the robot dynamics are approximated using a radial basis neural network; the external disturbance is compensated by a disturbance observer; the self-adaptive control law is as follows:
Figure BDA00033365619200000615
wherein,,
Figure BDA00033365619200000616
the method is characterized in that the method is a radial basis function network, W is a weight coefficient, Y (Z) is a dynamic regression matrix, namely the distance between a sample point and each radial basis center, and Z represents the input of the radial function network; let->
Figure BDA0003336561920000071
The form of the high-order disturbance observer is as follows:
Figure BDA0003336561920000072
wherein K is d Representing a gain matrix in the disturbance observation process;
Figure BDA0003336561920000073
representing an estimation error; y is Y d (Z d ) Representing a dynamic regression matrix, Y d (Z d ) Representing a dynamic regression matrix; z is Z d Representing the actual sampling point; w (W) d Representing the weight coefficient; />
Figure BDA0003336561920000074
Figure BDA0003336561920000075
The update of the weight matrix is as follows:
Figure BDA0003336561920000076
Figure BDA0003336561920000077
Figure BDA0003336561920000078
Y d (Z d )W d =M -1 (u+u h -C(x 1 ,x 2 )x 2 -G(x 1 ))-∈ d
wherein Y is i (Z) represents an updated value of the dynamic regression quantity matrix; z 2i An update indicative of a speed error; w (W) i Representing an update of the estimated value; superscript symbol
Figure BDA0003336561920000079
Representing the expected value of the weight derivative; w (W) di An updated value representing the physical parameter; epsilon represents the estimation error; e-shaped article d Representing the expected estimation error; y (Z) W represents the output of the radial basis function; Γ -shaped structure i And Γ di For update rate, θ i And theta di Is the weight.
Compared with the prior art, the invention has the following beneficial effects:
1. aiming at the important clinical requirement of reconstructing the upper limb movement function after the clinical nerve shift operation, the invention combines the force field control strategy of human-computer tight coupling with the mirror image rehabilitation strategy, explores a new mirror image force field rehabilitation strategy for guiding the action of the affected side based on the information of the force field of the affected side of the patient, and the method is more natural, and improves the participation feeling and the active rehabilitation capability of the patient;
2. aiming at the clinical important clinical requirements of the rehabilitation of the motor function after the nerve shift operation, the invention explores the force field mirror image rehabilitation strategy for effectively guiding the action of the affected side based on the information of the force field of the affected side of the patient based on the new technology in the engineering and medical fields, and the research result is a great breakthrough in the research field of the peripheral nerve rehabilitation, which not only drives the clinical treatment method in the research field of the peripheral nerve rehabilitation to be greatly innovated, but also provides a new technical means for exploring the rehabilitation of the function after the nerve shift operation and the recovery mechanism of the brain function, and has great academic value and clinical significance; the achievements can be used for hospitals, rehabilitation centers and communities to benefit patients in a shared manner;
3. the invention realizes the mirror image coupling of the healthy side force field and the sick side force field of the upper limb rehabilitation robot by an admittance control method, and obtains the sick side rehabilitation training effect which accords with the real exercise habit of a patient.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading of the detailed description of non-limiting embodiments, given with reference to the accompanying drawings in which:
FIG. 1 is a schematic diagram of an admittance control system and method for an upper limb double-arm rehabilitation robot based on a mirror image force field;
FIG. 2 is a schematic diagram of the super-parameter lambda adjustment at different rehabilitation training stages according to the present invention;
fig. 3 is a block diagram of an admittance control method of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the present invention, but are not intended to limit the invention in any way. It should be noted that variations and modifications could be made by those skilled in the art without departing from the inventive concept. These are all within the scope of the present invention.
The embodiment of the invention discloses an admittance control method of an upper limb double-arm rehabilitation robot based on a mirror image force field, which is shown in fig. 1 and 2 and comprises man-machine tight coupling healthy side force field modeling based on multi-sensor signal fusion, healthy side physiological signals and force field mapping based on a state space and healthy side synchronous coupling control based on force field mirror image. The healthy side includes a healthy side and a diseased side. The method comprises the following steps: step 1: and (3) human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion to obtain the exercise intention of the healthy side of the subject. The exercise intention of the subject is predicted in real time through the side-building myoelectric sensor, acting force in the interaction process is modeled as an impedance model, and the joint state of the subject is predicted through the impedance model. Human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion comprises the steps of estimating and predicting the motion intention of a person through healthy side myoelectric sensors, modeling acting force in the interaction process into an impedance model, and predicting the joint state of the person by using the model, wherein the impedance model is shown in a formula (1):
Figure BDA0003336561920000081
wherein u is h Acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject, namely interaction force between the upper limb double-arm rehabilitation robot and the subject; x is the actual position of the tail end of the upper limb double-arm rehabilitation robot, and x is r For the expected position of the end of the upper limb double-arm rehabilitation robot, the superscript symbol represents the derivative of the corresponding state quantity with respect to time, L h,1 For position error gain, L h,2 Is the speed gain.
Estimating the movement intention of the subject by the formula (1), as shown in the formula (2):
Figure BDA0003336561920000082
wherein,,
Figure BDA0003336561920000083
an estimated value representing the exercise intention of the healthy side of the subject, and the superscript symbol is an estimated value of the corresponding quantity;
Figure BDA0003336561920000084
an initial value representing the error gain at any virtual target position; />
Figure BDA0003336561920000085
An initial value representing the velocity gain at any virtual target; the superscript v indicates that the value is based on any initial value given by the virtual target.
Step 2: and mapping physiological signals and force fields of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject. Step 2 comprises the following steps: step 2.1: modeling the subject's exercise side force field and physiological electromyographic signals according to the step 1 to obtain the exercise intention of the subject's exercise side
Figure BDA0003336561920000091
Step 2.2: the double arms of the healthy side of the subject perform rehabilitation actions along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle.
The specific process of physiological signal and force field mapping of healthy side based on the state space comprises the following steps: modeling the healthy side force field and the physiological electromyographic signals of the subject according to the method in the step 1, namely obtaining the exercise intention of the healthy side of the subject. In the upper arm rehabilitation training process, rehabilitation actions are carried out on the healthy side of the subject along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle.
Step 3: as shown in fig. 1 and 3, synchronous coupling control of the healthy side based on force field mirror image is performed according to the motion track and intention of the healthy side of the subject, so as to control the motion of the exoskeleton.
The interaction force generated by the affected side in the interaction process is combined with the established affected side movement track and intention, and the movement of the exoskeleton is controlled through admittance control. Synchronous coupling control of healthy and wounded sides based on force field mirror images combines interaction force generated by wounded sides in the interaction process with established movement tracks and intentions of the wounded sides, and controls the movement of the exoskeleton through an admittance control method, wherein the wounded side intentions are expressed as follows:
Figure BDA0003336561920000092
wherein,,
Figure BDA0003336561920000093
subject intent predicted for robust model; τ r For the original motor intent of the affected side model, λ is the hyper-parameter that adjusts the weight ratio of the two. The influence of interaction force between the affected side and the rehabilitation robot is small at the initial stage of rehabilitation training by adjusting lambda at different stages of rehabilitation training of a hemiplegic patient, at this time, the mirror image of the exercise intention of the healthy side is taken as guide, namely lambda=1, the exercise intention of the affected side in the control process of the rehabilitation robot can be increased by reducing lambda along with the increase of rehabilitation training, and the super-parameter lambda adjusting schematic diagram at different rehabilitation training stages is shown in fig. 2.
The synchronous coupling control of the healthy side based on the force field mirror image comprises the following specific processes of admittance control: the dynamic equation of the interaction process of the upper limb double-arm rehabilitation robot and the subject is shown in a formula (4):
Figure BDA0003336561920000094
wherein M and G respectively represent an inertial matrix and a gravity matrix of the lower limb exoskeleton robot and the human interaction system of the Cartesian space coordinate system, and C represents a Coriolis force and centrifugal force matrix of the lower limb exoskeleton robot and the human interaction system of the Cartesian space coordinate system; f (f) dis U is the control input of the system, which is the disturbance in the interactive system; u (u) h Acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject; the superscript "here specifically means that the first derivative of the actual position of the rehabilitation robot tip with respect to time, i.e. the robot tip speed, is onThe second derivative of the actual position of the end of the rehabilitation robot with respect to time, i.e. the acceleration, is shown by the label.
Assume that the actual position x of the upper arm rehabilitation robot tip and the derivative of the actual position of the upper arm rehabilitation robot tip with respect to time
Figure BDA0003336561920000101
Is obtained by measurement. Let x be 1 =[q 1 ,q 2 ,…,q n ] T ,/>
Figure BDA0003336561920000102
Wherein q is i And->
Figure BDA0003336561920000103
Respectively representing the rotation angle and the angular velocity of the ith joint, i is more than or equal to 1 and less than or equal to n; x is x 1 A position matrix formed by the rotation angles of all joints of the robot is shown; x is x 2 A velocity matrix including angular velocities of the joints; the superscript T denotes a transpose; the dynamics of the interaction task is expressed as follows:
Figure BDA0003336561920000104
defining position error z 1 =x 1 -x r Error of speed z 2 =x 21 Wherein x is r For the expected position of the tail end of the upper limb double-arm rehabilitation robot, namely the expected reference track alpha 1 To z 1 Virtual control of (c) to obtain:
Figure BDA0003336561920000105
consider the use of the Lyapunov function
Figure BDA0003336561920000106
V 1 Representing the function V of the constructed Lyapunov function form 1 The method comprises the steps of carrying out a first treatment on the surface of the Symbol represents matrix multiplicationA method; and (3) deriving time:
Figure BDA0003336561920000107
order the
Figure BDA0003336561920000108
Wherein K is 1 Equation (7) is reset for the gain matrix. Namely, the formula (7) is rewritten to obtain:
Figure BDA0003336561920000109
the expression (8) is used for obtaining:
Figure BDA00033365619200001010
definition of Lyapunov function
Figure BDA00033365619200001011
V 2 Representing the function V of the constructed Lyapunov function form 2 The method comprises the steps of carrying out a first treatment on the surface of the And (3) deriving time:
Figure BDA00033365619200001012
when the parameters of the dynamics are known, the control method is expressed in the following form:
Figure BDA00033365619200001013
wherein K is 2 Representing the gain matrix.
Due to interference f dis Is difficult to obtain and items such as G, C, M of robot dynamics are not readily available. The G, C and M terms of robot dynamics are approximated using radial basis neural networks (RBFNN). In addition, external disturbances are compensated by a disturbance observerCompensating; the self-adaptive control law is as follows:
Figure BDA00033365619200001014
wherein,,
Figure BDA00033365619200001015
the method is characterized in that the radial basis function neural network RBFNN is adopted, W is a weight coefficient, Y (Z) is a dynamic regression matrix, namely the distance between a sample point and each radial basis center, and Z represents the input of the radial function network; let->
Figure BDA00033365619200001016
The form of the high-order disturbance observer is as follows:
Figure BDA00033365619200001017
wherein K is d Representing a gain matrix in the disturbance observation process;
Figure BDA0003336561920000111
representing an estimation error; y is Y d (Z d ) Representing a dynamic regression matrix, Y d Representing a dynamic regression matrix; z is Z d Representing the actual sample dataset; w (W) d Representing the weight coefficient; />
Figure BDA0003336561920000112
Figure BDA0003336561920000113
The update of the weight matrix is as follows:
Figure BDA0003336561920000114
Figure BDA0003336561920000115
Figure BDA0003336561920000116
Y d (Z d )W d =M -1 (u+u h -C(x 1 ,x 2 )x 2 -G(x 1 ))-∈ d
wherein Y is i (Z) represents an updated value of the dynamic regression quantity matrix; z 2i An update indicative of a speed error; w (W) i Representing an update of the estimated value; superscript symbol
Figure BDA0003336561920000117
Representing the expected value of the weight derivative; w (W) di An updated value representing the physical parameter; epsilon represents the estimation error; e-shaped article d Representing the expected estimation error; y (Z) W represents the output of the radial basis function; Γ -shaped structure i And Γ di For update rate, θ i And theta di Is the weight. The characteristics of the regressor are also exploited in the disturbance observer.
The admittance control method is shown in fig. 3, and the reference track of the rehabilitation robot is reconstructed based on the physiological signals of the healthy side and the force field mapping in the state space, so that the control method can be suitable for people with different skill levels and different forces under the condition of not performing offline model adjustment, and the robustness of the controller is ensured. The control scheme consists of an inner ring and an outer ring. The former is able to handle unknown mass and moment of inertia in the robot dynamics, the latter is to adjust the interaction model taking into account the human subject's intent.
The embodiment of the invention discloses an admittance control system of an upper limb double-arm rehabilitation robot based on a mirror image force field, which comprises the following modules: module M1: and (3) human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion to obtain the exercise intention of the healthy side of the subject. Predicting the movement intention of a subject in real time through a healthy side myoelectric sensor, modeling acting force in the interaction process as an impedance model, and predicting the joint state of the subject through the impedance model, wherein the impedance model is shown in a formula (1):
Figure BDA0003336561920000118
wherein u is h Acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject; x is the actual position of the tail end of the upper limb double-arm rehabilitation robot; x is x r The method is characterized in that the method is a desired position of the tail end of an upper limb double-arm rehabilitation robot; superscript notation represents the derivative of the corresponding state quantity with respect to time; l (L) h,1 Gain for position error; l (L) h,2 Is the speed gain;
estimating the movement intention of the subject by the formula (1), as shown in the formula (2):
Figure BDA0003336561920000119
wherein,,
Figure BDA00033365619200001110
an estimated value representing the exercise intention of the healthy side of the subject, and the superscript symbol is an estimated value of the corresponding quantity;
Figure BDA00033365619200001111
an initial value representing the error gain at any virtual target position; />
Figure BDA00033365619200001112
An initial value representing the velocity gain at any virtual target; the superscript v indicates that the value is based on any initial value given by the virtual target.
Module M2: and mapping physiological signals and force fields of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject. The module M2 includes the following modules: module M2.1: modeling a subject's exercise stress field and physiological myoelectric signals according to the module M1 to obtain the exercise intention of the subject's exercise
Figure BDA0003336561920000121
Module M2.2: the double arms of the healthy side of the subject perform rehabilitation actions along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle.
Module M3: and synchronous coupling control of the healthy side based on the force field mirror image is performed according to the motion track and intention of the affected side of the subject, so that the motion of the exoskeleton is controlled. Combining the interaction force generated by the affected side in the interaction process with the established affected side movement track and intention, and controlling the movement of the exoskeleton through admittance control, wherein the affected side intention is expressed as:
Figure BDA0003336561920000122
wherein,,
Figure BDA0003336561920000123
subject intent predicted for robust model; τ r Original exercise intention for the affected side model; lambda is a super parameter that adjusts the weight ratio of the two.
Admittance control includes: the dynamic equation of the interaction process of the upper limb double-arm rehabilitation robot and the subject is shown in a formula (4):
Figure BDA0003336561920000124
wherein M and G respectively represent an inertial matrix and a gravity matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system; c represents a Coriolis force and centrifugal force matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system; f (f) dis Is a disturbance in the interactive system; u is the control input of the system; the superscript · indicates the second derivative of the actual position of the rehabilitation robot tip with respect to time, i.e. the acceleration.
Assume that the actual position x of the upper arm rehabilitation robot tip and the derivative of the actual position of the upper arm rehabilitation robot tip with respect to time
Figure BDA0003336561920000125
Is obtained by measurement; let x be 1 =[q 1 ,q 2 ,…,q n ] T ,/>
Figure BDA0003336561920000126
Wherein q i And->
Figure BDA0003336561920000127
Respectively representing the rotation angle and the angular velocity of the ith joint, i is more than or equal to 1 and less than or equal to n; x is x 1 A position matrix formed by the rotation angles of all joints of the robot is shown; x is x 2 A velocity matrix including angular velocities of the joints; the superscript T denotes a transpose; the dynamics of the interaction task is expressed as follows:
Figure BDA0003336561920000128
defining position error z 1 =x 1 -x r Error of speed z 2 =x 21 ,α 1 To z 1 Virtual control of (c) to obtain:
Figure BDA0003336561920000129
using lyapunov function
Figure BDA00033365619200001210
V 1 Representing the constructed function in the form of a lyapunov function; symbol represents matrix multiplication; and (3) deriving time:
Figure BDA00033365619200001211
order the
Figure BDA00033365619200001212
Wherein K is 1 For the gain matrix, reset equation (7) to get:
Figure BDA0003336561920000131
the expression (8) is used for obtaining:
Figure BDA0003336561920000132
definition of Lyapunov function
Figure BDA0003336561920000133
V 2 Representing the constructed function in the form of a lyapunov function; and (3) deriving time:
Figure BDA0003336561920000134
when the parameters of the dynamics are known, the control is expressed in the form:
Figure BDA0003336561920000135
wherein K is 2 Representing a gain matrix;
g, C and M terms of the robot dynamics are approximated using a radial basis neural network; the external disturbance is compensated by a disturbance observer; the self-adaptive control law is as follows:
Figure BDA0003336561920000136
wherein,,
Figure BDA0003336561920000137
the method is characterized in that the method is a radial basis function network, W is a weight coefficient, Y (Z) is a dynamic regression matrix, namely the distance between a sample point and each radial basis center, and Z represents the input of the radial function network; let->
Figure BDA0003336561920000138
The form of the high-order disturbance observer is as follows:
Figure BDA0003336561920000139
wherein K is d Representing a gain matrix in the disturbance observation process;
Figure BDA00033365619200001310
representing an estimation error; y is Y d (Z d ) Representing a dynamic regression matrix, Y d (Z d ) Representing a dynamic regression matrix; z is Z d Representing the actual sampling point; w (W) d Representing the weight coefficient; />
Figure BDA00033365619200001311
Figure BDA00033365619200001312
The update of the weight matrix is as follows:
Figure BDA00033365619200001313
Figure BDA00033365619200001314
Figure BDA00033365619200001315
Y d (Z d )W d =M -1 (u+u h -C(x 1 ,x 2 )x 2 -G(x 1 ))-∈ d
wherein Y is i (Z) represents an updated value of the dynamic regression quantity matrix; z 2i An update indicative of a speed error; w (W) i Representing an update of the estimated value; superscript symbol
Figure BDA00033365619200001316
Representing the expected value of the weight derivative; w (W) di An updated value representing the physical parameter; epsilon represents the estimation error; e-shaped article d Representing the expected estimation error; y (Z) W represents the output of the radial basis function; Γ -shaped structure i And Γ di For update rate, θ i And theta di Is the weight.
The invention explores a novel mirror image force field rehabilitation strategy for guiding the action of the affected side based on the information of the force field of the affected side of a patient, which comprises a human-computer tight coupling force field modeling based on multi-sensor signal fusion, physiological signals and force field mapping of the affected side based on a state space and synchronous coupling control of the affected side based on force field mirror image. Aiming at the important clinical requirement of reconstruction of the upper limb movement function after the clinical nerve shift operation, the invention combines the force field control strategy of human-machine tight coupling with the mirror image rehabilitation strategy.
The invention provides a new research on a medical rehabilitation method, and a research result is a great breakthrough in the field of peripheral nerve rehabilitation research, which not only drives a clinical treatment method in the field of peripheral nerve rehabilitation research to be greatly innovated, but also provides a new technical means for exploring a nerve shift postoperative function reconstruction and brain function recovery mechanism, and has great academic value and clinical significance.
The invention relates to the technical fields of man-machine interaction, artificial intelligence and interaction control, and the man-machine tight coupling healthy side force field modeling based on multi-sensor signal fusion, healthy side physiological signals and force field mapping based on a state space and healthy side synchronous coupling control based on force field mirror image are included. The traditional method is to use a mirror to perform visual feedback for the rehabilitation of the affected side after the nerve repair operation, and the method has weak participation of patients and general rehabilitation effect. Different from the existing rehabilitation means. Aiming at the important clinical requirement of reconstructing the upper limb movement function after the clinical nerve shift operation, the invention combines the force field control strategy of human-computer tight coupling with the mirror image rehabilitation strategy, explores a novel mirror image force field rehabilitation strategy for guiding the action of the affected side based on the information of the force field of the affected side of the patient, and the method is more natural and improves the participation feeling and the active rehabilitation capability of the patient.
Those skilled in the art will appreciate that the invention provides a system and its individual devices, modules, units, etc. that can be implemented entirely by logic programming of method steps, in addition to being implemented as pure computer readable program code, in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Therefore, the system and various devices, modules and units thereof provided by the invention can be regarded as a hardware component, and the devices, modules and units for realizing various functions included in the system can also be regarded as structures in the hardware component; means, modules, and units for implementing the various functions may also be considered as either software modules for implementing the methods or structures within hardware components.
The foregoing describes specific embodiments of the present invention. It is to be understood that the invention is not limited to the particular embodiments described above, and that various changes or modifications may be made by those skilled in the art within the scope of the appended claims without affecting the spirit of the invention. The embodiments of the present application and features in the embodiments may be combined with each other arbitrarily without conflict.

Claims (2)

1. The utility model provides an upper limbs both arms rehabilitation robot admittance control system based on mirror image force field which characterized in that includes following module:
module M1: human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion is adopted to obtain the exercise intention of the healthy side of the subject;
module M2: performing physiological signal and force field mapping of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject;
module M3: synchronous coupling control of the healthy side based on force field mirror image is carried out according to the motion track and intention of the affected side of the subject, so that the motion of the exoskeleton is controlled;
the module M1 includes predicting the movement intention of the subject in real time through the healthy side myoelectric sensor, modeling the acting force in the interaction process as an impedance model, and predicting the joint state of the subject through the impedance model, wherein the impedance model is shown as formula (1):
Figure QLYQS_1
wherein,,
Figure QLYQS_2
acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject; />
Figure QLYQS_3
The actual position of the tail end of the upper limb double-arm rehabilitation robot; />
Figure QLYQS_4
The method is characterized in that the method is a desired position of the tail end of an upper limb double-arm rehabilitation robot; superscript notation represents the derivative of the corresponding state quantity with respect to time; />
Figure QLYQS_5
Gain for position error; />
Figure QLYQS_6
Is the speed gain;
estimating the movement intention of the subject by the formula (1), as shown in the formula (2):
Figure QLYQS_7
wherein,,
Figure QLYQS_8
an estimated value representing the exercise intention of the healthy side of the subject, and the superscript symbol is an estimated value of the corresponding quantity; />
Figure QLYQS_9
Expressed in arbitrary virtual target positionSetting an initial value of an error gain; />
Figure QLYQS_10
An initial value representing the velocity gain at any virtual target; superscriptvThe representation value is based on an arbitrary initial value given by the virtual target;
the module M2 comprises the following modules:
module M2.1: modeling a subject's exercise stress field and physiological myoelectric signals according to the module M1 to obtain the exercise intention of the subject's exercise
Figure QLYQS_11
Module M2.2: the double arms of the healthy side of the subject perform rehabilitation actions along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle;
the module M3 includes combining the interaction force generated by the patient side during the interaction with the established patient side motion trajectory and intent to control the exoskeleton's motion through admittance control, wherein the patient side intent is expressed as:
Figure QLYQS_12
wherein,,
Figure QLYQS_13
subject intent predicted for robust model; />
Figure QLYQS_14
Original exercise intention for the affected side model; />
Figure QLYQS_15
Super parameters for adjusting the weight ratio of the two;
admittance control in the module M3 includes:
the dynamic equation of the interaction process of the upper limb double-arm rehabilitation robot and the subject is shown in a formula (4):
Figure QLYQS_16
wherein,,MandGrespectively representing an inertia matrix and a gravity matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system;Crepresenting a coriolis force and centrifugal force matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system;
Figure QLYQS_17
is a disturbance in the interactive system; />
Figure QLYQS_18
Is a control input of the system; the superscript · represents the second derivative of the actual position of the rehabilitation robot tip with respect to time, i.e. the acceleration;
assume the actual position of the end of an upper limb double arm rehabilitation robot
Figure QLYQS_20
And the derivative of the actual position of the upper arm rehabilitation robot tip with respect to time +.>
Figure QLYQS_22
Is obtained by measurement; is provided with->
Figure QLYQS_26
=/>
Figure QLYQS_19
,/>
Figure QLYQS_23
Wherein->
Figure QLYQS_25
And->
Figure QLYQS_27
Respectively represent the firstiRotation angle of individual jointsThe degree and the angular velocity are less than or equal to 1 percentin;/>
Figure QLYQS_21
A position matrix formed by the rotation angles of all joints of the robot is shown; />
Figure QLYQS_24
A velocity matrix including angular velocities of the joints; superscriptTRepresenting a transpose; the dynamics of the interaction task is expressed as follows:
Figure QLYQS_28
defining position errors
Figure QLYQS_29
Speed error->
Figure QLYQS_30
,/>
Figure QLYQS_31
For->
Figure QLYQS_32
Virtual control of (c) to obtain:
Figure QLYQS_33
using lyapunov function
Figure QLYQS_34
,/>
Figure QLYQS_35
Representing the constructed function in the form of a lyapunov function; sign->
Figure QLYQS_36
Representing a matrix multiplication; and (3) deriving time:
Figure QLYQS_37
order the
Figure QLYQS_38
Wherein->
Figure QLYQS_39
For the gain matrix, reset equation (7) to get:
Figure QLYQS_40
the expression (8) is used for obtaining:
Figure QLYQS_41
definition of Lyapunov function
Figure QLYQS_42
,/>
Figure QLYQS_43
Representing the constructed function in the form of a lyapunov function; and (3) deriving time:
Figure QLYQS_44
when the parameters of the dynamics are known, the control is expressed in the form:
Figure QLYQS_45
wherein,,
Figure QLYQS_46
representing a gain matrix;
approximating robot dynamics using radial basis neural networksGCAndMan item; the external disturbance is compensated by a disturbance observer; the self-adaptive control law is as follows:
Figure QLYQS_47
wherein,,
Figure QLYQS_48
is radial basis function neural network, < >>
Figure QLYQS_49
Is a weight coefficient>
Figure QLYQS_50
For the dynamic regression matrix, i.e. the distance of the sample point from each radial basis center,/for the radial basis center>
Figure QLYQS_51
Representing a radial function network input; let->
Figure QLYQS_52
The form of the high-order disturbance observer is as follows:
Figure QLYQS_53
wherein,,
Figure QLYQS_54
representing a gain matrix in the disturbance observation process; />
Figure QLYQS_55
Representing an estimation error; />
Figure QLYQS_56
Representing a dynamic regression matrix,/->
Figure QLYQS_57
Representing the actual sampling point; />
Figure QLYQS_58
Representing the weight coefficient; />
Figure QLYQS_59
,/>
Figure QLYQS_60
The update of the weight matrix is as follows:
Figure QLYQS_61
Figure QLYQS_62
Figure QLYQS_63
Figure QLYQS_64
wherein,,
Figure QLYQS_66
representing updated values of the dynamic regression quantity matrix; />
Figure QLYQS_69
An update indicative of a speed error; />
Figure QLYQS_72
Representing an update of the estimated value; superscript symbol->
Figure QLYQS_65
Representing the expected value of the weight derivative; />
Figure QLYQS_67
An updated value representing the physical parameter; />
Figure QLYQS_70
Representing an estimation error; />
Figure QLYQS_73
Representing the expected estimation error; />
Figure QLYQS_68
An output representing a radial basis function; />
Figure QLYQS_71
For update rate->
Figure QLYQS_74
And->
Figure QLYQS_75
Is the weight.
2. The mirror-image force field-based upper limb double-arm rehabilitation robot admittance control system according to claim 1, characterized in that the following steps can be realized:
step 1: human-computer tight coupling healthy side force field modeling based on multi-sensor signal fusion is adopted to obtain the exercise intention of the healthy side of the subject;
step 2: performing physiological signal and force field mapping of the healthy side based on the state space according to the motion intention of the healthy side of the subject to obtain the motion trail and intention of the healthy side of the subject;
step 3: synchronous coupling control of the healthy side based on force field mirror image is carried out according to the motion track and intention of the affected side of the subject, so that the motion of the exoskeleton is controlled;
the step 1 includes predicting the movement intention of the subject in real time through the side-strengthening myoelectric sensor, modeling the acting force in the interaction process as an impedance model, and predicting the joint state of the subject through the impedance model, wherein the impedance model is shown as a formula (1):
Figure QLYQS_76
wherein,,
Figure QLYQS_77
acting force in the interaction process of the upper limb double-arm rehabilitation robot and the subject; />
Figure QLYQS_78
The actual position of the tail end of the upper limb double-arm rehabilitation robot; />
Figure QLYQS_79
The method is characterized in that the method is a desired position of the tail end of an upper limb double-arm rehabilitation robot; superscript notation represents the derivative of the corresponding state quantity with respect to time; />
Figure QLYQS_80
Gain for position error; />
Figure QLYQS_81
Is the speed gain;
estimating the movement intention of the subject by the formula (1), as shown in the formula (2):
Figure QLYQS_82
wherein,,
Figure QLYQS_83
an estimated value representing the exercise intention of the healthy side of the subject, and the superscript symbol is an estimated value of the corresponding quantity; />
Figure QLYQS_84
An initial value representing the error gain at any virtual target position; />
Figure QLYQS_85
An initial value representing the velocity gain at any virtual target; superscriptvThe representation value is based on an arbitrary initial value given by the virtual target;
the step 2 comprises the following steps:
step 2.1: modeling the subject's exercise side force field and physiological electromyographic signals according to the step 1 to obtain the exercise intention of the subject's exercise side
Figure QLYQS_86
Step 2.2: the double arms of the healthy side of the subject perform rehabilitation actions along the same track, and the movement track and intention of the healthy side of the subject are obtained through the mirror image principle;
step 3 includes combining the interaction force generated by the affected side during the interaction with the established affected side motion trail and intention, and controlling the motion of the exoskeleton through admittance control, wherein the affected side intention is expressed as:
Figure QLYQS_87
wherein,,
Figure QLYQS_88
subject intent predicted for robust model; />
Figure QLYQS_89
Original exercise intention for the affected side model; />
Figure QLYQS_90
Super parameters for adjusting the weight ratio of the two;
the admittance control in step 3 includes:
the dynamic equation of the interaction process of the upper limb double-arm rehabilitation robot and the subject is shown in a formula (4):
Figure QLYQS_91
wherein,,MandGrespectively representing an inertia matrix and a gravity matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system;Crepresenting a coriolis force and centrifugal force matrix of the Cartesian space coordinate system lower limb exoskeleton robot and the human interaction system;
Figure QLYQS_92
is a disturbance in the interactive system; />
Figure QLYQS_93
Is a control input of the system; the superscript · represents the second derivative of the actual position of the rehabilitation robot tip with respect to time, i.e. the acceleration;
assume the actual position of the end of an upper limb double arm rehabilitation robot
Figure QLYQS_95
And the derivative of the actual position of the upper arm rehabilitation robot tip with respect to time +.>
Figure QLYQS_99
Is obtained by measurement; is provided with->
Figure QLYQS_102
=/>
Figure QLYQS_96
,/>
Figure QLYQS_98
Wherein->
Figure QLYQS_100
And->
Figure QLYQS_101
Respectively represent the firstiThe rotation angle and the angular velocity of each joint are less than or equal to 1 percentin;/>
Figure QLYQS_94
A position matrix formed by the rotation angles of all joints of the robot is shown; />
Figure QLYQS_97
A velocity matrix including angular velocities of the joints; superscriptTRepresenting a transpose; the dynamics of the interaction task is expressed as follows:
Figure QLYQS_103
defining position errors
Figure QLYQS_104
Speed error->
Figure QLYQS_105
,/>
Figure QLYQS_106
For->
Figure QLYQS_107
Virtual control of (c) to obtain:
Figure QLYQS_108
using lyapunov function
Figure QLYQS_109
,/>
Figure QLYQS_110
Representing the constructed function in the form of a lyapunov function; sign->
Figure QLYQS_111
Representing a matrix multiplication; and (3) deriving time:
Figure QLYQS_112
order the
Figure QLYQS_113
Wherein->
Figure QLYQS_114
For the gain matrix, reset equation (7) to get:
Figure QLYQS_115
the expression (8) is used for obtaining:
Figure QLYQS_116
definition of Lyapunov function
Figure QLYQS_117
,/>
Figure QLYQS_118
Representing the constructed function in the form of a lyapunov function; and (3) deriving time:
Figure QLYQS_119
when the parameters of the dynamics are known, the control is expressed in the form:
Figure QLYQS_120
wherein,,
Figure QLYQS_121
representing a gain matrix;
approximating robot dynamics using radial basis neural networksGCAndMan item; the external disturbance is compensated by a disturbance observer; the self-adaptive control law is as follows:
Figure QLYQS_122
wherein,,
Figure QLYQS_123
is radial basis function neural network, < >>
Figure QLYQS_124
Is a weight coefficient>
Figure QLYQS_125
For the dynamic regression matrix, i.e. the distance of the sample point from each radial basis center,/for the radial basis center>
Figure QLYQS_126
Representing a radial function network input; let->
Figure QLYQS_127
The form of the high-order disturbance observer is as follows:
Figure QLYQS_128
wherein,,
Figure QLYQS_129
representing a gain matrix in the disturbance observation process; />
Figure QLYQS_130
Representing an estimateCounting errors; />
Figure QLYQS_131
Representing a dynamic regression matrix,/->
Figure QLYQS_132
Representing the actual sampling point; />
Figure QLYQS_133
Representing the weight coefficient; />
Figure QLYQS_134
,/>
Figure QLYQS_135
The update of the weight matrix is as follows:
Figure QLYQS_136
Figure QLYQS_137
Figure QLYQS_138
Figure QLYQS_139
wherein,,
Figure QLYQS_142
representing updated values of the dynamic regression quantity matrix; />
Figure QLYQS_145
An update indicative of a speed error; />
Figure QLYQS_148
Representing an update of the estimated value; superscript symbol->
Figure QLYQS_141
Representing the expected value of the weight derivative; />
Figure QLYQS_144
An updated value representing the physical parameter; />
Figure QLYQS_146
Representing an estimation error; />
Figure QLYQS_149
Representing the expected estimation error; />
Figure QLYQS_140
An output representing a radial basis function; />
Figure QLYQS_143
For update rate->
Figure QLYQS_147
And->
Figure QLYQS_150
Is the weight.
CN202111295849.2A 2021-11-03 2021-11-03 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system Active CN113995629B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111295849.2A CN113995629B (en) 2021-11-03 2021-11-03 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111295849.2A CN113995629B (en) 2021-11-03 2021-11-03 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system

Publications (2)

Publication Number Publication Date
CN113995629A CN113995629A (en) 2022-02-01
CN113995629B true CN113995629B (en) 2023-07-11

Family

ID=79926998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111295849.2A Active CN113995629B (en) 2021-11-03 2021-11-03 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system

Country Status (1)

Country Link
CN (1) CN113995629B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115463003B (en) * 2022-09-09 2024-09-20 燕山大学 Upper limb rehabilitation robot control method based on information fusion
CN116138909B (en) * 2023-04-24 2023-10-27 北京市春立正达医疗器械股份有限公司 Intelligent control method and system for dental implant robot

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104586608A (en) * 2015-02-05 2015-05-06 华南理工大学 Wearable assistance finger based on myoelectric control and control method thereof
CN104881038A (en) * 2015-04-22 2015-09-02 哈尔滨工业大学 Unmanned underwater vehicle (UUV) track tracking control optimization method under environmental interference
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
CN108324503A (en) * 2018-03-16 2018-07-27 燕山大学 Healing robot self-adaptation control method based on flesh bone model and impedance control
CN108785997A (en) * 2018-05-30 2018-11-13 燕山大学 A kind of lower limb rehabilitation robot Shared control method based on change admittance
CN111631923A (en) * 2020-06-02 2020-09-08 中国科学技术大学先进技术研究院 Neural network control system of exoskeleton robot based on intention recognition
EP3705105A1 (en) * 2019-03-08 2020-09-09 Syco di Menga Giuseppe & C. S.A.S. Control system for a haptic lower limb exoskeleton for rehabilitation or walking, with improved equilibrium control, man-machine interface
CN112743540A (en) * 2020-12-09 2021-05-04 华南理工大学 Hexapod robot impedance control method based on reinforcement learning
CN113478462A (en) * 2021-07-08 2021-10-08 中国科学技术大学 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107368091B (en) * 2017-08-02 2019-08-20 华南理工大学 A kind of stabilized flight control method of more rotor unmanned aircrafts based on finite time neurodynamics
US11337881B2 (en) * 2017-08-22 2022-05-24 New Jersey Institute Of Technology Exoskeleton with admittance control
WO2021034784A1 (en) * 2019-08-16 2021-02-25 Poltorak Technologies, LLC Device and method for medical diagnostics
CN111281743B (en) * 2020-02-29 2021-04-02 西北工业大学 Self-adaptive flexible control method for exoskeleton robot for upper limb rehabilitation

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104586608A (en) * 2015-02-05 2015-05-06 华南理工大学 Wearable assistance finger based on myoelectric control and control method thereof
CN104881038A (en) * 2015-04-22 2015-09-02 哈尔滨工业大学 Unmanned underwater vehicle (UUV) track tracking control optimization method under environmental interference
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
CN108324503A (en) * 2018-03-16 2018-07-27 燕山大学 Healing robot self-adaptation control method based on flesh bone model and impedance control
CN108785997A (en) * 2018-05-30 2018-11-13 燕山大学 A kind of lower limb rehabilitation robot Shared control method based on change admittance
EP3705105A1 (en) * 2019-03-08 2020-09-09 Syco di Menga Giuseppe & C. S.A.S. Control system for a haptic lower limb exoskeleton for rehabilitation or walking, with improved equilibrium control, man-machine interface
CN111631923A (en) * 2020-06-02 2020-09-08 中国科学技术大学先进技术研究院 Neural network control system of exoskeleton robot based on intention recognition
CN112743540A (en) * 2020-12-09 2021-05-04 华南理工大学 Hexapod robot impedance control method based on reinforcement learning
CN113478462A (en) * 2021-07-08 2021-10-08 中国科学技术大学 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
《基于表面肌电信号的上肢外骨骼控制研究》;刘健;中国优秀硕士学位论文全文数据库信息科技辑(第02期);全文 *
《面向人机智能融合医疗康复的辅助机器人关键技术及其应用》;李智军,康宇等;中国知网;全文 *

Also Published As

Publication number Publication date
CN113995629A (en) 2022-02-01

Similar Documents

Publication Publication Date Title
Noohi et al. A model for human–human collaborative object manipulation and its application to human–robot interaction
US10959863B2 (en) Multi-dimensional surface electromyogram signal prosthetic hand control method based on principal component analysis
CN113995629B (en) Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system
Casadio et al. The body-machine interface: a new perspective on an old theme
Ai et al. Machine learning in robot assisted upper limb rehabilitation: A focused review
Osu et al. Short-and long-term changes in joint co-contraction associated with motor learning as revealed from surface EMG
Bhattacharyya et al. A synergetic brain-machine interfacing paradigm for multi-DOF robot control
Gunasekara et al. Control methodologies for upper limb exoskeleton robots
Pang et al. Electromyography-based quantitative representation method for upper-limb elbow joint angle in sagittal plane
Esfahlani et al. An adaptive self-organizing fuzzy logic controller in a serious game for motor impairment rehabilitation
Esfahlani et al. Fusion of artificial intelligence in neuro-rehabilitation video games
Fang et al. Modelling EMG driven wrist movements using a bio-inspired neural network
CN115363907A (en) Rehabilitation decision-making method based on virtual reality rehabilitation training system
Sitole et al. Continuous prediction of human joint mechanics using emg signals: A review of model-based and model-free approaches
Xiao et al. AI-driven rehabilitation and assistive robotic system with intelligent PID controller based on RBF neural networks
Yang et al. A review on human intent understanding and compliance control strategies for lower limb exoskeletons
Koochaki et al. A novel architecture for cooperative remote rehabilitation system
CN117697717A (en) Exoskeleton physical man-machine two-way interaction simulation system
Babaiasl et al. Mechanical design, simulation and nonlinear control of a new exoskeleton robot for use in upper-limb rehabilitation after stroke
Li et al. Prediction of passive torque on human shoulder joint based on BPANN
CN114795604B (en) Method and system for controlling lower limb prosthesis in coordination based on non-zero and game
Ahmadian et al. ℒ 1–ℬℒ Adaptive Controller Design for Wrist Rehabilitation Robot
Halder et al. An overview of artificial intelligence-based soft upper limb exoskeleton for rehabilitation: a descriptive review
Suryanarayanan et al. EMG-based interface for position tracking and control in VR environments and teleoperation
Covaciu et al. VR interface for cooperative robots applied in dynamic environments

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant