CN115463003A - Upper limb rehabilitation robot control method based on information fusion - Google Patents

Upper limb rehabilitation robot control method based on information fusion

Info

Publication number
CN115463003A
Authority
CN
China
Prior art keywords
robot
tail end
force
patient
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211105108.8A
Other languages
Chinese (zh)
Other versions
CN115463003B (en)
Inventor
Li Qin
Shufan Qin
Zhijie Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yanshan University
Original Assignee
Yanshan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN202211105108.8A priority Critical patent/CN115463003B/en
Publication of CN115463003A publication Critical patent/CN115463003A/en
Application granted granted Critical
Publication of CN115463003B publication Critical patent/CN115463003B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H1/00: Apparatus for passive exercising; Vibrating apparatus; Chiropractic devices, e.g. body impacting devices, external devices for briefly extending or aligning unbroken bones
    • A61H1/02: Stretching or bending or torsioning apparatus for exercising
    • A61H1/0274: Stretching or bending or torsioning apparatus for exercising for the upper limbs
    • A61H1/0285: Hand
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00: Computing arrangements based on specific mathematical models
    • G06N7/02: Computing arrangements based on specific mathematical models using fuzzy logic
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00: Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/16: Physical interface with patient
    • A61H2201/1657: Movement of interface, i.e. force application means
    • A61H2201/1659: Free spatial automatic movement of interface within a working area, e.g. robot
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H2201/00: Characteristics of apparatus not provided for in the preceding codes
    • A61H2201/50: Control means thereof
    • A61H2201/5007: Control means thereof, computer controlled

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Public Health (AREA)
  • Computing Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Analysis (AREA)
  • Computational Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Algebra (AREA)
  • Fuzzy Systems (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Automation & Control Theory (AREA)
  • Pain & Pain Management (AREA)
  • Rehabilitation Therapy (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The invention relates to an upper limb rehabilitation robot control method based on information fusion, belonging to the technical field of rehabilitation robots. The method comprises: predicting the movement intention of a patient from the position and velocity of the robot end and the force the patient applies to the robot end; estimating environmental characteristics from the position of the robot end and the force of collision with the external environment; performing information fusion of the movement intention and the environmental characteristics with a Kalman-filtering-based algorithm, weighted according to the force the patient applies to the robot end and its rate of change; calculating, based on the fuzzy naive Bayes principle, the basic probability distributions of the current robot end position and of the collision force with the external environment, and judging whether the task is completed; and, because collisions occur during the task, introducing a compliance model that reduces the force generated when the robot end collides with the external environment, thereby ensuring safety during task execution. The invention improves both the degree of completion of the training task and the active participation of the patient, and also improves the intelligence of the rehabilitation robot's assistance.

Description

Upper limb rehabilitation robot control method based on information fusion
Technical Field
The invention relates to an upper limb rehabilitation robot control method based on information fusion, and belongs to the technical field of rehabilitation robots.
Background
Upper limb motor dysfunction is a common sequela of stroke, spinal cord injury, traumatic brain injury, and multiple sclerosis. Compared with healthy people, such patients have reduced muscle strength and restricted movement; impaired upper limb function in particular degrades the ability to perform activities of daily living and lowers quality of life, so upper limb rehabilitation training is especially important. Current research shows that rehabilitation training methods based on motor relearning, in which the patient actively participates in training, can help the patient recover limb motor function to a certain extent.
A robot can not only execute repetitive tasks continuously without fatigue, but can also record the patient's training data through its sensors, so that the patient's motor performance during training can be evaluated; robots are therefore currently a preferred choice for assisting patients' rehabilitation training. Upper limb rehabilitation robots fall into two categories: end-traction robots and exoskeleton robots. An end-traction robot guides the patient's hand through a handle at the robot end to complete the rehabilitation training task.
Most existing end-traction robots only provide passive traction training, in which the patient has almost no autonomous participation, so the training effect is not ideal. Although a few end-traction robots allow the patient to participate actively in the training task, they ignore the adverse effects on the training environment when the patient's hand loses control.
Therefore, in view of the above problems of existing end-traction robots, those skilled in the art are dedicated to developing an information-fusion control method that increases the patient's autonomous participation and improves the training effect.
Disclosure of Invention
The invention aims to provide an upper limb rehabilitation robot control method based on information fusion that improves the autonomy, accuracy, and assistive capability of patients' rehabilitation training.
In order to achieve this purpose, the invention adopts the following technical scheme:
an upper limb rehabilitation robot control method based on information fusion, comprising the following steps:
s101, acquiring the current position and velocity of the robot end, the force applied by the patient to the robot end, and the collision force between the end and the external environment;
s102, predicting the movement intention of the patient from the current position and velocity of the robot end and the force applied to the robot end;
s103, estimating environmental characteristics from the current position of the robot end and the collision force between the robot end and the external environment;
s104, calculating the confidence of the movement intention predicted in S102 and of the environmental characteristics estimated in S103 from the force applied by the patient to the robot end and its rate of change, and performing information fusion of the movement intention and the environmental characteristics in combination with a Kalman filtering algorithm;
s105, calculating, based on the fuzzy naive Bayes principle, the basic probability distributions of the current robot end position and of the collision force with the external environment, and judging whether the task is completed;
s106, if the task is judged complete, terminating; if the task is judged not complete, introducing a compliance model, transmitting the command position to the robot while ensuring task safety, and continuing to execute step S102.
In a further improvement of the technical scheme of the invention, the specific steps of step S102 are as follows:
the patient grips the handle at the robot end and applies force, so that the robot end-effector moves according to the patient's movement intention; the movement intention is estimated by a radial basis function neural network from the current position and velocity of the robot end and the force applied to the robot end;
the force applied by the patient to the robot end is $f_h$, the position of the robot end is $x$, and its velocity is $\dot{x}$; the movement intention to be predicted is

$$x_d = h(x, \dot{x}, f_h)$$

wherein $h(\cdot)$ is an unknown nonlinear function;
the patient's movement intention is learned with a radial basis function neural network (RBFNN) and estimated as

$$x_d = W^{*T} S(Z) + \epsilon, \qquad \hat{x}_d = \hat{W}^{T} S(Z)$$

wherein the RBFNN input is $Z = [x^T, \dot{x}^T, f_h^T]^T$, $S(\cdot)$ is the radial basis function vector, $\hat{W}$ denotes the estimated weights, and $\epsilon$ denotes the estimation error.
In a further improvement of the technical scheme of the invention, the specific steps of step S103 are as follows:
it is assumed that the characteristics of the unknown environment at each instant follow a Gaussian distribution, i.e.

$$P(z) = \mathcal{N}(z; \mu_d, \Sigma_d)$$

wherein $\mu_d$ is the mean vector of the Gaussian distribution, $\Sigma_d$ is its covariance matrix, $P$ is the characteristic probability of the unknown environment at each instant, and $z$ denotes a particle;
Hybrid Monte Carlo (HMC) sampling is used to obtain particles $\{z_t^i\}_{i=1}^{N}$, wherein $t$ is the current time, $N$ is the number of sampled particles, and each particle $z_t^i$ is a six-dimensional vector representation of $X_d$;
environmental characteristics are estimated from the position of the robot end:

$$w_{1,t}^i = \mathcal{N}(d_t^i; \mu_d, \Sigma_1)$$

wherein $\Sigma_1$ is the covariance matrix of the Gaussian distribution function, $\mu_d$ is the Gaussian mean, and $d_t^i$ is the distance between the current robot end position and the unknown environment corresponding to particle $z_t^i$;
environmental characteristics are estimated from the collision force between the end and the external environment:

$$w_{2,t}^i = \mathcal{N}(\theta_t^i; \theta_f, \Sigma_2)$$

wherein $\Sigma_2$ is the covariance matrix of the Gaussian distribution function, $\theta_f$ is the Gaussian mean, namely the angle of the friction cone, $F_e^t$ denotes the collision force at each instant, and $\theta_t^i$ denotes the friction cone corresponding to $F_e^t$;
more accurate environmental characteristics are obtained from the end position together with the collision force between the end and the external environment:

$$w_t^i = w_{1,t}^i \, w_{2,t}^i$$

wherein $w_t^i$ is the weight of the particle;
after integrating the end position and the collision force of the end with the external environment, a new particle Gaussian distribution is obtained:

$$X_{e2} \sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$$

the new particles contain the environmental characteristic information $X_{e2}$.
In a further improvement of the technical scheme of the invention, the specific steps of step S104 are as follows:
using fuzzy logic, the inputs are the magnitude of the force $F_h$ applied by the patient to the robot end and its rate of change $\dot{F}_h$, and the output is the confidence $n \in (0,1)$ in the patient's movement intention;
the input variables are $|F_h| = \{NB, NM, NS, ZO, PS, PM, PB\}$ and $|\dot{F}_h| = \{NB, NM, NS, ZO, PS, PM, PB\}$; the output variable is $n = \{SS, SB, M, BS, BB\}$; all membership functions are of Gaussian type;
a fuzzy rule table is designed according to the fuzzy rules;
the weight factors are assigned as

$$\alpha_1 = n, \qquad \alpha_2 = 1 - \alpha_1$$

wherein $\alpha_1$ is the confidence of the predicted movement intention and $\alpha_2$ is the confidence of the estimated environmental characteristics;
Kalman filtering is applied to the movement intention and the environmental characteristics separately:

$$x_i(k+1) = A_i(k) x_i(k) + B_i(k) u_i(k) + \Gamma_i(k) \omega_i(k)$$
$$z_i(k) = C_i(k) x_i(k) + \upsilon_i(k)$$

wherein $A_i$ is the state transition matrix, $x$ is the state variable, $B_i$ is a known matrix, $u$ is the control term, $\Gamma$ denotes a known matrix gain, $\omega$ is the system noise, $z$ is the observation, $C$ is the observation matrix, and $\upsilon$ is the observation noise;
combining the confidence $n \in (0,1)$ in the patient's movement intention, information fusion is performed on the filtered movement intention and environmental characteristics:

$$X_e = \alpha_1 \hat{x}_d + \alpha_2 \hat{X}_{e2}$$

wherein $\hat{x}_d$ is the Kalman-filtered movement intention, $\hat{X}_{e2}$ is the Kalman-filtered environmental characteristic, and $X_e$ is the desired position after information fusion.
In a further improvement of the technical scheme of the invention, the specific steps of step S105 are as follows:
based on the fuzzy naive Bayes principle, the basic probability distributions of the current robot end position and of the collision force with the external environment are calculated, and whether the task is completed is judged:
the basic probability distributions of the position information and of the collision force information are calculated separately:

$$m_j(C \mid V) \propto P_j(C) \prod_i P_j(v_i \mid C)$$

wherein $V$ is the eigenvalue vector, $j$ indexes the different information sources (position information and force information), and $C$ is the classification label corresponding to $V$;
normalization:

$$m_j(C \mid V) = \frac{1}{L} \, P_j(C) \prod_i P_j(v_i \mid C)$$

wherein $L$ is the normalization factor;
based on Dempster-Shafer evidence theory, the overall basic probability distributions of the position information and the collision force information are combined:

$$m_{all} = m_1 \oplus m_2$$

wherein $m_{all}$ represents the overall basic probability distribution, $m_1$ the basic probability distribution of the position information, and $m_2$ the basic probability distribution of the collision force information;
after the combination of the position information and the collision force information is completed, the whole judging process changes from two information sources to a single information source; the hypothesis with the highest probability is selected as the predicted class of the sample.
In a further improvement of the technical scheme of the invention, the compliance model of step S106 is:

$$M_d(\ddot{x} - \ddot{X}_e) + B_d(\dot{x} - \dot{X}_e) + K_d(x - X_e) = F_e - F_d$$

wherein $M_d$, $B_d$, and $K_d$ are respectively the inertia, damping, and stiffness matrices required by the impedance model, all positive definite diagonal matrices; $X_e$ is the desired target position in Cartesian space; $\dot{X}_e$ and $\ddot{X}_e$ are respectively the desired velocity and desired acceleration in Cartesian space; $F_e$ is the collision force between the end and the external environment; and $F_d$ is the desired collision force.
By adopting the above technical scheme, the invention achieves the following technical effects:
The method predicts the patient's movement intention with a radial basis function neural network from the force applied by the patient to the robot end and from the robot end's position and velocity; it autonomously perceives and estimates environmental characteristics from the robot end position and the collision force with the external environment in order to predict the next movement; the two estimates are then fused by a fuzzy Kalman filtering algorithm to obtain a more accurate desired trajectory. After each fusion step, the method judges whether the task is completed, based on the robot end position information and the collision force information from the external environment: if the position error and the collision force exceed specified thresholds, the task is considered to need continuing; if the position error and the collision force are within the specified thresholds, the task is considered complete. The hand of the patient's affected limb is fixed to the handle at the robot end, and the fused trajectory follows the patient's movement intention while rejecting intention predictions caused by excessive or insufficient force from an uncontrolled hand.
With the upper limb rehabilitation robot control method based on information fusion, the patient's movement intention and the robot's estimate of the environment are fused into the desired trajectory. This increases the patient's participation while accounting for intention predictions distorted by an out-of-control hand, helps the patient reach the target position more accurately, allows the complete training task to be finished, and improves the training effect.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a control framework diagram of the present invention.
Detailed Description
The invention is described in further detail below with reference to the following figures and specific embodiments:
In the rehabilitation training process, the patient's hand grips the handle at the robot end; the position and velocity of the patient's hand and of the robot end are considered identical.
An upper limb rehabilitation robot control method based on information fusion is shown in fig. 1 and 2, and comprises the following steps:
S101, acquiring the current position and velocity of the robot end, the force applied by the patient to the robot end, and the collision force between the end and the external environment. The force applied by the patient to the robot end is acquired by a force sensor on the handle; the collision force between the robot end and the external environment, i.e., the force experienced by a tool mounted on the robot end when it collides with the environment, is the difference between the total external force measured by the force sensor at the robot end and the force applied by the patient to the robot end.
S102, predicting the movement intention of the patient from the current position and velocity of the robot end and the force applied to the robot end.
The patient grips the handle at the robot end and applies force, so that the robot end-effector moves according to the patient's movement intention; the movement intention is unknown to the robot and must be estimated by a radial basis function neural network from the current robot end position, velocity, and the force applied to the robot end.
The force applied by the patient to the robot end is $f_h$, the position of the robot end is $x$, and its velocity is $\dot{x}$. The movement intention to be predicted is

$$x_d = h(x, \dot{x}, f_h)$$

where $h(\cdot)$ is an unknown nonlinear function.
The patient's movement intention is learned with a radial basis function neural network (RBFNN) and estimated as

$$x_d = W^{*T} S(Z) + \epsilon, \qquad \hat{x}_d = \hat{W}^{T} S(Z)$$

wherein the RBFNN input is $Z = [x^T, \dot{x}^T, f_h^T]^T$, $S(\cdot)$ is the radial basis function vector, $\hat{W}$ denotes the estimated weights, and $\epsilon$ denotes the estimation error.
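To make the estimator above concrete, the following is a minimal numerical sketch of an RBFNN intention estimator in Python. The network size, Gaussian basis width, learning rate, and the gradient-style weight update are illustrative assumptions for the sketch; the patent specifies only the structure $\hat{x}_d = \hat{W}^{T} S(Z)$ with input $Z = [x^T, \dot{x}^T, f_h^T]^T$.

```python
import numpy as np

class RBFNNIntentionEstimator:
    """Sketch of x_d_hat = W_hat^T S(Z) with Z = [x, x_dot, f_h] (assumed sizes)."""
    def __init__(self, n_centers=25, in_dim=9, out_dim=3, width=1.0, lr=0.05):
        rng = np.random.default_rng(0)
        self.centers = rng.uniform(-1.0, 1.0, size=(n_centers, in_dim))  # RBF centers
        self.width = width
        self.W = np.zeros((n_centers, out_dim))  # estimated weight matrix W_hat
        self.lr = lr

    def _S(self, Z):
        # Gaussian radial basis function vector S(Z)
        d2 = np.sum((self.centers - Z) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x, x_dot, f_h):
        # x_d_hat = W_hat^T S(Z)
        Z = np.concatenate([x, x_dot, f_h])
        return self._S(Z) @ self.W

    def update(self, x, x_dot, f_h, x_d_observed):
        # simple gradient step that shrinks the estimation error epsilon
        Z = np.concatenate([x, x_dot, f_h])
        s = self._S(Z)
        err = x_d_observed - s @ self.W
        self.W += self.lr * np.outer(s, err)

# usage: predict the intended end position from one interaction sample
est = RBFNNIntentionEstimator()
x, x_dot, f_h = np.zeros(3), np.zeros(3), np.array([1.0, 0.0, 0.0])
x_d_hat = est.predict(x, x_dot, f_h)
```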
S103, estimating environmental characteristics from the current position of the robot end and the collision force between the robot end and the external environment.
It is assumed that the characteristics of the unknown environment at each instant follow a Gaussian distribution, i.e.

$$P(z) = \mathcal{N}(z; \mu_d, \Sigma_d)$$

wherein $\mu_d$ is the mean vector of the Gaussian distribution, $\Sigma_d$ is its covariance matrix, $P$ is the characteristic probability of the unknown environment at each instant, and $z$ denotes a particle.
Hybrid Monte Carlo (HMC) sampling is used to obtain particles $\{z_t^i\}_{i=1}^{N}$, where $t$ is the current time, $N$ is the number of sampled particles, and each particle $z_t^i$ is a six-dimensional vector representation of $X_d$.
Environmental characteristics are estimated from the position of the robot end:

$$w_{1,t}^i = \mathcal{N}(d_t^i; \mu_d, \Sigma_1)$$

wherein $\Sigma_1$ is the covariance matrix of the Gaussian distribution function, $\mu_d$ is the Gaussian mean, and $d_t^i$ is the distance between the current robot end position and the unknown environment corresponding to particle $z_t^i$.
Environmental characteristics are estimated from the collision force between the end and the external environment:

$$w_{2,t}^i = \mathcal{N}(\theta_t^i; \theta_f, \Sigma_2)$$

wherein $\Sigma_2$ is the covariance matrix of the Gaussian distribution function, $\theta_f$ is the Gaussian mean, namely the angle of the friction cone, $F_e^t$ denotes the collision force at each instant, and $\theta_t^i$ denotes the friction cone corresponding to $F_e^t$.
More accurate environmental characteristics are obtained from the end position together with the collision force between the end and the external environment:

$$w_t^i = w_{1,t}^i \, w_{2,t}^i$$

wherein $w_t^i$ is the weight of the particle.
After integrating the end position and the collision force of the end with the external environment, a new particle Gaussian distribution is obtained:

$$X_{e2} \sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$$

The new particles contain the environmental characteristic information $X_{e2}$.
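As an illustration of step S103, the sketch below weights six-dimensional environment particles by end-position distance and by the deviation of the measured force direction from a friction cone, then refits the Gaussian. Plain Gaussian sampling stands in for the HMC sampler, and the particle layout (first three dimensions a contact position, last three a friction-cone axis) and the specific Gaussian weight forms are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# N six-dimensional particles z_t^i representing the unknown environment X_d,
# drawn from a plain Gaussian as a stand-in for HMC sampling
N = 500
mu_d = np.zeros(6)
Sigma_d = np.eye(6) * 0.01
particles = rng.multivariate_normal(mu_d, Sigma_d, size=N)

def position_weights(particles, x_end, sigma1=0.05):
    # w1 ~ Gaussian in the distance d_i between the robot end position and the
    # environment point encoded by each particle (first 3 dims, assumed layout)
    d = np.linalg.norm(particles[:, :3] - x_end, axis=1)
    return np.exp(-d**2 / (2 * sigma1**2))

def force_weights(particles, F_e, theta_f=np.deg2rad(15), sigma2=0.1):
    # w2 ~ Gaussian in the deviation of the measured impact-force direction from
    # the friction-cone axis encoded by each particle (last 3 dims, assumed layout)
    f_dir = F_e / (np.linalg.norm(F_e) + 1e-9)
    axes = particles[:, 3:] / (np.linalg.norm(particles[:, 3:], axis=1, keepdims=True) + 1e-9)
    theta = np.arccos(np.clip(axes @ f_dir, -1.0, 1.0))
    return np.exp(-(theta - theta_f)**2 / (2 * sigma2**2))

x_end = np.array([0.02, -0.01, 0.0])   # current robot end position
F_e = np.array([0.0, 0.0, -5.0])       # measured collision force
w = position_weights(particles, x_end) * force_weights(particles, F_e)
w /= w.sum()

# refit the Gaussian over X_e2 from the weighted particles
mu_new = w @ particles
Sigma_new = (particles - mu_new).T @ ((particles - mu_new) * w[:, None])
```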
S104, calculating the confidence of the movement intention predicted in S102 and of the environmental characteristics estimated in S103 from the force applied by the patient to the robot end and its rate of change, and performing information fusion of the movement intention and the environmental characteristics in combination with a Kalman filtering algorithm.
Using fuzzy logic, the inputs are the magnitude of the force $F_h$ applied by the patient to the robot end and its rate of change $\dot{F}_h$; the output is the confidence $n \in (0,1)$ in the patient's movement intention.
The input variables are $|F_h| = \{NB, NM, NS, ZO, PS, PM, PB\}$ and $|\dot{F}_h| = \{NB, NM, NS, ZO, PS, PM, PB\}$; the output variable is $n = \{SS, SB, M, BS, BB\}$; all membership functions are of Gaussian form.
A fuzzy rule table is designed according to the fuzzy rules. [The fuzzy rule table appears only as an image in the original publication.]
Weight factor assignment:

$$\alpha_1 = n, \qquad \alpha_2 = 1 - \alpha_1$$

wherein $\alpha_1$ is the confidence of the predicted movement intention and $\alpha_2$ is the confidence of the estimated environmental characteristics.
Kalman filtering is applied to the movement intention and the environmental characteristics separately:

$$x_i(k+1) = A_i(k) x_i(k) + B_i(k) u_i(k) + \Gamma_i(k) \omega_i(k)$$
$$z_i(k) = C_i(k) x_i(k) + \upsilon_i(k)$$

wherein $A_i$ is the state transition matrix, $x$ is the state variable, $B_i$ is a known matrix, $u$ is the control term, $\Gamma$ denotes a known matrix gain, $\omega$ is the system noise, $z$ is the observation, $C$ is the observation matrix, and $\upsilon$ is the observation noise.
Combining the confidence $n \in (0,1)$ in the patient's movement intention, information fusion is performed on the filtered movement intention and environmental characteristics:

$$X_e = \alpha_1 \hat{x}_d + \alpha_2 \hat{X}_{e2}$$

wherein $\hat{x}_d$ is the Kalman-filtered movement intention, $\hat{X}_{e2}$ is the Kalman-filtered environmental characteristic, and $X_e$ is the desired position after information fusion.
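A minimal sketch of step S104 follows. Because the fuzzy rule table is only available as an image, a smooth surrogate maps $|F_h|$ and its rate of change to the confidence $n$ (larger, steadier force means higher trust in the intention), and a scalar Kalman filter stands in for the per-channel filters; all scales, gains, and noise covariances are assumptions.

```python
import numpy as np

def confidence(F_h_mag, dF_h_mag, f_scale=10.0, df_scale=50.0):
    # surrogate for the patent's fuzzy inference (rule table is an image in the
    # original): larger, steadier interaction force -> higher confidence n
    n = (F_h_mag / f_scale) * (1.0 - min(dF_h_mag / df_scale, 1.0))
    return float(np.clip(n, 1e-3, 1 - 1e-3))  # keep n in (0, 1)

class Kalman1D:
    """Scalar Kalman filter for x(k+1) = A x + B u + w, z = C x + v."""
    def __init__(self, A=1.0, B=0.0, C=1.0, Q=1e-4, R=1e-2):
        self.A, self.B, self.C, self.Q, self.R = A, B, C, Q, R
        self.x, self.P = 0.0, 1.0

    def step(self, z, u=0.0):
        xp = self.A * self.x + self.B * u            # predict
        Pp = self.A * self.P * self.A + self.Q
        K = Pp * self.C / (self.C * Pp * self.C + self.R)
        self.x = xp + K * (z - self.C * xp)          # correct
        self.P = (1 - K * self.C) * Pp
        return self.x

kf_intent, kf_env = Kalman1D(), Kalman1D()
n = confidence(F_h_mag=6.0, dF_h_mag=5.0)
alpha1, alpha2 = n, 1.0 - n                 # weight factors
x_d_f = kf_intent.step(z=0.30)              # filtered movement intention
x_e2_f = kf_env.step(z=0.25)                # filtered environmental feature
X_e = alpha1 * x_d_f + alpha2 * x_e2_f      # fused desired position
```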
S105, calculating, based on the fuzzy naive Bayes principle, the basic probability distributions of the current robot end position and of the collision force with the external environment, and judging whether the task is completed.
The basic probability distributions of the position information and of the collision force information are calculated separately:

$$m_j(C \mid V) \propto P_j(C) \prod_i P_j(v_i \mid C)$$

wherein $V$ is the eigenvalue vector, $j$ indexes the different information sources (position information and force information), $i$ indexes the independent characteristic variables, $C$ is the classification label corresponding to $V$, and $P_j(v_i \mid C)$ is the composite probability of the label for the $i$-th independent characteristic variable.
Normalization:

$$m_j(C \mid V) = \frac{1}{L} \, P_j(C) \prod_i P_j(v_i \mid C)$$

wherein $L$ is the normalization factor.
Based on Dempster-Shafer evidence theory, the overall basic probability distributions of the position information and the collision force information are combined:

$$m_{all} = m_1 \oplus m_2$$

wherein $m_{all}$ represents the overall basic probability distribution, $m_1$ the basic probability distribution of the position information, and $m_2$ the basic probability distribution of the collision force information.
After the combination of the position information and the collision force information is completed, the whole judging process changes from two information sources to a single information source. The hypothesis with the highest probability is selected as the predicted class of the sample. When $m_{all}$ is below the specified threshold, the decision result is $\xi = 0$, indicating that the task is not completed; when $m_{all}$ exceeds the specified threshold, the decision result is $\xi = 1$, indicating that the task is completed.
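The combination step of S105 can be sketched as follows over the two-hypothesis frame {task completed, task not completed}. The basic probability distributions below are illustrative placeholders rather than outputs of a trained fuzzy naive Bayes classifier, and the threshold value is an assumption.

```python
def dempster_combine(m1, m2):
    # Dempster's rule m_all = m1 (+) m2 over singleton hypotheses:
    # conflict mass K comes from pairs with empty intersection (a != b here)
    keys = m1.keys()
    K = sum(m1[a] * m2[b] for a in keys for b in keys if a != b)
    return {h: m1[h] * m2[h] / (1.0 - K) for h in keys}

# basic probability distributions from the two sources (position, impact force),
# e.g. produced per source by a fuzzy naive Bayes classifier; numbers are
# illustrative only
m_pos   = {"done": 0.7, "not_done": 0.3}
m_force = {"done": 0.6, "not_done": 0.4}

m_all = dempster_combine(m_pos, m_force)   # {'done': ~0.78, 'not_done': ~0.22}
threshold = 0.5
xi = 1 if m_all["done"] > threshold else 0  # xi = 1: task completed
```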
S106, if the task is judged complete, terminating; if the task is not complete, introducing a compliance model, transmitting the command position to the robot while ensuring task safety, and returning to step S102 to continue predicting the movement intention, estimating the environmental characteristics, fusing the information, and judging whether the task is completed.
When the robot end interacts with the external environment, an excessive collision force makes the system unsafe, so a compliance model is introduced:

$$M_d(\ddot{x} - \ddot{X}_e) + B_d(\dot{x} - \dot{X}_e) + K_d(x - X_e) = F_e - F_d$$

wherein $M_d$, $B_d$, and $K_d$ are respectively the inertia, damping, and stiffness matrices required by the impedance model, all positive definite diagonal matrices; $X_e$ is the desired target position in Cartesian space; $\dot{X}_e$ and $\ddot{X}_e$ are respectively the desired velocity and desired acceleration in Cartesian space; $F_e$ is the collision force between the end and the external environment; and $F_d$ is the desired collision force.
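A minimal integration sketch of such a compliance model is given below, assuming a quasi-static desired trajectory ($\dot{X}_e = \ddot{X}_e = 0$) and illustrative $M_d$, $B_d$, $K_d$ values; each control cycle it turns the measured collision force into a softened command position for the robot.

```python
import numpy as np

def compliance_step(x, x_dot, X_e, F_e, F_d, dt=0.002,
                    M_d=np.eye(3) * 1.0, B_d=np.eye(3) * 20.0, K_d=np.eye(3) * 100.0):
    # One explicit Euler step of the impedance relation
    #   M_d (x_ddot - X_e_ddot) + B_d (x_dot - X_e_dot) + K_d (x - X_e) = F_e - F_d,
    # with the desired trajectory treated as quasi-static (X_e_dot = X_e_ddot = 0)
    x_ddot = np.linalg.solve(M_d, (F_e - F_d) - B_d @ x_dot - K_d @ (x - X_e))
    x_dot_next = x_dot + x_ddot * dt
    x_next = x + x_dot_next * dt
    return x_next, x_dot_next  # command position and velocity sent to the robot

# usage: soften the commanded motion when a 5 N contact force appears
x, x_dot = np.zeros(3), np.zeros(3)
X_e = np.array([0.30, 0.0, 0.0])   # fused desired position from S104
F_e = np.array([0.0, 0.0, -5.0])   # measured collision force
F_d = np.zeros(3)                  # desired collision force
x_cmd, _ = compliance_step(x, x_dot, X_e, F_e, F_d)
```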
The principle and implementation of the present invention have been described in detail above through specific embodiments, which are intended to help in understanding the method of the present invention and its core idea. A person skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the scope of application. In view of the foregoing, the description is not to be taken as limiting the invention.

Claims (6)

1. An upper limb rehabilitation robot control method based on information fusion, characterized by comprising the following steps:
s101, acquiring the current position and velocity of the robot end, the force applied by the patient to the robot end, and the collision force between the end and the external environment;
s102, predicting the movement intention of the patient according to the current position and velocity of the robot end and the force applied to the robot end;
s103, estimating environmental characteristics according to the current position of the robot end and the collision force between the robot end and the external environment;
s104, calculating the confidence of the movement intention predicted in S102 and of the environmental characteristics estimated in S103 according to the force applied by the patient to the robot end and the rate of change of that force, and performing information fusion of the movement intention and the environmental characteristics in combination with a Kalman filtering algorithm;
s105, calculating, based on the fuzzy naive Bayes principle, the basic probability distributions of the current robot end position and of the collision force with the external environment, and judging whether the task is completed;
s106, if the task is judged complete, terminating; if the task is judged not complete, introducing a compliance model, transmitting the command position to the robot while ensuring task safety, and continuing to execute step S102.
2. The method for controlling an upper limb rehabilitation robot based on information fusion as claimed in claim 1, wherein the specific steps of step S102 are as follows:
the patient grips the handle at the robot end and applies force, so that the robot end-effector moves according to the patient's movement intention; the movement intention is estimated by a radial basis function neural network from the current position and velocity of the robot end and the force applied to the robot end;
the force applied by the patient to the robot end is $f_h$, the position of the robot end is $x$, and its velocity is $\dot{x}$; the movement intention to be predicted is

$$x_d = h(x, \dot{x}, f_h)$$

wherein $h(\cdot)$ is an unknown nonlinear function;
the patient's movement intention is learned with a radial basis function neural network (RBFNN) and estimated as

$$x_d = W^{*T} S(Z) + \epsilon, \qquad \hat{x}_d = \hat{W}^{T} S(Z)$$

wherein the RBFNN input is $Z = [x^T, \dot{x}^T, f_h^T]^T$, $S(\cdot)$ is the radial basis function vector, $\hat{W}$ denotes the estimated weights, and $\epsilon$ denotes the estimation error.
3. The method for controlling an upper limb rehabilitation robot based on information fusion as claimed in claim 1, wherein the specific steps of step S103 are as follows:
it is assumed that the characteristics of the unknown environment at each instant follow a Gaussian distribution, i.e.

$$P(z) = \mathcal{N}(z; \mu_d, \Sigma_d)$$

wherein $\mu_d$ is the mean vector of the Gaussian distribution, $\Sigma_d$ is its covariance matrix, $P$ is the characteristic probability of the unknown environment at each instant, and $z$ denotes a particle;
Hybrid Monte Carlo (HMC) sampling is used to obtain particles $\{z_t^i\}_{i=1}^{N}$, wherein $t$ is the current time, $N$ is the number of sampled particles, and each particle $z_t^i$ is a six-dimensional vector representation of $X_d$;
environmental characteristics are estimated from the position of the robot end:

$$w_{1,t}^i = \mathcal{N}(d_t^i; \mu_d, \Sigma_1)$$

wherein $\Sigma_1$ is the covariance matrix of the Gaussian distribution function, $\mu_d$ is the Gaussian mean, and $d_t^i$ is the distance between the current robot end position and the unknown environment corresponding to particle $z_t^i$;
environmental characteristics are estimated from the collision force between the end and the external environment:

$$w_{2,t}^i = \mathcal{N}(\theta_t^i; \theta_f, \Sigma_2)$$

wherein $\Sigma_2$ is the covariance matrix of the Gaussian distribution function, $\theta_f$ is the mean of the angular Gaussian distribution of the friction cone, $F_e^t$ denotes the collision force at each instant, and $\theta_t^i$ denotes the friction cone corresponding to $F_e^t$;
more accurate environmental characteristics are obtained from the end position together with the collision force between the end and the external environment:

$$w_t^i = w_{1,t}^i \, w_{2,t}^i$$

wherein $w_t^i$ is the weight of the particle;
after integrating the end position and the collision force of the end with the external environment, a new particle Gaussian distribution is obtained:

$$X_{e2} \sim \mathcal{N}(\hat{\mu}_t, \hat{\Sigma}_t)$$

the new particles contain the environmental characteristic information $X_{e2}$.
4. The upper limb rehabilitation robot control method based on information fusion as claimed in claim 1, wherein the specific steps of step S104 are as follows:
using fuzzy logic, the inputs are the magnitude of the force $F_h$ applied by the patient to the robot end and its rate of change $\dot{F}_h$, and the output is the confidence $n \in (0,1)$ in the patient's movement intention;
the input variables are $|F_h| = \{NB, NM, NS, ZO, PS, PM, PB\}$ and $|\dot{F}_h| = \{NB, NM, NS, ZO, PS, PM, PB\}$; the output variable is $n = \{SS, SB, M, BS, BB\}$; the membership functions are all of Gaussian type;
a fuzzy rule table is designed according to the fuzzy rules;
the weight factors are assigned as

$$\alpha_1 = n, \qquad \alpha_2 = 1 - \alpha_1$$

wherein $\alpha_1$ is the confidence of the predicted movement intention and $\alpha_2$ is the confidence of the estimated environmental characteristics;
Kalman filtering is applied to the movement intention and the environmental characteristics separately:

$$x_i(k+1) = A_i(k) x_i(k) + B_i(k) u_i(k) + \Gamma_i(k) \omega_i(k)$$
$$z_i(k) = C_i(k) x_i(k) + \upsilon_i(k)$$

wherein $A_i$ is the state transition matrix, $x$ is the state variable, $B_i$ is a known matrix, $u$ is the control term, $\Gamma$ denotes a known matrix gain, $\omega$ is the system noise, $z$ is the observation, $C$ is the observation matrix, and $\upsilon$ is the observation noise;
combining the confidence $n \in (0,1)$ in the patient's movement intention, information fusion is performed on the filtered movement intention and environmental characteristics:

$$X_e = \alpha_1 \hat{x}_d + \alpha_2 \hat{X}_{e2}$$

wherein $\hat{x}_d$ is the Kalman-filtered movement intention, $\hat{X}_{e2}$ is the Kalman-filtered environmental characteristic, and $X_e$ is the desired position after information fusion.
5. The method for controlling an upper limb rehabilitation robot based on information fusion as claimed in claim 1, wherein the specific steps of step S105 are as follows:
based on the fuzzy naive Bayes principle, the basic probability distributions of the current robot end position and of the collision force with the external environment are calculated, and whether the task is completed is judged:
the basic probability distributions of the position information and of the collision force information are calculated separately:

$$m_j(C \mid V) \propto P_j(C) \prod_i P_j(v_i \mid C)$$

wherein $V$ is the eigenvalue vector, $j$ indexes the different information sources, $i$ indexes the independent characteristic variables, $C$ is the classification label corresponding to $V$, and $P_j(v_i \mid C)$ is the composite probability of the label for the $i$-th independent characteristic variable;
normalization:

$$m_j(C \mid V) = \frac{1}{L} \, P_j(C) \prod_i P_j(v_i \mid C)$$

wherein $L$ is the normalization factor;
based on Dempster-Shafer evidence theory, the overall basic probability distributions of the position information and the collision force information are combined:

$$m_{all} = m_1 \oplus m_2$$

wherein $m_{all}$ represents the overall basic probability distribution, $m_1$ the basic probability distribution of the position information, and $m_2$ the basic probability distribution of the collision force information;
after the combination of the position information and the collision force information is completed, the whole judging process changes from two information sources to a single information source, and the hypothesis with the highest probability is selected as the predicted class of the sample.
6. The upper limb rehabilitation robot control method based on information fusion according to claim 1, characterized in that the compliance model of step S106 is:

$$M_d(\ddot{x} - \ddot{X}_e) + B_d(\dot{x} - \dot{X}_e) + K_d(x - X_e) = F_e - F_d$$

wherein $M_d$, $B_d$, and $K_d$ are respectively the inertia, damping, and stiffness matrices required by the impedance model, all positive definite diagonal matrices; $X_e$ is the desired target position in Cartesian space; $\dot{X}_e$ and $\ddot{X}_e$ are respectively the desired velocity and desired acceleration in Cartesian space; $F_e$ is the collision force between the end and the external environment; and $F_d$ is the desired collision force.
CN202211105108.8A 2022-09-09 2022-09-09 Upper limb rehabilitation robot control method based on information fusion Active CN115463003B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211105108.8A CN115463003B (en) 2022-09-09 2022-09-09 Upper limb rehabilitation robot control method based on information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211105108.8A CN115463003B (en) 2022-09-09 2022-09-09 Upper limb rehabilitation robot control method based on information fusion

Publications (2)

Publication Number Publication Date
CN115463003A (en) 2022-12-13
CN115463003B CN115463003B (en) 2024-09-20

Family

ID=84369874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211105108.8A Active CN115463003B (en) 2022-09-09 2022-09-09 Upper limb rehabilitation robot control method based on information fusion

Country Status (1)

Country Link
CN (1) CN115463003B (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120143104A1 (en) * 2009-06-02 2012-06-07 Agency For Science, Technology And Research System and method for motor learning
CN104013513A (en) * 2014-06-05 2014-09-03 电子科技大学 Rehabilitation robot sensing system and method
KR20160141095A (en) * 2015-05-28 2016-12-08 주식회사 셈앤텍 upper limb rehabilitating system
KR20190115483A (en) * 2018-03-12 2019-10-14 한국기계연구원 Wearable robot control system using augmented reality and method for controlling wearable robot using the same
CN108693973A (en) * 2018-04-17 2018-10-23 北京理工大学 A kind of emergency detecting system of fusion EEG signals and environmental information
CN109623835A (en) * 2018-12-05 2019-04-16 济南大学 Wheelchair arm-and-hand system based on multimodal information fusion
CN110900638A (en) * 2019-10-31 2020-03-24 东北大学 Upper limb wearable transfer robot motion recognition system based on multi-signal fusion
CN112133089A (en) * 2020-07-21 2020-12-25 西安交通大学 Vehicle track prediction method, system and device based on surrounding environment and behavior intention
CN112008725A (en) * 2020-08-27 2020-12-01 北京理工大学 Human-computer fusion brain-controlled robot system
CN112276944A (en) * 2020-10-19 2021-01-29 哈尔滨理工大学 Man-machine cooperation system control method based on intention recognition
CN113995629A (en) * 2021-11-03 2022-02-01 中国科学技术大学先进技术研究院 Upper limb double-arm rehabilitation robot admittance control method and system based on mirror force field
CN114131635A (en) * 2021-12-08 2022-03-04 山东大学 Multi-degree-of-freedom auxiliary external limb grasping robot system integrating visual sense and tactile sense active perception

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI QIN, HONGYU WANG, YAZHOU YUAN, SHUFAN QIN: "Multi-Sensor Perception Strategy to Enhance Autonomy of Robotic Operation for Uncertain Peg-in-Hole Task", SENSORS, vol. 21, no. 11, 31 May 2021 (2021-05-31), pages 1 - 26 *
ZHOU, Wuxiao: "Research on Motion Control Method of an Intelligent Walking-Assistance Robot", China Master's Theses Full-text Database, Information Science and Technology, vol. 2011, no. 07, 15 July 2011 (2011-07-15), pages 140-183 *

Also Published As

Publication number Publication date
CN115463003B (en) 2024-09-20

Similar Documents

Publication Publication Date Title
Sui et al. Formation control with collision avoidance through deep reinforcement learning using model-guided demonstration
Deng et al. A learning-based hierarchical control scheme for an exoskeleton robot in human–robot cooperative manipulation
Wang et al. Controlling object hand-over in human–robot collaboration via natural wearable sensing
Wang et al. Predicting human intentions in human–robot hand-over tasks through multimodal learning
Bu et al. A hybrid motion classification approach for EMG-based human–robot interfaces using bayesian and neural networks
Chen et al. Neural learning enhanced variable admittance control for human–robot collaboration
Trick et al. Multimodal uncertainty reduction for intention recognition in human-robot interaction
CN113059570B (en) Human-robot cooperative control method based on human body dynamic arm strength estimation model
Zeng et al. Encoding multiple sensor data for robotic learning skills from multimodal demonstration
Sun et al. Fused fuzzy petri nets: a shared control method for brain–computer interface systems
Yudha et al. Performance comparison of fuzzy logic and neural network design for mobile robot navigation
JP2005238422A (en) Robot device, its state transition model construction method and behavior control method
CN112966816A (en) Multi-agent reinforcement learning method surrounded by formation
Bao et al. Prediction of personalized driving behaviors via driver-adaptive deep generative models
Barfi et al. Improving robotic hand control via adaptive Fuzzy-PI controller using classification of EMG signals
Lang et al. Object handover prediction using gaussian processes clustered with trajectory classification
CN116578024A (en) Multi-mode control method and system for rehabilitation robot based on mixed mode signals
CN115463003A (en) Upper limb rehabilitation robot control method based on information fusion
Anvaripour et al. Safe human robot cooperation in task performed on the shared load
Phinni et al. Obstacle Avoidance of a wheeled mobile robot: A Genetic-neurofuzzy approach
CN113760099A (en) Movement intention prediction method and system
Levinson et al. Automatic language acquisition by an autonomous robot
Feng et al. Robot intelligent communication based on deep learning and TRIZ ergonomics for personalized healthcare
Luo et al. Automatic guided intelligent wheelchair system using hierarchical grey-fuzzy motion decision-making algorithms
CN115091467A (en) Intent prediction and disambiguation method and system based on fuzzy Petri net

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant