CN113478462A - Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal - Google Patents

Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal

Info

Publication number
CN113478462A
Authority
CN
China
Prior art keywords
robot
upper limb
limb exoskeleton
human
exoskeleton robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110775590.5A
Other languages
Chinese (zh)
Other versions
CN113478462B (en)
Inventor
李智军
刘玉柱
李国欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110775590.5A priority Critical patent/CN113478462B/en
Publication of CN113478462A publication Critical patent/CN113478462A/en
Application granted granted Critical
Publication of CN113478462B publication Critical patent/CN113478462B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/0006 Exoskeletons, i.e. resembling a human figure
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 13/00 Controls for manipulators
    • B25J 13/08 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J 13/087 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices for sensing other physical parameters, e.g. electrical or chemical properties
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/1605 Simulation of manipulator lay-out, design, modelling of manipulator
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J 9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1628 Programme controls characterised by the control loop
    • B25J 9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Manipulator (AREA)
  • Prostheses (AREA)

Abstract

The invention provides an intention assimilation control method and system for an upper limb exoskeleton robot based on surface electromyogram signals, comprising the following steps: step 1: establishing a dynamics model of the upper limb exoskeleton robot by the Kane method; step 2: performing intention recognition from surface electromyogram signals based on the dynamics model; and step 3: performing intention assimilation control by means of a virtual target. The intention assimilation control method provided by the invention covers continuous interactive behaviors from cooperation to competition, requires less force guidance, achieves safer obstacle avoidance, and supports a wider range of interactive behaviors.

Description

Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal
Technical Field
The invention relates to the technical field of man-machine interaction, artificial intelligence and interaction control, in particular to an intention assimilation control method and system for an upper limb exoskeleton robot based on surface electromyogram signals.
Background
In recent years, robot technology has developed rapidly. The human-machine interface is the most important link in human-machine interaction research: the quality of its signals directly affects the control performance and experimental results. A human-machine interface can measure human force and movement intention signals, and surface electromyogram signals offer great advantages in accuracy and latency, giving accurate estimates of human movement and force.
Regarding control strategies, the diversification of interactive robot control strategies is an important factor for popularization and application. The basic control strategy is PID control, which is simple and convenient to apply, but it can only follow a fixed trajectory and cannot incorporate human intention. To reflect human intention, surface electromyographic signals have also been introduced into robot control strategies, and artificial intelligence algorithms have been used to relate the electromyographic signals to human joints, achieving a certain control effect. From the viewpoint of homotopic switching of master and slave roles, human-robot interaction behaviors can be divided into assistance, cooperation, collaboration, opposition, and the like; intention assimilation control covers continuous interactive behaviors from cooperation to competition, with less force guidance, safer obstacle avoidance, and a wider range of interactive behaviors.
Patent document CN108283569A (application number: CN201711449077.7) discloses an exoskeleton robot control system and control method, intended to solve the problems that existing rehabilitation exoskeleton robots have poor universality, cannot correctly judge human motion intention, and cannot realize the function and effect of human-machine cooperation. The exoskeleton robot control system comprises an attitude sensor, an angle sensor, a pressure sensor, a surface electromyogram signal sensor, a processor, an exoskeleton robot wearing part and a human-computer interaction module.
Disclosure of Invention
Aiming at the defects in the prior art, the invention aims to provide an intention assimilation control method and system for an upper limb exoskeleton robot based on surface electromyogram signals.
The invention provides an intention assimilation control method of an upper limb exoskeleton robot based on a surface electromyogram signal, which comprises the following steps:
step 1: establishing a dynamics model of the upper limb exoskeleton robot by the Kane method;
step 2: performing intention recognition from surface electromyogram signals based on the dynamics model;
step 3: performing intention assimilation control by means of a virtual target.
Preferably, the step 1 comprises:
step 1.1: there is no relative motion between the robot, the person and the object, and the robot and the person together manipulate the object, the object satisfying the dynamic equation:
Figure BDA0003154674450000021
wherein,
Figure BDA0003154674450000022
as the second derivative of the position coordinates of the object with respect to time, f and uhIs the force of the robot and person on the object, MoIs a mass matrix of the object, GoIs the weight of the object;
step 1.2: establishing an upper limb exoskeleton robot dynamics model by using a Kenn method to obtain a joint space dynamics equation when the upper limb exoskeleton robot with n degrees of freedom is in contact with the environment:
Figure BDA0003154674450000023
wherein q is the joint coordinate of the robot, tauqFor control input, JT(q) is the Jacobian matrix, Mq(q) is the robot inertia matrix,
Figure BDA0003154674450000024
is the Coriolis and centrifugal torque, Gq(q) is the moment of gravity;
and converting into a robot operating space to obtain a kinetic equation:
Figure BDA0003154674450000025
wherein u represents the control input of the upper limb exoskeleton robot,
Figure BDA0003154674450000026
Figure BDA0003154674450000027
Figure BDA0003154674450000028
Figure BDA0003154674450000029
Mr、Cr、Grrespectively representing an inertia matrix, a Coriolis force and centrifugal force matrix and a gravity matrix of the upper limb exoskeleton robot under a Cartesian space coordinate system, and symbols
Figure BDA00031546744500000212
Representing a pseudo-inverse of the matrix;
step 1.3: simultaneous equations (1) and (3) are obtained to obtain the combined kinetic equation of the object and the robot:
Figure BDA00031546744500000210
M≡Mo+Mr,G≡Go+Gr,C≡Cr…………(5)
m, C, G respectively representing inertia matrix, Coriolis force and centrifugal force matrix and gravity matrix of the upper limb exoskeleton robot and human interaction system in a Cartesian space coordinate system;
step 1.4: the position, the speed and the human force of the tail end of the upper limb exoskeleton robot are measured, a robot controller with gravity compensation and linear feedback is adopted, and the expression is as follows:
Figure BDA00031546744500000211
where τ is the target position of the robot, L1And L2Is a gain corresponding to position error and velocity;
the force of a person acting on an object is modeled as:
Figure BDA0003154674450000031
wherein L ish,1And Lh,2Control gain, τ, for humanshAnd (5) bringing the formulas (6) and (7) into the formula (5) to obtain the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system for the target position of the human:
Figure BDA0003154674450000032
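For illustration, the gravity-compensated linear-feedback controller (6), the human force model (7) and the closed-loop dynamics (8) can be sketched as a small one-dimensional numerical simulation. This is a minimal sketch; all masses, gains and target values below are arbitrary placeholder assumptions, not parameters taken from the patent.

```python
import numpy as np

# Illustrative 1-D simulation of the combined human-robot-object dynamics (5)
# under the robot controller (6) and the human force model (7).
# All numerical values are placeholder assumptions, not values from the patent.

M, C, G = 2.0, 0.5, 0.0        # combined inertia, damping and gravity terms
L1, L2 = 40.0, 12.0            # robot gains (position error, velocity)
Lh1, Lh2 = 25.0, 8.0           # human gains
tau_r, tau_h = 0.30, 0.45      # robot target and human target [m]

def robot_control(x, xd, tau):
    """Gravity compensation plus linear feedback, cf. equation (6)."""
    return G - L1 * (x - tau) - L2 * xd

def human_force(x, xd, tau_h):
    """Human force model, cf. equation (7)."""
    return -Lh1 * (x - tau_h) - Lh2 * xd

x, xd, dt = 0.0, 0.0, 1e-3
for _ in range(5000):
    u = robot_control(x, xd, tau_r)
    uh = human_force(x, xd, tau_h)
    xdd = (u + uh - C * xd - G) / M      # combined dynamics (5)
    xd += xdd * dt
    x += xd * dt

print(f"steady-state position: {x:.3f} m")
```

With these placeholder gains the end point settles between the robot target and the human target, weighted by L_1 and L_{h,1}, which is the behaviour predicted by equation (8) at steady state.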
preferably, the step 2 comprises:
step 2.1: collecting electromyographic signals of wrists, forearms and elbows of a person through an electromyograph;
step 2.2: filtering, data segmentation and feature extraction are carried out on the collected electromyographic signals, and feature extraction is carried out according to waveform types, so that the extracted features correspond to different intention categories;
step 2.3: training and predicting by using a multi-criterion linear programming in a database and combining a classification method of an online random forest;
step 2.4: during model prediction, the prediction category of each base classifier is compared with the corresponding confidence coefficient and a preset threshold value to determine whether the base classifier votes, finally, a Boost algorithm is used for collecting voting results of all the base classifiers and carrying out weighted summation to find the prediction category with the largest votes, and when the votes are larger than the mean value, the activity intention is output.
Preferably, the step 3 comprises:
step 3.1: by a virtual target of a person
Figure BDA0003154674450000033
Evaluating the influence of human on the dynamics of the upper limb exoskeleton robot and the human interaction system, wherein the formula is as follows:
Figure BDA0003154674450000034
wherein the human controls the gain
Figure BDA0003154674450000035
And
Figure BDA0003154674450000036
using measured average values, or the same values as the robot controller gains, i.e.
Figure BDA0003154674450000037
The superscript symbol v represents the estimated value;
step 3.2: using an intention recognition method based on surface electromyography signals, or by internal model parameterization
Figure BDA0003154674450000038
And estimating, wherein the expression is as follows:
Figure BDA0003154674450000039
wherein the superscript symbol T represents transposition, and θ is the virtual target position of the person being calculated
Figure BDA00031546744500000310
The vector of parameters of (a) is,
Figure BDA00031546744500000311
Figure BDA00031546744500000312
t represents time, m is a predetermined parameter, and therefore
Figure BDA00031546744500000313
Is a quantity that is determined by the internal model parameters and varies with time;
state vector using upper limb exoskeleton robot and human interaction system
Figure BDA00031546744500000314
The extended model is obtained after substituting the formula (5):
Figure BDA0003154674450000041
where φ represents: state vector of upper limb exoskeleton robot and human interaction system, v ∈ N (0, E [ v, v ]T]) Is the system noise, i.e., mean 0, variance E [ v, vT]Gaussian noise of (2);
step 3.3: measuring the position and the speed of the end point of the robot and the interaction force with a human through a sensor to obtain a measurement vector of the upper limb exoskeleton robot and the human interaction system:
Figure BDA0003154674450000042
wherein, mu is N (0, E [ mu, mu ]T]) Is the environmental measurement noise, i.e., the mean is 0 and the variance is E [ mu, mu ]T]Gaussian noise of (2);
step 3.4: calculating an extended state estimate of the robot using the system observer:
Figure BDA0003154674450000043
Figure BDA0003154674450000044
Figure BDA0003154674450000045
wherein Λ represents an estimated value; z represents a measurement vector of the upper limb exoskeleton robot and the human interaction system;
linear quadratic estimation gain K-PHTR-1P is a positive definite matrix obtained by solving the ricatt differential equation:
Figure BDA0003154674450000046
wherein the noise covariance matrix Q ≡ E [ v, v ≡ E ≡ VT],R≡E[μ,μT]And a denotes a system matrix, and is substituted into equation (11) and expressed as follows:
Figure BDA0003154674450000047
Figure BDA0003154674450000048
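As an illustration of step 3.4, the linear-quadratic estimator of equations (13) and (14) can be sketched with a steady-state Riccati solution. The matrices A, B, H, Q and R below are small placeholder values, not the explicit expressions (15)-(16) of the patent, so this is only a sketch of the estimator structure under those assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder extended-state model: end-point position, velocity, and one
# internal-model parameter of the human target. Values are assumptions.
A = np.array([[0.0,   1.0,  0.0],
              [-20.0, -6.0, 10.0],
              [0.0,   0.0,  0.0]])
B = np.array([[0.0], [0.5], [0.0]])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])          # measured: end-point position and velocity
Q = np.diag([1e-4, 1e-4, 1e-2])          # process noise covariance E[v v^T]
R = np.diag([1e-3, 1e-3])                # measurement noise covariance E[mu mu^T]

# Steady-state solution of the Riccati equation (14) and gain K = P H^T R^-1.
P = solve_continuous_are(A.T, H.T, Q, R)
K = P @ H.T @ np.linalg.inv(R)

def observer_step(phi_hat, u, z, dt):
    """One Euler step of the observer (13): d(phi_hat)/dt = A phi_hat + B u + K (z - H phi_hat)."""
    phi_hat_dot = A @ phi_hat + B @ np.atleast_1d(u) + K @ (z - H @ phi_hat)
    return phi_hat + dt * phi_hat_dot

phi_hat = np.zeros(3)
z = np.array([0.05, 0.0])                # one measurement of position and velocity
phi_hat = observer_step(phi_hat, u=np.array([0.0]), z=z, dt=1e-3)
print(phi_hat)
```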
Preferably, the interaction between the person and the robot is determined by the relationship between \tau and \tau_h:
when \tau = \tau_h, this corresponds to assistance, with the robot adopting the virtual target of the human; when \tau = \tau_r, the robot follows its original target \tau_r;
when \tau = 2\tau_r - \tau_h, the robot imposes its own target by eliminating the human target from the upper limb exoskeleton robot and human interaction system;
to assimilate the interactive behavior, the target position of the robot is designed from the estimated human target using the following formula:
\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h …………(17)
where \tau_r denotes the original target position of the upper limb exoskeleton robot, and \lambda denotes a hyper-parameter that adjusts between the original target position of the upper limb exoskeleton robot and the human target position and is dynamically adjusted according to the end position x.
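The target blending of equation (17) can be sketched as follows. The explicit formula \tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h used here is a reconstruction that matches the three special cases \tau = \hat{\tau}_h, \tau = \tau_r and \tau = 2\tau_r - \hat{\tau}_h described in the text; the numerical targets are placeholder values.

```python
import numpy as np

def assimilated_target(tau_r, tau_h_hat, lam):
    """Blend of robot target and estimated human target, cf. equation (17) as
    reconstructed here: tau = lam * tau_r + (1 - lam) * tau_h_hat."""
    return lam * np.asarray(tau_r) + (1.0 - lam) * np.asarray(tau_h_hat)

tau_r, tau_h_hat = np.array([0.30, 0.10]), np.array([0.45, 0.20])

print(assimilated_target(tau_r, tau_h_hat, 0.0))  # assistance: robot adopts the human target
print(assimilated_target(tau_r, tau_h_hat, 1.0))  # cooperation: robot keeps its original target
print(assimilated_target(tau_r, tau_h_hat, 2.0))  # confrontation: 2*tau_r - tau_h_hat
```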
The invention provides an upper limb exoskeleton robot intention assimilation control system based on a surface electromyogram signal, which comprises:
module M1: establishing a dynamics model of the upper limb exoskeleton robot by the Kane method;
module M2: performing intention recognition from surface electromyogram signals based on the dynamics model;
module M3: performing intention assimilation control by means of a virtual target.
Preferably, the module M1 includes:
module M1.1: there is no relative motion between the robot, the person and the object, and the robot and the person together manipulate the object, the object satisfying the dynamic equation:
Figure BDA0003154674450000052
wherein,
Figure BDA0003154674450000053
as the second derivative of the position coordinates of the object with respect to time, f and uhIs the force of the robot and person on the object, MoIs a mass matrix of the object, GoIs the weight of the object;
module M1.2: establishing an upper limb exoskeleton robot dynamics model by using a Kenn method to obtain a joint space dynamics equation when the upper limb exoskeleton robot with n degrees of freedom is in contact with the environment:
Figure BDA0003154674450000054
wherein q is the joint coordinate of the robot, tauqFor control input, JT(q) is the Jacobian matrix, Mq(q) is the robot inertia matrix,
Figure BDA0003154674450000055
is Coriolis and centrifugal torqueMoment, Gq(q) is the moment of gravity;
and converting into a robot operating space to obtain a kinetic equation:
Figure BDA0003154674450000056
wherein u represents the control input of the upper limb exoskeleton robot,
Figure BDA0003154674450000057
Figure BDA0003154674450000058
Figure BDA0003154674450000059
Figure BDA00031546744500000510
Mr、Cr、Grrespectively representing an inertia matrix, a Coriolis force and centrifugal force matrix and a gravity matrix of the upper limb exoskeleton robot under a Cartesian space coordinate system, and symbols
Figure BDA00031546744500000511
Representing a pseudo-inverse of the matrix;
module M1.3: simultaneous equations (1) and (3) are obtained to obtain the combined kinetic equation of the object and the robot:
Figure BDA0003154674450000061
M≡Mo+Mr,G≡Go+Gr,C≡Cr…………(5)
m, C, G respectively representing inertia matrix, Coriolis force and centrifugal force matrix and gravity matrix of the upper limb exoskeleton robot and human interaction system in a Cartesian space coordinate system;
module M1.4: the position, the speed and the human force of the tail end of the upper limb exoskeleton robot are measured, a robot controller with gravity compensation and linear feedback is adopted, and the expression is as follows:
Figure BDA0003154674450000062
where τ is the target position of the robot, L1And L2Is a gain corresponding to position error and velocity;
the force of a person acting on an object is modeled as:
Figure BDA0003154674450000063
wherein L ish,1And Lh,2Control gain, τ, for humanshAnd (5) bringing the formulas (6) and (7) into the formula (5) to obtain the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system for the target position of the human:
Figure BDA0003154674450000064
preferably, the module M2 includes:
module M2.1: collecting electromyographic signals of wrists, forearms and elbows of a person through an electromyograph;
module M2.2: filtering, data segmentation and feature extraction are carried out on the collected electromyographic signals, and feature extraction is carried out according to waveform types, so that the extracted features correspond to different intention categories;
module M2.3: training and predicting by using a multi-criterion linear programming in a database and combining a classification method of an online random forest;
module M2.4: during model prediction, the prediction category of each base classifier is compared with the corresponding confidence coefficient and a preset threshold value to determine whether the base classifier votes, finally, a Boost algorithm is used for collecting voting results of all the base classifiers and carrying out weighted summation to find the prediction category with the largest votes, and when the votes are larger than the mean value, the activity intention is output.
Preferably, the module M3 includes:
module M3.1: by a virtual target of a person
Figure BDA0003154674450000065
Evaluating the influence of human on the dynamics of the upper limb exoskeleton robot and the human interaction system, wherein the formula is as follows:
Figure BDA0003154674450000066
wherein the human controls the gain
Figure BDA0003154674450000067
And
Figure BDA0003154674450000068
using measured average values, or the same values as the robot controller gains, i.e.
Figure BDA0003154674450000069
The superscript symbol v represents the estimated value;
module M3.2: using an intention recognition method based on surface electromyography signals, or by internal model parameterization
Figure BDA00031546744500000610
And estimating, wherein the expression is as follows:
Figure BDA0003154674450000071
wherein the superscript symbol T represents transposition, and θ is the virtual target position of the person being calculated
Figure BDA0003154674450000072
The vector of parameters of (a) is,
Figure BDA0003154674450000073
Figure BDA0003154674450000074
t represents time, m is a predetermined parameter, and therefore
Figure BDA0003154674450000075
Is a quantity that is determined by the internal model parameters and varies with time;
state vector using upper limb exoskeleton robot and human interaction system
Figure BDA0003154674450000076
The extended model is obtained after substituting the formula (5):
Figure BDA0003154674450000077
where φ represents: state vector of upper limb exoskeleton robot and human interaction system, v ∈ N (0, E [ v, v ]T]) Is the system noise, i.e., mean 0, variance E [ v, vT]Gaussian noise of (2);
module M3.3: measuring the position and the speed of the end point of the robot and the interaction force with a human through a sensor to obtain a measurement vector of the upper limb exoskeleton robot and the human interaction system:
Figure BDA0003154674450000078
wherein, mu is N (0, E [ mu, mu ]T]) Is the environmental measurement noise, i.e., the mean is 0 and the variance is E [ mu, mu ]T]Gaussian noise of (2);
module M3.4: calculating an extended state estimate of the robot using the system observer:
Figure BDA0003154674450000079
Figure BDA00031546744500000710
Figure BDA00031546744500000711
wherein Λ represents an estimated value; z represents a measurement vector of the upper limb exoskeleton robot and the human interaction system;
linear quadratic estimation gain K-PHTR-1P is a positive definite matrix obtained by solving the ricatt differential equation:
Figure BDA00031546744500000712
wherein the noise covariance matrix Q ≡ E [ v, v ≡ E ≡ VT],R≡E[μ,μT]And a denotes a system matrix, and is substituted into equation (11) and expressed as follows:
Figure BDA0003154674450000081
Figure BDA0003154674450000082
Preferably, the interaction between the person and the robot is determined by the relationship between \tau and \tau_h:
when \tau = \tau_h, this corresponds to assistance, with the robot adopting the virtual target of the human; when \tau = \tau_r, the robot follows its original target \tau_r;
when \tau = 2\tau_r - \tau_h, the robot imposes its own target by eliminating the human target from the upper limb exoskeleton robot and human interaction system;
to assimilate the interactive behavior, the target position of the robot is designed from the estimated human target using the following formula:
\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h …………(17)
where \tau_r denotes the original target position of the upper limb exoskeleton robot, and \lambda denotes a hyper-parameter that adjusts between the original target position of the upper limb exoskeleton robot and the human target position and is dynamically adjusted according to the end position x.
Compared with the prior art, the invention has the following beneficial effects:
(1) the invention introduces surface electromyogram signals into the robot control strategy, which offers advantages in accuracy and latency;
(2) the intention assimilation control method provided by the invention covers continuous interactive behaviors from cooperation to competition, requires less force guidance, achieves safer obstacle avoidance, and supports a wider range of interactive behaviors;
(3) the method is simple and easy to implement, and provides a compliance control method with high robustness.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a schematic diagram of an intention assimilation control method of an upper limb exoskeleton robot based on surface electromyogram signals;
FIG. 2 is a schematic view of an obstacle avoidance and auxiliary task scenario of the present invention;
FIG. 3 is a schematic flow chart of an intention identification method based on surface electromyography signals according to the present invention;
FIG. 4 is a schematic diagram of the MCLP Boost algorithm of the present invention;
fig. 5 is a schematic diagram of the variation of the human-computer interaction strategy corresponding to the parameter λ adjustment of the present invention.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications obvious to those skilled in the art can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
Example (b):
As shown in FIG. 1, the intention assimilation control method for an upper limb exoskeleton robot based on surface electromyogram signals according to the present invention comprises an upper limb exoskeleton robot dynamics model established by the Kane method, an intention recognition method based on surface electromyogram signals, and an intention assimilation control method; different task scenarios are shown in FIG. 2, and the intention assimilation control method of the present invention can unify different human-robot interaction strategies and perform continuous control;
further, the specific process of establishing the upper limb exoskeleton robot dynamics model by using the Kane method comprises the following steps:
1) It is assumed that there is no relative motion between the robot gripper, the hand and the object, that the robot gripper and the hand manipulate a rigid object together, and that the object is a mass point. Only linear motion is considered in general object manipulation, and the object satisfies the dynamic equation:
M_o \ddot{x}(t) = f + u_h - G_o …………(1)
where x(t) is the position coordinate of the object, f and u_h are the forces exerted by the robot and the person on the object, M_o is the mass matrix of the object, and G_o is the weight of the object.
2) The upper limb exoskeleton robot dynamics model established by the Kane method gives the joint-space dynamics equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:
M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q) f …………(2)
where q is the joint coordinate vector of the robot, \tau_q is the control input, J^T(q) is the transpose of the Jacobian matrix, M_q(q) is the robot inertia matrix, C_q(q,\dot{q}) is the Coriolis and centrifugal torque, and G_q(q) is the gravity torque;
converting into the robot operational space gives the dynamic equation:
M_r\ddot{x} + C_r\dot{x} + G_r = u - f …………(3)
where u \equiv J^{\dagger T}(q)\tau_q denotes the control input of the upper limb exoskeleton robot, M_r \equiv J^{\dagger T} M_q J^{\dagger}, C_r \equiv J^{\dagger T}(C_q - M_q J^{\dagger}\dot{J})J^{\dagger} and G_r \equiv J^{\dagger T} G_q respectively denote the inertia matrix, the Coriolis and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system; the symbol \dagger denotes the matrix pseudo-inverse;
3) Combining equations (1) and (3) gives the combined dynamic equation of the object and the robot:
M\ddot{x} + C\dot{x} + G = u + u_h …………(4)
M \equiv M_o + M_r, G \equiv G_o + G_r, C \equiv C_r …………(5)
where M, C and G respectively denote the inertia matrix, the Coriolis and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;
4) Considering that the robot has information about its local environment, the position and velocity of the end of the upper limb exoskeleton robot interaction system and the human force are measured, all of which are affected by measurement noise. A robot controller with gravity compensation and linear feedback is adopted:
u = G - L_1(x - \tau) - L_2\dot{x} …………(6)
where \tau is the target position of the robot, and L_1 and L_2 are the gains corresponding to the position error and the velocity.
The force of the human hand on the object is modelled as:
u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x} …………(7)
where L_{h,1} and L_{h,2} are the control gains of the human and \tau_h is the target position of the human; substituting equations (6) and (7) into equation (5) gives the dynamic equation of the upper limb exoskeleton robot and human interaction closed-loop system:
M\ddot{x} + (C + L_2 + L_{h,2})\dot{x} + (L_1 + L_{h,1})x = L_1\tau + L_{h,1}\tau_h …………(8)
further, as shown in fig. 3, the intention identification method based on the surface electromyogram signal specifically includes the following steps:
1) collecting electromyographic signals of wrists, forearms and elbows of a person through an electromyograph;
2) the collected electromyographic signals are filtered and then segmented for feature extraction; overlapping is not applied when long-sequence waveforms are used for feature extraction, while an overlap operation can be considered when the waveforms are short, so that the extracted features correspond to different intention categories;
3) the MCLP-Boost algorithm is shown in FIG. 4; it has good generalization performance, and because prediction is based on comparisons it has the advantage of a small time overhead compared with computation-based models;
4) during model prediction, the confidence (probability) of the category predicted by each base classifier is compared with a preset threshold to determine whether that base classifier votes; a Boost algorithm then collects the voting results of all base classifiers, performs a weighted summation, and finds the predicted category with the most votes, and when its vote count is greater than the mean value, the activity intention is output;
5) the result of the intention recognition based on the surface electromyographic signal can be used to generate a "virtual" target.
Further, the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system depends entirely on u_h, no matter what internal model it is based on; an alternative method is therefore developed that does not require estimating the human control gains, a "virtual" target \hat{\tau}_h being generated by using arbitrary values of these assumed gains.
The intention assimilation control method comprises the following specific processes:
1) The "virtual" target \hat{\tau}_h can effectively evaluate the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system if the following condition is satisfied:
u_h = -\hat{L}_{h,1}(x - \hat{\tau}_h) - \hat{L}_{h,2}\dot{x} …………(9)
where, for the virtual human control gains \hat{L}_{h,1} and \hat{L}_{h,2}, an average value measured from many people may be used, or the same values as the robot controller gains, i.e. \hat{L}_{h,1} = L_1 and \hat{L}_{h,2} = L_2.
2) To estimate \hat{\tau}_h, the intention recognition method based on surface electromyogram signals described above may be used, or it may be parameterized using an internal model:
\hat{\tau}_h = \theta^T \Phi(t) …………(10)
where \theta is the vector of parameters used to compute the virtual target position \hat{\tau}_h of the person, \Phi(t) is a vector of m basis functions of the time t, and m is a predetermined parameter, so that \hat{\tau}_h is a time-varying quantity determined by the internal model parameters. Using the state vector \phi of the upper limb exoskeleton robot and human interaction system, extended with the internal model parameters, and substituting it into equation (5) yields the extended model:
\dot{\phi} = A\phi + Bu + v …………(11)
where \phi represents the state vector of the upper limb exoskeleton robot and human interaction system; v \in N(0, E[v, v^T]) is the system noise, i.e. Gaussian noise with mean 0 and variance E[v, v^T].
3) Considering that the robot can measure its end position and velocity and the interaction force with the person with suitable sensors, the measurement vector of the robot is obtained:
z = H\phi + \mu …………(12)
where \mu \in N(0, E[\mu, \mu^T]) is the environmental measurement noise, i.e. Gaussian noise with mean 0 and variance E[\mu, \mu^T].
4) However, \hat{\tau}_h and \theta in equation (10) are unknown, so the following system observer is used to calculate the extended state estimate of the robot:
\dot{\hat{\phi}} = A\hat{\phi} + Bu + K(z - H\hat{\phi}), \qquad \hat{\tau}_h = \hat{\theta}^T\Phi(t) …………(13)
where the superscript ^ denotes an estimated value and z denotes the measurement vector of the upper limb exoskeleton robot and human interaction system; the linear-quadratic estimation gain is K = P H^T R^{-1}, where P is a positive definite matrix obtained by solving the Riccati differential equation:
\dot{P} = AP + PA^T - P H^T R^{-1} H P + Q …………(14)
where the noise covariance matrices are Q \equiv E[v, v^T] and R \equiv E[\mu, \mu^T]; using A to denote the system matrix, equation (11) can be written in the explicit form of equations (15) and (16).
All parameters except \theta can be observed, from which the value of \hat{\tau}_h is obtained.
5) The interaction between the person and the robot can be determined by the relationship between \tau and \tau_h: for example, when \tau = \tau_h, this corresponds to assistance, with the robot adopting the virtual target of the human; when \tau = \tau_r, the robot follows its original target \tau_r; and when \tau = 2\tau_r - \tau_h, this corresponds to "confrontation", i.e. the robot imposes its own target by eliminating the target of the human from the upper limb exoskeleton robot and human interaction system.
To assimilate the interactive behavior, the target position of the robot is designed from the estimated human target using the following equation:
\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h …………(17)
where \tau_r represents the original target position of the upper limb exoskeleton robot and \lambda represents the hyper-parameter that adjusts between the original target position of the upper limb exoskeleton robot and the human target position, dynamically adjusted according to the end position x.
The variation of the human-robot interaction strategy corresponding to the adjustment of the parameter \lambda is shown in FIG. 5: when \lambda < 1, the intention assimilation controller coordinates the human and machine goals; when \lambda = 1, the intention assimilation controller ignores \hat{\tau}_h, thereby achieving human-machine cooperation; and when \lambda = 2, the intention assimilation controller eliminates the estimated influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system, and the position of the upper limb exoskeleton robot and human interaction system finally converges to the target \tau_r of the intention assimilation controller.
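The patent only states that \lambda is adjusted dynamically according to the end position x; the distance-based schedule below, which raises \lambda toward 2 as the end point approaches an obstacle (cf. the obstacle-avoidance scenario of FIG. 2) and lowers it toward 0 far away, is purely an illustrative assumption, including the obstacle position and the distance thresholds.

```python
import numpy as np

def schedule_lambda(x, obstacle, d_safe=0.15, d_crit=0.05):
    """Illustrative scheduling of the hyper-parameter lambda from the end position x:
    far from the obstacle the controller defers to the human target (lambda -> 0),
    inside the critical distance it overrides the human target (lambda -> 2).
    The distance thresholds are assumptions, not values from the patent."""
    d = np.linalg.norm(np.asarray(x) - np.asarray(obstacle))
    if d >= d_safe:
        return 0.0
    if d <= d_crit:
        return 2.0
    # linear interpolation between the two regimes
    return 2.0 * (d_safe - d) / (d_safe - d_crit)

print(schedule_lambda([0.40, 0.30], obstacle=[0.10, 0.10]))  # far from obstacle -> 0.0
print(schedule_lambda([0.17, 0.13], obstacle=[0.10, 0.10]))  # intermediate -> about 1.5
print(schedule_lambda([0.12, 0.12], obstacle=[0.10, 0.10]))  # close to obstacle -> 2.0
```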
Further, to verify the stability of the human-robot interaction system after \hat{\tau}_h is introduced, the human target can be estimated from the second equation of equation (13) as \hat{\tau}_h = \hat{\theta}^T\Phi(t).
Substituting the corrected controller (6) into the combined dynamics equation (5) yields equation (18), in which a term represents the error between the estimated value and the actual value of the human target; defining this error and substituting the force (7) of the human hand on the object into the above equation gives equation (19).
The influence of \tau_r and \tau_h on the dynamic system can thus be analysed by considering the steady-state position x_{ss} obtained from equation (19) at equilibrium.
Equation (19) is simplified for the stability analysis by introducing the definitions (20) and (21), from which equation (22) is deduced; this shows that the position error x - x_{ss} will vanish provided that the estimation error of the human force converges to zero.
According to the dynamics of the upper limb exoskeleton robot and human interaction system, the state-space form (15) together with the system observer (13) gives equation (23).
By defining \xi \equiv [x - x_{ss}, x, \phi^T]^T and combining equations (22) and (23), the combined system (24) is obtained, where \xi is the system state vector defined in the system transient performance analysis; this equation is a combined system that includes the system dynamics and the observer.
The stability of equation (24) can be further studied by solving the following characteristic equation:
[yI - (A - KH)]\,[M y^2 + (C + L_2) y + L_1] = 0 …………(25)
The combined system is stable if the two subsystems corresponding to the two factors of (25) are both stable:
M\ddot{e} + (C + L_2)\dot{e} + L_1 e = 0, \qquad \dot{\tilde{\phi}} = (A - KH)\tilde{\phi} …………(26)
where e denotes the position error and \tilde{\phi} denotes the state estimation error of the observer.
The stability of the two subsystems is checked using Lyapunov theory. The stability of the first system is proved by considering a Lyapunov candidate function V_1 (27); its time derivative (28) is negative semi-definite, which establishes the stability of the first system.
The stability of the second system is then demonstrated by considering a Lyapunov candidate function V_2 (29), in which the matrix P_v is obtained from the Riccati equation in (14), as expressed in (30). Taking the time derivative (31), combining it with the observer equation (13), and substituting back into (31) yields (32), which is negative semi-definite.
it is thus demonstrated that both systems in equation (26) are stable and therefore the upper extremity exoskeleton robot and human interaction system is stable.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. An intention assimilation control method of an upper limb exoskeleton robot based on a surface electromyogram signal is characterized by comprising the following steps:
step 1: establishing a dynamics model of the upper limb exoskeleton robot by the Kane method;
step 2: performing intention recognition from surface electromyogram signals based on the dynamics model;
step 3: performing intention assimilation control by means of a virtual target.
2. The method for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyography according to claim 1, wherein the step 1 comprises:
step 1.1: there is no relative motion between the robot, the person and the object, and the robot and the person together manipulate the object, the object satisfying the dynamic equation:
Figure FDA0003154674440000011
wherein,
Figure FDA0003154674440000012
as the second derivative of the position coordinates of the object with respect to time, f and uhIs the force of the robot and person on the object, MoIs a mass matrix of the object, GoIs the weight of the object;
step 1.2: establishing an upper limb exoskeleton robot dynamics model by using a Kenn method to obtain a joint space dynamics equation when the upper limb exoskeleton robot with n degrees of freedom is in contact with the environment:
Figure FDA0003154674440000013
wherein q is the joint coordinate of the robot, tauqFor control input, JT(q) is the Jacobian matrix, Mq(q) is the robot inertia matrix,
Figure FDA0003154674440000014
is the Coriolis and centrifugal torque, Gq(q) is the moment of gravity;
and converting into a robot operating space to obtain a kinetic equation:
Figure FDA0003154674440000015
wherein u represents the control input of the upper limb exoskeleton robot,
Figure FDA0003154674440000016
Figure FDA0003154674440000017
Figure FDA0003154674440000018
Figure FDA0003154674440000019
Mr、Cr、Grrespectively representing an inertia matrix, a Coriolis force and centrifugal force matrix and a gravity matrix of the upper limb exoskeleton robot under a Cartesian space coordinate system, and symbols
Figure FDA00031546744400000110
Representing a pseudo-inverse of the matrix;
step 1.3: simultaneous equations (1) and (3) are obtained to obtain the combined kinetic equation of the object and the robot:
Figure FDA00031546744400000111
M≡Mo+Mr,G≡Go+Gr,C≡Cr…………(5)
m, C, G respectively representing inertia matrix, Coriolis force and centrifugal force matrix and gravity matrix of the upper limb exoskeleton robot and human interaction system in a Cartesian space coordinate system;
step 1.4: the position, the speed and the human force of the tail end of the upper limb exoskeleton robot are measured, a robot controller with gravity compensation and linear feedback is adopted, and the expression is as follows:
Figure FDA0003154674440000021
where τ is the target position of the robot, L1And L2Is a gain corresponding to position error and velocity;
the force of a person acting on an object is modeled as:
Figure FDA0003154674440000022
wherein L ish,1And Lh,2Control gain, τ, for humanshAnd (5) bringing the formulas (6) and (7) into the formula (5) to obtain the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system for the target position of the human:
Figure FDA0003154674440000023
3. the method for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyography according to claim 1, wherein the step 2 comprises:
step 2.1: collecting electromyographic signals of wrists, forearms and elbows of a person through an electromyograph;
step 2.2: filtering, data segmentation and feature extraction are carried out on the collected electromyographic signals, and feature extraction is carried out according to waveform types, so that the extracted features correspond to different intention categories;
step 2.3: training and predicting by using a multi-criterion linear programming in a database and combining a classification method of an online random forest;
step 2.4: during model prediction, the prediction category of each base classifier is compared with the corresponding confidence coefficient and a preset threshold value to determine whether the base classifier votes, finally, a Boost algorithm is used for collecting voting results of all the base classifiers and carrying out weighted summation to find the prediction category with the largest votes, and when the votes are larger than the mean value, the activity intention is output.
4. The method for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyography according to claim 2, wherein the step 3 comprises:
step 3.1: by a virtual target of a person
Figure FDA0003154674440000024
Evaluating the influence of human on the dynamics of the upper limb exoskeleton robot and the human interaction system, wherein the formula is as follows:
Figure FDA0003154674440000025
wherein the human controls the gain
Figure FDA0003154674440000026
And
Figure FDA0003154674440000027
using measured average values, or the same values as the robot controller gains, i.e.
Figure FDA0003154674440000028
The superscript symbol v represents the estimated value;
step 3.2: using an intention recognition method based on surface electromyography signals, or by internal model parameterization
Figure FDA0003154674440000029
And estimating, wherein the expression is as follows:
Figure FDA0003154674440000031
wherein the superscript symbol T represents transposition, and θ is the virtual target position of the person being calculated
Figure FDA0003154674440000032
The vector of parameters of (a) is,
Figure FDA0003154674440000033
Figure FDA0003154674440000034
t represents time, m is a predetermined parameter, and therefore
Figure FDA0003154674440000035
Is a quantity that is determined by the internal model parameters and varies with time;
state vector using upper limb exoskeleton robot and human interaction system
Figure FDA0003154674440000036
The extended model is obtained after substituting the formula (5):
Figure FDA0003154674440000037
where φ represents: state vector of upper limb exoskeleton robot and human interaction system, v ∈ N (0, E [ v, v ]T]) Is the system noise, i.e., mean 0, variance E [ v, vT]Gaussian noise of (2);
step 3.3: measuring the position and the speed of the end point of the robot and the interaction force with a human through a sensor to obtain a measurement vector of the upper limb exoskeleton robot and the human interaction system:
Figure FDA0003154674440000038
wherein, mu is N (0, E [ mu, mu ]T]) Is the environmental measurement noise, i.e., the mean is 0 and the variance is E [ mu, mu ]T]Gaussian noise of (2);
step 3.4: calculating an extended state estimate of the robot using the system observer:
Figure FDA0003154674440000039
Figure FDA00031546744400000310
Figure FDA00031546744400000311
wherein Λ represents an estimated value; z represents a measurement vector of the upper limb exoskeleton robot and the human interaction system;
linear quadratic estimation gain K-PHTR-1P is a positive definite matrix obtained by solving the ricatt differential equation:
Figure FDA00031546744400000312
wherein the noise covariance matrix Q ≡ E [ v, v ≡ E ≡ VT],R≡E[μ,μT]And a denotes a system matrix, and is substituted into equation (11) and expressed as follows:
Figure FDA0003154674440000041
Figure FDA0003154674440000042
5. the method for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyography signals as claimed in claim 4, wherein interaction between human and robot is performed through relations τ and τ between human and robothTo determine:
when τ is τhRepresenting assistance of a robot using a human virtual target, the robot follows its original target τr
When τ is 2 τrhWhile, the robot imposes its own target by eliminating the human target from the upper extremity exoskeleton robot and human interaction system;
interactive behavior assimilation from the estimated target position of the human target design robot using the following formula:
Figure FDA0003154674440000043
τrrepresenting an original target position of the upper limb exoskeleton robot; and lambda represents a hyper-parameter for adjusting the original target position and the human target position of the upper limb exoskeleton robot, and is dynamically adjusted according to the tail end position x.
6. An upper limb exoskeleton robot intention assimilation control system based on surface electromyogram signals, characterized by comprising:
module M1: establishing a dynamics model of the upper limb exoskeleton robot by the Kane method;
module M2: performing intention recognition from surface electromyogram signals based on the dynamics model;
module M3: performing intention assimilation control by means of a virtual target.
7. The system for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyogram signals of claim 6, wherein the module M1 comprises:
module M1.1: there is no relative motion between the robot, the person and the object, and the robot and the person manipulate the object together; the object satisfies the dynamic equation:
M_o \ddot{x} = f + u_h - G_o …………(1)
where \ddot{x} is the second derivative of the position coordinates of the object with respect to time, f and u_h are the forces exerted by the robot and the person on the object, M_o is the mass matrix of the object, and G_o is the weight of the object;
module M1.2: the upper limb exoskeleton robot dynamics model is established by the Kane method, giving the joint-space dynamics equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:
M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q) f …………(2)
where q is the joint coordinate vector of the robot, \tau_q is the control input, J^T(q) is the transpose of the Jacobian matrix, M_q(q) is the robot inertia matrix, C_q(q,\dot{q}) is the Coriolis and centrifugal torque, and G_q(q) is the gravity torque;
converting into the robot operational space gives the dynamic equation:
M_r \ddot{x} + C_r \dot{x} + G_r = u - f …………(3)
where u \equiv J^{\dagger T}(q)\tau_q denotes the control input of the upper limb exoskeleton robot, M_r \equiv J^{\dagger T} M_q J^{\dagger}, C_r \equiv J^{\dagger T}(C_q - M_q J^{\dagger}\dot{J})J^{\dagger} and G_r \equiv J^{\dagger T} G_q respectively denote the inertia matrix, the Coriolis and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system, and the symbol \dagger denotes the matrix pseudo-inverse;
module M1.3: combining equations (1) and (3) gives the combined dynamic equation of the object and the robot:
M\ddot{x} + C\dot{x} + G = u + u_h …………(4)
M \equiv M_o + M_r, G \equiv G_o + G_r, C \equiv C_r …………(5)
where M, C and G respectively denote the inertia matrix, the Coriolis and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;
module M1.4: the position and velocity of the end of the upper limb exoskeleton robot and the human force are measured, and a robot controller with gravity compensation and linear feedback is adopted, expressed as:
u = G - L_1(x - \tau) - L_2\dot{x} …………(6)
where \tau is the target position of the robot, and L_1 and L_2 are the gains corresponding to the position error and the velocity;
the force of the person acting on the object is modelled as:
u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x} …………(7)
where L_{h,1} and L_{h,2} are the control gains of the human and \tau_h is the target position of the human; substituting equations (6) and (7) into equation (5) gives the dynamic equation of the upper limb exoskeleton robot and human interaction closed-loop system:
M\ddot{x} + (C + L_2 + L_{h,2})\dot{x} + (L_1 + L_{h,1})x = L_1\tau + L_{h,1}\tau_h …………(8)
8. the system for controlling the ideographic assimilation of an upper limb exoskeleton robot based on surface electromyography of claim 6, wherein the module M2 comprises:
module M2.1: collecting electromyographic signals of wrists, forearms and elbows of a person through an electromyograph;
module M2.2: filtering, data segmentation and feature extraction are carried out on the collected electromyographic signals, and feature extraction is carried out according to waveform types, so that the extracted features correspond to different intention categories;
module M2.3: training and predicting by using a multi-criterion linear programming in a database and combining a classification method of an online random forest;
module M2.4: during model prediction, the prediction category of each base classifier is compared with the corresponding confidence coefficient and a preset threshold value to determine whether the base classifier votes, finally, a Boost algorithm is used for collecting voting results of all the base classifiers and carrying out weighted summation to find the prediction category with the largest votes, and when the votes are larger than the mean value, the activity intention is output.
9. The system for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyogram signals of claim 7, wherein the module M3 comprises:
module M3.1: a virtual target of the person, \hat{\tau}_h, is used to evaluate the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system, according to the formula:
u_h = -\hat{L}_{h,1}(x - \hat{\tau}_h) - \hat{L}_{h,2}\dot{x} …………(9)
where the human control gains \hat{L}_{h,1} and \hat{L}_{h,2} use measured average values, or the same values as the robot controller gains, i.e. \hat{L}_{h,1} = L_1 and \hat{L}_{h,2} = L_2; the superscript symbol ^ denotes an estimated value;
module M3.2: using methods of intention recognition based on surface electromyographic signals, or byInternal model parameterization pair
Figure FDA0003154674440000066
And estimating, wherein the expression is as follows:
Figure FDA0003154674440000067
wherein the superscript symbol T represents transposition, and θ is the virtual target position of the person being calculated
Figure FDA0003154674440000068
The vector of parameters of (a) is,
Figure FDA0003154674440000069
Figure FDA00031546744400000610
t represents time, m is a predetermined parameter, and therefore
Figure FDA00031546744400000611
Is a quantity that is determined by the internal model parameters and varies with time;
state vector using upper limb exoskeleton robot and human interaction system
Figure FDA00031546744400000612
The extended model is obtained after substituting the formula (5):
Figure FDA00031546744400000613
where φ represents: state vector of upper limb exoskeleton robot and human interaction system, v ∈ N (0, E [ v, v ]T]) Is the system noise, i.e., mean 0, variance E [ v, vT]Gaussian noise of (2);
module M3.3: measuring the position and the speed of the end point of the robot and the interaction force with a human through a sensor to obtain a measurement vector of the upper limb exoskeleton robot and the human interaction system:
Figure FDA00031546744400000614
wherein, mu is N (0, E [ mu, mu ]T]) Is the environmental measurement noise, i.e., the mean is 0 and the variance is E [ mu, mu ]T]Gaussian noise of (2);
module M3.4: calculating an extended state estimate of the robot using the system observer:
Figure FDA0003154674440000071
Figure FDA0003154674440000072
Figure FDA0003154674440000073
wherein Λ represents an estimated value; z represents a measurement vector of the upper limb exoskeleton robot and the human interaction system;
linear quadratic estimation gain K-PHTR-1P is a positive definite matrix obtained by solving the ricatt differential equation:
Figure FDA0003154674440000074
wherein the noise covariance matrix Q ≡ E [ v, v ≡ E ≡ VT],R≡E[μ,μT]And a denotes a system matrix, and is substituted into equation (11) and expressed as follows:
Figure FDA0003154674440000075
Figure FDA0003154674440000076
10. the system for controlling assimilation of upper limb exoskeleton robot based on surface electromyography signals of claim 9, wherein interaction between human and robot is performed through the relations τ and τ between human and robothTo determine:
when τ is τhRepresenting assistance of a robot using a human virtual target, the robot follows its original target τr
When τ is 2 τrhWhile, the robot imposes its own target by eliminating the human target from the upper extremity exoskeleton robot and human interaction system;
interactive behavior assimilation from the estimated target position of the human target design robot using the following formula:
Figure FDA0003154674440000077
τrrepresenting an original target position of the upper limb exoskeleton robot; and lambda represents a hyper-parameter for adjusting the original target position and the human target position of the upper limb exoskeleton robot, and is dynamically adjusted according to the tail end position x.
CN202110775590.5A 2021-07-08 2021-07-08 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal Active CN113478462B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110775590.5A CN113478462B (en) 2021-07-08 2021-07-08 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110775590.5A CN113478462B (en) 2021-07-08 2021-07-08 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal

Publications (2)

Publication Number Publication Date
CN113478462A true CN113478462A (en) 2021-10-08
CN113478462B CN113478462B (en) 2022-12-30

Family

ID=77938116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110775590.5A Active CN113478462B (en) 2021-07-08 2021-07-08 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal

Country Status (1)

Country Link
CN (1) CN113478462B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113995629A (en) * 2021-11-03 2022-02-01 中国科学技术大学先进技术研究院 Upper limb double-arm rehabilitation robot admittance control method and system based on mirror force field
CN114474051A (en) * 2021-12-30 2022-05-13 西北工业大学 Individualized gain teleoperation control method based on physiological signals of operator

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2497610A1 (en) * 2011-03-09 2012-09-12 Syco Di Hedvig Haberl & C. S.A.S. System for controlling a robotic device during walking, in particular for rehabilitation purposes, and corresponding robotic device
WO2018000854A1 (en) * 2016-06-29 2018-01-04 深圳光启合众科技有限公司 Human upper limb motion intention recognition and assistance method and device
CN111631923A (en) * 2020-06-02 2020-09-08 中国科学技术大学先进技术研究院 Neural network control system of exoskeleton robot based on intention recognition
CN112107397A (en) * 2020-10-19 2020-12-22 中国科学技术大学 Myoelectric signal driven lower limb artificial limb continuous control system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2497610A1 (en) * 2011-03-09 2012-09-12 Syco Di Hedvig Haberl & C. S.A.S. System for controlling a robotic device during walking, in particular for rehabilitation purposes, and corresponding robotic device
WO2018000854A1 (en) * 2016-06-29 2018-01-04 深圳光启合众科技有限公司 Human upper limb motion intention recognition and assistance method and device
CN111631923A (en) * 2020-06-02 2020-09-08 中国科学技术大学先进技术研究院 Neural network control system of exoskeleton robot based on intention recognition
CN112107397A (en) * 2020-10-19 2020-12-22 中国科学技术大学 Myoelectric signal driven lower limb artificial limb continuous control system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Xiang et al.: "Control Method and Implementation of a Robotic Arm Based on EEG and EMG Signals", Computer Measurement & Control *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113995629A (en) * 2021-11-03 2022-02-01 中国科学技术大学先进技术研究院 Upper limb double-arm rehabilitation robot admittance control method and system based on mirror force field
CN113995629B (en) * 2021-11-03 2023-07-11 中国科学技术大学先进技术研究院 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system
CN114474051A (en) * 2021-12-30 2022-05-13 西北工业大学 Individualized gain teleoperation control method based on physiological signals of operator

Also Published As

Publication number Publication date
CN113478462B (en) 2022-12-30

Similar Documents

Publication Publication Date Title
Yang et al. Haptics electromyography perception and learning enhanced intelligence for teleoperated robot
CN111281743B (en) Self-adaptive flexible control method for exoskeleton robot for upper limb rehabilitation
CN113478462B (en) Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal
CN109702740B (en) Robot compliance control method, device, equipment and storage medium
Neto et al. Real-time and continuous hand gesture spotting: An approach based on artificial neural networks
CN108115681A (en) Learning by imitation method, apparatus, robot and the storage medium of robot
Adachi et al. Imitation learning for object manipulation based on position/force information using bilateral control
Chen et al. Neural learning enhanced variable admittance control for human–robot collaboration
CN111522243A (en) Robust iterative learning control strategy for five-degree-of-freedom upper limb exoskeleton system
CN111673733B (en) Intelligent self-adaptive compliance control method of robot in unknown environment
Skoglund et al. Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives
Zenha et al. Incremental adaptation of a robot body schema based on touch events
Sidiropoulos et al. A human inspired handover policy using gaussian mixture models and haptic cues
Li et al. Observer-based multivariable fixed-time formation control of mobile robots
Ma et al. Active manipulation of elastic rods using optimization-based shape perception and sensorimotor model approximation
JPH0724766A (en) Robot controller
Zhao et al. Robotic peg-in-hole assembly based on reversible dynamic movement primitives and trajectory optimization
Wei et al. Research on robotic arm movement grasping system based on MYO
CN114952791A (en) Control method and device for musculoskeletal robot
CN114594757B (en) Visual path planning method of cooperative robot
McCarragher et al. Hybrid dynamic modeling and control of constrained manipulation systems
Dimeas et al. Robot collision detection based on fuzzy identification and time series modelling
Wei et al. Decoupling Observer for Contact Force Estimation of Robot Manipulators Based on Enhanced Gaussian Process Model
Veiga et al. Tactile based forward modeling for contact location control
Guanshan Neural network applications in sensor fusion for a mobile robot motion

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant