CN113478462A - Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal - Google Patents


Info

Publication number
CN113478462A
CN113478462A (application CN202110775590.5A)
Authority
CN
China
Prior art keywords
robot
upper limb
human
limb exoskeleton
exoskeleton robot
Prior art date
Legal status
Granted
Application number
CN202110775590.5A
Other languages
Chinese (zh)
Other versions
CN113478462B (en)
Inventor
李智军
刘玉柱
李国欣
Current Assignee
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN202110775590.5A priority Critical patent/CN113478462B/en
Publication of CN113478462A publication Critical patent/CN113478462A/en
Application granted granted Critical
Publication of CN113478462B publication Critical patent/CN113478462B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • B25J9/0006 Programme-controlled manipulators: exoskeletons, i.e. resembling a human figure
    • B25J13/087 Controls for manipulators by means of sensing devices for sensing other physical parameters, e.g. electrical or chemical properties
    • B25J9/1605 Programme controls characterised by the control system: simulation of manipulator lay-out, design, modelling of manipulator
    • B25J9/161 Programme controls characterised by the control system: hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B25J9/163 Programme controls characterised by the control loop: learning, adaptive, model based, rule based expert control

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Prostheses (AREA)
  • Manipulator (AREA)

Abstract

The present invention provides an intention assimilation control method and system for an upper limb exoskeleton robot based on surface electromyographic (EMG) signals, comprising: step 1: establishing a dynamic model of the upper limb exoskeleton robot using the Kane method; step 2: performing intention recognition through surface EMG signals based on the dynamic model; step 3: performing intention assimilation control through a virtual target. The proposed intention assimilation control method covers continuous interaction behaviors from cooperation to competition, requires less force guidance, and achieves safer obstacle avoidance and a wider range of interaction behaviors.

Description

Intention assimilation control method and system for upper limb exoskeleton robot based on surface electromyographic signals

Technical Field

The invention relates to the technical fields of human-computer interaction, artificial intelligence and interactive control, and in particular to a method and a system for intention assimilation control of an upper limb exoskeleton robot based on surface electromyographic (EMG) signals.

Background

Robot technology has developed rapidly in recent years, especially human-robot interaction robots. The human-machine interface is the most important link in human-robot interaction research, and the quality of its signals directly affects control performance and experimental results. Among human-machine interfaces that can measure human force and motion-intention signals, surface EMG offers clear advantages in both accuracy and latency, and yields relatively precise estimates of human motion and force.

In terms of control strategies, the diversification of interactive robot control strategies is an important factor in their promotion and application. The basic strategy is PID control, which is simple and convenient to apply but can only track a fixed trajectory and cannot incorporate human intention. To reflect human intention, surface EMG signals have also been introduced into robot control strategies, with artificial intelligence algorithms linking the EMG signals to human joints, achieving a certain level of control performance. Starting from the concept of homotopy switching between master and slave roles, human-robot interaction behaviors can be divided into assistance, cooperation, collaboration, competition, and so on. Intention assimilation control covers continuous interaction behaviors from cooperation to competition, with less force guidance, safer obstacle avoidance and a wider range of interaction behaviors.

Patent document CN108283569A (application number: CN201711449077.7) discloses an exoskeleton robot control system and control method, aiming to solve the problems that existing rehabilitation exoskeleton robots have poor versatility, cannot correctly judge human motion intention, and cannot realize human-machine collaboration. The exoskeleton robot control system includes an attitude sensor, an angle sensor, a pressure sensor, a surface EMG sensor, a processor, exoskeleton wearable components and a human-computer interaction module.

Summary of the Invention

In view of the defects in the prior art, the purpose of the present invention is to provide a method and system for intention assimilation control of an upper limb exoskeleton robot based on surface EMG signals.

The intention assimilation control method for an upper limb exoskeleton robot based on surface EMG signals provided by the present invention includes:

Step 1: establish a dynamic model of the upper limb exoskeleton robot using the Kane method;

Step 2: based on the dynamic model, perform intention recognition through surface EMG signals;

Step 3: perform intention assimilation control through a virtual target.

Preferably, step 1 includes:

Step 1.1: there is no relative motion between the robot, the human and the object, and the robot and the human manipulate the object together; the object satisfies the dynamic equation:

$M_o\ddot{x} + G_o = f + u_h$ …………(1)

where $\ddot{x}$ is the second derivative of the object's position coordinates with respect to time, $f$ and $u_h$ are the forces exerted on the object by the robot and the human, $M_o$ is the mass matrix of the object, and $G_o$ is the gravity of the object;

Step 1.2: establish the upper limb exoskeleton robot dynamic model using the Kane method, and obtain the joint space dynamic equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:

$M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q)f$ …………(2)

where $q$ denotes the joint coordinates of the robot, $\tau_q$ is the control input, $J^T(q)$ is the Jacobian matrix, $M_q(q)$ is the robot inertia matrix, $C_q(q,\dot{q})\dot{q}$ is the Coriolis and centrifugal torque, and $G_q(q)$ is the gravity torque;

Converting into the robot operational space yields the dynamic equation:

$M_r\ddot{x} + C_r\dot{x} + G_r = u - f$ …………(3)

where $u = J^{\dagger T}(q)\tau_q$ denotes the control input of the upper limb exoskeleton robot, and

$M_r = J^{\dagger T}(q)M_q(q)J^{\dagger}(q)$

$C_r = J^{\dagger T}(q)\big(C_q(q,\dot{q}) - M_q(q)J^{\dagger}(q)\dot{J}(q)\big)J^{\dagger}(q)$

$G_r = J^{\dagger T}(q)G_q(q)$

$M_r$, $C_r$, $G_r$ denote, respectively, the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system, and the symbol $\dagger$ denotes the pseudo-inverse of a matrix;

Step 1.3: combining equations (1) and (3) yields the combined dynamic equation of the object and the robot:

$M\ddot{x} + C\dot{x} + G = u + u_h$ …………(4)

$M \equiv M_o + M_r,\quad G \equiv G_o + G_r,\quad C \equiv C_r$ …………(5)

where $M$, $C$, $G$ denote, respectively, the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;

Step 1.4: measure the position and velocity of the end of the upper limb exoskeleton robot and the human force, and adopt a robot controller with gravity compensation and linear feedback:

$u = G - L_1(x - \tau) - L_2\dot{x}$ …………(6)

where $\tau$ is the target position of the robot, and $L_1$ and $L_2$ are the gains corresponding to the position error and the velocity;

the force exerted by the human on the object is modeled as:

$u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x}$ …………(7)

where $L_{h,1}$ and $L_{h,2}$ are the control gains of the human and $\tau_h$ is the target position of the human; substituting equations (6) and (7) into equation (4) yields the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system:

$M\ddot{x} + C\dot{x} + (L_2 + L_{h,2})\dot{x} + (L_1 + L_{h,1})x = L_1\tau + L_{h,1}\tau_h$ …………(8)

Preferably, step 2 includes:

Step 2.1: collect electromyographic signals from the human wrist, forearm and elbow with an electromyograph;

Step 2.2: filter the collected electromyographic signals, perform data segmentation and feature extraction, and extract features according to the waveform type, so that the extracted features correspond to different intention categories;

Step 2.3: train and predict using multi-criteria linear programming on the database combined with an online random forest classification method;

Step 2.4: during model prediction, compare each base classifier's predicted category and corresponding confidence against a preset threshold to decide whether that base classifier votes; finally, use the Boost algorithm to collect the voting results of all base classifiers and perform a weighted summation, find the predicted category with the most votes, and output the activity intention when the vote count exceeds the mean.

Preferably, step 3 includes:

Step 3.1: evaluate the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system through the human virtual target $\tau_h^v$, with the formula:

$u_h = -L_{h,1}^v(x - \tau_h^v) - L_{h,2}^v\dot{x}$ …………(9)

where the human control gains $L_{h,1}^v$ and $L_{h,2}^v$ use measured average values, or the same values as the robot controller gains, i.e., $L_{h,1}^v = L_1$ and $L_{h,2}^v = L_2$; the superscript $v$ denotes the estimated value;

Step 3.2: estimate $\tau_h^v$ using the intention recognition method based on surface EMG signals, or parameterize it through an internal model:

$\tau_h^v(t) = \theta^T\beta(t),\quad \beta(t) \equiv [1,\ t,\ \dots,\ t^m]^T$ …………(10)

where the superscript $T$ denotes transposition, $\theta$ is the parameter vector for computing the human virtual target position $\tau_h^v$, $t$ denotes time, and $m$ is a preset parameter; $\tau_h^v$ is therefore a quantity determined by the internal model parameters and varying with time;

Using the state vector $\phi \equiv [x^T,\ \dot{x}^T,\ \theta^T]^T$ of the upper limb exoskeleton robot and human interaction system and substituting it into equation (4) yields the extended model:

$\dot{\phi} = \begin{bmatrix} \dot{x} \\ M^{-1}(u + u_h - C\dot{x} - G) \\ 0 \end{bmatrix} + v$ …………(11)

where $\phi$ denotes the state vector of the upper limb exoskeleton robot and human interaction system, and $v \in N(0, E[vv^T])$ is the system noise, i.e., Gaussian noise with mean 0 and variance $E[vv^T]$;

Step 3.3: measure the robot end-point position and velocity and the interaction force with the human through sensors, obtaining the measurement vector of the upper limb exoskeleton robot and human interaction system:

$z = [x^T,\ \dot{x}^T,\ u_h^T]^T + \mu$ …………(12)

where $\mu \in N(0, E[\mu\mu^T])$ is the environmental measurement noise, i.e., Gaussian noise with mean 0 and variance $E[\mu\mu^T]$;

Step 3.4: compute the extended state estimate of the robot using the system observer:

$\dot{\hat{\phi}} = A\hat{\phi} + Bu + K(z - \hat{z})$

$\hat{\tau}_h^v = \hat{\theta}^T\beta(t)$

$\hat{z} = H\hat{\phi}$ …………(13)

where $\hat{\ }$ denotes the estimated value and $z$ denotes the measurement vector of the upper limb exoskeleton robot and human interaction system;

The linear quadratic estimation gain is $K = PH^TR^{-1}$, where $P$ is a positive definite matrix obtained by solving the Riccati differential equation:

$\dot{P} = AP + PA^T - PH^TR^{-1}HP + Q$ …………(14)

where the noise covariance matrices are $Q \equiv E[vv^T]$ and $R \equiv E[\mu\mu^T]$; with $A$ denoting the system matrix, equation (11) is expressed in the following form:

$\dot{\phi} = A\phi + Bu + v$

$z = H\phi + \mu$ …………(15)

Preferably, the interaction between the human and the robot is determined through the relationship between the targets $\tau$ and $\tau_h$:

when $\tau = \tau_h$, the robot provides assistance using the human virtual target; when $\tau = \tau_r$, the robot follows its original target $\tau_r$;

when $\tau = 2\tau_r - \tau_h$, the robot imposes its own target by eliminating the human target from the upper limb exoskeleton robot and human interaction system;

the target position of the robot is designed from the estimated human target using the following equation for interactive behavior assimilation:

$\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h^v$ …………(16)

where $\tau_r$ denotes the original target position of the upper limb exoskeleton robot, and $\lambda$ denotes a hyperparameter adjusting the original target position of the upper limb exoskeleton robot against the human target position, dynamically adjusted according to the end position $x$.

The intention assimilation control system for an upper limb exoskeleton robot based on surface EMG signals provided by the present invention includes:

Module M1: establish a dynamic model of the upper limb exoskeleton robot using the Kane method;

Module M2: based on the dynamic model, perform intention recognition through surface EMG signals;

Module M3: perform intention assimilation control through a virtual target.

Preferably, module M1 includes:

Module M1.1: there is no relative motion between the robot, the human and the object, and the robot and the human manipulate the object together; the object satisfies the dynamic equation:

$M_o\ddot{x} + G_o = f + u_h$ …………(1)

where $\ddot{x}$ is the second derivative of the object's position coordinates with respect to time, $f$ and $u_h$ are the forces exerted on the object by the robot and the human, $M_o$ is the mass matrix of the object, and $G_o$ is the gravity of the object;

Module M1.2: establish the upper limb exoskeleton robot dynamic model using the Kane method, and obtain the joint space dynamic equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:

$M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q)f$ …………(2)

where $q$ denotes the joint coordinates of the robot, $\tau_q$ is the control input, $J^T(q)$ is the Jacobian matrix, $M_q(q)$ is the robot inertia matrix, $C_q(q,\dot{q})\dot{q}$ is the Coriolis and centrifugal torque, and $G_q(q)$ is the gravity torque;

Converting into the robot operational space yields the dynamic equation:

$M_r\ddot{x} + C_r\dot{x} + G_r = u - f$ …………(3)

where $u = J^{\dagger T}(q)\tau_q$ denotes the control input of the upper limb exoskeleton robot, and

$M_r = J^{\dagger T}(q)M_q(q)J^{\dagger}(q)$

$C_r = J^{\dagger T}(q)\big(C_q(q,\dot{q}) - M_q(q)J^{\dagger}(q)\dot{J}(q)\big)J^{\dagger}(q)$

$G_r = J^{\dagger T}(q)G_q(q)$

$M_r$, $C_r$, $G_r$ denote, respectively, the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system, and the symbol $\dagger$ denotes the pseudo-inverse of a matrix;

Module M1.3: combining equations (1) and (3) yields the combined dynamic equation of the object and the robot:

$M\ddot{x} + C\dot{x} + G = u + u_h$ …………(4)

$M \equiv M_o + M_r,\quad G \equiv G_o + G_r,\quad C \equiv C_r$ …………(5)

where $M$, $C$, $G$ denote, respectively, the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;

Module M1.4: measure the position and velocity of the end of the upper limb exoskeleton robot and the human force, and adopt a robot controller with gravity compensation and linear feedback:

$u = G - L_1(x - \tau) - L_2\dot{x}$ …………(6)

where $\tau$ is the target position of the robot, and $L_1$ and $L_2$ are the gains corresponding to the position error and the velocity;

the force exerted by the human on the object is modeled as:

$u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x}$ …………(7)

where $L_{h,1}$ and $L_{h,2}$ are the control gains of the human and $\tau_h$ is the target position of the human; substituting equations (6) and (7) into equation (4) yields the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system:

$M\ddot{x} + C\dot{x} + (L_2 + L_{h,2})\dot{x} + (L_1 + L_{h,1})x = L_1\tau + L_{h,1}\tau_h$ …………(8)

Preferably, module M2 includes:

Module M2.1: collect electromyographic signals from the human wrist, forearm and elbow with an electromyograph;

Module M2.2: filter the collected electromyographic signals, perform data segmentation and feature extraction, and extract features according to the waveform type, so that the extracted features correspond to different intention categories;

Module M2.3: train and predict using multi-criteria linear programming on the database combined with an online random forest classification method;

Module M2.4: during model prediction, compare each base classifier's predicted category and corresponding confidence against a preset threshold to decide whether that base classifier votes; finally, use the Boost algorithm to collect the voting results of all base classifiers and perform a weighted summation, find the predicted category with the most votes, and output the activity intention when the vote count exceeds the mean.

Preferably, module M3 includes:

Module M3.1: evaluate the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system through the human virtual target $\tau_h^v$, with the formula:

$u_h = -L_{h,1}^v(x - \tau_h^v) - L_{h,2}^v\dot{x}$ …………(9)

where the human control gains $L_{h,1}^v$ and $L_{h,2}^v$ use measured average values, or the same values as the robot controller gains, i.e., $L_{h,1}^v = L_1$ and $L_{h,2}^v = L_2$; the superscript $v$ denotes the estimated value;

Module M3.2: estimate $\tau_h^v$ using the intention recognition method based on surface EMG signals, or parameterize it through an internal model:

$\tau_h^v(t) = \theta^T\beta(t),\quad \beta(t) \equiv [1,\ t,\ \dots,\ t^m]^T$ …………(10)

where the superscript $T$ denotes transposition, $\theta$ is the parameter vector for computing the human virtual target position $\tau_h^v$, $t$ denotes time, and $m$ is a preset parameter; $\tau_h^v$ is therefore a quantity determined by the internal model parameters and varying with time;

Using the state vector $\phi \equiv [x^T,\ \dot{x}^T,\ \theta^T]^T$ of the upper limb exoskeleton robot and human interaction system and substituting it into equation (4) yields the extended model:

$\dot{\phi} = \begin{bmatrix} \dot{x} \\ M^{-1}(u + u_h - C\dot{x} - G) \\ 0 \end{bmatrix} + v$ …………(11)

where $\phi$ denotes the state vector of the upper limb exoskeleton robot and human interaction system, and $v \in N(0, E[vv^T])$ is the system noise, i.e., Gaussian noise with mean 0 and variance $E[vv^T]$;

Module M3.3: measure the robot end-point position and velocity and the interaction force with the human through sensors, obtaining the measurement vector of the upper limb exoskeleton robot and human interaction system:

$z = [x^T,\ \dot{x}^T,\ u_h^T]^T + \mu$ …………(12)

where $\mu \in N(0, E[\mu\mu^T])$ is the environmental measurement noise, i.e., Gaussian noise with mean 0 and variance $E[\mu\mu^T]$;

Module M3.4: compute the extended state estimate of the robot using the system observer:

$\dot{\hat{\phi}} = A\hat{\phi} + Bu + K(z - \hat{z})$

$\hat{\tau}_h^v = \hat{\theta}^T\beta(t)$

$\hat{z} = H\hat{\phi}$ …………(13)

where $\hat{\ }$ denotes the estimated value and $z$ denotes the measurement vector of the upper limb exoskeleton robot and human interaction system;

The linear quadratic estimation gain is $K = PH^TR^{-1}$, where $P$ is a positive definite matrix obtained by solving the Riccati differential equation:

$\dot{P} = AP + PA^T - PH^TR^{-1}HP + Q$ …………(14)

where the noise covariance matrices are $Q \equiv E[vv^T]$ and $R \equiv E[\mu\mu^T]$; with $A$ denoting the system matrix, equation (11) is expressed in the following form:

$\dot{\phi} = A\phi + Bu + v$

$z = H\phi + \mu$ …………(15)

Preferably, the interaction between the human and the robot is determined through the relationship between the targets $\tau$ and $\tau_h$:

when $\tau = \tau_h$, the robot provides assistance using the human virtual target; when $\tau = \tau_r$, the robot follows its original target $\tau_r$;

when $\tau = 2\tau_r - \tau_h$, the robot imposes its own target by eliminating the human target from the upper limb exoskeleton robot and human interaction system;

the target position of the robot is designed from the estimated human target using the following equation for interactive behavior assimilation:

$\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h^v$ …………(16)

where $\tau_r$ denotes the original target position of the upper limb exoskeleton robot, and $\lambda$ denotes a hyperparameter adjusting the original target position of the upper limb exoskeleton robot against the human target position, dynamically adjusted according to the end position $x$.

Compared with the prior art, the present invention has the following beneficial effects:

(1) The present invention introduces surface EMG signals into the robot control strategy, with advantages in both accuracy and latency;

(2) The present invention proposes an intention assimilation control method covering continuous interaction behaviors from cooperation to competition, with less force guidance, safer obstacle avoidance and a wider range of interaction behaviors;

(3) The present invention is simple and easy to implement, and is a compliant control method with high robustness.

Brief Description of the Drawings

Other features, objects and advantages of the present invention will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following drawings:

Fig. 1 is a schematic diagram of the intention assimilation control method for an upper limb exoskeleton robot based on surface EMG signals of the present invention;

Fig. 2 is a schematic diagram of the obstacle-avoidance and assistance task scenarios of the present invention;

Fig. 3 is a schematic flowchart of the intention recognition method based on surface EMG signals of the present invention;

Fig. 4 is a schematic diagram of the MCLP Boost algorithm of the present invention;

Fig. 5 is a schematic diagram of how adjusting the parameter λ changes the human-robot interaction strategy in the present invention.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the present invention in any form. It should be noted that those skilled in the art can make several changes and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention.

Embodiment:

Fig. 1 is a schematic diagram of an intention assimilation control method for an upper limb exoskeleton robot based on surface EMG signals according to the present invention, comprising the upper limb exoskeleton robot dynamic model established with the Kane method, the intention recognition method based on surface EMG signals, and the intention assimilation control method. Different task scenarios are shown in Fig. 2; the intention assimilation control method of the present invention can unify different human-robot interaction strategies and perform continuous control.

Further, the specific process of establishing the upper limb exoskeleton robot dynamic model with the Kane method is as follows:

1) Assume there is no relative motion between the robot gripper, the human hand and the object, and that the gripper and the hand jointly manipulate a rigid object treated as a point mass. General object manipulation considers only linear motion, and the object satisfies the dynamic equation:

$M_o\ddot{x} + G_o = f + u_h$ …………(1)

where $x(t)$ is the position coordinate of the object, $f$ and $u_h$ are the forces exerted on the object by the robot and the human, $M_o$ is the mass matrix of the object, and $G_o$ is the gravity of the object.

2) Using the upper limb exoskeleton robot dynamic model established with the Kane method, the joint space dynamic equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment is obtained:

$M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q)f$ …………(2)

where $q$ denotes the joint coordinates of the robot, $\tau_q$ is the control input, $J^T(q)$ is the Jacobian matrix, $M_q(q)$ is the robot inertia matrix, $C_q(q,\dot{q})\dot{q}$ is the Coriolis and centrifugal torque, and $G_q(q)$ is the gravity torque;

Converting into the robot operational space yields the dynamic equation:

$M_r\ddot{x} + C_r\dot{x} + G_r = u - f$ …………(3)

where $u = J^{\dagger T}(q)\tau_q$ denotes the control input of the upper limb exoskeleton robot, and

$M_r = J^{\dagger T}(q)M_q(q)J^{\dagger}(q)$

$C_r = J^{\dagger T}(q)\big(C_q(q,\dot{q}) - M_q(q)J^{\dagger}(q)\dot{J}(q)\big)J^{\dagger}(q)$

$G_r = J^{\dagger T}(q)G_q(q)$

$M_r$, $C_r$, $G_r$ denote, respectively, the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system, and the symbol $\dagger$ denotes the pseudo-inverse of a matrix; a numerical sketch of this transformation is given below.
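As an illustration of this operational-space transformation, the following is a minimal NumPy sketch; the function name and arguments are hypothetical, and the pseudo-inverse mapping follows the standard task-space form assumed in the reconstruction above rather than a verbatim implementation of the patent:

```python
import numpy as np

def task_space_dynamics(Mq, Cq, Gq, J, Jdot):
    """Map joint-space terms M_q, C_q, G_q into Cartesian space via the
    Jacobian pseudo-inverse, yielding M_r, C_r, G_r of equation (3)."""
    J_pinv = np.linalg.pinv(J)       # J^dagger
    JT_pinv = np.linalg.pinv(J.T)    # J^{dagger T}
    Mr = JT_pinv @ Mq @ J_pinv
    Cr = JT_pinv @ (Cq - Mq @ J_pinv @ Jdot) @ J_pinv
    Gr = JT_pinv @ Gq
    return Mr, Cr, Gr
```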

3) Combining equations (1) and (3) yields the combined dynamic equation of the object and the robot:

$M\ddot{x} + C\dot{x} + G = u + u_h$ …………(4)

$M \equiv M_o + M_r,\quad G \equiv G_o + G_r,\quad C \equiv C_r$ …………(5)

where $M$, $C$, $G$ denote, respectively, the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;

4) Consider that the robot has information about its local environment, and that the position and velocity of the end of the upper limb exoskeleton robot interaction system and the human force are measured, all affected by measurement noise. A robot controller with gravity compensation and linear feedback is adopted:

$u = G - L_1(x - \tau) - L_2\dot{x}$ …………(6)

where $\tau$ is the target position of the robot, and $L_1$ and $L_2$ are the gains corresponding to the position error and the velocity. A sketch of this control law follows below.
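A minimal sketch of this control law, assuming the reconstructed form of equation (6); the gain values are illustrative only:

```python
import numpy as np

def robot_control(x, x_dot, tau, G, L1, L2):
    """Gravity-compensated linear feedback of equation (6):
    u = G - L1 (x - tau) - L2 x_dot."""
    return G - L1 @ (x - tau) - L2 @ x_dot

# illustrative 2-D end-effector example
L1, L2 = 50.0 * np.eye(2), 10.0 * np.eye(2)
u = robot_control(x=np.array([0.3, 0.1]), x_dot=np.zeros(2),
                  tau=np.array([0.4, 0.2]), G=np.zeros(2), L1=L1, L2=L2)
```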

The force of the human hand acting on the object is modeled as:

$u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x}$ …………(7)

where $L_{h,1}$ and $L_{h,2}$ are the control gains of the human and $\tau_h$ is the target position of the human; substituting equations (6) and (7) into equation (4) yields the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system:

$M\ddot{x} + C\dot{x} + (L_2 + L_{h,2})\dot{x} + (L_1 + L_{h,1})x = L_1\tau + L_{h,1}\tau_h$ …………(8)

A scalar simulation sketch of this closed loop follows below.
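To illustrate the closed-loop behavior, here is a scalar Euler-integration sketch of equation (8); all mass and gain values are illustrative assumptions, and the steady state approaches $(L_1\tau + L_{h,1}\tau_h)/(L_1 + L_{h,1})$:

```python
def simulate_closed_loop(tau, tau_h, T=5.0, dt=1e-3):
    """Scalar Euler integration of equation (8):
    M xdd + (C + L2 + Lh2) xd + (L1 + Lh1) x = L1 tau + Lh1 tau_h."""
    M, C = 1.0, 0.0
    L1, L2, Lh1, Lh2 = 50.0, 10.0, 50.0, 10.0
    x, xd = 0.0, 0.0
    for _ in range(int(T / dt)):
        xdd = (L1 * tau + Lh1 * tau_h
               - (C + L2 + Lh2) * xd - (L1 + Lh1) * x) / M
        x, xd = x + dt * xd, xd + dt * xdd
    return x

print(simulate_closed_loop(0.4, 0.2))  # converges near 0.3
```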

Further, the intention recognition method based on surface EMG signals is shown in Fig. 3; the specific process is:

1) Collect electromyographic signals from the human wrist, forearm and elbow with an electromyograph;

2) Filter the collected electromyographic signals, then perform data segmentation and feature extraction; overlap is not applied when extracting features from long waveform sequences, while an overlap operation can be considered for short waveforms, so that the extracted features correspond to different intention categories, as in the windowing sketch below;
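A minimal preprocessing sketch in Python, assuming a 1 kHz sampling rate, a 20-450 Hz band-pass filter, and common time-domain features (mean absolute value, root mean square, waveform length); the patent does not fix these specific choices:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_features(raw, fs=1000, win=200, step=100):
    """Band-pass filter the raw EMG, segment it into windows
    (step < win gives overlapping windows), and extract features."""
    b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
    sig = filtfilt(b, a, raw)
    feats = []
    for start in range(0, len(sig) - win + 1, step):
        w = sig[start:start + win]
        feats.append([np.mean(np.abs(w)),           # mean absolute value
                      np.sqrt(np.mean(w ** 2)),     # root mean square
                      np.sum(np.abs(np.diff(w)))])  # waveform length
    return np.asarray(feats)
```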

3) Train and predict using MCLPBoost combined with the Online Random Forest classification method; the MCLPBoost algorithm is shown in Fig. 4. The method has good generalization performance and, because it is comparison-based, incurs a smaller time overhead at prediction time than computation-heavy models;

4) During model prediction, each base classifier's predicted category and corresponding confidence (probability) are compared against a preset threshold to decide whether that base classifier votes; finally, the Boost algorithm alone collects the voting results of all base classifiers and performs a weighted summation to find the predicted category with the most votes, and the activity intention is output when the vote count exceeds the mean, as in the voting sketch below;
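A sketch of the confidence-gated, weighted voting described above; the threshold value and the function itself are illustrative assumptions, not the patent's exact MCLP Boost procedure:

```python
import numpy as np

def vote_intention(probs, weights, threshold=0.6):
    """probs: (n_classifiers, n_classes) class probabilities of each
    base classifier; weights: (n_classifiers,) boosting weights."""
    votes = np.zeros(probs.shape[1])
    for p, w in zip(probs, weights):
        c = int(np.argmax(p))
        if p[c] >= threshold:      # vote only when confident enough
            votes[c] += w          # weighted vote collection
    best = int(np.argmax(votes))
    # output the activity intention only if its votes exceed the mean
    return best if votes[best] > votes.mean() else None
```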

5) The intention recognition results based on surface EMG signals can be used to generate the "virtual" target.

Further, the human influence on the dynamics of the upper limb exoskeleton robot and human interaction system depends entirely on $u_h$, regardless of the internal model it is based on. An alternative method is developed that does not require estimating the human control gains, using a "virtual" target $\tau_h^v$ generated with arbitrary values of these assumed gains. The specific process of the intention assimilation control method is:

1) The "virtual" target $\tau_h^v$ can effectively evaluate the human influence on the dynamics of the upper limb exoskeleton robot and human interaction system if it satisfies:

$u_h = -L_{h,1}^v(x - \tau_h^v) - L_{h,2}^v\dot{x}$ …………(9)

where the virtual human control gains $L_{h,1}^v$ and $L_{h,2}^v$ can use averages measured from many people, or the same values as the robot controller gains, i.e., $L_{h,1}^v = L_1$ and $L_{h,2}^v = L_2$;

2) To estimate $\tau_h^v$, the intention recognition method based on surface EMG signals described above can be used, or it can be parameterized with an internal model:

$\tau_h^v(t) = \theta^T\beta(t),\quad \beta(t) \equiv [1,\ t,\ \dots,\ t^m]^T$ …………(10)

where $\theta$ denotes the parameter vector for computing the human virtual target position $\tau_h^v$, $t$ denotes time, and $m$ is a preset parameter; $\tau_h^v$ is therefore a quantity determined by the internal model parameters and varying with time. Using the state vector $\phi \equiv [x^T,\ \dot{x}^T,\ \theta^T]^T$ and substituting it into equation (4), the extended model is obtained:

$\dot{\phi} = \begin{bmatrix} \dot{x} \\ M^{-1}(u + u_h - C\dot{x} - G) \\ 0 \end{bmatrix} + v$ …………(11)

where $\phi$ denotes the state vector of the upper limb exoskeleton robot and human interaction system, and $v \in N(0, E[vv^T])$ is the system noise, i.e., Gaussian noise with mean 0 and variance $E[vv^T]$.

3) Considering that the robot can measure its end-point position and velocity and the interaction force with the human using suitable sensors, the measurement vector of the robot is obtained:

$z = [x^T,\ \dot{x}^T,\ u_h^T]^T + \mu$ …………(12)

where $\mu \in N(0, E[\mu\mu^T])$ is the environmental measurement noise, i.e., Gaussian noise with mean 0 and variance $E[\mu\mu^T]$.

4) However, $\tau_h^v$ and $\theta$ in equation (10) are unknown, so the following system observer is used to compute the extended state estimate of the robot:

$\dot{\hat{\phi}} = A\hat{\phi} + Bu + K(z - \hat{z})$

$\hat{\tau}_h^v = \hat{\theta}^T\beta(t)$

$\hat{z} = H\hat{\phi}$ …………(13)

where $\hat{\ }$ denotes the estimated value and $z$ denotes the measurement vector of the upper limb exoskeleton robot and human interaction system; the linear quadratic estimation gain is $K = PH^TR^{-1}$, where $P$ is a positive definite matrix obtained by solving the Riccati differential equation:

$\dot{P} = AP + PA^T - PH^TR^{-1}HP + Q$ …………(14)

where the noise covariance matrices are $Q \equiv E[vv^T]$ and $R \equiv E[\mu\mu^T]$; using $A$ to denote the system matrix, equation (11) can be expressed in the following form:

$\dot{\phi} = A\phi + Bu + v$

$z = H\phi + \mu$ …………(15)

All parameters except $\theta$ can be obtained by observation, from which the value of $\tau_h^v$ is obtained; an observer sketch follows below.
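A minimal sketch of one observer update, assuming the compact linear form (15) and a simple Euler discretization of the Riccati equation (14); the matrices A, B, H, Q, R are taken as given, and this discretization is an assumption rather than the patent's implementation:

```python
import numpy as np

def observer_step(phi_hat, P, u, z, A, B, H, Q, R, dt):
    """One Euler step of the observer (13) with gain K = P H^T R^-1
    and of the Riccati differential equation (14)."""
    R_inv = np.linalg.inv(R)
    K = P @ H.T @ R_inv
    phi_hat = phi_hat + dt * (A @ phi_hat + B @ u + K @ (z - H @ phi_hat))
    P = P + dt * (A @ P + P @ A.T - P @ H.T @ R_inv @ H @ P + Q)
    return phi_hat, P
```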

5) The interaction between the human and the robot can be determined through the relationship between the targets $\tau$ and $\tau_h$: for example, $\tau = \tau_h$ corresponds to the robot assisting using the human virtual target; for $\tau = \tau_r$, the robot follows its original target $\tau_r$; and $\tau = 2\tau_r - \tau_h$ corresponds to "competition", i.e., the robot imposes its own target by eliminating the human target from the upper limb exoskeleton robot and human interaction system.

To assimilate the interaction behaviors, the target position of the robot is designed from the estimated human target using the following equation:

$\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h^v$ …………(16)

where $\tau_r$ denotes the original target position of the upper limb exoskeleton robot, and $\lambda$ denotes a hyperparameter adjusting the original target position of the upper limb exoskeleton robot against the human target position, which can be dynamically adjusted according to the end position $x$;

How adjusting the parameter λ changes the human-robot interaction strategy is shown in Fig. 5: when λ < 1, the intention assimilation controller coordinates the human and robot targets; when λ = 1, the intention assimilation controller ignores $\hat{\tau}_h^v$, thereby achieving human-robot collaboration; when λ = 2, the intention assimilation controller cancels the estimated human influence on the dynamics of the upper limb exoskeleton robot and human interaction system, and the position of the interaction system finally converges to the controller's target $\tau_r$. A sketch of this target computation follows below.

Further, to verify the stability of the human-robot interaction system after introducing $\hat{\tau}_h^v$, the human target $\hat{\tau}_h^v$ can be estimated through the second equation of (13):

$\hat{\tau}_h^v = \hat{\theta}^T\beta(t)$ …………(17)

Substituting the corrected equation (6), with the target given by (16), into the combined dynamic equation (4) yields:

[Equation (18): the closed-loop dynamics under the corrected controller]

where $\tilde{u}_h$ denotes the error between the estimated value and the actual value; defining $\tilde{u}_h \equiv \hat{u}_h - u_h$ and substituting the force (7) of the human hand acting on the object into the above equation yields:

[Equation: the closed-loop dynamics including the human-force estimation error $\tilde{u}_h$]

The influence of $\tau_r$ and $\tau_h$ on the dynamic system can therefore be analyzed, considering the steady-state position:

$x_{ss} \equiv (L_1 + L_{h,1})^{-1}(L_1\tau + L_{h,1}\tau_h)$ …………(19)

Equation (19) is simplified for the stability analysis by introducing the following definitions:

[Equations (20) and (21): auxiliary variable definitions]

from which it is derived that:

[Equation (22): the dynamics of the position error $x - x_{ss}$]

This shows that the position error $x - x_{ss}$ will vanish if the estimation error $\tilde{u}_h$ of the human force vanishes.

From the state-space form (15) of the upper limb exoskeleton robot and human interaction system dynamics and the system observer (13), the observer error $\tilde{\phi} \equiv \phi - \hat{\phi}$ satisfies:

$\dot{\tilde{\phi}} = (A - KH)\tilde{\phi}$ …………(23)

Defining $\xi \equiv [(x - x_{ss})^T,\ \dot{x}^T,\ \tilde{\phi}^T]^T$ and combining equation (22) with equation (23) gives:

[Equation (24): the combined dynamics of the tracking error and the observer error]

where ξ is the system state vector defined in the transient performance analysis; this equation describes the combined system comprising the system dynamics and the observer.

The eigenvalues of the combined system matrix can be computed from the solutions of the following characteristic equation, from which the stability of equation (24) is studied:

$[yI - (A - KH)]\,[My^2 + (C + L_2)y + L_1] = 0$ …………(25)

If the following two systems are stable, then the combined system will also be stable:

$M\ddot{e} + (C + L_2)\dot{e} + L_1 e = 0,\qquad \dot{\tilde{\phi}} = (A - KH)\tilde{\phi}$ …………(26)

The stability of the above two systems is examined separately using Lyapunov theory. First, the stability of the first system is proved by considering the Lyapunov candidate function:

$V_1 = \tfrac{1}{2}\dot{e}^T M\dot{e} + \tfrac{1}{2}e^T L_1 e$ …………(27)

Differentiating with respect to time gives:

$\dot{V}_1 = -\dot{e}^T L_2\dot{e} \le 0$ …………(28)

Then the stability of the second system is proved by considering the Lyapunov candidate function:

$V_2 = \tilde{\phi}^T P_v^{-1}\tilde{\phi}$ …………(29)

where $P_v$ is obtained from the Riccati equation in (14):

$\dot{P}_v = AP_v + P_vA^T - P_vH^TR^{-1}HP_v + Q$ …………(30)

Differentiating with respect to time gives:

[Equation (31): the time derivative of $V_2$]

Combining with equation (13) gives:

[Equations (32) and (33): identities obtained from the observer equations]

Substituting into equation (31) gives:

[Equation: the resulting negative semi-definite $\dot{V}_2$]

This proves that the two systems in equation (26) are stable, and therefore the upper limb exoskeleton robot and human interaction system is stable.

Those skilled in the art know that, in addition to implementing the system, device and modules provided by the present invention purely as computer-readable program code, the method steps can be logically programmed so that the system, device and modules are realized in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, device and modules provided by the present invention may be regarded as hardware components, and the modules included therein for realizing various programs may also be regarded as structures within the hardware components; modules for realizing various functions may be regarded both as software programs implementing the method and as structures within hardware components.

Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the essential content of the present invention. Where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily.

Claims (10)

1. An intention assimilation control method of an upper limb exoskeleton robot based on a surface electromyogram signal is characterized by comprising the following steps:
step 1: establishing an upper limb exoskeleton robot dynamic model by using the Kane method;
step 2: performing intention recognition through a surface electromyogram signal based on a dynamic model;
and step 3: performing the intention assimilation control through the virtual target.
2. The method for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyography according to claim 1, wherein the step 1 comprises:
step 1.1: there is no relative motion between the robot, the person and the object, and the robot and the person together manipulate the object, the object satisfying the dynamic equation:
$M_o\ddot{x} + G_o = f + u_h$ …………(1)
wherein $\ddot{x}$ is the second derivative of the position coordinates of the object with respect to time, $f$ and $u_h$ are the forces of the robot and the person on the object, $M_o$ is the mass matrix of the object, and $G_o$ is the gravity of the object;
step 1.2: establishing the upper limb exoskeleton robot dynamics model by using the Kane method to obtain the joint space dynamics equation when the upper limb exoskeleton robot with n degrees of freedom is in contact with the environment:
$M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q)f$ …………(2)
wherein $q$ is the joint coordinate of the robot, $\tau_q$ is the control input, $J^T(q)$ is the Jacobian matrix, $M_q(q)$ is the robot inertia matrix, $C_q(q,\dot{q})\dot{q}$ is the Coriolis and centrifugal torque, and $G_q(q)$ is the moment of gravity;
and converting into the robot operating space to obtain the kinetic equation:
$M_r\ddot{x} + C_r\dot{x} + G_r = u - f$ …………(3)
wherein $u = J^{\dagger T}(q)\tau_q$ represents the control input of the upper limb exoskeleton robot, and
$M_r = J^{\dagger T}(q)M_q(q)J^{\dagger}(q)$
$C_r = J^{\dagger T}(q)\big(C_q(q,\dot{q}) - M_q(q)J^{\dagger}(q)\dot{J}(q)\big)J^{\dagger}(q)$
$G_r = J^{\dagger T}(q)G_q(q)$
$M_r$, $C_r$, $G_r$ respectively represent the inertia matrix, the Coriolis force and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot under the Cartesian space coordinate system, and the symbol $\dagger$ represents the pseudo-inverse of a matrix;
step 1.3: combining equations (1) and (3) to obtain the combined kinetic equation of the object and the robot:
$M\ddot{x} + C\dot{x} + G = u + u_h$ …………(4)
$M \equiv M_o + M_r,\quad G \equiv G_o + G_r,\quad C \equiv C_r$ …………(5)
wherein $M$, $C$, $G$ respectively represent the inertia matrix, the Coriolis force and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;
step 1.4: measuring the position and velocity of the tail end of the upper limb exoskeleton robot and the human force, and adopting a robot controller with gravity compensation and linear feedback:
$u = G - L_1(x - \tau) - L_2\dot{x}$ …………(6)
wherein $\tau$ is the target position of the robot, and $L_1$ and $L_2$ are gains corresponding to the position error and the velocity;
the force of the person acting on the object is modeled as:
$u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x}$ …………(7)
wherein $L_{h,1}$ and $L_{h,2}$ are the control gains of the person and $\tau_h$ is the target position of the person; substituting equations (6) and (7) into equation (4) obtains the dynamic equation of the upper limb exoskeleton robot and human interactive closed-loop system:
$M\ddot{x} + C\dot{x} + (L_2 + L_{h,2})\dot{x} + (L_1 + L_{h,1})x = L_1\tau + L_{h,1}\tau_h$ …………(8)
3. the method for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyography according to claim 1, wherein the step 2 comprises:
step 2.1: collecting electromyographic signals of wrists, forearms and elbows of a person through an electromyograph;
step 2.2: filtering, data segmentation and feature extraction are carried out on the collected electromyographic signals, and feature extraction is carried out according to waveform types, so that the extracted features correspond to different intention categories;
step 2.3: training and predicting by using a multi-criterion linear programming in a database and combining a classification method of an online random forest;
step 2.4: during model prediction, the predicted category of each base classifier and its corresponding confidence are compared with a preset threshold value to determine whether the base classifier votes; finally, a Boost algorithm is used to collect the voting results of all the base classifiers and carry out weighted summation to find the prediction category with the largest number of votes, and when the number of votes is larger than the mean value, the activity intention is output.
4. The method for controlling the intention assimilation of an upper limb exoskeleton robot based on surface electromyography according to claim 2, wherein the step 3 comprises:
step 3.1: evaluating the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system through the virtual target $\tau_h^v$ of the person, the formula being:
$u_h = -L_{h,1}^v(x - \tau_h^v) - L_{h,2}^v\dot{x}$ …………(9)
wherein the human control gains $L_{h,1}^v$ and $L_{h,2}^v$ use measured average values, or the same values as the robot controller gains, i.e., $L_{h,1}^v = L_1$ and $L_{h,2}^v = L_2$; the superscript $v$ represents the estimated value;
step 3.2: estimating the human virtual target position \hat{\tau}_h using the intention recognition method based on surface electromyogram signals, or through an internal model parameterization:

\hat{\tau}_h(t) = \theta^T \nu(t) …………(10)

wherein the superscript symbol T represents the transpose, \theta is the parameter vector of the human virtual target position \hat{\tau}_h, \nu(t) \equiv [1, t, t^2, \ldots, t^m]^T, t represents time, and m is a predetermined parameter, so that \hat{\tau}_h is a quantity determined by the internal model parameters and varying with time;

using the state vector \phi \equiv [x^T, \dot{x}^T, \theta^T]^T of the upper limb exoskeleton robot and human interaction system and substituting equations (9) and (10) into equation (4), the extended model is obtained:

\dot{\phi} = A\phi + B(u - G) + v …………(11)

wherein \phi represents the state vector of the upper limb exoskeleton robot and human interaction system, and v \sim N(0, E[vv^T]) is the system noise, i.e. Gaussian noise with mean 0 and variance E[vv^T];
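Under the assumptions of a scalar end-point coordinate and the polynomial internal model of equation (10), the extended matrices of equation (11) can be assembled as in the following sketch; the block structure is inferred from equations (4), (9) and (10), not quoted from the patent:

import numpy as np

def basis(t, m):
    # Polynomial internal-model basis nu(t) = [1, t, ..., t^m] from equation (10)
    return np.array([t ** k for k in range(m + 1)])

def extended_matrices(M, C, Lh1, Lh2, t, m):
    # Extended state phi = [x, xdot, theta_0, ..., theta_m]; theta is constant,
    # so its rows of A are zero and it is driven only by process noise.
    nu = basis(t, m)
    n = 2 + (m + 1)
    A = np.zeros((n, n))
    A[0, 1] = 1.0                       # d(x)/dt = xdot
    A[1, 0] = -Lh1 / M                  # estimated human stiffness
    A[1, 1] = -(C + Lh2) / M            # system damping plus estimated human damping
    A[1, 2:] = (Lh1 / M) * nu           # human target theta^T nu(t) enters here
    B = np.zeros((n, 1))
    B[1, 0] = 1.0 / M                   # gravity-compensated input (u - G)
    return A, B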
step 3.3: measuring the position and velocity of the robot end point and the interaction force with the human through sensors to obtain the measurement vector of the upper limb exoskeleton robot and human interaction system:

z = H\phi + \mu …………(12)

wherein \mu \sim N(0, E[\mu\mu^T]) is the measurement noise, i.e. Gaussian noise with mean 0 and variance E[\mu\mu^T];
step 3.4: calculating the extended state estimate of the robot using the system observer:

\dot{\hat{\phi}} = A\hat{\phi} + B(u - G) + K(z - H\hat{\phi}) …………(13)

\hat{\tau}_h(t) = \hat{\theta}^T(t)\,\nu(t)

wherein ^ represents an estimated value and z represents the measurement vector of the upper limb exoskeleton robot and human interaction system;

the linear quadratic estimation gain is K = PH^T R^{-1}, where P is a positive definite matrix obtained by solving the Riccati differential equation:

\dot{P} = AP + PA^T + Q - PH^T R^{-1}HP …………(14)

wherein the noise covariance matrices are Q \equiv E[vv^T] and R \equiv E[\mu\mu^T], and A denotes the system matrix of equation (11), which for the extended state \phi \equiv [x^T, \dot{x}^T, \theta^T]^T takes the block form

A = \begin{bmatrix} 0 & I & 0 \\ -M^{-1}\hat{L}_{h,1} & -M^{-1}(C + \hat{L}_{h,2}) & M^{-1}\hat{L}_{h,1}\nu^T(t) \\ 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ M^{-1} \\ 0 \end{bmatrix}
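One Euler step of the observer (13) with the Riccati update (14) might look as follows; this is a sketch assuming all matrices have been assembled (e.g. by the extended_matrices helper above), and the integration scheme is our choice, not the patent's:

import numpy as np

def observer_step(phi_hat, P, z, u_eff, A, B, H, Q, R, dt):
    # Linear quadratic (Kalman-Bucy) estimation gain K = P H^T R^-1
    K = P @ H.T @ np.linalg.inv(R)
    # Extended state estimate, equation (13): model prediction plus innovation
    phi_hat = phi_hat + dt * (A @ phi_hat + B @ u_eff + K @ (z - H @ phi_hat))
    # Riccati differential equation (14) for the covariance P
    P = P + dt * (A @ P + P @ A.T + Q - K @ H @ P)
    return phi_hat, P

The estimated human target is then read out from the tail of the state as tau_h_hat = phi_hat[2:] @ basis(t, m), reusing the basis helper from the previous sketch.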
5. The intention assimilation control method for an upper limb exoskeleton robot based on surface electromyogram signals according to claim 4, wherein the interactive behavior between the human and the robot is determined by the relation between \tau and \tau_h:
when \tau = \tau_h, the robot assists the human by adopting the human virtual target; when \tau = \tau_r, the robot follows its original target \tau_r;
when \tau = 2\tau_r - \tau_h, the robot imposes its own target by canceling the human target from the upper limb exoskeleton robot and human interaction system;
the interactive behavior assimilation is designed from the estimated target position of the human using the following formula:

\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h …………(15)

wherein \tau_r represents the original target position of the upper limb exoskeleton robot, and \lambda represents a hyper-parameter adjusting between the original target position of the upper limb exoskeleton robot and the human target position, dynamically adjusted according to the end position x.
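As a sketch, the assimilation law (15) reduces to a one-line blend; the \lambda schedule below (ramping toward the robot target as the end position approaches it) is purely an illustrative assumption:

def assimilated_target(tau_r, tau_h_hat, x, width=0.2):
    # lam -> 1 near the robot target (robot imposes its goal),
    # lam -> 0 far from it (robot yields to the estimated human target)
    lam = max(0.0, 1.0 - abs(x - tau_r) / width)
    return lam * tau_r + (1.0 - lam) * tau_h_hat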
6. An intention assimilation control system for an upper limb exoskeleton robot based on surface electromyogram signals, characterized by comprising:
module M1: establishing an upper limb exoskeleton robot dynamics model by Kane's method;
module M2: performing intention recognition through the surface electromyogram signals based on the dynamics model;
module M3: performing intention assimilation control through the virtual target.
7. The intention assimilation control system for an upper limb exoskeleton robot based on surface electromyogram signals according to claim 6, wherein the module M1 comprises:
module M1.1: with no relative motion among the robot, the human and the object, the robot and the human jointly manipulating the object, the object satisfying the dynamics equation:

M_o\ddot{x} = f + u_h - G_o …………(1)

wherein \ddot{x} is the second derivative of the object position coordinate x with respect to time, f and u_h are the forces exerted by the robot and the human on the object respectively, M_o is the mass matrix of the object, and G_o is the gravity of the object;
module M1.2: establishing the upper limb exoskeleton robot dynamics model by Kane's method to obtain the joint-space dynamics equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:

M_q(q)\ddot{q} + C_q(q,\dot{q})\dot{q} + G_q(q) = \tau_q - J^T(q)f …………(2)

wherein q is the joint coordinate vector of the robot, \tau_q is the control input, J^T(q) is the transpose of the Jacobian matrix J(q), M_q(q) is the robot inertia matrix, C_q(q,\dot{q})\dot{q} is the Coriolis and centrifugal torque, and G_q(q) is the gravity torque;
and converting into the robot operational space to obtain the dynamics equation:

M_r\ddot{x} + C_r\dot{x} + G_r = u - f …………(3)

wherein u \equiv J^{\dagger T}\tau_q represents the control input of the upper limb exoskeleton robot in the operational space, and

M_r \equiv J^{\dagger T} M_q J^{\dagger}, \quad C_r \equiv J^{\dagger T}(C_q - M_q J^{\dagger}\dot{J})J^{\dagger}, \quad G_r \equiv J^{\dagger T} G_q,

M_r, C_r, G_r respectively representing the inertia matrix, the Coriolis and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system, and the superscript \dagger representing the matrix pseudo-inverse;
module M1.3: combining equations (1) and (3) to obtain the coupled dynamics equation of the object and the robot:

M\ddot{x} + C\dot{x} + G = u + u_h …………(4)

M \equiv M_o + M_r, \quad G \equiv G_o + G_r, \quad C \equiv C_r …………(5)

wherein M, C, G respectively represent the inertia matrix, the Coriolis and centrifugal force matrix and the gravity matrix of the upper limb exoskeleton robot and human interaction system in the Cartesian space coordinate system;
module M1.4: measuring the end-effector position and velocity of the upper limb exoskeleton robot and the force applied by the human, and adopting a robot controller with gravity compensation and linear feedback:

u = G - L_1(x - \tau) - L_2\dot{x} …………(6)

wherein \tau is the target position of the robot, and L_1 and L_2 are the gains corresponding to the position error and the velocity;

the force applied by the human on the object is modeled as:

u_h = -L_{h,1}(x - \tau_h) - L_{h,2}\dot{x} …………(7)

wherein L_{h,1} and L_{h,2} are the control gains of the human and \tau_h is the target position of the human; substituting equations (6) and (7) into equation (4) yields the dynamics equation of the upper limb exoskeleton robot and human interactive closed-loop system:

M\ddot{x} + (C + L_2 + L_{h,2})\dot{x} + L_1(x - \tau) + L_{h,1}(x - \tau_h) = 0 …………(8)
8. The intention assimilation control system for an upper limb exoskeleton robot based on surface electromyogram signals according to claim 6, wherein the module M2 comprises:
module M2.1: collecting the electromyogram signals of the wrist, forearm and elbow of the human through an electromyograph;
module M2.2: carrying out filtering, data segmentation and feature extraction on the collected electromyogram signals, the features being extracted according to waveform type so that the extracted features correspond to different intention categories;
module M2.3: carrying out training and prediction on the database by using multiple-criteria linear programming combined with an online random forest classification method;
module M2.4: during model prediction, comparing the prediction category of each base classifier and its corresponding confidence with a preset threshold to determine whether that base classifier votes; finally, collecting the voting results of all the base classifiers with a Boost algorithm and carrying out weighted summation to find the prediction category with the most votes, and outputting the activity intention when its votes exceed the mean value.
9. The intention assimilation control system for an upper limb exoskeleton robot based on surface electromyogram signals according to claim 7, wherein the module M3 comprises:
module M3.1: evaluating the influence of the human on the dynamics of the upper limb exoskeleton robot and human interaction system through the virtual target \hat{\tau}_h of the human:

\hat{u}_h = -\hat{L}_{h,1}(x - \hat{\tau}_h) - \hat{L}_{h,2}\dot{x} …………(9)

wherein the human control gains \hat{L}_{h,1} and \hat{L}_{h,2} use measured average values, or the same values as the robot controller gains, i.e. \hat{L}_{h,1} = L_1 and \hat{L}_{h,2} = L_2; the superscript symbol ^ represents an estimated value;
module M3.2: estimating the human virtual target position \hat{\tau}_h using the intention recognition method based on surface electromyogram signals, or through an internal model parameterization:

\hat{\tau}_h(t) = \theta^T \nu(t) …………(10)

wherein the superscript symbol T represents the transpose, \theta is the parameter vector of the human virtual target position \hat{\tau}_h, \nu(t) \equiv [1, t, t^2, \ldots, t^m]^T, t represents time, and m is a predetermined parameter, so that \hat{\tau}_h is a quantity determined by the internal model parameters and varying with time;

using the state vector \phi \equiv [x^T, \dot{x}^T, \theta^T]^T of the upper limb exoskeleton robot and human interaction system and substituting equations (9) and (10) into equation (4), the extended model is obtained:

\dot{\phi} = A\phi + B(u - G) + v …………(11)

wherein \phi represents the state vector of the upper limb exoskeleton robot and human interaction system, and v \sim N(0, E[vv^T]) is the system noise, i.e. Gaussian noise with mean 0 and variance E[vv^T];
module M3.3: measuring the position and velocity of the robot end point and the interaction force with the human through sensors to obtain the measurement vector of the upper limb exoskeleton robot and human interaction system:

z = H\phi + \mu …………(12)

wherein \mu \sim N(0, E[\mu\mu^T]) is the measurement noise, i.e. Gaussian noise with mean 0 and variance E[\mu\mu^T];
module M3.4: calculating the extended state estimate of the robot using the system observer:

\dot{\hat{\phi}} = A\hat{\phi} + B(u - G) + K(z - H\hat{\phi}) …………(13)

\hat{\tau}_h(t) = \hat{\theta}^T(t)\,\nu(t)

wherein ^ represents an estimated value and z represents the measurement vector of the upper limb exoskeleton robot and human interaction system;

the linear quadratic estimation gain is K = PH^T R^{-1}, where P is a positive definite matrix obtained by solving the Riccati differential equation:

\dot{P} = AP + PA^T + Q - PH^T R^{-1}HP …………(14)

wherein the noise covariance matrices are Q \equiv E[vv^T] and R \equiv E[\mu\mu^T], and A denotes the system matrix of equation (11), which for the extended state \phi \equiv [x^T, \dot{x}^T, \theta^T]^T takes the block form

A = \begin{bmatrix} 0 & I & 0 \\ -M^{-1}\hat{L}_{h,1} & -M^{-1}(C + \hat{L}_{h,2}) & M^{-1}\hat{L}_{h,1}\nu^T(t) \\ 0 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ M^{-1} \\ 0 \end{bmatrix}
10. The intention assimilation control system for an upper limb exoskeleton robot based on surface electromyogram signals according to claim 9, wherein the interactive behavior between the human and the robot is determined by the relation between \tau and \tau_h:
when \tau = \tau_h, the robot assists the human by adopting the human virtual target; when \tau = \tau_r, the robot follows its original target \tau_r;
when \tau = 2\tau_r - \tau_h, the robot imposes its own target by canceling the human target from the upper limb exoskeleton robot and human interaction system;
the interactive behavior assimilation is designed from the estimated target position of the human using the following formula:

\tau = \lambda\tau_r + (1 - \lambda)\hat{\tau}_h …………(15)

wherein \tau_r represents the original target position of the upper limb exoskeleton robot, and \lambda represents a hyper-parameter adjusting between the original target position of the upper limb exoskeleton robot and the human target position, dynamically adjusted according to the end position x.
CN202110775590.5A 2021-07-08 2021-07-08 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal Active CN113478462B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110775590.5A CN113478462B (en) 2021-07-08 2021-07-08 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal


Publications (2)

Publication Number Publication Date
CN113478462A true CN113478462A (en) 2021-10-08
CN113478462B CN113478462B (en) 2022-12-30



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2497610A1 (en) * 2011-03-09 2012-09-12 Syco Di Hedvig Haberl & C. S.A.S. System for controlling a robotic device during walking, in particular for rehabilitation purposes, and corresponding robotic device
WO2018000854A1 (en) * 2016-06-29 2018-01-04 深圳光启合众科技有限公司 Human upper limb motion intention recognition and assistance method and device
CN111631923A (en) * 2020-06-02 2020-09-08 中国科学技术大学先进技术研究院 Neural Network Control System of Exoskeleton Robot Based on Intention Recognition
CN112107397A (en) * 2020-10-19 2020-12-22 中国科学技术大学 Myoelectric signal driven lower limb artificial limb continuous control system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Xiang et al.: "Control Method and Implementation of a Robotic Arm Based on EEG and EMG Signals", Computer Measurement & Control *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113995629A (en) * 2021-11-03 2022-02-01 中国科学技术大学先进技术研究院 Admittance control method and system of upper limb and dual-arm rehabilitation robot based on mirror force field
CN113995629B (en) * 2021-11-03 2023-07-11 中国科学技术大学先进技术研究院 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system
CN114474051A (en) * 2021-12-30 2022-05-13 西北工业大学 A Personalized Gain Teleoperation Control Method Based on Operator Physiological Signals
CN114377358A (en) * 2022-02-22 2022-04-22 南京医科大学 A Home Rehabilitation System for Upper Limbs Based on Sphero Spherical Robot

Also Published As

Publication number Publication date
CN113478462B (en) 2022-12-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant