CN113478462A - Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal - Google Patents
- Publication number: CN113478462A
- Application number: CN202110775590.5A
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
(all under B25J — Manipulators; chambers provided with manipulation devices)
- B25J9/0006 — Programme-controlled manipulators: exoskeletons, i.e. resembling a human figure
- B25J13/087 — Controls for manipulators by means of sensing devices, for sensing other physical parameters, e.g. electrical or chemical properties
- B25J9/1605 — Programme controls characterised by the control system: simulation of manipulator lay-out, design, modelling of manipulator
- B25J9/161 — Programme controls characterised by the control system: hardware, e.g. neural networks, fuzzy logic, interfaces, processor
- B25J9/163 — Programme controls characterised by the control loop: learning, adaptive, model based, rule based expert control
Abstract
The present invention provides an intention assimilation control method and system for an upper limb exoskeleton robot based on surface electromyography (sEMG) signals, comprising: step 1: establishing a dynamic model of the upper limb exoskeleton robot using the Kane method; step 2: recognizing intention from surface EMG signals on the basis of the dynamic model; step 3: performing intention assimilation control through a virtual target. The proposed intention assimilation control method covers a continuum of interaction behaviors from cooperation to competition, requires less force guidance, and provides safer obstacle avoidance and a wider range of interaction behaviors.
Description
Technical Field
The present invention relates to the technical fields of human-computer interaction, artificial intelligence and interactive control, and in particular to a method and system for intention assimilation control of an upper limb exoskeleton robot based on surface electromyography (sEMG) signals.
Background
Robot technology has developed rapidly in recent years, especially human-robot interaction robots. The human-machine interface is the most important element of human-robot interaction research, and the quality of its signals directly affects both the control performance and the experimental results. Among human-machine interfaces that can measure human force and motion-intention signals, surface EMG offers great advantages in both accuracy and latency, and gives comparatively precise estimates of human motion and force.
As for control strategies, the diversification of interactive-robot control strategies is an important factor in their adoption. The basic strategy is PID control, which is simple and convenient to apply but can only follow a fixed trajectory and cannot incorporate human intention. To reflect human intention, surface EMG signals have also been introduced into robot control strategies, with artificial-intelligence algorithms linking the EMG signals to human joints, achieving a certain control effect. Starting from the concept of homotopy switching between master and slave roles, human-robot interaction behaviors can be classified as assistance, cooperation, collaboration, competition, and so on. Intention assimilation control covers the continuum of interaction behaviors from cooperation to competition, with less force guidance, safer obstacle avoidance and a wider range of interaction behaviors.
Patent document CN108283569A (application number CN201711449077.7) discloses an exoskeleton robot control system and control method intended to address the poor versatility of existing rehabilitation exoskeleton robots, their inability to correctly judge human motion intention, and their failure to achieve human-robot collaboration. That exoskeleton robot control system includes an attitude sensor, an angle sensor, a pressure sensor, a surface EMG sensor, a processor, exoskeleton wearable parts and a human-computer interaction module.
Summary of the Invention
In view of the defects in the prior art, the purpose of the present invention is to provide a method and system for intention assimilation control of an upper limb exoskeleton robot based on surface EMG signals.
The intention assimilation control method for an upper limb exoskeleton robot based on surface EMG signals provided by the present invention includes:

Step 1: establish a dynamic model of the upper limb exoskeleton robot using the Kane method;

Step 2: based on the dynamic model, recognize intention from surface EMG signals;

Step 3: perform intention assimilation control through a virtual target.
Preferably, step 1 includes:

Step 1.1: there is no relative motion between the robot, the human and the object, and the robot and the human manipulate the object together; the object satisfies the dynamic equation:

Mo·ẍ + Go = f + uh …………(1)

where ẍ is the second time derivative of the object's position coordinate, f and uh are the forces exerted on the object by the robot and the human, Mo is the object's mass matrix, and Go is the object's gravity;

Step 1.2: establish the dynamic model of the upper limb exoskeleton robot using the Kane method, obtaining the joint-space dynamic equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:

Mq(q)·q̈ + Cq(q,q̇)·q̇ + Gq(q) = τq − J^T(q)·f …………(2)

where q is the robot's joint coordinates, τq is the control input, J^T(q) is the Jacobian matrix, Mq(q) is the robot's inertia matrix, Cq(q,q̇)·q̇ collects the Coriolis and centrifugal torques, and Gq(q) is the gravity torque;

transforming into the robot's operational space gives the dynamic equation:

Mr·ẍ + Cr·ẋ + Gr = u − f …………(3)

where u denotes the control input of the upper limb exoskeleton robot, u = (J^T(q))†·τq; Mr, Cr, Gr denote, respectively, the inertia matrix, the Coriolis/centrifugal-force matrix and the gravity matrix of the upper limb exoskeleton robot in the Cartesian coordinate frame; the symbol † denotes the matrix pseudo-inverse;

Step 1.3: combining equations (1) and (3) gives the coupled object-robot dynamic equation:

M·ẍ + C·ẋ + G = u + uh …………(4)
M ≡ Mo + Mr, G ≡ Go + Gr, C ≡ Cr …………(5)

where M, C, G denote, respectively, the inertia matrix, the Coriolis/centrifugal-force matrix and the gravity matrix of the coupled upper limb exoskeleton robot-human interaction system in the Cartesian coordinate frame;

Step 1.4: measure the position and velocity of the end of the upper limb exoskeleton robot and the human force, and adopt a robot controller with gravity compensation and linear feedback:

u = G − L1·(x − τ) − L2·ẋ …………(6)

where τ is the robot's target position, and L1 and L2 are the gains on the position error and the velocity;

model the force the human exerts on the object as:

uh = −Lh,1·(x − τh) − Lh,2·ẋ …………(7)

where Lh,1 and Lh,2 are the human's control gains and τh is the human's target position; substituting equations (6) and (7) into equations (4)-(5) gives the closed-loop dynamic equation of the upper limb exoskeleton robot-human interaction system:

M·ẍ + (C + L2 + Lh,2)·ẋ + (L1 + Lh,1)·x = L1·τ + Lh,1·τh …………(8)
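The closed-loop behavior of equations (6)-(8) can be illustrated with a minimal numerical sketch. All parameter values below (inertia, damping, gains, targets) are illustrative assumptions, not values from the patent, and a single scalar degree of freedom with zero gravity stands in for the full Cartesian model:

```python
import numpy as np

# All values below are illustrative assumptions (not taken from the patent).
M, C, G = 2.0, 0.5, 0.0      # combined inertia, damping, gravity (eq. 5); G = 0 here
L1, L2 = 40.0, 8.0           # robot gains on position error and velocity (eq. 6)
Lh1, Lh2 = 25.0, 5.0         # assumed human control gains (eq. 7)
tau_r, tau_h = 0.3, 0.5      # robot and human target positions

dt, T = 0.001, 3.0
x, xd = 0.0, 0.0             # end-point position and velocity
for _ in range(int(T / dt)):
    u = G - L1 * (x - tau_r) - L2 * xd    # robot controller (eq. 6)
    uh = -Lh1 * (x - tau_h) - Lh2 * xd    # human force model (eq. 7)
    xdd = (u + uh - C * xd - G) / M       # combined dynamics (eq. 4)
    xd += xdd * dt
    x += xd * dt

# The closed loop (eq. 8) settles at the gain-weighted average of both targets.
x_star = (L1 * tau_r + Lh1 * tau_h) / (L1 + Lh1)
print(round(x, 3), round(x_star, 3))
```

With both controllers active, the end point settles at the gain-weighted average of the two targets, (L1·τr + Lh,1·τh)/(L1 + Lh,1), which is why redesigning the robot's target τ in step 3 shifts the interaction anywhere between the human's goal and the robot's.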
Preferably, step 2 includes:

Step 2.1: collect EMG signals from the human wrist, forearm and elbow with an electromyograph;

Step 2.2: filter the collected EMG signals and perform data segmentation and feature extraction, choosing features according to the waveform type so that the extracted features correspond to different intention categories;

Step 2.3: train and predict using a classification method that combines multi-criteria linear programming on the database with an online random forest;

Step 2.4: at prediction time, compare each base classifier's predicted category and its confidence against a preset threshold to decide whether that base classifier votes; finally, use the Boost algorithm to collect and weight the votes of all base classifiers, find the predicted category with the most votes, and output the activity intention when its vote count exceeds the mean.
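A rough sketch of the prediction stage of steps 2.2-2.4: time-domain feature extraction plus a confidence-gated, weighted vote. The feature set (mean absolute value, waveform length, zero crossings) and the exact gating rule are common choices written here as assumptions; the patent's MCLP plus online-random-forest base classifiers are abstracted into plain (prediction, confidence, weight) triples:

```python
import numpy as np

def features(window):
    # Common time-domain sEMG features for intent classification (assumed set)
    mav = np.mean(np.abs(window))               # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))        # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero-crossing count
    return np.array([mav, wl, zc])

def gated_weighted_vote(predictions, confidences, weights, threshold=0.6):
    """Confidence-gated weighted voting over base classifiers (step 2.4 sketch).

    A base classifier votes only if its confidence reaches the preset
    threshold; votes are weighted (Boost-style), and the winning class is
    output only if its vote mass exceeds the mean vote mass per class.
    """
    classes = set(predictions)
    votes = {c: 0.0 for c in classes}
    for pred, conf, w in zip(predictions, confidences, weights):
        if conf >= threshold:          # gate on per-classifier confidence
            votes[pred] += w
    best = max(votes, key=votes.get)
    mean_mass = sum(votes.values()) / len(votes)
    return best if votes[best] > mean_mass else None
```

For example, three base classifiers predicting ('grasp', 'release', 'grasp') with confidences (0.9, 0.8, 0.7) and weights (1.0, 1.0, 0.5) all pass the gate; 'grasp' accumulates 1.5 against a mean of 1.25 and is output as the activity intention.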
Preferably, step 3 includes:

Step 3.1: evaluate the human's influence on the dynamics of the upper limb exoskeleton robot-human interaction system through the human's virtual target τh^v:

τh^v = x + L̂h,1^(−1)·(uh + L̂h,2·ẋ) …………(9)

where the human control gains L̂h,1 and L̂h,2 use measured averages, or the same values as the robot controller gains, i.e. L̂h,1 = L1 and L̂h,2 = L2; the superscript v denotes an estimated value;

Step 3.2: estimate τh^v using the sEMG-based intention recognition method, or by parameterizing it through an internal model:

τh^v(t) = θ^T·[1, t, t², …, t^m]^T …………(10)

where the superscript T denotes transposition, θ is the parameter vector for computing the human's virtual target position τh^v, t denotes time, and m is a preset parameter, so that τh^v is a quantity determined by the internal-model parameters and varying with time;

using the state vector φ = [x^T, ẋ^T, θ^T]^T of the upper limb exoskeleton robot-human interaction system and substituting it into equation (5) gives the extended model:

φ̇ = A·φ + v …………(11)

where φ denotes the state vector of the upper limb exoskeleton robot-human interaction system and v ∈ N(0, E[v·v^T]) is the system noise, i.e. Gaussian noise with zero mean and covariance E[v·v^T];

Step 3.3: measure the robot's end-point position and velocity and the interaction force with the human through sensors, obtaining the measurement vector of the upper limb exoskeleton robot-human interaction system:

z = H·φ + μ …………(12)

where μ ∈ N(0, E[μ·μ^T]) is the environmental measurement noise, i.e. Gaussian noise with zero mean and covariance E[μ·μ^T];

Step 3.4: compute the robot's extended state estimate with a system observer:

dφ̂/dt = A·φ̂ + K·(z − H·φ̂) …………(13)

where ∧ denotes an estimated value and z denotes the measurement vector of the upper limb exoskeleton robot-human interaction system;

the linear-quadratic estimation gain is K = P·H^T·R^(−1), where P is a positive-definite matrix obtained by solving the Riccati differential equation:

Ṗ = A·P + P·A^T − P·H^T·R^(−1)·H·P + Q

where the noise covariance matrices are Q ≡ E[v·v^T] and R ≡ E[μ·μ^T], and A denotes the system matrix of the extended model in equation (11).
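Steps 3.2-3.4 amount to a Kalman-type observer over an extended state containing the unknown human target. The sketch below uses a simplified extended state φ = [x, ẋ, τh] with τh modelled as constant (the m = 0 special case of the polynomial internal model), an Euler-discretised filter in place of the continuous Riccati equation, and invented parameter values throughout:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar parameters (assumptions, not from the patent)
M, C = 2.0, 0.5
L1, L2 = 40.0, 8.0
Lh1, Lh2 = 25.0, 5.0
tau_r, tau_h_true = 0.3, 0.5
dt, steps = 0.001, 2000

# Extended state phi = [x, xdot, tau_h]; tau_h treated as constant (eq. 11)
A = np.array([[0.0, 1.0, 0.0],
              [-(L1 + Lh1) / M, -(C + L2 + Lh2) / M, Lh1 / M],
              [0.0, 0.0, 0.0]])
b = np.array([0.0, L1 * tau_r / M, 0.0])   # known drive from the robot's target
# Measurements z = [x, xdot, u_h], u_h = -Lh1*(x - tau_h) - Lh2*xdot (eq. 12)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [-Lh1, -Lh2, Lh1]])
F = np.eye(3) + A * dt                     # Euler-discretised transition
Q = np.diag([1e-8, 1e-8, 1e-8])            # process noise covariance
R = np.diag([1e-4, 1e-4, 1e-2])            # measurement noise covariance

phi = np.array([0.0, 0.0, tau_h_true])     # true extended state
phi_hat = np.array([0.0, 0.0, 0.0])        # observer starts ignorant of tau_h
P = np.eye(3)

for _ in range(steps):
    phi = phi + (A @ phi + b) * dt                         # true dynamics
    z = H @ phi + rng.normal(0.0, [0.01, 0.01, 0.1])       # noisy measurement
    # Kalman predict/update (discrete analogue of eq. 13 and the Riccati step)
    phi_hat = phi_hat + (A @ phi_hat + b) * dt
    P = F @ P @ F.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    phi_hat = phi_hat + K @ (z - H @ phi_hat)
    P = (np.eye(3) - K @ H) @ P

tau_h_hat = phi_hat[2]
print(round(tau_h_hat, 3))
```

Because the measured interaction force uh depends linearly on τh, the filter recovers the human's virtual target from position, velocity and force measurements alone, which is what the assimilation controller in equation (13) consumes.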
Preferably, the interaction between the human and the robot is determined by the relationship between τ and τh:

when τ = τh, the robot assists the human virtual target instead of following its original target τr;

when τ = 2τr − τh, the robot imposes its own goal by canceling the human's goal out of the upper limb exoskeleton robot-human interaction system;

the robot's target position is designed from the estimated human target to assimilate the interaction behavior:

τ = λ·τh^v + (1 − λ)·(2τr − τh^v) …………(14)

where τr denotes the original target position of the upper limb exoskeleton robot, and λ is a hyperparameter trading off the robot's original target position against the human's target position, adjusted dynamically according to the end-point position x.
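Given the two endpoint behaviors above (τ = τh for assistance, τ = 2τr − τh for imposing the robot's goal), one natural continuous blend, written here as an assumption about the patent's λ schedule, is a convex combination of the two extremes; note that λ = 0.5 recovers τ = τr, the robot's own target:

```python
def assimilated_target(tau_r, tau_h_hat, lam):
    """Blend robot and estimated human targets (illustrative form of eq. 14).

    lam = 1.0 -> tau = tau_h_hat            (robot assists the human's goal)
    lam = 0.0 -> tau = 2*tau_r - tau_h_hat  (robot imposes its own goal)
    """
    return lam * tau_h_hat + (1.0 - lam) * (2.0 * tau_r - tau_h_hat)

def lam_from_position(x, obstacle, width=0.1):
    # Hypothetical schedule: cooperate far from an obstacle, compete near it
    d = abs(x - obstacle)
    return min(1.0, d / width)
```

A schedule like `lam_from_position` is one plausible way to realize the "dynamically adjusted according to the end position x" behavior in an obstacle-avoidance scenario (Fig. 5 shows how varying λ changes the interaction strategy); the specific shape is an assumption for illustration.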
The intention assimilation control system for an upper limb exoskeleton robot based on surface EMG signals provided by the present invention includes:

Module M1: establish a dynamic model of the upper limb exoskeleton robot using the Kane method;

Module M2: based on the dynamic model, recognize intention from surface EMG signals;

Module M3: perform intention assimilation control through a virtual target.
Preferably, module M1 includes:

Module M1.1: there is no relative motion between the robot, the human and the object, and the robot and the human manipulate the object together; the object satisfies the dynamic equation:

Mo·ẍ + Go = f + uh …………(1)

where ẍ is the second time derivative of the object's position coordinate, f and uh are the forces exerted on the object by the robot and the human, Mo is the object's mass matrix, and Go is the object's gravity;

Module M1.2: establish the dynamic model of the upper limb exoskeleton robot using the Kane method, obtaining the joint-space dynamic equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment:

Mq(q)·q̈ + Cq(q,q̇)·q̇ + Gq(q) = τq − J^T(q)·f …………(2)

where q is the robot's joint coordinates, τq is the control input, J^T(q) is the Jacobian matrix, Mq(q) is the robot's inertia matrix, Cq(q,q̇)·q̇ collects the Coriolis and centrifugal torques, and Gq(q) is the gravity torque;

transforming into the robot's operational space gives the dynamic equation:

Mr·ẍ + Cr·ẋ + Gr = u − f …………(3)

where u denotes the control input of the upper limb exoskeleton robot, u = (J^T(q))†·τq; Mr, Cr, Gr denote, respectively, the inertia matrix, the Coriolis/centrifugal-force matrix and the gravity matrix of the upper limb exoskeleton robot in the Cartesian coordinate frame; the symbol † denotes the matrix pseudo-inverse;

Module M1.3: combining equations (1) and (3) gives the coupled object-robot dynamic equation:

M·ẍ + C·ẋ + G = u + uh …………(4)
M ≡ Mo + Mr, G ≡ Go + Gr, C ≡ Cr …………(5)

where M, C, G denote, respectively, the inertia matrix, the Coriolis/centrifugal-force matrix and the gravity matrix of the coupled upper limb exoskeleton robot-human interaction system in the Cartesian coordinate frame;

Module M1.4: measure the position and velocity of the end of the upper limb exoskeleton robot and the human force, and adopt a robot controller with gravity compensation and linear feedback:

u = G − L1·(x − τ) − L2·ẋ …………(6)

where τ is the robot's target position, and L1 and L2 are the gains on the position error and the velocity;

model the force the human exerts on the object as:

uh = −Lh,1·(x − τh) − Lh,2·ẋ …………(7)

where Lh,1 and Lh,2 are the human's control gains and τh is the human's target position; substituting equations (6) and (7) into equations (4)-(5) gives the closed-loop dynamic equation of the upper limb exoskeleton robot-human interaction system:

M·ẍ + (C + L2 + Lh,2)·ẋ + (L1 + Lh,1)·x = L1·τ + Lh,1·τh …………(8)
Preferably, module M2 includes:

Module M2.1: collect EMG signals from the human wrist, forearm and elbow with an electromyograph;

Module M2.2: filter the collected EMG signals and perform data segmentation and feature extraction, choosing features according to the waveform type so that the extracted features correspond to different intention categories;

Module M2.3: train and predict using a classification method that combines multi-criteria linear programming on the database with an online random forest;

Module M2.4: at prediction time, compare each base classifier's predicted category and its confidence against a preset threshold to decide whether that base classifier votes; finally, use the Boost algorithm to collect and weight the votes of all base classifiers, find the predicted category with the most votes, and output the activity intention when its vote count exceeds the mean.
Preferably, module M3 includes:

Module M3.1: evaluate the human's influence on the dynamics of the upper limb exoskeleton robot-human interaction system through the human's virtual target τh^v:

τh^v = x + L̂h,1^(−1)·(uh + L̂h,2·ẋ) …………(9)

where the human control gains L̂h,1 and L̂h,2 use measured averages, or the same values as the robot controller gains, i.e. L̂h,1 = L1 and L̂h,2 = L2; the superscript v denotes an estimated value;

Module M3.2: estimate τh^v using the sEMG-based intention recognition method, or by parameterizing it through an internal model:

τh^v(t) = θ^T·[1, t, t², …, t^m]^T …………(10)

where the superscript T denotes transposition, θ is the parameter vector for computing the human's virtual target position τh^v, t denotes time, and m is a preset parameter, so that τh^v is a quantity determined by the internal-model parameters and varying with time;

using the state vector φ = [x^T, ẋ^T, θ^T]^T of the upper limb exoskeleton robot-human interaction system and substituting it into equation (5) gives the extended model:

φ̇ = A·φ + v …………(11)

where φ denotes the state vector of the upper limb exoskeleton robot-human interaction system and v ∈ N(0, E[v·v^T]) is the system noise, i.e. Gaussian noise with zero mean and covariance E[v·v^T];

Module M3.3: measure the robot's end-point position and velocity and the interaction force with the human through sensors, obtaining the measurement vector of the upper limb exoskeleton robot-human interaction system:

z = H·φ + μ …………(12)

where μ ∈ N(0, E[μ·μ^T]) is the environmental measurement noise, i.e. Gaussian noise with zero mean and covariance E[μ·μ^T];

Module M3.4: compute the robot's extended state estimate with a system observer:

dφ̂/dt = A·φ̂ + K·(z − H·φ̂) …………(13)

where ∧ denotes an estimated value and z denotes the measurement vector of the upper limb exoskeleton robot-human interaction system;

the linear-quadratic estimation gain is K = P·H^T·R^(−1), where P is a positive-definite matrix obtained by solving the Riccati differential equation:

Ṗ = A·P + P·A^T − P·H^T·R^(−1)·H·P + Q

where the noise covariance matrices are Q ≡ E[v·v^T] and R ≡ E[μ·μ^T], and A denotes the system matrix of the extended model in equation (11).
Preferably, the interaction between the human and the robot is determined by the relationship between τ and τh:

when τ = τh, the robot assists the human virtual target instead of following its original target τr;

when τ = 2τr − τh, the robot imposes its own goal by canceling the human's goal out of the upper limb exoskeleton robot-human interaction system;

the robot's target position is designed from the estimated human target to assimilate the interaction behavior:

τ = λ·τh^v + (1 − λ)·(2τr − τh^v) …………(14)

where τr denotes the original target position of the upper limb exoskeleton robot, and λ is a hyperparameter trading off the robot's original target position against the human's target position, adjusted dynamically according to the end-point position x.
Compared with the prior art, the present invention has the following beneficial effects:

(1) The present invention introduces surface EMG signals into the robot control strategy, with advantages in both accuracy and latency;

(2) The present invention proposes an intention assimilation control method that covers the continuum of interaction behaviors from cooperation to competition, with less force guidance, safer obstacle avoidance and a wider range of interaction behaviors;

(3) The present invention is simple and easy to implement, and constitutes a highly robust compliant control method.
Brief Description of the Drawings

Other features, objects and advantages of the present invention will become more apparent from the detailed description of non-limiting embodiments made with reference to the following drawings:

Fig. 1 is a schematic diagram of the sEMG-based intention assimilation control method for an upper limb exoskeleton robot of the present invention;

Fig. 2 is a schematic diagram of the obstacle-avoidance and assistance task scenarios of the present invention;

Fig. 3 is a flowchart of the sEMG-based intention recognition method of the present invention;

Fig. 4 is a schematic diagram of the MCLP Boost algorithm of the present invention;

Fig. 5 is a schematic diagram of how adjusting the parameter λ changes the human-robot interaction strategy.
Detailed Description

The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be noted that several changes and improvements can be made by those of ordinary skill in the art without departing from the inventive concept, and all of these fall within the protection scope of the present invention.
Embodiment:

Fig. 1 is a schematic diagram of the sEMG-based intention assimilation control method for an upper limb exoskeleton robot according to the present invention, comprising the upper limb exoskeleton robot dynamic model established with the Kane method, the sEMG-based intention recognition method and the intention assimilation control method. Different task scenarios are shown in Fig. 2; the intention assimilation control method of the present invention can unify different human-robot interaction strategies and control them continuously.
进一步的,利用Kane方法建立上肢外骨骼机器人动力学模型具体过程为:Further, the specific process of using the Kane method to establish the dynamics model of the upper limb exoskeleton robot is as follows:
1)假设机器人抓手、人手和物体之间没有相对运动,并且机器人抓手和人手一起操纵刚性物体,物体是一个质点。一般的物体操作只考虑线性运动,物体满足动态方程:1) It is assumed that there is no relative motion between the robot gripper, the human hand and the object, and the robot gripper and the human hand manipulate a rigid object together, and the object is a mass point. The general object operation only considers linear motion, and the object satisfies the dynamic equation:
其中,x(t)为物体的位置坐标,f和uh是机器人和人作用在物体上的力,Mo为物体的质量矩阵,Go为物体的重力。Among them, x(t) is the position coordinate of the object, f and u h are the forces acting on the object by the robot and human, M o is the mass matrix of the object, and G o is the gravity of the object.
2)利用Kane方法建立的上肢外骨骼机器人动力学模型,得到n自由度上肢外骨骼机器人与环境接触时的关节空间动力学方程:2) Using the dynamic model of the upper limb exoskeleton robot established by the Kane method, the joint space dynamic equation of the n-degree-of-freedom upper limb exoskeleton robot in contact with the environment is obtained:
其中,q为机器人的关节坐标,τq为控制输入,JT(q)为雅各比矩阵,Mq(q)为机器人惯性矩阵,是科里奥利和离心扭矩,Gq(q)是重力矩;Among them, q is the joint coordinates of the robot, τ q is the control input, J T (q) is the Jacobian matrix, M q (q) is the robot inertia matrix, are the Coriolis and centrifugal torques, and G q (q) is the gravitational moment;
转换到机器人操作空间得到动力学方程:Transform to the robot operation space to get the dynamic equation:
其中,u表示上肢外骨骼机器人的控制输入, where u represents the control input of the upper limb exoskeleton robot,
Mr、Cr、Gr的含义分别是笛卡尔空间坐标系下上肢外骨骼机器人的惯性矩阵,科氏力与离心力矩阵和重力矩阵;符号表示矩阵的伪逆;The meanings of M r , Cr , and Gr are respectively the inertia matrix, Coriolis force, centrifugal force matrix and gravity matrix of the upper limb exoskeleton robot in the Cartesian space coordinate system ; the symbol represents the pseudo-inverse of a matrix;
3) Combining equations (1) and (3) gives the combined dynamic equation of the object and the robot:

M ẍ + C ẋ + G = u + u_h …………(4)

M ≡ M_o + M_r, G ≡ G_o + G_r, C ≡ C_r …………(5)

where M, C and G are respectively the inertia matrix, the Coriolis and centrifugal force matrix, and the gravity matrix of the coupled upper limb exoskeleton robot-human interaction system in the Cartesian coordinate system;
4) It is considered that the robot has information about its local environment, and that the end position and velocity of the upper limb exoskeleton robot-human interaction system and the human force are measured, all of which are affected by measurement noise. A robot controller with gravity compensation and linear feedback is adopted:

u = G − L_1(x − τ) − L_2 ẋ …………(6)

where τ is the target position of the robot, and L_1 and L_2 are the gains corresponding to the position error and the velocity.
The force exerted by the human hand on the object is modeled as:

u_h = −L_{h,1}(x − τ_h) − L_{h,2} ẋ …………(7)

where L_{h,1} and L_{h,2} are the control gains of the human and τ_h is the target position of the human. Substituting equations (6) and (7) into the combined dynamic equation (4) yields the closed-loop dynamic equation of the upper limb exoskeleton robot-human interaction system:

M ẍ + (C + L_2 + L_{h,2}) ẋ + (L_1 + L_{h,1}) x = L_1 τ + L_{h,1} τ_h …………(8)
Further, the intent recognition method based on the surface electromyogram (sEMG) signal is shown in Figure 3; the specific process is as follows:

1) EMG signals are collected from the human wrist, forearm and elbow with an electromyograph;

2) The collected EMG signals are filtered, and then data segmentation and feature extraction are performed. When long waveform sequences are used for feature extraction, no window overlap is applied, while for short waveforms an overlapping-window operation can be considered, so that the extracted features correspond to different intent categories;
3) Training and prediction are performed with MCLPBoost combined with an Online Random Forest classifier; the MCLPBoost algorithm is shown in Figure 4. The method has good generalization performance, and because it is comparison-based it incurs a small time overhead at prediction time compared with methods that must evaluate a computational model;

4) During prediction, the category predicted by each base classifier and its corresponding confidence (probability) are compared with a preset threshold to decide whether that base classifier casts a vote. The Boost algorithm then collects the voting results of all base classifiers, performs a weighted summation, and finds the predicted category with the most votes; the activity intent is output when its number of votes is greater than the mean;
5) The intent recognition result based on the surface EMG signal can be used to generate the "virtual" target τ̂_h.
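The windowing, feature extraction and confidence-thresholded voting of steps 2) and 4) can be sketched as follows. This is a minimal illustration with synthetic data and two common sEMG features (mean absolute value and root mean square), not the MCLPBoost or Online Random Forest implementation itself; the (label, confidence, weight) triples stand in for the outputs of trained base classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
emg = rng.normal(0.0, 1.0, size=2000)   # stand-in for one filtered sEMG channel

def segment(signal, win=200, overlap=0):
    """Split a signal into windows: overlap=0 for long sequences, and
    overlap > 0 (in samples) may be used for short waveforms, as in step 2)."""
    step = win - overlap
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

def features(w):
    """Mean absolute value (MAV) and root mean square (RMS) of one window."""
    return np.array([np.mean(np.abs(w)), np.sqrt(np.mean(w**2))])

X = np.array([features(w) for w in segment(emg)])
print(X.shape)   # ten windows, two features each

# Confidence-thresholded weighted voting over base classifiers, as in step 4):
# a base classifier votes only if its confidence reaches the preset threshold,
# and the winning category is output only if its votes exceed the mean.
def vote(predictions, threshold=0.6):
    votes = {}
    for label, conf, weight in predictions:
        if conf >= threshold:
            votes[label] = votes.get(label, 0.0) + weight
    if not votes:
        return None
    best = max(votes, key=votes.get)
    mean_votes = sum(votes.values()) / len(votes)
    return best if votes[best] > mean_votes else None

intent = vote([("lift", 0.9, 1.0), ("lift", 0.7, 0.8),
               ("push", 0.5, 1.0), ("push", 0.8, 0.6)])
print(intent)
```

Here the "push" prediction with confidence 0.5 is filtered out by the threshold, and "lift" wins the weighted vote.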
Further, the influence of the human on the dynamics of the upper limb exoskeleton robot-human interaction system depends entirely on u_h, regardless of the internal model on which it is based. An alternative method that does not require estimating the human control gains is therefore developed; the specific process of the intent assimilation control method based on the "virtual" target generated with arbitrary values of these assumed gains is as follows:

1) The "virtual" target τ̂_h can effectively evaluate the influence of the human on the dynamics of the upper limb exoskeleton robot-human interaction system if it satisfies:

u_h = −L̂_{h,1}(x − τ̂_h) − L̂_{h,2} ẋ …………(9)

where the virtual human control gains L̂_{h,1} and L̂_{h,2} can take average values measured from many people, or the same values as the robot controller gains, i.e. L̂_{h,1} = L_1 and L̂_{h,2} = L_2.
2) To estimate τ̂_h, the intent recognition method based on the surface EMG signal described in claim 3 can be used, or τ̂_h can be parameterized with an internal model:

τ̂_h(t) = Σ_{k=0}^{m} θ_k t^k …………(10)

where θ is the parameter vector used to compute the human virtual target position τ̂_h, t denotes time, and m is a preset parameter; τ̂_h is thus a quantity determined by the internal model parameters and varying with time. Substituting the state vector φ into equation (4) gives the extended model:

where φ denotes the state vector of the upper limb exoskeleton robot-human interaction system, and v ∈ N(0, E[v v^T]) is the system noise, i.e. Gaussian noise with zero mean and variance E[v v^T].
3) Considering that the robot can measure its endpoint position and velocity as well as the interaction force with the human using suitable sensors, the measurement vector of the robot is obtained:

z = [x^T, ẋ^T, u_h^T]^T + μ …………(12)

where μ ∈ N(0, E[μ μ^T]) is the environmental measurement noise, i.e. Gaussian noise with zero mean and variance E[μ μ^T].
4) However, τ̂_h and θ in equation (10) are unknown, so the following system observer is used to compute the extended state estimate of the robot:

φ̂̇ = A φ̂ + B u + K(z − H φ̂), τ̂_h(t) = Σ_{k=0}^{m} θ̂_k t^k …………(13)

where ∧ denotes an estimated value, z is the measurement vector of the upper limb exoskeleton robot-human interaction system, and B is the input matrix; the linear quadratic estimation gain is K = P H^T R^{−1}, where P is a positive definite matrix obtained by solving the Riccati differential equation:

Ṗ = A P + P A^T + Q − P H^T R^{−1} H P …………(14)

where the noise covariance matrices are Q ≡ E[v v^T] and R ≡ E[μ μ^T]. Using A to denote the system matrix, equation (11) can be expressed in the form:

φ̇ = A φ + B u + v, z = H φ + μ …………(15)

All parameters except θ can be observed, and the value of τ̂_h is thereby obtained.
5) The interaction between the human and the robot can be determined through the relationship between τ and τ_h: for example, τ = τ_h corresponds to the robot assisting by adopting the human virtual target; τ = τ_r corresponds to the robot following its original target τ_r; and τ = 2τ_r − τ_h corresponds to "antagonism", i.e. the robot imposes its own target by cancelling the human target from the upper limb exoskeleton robot-human interaction system.
To assimilate the interaction behavior, the target position of the robot is designed from the estimated human target using the following equation:

τ = λ τ_r + (1 − λ) τ̂_h …………(16)

where τ_r denotes the original target position of the upper limb exoskeleton robot, and λ is a hyperparameter that adjusts between the original target position of the robot and the human target position and can be tuned dynamically according to the end position x;

The change of human-robot interaction strategy corresponding to the adjustment of the parameter λ is shown in Figure 5: when λ < 1, the intent assimilation controller coordinates the human and robot targets; when λ = 1, the intent assimilation controller ignores τ̂_h, thereby realizing human-robot collaboration; when λ = 2, the intent assimilation controller cancels the estimated human influence on the dynamics of the upper limb exoskeleton robot-human interaction system, and the position of the interaction system finally converges to the target τ_r of the intent assimilation controller.
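Assuming the blending law τ = λτ_r + (1 − λ)τ̂_h, which reproduces the three strategies described above (λ < 1 coordination, λ = 1 robot target only, λ = 2 antagonism with τ = 2τ_r − τ̂_h), the target selection can be sketched as:

```python
def assimilation_target(tau_r, tau_h_hat, lam):
    """Blend the robot's original target tau_r with the estimated human
    target tau_h_hat: tau = lam*tau_r + (1 - lam)*tau_h_hat (assumed law).
    lam < 1: coordinate the two targets; lam = 1: keep the robot target
    and ignore tau_h_hat; lam = 2: 'antagonism', tau = 2*tau_r - tau_h_hat."""
    return lam * tau_r + (1.0 - lam) * tau_h_hat

tau_r, tau_h_hat = 0.3, 0.5      # assumed robot / estimated human targets [m]
t_coop = assimilation_target(tau_r, tau_h_hat, 0.5)   # coordination
t_own  = assimilation_target(tau_r, tau_h_hat, 1.0)   # robot target only
t_anta = assimilation_target(tau_r, tau_h_hat, 2.0)   # antagonism
print(round(t_coop, 3), round(t_own, 3), round(t_anta, 3))   # 0.4 0.3 0.1
```

In practice λ would be scheduled online (e.g. as a function of the end position x, as stated above), so the controller can move continuously between assistance and antagonism.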
Further, the stability of the human-robot interaction system after the introduction of τ̂_h is verified. The human target τ̂_h can be estimated through the second equation of equation (13).

Substituting the corrected control law (6), with τ computed from τ̂_h, into the combined dynamic equation (4) yields:

where the tilde denotes the error between an estimated value and the actual value; defining the estimation error accordingly and substituting the force (7) exerted by the human hand on the object into the above equation yields:
The influence of τ_r and τ_h on the dynamic system can therefore be analyzed by considering the steady-state position:

Equation (19) is simplified for the stability analysis by defining:

from which it is deduced that:

This shows that the position error x − x_ss vanishes provided that the estimation error of the human force vanishes.
According to the state-space form (15) of the dynamics of the upper limb exoskeleton robot-human interaction system and the system observer (13), we have:

By defining ξ ≡ [x − x_ss, ẋ, φ̃^T]^T, where φ̃ denotes the observer estimation error, and combining equations (22) and (23), we obtain:

where ξ is the system state vector defined for the transient performance analysis; this equation describes the combined system comprising the system dynamics and the observer.
The stability of equation (24) can be studied by computing the eigenvalues y from the solutions of the following characteristic equation:

[yI − (A − KH)][M y² + (C + L_2) y + L_1] = 0 …………(25)
If the following two systems are stable, then the combined system (24) will also be stable:

Lyapunov theory is used to test the stability of each of the two systems. First, the stability of the first system is proved by considering a Lyapunov candidate function:

Differentiating with respect to time gives:

Then the stability of the second system is proved by considering the Lyapunov candidate function:

where P_v is obtained from the Riccati equation in equation (14):

Differentiating with respect to time gives:

Combining with equation (13) gives:

Substituting into equation (31) gives:

This proves that the two systems in equation (26) are stable, and therefore the upper limb exoskeleton robot-human interaction system is stable.
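The factorized characteristic equation (25) can also be checked numerically. The sketch below verifies, for assumed scalar mechanical parameters and a simple 2×2 observer, that both factors place their eigenvalues in the open left half-plane; the matrices A, K and H here are illustrative stand-ins, not the identified system matrices:

```python
import numpy as np

# Scalar stand-ins for the matrices in characteristic equation (25):
#   [yI - (A - KH)] [M y^2 + (C + L2) y + L1] = 0
# The values are assumptions for the sketch, not identified parameters.
M, C, L1, L2 = 2.0, 0.5, 40.0, 12.0

# Roots of the mechanical factor M y^2 + (C + L2) y + L1 = 0.
mech_poles = np.roots([M, C + L2, L1])

# A simple 2x2 observer error system for the factor yI - (A - KH).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
K = np.array([[3.0],
              [2.0]])
H = np.array([[1.0, 0.0]])
obs_poles = np.linalg.eigvals(A - K @ H)

stable = (mech_poles.real < 0).all() and (obs_poles.real < 0).all()
print(bool(stable))   # both factors have poles in the open left half-plane
```

A check of this form is a useful sanity test when tuning the feedback gains L_1, L_2 and the observer gain K: if either factor acquires a right half-plane eigenvalue, the combined system (24) loses stability.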
Those skilled in the art know that, in addition to implementing the system, device and modules provided by the present invention as pure computer-readable program code, the method steps can be logically programmed so that the system, device and modules provided by the present invention realize the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, device and modules provided by the present invention can be regarded as a hardware component, and the modules included therein for realizing various programs can also be regarded as structures within the hardware component; the modules for realizing various functions can likewise be regarded either as software programs implementing the method or as structures within the hardware component.
Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above specific embodiments, and those skilled in the art can make various changes or modifications within the scope of the claims without affecting the essential content of the present invention. The embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily, provided there is no conflict.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110775590.5A CN113478462B (en) | 2021-07-08 | 2021-07-08 | Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113478462A true CN113478462A (en) | 2021-10-08 |
CN113478462B CN113478462B (en) | 2022-12-30 |
Family
ID=77938116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110775590.5A Active CN113478462B (en) | 2021-07-08 | 2021-07-08 | Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113478462B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113995629A (en) * | 2021-11-03 | 2022-02-01 | 中国科学技术大学先进技术研究院 | Admittance control method and system of upper limb and dual-arm rehabilitation robot based on mirror force field |
CN114377358A (en) * | 2022-02-22 | 2022-04-22 | 南京医科大学 | A Home Rehabilitation System for Upper Limbs Based on Sphero Spherical Robot |
CN114474051A (en) * | 2021-12-30 | 2022-05-13 | 西北工业大学 | A Personalized Gain Teleoperation Control Method Based on Operator Physiological Signals |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2497610A1 (en) * | 2011-03-09 | 2012-09-12 | Syco Di Hedvig Haberl & C. S.A.S. | System for controlling a robotic device during walking, in particular for rehabilitation purposes, and corresponding robotic device |
WO2018000854A1 (en) * | 2016-06-29 | 2018-01-04 | 深圳光启合众科技有限公司 | Human upper limb motion intention recognition and assistance method and device |
CN111631923A (en) * | 2020-06-02 | 2020-09-08 | 中国科学技术大学先进技术研究院 | Neural Network Control System of Exoskeleton Robot Based on Intention Recognition |
CN112107397A (en) * | 2020-10-19 | 2020-12-22 | 中国科学技术大学 | Myoelectric signal driven lower limb artificial limb continuous control system |
Non-Patent Citations (1)
Title |
---|
LI XIANG ET AL.: "Control method and implementation of a robotic arm based on EEG and EMG signals", Computer Measurement & Control *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||