CN110377049B - Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method - Google Patents
Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method
- Publication number: CN110377049B (application CN201910581534.0A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/104—Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
Abstract
Description
Technical Field
The invention relates to the fields of data analysis, brain-computer interfaces, human-computer interaction, and software development, and in particular to a brain-computer interface-based formation reconfiguration control method for unmanned aerial vehicle (UAV) swarms.
Background
A brain-computer interface (BCI) system provides an emerging mode of human-computer interaction: by acquiring the operator's electroencephalogram (EEG) signals and decoding the useful information they carry, external devices can be controlled. Among the many BCI paradigms, P300, steady-state visual evoked potentials (SSVEP), and motor imagery are currently the most active research areas. Of these, motor imagery is the only paradigm that is spontaneous and requires no external stimulus.
Motor imagery refers to the brain forming the intention of a limb movement without actually executing it; it reflects a person's expectation of an action and a mental rehearsal of the real movement to come. While a specific movement is being imagined, the brain produces a continuous EEG signal. The features extracted from this signal correlate with the subject's original mental activity, so the signal can be translated into control commands for external devices.
Deep learning, a branch of machine learning, has excelled in computer vision, speech recognition, and natural language processing. In the era of big data, large motor imagery datasets are available from many sources, so deep learning methods are well placed to learn and classify the motor imagery features in large volumes of EEG data. Deep convolutional neural networks (CNNs), a widely used technique, can fully exploit the spatial features of EEG data; the deep long short-term memory (LSTM) network is a recurrent neural network well suited to processing and classifying time series and can effectively extract the temporal features of EEG data. A hybrid deep neural network built from a deep CNN and a deep LSTM network can therefore be trained on EEG signals with supervised learning, fully mining the motor imagery features in both space and time with good real-time performance and accuracy.
In current aerospace research, multi-aircraft formations and human-machine collaboration are the prevailing trends, placing new demands on UAV control methods. Traditional single-aircraft flight controllers can no longer meet the control needs of today's UAV swarms, so new control methods are urgently needed. By bringing BCI technology into the aerospace field, a UAV pilot can not only control the position of a UAV swarm with conventional flight control equipment but also reconfigure the swarm's formation by thought alone, greatly enhancing the operator's command over the swarm.
Summary of the Invention
To overcome the deficiencies of the prior art, the invention proposes a brain-controlled UAV swarm formation reconfiguration control method that lets an operator steer a UAV swarm into a desired formation through motor-imagery EEG signals, improving formation control performance and giving the operator a more convenient and efficient control experience. To this end, the technical solution adopted by the invention is a brain-computer interface-based UAV swarm formation reconfiguration control method comprising an offline training stage and an online control stage:
Offline training stage: S1, initialize the motor imagery training system; S2, start the interactive interface, which randomly displays arrows pointing up, down, left, or right; S3, following the arrow direction, the operator imagines moving the tongue, feet, left hand, or right hand respectively, while an electrode cap records the operator's EEG signals; S4, process the EEG signals: preprocessing, feature extraction, and classification with a hybrid deep neural network combining a deep CNN and a deep LSTM network; S5, compare the network's classification output with the label values and train the hybrid deep neural network by backpropagation to determine the network weights.
Online control stage: S6, launch the virtual UAV swarm formation software and enter the swarm formation control interface; S7, according to the desired swarm formation, the operator imagines moving the tongue, feet, left hand, or right hand while the electrode cap records the EEG signals; S8, process the collected EEG signals: preprocessing, feature extraction, and classification with the hybrid deep neural network; S9, generate control commands from the classification output to drive the reconfiguration of the virtual UAV swarm formation.
Specifically, 1) EEG preprocessing comprises:
S10, downsample the EEG signal to obtain a 250 Hz signal; S11, apply a 50 Hz notch filter to remove power-line interference; S12, segment the EEG time series with a time window; S13, filter the EEG signal with a filter bank.
2) EEG feature extraction comprises:
Features are extracted from the signal obtained in S13 with the one-versus-rest common spatial pattern (OVR-CSP) method, whose steps are:
S14, for each class of motor imagery signal, compute its common spatial pattern filter weights W_j relative to the other classes from the eigendecomposition
C_j W_j = W_j E_j
where C_j is the covariance matrix of that class of motor imagery signal, E_j is the diagonal matrix of the eigenvalues of C_j, W_j is the class's common spatial pattern filter weight matrix relative to the other signals, and j = 1, 2, 3, 4 indexes the four classes of motor imagery signals;
S15, take the first two and the last two columns of each W_j to form a new matrix W̃_j, and concatenate them in order to obtain W̃ = [W̃_1, W̃_2, W̃_3, W̃_4];
S16, apply one-versus-rest common spatial pattern filtering to the signal obtained in S13:
Z = W̃^T X
where X is the EEG signal from S13 and Z is the signal after OVR-CSP filtering;
S17, extract features from the filtered signal Z:
f = log( diag(Z Z^T) / tr(Z Z^T) )
where diag(·) takes the diagonal elements of a matrix and tr(·) is the matrix trace;
3) Spatial feature learning with a deep convolutional neural network on the features obtained in S17:
S18, the deep convolutional network contains several hidden layers, each consisting of a convolutional layer and a pooling layer, where a convolutional layer is expressed as
hc_l = R(conv(W_l, x_l) + b_l)
where x_l and hc_l are the input and output of the l-th convolutional layer, W_l and b_l are its weight and bias, conv(·) is the convolution operation, and R is the layer's activation function;
S19, each convolutional layer is followed by a pooling layer;
S20, the output of the deep convolutional neural network is reshaped into a 1-dimensional vector;
4) Temporal feature learning with a deep LSTM network on the S20 features across multiple time windows:
S21, the deep LSTM network is composed of multiple LSTM cells connected in series;
S22, an LSTM cell consists of a forget gate, an input gate, and an output gate;
S23, the forget gate determines how much information is discarded from the LSTM cell state; it outputs a value between 0 and 1, where 1 means keep everything and 0 means discard everything:
f_{l,t} = σ(W_l^f · [hl_{l,t-1}, x_{l,t}] + b_l^f)
where hl_{l,t-1} is the LSTM cell output of the previous time window, x_{l,t} is the input of the current cell, l indexes the hidden layer, t indexes the time window, W_l^f and b_l^f are the weight and bias, and σ is the sigmoid function;
S24, the input gate determines how much new information is written to the LSTM cell state. First, decide which information needs updating; second, compute the candidate update; finally, update the cell state with the candidate:
i_{l,t} = σ(W_l^i · [hl_{l,t-1}, x_{l,t}] + b_l^i)
C̃_{l,t} = tanh(W_l^C · [hl_{l,t-1}, x_{l,t}] + b_l^C)
C_{l,t} = f_{l,t} · C_{l,t-1} + i_{l,t} · C̃_{l,t}
where W_l^i, W_l^C, b_l^i, and b_l^C are the weights and biases, i_{l,t} is the amount of information to update, C̃_{l,t} is the candidate update, and C_{l,t} is the current LSTM cell state;
S25, the output gate processes the LSTM cell state to determine the cell's output:
o_{l,t} = σ(W_l^o · [hl_{l,t-1}, x_{l,t}] + b_l^o)
hl_{l,t} = o_{l,t} × tanh(C_{l,t})
where hl_{l,t} is the output of the LSTM cell, and W_l^o and b_l^o are the weight and bias.
In the offline stage, the weights and biases of the hybrid deep neural network are trained as follows:
S26, apply the Softmax function to the output of S25 in step 4 to compute the probability distribution over the EEG classes:
P(y_m) = e^{y_m} / Σ_{k=1}^{T} e^{y_k}
where m is the EEG class index of the output y and T is the total number of EEG signal classes;
S27, use the cross-entropy function to measure the distance between the probability distribution predicted by the hybrid deep neural network and the true EEG label:
H(y_l, y_p) = −Σ_m y_{l,m} log(y_{p,m})
where y_p is the classification predicted by the hybrid deep neural network and y_l is the true EEG label value;
S28, update the deep neural network's weights and biases by backpropagation to reduce the cross-entropy value.
In the online stage, formation reconfiguration uses a fully distributed formation reconfiguration controller to control the UAV formation:
S29, define the formation position error e_Pi in terms of P_0, the position of the virtual leader unmanned helicopter, and c_i, c_j, the desired formation positions of UAVs i and j relative to the leader;
S30, design an outer-loop formation controller U_1i(t) that drives the formation errors e_Pi and e_Vi into a small neighbourhood of zero in finite time while avoiding collisions between UAVs, where e_Vi is the velocity tracking error and σ_Pi the formation reconfiguration error; the adaptive gain is updated online, with parameters a > 0, b > 0, c > 0, β_i > 0, λ_1 > 0, λ_2 > 0, λ_3 > 0, and F_1i the neural network learning function;
S31, design the collision avoidance potential energy function between UAVs i and j in terms of the relative distance d_ij = ||P_i − P_j||, where r_a is the UAV's safe collision avoidance radius and 0 < ε_a < 1 is a very small positive constant, so that ln(1/ε_a) ≥ 1; the parameters satisfy η_j > 0 and l_1 > 0, and ρ_a is updated online.
Features and Beneficial Effects
Brain-controlled UAV swarm formation reconfiguration combines the advantages of brain-computer interface technology and UAV swarm formation control: it simplifies formation control commands, adds a new mode of human-machine interaction, and strengthens the operator's control over formation reconfiguration. The invention not only raises the level of theoretical research on brain-machine interactive control, but also lays a solid theoretical and technical foundation for future research and development of brain-UAV interactive control systems.
Brief Description of the Drawings
Fig. 1 Flow chart of the brain-controlled UAV swarm formation reconfiguration control method.
Fig. 2 Placement of the 64 electrodes of the electrode cap.
Fig. 3 Visual cue presentation.
Fig. 4 Architecture of the hybrid deep neural network.
Fig. 5 Three-layer deep convolutional neural network.
Fig. 6 Deep LSTM network: (a) three-layer LSTM network; (b) LSTM cell.
Fig. 7 UAV swarm formation reconfiguration control interface.
Fig. 8 Brain-controlled UAV formation reconfiguration results.
Fig. 9 Brain-controlled UAV formation reconfiguration combined with VR.
Detailed Description of the Embodiments
The technical solution of the invention is a brain-controlled UAV swarm formation reconfiguration control method comprising an offline training stage and an online control stage:
Offline training stage: S1, initialize the motor imagery training system; S2, start the interactive interface, which randomly displays arrows pointing up, down, left, or right; S3, following the arrow direction, the operator imagines moving the tongue, feet, left hand, or right hand respectively, while an electrode cap records the operator's EEG signals; S4, process the EEG signals: preprocessing, feature extraction, and classification with a hybrid deep neural network combining a deep CNN and a deep LSTM network; S5, compare the network's classification output with the label values and train the hybrid deep neural network by backpropagation to determine the network weights.
Online control stage: S6, launch the virtual UAV swarm formation software and enter the swarm formation control interface; S7, according to the desired swarm formation, the operator imagines moving the tongue, feet, left hand, or right hand while the electrode cap records the EEG signals; S8, process the collected EEG signals: preprocessing, feature extraction, and classification with the hybrid deep neural network; S9, generate control commands from the classification output to drive the reconfiguration of the virtual UAV swarm formation.
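The online step S9 reduces to mapping the four-class classifier output onto formation commands. A minimal sketch of that dispatch is shown below; the concrete formation names and the class-to-arrow assignment are hypothetical, since the patent does not fix them in the text:

```python
# Hypothetical mapping from the four motor-imagery classes to formation
# commands; the formation names are illustrative assumptions.
COMMANDS = {
    0: "line",    # tongue    (up arrow)
    1: "column",  # feet      (down arrow)
    2: "wedge",   # left hand (left arrow)
    3: "ring",    # right hand (right arrow)
}

def control_step(classify, eeg_window):
    """S8-S9: classify one preprocessed EEG window and emit the
    corresponding formation reconfiguration command."""
    label = classify(eeg_window)
    return COMMANDS[label]
```

In an actual deployment `classify` would be the trained hybrid CNN-LSTM network's prediction function.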
The EEG preprocessing, feature extraction, and hybrid deep neural network (deep CNN plus deep LSTM) used by the invention mainly comprise the following steps:
1) EEG preprocessing, comprising:
S10, downsample the EEG signal to obtain a 250 Hz signal; S11, apply a 50 Hz notch filter to remove power-line interference; S12, segment the EEG time series with a time window (a 0.2 s window is recommended); S13, filter the EEG signal with a filter bank (recommended bands: 4-8 Hz, 6-10 Hz, ..., 36-40 Hz; a Chebyshev filter is recommended).
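Steps S10-S13 can be sketched with scipy. The patent fixes the 250 Hz rate, the 50 Hz notch, the 0.2 s window, and the 4 Hz band grid; the raw sampling rate, filter order, attenuation, and notch Q below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt, decimate, iirnotch, filtfilt

def preprocess(eeg, fs_raw=1000, fs_target=250, win_s=0.2):
    """Sketch of S10-S13: downsample to 250 Hz, 50 Hz notch,
    0.2 s windows, then a bank of 4 Hz-wide Chebyshev band-pass
    filters (4-8, 6-10, ..., 36-40 Hz)."""
    # S10: downsample (decimate applies an anti-aliasing filter first)
    x = decimate(eeg, fs_raw // fs_target, axis=-1)
    # S11: 50 Hz power-line notch
    b, a = iirnotch(50.0, Q=30.0, fs=fs_target)
    x = filtfilt(b, a, x, axis=-1)
    # S12: segment into windows of win_s seconds
    step = int(win_s * fs_target)
    n_win = x.shape[-1] // step
    win = x[..., :n_win * step].reshape(*x.shape[:-1], n_win, step)
    # S13: filter bank of overlapping 4 Hz-wide band-pass filters
    bands = [(lo, lo + 4) for lo in range(4, 38, 2)]
    out = [sosfiltfilt(cheby2(4, 30, band, btype="bandpass",
                              fs=fs_target, output="sos"), win, axis=-1)
           for band in bands]
    return np.stack(out)  # (n_bands, n_channels, n_windows, samples)
```

For a 64-channel recording of 1000 raw samples this yields 17 bands of 5 windows with 50 samples each.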
2) EEG feature extraction, comprising:
Features are extracted from the signal obtained in S13 with the one-versus-rest common spatial pattern (OVR-CSP) method, whose steps are:
S14, for each class of motor imagery signal, compute its common spatial pattern filter weights W_j relative to the other classes from the eigendecomposition
C_j W_j = W_j E_j
where C_j is the covariance matrix of that class of motor imagery signal, E_j is the diagonal matrix of the eigenvalues of C_j, W_j is the class's common spatial pattern filter weight matrix relative to the other signals, and j = 1, 2, 3, 4 indexes the four classes of motor imagery signals;
S15, take the first two and the last two columns of each W_j to form a new matrix W̃_j, and concatenate them in order to obtain W̃ = [W̃_1, W̃_2, W̃_3, W̃_4];
S16, apply one-versus-rest common spatial pattern filtering to the signal obtained in S13:
Z = W̃^T X
where X is the EEG signal from S13 and Z is the signal after OVR-CSP filtering;
S17, extract features from the filtered signal Z:
f = log( diag(Z Z^T) / tr(Z Z^T) )
where diag(·) takes the diagonal elements of a matrix and tr(·) is the matrix trace.
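One way to realize S14-S17 numerically is sketched below. Solving the class covariance against the average covariance of all classes as a generalized eigenproblem is a common OVR-CSP formulation assumed here; regularization and the patent's exact decomposition may differ:

```python
import numpy as np
from scipy.linalg import eigh

def ovr_csp_features(trials, labels, n_pairs=2):
    """Sketch of S14-S17: per-class OVR-CSP filters (first and last
    n_pairs eigenvector columns, S15), spatial filtering Z = W^T X
    (S16), and log normalized-variance features (S17)."""
    norm_cov = lambda X: (X @ X.T) / np.trace(X @ X.T)
    classes = sorted(set(labels))
    class_covs = [np.mean([norm_cov(t) for t, y in zip(trials, labels)
                           if y == j], axis=0) for j in classes]
    C_total = np.mean(class_covs, axis=0)
    blocks = []
    for C_j in class_covs:
        w, W_j = eigh(C_j, C_total)        # generalized eigenproblem
        W_j = W_j[:, np.argsort(w)[::-1]]  # sort eigenvalues descending
        blocks.append(np.hstack([W_j[:, :n_pairs], W_j[:, -n_pairs:]]))
    W_tilde = np.hstack(blocks)            # concatenate class blocks in order
    feats = []
    for X in trials:
        Z = W_tilde.T @ X
        v = np.diag(Z @ Z.T)
        feats.append(np.log(v / np.trace(Z @ Z.T)))
    return np.asarray(feats)
```

With four classes and two filter pairs each, every trial is mapped to a 16-dimensional feature vector.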
3) Spatial feature learning with a deep convolutional neural network on the features obtained in S17:
S18, the deep convolutional network contains several hidden layers, each consisting of a convolutional layer and a pooling layer, where a convolutional layer is expressed as
hc_l = R(conv(W_l, x_l) + b_l)
where x_l and hc_l are the input and output of the l-th convolutional layer, W_l and b_l are its weight and bias, conv(·) is the convolution operation, and R is the layer's activation function (the ReLU function is recommended), defined as
ReLU(a) = max(0, a)
S19, each convolutional layer is followed by a pooling layer, which compresses the input features, on the one hand reducing the computational complexity of the network and on the other distilling the main features (a max pooling function is recommended).
S20, the output of the deep convolutional neural network is reshaped into a 1-dimensional vector.
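A minimal numpy sketch of one hidden layer from S18-S19 and the flattening of S20; the single-channel input and the kernel/pool sizes are illustrative assumptions:

```python
import numpy as np

def relu(a):
    # R in S18: ReLU(a) = max(0, a)
    return np.maximum(0, a)

def conv2d(x, w, b):
    """Naive 'valid' 2-D convolution of one channel with one kernel."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def maxpool(x, p=2):
    # S19: max pooling over non-overlapping p x p blocks
    H, W = x.shape[0] // p * p, x.shape[1] // p * p
    return x[:H, :W].reshape(H // p, p, W // p, p).max(axis=(1, 3))

def cnn_layer(x, w, b):
    # hc_l = R(conv(W_l, x_l) + b_l), followed by pooling (S18-S19)
    return maxpool(relu(conv2d(x, w, b)))

def flatten(x):
    # S20: reshape the last layer's output into a 1-D vector
    return x.reshape(-1)
```

A 6x6 input with a 3x3 kernel gives a 4x4 convolution output, pooled to 2x2 and flattened to a length-4 vector.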
4) Temporal feature learning with a deep LSTM network on the S20 features across multiple time windows:
S21, the deep LSTM network is composed of multiple LSTM cells connected in series;
S22, an LSTM cell consists of a forget gate, an input gate, and an output gate;
S23, the forget gate determines how much information is discarded from the LSTM cell state; it outputs a value between 0 and 1, where 1 means keep everything and 0 means discard everything:
f_{l,t} = σ(W_l^f · [hl_{l,t-1}, x_{l,t}] + b_l^f)
where hl_{l,t-1} is the LSTM cell output of the previous time window, x_{l,t} is the input of the current cell, l indexes the hidden layer, t indexes the time window, W_l^f and b_l^f are the weight and bias, and σ is the sigmoid function.
S24, the input gate determines how much new information is written to the LSTM cell state. First, decide which information needs updating; second, compute the candidate update; finally, update the cell state with the candidate:
i_{l,t} = σ(W_l^i · [hl_{l,t-1}, x_{l,t}] + b_l^i)
C̃_{l,t} = tanh(W_l^C · [hl_{l,t-1}, x_{l,t}] + b_l^C)
C_{l,t} = f_{l,t} · C_{l,t-1} + i_{l,t} · C̃_{l,t}
where W_l^i, W_l^C, b_l^i, and b_l^C are the weights and biases, i_{l,t} is the amount of information to update, C̃_{l,t} is the candidate update, and C_{l,t} is the current LSTM cell state.
S25, the output gate processes the LSTM cell state to determine the cell's output:
o_{l,t} = σ(W_l^o · [hl_{l,t-1}, x_{l,t}] + b_l^o)
hl_{l,t} = o_{l,t} × tanh(C_{l,t})
where hl_{l,t} is the output of the LSTM cell, and W_l^o and b_l^o are the weight and bias.
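The gate equations of S23-S25 can be collected into a single numpy cell step. The weight layout (one matrix per gate acting on the concatenated vector [hl_{l,t-1}, x_{l,t}]) is an assumption for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, C_prev, W, b):
    """One LSTM cell step (S22-S25). W and b hold one weight matrix and
    bias per gate, keyed 'f', 'i', 'C', 'o'."""
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate: 0 discard, 1 keep (S23)
    i = sigmoid(W["i"] @ z + b["i"])        # input gate (S24)
    C_tilde = np.tanh(W["C"] @ z + b["C"])  # candidate update (S24)
    C_t = f * C_prev + i * C_tilde          # updated cell state (S24)
    o = sigmoid(W["o"] @ z + b["o"])        # output gate (S25)
    h_t = o * np.tanh(C_t)                  # hl_{l,t} = o * tanh(C_t) (S25)
    return h_t, C_t
```

Chaining this cell over successive time windows, and stacking several such layers, gives the deep LSTM of S21.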
5) In the offline stage, the weights and biases of the hybrid deep neural network are trained as follows:
S26, apply the Softmax function to the output of S25 in step 4 to compute the probability distribution over the EEG classes:
P(y_m) = e^{y_m} / Σ_{k=1}^{T} e^{y_k}
where m is the EEG class index of the output y and T is the total number of EEG signal classes.
S27, use the cross-entropy function to measure the distance between the probability distribution predicted by the hybrid deep neural network and the true EEG label:
H(y_l, y_p) = −Σ_m y_{l,m} log(y_{p,m})
where y_p is the classification predicted by the hybrid deep neural network and y_l is the true EEG label value.
S28, update the deep neural network's weights and biases by backpropagation to reduce the cross-entropy value.
6) In the online process, a fully distributed formation reconfiguration controller controls the UAV formation.
S29. Define the formation position error e_Pi, where P_0 is the position of the virtual-leader unmanned helicopter and c_i, c_j are the desired formation positions of UAVs i and j relative to the leader.
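A sketch of a consensus-style formation position error built from the quantities named in S29 (leader position P_0, desired offsets c_i, c_j). The adjacency weights A, the leader-access gains b, and this exact algebraic form are assumptions — the patent's precise expression is given in its figures:

```python
import numpy as np

def formation_error(i, P, P0, c, A, b):
    """Distributed formation position error for UAV i.

    P:  positions of all UAVs, shape (N, 3)
    P0: virtual-leader position
    c:  desired offsets from the leader, shape (N, 3)
    A:  adjacency matrix of the communication graph (assumption)
    b:  b[i] > 0 iff UAV i observes the leader (assumption)
    """
    e = b[i] * (P[i] - c[i] - P0)
    for j in range(len(P)):
        e = e + A[i, j] * ((P[i] - c[i]) - (P[j] - c[j]))
    return e

P0 = np.zeros(3)
c = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
P = P0 + c                                  # both UAVs exactly on station
A = np.array([[0.0, 1.0], [1.0, 0.0]])
b = np.array([1.0, 0.0])
e0 = formation_error(0, P, P0, c, A, b)     # zero when the formation is achieved
```

Any such error vanishes exactly when every UAV sits at its desired offset from the leader, which is the property the S30 controller drives toward.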
S30. Design the outer-loop formation controller U_1i(t) so that the formation errors e_Pi and e_Vi converge within finite time to a small neighborhood of zero while collisions between UAVs are avoided. The controller is built from the velocity tracking error e_Vi and the formation reconfiguration error σ_Pi, together with an adaptive gain whose update rule uses parameters in the ranges a > 0, b > 0, c > 0, β_i > 0, λ_1 > 0, λ_2 > 0, λ_3 > 0; F_1i is the neural-network learning function.
S31. The collision-avoidance potential function between UAVs i and j is defined in terms of the relative distance d_ij = ||P_i − P_j||, where r_a is the safe collision-avoidance radius of the UAV and 0 < ε_a < 1 is a very small positive constant, so that ln(1/ε_a) ≥ 1. The parameters satisfy η_j > 0 and l_1 > 0, and ρ_a is governed by its own update rule.
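An illustrative repulsive potential using only the ingredients stated in S31 (d_ij, r_a, 0 < ε_a < 1). This log-barrier shape is an assumption — the patent's exact potential appears in its figures — but it shows the intended behavior: zero outside the safety radius, growing as the pair closes in:

```python
import numpy as np

def collision_potential(Pi, Pj, ra, eps_a=0.1):
    """Pairwise collision-avoidance potential between UAVs i and j.

    Zero when d_ij >= ra; grows as d_ij shrinks, saturating the
    barrier at ra * eps_a so the value stays finite (assumed form).
    """
    d = np.linalg.norm(Pi - Pj)          # relative distance d_ij
    if d >= ra:
        return 0.0
    return float(np.log(ra / max(d, ra * eps_a)))

far = collision_potential(np.zeros(3), np.array([5.0, 0.0, 0.0]), ra=2.0)
near = collision_potential(np.zeros(3), np.array([0.5, 0.0, 0.0]), ra=2.0)
```

The gradient of such a potential with respect to P_i supplies the repulsive term that the S30 outer-loop controller adds to the formation-tracking term.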
Social benefits: This invention is of great significance for the research and development of brain-computer interface technology and UAV swarm formation reconfiguration control methods. Representing the international state of the art, it offers a new way to control UAV formations and thereby helps advance diverse interaction modes and technologies for UAVs. The technology not only raises the theoretical level of brain-computer interaction control research, but also lays a solid theoretical and technical foundation for future brain-UAV interactive control systems.
Economic benefits: Brain-controlled UAV swarm formation reconfiguration combines the advantages of brain-computer interface technology and UAV swarm formation control. It simplifies formation control commands, adds a new mode of human-machine interaction, and strengthens the operator's control over formation reconfiguration, giving it high economic value and large potential applications in both commercial performance and military settings. The technology can provide new control ideas for future UAV formation flight control systems and can also be applied to gaming as a new interaction method, carrying significant economic value.
The invention is described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to Fig. 1, the flow chart of the brain-controlled UAV swarm formation reconfiguration control method. The system consists of an offline training system and an online control system. In the offline training system, an electrode cap collects labeled motor-imagery EEG data, and the weights and biases of the hybrid deep neural network are trained by supervised learning. In the online control system, the subject's motor-imagery EEG is collected in real time and segmented with a 0.2 s time window, and each window is classified by the trained hybrid deep neural network; if the outputs of n consecutive windows agree (n = 3 is recommended), the UAV swarm performs the corresponding formation change.
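The online decision rule above — act only when n consecutive window classifications agree — can be sketched directly; the function name is illustrative:

```python
def formation_command(window_labels, n=3):
    """Return a formation command only when the last n consecutive
    0.2 s time-window classifications agree; otherwise return None.
    """
    if len(window_labels) < n:
        return None                      # not enough windows yet
    last = window_labels[-n:]
    return last[0] if all(x == last[0] for x in last) else None

# stream of per-window classifier outputs
cmd = formation_command(["left", "tongue", "tongue", "tongue"])  # three agree
```

Requiring agreement across windows trades a small latency (here 3 × 0.2 s) for robustness against single-window misclassifications.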
Referring to Fig. 2, a schematic of the 64-lead electrode cap placement. The system collects EEG signals from different scalp regions. According to the basic requirements of motor-imagery EEG analysis, at least leads C3, C4 and Cz must be connected; the recommended set of connected leads is: FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT7, FT8, C5, C3, C1, Cz, C2, C4, C6, T7, T8, CP5, CP3, CP1, CP2, CP4, CP6, TP7, TP8.
Referring to Fig. 3, a schematic of the visual stimulation. During offline training, EEG data must be collected for the different motor-imagery body parts. Following the red arrow shown on screen, the subject imagines moving the corresponding part of the body: the left arrow denotes imagined left-hand movement, the right arrow imagined right-hand movement, the up arrow imagined tongue movement, and the down arrow imagined foot movement.
Referring to Fig. 4, a schematic of the hybrid deep neural network architecture. The collected motor-imagery EEG data are first preprocessed: the 50 Hz power-line interference is removed with a notch filter; the EEG is segmented with a time window (0.2 s recommended); each window is sub-band filtered with a filter bank (recommended pass-bands: 4-8 Hz, 6-10 Hz, ..., 36-40 Hz, using third-order Chebyshev filters); and features are extracted from each window and sub-band with the one-versus-rest common spatial pattern method (OVR-CSP). A deep convolutional neural network (CNN) then performs spatial feature learning and classification on the preprocessed EEG, and finally a deep long short-term memory network (LSTM) performs temporal feature learning and classification on the CNN features of each time window, outputting one classification result per window.
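The notch-filter, windowing, and filter-bank stages can be sketched with SciPy. The 250 Hz sampling rate and the choice of a Chebyshev type-I design are assumptions (the patent names only "third-order Chebyshev" filters and does not state the sampling rate):

```python
import numpy as np
from scipy.signal import iirnotch, cheby1, filtfilt

FS = 250  # sampling rate in Hz — an assumption, not stated in the patent

def preprocess(eeg, fs=FS, win_s=0.2):
    """Notch out 50 Hz, split into 0.2 s windows, and band-pass each
    window through the 4-8, 6-10, ..., 36-40 Hz filter bank."""
    b0, a0 = iirnotch(50.0, Q=30.0, fs=fs)           # 50 Hz power-line notch
    eeg = filtfilt(b0, a0, eeg, axis=-1)
    win = int(win_s * fs)                            # 0.2 s window length
    windows = [eeg[..., k:k + win]
               for k in range(0, eeg.shape[-1] - win + 1, win)]
    bank = []
    for lo in range(4, 38, 2):                       # 4-8, 6-10, ..., 36-40 Hz
        b, a = cheby1(3, 0.5, [lo, lo + 4], btype="bandpass", fs=fs)
        bank.append([filtfilt(b, a, w, axis=-1) for w in windows])
    return windows, bank

sig = np.random.default_rng(1).standard_normal((27, FS))  # 27 leads, 1 s of EEG
windows, bank = preprocess(sig)
```

Each of the 17 sub-bands of each 0.2 s window would then be passed to OVR-CSP for spatial feature extraction before entering the CNN.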
Referring to Fig. 5, a schematic of the three-layer deep convolutional neural network. In the figure, "C&P" denotes convolution and pooling operations and "reshape" denotes matrix re-dimensioning. The CNN is recommended to have three hidden layers with 3×3 convolution kernels; a zero-padding strategy is recommended for every convolution, i.e., the input of each hidden layer is padded with zeros so that the output keeps the same dimensions as the input. Max pooling with 2×2 filters is recommended for the pooling layers. In the last layer of the deep convolutional network, the data are reshaped into a 1×64 one-dimensional vector.
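A NumPy sketch of the "C&P" shape bookkeeping: a zero-padded 3×3 convolution preserves the input size, and each 2×2 max pooling halves it. A 16×16 toy input is used here, so three poolings give a 2×2 map; the patent's actual feature maps reshape to 1×64:

```python
import numpy as np

def conv2d_same(x, k):
    """3x3 convolution with zero-padding so output size == input size."""
    p = k.shape[0] // 2
    xp = np.pad(x, p)                       # zero-padding strategy
    out = np.zeros_like(x)
    for r in range(x.shape[0]):
        for c in range(x.shape[1]):
            out[r, c] = np.sum(xp[r:r + k.shape[0], c:c + k.shape[1]] * k)
    return out

def maxpool2x2(x):
    """2x2 max pooling: halves each spatial dimension."""
    r, c = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * r, :2 * c].reshape(r, 2, c, 2).max(axis=(1, 3))

x = np.random.default_rng(2).standard_normal((16, 16))   # toy feature map
k = np.ones((3, 3)) / 9.0                                # toy 3x3 kernel
y = conv2d_same(x, k)                       # zero-padding keeps 16x16
z = maxpool2x2(maxpool2x2(maxpool2x2(y)))   # three C&P layers: 16 -> 2
flat = z.reshape(1, -1)                     # "reshape" to a 1xN row vector
```

The same arithmetic explains the 1×64 vector in the patent: whatever spatial size survives the third pooling stage, multiplied across channels, is flattened into a single row for the LSTM input.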
Referring to Fig. 6, a schematic of the deep long short-term memory network, where (a) shows the three-layer LSTM and (b) shows an LSTM cell. Because EEG is a continuous signal, the deep convolutional features of each time window are used as the input of the deep long short-term memory network (LSTM). Three hidden layers are recommended; the processing in each is shown in Fig. 6b. A predicted class is produced for each time window. During offline training, the prediction is compared with the true label to update the weights and biases of the hybrid deep neural network; during online control, the predictions of three consecutive windows are read, and if all three agree, that classification is output.
Referring to Fig. 7, a schematic of the UAV swarm formation reconfiguration control interface. The swarm formation control software is built on the Unity3D engine: the ocean flight scene is generated with the AQUAS Water tool, the islands and reefs are made with the MapMagic tool, and the software interface, which displays the swarm's current formation, is built with UGUI components. Network communication uses the UDP transport-layer protocol to receive the EEG classification results and play back the corresponding formation change. Four formation types can be selected on the control interface: imagined tongue movement triggers the "V"-shaped formation, imagined foot movement the horizontal-line formation, imagined left-hand movement the square formation, and imagined right-hand movement the column formation.
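The EEG-result-to-interface link can be sketched as a UDP sender. The port, address, and message format are assumptions — the patent states only that the Unity interface receives the classification result over UDP:

```python
import socket

# Address and payload format are assumptions, not from the patent.
UNITY_ADDR = ("127.0.0.1", 9000)

FORMATION = {            # motor-imagery class -> formation command
    "tongue": "V",       # "V"-shaped formation
    "foot": "line",      # horizontal-line formation
    "left": "square",    # square formation
    "right": "column",   # column formation
}

def send_formation(label, sock=None):
    """Map a classified motor-imagery label to its formation command
    and send it to the control interface over UDP (connectionless)."""
    cmd = FORMATION[label]
    s = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(cmd.encode("utf-8"), UNITY_ADDR)
    return cmd

cmd = send_formation("tongue")
```

UDP suits this link well: commands are small, frequent, and self-contained, and a lost datagram is simply superseded by the next classification result.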
A specific example is given below:
1. System hardware and software configuration
According to the overall platform structure shown in Fig. 1 of this section, the hardware configuration of this example is listed in the following table:
The software of this example comprises an EEG analysis program developed in MATLAB and Python and an interactive interface based on the Unity3D engine.
2. Experimental results
A UAV formation control experiment was carried out on the experimental platform. Fig. 8 shows the formation reconfiguration of the brain-controlled UAV formation, and Fig. 9 shows the brain-controlled formation combined with VR, giving the operator an immersive experience. The mind-control method for UAV swarm formation reconfiguration achieved good interactive simulation results, verifying the feasibility of the invention.
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910581534.0A CN110377049B (en) | 2019-06-29 | 2019-06-29 | Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110377049A CN110377049A (en) | 2019-10-25 |
CN110377049B true CN110377049B (en) | 2022-05-17 |
Family
ID=68251401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910581534.0A Active CN110377049B (en) | 2019-06-29 | 2019-06-29 | Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110377049B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111222578A (en) * | 2020-01-09 | 2020-06-02 | 哈尔滨工业大学 | An online processing method for motor imagery EEG signals |
CN111638724A (en) * | 2020-05-07 | 2020-09-08 | 西北工业大学 | Novel cooperative intelligent control method for unmanned aerial vehicle group computer |
CN112051780B (en) * | 2020-09-16 | 2022-05-17 | 北京理工大学 | A mobile robot formation control system and method based on brain-computer interface |
CN113009931B (en) * | 2021-03-08 | 2022-11-08 | 北京邮电大学 | A collaborative control device and method for a mixed formation of manned aircraft and unmanned aerial vehicles |
CN113741696B (en) * | 2021-09-07 | 2024-12-20 | 中国人民解放军军事科学院军事医学研究院 | A brain-controlled drone system based on LED three-dimensional interactive interface |
CN114637401B (en) * | 2022-03-10 | 2024-12-27 | 钧晟(天津)科技发展有限公司 | A method, device and system for controlling a drone based on an EEG acquisition headband |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140137870A (en) * | 2013-05-24 | 2014-12-03 | 고려대학교 산학협력단 | Apparatus and method for brain-brain interfacing |
CN106200680A (en) * | 2016-09-27 | 2016-12-07 | 深圳市千粤科技有限公司 | A kind of unmanned plane cluster management system and control method thereof |
CN107291096A (en) * | 2017-06-22 | 2017-10-24 | 浙江大学 | A kind of unmanned plane multimachine hybrid task cluster system |
CN107643695A (en) * | 2017-09-07 | 2018-01-30 | 天津大学 | Someone/unmanned plane cluster formation VR emulation modes and system based on brain electricity |
CN108446020A (en) * | 2018-02-28 | 2018-08-24 | 天津大学 | Merge Mental imagery idea control method and the application of Visual Graph and deep learning |
CN108845802A (en) * | 2018-05-15 | 2018-11-20 | 天津大学 | Unmanned plane cluster formation interactive simulation verifies system and implementation method |
CN109583346A (en) * | 2018-11-21 | 2019-04-05 | 齐鲁工业大学 | EEG feature extraction and classifying identification method based on LSTM-FC |
CN109683626A (en) * | 2018-11-08 | 2019-04-26 | 浙江工业大学 | A kind of quadrotor drone formation control method based on Adaptive radial basis function neural network |
CN109784211A (en) * | 2018-12-26 | 2019-05-21 | 西安交通大学 | A kind of Mental imagery Method of EEG signals classification based on deep learning |
Non-Patent Citations (2)
Title |
---|
A Performance Study of 14-Channel and 5-Channel EEG Systems for Real-Time Control of Unmanned Aerial Vehicles (UAVs);Abijith Vijayendra,等;《2018 Second IEEE International Conference on Robotic Computing》;20181231;第183-188页 * |
Research and implementation of an SSVEP-based brain-controlled aircraft; Xu Xian, et al.; Electronic Test; 20181231; pp. 10-12 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||