CN110377049B - Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method - Google Patents
Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method
- Publication number
- CN110377049B (application CN201910581534.0A)
- Authority
- CN
- China
- Prior art keywords
- unmanned aerial
- aerial vehicle
- formation
- electroencephalogram
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G05D1/104—Simultaneous control of position or course in three dimensions specially adapted for aircraft involving a plurality of aircrafts, e.g. formation flying
Abstract
The invention relates to the fields of data analysis, brain-computer interfaces, human-computer interaction and software development, and aims to improve control over unmanned aerial vehicle cluster formations and give operators a more convenient and efficient operating experience. Offline training step: initialize the motor imagery training system; train the hybrid deep neural network with a back-propagation algorithm by comparing the network's classification values against the label values, and determine the network weights. Online control step: preprocess the signals, extract signal features, and classify them with a hybrid deep neural network based on a deep convolutional network and a deep long short-term memory network; generate a control command from the output classification result and control the reconfiguration of the virtual unmanned aerial vehicle cluster formation. The invention is mainly applicable to the design and manufacture of unmanned aerial vehicles.
Description
Technical Field
The invention relates to the fields of data analysis, brain-computer interfaces, human-computer interaction and software development, and in particular to a brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method.
Background
A brain-computer interface (BCI) system provides a new mode of human-machine interaction: by extracting an operator's electroencephalogram (EEG) signals, the effective information they carry can be detected and used to control external equipment. Among the many BCI paradigms, P300, steady-state visual evoked potential (SSVEP) and motor imagery are the most active research areas today. Of these, motor imagery is purely spontaneous and requires no external stimulation.
Motor imagery means that the brain forms the intent of a limb movement without actually executing it; it reflects a person's desire for the movement and a preview of the movement that would occur. When a particular motion scenario is imagined, the brain produces a continuous EEG signal. The EEG features extracted from this signal are related to the experimenter's original mental activity, so the signal can be translated into control instructions for external equipment.
Deep learning, a branch of machine learning, has excelled in computer vision, speech recognition and natural language processing. In the big-data era, large motor imagery data sets are available through various channels, so deep learning methods can learn and classify motor imagery features from large amounts of EEG data. The deep convolutional network (CNN) is a widely applied technique that can fully mine the spatial characteristics of EEG data; the deep long short-term memory network (LSTM) is a recurrent neural network well suited to processing and classifying time-series signals, and can extract the temporal characteristics of EEG data. Therefore, a hybrid deep neural network based on a deep convolutional network and a deep long short-term memory network is constructed to learn the EEG signals in a supervised manner, fully mining the motor imagery features in both space and time with good real-time performance and accuracy.
In the aerospace field, multi-vehicle formation and human-machine cooperation are current research trends that place new requirements on unmanned aerial vehicle (UAV) control methods. Traditional single-aircraft flight control equipment cannot meet the control demands of today's UAV clusters, so new control methods are urgently needed. By introducing BCI technology into the aerospace field, a UAV pilot can not only control the positions of a UAV cluster with traditional flight control equipment, but can also reconfigure the cluster formation by thought alone, greatly improving the operator's command over the UAV cluster.
Disclosure of Invention
To overcome the shortcomings of the prior art, the invention aims to provide a brain-controlled UAV cluster formation reconfiguration control method that lets an operator control a UAV cluster through motor-imagery-based electroencephalogram signals, changing the cluster into a desired formation, improving control over the UAV cluster formation and giving the operator a more convenient and efficient control experience. To this end, the invention adopts the following technical scheme: a brain-computer interface-based UAV cluster formation reconfiguration control method comprising an offline training step and an online control step:
an off-line training step: s1, initializing a motor imagery training system; s2, starting an interactive interface, wherein the interactive interface randomly displays arrows pointing to the upper, lower, left and right directions; s3, the operator respectively imagines the movement of the tongue, the feet, the left hand and the right hand according to the direction of the arrow, and the electroencephalogram signals of the operator are collected through the electrode cap; s4, processing the electroencephalogram signals, including: preprocessing, extracting signal characteristics and classifying by using a mixed deep neural network based on a deep convolutional network and a deep long-short term memory network; s5, training the mixed deep neural network by adopting a back propagation algorithm through comparison of the classification value and the label value of the neural network, and determining a network weight;
an online control step: s6, starting virtual unmanned aerial vehicle cluster formation form software, and entering an unmanned aerial vehicle cluster formation control interface; s7, enabling operators to imagine the movements of the tongue, the feet, the left hand and the right hand respectively according to the expected unmanned aerial vehicle cluster formation, and meanwhile, collecting electroencephalogram signals of the operators by the electrode caps; s8, processing the acquired electroencephalogram signals after acquiring the electroencephalogram signals, and the processing method comprises the following steps: preprocessing, extracting signal characteristics and classifying by using a mixed deep neural network based on a deep convolutional network and a deep long-short term memory network; and S9, generating a control command according to the output classification result, and controlling the reconstruction of the virtual unmanned aerial vehicle cluster formation.
Specifically, 1) electroencephalogram signal preprocessing, comprising:
S10, down-sample the EEG signals to 250 Hz; S11, apply 50 Hz power-frequency notch filtering to the acquired signals; S12, segment the EEG time series with a time window; S13, filter the EEG signals with a filter bank;
2) the extraction of the characteristics of the electroencephalogram signals comprises the following steps:
extract features from the EEG signal obtained in S13 with the one-versus-rest common spatial pattern method OVR-CSP, comprising:
S14, for each class of motor imagery signal, solve the common spatial pattern filtering weight $W_j$ relative to the other signals from the eigendecomposition of its covariance matrix, $C_j = W_j E_j W_j^{\mathrm{T}}$,
where $C_j$ is the covariance matrix of this class of motor imagery signal, $E_j$ is the diagonal matrix containing the eigenvalues of $C_j$, $W_j$ is the common spatial pattern filtering weight of this class relative to the other signals, and $j = 1, 2, 3, 4$ indexes the four classes of motor imagery signals;
S15, extract the first two and the last two columns of each $W_j$ to form a new matrix $\bar{W}_j$, and concatenate these in order to obtain $W = [\bar{W}_1, \bar{W}_2, \bar{W}_3, \bar{W}_4]$;
S16, apply one-versus-rest common spatial pattern filtering to the EEG signal obtained in S13: $Z = W^{\mathrm{T}} X$,
where $X$ is the EEG signal obtained in S13 and $Z$ is the signal after one-versus-rest common spatial pattern filtering;
S17, extract features from the signal $Z$ obtained in S16: $f = \log\big(\operatorname{diag}(Z Z^{\mathrm{T}}) / \operatorname{tr}(Z Z^{\mathrm{T}})\big)$,
where $\operatorname{diag}(\cdot)$ takes the diagonal elements of a matrix and $\operatorname{tr}(\cdot)$ is the trace of a matrix;
3) perform spatial feature learning on the features obtained in S17 with a deep convolutional neural network, comprising:
S18, the deep convolutional network comprises several hidden layers, each consisting of a convolutional layer and a pooling layer, where the convolutional layer is expressed as:
$hc_l = R(\operatorname{conv}(W_l, x_l) + b_l)$
where $x_l$ and $hc_l$ are the input and output of the $l$-th convolutional layer, $W_l$ and $b_l$ are the weight and bias of the $l$-th convolutional layer, $\operatorname{conv}(\cdot)$ denotes the convolution operation, and $R$ is the layer's activation function;
S19, a pooling layer follows each convolutional layer; S20, the output of the deep convolutional neural network is converted into a 1-dimensional vector;
4) perform temporal feature learning with a deep long short-term memory network on the S20 features from multiple time windows, comprising:
S21, the deep long short-term memory network is formed by connecting several long short-term memory (LSTM) cells in series; S22, each LSTM cell consists of a forget gate, an input gate and an output gate;
S23, the forget gate determines how much information is discarded from the LSTM cell state; the gate outputs a value between 0 and 1, where 1 means complete retention and 0 means complete discarding:
$f_{l,t} = \sigma(W_l^f \cdot [hl_{l,t-1}, x_{l,t}] + b_l^f)$
where $hl_{l,t-1}$ is the LSTM cell output of the previous time window, $x_{l,t}$ is the input of the current cell, $l$ indexes the hidden layer, $t$ indexes the time window, $W_l^f$ and $b_l^f$ are the weight and bias, and $\sigma$ is the Sigmoid function;
S24, the input gate determines how much new information is written to the LSTM cell state: first decide which information needs updating, then compute the candidate update, and finally update the cell state with it:
$i_{l,t} = \sigma(W_l^i \cdot [hl_{l,t-1}, x_{l,t}] + b_l^i)$
$\tilde{C}_{l,t} = \tanh(W_l^c \cdot [hl_{l,t-1}, x_{l,t}] + b_l^c)$
$C_{l,t} = f_{l,t} \times C_{l,t-1} + i_{l,t} \times \tilde{C}_{l,t}$
where $W_l^i, W_l^c$ and $b_l^i, b_l^c$ are the weights and biases, $i_{l,t}$ is the update amount, $\tilde{C}_{l,t}$ is the candidate update, and $C_{l,t}$ is the current LSTM cell state;
S25, the output gate processes the LSTM cell state and determines the cell output:
$o_{l,t} = \sigma(W_l^o \cdot [hl_{l,t-1}, x_{l,t}] + b_l^o)$
$hl_{l,t} = o_{l,t} \times \tanh(C_{l,t})$
where $hl_{l,t}$ is the output of the LSTM cell and $W_l^o$ and $b_l^o$ are the weight and bias.
In the offline process, the weights and biases of the hybrid deep neural network must be trained, comprising:
S26, apply the Softmax function to the output of S25 in part 4) to compute the probability distribution over the EEG signal classes:
$P(y_m) = e^{y_m} \big/ \sum_{k=1}^{T} e^{y_k}$
where $m$ is the class index of the output $y$ and $T$ is the total number of EEG signal classes;
S27, use a cross-entropy function to compute the probability-distribution distance between the hybrid deep neural network's predicted classification and the true EEG label:
$H(y_l, y_p) = -\sum_{m=1}^{T} y_l^{(m)} \log y_p^{(m)}$
where $y_p$ is the predicted classification of the hybrid deep neural network and $y_l$ is the true EEG label value;
and S28, update the weights and biases of the deep neural network with a back-propagation algorithm so as to reduce the cross-entropy value.
In the online process, the UAV formation is controlled by a fully distributed formation reconfiguration controller:
S29, define the formation position error expression $e_{Pi}$:
where $P_0$ is the position of the virtual leader UAV and $c_i, c_j$ are the desired formation positions of UAVs $i$ and $j$ relative to the leader;
S30, design the outer-loop formation controller $U_{1i}(t)$ so that the formation errors $e_{Pi}$ and $e_{Vi}$ converge to a small neighborhood of zero in finite time while collisions between UAVs are avoided,
where the velocity tracking error $e_{Vi}$ and the formation reconfiguration error $\sigma_{Pi}$ are expressed as:
with parameter ranges $a > 0$, $b > 0$, $c > 0$, $\beta_i > 0$, $\lambda_1 > 0$, $\lambda_2 > 0$, $\lambda_3 > 0$, and $F_{1i}$ a neural network learning function.
S31, the collision-avoidance potential energy function between UAVs $i$ and $j$ is designed as:
where the relative distance is defined as $d_{ij} = \lVert P_i - P_j \rVert$, $r_a$ is the safe collision-avoidance radius of the UAV, $0 < \varepsilon_a < 1$ is a very small positive constant so that $\ln(1/\varepsilon_a) \geq 1$, the parameter ranges are $\eta_j > 0$ and $l_1 > 0$, and $\rho_a$ has the following update rule:
the invention has the characteristics and beneficial effects that:
the brain-controlled unmanned aerial vehicle cluster formation reconstruction technology combines the advantages of a brain-computer interface technology and an unmanned aerial vehicle cluster formation control technology, can simplify unmanned aerial vehicle formation control instructions, increase man-machine interaction modes, and enhance the control capability of people on unmanned aerial vehicle formation reconstruction.
Description of the drawings:
fig. 1 is a flow chart of a brain control unmanned aerial vehicle cluster formation control reconstruction method.
Fig. 2 is a schematic illustration of the lead placement of the 64-lead electrode cap.
Fig. 3 is a schematic diagram of visual signal stimulation.
FIG. 4 is a schematic diagram of a hybrid deep neural network architecture.
FIG. 5 is a schematic diagram of a three-layer deep convolutional neural network.
FIG. 6 is a diagram of the deep long short-term memory network, in which: a. schematic diagram of the 3-layer long short-term memory network; b. schematic diagram of a long short-term memory network cell.
Fig. 7 is a schematic view of a formation reconfiguration control interface of an unmanned aerial vehicle cluster.
Fig. 8 is an effect diagram of brain-controlled UAV formation reconfiguration.
Fig. 9 is an effect diagram of brain-controlled UAV formation reconfiguration combined with VR.
Detailed Description
The technical scheme of the invention is as follows: a brain-controlled UAV cluster formation reconfiguration control method comprising an offline training step and an online control step:
an off-line training step: s1, initializing a motor imagery training system; s2, starting an interactive interface, wherein the interactive interface randomly displays arrows pointing to the upper, lower, left and right directions; s3, the operator respectively imagines the movement of the tongue, the feet, the left hand and the right hand according to the direction of the arrow, and the electroencephalogram signals of the operator are collected through the electrode cap; s4, processing the electroencephalogram signals, including: preprocessing, extracting signal characteristics and classifying by using a mixed deep neural network based on a deep convolutional network and a deep long-short term memory network; and S5, training the mixed deep neural network by adopting a back propagation algorithm through the comparison of the classification value and the label value of the neural network, and determining the network weight.
An online control step: s6, starting virtual unmanned aerial vehicle cluster formation form software, and entering an unmanned aerial vehicle cluster formation control interface; s7, enabling operators to imagine the movements of the tongue, the feet, the left hand and the right hand respectively according to the expected unmanned aerial vehicle cluster formation, and meanwhile, collecting electroencephalogram signals of the operators by the electrode caps; s8, processing the acquired electroencephalogram signals after acquiring the electroencephalogram signals, and the processing method comprises the following steps: preprocessing, extracting signal characteristics and classifying by using a mixed deep neural network based on a deep convolutional network and a deep long-short term memory network; and S9, generating a control command according to the output classification result, and controlling the reconstruction of the virtual unmanned aerial vehicle cluster formation.
The electroencephalogram signal preprocessing, signal feature extraction and mixed deep neural network technology based on the deep convolutional network and the deep long-short term memory network mainly comprises the following steps:
1) preprocessing an electroencephalogram signal, comprising:
S10, down-sample the EEG signals to 250 Hz; S11, apply 50 Hz power-frequency notch filtering to the acquired EEG signals; S12, segment the EEG time series with a time window (a 0.2 s window is suggested); S13, filter the EEG signals with a filter bank (pass-bands of 4-8 Hz, 6-10 Hz, ..., 36-40 Hz; a Chebyshev filter is suggested).
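As a concrete illustration of S10-S13, the following Python sketch applies the suggested settings with scipy. The raw sampling rate of 1000 Hz and the type-II Chebyshev design are assumptions (the text only names a Chebyshev filter); the 250 Hz target rate, 50 Hz notch, 0.2 s windows, and 4-8 Hz through 36-40 Hz sub-bands follow the text.

```python
import numpy as np
from scipy.signal import decimate, iirnotch, filtfilt, cheby2

RAW_FS, TARGET_FS = 1000, 250          # raw rate is an assumption

def preprocess(eeg):
    """eeg: array of shape (channels, samples) sampled at RAW_FS."""
    # S10: down-sample to 250 Hz (decimate applies an anti-aliasing filter)
    eeg = decimate(eeg, RAW_FS // TARGET_FS, axis=1)
    # S11: 50 Hz power-frequency notch filter
    b, a = iirnotch(w0=50.0, Q=30.0, fs=TARGET_FS)
    eeg = filtfilt(b, a, eeg, axis=1)
    # S12: segment into 0.2 s windows (50 samples at 250 Hz)
    win = int(0.2 * TARGET_FS)
    n_win = eeg.shape[1] // win
    windows = (eeg[:, :n_win * win]
               .reshape(eeg.shape[0], n_win, win)
               .transpose(1, 0, 2))     # (windows, channels, samples)
    # S13: filter bank with pass-bands 4-8, 6-10, ..., 36-40 Hz
    out = []
    for lo in range(4, 37, 2):
        b, a = cheby2(N=4, rs=30, Wn=[lo, lo + 4],
                      btype="bandpass", fs=TARGET_FS)
        out.append(filtfilt(b, a, windows, axis=2))
    return np.stack(out, axis=1)        # (windows, bands, channels, samples)
```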
2) The extraction of the characteristics of the electroencephalogram signals comprises the following steps:
Features are extracted from the EEG signal obtained in S13 with the one-versus-rest common spatial pattern method (OVR-CSP), comprising:
S14, for each class of motor imagery signal, calculate the common spatial pattern filtering weight $W_j$ relative to the other signals from the eigendecomposition of its covariance matrix, $C_j = W_j E_j W_j^{\mathrm{T}}$,
where $C_j$ is the covariance matrix of this class of motor imagery signal, $E_j$ is the diagonal matrix containing the eigenvalues of $C_j$, $W_j$ is the common spatial pattern filtering weight of this class relative to the other signals, and $j = 1, 2, 3, 4$ indexes the four classes of motor imagery signals;
S15, extract the first two and the last two columns of each $W_j$ to form a new matrix $\bar{W}_j$, and concatenate these in order to obtain $W = [\bar{W}_1, \bar{W}_2, \bar{W}_3, \bar{W}_4]$;
S16, apply one-versus-rest common spatial pattern filtering to the EEG signal obtained in S13: $Z = W^{\mathrm{T}} X$,
where $X$ is the EEG signal obtained in S13 and $Z$ is the signal after one-versus-rest common spatial pattern filtering;
S17, extract features from the signal $Z$ obtained in S16: $f = \log\big(\operatorname{diag}(Z Z^{\mathrm{T}}) / \operatorname{tr}(Z Z^{\mathrm{T}})\big)$,
where $\operatorname{diag}(\cdot)$ takes the diagonal elements of a matrix and $\operatorname{tr}(\cdot)$ is the trace of a matrix.
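A minimal Python sketch of S14-S17 follows. Because the S14 equation is not reproduced in this text, the sketch assumes the standard OVR-CSP formulation — a generalized eigendecomposition of each class covariance against the composite covariance; keeping the first two and last two filters per class follows S15.

```python
import numpy as np
from scipy.linalg import eigh

def ovr_csp_filters(trials, labels, n_keep=2):
    """trials: (n_trials, channels, samples); labels in {1, 2, 3, 4}.
    Returns the combined filter W with 4 * 2 * n_keep columns."""
    def ncov(x):                        # trace-normalized covariance
        c = x @ x.T
        return c / np.trace(c)
    covs = {j: np.mean([ncov(t) for t, y in zip(trials, labels) if y == j],
                       axis=0)
            for j in np.unique(labels)}
    total = sum(covs.values())
    blocks = []
    for j in sorted(covs):
        # S14 (assumed standard form): solve C_j w = lambda * C_total w
        vals, vecs = eigh(covs[j], total)
        order = np.argsort(vals)
        # S15: keep the first two and last two columns of each W_j
        keep = np.concatenate([order[:n_keep], order[-n_keep:]])
        blocks.append(vecs[:, keep])
    return np.hstack(blocks)

def csp_features(W, X):
    """S16-S17: Z = W^T X, then f = log(diag(Z Z^T) / tr(Z Z^T))."""
    Z = W.T @ X
    ZZt = Z @ Z.T
    return np.log(np.diag(ZZt) / np.trace(ZZt))
```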
3) Perform spatial feature learning on the features obtained in S17 with a deep convolutional neural network, comprising:
S18, the deep convolutional network comprises several hidden layers, each consisting of a convolutional layer and a pooling layer, where the convolutional layer is expressed as:
$hc_l = R(\operatorname{conv}(W_l, x_l) + b_l)$
where $x_l$ and $hc_l$ are the input and output of the $l$-th convolutional layer, $W_l$ and $b_l$ are the weight and bias of the $l$-th convolutional layer, $\operatorname{conv}(\cdot)$ denotes the convolution operation, and $R$ is the layer's activation function (a ReLU function is suggested), expressed as:
$\operatorname{ReLU}(a) = \max(0, a)$
and S19, a pooling layer is arranged after each convolution layer, and the pooling layer is used for compressing input features, on one hand, reducing the network computation complexity, and on the other hand, compressing and extracting the features so as to obtain main features (the pooling layer suggests to adopt a maximum pooling function).
S20, the output of the deep convolutional neural network is converted into a 1-dimensional vector.
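The PyTorch sketch below mirrors the CNN described here and with reference to fig. 5: three hidden layers of zero-padded 3 × 3 convolutions with ReLU and 2 × 2 max pooling, flattened to a 1 × 64 vector. The channel counts, the adaptive pooling used to reach exactly 64 features, and the input layout are assumptions.

```python
import torch
import torch.nn as nn

class SpatialCNN(nn.Module):
    """Three conv + pool hidden layers; channel counts are assumptions."""
    def __init__(self, in_ch=1):
        super().__init__()
        layers = []
        for out_ch in (16, 32, 64):
            layers += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # zero padding keeps H x W
                nn.ReLU(),                                           # R(a) = max(0, a)
                nn.MaxPool2d(2),                                     # 2 x 2 max pooling
            ]
            in_ch = out_ch
        self.body = nn.Sequential(*layers)
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # assumption: one value per channel

    def forward(self, x):                   # x: (batch, 1, H, W) feature maps
        h = self.body(x)
        return self.squeeze(h).flatten(1)   # (batch, 64): the 1-dimensional vector of S20
```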
4) Perform temporal feature learning with a deep long short-term memory network on the S20 features from multiple time windows, comprising:
S21, the deep long short-term memory network is formed by connecting several long short-term memory (LSTM) cells in series;
S22, each LSTM cell consists of a forget gate, an input gate and an output gate;
S23, the forget gate determines how much information is discarded from the LSTM cell state; the gate outputs a value between 0 and 1, where 1 means complete retention and 0 means complete discarding:
$f_{l,t} = \sigma(W_l^f \cdot [hl_{l,t-1}, x_{l,t}] + b_l^f)$
where $hl_{l,t-1}$ is the LSTM cell output of the previous time window, $x_{l,t}$ is the input of the current cell, $l$ indexes the hidden layer, $t$ indexes the time window, $W_l^f$ and $b_l^f$ are the weight and bias, and $\sigma$ is the Sigmoid function.
S24, the input gate determines how much new information is written to the LSTM cell state: first decide which information needs updating, then compute the candidate update, and finally update the cell state with it:
$i_{l,t} = \sigma(W_l^i \cdot [hl_{l,t-1}, x_{l,t}] + b_l^i)$
$\tilde{C}_{l,t} = \tanh(W_l^c \cdot [hl_{l,t-1}, x_{l,t}] + b_l^c)$
$C_{l,t} = f_{l,t} \times C_{l,t-1} + i_{l,t} \times \tilde{C}_{l,t}$
where $W_l^i, W_l^c$ and $b_l^i, b_l^c$ are the weights and biases, $i_{l,t}$ is the update amount, $\tilde{C}_{l,t}$ is the candidate update, and $C_{l,t}$ is the current LSTM cell state.
S25, the output gate processes the LSTM cell state and determines the cell output:
$o_{l,t} = \sigma(W_l^o \cdot [hl_{l,t-1}, x_{l,t}] + b_l^o)$
$hl_{l,t} = o_{l,t} \times \tanh(C_{l,t})$
where $hl_{l,t}$ is the output of the LSTM cell and $W_l^o$ and $b_l^o$ are the weight and bias.
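Continuing the SpatialCNN sketch above, the following assembles the hybrid network of fig. 4: per-window CNN features feed a 3-layer LSTM (torch.nn.LSTM implements the gate equations of S23-S25 internally), and the final window's hidden state is classified into the four motor imagery classes. The hidden size and the use of the last time step are assumptions.

```python
class HybridCNNLSTM(nn.Module):
    """CNN spatial features per 0.2 s window, then a 3-layer LSTM over windows."""
    def __init__(self, n_classes=4, hidden=64):
        super().__init__()
        self.cnn = SpatialCNN()
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            num_layers=3, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, windows, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # (batch, windows, 64)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])     # class logits from the last window
```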
5) In the offline process, the weights and biases of the hybrid deep neural network must be trained, comprising:
S26, apply the Softmax function to the output of S25 in part 4) to compute the probability distribution over the EEG signal classes:
$P(y_m) = e^{y_m} \big/ \sum_{k=1}^{T} e^{y_k}$
where $m$ is the class index of the output $y$ and $T$ is the total number of EEG signal classes.
S27, use a cross-entropy function to compute the probability-distribution distance between the hybrid deep neural network's predicted classification and the true EEG label:
$H(y_l, y_p) = -\sum_{m=1}^{T} y_l^{(m)} \log y_p^{(m)}$
where $y_p$ is the predicted classification of the hybrid deep neural network and $y_l$ is the true EEG label value.
S28, update the weights and biases of the deep neural network with a back-propagation algorithm so as to reduce the cross-entropy value.
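Steps S26-S28 amount to a standard softmax cross-entropy training loop; the sketch below continues the HybridCNNLSTM sketch above. The Adam optimizer and learning rate are assumptions, since the source only specifies back-propagation.

```python
model = HybridCNNLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimizer is an assumption
loss_fn = nn.CrossEntropyLoss()   # S26 + S27: softmax and cross-entropy in one call

def train_step(x, y):
    """x: (batch, windows, 1, H, W) EEG features; y: class labels 0..3."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # distance between predicted and true distributions
    loss.backward()               # S28: back-propagate
    optimizer.step()              # update weights and biases
    return loss.item()
```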
6) In the online process, the UAV formation is controlled by a fully distributed formation reconfiguration controller.
S29, define the formation position error expression $e_{Pi}$:
where $P_0$ is the position of the virtual leader UAV and $c_i, c_j$ are the desired formation positions of UAVs $i$ and $j$ relative to the leader.
S30, design the outer-loop formation controller $U_{1i}(t)$ so that the formation errors $e_{Pi}$ and $e_{Vi}$ converge to a small neighborhood of zero in finite time while collisions between UAVs are avoided,
where the velocity tracking error $e_{Vi}$ and the formation reconfiguration error $\sigma_{Pi}$ are expressed as:
with parameter ranges $a > 0$, $b > 0$, $c > 0$, $\beta_i > 0$, $\lambda_1 > 0$, $\lambda_2 > 0$, $\lambda_3 > 0$, and $F_{1i}$ a neural network learning function.
S31, the collision-avoidance potential energy function between UAVs $i$ and $j$ is designed as:
where the relative distance is defined as $d_{ij} = \lVert P_i - P_j \rVert$, $r_a$ is the safe collision-avoidance radius of the UAV, $0 < \varepsilon_a < 1$ is a very small positive constant so that $\ln(1/\varepsilon_a) \geq 1$, the parameter ranges are $\eta_j > 0$ and $l_1 > 0$, and $\rho_a$ has the following update rule:
social benefits are as follows: the invention has very important significance for the research and development of a computer interface technology and a formation reconfiguration control method for the formation of the unmanned aerial vehicle group. The invention has an international advanced level, can be used as a new mode for controlling the formation of the unmanned aerial vehicle, and is further beneficial to promoting the development of various interaction modes and technologies of the unmanned aerial vehicle. The technology not only effectively improves the theoretical research level of the brain-unmanned aerial vehicle interactive control technology, but also lays a good theoretical technical foundation for the research and development of a brain-unmanned aerial vehicle interactive control system in the future.
Economic benefits: the brain-controlled UAV cluster formation reconfiguration technology combines the advantages of brain-computer interface technology and UAV cluster formation control technology. It can simplify UAV formation control instructions, add human-machine interaction modes, and enhance an operator's ability to reconfigure UAV formations, so it has high economic value and great application potential in the commercial performance and military fields. The technology can provide a new control concept for the development of future UAV formation flight control systems, and can also be applied to gaming as a new interaction mode, which likewise carries great economic value.
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to fig. 1, the flow of the brain-controlled UAV cluster formation reconfiguration control method mainly comprises an offline training system and an online control system. In the offline training system, an electrode cap collects labeled motor imagery EEG data, and the weights and biases of the hybrid deep neural network are trained by supervised learning. In the online control system, the subject's motor imagery EEG signals are collected in real time, the EEG data is segmented with 0.2 s time windows, each window is classified by the trained hybrid deep neural network, and if the outputs of n consecutive time windows agree (n = 3 is recommended), the UAV cluster performs the corresponding formation transformation, as sketched below.
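A minimal sketch of the n-consecutive-window consistency rule, assuming one classifier label arrives per 0.2 s window:

```python
from collections import deque

N_CONSISTENT = 3                      # n = 3, as recommended above
recent = deque(maxlen=N_CONSISTENT)

def on_window_classified(label):
    """Return a formation command only when n consecutive windows agree."""
    recent.append(label)
    if len(recent) == N_CONSISTENT and len(set(recent)) == 1:
        recent.clear()
        return label                  # forward to the formation controller
    return None                       # otherwise keep waiting
```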
Referring to fig. 2, the lead placement of the 64-lead electrode cap is shown. The system collects EEG signals from different regions of the subject's scalp. For motor imagery EEG analysis, at least the C3, C4 and Cz leads must be connected; the suggested set of connected leads is: FC5, FC3, FC1, FCz, FC2, FC4, FC6, FT7, FT8, C5, C3, C1, Cz, C2, C4, C6, T7, T8, CP5, CP3, CP1, CP2, CP4, CP6, TP7, TP8.
Referring to fig. 3, the visual stimulation is shown schematically. During offline training, EEG data must be acquired for the different motor imagery body parts. Following the red arrow direction in the figure, the subject imagines moving the corresponding part of the body: the left arrow means imagining left-hand movement, the right arrow right-hand movement, the up arrow tongue movement, and the down arrow foot movement.
Referring to fig. 4, the hybrid deep neural network architecture is shown. First, the acquired motor imagery EEG data is preprocessed: the 50 Hz power-frequency component is removed with a notch filter; the EEG data is segmented with a time window (0.2 s suggested); each window is sub-band filtered with a filter bank (pass-bands of 4-8 Hz, 6-10 Hz, ..., 36-40 Hz; a Chebyshev filter is suggested); and features are extracted from each window and sub-band with the one-versus-rest common spatial pattern method (OVR-CSP). Next, a deep convolutional neural network (CNN) learns and classifies the spatial features of the preprocessed EEG; finally, a deep long short-term memory network (LSTM) learns the temporal features across the per-window CNN features and outputs a classification result for each time window.
Referring to fig. 5, the three-layer deep convolutional neural network is shown; "C & P" denotes the convolution and pooling operations and "reshape" denotes matrix reshaping. The deep convolutional neural network (CNN) is suggested to use 3 hidden layers with 3 × 3 convolution kernels; a zero-padding strategy is suggested in each convolution, i.e. the input of each hidden layer is padded with zeros so that the convolution output matches the input dimensions. Max pooling with a 2 × 2 filter is suggested for each hidden layer's pooling layer. At the last layer of the deep convolutional network, the data is reshaped into a one-dimensional vector of size 1 × 64.
Referring to FIG. 6, the deep long short-term memory network is shown, where a is the 3-layer long short-term memory network and b is the long short-term memory network cell. The EEG signal is continuous, so the deep convolutional features of each time window serve as the input to the deep long short-term memory network (LSTM). The LSTM is suggested to use 3 hidden layers; the processing in each hidden layer is shown in fig. 6 b. An expected classification is generated for each time window. During offline training it is compared with the true label to update the weights and biases of the hybrid deep neural network; during actual control, three consecutive window classifications are read, and the classification result is output only if all three agree.
Referring to fig. 7, the UAV cluster formation reconfiguration control interface is shown. The UAV cluster formation control software is built on the Unity3D engine: the ocean flight scene is generated with the AQUAS Water tool, island reefs are built with the MapMagic tool, the interface showing the current cluster formation is built with UGUI components, and network communication is implemented over the UDP transport-layer protocol to receive the EEG classification results and demonstrate the corresponding formation transformation. Four formations can be selected from the control interface: imagining tongue movement switches to the V-shaped formation; imagining foot movement switches to the cross formation; imagining left-hand movement switches to the row formation; imagining right-hand movement switches to the column formation.
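A minimal sketch of the classifier-to-simulator command path over UDP. The label-to-formation mapping follows the interface description above; the plain-text message format, host and port are assumptions.

```python
import socket

FORMATIONS = {0: "V_SHAPE",   # tongue movement
              1: "CROSS",     # foot movement
              2: "ROW",       # left-hand movement
              3: "COLUMN"}    # right-hand movement

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_formation_command(label, host="127.0.0.1", port=5005):
    """Send the formation name to the Unity3D software over UDP."""
    sock.sendto(FORMATIONS[label].encode("utf-8"), (host, port))
```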
Specific examples are given below:
1. system software and hardware configuration
According to the overall platform structure shown in fig. 1, the hardware configuration adopted in this example is given in the following table:
name (R) | Model number |
Computer with a memory card | A CPU: intel Core i7-6700K, memory: 16G, display card: GTX1070 |
Electrode cap | Neuron 64 lead wireless digital electroencephalogram acquisition system |
The software implementation of this example includes an EEG analysis program developed in MATLAB and Python; the interactive interface is based on the Unity3D engine.
2. Results of the experiment
In this embodiment, UAV formation control experiments were carried out on the experimental platform; fig. 8 shows the formation reconfiguration effect of the brain-controlled UAVs. Fig. 9 shows brain-controlled UAV formation combined with VR, which gives the operator an immersive experience. The UAV cluster formation thought-based reconfiguration control method achieved a good interactive simulation effect, verifying the feasibility of the method.
Claims (1)
1. A brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method, characterized by comprising an offline training step and an online control step:
an off-line training step: s1, initializing a motor imagery training system; s2, starting an interactive interface, wherein the interactive interface randomly displays arrows pointing to the upper, lower, left and right directions; s3, the operator respectively imagines the movement of the tongue, the feet, the left hand and the right hand according to the direction of the arrow, and the electroencephalogram signals of the operator are collected through the electrode cap; s4, processing the electroencephalogram signals, including: preprocessing, extracting signal characteristics and classifying by using a mixed deep neural network based on a deep convolutional network and a deep long-short term memory network; s5, training the mixed deep neural network by adopting a back propagation algorithm through comparison of the classification value and the label value of the neural network, and determining a network weight;
an online control step: s6, starting virtual unmanned aerial vehicle cluster formation form software, and entering an unmanned aerial vehicle cluster formation control interface; s7, enabling operators to imagine the movements of the tongue, the feet, the left hand and the right hand respectively according to the expected unmanned aerial vehicle cluster formation, and meanwhile, collecting electroencephalogram signals of the operators by the electrode caps; s8, processing the acquired electroencephalogram signals after acquiring the electroencephalogram signals, and the processing method comprises the following steps: preprocessing, extracting signal characteristics and classifying by using a mixed deep neural network based on a deep convolutional network and a deep long-short term memory network; s9, generating a control command according to the output classification result, and controlling the reconstruction of the virtual unmanned aerial vehicle cluster formation;
specifically, 1) electroencephalogram signal preprocessing, comprising:
S10, down-sampling the electroencephalogram signals to obtain 250 Hz signals; S11, applying 50 Hz power-frequency notch filtering to the acquired signals; S12, segmenting the electroencephalogram time series with a time window; S13, filtering the electroencephalogram signals with a filter bank;
2) the extraction of the characteristics of the electroencephalogram signals comprises the following steps:
performing feature extraction on the electroencephalogram signal obtained in S13 with the one-versus-rest common spatial pattern method OVR-CSP, comprising:
S14, for each class of motor imagery signal, calculating the common spatial pattern filtering weight $W_j$ relative to the other signals from the eigendecomposition of its covariance matrix, $C_j = W_j E_j W_j^{\mathrm{T}}$,
where $C_j$ is the covariance matrix of this class of motor imagery signal, $E_j$ is the diagonal matrix containing the eigenvalues of $C_j$, $W_j$ is the common spatial pattern filtering weight of this class relative to the other signals, and $j = 1, 2, 3, 4$ indexes the four classes of motor imagery signals;
S15, extracting the first two and the last two columns of each $W_j$ to form a new matrix $\bar{W}_j$, and concatenating these in order to obtain $W = [\bar{W}_1, \bar{W}_2, \bar{W}_3, \bar{W}_4]$;
S16, applying one-versus-rest common spatial pattern filtering to the electroencephalogram signal obtained in S13: $Z = W^{\mathrm{T}} X$,
where $X$ is the electroencephalogram signal obtained in S13 and $Z$ is the signal after one-versus-rest common spatial pattern filtering;
S17, extracting features from the signal $Z$ obtained in S16: $f = \log\big(\operatorname{diag}(Z Z^{\mathrm{T}}) / \operatorname{tr}(Z Z^{\mathrm{T}})\big)$,
where $\operatorname{diag}(\cdot)$ takes the diagonal elements of a matrix and $\operatorname{tr}(\cdot)$ is the trace of a matrix;
3) performing spatial feature learning on the features obtained in S17 with a deep convolutional neural network, comprising:
S18, the deep convolutional network comprises several hidden layers, each consisting of a convolutional layer and a pooling layer, where the convolutional layer is expressed as:
$hc_l = R(\operatorname{conv}(W_l, x_l) + b_l)$
where $x_l$ and $hc_l$ are the input and output of the $l$-th convolutional layer, $W_l$ and $b_l$ are the weight and bias of the $l$-th convolutional layer, $\operatorname{conv}(\cdot)$ denotes the convolution operation, and $R$ is the layer's activation function;
S19, a pooling layer follows each convolutional layer; S20, the output of the deep convolutional neural network is converted into a 1-dimensional vector;
4) performing temporal feature learning with a deep long short-term memory network on the S20 features from multiple time windows, comprising:
S21, the deep long short-term memory network is formed by connecting several long short-term memory (LSTM) cells in series;
S22, each LSTM cell consists of a forget gate, an input gate and an output gate;
S23, the forget gate determines how much information is discarded from the long short-term memory cell state; the gate outputs a value between 0 and 1, where 1 means complete retention and 0 means complete discarding:
$f_{l,t} = \sigma(W_l^f \cdot [hl_{l,t-1}, x_{l,t}] + b_l^f)$
where $hl_{l,t-1}$ is the long short-term memory cell output of the previous time window, $x_{l,t}$ is the input of the current cell, $l$ indexes the hidden layer, $t$ indexes the time window, $W_l^f$ and $b_l^f$ are the weight and bias, and $\sigma$ is the Sigmoid function;
S24, the input gate determining how much new information is written to the long short-term memory cell state: first deciding which information needs updating, then computing the candidate update, and finally updating the cell state with it:
$i_{l,t} = \sigma(W_l^i \cdot [hl_{l,t-1}, x_{l,t}] + b_l^i)$
$\tilde{C}_{l,t} = \tanh(W_l^c \cdot [hl_{l,t-1}, x_{l,t}] + b_l^c)$
$C_{l,t} = f_{l,t} \times C_{l,t-1} + i_{l,t} \times \tilde{C}_{l,t}$
where $W_l^i, W_l^c$ and $b_l^i, b_l^c$ are the weights and biases, $i_{l,t}$ is the update amount, $\tilde{C}_{l,t}$ is the candidate update, and $C_{l,t}$ is the current long short-term memory cell state;
S25, the output gate processing the long short-term memory cell state and determining the cell output:
$o_{l,t} = \sigma(W_l^o \cdot [hl_{l,t-1}, x_{l,t}] + b_l^o)$
$hl_{l,t} = o_{l,t} \times \tanh(C_{l,t})$
where $hl_{l,t}$ is the output of the long short-term memory cell and $W_l^o$ and $b_l^o$ are the weight and bias;
in the offline process, the weights and biases of the hybrid deep neural network must be trained, comprising:
S26, applying the Softmax function to the output of S25 in part 4) to compute the probability distribution over the electroencephalogram signal classes:
$P(y_m) = e^{y_m} \big/ \sum_{k=1}^{T} e^{y_k}$
where $m$ is the class index of the output $y$ and $T$ is the total number of electroencephalogram signal classes;
S27, using a cross-entropy function to compute the probability-distribution distance between the hybrid deep neural network's predicted classification and the true electroencephalogram label:
$H(y_l, y_p) = -\sum_{m=1}^{T} y_l^{(m)} \log y_p^{(m)}$
where $y_p$ is the predicted classification of the hybrid deep neural network and $y_l$ is the true electroencephalogram label value;
s28, updating the weight and the deviation of the deep neural network by adopting a back propagation algorithm to reduce a cross entropy function value;
in the online process, the unmanned aerial vehicle formation is controlled by a fully distributed formation reconfiguration controller:
S29, defining the formation position error expression $e_{Pi}$:
where $P_0$ is the position of the virtual leader unmanned aerial vehicle and $c_i, c_j$ are the desired formation positions of unmanned aerial vehicles $i$ and $j$ relative to the leader;
S30, designing the outer-loop formation controller $U_{1i}(t)$ so that the formation errors $e_{Pi}$ and $e_{Vi}$ converge to a small neighborhood of zero in finite time while collisions between unmanned aerial vehicles are avoided,
where the velocity tracking error $e_{Vi}$ and the formation reconfiguration error $\sigma_{Pi}$ are expressed as:
with parameter ranges $a > 0$, $b > 0$, $c > 0$, $\beta_i > 0$, $\lambda_1 > 0$, $\lambda_2 > 0$, $\lambda_3 > 0$, and $F_{1i}$ a neural network learning function;
S31, the collision-avoidance potential energy function between unmanned aerial vehicles $i$ and $j$ being designed as:
where the relative distance is defined as $d_{ij} = \lVert P_i - P_j \rVert$, $r_a$ is the safe collision-avoidance radius of the unmanned aerial vehicle, $0 < \varepsilon_a < 1$ is a very small positive constant so that $\ln(1/\varepsilon_a) \geq 1$, the parameter ranges are $\eta_j > 0$ and $l_1 > 0$, and $\rho_a$ has the following update rule:
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910581534.0A CN110377049B (en) | 2019-06-29 | 2019-06-29 | Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110377049A CN110377049A (en) | 2019-10-25 |
CN110377049B true CN110377049B (en) | 2022-05-17 |
Family
ID=68251401
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910581534.0A Active CN110377049B (en) | 2019-06-29 | 2019-06-29 | Brain-computer interface-based unmanned aerial vehicle cluster formation reconfiguration control method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110377049B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111222578A (en) * | 2020-01-09 | 2020-06-02 | 哈尔滨工业大学 | Online processing method of motor imagery EEG signal |
CN111638724A (en) * | 2020-05-07 | 2020-09-08 | 西北工业大学 | Novel cooperative intelligent control method for unmanned aerial vehicle group computer |
CN112051780B (en) * | 2020-09-16 | 2022-05-17 | 北京理工大学 | Brain-computer interface-based mobile robot formation control system and method |
CN113009931B (en) * | 2021-03-08 | 2022-11-08 | 北京邮电大学 | Man-machine and unmanned-machine mixed formation cooperative control device and method |
CN113741696A (en) * | 2021-09-07 | 2021-12-03 | 中国人民解放军军事科学院军事医学研究院 | Brain-controlled unmanned aerial vehicle system based on LED three-dimensional interactive interface |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20140137870A (en) * | 2013-05-24 | 2014-12-03 | 고려대학교 산학협력단 | Apparatus and method for brain-brain interfacing |
CN106200680A (en) * | 2016-09-27 | 2016-12-07 | 深圳市千粤科技有限公司 | A kind of unmanned plane cluster management system and control method thereof |
CN107291096A (en) * | 2017-06-22 | 2017-10-24 | 浙江大学 | A kind of unmanned plane multimachine hybrid task cluster system |
CN107643695A (en) * | 2017-09-07 | 2018-01-30 | 天津大学 | Someone/unmanned plane cluster formation VR emulation modes and system based on brain electricity |
CN108446020A (en) * | 2018-02-28 | 2018-08-24 | 天津大学 | Merge Mental imagery idea control method and the application of Visual Graph and deep learning |
CN108845802A (en) * | 2018-05-15 | 2018-11-20 | 天津大学 | Unmanned plane cluster formation interactive simulation verifies system and implementation method |
CN109583346A (en) * | 2018-11-21 | 2019-04-05 | 齐鲁工业大学 | EEG feature extraction and classifying identification method based on LSTM-FC |
CN109683626A (en) * | 2018-11-08 | 2019-04-26 | 浙江工业大学 | A kind of quadrotor drone formation control method based on Adaptive radial basis function neural network |
CN109784211A (en) * | 2018-12-26 | 2019-05-21 | 西安交通大学 | A kind of Mental imagery Method of EEG signals classification based on deep learning |
- 2019-06-29: CN application CN201910581534.0A, patent CN110377049B (en), active
Non-Patent Citations (2)
Title |
---|
A Performance Study of 14-Channel and 5-Channel EEG Systems for Real-Time Control of Unmanned Aerial Vehicles (UAVs); Abijith Vijayendra et al.; 2018 Second IEEE International Conference on Robotic Computing; 2018; pp. 183-188. *
Research and implementation of an SSVEP-based brain-controlled aircraft (基于SSVEP的脑控飞行器研究与实现); Xu Xian et al.; Electronic Test (电子测试); 2018; pp. 10-12. *
Also Published As
Publication number | Publication date |
---|---|
CN110377049A (en) | 2019-10-25 |
Legal Events
Date | Code | Title
---|---|---
| PB01 | Publication
| SE01 | Entry into force of request for substantive examination
| GR01 | Patent grant