CN106959753B - Unmanned aerial vehicle virtual control method and system based on motor imagery brain-computer interface - Google Patents


Info

Publication number
CN106959753B
CN106959753B (application CN201710171103.8A)
Authority
CN
China
Prior art keywords
aerial vehicle
unmanned aerial
signal
electroencephalogram
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710171103.8A
Other languages
Chinese (zh)
Other versions
CN106959753A (en
Inventor
李长军
荣海军
董继尧
陈霸东
黄辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201710171103.8A priority Critical patent/CN106959753B/en
Publication of CN106959753A publication Critical patent/CN106959753A/en
Application granted granted Critical
Publication of CN106959753B publication Critical patent/CN106959753B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10Simultaneous control of position or course in three dimensions
    • G05D1/101Simultaneous control of position or course in three dimensions specially adapted for aircraft
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/02Preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Automation & Control Theory (AREA)
  • Health & Medical Sciences (AREA)
  • Remote Sensing (AREA)
  • Dermatology (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Human Computer Interaction (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

An unmanned aerial vehicle virtual control method and system based on a motor imagery brain-computer interface are disclosed. The control method comprises the following steps: 1) when the experiment starts, the subject watches the virtual unmanned aerial vehicle interface and imagines different limb movements while electroencephalogram signals are collected; 2) the electroencephalogram signal is converted into a control signal through, in sequence, temporal filtering, spatial filtering, feature extraction and feature transformation; 3) the electroencephalogram control signal is transmitted to the virtual unmanned aerial vehicle control program, which controls the flight of the virtual unmanned aerial vehicle; the fed-back flight state of the unmanned aerial vehicle is presented on the virtual unmanned aerial vehicle interface, and the subject monitors his or her control effect in real time and adjusts the motor imagery state until the virtual flight task is successfully completed or a failure condition is triggered. The control system comprises a virtual unmanned aerial vehicle interface, electroencephalogram acquisition equipment, a signal processing module and an interface. The invention can train a subject to control the virtual unmanned aerial vehicle in preparation for controlling a real unmanned aerial vehicle.

Description

Unmanned aerial vehicle virtual control method and system based on motor imagery brain-computer interface
Technical Field
The invention relates to the field of brain-computer interfaces, in particular to an unmanned aerial vehicle virtual control method and system based on a motor imagery brain-computer interface, which meet the training requirement of an unmanned aerial vehicle for flight control, hand-brain electric control and unmanned aerial vehicle control.
Background
The first international conference on brain-computer interfaces (BCI) defined the brain-computer interface as a brain-computer communication system that does not rely on the brain's normal neural and muscular output pathways. Many studies have shown that a specific imagination task, such as imagining unilateral limb movement, produces a decrease or increase in the synchrony of certain components of the brain signal over the contralateral sensorimotor area; these phenomena are referred to as event-related desynchronization (ERD) and event-related synchronization (ERS). Based on this principle, motor imagery has become an important paradigm for brain-computer interface control. With the development of electroencephalogram acquisition and processing technology, BCI systems have become much more capable and their range of application keeps widening. Applying BCI technology to the field of unmanned aerial vehicle control can improve control performance. But controlling an unmanned aerial vehicle is not easy: every pilot needs a large amount of remote-control flight practice, and for a pilot new to electroencephalogram control, using the electroencephalogram to control the unmanned aerial vehicle undoubtedly increases the difficulty. To address this problem, a virtual control platform is needed to train unmanned aerial vehicle pilots and to evaluate, from the results, a pilot's ability to control the vehicle using the electroencephalogram.
Disclosure of Invention
In view of the problems in the prior art, the invention aims to provide a virtual control method and a virtual control system for an unmanned aerial vehicle based on a motor imagery brain-computer interface, so as to simulate the control scenario of a real unmanned aerial vehicle and meet training requirements.
In order to achieve the purpose, the virtual control method of the unmanned aerial vehicle based on the motor imagery brain-computer interface comprises the following steps:
1) when the experiment starts, a testee watches the virtual unmanned aerial vehicle interface, imagines different limb movements and collects electroencephalogram signals;
2) converting the electroencephalogram signal into a control signal through time filtering, space filtering, feature extraction and feature conversion in sequence;
3) the electroencephalogram control signal is transmitted to the virtual unmanned aerial vehicle control program, which controls the flight of the virtual unmanned aerial vehicle; the fed-back flight state of the unmanned aerial vehicle is presented on the virtual unmanned aerial vehicle interface, and the subject monitors his or her control effect in real time and adjusts the motor imagery state until the virtual flight task is successfully completed or a failure condition is triggered.
The virtual unmanned aerial vehicle interface is modeled with Blender software, the electroencephalogram signals are collected with Scan4.5 software, and the electroencephalogram signals are converted into control signals with BCI2000 software.
The electroencephalogram signal acquisition equipment uses a SynAmps2 amplifier developed by Neuroscan and a 64-channel wet-electrode electroencephalogram cap whose electrodes are arranged according to the international 10-20 system.
Using a Laplacian spatial filter, the spatial filtering of the electroencephalogram signals is simplified, at each time point t, to the potential of the central electrode minus a weighted sum of the potentials of the 4 surrounding electrodes:

$$s_h^{\text{Lap}}(t) = s_h(t) - \sum_{i \in S_h} w_{h,i}\, s_i(t), \qquad w_{h,i} = \frac{1/d_{h,i}}{\sum_{j \in S_h} 1/d_{h,j}}$$

where the weight $w_{h,i}$ is a function of the distance $d_{h,i}$ between the target electrode h and its neighboring electrode i; $s_i(t)$ is the potential measured at electrode i at time t, and $s_h(t)$ is the potential measured at electrode h at time t.
The feature extraction of the electroencephalogram signals adopts the autoregressive maximum entropy spectrum method to convert the Mu and Beta rhythms from time-domain signals into frequency-domain features; the Mu rhythm is 8-12 Hz and the Beta rhythm is 18-25 Hz.
The feature transformation of the electroencephalogram signals comprises classification-discrimination and normalization. The classification step introduces the Fisher discriminant criterion to ensure that the projected samples have the maximum between-class scatter and the minimum within-class scatter; normalization applies a linear transformation to the control signal so that the output control signal lies within a specific range.
The electroencephalogram control signals are sent to the signal-relay software over the connectionless UDP transport protocol; the relay software reads the electroencephalogram control signals from the local port and then passes them to the Blender virtual environment through memory mapping.
The invention relates to an unmanned aerial vehicle virtual control system based on a motor imagery brain-computer interface, which comprises: the virtual unmanned aerial vehicle interface and the electroencephalogram acquisition equipment are used for acquiring electroencephalogram signals generated when the testee imagines different limb movements; the signal processing module is used for converting the electroencephalogram signal into a control signal; and an interface for transmitting the brain electrical control signal to the virtual environment of the virtual unmanned aerial vehicle.
The electroencephalogram signals collected by the electroencephalogram collecting equipment are transmitted to the signal processing module through the electroencephalogram signal amplifier.
Compared with the prior art, the unmanned aerial vehicle virtual control method and system based on the motor imagery brain-computer interface have the following beneficial effects. The subject watches the virtual unmanned aerial vehicle interface and becomes familiar with the motor imagery process. Before the experiment begins, the subject must understand the control rules of the virtual unmanned aerial vehicle: in this scheme the virtual unmanned aerial vehicle flies forward at a constant speed; imagining left-hand movement makes it move left, imagining right-hand movement makes it move right, imagining both hands moving simultaneously makes it move up, and imagining both legs moving makes it move down. The electroencephalogram acquisition equipment collects the generated electroencephalogram signals, which are converted into control signals and transmitted to the virtual unmanned aerial vehicle control program, which in turn controls the flight of the virtual unmanned aerial vehicle. The invention can train a subject to control the virtual unmanned aerial vehicle, thereby preparing for control of a real unmanned aerial vehicle; it can be used in fields such as electronic entertainment and industrial control, yields a complete brain-computer interface system, and is expected to bring considerable social and economic benefits.
Drawings
FIG. 1 is a schematic diagram of the structure of the brain-computer interface of the present invention;
FIG. 2 is a schematic diagram of the software connection of the brain-computer interface of the present invention;
FIG. 3 is a schematic diagram of the signal processing flow and control method of the brain-computer interface according to the present invention;
fig. 4 is a schematic view of a virtual drone interface of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the unmanned aerial vehicle virtual control system based on the motor imagery brain-computer interface of the invention comprises a virtual unmanned aerial vehicle interface; electroencephalogram acquisition equipment for acquiring the electroencephalogram signals generated when the subject imagines different limb movements; a signal processing module for converting the electroencephalogram signal into a control signal; and an interface for transmitting the electroencephalogram control signal to the virtual environment of the virtual unmanned aerial vehicle. In view of the complexity of unmanned aerial vehicle control, a subject needs to perform some one-dimensional motor imagery training before performing the method of the invention, to become familiar with the motor imagery process. Before the experiment starts, the subject should understand the control rules of the virtual unmanned aerial vehicle: the virtual unmanned aerial vehicle flies forward at a constant speed; imagining left-hand movement makes it move left; imagining right-hand movement makes it move right; imagining both hands moving simultaneously makes it move up; and imagining both legs moving makes it move down.
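As an illustration only (not part of the patent), the imagery-to-motion rules above can be sketched as a small lookup table; the class labels, the `Command` type, and the speed parameters are assumptions introduced here:

```python
from dataclasses import dataclass

@dataclass
class Command:
    dx: float  # lateral velocity (+ right, - left)
    dz: float  # vertical velocity (+ up, - down)

# Mapping of imagined movement to drone motion, per the rules above.
IMAGERY_TO_COMMAND = {
    "left_hand":  Command(dx=-1.0, dz=0.0),   # move left
    "right_hand": Command(dx=+1.0, dz=0.0),   # move right
    "both_hands": Command(dx=0.0,  dz=+1.0),  # move up
    "both_legs":  Command(dx=0.0,  dz=-1.0),  # move down
}

def step(x, z, imagery, dt=0.1, speed=1.0):
    """Advance the lateral/vertical position; forward motion is constant."""
    cmd = IMAGERY_TO_COMMAND[imagery]
    return x + speed * cmd.dx * dt, z + speed * cmd.dz * dt
```

In the actual system the discrete class would be replaced by the continuous normalized control signal described later.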
Referring to fig. 2, the electroencephalogram acquisition software is Scan4.5, the electroencephalogram processing software is BCI2000, and the virtual unmanned aerial vehicle interface (in effect a simple game) is modeled in Blender. The virtual unmanned aerial vehicle model established by the method is a quadrotor, and the control program is written with Blender's built-in game engine. During the experiment, the virtual unmanned aerial vehicle flies forward at a constant speed and, according to the control signal, turns left or right and ascends or descends. The electroencephalogram acquisition equipment uses a SynAmps2 amplifier developed by Neuroscan and a 64-channel wet-electrode electroencephalogram cap whose electrodes are arranged according to the international 10-20 system. When the experiment begins, the subject watches the virtual unmanned aerial vehicle interface and imagines different limb movements. The electroencephalogram signals are collected through the electroencephalogram cap and signal amplifier, read by Scan4.5, and transmitted to the BCI2000 software over the TCP/IP protocol, where they undergo a series of processing steps.
1) BCI2000 receives all the electroencephalogram signals transmitted by Scan4.5, but in practice only the signals of the 13 electrodes related to motor imagery (C3, C4, C5, CZ, and so on) are of interest, so only these 13 channels are processed. Two further attributes need to be set: the data block size (Sample Block) and the sampling frequency (Sample Rate). Data between modules in the BCI2000 software is not transferred sample-by-sample in real time, but block-by-block according to the Sample Block size.
2) Spatial filtering is performed to reduce the effect of spatial blurring and to improve the fidelity of the signal. The invention uses a Laplacian spatial filter; the simplified algorithm subtracts, at each time point t, a weighted sum of the potentials of the 4 surrounding electrodes from the potential of the central electrode:

$$s_h^{\text{Lap}}(t) = s_h(t) - \sum_{i \in S_h} w_{h,i}\, s_i(t), \qquad w_{h,i} = \frac{1/d_{h,i}}{\sum_{j \in S_h} 1/d_{h,j}}$$

where the weight $w_{h,i}$ is a function of the distance $d_{h,i}$ between the target electrode h and its neighboring electrode i; $s_i(t)$ is the potential measured at electrode i at time t, and $s_h(t)$ is the potential measured at electrode h at time t. In this experiment a simpler implementation was used: the average of the four nearest electrodes is subtracted from the central potential.
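The simplified Laplacian filter above can be sketched as follows; the channel indexing scheme and the `(n_channels, n_samples)` array shape are assumptions made for this sketch:

```python
import numpy as np

def laplacian_filter(eeg, center, neighbors, distances=None):
    """Small-Laplacian spatial filter: subtract a distance-weighted
    average of the surrounding electrodes from the center electrode.

    eeg       : (n_channels, n_samples) array of potentials
    center    : index of the target electrode h
    neighbors : indices of the 4 nearest electrodes
    distances : optional distances d_{h,i}; equal weights if omitted
                (the simpler implementation used in the experiment)
    """
    if distances is None:
        w = np.full(len(neighbors), 1.0 / len(neighbors))
    else:
        inv = 1.0 / np.asarray(distances, dtype=float)
        w = inv / inv.sum()  # weights are a function of distance
    return eeg[center] - w @ eeg[neighbors]
```

With `distances=None` this reduces exactly to "central potential minus the average of the four nearest electrodes".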
3) Feature extraction is then performed. Theoretical research shows that motor imagery manifests as frequency-band changes in the Mu (8-12 Hz) or Beta (18-25 Hz) rhythm. The general purpose of feature extraction is to convert the Mu and Beta rhythms from time-domain signals into frequency-domain features. A BCI system must provide multiple feedback updates per second; at a given frequency resolution, the maximum entropy method (MEM) based on an autoregressive model has better time resolution than the FFT or the wavelet transform and is therefore better suited to BCI systems, so the autoregressive maximum entropy spectrum is used for feature extraction. Maximum-entropy extrapolation of the autocorrelation function is equivalent to the autoregressive analysis method.
The use of autoregressive analysis is described below:
From time-series analysis, a zero-mean stationary random sequence $\{x_k\}$ has the N-th-order autoregressive signal model AR(N):

$$x_n = \sum_{k=1}^{N} a_k x_{n-k} + G w_n$$

where $w_n$ is white noise with zero mean and unit variance, G is its gain coefficient, $a_k\ (k=1,2,\dots,N)$ are the autoregressive coefficients, and $\hat{x}_n = \sum_{k=1}^{N} a_k x_{n-k}$ is the one-step prediction of $x_n$. Choosing a suitable AR model order is an important problem in AR spectral estimation: too low an order produces large bias, while too high an order causes spurious spectral peaks and degrades the variance of the spectral estimate. Based on experience with electroencephalogram signal processing, the model order is chosen as 16.
Thus $G w_n$ corresponds to the prediction error, and the regression coefficients can be obtained by minimizing the mean-square error: setting the derivative of

$$E\left[\left(x_n - \sum_{k=1}^{N} a_k x_{n-k}\right)^2\right]$$

with respect to $a_k$ to 0 yields the Yule-Walker equations:

$$\begin{bmatrix} r(0) & r(1) & \cdots & r(N-1) \\ r(1) & r(0) & \cdots & r(N-2) \\ \vdots & & & \vdots \\ r(N-1) & r(N-2) & \cdots & r(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \\ \vdots \\ r(N) \end{bmatrix}$$

where $r(0), r(1), \dots, r(N)$ is the autocorrelation function of the signal, and the coefficients $a_k\ (k=1,2,\dots,N)$ are obtained by solving this system. From the foregoing, it can be seen
$$x_n - \sum_{k=1}^{N} a_k x_{n-k} = G w_n$$

Writing this equation in Z-transform form gives

$$X(z)\left(1 - \sum_{k=1}^{N} a_k z^{-k}\right) = G W(z)$$

so the transfer function is

$$H(z) = \frac{X(z)}{G W(z)} = \frac{1}{1 - \sum_{k=1}^{N} a_k z^{-k}}$$

Since the rational function H(z) has poles only, the autoregressive signal model is an all-pole model. Its power spectrum is:

$$S(w) = \frac{G^2}{\left|1 - \sum_{k=1}^{N} a_k e^{-jwkT}\right|^2}$$
In this formula $G^2$ is still unknown; it is derived next. Both the present autoregressive analysis method and maximum entropy spectral analysis apply prediction-error filtering: a minimum-phase prediction-error filter is estimated directly from the observed signal sequence, so that the output of the filter is a white-noise sequence.
The prediction error is:

$$e_n = x_n - \sum_{k=1}^{N} a_k x_{n-k}$$

with transfer function

$$A(z) = 1 - \sum_{k=1}^{N} a_k z^{-k}$$

Since the output of the prediction-error filter is a whitened sequence, $E\{e_n x_{n-k}\} = 0\ (k>0)$. The output power can be obtained as:

$$P = E\{e_n^2\} = r(0) - \sum_{k=1}^{N} a_k r(k)$$

where $r(0), r(1), \dots, r(N)$ is the autocorrelation function of the signal and the coefficients $a_k\ (k=1,2,\dots,N)$ are obtained by solving the Yule-Walker equations.

The frequency characteristic of the prediction-error filter is:

$$A(w) = 1 - \sum_{k=1}^{N} a_k e^{-jwkT}$$

Since $S(w)\,|A(w)|^2 = P$, where P is the output power, the power spectrum S(w) obtained from the prediction-error filter is:

$$S(w) = \frac{P}{|A(w)|^2}$$

If the filter input signal and the autoregressive signal have the same power spectrum, then $G^2 = P$, which solves the problem of $G^2$. The power spectrum of the signal can then be calculated.
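Under the derivation above, the autoregressive maximum entropy spectrum can be sketched as follows; the sampling rate, the frequency-grid size, and the biased autocorrelation estimator are assumptions of this sketch, not specified by the patent:

```python
import numpy as np

def ar_mem_spectrum(x, order=16, n_freq=256, fs=250.0):
    """AR / maximum-entropy power spectrum via the Yule-Walker equations.

    Mirrors the derivation above: estimate r(0..N), solve the Toeplitz
    Yule-Walker system for a_k, take G^2 = P (prediction-error power),
    and evaluate S(w) = G^2 / |1 - sum_k a_k e^{-jwkT}|^2 with T = 1/fs.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # biased autocorrelation estimates r(0)..r(order)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = [r(1), ..., r(N)]^T
    R = r[np.abs(np.subtract.outer(np.arange(order), np.arange(order)))]
    a = np.linalg.solve(R, r[1:])
    p = r[0] - a @ r[1:]                     # prediction-error power P = G^2
    freqs = np.linspace(0.0, fs / 2.0, n_freq)
    k = np.arange(1, order + 1)
    A = 1.0 - np.exp(-2j * np.pi * np.outer(freqs / fs, k)) @ a
    return freqs, p / np.abs(A) ** 2
```

The Mu (8-12 Hz) and Beta (18-25 Hz) band features would then be read off the returned spectrum.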
4) After the features are extracted, the extracted features need to be classified and identified, and the main process is to convert a series of electroencephalogram signal features into a series of control signals. Conventional classification/regression methods can accomplish this transformation.
Linear Discriminant Analysis (LDA) is described below.
The basic idea of the method is to project high-dimensional pattern samples onto an optimal discriminant vector space, thereby extracting classification information and compressing the feature-space dimension. After projection, the pattern samples are guaranteed to have the maximum between-class distance and the minimum within-class distance in the new subspace; that is, the patterns have optimal separability in that space.
The method comprises the following specific steps. The feature-extracted data is an N×1 column vector; assume m samples have been obtained and that the samples belong to C classes in total. First compute the sample mean of class i:

$$\mu_i = \frac{1}{m_i} \sum_{x \in X_i} x$$

and likewise the overall sample mean:

$$\mu = \frac{1}{m} \sum_{j=1}^{m} x_j$$

Define the between-class and within-class scatter matrices:

$$S_b = \sum_{i=1}^{C} m_i (\mu_i - \mu)(\mu_i - \mu)^T$$

$$S_w = \sum_{i=1}^{C} \sum_{x \in X_i} (x - \mu_i)(x - \mu_i)^T$$
LDA is used as a classification algorithm; naturally, classification works well when the coupling between classes is low and within-class cohesion is high, i.e. when the values in the within-class scatter matrix are small and those in the between-class scatter matrix are large. Here we introduce the Fisher discriminant criterion:

$$J_F(\varphi) = \frac{\varphi^T S_b \varphi}{\varphi^T S_w \varphi}$$

where $\varphi$ is any n-dimensional column vector. Fisher linear discriminant analysis chooses the vector $\varphi^*$ that maximizes $J_F(\varphi)$ as the projection direction; the physical meaning is that the projected samples have the largest between-class scatter and the smallest within-class scatter. Derivation finally shows that the $\varphi^*$ satisfying the Fisher condition is the column eigenvector of the matrix $S_w^{-1} S_b$ corresponding to its largest eigenvalue.
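The Fisher direction just described — the dominant eigenvector of $S_w^{-1} S_b$ — can be sketched as below; the (samples × features) matrix layout is an assumption of this sketch:

```python
import numpy as np

def fisher_direction(X, y):
    """Fisher discriminant direction: the eigenvector of Sw^{-1} Sb
    with the largest eigenvalue.

    X : (m, n) matrix of m feature vectors
    y : (m,) integer class labels
    """
    n = X.shape[1]
    mu = X.mean(axis=0)                      # overall sample mean
    Sw = np.zeros((n, n))                    # within-class scatter
    Sb = np.zeros((n, n))                    # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)                 # class mean
        Sw += (Xc - mc).T @ (Xc - mc)
        d = (mc - mu).reshape(-1, 1)
        Sb += len(Xc) * (d @ d.T)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / np.linalg.norm(w)
```

Projecting a feature vector onto this direction (`X @ w`) yields the scalar that is then normalized into the control signal.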
5) The range of the output signal after feature transformation is uncertain, so for control stability the output signal must be normalized; normalization applies a linear transformation to the control signal so that the output lies within a specific range. BCI2000 uses the following transformation when operating online. A data buffer holds the input signal prior to normalization; for each channel, normalization subtracts an offset from the channel data and multiplies by a gain value. The offset is the mean of the buffered data and the gain is the reciprocal of its standard deviation. Assume the i-th channel buffers N data points:

$$\text{NormalizerOffset}_i = \frac{1}{N} \sum_{k=1}^{N} \text{input}_i(k)$$

$$\text{NormalizerGain}_i = \left(\sqrt{\frac{1}{N} \sum_{k=1}^{N} \left(\text{input}_i(k) - \text{NormalizerOffset}_i\right)^2}\right)^{-1}$$

$$\text{output}_i = (\text{input}_i - \text{NormalizerOffset}_i) \times \text{NormalizerGain}_i$$

The resulting output signal has zero mean and unit variance.
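A minimal sketch of this buffered normalization follows; the ring-buffer length and the zero-variance guard are assumptions of the sketch, and BCI2000's actual Normalizer differs in implementation detail:

```python
import numpy as np

class Normalizer:
    """Per-channel normalizer: output = (input - offset) * gain,
    where offset is the buffered mean and gain is the reciprocal of
    the buffered standard deviation."""

    def __init__(self, buffer_len=100):
        self.buffer_len = buffer_len
        self.buf = []

    def __call__(self, value):
        self.buf.append(value)
        self.buf = self.buf[-self.buffer_len:]  # keep last N samples
        offset = float(np.mean(self.buf))
        std = float(np.std(self.buf))
        gain = 1.0 / std if std > 0 else 1.0    # guard: avoid divide-by-zero
        return (value - offset) * gain
```

Over time the normalized stream approaches zero mean and unit variance, as stated above.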
6) Thus, the electroencephalogram signal processing part is completed. The control signal is then output to the drone virtual environment.
Firstly, online control requires that the external application program interface be as simple and efficient as possible, so the connectionless UDP transport protocol is adopted to transmit the processed electroencephalogram signals. The approach adopted here is to send the electroencephalogram signals processed by BCI2000 to the signal-relay software over UDP; the relay software reads the control signal from the local port and then passes it to the Blender virtual environment through memory mapping, as shown in fig. 2.
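The UDP hand-off and memory mapping can be sketched roughly as below; the port number, the payload format (one little-endian float64), and the anonymous memory map are all assumptions for illustration, not details disclosed by the patent:

```python
import mmap
import socket
import struct

PORT = 20320          # assumed local port for the relay
MMAP_BYTES = 8        # one float64 control value

def send_control(sock, value, port=PORT):
    """Processing side: fire one control sample over connectionless UDP."""
    sock.sendto(struct.pack("<d", value), ("127.0.0.1", port))

def relay_once(sock, mm):
    """Relay side: read one datagram from the local port and publish it
    to the virtual environment via the memory-mapped region."""
    data, _ = sock.recvfrom(MMAP_BYTES)
    mm.seek(0)
    mm.write(data)
    return struct.unpack("<d", data)[0]
```

The Blender-side game logic would then poll the mapped region each frame for the latest control value.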
The model of the unmanned aerial vehicle in Blender flies according to the established strategy, as shown in figs. 3 and 4.

Claims (3)

1. An unmanned aerial vehicle virtual control method based on a motor imagery brain-computer interface is characterized by comprising the following steps:
1) when the experiment starts, a testee watches the virtual unmanned aerial vehicle interface, imagines different limb movements and collects electroencephalogram signals;
the EEG signal acquisition equipment uses a SynAmps 2 amplifier developed by Neuroscan and a 64-wet-conducting electrode EEG cap, and the electrode arrangement mode of the EEG cap is arranged according to an international 10-20 system; modeling a virtual unmanned aerial vehicle interface through Blender software, acquiring an electroencephalogram signal through Scan4.5 software, and converting the electroencephalogram signal into a control signal through BCI2000 software;
2) carrying out spatial filtering, wherein, using a Laplacian spatial filter, the spatial filtering of the electroencephalogram signals is simplified, at each time point t, to the potential of the central electrode minus a weighted sum of the potentials of the 4 surrounding electrodes:

$$s_h^{\text{Lap}}(t) = s_h(t) - \sum_{i \in S_h} w_{h,i}\, s_i(t), \qquad w_{h,i} = \frac{1/d_{h,i}}{\sum_{j \in S_h} 1/d_{h,j}}$$

where the weight $w_{h,i}$ is a function of the distance $d_{h,i}$ between the target electrode h and its neighboring electrode i; $s_i(t)$ is the potential measured at electrode i at time t, and $s_h(t)$ is the potential measured at electrode h at time t;
performing feature extraction, wherein the feature extraction of the electroencephalogram signals adopts the autoregressive maximum entropy spectrum method to convert the Mu and Beta rhythms from time-domain signals into frequency-domain features, the Mu rhythm being 8-12 Hz and the Beta rhythm 18-25 Hz;
converting the electroencephalogram signals into control signals through feature transformation, wherein the feature transformation of the electroencephalogram signals comprises classification-discrimination and normalization;
the classification step introduces the Fisher discriminant criterion expression to ensure that the projected samples have the maximum between-class scatter and the minimum within-class scatter; the normalization applies a linear transformation to the control signal so that the output control signal lies within a specific range;
from time-series analysis, a zero-mean stationary random sequence $\{x_k\}$ has the N-th-order autoregressive signal model AR(N):

$$x_n = \sum_{k=1}^{N} a_k x_{n-k} + G w_n$$

where $w_n$ is white noise with zero mean and unit variance, G is its gain coefficient, $a_k\ (k=1,2,\dots,N)$ are the autoregressive coefficients, and $\hat{x}_n = \sum_{k=1}^{N} a_k x_{n-k}$ is the one-step prediction of $x_n$; the AR model order is chosen as 16;
the regression coefficients are calculated by minimizing the mean-square error, i.e. setting the derivative of

$$E\left[\left(x_n - \sum_{k=1}^{N} a_k x_{n-k}\right)^2\right]$$

with respect to $a_k$ to 0, which yields the Yule-Walker equations:

$$\begin{bmatrix} r(0) & r(1) & \cdots & r(N-1) \\ r(1) & r(0) & \cdots & r(N-2) \\ \vdots & & & \vdots \\ r(N-1) & r(N-2) & \cdots & r(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_N \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \\ \vdots \\ r(N) \end{bmatrix}$$

where $r(0), r(1), \dots, r(N)$ is the autocorrelation function of the signal and the coefficients $a_k\ (k=1,2,\dots,N)$ result from solving this system; from the foregoing, it can be seen
$$x_n - \sum_{k=1}^{N} a_k x_{n-k} = G w_n$$

writing this equation in Z-transform form gives

$$X(z)\left(1 - \sum_{k=1}^{N} a_k z^{-k}\right) = G W(z)$$

so that the transfer function is

$$H(z) = \frac{1}{1 - \sum_{k=1}^{N} a_k z^{-k}}$$

since the rational function H(z) has poles only, the signal model is all-pole, and its power spectrum is:

$$S(w) = \frac{G^2}{\left|1 - \sum_{k=1}^{N} a_k e^{-jwkT}\right|^2}$$
in the formula, j is the imaginary unit, w represents the frequency variable, and T is the sampling period of the signal;
the unknown $G^2$ in this formula is derived as follows: the prediction error is

$$e_n = x_n - \sum_{k=1}^{N} a_k x_{n-k}$$

with transfer function

$$A(z) = 1 - \sum_{k=1}^{N} a_k z^{-k}$$

since the output of the prediction-error filter is a whitened sequence, $E\{e_n x_{n-k}\} = 0\ (k>0)$; the output power is obtained as:

$$P = E\{e_n^2\} = r(0) - \sum_{k=1}^{N} a_k r(k)$$

where $r(0), r(1), \dots, r(N)$ is the autocorrelation function of the signal and the coefficients $a_k\ (k=1,2,\dots,N)$ are obtained by solving the Yule-Walker equations; the frequency characteristic of the prediction-error filter is

$$A(w) = 1 - \sum_{k=1}^{N} a_k e^{-jwkT}$$

since $S(w)\,|A(w)|^2 = P$, where P is the output power, the power spectrum S(w) obtained from the prediction-error filter is:

$$S(w) = \frac{P}{|A(w)|^2}$$

if the filter input signal and the autoregressive signal have the same power spectrum, then $G^2 = P$;
classifying and discriminating the extracted features by the following specific steps: the feature-extracted data is an N×1 column vector; assume m samples have been obtained and that the samples belong to C classes in total; first compute the sample mean of class i:

$$\mu_i = \frac{1}{m_i} \sum_{x \in X_i} x$$

and likewise the overall sample mean:

$$\mu = \frac{1}{m} \sum_{j=1}^{m} x_j$$

define the between-class and within-class scatter matrices:

$$S_b = \sum_{i=1}^{C} m_i (\mu_i - \mu)(\mu_i - \mu)^T$$

$$S_w = \sum_{i=1}^{C} \sum_{x \in X_i} (x - \mu_i)(x - \mu_i)^T$$
introduce the Fisher discriminant criterion expression:

$$J_F(\varphi) = \frac{\varphi^T S_b \varphi}{\varphi^T S_w \varphi}$$

where $\varphi$ is any n-dimensional column vector; Fisher linear discriminant analysis chooses the vector $\varphi^*$ that maximizes $J_F(\varphi)$ as the projection direction, and the $\varphi^*$ satisfying the Fisher condition is the column eigenvector of the matrix $S_w^{-1} S_b$ corresponding to its largest eigenvalue;
normalizing the output signal, wherein normalization applies a linear transformation to the control signal so that the output control signal lies in a specific range; the following transformation is used when BCI2000 operates online: in BCI2000, a data buffer stores the input signals before normalization, and for each channel, normalization subtracts an offset from the channel data and multiplies by a gain value; the offset is the mean of the data and the gain is the reciprocal of the standard deviation of the data; assume the i-th channel buffers N data points input_i(1), ..., input_i(N):

NormalizerOffset_i = (1/N) Σ_{k=1}^{N} input_i(k)

NormalizerGain_i = 1 / sqrt( (1/N) Σ_{k=1}^{N} (input_i(k) − NormalizerOffset_i)^2 )

output_i = (input_i − NormalizerOffset_i) × NormalizerGain_i

where i = 1, ..., n and n represents the number of channels; input_i denotes the most recently saved data in the i-th channel data buffer, and output_i denotes the resulting output signal having zero mean and unit variance; NormalizerGain_i is the gain value and NormalizerOffset_i is the offset, both of which are intermediate variables;
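The per-channel normalization above can be sketched in a few lines. This is an illustrative sketch of the zero-mean/unit-variance transform, not BCI2000's actual Normalizer filter code (it assumes each channel's buffered data has nonzero standard deviation):

```python
import numpy as np

def normalize_channels(buffers):
    """BCI2000-style normalization sketch: for each channel buffer,
    offset = mean of the data, gain = 1 / standard deviation,
    output = (input - offset) * gain  ->  zero mean, unit variance.
    buffers: list of per-channel data sequences."""
    outputs = []
    for data in buffers:
        data = np.asarray(data, dtype=float)
        offset = data.mean()            # NormalizerOffset_i
        gain = 1.0 / data.std()         # NormalizerGain_i (std assumed > 0)
        outputs.append((data - offset) * gain)
    return outputs
```

Applying it to any buffer yields a signal whose sample mean is 0 and whose sample standard deviation is 1, which is what keeps the control signal in a predictable range for the feedback application.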
3) the electroencephalogram control signal is transmitted to a virtual unmanned aerial vehicle control program, which controls the flight of the virtual unmanned aerial vehicle; based on the flight state of the unmanned aerial vehicle fed back by the virtual unmanned aerial vehicle interface, the subject monitors his or her own control effect in real time and adjusts the motor imagery state until the flight task of the virtual unmanned aerial vehicle is successfully completed or a failure condition is triggered;
the electroencephalogram control signals are sent to the signal relay software over the connectionless UDP transport protocol; the signal relay software reads the electroencephalogram control signals from the local port and then transmits them to the Blender virtual environment through memory mapping.
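The UDP leg of this relay path can be sketched as follows. The port number and the one-float message format are assumptions for illustration; the claim does not fix either:

```python
import socket
import struct

PORT = 5000  # hypothetical local port; the claim does not specify one

def send_control(value, port=PORT):
    """Sender side: transmit one EEG control value over connectionless UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        # pack the control value as a little-endian 32-bit float
        s.sendto(struct.pack("<f", value), ("127.0.0.1", port))

def read_control(port=PORT, timeout=1.0):
    """Relay side: read one control value from the local port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("127.0.0.1", port))
        s.settimeout(timeout)
        data, _addr = s.recvfrom(4)
        return struct.unpack("<f", data)[0]
```

UDP is a reasonable fit here because the control loop is tolerant of an occasional lost datagram and favors low latency over delivery guarantees; the memory-mapped handoff into Blender is a separate step not shown.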
2. A virtual control system of an unmanned aerial vehicle for implementing the virtual control method of an unmanned aerial vehicle based on a motor imagery brain-computer interface as claimed in claim 1, comprising: a virtual unmanned aerial vehicle interface; electroencephalogram acquisition equipment for acquiring the electroencephalogram signals generated when the subject imagines different limb movements; a signal processing module for converting the electroencephalogram signals into control signals; and an interface for transmitting the electroencephalogram control signals to the virtual environment of the virtual unmanned aerial vehicle.
3. The virtual control system of the unmanned aerial vehicle of claim 2, wherein the electroencephalogram signals collected by the electroencephalogram acquisition equipment are transmitted to the signal processing module through an electroencephalogram signal amplifier.
CN201710171103.8A 2017-03-21 2017-03-21 Unmanned aerial vehicle virtual control method and system based on motor imagery brain-computer interface Active CN106959753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710171103.8A CN106959753B (en) 2017-03-21 2017-03-21 Unmanned aerial vehicle virtual control method and system based on motor imagery brain-computer interface


Publications (2)

Publication Number Publication Date
CN106959753A CN106959753A (en) 2017-07-18
CN106959753B true CN106959753B (en) 2021-02-09

Family

ID=59470437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710171103.8A Active CN106959753B (en) 2017-03-21 2017-03-21 Unmanned aerial vehicle virtual control method and system based on motor imagery brain-computer interface

Country Status (1)

Country Link
CN (1) CN106959753B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107544675A (en) * 2017-09-08 2018-01-05 天津大学 Brain control formula virtual reality method
CN107669416B (en) * 2017-09-30 2023-05-02 五邑大学 Wheelchair system based on continuous-light motor imagination nerve decoding and control method
CN108280395B (en) * 2017-12-22 2021-12-17 中国电子科技集团公司第三十研究所 Efficient identification method for flight control signals of low-small-slow unmanned aerial vehicle
CN108113670A (en) * 2018-01-22 2018-06-05 江苏师范大学 A kind of UAV system and control method of the control of multi-channel type induced brain wave
CN108196566A (en) * 2018-03-16 2018-06-22 西安科技大学 A kind of small drone cloud brain control system and its method
CN108536169A (en) * 2018-04-28 2018-09-14 赵小川 Brain-controlled UAV system based on carbon nanotube polymer electrodes and control method
CN108732932B (en) * 2018-06-01 2021-06-08 荷塘智能科技(固安)有限公司 Four-rotor unmanned aerial vehicle accurate position control method based on minimum variance regulator
CN108762303A (en) * 2018-06-07 2018-11-06 重庆邮电大学 A kind of portable brain control UAV system and control method based on Mental imagery
CN109044350A (en) * 2018-09-15 2018-12-21 哈尔滨理工大学 A kind of eeg signal acquisition device and detection method
CN109491510A (en) * 2018-12-17 2019-03-19 深圳市道通智能航空技术有限公司 A kind of unmanned aerial vehicle (UAV) control method, apparatus, equipment and storage medium
CN110687929B (en) * 2019-10-10 2022-08-12 辽宁科技大学 Aircraft three-dimensional space target searching system based on monocular vision and motor imagery
CN112034979B (en) * 2020-07-31 2022-03-08 西安交通大学 Wearable flight sensation feedback system based on force feedback

Citations (4)

Publication number Priority date Publication date Assignee Title
CN103300852A (en) * 2012-10-19 2013-09-18 西安电子科技大学 Training method for controlling remotely-controlled trolley in electrocerebral way based on motor imagery
CN104571504A (en) * 2014-12-24 2015-04-29 天津大学 Online brain-machine interface method based on imaginary movement
CN104914994A (en) * 2015-05-15 2015-09-16 中国计量学院 Aircraft control system and flight control method based on steady-state visual evoked potential
CN105468143A (en) * 2015-11-17 2016-04-06 天津大学 Feedback system based on motor imagery brain-computer interface


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research progress of brain-computer interface technology based on virtual reality environments; Kong Liwen et al.; Journal of Electronic Measurement and Instrumentation; 2015-03-31; Vol. 29, No. 3; pp. 317-327 *


Similar Documents

Publication Publication Date Title
CN106959753B (en) Unmanned aerial vehicle virtual control method and system based on motor imagery brain-computer interface
CN112990074B (en) VR-based multi-scene autonomous control mixed brain-computer interface online system
CN109657642A (en) A kind of Mental imagery Method of EEG signals classification and system based on Riemann's distance
KR101293446B1 (en) Electroencephalography Classification Method for Movement Imagination and Apparatus Thereof
CN110059564B (en) Feature extraction method based on power spectral density and cross-correlation entropy spectral density fusion
CN113065526B (en) Electroencephalogram signal classification method based on improved depth residual error grouping convolution network
Lin et al. Implementing remote presence using quadcopter control by a non-invasive BCI device
CN108042132A (en) Brain electrical feature extracting method based on DWT and EMD fusions CSP
CN113128552A (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
Jusas et al. Classification of motor imagery using combination of feature extraction and reduction methods for brain-computer interface
Shrivastwa et al. A brain–computer interface framework based on compressive sensing and deep learning
Zhou et al. Discriminative dictionary learning for EEG signal classification in Brain-computer interface
CN115238796A (en) Motor imagery electroencephalogram signal classification method based on parallel DAMSCN-LSTM
CN114167982A (en) Brain-computer interface system based on tensor space-frequency coupling filtering
CN112884062B (en) Motor imagery classification method and system based on CNN classification model and generated countermeasure network
Liu et al. Motor imagery tasks EEG signals classification using ResNet with multi-time-frequency representation
Hazarika et al. Two-fold feature extraction technique for biomedical signals classification
CN106843509B (en) Brain-computer interface system
CN108388345B (en) Brain electrode optimization method based on wavelet multi-resolution complex network and application thereof
Kehri et al. A facial EMG data analysis for emotion classification based on spectral kurtogram and CNN
CN109144277B (en) Method for constructing intelligent vehicle controlled by brain based on machine learning
CN116700495A (en) Brain-computer interaction method and equipment based on steady-state visual evoked potential and motor imagery
CN105824321A (en) Four-axis aircraft control system and method based on surface electromyogram signals
CN116392148A (en) Electroencephalogram signal classification method, device, equipment and storage medium
Rodriguez-Bermudez et al. Testing Brain—Computer Interfaces with Airplane Pilots under New Motor Imagery Tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant