CN116301308A - Emergency state exercise intention brain-computer interface system based on fusion characteristics

Emergency state exercise intention brain-computer interface system based on fusion characteristics

Info

Publication number
CN116301308A
CN116301308A
Authority
CN
China
Prior art keywords
emergency state
data
brain
user
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211102154.2A
Other languages
Chinese (zh)
Inventor
陈龙
何佳潼
明东
许敏鹏
王仲朋
刘爽
刘秀云
王坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN202211102154.2A priority Critical patent/CN116301308A/en
Publication of CN116301308A publication Critical patent/CN116301308A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Neurosurgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Neurology (AREA)
  • Health & Medical Sciences (AREA)
  • Dermatology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an emergency state movement intention brain-computer interface system based on fusion features, which comprises: combining an active BCI and a reactive BCI to design a new paradigm for inducing emergency-state movement intention that evokes joint ERP and MRCP features; building an electroencephalogram signal acquisition device and recording the user's electroencephalogram data; extracting features from the offline data and establishing a recognition model; and importing the recognition model for online experiments, classifying emergency and non-emergency states by linear discriminant analysis, and outputting a control instruction to external equipment when an emergency state is detected. The invention can effectively improve the classification and recognition accuracy of the BCI system; it brings innovation in paradigm and application scenario to the development of BCI, is expected to realize a fast-responding, reliable and stable BCI system, and to yield considerable social and economic benefits.

Description

Emergency state exercise intention brain-computer interface system based on fusion characteristics
Technical Field
The invention relates to the field of brain-computer interfaces (Brain-Computer Interface, BCI), in particular to an emergency state movement intention brain-computer interface system based on fusion features.
Background
BCI refers to a pathway, constructed artificially between the brain and computers or external devices, that differs from the brain's conventional information-output channels and can replace, restore, enhance, supplement or improve the normal output of the central nervous system. According to how the electroencephalogram (EEG) signal is generated, BCIs can be classified as active, reactive or passive. An active BCI is independent of external events; its output control signal reflects the voluntary activity of the user. The motor imagery brain-computer interface (MI-BCI) is a typical active BCI: by imagining the movement of a body part, the user induces a specific response in the corresponding brain area, and the computer identifies this response and converts the user's movement intention into a control instruction for external equipment to complete a preset task. The exploitable features include: event-related desynchronization/synchronization (ERD/ERS), which characterizes energy changes, and movement-related cortical potentials (MRCPs), which characterize waveform changes. As characteristic potentials reflecting the movement-preparation process, MRCPs allow the user's movement intention to be predicted in advance, during the preparation/planning stage of spontaneous movement, making human-computer interaction more natural and efficient. The detection and identification of movement-preparation responses is therefore of great significance.
At present, research paradigms for motor imagery (MI) mainly focus on imagery performed by users according to experimental prompts; application scenarios involving sudden, urgent situations have not been considered.
A reactive BCI requires external stimulation to induce brain responses with a specific frequency or waveform, and converts the different neural responses into corresponding instruction outputs. Typical reactive BCIs are the ERP-BCI and the SSVEP-BCI (Steady-State Visual Evoked Potential BCI, steady-state visual evoked potential brain-computer interface). Unlike ordinary evoked potentials, event-related potentials (Event-Related Potential, ERP) record the brain's responses to the information carried by a stimulus; they are related to mental activities such as memory and recognition on the basis of attention, and reflect the neurophysiological changes of the brain during cognition. Classical ERP components include N2, P3, P1, N1 and P2; the first two are endogenous components and the latter three are exogenous components. P3, also known as P300, is an endogenous ERP induced by a small-probability event (presented visually, auditorily, tactilely, etc.). Most P300-based BCIs use a stimulus sequence of specific events to induce the user's P300 potential and, exploiting its time-locked character, judge the user's conscious activity by detecting the P300.
Currently, the vast majority of BCIs are "simple" BCIs, i.e., only one of the ERD/ERS, MRCPs or P300 potentials is used as the classification indicator.
Disclosure of Invention
The invention provides an emergency state movement intention brain-computer interface system based on fusion features. Oriented to emergency-state or emergency-event application scenarios, it combines an active BCI and a reactive BCI and designs a new paradigm for inducing emergency-state movement intention, which can evoke joint ERP and MRCP features (the visual presentation of the emergency state or event evokes the ERP features, and the movement intention evokes the MRCPs features). Finally, a pattern recognition algorithm is used to build the emergency state movement intention brain-computer interface system based on fusion features, as described in detail below:
an emergency state motor intent brain-computer interface system based on fusion features, the system comprising:
combining an active BCI and a reactive BCI, and designing a new paradigm for inducing emergency-state movement intention that evokes joint ERP and MRCP features;
building an electroencephalogram signal acquisition device and recording the user's electroencephalogram data; extracting features from the offline data and establishing a recognition model;
importing the recognition model for online experiments, classifying emergency and non-emergency states by linear discriminant analysis, and outputting a control instruction to external equipment when an emergency state is detected.
Wherein the movement intention evokes the MRCPs features and the visual presentation of the emergency state evokes the ERP features; the characteristic signals of the neural electrical activity are extracted, the movement-related and event-related potential features are fused, and the user's state is identified with a pattern recognition algorithm; using the time difference between the EEG response and the actual motor response, the movement intention is recognized quickly.
Further, the new paradigm is: using VR technology, a scene in which a cup slides off the edge of a table is designed, and the sudden fall of the cup simulates an emergency task or event to evoke the ERP and MRCP;
(1) observation task: the user sits on a chair wearing VR glasses and only watches the cup fall in VR, without performing any other task;
(2) keypress task: also wearing VR glasses, the user presses a key as fast as possible at the moment the cup falls, simulating the action of catching the cup, and the reaction time is recorded;
(3) imagery task: similar to the keypress task, except that no real movement is produced; the user only rehearses the cup-catching action once in the mind;
in all three tasks the moment at which the cup falls is random, and the user performs only one of the three tasks within one session.
Wherein, outputting the control instruction to the external equipment when an emergency state is detected comprises:
transmitting the user's real-time classification result via the User Datagram Protocol, realizing instruction communication between MATLAB and the external equipment, and giving the user real-time feedback.
Further, the system defines the moment the cup falls off the table edge as the zero moment, uses the event code as the signal marker, and intercepts and segments the data from 1 second before to 1 second after the zero moment;
meanwhile, 2 s of data in the non-task state are intercepted as the electroencephalogram signal under the non-emergency task, for comparing the difference in electroencephalogram responses between the emergency and non-emergency tasks.
The system transmits the result to Unity3d via the UDP protocol and controls the virtual arm to complete the cup-catching action: when the emergency state is identified, a prediction result is output and transmitted back to Unity to complete the control of the virtual arm, and the next task is started at the same time; when the identification result is a non-emergency task, the system automatically extracts the next group of data for analysis until the session ends.
Further, the MRCPs features evoked by the movement intention share the time-locked character of the ERP; the segmented data are superimposed and averaged separately for the experimental conditions, finally yielding the ERP waveform or the "ERP+MRCP" fusion waveform.
Wherein the system comprises:
inputting the data of the 4 sessions of the imagery task from the offline experiment, preprocessing them and extracting features to obtain an individual model, and storing that model;
inputting the online data and sequentially performing preprocessing, feature extraction and pattern recognition, with the offline model as the training set and the online data features as the test set, and using LDA to classify emergency and non-emergency states.
The technical scheme provided by the invention has the following beneficial effects:
1. compared with the single feature of a traditional BCI paradigm, the fusion features can effectively improve the classification and recognition accuracy of the BCI system; the system brings innovations in paradigm and application scenario to the development of BCI, is expected to realize a fast-responding, reliable and stable BCI system, and to yield considerable social and economic benefits;
2. the invention combines the active and reactive BCI paradigms and focuses on their application as a "switch" for emergency states or events; by evoking and fusing different electroencephalogram features it realizes fast and accurate identification of the emergency state or event, and compared with the single feature of a traditional BCI paradigm it can effectively improve the classification and recognition accuracy of the BCI system;
3. the VR technology adopted by the invention not only enriches the experimental scene but also improves the user's engagement and immersion, which helps to improve the effectiveness of the system; the system overcomes limitations of existing BCIs and is expected to provide innovative ideas and reliable technical support for the development of novel BCI systems.
Drawings
FIG. 1 is a schematic diagram of the framework of the emergency state movement intention brain-computer interface system;
FIG. 2 is a flow chart of the VR-based emergency-state movement intention induction paradigm;
FIG. 3 is a schematic illustration of the experimental scene;
FIG. 4 is a schematic diagram of the time-domain waveform under the observation task;
FIG. 5 is a schematic diagram of the time-domain waveform under the keypress task;
FIG. 6 is a schematic diagram of the time-domain waveform under the imagery task;
fig. 7 is a flow chart of the online experiment of the emergency state movement intention brain-computer interface system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in further detail below.
Compared with a single BCI system, a hybrid brain-computer interface (hBCI) can effectively improve the classification accuracy and execution efficiency of the BCI system.
Virtual reality (VR) is a human-computer interaction technology that constructs a virtual environment through computer simulation, providing the user with multi-sensory simulation; it is characterized by interactivity, immersion and imagination. Research shows that compared with a two-dimensional flat display, a three-dimensional environment creates an immersive effect that lets the user concentrate on the experiment, thereby strengthening the activation of the cerebral cortex; meanwhile, the engaging nature of VR effectively relieves fatigue and improves the user experience.
Therefore, the embodiment of the invention designs, by means of VR technology, a scene in which a cup slides off a table; the sudden fall of the cup simulates an emergency task or event to evoke the ERP and MRCPs, and by integrating active and reactive BCIs the paradigm and application scenarios of existing BCI systems are expanded.
An experimental scene simulating emergency states or events in daily life is developed based on VR technology. Compared with a non-emergency state, an emergency state or event induces special neural electrical activity in the cerebral cortex; the characteristic signals of this activity are extracted, the movement-related and event-related potential features are fused, and the user's state is identified with a pattern recognition algorithm. Meanwhile, by exploiting the time difference between the EEG response and the actual motor response, the movement intention can be recognized quickly, allowing the user to avert the danger; this has important application prospects in daily life and in the military field.
Therefore, the embodiment of the invention combines BCI and VR technology, designs a new paradigm for inducing movement intention in an emergency state, detects and identifies the emergency state or event through time-domain analysis, and converts it into an instruction output; a quick response is thus realized, and by controlling external equipment the goals of reducing danger and protecting safety are achieved.
The technical flow is as follows: design the new paradigm for inducing emergency-state movement intention, simultaneously evoking ERP and MRCPs; build the electroencephalogram signal acquisition device and record the user's electroencephalogram data; extract features from the offline data and establish the recognition model; import the recognition model for online experiments, classify the two states (emergency and non-emergency) with linear discriminant analysis (Linear Discriminant Analysis, LDA), and output a control instruction to the external equipment when an emergency state or event is detected.
1. System architecture and experimental procedure
The overall system design of the embodiment of the invention is shown in fig. 1 and mainly comprises: a stimulus presentation module, a data acquisition module, an electroencephalogram data processing module and an online control module. The stimulus presentation module presents the specific experimental task in VR form, prompting the subject to react at the specified moment. The data acquisition module uses an electroencephalogram acquisition device (Neuroscan) to acquire the raw EEG signal synchronously with the event tags (the event tags are issued by the stimulus presentation module and transmitted to the acquisition device through a parallel port). The main functions of the electroencephalogram data processing module are: preprocessing the raw EEG signals (acquired by the data acquisition module); extracting time-domain waveform features from the preprocessed data and establishing the recognition model; performing classification and recognition on the offline data, and real-time classification and recognition in the online experiment. The online control module transmits the user's real-time classification result via the User Datagram Protocol (UDP), realizing instruction communication between MATLAB and Unity3d or external equipment (such as a virtual arm, a mechanical arm, an unmanned aerial vehicle, etc.) and giving the user real-time feedback (e.g., visual feedback).
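The UDP link itself is lightweight. The following is a minimal sketch of the feedback path, written in Python purely for illustration (the patent's implementation connects MATLAB and Unity3d; the address, port and one-byte payload format here are assumptions):

```python
# Illustrative UDP feedback sketch; address/port/payload are hypothetical.
import socket

UNITY_ADDR = ("127.0.0.1", 8051)  # assumed address/port of the Unity3d listener

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_result(is_emergency: bool) -> None:
    """Send the real-time classification result as a one-byte UDP datagram."""
    payload = b"1" if is_emergency else b"0"
    sock.sendto(payload, UNITY_ADDR)

send_result(True)  # e.g. emergency detected -> Unity drives the virtual arm
```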
The flow chart of the VR-based emergency-state movement intention induction paradigm is shown in fig. 2. The offline experiment includes three tasks: (1) observation task: the user sits on a chair wearing VR glasses and only needs to watch the cup fall in VR, without performing any other task; (2) keypress task: also wearing VR glasses, the user must press a key as fast as possible at the moment the cup falls, simulating the action of catching the cup; the reaction time is recorded; (3) imagery task: similar to the keypress task, the user is required to react when the cup falls; the difference is that no real movement is produced, the user only rehearses the cup-catching action once in the mind. In all three tasks the moment at which the cup falls is random (3-5 s), which eliminates the interference of psychological expectation on the experiment so as to obtain the brain's characteristic response to an emergency (a sketch of this randomized schedule follows). The offline experiment comprises 12 sessions, 4 sessions per task, with 30 trials per session (i.e., the cup is dropped 30 times); the user performs only one of the three tasks within one session and is given a rest after each session. The online experiment comprises only the imagery task: after the user's movement intention is identified, a control instruction is fed back to Unity to control the virtual arm.
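For illustration, the randomized timing described above could be generated as in the following sketch (assuming a uniform draw from the 3-5 s window; the patent does not specify the distribution):

```python
# Hypothetical trial-schedule generator for the paradigm described above.
import random

def session_schedule(n_trials: int = 30, low: float = 3.0, high: float = 5.0):
    """Return one session's cup-drop onsets (seconds), each drawn
    independently so the user cannot anticipate the event."""
    return [random.uniform(low, high) for _ in range(n_trials)]

for task in ("observation", "keypress", "imagery"):
    for session in range(4):            # 4 sessions per task, 12 in total
        onsets = session_schedule()     # 30 trials per session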
2. Function of each module
(1) Stimulus presentation module
The stimulus presentation module builds the three-dimensional scene mainly on Unity3D, software developed by Unity Technologies. The materials required by the scene are modeled in 3ds Max, suitable textures are applied, and the completed models are exported as FBX files for later use. The exported FBX models are then imported into Unity3D, and C# scripts are written in Visual Studio to control the motion state of the cup; prefabs are generated for the other objects in the scene for convenient later use. Finally, the angles of the lighting and the camera are adjusted to complete the VR scene layout. To give the user a more realistic experience, VR glasses are worn during the experiment. The system uses the HTC VIVE, VR glasses jointly developed by HTC and Valve, with technical support provided by Valve and SteamVR. Thus, in the last step, the SteamVR SDK files required by the HTC VIVE are imported into Unity3D and configured to realize the immersive VR experience (as in fig. 3).
(2) Data acquisition module
EEG data are acquired with a SynAmps2 amplifier from Neuroscan, which amplifies the raw signal, together with the SCAN software for data storage. Standard Ag/AgCl electrodes are used; electrode placement follows the international 10-20 system with 64 channels. The acquisition parameters are a 1000 Hz sampling rate and 0.1-200 Hz band-pass filtering, with a 50 Hz notch filter to suppress power-line interference. The vertex is used as the reference and the forehead as ground, and the impedance between scalp and electrodes is kept below 10 kΩ. During the experiment the subject is asked to remain as still as possible and to avoid random eye movements and fine movements unrelated to the task, to ensure the reliability of the collected data.
(3) Electroencephalogram data processing module
a. EEG data preprocessing
The system uses EEGLAB, an electroencephalogram processing toolbox developed on MATLAB, to preprocess the raw EEG signals. The preprocessing operations comprise data format conversion, band-pass filtering, downsampling and data segmentation, as follows (a minimal code sketch is given after the list):
1) Data format conversion: the EEGLAB toolbox converts the raw data into a more general format;
2) Band-pass filtering: the system mainly investigates the N200, P300 and MRCP features; therefore a third-order Butterworth band-pass filter with a 1-10 Hz pass band is applied to remove very-low-frequency and high-frequency interference from the electroencephalogram signal;
3) Downsampling: the useful electroencephalogram information is mainly concentrated below 100 Hz; to improve computational efficiency without distorting the signal, the raw EEG signal is downsampled from 1000 Hz to 200 Hz;
4) Data segmentation: the system defines the moment the visual stimulus appears (i.e., the cup falls off the table edge) as the zero moment, uses the event code as the signal marker, and intercepts and segments the data from 1 second before to 1 second after the zero moment for subsequent analysis. Meanwhile, 2 s of data in the non-task state are intercepted as the electroencephalogram signal under the non-emergency task, for comparing the difference in electroencephalogram responses between the emergency and non-emergency tasks.
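A minimal preprocessing sketch of steps 2)-4), in Python/SciPy for illustration only (the patent uses MATLAB/EEGLAB; the array layout and event-index handling are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, resample_poly

FS_IN, FS_OUT = 1000, 200   # acquisition rate -> downsampled rate

def preprocess(raw: np.ndarray, event_samples: list) -> np.ndarray:
    """raw: (channels, samples) float array at 1000 Hz;
    event_samples: sample indices of the zero moments (cup leaves the table)."""
    # 2) third-order Butterworth band-pass, 1-10 Hz, applied zero-phase
    b, a = butter(3, [1, 10], btype="bandpass", fs=FS_IN)
    filtered = filtfilt(b, a, raw, axis=1)
    # 3) downsample 1000 Hz -> 200 Hz (resample_poly applies anti-aliasing)
    eeg = resample_poly(filtered, up=1, down=FS_IN // FS_OUT, axis=1)
    # 4) segment [-1 s, +1 s] around each zero moment
    epochs = []
    for ev in event_samples:
        ev = ev * FS_OUT // FS_IN          # map event index onto the 200 Hz axis
        epochs.append(eeg[:, ev - FS_OUT:ev + FS_OUT])
    return np.stack(epochs)                # (trials, channels, 2 s of samples)
```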
b. Time domain waveform analysis and classification recognition
1) Superposition averaging
Unlike the random variation of spontaneous electroencephalogram, latency constancy and waveform constancy are two important characteristics of the ERP. Therefore, multiple trials of electroencephalogram evoked by the same stimulus can be superimposed and averaged, so that the irregular spontaneous electroencephalogram or noise cancels out in the superposition, while with enough superpositions the amplitude of the ERP accumulates and stands out. Likewise, the low-frequency MRCP feature induced by movement preparation shares the time-locked character of the ERP. The segmented data are therefore superimposed and averaged separately for each experimental condition, finally yielding the ERP waveform or the "ERP+MRCP" fusion waveform; the calculation is shown in formula (1):

$$\bar{X}_i = \frac{1}{M}\sum_{m=1}^{M} x^{(m)} \tag{1}$$

where $\bar{X}_i$ is the mean of the target data of the $i$-th task ($i = 1,2$) with $M$ trials in each task, $x^{(m)} \in \mathbb{R}^{N_c \times N_t}$ denotes the $m$-th trial, $N_c$ is the number of acquired EEG channels, and $N_t$ is the length of the intercepted signal.
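In code, formula (1) is a single mean over the trial axis; a sketch continuing the preprocessing example above:

```python
import numpy as np

def grand_average(epochs: np.ndarray) -> np.ndarray:
    """Superposition average of the segmented trials of one condition.
    epochs: (trials, N_c, N_t); returns the template of shape (N_c, N_t).
    Spontaneous EEG and noise cancel while the time-locked ERP /
    'ERP+MRCP' component accumulates."""
    return epochs.mean(axis=0)
```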
As shown in figs. 4, 5 and 6, the target (solid line) is the waveform under the emergency task and the non-target (broken line) the waveform under the non-emergency task. Compared with the observation task, the P300 amplitude of both the keypress task and the imagery task is significantly reduced, while the N200 amplitude is increased to some extent. The reason is that the execution/imagination of the movement evokes the MRCP, enhancing some components (N200) and attenuating others (P300); the fusion features can thus be induced.
2) Feature extraction
Discriminative Canonical Pattern Matching (DCPM) comprises two spatial-filtering steps: discriminative spatial pattern analysis (Discriminative Spatial Pattern, DSP) and canonical correlation analysis (Canonical Correlation Analysis, CCA). The overall idea is: the preprocessed training data are first passed through a spatial filter to construct data templates; the test data are passed through the same spatial filter and then matched against the filtered templates; finally a decision classification is made.

First, the template signals $\bar{X}_i$ are obtained by averaging the training set data. Let $X_i$ be the training data set of the $i$-th class task ($i = 1,2$), each containing $M$ trials, where $x^{(m)} \in \mathbb{R}^{N_c \times N_t}$ denotes the $m$-th trial, $N_c$ is the number of acquired EEG channels and $N_t$ the length of the intercepted signal:

$$\bar{X}_i = \frac{1}{M}\sum_{m=1}^{M} x_i^{(m)} \tag{2}$$

The DSP spatial filter is then established. With $S_b$ the between-class scatter matrix and $S_w$ the within-class scatter matrix, computed from the two class templates (the averages of all samples of the two training sets), the optimal solution $U$ is the matrix of eigenvectors of $S_w^{-1} S_b$:

$$S_b = \left(\bar{X}_1 - \bar{X}_2\right)\left(\bar{X}_1 - \bar{X}_2\right)^{T} \tag{3}$$

$$S_w = \sum_{i=1}^{2} \frac{1}{M} \sum_{m=1}^{M} \left(x_i^{(m)} - \bar{X}_i\right)\left(x_i^{(m)} - \bar{X}_i\right)^{T} \tag{4}$$

$$U = \arg\max_{U} \frac{\operatorname{tr}\left(U^{T} S_b U\right)}{\operatorname{tr}\left(U^{T} S_w U\right)} \tag{5}$$

Template matching is performed with the Pearson correlation coefficient and CCA. First, the Pearson correlation coefficients between the DSP-filtered test data $X$ and the filtered templates are calculated:

$$\rho_{1i} = \operatorname{corr}\left(U^{T} X,\ U^{T} \bar{X}_i\right), \quad i = 1,2 \tag{6,7}$$

Then CCA is performed between the DSP-filtered test data and each filtered template, and the Pearson correlation coefficient is recalculated in the new projection space, with $a_i, b_i$ the canonical projection vectors:

$$\left[a_i, b_i\right] = \operatorname{CCA}\left(U^{T} X,\ U^{T} \bar{X}_i\right) \tag{8,9}$$

$$\rho_{2i} = \operatorname{corr}\left(a_i^{T} U^{T} X,\ b_i^{T} U^{T} \bar{X}_i\right), \quad i = 1,2 \tag{10,11}$$

Which class of training samples better matches the test sample is judged by comparing the magnitudes of these correlation values. Let

$$\rho_1 = \rho_{11} + \rho_{21}, \quad \rho_2 = \rho_{12} + \rho_{22} \tag{12}$$

so that a 2-dimensional feature $\left[\rho_1, \rho_2\right]^{T}$ is extracted.
3) Classification recognition
The basic idea of LDA is: during training, the training samples are projected onto a line chosen so that projections of samples of the same class lie as close together as possible while projections of samples of different classes lie as far apart as possible; during prediction, the data to be classified are projected onto this learned line using the projection matrix obtained in training, and the class is judged from the position of the projected point. LDA can thus be viewed as a dimensionality reduction of the feature vector, projecting the multidimensional features onto one dimension. The Fisher discriminant model has the form of formula (13), where $x$ is the feature vector:

$$f(x) = \omega^{T} x + b \tag{13}$$
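A minimal sketch of training and applying the LDA classifier on the 2-dimensional DCPM features (synthetic stand-in data; sklearn's LinearDiscriminantAnalysis implements the Fisher discriminant of formula (13)):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# stand-in features: (n_trials, 2) arrays of [rho_1, rho_2] per class
feats = np.vstack([rng.normal(0.6, 0.1, (30, 2)),
                   rng.normal(0.3, 0.1, (30, 2))])
labels = np.array([1] * 30 + [0] * 30)   # 1 = emergency, 0 = non-emergency

lda = LinearDiscriminantAnalysis().fit(feats, labels)
print(lda.predict([[0.55, 0.28]]))       # class prediction for a new trial
```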
the existing results are shown in the following table, using 10 fold cross-validation, 300ms of data after visual cues was taken for classification. Consistent with the aim of the study, the fusion features (i.e. keystroke tasks and imagination tasks) perform better in classification than the single features (observation tasks):
table 1 classification accuracy under three tasks
Figure BDA0003841023010000091
(4) On-line control module
The main function of the online control module is to identify the user's state in real time through the computer processing program and to output a control instruction when the intention to catch the cup is detected. The result is transmitted to Unity3d via the UDP protocol, and the virtual arm is driven to complete the cup-catching action. The online experimental flow is shown in fig. 7 and mainly comprises two parts: first, the data of the 4 sessions of the imagery task from the offline experiment are input, preprocessed and feature-extracted to obtain an individual model, which is stored; second, the online data are input and sequentially preprocessed, feature-extracted and pattern-recognized. The offline model can here be regarded as the training set and the online data features as the test set, with LDA making the classification prediction (emergency vs. non-emergency). When an emergency state is identified, the prediction result is output and transmitted back to Unity to complete the control of the virtual arm, and the next task is started at the same time; when the identification result is a non-emergency task, the system automatically extracts the next group of data for analysis until the session ends. A condensed sketch of this loop follows.
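Tying the earlier sketches together, the online loop might be organized as follows (all function names come from the previous illustrative sketches, not from the patent's MATLAB code):

```python
def online_loop(next_epoch, U, templates, lda):
    """next_epoch(): blocking read of one preprocessed, segmented trial,
    or None when the session ends."""
    while True:
        epoch = next_epoch()
        if epoch is None:                    # session ended
            break
        feat = dcpm_features(epoch, U, templates)
        if lda.predict([feat])[0] == 1:      # 1 = emergency state
            send_result(True)                # UDP -> Unity3d drives the arm
        # non-emergency: proceed silently to the next group of data
```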
The embodiment of the invention designs an emergency state movement intention brain-computer interface system based on fusion features; it innovates the BCI coding paradigm and can evoke the ERP and MRCPs features simultaneously, thereby improving the recognition accuracy and response time of traditional active BCI systems. It has broad application prospects in daily life, the military battlefield, sports and other fields, and is expected to yield considerable social and economic benefits.
The embodiment of the invention does not limit the models of the devices involved, as long as the devices can perform the functions described above.
Those skilled in the art will appreciate that the drawings are schematic representations of a preferred embodiment only, and that the numbering of the embodiments above is for description only and does not indicate their relative merit.
The foregoing is only a preferred embodiment of the invention and is not intended to limit it; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall be included within its scope of protection.

Claims (8)

1. An emergency state movement intention brain-computer interface system based on fusion features, the system comprising:
combining an active BCI and a reactive BCI, and designing a new paradigm for inducing emergency-state movement intention that evokes joint ERP and MRCP features;
building an electroencephalogram signal acquisition device and recording the user's electroencephalogram data; extracting features from the offline data and establishing a recognition model;
importing the recognition model for online experiments, classifying emergency and non-emergency states by linear discriminant analysis, and outputting a control instruction to external equipment when an emergency state is detected.
2. The emergency state movement intention brain-computer interface system based on fusion features according to claim 1, wherein the movement intention evokes the MRCPs features and the visual presentation of the emergency state evokes the ERP features; the characteristic signals of the neural electrical activity are extracted, the movement-related and event-related potential features are fused, and the user's state is identified with a pattern recognition algorithm; using the time difference between the EEG response and the actual motor response, the movement intention is recognized quickly.
3. The emergency state movement intention brain-computer interface system based on fusion features according to claim 1, wherein the new paradigm is: using VR technology, a scene in which a cup slides off the edge of a table is designed, and the sudden fall of the cup simulates an emergency task or event to evoke the ERP and MRCP;
(1) observation task: the user sits on a chair wearing VR glasses and only watches the cup fall in VR, without performing any other task;
(2) keypress task: also wearing VR glasses, the user presses a key as fast as possible at the moment the cup falls, simulating the action of catching the cup, and the reaction time is recorded;
(3) imagery task: similar to the keypress task, except that no real movement is produced; the user only rehearses the cup-catching action once in the mind;
in all three tasks the moment at which the cup falls is random, and the user performs only one of the three tasks within one session.
4. The emergency state movement intention brain-computer interface system based on fusion features according to claim 1, wherein outputting the control instruction to the external equipment when an emergency state is detected comprises:
transmitting the user's real-time classification result via the User Datagram Protocol, realizing instruction communication between MATLAB and the external equipment, and giving the user real-time feedback.
5. The emergency state movement intention brain-computer interface system based on fusion features according to claim 1, wherein the system defines the moment the cup falls off the table edge as the zero moment, uses the event code as the signal marker, and intercepts and segments the data from 1 second before to 1 second after the zero moment;
meanwhile, 2 s of data in the non-task state are intercepted as the electroencephalogram signal under the non-emergency task, for comparing the difference in electroencephalogram responses between the emergency and non-emergency tasks.
6. The emergency state movement intention brain-computer interface system based on fusion features according to claim 1, wherein the system transmits the result to Unity3d via the UDP protocol and controls the virtual arm to complete the cup-catching action; when the emergency state is identified, a prediction result is output and transmitted back to Unity to complete the control of the virtual arm, and the next task is started at the same time; when the identification result is a non-emergency task, the system automatically extracts the next group of data for analysis until the session ends.
7. The emergency state movement intention brain-computer interface system based on fusion features according to claim 2, wherein the MRCPs features evoked by the movement intention share the time-locked character of the ERP, and the segmented data are superimposed and averaged separately for the experimental conditions, finally yielding the ERP waveform or the "ERP+MRCP" fusion waveform.
8. The emergency state movement intention brain-computer interface system based on fusion features according to claim 1, wherein the system comprises:
inputting the data of the 4 sessions of the imagery task from the offline experiment, preprocessing them and extracting features to obtain an individual model, and storing that model;
inputting the online data and sequentially performing preprocessing, feature extraction and pattern recognition, with the offline model as the training set and the online data features as the test set, and using LDA to classify emergency and non-emergency states.
CN202211102154.2A 2022-09-09 2022-09-09 Emergency state exercise intention brain-computer interface system based on fusion characteristics Pending CN116301308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211102154.2A CN116301308A (en) 2022-09-09 2022-09-09 Emergency state exercise intention brain-computer interface system based on fusion characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211102154.2A CN116301308A (en) 2022-09-09 2022-09-09 Emergency state exercise intention brain-computer interface system based on fusion characteristics

Publications (1)

Publication Number Publication Date
CN116301308A 2023-06-23

Family

ID=86824488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211102154.2A Pending CN116301308A (en) 2022-09-09 2022-09-09 Emergency state exercise intention brain-computer interface system based on fusion characteristics

Country Status (1)

Country Link
CN (1) CN116301308A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117137498A (en) * 2023-09-15 2023-12-01 北京理工大学 Emergency situation detection method based on attention orientation and exercise intention electroencephalogram



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination