CN113343798A - Training method, device, equipment and medium for brain-computer interface classification model - Google Patents
- Publication number
- CN113343798A CN113343798A CN202110570838.4A CN202110570838A CN113343798A CN 113343798 A CN113343798 A CN 113343798A CN 202110570838 A CN202110570838 A CN 202110570838A CN 113343798 A CN113343798 A CN 113343798A
- Authority
- CN
- China
- Prior art keywords
- test data
- data set
- data
- brain
- classification model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/015—Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/02—Preprocessing
- G06F2218/04—Denoising
- G06F2218/06—Denoising by applying a scale-space analysis, e.g. using wavelet analysis
Abstract
The application provides a training method, apparatus, device, and medium for a brain-computer interface classification model. The method comprises: acquiring a test data set, wherein the test data set comprises a plurality of test data for each of a plurality of subjects, and each test data item of a subject is electroencephalogram (EEG) signal data of that subject in a brain-computer interface paradigm experiment; combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set; and training a classification model of the brain-computer interface paradigm with the sample data set to obtain a target classification model. The method can reduce the influence of the subjects' own factors, environmental factors, and the like on the sample data, and thereby improve the accuracy of the trained classification model.
Description
Technical Field
The present application relates to the field of brain-computer interfaces, and in particular, to a method, an apparatus, a device, and a medium for training a classification model of a brain-computer interface.
Background
A Brain-Computer Interface (BCI) is a system that establishes a connection between the brain and a machine, using sensors on the scalp, on the surface of the brain, or within the brain that detect electric or magnetic fields, hemoglobin oxygenation, or other physiological parameters. A brain-computer interface experimental paradigm (hereinafter referred to as a brain-computer interface paradigm) is a technical means for obtaining the features used by a brain-computer interface; commonly used brain-computer interface paradigms include the motor imagery paradigm, the Steady-State Visual Evoked Potential (SSVEP) paradigm, the P300 paradigm, and the like.
In a brain-computer interface paradigm experiment, when a subject receives an external visual stimulus, the neural activity inside the subject's brain changes accordingly, and these evoked responses form distinct spatio-temporal EEG patterns. The EEG signals are collected by a computer and classified by a classification model so as to identify the corresponding stimulation signal. The identified stimulation signals can then be used as instructions for controlling an external device, thereby achieving the identification of specific targets.
When the classification model is trained, test data of a subject in a brain-computer interface paradigm experiment is used as training samples. During testing, the subject is easily influenced by the environment and by his or her own state (fatigue, nervousness, and the like), so the training samples contain errors, which affects the accuracy of the trained classification model.
Disclosure of Invention
The application provides a training method, apparatus, device, and medium for a brain-computer interface classification model, which are used to reduce the influence of the experimental environment, the subject's own factors, and the like on the sample data, and thereby improve the accuracy of the trained classification model.
In a first aspect, an embodiment of the present application provides a method for training a brain-computer interface classification model, including:
acquiring a test data set; the test data set comprises a plurality of test data corresponding to a plurality of subjects respectively, and each test data of each subject is electroencephalogram data of the subject under a brain-computer interface paradigm experiment;
combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set;
and training the classification model of the brain-computer interface paradigm with the sample data set to obtain a target classification model.
Optionally, the brain-computer interface paradigm is a motor imagery paradigm, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules;
combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set, wherein the method comprises the following steps:
In a first mode, dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises one test data item corresponding to each of the plurality of subjects, and splicing the test data in each of the plurality of test data subsets by rows to obtain the sample data set; or
In a second mode, serially combining the plurality of test data corresponding to the plurality of subjects to obtain the sample data set; or
In a third mode, respectively extracting feature data of each test data item in the test data set, and dividing the feature data into a plurality of feature data subsets, wherein each feature data subset comprises one feature data item corresponding to each of the plurality of subjects, and splicing the feature data in each of the plurality of feature data subsets by rows to obtain the sample data set; or
In a fourth mode, respectively extracting feature data of each test data item in the test data set, and serially combining the plurality of feature data corresponding to the plurality of subjects to obtain the sample data set.
Optionally, if the first mode is adopted to combine the plurality of test data in the test data set, the classification model is a first convolutional neural network, wherein the convolution kernel size of the first convolutional neural network is m × Nn, m being the product of a set duration and the sampling frequency, N being the number of electroencephalogram signal acquisition modules, n being the number of subjects, and m, N, and n each being a positive integer; or
If the plurality of test data in the test data set are combined in the second mode according to a preset combination mode, the classification model comprises a second convolutional neural network and a time recursive neural network; wherein the convolution kernel size of the second convolutional neural network is 1 × m; or
If the third mode or the fourth mode is adopted to combine the plurality of test data in the test data set, the classification model is a machine learning classifier.
Optionally, the brain-computer interface paradigm is a steady-state visual evoked potential SSVEP, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules;
combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set, wherein the method comprises the following steps:
dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises one test data item corresponding to each of the plurality of subjects;
and respectively extracting frequency domain characteristic data of each test data in the test data subsets aiming at each test data subset in the test data subsets, and combining the frequency domain characteristic data of each test data according to a preset combination mode to obtain a sample data set.
Optionally, the combining the frequency domain characteristic data according to a preset combination mode includes:
In a fifth mode, splicing the frequency domain characteristic data of each test data item by rows; or
In a sixth mode, splicing the frequency domain characteristic data of each test data item by columns; or
In a seventh mode, respectively normalizing the frequency domain characteristic data of each test data item, and superposing the resulting normalized characteristic data of the test data items at corresponding frequency points.
Optionally, if the plurality of test data in the test data set are combined in the fifth mode, the classification model is a third convolutional neural network that uses three convolutional layers; or
if the plurality of test data in the test data set are combined in the sixth mode, the classification model is a fourth convolutional neural network that uses two convolutional layers; or
if the plurality of test data in the test data set are combined in the seventh mode, the classification model is a fifth convolutional neural network that uses two convolutional layers.
Optionally, the brain-computer interface paradigm is P300, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of leads;
the combining the plurality of test data in the test data set according to a preset combination mode to obtain a sample data set includes:
dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises a plurality of test data items corresponding to each of the plurality of subjects;
and aiming at each test data subset in the plurality of test data subsets, carrying out superposition averaging on corresponding sampling points of each test data in the test data subsets to obtain a sample data set.
In a second aspect, an embodiment of the present application provides a training apparatus for a brain-computer interface classification model, including:
the acquisition module is used for acquiring a test data set; wherein the test data set comprises a plurality of test data for each subject in a plurality of subjects, each test data for each subject being brain electrical signal data for the subject under brain-computer interface paradigm experiments;
the combination module is used for combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set;
and the training module is used for training the classification model of the brain-computer interface paradigm with the sample data set to obtain a target classification model.
Optionally, the brain-computer interface paradigm is a motor imagery paradigm, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules;
the combination module is further configured to:
In a first mode, dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises one test data item corresponding to each of the plurality of subjects, and splicing the test data in each of the plurality of test data subsets by rows to obtain the sample data set; or
In a second mode, serially combining the plurality of test data corresponding to the plurality of subjects to obtain the sample data set; or
In a third mode, respectively extracting feature data of each test data item in the test data set, and dividing the feature data into a plurality of feature data subsets, wherein each feature data subset comprises one feature data item corresponding to each of the plurality of subjects, and splicing the feature data in each of the plurality of feature data subsets by rows to obtain the sample data set; or
In a fourth mode, respectively extracting feature data of each test data item in the test data set, and serially combining the plurality of feature data corresponding to the plurality of subjects to obtain the sample data set.
Optionally, if the combination module combines the plurality of test data in the test data set in the first mode, the classification model is a first convolutional neural network, wherein the convolution kernel size of the first convolutional neural network is m × Nn, m being the product of a set duration and the sampling frequency, N being the number of electroencephalogram signal acquisition modules, n being the number of subjects, and m, N, and n each being a positive integer; or
If the combination module combines the plurality of test data in the test data set in the second mode, the classification model comprises a second convolutional neural network and a temporal recurrent neural network, wherein the convolution kernel size of the second convolutional neural network is 1 × m; or
If the combination module combines the plurality of test data in the test data set in the third mode or the fourth mode, the classification model is a machine learning classifier.
Optionally, the brain-computer interface paradigm is a steady-state visual evoked potential SSVEP, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules;
the combination module further comprises:
a partitioning submodule for dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises one test data item corresponding to each of the plurality of subjects;
and the combination sub-module is used for respectively extracting the frequency domain characteristic data of each test data in the test data subsets aiming at each test data subset in the test data subsets, and combining the frequency domain characteristic data of each test data according to a preset combination mode to obtain a sample data set.
Optionally, the combining sub-module is further configured to:
In a fifth mode, splicing the frequency domain characteristic data of each test data item by rows; or
In a sixth mode, splicing the frequency domain characteristic data of each test data item by columns; or
In a seventh mode, respectively normalizing the frequency domain characteristic data of each test data item, and superposing the resulting normalized characteristic data of the test data items at corresponding frequency points.
Optionally, if the combining submodule combines the plurality of test data in the test data set in the fifth mode, the classification model is a third convolutional neural network that uses three convolutional layers; or
if the combining submodule combines the plurality of test data in the test data set in the sixth mode, the classification model is a fourth convolutional neural network that uses two convolutional layers; or
if the combining submodule combines the plurality of test data in the test data set in the seventh mode, the classification model is a fifth convolutional neural network that uses two convolutional layers.
Optionally, the brain-computer interface paradigm is P300, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of leads;
the combination module is further configured to:
dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises a plurality of test data items corresponding to each of the plurality of subjects;
and aiming at each test data subset in the plurality of test data subsets, carrying out superposition averaging on corresponding sampling points of each test data in the test data subsets to obtain a sample data set.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the processor is caused to implement the steps of the method for training a brain-computer interface classification model as provided in the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the steps of the method for training a brain-computer interface classification model as provided in the first aspect are implemented.
The training method, the device, the equipment and the medium for the brain-computer interface classification model have the following beneficial effects:
the method comprises the steps of combining a plurality of test data corresponding to a plurality of testees in a test data set according to a preset combination mode to obtain a sample data set, and then training a classification model of a brain-computer interface normal form by adopting the sample data set to obtain a target classification model. Because the sample data set comprises the test data of a plurality of subjects, compared with the test data of a single subject as sample data, the influence of the experimental environment and the factors of the subject on the sample data can be reduced, and the accuracy of the trained classification model is further improved.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a flowchart of a method for training a brain-computer interface classification model according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a test data set according to an embodiment of the present application;
fig. 3 is a schematic diagram of a training apparatus for a brain-computer interface classification model according to an embodiment of the present application;
fig. 4 is a schematic view of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
To facilitate better understanding of the technical solutions of the present application for those skilled in the art, the following terms related to the present application are introduced.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein.
The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the embodiments of the present application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. "A plurality of" means two or more, and other terms should be understood similarly. The preferred embodiments described herein are only used to explain the present application and are not intended to limit it, and features in the embodiments and examples of the present application may be combined with each other without conflict.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
(1) Motor imagery: the intention of a user is judged by detecting the electroencephalogram (EEG) generated when the user performs motor imagery, thereby realizing direct communication and control between the human brain and external equipment. The principle of a BCI system based on Sensorimotor Rhythms (SMRs) is that the execution and imagination of limb movements affect the rhythmic activity recorded over the sensorimotor cortex of the brain. The increase and decrease of sensorimotor rhythms are called Event-Related Synchronization (ERS) and Event-Related Desynchronization (ERD), respectively, and these changes are usually localized to the cortical areas representing specific parts of the body. SMR changes associated with Motor Imagery (MI) can therefore be used as valid control signals for a BCI. The classification basis of an MI-BCI system is that ERD or ERS phenomena occur in certain characteristic frequency bands of the EEG signal: the former represents activation of the motor function region, and the latter represents the "idle" state of the motor function region. For imagined movements of the left and right hands, the corresponding motor areas of the brain show a clear contralateral dominance; the active brain region corresponding to imagined foot movement is usually at the central Cz position, while the regions corresponding to C3 (left) and C4 (right) are rarely activated and remain mostly in the "idle" state. Based on these characteristics, the imagined movements can be converted into output instructions for controlling the BCI system.
(2) SSVEP: when a person receives a visual stimulus at a fixed frequency, the visual cortex of the brain produces a continuous response related to the stimulus frequency (at the fundamental frequency or at harmonics of the stimulus frequency). The steady-state visual evoked potential is a stable EEG oscillation induced by rapid, repeated stimulation; common stimulus sources include flashing lights, light-emitting diodes, and checkerboard patterns on a display, and at present the stimuli are usually presented on a display. In a standard SSVEP-BCI stimulation interface, the subject sees different targets in the visual field flickering at different frequencies, and the continuous stimulation induces a steady-state response in the brain, which is generally analyzed with frequency-analysis methods. Frequency analysis typically shows response peaks at the fundamental and harmonic frequencies of the stimulation frequency. For example, if the stimulation frequency is 14 Hz, the frequency-domain components of the EEG contain components at 14, 28, and 42 Hz. The visual area stimulated by SSVEP is located in the occipital region, so in a typical experiment electrodes are attached to the occipital region of the subject (Oz, O1, O2, etc.), and the subject gazes at a target flickering at a certain frequency, the target representing an output instruction of the BCI system. The BCI system computes the frequency spectrum of the occipital EEG, matches the spectral peaks to the stimulation frequency of each target, and then outputs the target corresponding to that stimulation frequency.
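The spectral-peak matching described above can be illustrated with a short sketch. This is not part of the patent; it is a minimal illustration assuming a single occipital channel, a 250 Hz sampling rate, and illustrative candidate stimulation frequencies.

```python
import numpy as np

def match_ssvep_target(eeg, fs=250.0, candidate_freqs=(8.0, 10.0, 12.0, 14.0)):
    """Pick the candidate stimulation frequency whose fundamental has the
    largest spectral peak in a single-channel occipital EEG segment."""
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in candidate_freqs:
        # sum the energy in a narrow band around the fundamental frequency
        band = (freqs > f - 0.2) & (freqs < f + 0.2)
        scores.append(spectrum[band].sum())
    return candidate_freqs[int(np.argmax(scores))]

print(match_ssvep_target(np.random.randn(1000)))
```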
(3) P300: a positive deflection component that appears in the scalp-recorded EEG after a stimulus presented in a particular context. P300 is generally largest over the centro-parietal cortex and decays gradually with increasing distance from this region. The specific event paradigm that induces the P300 Event-Related Potential (ERP) is called the oddball paradigm; the smaller the probability of an event, the higher the P300 peak. Common P300 paradigms include auditory P300 and visual P300, with the visual P300 paradigm being more widely used at present. For example, the rows and columns of a stimulation interface flash in random order while the subject is required to watch the target to be selected, the target flashes being the small-probability stimuli. When the target flashes, a P300 signal of the subject can be detected, and the target is output after several superpositions. The target can then be mapped to a control instruction of the BCI system.
In a brain-computer interface paradigm experiment, the subject is easily influenced by the environment and by his or her own state (fatigue, nervousness, and the like) during testing, which makes the sample data inaccurate; in addition, the stimulation time required to induce effective features during testing may be long, which introduces large errors into the sample data and thereby affects the accuracy of the trained classification model. In view of these problems, the embodiments of the application provide a training method, apparatus, and device for a brain-computer interface classification model that can reduce the influence of the experimental environment and the subject's own factors on the sample data, and thereby improve the accuracy of the trained classification model.
The following describes in detail a method, an apparatus, and a device for training a brain-computer interface classification model in the embodiments of the present application with reference to the accompanying drawings.
Example 1
Fig. 1 illustrates a training method of a brain-computer interface classification model according to an embodiment of the present application, and referring to fig. 1, the training method of the brain-computer interface classification model includes the following steps:
step S101, acquiring a test data set; wherein the test data set comprises a plurality of test data of each subject in a plurality of subjects, and each test data of each subject is brain electrical signal data of the subject under brain-computer interface paradigm experiment.
The brain-computer interface paradigm can be a motor imagery paradigm, an SSVEP paradigm, a P300 paradigm, or the like. In a brain-computer interface paradigm experiment, a plurality of leads are arranged on the scalp of a subject (a lead can be understood as the placement position of an electrode on the scalp and the connection between the electrode and an amplifier when the EEG is recorded). The plurality of subjects each receive the same test tasks, and the test tasks are determined by the corresponding brain-computer interface paradigm; that is, each brain-computer interface paradigm corresponds to a fixed test task. For each subject, EEG signal data are then collected through the plurality of leads (the data comprise multiple EEG channels, one channel collected by each lead). Each subject can receive a plurality of test tasks, so a plurality of test data corresponding to the plurality of subjects are obtained. Alternatively, the test data set may be a public data set.
Illustratively, a test data set of 3 subjects is acquired, each subject corresponding to n test data items, giving 3n test data items in total. Each test data item may be represented as an N × S matrix, where N is the number of leads and S is the number of sampling points per lead, and N, S, and n are positive integers greater than 1.
Step S102, combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set.
In this step, for example, with each test data item represented as an N × S matrix, a plurality of test data items may be spliced by rows, spliced by columns, or serially combined across the plurality of subjects.
It should be noted that, a plurality of test data may be directly spliced, or feature data of each test data may be respectively extracted, and then the plurality of feature data are combined, which will be further described in the following embodiments.
Step S103, training the classification model of the brain-computer interface paradigm with the sample data set to obtain a target classification model.
After the sample data set is obtained, it can be divided into a training subset, a test subset, and a validation subset, which are used to train the classification model of the brain-computer interface paradigm and obtain a target classification model.
In the embodiment of the application, the sample data set is formed by combining a plurality of test data of a plurality of subjects, so that compared with the case that a plurality of test data of a single subject are used as the sample data set, the influence of the experimental environment, the factors of the subject and the like on the sample data can be reduced, and the accuracy of the trained classification model is further improved.
As an optional implementation manner, the brain-computer interface paradigm may be a motor imagery paradigm, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules, wherein each electroencephalogram signal acquisition module may acquire the plurality of sampling point data through the leads, so as to form one path of electroencephalogram signal data.
It should be noted that the test data in the motor imagery paradigm include two categories. If the on-screen paradigm includes two classes of imagery, one being left-hand motor imagery and the other right-hand motor imagery, the two categories of test data are left-hand test data and right-hand test data, respectively, and the plurality of test data of each subject includes both categories.
The step S102 may adopt one of the following four ways.
In the first mode, the test data set is divided into a plurality of test data subsets, each test data subset comprising one test data item corresponding to each of the plurality of subjects; the test data in each of the plurality of test data subsets are then spliced by rows to obtain the sample data set.
Illustratively, the plurality of subjects are subject 1, subject 2, and subject 3, respectively, each subset of test data may include one test data of subject 1, one test data of subject 2, and one test data of subject 3, and each subset of test data is the same category of test data of the plurality of subjects, such as a plurality of left-hand test data or a plurality of right-hand test data.
The above-mentioned manner of splicing by rows may be understood as a parallel combination manner. For example, each subject corresponds to 144 pieces of test data, including 72 pieces of left-hand test data and 72 pieces of right-hand test data, each piece of test data may be represented as 3 × 1000, where 3 is the number of leads and 1000 is the number of sampling points, and one piece of left-hand test data corresponding to each of subject 1, subject 2, and subject 3 is spliced by rows, so that the obtained combined test data may be represented as 9 × 1000, and one piece of right-hand test data corresponding to each of subject 1, subject 2, and subject 3 is spliced by rows, so that the obtained combined test data may also be represented as 9 × 1000, and finally 144 pieces of 9 × 1000 combined test data may be formed and may be used as a sample data set.
In addition, the test data in each of the test data subsets can be spliced by columns to obtain a sample data set. For example, one left-hand test data item from each of subject 1, subject 2, and subject 3 concatenated by columns gives combined test data that can be represented as 3 × 3000.
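As a rough illustration of the row-wise and column-wise splicing in the first mode (not taken from the patent), the following sketch assumes each trial is stored as a 3 × 1000 NumPy array (leads × samples) and uses random data in place of real EEG:

```python
import numpy as np

# one left-hand trial (3 leads x 1000 samples) from each of three subjects
trial_s1 = np.random.randn(3, 1000)
trial_s2 = np.random.randn(3, 1000)
trial_s3 = np.random.randn(3, 1000)

# first mode: splice by rows -> one 9 x 1000 combined sample
row_combined = np.concatenate([trial_s1, trial_s2, trial_s3], axis=0)

# alternative: splice by columns -> one 3 x 3000 combined sample
col_combined = np.concatenate([trial_s1, trial_s2, trial_s3], axis=1)

print(row_combined.shape, col_combined.shape)  # (9, 1000) (3, 3000)
```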
In the second mode, the plurality of test data corresponding to the plurality of subjects are serially combined to obtain the sample data set.
Illustratively, continuing the example from the first mode, subject 1, subject 2, and subject 3 each have 72 left-hand test data items and 72 right-hand test data items, each represented as 3 × 1000. Serially combining the 72 left-hand test data items of subject 1, the 72 of subject 2, and the 72 of subject 3 yields 216 combined test data items of size 3 × 1000; similarly, serially combining the right-hand test data yields another 216 combined test data items of size 3 × 1000, giving 432 combined test data items of size 3 × 1000 in total as the sample data set.
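A similar sketch for the serial combination of the second mode, again assuming 3 × 1000 trials and random placeholder data:

```python
import numpy as np

# 72 left-hand trials per subject, each trial 3 leads x 1000 samples
left_s1 = np.random.randn(72, 3, 1000)
left_s2 = np.random.randn(72, 3, 1000)
left_s3 = np.random.randn(72, 3, 1000)

# second mode: serial combination keeps the trial shape and triples the count
left_combined = np.concatenate([left_s1, left_s2, left_s3], axis=0)
print(left_combined.shape)  # (216, 3, 1000)
```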
In the third mode, feature data of each test data item in the test data set are extracted, and the feature data are divided into a plurality of feature data subsets, each feature data subset comprising one feature data item corresponding to each subject; the feature data in each of the plurality of feature data subsets are then spliced by rows to obtain the sample data set.
In this way, the feature data of each test data in the test data set can be extracted by wavelet transform. And splicing the characteristic data in each characteristic data subset according to rows, wherein the splicing mode is similar to that in the first mode, and only the test data in the first mode is replaced by the characteristic data, which is not described herein again.
In the fourth mode, feature data of each test data item in the test data set are extracted, and the plurality of feature data corresponding to the plurality of subjects are serially combined to obtain the sample data set.
In this way, the feature data of each test data in the test data set can be extracted by wavelet transform. And serially combining a plurality of characteristic data corresponding to a plurality of subjects, wherein the splicing mode is similar to the splicing mode in the second mode, and only the test data in the second mode is replaced by the characteristic data, which is not described again.
It should be noted that the embodiment of the present application is not limited to the feature extraction method, and other feature extraction methods may be adopted in addition to the wavelet transform.
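For illustration only, a wavelet-based feature extraction along the lines described above could look like the following sketch; the wavelet family ("db4") and decomposition level are assumptions, not values given in the patent.

```python
import numpy as np
import pywt

def wavelet_features(trial, wavelet="db4", level=4):
    """Flatten the multi-level wavelet decomposition of each lead into one
    feature vector per lead (wavelet family and level are illustrative)."""
    coeffs = pywt.wavedec(trial, wavelet=wavelet, level=level, axis=-1)
    return np.concatenate([c.reshape(trial.shape[0], -1) for c in coeffs], axis=1)

features = wavelet_features(np.random.randn(3, 1000))
print(features.shape)  # (3, total number of wavelet coefficients per lead)
```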
In the embodiment of the application, under a motor imagery paradigm, a plurality of test data corresponding to a plurality of subjects are combined to obtain a sample data set, so that the influence of error factors when a single subject performs a test task can be reduced, and the accuracy of the sample data is improved.
The classification models of the brain-computer interface paradigm in the different combinations are described below.
In a possible implementation, if the plurality of test data in the test data set are combined in the first mode, the classification model may be a first convolutional neural network whose convolution kernel size is m × Nn, where m is the product of a set duration and the sampling frequency, N is the number of electroencephalogram signal acquisition modules, n is the number of subjects, and m, N, and n are each positive integers.
Here, the sampling frequency is the sampling frequency of the electroencephalogram signal acquisition modules (e.g., the leads). The set duration can be chosen as needed; for example, a set duration of 0.2 s means that one convolution kernel in the convolutional layer of the first convolutional neural network covers 0.2 s of data. If the sampling frequency is 250 Hz, the number of subjects n is 3, and the number of electroencephalogram signal acquisition modules N is 3, the convolution kernel size of the first convolutional neural network can be 50 × 9 with a convolution stride of 1. The output of the convolutional layer is passed through a pooling layer, a fully connected layer, and a softmax classifier for binary classification, and the classification result, such as left hand or right hand, is output. The number of convolution kernels in the convolutional layer can be set as needed and is not limited here.
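A hedged PyTorch sketch of such a network is shown below. The 9 × 1000 input, the kernel spanning all 9 rows and 50 time samples, and the binary output follow the example above; the kernel count, pooling width, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FirstCNN(nn.Module):
    """Sketch of the 'first convolutional neural network': one convolution
    whose kernel spans all 9 rows (3 subjects x 3 leads) and 0.2 s of samples
    (50 points at 250 Hz), followed by pooling and a binary classifier."""
    def __init__(self, n_rows=9, kernel_time=50, n_kernels=16, n_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(1, n_kernels, kernel_size=(n_rows, kernel_time), stride=1)
        self.pool = nn.AdaptiveAvgPool2d((1, 20))
        self.fc = nn.Linear(n_kernels * 20, n_classes)

    def forward(self, x):               # x: (batch, 1, 9, 1000)
        h = torch.relu(self.conv(x))    # (batch, n_kernels, 1, 951)
        h = self.pool(h).flatten(1)
        return self.fc(h)               # softmax is applied in the loss

logits = FirstCNN()(torch.randn(4, 1, 9, 1000))
print(logits.shape)  # torch.Size([4, 2])
```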
In another possible implementation, if the plurality of test data in the test data set are combined in the second mode, the classification model comprises a second convolutional neural network and a temporal recurrent neural network, where the convolution kernel size of the second convolutional neural network is 1 × m.
For example, if m is 50, the convolution kernel size of the second convolutional neural network is 1 × 50, meaning that one temporal convolution covers 0.2 s of time-domain data; the number of convolution kernels can be set as needed and is not limited here. The temporal recurrent neural network may be a Long Short-Term Memory network (LSTM) or another temporal recurrent neural network, which is not limited here; optionally, two LSTM layers may be used. The convolution kernels of the second convolutional neural network extract features from each 0.2 s window of time-domain data, and the extracted features are then fed into the temporal recurrent neural network.
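The convolution-plus-LSTM combination could be sketched as follows; the kernel count, stride, hidden size, and the choice to classify from the last time step are assumptions made for illustration, not details taken from the patent.

```python
import torch
import torch.nn as nn

class ConvLSTM(nn.Module):
    """Sketch of the second combination: a 1 x 50 temporal convolution applied
    per lead, followed by a two-layer LSTM over the resulting time steps."""
    def __init__(self, n_leads=3, n_kernels=8, hidden=64, n_classes=2):
        super().__init__()
        self.conv = nn.Conv2d(1, n_kernels, kernel_size=(1, 50), stride=(1, 25))
        self.lstm = nn.LSTM(input_size=n_kernels * n_leads, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, 1, 3, 1000)
        h = torch.relu(self.conv(x))           # (batch, 8, 3, 39)
        h = h.permute(0, 3, 1, 2).flatten(2)   # (batch, 39, 24), time-major
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])             # classify from the last step

print(ConvLSTM()(torch.randn(4, 1, 3, 1000)).shape)  # torch.Size([4, 2])
```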
In another possible implementation, if the plurality of test data in the test data set are combined in the third mode or the fourth mode, the classification model is a machine learning classifier.
For example, the machine learning classifier may be a Support Vector Machine (SVM) classifier, another binary classification model, or a deep learning model, which is not limited here.
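A minimal example of feeding combined feature data into such a classifier, using scikit-learn's SVC and entirely synthetic data (the feature dimension and kernel choice are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

# hypothetical wavelet-feature matrix: 432 combined trials x flattened features
X = np.random.randn(432, 3 * 256)
y = np.random.randint(0, 2, size=432)   # 0 = left hand, 1 = right hand

clf = SVC(kernel="rbf")                  # kernel choice is illustrative
clf.fit(X, y)
print(clf.score(X, y))
```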
As an optional implementation manner, the brain-computer interface paradigm may be a steady-state visual evoked potential SSVEP, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules, where each electroencephalogram signal acquisition module may acquire the plurality of sampling point data through the leads to form one path of electroencephalogram signal data.
It should be noted that the on-screen paradigm in the SSVEP paradigm may include a plurality of categories, i.e., the test data include a plurality of categories, each category representing a stimulus at a corresponding frequency; the stimulus sources may be flashing lights, light-emitting diodes, checkerboard patterns on a display, and the like.
The step S102 of combining a plurality of test data in the test data set according to a preset combination manner to obtain the sample data set may include the following steps:
A. Dividing the test data set into a plurality of test data subsets, each test data subset comprising one test data item corresponding to each of the plurality of subjects.
If the SSVEP paradigm includes 20 categories of test data, the plurality of test data of each subject includes test data of all 20 categories, each category representing test data under a stimulus at a specified frequency, and each test data subset comprises test data of the same category from the plurality of subjects.
B. And respectively extracting the frequency domain characteristic data of each test data in the test data subsets aiming at each test data subset in the test data subsets, and combining the frequency domain characteristic data of each test data according to a preset combination mode to obtain a sample data set.
In this step, the frequency domain feature data of each test data in the test data subset may be extracted by fast fourier transform. The frequency domain characteristic data of each test data are combined according to a preset combination mode, and one of the following three modes can be adopted.
In the fifth mode, the frequency domain characteristic data of the test data items are spliced by rows.
For example, each test data item is represented as 3 × 1000, where 3 is the number of leads and 1000 is the number of sampling points per lead. After the frequency domain features of a test data item are extracted by fast Fourier transform, the resulting frequency domain feature data can be represented as 3 × 300, where 300 is the number of frequency-domain points covering 0 Hz to 50 Hz obtained from the fast Fourier transform of the test data (the raw EEG signal).
The splicing in the fifth mode is similar to that in the first mode, except that the test data of the first mode are replaced with frequency domain feature data. For example, the same-category frequency domain feature data of subject 1, subject 2, and subject 3 are each represented as 3 × 300; splicing these three feature matrices by rows gives combined frequency domain feature data that can be represented as 9 × 300, where 9 corresponds to the 9 leads. Details are not repeated here.
In the sixth mode, the frequency domain characteristic data of the test data items are spliced by columns.
For example, the frequency domain feature data of the same category corresponding to each of the subject 1, the subject 2, and the subject 3 is denoted by 3 × 300, and the combined frequency domain feature data obtained by concatenating the 3 frequency domain feature data is denoted by 3 × 900.
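The FFT-based feature extraction and the row/column splicing of the fifth and sixth modes can be sketched as follows; the 6 s, 250 Hz trial length and the 300-bin truncation follow the examples in this section, while the random data and the helper name are illustrative assumptions.

```python
import numpy as np

def frequency_features(trial, n_bins=300):
    """Magnitude spectrum per lead, truncated to the first n_bins frequency
    points (roughly 0-50 Hz for a 6 s trial sampled at 250 Hz)."""
    spectrum = np.abs(np.fft.rfft(trial, axis=-1))
    return spectrum[:, :n_bins]                       # (n_leads, n_bins)

f1, f2, f3 = (frequency_features(np.random.randn(3, 1500)) for _ in range(3))

# fifth mode: splice the three subjects' features by rows  -> (9, 300)
# sixth mode: splice them by columns                       -> (3, 900)
print(np.concatenate([f1, f2, f3], axis=0).shape)  # (9, 300)
print(np.concatenate([f1, f2, f3], axis=1).shape)  # (3, 900)
```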
In the seventh mode, the frequency domain characteristic data of each test data item are normalized, and the resulting normalized characteristic data of the test data items are superposed at corresponding frequency points.
For example, as in the fifth mode, each test data item is represented as 3 × 1000, and the frequency domain feature data obtained by fast Fourier transform are represented as 3 × 300, where 300 is the number of frequency-domain points covering 0 Hz to 50 Hz. For the frequency domain feature data of each test data item, points 30 to 300 — i.e., the data within 5 Hz to 50 Hz — can be kept, and the normalized feature data are then obtained with the normalization equation (1): x' = (x − min(x)) / (max(x) − min(x)). The frequency domain feature data of each test data item comprise data at a plurality of frequency points, and the normalized feature data comprise the normalized data at each frequency point.
Here, x' is the normalized value of a frequency point in the normalized feature data, x is the value of that frequency point in the frequency domain feature data, max(x) is the maximum value over the frequency points of the frequency domain feature data, and min(x) is the minimum value over the frequency points of the frequency domain feature data.
For each test data subset, the normalized feature data of subject 1, subject 2, and subject 3 are obtained; one normalized feature data item from each of subject 1, subject 2, and subject 3 are then added frequency point by frequency point to obtain fused feature data, which can be used as one sample data item.
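A sketch of the seventh mode under the same assumptions (the bin range 30–300 and the min-max normalization follow the description above; the data are synthetic placeholders):

```python
import numpy as np

def normalize_per_lead(feat):
    """Min-max normalization of equation (1), applied lead by lead over the
    frequency points."""
    mn = feat.min(axis=-1, keepdims=True)
    mx = feat.max(axis=-1, keepdims=True)
    return (feat - mn) / (mx - mn)

# seventh mode: keep bins 30..300 (about 5-50 Hz), normalize, then add the
# three subjects' normalized features frequency point by frequency point
f1, f2, f3 = (np.abs(np.random.randn(3, 300)) for _ in range(3))
fused = sum(normalize_per_lead(f[:, 30:300]) for f in (f1, f2, f3))
print(fused.shape)  # (3, 270)
```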
In the embodiment of the application, under the SSVEP paradigm, a plurality of test data corresponding to a plurality of subjects are combined to obtain a sample data set, so that the stimulation time required for inducing effective features can be reduced, the influence of error factors when a single subject performs a test task is reduced, and the accuracy of the sample data is improved.
In a possible implementation, if the plurality of test data in the test data set are combined in the fifth mode, the classification model is a third convolutional neural network that uses three convolutional layers.
For example, the sampling frequency of each test data item is 250 Hz, the data duration is 6 s, and the number of classes is 20, i.e., the test data include 20 categories. Applying a fast Fourier transform to each test data item gives a spectrum with a frequency-domain resolution of about 0.16 Hz, i.e., 6 frequency points per 1 Hz.
In the fifth mode, the combined frequency domain feature data can be represented as 9 × 300, i.e., the input size is 9 × 300. In this case, the convolution kernel size of the first convolutional layer may be 9 × 1; the number of convolution kernels can be set as needed and is not limited here. The output size of the first convolutional layer is 3 × 300.
The convolution kernel size of the second convolutional layer may be 3 × 1, and that of the third convolutional layer may be 1 × 10; the numbers of convolution kernels can be set as needed and are not limited here.
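One way such a three-layer network could be realized in PyTorch is sketched below; the re-stacking of the first layer's three feature maps into a 3 × 300 map, the kernel counts, and the 20-way fully connected output are assumptions used to make the dimensions work, not details taken from the patent.

```python
import torch
import torch.nn as nn

class ThirdCNN(nn.Module):
    """Sketch of the three-layer CNN for the fifth mode (input 9 x 300):
    a 9 x 1 spatial convolution whose 3 feature maps are re-stacked as a
    3 x 300 map, a 3 x 1 convolution across those maps, then a 1 x 10
    temporal convolution."""
    def __init__(self, n_classes=20):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 3, kernel_size=(9, 1))
        self.conv2 = nn.Conv2d(1, 8, kernel_size=(3, 1))
        self.conv3 = nn.Conv2d(8, 16, kernel_size=(1, 10))
        self.fc = nn.Linear(16 * 291, n_classes)

    def forward(self, x):                 # x: (batch, 1, 9, 300)
        h = torch.relu(self.conv1(x))     # (batch, 3, 1, 300)
        h = h.permute(0, 2, 1, 3)         # (batch, 1, 3, 300)
        h = torch.relu(self.conv2(h))     # (batch, 8, 1, 300)
        h = torch.relu(self.conv3(h))     # (batch, 16, 1, 291)
        return self.fc(h.flatten(1))

print(ThirdCNN()(torch.randn(2, 1, 9, 300)).shape)  # torch.Size([2, 20])
```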
In another possible implementation, if the plurality of test data in the test data set are combined in the sixth mode, the classification model is a fourth convolutional neural network that uses two convolutional layers.
For example, in the sixth mode the combined frequency domain feature data can be represented as 3 × 900, i.e., the input size is 3 × 900. The convolution kernel size of the first convolutional layer may be 3 × 1 and that of the second convolutional layer 1 × 10; the numbers of convolution kernels can be set as needed and are not limited here.
In another possible implementation, if the plurality of test data in the test data set are combined in the seventh mode, the classification model is a fifth convolutional neural network that uses two convolutional layers.
For example, in the seventh mode the fused feature data can be represented as 3 × 270, i.e., the input size is 3 × 270. The convolution kernel size of the first convolutional layer may be 3 × 1 and that of the second convolutional layer 1 × 10; the numbers of convolution kernels can be set as needed and are not limited here.
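Since the fourth and fifth convolutional neural networks share the same two-layer structure and differ only in input width (3 × 900 versus 3 × 270), a single parameterized sketch covers both; the kernel counts and the 20-way output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoLayerCNN(nn.Module):
    """Sketch of the two-layer CNNs for the sixth mode (input 3 x 900) and the
    seventh mode (input 3 x 270): a 3 x 1 convolution across leads followed by
    a 1 x 10 temporal convolution."""
    def __init__(self, width, n_kernels=16, n_classes=20):
        super().__init__()
        self.conv1 = nn.Conv2d(1, n_kernels, kernel_size=(3, 1))
        self.conv2 = nn.Conv2d(n_kernels, n_kernels, kernel_size=(1, 10))
        self.fc = nn.Linear(n_kernels * (width - 9), n_classes)

    def forward(self, x):                  # x: (batch, 1, 3, width)
        h = torch.relu(self.conv1(x))      # (batch, k, 1, width)
        h = torch.relu(self.conv2(h))      # (batch, k, 1, width - 9)
        return self.fc(h.flatten(1))

print(TwoLayerCNN(900)(torch.randn(2, 1, 3, 900)).shape)  # torch.Size([2, 20])
print(TwoLayerCNN(270)(torch.randn(2, 1, 3, 270)).shape)  # torch.Size([2, 20])
```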
As an alternative embodiment, the brain-computer interface paradigm may be P300, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of leads.
The step S102 of combining a plurality of test data in the test data set according to a preset combination manner to obtain the sample data set may include the following steps:
(1) Dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises a plurality of test data corresponding to each of the plurality of subjects.
Illustratively, each test data under the P300 paradigm may be represented as 3 × 1000. For subject 1, subject 2 and subject 3, the 7 test data of subject 1, the 7 test data of subject 2 and the 7 test data of subject 3 are combined into one test data subset.
(2) For each test data subset in the plurality of test data subsets, performing superposition averaging on corresponding sampling points of each test data in the test data subset to obtain a sample data set.
For example, one sample data with a P300 waveform can be obtained by performing superposition averaging on the corresponding sampling points of the 7 test data of subject 1, the 7 test data of subject 2 and the 7 test data of subject 3.
In the related art, the corresponding sampling points of multiple test data of a single subject need to be superposed and averaged; for example, the corresponding sampling points of 20 test data of a single subject are added and then averaged, and the resulting P300 waveform has a P300 peak. In the embodiment of the present application, the test data of a plurality of subjects, for example 3 subjects, are used: the 7 test data of subject 1, the 7 test data of subject 2 and the 7 test data of subject 3 are superposed and averaged at corresponding sampling points. In this way, the test duration of a single subject can be reduced and fewer trials per subject are needed for the superposition averaging, while the obtained P300 waveform still has a P300 peak.
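The cross-subject superposition averaging described above amounts to a simple mean over corresponding sampling points; a minimal numpy sketch (the 3 leads × 1000 sampling points per trial follow the illustrative numbers used earlier, everything else is synthetic):

```python
import numpy as np

def superpose_average(trials):
    """Average corresponding sampling points of several P300 trials.
    trials: list of (leads x samples) arrays, e.g. 7 trials from each of 3 subjects."""
    return np.mean(np.stack(trials, axis=0), axis=0)

# 3 subjects x 7 trials each, every trial with 3 leads and 1000 sampling points
trials = [np.random.randn(3, 1000) for _ in range(3 * 7)]
sample = superpose_average(trials)          # one sample data, shape (3, 1000)
```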
The following is an exemplary description of a brain-computer interface system based on motor imagery.
As shown in fig. 2, the brain-computer interface system based on motor imagery includes an EEG signal acquisition module 21, a signal processing and decoding module 22 and a control command output module 23. The EEG signal acquisition module 21 is used for acquiring and amplifying EEG signal data of a subject; the signal processing and decoding module 22 is used for decoding the electroencephalogram signal data, performing filtering preprocessing on the decoded electroencephalogram signal data, extracting features of the electroencephalogram signal data, and then classifying the features through a classification model; and the control command output module 23 is used for sending a control command to the external device according to the classification and identification result, so that the external device is controlled according to the subject's intention.
The classification model is a classification model obtained by training in the above embodiments of the present application.
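A minimal sketch of this pipeline is given below. The 8-30 Hz band-pass cut-offs, the log-variance features and the sklearn-style `model.predict` call are assumptions for illustration; `send_command` is a hypothetical placeholder for the control command output module.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs=250.0, lo=8.0, hi=30.0):
    """Filtering preprocessing step (the 8-30 Hz band is an assumption)."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def decode_and_control(eeg, model, send_command):
    """Pre-process, extract features, classify, then emit a control command."""
    filtered = bandpass(eeg)                            # signal processing
    features = np.log(np.var(filtered, axis=-1))        # e.g. log-variance per channel (assumed)
    label = model.predict(features[np.newaxis, :])[0]   # classification model
    send_command(label)                                 # drive the external device
    return label
```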
Based on the same inventive concept, the embodiment of the present application further provides a training device for a brain-computer interface classification model. Since the principle by which the device solves the problem is similar to the method of the above embodiments, the implementation of the device can refer to the implementation of the method, and repeated details are not described again. Fig. 3 shows a schematic structural diagram of a training apparatus for a brain-computer interface classification model according to an embodiment of the present application.
Referring to fig. 3, the training device of the brain-computer interface classification model includes an obtaining module 31, a combination module 32 and a training module 33.
An obtaining module 31, configured to obtain a test data set; wherein the test data set comprises a plurality of test data of each subject in a plurality of subjects, each test data of each subject is brain electrical signal data of the subject under brain-computer interface paradigm experiments;
the combination module 32 is configured to combine a plurality of test data in the test data set according to a preset combination manner to obtain a sample data set;
and the training module 33 is configured to train the classification model of the brain-computer interface paradigm by using the sample data set to obtain a target classification model.
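Putting the three modules together, a minimal sketch of the training flow might look as follows; the class name, the dictionary layout of the recordings and the sklearn-style `fit` call are illustrative assumptions rather than the patented interface.

```python
class BCIClassifierTrainer:
    """Sketch of the obtaining / combination / training modules cooperating."""

    def __init__(self, combine_fn, model):
        self.combine_fn = combine_fn     # one of the preset combination modes
        self.model = model               # classification model for the chosen paradigm

    def obtain(self, recordings):
        # recordings: {subject_id: [trial arrays]} collected under the paradigm experiment
        return recordings                # the test data set

    def combine(self, test_data_set):
        return self.combine_fn(test_data_set)   # -> sample data X and labels y

    def train(self, recordings):
        X, y = self.combine(self.obtain(recordings))
        self.model.fit(X, y)             # train to obtain the target classification model
        return self.model
```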
Optionally, the brain-computer interface paradigm is a motor imagery paradigm, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules;
the combination module is further configured to:
In a first mode, dividing a test data set into a plurality of test data subsets, wherein each test data subset comprises one test data corresponding to each of the plurality of subjects, and in the plurality of test data subsets, splicing the test data in each test data subset according to rows to obtain a sample data set; or
In a second mode, performing serial combination on a plurality of test data corresponding to the plurality of subjects to obtain a sample data set; or
In a third mode, respectively extracting feature data of each test data in the test data set, and dividing the feature data of each test data into a plurality of feature data subsets, wherein each feature data subset comprises one feature data corresponding to each subject, and in the plurality of feature data subsets, splicing the feature data in each feature data subset according to rows to obtain a sample data set; or
In a fourth mode, respectively extracting feature data of each test data in the test data set, and serially combining a plurality of feature data corresponding to the plurality of subjects to obtain a sample data set (the first and second modes are illustrated in the sketch below).
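The following numpy sketch illustrates the first and second modes (the third and fourth modes apply the same row splicing or serial combination to extracted feature data rather than to raw trials); whether serial combination concatenates along the time axis, as assumed here, is an interpretation:

```python
import numpy as np

def combine_mode_one(one_trial_per_subject):
    """Mode one: splice one trial per subject by rows."""
    return np.vstack(one_trial_per_subject)   # (n_subjects * channels, samples)

def combine_mode_two(one_trial_per_subject):
    """Mode two: serial (head-to-tail) combination of the trials (assumed: along time)."""
    return np.hstack(one_trial_per_subject)   # (channels, n_subjects * samples)

# e.g. 3 subjects, each contributing one 3-channel motor imagery trial of 1000 samples
trials = [np.random.randn(3, 1000) for _ in range(3)]
print(combine_mode_one(trials).shape)   # (9, 1000)
print(combine_mode_two(trials).shape)   # (3, 3000)
```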
Optionally, if the combination module combines a plurality of test data in the test data set in the first mode according to a preset combination mode, the classification model is a first convolutional neural network, wherein the convolution kernel size of the first convolutional neural network is m × Nn, m is the product of a set duration and the sampling frequency, N is the number of electroencephalogram signal acquisition modules, n is the number of subjects, and m, N and n are each positive integers; or
if the combination module combines a plurality of test data in the test data set in the second mode according to a preset combination mode, the classification model comprises a second convolutional neural network and a time recursive neural network, wherein the convolution kernel size of the second convolutional neural network is 1 × m; or
if the combination module combines a plurality of test data in the test data set in the third mode or the fourth mode according to a preset combination mode, the classification model is a machine learning classifier.
Optionally, the brain-computer interface paradigm is a steady-state visual evoked potential SSVEP, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of electroencephalogram signal acquisition modules;
the combination module further comprises:
a dividing sub-module, configured to divide the test data set into a plurality of test data subsets, wherein each test data subset comprises one test data corresponding to each of the plurality of subjects;
and a combination sub-module, configured to, for each test data subset in the plurality of test data subsets, respectively extract the frequency-domain feature data of each test data in the test data subset, and combine the frequency-domain feature data of each test data according to a preset combination mode to obtain a sample data set.
Optionally, the combining sub-module is further configured to:
In a fifth mode, splicing the frequency-domain feature data of each test data according to rows; or
In a sixth mode, splicing the frequency-domain feature data of each test data according to columns; or
In a seventh mode, respectively performing normalization processing on the frequency-domain feature data of each test data, and superposing the obtained normalized feature data of each test data according to corresponding frequency points.
Optionally, if the combination sub-module combines a plurality of test data in the test data set in the fifth mode according to a preset combination mode, the classification model is a third convolutional neural network, and the third convolutional neural network adopts three layers of convolution; or
if the combination sub-module combines a plurality of test data in the test data set in the sixth mode according to a preset combination mode, the classification model is a fourth convolutional neural network, and the fourth convolutional neural network adopts two layers of convolution; or
if the combination sub-module combines a plurality of test data in the test data set in the seventh mode according to a preset combination mode, the classification model is a fifth convolutional neural network, and the fifth convolutional neural network adopts two layers of convolution.
Optionally, the brain-computer interface paradigm is P300, and the electroencephalogram signal data includes a plurality of sampling point data respectively acquired by a plurality of leads;
the combiner module 32 is also used to:
dividing the test data set into a plurality of test data subsets; each test data subset comprises a plurality of test data corresponding to a plurality of testers;
and aiming at each test data subset in the plurality of test data subsets, carrying out superposition averaging on corresponding sampling points of each test data in the test data subsets to obtain a sample data set.
Based on the same inventive concept, the embodiment of the present application further provides an electronic device. Since the principle by which the electronic device solves the problem is similar to the method of the above embodiments, the implementation of the electronic device may refer to the implementation of the method, and repeated details are not described again. Fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Referring to fig. 4, the electronic device may include a processor 402 and a memory 401. The memory 401 stores program instructions and data and provides them to the processor 402. In this embodiment, the memory 401 may be used to store a training program of the brain-computer interface classification model of this embodiment.
The processor 402 is configured to execute the steps of the training method of the brain-computer interface classification model provided in embodiment 1 above by calling the program instructions stored in the memory 401.
The specific connection medium between the memory 401 and the processor 402 is not limited in the embodiments of the present application. For example, the memory 401 and the processor 402 are connected by a bus, which may be divided into an address bus, a data bus, a control bus, and the like.
The Memory may include a Read-Only Memory (ROM) and a Random Access Memory (RAM), and may further include a Non-Volatile Memory (NVM), such as at least one disk Memory. Alternatively, the memory may be at least one memory device located remotely from the processor.
The Processor may be a general-purpose processor, including a central processing unit, a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
An embodiment of the present application further provides a computer program medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the method for training a brain-computer interface classification model provided in embodiment 1 above.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (10)
1. A method for training a brain-computer interface classification model is characterized by comprising the following steps:
acquiring a test data set; the test data set comprises a plurality of test data corresponding to a plurality of subjects respectively, and each test data of each subject is electroencephalogram data of the subject under a brain-computer interface paradigm experiment;
combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set;
and training the classification model of the brain-computer interface normal form by adopting the sample data set to obtain a target classification model.
2. The method of claim 1, wherein the brain-computer interface paradigm is a motor imagery paradigm, and the brain electrical signal data comprises a plurality of sampling point data respectively acquired by a plurality of brain electrical signal acquisition modules;
combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set, wherein the method comprises the following steps:
In a first mode, dividing the test data set into a plurality of test data subsets, wherein each test data subset comprises one test data corresponding to each of the plurality of subjects, and in the plurality of test data subsets, splicing the test data in each test data subset according to rows to obtain a sample data set; or
In a second mode, performing serial combination on a plurality of test data corresponding to the plurality of subjects to obtain a sample data set; or
In a third mode, respectively extracting feature data of each test data in the test data set, and dividing the feature data of each test data into a plurality of feature data subsets, wherein each feature data subset comprises one feature data corresponding to each of the plurality of subjects, and in the plurality of feature data subsets, splicing the feature data in each feature data subset according to rows to obtain a sample data set; or
In a fourth mode, respectively extracting feature data of each test data in the test data set, and serially combining a plurality of feature data corresponding to the plurality of subjects to obtain a sample data set.
3. The method of claim 2, wherein if the plurality of test data in the test data set are combined in the first mode according to the preset combination mode, the classification model is a first convolutional neural network, wherein a convolution kernel size of the first convolutional neural network is m × Nn, m is the product of a set duration and the sampling frequency, N is the number of the electroencephalogram signal acquisition modules, n is the number of the subjects, and m, N and n are each positive integers; or
If the plurality of test data in the test data set are combined in the second mode according to a preset combination mode, the classification model comprises a second convolutional neural network and a time recursive neural network; wherein the convolution kernel size of the second convolutional neural network is 1 × m; or
and if the plurality of test data in the test data set are combined in the third mode or the fourth mode according to the preset combination mode, the classification model is a machine learning classifier.
4. The method of claim 1, wherein the brain-computer interface paradigm is Steady State Visual Evoked Potential (SSVEP), and the brain electrical signal data comprises a plurality of sampling point data respectively acquired by a plurality of brain electrical signal acquisition modules;
combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set, wherein the method comprises the following steps:
dividing the test data set into a plurality of test data subsets; wherein each of the test data subsets includes one test data corresponding to each of the plurality of subjects;
and for each test data subset in the plurality of test data subsets, respectively extracting frequency-domain feature data of each test data in the test data subset, and combining the frequency-domain feature data of each test data according to a preset combination mode to obtain a sample data set.
5. The method according to claim 4, wherein the combining the frequency domain feature data according to a preset combination mode comprises:
In a fifth mode, splicing the frequency-domain feature data of each test data according to rows; or
In a sixth mode, splicing the frequency-domain feature data of each test data according to columns; or
In a seventh mode, respectively performing normalization processing on the frequency-domain feature data of each test data, and superposing the obtained normalized feature data of each test data according to corresponding frequency points.
6. The method according to claim 5, wherein if the plurality of test data in the test data set are combined in the fifth mode according to the preset combination mode, the classification model is a third convolutional neural network, and the third convolutional neural network adopts three layers of convolution; or
if the plurality of test data in the test data set are combined in the sixth mode according to the preset combination mode, the classification model is a fourth convolutional neural network, and the fourth convolutional neural network adopts two layers of convolution; or
if the plurality of test data in the test data set are combined in the seventh mode according to the preset combination mode, the classification model is a fifth convolutional neural network, and the fifth convolutional neural network adopts two layers of convolution.
7. The method of claim 1, wherein the brain-computer interface paradigm is P300, and the brain electrical signal data comprises a plurality of sampling point data respectively acquired by a plurality of leads;
the combining the plurality of test data in the test data set according to a preset combination mode to obtain a sample data set includes:
dividing the test data set into a plurality of test data subsets; wherein each of the test data subsets comprises a plurality of test data corresponding to each of the plurality of subjects;
and aiming at each test data subset in the plurality of test data subsets, carrying out superposition averaging on corresponding sampling points of each test data in the test data subsets to obtain a sample data set.
8. A device for training a brain-computer interface classification model is characterized by comprising:
the acquisition module is used for acquiring a test data set; wherein the test data set comprises a plurality of test data for each subject in a plurality of subjects, each test data for each subject being brain electrical signal data for the subject under brain-computer interface paradigm experiments;
the combination module is used for combining a plurality of test data in the test data set according to a preset combination mode to obtain a sample data set;
and the training module is used for training the classification model of the brain-computer interface normal form by adopting the sample data set to obtain a target classification model.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program operable on the processor, the computer program, when executed by the processor, causing the processor to carry out the method of any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110570838.4A CN113343798A (en) | 2021-05-25 | 2021-05-25 | Training method, device, equipment and medium for brain-computer interface classification model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110570838.4A CN113343798A (en) | 2021-05-25 | 2021-05-25 | Training method, device, equipment and medium for brain-computer interface classification model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113343798A true CN113343798A (en) | 2021-09-03 |
Family
ID=77471257
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---|
CN202110570838.4A Pending CN113343798A (en) | 2021-05-25 | 2021-05-25 | Training method, device, equipment and medium for brain-computer interface classification model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113343798A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108304917A (en) * | 2018-01-17 | 2018-07-20 | 华南理工大学 | A kind of P300 signal detecting methods based on LSTM networks |
CN109389059A (en) * | 2018-09-26 | 2019-02-26 | 华南理工大学 | A kind of P300 detection method based on CNN-LSTM network |
CN109766751A (en) * | 2018-11-28 | 2019-05-17 | 西安电子科技大学 | Stable state vision inducting brain electricity personal identification method and system based on Frequency Domain Coding |
CN109770900A (en) * | 2019-01-08 | 2019-05-21 | 中国科学院自动化研究所 | Brain-computer interface based on convolutional neural networks instructs delivery method, system, device |
CN112465059A (en) * | 2020-12-07 | 2021-03-09 | 杭州电子科技大学 | Multi-person motor imagery identification method based on cross-brain fusion decision and brain-computer system |
Non-Patent Citations (1)
Title |
---|
北京医轩国际医学研究院: "《临床麻醉学研究》", 江西科学技术出版社 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116501235A (en) * | 2023-06-29 | 2023-07-28 | 珠海妙存科技有限公司 | Sampling point determining method, system, device and storage medium |
CN116501235B (en) * | 2023-06-29 | 2024-02-23 | 珠海妙存科技有限公司 | Sampling point determining method, system, device and storage medium |
Legal Events

Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210903 |