CN113143295A - Equipment control method and terminal based on motor imagery electroencephalogram signals - Google Patents


Info

Publication number
CN113143295A
Authority
CN
China
Prior art keywords
time
motor imagery
matrix
frequency
imagery electroencephalogram
Prior art date
Legal status
Pending
Application number
CN202110440696.XA
Other languages
Chinese (zh)
Inventor
刘京
田亮
赵薇
王少华
陈栋
Current Assignee
Hebei Normal University
Original Assignee
Hebei Normal University
Priority date
Filing date
Publication date
Application filed by Hebei Normal University filed Critical Hebei Normal University

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7246 Details of waveform analysis using correlation, e.g. template matching or determination of similarity
    • A61B5/726 Details of waveform analysis characterised by using transforms using Wavelet transforms
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device


Abstract

The invention relates to the technical field of motor imagery electroencephalogram (EEG) signal processing and provides a device control method and terminal based on motor imagery EEG signals. The method comprises: performing a wavelet transform on raw EEG data detected over a plurality of EEG signal channels to obtain a three-dimensional time-frequency feature matrix; calculating the similarity between any two time points from the three-dimensional time-frequency feature matrix to obtain a time self-attention weight matrix; performing feature extraction on the three-dimensional time-frequency feature matrix according to the time self-attention weight matrix to obtain the time features of the target person's motor imagery EEG signal; and extracting the spatial features and frequency features of the motor imagery EEG signal, classifying the signal based on the time, spatial and frequency features, and controlling a device connected to the target person according to the classification result. The invention improves the classification accuracy of motor imagery EEG signals and thereby the accuracy with which a person controls external equipment.

Description

Equipment control method and terminal based on motor imagery electroencephalogram signals
Technical Field
The invention belongs to the technical field of motor imagery electroencephalogram signal processing, and particularly relates to a device control method and a terminal based on motor imagery electroencephalogram signals.
Background
Motor Imagery (MI) electroencephalography (EEG) classification is a research hotspot in the fields of brain science and human-computer interaction.
The main task of MI EEG signal classification is to classify and identify the MI EEG signals generated by four motor imagery tasks of the human brain (left hand, right hand, both feet and tongue), and then to encode the identified motor imagery task via Brain Computer Interface (BCI) technology, thereby realizing control of external equipment. In recent years, with the rapid development of Deep Learning (DL), MI EEG classification techniques based on deep learning have attracted wide attention.
However, the inventor of the present application found that existing deep learning methods suffer from low classification accuracy in cross-subject MI EEG classification, which reduces the control accuracy of external equipment. The reason is that the time from the presentation of a motor imagery task to its execution differs between people, and the amplitude of the MI EEG signals generated by different people for the same task also differs; that is, the response time and amplitude of MI EEG signals vary across subjects, which lowers cross-subject classification accuracy.
Disclosure of Invention
In view of this, the embodiment of the present invention provides a device control method and a terminal based on a motor imagery electroencephalogram signal, so as to improve the classification precision of the motor imagery electroencephalogram signal, and further improve the control accuracy of a person on an external device.
The first aspect of the embodiment of the invention provides a device control method based on motor imagery electroencephalogram signals, which comprises the following steps:
acquiring original electroencephalogram data of a target person detected by a plurality of electroencephalogram signal channels, and performing wavelet transformation on the original electroencephalogram data to obtain a three-dimensional time-frequency characteristic matrix;
calculating the similarity between any two time points according to the three-dimensional time-frequency characteristic matrix to obtain a time self-attention weight matrix;
according to the time self-attention weight matrix, extracting the characteristics of the three-dimensional time-frequency characteristic matrix to obtain the time characteristics of the motor imagery electroencephalogram signals of the target personnel;
extracting the spatial characteristics and the frequency characteristics of the motor imagery electroencephalogram signals, classifying the motor imagery electroencephalogram signals based on the temporal characteristics, the spatial characteristics and the frequency characteristics, and controlling equipment connected with target personnel according to a classification result.
A second aspect of the embodiments of the present invention provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the device control method based on the motor imagery electroencephalogram signal as described above when executing the computer program.
A third aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the device control method based on motor imagery electroencephalogram signal as described above.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
according to the method, a time self-attention mechanism is added in a time feature extraction stage, the similarity among all time points is calculated, and the mutual dependency among all the time points is obtained; furthermore, the three-dimensional time-frequency characteristic matrix is subjected to characteristic extraction through the time self-attention weight matrix, so that the continuity time characteristic of the motor imagery electroencephalogram signal can be obtained, noise interference is reduced, and meanwhile, the characteristics of each extracted time point comprise global time characteristics, so that the learning of cross-test migration classification is enhanced; and finally, classifying the motor imagery electroencephalogram signals by combining the time characteristics, the space characteristics and the frequency characteristics of the motor imagery electroencephalogram signals, and controlling equipment connected with personnel according to the classification result. The invention can improve the classification precision of the motor imagery electroencephalogram signals, thereby improving the control accuracy of personnel on external equipment.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of an implementation of a device control method based on motor imagery electroencephalogram signals according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a classification model of motor imagery electroencephalogram signals provided by an embodiment of the invention;
FIG. 3 is a schematic diagram of classification experiment results provided by an embodiment of the present invention;
fig. 4 is a schematic diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to explain the technical means of the present invention, the following description will be given by way of specific examples.
Motor imagery EEG classification is a cornerstone of motor-imagery-based brain-computer interface technology and is widely applied in fields such as rehabilitation, military operations, and entertainment. Cross-subject motor imagery EEG classification extends single-subject classification and has greater practical value in the application of motor imagery EEG classification.
In recent years, with the rapid development of deep learning, the Convolutional Neural Network (CNN) has gradually become the core method in motor imagery EEG classification, and a growing number of researchers have proposed different CNN-based MI EEG classification network models. After in-depth study of the deep learning methods currently in use, the inventor of the present application found that existing DL methods suffer from low classification accuracy on the cross-subject four-class motor imagery EEG task, because the response time and amplitude of MI EEG signals differ across subjects, which reduces the accuracy with which a subject controls external equipment. In view of this problem, the invention is designed as follows:
the embodiment of the invention provides a device control method based on motor imagery electroencephalogram signals, and as shown in figure 1, the method comprises the following steps:
s101, acquiring original electroencephalogram data of a target person detected by a plurality of electroencephalogram signal channels, and performing wavelet transformation on the original electroencephalogram data to obtain a three-dimensional time-frequency characteristic matrix.
In the embodiment of the invention, a motor imagery EEG classification model is constructed; the network architecture of the model may be as shown in fig. 2. Before the raw EEG data of the target person are obtained for classification, the model must be trained in advance to learn the weights of each convolution kernel in the model. Training on data from the target person is the single-subject (within-subject) case; training on data from other persons is the cross-subject case.
Specifically, the first step of the classification process is to acquire, through electrodes, the signals of a plurality of EEG channels in each area of the target person's brain, forming the raw EEG data M ∈ R^(H×W), with height H and width W. A Morlet wavelet transform is then applied to M to obtain the three-dimensional time-frequency feature matrix A ∈ R^(C×F×T), where C is the number of EEG signal channels, F is the number of frequency sampling points (frequency points), and T is the number of time sampling points (time points).
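As an illustrative sketch only (not the patent's implementation), the construction of a (C, F, T) time-frequency matrix via a complex Morlet wavelet can be written in NumPy as follows; the sampling rate, frequency grid and wavelet width are assumed values, and the signal is assumed to be longer than the wavelet support:

```python
import numpy as np

def morlet_tf_matrix(raw, fs=250.0, freqs=np.arange(4, 26), w=6.0):
    """Hypothetical sketch: Morlet wavelet transform of raw EEG.

    raw   : (C, W) array -- C channels, W time samples
    freqs : frequencies (Hz) at which to evaluate (F frequency points)
    Returns a (C, F, T) magnitude matrix with T == W.
    """
    C, W = raw.shape
    A = np.empty((C, len(freqs), W))
    for fi, f in enumerate(freqs):
        # complex Morlet wavelet of centre frequency f and width w cycles
        dur = w / f                              # support of ~w cycles
        tw = np.arange(-dur, dur, 1.0 / fs)      # assumes len(tw) < W
        sigma = w / (2 * np.pi * f)
        wavelet = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))  # unit energy
        for c in range(C):
            # magnitude of the complex convolution at each time point
            A[c, fi] = np.abs(np.convolve(raw[c], wavelet, mode="same"))
    return A
```

With 22 frequencies (4-25 Hz assumed here) and 1125 time samples this yields the F = 22, T = 1125 dimensions quoted later in the document.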
And S102, calculating the similarity between any two time points according to the three-dimensional time-frequency characteristic matrix to obtain a time self-attention weight matrix.
Optionally, as a possible implementation manner, according to the three-dimensional time-frequency feature matrix, the similarity between any two time points is calculated, which may be detailed as:
determining a two-dimensional characteristic matrix corresponding to each time point according to the three-dimensional time-frequency characteristic matrix;
the similarity between any two time points is calculated based on the following formula:
p1(i, j) = exp(f(Ti, (Tj)')) / Σ(k=1..T) exp(f(Ti, (Tk)'))
where p1(i, j) is the similarity between the ith time point and the jth time point, f is the similarity function, T is the number of time points, Ti is the two-dimensional feature matrix corresponding to the ith time point, and (Tj)' is the transpose of the two-dimensional feature matrix corresponding to the jth time point.
In the embodiment of the present invention, referring to fig. 2, in the time feature extraction stage, the three-dimensional time-frequency feature matrix A may first be reshaped, through the Reshape and Transpose functions, into T1 ∈ R^(T×(C×F)) and T2 ∈ R^((C×F)×T) to facilitate matrix multiplication between them. Then the softmax function is applied to calculate the similarity between any two time sampling points (the similarity ranges from 0 to 1, where 0 denotes no similarity and 1 denotes complete similarity), yielding the time self-attention weight matrix P1 ∈ R^(T×T).
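A minimal NumPy sketch of this step, assuming the similarity function f is a dot product (the patent does not fix f here); the function name is illustrative:

```python
import numpy as np

def temporal_attention_weights(A):
    """Sketch of the time self-attention weight matrix P1.

    A : (C, F, T) time-frequency feature matrix
    Returns P1 of shape (T, T); each row is a softmax, so values lie
    in (0, 1] and every row sums to 1.
    """
    C, F, T = A.shape
    T1 = A.transpose(2, 0, 1).reshape(T, C * F)   # T x (C*F), cf. Reshape/Transpose
    T2 = T1.T                                     # (C*F) x T
    scores = T1 @ T2                              # T x T raw dot-product similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)       # row-wise softmax
```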
And S103, extracting the characteristics of the three-dimensional time-frequency characteristic matrix according to the time self-attention weight matrix to obtain the time characteristics of the motor imagery electroencephalogram signals of the target personnel.
Optionally, as a possible implementation manner, according to the time self-attention weight matrix, feature extraction is performed on the three-dimensional time-frequency feature matrix to obtain time features of the motor imagery electroencephalogram signal of the target person, which may be detailed as follows:
extracting the influence characteristics of each time point from the three-dimensional time-frequency characteristic matrix according to the time self-attention weight matrix to obtain a time influence characteristic matrix;
and calculating the time characteristics of the motor imagery electroencephalogram signals based on the time influence characteristic matrix.
Optionally, as a possible implementation manner, the influence feature of each time point may be extracted according to the following formula, so as to obtain a time influence feature matrix:
S1=P1×A
where S1 is the time influence feature matrix, P1 is the time self-attention weight matrix, and A is the three-dimensional time-frequency feature matrix.
Optionally, as a possible implementation, the time characteristic of the motor imagery electroencephalogram signal may be calculated based on the following formula:
G1=α×S1+A
in the formula, G1Alpha is a characteristic parameter obtained by pre-training for the time characteristic of the motor imagery electroencephalogram signal, S1A is a three-dimensional time-frequency characteristic matrix.
In the embodiment of the present invention, referring to FIG. 2, the time self-attention weight matrix P1 is matrix-multiplied with the three-dimensional time-frequency feature matrix A to obtain the time influence feature matrix S1. S1 is then multiplied by the pre-trained parameter α to form a residual block, which is summed element by element with A ∈ R^(C×F×T) to obtain the final time feature G1 ∈ R^(C×F×T). G1 learns the time dependency between any two time points, so the continuity information of the motor imagery EEG signal can be extracted and noise interference reduced. Meanwhile, the feature value of each time point is updated with the weighted sum of the feature values of all time points in the time domain, i.e., each time point's feature contains global time features. This aids the representation of the time features, strengthens learning for cross-subject transfer classification, and addresses the low classification accuracy caused by the temporal continuity and low signal-to-noise ratio of motor imagery EEG signals and by the differing MI amplitudes and response times of different subjects in the time domain.
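A hedged NumPy sketch of S1 = P1 × A and G1 = α × S1 + A; α is a learned parameter in the patent, so a fixed value is assumed here purely for illustration:

```python
import numpy as np

def temporal_feature(A, P1, alpha=0.5):
    """Sketch of the residual temporal feature G1 = alpha * (P1 x A) + A.

    A  : (C, F, T) time-frequency features
    P1 : (T, T) temporal self-attention weights
    """
    C, F, T = A.shape
    flat = A.reshape(C * F, T)           # each (channel, freq) trace over time
    S1 = (flat @ P1.T).reshape(C, F, T)  # weighted sum over all time points
    return alpha * S1 + A                # element-wise residual connection
```

With an identity attention matrix the weighted sum returns A unchanged, so G1 reduces to (1 + α)·A, which makes the residual structure easy to verify.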
For example, the parameters of the temporal feature extraction section may be as shown in Table 1.
TABLE 1 Temporal feature extraction parameters
(parameter table provided as an image in the original document)
S104, extracting the spatial features and frequency features of the motor imagery EEG signal, classifying the motor imagery EEG signal based on the time, spatial and frequency features, and controlling the device connected to the target person according to the classification result.
According to the method, a time self-attention mechanism is added in the time feature extraction stage; the similarity among all time points is calculated, and the interdependency among all time points is obtained. Furthermore, feature extraction is performed on the three-dimensional time-frequency feature matrix through the time self-attention weight matrix, so the continuous time features of the motor imagery EEG signal can be obtained and noise interference reduced; meanwhile, the extracted features of each time point contain global time features, which strengthens learning for cross-subject transfer classification. Finally, the motor imagery EEG signals are classified by combining their time, spatial and frequency features, and the device connected to the person is controlled according to the classification result. The invention improves the classification accuracy of motor imagery EEG signals and thereby the accuracy with which a person controls external equipment.
Optionally, as a possible implementation manner, extracting spatial features of the motor imagery electroencephalogram signal may be detailed as follows:
calculating the similarity between any two electroencephalogram signal channels according to the three-dimensional time-frequency characteristic matrix to obtain a spatial self-attention weight matrix;
and according to the spatial self-attention weight matrix, performing feature extraction on the three-dimensional time-frequency feature matrix to obtain the spatial features of the electroencephalogram signals based on the motor imagery.
In the embodiment of the present invention, referring to fig. 2, the three-dimensional time-frequency feature matrix A ∈ R^(C×F×T) may first be reshaped, through the Reshape and Transpose functions, into C1 ∈ R^(C×(F×T)) and C2 ∈ R^((F×T)×C) to facilitate matrix multiplication between them. Then the softmax function is applied to calculate the similarity between any two EEG signal channels (the similarity ranges from 0 to 1, where 0 denotes no similarity and 1 denotes complete similarity), yielding the spatial self-attention weight matrix P2 ∈ R^(C×C). The corresponding formula is as follows:
p2(i, j) = exp(f(Ci, (Cj)')) / Σ(k=1..C) exp(f(Ci, (Ck)'))
where p2(i, j) is the similarity between the ith and jth EEG signal channels, f is the similarity function, C is the number of EEG signal channels, Ci is the two-dimensional feature matrix corresponding to the ith EEG signal channel, and (Cj)' is the transpose of the two-dimensional feature matrix corresponding to the jth EEG signal channel.
Then, the spatial self-attention weight matrix P2 is matrix-multiplied with the three-dimensional time-frequency feature matrix A to obtain the spatial influence feature matrix S2. S2 is then multiplied by the pre-trained parameter β to form a residual block, which is summed element by element with A ∈ R^(C×F×T) to obtain the final spatial feature G2 ∈ R^(C×F×T). By introducing the self-attention mechanism into the spatial feature extraction part, the feature similarity among all EEG signal channels is learned automatically, and the features of each EEG channel are updated by aggregating the features of all channels through weighted summation. Channels related to motor imagery are thus automatically assigned higher weights and unrelated channels lower weights, so that the optimal channels are selected. Compared with the conventional practice of manually selecting EEG channels, this prevents the loss of feature information.
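Since the temporal, spatial and frequency branches differ only in which axis of A is attended, all three can be sketched with one generic helper. This is an illustrative reading, not the patent's code: dot-product similarity and a fixed residual scale (standing in for the learned α, β, γ) are assumed:

```python
import numpy as np

def axis_attention_feature(A, axis, scale=0.5):
    """Self-attention over one axis of A (C, F, T) with a residual add.

    axis=2 -> time branch (G1), axis=0 -> spatial branch (G2),
    axis=1 -> frequency branch (G3).
    """
    A_moved = np.moveaxis(A, axis, 0)            # attended axis first
    n = A_moved.shape[0]
    flat = A_moved.reshape(n, -1)                # n x (product of other dims)
    scores = flat @ flat.T                       # n x n dot-product similarities
    scores = scores - scores.max(axis=1, keepdims=True)
    P = np.exp(scores)
    P = P / P.sum(axis=1, keepdims=True)         # softmax weight matrix
    S = (P @ flat).reshape(A_moved.shape)        # influence feature matrix
    G = scale * S + A_moved                      # residual connection
    return np.moveaxis(G, 0, axis)               # restore axis order
```

The output has the same (C, F, T) shape as the input for every branch, matching G1, G2, G3 ∈ R^(C×F×T) in the text.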
For example, the parameters of the spatial feature extraction section may be as shown in Table 2.
TABLE 2 Spatial feature extraction parameters
(parameter table provided as an image in the original document)
Optionally, as a possible implementation manner, extracting the frequency feature of the motor imagery electroencephalogram signal may be detailed as follows:
calculating the similarity between any two frequency points according to the three-dimensional time-frequency characteristic matrix to obtain a frequency self-attention weight matrix;
and according to the frequency self-attention weight matrix, performing feature extraction on the three-dimensional time-frequency feature matrix to obtain the frequency features of the motor imagery electroencephalogram signals.
In the embodiment of the present invention, referring to fig. 2, the three-dimensional time-frequency feature matrix A ∈ R^(C×F×T) may first be reshaped, through the Reshape and Transpose functions, into F1 ∈ R^(F×(C×T)) and F2 ∈ R^((C×T)×F) to facilitate matrix multiplication between them. Then the softmax function is applied to calculate the similarity between any two frequency points (the similarity ranges from 0 to 1, where 0 denotes no similarity and 1 denotes complete similarity), yielding the frequency self-attention weight matrix P3 ∈ R^(F×F). The corresponding formula is as follows:
p3(i, j) = exp(f(Fi, (Fj)')) / Σ(k=1..F) exp(f(Fi, (Fk)'))
where p3(i, j) is the similarity between the ith frequency point and the jth frequency point, f is the similarity function, F is the number of frequency points, Fi is the two-dimensional feature matrix corresponding to the ith frequency point, and (Fj)' is the transpose of the two-dimensional feature matrix corresponding to the jth frequency point.
Then, the frequency self-attention weight matrix P3 is matrix-multiplied with the three-dimensional time-frequency feature matrix A to obtain the frequency influence feature matrix S3. S3 is then multiplied by the pre-trained parameter γ to form a residual block, which is summed element by element with A ∈ R^(C×F×T) to obtain the final frequency feature G3 ∈ R^(C×F×T). G3 encodes the global frequency into the feature information of each frequency point and contains the frequency relationship between any two frequency points, thereby suppressing noise interference and enhancing the frequency feature representation capability.
For example, the parameters of the frequency feature extraction section may be as shown in Table 3.
TABLE 3 Frequency feature extraction parameters
(parameter table provided as an image in the original document)
Optionally, as a possible implementation, the classification of the motor imagery electroencephalogram signal is performed based on the temporal feature, the spatial feature and the frequency feature, which may be detailed as follows:
fusing the time characteristic, the space characteristic and the frequency characteristic to obtain the space-time characteristic of the motor imagery electroencephalogram signal;
and inputting the space-time characteristics of the motor imagery electroencephalogram signals into a preset characteristic classification channel for classification to obtain the category of the motor imagery electroencephalogram signals.
In the embodiment of the present invention, referring to FIG. 2, the time feature G1, the spatial feature G2 and the frequency feature G3 are convolved by Conv11, Conv12 and Conv13 respectively, so that the three feature maps have the same scale R^(D×F×T); they are then fused through the Concat feature-splicing operation into the feature map N ∈ R^(3D×F×T), where D = 16, F = 22 and T = 1125. The feature map N contains the enhanced time-frequency-domain and spatial-domain feature information. N is passed through a convolution (Conv3) with kernel size (5×5), followed by batch normalization (BL) and nonlinear activation (NL), to obtain Q1. Q1 is passed through an average pooling layer (AvgP, including a Dropout layer) with kernel size (1×100), reducing the input of size (48, 18, 1121) to an output of size (48, 18, 69), yielding Q2. Q2 is then passed through a convolution (Conv4) with kernel size (18×1), followed by batch normalization (BL) and nonlinear activation (NL), to obtain Q3 ∈ R^(40×1×69). Finally, Q3 is passed through a full convolution layer (FC) with kernel size (1×69), acting as a fully connected layer, whose output is Q4 ∈ R^(4×1×1). Q4 is converted into conditional probabilities over the four labels, and four-class classification is performed using the LogSoftmax function.
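As a sanity check, the dimension chain of the classification channel described above can be verified mechanically. The pooling stride is not stated in the text, so a stride of 15 is assumed here because it reproduces the quoted 1121 to 69 reduction; the helper names are illustrative:

```python
def conv_out(n, k):
    """Output length of a 'valid' convolution over n samples with kernel k."""
    return n - k + 1

def pool_out(n, k, stride):
    """Output length of pooling with kernel k and the given stride (floored)."""
    return (n - k) // stride + 1

# Concat of the three branches: N has shape (3*16, 22, 1125) = (48, 22, 1125)
F, T = 22, 1125
# Conv3, kernel (5 x 5): (48, 22, 1125) -> (48, 18, 1121)
F1, T1 = conv_out(F, 5), conv_out(T, 5)
# AvgP, kernel (1 x 100), assumed stride 15: (48, 18, 1121) -> (48, 18, 69)
T2 = pool_out(T1, 100, 15)
# Conv4, kernel (18 x 1): height 18 -> 1, i.e. Q3 is (40, 1, 69)
H = conv_out(F1, 18)
# FC, kernel (1 x 69): width 69 -> 1, i.e. Q4 is (4, 1, 1)
T3 = conv_out(T2, 69)
```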
For example, the parameters of the feature classification section may be as shown in Table 4.
TABLE 4 Feature classification parameters
(parameter table provided as an image in the original document)
In the spatial domain, the invention adopts a spatial self-attention network layer. By calculating the degree of similarity among all EEG signal channels, this layer automatically learns the similarity weights between channels, adaptively aggregates the signal data of all channels, and updates the features of each channel by weighted summation. This eliminates the information loss caused by manually selecting signal channels in traditional machine learning and automatically selects the EEG channels related to motor imagery, enhancing the spatial features. In the time domain, a time self-attention network layer is introduced. This layer does not need to store large amounts of historical information from other subjects, extracts the dependency relationships among different time points, and gives each time point global time features, enhancing the time feature representation capability. In the frequency domain, a frequency self-attention network layer is introduced. This layer extracts the dependency relationships among different frequency points and gives each frequency point global frequency features, which enhances the frequency feature representation capability and improves the cross-subject classification accuracy of the model. Finally, the three feature maps are spliced and fused, input into the preset feature classification channel, and, after a series of convolution and pooling operations, classified by the LogSoftmax function.
According to the classification result, the identified motor imagery task is encoded via BCI technology, enabling control of external equipment. For example, a wheelchair can be started, stopped and steered according to the classification result; the method can likewise be applied to devices such as carts, household appliances and robots.
The feasibility of the method of the embodiment of the present invention is verified by experiments below.
Details of the experiment:
the experiments were evaluated on BCICIV2a, a public MI-EEG data set. The data set records four types of motor imagery tasks (left hand, right hand, both feet, and tongue) performed by 9 different subjects, acquired from 25 lead electrode channels (22 electroencephalogram channels and 3 electrooculogram channels) at a sampling rate of 250 Hz. Each channel was pre-processed with a 0.5-100 Hz band-pass filter. The electroencephalogram data of each subject are divided into 2 groups, one used as the training set and the other as the test set. Each group contains 288 motor imagery trials, an average of 72 trials per class of motor imagery task. In addition, each trial uses the same time window [-0.5, 4] s to extract the motor imagery signals of all 22 electroencephalogram channels. The data set thus explicitly separates 9 training sets and 9 test sets, each subset containing 72 trials per category. After removing the 3 electrooculogram channel signals, a 22 × 1125 matrix of data points was obtained for each trial.
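The 1125 time points per trial quoted above follow directly from the window length and the sampling rate; a quick check in Python:

```python
# Trial length in BCICIV2a from the extraction window and sampling rate.
fs = 250                     # Hz, BCICIV2a sampling rate
t_start, t_end = -0.5, 4.0   # seconds, time window around the cue
n_samples = int((t_end - t_start) * fs)
print(n_samples)             # 1125 time points per channel per trial
print(22 * n_samples)        # 24750 values in one 22-channel trial matrix
```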
The most important evaluation index in MI-EEG analysis is Accuracy: the higher the average accuracy over the categories, the better the model performance. The average accuracy is calculated by the formula:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP is the number of true positives, i.e. positive samples that are correctly predicted; TN is the number of true negatives, i.e. negative samples that are correctly predicted; FP is the number of false positives, i.e. negative samples misjudged as positive; and FN is the number of false negatives, i.e. positive samples misjudged as negative.
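The formula above can be checked with a few lines of Python (the counts used are illustrative, not from the experiments):

```python
def accuracy(tp, tn, fp, fn):
    """Accuracy = correctly predicted samples / all samples."""
    return (tp + tn) / (tp + tn + fp + fn)

# e.g. 250 correctly predicted trials out of 288
print(accuracy(60, 190, 12, 26))
```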
For the four-class MI classification, the NLLLoss function in PyTorch is used as the loss function, all parameters in the network are initialized with the Xavier algorithm, and the Adam algorithm is adopted as the optimization algorithm. The learning rate on the BCICIV2a data set is 0.0001 and the batch size is 32.
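As a sketch of what the NLLLoss criterion computes (the experiments used PyTorch's built-in implementation; this NumPy version is only illustrative):

```python
import numpy as np

def nll_loss(log_probs, targets):
    """What PyTorch's NLLLoss computes with mean reduction: the negative
    log-probability assigned to each sample's true class, averaged.
    log_probs: (N, C) log-probabilities (e.g. LogSoftmax output).
    targets:   (N,) integer class labels in [0, C).
    """
    n = log_probs.shape[0]
    return -log_probs[np.arange(n), targets].mean()

# Two samples, four MI classes (left hand, right hand, feet, tongue)
log_probs = np.log(np.array([[0.70, 0.10, 0.10, 0.10],
                             [0.25, 0.25, 0.25, 0.25]]))
labels = np.array([0, 3])
print(nll_loss(log_probs, labels))
```

This is why the network's classification channel ends in a LogSoftmax layer: NLLLoss expects log-probabilities as input.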
Since the data set clearly demarcates the training data and the test data, this embodiment randomly divides the training data into a training set (80%) and a validation set (20%), while the full test data is used for testing. With this partitioning, an early-stopping strategy developed in the field of computer vision can be used. The first training phase is stopped when the validation accuracy does not improve within a predetermined number of epochs. Training then continues on the combined training and validation sets, starting from the parameter values that achieved the highest accuracy on the validation set. Training ends when the loss function on the validation set drops to the value reached on the training set at the end of the first phase. The hyperparameter of the Dropout layer, the constant of the batch normalization layer, and the weight decay rate are set to 0.5, 10^-5, and 0.1, respectively.
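The first-phase stopping rule can be sketched in plain Python (the patience value and the accuracy trace below are illustrative assumptions, not values from the patent):

```python
def first_phase_stop(val_accuracies, patience=3):
    """Stop once validation accuracy has not improved for `patience`
    consecutive epochs; return the epoch whose parameters would be
    restored and the epoch at which training stopped."""
    best_epoch, best_acc, waited = 0, float("-inf"), 0
    for epoch, acc in enumerate(val_accuracies):
        if acc > best_acc:
            best_epoch, best_acc, waited = epoch, acc, 0
        else:
            waited += 1
            if waited >= patience:
                return best_epoch, epoch  # restore point, stop point
    return best_epoch, len(val_accuracies) - 1

restore, stop = first_phase_stop([0.52, 0.60, 0.66, 0.65, 0.64, 0.63])
print(restore, stop)  # best accuracy at epoch 2; stopped at epoch 5
```

The second phase then reloads the epoch-`restore` parameters and continues training on the merged training and validation data until the validation loss falls to the first-phase training loss.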
The comparative experiment process comprises the following steps:
in order to verify the effect of the proposed method on cross-subject classification performance, the invention performs a cross-subject classification experiment. A model is trained on the BCICIV2a data set using the electroencephalogram data of all subjects other than the target subject, its parameters are used to initialize the network weights, and the data of the new (target) subject are then loaded for testing. In this way the trained model integrates information from the other subjects, making it more robust.
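The leave-one-subject-out protocol described above amounts to the following splits (a plain-Python sketch; subjects are numbered 1-9 as in BCICIV2a):

```python
def loso_splits(subjects):
    """Leave-one-subject-out splits for cross-subject evaluation:
    pretrain on all other subjects, then test on the held-out one."""
    return [([s for s in subjects if s != target], target)
            for target in subjects]

splits = loso_splits(list(range(1, 10)))  # 9 subjects -> 9 splits
pretrain_subjects, target_subject = splits[0]
print(target_subject, pretrain_subjects)  # 1 [2, 3, 4, 5, 6, 7, 8, 9]
```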
Fig. 3 shows the confusion matrices of the cross-subject classification results on the BCICIV2a data set. Compared with other advanced DL methods (MCCNN, DeepCNN, DMTLCNN, STSCANN), the proposed method achieves an average accuracy of 75.49% and obtains the best result on 3 of the 9 subjects (subjects 2, 8, and 9). The method is therefore better suited to cross-subject classification, improves the classification accuracy of motor imagery electroencephalogram signals, and thus improves the accuracy with which a person controls external equipment.
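Average accuracy can be read directly off a confusion matrix as its trace divided by the total trial count; a NumPy sketch with made-up counts (not the values of Fig. 3):

```python
import numpy as np

def average_accuracy(cm):
    """Overall accuracy from a confusion matrix (rows: true class,
    columns: predicted class): correct predictions lie on the diagonal."""
    cm = np.asarray(cm)
    return np.trace(cm) / cm.sum()

# Illustrative 4-class matrix, 72 trials per class as in BCICIV2a
cm = [[55,  7,  5,  5],
      [ 6, 54,  6,  6],
      [ 5,  6, 55,  6],
      [ 4,  5,  6, 57]]
print(average_accuracy(cm))  # 221 correct out of 288 trials
```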
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 4 is a schematic diagram of a terminal according to an embodiment of the present invention. As shown in fig. 4, the terminal 40 of this embodiment includes: a processor 41, a memory 42, and a computer program 43 stored in the memory 42 and executable on the processor 41. The processor 41 implements the steps in each of the above-described embodiments of the device control method based on motor imagery electroencephalogram signals, such as steps S101 to S104 shown in fig. 1, when executing the computer program 43.
Illustratively, the computer program 43 may be divided into one or more modules/units, which are stored in the memory 42 and executed by the processor 41 to implement the present invention. One or more of the modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 43 in the terminal 40. For example, the computer program 43 may be divided into a temporal feature extraction module, a spatial feature extraction module, a frequency feature extraction module, and a classification control module (a module in a virtual device), and each module has the following specific functions:
the time characteristic extraction module is used for acquiring original electroencephalogram data of a target person detected by a plurality of electroencephalogram signal channels and performing wavelet transformation on the original electroencephalogram data to obtain a three-dimensional time-frequency characteristic matrix; calculating the similarity between any two time points according to the three-dimensional time-frequency characteristic matrix to obtain a time self-attention weight matrix; and according to the time self-attention weight matrix, performing feature extraction on the three-dimensional time-frequency feature matrix to obtain the time features of the motor imagery electroencephalogram signals of the target personnel.
And the spatial feature extraction module is used for extracting the spatial features of the motor imagery electroencephalogram signals.
And the frequency characteristic extraction module is used for extracting the frequency characteristics of the motor imagery electroencephalogram signals.
And the classification control module is used for classifying the motor imagery electroencephalogram signals based on the time characteristics, the space characteristics and the frequency characteristics and controlling equipment connected with the target personnel according to a classification result.
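The wavelet-transform step with which the time feature extraction module begins can be sketched as a complex Morlet convolution in NumPy (the wavelet parameters here are illustrative assumptions, not the patent's):

```python
import numpy as np

def morlet_tf(signal, fs, freqs, n_cycles=7.0):
    """Sketch of the wavelet step in the time feature extraction module:
    convolve one channel with complex Morlet wavelets to get a
    (frequency x time) power map; stacking the maps of all channels
    yields the three-dimensional time-frequency feature matrix."""
    n = len(signal)
    tfr = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        dur = n_cycles / f                         # wavelet support in seconds
        t = np.arange(-dur / 2, dur / 2, 1 / fs)
        # complex carrier at f Hz under a Gaussian envelope
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * (dur / 5)**2))
        wavelet /= np.abs(wavelet).sum()           # normalize wavelet energy
        tfr[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return tfr

fs = 250
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t)                   # 10 Hz test oscillation
tfr = morlet_tf(sig, fs, freqs=[6, 10, 14])
print(tfr.shape)                                   # (3, 500): 3 freqs x 500 samples
print(int(np.argmax(tfr.mean(axis=1))))            # the 10 Hz row dominates
```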
The terminal 40 may be a computing device such as a desktop computer, a notebook, a palm top computer, and a cloud server. The terminal 40 may include, but is not limited to, a processor 41, a memory 42. Those skilled in the art will appreciate that fig. 4 is merely an example of a terminal 40 and does not constitute a limitation of terminal 40, and may include more or fewer components than shown, or some components in combination, or different components, e.g., terminal 40 may also include input-output devices, network access devices, buses, etc.
The Processor 41 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 42 may be an internal storage unit of the terminal 40, such as a hard disk or a memory of the terminal 40. The memory 42 may also be an external storage device of the terminal 40, such as a plug-in hard disk provided on the terminal 40, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card, and the like. Further, the memory 42 may include both internal and external storage units of the terminal 40. The memory 42 is used for storing the computer program and other programs and data required by the terminal. The memory 42 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the apparatus/terminal embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other division manners may be adopted in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in another form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method according to the embodiments of the present invention may also be implemented by a computer program instructing related hardware, where the computer program may be stored in a computer readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier wave signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer readable medium may be suitably increased or decreased as required by legislation and patent practice in jurisdictions; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A device control method based on motor imagery electroencephalogram signals is characterized by comprising the following steps:
acquiring original electroencephalogram data of a target person detected by a plurality of electroencephalogram signal channels, and performing wavelet transformation on the original electroencephalogram data to obtain a three-dimensional time-frequency characteristic matrix;
calculating the similarity between any two time points according to the three-dimensional time-frequency characteristic matrix to obtain a time self-attention weight matrix;
according to the time self-attention weight matrix, performing feature extraction on the three-dimensional time-frequency feature matrix to obtain the time features of the motor imagery electroencephalogram signals of the target personnel;
extracting the spatial feature and the frequency feature of the motor imagery electroencephalogram signal, classifying the motor imagery electroencephalogram signal based on the time feature, the spatial feature and the frequency feature, and controlling equipment connected with the target person according to a classification result.
2. The device control method based on motor imagery electroencephalogram signal of claim 1, wherein calculating a similarity between any two time points according to the three-dimensional time-frequency feature matrix comprises:
determining a two-dimensional characteristic matrix corresponding to each time point according to the three-dimensional time-frequency characteristic matrix;
the similarity between any two time points is calculated based on the following formula:
P¹ᵢⱼ = f(Tᵢ(Tⱼ)′) / Σⱼ₌₁ᵀ f(Tᵢ(Tⱼ)′)

in the formula, P¹ᵢⱼ is the similarity between the ith time point and the jth time point, f is the similarity function, T is the number of time points, Tᵢ is the two-dimensional feature matrix corresponding to the ith time point, and (Tⱼ)′ is the transpose of the two-dimensional feature matrix corresponding to the jth time point.
3. The device control method based on motor imagery electroencephalogram signals of claim 1, wherein the step of performing feature extraction on the three-dimensional time-frequency feature matrix according to the time self-attention weight matrix to obtain the time features of the motor imagery electroencephalogram signals of the target person comprises the steps of:
according to the time self-attention weight matrix, extracting the influence characteristics of each time point from the three-dimensional time-frequency characteristic matrix to obtain a time influence characteristic matrix;
and calculating the time characteristics of the motor imagery electroencephalogram signals based on the time influence characteristic matrix.
4. The device control method based on motor imagery electroencephalogram signals of claim 3, wherein the time influence feature matrix is obtained by extracting the influence features of each time point according to the following formula:
S¹ = P¹ × A

in the formula, S¹ is the time influence feature matrix, P¹ is the time self-attention weight matrix, and A is the three-dimensional time-frequency feature matrix.
5. The device control method based on motor imagery electroencephalogram signal of claim 3, wherein the temporal characteristics of the motor imagery electroencephalogram signal are calculated based on the following equation:
G¹ = α × S¹ + A

in the formula, G¹ is the time feature of the motor imagery electroencephalogram signal, α is a feature parameter obtained by pre-training, S¹ is the time influence feature matrix, and A is the three-dimensional time-frequency feature matrix.
6. The device control method based on motor imagery electroencephalogram signal, of claim 1, wherein extracting spatial features of the motor imagery electroencephalogram signal comprises:
calculating the similarity between any two electroencephalogram signal channels according to the three-dimensional time-frequency characteristic matrix to obtain a spatial self-attention weight matrix;
and according to the spatial self-attention weight matrix, performing feature extraction on the three-dimensional time-frequency feature matrix to obtain the spatial features of the ocular motor imagery electroencephalogram signals.
7. The device control method based on motor imagery electroencephalogram signal, of claim 1, wherein extracting frequency features of the motor imagery electroencephalogram signal comprises:
calculating the similarity between any two frequency points according to the three-dimensional time-frequency characteristic matrix to obtain a frequency self-attention weight matrix;
and according to the frequency self-attention weight matrix, performing feature extraction on the three-dimensional time-frequency feature matrix to obtain the frequency features of the motor imagery electroencephalogram signals.
8. The device control method based on motor imagery electroencephalogram signals of any one of claims 1 to 7, wherein classifying the motor imagery electroencephalogram signals based on the temporal feature, the spatial feature, and the frequency feature comprises:
fusing the time characteristic, the space characteristic and the frequency characteristic to obtain a space-time characteristic of the motor imagery electroencephalogram signal;
and inputting the space-time characteristics of the motor imagery electroencephalogram signals into a preset characteristic classification channel for classification to obtain the category of the motor imagery electroencephalogram signals.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202110440696.XA 2021-04-23 2021-04-23 Equipment control method and terminal based on motor imagery electroencephalogram signals Pending CN113143295A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110440696.XA CN113143295A (en) 2021-04-23 2021-04-23 Equipment control method and terminal based on motor imagery electroencephalogram signals


Publications (1)

Publication Number Publication Date
CN113143295A true CN113143295A (en) 2021-07-23

Family

ID=76869847

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110440696.XA Pending CN113143295A (en) 2021-04-23 2021-04-23 Equipment control method and terminal based on motor imagery electroencephalogram signals

Country Status (1)

Country Link
CN (1) CN113143295A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103258215A (en) * 2013-05-10 2013-08-21 杭州电子科技大学 Multi-lead correlation analysis electroencephalo-graph (EEG) feature extraction method
CN107169462A (en) * 2017-05-19 2017-09-15 山东建筑大学 A kind of two sorting techniques of the EEG signals tagsort based on step analysis
KR20190062285A (en) * 2017-11-27 2019-06-05 고려대학교 산학협력단 Apparatus and method for classifying motor imagery based on local channel region analysis using common spatial pattern
CN111012336A (en) * 2019-12-06 2020-04-17 重庆邮电大学 Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
CN111062250A (en) * 2019-11-12 2020-04-24 西安理工大学 Multi-subject motor imagery electroencephalogram signal identification method based on depth feature learning
CN111310656A (en) * 2020-02-13 2020-06-19 燕山大学 Single motor imagery electroencephalogram signal identification method based on multi-linear principal component analysis
CN112120694A (en) * 2020-08-19 2020-12-25 中国地质大学(武汉) Motor imagery electroencephalogram signal classification method based on neural network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIULING LIU; YONGLONG SHEN; JING LIU; JIANLI YANG: "Parallel Spatial-Temporal Self-Attention CNN-Based Motor Imagery Classification for BCI", 《FRONTIERS IN NEUROSCIENCE》 *
ZHENQI LI; JING WANG; ZIYU JIA; YOUFANG LIN: "Learning Space-Time-Frequency Representation with Two-Stream Attention Based 3D Network for Motor Imagery Classification", 《2020 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM)》 *
YANG JUN; MA ZHENGMIN; SHEN TAO; CHEN ZHUANGFEI; SONG YAOLIAN: "Multi-channel Motor Imagery EEG Decoding Method Based on Deep Spatio-Temporal Feature Fusion", 《JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113655884A (en) * 2021-08-17 2021-11-16 河北师范大学 Equipment control method, terminal and system
CN114795178A (en) * 2022-04-28 2022-07-29 电子科技大学 Multi-attention neural network-based brain state decoding method
CN114795178B (en) * 2022-04-28 2023-05-05 电子科技大学 Brain state decoding method based on multi-attention neural network
CN116595455A (en) * 2023-05-30 2023-08-15 江南大学 Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction
CN116595455B (en) * 2023-05-30 2023-11-10 江南大学 Motor imagery electroencephalogram signal classification method and system based on space-time frequency feature extraction

Similar Documents

Publication Publication Date Title
CN113627518B (en) Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning
Altaheri et al. Physics-informed attention temporal convolutional network for EEG-based motor imagery classification
CN113693613B (en) Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium
CN111160139B (en) Electrocardiosignal processing method and device and terminal equipment
CN109165556B (en) Identity recognition method based on GRNN
Perna Convolutional neural networks learning from respiratory data
CN113143295A (en) Equipment control method and terminal based on motor imagery electroencephalogram signals
CN113133769A (en) Equipment control method, device and terminal based on motor imagery electroencephalogram signals
CN112001306A (en) Electroencephalogram signal decoding method for generating neural network based on deep convolution countermeasure
CN108256629A (en) The unsupervised feature learning method of EEG signal based on convolutional network and own coding
Köster et al. A two-layer model of natural stimuli estimated with score matching
CN112990008B (en) Emotion recognition method and system based on three-dimensional characteristic diagram and convolutional neural network
CN113712573A (en) Electroencephalogram signal classification method, device, equipment and storage medium
KR20200018868A (en) Method for Adaptive EEG signal processing using reinforcement learning and System Using the same
CN117009780A (en) Space-time frequency domain effective channel attention motor imagery brain electrolysis code method based on contrast learning
CN115935238A (en) Gesture recognition and classification method and device based on lightweight convolutional network
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN111026267A (en) VR electroencephalogram idea control interface system
CN111772629B (en) Brain cognitive skill transplanting method
CN113887559A (en) Brain-computer information fusion classification method and system for brain off-loop application
CN117193537A (en) Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning
CN116340825A (en) Method for classifying cross-tested RSVP (respiratory tract protocol) electroencephalogram signals based on transfer learning
CN115169384A (en) Electroencephalogram classification model training method, intention identification method, equipment and medium
CN114224354B (en) Arrhythmia classification method, arrhythmia classification device, and readable storage medium
CN114386479B (en) Medical data processing method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210723