CN115919330A - EEG Emotional State Classification Method Based on Multi-level SE Attention and Graph Convolution - Google Patents
- Publication number
- CN115919330A (application CN202211503612.3A)
- Authority
- CN
- China
- Prior art keywords
- sample
- model
- test
- eeg
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
The invention discloses an EEG emotional state classification method based on multi-level SE attention and graph convolution. The method first acquires and preprocesses the data; EEG features are then extracted through differential entropy (DE), turning the three-dimensional EEG time series into two-dimensional sample matrices. Training and test sets are defined separately for the two task scenarios so that they do not overlap. For each sample, an EEG channel relation matrix and recalibrated EEG channel features are obtained through the channel-relation network and the MSE module respectively, and together they form the graph data structure for that sample. The graph is then fed into a GCN network to extract deeper graph structure information, and a pooling method outputs the final representation of the whole graph. Finally, model performance under both task scenarios is evaluated by classification accuracy. The invention alleviates the under-utilization of channel, frequency band and structure information in EEG emotion recognition, and trains a high-accuracy emotional state classifier in both within-subject and cross-subject settings.
Description
Technical Field
The invention belongs to the field of electroencephalogram (EEG) emotional state recognition within biometric recognition, and particularly relates to an EEG emotional state classification method based on multi-level SE (Squeeze-and-Excitation) attention and graph convolution.
Background
Emotion is a complex, integrated psychophysiological state that plays a central role in human-computer interaction and affective computing. Some emotion recognition studies rely on non-physiological signals such as images, gestures and sounds, but neuroscience research holds that physiological signals, being hard to disguise or hide, describe emotion more faithfully than behavioral signals. EEG is a physiological signal that directly and accurately reflects the activity of the human brain, and with the rapid development of non-invasive, easy-to-use, inexpensive EEG recording devices, emotion recognition based on EEG has received increasing attention.
EEG-based emotion recognition studies mainly involve two aspects: extraction of discriminative EEG features and classification of emotions. Broadly, hand-crafted EEG features fall into two types: time-domain features (such as Hjorth parameters and fractal dimension) and frequency-domain features (such as PSD and DE). Unlike time-domain features, frequency-domain features capture EEG information from a spectral perspective, and several EEG-based emotion detection studies have demonstrated their effectiveness in emotion recognition.
In recent years, deep learning methods based on CNNs have become popular in many fields; a CNN extracts local features on regular grid data with convolution kernels. EEG signals, however, are irregular data, because they are recorded by electrodes placed on an irregular grid. Although some CNN- and RNN-based studies successfully convert continuous EEG frames into regular grids for convolution, these methods require a 2D representation of the EEG channels on the scalp, which loses spatial information about the channels. In contrast to such two-dimensional planar representations of the relationships between EEG channels, the graph provides an effective data structure for representing a set of EEG channels and their relationships. Owing to the superior performance of the graph convolutional network (GCN) on graph-structured data, there is growing research into the role of EEG signal topology in EEG emotion recognition. These studies tend to model the relationships between different EEG channels with a graph neural network, or to capture local and global relationships between different channels.
Research results show that GCN-based EEG emotion recognition models achieve better performance than CNN-based models, but the GCN still has shortcomings. First, when a GCN has too many layers, the node representations tend to collapse toward near-identical values across the whole graph (over-smoothing), which degrades performance. That is, a few GCN layers suffice to learn the topological characteristics of the graph, and stacking more layers yields poor representations. A GCN therefore cannot mine deeper features from EEG data simply by stacking layers the way a CNN can, so an additional network is needed to help the GCN mine deeper information and learn a better graph. Second, although neural network models using frequency-domain features perform well in EEG emotion recognition, these models treat the features of different frequency bands and different channels equally. Studies show that different bands and channels play different roles in human emotional behavior, and the model should assign them different importance according to the situation.
In summary, previous GCN-based emotion recognition research has neglected the relationships among EEG channels and the differing importance of frequency bands and features, and a shallow GCN cannot mine deeper information. The invention therefore learns the importance of different frequency bands and different channels with a Multi-scale SE (MSE) mechanism, transforms the graph-represented EEG frequency-domain features with the resulting weights to help the GCN mine deep EEG features, and finally uses the GCN to learn the relationships among EEG channels and perform the final emotion recognition task.
The invention combines MSE and GCN and trains the MSE-based graph neural network model with optimized parameter settings and suitable training techniques, so as to achieve better emotional state classification performance in both within-subject and cross-subject settings.
Disclosure of Invention
To address these shortcomings, MSE attention and the GCN are combined: the GCN acquires information about the EEG channel structure, while MSE attends to the contributions of different channels, frequency bands and samples to the model's emotion recognition ability and helps the GCN acquire deeper information. The method alleviates, to a certain extent, the under-utilization of channel, frequency band and structural information in EEG emotion recognition, and offers low time complexity, high computational efficiency and strong generalization ability.
The technical scheme adopted by the invention is as follows:
The invention uses DE features as the EEG frequency-domain features and a model combining multi-level SE attention with a GCN as the classifier, and by analyzing the EEG signals effectively distinguishes positive, negative and neutral emotional states under both the within-subject and cross-subject tasks. First, the data are acquired, band-pass filtered, and cleaned of artifacts with independent component analysis (ICA). Second, EEG features are extracted through DE, yielding two-dimensional sample matrices from the three-dimensional EEG time series. Training and test sets are then defined separately for the two task scenarios so that they do not overlap. For each sample, the EEG channel node relation matrix A and the recalibrated EEG channel node features are obtained through two network modules, which together form the graph data structure for that sample. The graph is then fed into the GCN network, so that the recalibrated data pass through the GCN and yield an output containing structural information, and a pooling method outputs the final representation of the whole graph. Finally, model performance under both task scenarios is evaluated by classification accuracy.
The technical scheme adopted by the invention for solving the technical problem comprises the following steps:
step S1, data processing:
The analysis below takes an emotion data set as an example; the raw EEG data acquired by the EEG acquisition equipment are processed as follows:
s1-1: data denoising
The data set used by the invention to verify model performance is SEED. First, the raw EEG signals in the data set are down-sampled to 200 Hz, then band-pass filtered to 0.3-50 Hz, and finally the ocular artifacts in the signals are removed with the ICA technique.
S1-2: DE feature extraction
DE feature extraction is performed on the artifact-free EEG data, and each subject's data are segmented with a 1 s non-overlapping sliding window, yielding 3394 data samples. Each sample is x_i ∈ R^{n×d}, where n = 62 is the number of EEG acquisition channels and d = 5 is the number of extracted frequency-domain features, one per band: delta (1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz) and gamma (31-50 Hz).
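As a sketch of this step (assuming the five band signals have already been band-pass filtered; all function and variable names are illustrative, not from the patent): for an approximately Gaussian signal, the DE reduces to 0.5·ln(2πeσ²), computed per channel, per band, per 1 s window:

```python
import numpy as np

def differential_entropy(window):
    """DE of an approximately Gaussian signal: 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(window))

def extract_de_features(band_signals, fs=200, win_sec=1):
    """band_signals: (n_channels, n_bands, n_points) EEG that has already
    been band-pass filtered into the five bands. Returns an array of shape
    (n_windows, n_channels, n_bands): one 62 x 5 sample x_i per
    non-overlapping 1 s window."""
    n_ch, n_bands, n_pts = band_signals.shape
    win = fs * win_sec
    n_win = n_pts // win
    feats = np.empty((n_win, n_ch, n_bands))
    for w in range(n_win):
        seg = band_signals[:, :, w * win:(w + 1) * win]
        for c in range(n_ch):
            for b in range(n_bands):
                feats[w, c, b] = differential_entropy(seg[c, b])
    return feats

rng = np.random.default_rng(0)
demo = rng.standard_normal((62, 5, 200 * 3))   # 3 s of synthetic "EEG"
X = extract_de_features(demo)                  # 3 samples, each 62 x 5
```

For unit-variance noise each DE value is close to 0.5·ln(2πe) ≈ 1.42, which makes the sketch easy to sanity-check.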
S2, data definition and data set division:
Emotional state classification has two test scenarios: within-subject and cross-subject. The two scenarios use different data definitions and data set partitions, each described in detail below.
Assume there are N subjects; the set of subjects is denoted U, and the data set of each subject is D_s = {(x_i^s, y_i^s)}_{i=1}^n, where s denotes a subject, n the number of samples of that subject, x_i^s the i-th sample of subject s, and y_i^s its corresponding label.
Each subject has 15 emotional trials. For the within-subject emotional state classification task: the first 9 emotional trials of each subject, with their emotional state labels assumed known, are selected as the model's training set; the last 6 emotional trials, with their labels assumed unknown, are selected as the test set.
For cross-subject emotional state classification, the data set is partitioned with the leave-one-out method. Specifically, the data of all 15 emotional trials of the first subject, with emotional state labels assumed unknown, serve as the test set, while the data of all 15 emotional trials of the remaining N-1 subjects, with labels assumed known, serve as the training set. Then, in turn, all 15 emotional trials of the i-th subject (i = 2, 3, ..., N) are taken as the test set, with the trials of the remaining N-1 subjects as the training set. In total, N modeling experiments are performed and the average accuracy is calculated.
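The two partition schemes can be sketched as follows (a minimal illustration with placeholder data; the function names are assumptions, not the patent's code):

```python
def within_subject_split(trials):
    """trials: list of 15 per-trial data sets for one subject.
    First 9 trials -> training set; last 6 trials -> test set
    (the test trials' labels are treated as unknown)."""
    return trials[:9], trials[9:]

def loso_splits(subjects):
    """subjects: list of N per-subject data sets. Yields (train, test)
    pairs: each subject in turn is the test set, the other N-1 subjects
    form the training set (leave-one-subject-out)."""
    for i in range(len(subjects)):
        yield subjects[:i] + subjects[i + 1:], subjects[i]

demo_trials = [f"trial{k}" for k in range(15)]
tr, te = within_subject_split(demo_trials)
folds = list(loso_splits([f"subj{k}" for k in range(5)]))  # N = 5 demo
```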
Step S3. Construction of EEG graph Structure
S3-1: model specific data entry
For the within-subject scenario: the training set input of the model is {(x_i^s, y_i^s)}_{i=1}^n, where x_i^s is the i-th sample of subject s's training set and y_i^s is its true label. The test set input of the model is the set of unlabeled samples {x_i^s}.
For the cross-subject scenario: the test set input of the model is {x_i^{s1}}, where x_i^{s1} is the i-th sample of the test subject s1. The training set input of the model is {(x_i^{s2}, y_i^{s2})} with s2 ∈ U − {s1}. Unlike for the training set, the model obtains no label information about any sample i from the test set.
S3-2: electroencephalogram channel relation matrix construction network
Under both task scenarios, for each sample x_i ∈ R^{n×d} in the training and test sets, the adjacency matrix forming module constructs, from the specific input of the sample, an adjacency matrix A ∈ R^{n×n} describing the relationships among the n channels, where A_ij represents the relevance weight between EEG channels i and j.
S3-3: MSE-based electroencephalogram channel feature re-calibration and recombination network
The invention designs channel- and band-wise SE attention mechanisms to explore the contributions of different channels and different frequency bands to emotional state recognition, and a sample attention mechanism to explore the contributions of different samples. Together, the three attention mechanisms form the Multi-Scale SE module (MSE). Under both task scenarios, for each sample x_i ∈ R^{n×d} in the training and test sets, the MSE module recalibrates and recombines the sample, according to its specific input, into a new sample feature x_re_i ∈ R^{n×(3d)}.
To explore the importance of different channel characteristics,the invention designs a Channel-SE module, and particularly uses F firstly sq Function finding average representation of each channelThen using F ex The function obtains the non-linear relation between channels, and obtains the importance expression v _ c of each channel i Finally by F scale The final output of the Channel-SE module is obtained>
To explore the importance of different frequency band features, a Frequency-SE module is designed. Specifically, the F_sq function first computes the average representation of each band; the F_ex function then captures the nonlinear relationships between bands and yields the importance representation v_f_i of each band; finally F_scale produces the final output x_f_i of the Frequency-SE module.
To explore the different contributions of different samples x_i to the model, a Sample-Attention module is designed. Specifically, the F_sq function first computes the channel-average representation and the band-average representation of each sample x_i; the two are concatenated into an average representation u_s of the sample; a fully connected network then performs feature transformation on u_s to obtain the sample importance representation v_s_i; finally F_scale produces the final output x_s_i of the Sample-Attention module.
In summary, the recalibrated sample features of the Channel-SE, Frequency-SE and Sample-Attention modules, x_c_i, x_f_i and x_s_i respectively, are obtained. To form the final output of the MSE module, the invention splices the three outputs:

x_re_i = [x_c_i, x_f_i, x_s_i] ∈ R^{n×(3d)}
s3-4: composition of graph data structure
The adjacency relation matrix A of the graph and the recalibrated node features x_re_i are obtained from step S3-2 and step S3-3 respectively; next, the input Graph_i of the GCN network is constructed, specified by the node features x_re_i and the adjacency matrix A.
S4, construction and training of a neural network GCN model:
s4-1: construction of neural network GCN model
At the end of step S3, the input x_re_i of the graph convolutional neural network (GCN) is obtained from sample x_i. The 2-layer graph convolutional neural network used in the invention is expressed as:

Z = L̂ · ReLU(L̂ · x_re_i · W_1) · W_2

where L̂ is the Laplacian matrix on the graph computed from the adjacency relation matrix A, and Z ∈ R^{n×b}, W_1 ∈ R^{3d×a}, W_2 ∈ R^{a×b}. In the invention a = 40 and b = 20, where a and b are the dimensions of the hidden layer and output layer of the GCN network respectively.
To obtain the final emotional state classification result, Z is pooled by a GraphReadout method to obtain the final representation g_out of the graph convolutional neural network.
Finally, g_out is fed into a classifier to obtain the final emotional state prediction y_pred.
S4-2: training of neural network GCN model
The network modules in steps S3-2, S3-3 and S4-1 are trained using CrossEntropy as the loss function, where x denotes a sample and p(x) and q(x) are the true and predicted sample distributions of x, respectively.
The training process uses adaptive moment estimation (Adam) to perform gradient descent so that the network modules in steps S3-2, S3-3 and S4-1 achieve better performance.
One pass of the data through the modules of steps S3-2, S3-3 and S4-1 constitutes one training iteration; T iterations are needed for the proposed network to reach good emotional state classification performance.
S5, evaluating the performance of the model under two situations of an in-test situation and a cross-test situation:
the present invention specifically verifies model performance on the SEED data set.
The predicted states y_pred obtained in step S4-1 are compared with the true states Y_t given by the test set labels to evaluate model performance. Accuracy is the proportion of correctly classified samples among all test samples; the model accuracy on subject s is calculated as:

Acc_s = (TP + TN) / (TP + TN + FP + FN)
where TP is the number of positive samples the model predicts as positive, TN the number of negative samples predicted as negative, FP the number of negative samples predicted as positive, and FN the number of positive samples predicted as negative.
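A minimal sketch of this accuracy computation from the four counts (the counts shown are illustrative values, not results from the patent):

```python
def accuracy(tp, tn, fp, fn):
    """Classification accuracy: correctly classified / all test samples."""
    return (tp + tn) / (tp + tn + fp + fn)

acc = accuracy(tp=40, tn=45, fp=10, fn=5)   # 85 correct out of 100
```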
The data set partitions for the within-subject and cross-subject scenarios are detailed in step S2. In the within-subject scenario, the proposed model is tested on the EEG data of 6 of the 15 trials; in the cross-subject scenario, it is tested on the EEG data of all 15 trials of the held-out subject.
Compared with the prior art, the invention has the beneficial effects that:
1) For emotional state prediction in both the within-subject and cross-subject scenarios, the method obtains better results than other classification methods;
2) Through the MSE network, the method can explore the influence of different channels, different frequency bands and different samples on the model's final emotion recognition performance, and on this basis explores the relevance among EEG channels through the GCN network; besides achieving good performance, this can provide data support for researchers exploring deeper physiological information in emotion recognition;
3) The method achieves good emotion classification performance in both within-subject and cross-subject task scenarios, showing that the proposed model generalizes well;
in conclusion, the model provided by the invention has better performance in the aspect of emotion state prediction, and meanwhile, the channel and the frequency band play great roles in different EEG identification tasks, so that the model provided by the invention is expected to obtain better effects in different data sets and different EEG classification tasks (cognitive state prediction, motor imagery prediction and the like), and has wide application prospects in actual brain-computer interaction.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a flow chart of the input and output of the network model of the present invention;
FIG. 3 is a diagram of an MSE-GCN model architecture;
FIG. 4 is a network diagram for constructing a brain electrical channel relationship matrix;
FIG. 5 is a diagram of an MSE network architecture;
FIG. 6 is a diagram of a graph data structure construction;
FIG. 7 is a diagram of a GCN network architecture;
Detailed Description
The preferred embodiments of the invention are described in detail below with reference to the accompanying drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art, thereby clearly defining the scope of the invention.
Differential entropy (DE) is used as the EEG frequency-domain feature, and a model combining multi-level SE attention and GCN serves as the classifier; by analyzing the EEG signals, positive, negative and neutral emotional states are effectively distinguished under both the within-subject and cross-subject tasks on different emotion data sets. First, the data are acquired, band-pass filtered, and cleaned of artifacts with the ICA technique. Second, EEG features are extracted through DE, yielding two-dimensional sample matrices from the three-dimensional EEG time series. Training and test sets are then defined separately under the two task scenarios so that they do not overlap. For each sample x_i ∈ R^{n×d}, according to its specific data, the EEG channel relation matrix A and the recalibrated EEG channel node features x_re_i are obtained through the networks of steps S3-2 and S3-3 respectively, and A together with x_re_i makes up the final graph data structure Graph_i for sample x_i. Graph_i is then fed into the GCN network to extract deeper graph structure information, and a graph pooling method outputs the final representation of the whole graph. Finally, the accuracy of the classification results is evaluated with the confusion matrix.
Referring to fig. 1, fig. 2, fig. 3, fig. 4, fig. 5, fig. 6, and fig. 7, an embodiment of the present invention includes the following steps:
step S1, data processing:
The analysis takes an emotion data set as an example; the raw EEG data acquired by the EEG acquisition equipment are processed as follows:
s1-1: data denoising
The data set used to verify model performance is SEED; see the paper "Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks" for details. First, the raw EEG signals in the data set are down-sampled to 200 Hz, then band-pass filtered to 0.3-50 Hz, and finally the ocular artifacts in the signals are removed with the ICA technique.
S1-2: DE feature extraction
DE feature extraction is performed on the artifact-free EEG data. Each subject watches 15 videos that elicit clear emotional changes; the EEG data acquired during one video's playing time are regarded as one emotional trial, so each subject has 15 emotional trials. Each subject's data are segmented with a 1 s non-overlapping sliding window, yielding 3394 data samples. Each sample is x_i ∈ R^{n×d}, where n = 62 is the number of EEG acquisition channels and d = 5 is the number of extracted frequency-domain features, one per band: delta (1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz) and gamma (31-50 Hz).
S2, data definition and data set division:
Emotional state classification has two test scenarios: within-subject and cross-subject. The two scenarios use different data definitions and data set partitions, each described in detail below.
Assume there are N subjects; the set of subjects is denoted U, and each subject's data set is D_s = {(x_i^s, y_i^s)}_{i=1}^n, where s denotes the subject, n the number of samples of that subject, x_i^s the i-th sample of subject s, and y_i^s its corresponding label. For a sample whose emotional state is assumed unknown, only the sample's x_i^s is used.
For the within-subject emotional state classification task: the first 9 emotional trials of each subject (2010 samples in total), with their emotional state labels assumed known, are selected as the model's training set; the last 6 emotional trials (1384 samples in total), with their labels assumed unknown, are selected as the test set. Finally, model performance is verified on each of the N subjects' test sets and the average accuracy is calculated.
For cross-subject emotional state classification, leave-one-out data set partitioning is used. Specifically, the data of all 15 emotional trials of the first subject, with emotional state labels assumed unknown, serve as the test set; the data of all 15 emotional trials of the remaining N-1 subjects, with labels assumed known, serve as the training set. This constitutes one data set partition; then, in turn, all 15 emotional trials of the i-th subject (i = 2, 3, ..., N) are taken as the test set, with the trials of the remaining N-1 subjects as the training set. In total N modeling experiments are performed and the average accuracy is calculated.
Step S3, constructing an EEG graph structure
S3-1: model specific data entry
For the within-subject scenario: the training set input of the model is {(x_i^s, y_i^s)}_{i=1}^n, s ∈ U, where s is the subject, n is the total number of training samples of one subject, x_i^s is the i-th sample of subject s's training set, and y_i^s is its true label. The test set input of the model is {x_i^s}, s ∈ U; unlike for the training set, the model obtains no label information about any sample i from the test set.
For the cross-subject scenario: the test set input of the model is {x_i^{s1}}_{i=1}^n, s1 ∈ U, where s1 is the test subject, n is its total number of samples, and x_i^{s1} is its i-th sample. The training set input of the model is {(x_i^{s2}, y_i^{s2})}_{i=1}^N, s2 ∈ U − {s1}, i.e. all subjects in U except s1, where y_i^{s2} is the true label of the i-th sample and N is the total number of samples of the remaining N−1 subjects. Unlike for the training set, the model obtains no label information about any sample i from the test set.
S3-2: constructing an electroencephalogram channel relationship matrix
The structure of the electroencephalogram channel relationship matrix construction network is shown in fig. 4.
Under both task scenarios, for each sample x_i ∈ R^{n×d} in the training and test sets, the adjacency matrix forming module constructs, from the specific input of the sample, an adjacency matrix A ∈ R^{n×n} describing the relationships among the n channels, where A_ij is the relevance weight between EEG channels i and j, x_i, x_j ∈ R^{1×d} denote the d-dimensional frequency-domain features of single EEG channels, and w ∈ R^{d×1} is a parameter matrix learnable in the neural network.
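The patent's exact combining formula is not reproduced in the text above, so the sketch below substitutes one plausible learnable construction, A_ij = ReLU(x_i·w + x_j·w), purely as an assumption to show the shapes involved (a (n, d) sample and a (d, 1) learnable weight vector yielding an (n, n) adjacency matrix):

```python
import numpy as np

def build_adjacency(x, w):
    """Hypothetical adjacency construction (an assumption, not the
    patent's exact formula): score each channel with the shared learnable
    weight vector w, then take a non-negative symmetric pairwise relation.
    x: (n, d) sample; w: (d, 1) learnable parameters -> A: (n, n)."""
    s = x @ w                       # (n, 1) per-channel score
    A = np.maximum(s + s.T, 0.0)    # ReLU(x_i w + x_j w), symmetric
    return A

rng = np.random.default_rng(1)
x = rng.standard_normal((62, 5))    # one DE sample, 62 channels x 5 bands
w = rng.standard_normal((5, 1))
A = build_adjacency(x, w)
```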
S3-3: the MSE-based electroencephalogram channel feature recalibration and recombination network has a structure shown in FIG. 5.
The invention designs channel- and band-wise SE attention mechanisms to explore the contributions of different channels and different frequency bands to emotional state recognition, and a sample attention mechanism to explore the contributions of different samples. Together, the three attention mechanisms are designed as the Multi-Scale SE (MSE) module. Under both task scenarios, for each sample x_i ∈ R^{n×d} in the training and test sets, the MSE module recalibrates and recombines it, according to its specific input, into a new sample feature x_re_i ∈ R^{n×(3d)}.
To explore the importance of different channel features, the invention designs a Channel-SE module. Specifically, the F_sq function first computes the average representation of each channel:

u_c = F_sq(x_i), with u_c[k] = (1/d) Σ_{j=1}^{d} x_i[k, j], u_c ∈ R^{n×1}

The F_ex function then captures the nonlinear relationships between channels and yields the importance representation of each channel:

v_c = F_ex(u_c) = σ(W_2 · ReLU(W_1 · u_c)), v_c ∈ R^{n×1}

where W_1 ∈ R^{(n/r)×n}, W_2 ∈ R^{n×(n/r)}, and r is the reduction factor, r = 2 in the invention. Finally F_scale yields the final output x_c_i of the Channel-SE module by multiplying each channel row of x_i by its weight in v_c.
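A minimal NumPy sketch of the Channel-SE computation, assuming the standard SE shapes (squeeze to per-channel means over the d bands, bottleneck with reduction r = 2); the weight initializations are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_se(x, W1, W2):
    """Channel-SE sketch. F_sq: squeeze each of the n channels to its mean
    over the d bands; F_ex: two-layer bottleneck (ReLU then sigmoid);
    F_scale: rescale each channel row of x by its learned weight.
    x: (n, d); W1: (n//r, n); W2: (n, n//r)."""
    u_c = x.mean(axis=1)                            # F_sq, shape (n,)
    v_c = sigmoid(W2 @ np.maximum(W1 @ u_c, 0.0))   # F_ex, shape (n,)
    return v_c[:, None] * x                         # F_scale, shape (n, d)

n, d, r = 62, 5, 2
rng = np.random.default_rng(2)
x = rng.standard_normal((n, d))
W1 = rng.standard_normal((n // r, n)) * 0.1
W2 = rng.standard_normal((n, n // r)) * 0.1
x_c = channel_se(x, W1, W2)
```

Each row of x_c is its row of x scaled by a sigmoid weight in (0, 1), which is exactly the recalibration the module is meant to perform.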
To explore the importance of different frequency band features, a Frequency-SE module is designed. Specifically, the F_sq function first computes the average representation of each band:

u_f = F_sq(x_i), with u_f[j] = (1/n) Σ_{k=1}^{n} x_i[k, j], u_f ∈ R^{d×1}

The F_ex function then captures the nonlinear relationships between bands and yields the importance representation of each band:

v_f = F_ex(u_f) = σ(W_2 · ReLU(W_1 · u_f)), v_f ∈ R^{d×1}

where W_1 ∈ R^{(d/r)×d}, W_2 ∈ R^{d×(d/r)}, and r is the reduction factor, r = 5 in the invention. Finally F_scale yields the final output x_f_i of the Frequency-SE module by multiplying each band column of x_i by its weight in v_f.
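The analogous Frequency-SE sketch, this time averaging over the n channels to get one value per band (same standard-SE assumptions as the channel case; with d = 5 and r = 5 the bottleneck has size 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def frequency_se(x, W1, W2):
    """Frequency-SE sketch. F_sq: squeeze each of the d bands to its mean
    over the n channels; F_ex: bottleneck excitation; F_scale: rescale
    each band column of x. x: (n, d); W1: (d//r, d); W2: (d, d//r)."""
    u_f = x.mean(axis=0)                            # F_sq, shape (d,)
    v_f = sigmoid(W2 @ np.maximum(W1 @ u_f, 0.0))   # F_ex, shape (d,)
    return x * v_f[None, :]                         # F_scale, shape (n, d)

n, d, r = 62, 5, 5
rng = np.random.default_rng(4)
x = rng.standard_normal((n, d))
W1 = rng.standard_normal((d // r, d)) * 0.1   # bottleneck size d//r = 1
W2 = rng.standard_normal((d, d // r)) * 0.1
x_f = frequency_se(x, W1, W2)
```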
To explore the different contributions of different samples x_i to the model, a Sample-Attention module is designed. Specifically, the F_sq function first computes the channel-average representation (one value per channel, n values) and the band-average representation (one value per band, d values) of each sample x_i; the two are concatenated into the sample's average representation u_s ∈ R^{(n+d)×1}. A fully connected network then performs feature transformation on u_s to obtain the sample importance representation:

v_s_i = σ(W_2 · ReLU(W_1 · u_s + b_1) + b_2), v_s_i ∈ R^{1×1}

where W_1 ∈ R^{c×(n+d)}, W_2 ∈ R^{1×c}, and b_1 and b_2 are the biases of the corresponding linear layers in the fully connected network; c = 12 in the invention. Finally F_scale yields the final output x_s_i of the Sample-Attention module by scaling x_i with v_s_i.
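A sketch of the Sample-Attention module under the shapes given above (W_1 ∈ R^{c×(n+d)}, W_2 ∈ R^{1×c}, c = 12); the initializations are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_attention(x, W1, b1, W2, b2):
    """Sample-Attention sketch. F_sq: concatenate the per-channel means
    (n values) and per-band means (d values) of x into a (n+d,) summary;
    a small fully connected net maps it to one importance scalar v_s;
    F_scale: rescale the whole sample by v_s. x: (n, d)."""
    u_s = np.concatenate([x.mean(axis=1), x.mean(axis=0)])  # (n + d,)
    h = np.maximum(W1 @ u_s + b1, 0.0)                      # (c,)
    v_s = sigmoid(W2 @ h + b2)[0]                           # scalar in (0, 1)
    return v_s * x

n, d, c = 62, 5, 12
rng = np.random.default_rng(5)
x = rng.standard_normal((n, d))
x_s = sample_attention(x,
                       rng.standard_normal((c, n + d)) * 0.1, np.zeros(c),
                       rng.standard_normal((1, c)) * 0.1, np.zeros(1))
```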
In summary, the recalibrated sample features of Channel-SE, Frequency-SE and Sample-Attention, x_c_i, x_f_i and x_s_i respectively, are obtained. To obtain the final output of the MSE module, the invention splices the three outputs:

x_re_i = [x_c_i, x_f_i, x_s_i] ∈ R^{n×(3d)}
s3-4: the structure of the construction diagram data is specifically constructed as shown in fig. 6.
The adjacency relation matrix A of the graph and the recalibrated node features x_re_i are obtained from step S3-2 and step S3-3 respectively; then the input Graph_i of the GCN network is constructed. Graph_i is specified as Graph_i = (V, E), where the node set V consists of the rows of x_re_i, i.e. each node v_i ∈ V has features v_i ∈ R^{1×(3d)}, and the edge set E is determined by A: when A_ij ≠ 0, the edge (v_i, v_j) is considered to exist.
S4, construction and training of a neural network GCN model:
s4-1: the construction of the neural network GCN model, the structure of which is shown in FIG. 7.
At the end of step S3, the input x_re_i of the graph convolutional neural network (GCN) is obtained from sample x_i. For the underlying principle of the GCN network, refer to the paper "Semi-Supervised Classification with Graph Convolutional Networks".
The specific formula of the 2-layer graph convolutional neural network used in the invention is expressed as follows:

Z = L̂ · ReLU(L̂ · x_re_i · W_1) · W_2

wherein L̂ is the normalized Laplacian matrix on the graph computed from A, Z ∈ R^{n×b}, W_1 ∈ R^{3d×a}, W_2 ∈ R^{a×b}. In the present invention a = 40 and b = 20, where a and b represent the dimensions of the hidden layer and the output layer of the GCN network, respectively.
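A minimal dense sketch of this two-layer propagation. The patent only says L̂ is computed from A, so the Kipf–Welling normalization L̂ = D̃^{-1/2}(A + I)D̃^{-1/2} used here is an assumption:

```python
import numpy as np

def propagation_matrix(A):
    """Normalized graph operator L_hat = D^{-1/2} (A + I) D^{-1/2}, computed
    from the adjacency matrix A with self-loops added (Kipf-Welling style)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W1, W2):
    """Two-layer GCN: Z = L_hat . ReLU(L_hat . X . W1) . W2."""
    L = propagation_matrix(A)
    return L @ np.maximum(0.0, L @ X @ W1) @ W2
```

With n = 62 channels, node features of size 3d = 15, a = 40 and b = 20, the output Z has shape (62, 20).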
In order to obtain the final emotional state classification result, Z is pooled by a GraphReadout method to obtain the final representation g_out of the graph convolutional neural network; specifically, the invention performs a summation over the second dimension of Z:

g_out = Σ_{j=1}^{b} Z_{:,j}, g_out ∈ R^{n×1}
Finally, g_out is fed into a classifier to obtain the final emotional state prediction y_pred, y_pred ∈ R^{Class×1}, where Class is the number of emotional state categories; in the present invention Class = 3.
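The readout and classifier stage might look as follows; the linear layer (Wc, bc) followed by a softmax is an assumption, since the patent does not specify the classifier's form:

```python
import numpy as np

def readout_and_classify(Z, Wc, bc):
    """GraphReadout: sum Z over its second dimension to get g_out in R^n,
    then apply a linear layer plus softmax to get y_pred in R^Class."""
    g_out = Z.sum(axis=1)              # summation over second dimension
    logits = Wc @ g_out + bc           # R^Class, Class = 3 in the patent
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()
```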
S4-2: training of neural network GCN model
When the adjacency matrix A of step S3-2, the MSE module of S3-3 and the GCN model of S4-1 are trained, CrossEntropy is used as the loss function, where x denotes a sample and p(x) and q(x) are the true and predicted sample distributions of x, which can be computed simply from Y_i and y_pred respectively. The CrossEntropy formula is as follows:

H(p, q) = −Σ_x p(x) log q(x)
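The loss above is the standard categorical cross-entropy; a sketch with an epsilon guard against log(0) (the epsilon is an implementation detail, not from the patent):

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_x p(x) log q(x); p is the one-hot true distribution
    (from the label Y_i), q the predicted distribution (from y_pred)."""
    return float(-(np.asarray(p) * np.log(np.asarray(q) + eps)).sum())
```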
The training process uses the adaptive moment estimation method (Adam) to perform gradient descent so that the adjacency matrix A of step S3-2, the MSE module of S3-3 and the GCN model of S4-1 achieve better performance. The parameter settings of Adam are: learning rate 0.015 and weight decay 0.0002.
One pass of data through the adjacency relation matrix A of step S3-2, the MSE module of S3-3 and the GCN model of S4-1 constitutes one training iteration, as shown in FIG. 2; T iterations are required for the proposed network to classify emotional states well.
S5: Evaluate model performance under the within-subject and cross-subject scenarios:
the present invention specifically verifies model performance on the SEED data set.
The predicted states y_pred obtained in S4-1 and the true states Y_t given by the labels in the test set are compared by a confusion matrix to obtain the comparison result and evaluate model performance. Accuracy is the proportion of correctly classified samples among all test samples during model testing; the accuracy of the model on subject s is computed as:

Acc_s = (TP + TN) / (TP + TN + FP + FN)
wherein TP is a positive sample predicted as positive by the model, TN a negative sample predicted as negative, FP a negative sample predicted as positive, and FN a positive sample predicted as negative. The SEED data set includes 15 subjects, each recorded in three sessions, for a total of 45 experiments. The average accuracy over the 15 subjects is:

Acc_mean = (1/15) Σ_{s=1}^{15} Acc_s

and the corresponding standard deviation is:

Std = sqrt((1/15) Σ_{s=1}^{15} (Acc_s − Acc_mean)²)
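A plain-Python sketch of these evaluation formulas (the example accuracies in the test are made up):

```python
import math

def accuracy(tp, tn, fp, fn):
    """Acc = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

def mean_and_std(accs):
    """Average accuracy over subjects and its population standard deviation."""
    m = sum(accs) / len(accs)
    return m, math.sqrt(sum((a - m) ** 2 for a in accs) / len(accs))
```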
The data set division in both the within-subject and cross-subject scenarios is detailed in S3-1. For the within-subject scenario, the proposed model is tested on the EEG data of the last 6 of each subject's 15 trials; for the cross-subject scenario, it is tested on the EEG data of all 15 trials of the held-out subject. The final test results, compared with the prior art (SVM, DGCNN and RGNN), are shown in the following table:
table 1: classifier performance comparison
| Classifier | SVM | DGCNN | RGNN | The invention |
|---|---|---|---|---|
| Within-subject accuracy (%/std) | 83.99/09.72 | 90.40/08.49 | 94.24/05.95 | 96.60/05.32 |
| Cross-subject accuracy (%/std) | 56.73/16.29 | 79.95/09.02 | 85.30/06.72 | 86.07/05.28 |
From the results in the table above, it can be seen that the method of the present invention achieves higher accuracy and smaller variance than SVM, DGCNN and RGNN in both the within-subject and cross-subject scenarios.
The invention is not only suitable for research on emotional state recognition but also for any EEG-based classification or prediction task; it alleviates the problem of EEG individual differences to a certain extent, and has the advantages of low time complexity, high computational efficiency and strong generalization ability.
Claims (6)
1. An EEG emotional state classification method based on multi-level SE attention and graph convolution, characterized in that DE features are used as the frequency domain features of the EEG signals and a model combining multi-level SE attention with a GCN is used as the classifier, realizing effective discrimination of positive, negative and neutral emotional states under the two task scenarios of cross-subject and within-subject through analysis of the EEG signals; firstly, data are acquired and band-pass filtered, and artifacts are removed by the ICA (independent component analysis) technique; secondly, EEG features are extracted through DE, obtaining a two-dimensional sample matrix from the three-dimensional EEG time series; then the training set and test set are defined separately under the two task scenarios, obtaining non-overlapping training and test sets; for each sample, an EEG channel node relation matrix A and reconstructed EEG channel node features are obtained through a network module, finally forming the graph data structure of the sample; the graph data structure is then fed into a GCN network, which extracts deeper structural information from the reconstructed data, and the final representation of the whole graph is output by a pooling method; finally, model performance is evaluated under the two task scenarios using the classification accuracy.
2. The EEG emotional state classification method based on multi-level SE attention and graph convolution according to claim 1, characterized by comprising the following steps:
S1: Data processing; the processing steps for the raw EEG data acquired by the EEG acquisition equipment are as follows:
S1-1: Data denoising
S1-2: DE feature extraction
S2, data definition and data set division:
Emotional state classification has two test scenarios: within-subject and cross-subject; the data definitions and data set divisions of the model tests in the two scenarios differ, as detailed below:
Assume there are N subjects; the set of N subjects is denoted U, and the data set of each subject is denoted D_s = {(x_i^s, Y_i^s)}_{i=1}^{n}, wherein s denotes the subject, n denotes the number of samples of the subject, x_i^s represents the i-th sample of subject s, and Y_i^s is its corresponding label;
Each subject has 15 emotion trials; the within-subject emotional state classification task is divided as follows: for each subject, the first 9 emotion trials, whose emotional state labels are assumed known, are selected as the training set of the model; the last 6 emotion trials, whose emotional state labels are assumed unknown, are selected as the test set of the model;
The cross-subject emotional state classification uses leave-one-out data set division: first, the data of all 15 emotion trials of the first subject, with emotional state labels assumed unknown, are taken as the test set, and the data of all 15 emotion trials of the remaining N−1 subjects, with labels assumed known, are taken as the training set; then, in turn, the 15 emotion trials of the i-th subject (i = 2, 3, …, N) are taken as the test set and the data of all 15 emotion trials of the remaining N−1 subjects as the training set; N model experiments are performed in total and the average accuracy is computed;
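The two divisions can be sketched as follows (trials represented as opaque objects; 15 subjects with 15 trials each, per the SEED description):

```python
def within_subject_split(trials):
    """Within-subject: first 9 of a subject's 15 trials train, last 6 test."""
    return trials[:9], trials[9:]

def leave_one_subject_out(subjects):
    """Cross-subject: each subject's 15 trials form the test set once; the
    remaining N-1 subjects' trials form the training set."""
    for i, test in enumerate(subjects):
        train = [t for j, s in enumerate(subjects) if j != i for t in s]
        yield train, test
```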
Step S3: Construct the EEG graph structure
S3-1: model specific data entry
For the within-subject scenario: the training set input of the model is {(x_i^s, Y_i^s)}, wherein x_i^s denotes the i-th sample of the training set of subject s and Y_i^s denotes the corresponding true label of that sample; the test set input of the model is {x_i^s};
For the cross-subject scenario: the test set input of the model is {x_i^{s1}}, wherein x_i^{s1} denotes the i-th sample of the held-out subject s1; the training set input of the model is {(x_i^s, Y_i^s)} with s ≠ s1; wherein, unlike for the training set, the model obtains no label information about any sample i from the test set;
S3-2: Construct the EEG channel relation matrix
Under the two task scenarios, for each sample x_i ∈ R^{n×d} in the training and test sets, the adjacency matrix forming module constructs, according to the specific input of the sample, an adjacency matrix A ∈ R^{n×n} describing the relations among the n channels, wherein A_ij represents the relevance weight between EEG channels i and j;
S3-3: MSE-based EEG channel feature recalibration and recombination network
A set of three attention mechanisms, namely the MSE module, is designed; for each sample x_i ∈ R^{n×d} in the training and test sets under the two task scenarios, the MSE module recalibrates and recombines, according to the specific input of the sample, a new sample feature x_re_i ∈ R^{n×(3d)};
To explore the importance of different channel features, a Channel-SE module is designed; specifically, the F_sq function is first used to compute the average representation z_c_i of each channel, then the F_ex function captures the non-linear relations among the channels to obtain the importance representation v_c_i of each channel, and finally F_scale gives the final output x̃_c_i of the Channel-SE module;
To explore the importance of different frequency band features, a Frequency-SE module is designed; specifically, the F_sq function is first used to compute the average representation z_f_i of each frequency band, then the F_ex function captures the non-linear relations among the frequency bands to obtain the importance representation v_f_i of each band, and finally F_scale gives the final output x̃_f_i of the Frequency-SE module;
To explore the different contributions of different samples x_i to the model, a Sample-Attention module is designed; specifically, the F_sq function is first used to compute, for each sample x_i, the channel average representation z_c_i and the frequency band average representation z_f_i, which are concatenated into the average representation z_s_i of the sample; a fully-connected network then performs feature conversion on z_s_i to obtain the importance representation v_s_i of the sample; finally, F_scale gives the final output x̃_s_i of the Sample-Attention module;
In summary, the recalibrated sample features x̃_c_i, x̃_f_i and x̃_s_i of the Channel-SE, Frequency-SE and Sample-Attention modules are obtained; to obtain the final output of the MSE module, the three outputs are concatenated into the final output x_re_i = [x̃_c_i, x̃_f_i, x̃_s_i] ∈ R^{n×(3d)};
S3-4: Build the graph data structure
The adjacency relation matrix A of the graph and the reconstructed node features x_re_i are obtained from step S3-2 and step S3-3 respectively; next, the input graph Graph_i of the GCN network is constructed; Graph_i is specifically expressed as Graph_i = (v, e), with node set v = x_re_i and edges (v_i, v_j) existing where A_ij ≠ 0;
S4, constructing and training a neural network GCN model:
S5: Evaluate model performance in the within-subject and cross-subject scenarios.
3. The EEG emotional state classification method based on multi-level SE attention and graph convolution according to claim 2, characterized in that the EEG channel relation matrix constructed in step S3-2 is specifically realized as follows:
wherein A_ij represents the relevance weight between EEG channels i and j, x_i, x_j ∈ R^{1×d} each represent the d-dimensional frequency domain features of one EEG channel, and w ∈ R^{d×1} represents a parameter matrix learnable in the neural network.
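The exact functional form of A_ij is given by a formula omitted from this extraction; a common choice in graph-based EEG models, consistent with the stated shapes x_i, x_j ∈ R^{1×d} and w ∈ R^{d×1}, is a learnable kernel on channel feature differences. This is strictly an assumption, not the patent's formula:

```python
import numpy as np

def channel_relation_matrix(x, w):
    """Hypothetical adjacency: A_ij = exp(-|x_i - x_j| @ w), a learnable
    kernel on channel feature differences. x: R^{n x d}, w: R^{d}. The
    patent's exact formula is not recoverable from this extraction; only
    the operand shapes here match claim 3."""
    diff = np.abs(x[:, None, :] - x[None, :, :])  # pairwise |x_i - x_j|, R^{n x n x d}
    return np.exp(-diff @ w)                      # R^{n x n}
```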
4. The EEG emotional state classification method based on multi-level SE attention and graph convolution of claim 2, wherein the MSE-based EEG channel feature recalibration and recombination network of step S3-3 is specifically realized as follows:
The Channel-SE module formula is specifically as follows:

z_c_i = F_sq(x_i) = (1/d) Σ_{k=1}^{d} x_i[:, k]
v_c_i = F_ex(z_c_i) = σ(W_2 δ(W_1 z_c_i))

wherein v_c_i ∈ R^{n×1}, W_1 ∈ R^{(n/r)×n}, W_2 ∈ R^{n×(n/r)}, where r is the reduction factor, r = 2; finally, F_scale gives the final output of the Channel-SE module:

x̃_c_i = F_scale(x_i, v_c_i) = v_c_i ⊙ x_i

wherein ⊙ denotes row-wise scaling of x_i by v_c_i.
The Frequency-SE module formula is specifically as follows:

z_f_i = F_sq(x_i) = (1/n) Σ_{k=1}^{n} x_i[k, :]
v_f_i = F_ex(z_f_i) = σ(W_2 δ(W_1 z_f_i))

wherein v_f_i ∈ R^{d×1}, W_1 ∈ R^{(d/r)×d}, W_2 ∈ R^{d×(d/r)}, where r is the reduction factor; finally, F_scale gives the final output of the Frequency-SE module:

x̃_f_i = F_scale(x_i, v_f_i) = x_i ⊙ v_f_i

wherein ⊙ denotes column-wise scaling of x_i by v_f_i.
The Sample-Attention module formula is specifically as follows:

z_s_i = [z_c_i; z_f_i] ∈ R^{(n+d)×1}
v_s_i = σ(W_2 δ(W_1 z_s_i + b_1) + b_2)

wherein v_s_i ∈ R^{1×1}, W_1 ∈ R^{c×(n+d)}, W_2 ∈ R^{1×c}, b_1 and b_2 are the biases of the corresponding Linear layers of the fully-connected network, and c = 12; finally, F_scale gives the final output of the Sample-Attention module:

x̃_s_i = F_scale(x_i, v_s_i) = v_s_i · x_i
5. The multi-level SE attention and graph convolution-based EEG emotional state classification method of claim 4, wherein the step S4 of constructing and training the neural network GCN model is implemented as follows:
S4-1: Construction of the neural network GCN model
The input Graph_i of the graph convolutional neural network GCN model is obtained from sample x_i; the specific formula of the 2-layer graph convolutional neural network is expressed as follows:

Z = L̂ · ReLU(L̂ · x_re_i · W_1) · W_2

wherein L̂ is the normalized Laplacian matrix on the graph computed from the adjacency relation matrix A, Z ∈ R^{n×b}, W_1 ∈ R^{3d×a}, W_2 ∈ R^{a×b}; here a = 40 and b = 20, where a and b represent the dimensions of the hidden and output layers of the GCN network, respectively; Z is pooled by the GraphReadout method into the final representation g_out of the graph convolutional network by summing over the second dimension of Z:

g_out = Σ_{j=1}^{b} Z_{:,j}
Finally, g_out is fed into a classifier to obtain the final emotional state prediction y_pred, y_pred ∈ R^{Class×1}, where Class is the number of emotional state categories, Class = 3;
S4-2: Training of the neural network GCN model
When the adjacency relation matrix A of step S3-2, the MSE module of S3-3 and the GCN model of S4-1 are trained, CrossEntropy is used as the loss function, where x denotes a sample and p(x) and q(x) are the true and predicted sample distributions of x respectively; the training process uses the adaptive moment estimation method (Adam) to perform gradient descent.
6. The EEG emotional state classification method based on multi-level SE attention and graph convolution according to claim 5, characterized in that the model performance evaluation of step S5 under the within-subject and cross-subject scenarios is as follows:
The predicted states y_pred obtained in step S4-1 and the true states Y_t given by the labels in the test set are compared to obtain the comparison result and evaluate model performance; accuracy is the proportion of correctly classified samples among all test samples during model testing, and the accuracy of the model on subject s is computed as:

Acc_s = (TP + TN) / (TP + TN + FP + FN)
wherein, TP is a positive sample predicted as a positive class by the model, TN is a negative sample predicted as a negative class by the model, FP is a negative sample predicted as a positive class by the model, and FN is a positive sample predicted as a negative class by the model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211503612.3A CN115919330A (en) | 2022-11-28 | 2022-11-28 | EEG Emotional State Classification Method Based on Multi-level SE Attention and Graph Convolution |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115919330A true CN115919330A (en) | 2023-04-07 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116369949A (en) * | 2023-06-06 | 2023-07-04 | 南昌航空大学 | Electroencephalogram signal grading emotion recognition method, electroencephalogram signal grading emotion recognition system, electronic equipment and medium |
CN117033638A (en) * | 2023-08-23 | 2023-11-10 | 南京信息工程大学 | Text emotion classification method based on EEG cognition alignment knowledge graph |
CN117455994A (en) * | 2023-11-07 | 2024-01-26 | 暨南大学 | Camera pose estimation method, system, electronic equipment and readable medium |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |