CN116115240A - Electroencephalogram emotion recognition method based on multi-branch chart convolution network - Google Patents


Info

Publication number
CN116115240A
Authority
CN
China
Prior art keywords
emotion
branch
electroencephalogram
matrix
data
Prior art date
Legal status
Pending
Application number
CN202211614998.5A
Other languages
Chinese (zh)
Inventor
张建海
刘伟健
朱莉
刘芬
陈文斌
梅佳伟
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202211614998.5A
Publication of CN116115240A

Classifications

    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses an electroencephalogram (EEG) emotion recognition method based on a multi-branch graph convolution network. The modeled graph-signal feature data of the invention restores the spatial and functional connections of the EEG data and retains more emotion information through multidimensional features, so that the emotional states of individuals can be classified and identified accurately. Combined with an attention mechanism, sample data are well distinguished by the richness of their emotion information, which greatly reduces the interference of redundant data with classification performance and gives good classification and generalization performance. A multi-branch graph convolution model is adopted in which both the physical connections and the correlation connections among channels are considered; an attention mechanism based on the correlation connections assists in optimizing the adjacency matrix based on physical connections, solving the problem that the adjacency matrix in conventional graph convolution models carries only a single kind of information. The invention uses an online brain-computer interface system, changing the past practice of analyzing offline EEG data and improving the real-time performance of EEG data analysis.

Description

Electroencephalogram emotion recognition method based on multi-branch graph convolution network
Technical Field
The invention belongs to the technical field of signal processing and relates to the field of brain-computer interfaces; it relates to an electroencephalogram (EEG) emotion recognition method based on a multi-branch graph convolution network, and in particular to an online brain-computer interface system for EEG emotion feature recognition based on a spatio-temporal graph convolutional neural network.
Background
At present, electroencephalography (EEG) has yielded many important findings in emotion recognition research. Among physiological signals, the EEG is taken directly from brain signals, so it directly reflects the brain's real-time activity state and is hard to fake; emotion recognition based on EEG therefore has practical significance. Effectively reading an individual's emotional state can solve many practical problems in work and life: a doctor can monitor a patient's emotional state in real time to adjust the treatment plan promptly, a company can gauge customer satisfaction with a product, a system can detect driver fatigue while a vehicle is being driven, and so on. Meanwhile, with the wide application of brain-computer interfaces (BCI), emotion recognition research based on EEG signals greatly improves the user experience of human-computer interaction.
The EEG signal has high temporal resolution and strong real-time character, and responds directly to the emotional state of the human brain, so it is widely used in applied research on emotion recognition. Early research on brain information processing mechanisms was mainly based on all EEG channels. As research advanced, however, it was found that although the brain is divided into many regions, the regions associated with a specific emotion consistently show an active state when that emotion occurs. When studying a specific emotion, specific brain regions therefore become the important focus of attention, and using specific brain regions to study the characteristics of different emotions has become a worldwide research focus.
EEG emotion recognition methods comprise mainly traditional machine learning methods and deep learning methods, and in recent years deep neural networks have proved superior to traditional machine learning in application fields such as face recognition, image recognition and speech recognition. Among them, convolutional neural networks have been widely used on spatially continuous data such as computer vision and natural language processing. Neuroscience has shown that adjacent brain regions play a very important role in brain functional activity. Traditional machine learning methods ignore the spatial characteristics of EEG signals and simply treat them by analogy as image or sound signals in a Euclidean domain, so they cannot solve the important problem of modeling the inter-electrode relationships in EEG signals.
With the rise of graph convolutional networks, EEG emotion recognition methods based on graph convolutional neural networks have also been proposed. The graph convolutional network was proposed to address the limitations of conventional discrete convolution on data with non-Euclidean structure. In order to effectively extract spatial features from non-Euclidean data for machine learning, graph convolutional networks have become an important research topic. The graph convolutional neural network takes as input a graph structure comprising the raw EEG data and a model of the electrodes, and information is passed between nodes according to their connection relations in the graph structure. The technical problems that remain unsolved in the prior art are as follows:
(1) Low stability: the spatial discreteness of emotional EEG signals and their discontinuity in the temporal dimension are not fully considered.
(2) Low recognition accuracy: the spatial information retained by low-dimensional features is limited, which reduces classification accuracy. (3) Low interpretability: neuroscience has shown that adjacent brain regions play a very important role in brain functional activity. Traditional machine learning methods ignore the spatial characteristics of EEG signals and simply treat them as image or sound signals in a Euclidean domain, so they cannot solve the important problem of modeling the inter-electrode relationships in EEG signals. Meanwhile, the edge-feature information in current graph-convolution-based emotion classification methods is too monotonous, which lowers classification accuracy.
(4) Low real-time performance: existing EEG emotion classification methods mostly analyze offline data, so real-time data and algorithms are separated, and most research in the brain-computer interface field stops at the experimental stage. The invention studies a BCI system with strong real-time capability and improves the real-time decoding of brain intention.
Disclosure of Invention
The invention aims to provide an EEG emotion recognition method based on a multi-branch graph convolution network that solves the problems above, so that the graph convolutional neural network is applied to the processing of unstructured EEG data while the real-time analysis of EEG signal data is realized and the accuracy of EEG emotion recognition is improved.
To achieve the above object, the specific idea of the invention is as follows: a multi-branch graph convolution model is provided in which both the physical connections and the correlation connections among channels are considered. An attention mechanism based on inter-channel correlation solves the problem that the adjacency matrix in conventional graph convolution models carries only a single kind of information, and a spatio-temporal attention mechanism is combined to obtain a representation of the true emotion features, filtering out redundant data to assist emotion classification. Specifically, the invention provides an EEG emotion recognition method based on a spatio-temporal graph convolutional neural network and an online EEG emotion classification system based on that method.
In a first aspect, there is provided an EEG emotion recognition method based on a multi-branch graph convolution network, the method comprising the steps of:
s1: acquiring emotion EEG data and preprocessing it;
s101: filtering the emotion EEG data with a 1-50 Hz band-pass filter and dividing it into 4 frequency bands: theta (4-8 Hz), alpha (8-14 Hz), beta (14-31 Hz) and gamma (31-50 Hz);
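As a concrete illustration of step s101, the band split can be sketched with an ideal FFT-mask band-pass filter. The patent does not specify the filter design, so the FFT masking, the sampling rate and the array shapes below are assumptions, not the claimed implementation:

```python
import numpy as np

# The four bands named in step s101 (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 14), "beta": (14, 31), "gamma": (31, 50)}

def bandpass_fft(eeg, fs, lo, hi):
    """Zero out spectral content outside [lo, hi] Hz (ideal band-pass)."""
    spec = np.fft.rfft(eeg, axis=-1)
    freqs = np.fft.rfftfreq(eeg.shape[-1], d=1.0 / fs)
    spec[..., (freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=eeg.shape[-1], axis=-1)

def split_into_bands(eeg, fs):
    """eeg: (n_channels, n_samples) array -> dict of band name -> filtered array."""
    return {name: bandpass_fft(eeg, fs, lo, hi) for name, (lo, hi) in BANDS.items()}

fs = 128                                  # sampling rate (assumed for the sketch)
eeg = np.random.randn(32, fs * 3)         # 32 channels, 3 s of data
bands = split_into_bands(eeg, fs)
```

A real pipeline would more likely use an IIR band-pass (e.g. Butterworth) per band; the FFT mask keeps the sketch dependency-free.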
s102: slicing the EEG data of each frequency band with a sliding-window operation of fixed length s;
s103: acquiring the node features;
according to the electrode positions of the international 10-20 system, features are extracted from the sliced EEG data and the extracted 1-dimensional feature data are converted into a 2-dimensional plane format, giving a two-dimensional plane for each of the 4 frequency bands of every EEG sample; the two-dimensional planes of the 4 frequency bands are then integrated into 3-dimensional EEG feature data used as the node features;
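The plane conversion of step s103 can be sketched as scattering each channel's band feature onto a grid indexed by electrode position and stacking the four band planes. The 9x9 grid and the three example electrode coordinates are illustrative assumptions; a real implementation would map every electrode of the international 10-20 layout:

```python
import numpy as np

GRID = (9, 9)
# (row, col) grid coordinates for a few 10-20 electrodes (assumed layout)
ELECTRODE_POS = {"Fp1": (0, 3), "Cz": (4, 4), "O2": (8, 5)}

def to_3d_features(band_feats):
    """band_feats: dict band -> dict electrode -> scalar feature.
    Returns the stacked 3-D node-feature array of shape (n_bands, 9, 9)."""
    planes = []
    for band in sorted(band_feats):          # fixed band order
        plane = np.zeros(GRID)
        for elec, val in band_feats[band].items():
            r, c = ELECTRODE_POS[elec]       # place feature at electrode position
            plane[r, c] = val
        planes.append(plane)
    return np.stack(planes)

feats = {b: {"Fp1": 1.0, "Cz": 2.0, "O2": 3.0} for b in ("theta", "alpha", "beta", "gamma")}
x = to_3d_features(feats)                    # shape (4, 9, 9)
```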
s104: acquiring the edge feature A1 based on physical distance;
in the method, a multi-branch graph convolution network is adopted; one branch obtains an edge feature A1 with richer emotion information by point-wise multiplication of 2 identical physical-distance-based adjacency matrices:

A1 = A_d ⊙ A_d, formula (1)

where A_d denotes the adjacency matrix based on physical connections and ⊙ denotes point-wise multiplication; A_d is calculated as:

A_d(i,j) = exp(-dist(i,j)^2/(2θ^2)) if dist(i,j) ≤ τ, otherwise 0, formula (2)

where τ denotes a preset threshold, θ denotes a fixed parameter, and dist(i,j) denotes the distance between the i-th node and the j-th node;
s105: acquiring the edge feature A2 that takes the correlation connections into account:
optimizing the physical-distance-based adjacency matrix through the attention mechanism based on correlation connections yields the edge feature A2:

A2 = A_d ⊙ X, formula (3)

where X is the attention matrix, obtained after learning by the adaptive network layer, that characterizes the feature differences among the different channels;
s2: constructing the multi-branch graph convolution network;
the multi-branch graph convolution network comprises a first branch and a second branch in parallel;
the first branch and the second branch have the same structure, each comprising an input layer, a channel attention module and a spatio-temporal graph convolutional neural network;
the first branch is used for extracting the emotion information of the EEG based on physical distance; its input layer receives the node features and the edge feature A1 and forms the feature graph G1;
the second branch is used for extracting the emotion information of the EEG optimized by the correlation-connection-based attention mechanism; its input layer receives the node features and the edge feature A2 and forms the feature graph G2;
the channel attention module considers that different brain regions influence emotion classification differently and that this influence changes dynamically as an emotion unfolds. To extract spatial attention dynamically and automatically, the invention uses a spatial attention mechanism, defined as follows:
Attn_s = V_s · σ((X_s Z_1) Z_2 (Z_3 X_s)^T + C_s), formula (4)
Attn_s' = SoftMax(Attn_s), formula (5)

where Attn_s denotes the spatial attention matrix; X_s ∈ R^(N×W×T) is the corresponding spatial-layer input; Attn_s ∈ R^(W×W), whose element Attn_s,mn represents the importance of the edge formed by channel m and channel n; σ denotes the sigmoid activation function; SoftMax denotes the matrix normalization operation; V_s ∈ R^(T×T), C_s ∈ R^(T×T), Z_1 ∈ R^(W×1), Z_2 ∈ R^(W×N), Z_3 ∈ R^(W×1) are the parameters to be learned; N is the number of EEG channels, T is the context length of the sample to be classified, and W is the feature dimension.

The spatial attention matrix Attn_s is computed dynamically from the input of the current graph convolution layer and is updated dynamically in the later graph convolution blocks. If the i-th electrode contributes more to the classification task, the attention coefficient corresponding to that electrode is larger, and vice versa.
The spatio-temporal graph convolutional neural network comprises a temporal attention block, a graph convolution block and a decision fusion layer;
(1) the temporal attention block considers that in the temporal dimension there is correlation between adjacent emotion segments and that this correlation varies from case to case. A temporal attention mechanism is therefore used to capture the dynamic temporal information in the brain emotion network. It is defined as follows:
Attn_t = V_t · σ((X_t M_1) M_2 (M_3 X_t)^T + C_t), formula (6)
Attn_t' = SoftMax(Attn_t), formula (7)

where Attn_t is the temporal attention matrix; X_t ∈ R^(N×W×T) is the input of the corresponding time step; Attn_t ∈ R^(W×W), whose element Attn_t,mn represents the correlation between segments, i.e. the contribution of the segment information of the n-th time step to classifying the sample of the m-th time step; SoftMax denotes the normalization operation; σ denotes the sigmoid activation function; V_t ∈ R^(T×T), C_t ∈ R^(T×T), M_1 ∈ R^(W×1), M_2 ∈ R^(W×N), M_3 ∈ R^(W×1) are the parameters to be learned; N is the number of EEG channels, T is the context length of the sample to be classified, and W is the feature dimension.

The temporal attention matrix Attn_t adjusts the inputs of the different graph convolution layers so as to focus on the temporal data with richer emotion information.
(2) Considering the information difference between adjacency matrices constructed in different ways, the graph convolution block uses both the adjacency matrix A_d based on physical connections and the attention matrix X based on correlation connections. To fuse the information of the two matrices more directly, one branch applies point-wise multiplication to A_d while X is used to assist in optimizing A_d. The adjacency matrix obtained after optimization considers both the physical information and the correlation information; it provides the model with more comprehensive and richer inter-channel relationships and helps the model use this information to extract more effective, emotion-related features. The graph convolution block receives the feature graph G output by the channel attention module with the channel-attention weights applied; the Laplacian matrix of G is obtained from formula (8):

L = D - A, formula (8)

where D is the degree matrix of the channels, i.e. the diagonal matrix formed from the elements a_ij, with a_ij representing the degree of association between channel i and channel j; A is the adjacency matrix used by the graph convolution block, each element of which expresses the degree of connection between different channels. The normalized Laplacian matrix is calculated according to formula (9):

L̃ = 2L/λ_max - E, formula (9)

where E is the identity matrix and λ_max is the largest eigenvalue of L;

the Chebyshev polynomial terms are calculated according to formula (10):

g_θ(L) ≈ Σ_{k=0}^{K-1} θ_k T_k(L̃), formula (10)

where θ_k is the coefficient of the k-th order Chebyshev polynomial and T_k(·) is the k-th order Chebyshev polynomial, used to replace the spectral filter;

the output feature graph G_s after the graph convolution calculation is obtained as follows:

G_s = σ(g_θ(L) G), formula (11)

where G denotes the feature graph G1 or G2, σ denotes the activation function of the graph convolution, and G_s is the feature graph obtained after G1 or G2 passes through the graph convolution;
(3) the decision fusion layer fuses the output of the first branch and the output of the second branch through a fusion function to obtain the feature graph G_s.
S3: constructing the prediction classifier;
the prediction classifier comprises a fully connected layer and a SoftMax layer; it receives the output G_s of the graph convolution block and predicts the classification result;
preferably, during training the spatio-temporal graph convolutional neural network optimizes the network parameters using a back-propagation algorithm and a cross-entropy loss function, and uses the ReLU activation function to reduce the interdependency between parameters.
Preferably, during training the loss function of the multi-branch graph convolution network and the prediction classifier is:

Loss = -Σ_{h=1}^{H} Σ_{r=1}^{R_y} y_{h,r} log(ŷ_{h,r}), formula (12)

where H is the number of samples, R_y is the number of classification categories, ŷ is the predicted classification result and y is the true label.
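A minimal numpy sketch of the cross-entropy loss of formula (12), with y as one-hot true labels and ŷ as the SoftMax output of the prediction classifier (the epsilon guard is an implementation assumption):

```python
import numpy as np

def cross_entropy(y, y_hat, eps=1e-12):
    """y, y_hat: (H, R_y) arrays; returns the summed cross entropy of formula (12)."""
    return -np.sum(y * np.log(y_hat + eps))

y = np.array([[1, 0], [0, 1]], dtype=float)   # H = 2 samples, R_y = 2 classes
y_hat = np.array([[0.9, 0.1], [0.2, 0.8]])    # predicted probabilities
loss = cross_entropy(y, y_hat)
```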
In a second aspect, the present invention provides a multidimensional EEG emotion recognition system, comprising:
an EEG signal acquisition module for acquiring the EEG signals of a user;
a data preprocessing module for preprocessing the acquired EEG signals to obtain the feature graphs G1 and G2;
an EEG emotion recognition module that performs emotion recognition on the feature graphs with the trained and verified multi-branch graph convolution network to obtain the feature graph G_s, and inputs G_s into the prediction classifier to obtain the classification result;
and a result visualization module for displaying in real time the classification result output by the EEG emotion recognition module.
In a third aspect, the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method.
In a fourth aspect, the present invention provides a computing device comprising a memory in which executable code is stored and a processor which, when executing the executable code, implements the method.

Compared with the prior art, the invention has the following beneficial effects:
(1) On the method: the modeled graph-signal feature data in the invention restores the spatial and functional connections of the data and retains more emotion feature information through multidimensional features, so that the emotional states of individuals can be classified and identified accurately. Meanwhile, the invention combines an attention mechanism that distinguishes sample data well by the richness of their emotion information, greatly reducing the interference of redundant data with classification performance and giving good classification and generalization performance;
(2) On the model: considering the information difference between adjacency matrices constructed in different ways, the invention proposes a multi-branch graph convolution model. Compared with previous models, it differs in that the difference in information between the adjacency matrices is considered: a sub-module using the attention mechanism based on correlation connections optimizes the adjacency matrix based on physical-distance connections. The adjacency matrix obtained after optimization carries more emotion information and provides the model with more comprehensive and richer inter-channel relationships, helping the model use this information to extract more effective, emotion-related features.
(3) In practical use: the invention uses an online brain-computer interface system, changing the past practice of mostly analyzing offline data and improving the real-time performance of EEG data analysis, so that the results of EEG data analysis can be applied in time.
(4) Scalability: the invention adopts a method of independent module development; the modules are relatively independent of one another, and the functions of any module can be improved and extended. For example, the attention module can adopt other ways of calculating attention indices without affecting the normal functioning of the other modules.
Drawings
FIG. 1 is a schematic diagram of the steps of the EEG emotion feature recognition method based on a multi-branch graph convolution network in the online system.
Fig. 2 is a frame diagram of a multi-branch graph convolutional network.
Fig. 3 is a diagram of a brain-computer interface system interface UI.
Detailed Description
In order to make the implementation of the present invention more clear, the following detailed description will be given with reference to the accompanying drawings.
The invention provides an EEG emotion recognition method based on a spatio-temporal graph convolutional neural network, as shown in fig. 1, comprising the following steps:
s1: acquiring emotion EEG data and preprocessing it;
s101: the acquired emotion EEG data may be experimental data collected by the user from subjects, an existing data set, or a new data set mixing the two. The invention takes the published DEAP EEG emotion data set and the SEED emotion EEG data set published by Shanghai Jiao Tong University as examples. The DEAP data set collected EEG data from 32 healthy participants (16 men and 16 women). All participants were physically and mentally healthy, and the EEG signals were collected with a 32-lead electrode cap following the international 10-20 lead standard (see the electrode distribution diagram). Volunteers were asked to watch 40 one-minute videos while their EEG signals were collected at a sampling frequency of 512 Hz. After watching, every subject was required to rate the Valence, Arousal and Dominance of each video on a scale from 1 to 9. In the SEED data set, 15 subjects (7 men and 8 women) took part in the experiment; in each experiment, 15 movie clips were played to elicit the emotion corresponding to the video, and a 62-channel ESI NeuroScan System recorded the EEG signals at a sampling frequency of 1000 Hz. The 32-channel DEAP EEG emotion data and the 62-channel SEED EEG emotion data are each filtered with a band-pass filter and divided into the 4 frequency bands theta (4-8 Hz), alpha (8-14 Hz), beta (14-31 Hz) and gamma (31-50 Hz);
s102: to augment the sample data, a sliding-window operation with window size 3 s is applied to the EEG data obtained from the two data sets. After the slicing operation, a large amount of sample data is obtained and the data sets are split: 28 subjects of the DEAP data set are used as the training set and the remaining 4 subjects as the test set; 12 subjects of the SEED data set are used as the training set and the remaining 3 subjects as the test set;
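The 3-s windowing of step s102 can be sketched as follows; the step size (here a non-overlapping window) is an assumption, since the patent fixes only the window length:

```python
import numpy as np

def slide(eeg, fs, win_s=3.0, step_s=3.0):
    """eeg: (n_channels, n_samples) -> (n_windows, n_channels, win) array."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = range(0, eeg.shape[1] - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

fs = 128
eeg = np.random.randn(32, fs * 60)   # one minute of 32-channel data
samples = slide(eeg, fs)             # 20 non-overlapping 3-s windows
```

Using an overlapping step (e.g. `step_s=1.0`) would yield more training samples at the cost of correlated windows.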
s103: according to the electrode positions of the international 10-20 system, features are extracted from the sliced EEG data and the extracted 1-dimensional feature data are converted into a 2-dimensional plane format, giving a two-dimensional plane for each of the 4 frequency bands of every EEG sample; the two-dimensional planes of the 4 frequency bands are integrated into 3-dimensional EEG feature data used as the node features. s104: acquiring the edge feature A1 based on physical distance;
the edge feature A1 with richer emotion information is obtained by point-wise multiplication of 2 identical physical-distance-based adjacency matrices;
s105: acquiring the edge feature A2 that takes the correlation connections into account;
the physical-distance-based adjacency matrix is optimized through the attention mechanism based on correlation connections to obtain the edge feature A2;
s2: constructing the multi-branch graph convolution network shown in fig. 2;
the multi-branch graph convolution network comprises a first branch and a second branch in parallel;
the first branch and the second branch have the same structure, each comprising an input layer, a channel attention module and a spatio-temporal graph convolutional neural network;
the first branch is used for extracting the emotion information of the EEG based on physical distance; its input layer receives the node features and the edge feature A1 and forms the feature graph G1;
the second branch is used for extracting the emotion information of the EEG optimized by the correlation-connection-based attention mechanism; its input layer receives the node features and the edge feature A2 and forms the feature graph G2;
when the brain is stimulated by the outside world, the activation level of different brain regions is different, so that the influence of different brain regions on emotion recognition is different, and different regions correspond to different electrode channels, so that the importance of the different electrode channels is necessary to consider. The spatial attention module is used for modeling the correlation among channels and is used for enhancing the feature propagation among more important edges in the subsequent graph convolution operation, namely, the channel with larger contribution of the emotion classification task is focused more. Define graph model g= (V, E), V represents a vertex and E represents an edge in the graph. Each electroencephalogram channel corresponds to a vertex in the graph. The adjacency matrix A of the brain graph is determined according to the Euclidean distance between the nodes, and in the invention, the corresponding adjacency matrix size is 62 multiplied by 62 for the SEED data set, and the corresponding adjacency matrix size is 32 multiplied by 32 for the DEAP data set.
The channel attention module adopts a spatial attention mechanism to extract a spatial attention matrix;
the space-time diagram convolutional neural network comprises a time attention block, a diagram convolutional block and a decision fusion layer;
(1) the temporal attention module takes into account that in the temporal dimension there is a correlation between adjacent emotion fragments, and that the correlation varies from case to case. Thus, a temporal attention mechanism is utilized to capture dynamic temporal information in a brain emotional network.
(2) The graph-volume block simultaneously establishes an adjacent matrix A based on physical connection in the invention by taking the information difference contained in adjacent matrices constructed based on different modes into consideration d And attention matrix X based on correlation connection, in order to fuse information of two matrixes more intuitively, the invention adopts a branch pair A d Point-wise multiplication and X-based use to assist in optimizing A d The adjacency matrix obtained after optimization considers physical information and correlation information at the same time, and the matrix can provide various relations among channels for the model, thereby being beneficial to the model to extract more effective and emotion-related characteristics by utilizing the information.
(3) The decision fusion layer fuses the output of the first branch and the output of the second branch through a fusion function to obtain a trace feature graph G s
S3: constructing a prediction classifier;
the prediction classifier comprises a fully connected layer and a SoftMax layer; it receives the output G_s of the graph convolution block and performs further prediction to obtain the classification result;
The graph convolutional neural network is a deep learning network that combines CNN with graph theory and has the advantage of handling spatially discrete (non-Euclidean) data. The graph signals obtained after feature extraction are taken as the input of the network; graph features are extracted from the input data using the properties of the graph convolution network, and electroencephalogram emotion recognition is improved by mining graph-domain features and time-domain information, providing a foundation for the subsequent prediction and classification research. The proposed model can thus be trained; the basic algorithm steps are as follows:
1. Initialize model parameters: learning rate p, Chebyshev polynomial order k, and number of iterations e.
2. Input: the sample set, the labels, and the number T of input feature segments.
3. Output: the parameters after model training.
4. Calculate the adjacency matrix A_d based on Euclidean distance and the attention matrix X based on correlation connections; from A_d and X obtain the multi-branch adjacency matrices A1 and A2 (both referred to as A below), and regularize the elements of A with the ReLU activation function.
5. According to the formula
L = E − D^{−1/2} A D^{−1/2}
compute the normalized Laplacian matrix L from A and D ∈ R^{N×N}, where D is a diagonal degree matrix and E is the identity matrix.
6. According to the recurrence
T_k(L) = 2L T_{k−1}(L) − T_{k−2}(L), with T_0(L) = E and T_1(L) = L,
compute the Chebyshev polynomial terms, where θ_k is the coefficient of the k-th Chebyshev term and T_k(·) denotes the k-th-order Chebyshev polynomial.
7. Compute the graph convolution output G_s = σ(Σ_k θ_k T_k(L) G), where G is the input feature graph and σ is the activation function.
8. Regularize the result using the ReLU function.
9. Pass the output of the graph convolution block through the temporal attention block and input the result to the prediction classification block.
10. Calculate the output of the fully connected layer.
11. Using the cross-entropy loss
Loss = −(1/H) Σ_{i=1}^{H} Σ_{c=1}^{R_y} y_{i,c} log ŷ_{i,c}
calculate the loss function and update the model parameters of the multi-branch graph convolutional network and joint prediction classifier.
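Steps 5 through 8 above can be sketched in numpy as follows. This is a sketch under the usual ChebNet conventions; the spectrum-shifting of the Laplacian and the ReLU activation are assumptions, since the patent's formula images are not reproduced in this text.

```python
import numpy as np

def cheb_graph_conv(x, A, theta):
    """Chebyshev graph convolution: a sketch of steps 5-8 of the algorithm above."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt      # step 5: normalized Laplacian
    L_tilde = L - np.eye(n)                          # shift spectrum toward [-1, 1] (assumes lambda_max ~ 2)
    Tk = [np.eye(n), L_tilde]                        # step 6: T_0 = E, T_1 = L~
    for _ in range(2, len(theta)):
        Tk.append(2 * L_tilde @ Tk[-1] - Tk[-2])     # Chebyshev recurrence
    out = sum(t * T @ x for t, T in zip(theta, Tk))  # step 7: filtered graph signal
    return np.maximum(out, 0.0)                      # step 8: ReLU regularization

rng = np.random.default_rng(0)
A = rng.uniform(size=(8, 8)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
x = rng.normal(size=(8, 5))                          # 8 channels, 5 node features
y = cheb_graph_conv(x, A, theta=[0.5, 0.3, 0.2])     # order-3 Chebyshev filter
```

In the full model the coefficients θ_k are learned parameters rather than the fixed values used here.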
The number of training epochs is set to 100-300, the loss function is the cross-entropy loss, Adam is chosen as the optimization algorithm, the learning rate is initialized to 0.001, and learning rate decay is applied. The learning rate decays linearly as the number of iterations increases, so that the model updates with smaller steps in the later stages of training. The constructed neural network is trained with the training set divided in step S1 and checked with the test set divided in step S1; if overfitting occurs, the learning rate is adjusted and the network is retrained until overfitting no longer occurs, so that the network parameters can be adjusted in time and the preliminarily trained spatio-temporal graph convolutional neural network can be obtained quickly. The multi-branch graph convolution network shown in fig. 2 comprises two branches; sub-modules for feature fusion and decision fusion are added to the model, providing multiple kinds of inter-channel relations and helping the model use this information to extract more effective, emotion-related features.
Finally, the fully connected layer predicts and classifies the output of the graph convolution layer to obtain the classification result.
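The training settings described above (cross-entropy loss over SoftMax outputs, initial learning rate 0.001 with linear decay) can be sketched as follows. The exact form of the decay schedule is an assumption, and the Adam update itself is omitted.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))     # numerically stabilized softmax
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean cross-entropy over H samples (the loss used in step 11)."""
    p = softmax(logits)
    h = logits.shape[0]
    return float(-np.log(p[np.arange(h), labels] + 1e-12).mean())

def linear_lr(epoch, total_epochs=200, lr0=1e-3):
    """Learning rate decaying linearly with the iteration count (assumed schedule)."""
    return lr0 * (1.0 - epoch / total_epochs)

logits = np.array([[2.0, 0.1, -1.0], [0.2, 1.5, 0.3]])  # toy fully-connected-layer outputs
loss = cross_entropy(logits, np.array([0, 1]))
```

With this schedule the update step shrinks toward zero as training approaches the final epoch, matching the description of smaller-amplitude updates late in training.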
An on-line brain-computer interface system for implementing electroencephalogram emotion recognition with the spatio-temporal graph convolutional neural network, comprising:
the EEG signal acquisition module is used for acquiring EEG signals of a user;
the data preprocessing module is used for preprocessing the acquired EEG signals to obtain the feature graphs G1 and G2;
the electroencephalogram emotion recognition module performs emotion recognition on the feature graphs using the trained and verified multi-branch graph convolution network, and inputs the output G_s into the prediction classifier to obtain the classification result;
and the result visualization module is used for displaying, in real time, the classification result output by the electroencephalogram emotion recognition module, as shown in fig. 3.
It will be apparent to those skilled in the art that several modifications and variations can be made in the present invention without departing from the spirit or scope of the invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. An electroencephalogram emotion recognition method based on a multi-branch graph convolution network, characterized by comprising the following steps:
S1: acquiring emotion electroencephalogram data and preprocessing the emotion electroencephalogram data;
S101: filtering the emotion electroencephalogram data with a 1-50 Hz band-pass filter and dividing the data into a theta band, an alpha band, a beta band and a gamma band;
S102: slicing the electroencephalogram data of the different frequency bands using a sliding-window operation with a fixed window length s;
S103: acquiring node features;
performing feature extraction on the sliced electroencephalogram data according to the electrode positions of the international 10-20 system, converting the extracted feature data from 1-dimensional data into a 2-dimensional plane format to obtain a two-dimensional plane for each of the 4 frequency bands of every electroencephalogram sample, and stacking the 4 frequency-band planes to obtain 3-dimensional electroencephalogram feature data as the node features;
S104: acquiring an edge feature A1 based on physical distance;
the edge feature A1, which carries richer emotion information, is obtained by point-wise multiplication of 2 identical physical-distance-based adjacency matrices:
A1 = A_d ⊙ A_d    formula (1)
wherein ⊙ denotes point-wise multiplication;
A_d denotes the adjacency matrix based on physical connections, calculated as follows:
A_d(i,j) = exp(−dist(i,j)² / (2θ²)) if dist(i,j) ≤ τ, and A_d(i,j) = 0 otherwise    formula (2)
wherein τ denotes a preset threshold value, θ denotes a fixed parameter, and dist(i,j) denotes the distance between the i-th node and the j-th node;
S105: acquiring an edge feature A2 that takes correlation connections into account;
optimizing the physical-distance-based adjacency matrix through a correlation-based attention mechanism yields the edge feature A2:
A2 = A_d ⊙ X    formula (3)
wherein X is the attention matrix, characterizing the feature differences among different channels, obtained after learning by the adaptive network layer;
S2: constructing a multi-branch graph convolution network;
the multi-branch graph convolution network comprises a first branch and a second branch in parallel;
the first branch and the second branch have the same structure, each comprising an input layer, a channel attention module and a spatio-temporal graph convolutional neural network;
the first branch is used for extracting emotion information based on the physical distances of the electroencephalogram; its input layer receives the node features and the edge feature A1 and forms a feature graph G1;
the second branch is used for extracting emotion information optimized by the correlation-based attention mechanism; its input layer receives the node features and the edge feature A2 and forms a feature graph G2;
the channel attention module extracts a spatial attention matrix using a spatial attention mechanism, defined as follows:
Attn_s = V_s · σ((X_s Z_1) Z_2 (Z_3 X_s) + C_s)    formula (4)
Attn_s^{mn} = SoftMax(Attn_s^m)    formula (5)
wherein Attn_s denotes the spatial attention matrix; X_s ∈ R^{N×W×T} is the corresponding spatial-layer input; Attn_s^{mn} ∈ R^{W×W} represents the importance of the edge formed by channel m and channel n; σ denotes the sigmoid activation function; SoftMax denotes the matrix normalization operation; V_s ∈ R^{T×T}, C_s ∈ R^{T×T}, Z_1 ∈ R^{W×1}, Z_2 ∈ R^{W×N}, Z_3 ∈ R^{W×1} are parameters to be learned; N is the number of EEG channels, T is the context length of the sample to be classified, and W is the feature dimension;
the spatio-temporal graph convolutional neural network comprises a temporal attention block, a graph convolution block and a decision fusion layer;
(1) The temporal attention block considers that, in the temporal dimension, adjacent emotion segments are correlated and that this correlation varies in different cases; therefore, a temporal attention mechanism is used to capture the dynamic temporal information in the brain emotion network, defined as follows:
Attn_t = V_t · σ((X_t M_1) M_2 (M_3 X_t) + C_t)    formula (6)
Attn_t^{mn} = SoftMax(Attn_t^m)    formula (7)
wherein Attn_t is the temporal attention matrix; X_t ∈ R^{N×W×T} is the input corresponding to the time step; Attn_t^{mn} ∈ R^{W×W} represents the correlation between segments, i.e. the contribution of the segment information of the n-th time step to classifying the samples of the m-th time step; the SoftMax function denotes the normalization operation; σ denotes the sigmoid activation function; V_t ∈ R^{T×T}, C_t ∈ R^{T×T}, M_1 ∈ R^{W×1}, M_2 ∈ R^{W×N}, M_3 ∈ R^{W×1} are parameters to be learned; N is the number of EEG channels, T is the context length of the sample to be classified, and W is the feature dimension;
(2) The graph convolution block adopts a stack of multiple GCN layers; the graph convolution block receives the feature graph G, weighted by channel attention, output by the channel attention module, and the Laplacian matrix of G is obtained according to formula (8):
L = D − A    formula (8)
wherein D is the degree matrix of the channels, i.e. the diagonal matrix formed from the elements a_ij, with a_ij representing the degree of association of channel i with channel j; A is the adjacency matrix used by the graph convolution block, each element of which expresses the degree of connection between different channels; the normalized Laplacian matrix is then calculated according to formula (9):
L_norm = E − D^{−1/2} A D^{−1/2}    formula (9)
wherein E is the identity matrix;
the Chebyshev polynomial terms are calculated according to formula (10):
T_k(L) = 2L T_{k−1}(L) − T_{k−2}(L), with T_0(L) = E and T_1(L) = L    formula (10)
wherein θ_k is the coefficient of the k-th-order Chebyshev polynomial and T_k(·) is the k-th-order Chebyshev polynomial, used to replace the filter;
the output feature graph G_s is obtained after the graph convolution calculation, in the following specific manner:
G_s = σ(g_θ(L) G), with g_θ(L) = Σ_k θ_k T_k(L)    formula (11)
wherein G denotes the feature graph G1 or G2 and σ denotes the activation of the graph convolution;
(3) the decision fusion layer fuses the output of the first branch and the output of the second branch through a fusion function to obtain the feature graph G_s;
S3: constructing a prediction classifier;
the prediction classifier comprises a fully connected layer and a SoftMax layer; it receives the output G_s of the graph convolution block and performs further prediction to obtain the classification result.
2. The method of claim 1, wherein step S103 extracts time-frequency features.
3. The method of claim 1, wherein, during training, the spatio-temporal graph convolutional neural network optimizes the network parameters using a back-propagation algorithm and a cross-entropy loss function, and reduces the inter-dependencies between parameters using the ReLU activation function.
4. The method of claim 1, wherein the joint loss function of the multi-branch graph convolutional network and the prediction classifier during training is:
Loss = −(1/H) Σ_{i=1}^{H} Σ_{c=1}^{R_y} y_{i,c} log ŷ_{i,c}
wherein H is the number of samples, R_y is the number of classification categories, ŷ is the predicted classification result, and y is the true label.
5. A multidimensional electroencephalogram emotion recognition system implementing the method of any one of claims 1-4, comprising:
the EEG signal acquisition module is used for acquiring EEG signals of a user;
the data preprocessing module is used for preprocessing the acquired EEG signals to obtain the feature graphs G1 and G2;
the electroencephalogram emotion recognition module performs emotion recognition on the feature graphs using the trained and verified multi-branch graph convolution network to obtain the feature graph G_s, and inputs G_s into the prediction classifier to obtain the classification result;
and the result visualization module is used for displaying, in real time, the classification result output by the electroencephalogram emotion recognition module.
6. A computer readable storage medium having stored thereon a computer program which, when executed in a computer, causes the computer to perform the method of any of claims 1-4.
7. A computing device comprising a memory having executable code stored therein and a processor, which when executing the executable code, implements the method of any of claims 1-4.
CN202211614998.5A 2022-12-14 2022-12-14 Electroencephalogram emotion recognition method based on multi-branch chart convolution network Pending CN116115240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211614998.5A CN116115240A (en) 2022-12-14 2022-12-14 Electroencephalogram emotion recognition method based on multi-branch chart convolution network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211614998.5A CN116115240A (en) 2022-12-14 2022-12-14 Electroencephalogram emotion recognition method based on multi-branch chart convolution network

Publications (1)

Publication Number Publication Date
CN116115240A true CN116115240A (en) 2023-05-16

Family

ID=86307148

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211614998.5A Pending CN116115240A (en) 2022-12-14 2022-12-14 Electroencephalogram emotion recognition method based on multi-branch chart convolution network

Country Status (1)

Country Link
CN (1) CN116115240A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117033638A (en) * 2023-08-23 2023-11-10 南京信息工程大学 Text emotion classification method based on EEG cognition alignment knowledge graph
CN117033638B (en) * 2023-08-23 2024-04-02 南京信息工程大学 Text emotion classification method based on EEG cognition alignment knowledge graph

Similar Documents

Publication Publication Date Title
CN110399857B (en) Electroencephalogram emotion recognition method based on graph convolution neural network
CN109389059B (en) P300 detection method based on CNN-LSTM network
CN111134666B (en) Emotion recognition method of multi-channel electroencephalogram data and electronic device
CN112932502B (en) Electroencephalogram emotion recognition method combining mutual information channel selection and hybrid neural network
CN112244873A (en) Electroencephalogram time-space feature learning and emotion classification method based on hybrid neural network
CN114224342B (en) Multichannel electroencephalogram signal emotion recognition method based on space-time fusion feature network
CN110151203B (en) Fatigue driving identification method based on multistage avalanche convolution recursive network EEG analysis
CN111709267A (en) Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN111544256A (en) Brain-controlled intelligent full limb rehabilitation method based on graph convolution and transfer learning
CN110037693A (en) A kind of mood classification method based on facial expression and EEG
CN114492513A (en) Electroencephalogram emotion recognition method for adaptation to immunity domain based on attention mechanism in cross-user scene
CN110974219A (en) Human brain idea recognition system based on invasive BCI
CN116115240A (en) Electroencephalogram emotion recognition method based on multi-branch chart convolution network
CN111273767A (en) Hearing-aid brain computer interface system based on deep migration learning
CN116058800A (en) Automatic sleep stage system based on deep neural network and brain-computer interface
CN113476056B (en) Motor imagery electroencephalogram signal classification method based on frequency domain graph convolution neural network
CN113128353B (en) Emotion perception method and system oriented to natural man-machine interaction
Guo et al. Convolutional gated recurrent unit-driven multidimensional dynamic graph neural network for subject-independent emotion recognition
CN114169364A (en) Electroencephalogram emotion recognition method based on space-time diagram model
CN114145744A (en) Cross-device forehead electroencephalogram emotion recognition method and system
CN114983447B (en) Human action recognition, analysis and storage wearable device based on AI technology
CN116421200A (en) Brain electricity emotion analysis method of multi-task mixed model based on parallel training
CN113887365A (en) Special personnel emotion recognition method and system based on multi-mode data fusion
Sharma et al. EmHM: a novel hybrid model for the emotion recognition based on EEG signals
Ngo et al. EEG Signal-Based Eye Blink Classifier Using Convolutional Neural Network for BCI Systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination