CN114287937A - Emotion recognition method based on multi-modal convolutional neural network - Google Patents

Emotion recognition method based on multi-modal convolutional neural network

Info

Publication number
CN114287937A
Authority
CN
China
Prior art keywords
neural network
signals
feature
electroencephalogram
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111403467.7A
Other languages
Chinese (zh)
Inventor
戴紫玉
马玉良
张卫
佘青山
席旭刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202111403467.7A
Publication of CN114287937A
Legal status: Pending

Abstract

The invention discloses an emotion recognition method based on a multi-modal convolutional neural network. The multi-modal convolutional neural network consists of a multi-scale convolution kernel convolutional neural network (MLCNN) and a long short-term memory network (LSTM): the MLCNN performs secondary feature extraction on the electroencephalogram signals from which differential entropy (DE) features have already been extracted, the LSTM extracts the time-series features of the eye movement signals, and feature-level fusion is adopted for feature fusion. The experimental results show that multi-modal signals achieve higher emotion classification accuracy than single-modal signals, and the average four-class emotion classification accuracy of the multi-modal signal based on 6-channel electroencephalogram signals and eye movement signals reaches 97.94%; in the cross-session stability experiment the multi-modal signals achieve an accuracy of 96.32%, verifying the relative cross-session stability and effectiveness of the multi-modal convolutional neural network.

Description

Emotion recognition method based on multi-modal convolutional neural network
Technical Field
The invention relates to a multi-modal signal emotion recognition method, in particular to an emotion recognition method based on a multi-modal convolutional neural network (MLCNN-LSTM).
Background
In recent years, with the continuous development of artificial intelligence and portable non-invasive body-sensor technology, multi-modal emotion recognition has become a research hotspot in the field of affective computing at home and abroad. Multi-modal learning can exploit the complementarity of multi-modal signals to improve the final recognition accuracy, and the performance of multi-modal emotion recognition is generally superior to that of emotion recognition based on a single modality. The invention uses electroencephalogram (EEG) signals and eye movement signals for multi-modal information-fusion emotion recognition and designs a method for processing multi-modal signals with a multi-modal convolutional neural network (MLCNN-LSTM), aiming to extract the most effective emotion-classification features from the EEG and eye movement signals respectively, to maximize the efficiency and accuracy of emotion classification, and to verify the relative cross-session stability and effectiveness of the multi-modal convolutional neural network.
Disclosure of Invention
The invention designs a method for processing multi-modal signals with a multi-modal convolutional neural network (MLCNN-LSTM): the LSTM processes the eye movement signals, the multi-scale convolution kernel convolutional neural network extracts features from the EEG signals, the feature fusion module adopts feature-level fusion, and finally a softmax classifier performs the four-class emotion classification.
A multi-modal convolutional neural network (MLCNN-LSTM) based emotion recognition method comprises the following steps:
step 1: selecting the SEED-IV data set as the experimental data set and performing four-class emotion classification experiments;
step 2: processing the EEG signals, extracting their differential entropy features, and performing feature smoothing with a linear dynamical system method to obtain the feature-extracted data set;
step 3: cutting the eye movement signals and the EEG signals respectively and normalizing them, the EEG signals being cut into matrices of size 5 × 62 and the eye movement signals into matrices of size 1 × 31; the number of samples per subject is 2495, of which 1997 are training samples and 499 are test samples;
step 4: performing feature extraction and classification on the multi-modal signals using the multi-modal convolutional neural network, specifically comprising the following steps:
4-1, processing the eye movement signals with an LSTM and selecting the last time-series feature as the finally extracted feature;
4-2, extracting features of the EEG signals with a multi-scale convolution kernel convolutional neural network consisting of four layers: an input layer, a convolution layer, a pooling layer and a fully connected layer; the convolution layer uses multi-scale convolution kernels to extract features of different dimensions from the input signal; the pooling layer uses spatial pyramid pooling to convert the feature maps output by the convolution layer to the same size; and the fully connected layer flattens the data in preparation for the subsequent feature fusion;
4-3, in the feature fusion module, concatenating the two feature matrices extracted from the eye movement signals and the EEG signals into one large feature matrix, passing it through a fully connected layer, and finally selecting softmax as the classifier; the output dimension of this dense layer is 4, one unit per class, so the number of classes is determined by the number of output neurons, as sketched below.
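A minimal PyTorch sketch of the feature fusion module in step 4-3 follows. The dimensions (a 64-dimensional eye-movement feature, a 384-dimensional EEG feature, 4 output classes) come from the description elsewhere in this document; the class name FusionHead and the use of PyTorch are illustrative assumptions rather than the invention's actual implementation.

```python
# Illustrative sketch of the feature fusion module (step 4-3); names are assumed.
import torch
import torch.nn as nn


class FusionHead(nn.Module):
    """Concatenate eye-movement and EEG features, apply one dense layer, then softmax."""

    def __init__(self, eye_dim: int = 64, eeg_dim: int = 384, n_classes: int = 4):
        super().__init__()
        # One fully connected layer whose output dimension equals the number of classes.
        self.fc = nn.Linear(eye_dim + eeg_dim, n_classes)

    def forward(self, eye_feat: torch.Tensor, eeg_feat: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([eye_feat, eeg_feat], dim=1)   # (batch, 448)
        logits = self.fc(fused)                          # (batch, 4)
        return torch.softmax(logits, dim=1)              # one probability per class


if __name__ == "__main__":
    head = FusionHead()
    probs = head(torch.randn(8, 64), torch.randn(8, 384))
    print(probs.shape)  # torch.Size([8, 4])
```

During training the softmax would normally be folded into a cross-entropy loss; it is kept explicit here only to mirror the description above.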
Preferably, the data set comprises EEG signals recorded on 62 channels according to the international 10-20 standard system and the corresponding eye movement signals. The acquired signals are preprocessed: the original EEG data are down-sampled to 200 Hz, a 0-75 Hz band-pass filter is applied to filter noise and remove artifacts, and the electroencephalogram segments corresponding to the viewing duration are extracted to obtain the preprocessed EEG data set. From the eye movement data, pupil diameter, fixation, saccade and blink features are extracted. The pupil diameter comprises the horizontal and vertical diameters, for which the median, standard deviation and differential entropy features in four different frequency bands are extracted; for the three parameters blink interval, fixation deviation and fixation time, the median and standard deviation are extracted as features; for the saccade parameters, the standard deviation and median of the saccade amplitude and saccade interval are extracted; the statistical features include blink frequency, fixation frequency, maximum fixation time, total fixation deviation, maximum fixation deviation, saccade frequency, average saccade amplitude, average saccade interval and average saccade latency, so that a 31-dimensional eye movement feature vector is extracted.
Preferably, in step 4, the feature extraction and classification of the multi-modal signals with the multi-modal convolutional neural network specifically includes:
Because the selected experimental data contain data from three different time periods, the LSTM is used to process the eye movement signals in order to better extract time-series features: the input eye-movement feature matrix has size 1 × 31, the last time-series feature is selected as the finally extracted feature, and the output feature matrix has size 1 × 64. The EEG signals use the multi-scale convolution kernel convolutional neural network for feature extraction: the input feature matrix has size 5 × 62, and convolution kernels of three different sizes, 5 × 5 × 1, 5 × 3 × 1 and 5 × 1 × 1, are applied to it, giving a final output feature matrix of size 1 × 384. In the feature fusion module, the two feature matrices are concatenated into a large feature matrix of dimension 1 × 448, which then passes through a fully connected layer; finally softmax is selected as the classifier for the four-class emotion classification.
Compared with the prior art, the invention has the following beneficial effects:
the method provided by the invention verifies that the multi-modal signals have higher emotion classification accuracy compared with single-modal signals, and the emotion four-classification average accuracy of the multi-modal signals based on the 6-channel EEG signals and the eye movement signals reaches 97.94 percent, which is higher than the similar research results; in addition, the confusion matrix shows that the multi-modal signals have complementary action on the happy emotion, so that higher classification accuracy can be obtained compared with single-modal signals, and the classification performance of the brain electrical signals on the heart-hurt emotion is higher than that of the eye movement signals and the multi-modal signals. In a cross-session stability experiment, the multi-mode signals obtain 96.32% of accuracy, and cross-session relative stability and effectiveness of the multi-mode convolutional neural network are verified.
Drawings
FIG. 1 is a flow chart of the SEED-IV data set experiment;
FIG. 2 is a structural diagram of the multi-scale convolution kernel CNN;
FIG. 3 is a diagram of the LSTM network structure;
FIG. 4 is a diagram of the multi-modal convolutional neural network structure;
FIG. 5 compares the classification results of single-modal and multi-modal signals;
FIG. 6 shows the cross-session stability analysis of the multi-modal convolutional neural network.
Detailed Description
The present invention is further illustrated by the following specific examples. The following description is exemplary and explanatory only and is not restrictive of the invention in any way.
Step 1: selecting an SEEDIV data set as an experiment data set, and performing emotion four-classification experiments;
step 2: processing the electroencephalogram signal, extracting differential entropy characteristics of the electroencephalogram signal, and smoothing the characteristics by using a linear dynamic system method to obtain an electroencephalogram data set after the characteristics are extracted;
step 3: cutting the eye movement signals and the EEG signals respectively and normalizing them, the EEG signals being cut into matrices of size 5 × 62 and the eye movement signals into matrices of size 1 × 31; the number of samples per subject is 2495, of which 1997 are training samples and 499 are test samples;
step 4: feature extraction and classification of the multi-modal signals is performed using the multi-modal convolutional neural network.
In step 1, the experimental flow chart of the SEED-IV data set is shown in fig. 1. The data set comprises electroencephalogram signals from three different time periods and the corresponding eye movement signals, the EEG signals being recorded according to the international 10-20 standard system. The acquired EEG signals are preprocessed: the original EEG data are down-sampled to 200 Hz, a 0-75 Hz band-pass filter is applied to filter noise and remove artifacts, and the EEG segments corresponding to the viewing duration are extracted to obtain the preprocessed EEG data set. From the eye movement data, pupil diameter, blink, fixation, saccade and statistical features are extracted. The pupil diameter comprises the horizontal and vertical diameters, for which the median, standard deviation and differential entropy features in four different frequency bands (0-0.2 Hz, 0.2-0.4 Hz, 0.4-0.6 Hz and 0.6-1 Hz) are extracted; for the three parameters blink interval, fixation deviation and fixation time, the median and standard deviation are extracted as features; for the saccade parameters, the standard deviation and median of the saccade amplitude and saccade interval are extracted; the statistical features include blink frequency, fixation frequency, maximum fixation time, total fixation deviation, maximum fixation deviation, saccade frequency, average saccade amplitude, average saccade interval and average saccade latency, so that a 31-dimensional eye movement feature vector is extracted.
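As an illustration only, the sketch below assembles part of such an eye-movement feature vector from a pupil-diameter trace and pre-extracted event parameters. The sampling rate, the Welch-based band-power estimate and the helper names are assumptions (this is not the exact SEED-IV pipeline); the remaining fixation, saccade and statistical entries would be appended in the same way.

```python
# Illustrative sketch: part of the 31-dimensional eye-movement feature vector.
# FS and the Welch-based band power are assumptions; the bands come from the text.
import numpy as np
from scipy.signal import welch

FS = 120.0                                                   # assumed eye-tracker rate (Hz)
BANDS = [(0.01, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 1.0)]    # pupil-diameter bands


def band_de(x: np.ndarray, lo: float, hi: float, fs: float = FS) -> float:
    """Differential entropy of one band, 0.5*log(2*pi*e*P), P = band power from the PSD."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4096))
    mask = (f >= lo) & (f < hi)
    p = np.sum(pxx[mask]) * (f[1] - f[0])                    # integrate the PSD over the band
    return 0.5 * np.log(2 * np.pi * np.e * max(p, 1e-12))


def pupil_features(diameter: np.ndarray) -> np.ndarray:
    """Median, standard deviation and four band-wise DE values of one pupil trace."""
    feats = [np.median(diameter), np.std(diameter)]
    feats += [band_de(diameter, lo, hi) for lo, hi in BANDS]
    return np.asarray(feats)                                 # 6 values per direction


def event_features(values: np.ndarray) -> np.ndarray:
    """Median and standard deviation of one event parameter (e.g. blink interval)."""
    return np.array([np.median(values), np.std(values)])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    horiz = 3.0 + 0.1 * rng.standard_normal(60 * int(FS))    # 60 s of pupil diameter
    vert = 3.2 + 0.1 * rng.standard_normal(60 * int(FS))
    blink_intervals = rng.uniform(0.2, 0.6, size=20)         # assumed, in seconds
    partial = np.concatenate([pupil_features(horiz), pupil_features(vert),
                              event_features(blink_intervals)])
    print(partial.shape)  # (14,); fixation, saccade and statistical features complete the 31
```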
In step 2, the differential entropy features of the electroencephalogram signals are extracted and feature smoothing is carried out with a linear dynamical system method; the specific method is as follows:
Differential entropy extends the idea of Shannon entropy and is used to measure the complexity of a continuous random variable. For a fixed-length electroencephalogram signal, the differential entropy is equivalent to the logarithmic energy spectrum within a certain frequency band. Let the electroencephalogram signal be X_i; its differential entropy is expressed as:
H(x)=-∫f(x)log[f(x)]dx (1)
where f(x) is the probability density function of the electroencephalogram signal. If the random variable obeys a Gaussian distribution N(\mu, \sigma^2), the differential entropy in the above equation can be computed simply as:

H(x) = \frac{1}{2}\log(2\pi e\sigma^2)    (2)

Although the original electroencephalogram signals do not obey a fixed distribution, after band-pass filtering from 2 Hz to 44 Hz the signals obey a Gaussian distribution in consecutive 2 Hz sub-bands, and the above formula shows that only \sigma^2 is needed to obtain the differential entropy of X_i. For a normal distribution N(\mu, \sigma^2) the variance is calculated as:

\sigma^2 = \frac{1}{N}\sum_{n=1}^{N}(x_n - \mu)^2    (3)

Defining the spectral energy of the discrete signal as

P = \frac{1}{N}\sum_{n=1}^{N} x_n^2

the above formulas show that the variance of the EEG signal X_i is the average energy value P, so the differential entropy feature for a certain frequency band is:

h = \frac{1}{2}\log(2\pi e P)    (4)
the differential entropy characteristics can reduce errors caused by overlarge frequency band energy values during calculation and improve the characteristic accuracy.
To filter out components unrelated to emotional state, a linear dynamical system (LDS) feature-smoothing method is introduced. The linear dynamical system can be expressed as:
x_t = z_t + w_t    (5)
z_t = A z_{t-1} + v_t    (6)
where x_t denotes the observed variable, z_t denotes the hidden emotion variable, A is the transition matrix, w_t is Gaussian noise with mean \bar{w} and variance Q, and v_t is Gaussian noise with mean \bar{v} and variance R. The above equations can also be expressed as Gaussian conditional distributions:

p(x_t \mid z_t) = N(x_t \mid z_t + \bar{w}, Q)    (7)
p(z_t \mid z_{t-1}) = N(z_t \mid A z_{t-1} + \bar{v}, R)    (8)
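A minimal sketch of the feature smoothing implied by equations (5)-(8) is a scalar Kalman filter applied independently to each differential entropy dimension. The fixed parameter values and the use of a forward filter only (rather than a full forward-backward smoother with parameters estimated by EM) are simplifying assumptions of this sketch; note that, following the text, Q is the observation-noise variance of w_t and R the process-noise variance of v_t.

```python
# Hedged sketch of LDS feature smoothing for equations (5)-(6); parameters are illustrative.
import numpy as np


def lds_smooth(x: np.ndarray, A: float = 1.0, Q: float = 1e-2, R: float = 1e-3) -> np.ndarray:
    """x: (n_steps, n_features) observed DE features; returns smoothed estimates of z."""
    n_steps, n_feats = x.shape
    z = np.zeros_like(x)
    z_est = x[0].copy()                     # initialise the hidden state with the first observation
    p_est = np.full(n_feats, 1.0)           # initial state variance
    z[0] = z_est
    for t in range(1, n_steps):
        # predict: z_t = A * z_{t-1} + v_t, with process-noise variance R
        z_pred = A * z_est
        p_pred = A * p_est * A + R
        # update with the observation x_t = z_t + w_t, observation-noise variance Q
        k = p_pred / (p_pred + Q)
        z_est = z_pred + k * (x[t] - z_pred)
        p_est = (1.0 - k) * p_pred
        z[t] = z_est
    return z


if __name__ == "__main__":
    t = np.linspace(0, 10, 200)
    noisy = np.sin(t)[:, None] + 0.3 * np.random.randn(200, 1)
    print(lds_smooth(noisy).shape)   # (200, 1)
```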
in the step 4, the multi-modal convolutional neural network is used to perform feature extraction and classification on the multi-modal signals, which specifically comprises the following steps:
processing an electroencephalogram signal by using a multi-scale convolution kernel CNN network, wherein the multi-scale convolution kernel CNN model has five layers in total, the first layer is an input layer, the electroencephalogram signal is cut into the size of MxNx1 to be used as the input of the multi-scale convolution kernel CNN model, and M and N respectively represent the length and the width of an input matrix; the second layer is a convolution layer, the input signal is subjected to feature extraction with different dimensionalities by adopting a multi-scale convolution kernel, and the size of the multi-scale convolution kernel is set as follows: mx 5 × 1, mx 3 × 1, mx 1 × 1, 128 convolution kernels for each size; the third layer is a pooling layer, and spatial pyramid pooling is adopted; the fourth layer is a full connection layer, and preparation is made for flattening the data into categories; the fifth layer is an output layer, a Softmax classifier is adopted to realize four-classification, and the structure of the multi-scale convolution kernel CNN is shown in figure 2.
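A hedged PyTorch sketch of this multi-scale convolution kernel CNN follows. The kernel sizes M × 5 × 1, M × 3 × 1 and M × 1 × 1 with M = 5 and 128 kernels per size come from the text; replacing the spatial pyramid pooling with a single-level global average pool is a simplifying assumption that already yields the 1 × 384 feature vector used later.

```python
# Hedged sketch of the multi-scale convolution kernel CNN; SPP is approximated
# by a single-level global average pool, and all names are illustrative.
import torch
import torch.nn as nn


class MultiScaleCNN(nn.Module):
    def __init__(self, m: int = 5, n_kernels: int = 128, n_classes: int = 4):
        super().__init__()
        # Three parallel convolution branches with kernel widths 5, 3 and 1.
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, n_kernels, kernel_size=(m, w)), nn.ReLU(),
                          nn.AdaptiveAvgPool2d((1, 1)))
            for w in (5, 3, 1)
        ])
        self.fc = nn.Linear(3 * n_kernels, n_classes)   # output layer for EEG-only use

    def features(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, 1, 5, 62) DE matrix; returns the 1 x 384 multi-scale EEG feature."""
        outs = [branch(x).flatten(1) for branch in self.branches]   # 3 x (batch, 128)
        return torch.cat(outs, dim=1)                               # (batch, 384)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.fc(self.features(x)), dim=1)


if __name__ == "__main__":
    model = MultiScaleCNN()
    de_input = torch.randn(8, 1, 5, 62)
    print(model.features(de_input).shape, model(de_input).shape)  # (8, 384) (8, 4)
```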
In the multi-scale convolution kernel CNN, each convolution layer convolves learnable kernels with the input of the current layer and passes the result forward as the input of the next layer, and the network weights and biases of each layer are corrected through back-propagation of the error. The forward propagation formula is:

x_j^l = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^l + b_j^l\Big)    (9)

where x_i^{l-1} is the input of the i-th feature map of layer l-1; x_j^l is the output value of the j-th feature map of layer l; M_j is the set of feature maps; * denotes convolution; k_{ij}^l is the learnable convolution kernel between the i-th feature map of layer l-1 and the j-th feature map of layer l; b_j^l is the bias of the output feature map; and f(·) is the activation function of the output, for which the network uses the ReLU activation function.
The loss function expression is:

L_i = \sum_{j \neq y_i} \max\big(0,\ f(x_i, W)_j - f(x_i, W)_{y_i} + \Delta\big) + \lambda\sum_k\sum_l W_{k,l}^2    (10)

where x_i is the input, j indexes the predicted classes of a single sample, y_i is the true class, W is the weight parameter, f is the activation function (the network uses the ReLU activation function), and \Delta is the fault-tolerance margin; \lambda\sum_k\sum_l W_{k,l}^2 is the regularization penalty term, in which \lambda is the penalty coefficient and k and l index the rows and columns of the weight parameter.
the Softmax classifier expression is:
Figure BDA0003371901690000061
k is the number of classes, zjLinear prediction probability, z, representing the jth classkIs the sum of the linear prediction probabilities of k classes, fj(z) represents the normalized prediction result for each class. In the reverse propagation, the Adam gradient algorithm is used.
The eye movement signals are processed with an LSTM network, whose structure is shown in FIG. 3; the LSTM network and the multi-scale convolution kernel CNN are combined to form the multi-modal convolutional neural network, whose structure is shown in FIG. 4. Because the selected experimental data contain data from three different time periods, the LSTM is used to process the eye movement signals in order to better extract time-series features: the input eye-movement feature matrix has size 1 × 31, the last time-series feature is selected as the finally extracted feature, and the output feature matrix has size 1 × 64. The EEG signals use the multi-scale convolution kernel convolutional neural network for feature extraction: the input feature matrix has size 5 × 62, and convolution kernels of three different sizes, 5 × 5 × 1, 5 × 3 × 1 and 5 × 1 × 1, are applied to it, giving a final output feature matrix of size 1 × 384. In the feature fusion module, the two feature matrices are concatenated into a large feature matrix of dimension 1 × 448, which then passes through a fully connected layer; finally softmax is selected as the classifier.
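Putting the two branches together, the following hedged PyTorch sketch mirrors the dimensions described above: an LSTM branch for the 1 × 31 eye-movement vector (hidden size 64, last time step kept), an EEG branch producing the 1 × 384 feature, and a dense layer plus softmax over the concatenated 1 × 448 feature. Treating the eye-movement vector as a length-1 sequence is an assumption of this sketch, and the EEG branch is replaced by a stub so the example runs on its own (the multi-scale CNN sketched earlier would take its place).

```python
# Hedged sketch of the combined MLCNN-LSTM model; names and the stub EEG branch are illustrative.
import torch
import torch.nn as nn


class MLCNNLSTM(nn.Module):
    def __init__(self, eeg_branch: nn.Module, eye_dim: int = 31,
                 hidden: int = 64, n_classes: int = 4):
        super().__init__()
        self.eeg_branch = eeg_branch                       # any module yielding (batch, 384)
        self.lstm = nn.LSTM(input_size=eye_dim, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden + 384, n_classes)

    def forward(self, eeg: torch.Tensor, eye: torch.Tensor) -> torch.Tensor:
        # eeg: (batch, 1, 5, 62) DE matrix; eye: (batch, 1, 31) feature sequence.
        eeg_feat = self.eeg_branch(eeg)                    # (batch, 384)
        lstm_out, _ = self.lstm(eye)                       # (batch, 1, 64)
        eye_feat = lstm_out[:, -1, :]                      # last time-series feature
        fused = torch.cat([eye_feat, eeg_feat], dim=1)     # (batch, 448)
        return torch.softmax(self.fc(fused), dim=1)        # 4-class probabilities


if __name__ == "__main__":
    # Stub EEG branch with a 384-dimensional output; MultiScaleCNN().features would replace it.
    eeg_stub = nn.Sequential(nn.Flatten(), nn.Linear(5 * 62, 384), nn.ReLU())
    model = MLCNNLSTM(eeg_branch=eeg_stub)
    out = model(torch.randn(8, 1, 5, 62), torch.randn(8, 1, 31))
    print(out.shape)   # torch.Size([8, 4])
```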
Results of the experiment
Fig. 5 shows the average emotion classification accuracy of the three experiments under each of the three different signals and the average classification accuracy over all 45 experiments, where the three different signals are the 62-channel electroencephalogram signals, the eye movement signals, and the multi-modal signals formed by fusing the 62-channel electroencephalogram signals with the eye movement signals. The experimental results show that the emotion classification accuracy based on the multi-modal signals is higher than that based on single-modal signals, verifying the effectiveness of the multi-modal convolutional neural network for emotion classification of multi-modal signals.
Table 1 shows the emotion classification results of the present method compared with other multi-modal methods based on the SEED-IV data set. Document [1] uses Deep Canonical Correlation Analysis (DCCA) to learn high-level coordinated representations, extracts features from the electroencephalogram signals and eye movement data and performs feature-level fusion, obtaining a classification accuracy of 87.45% on the SEED-IV data set. Document [2] proposes a multi-modal emotion recognition framework, EmotionMeter, which feeds the extracted electroencephalogram and eye movement features into EmotionMeter for feature-level fusion and finally classifies with a DNN, reaching an average accuracy of 85.11%. The present invention uses the LSTM network to extract eye movement features and the ML-CNN network to extract electroencephalogram features, performs four-class classification after feature-level fusion, and obtains a classification accuracy of 97.94%.
TABLE 1 Comparison of results of analogous studies

Literature      Fusion mode             Emotion recognition rate
Document [1]    Feature-level fusion    87.45%
Document [2]    Feature-level fusion    85.11%
Our method      Feature-level fusion    97.94%
In order to test the stability of the multi-modal convolutional neural network over time, the three sessions of experimental data of each subject are mixed together and shuffled, and then divided into a training set and a test set, so that both sets contain data from the three different time periods. The total number of samples per subject is 2495; the shuffled data are divided at a ratio of 8:2, giving 1996 training samples and 499 test samples, and the number of iterations is set to 300. The experimental results are shown in fig. 6: for 10 of the 15 subjects the multi-modal classification results are better than the single-modal ones, and the average emotion classification accuracies based on the 6-channel electroencephalogram signals, the eye movement signals and the multi-modal signals are 96.16%, 88.21% and 96.32% respectively. The results show that the multi-modal convolutional neural network still performs well on the mixed three-session data and that the multi-modal signals outperform the single-modal signals, verifying the relative stability of the multi-modal convolutional neural network.
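The cross-session split described above can be sketched as follows; the array names and per-session sample counts in the example are illustrative, while the 8:2 ratio and the totals of 1996 training and 499 test samples follow the text.

```python
# Sketch of the mixed-session 8:2 split used in the stability experiment; names are illustrative.
import numpy as np


def mixed_session_split(sessions, labels, train_ratio=0.8, seed=0):
    """sessions/labels: lists of per-session arrays; returns shuffled train/test splits."""
    X = np.concatenate(sessions, axis=0)          # pool the three sessions (2495 samples)
    y = np.concatenate(labels, axis=0)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))                 # scramble the pooled data
    n_train = int(round(train_ratio * len(X)))    # 1996 of 2495 at a ratio of 8:2
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    return X[train_idx], y[train_idx], X[test_idx], y[test_idx]


if __name__ == "__main__":
    sess = [np.random.randn(832, 5, 62), np.random.randn(832, 5, 62), np.random.randn(831, 5, 62)]
    labs = [np.random.randint(0, 4, len(s)) for s in sess]
    Xtr, ytr, Xte, yte = mixed_session_split(sess, labs)
    print(len(Xtr), len(Xte))   # 1996 499
```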

Claims (4)

1. A method for recognizing emotion based on a multi-modal convolutional neural network is characterized by comprising the following steps:
step 1: selecting the SEED-IV data set as the experimental data set and performing four-class emotion classification experiments;
step 2: processing the electroencephalogram signal, extracting differential entropy characteristics of the electroencephalogram signal, and smoothing the characteristics by using a linear dynamic system method to obtain an electroencephalogram data set after the characteristics are extracted;
step 3: cutting the eye movement signals and the electroencephalogram signals respectively and performing normalization;
step 4: using the multi-modal convolutional neural network to perform feature extraction and classification on the multi-modal signals, specifically comprising the following steps:
4-1, processing the eye movement signals with an LSTM and selecting the last time-series feature as the finally extracted feature;
4-2, extracting features of the electroencephalogram signals with a multi-scale convolution kernel convolutional neural network comprising an input layer, a convolution layer, a pooling layer and a fully connected layer; the convolution layer uses multi-scale convolution kernels to extract features of different dimensions from the input signal; the pooling layer uses spatial pyramid pooling to convert the feature maps output by the convolution layer to the same size; and the fully connected layer flattens the data in preparation for the subsequent feature fusion;
4-3, in the feature fusion module, concatenating the two feature matrices extracted from the eye movement signals and the electroencephalogram signals into one large feature matrix, passing it through a fully connected layer, and finally selecting softmax as the classifier; the output dimension of this dense layer is 4, one unit per class, so the number of classes is determined by the number of output neurons.
2. The emotion recognition method based on the multi-modal convolutional neural network of claim 1, wherein:
the SEEDIV data set comprises electroencephalogram signals with three different time periods and corresponding eye movement signals, wherein the electroencephalogram signals are recorded according to an international 10-20 standard system;
preprocessing the acquired electroencephalogram signals: down-sampling the original electroencephalogram signals to 200 Hz, applying a 0-75 Hz band-pass filter to filter noise and remove artifacts, and extracting the electroencephalogram segments corresponding to the viewing duration to obtain a preprocessed electroencephalogram data set;
extracting pupil diameter, blink, fixation, saccade and statistical features from the eye movement data; the pupil diameter comprises the horizontal and vertical diameters, for which the median, standard deviation and differential entropy features in four different frequency bands are extracted; for the three parameters blink interval, fixation deviation and fixation time, the median and standard deviation are extracted as features; for the saccade parameters, the standard deviation and median of the saccade amplitude and saccade interval are extracted; the statistical features include blink frequency, fixation frequency, maximum fixation time, total fixation deviation, maximum fixation deviation, saccade frequency, average saccade amplitude, average saccade interval and average saccade latency.
3. The emotion recognition method based on the multi-modal convolutional neural network of claim 2, wherein: in step 3, the electroencephalogram signals are cut into matrices of size 5 × 62 and the eye movement signals into matrices of size 1 × 31; the number of samples per subject is 2495, of which 1997 are training samples and 499 are test samples.
4. The emotion recognition method based on the multi-modal convolutional neural network of claim 2, wherein: in the step 4, the size of the input eye movement signal feature matrix is 1 × 31, the last time series feature is selected as the finally extracted feature, and the size of the output feature matrix is 1 × 64;
the feature matrix input for the electroencephalogram signals has a size of 5 × 62; convolution is carried out on the feature matrix with convolution kernels of three different sizes, namely 5 × 5 × 1, 5 × 3 × 1 and 5 × 1 × 1, and the feature matrix finally output has a size of 1 × 384;
in the feature fusion module, the two feature matrices are spliced to obtain a large feature matrix with the dimension of 1 × 448.
CN202111403467.7A 2021-11-24 2021-11-24 Emotion recognition method based on multi-mode convolutional neural network Pending CN114287937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111403467.7A CN114287937A (en) 2021-11-24 2021-11-24 Emotion recognition method based on multi-mode convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111403467.7A CN114287937A (en) 2021-11-24 2021-11-24 Emotion recognition method based on multi-mode convolutional neural network

Publications (1)

Publication Number Publication Date
CN114287937A (en) 2022-04-08

Family

ID=80965369

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111403467.7A Pending CN114287937A (en) 2021-11-24 2021-11-24 Emotion recognition method based on multi-mode convolutional neural network

Country Status (1)

Country Link
CN (1) CN114287937A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115919315A (en) * 2022-11-24 2023-04-07 华中农业大学 Cross-subject fatigue detection deep learning method based on EEG channel multi-scale parallel convolution

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993131A (en) * 2019-04-04 2019-07-09 北京理工大学 A kind of design idea judgement system and method based on multi-modal signal fused
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
CN111553295A (en) * 2020-05-01 2020-08-18 北京邮电大学 Multi-mode emotion recognition method based on self-attention mechanism

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109993131A (en) * 2019-04-04 2019-07-09 北京理工大学 A kind of design idea judgement system and method based on multi-modal signal fused
CN110772268A (en) * 2019-11-01 2020-02-11 哈尔滨理工大学 Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
CN111553295A (en) * 2020-05-01 2020-08-18 北京邮电大学 Multi-mode emotion recognition method based on self-attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
戴紫玉, 马玉良, 高云园 et al.: "EEG emotion recognition based on a multi-scale convolution kernel CNN" (基于多尺度卷积核CNN的脑电情绪识别), Chinese Journal of Sensors and Actuators (传感技术学报), vol. 34, no. 4, 30 April 2021 (2021-04-30), pages 496-502 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115919315A (en) * 2022-11-24 2023-04-07 华中农业大学 Cross-subject fatigue detection deep learning method based on EEG channel multi-scale parallel convolution
CN115919315B (en) * 2022-11-24 2023-08-29 华中农业大学 Cross-main-body fatigue detection deep learning method based on EEG channel multi-scale parallel convolution

Similar Documents

Publication Publication Date Title
CN111012336B (en) Parallel convolutional network motor imagery electroencephalogram classification method based on spatio-temporal feature fusion
CN109994203B (en) Epilepsia detection method based on EEG signal depth multi-view feature learning
CN112932502B (en) Electroencephalogram emotion recognition method combining mutual information channel selection and hybrid neural network
CN108256629B (en) EEG signal unsupervised feature learning method based on convolutional network and self-coding
CN110772268A (en) Multimode electroencephalogram signal and 1DCNN migration driving fatigue state identification method
CN106963369B (en) Electroencephalogram relaxation degree identification method and device based on neural network model
CN109784242A (en) EEG Noise Cancellation based on one-dimensional residual error convolutional neural networks
CN114052735B (en) Deep field self-adaption-based electroencephalogram emotion recognition method and system
Ashokkumar et al. Implementation of deep neural networks for classifying electroencephalogram signal using fractional S‐transform for epileptic seizure detection
CN112465069B (en) Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN
CN114366124B (en) Epileptic electroencephalogram identification method based on semi-supervised deep convolution channel attention list classification network
CN111709267A (en) Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN112528819B (en) P300 electroencephalogram signal classification method based on convolutional neural network
An et al. Electroencephalogram emotion recognition based on 3D feature fusion and convolutional autoencoder
CN113569997A (en) Emotion classification method and system based on graph convolution neural network
CN114287937A (en) Emotion recognition method based on multi-mode convolutional neural network
CN113180659B (en) Electroencephalogram emotion recognition method based on three-dimensional feature and cavity full convolution network
CN117473303B (en) Personalized dynamic intention feature extraction method and related device based on electroencephalogram signals
CN114081503A (en) Method for removing ocular artifacts in electroencephalogram signals
CN117609951A (en) Emotion recognition method, system and medium integrating electroencephalogram and function near infrared
CN117113015A (en) Electroencephalogram signal identification method and device based on space-time deep learning
CN116421200A (en) Brain electricity emotion analysis method of multi-task mixed model based on parallel training
CN116304815A (en) Motor imagery electroencephalogram signal classification method based on self-attention mechanism and parallel convolution
CN116350239A (en) Electroencephalogram signal concentration degree classification method and system
CN116236209A (en) Method for recognizing motor imagery electroencephalogram characteristics of dynamics change under single-side upper limb motion state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination