CN112465069B - Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN - Google Patents
- Publication number: CN112465069B (application CN202011478666.XA)
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F18/2453 — Classification techniques relating to the decision surface; non-linear, e.g. polynomial classifier
- G06F18/285 — Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
- G06N3/045 — Neural networks; combinations of networks
- G06N3/08 — Neural networks; learning methods
- G06F2218/08 — Pattern recognition for signal processing; feature extraction
- G06F2218/12 — Pattern recognition for signal processing; classification, matching
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an electroencephalogram (EEG) emotion classification method based on a multi-scale convolution kernel CNN. The SEED dataset is selected as experimental data, and classification tests are performed with a combined feature-selection-and-classification model built from the ReliefF algorithm and a multi-scale convolution kernel CNN. First, differential entropy features are extracted on each frequency band from the preprocessed data, and feature smoothing is applied with a linear dynamic system method. Second, feature selection is performed on the data with the ReliefF algorithm, the data are cut into feature samples suitable as input to the multi-scale convolution kernel CNN, and normalization is applied. Finally, the processed samples are classified and tested with the multi-scale convolution kernel CNN. By combining the ReliefF algorithm with a multi-scale convolution kernel CNN, the method effectively improves the classification rate of emotional EEG signals and is of significance for intelligent human-computer interaction and EEG-based emotion recognition.
Description
Technical Field
The invention relates to an electroencephalogram emotion feature classification method, in particular to an electroencephalogram emotion classification method based on a multi-scale convolution kernel CNN.
Background
Traditional emotion recognition is based mainly on facial features, limb movements, and voice. These external features are easy to disguise and do not necessarily reflect true emotion, whereas EEG signals reflect the neural electrophysiological activity that accompanies the brain's processing of emotion and can compensate for the shortcomings of traditional methods. Conventional approaches combine manually extracted emotion features with machine learning algorithms, which requires substantial prior knowledge to identify the characteristics of EEG signals and to construct feature engineering. Because EEG signals are easily disturbed by noise and differ between subjects, selecting hand-crafted EEG features consumes considerable time and effort.
A traditional CNN convolution layer uses a convolution kernel of a single size and therefore cannot extract the emotional features in EEG signals at multiple scales. To address this problem, this work uses multi-scale convolution kernels to extract emotional EEG features at high and low dimensions simultaneously, with experiments carried out on the international public dataset SEED.
Disclosure of Invention
To address the shortcomings of the prior art, the invention provides an EEG emotion classification method based on a multi-scale convolution kernel CNN, which performs emotion classification tests on EEG data using a combined feature-selection-and-classification model.
First, feature selection is performed on the data with the ReliefF algorithm to remove redundant features. Second, convolution kernels of different scales are added to the convolution layer of the CNN to construct a multi-scale convolution kernel CNN; compared with a traditional CNN, this enlarges the dimensionality of the features the convolution layer can extract, reduces the number of convolution layers, and lowers network complexity. Finally, the multi-scale convolution kernel CNN performs emotion classification tests on the feature-selected data. The method effectively improves the accuracy of EEG emotion classification.
According to the technical scheme provided by the invention, an EEG emotion feature recognition method based on a multi-scale convolution kernel CNN comprises the following steps:
step 1, selecting the SEED dataset as the dataset for a three-class EEG emotion experiment;
step 2, extracting differential entropy features from the data in the preprocessed dataset and performing feature smoothing with a linear dynamic system method to obtain the dataset after feature extraction;
step 3, classifying and testing the data from step 2 using the combined feature-selection-and-classification model of the ReliefF algorithm and the multi-scale convolution kernel CNN; first, feature selection is performed on the data with the ReliefF algorithm, with the following specific steps:
3-1. Randomly select a sample $Q_i$ from the sample set $Q$;
3-2. From the samples of the same class as $Q_i$, select the $k$ samples nearest to $Q_i$; from the samples of each class different from $Q_i$, select the $k$ samples nearest to $Q_i$;
3-3. Update the weight of each feature according to the following formula:

$$W(A) = W(A) - \sum_{j=1}^{k}\frac{\operatorname{diff}(A,Q_i,H_j)}{mk} + \sum_{C\neq \operatorname{class}(Q_i)}\left[\frac{p(C)}{1-p(\operatorname{class}(Q_i))}\sum_{j=1}^{k}\frac{\operatorname{diff}(A,Q_i,M_j(C))}{mk}\right]$$

where $W(A)$ is the weight of feature $A$; $H_j$ is the $j$-th of the $k$ nearest-neighbour samples of the same class as $Q_i$; $p(C)$ is the proportion of class-$C$ samples among the training samples; $p(\operatorname{class}(Q_i))$ is the proportion of samples of the same class as $Q_i$ in the total samples, with $\operatorname{class}(Q_i)$ denoting the label of sample $Q_i$; $m$ is the total number of samples; $M_j(C)$ is the $j$-th nearest sample among the $k$ samples of class $C$, where $C$ differs from $\operatorname{class}(Q_i)$; and the function $\operatorname{diff}(A,Q_1,Q_2)$ gives the difference between samples $Q_1$ and $Q_2$ on feature $A$, calculated as

$$\operatorname{diff}(A,Q_1,Q_2)=\frac{\lvert Q_1[A]-Q_2[A]\rvert}{\max(A)-\min(A)}$$

where $Q_1[A]$ and $Q_2[A]$ are the values of samples $Q_1$ and $Q_2$ on feature $A$, and $\max(A)$ and $\min(A)$ are the maximum and minimum of feature $A$ over all samples;
3-4. Select a suitable threshold based on the obtained weights: retain the features whose weights exceed the threshold, remove those below it, and form the final feature subset from the retained features;
step 4, cutting the data in the feature subset obtained in step 3 into a number of matrices and normalizing them to obtain the feature samples input to the multi-scale convolution kernel CNN;
step 5, classifying and testing the processed samples with the multi-scale convolution kernel CNN; the specific steps are:
5-1. Randomly shuffle the normalized data from step 4 and split the shuffled data and their corresponding labels into training and test samples in an 8:2 ratio;
5-2. The second layer of the model is a convolution layer that uses multi-scale convolution kernels to extract features of different dimensionality from the input signal;
5-3. The third layer of the model is a pooling layer using spatial pyramid pooling, which converts the differently sized feature maps output by the convolution layer to the same size;
5-4. The fourth layer is a fully connected layer that flattens the data in preparation for classification;
5-5. The fifth layer is the output layer, which uses a Softmax classifier; the connected dense layer has an output dimension of 3, one unit per class, and the number of output units is set according to the number of categories, i.e., the number of classes depends on the number of output neurons.
Preferably, the dataset comprises EEG signals recorded on 62 channels according to the international 10-20 system. The acquired signals are preprocessed: the raw EEG data are downsampled to 200 Hz and passed through a 0-75 Hz band-pass filter to remove noise and artifacts, and the EEG segments corresponding to the viewing duration are extracted to obtain the preprocessed dataset.
Preferably, in step 4 the data are first cut and then normalized, specifically:
the data format of the dataset after differential entropy extraction is channels × potential values × frequency bands, where the number of channels is 62, the number of frequency bands is 5, and the potential values correspond to the duration of each video segment; each subject views 15 movie clips in sequence, with a total duration of 3394 s, and each subject performs three experiments; after feature selection with the ReliefF algorithm, M feature dimensions are retained and the sample data are cut into matrices of size M × 5 × 1, so that the three experiments of each subject together yield 10182 samples;
the data are normalized by zero-mean normalization, also called standard-deviation normalization, so that the processed data have mean 0 and standard deviation 1; the conversion formula is

$$x^{*}=\frac{x-\mu}{\sigma}$$

where $\mu$ is the mean of the original data and $\sigma$ is its standard deviation; after zero-mean normalization of the EEG signals, all data lie on the same order of magnitude, which facilitates subsequent classification.
Preferably, in step 5 the multi-scale convolution kernel CNN structure is as follows:
the multi-scale convolution kernel sizes are M × 5 × 1, M × 3 × 1, and M × 1 × 1, with 128 convolution kernels of each size and a convolution stride of 1; the feature maps output by the convolution layer have sizes 1 × 76 × 1 × 128, 1 × 70 × 1 × 128, and 1 × 57 × 1 × 128, respectively;
the pooling layer selects matrices of sizes 1 × 76 × 1, 1 × 70 × 1, and 1 × 57 × 1, respectively, with stride 1, so that the output matrices are all of size 1 × 1 × 1 × 128;
the loss function fuses the cross-entropy function with an L1 regularization term:

$$L=-\sum_{i}p_i\log q_i+\frac{\lambda}{n}\sum_{w}\lvert w\rvert$$

where the first term is the cross-entropy function, with $p$ the desired output and $q$ the actual output; the second term is the L1 regularization term, with $\lambda$ the regularization coefficient, $n$ the number of samples, and $w$ a feature weight.
Compared with the prior art, the invention has the following beneficial effects:
First, the ReliefF algorithm performs feature selection on the data, effectively removing redundant features and reducing computation. In a conventional CNN, the convolution layer uses single-size kernels, so several convolution layers must be stacked to extract deeper features of the signal; the network structure then becomes complex, network parameters multiply with the number of layers, fast convergence is hindered, and network performance is seriously affected. The multi-scale convolution kernel CNN instead extracts features of different dimensionality within a single convolution layer, reducing depth and network complexity.
Drawings
FIG. 1 is a flow chart of a SEED dataset experiment;
FIG. 2 is a 62 channel electrode profile of an electroencephalogram acquisition apparatus;
FIG. 3 is a sequence of movie fragments and their duration;
FIG. 4 is a diagram of a multi-scale convolution kernel CNN structure;
FIG. 5 is a flow chart for emotion electroencephalogram feature extraction and classification;
Detailed Description
The invention will be further illustrated with reference to specific examples. The following description is merely exemplary and explanatory and is not intended to limit the present invention in any manner.
Step 1, selecting a SEED data set as experimental data;
step 2, extracting differential entropy features from the data, and performing feature smoothing by using a linear dynamic system method to obtain a data set after feature extraction;
step 3, classifying and testing the data from step 2 using the combined feature-selection-and-classification model of the ReliefF algorithm and the multi-scale convolution kernel CNN; first, feature selection is performed on the data with the ReliefF algorithm to obtain the final feature subset.
step 4, cutting the data in the feature subset obtained in step 3 into matrices of suitable size, then normalizing and shuffling them;
step 5, splitting the randomly shuffled data and the corresponding labels into training and test samples in an 8:2 ratio; finally, classifying and testing the processed samples with the multi-scale convolution kernel CNN.
In step 1, the experimental flow of the dataset is shown in fig. 1. The dataset comprises EEG signals recorded on 62 channels according to the international 10-20 system, as shown in fig. 2. The acquired signals are preprocessed: the raw EEG data are downsampled to 200 Hz and passed through a 0-75 Hz band-pass filter to remove noise and artifacts, and the EEG segments corresponding to each film's duration are extracted to obtain the preprocessed dataset;
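The preprocessing above (downsampling to 200 Hz and 0-75 Hz band-pass filtering) can be sketched for a single channel as follows. This is a minimal illustration with hypothetical function and parameter names, not the patent's pipeline: naive decimation and FFT-bin masking stand in for the proper anti-aliasing and band-pass filters a real EEG toolchain would use.

```python
import numpy as np

def preprocess(raw, fs_in=1000, fs_out=200, band=(0.0, 75.0)):
    """Downsample one EEG channel and band-pass it by masking FFT bins.

    Simplified stand-in for the described preprocessing (downsample to
    200 Hz, 0-75 Hz band-pass); naive decimation replaces a proper
    anti-aliasing filter here.
    """
    x = raw[::fs_in // fs_out]                   # naive decimation to fs_out
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_out)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.fft.irfft(spec * mask, n=len(x))   # zero out-of-band bins

# 2 s of a 10 Hz rhythm plus 90 Hz interference, sampled at 1 kHz
t = np.arange(0, 2, 1.0 / 1000)
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 90 * t)
clean = preprocess(raw)                          # 400 samples at 200 Hz
```

After the sketch, the 90 Hz component is removed while the 10 Hz rhythm survives, which is the qualitative behaviour the described 0-75 Hz filtering achieves.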
In step 2, differential entropy features are extracted from the data and feature smoothing is applied with a linear dynamic system method. Differential entropy features of the EEG signal are extracted because they are more suitable for emotion recognition than other features. The signal is divided into 1 s segments according to its duration; assuming the signal is approximately stationary within each 1 s interval, the differential entropy features of five frequency bands (delta: 1-3 Hz, theta: 4-7 Hz, alpha: 8-13 Hz, beta: 14-30 Hz, gamma: 31-50 Hz) of the preprocessed data are calculated in each segment using a short-time Fourier transform with a 256-point non-overlapping Hanning window.
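Under the Gaussian assumption commonly used for SEED-style features, the differential entropy of a band-filtered 1 s segment reduces to the closed form h = ½·ln(2πeσ²). A minimal sketch, where the function name and the synthetic segment are illustrative rather than taken from the patent:

```python
import numpy as np

def differential_entropy(segment):
    """Differential entropy of a band-filtered EEG segment under the
    Gaussian assumption: h = 0.5 * ln(2 * pi * e * sigma^2)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(segment))

# synthetic 1 s segment at 200 Hz with standard deviation 2
rng = np.random.default_rng(0)
seg = rng.normal(0.0, 2.0, size=200)
de = differential_entropy(seg)
```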
In step 3, the ReliefF algorithm is used for feature selection on the data, with the following specific steps:
3-1. Randomly select a sample $Q_i$ from the sample set $Q$;
3-2. From the samples of the same class as $Q_i$, select the $k$ samples nearest to $Q_i$; from the samples of each class different from $Q_i$, select the $k$ samples nearest to $Q_i$;
3-3. Update the weight of each feature according to the following formula:

$$W(A) = W(A) - \sum_{j=1}^{k}\frac{\operatorname{diff}(A,Q_i,H_j)}{mk} + \sum_{C\neq \operatorname{class}(Q_i)}\left[\frac{p(C)}{1-p(\operatorname{class}(Q_i))}\sum_{j=1}^{k}\frac{\operatorname{diff}(A,Q_i,M_j(C))}{mk}\right]$$

where $W(A)$ is the weight of feature $A$; $H_j$ is the $j$-th of the $k$ nearest-neighbour samples of the same class as $Q_i$; $p(C)$ is the proportion of class-$C$ samples among the training samples; $p(\operatorname{class}(Q_i))$ is the proportion of samples of the same class as $Q_i$ in the total samples, with $\operatorname{class}(Q_i)$ denoting the label of sample $Q_i$; $m$ is the total number of samples; $M_j(C)$ is the $j$-th nearest sample among the $k$ samples of class $C$, where $C$ differs from $\operatorname{class}(Q_i)$; and the function $\operatorname{diff}(A,Q_1,Q_2)$ gives the difference between samples $Q_1$ and $Q_2$ on feature $A$, calculated as

$$\operatorname{diff}(A,Q_1,Q_2)=\frac{\lvert Q_1[A]-Q_2[A]\rvert}{\max(A)-\min(A)}$$

where $Q_1[A]$ and $Q_2[A]$ are the values of samples $Q_1$ and $Q_2$ on feature $A$, and $\max(A)$ and $\min(A)$ are the maximum and minimum of feature $A$ over all samples;
3-4. Select a suitable threshold based on the obtained weights: retain the features whose weights exceed the threshold, remove those below it, and form the final feature subset from the retained features;
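Steps 3-1 to 3-3 can be sketched as a compact ReliefF weight estimator. This is an illustrative simplification with hypothetical names, not the patent's implementation: it uses the Manhattan distance over normalized feature differences for neighbour search and omits the thresholding of step 3-4.

```python
import numpy as np

def relieff_weights(X, y, k=3, m=50, rng=None):
    """Minimal ReliefF sketch: for m randomly chosen samples, decrease each
    feature's weight by its distance to the k nearest same-class 'hits' and
    increase it by the class-prior-weighted distance to the k nearest
    'misses' of every other class."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = X.shape
    span = X.max(axis=0) - X.min(axis=0) + 1e-12   # max(A) - min(A)
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / n))          # p(C)
    W = np.zeros(d)
    for _ in range(m):
        i = rng.integers(n)
        diff = np.abs(X - X[i]) / span              # diff(A, Q_i, .) per feature
        dist = diff.sum(axis=1)
        dist[i] = np.inf                            # never pick Q_i itself
        hits = np.where(y == y[i])[0]
        hits = hits[np.argsort(dist[hits])][:k]
        W -= diff[hits].sum(axis=0) / (m * k)
        for c in classes:
            if c == y[i]:
                continue
            miss = np.where(y == c)[0]
            miss = miss[np.argsort(dist[miss])][:k]
            scale = prior[c] / (1.0 - prior[y[i]])  # p(C)/(1 - p(class(Q_i)))
            W += scale * diff[miss].sum(axis=0) / (m * k)
    return W

# toy data: feature 0 separates the two classes; feature 1 is pure noise
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(3, 1, (40, 2))])
X[:, 1] = rng.normal(0, 1, 80)
y = np.array([0] * 40 + [1] * 40)
w = relieff_weights(X, y)
```

On the toy data, the discriminative feature receives a clearly larger weight than the noise feature, which is exactly what the thresholding in step 3-4 exploits.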
In step 4, the sample data are first cut and then normalized, specifically:
The data format of the dataset after differential entropy (DE) extraction is channels × potential values × frequency bands, where the number of channels is 62 and the number of frequency bands is 5; the potential values correspond to the duration of each video segment. Each subject views 15 movie clips in the order shown in fig. 3, which also gives each clip's duration, totalling 3394 s; each subject is tested three times. After feature selection with the ReliefF algorithm, M feature dimensions are retained and the sample data are cut into matrices of size M × 5 × 1, so that the three experiments of each subject together yield 10182 samples.
The data are normalized by zero-mean normalization, also called standard-deviation normalization; the processed data have mean 0 and standard deviation 1. The conversion formula is

$$x^{*}=\frac{x-\mu}{\sigma}$$

where $\mu$ is the mean of the original data and $\sigma$ is its standard deviation. After zero-mean normalization of the EEG signals, all data lie on the same order of magnitude, which facilitates subsequent classification.
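The zero-mean normalization step can be expressed in a few lines; the function name and the toy feature vector are illustrative:

```python
import numpy as np

def zscore(x):
    """Zero-mean (standard-deviation) normalization: x* = (x - mu) / sigma."""
    return (x - x.mean()) / x.std()

features = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
z = zscore(features)
```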
In step 5, the prepared samples are classified and tested with the multi-scale convolution kernel CNN, with the following specific steps:
5-1. Randomly shuffle the normalized data from step 4 and split the shuffled data and their corresponding labels into training and test samples in an 8:2 ratio;
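Step 5-1's shuffle-then-split can be sketched as follows. The function name and the toy data are illustrative, and a fixed seed stands in for the random shuffling:

```python
import numpy as np

def shuffle_split(X, y, train_frac=0.8, seed=0):
    """Shuffle samples and labels together, then split them 8:2 into
    training and test sets, keeping each sample paired with its label."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(len(X) * train_frac)
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]

X = np.arange(100, dtype=float).reshape(100, 1)   # toy feature samples
y = np.arange(100) % 3                            # toy 3-class labels
X_tr, y_tr, X_te, y_te = shuffle_split(X, y)
```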
5-2. The second layer of the model is a convolution layer that uses multi-scale convolution kernels to extract features of different dimensionality from the input signal. The kernel sizes are M × 5 × 1, M × 3 × 1, and M × 1 × 1, with 128 convolution kernels of each size and a convolution stride of 1; the feature maps output by the convolution layer have sizes 1 × 76 × 1 × 128, 1 × 70 × 1 × 128, and 1 × 57 × 1 × 128, respectively.
5-3. The third layer of the model is a pooling layer using spatial pyramid pooling, which converts the differently sized feature maps output by the convolution layer to the same size; the pooling matrices have sizes 1 × 76 × 1, 1 × 70 × 1, and 1 × 57 × 1, respectively, with stride 1, so that the output matrices are all of size 1 × 1 × 1 × 128.
5-4. The fourth layer is a fully connected layer that flattens the data in preparation for classification.
5-5. The fifth layer is the output layer, which uses a Softmax classifier; the connected dense layer has an output dimension of 3, one unit per class, and the number of output units is set according to the number of categories, i.e., the number of classes depends on the number of output neurons.
The structure of the multi-scale convolution kernel CNN model is shown in fig. 4, where the forward propagation formula of the multi-scale convolution kernel CNN is

$$x_j^{l} = f\Big(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^{l} + b_j^{l}\Big)$$

where $x_i^{l-1}$ is the input signal of the $i$-th feature map of layer $l-1$; $x_j^{l}$ is the output value of the $j$-th feature map of layer $l$; $M_j$ is the set of feature maps; $*$ denotes convolution; $k_{ij}^{l}$ is the learnable convolution kernel between the $i$-th feature map of layer $l-1$ and the $j$-th feature map of layer $l$; $b_j^{l}$ is the bias of the output feature map; and $f(\cdot)$ is the output activation function, for which this network model uses the ReLU activation function.
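The forward-propagation formula can be illustrated with a single-channel 1-D "valid" convolution followed by ReLU, applied at several kernel sizes to mimic the multi-scale layer. This is a didactic sketch under simplifying assumptions (one input map, 1-D data, hypothetical names), not the patent's 2-D implementation:

```python
import numpy as np

def conv1d_relu(x, kern, b=0.0):
    """Single-channel 'valid' convolution plus ReLU, mirroring
    x_j^l = f(sum_i x_i^{l-1} * k_ij^l + b_j^l) with one input map."""
    n = len(x) - len(kern) + 1
    out = np.array([np.dot(x[i:i + len(kern)], kern) for i in range(n)]) + b
    return np.maximum(out, 0.0)                    # ReLU activation

x = np.linspace(-1.0, 1.0, 10)
# three kernel sizes stand in for the multi-scale convolution layer
maps = [conv1d_relu(x, np.ones(k) / k) for k in (1, 3, 5)]
```

As in the patent's layer, each kernel size yields an output map of a different length, which is why a pooling stage that equalizes sizes (spatial pyramid pooling above) is needed before the fully connected layer.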
The loss function fuses the cross-entropy function with an L1 regularization term:

$$L=-\sum_{i}p_i\log q_i+\frac{\lambda}{n}\sum_{w}\lvert w\rvert$$

where the first term is the cross-entropy function, with $p$ the desired output and $q$ the actual output; the second term is the L1 regularization term, with $\lambda$ the regularization coefficient, $n$ the number of samples, and $w$ a feature weight.
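A sketch of this loss, under the assumption that both terms are averaged over the n samples (the patent's formula leaves the normalization of the cross-entropy term implicit); all names are illustrative:

```python
import numpy as np

def loss_with_l1(p, q, w, lam=0.01):
    """Cross-entropy between desired output p and actual output q,
    plus an L1 penalty (lam / n) * sum|w| on the weights."""
    n = p.shape[0]
    ce = -np.sum(p * np.log(q)) / n                # cross-entropy term
    return ce + (lam / n) * np.abs(w).sum()        # L1 regularization term

p = np.array([[0.0, 1.0, 0.0]])    # one-hot desired output
q = np.array([[0.1, 0.8, 0.1]])    # actual softmax output
w = np.array([0.5, -0.5])          # example weights, |w| sums to 1
L = loss_with_l1(p, q, w)
```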
The Softmax classifier expression is

$$f_j(z)=\frac{e^{z_j}}{\sum_{k=1}^{K}e^{z_k}}$$

where $K$ is the number of classes, $z_j$ is the linear prediction score of the $j$-th class, the denominator sums the exponentiated scores of all $K$ classes, and $f_j(z)$ is the normalized prediction for each class. In back propagation, the Adam gradient algorithm is used.
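The Softmax expression can be written directly, with the usual max-subtraction for numerical stability (a standard implementation detail not stated in the patent):

```python
import numpy as np

def softmax(z):
    """f_j(z) = exp(z_j) / sum_k exp(z_k); subtracting the max before
    exponentiating avoids overflow without changing the result."""
    e = np.exp(z - z.max())
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))   # three-class scores
```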
Regarding network hyperparameter settings: the network uses the Adam algorithm to optimize the loss function, with an initial learning rate of 0.001; the training batch size is 62 and the number of epochs is set to 600.
The overall workflow is shown in fig. 5. In the field of emotion recognition, this work explores emotional EEG signals as the feature extraction object, combines the ReliefF algorithm with a multi-scale convolution kernel CNN, and proposes a novel feature-selection-and-classification model: first, feature selection with the ReliefF algorithm removes redundant features; second, the classical CNN model is improved to obtain a multi-scale convolution kernel CNN model. The method is of considerable research value for emotional EEG feature classification.
Claims (4)
1. An electroencephalogram emotion classification method based on a multi-scale convolution kernel CNN is characterized by comprising the following steps:
step 1, selecting the SEED dataset as the dataset for a three-class EEG emotion experiment;
step 2, extracting differential entropy features from the data in the preprocessed dataset and performing feature smoothing with a linear dynamic system method to obtain the dataset after feature extraction;
step 3, classifying and testing the data from step 2 using the combined feature-selection-and-classification model of the ReliefF algorithm and the multi-scale convolution kernel CNN; first, feature selection is performed on the data with the ReliefF algorithm, with the following specific steps:
3-1. Randomly select a sample $Q_i$ from the sample set $Q$;
3-2. From the samples of the same class as $Q_i$, select the $k$ samples nearest to $Q_i$; from the samples of each class different from $Q_i$, select the $k$ samples nearest to $Q_i$;
3-3. Update the weight of each feature according to the following formula:

$$W(A) = W(A) - \sum_{j=1}^{k}\frac{\operatorname{diff}(A,Q_i,H_j)}{mk} + \sum_{C\neq \operatorname{class}(Q_i)}\left[\frac{p(C)}{1-p(\operatorname{class}(Q_i))}\sum_{j=1}^{k}\frac{\operatorname{diff}(A,Q_i,M_j(C))}{mk}\right]$$

where $W(A)$ is the weight of feature $A$; $H_j$ is the $j$-th of the $k$ nearest-neighbour samples of the same class as $Q_i$; $p(C)$ is the proportion of class-$C$ samples among the training samples; $p(\operatorname{class}(Q_i))$ is the proportion of samples of the same class as $Q_i$ in the total samples, with $\operatorname{class}(Q_i)$ denoting the label of sample $Q_i$; $m$ is the total number of samples; $M_j(C)$ is the $j$-th nearest sample among the $k$ samples of class $C$, where $C$ differs from $\operatorname{class}(Q_i)$; and the function $\operatorname{diff}(A,Q_1,Q_2)$ gives the difference between samples $Q_1$ and $Q_2$ on feature $A$, calculated as

$$\operatorname{diff}(A,Q_1,Q_2)=\frac{\lvert Q_1[A]-Q_2[A]\rvert}{\max(A)-\min(A)}$$

where $Q_1[A]$ and $Q_2[A]$ are the values of samples $Q_1$ and $Q_2$ on feature $A$, and $\max(A)$ and $\min(A)$ are the maximum and minimum of feature $A$ over all samples;
3-4. Select a suitable threshold based on the obtained weights: retain the features whose weights exceed the threshold, remove those below it, and form the final feature subset from the retained features;
step 4, cutting the data in the feature subset obtained in step 3 into a number of matrices and normalizing them to obtain the feature samples input to the multi-scale convolution kernel CNN;
step 5, classifying and testing the processed samples with the multi-scale convolution kernel CNN; the specific steps are:
5-1. Randomly shuffle the normalized data from step 4 and split the shuffled data and their corresponding labels into training and test samples in an 8:2 ratio;
5-2. The second layer of the model is a convolution layer that uses multi-scale convolution kernels to extract features of different dimensionality from the input signal;
5-3. The third layer of the model is a pooling layer using spatial pyramid pooling, which converts the feature maps output by the convolution layer to the same size;
5-4. The fourth layer is a fully connected layer that flattens the data in preparation for classification;
5-5. The fifth layer is the output layer, which uses a Softmax classifier; the connected dense layer has an output dimension of 3, one unit per class, and the number of output units is set according to the number of categories, i.e., the number of classes depends on the number of output neurons.
2. The electroencephalogram emotion classification method based on the multi-scale convolution kernel CNN according to claim 1, characterized in that: the dataset comprises EEG signals recorded on 62 channels according to the international 10-20 system; the acquired signals are preprocessed by downsampling the raw EEG data to 200 Hz and applying a 0-75 Hz band-pass filter to remove noise and artifacts, and the EEG segments corresponding to the viewing duration are extracted to obtain the preprocessed dataset.
3. The electroencephalogram emotion classification method based on multi-scale convolution kernel CNN according to claim 1, characterized in that: in the step 4, the data is firstly cut and then normalized, specifically:
the data format of the data set after the differential entropy feature is extracted is as follows: the number of channels is multiplied by the potential value multiplied by the frequency band, wherein the number of channels is 62, the frequency band is 5, the potential value is the duration of each video segment, each tested person views 15 movie fragments in sequence, the duration of the 15 movie fragments is 3394s, and each tested person performs three experiments; after the reliefF algorithm is used for carrying out feature selection on the data, the feature dimension which is reserved is M, the sample data is cut into a matrix size of M multiplied by 5 multiplied by 1, and the number of samples added by each tested three experiments is 10182;
the data are normalized by zero-mean normalization, also called standard-deviation normalization, so that the processed data have a mean of 0 and a standard deviation of 1; the conversion formula is:

x* = (x − μ) / σ

wherein μ is the mean of the original data and σ is the standard deviation of the original data; after zero-mean normalization of the electroencephalogram signals, all data are on the same order of magnitude, which facilitates subsequent classification processing.
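A minimal sketch of the zero-mean normalization above (illustrative NumPy; the epsilon guarding against zero variance is an added assumption):

```python
import numpy as np

def zero_mean_normalize(x, eps=1e-12):
    # x* = (x - mu) / sigma: the result has mean 0 and standard deviation 1.
    return (x - x.mean()) / (x.std() + eps)
```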
4. The electroencephalogram emotion classification method based on multi-scale convolution kernel CNN according to claim 1, characterized in that: in step 5, the multi-scale convolution kernel CNN structure is as follows:
the multi-scale convolution kernel sizes are set as M×5×1, M×3×1 and M×1×1, with 128 convolution kernels of each size and a convolution stride of 1; the feature maps output by the convolution layers are of sizes 1×76×1×128, 1×70×1×128 and 1×57×1×128, respectively;
the matrix sizes selected by the pooling layer are 1×76×1, 1×70×1 and 1×57×1, respectively, with the stride set to 1, so that the output matrices are all of size 1×1×1×128;
the loss function fuses a cross entropy function with an L1 regularization term:

L = −Σ p(x)·log q(x) + (λ/n)·Σ|w|

wherein the first term is the cross entropy function, p is the desired output and q is the actual output; the second term is the L1 regularization term, λ is the regularization coefficient, n is the number of samples, and w are the feature weights.
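The fused loss can be sketched as below (an illustrative NumPy version; the exact summation convention of the patent's formula image is not reproduced, so the per-sample averaging here is an assumption):

```python
import numpy as np

def ce_l1_loss(p, q, w, lam=1e-4):
    """Cross entropy between desired output p and actual output q, plus an
    L1 penalty (lam / n) * sum(|w|) on the feature weights w."""
    n = p.shape[0]
    ce = -np.sum(p * np.log(q + 1e-12)) / n  # small constant avoids log(0)
    return ce + (lam / n) * np.sum(np.abs(w))
```

The L1 term pushes small feature weights toward zero, which complements the ReliefF feature selection performed earlier.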
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011478666.XA CN112465069B (en) | 2020-12-15 | 2020-12-15 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112465069A CN112465069A (en) | 2021-03-09 |
CN112465069B true CN112465069B (en) | 2024-02-06 |
Family
ID=74804336
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011478666.XA Active CN112465069B (en) | 2020-12-15 | 2020-12-15 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112465069B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113011330B (en) * | 2021-03-19 | 2022-09-02 | 中国科学技术大学 | Electroencephalogram signal classification method based on multi-scale neural network and cavity convolution |
CN113177482A (en) * | 2021-04-30 | 2021-07-27 | 中国科学技术大学 | Cross-individual electroencephalogram signal classification method based on minimum category confusion |
CN113558644B (en) * | 2021-07-20 | 2024-03-22 | 陕西科技大学 | Emotion classification method, medium and equipment for 3D matrix and multidimensional convolution network |
CN114504331A (en) * | 2022-02-25 | 2022-05-17 | 北京工业大学 | Mood recognition and classification method fusing CNN and LSTM |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110353702A (en) * | 2019-07-02 | 2019-10-22 | 华南理工大学 | A kind of emotion identification method and system based on shallow-layer convolutional neural networks |
CN110464366A (en) * | 2019-07-01 | 2019-11-19 | 华南师范大学 | A kind of Emotion identification method, system and storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6427137B2 (en) * | 1999-08-31 | 2002-07-30 | Accenture Llp | System, method and article of manufacture for a voice analysis system that detects nervousness for preventing fraud |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||