CN106886792B - Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism


Info

Publication number: CN106886792B (application CN201710053891.0A)
Authority: CN (China)
Prior art keywords: emotion, electroencephalogram, channel, classifier, classification
Legal status: Active
Application number: CN201710053891.0A
Other languages: Chinese (zh)
Other versions: CN106886792A
Inventors: 李贤�, 闫健卓, 李东佩, 盛文瑾, 王静, 陈建辉
Current Assignee: Beijing University of Technology
Original Assignee: Beijing University of Technology
Application filed by Beijing University of Technology
Priority to CN201710053891.0A
Publication of CN106886792A
Application granted
Publication of CN106886792B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/004 Artificial life, i.e. computing arrangements simulating life
    • G06N 3/006 Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]

Abstract

The invention relates to an electroencephalogram emotion recognition method that constructs a multi-classifier fusion model based on a layering mechanism. The channels of the emotion electroencephalogram feature matrix are divided according to electrode position, optimized feature selection integration is executed for each channel, and multiple single emotion classification models are constructed. Taking the diversity and the accuracy of the classification models on the same emotion recognition problem as evaluation criteria, the optimal single emotion classification model of each channel is selected to obtain the set of classifiers to be fused. The classification error of each optimal single emotion classification model is then used as a weight, and an emotion recognition fusion model is constructed based on a weighted voting method. By fusing multiple classifiers, the invention solves the problem that a high emotion recognition rate is difficult to obtain over the electroencephalogram sample space.

Description

Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism
Technical Field
The invention relates to the field of affective computing, in particular to electroencephalogram-based emotion recognition, and more particularly to an electroencephalogram emotion recognition method that constructs a multi-classifier fusion model through a channel layering mechanism and feature selection integration.
Background
Emotion is a high-level function of the human brain: a psychological and physiological state that accompanies cognitive and conscious processes, integrates human perception, thought and behavior, and plays a very important role in interpersonal communication. In recent years, with the rapid development of pervasive and computer technology, emotion recognition, a key problem in affective computing, has become an important subject of cross-disciplinary research in computer science, cognitive science, artificial intelligence and related fields, and has gained increasing attention and application. In clinical care, if the emotional state of a patient, especially a patient with an expressive disorder, can be known, different care measures can be taken according to the patient's emotion to improve the quality of care. In product development, if the emotion of a user while using a product can be measured and the user experience understood, product functions and quality can be improved to better meet user needs. Drivers of bullet trains, high-speed rail and long-distance passenger transport must maintain high attention and alertness for long periods; if a driver's emotional state for the day could be obtained in advance, traffic accidents caused by negative emotions such as irritability and depression could be effectively avoided. Emotion recognition is also attracting growing attention in the psychological and behavioral monitoring of patients with mental disorders, in intelligent multimedia recommendation systems, and in making human-computer interaction friendlier and more intelligent. Emotion recognition technology therefore has great application and research value in analyzing and evaluating emotion.
Earlier emotion research usually identified different emotions from apparent features such as facial expressions, voice tones and body postures. Although these signals are easy to obtain, they are also easily masked or disguised, the influence of subjective factors is difficult to eliminate, and sometimes the true underlying emotional state cannot be known at all. The physiological responses that accompany emotion are governed by the nervous and endocrine systems; they are spontaneous and not easily controlled by conscious intent, so emotion recognition based on the corresponding physiological signals can obtain objective, authentic results and is better suited to practical application. Peripheral physiological signals such as respiration, heart rate, body-surface temperature and skin impedance are commonly used to detect a person's emotional state, but the differences between these signals are usually small and their rate of change slow, so they cannot meet the research requirements when emotion must be recognized rapidly in real time. Research in cognitive science and neurophysiology shows that human brain activity plays an important role in the generation and course of emotion, and information related to changes in emotional state can be detected from electroencephalogram signals collected from the scalp. In recent years, thanks to the advantages of electroencephalogram signals of being hard to disguise and reflecting changes in real time, together with the popularization of acquisition equipment, the rapid development of signal processing and machine learning, and the great improvement of computer data-processing capacity, electroencephalogram signals have received increasing attention and been applied in the field of affective computing.
At present, electroencephalogram-based emotion recognition mainly relies on traditional single classifiers and their improved variants, commonly support vector machines, decision trees, Bayesian networks, neural networks and the K-nearest-neighbor algorithm. These achieve good recognition results, but there is still room for improvement. Generally, emotion recognition should both achieve a high recognition rate and generalize well to new data sets. In reality, however, because of objective factors such as cultural differences and the individual character of the subjects, electroencephalogram data collected in emotion-induction experiments often suffer from class imbalance; as the experiment lengthens, subjects become fatigued and develop psychological resistance to the experiment, and interference from external factors leaves the data with considerable noise. In addition, the nonlinear chaotic characteristics of the brain make the electroencephalogram diverse and complex, and different brain regions do not experience emotion in exactly the same way. These factors greatly increase the difficulty of electroencephalogram emotion recognition, and accurate classification over the whole sample space is difficult to achieve with a traditional single classifier. A common strategy is to find, through repeated test comparisons, the classifier with the best classification performance for a specific emotion recognition problem. However, when prior knowledge is insufficient it is difficult to determine the best classifier, and when the differences between features are large it is difficult to concentrate them into a single classifier for decision making.
Although the performance of the individual classifiers differs, their sets of misclassified samples do not necessarily overlap, indicating that the individual classifiers carry complementary information. If this complementary information can be used to combine multiple classifiers so that each classifier acts in the region of the sample space where it is strongest, then the fusion of multiple classifiers can be expected to improve the accuracy of electroencephalogram emotion recognition.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide an electroencephalogram emotion recognition method that constructs a multi-classifier fusion model based on a layering mechanism, so as to solve the problem that the accuracy of existing emotion recognition methods needs improvement when classifying emotion electroencephalogram data that are class-imbalanced, nonlinear and non-stationary.
The technical scheme adopted by the invention for solving the technical problems is as follows: an electroencephalogram emotion recognition method for constructing a multi-classifier fusion model based on a layering mechanism comprises the following steps:
(1) Collecting multi-lead emotion electroencephalogram data and analyzing and processing it, including electroencephalogram preprocessing, feature extraction and weight-measure-based channel selection, to construct an emotion electroencephalogram feature matrix.
(2) Dividing the channels of the emotion electroencephalogram feature matrix according to electrode position, executing optimized feature selection integration for each channel, and constructing multiple single emotion classification models.
(3) Taking the diversity and accuracy of the classification models on the same emotion recognition problem as evaluation criteria, selecting the optimal single emotion classification model of each channel to obtain the set of classifiers to be fused.
(4) Using the classification error of each optimal single emotion classification model as a weight, constructing an emotion recognition fusion model based on a weighted voting method.
Further, the step (1) is a method for constructing an emotion electroencephalogram characteristic matrix based on electroencephalogram analysis processing, and the specific steps comprise:
Preprocessing the acquired multi-lead raw emotion electroencephalogram signals, including: resetting the reference electrode (changing the original reference potential), down-sampling from 512 Hz to 128 Hz, filtering and denoising with a 0.1-50 Hz band-pass filter, and removing artifact interference using independent component analysis to remove ocular artifacts.
Dividing each preprocessed electroencephalogram recording into T segments with a time window of length 2 s and overlap 1 s, and calculating time-domain, statistical, frequency-domain and nonlinear-dynamics features to obtain the initial emotion electroencephalogram feature matrix.
Calculating the weight of each channel based on a Relieff method, representing the importance degree of each channel for emotion recognition by using the weight, and further realizing the selection of the channel, wherein the specific process comprises the following steps:
normalizing the extracted electroencephalogram feature values and initializing the electroencephalogram feature weights w_0;
for each sample x_i, searching its k nearest hits H_j of the same emotion category and k nearest misses M_j(c) of different categories by Euclidean distance;
updating the weight w(f_L) of each feature f_L;
repeating the above steps m times, where m is the total number of samples, and outputting all feature weights w;
taking the average of all feature weights of each channel as the weight W(t) of that channel;
sorting the channel weights from large to small and selecting the D channels {Ch_1, Ch_2, …, Ch_D} with the largest weights;
arranging in a channel-feature-segment manner to obtain a two-dimensional emotion electroencephalogram feature matrix of m rows and D×q×T columns, where m is the total number of samples, D the number of selected channels, q the number of feature types extracted per channel, and T the number of segments per electroencephalogram lead.
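The channel-feature-segment arrangement described above amounts to flattening a per-sample [channel, feature, segment] array into one row per sample. A minimal sketch, assuming NumPy and hypothetical sizes (the values of m, D, q, T and the random feature values are ours, not the patent's):

```python
import numpy as np

# Hypothetical sizes: m samples, D selected channels, q feature types, T segments.
m, D, q, T = 40, 32, 10, 60

# Per-sample features indexed as [sample, channel, feature, segment].
features = np.random.rand(m, D, q, T)

# Channel-feature-segment order: row-major reshape keeps channel varying
# slowest, then feature type, then segment, matching the described layout.
feature_matrix = features.reshape(m, D * q * T)

print(feature_matrix.shape)  # (40, 19200)
```

Row-major (C-order) flattening is what makes the column order "all segments of feature 1 of channel 1, then feature 2 of channel 1, …" without any explicit loop.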
Further, the step (2) is a method for generating a base classifier by combining channel division and optimization feature selection integration, and specifically comprises the following steps:
generating S training subsets {SubTr_1, SubTr_2, …, SubTr_S} on each electroencephalogram channel using bootstrap sampling;
selecting an optimal electroencephalogram feature subset for each training subset using the particle swarm optimization (PSO) algorithm;
learning from the samples with an SVM on the optimal feature subset of each training subset to generate S base classifiers {SVM_1, SVM_2, …, SVM_S}.
Further, step (3) is a method for selecting an optimal basis classifier for each channel based on the difference and the accuracy, and specifically includes the following steps:
identifying the test samples with the base classifiers SVM_s generated on each channel, and calculating the recognition accuracy Acc_s of each base classifier from the recognition results;
sorting the classifiers by recognition rate from large to small, and selecting the set of emotion classification models with the best recognition performance;
computing the mean difference Div_i between each model in the set and the classification models of the other channels;
computing the selection evaluation criterion Evaluation_i for the optimal emotion classification model of each channel.
Further, the weighted voting-based electroencephalogram emotion classification model fusion method specifically comprises the following steps:
calculating the classification error Error_t of the optimal emotion classification model of each channel;
calculating the weight ω_t of the optimal emotion classification model of each channel;
counting the voting score Score_y of each emotion category;
taking the emotion category with the highest score as the final decision output: Label = argmax_y(Score_y).
The invention can be applied to all electroencephalogram-based emotion recognition systems.
Has the advantages that:
The method uses the channel-layering idea and the ReliefF algorithm to compute a weight for the features of each electroencephalogram channel, taking the average feature weight as the weight of the channel; a large weight indicates that the channel is highly correlated with the target emotion category. Screening channels by weight reduces computational cost and memory consumption and can be expected to improve the accuracy of subsequent emotion recognition. The "Bagging-PSO-SVM" feature selection integration method generates base classifiers with good classification performance; the "diversity + accuracy" evaluation criterion dynamically selects the optimal classifier of each channel, so that the selected classifiers have both high recognition ability and complementary information; finally, a weighted voting method fuses the classification results of the optimal classifiers. Multi-classifier fusion thus solves the problem that a single classifier has difficulty achieving a high emotion recognition rate in a class-imbalanced, complex electroencephalogram sample space.
Drawings
FIG. 1 is a flowchart of a preferred embodiment of the electroencephalogram emotion recognition method for constructing multi-classifier fusion based on a hierarchical mechanism.
Fig. 2 is a detailed flowchart of step S101 in the method shown in fig. 1.
Fig. 3 is a detailed flowchart of step S102 in the method shown in fig. 1.
Fig. 4 is a schematic diagram of step S102 in the method shown in fig. 1.
Fig. 5 is a detailed flowchart of step S103 in the method shown in fig. 1.
Fig. 6 is a detailed flowchart of step S104 in the method shown in fig. 1.
Detailed Description
The invention provides an electroencephalogram emotion recognition method based on hierarchical multi-classifier fusion. To make the purpose, technical scheme and effect of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
Referring to fig. 1, fig. 1 is a flowchart of a preferred embodiment of electroencephalogram emotion recognition based on hierarchical multi-classifier fusion according to the present invention, and as shown in the figure, the implementation steps thereof include the following:
(1) Collecting multi-lead emotion electroencephalogram data and analyzing and processing it, as shown in fig. 2, including electroencephalogram preprocessing, feature extraction and channel selection, to form an emotion electroencephalogram feature matrix; the specific steps are as follows:
Preprocessing the acquired multi-lead raw emotion electroencephalogram signals, including: resetting the reference electrode to Cz (changing the original reference potential), down-sampling (reducing the original sampling frequency from 512 Hz to 128 Hz), filtering and denoising (with a 0.1-50 Hz band-pass filter), and removing artifact interference (removing ocular artifacts by independent component analysis).
Dividing each preprocessed electroencephalogram recording into T segments with a time window of length 2 s and overlap 1 s, and calculating the time-domain, statistical, frequency-domain and nonlinear-dynamics features commonly used to characterize emotion; the frequency-domain features are extracted for the theta (4-7.5 Hz), alpha (8-13 Hz), beta (14-30 Hz) and gamma (30-45 Hz) bands, as shown in Table 1, yielding emotion electroencephalogram feature vectors of dimension N×q×T, where N is the number of electrodes, q is the number of emotion electroencephalogram feature types extracted, and T is the number of segments into which each electroencephalogram signal is divided.
TABLE 1 various emotional EEG characteristics
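The 2 s window with 1 s overlap described above can be sketched as follows; the function name and the use of plain Python lists are illustrative assumptions, not the patent's implementation:

```python
def segment_signal(signal, fs=128, win_s=2.0, overlap_s=1.0):
    """Split a 1-D signal into fixed-length overlapping windows.

    With a 2 s window and 1 s overlap at 128 Hz, consecutive windows
    start 1 s (128 samples) apart and are 256 samples long.
    """
    win = int(win_s * fs)
    step = int((win_s - overlap_s) * fs)
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

# A 60 s recording at 128 Hz yields T = 59 overlapping 2 s segments.
sig = list(range(60 * 128))
segments = segment_signal(sig)
print(len(segments), len(segments[0]))  # 59 256
```

Each segment then gets its own feature vector, which is where the factor T in the N×q×T dimensionality comes from.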
Calculating the weight of each channel based on a Relieff method, representing the importance degree of each channel for emotion recognition by using the weight, and further realizing the selection of the channel, wherein the specific process comprises the following steps:
normalizing the extracted electroencephalogram feature values and initializing the electroencephalogram feature weights w_0;
for each sample x_i, searching its k nearest hits H_j (j = 1, 2, …, k) of the same emotion category and k nearest misses M_j(c) (c = 1, 2, …, C) of different categories by Euclidean distance, where C is the number of emotion categories;
updating the weight w(f_L) of each electroencephalogram feature according to the following formula:

w(f_L) = w(f_L) - (1/(m·k)) · Σ_{j=1}^{k} diff(f_L, x_i, H_j) + (1/(m·k)) · Σ_{c ≠ class(x_i)} [ p(c) / (1 - p(class(x_i))) ] · Σ_{j=1}^{k} diff(f_L, x_i, M_j(c))

where class(x_i) denotes the emotion category of sample x_i, p(c) denotes the probability of the c-th emotion category, m and k can be set according to the number of samples and the electroencephalogram feature dimension, and diff(f, x_1, x_2) denotes the difference between samples x_1 and x_2 on electroencephalogram feature f, used to measure the distance between sample x_1 and sample x_2 on feature f, calculated as:

diff(f, x_1, x_2) = |x_1(f) - x_2(f)| / (max(f) - min(f))
repeating the steps until all samples execute the operation, and finally obtaining all sample characteristic weight values w;
taking the average of all feature weights of each channel as the channel weight W(t) according to the following formula:

W(t) = (1/L) · Σ_{l=1}^{L} w(f_l)

where L is the number of features under each channel and t denotes the t-th channel;
sorting the channel weights from large to small and selecting the D channels {Ch_1, Ch_2, …, Ch_D} with the largest weights for subsequent processing;
arranging in a channel-feature-segment manner to obtain a two-dimensional emotion electroencephalogram feature matrix of m rows and D×q×T columns, where m is the total number of samples, D the number of selected channels, q the number of feature types extracted per channel, and T the number of segments per electroencephalogram lead.
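A minimal sketch of the channel-weighting step. It uses a simplified Relief pass (one nearest hit and one nearest miss per sample, two classes) rather than the full ReliefF update above, and all names and the toy data are illustrative, not the patent's:

```python
import math

def relief_weights(X, y, n_features):
    """Simplified Relief: reward features that separate nearest miss
    from the sample more than they separate the nearest hit."""
    m = len(X)
    w = [0.0] * n_features
    for i in range(m):
        hits = [X[j] for j in range(m) if j != i and y[j] == y[i]]
        misses = [X[j] for j in range(m) if y[j] != y[i]]
        h = min(hits, key=lambda s: math.dist(s, X[i]))
        miss = min(misses, key=lambda s: math.dist(s, X[i]))
        for f in range(n_features):
            w[f] += (abs(X[i][f] - miss[f]) - abs(X[i][f] - h[f])) / m
    return w

def channel_weights(w, features_per_channel):
    """Average the feature weights belonging to each channel (the W(t) above)."""
    return [sum(w[t:t + features_per_channel]) / features_per_channel
            for t in range(0, len(w), features_per_channel)]

# Two channels, two features each; only channel 0 separates the classes.
X = [[0.0, 0.1, 0.5, 0.5], [0.1, 0.0, 0.4, 0.6],
     [1.0, 0.9, 0.5, 0.5], [0.9, 1.0, 0.6, 0.4]]
y = [0, 0, 1, 1]
W = channel_weights(relief_weights(X, y, 4), features_per_channel=2)
print(W)  # channel 0 weight clearly exceeds channel 1
```

Sorting `W` and keeping the top-D indices would give the channel subset {Ch_1, …, Ch_D} used downstream.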
(2) The method comprises the following steps of dividing channels of an emotion electroencephalogram characteristic matrix according to electrode positions, executing optimization characteristic selection integration aiming at each channel, and constructing a plurality of single emotion classification models, as shown in fig. 3 and 4, wherein the specific steps are as follows:
generating S training subsets {SubTr_1, SubTr_2, …, SubTr_S} on each electroencephalogram channel using bootstrap sampling;
selecting an optimal emotion electroencephalogram feature subset for each training subset based on the particle swarm optimization (PSO) algorithm;
learning from the samples with an SVM on the optimal feature subset of each training subset to generate S base emotion classification models {SVM_1, SVM_2, …, SVM_S}.
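The "Bagging-PSO-SVM" generation step can be sketched as follows. To stay self-contained, a nearest-centroid classifier stands in for the SVM, the binary PSO is reduced to its core loop, and the fitness function, parameters and toy data are our assumptions rather than the patent's implementation:

```python
import math, random

random.seed(0)

def centroid_classifier(train_X, train_y, feats):
    """Stand-in for the SVM: classify by nearest class centroid on `feats`."""
    classes = sorted(set(train_y))
    cents = {c: [sum(x[f] for x, t in zip(train_X, train_y) if t == c) /
                 sum(1 for t in train_y if t == c) for f in feats]
             for c in classes}
    def predict(x):
        return min(classes, key=lambda c: math.dist([x[f] for f in feats], cents[c]))
    return predict

def fitness(train, test, mask):
    feats = [f for f, bit in enumerate(mask) if bit]
    if not feats:
        return 0.0
    clf = centroid_classifier(*train, feats)
    X, y = test
    return sum(clf(x) == t for x, t in zip(X, y)) / len(y)

def binary_pso(train, test, n_feats, n_particles=8, iters=20):
    """Minimal binary PSO over feature-subset masks."""
    pos = [[random.randint(0, 1) for _ in range(n_feats)] for _ in range(n_particles)]
    vel = [[0.0] * n_feats for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pscore = [fitness(train, test, p) for p in pos]
    gbest = pbest[pscore.index(max(pscore))][:]
    for _ in range(iters):
        for i in range(n_particles):
            for f in range(n_feats):
                vel[i][f] = (0.7 * vel[i][f]
                             + 1.5 * random.random() * (pbest[i][f] - pos[i][f])
                             + 1.5 * random.random() * (gbest[f] - pos[i][f]))
                # Sigmoid transfer: set the bit with probability sigm(v).
                pos[i][f] = 1 if random.random() < 1 / (1 + math.exp(-vel[i][f])) else 0
            s = fitness(train, test, pos[i])
            if s > pscore[i]:
                pscore[i], pbest[i] = s, pos[i][:]
                if s > fitness(train, test, gbest):
                    gbest = pos[i][:]
    return gbest

# Toy channel data: feature 0 is informative, feature 1 is noise.
X = [[i / 10 + (0.0 if c else 1.0), random.random()] for c in (0, 1) for i in range(10)]
y = [c for c in (0, 1) for _ in range(10)]

# Bagging: S bootstrap training subsets, one PSO feature subset and one
# base model per subset.
S = 3
models = []
for _ in range(S):
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    boot = ([X[i] for i in idx], [y[i] for i in idx])
    mask = binary_pso(boot, (X, y), n_feats=2)
    feats = [f for f, bit in enumerate(mask) if bit]
    models.append(centroid_classifier(*boot, feats))
print(len(models))  # 3 base classifiers for this channel
```

In the patent's setting the stand-in classifier would be an SVM trained on the PSO-selected subset, with the same bootstrap-then-optimize-then-train loop repeated per channel.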
(3) The differences and the accuracies of all the classification models obtained when the classification models aim at the same emotion recognition problem are used as evaluation criteria, the optimal single emotion classification model of each channel is selected, and a classifier set to be fused is obtained, as shown in fig. 5, the specific steps are as follows:
identifying the test samples with the base classifiers SVM_s generated on each channel, and calculating the recognition accuracy Acc_s of each base classifier from the recognition results;
sorting the classifiers by recognition accuracy Acc_s from large to small, and selecting the set of emotion classification models with the best recognition performance;
calculating the mean difference between each classification model in the set and the classification models of the other channels according to the following formula:

Div_i = (1/(M·N)) · Σ_{j=1}^{M} Σ_{k=1}^{N} d_{i,j}(x_k)

where Div_i represents the degree of decision difference between the i-th classifier in the set and the base classifiers generated on the other channels, d_{i,j}(x_k) is 1 if classifiers i and j give different decisions on test sample x_k and 0 otherwise, M is the total number of base classifiers generated on the other channels, and N is the number of samples in the test set;
calculating the selection evaluation criterion for the optimal emotion classification model of each channel according to the following formula:

Evaluation_i = Acc_i + α · Div_i

where Acc_i represents the classification accuracy of the i-th classifier in the set, and α is an adjustable parameter representing the contribution of the difference degree Div_i to the evaluation criterion;
selecting the classification model with the largest evaluation value as the optimal emotion classification model of the channel to participate in the final fusion.
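The "accuracy + diversity" selection rule can be sketched directly from prediction vectors. The prediction arrays and the α value are illustrative; the disagreement measure follows the description above:

```python
def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def mean_disagreement(i, preds):
    """Div_i: mean pairwise disagreement of classifier i with all others."""
    others = [p for j, p in enumerate(preds) if j != i]
    n = len(preds[i])
    return sum(pi != pj
               for other in others
               for pi, pj in zip(preds[i], other)) / (len(others) * n)

def select_best(preds, truth, alpha=0.5):
    """Pick the classifier maximizing Evaluation_i = Acc_i + alpha * Div_i."""
    scores = [accuracy(p, truth) + alpha * mean_disagreement(i, preds)
              for i, p in enumerate(preds)]
    return scores.index(max(scores)), scores

truth = [0, 0, 1, 1, 1]
preds = [[0, 0, 1, 1, 0],   # classifier 0: 4/5 correct
         [0, 0, 1, 1, 1],   # classifier 1: 5/5 correct
         [1, 1, 0, 0, 0]]   # classifier 2: 1/5 correct
best, scores = select_best(preds, truth)
print(best)  # classifier 1 wins on accuracy plus diversity
```

Note how α trades off the two terms: with a very large α a weak but highly disagreeing classifier could outrank an accurate one, which is why the patent treats α as an adjustable parameter.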
(4) Using the classification error of the optimal base emotion classification model of each channel as its weight, a multi-classifier fusion model is constructed for electroencephalogram emotion recognition based on a weighted voting method, specifically as follows:
calculating the classification error of the optimal emotion classification model of each channel according to the following formula:

Error_t = Num(F_t(x_k) ≠ y_k) / N, k = 1, 2, …, N

where N is the number of samples in the test set, F_t(x_k) represents the classification result of the optimal classifier of the t-th channel on sample x_k, y_k is the true emotion category, and Num counts the number of misclassified samples;
calculating the weight ω_t of the optimal emotion classification model of each channel from its classification error Error_t;
counting the voting score of each emotion category according to the following formula:

Score_y = Σ_{t=1}^{D} ω_t · V_t(y)

where y is an emotion category and V_t(y) represents the vote of the t-th classifier for category y: the vote is 1 if the classifier assigns the sample to category y, otherwise 0;
taking the emotion category with the highest score as the final decision output: Label = argmax_y(Score_y).
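The weighted-voting fusion can be sketched as follows. The source does not reproduce the exact weight formula, so the log-odds weight ω_t = ln((1 - Error_t)/Error_t) used below is our assumption (lower error means larger weight); the scoring and argmax follow the description above:

```python
import math

def classifier_weight(error, eps=1e-6):
    """Assumed weight form: log-odds of being correct (lower error -> larger weight)."""
    error = min(max(error, eps), 1 - eps)
    return math.log((1 - error) / error)

def fuse(votes, weights, categories):
    """Weighted vote: Score_y = sum of w_t over classifiers predicting y."""
    scores = {y: sum(w for v, w in zip(votes, weights) if v == y)
              for y in categories}
    return max(scores, key=scores.get), scores

# Illustrative per-channel optimal classifiers with their test errors.
errors = [0.10, 0.25, 0.40]
weights = [classifier_weight(e) for e in errors]

# Their predictions for one sample (emotion categories 0/1).
votes = [1, 1, 0]
label, scores = fuse(votes, weights, categories=[0, 1])
print(label)  # the two more accurate classifiers outvote the weak one
```

Because the weights come from held-out error rather than being uniform, a channel whose optimal classifier generalizes poorly contributes little to the final decision.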
Example (b):
The electroencephalogram emotion recognition method based on the layering mechanism is compared with traditional single-classifier recognition methods. The experimental settings are as follows:
the simulation data is selected from electroencephalogram emotion data in a public data set DEAP, 32 subjects participate in data acquisition, the ages of the subjects are 19-37 years old, and each subject is required to watch 40 music video clips. During the emotion induction experiments, two-dimensional emotion models are used to quantify emotion, including two dimensions of Arousal (Arousal) and Valence (Valence). After each subject has viewed a video, it records the metric values from each dimension in the rating table (SAM) in the range of 1-9. The 32-conductive electrode cap of the international 10-20 system is used for collecting electroencephalogram signals, and the sampling frequency is 512 Hz. The preprocessing of the electroencephalogram signals is operated by adopting an open-source EEGLAB electroencephalogram analysis tool box.
In order to verify the effectiveness of the electroencephalogram emotion recognition and the performance of the electroencephalogram emotion recognition method compared with other traditional methods, a group of comparison experiments are carried out, and the experimental results are as follows:
TABLE 2 comparison of the method of the invention with other methods
From Table 2 it can be seen that, among the single classification models, the support vector machine (SVM) achieves the highest classification accuracy, followed by Naive Bayes (NB), with the decision tree (C4.5) performing worst. The multi-classifier fusion method based on the layering mechanism proposed by the invention achieves better classification accuracy in both the Valence and Arousal dimensions than any single classification model. The proposed method can therefore alleviate the low emotion recognition accuracy of single classification models on electroencephalogram data with characteristics such as class imbalance and complexity.
The invention provides an electroencephalogram emotion recognition method based on layered multi-classifier fusion. It is to be understood that the invention is not limited to the examples described above; those of ordinary skill in the art may make modifications and variations in light of the foregoing description, and all such modifications and variations are intended to fall within the scope of the invention as defined by the appended claims.

Claims (4)

1. An electroencephalogram emotion recognition method for constructing a multi-classifier fusion model based on a layering mechanism is characterized by comprising the following steps: the method comprises the following steps of,
(1) collecting multi-lead emotion electroencephalogram data, and analyzing and processing the data, wherein the analysis and processing comprise electroencephalogram preprocessing, feature extraction and channel selection based on weight measurement so as to construct an emotion electroencephalogram feature matrix;
(2) dividing channels of the emotion electroencephalogram characteristic matrix according to the electrode positions, executing optimized characteristic selection integration aiming at each channel, and constructing a plurality of single emotion classification models; the method comprises the following steps of dividing channels of an emotion electroencephalogram characteristic matrix according to electrode positions, executing optimization characteristic selection integration aiming at each channel, and constructing a plurality of single emotion classification models, wherein the method comprises the following specific steps:
generating S training subsets {SubTr_1, SubTr_2, …, SubTr_S} on each electroencephalogram channel using bootstrap sampling;
selecting an optimal emotion electroencephalogram feature subset for each training subset based on the particle swarm optimization (PSO) algorithm;
learning from the samples with an SVM on the optimal feature subset of each training subset to generate S base emotion classification models {SVM_1, SVM_2, …, SVM_S};
(3) Taking the difference and the accuracy of each classification model obtained when the classification models aim at the same emotion recognition problem as evaluation criteria, and selecting the optimal single emotion classification model of each channel to obtain a classifier set to be fused;
(4) using the classification error of each optimal single emotion classification model as its weight, constructing an emotion recognition fusion model based on weighted voting.
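Outside the claim language, the per-channel ensemble of step (2) can be sketched as below. This is a minimal illustration under stated assumptions: a nearest-centroid model stands in for the SVM base learners, the PSO feature-selection stage is omitted, and the names `CentroidModel` and `train_channel_ensemble` are hypothetical, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_subsets(X, y, S):
    """Draw S bootstrap training subsets SubTr_1..SubTr_S (with replacement)."""
    n = len(y)
    for _ in range(S):
        idx = rng.integers(0, n, size=n)
        yield X[idx], y[idx]

class CentroidModel:
    """Nearest-centroid stand-in for the per-subset SVM base classifier."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from every sample to every class centroid; pick the nearest.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def train_channel_ensemble(X_ch, y, S=5):
    """Build S base models on one channel's feature matrix (claim 1, step 2)."""
    return [CentroidModel().fit(Xs, ys) for Xs, ys in bootstrap_subsets(X_ch, y, S)]
```

Running this per selected channel yields the pool of base classification models from which step (3) picks each channel's best.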
2. The electroencephalogram emotion recognition method for constructing a multi-classifier fusion model based on a layering mechanism as claimed in claim 1, characterized in that the electroencephalogram analysis and processing used to construct the emotion electroencephalogram feature matrix in step (1) comprises the following specific steps:
preprocessing the acquired multi-lead raw emotion electroencephalogram signals, comprising: resetting the reference electrode, i.e., changing the original reference potential; down-sampling from 512 Hz to 128 Hz; filtering and denoising with a 0.1 Hz to 50 Hz band-pass filter; and removing artifact interference, i.e., removing ocular artifacts by independent component analysis;
dividing each preprocessed electroencephalogram recording into T segments with a time window of length 2 s and an overlap of 1 s, and computing time-domain, statistical, frequency-domain and nonlinear-dynamics features respectively to obtain an initial emotion electroencephalogram feature matrix;
calculating the weight of each channel based on the ReliefF method, using the weight to represent the importance of each channel for emotion recognition and thereby realizing channel selection, the specific process comprising:
normalizing the extracted electroencephalogram feature values and initializing the electroencephalogram feature weights w_0;
for each sample x_i, searching by Euclidean distance for its k nearest neighbors H_j of the same emotion category and its k nearest neighbors M_j(c) of each different category;
updating the weight w(f_L) of each feature f_L;
Repeating the steps m times, wherein m is the total number of samples, and outputting all sample feature weights w;
taking the average of all the feature weights of each channel as that channel's weight W(t);
sorting the channel weights from large to small and selecting the D channels {Ch_1, Ch_2, ..., Ch_D} with the largest weights;
arranging the data in channel-feature-segment order to obtain a two-dimensional emotion electroencephalogram feature matrix with m rows and D × q × T columns, where m is the total number of samples, D is the number of selected channels, q is the number of features extracted per channel, and T is the number of segments per electroencephalogram lead.
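The ReliefF-based channel weighting of claim 2 can be illustrated with a simplified sketch. Assumptions to note: only k = 1 hit/miss neighbors are used (plain Relief rather than full ReliefF), the features are assumed to be pre-normalized, and the function names are hypothetical.

```python
import numpy as np

def relief_weights(X, y):
    """Simplified Relief (k = 1): reward features that separate classes and
    penalize features that vary within a class. X is assumed pre-normalized."""
    n, p = X.shape
    w = np.zeros(p)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the sample itself
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        hit = same[np.argmin(d[same])]     # nearest same-class neighbor H
        miss = diff[np.argmin(d[diff])]    # nearest other-class neighbor M
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n
    return w

def select_channels(w, q, D):
    """Average each channel's q feature weights into a channel weight W(t),
    then keep the D channels with the largest weights."""
    ch_w = w.reshape(-1, q).mean(axis=1)
    return np.argsort(ch_w)[::-1][:D], ch_w
```

With the feature matrix laid out channel by channel (q features per channel), `select_channels` reproduces the sort-and-keep-top-D step of the claim.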
3. The electroencephalogram emotion recognition method for constructing a multi-classifier fusion model based on a layering mechanism as claimed in claim 1, characterized in that selecting the optimal base classifier of each channel based on diversity and accuracy in step (3) comprises the following specific steps:
identifying the test samples with the base classifiers SVM_s generated on each channel, and calculating the recognition accuracy Acc_s of each base classifier from the identification results;
sorting the classifiers by recognition rate from large to small and selecting the set of emotion classification models with the best recognition effect;
computing the mean difference Div_i between each classification model in the selected set and the classification models of the other channels;
computing the selection evaluation criterion Evaluation_i of the optimal emotion classification model for each channel.
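Claim 3 names two selection criteria, accuracy and diversity, without fixing how they are combined. The sketch below assumes Evaluation_i = alpha * Acc_i + (1 - alpha) * Div_i, with diversity Div_i measured as mean pairwise disagreement; both the combination rule and the disagreement measure are assumptions, not the patent's formula.

```python
import numpy as np

def pairwise_disagreement(preds):
    """Div_i: mean fraction of samples on which model i disagrees with each
    other candidate model (a common, simple diversity measure)."""
    S = len(preds)
    div = np.zeros(S)
    for i in range(S):
        div[i] = np.mean([(preds[i] != preds[j]).mean()
                          for j in range(S) if j != i])
    return div

def rank_models(preds, y_true, alpha=0.5):
    """Evaluation_i = alpha * Acc_i + (1 - alpha) * Div_i (assumed form);
    the model with the largest score is kept for its channel."""
    acc = np.array([(p == y_true).mean() for p in preds])
    div = pairwise_disagreement(preds)
    return alpha * acc + (1 - alpha) * div
```

Picking `argmax` of the returned scores selects one model per channel for the fusion set of step (4).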
4. The electroencephalogram emotion recognition method for constructing a multi-classifier fusion model based on a layering mechanism as claimed in claim 1, characterized in that the weighted-voting-based fusion of electroencephalogram emotion classification models in step (4) comprises the following specific steps:
calculating the classification error Error_t of the optimal emotion classification model of each channel;
calculating the weight ω_t of the optimal emotion classification model of each channel;
counting the voting score Score_y of each emotion category;
taking the emotion category with the highest score as the final decision output: Label = argmax(Score_y).
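The weighted-voting fusion of claim 4 can be sketched as follows. The claim states only that the classification error serves as the weight; the log-odds weight ω_t = ln((1 - Error_t) / Error_t) used here is an assumed AdaBoost-style choice, and the function name is hypothetical.

```python
import numpy as np

def fuse_weighted_vote(preds, errors):
    """Weighted-vote fusion: each channel's optimal model votes with a weight
    derived from its classification error Error_t (log-odds form assumed)."""
    errors = np.asarray(errors, dtype=float)
    w = np.log((1 - errors) / np.maximum(errors, 1e-12))  # omega_t
    preds = np.asarray(preds)                             # shape (T, n_samples)
    classes = np.unique(preds)
    # Score_y: sum of the weights of the models voting for class y.
    scores = np.array([[w[preds[:, s] == c].sum() for c in classes]
                       for s in range(preds.shape[1])])
    return classes[scores.argmax(axis=1)]                 # Label = argmax(Score_y)
```

A low-error model thus dominates the vote, matching the intent of weighting by classification error.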
CN201710053891.0A 2017-01-22 2017-01-22 Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism Active CN106886792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710053891.0A CN106886792B (en) 2017-01-22 2017-01-22 Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710053891.0A CN106886792B (en) 2017-01-22 2017-01-22 Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism

Publications (2)

Publication Number Publication Date
CN106886792A CN106886792A (en) 2017-06-23
CN106886792B true CN106886792B (en) 2020-01-17

Family

ID=59176822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710053891.0A Active CN106886792B (en) 2017-01-22 2017-01-22 Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism

Country Status (1)

Country Link
CN (1) CN106886792B (en)

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107669266A (en) * 2017-10-12 2018-02-09 公安部南昌警犬基地 A kind of animal brain electricity analytical system
CN107944473A (en) * 2017-11-06 2018-04-20 南京邮电大学 A kind of physiological signal emotion identification method based on the subjective and objective fusion of multi-categorizer
CN108021941B (en) * 2017-11-30 2020-08-28 四川大学 Method and device for predicting drug hepatotoxicity
CN108420430A (en) * 2018-04-02 2018-08-21 东北电力大学 A kind of organoleptic substances sorting technique based on smell brain wave and PSO-SVM
CN108549875B (en) * 2018-04-19 2022-04-15 北京工业大学 Electroencephalogram epileptic seizure detection method based on depth channel attention perception
CN108937968B (en) * 2018-06-04 2021-11-19 安徽大学 Lead selection method of emotion electroencephalogram signal based on independent component analysis
CN109034235B (en) * 2018-07-20 2021-09-28 安徽理工大学 Multi-feature-based integrated SVM noise point detection method
CN109117787A (en) * 2018-08-10 2019-01-01 太原理工大学 A kind of emotion EEG signal identification method and system
CN109447125B (en) * 2018-09-28 2019-12-24 北京达佳互联信息技术有限公司 Processing method and device of classification model, electronic equipment and storage medium
CN109330613A (en) * 2018-10-26 2019-02-15 蓝色传感(北京)科技有限公司 Human body Emotion identification method based on real-time brain electricity
CN109620152B (en) * 2018-12-16 2021-09-14 北京工业大学 MutifacolLoss-densenert-based electrocardiosignal classification method
CN109656366B (en) * 2018-12-19 2022-02-18 电子科技大学中山学院 Emotional state identification method and device, computer equipment and storage medium
WO2020132941A1 (en) * 2018-12-26 2020-07-02 中国科学院深圳先进技术研究院 Identification method and related device
CN110070105B (en) * 2019-03-25 2021-03-02 中国科学院自动化研究所 Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening
CN110109543B (en) * 2019-04-30 2021-08-31 福州大学 c-VEP identification method based on tested migration
CN110414548A (en) * 2019-06-06 2019-11-05 西安电子科技大学 The level Bagging method of sentiment analysis is carried out based on EEG signals
CN110472649B (en) * 2019-06-21 2023-04-18 中国地质大学(武汉) Electroencephalogram emotion classification method and system based on multi-scale analysis and integrated tree model
CN110490152A (en) * 2019-08-22 2019-11-22 珠海格力电器股份有限公司 Information sharing method and electronic equipment based on image recognition
CN111134667B (en) * 2020-01-19 2024-01-26 中国人民解放军战略支援部队信息工程大学 Time migration emotion recognition method and system based on electroencephalogram signals
CN111428580A (en) * 2020-03-04 2020-07-17 威海北洋电气集团股份有限公司 Individual signal identification algorithm and system based on deep learning
CN111543988B (en) * 2020-05-25 2021-06-08 五邑大学 Adaptive cognitive activity recognition method and device and storage medium
CN111714118B (en) * 2020-06-08 2023-07-18 北京航天自动控制研究所 Brain cognition model fusion method based on ensemble learning
CN111723867A (en) * 2020-06-22 2020-09-29 山东大学 Intelligent evaluation system and method for online game fascination degree
CN111832438B (en) * 2020-06-27 2024-02-06 西安电子科技大学 Emotion recognition-oriented electroencephalogram signal channel selection method, system and application
CN111860463B (en) * 2020-08-07 2024-02-02 北京师范大学 Emotion recognition method based on joint norm
CN112200016A (en) * 2020-09-17 2021-01-08 东北林业大学 Electroencephalogram signal emotion recognition based on ensemble learning method AdaBoost
CN112190269B (en) * 2020-12-04 2024-03-12 兰州大学 Depression auxiliary identification model construction method based on multisource brain electric data fusion
CN113243924A (en) * 2021-05-19 2021-08-13 成都信息工程大学 Identity recognition method based on electroencephalogram signal channel attention convolution neural network
CN113408603B (en) * 2021-06-15 2023-10-31 西安华企众信科技发展有限公司 Coronary artery stenosis degree identification method based on multi-classifier fusion
CN113967022B (en) * 2021-11-16 2023-10-31 常州大学 Individual self-adaption-based motor imagery electroencephalogram characteristic characterization method
CN114209341B (en) * 2021-12-23 2023-06-20 杭州电子科技大学 Emotion activation mode mining method for characteristic contribution degree difference electroencephalogram data reconstruction
CN114711790B (en) * 2022-04-06 2022-11-29 复旦大学附属儿科医院 Newborn electroconvulsive type determination method, newborn electroconvulsive type determination device, newborn electroconvulsive type determination equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187944A (en) * 2007-11-30 2008-05-28 中国科学院合肥物质科学研究院 A multilayer selection method for classifier integration based on small survival environment particle sub-group optimization algorithm
CN101887721A (en) * 2010-07-19 2010-11-17 东南大学 Electrocardiosignal and voice signal-based bimodal emotion recognition method
CN102106730A (en) * 2011-03-16 2011-06-29 上海交通大学 Method for processing electroencephalogram signal and detecting alertness based on fractal characteristics
CN102473247A (en) * 2009-06-30 2012-05-23 陶氏益农公司 Application of machine learning methods for mining association rules in plant and animal data sets containing molecular genetic markers, followed by classification or prediction utilizing features created from these association rules
CN102722728A (en) * 2012-06-11 2012-10-10 杭州电子科技大学 Motion image electroencephalogram classification method based on channel weighting supporting vector
US9147129B2 (en) * 2011-11-18 2015-09-29 Honeywell International Inc. Score fusion and training data recycling for video classification
CN105184316A (en) * 2015-08-28 2015-12-23 国网智能电网研究院 Support vector machine power grid business classification method based on feature weight learning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101187944A (en) * 2007-11-30 2008-05-28 中国科学院合肥物质科学研究院 A multilayer selection method for classifier integration based on small survival environment particle sub-group optimization algorithm
CN102473247A (en) * 2009-06-30 2012-05-23 陶氏益农公司 Application of machine learning methods for mining association rules in plant and animal data sets containing molecular genetic markers, followed by classification or prediction utilizing features created from these association rules
CN101887721A (en) * 2010-07-19 2010-11-17 东南大学 Electrocardiosignal and voice signal-based bimodal emotion recognition method
CN102106730A (en) * 2011-03-16 2011-06-29 上海交通大学 Method for processing electroencephalogram signal and detecting alertness based on fractal characteristics
US9147129B2 (en) * 2011-11-18 2015-09-29 Honeywell International Inc. Score fusion and training data recycling for video classification
CN102722728A (en) * 2012-06-11 2012-10-10 杭州电子科技大学 Motion image electroencephalogram classification method based on channel weighting supporting vector
CN105184316A (en) * 2015-08-28 2015-12-23 国网智能电网研究院 Support vector machine power grid business classification method based on feature weight learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A multiobjective weighted voting ensemble classifier based on differential evolution algorithm for text sentiment classification; Aytuğ Onan et al.; 《Expert Systems With Applications》; 20160607; 1-16 *
Aspect term extraction for sentiment analysis in large movie reviews using Gini Index feature selection method and SVM classifier; Asha S Manek et al.; 《World Wide Web》; 20160204; 135-154 *
Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks; Wei-Long Zheng et al.; 《IEEE TRANSACTIONS ON AUTONOMOUS MENTAL DEVELOPMENT》; 20150930; Vol. 7, No. 3; 162-175 *
ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition; Jianhai Zhang et al.; 《Sensors》; 20161231; 1-15 *

Also Published As

Publication number Publication date
CN106886792A (en) 2017-06-23

Similar Documents

Publication Publication Date Title
CN106886792B (en) Electroencephalogram emotion recognition method for constructing multi-classifier fusion model based on layering mechanism
Nakisa et al. Long short term memory hyperparameter optimization for a neural network based emotion recognition framework
Dissanayake et al. Deep learning for patient-independent epileptic seizure prediction using scalp EEG signals
Liu et al. Multimodal emotion recognition using deep canonical correlation analysis
Liu et al. Subject-independent emotion recognition of EEG signals based on dynamic empirical convolutional neural network
Zheng et al. EEG-based emotion classification using deep belief networks
George et al. Recognition of emotional states using EEG signals based on time-frequency analysis and SVM classifier.
CN110070105B (en) Electroencephalogram emotion recognition method and system based on meta-learning example rapid screening
CN113729707A (en) FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG
Zhu et al. EEG-based emotion recognition using discriminative graph regularized extreme learning machine
Li et al. EEG emotion recognition based on 3-D feature representation and dilated fully convolutional networks
Sharma et al. Audio-video emotional response mapping based upon electrodermal activity
Wang et al. Maximum weight multi-modal information fusion algorithm of electroencephalographs and face images for emotion recognition
Soni et al. Graphical representation learning-based approach for automatic classification of electroencephalogram signals in depression
Zhuang et al. Real-time emotion recognition system with multiple physiological signals
Suchetha et al. Sequential Convolutional Neural Networks for classification of cognitive tasks from EEG signals
Zhang et al. Four-classes human emotion recognition via entropy characteristic and random Forest
Immanuel et al. Recognition of emotion with deep learning using EEG signals-the next big wave for stress management in this covid-19 outbreak
Yang et al. Stochastic weight averaging enhanced temporal convolution network for EEG-based emotion recognition
Wang et al. EEG-based emotion identification using 1-D deep residual shrinkage network with microstate features
Saha et al. Automatic emotion recognition from multi-band EEG data based on a deep learning scheme with effective channel attention
Weerasinghe et al. Emotional stress classification using spiking neural networks.
Liu et al. EEG-based emotion estimation using adaptive tracking of discriminative frequency components
KR102646257B1 (en) Deep Learning Method and Apparatus for Emotion Recognition based on Efficient Multimodal Feature Groups and Model Selection
Lu Human emotion recognition based on multi-channel EEG signals using LSTM neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant