CN114343674B - Combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method - Google Patents


Info

Publication number
CN114343674B
CN114343674B (application CN202111578215.8A)
Authority
CN
China
Prior art keywords
matrix
formula
electroencephalogram
emotion
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111578215.8A
Other languages
Chinese (zh)
Other versions
CN114343674A (en)
Inventor
彭勇
李幸
张怿恺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN202111578215.8A priority Critical patent/CN114343674B/en
Publication of CN114343674A publication Critical patent/CN114343674A/en
Application granted granted Critical
Publication of CN114343674B publication Critical patent/CN114343674B/en
Legal status: Active

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a method for combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition. The method comprises the following steps. Subjects are guided to watch videos with pronounced emotional tendencies while their electroencephalogram data are acquired. The acquired electroencephalogram data are preprocessed and features are extracted to generate sample matrices. A combined discriminant-subspace and semi-supervised learning model is constructed: projecting the sample matrix into a new feature space reduces the intra-class scatter of the electroencephalogram data and increases the inter-class scatter, and unlabeled samples are given pseudo labels and added to model training for semi-supervised learning. According to the objective function, joint optimization is realized by fixing two variables and updating the third according to its update rule, and the discriminant subspace is refined by continual iteration to improve emotion recognition accuracy. By studying the physical meaning of the combined projection matrix, the electroencephalogram activation pattern in emotion recognition is obtained, together with the key frequency bands and leads related to the emotion effect.

Description

Combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method
Technical Field
The invention belongs to the technical field of electroencephalogram signal processing, and particularly relates to a combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method.
Background
Emotion is an adaptive physiological expression generated when people are stimulated by the external environment in daily life and work, and serves the functions of information transmission and behavior regulation. According to the definition in the Dictionary of Psychology, emotion is the state and experience generated after human beings compare objective things with their own needs. Emotion reflects a person's current physiological and psychological state and has an important influence on cognition, communication and decision making. From the artificial intelligence point of view, the generation of emotion is accompanied by changes in individual expression and psychological response, so it can be measured and modeled by scientific methods. Enabling a machine to automatically and accurately identify a person's emotional state, and thereby realize affective human-computer interaction, is a research hotspot in information science, psychology, cognitive neuroscience and related fields.
Electroencephalogram signals are non-stationary, so the distributions of the acquired raw electroencephalogram data are often inconsistent. To obtain a stable emotion recognition pattern in a machine learning model, the data can be projected into a discriminant subspace, which increases the inter-class scatter of the electroencephalogram data and reduces the intra-class scatter. The better discriminative pattern thus obtained improves the recognition accuracy of the model and ensures the reliability of affective human-computer interaction.
Disclosure of Invention
The invention aims to provide a combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method. The method jointly and iteratively optimizes the matrix A that projects the original data into a discriminant subspace, the projection matrix B that connects the subspace with the label matrix, and the pseudo-label matrix Y_u of the unlabeled samples. Continual iteration refines the discriminant subspace, yielding a better classification effect and higher emotion recognition accuracy, and the resulting projection matrices A and B reveal the key frequency bands and brain regions related to the occurrence of the emotion effect.
The specific steps of the invention are as follows:
step 1, acquiring brain electrical data of a tested person in c different emotion states.
Step 2, preprocessing the electroencephalogram data acquired in step 1 and extracting features, wherein each sample matrix X consists of the electroencephalogram features of one subject, and the label vector y holds the emotion labels corresponding to the features in X; two different sample matrices are selected and used as the labeled data and the unlabeled data, respectively.
And step 3, constructing the machine learning model of the combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method, integrating the discriminant-subspace electroencephalogram data obtained by the projection-matrix mapping and the semi-supervised emotion recognition model into a unified framework to obtain a jointly optimized objective function.
3-1. Establish the objective function in the matrix factorization form AB, as shown in formula (1):

    min_{A,B,Y_u}  ||X^T A B − Y||_F^2 + λ ||A B||_{2,1}
    s.t.  A^T A = I,  Y_u ≥ 0,  Y_u 1 = 1                                (1)

In formula (1), X ∈ R^{d×n} is the input sample matrix; the projection matrix A ∈ R^{d×k} projects the raw data into a better discriminant subspace; the projection matrix B ∈ R^{k×c} connects the data in the discriminant subspace with the label information; Y = [Y_l; Y_u] ∈ R^{n×c} is the label matrix, where n = l + u, l is the number of labeled samples and u the number of unlabeled samples, and Y_u ∈ R^{u×c} is the pseudo-label matrix of the unlabeled samples; ||·||_{2,1} denotes the l_{2,1} norm and ||·||_F^2 the squared Frobenius (l_2) norm; λ is the regularization parameter.

3-2. Further rewrite objective function formula (1) as shown in formula (2):

    min_{A,B,Y_u}  tr(B^T A^T (X X^T + λD) A B) − 2 tr(Y^T X^T A B) + tr(Y^T Y)   (2)

In formula (2), tr(·) is the trace of a matrix; D ∈ R^{d×d} is a diagonal matrix in which the i-th diagonal element is

    d_ii = 1 / (2 ||G^i||_2)                                             (3)

In formula (3), G^i denotes the i-th row of the matrix G = AB, and ||·||_2 denotes the l_2 norm.
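As a concrete illustration (not part of the patent text), the objective of formula (1) and the diagonal matrix D of formula (3) can be evaluated with a few lines of NumPy; the function names and the small `eps` safeguard against zero rows are my own assumptions:

```python
import numpy as np

def objective(X, A, B, Y, lam):
    """Value of formula (1): ||X^T A B - Y||_F^2 + lam * ||A B||_{2,1}.

    X: (d, n) sample matrix, A: (d, k) subspace projection,
    B: (k, c) connection matrix, Y: (n, c) label/pseudo-label matrix.
    """
    G = A @ B                                    # (d, c)
    fit = np.linalg.norm(X.T @ A @ B - Y) ** 2   # squared Frobenius norm
    l21 = np.sum(np.linalg.norm(G, axis=1))      # sum of row-wise l2 norms
    return fit + lam * l21

def d_matrix(A, B, eps=1e-8):
    """Diagonal matrix D of formula (3): d_ii = 1 / (2 ||G^i||_2)."""
    row_norms = np.linalg.norm(A @ B, axis=1)
    return np.diag(1.0 / (2.0 * np.maximum(row_norms, eps)))
```

With A and B set to zero the l_{2,1} term vanishes and the objective reduces to ||Y||_F^2, which is a quick sanity check on the implementation.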
And step 4, first initializing the pseudo labels Y_u and the variable D; then, according to the jointly optimized objective function obtained in step 3, deriving the update rule of each variable by fixing the other two variables and updating it alone; optimizing the pseudo-label matrix Y_u, the projection matrix A and the connection matrix B of the objective function in turn, and repeating this optimization process several times to realize joint iterative optimization.
And step 5, inputting the sample matrix X obtained in step 2 into the objective function iteratively optimized in step 4 to obtain the corresponding predicted labels; each predicted label is the emotional state of the subject at the acquisition time of that sample, and the obtained pseudo labels are added to the training process of the model to realize semi-supervised learning.
Preferably, in step 2, a video induction method is used: the subjects watch different types of movie clips to induce different emotional states, and the emotion categories include happiness, sadness, neutrality, fear and disgust.
Preferably, the specific process of determining Y_u, A and B in step 4 is as follows:

4-1. Update Y_u by fixing A and B. Letting P = X^T A B, formula (2) is rewritten as formula (4):

    min_{Y_u}  ||P_u − Y_u||_F^2    s.t.  Y_u ≥ 0,  Y_u 1 = 1            (4)

where P_u consists of the rows of P corresponding to the unlabeled samples. Y_u is solved row by row: letting c_i be the i-th row of P_u and y_i the i-th row of Y_u, formula (4) may be represented as formula (5):

    min_{y_i}  ||y_i − c_i||_2^2    s.t.  y_i 1 = 1,  y_i ≥ 0            (5)

According to formula (5) (in the following solution process c_i and y_i denote the transposes of these rows, i.e. column vectors), the Lagrange multiplier method gives formula (6):

    L(y_i, η, β) = ||y_i − c_i||_2^2 − η (y_i^T 1 − 1) − β^T y_i          (6)

where η and β are the Lagrange multipliers corresponding to the constraint terms y_i^T 1 = 1 and y_i ≥ 0 in formula (5), respectively; the optimal solution of y_i can be obtained from formula (6) using the KKT conditions.

4-2. Update B by fixing A and Y_u; formula (2) is rewritten as formula (7):

    min_B  tr(B^T A^T (X X^T + λD) A B) − 2 tr(Y^T X^T A B)              (7)

Taking the derivative of formula (7) with respect to the matrix B and setting it to 0, the update rule of the matrix B is obtained as formula (8):

    B = (A^T (X X^T + λD) A)^{-1} A^T X Y                                 (8)

4-3. Update A by fixing Y_u and B. Substituting formula (8) into formula (2) yields formula (9):

    max_A  tr((A^T (X X^T + λD) A)^{-1} A^T X Y Y^T X^T A)               (9)

Let S_t = X X^T and S_b = X Y Y^T X^T, where S_t and S_b play the roles of the intra-class scatter and inter-class scatter in linear discriminant analysis, respectively. The update rule of the variable A can then be expressed as formula (10):

    A ← the eigenvectors of (S_t + λD)^{-1} S_b corresponding to its k largest eigenvalues   (10)
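The alternating scheme of steps 4-1 to 4-3 can be sketched as a loop. This is a simplified illustration under assumed details (uniform pseudo-label initialization, random orthonormal initialization of A, and the standard sort-based simplex projection as the KKT solution of the row subproblem); all function names are illustrative, not from the patent:

```python
import numpy as np

def project_simplex(c):
    """Euclidean projection of vector c onto the probability simplex
    (the KKT solution of the row subproblem), via the sort-based method."""
    u = np.sort(c)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(c)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(c + theta, 0.0)

def fit(X, Y_l, u, k, lam=0.1, n_iter=30, eps=1e-8):
    """Alternating optimization of A, B and Y_u.
    X: (d, l+u) features, labeled columns first; Y_l: (l, c) one-hot labels."""
    d, n = X.shape
    l, c = Y_l.shape
    Y_u = np.full((u, c), 1.0 / c)               # uniform pseudo-label init
    A = np.linalg.qr(np.random.randn(d, k))[0]   # random orthonormal init
    D = np.eye(d)                                # initialize variable D
    for _ in range(n_iter):
        Y = np.vstack([Y_l, Y_u])
        # B update: closed form B = (A^T(XX^T + lam*D)A)^{-1} A^T X Y
        M = A.T @ (X @ X.T + lam * D) @ A
        B = np.linalg.solve(M, A.T @ X @ Y)
        # Y_u update: project each predicted unlabeled row onto the simplex
        P = (X.T @ A @ B)[l:]
        Y_u = np.vstack([project_simplex(p) for p in P])
        # A update: top-k eigenvectors of (S_t + lam*D)^{-1} S_b
        Y = np.vstack([Y_l, Y_u])
        St = X @ X.T
        Sb = X @ Y @ Y.T @ X.T
        w, V = np.linalg.eig(np.linalg.solve(St + lam * D, Sb))
        A = np.real(V[:, np.argsort(-np.real(w))[:k]])
        # refresh D from the rows of G = AB
        G = A @ B
        D = np.diag(1.0 / (2.0 * np.maximum(np.linalg.norm(G, axis=1), eps)))
    return A, B, Y_u
```

In use, X would hold the differential entropy features (labeled columns first) and Y_l the one-hot emotion labels; the argmax over the rows of Y_u then yields the predicted emotional states of the unlabeled samples.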
Preferably, the preprocessing in step 2 is as follows:
2-1. Downsample the electroencephalogram data to 200 Hz and band-pass filter them to the 1-50 Hz range; according to the 5-band method, divide the data into the five frequency bands Delta, Theta, Alpha, Beta and Gamma.
2-2, respectively performing short-time Fourier transform with a time window of 4 seconds and no overlapping on the electroencephalogram data of the 5 frequency bands, and extracting a differential entropy characteristic h (X) as shown in a formula (11):
    h(X) = −∫ f(x) ln f(x) dx                                            (11)

In formula (11), X is the input sample matrix and x is an element of the input sample matrix; f(x) is the probability density function.

For electroencephalogram data obeying the Gaussian distribution N(μ, σ²), the differential entropy feature h(X) simplifies to formula (12):

    h(X) = (1/2) ln(2π e σ²)                                             (12)

In formula (12), σ is the standard deviation of the probability density function and μ is its expectation.
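Under this Gaussian assumption, the differential entropy of each non-overlapping 4-second window reduces to a function of the window variance. A minimal NumPy sketch follows; it assumes the signal of one lead has already been band-pass filtered (e.g. with `scipy.signal`), and the function names are illustrative:

```python
import numpy as np

def differential_entropy(windowed):
    """DE per window under the Gaussian assumption:
    h = 0.5 * ln(2*pi*e*sigma^2), with sigma^2 the sample variance."""
    var = windowed.var(axis=-1)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

def extract_de(signal, fs=200, win_sec=4):
    """Split one band-filtered lead into non-overlapping 4-s windows
    (at the 200 Hz sampling rate) and return one DE value per window."""
    step = fs * win_sec
    n_win = len(signal) // step
    windows = signal[: n_win * step].reshape(n_win, step)
    return differential_entropy(windows)
```

Applying `extract_de` to each of the 5 bands of each lead yields the feature dimensions that make up the sample matrix X.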
Preferably, 17 leads are adopted for the electroencephalogram data acquisition, and 5 frequency bands are selected; the 5 frequency bands are respectively 1-4Hz, 4-8Hz, 8-14Hz, 14-31Hz and 31-50Hz.
Preferably, the projection matrices A and B in step 4 are used to explore the activation pattern in emotion recognition. Let G = AB and θ_i = ||g^i||_2 / Σ_j ||g^j||_2, where g^i is the i-th row of G; θ_i represents the importance of the i-th feature dimension. Since the importance of each feature dimension can be measured by its normalized l_2 norm, the key frequency bands and brain regions of electroencephalogram emotion recognition can be obtained from the resulting θ.
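The importance score θ can be computed directly from the rows of G = AB. A short sketch, with an illustrative function name:

```python
import numpy as np

def activation_importance(A, B):
    """theta_i = ||g^i||_2 / sum_j ||g^j||_2 for the rows g^i of G = A B.
    A larger theta_i marks the i-th (band, lead) feature dimension as
    more important for emotion recognition."""
    row_norms = np.linalg.norm(A @ B, axis=1)
    return row_norms / row_norms.sum()
```

If the d feature dimensions are ordered band-by-band, reshaping θ — for instance to (5, n_leads) — lets the key frequency bands and leads be read off as the rows and columns with the largest mass.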
The invention has the beneficial effects that:
1. The combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method projects the electroencephalogram data into a subspace to obtain a better classification surface; because electroencephalogram data are non-stationary, the classification effect in the original data space is not ideal.
2. The invention is a semi-supervised learning method that can exploit unlabeled sample data for training: the labeled samples are first used to train the model, the trained model then assigns pseudo labels to the unlabeled samples, and the pseudo-labeled samples are in turn added to model training.
3. The electroencephalogram data are acquired through multi-electrode caps, the sample data are influenced by the experimental time and the lead positions, and each lead contributes feature dimensions. Through the calculation of the projection matrices A and B, the frequency bands and leads that are more favorable for model training can be obtained, and from this implicit information learned by the model, the key frequency bands and brain regions connected with the emotion effect are derived.
Drawings
FIG. 1 is a diagram of a model framework of the present invention;
FIG. 2 is a diagram of critical bands of the present invention;
Fig. 3 is a diagram of key leads of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
The invention addresses the important problem that classification accuracy in the original space of electroencephalogram emotion recognition data sets is low, based on the following starting point: the electroencephalogram signal is non-stationary, and although preprocessed data can yield a stable emotion recognition pattern, the sample differences between labels within a data set are small, so the classification performance of the resulting model is weak. By projecting the samples into a discriminant subspace, a good classification surface can be obtained, making the intra-class scatter of the data small and the inter-class scatter large, and thereby yielding a robust model. Learning in a subspace projection of the data set is therefore of great significance for improving the accuracy of emotion recognition.
As shown in fig. 1, a method for jointly distinguishing subspace mining and semi-supervised electroencephalogram emotion recognition specifically comprises the following steps:
and step 1, acquiring brain electrical data.
The emotion induction method adopted in the experiment is induction by stimulus material: a subject watches stimulus material so that the corresponding emotional state is induced in the subject. A person's emotions are rarely very strong under everyday conditions, so in order to acquire strong emotion information the subject needs to be induced to some extent. Five film clips with obvious emotional tendencies are selected and played to the subject at different times; while the subject watches each film, the leads of an electroencephalogram cap attached to the corresponding brain regions record the subject's electroencephalogram data as the raw emotional electroencephalogram data set.
Electroencephalogram data are acquired M times from each of N subjects under the same emotion-inducing clips, giving N × M groups of electroencephalogram data. Each group has size d × n, where d is the feature dimension and n is the number of time-indexed electroencephalogram samples obtained in a single acquisition; a group contains the electroencephalogram data of several category labels obtained in that acquisition. Each group of data serves as a sample matrix X, and each sample matrix X corresponds to one label vector y holding the subject's emotion categories.
To study the stability of emotion recognition and to ensure the effectiveness of the stimuli, each subject was required to take part in 3 experimental sessions at least three days apart. In each session the subject watched 15 pieces of stimulus material covering the 5 emotion types. To keep the stimulation effective and prevent boredom, the clips watched in each session were completely different, and the total viewing time per session was controlled at about 50 minutes. In each trial the participant watched one film clip while the electroencephalogram signal was collected with a 62-lead ESI NeuroScan system.
Before each piece of stimulus material was played, 15 seconds were used to introduce the background of the material and the emotion it was intended to induce. After the material was played there was a self-evaluation and rest period of 15 or 30 seconds, depending on the type of material: 30 seconds if the material type was disgust or fear, and 15 seconds for happiness, neutrality and sadness.
And step 2, preprocessing all the electroencephalogram data obtained in step 1 and extracting features. The invention extracts differential entropy features over 62 leads and 5 frequency bands (Delta (1-4 Hz), Theta (4-8 Hz), Alpha (8-14 Hz), Beta (14-31 Hz) and Gamma (31-50 Hz)). In practical applications the number of leads depends on the electroencephalogram cap worn by the subject during acquisition; the frequency bands follow the physiologically meaningful 5-band division; and the most common electroencephalogram features are power spectral density and differential entropy. A person's electroencephalogram signal is very weak and therefore easily disturbed, so the acquired raw recordings can hardly be used in experiments directly; this motivates the following preprocessing.
The preprocessing process is as follows:
2-1. Downsample the electroencephalogram data to 200 Hz and band-pass filter them to the 1-50 Hz range. According to the 5-band method, divide the data into the five frequency bands Delta, Theta, Alpha, Beta and Gamma.
2-2. Take the electroencephalogram data of each of the 5 frequency bands as a sample matrix, apply a short-time Fourier transform with a non-overlapping 4-second time window, and extract the differential entropy features. The differential entropy feature h(X) is defined as:

    h(X) = −∫ f(x) ln f(x) dx                                            (13)

In formula (13), X is the input sample matrix (i.e., the electroencephalogram data of one frequency band) and x is an element of the input sample matrix; f(x) is the probability density function. For a sample matrix X that follows the Gaussian distribution N(μ, σ²), its differential entropy feature h(X) can be calculated as shown in formula (14):

    h(X) = (1/2) ln(2π e σ²)                                             (14)

In formula (14), σ is the standard deviation of the probability density function and μ is its expectation.
It can be seen that the differential entropy feature is essentially the logarithmic form of the power spectral density feature: for a band-limited Gaussian signal the variance σ² equals the band power, so h(X) = (1/2) ln(2π e σ²) differs from (1/2) ln(PSD) only by a constant. The purpose of preprocessing the electroencephalogram signals is to improve the signal-to-noise ratio, thereby improving the quality of the data and reducing interference.
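The equivalence between the integral definition of formula (13) and the Gaussian closed form of formula (14) can be checked numerically, here for μ = 1 and σ = 2:

```python
import numpy as np

# Numerically verify formula (14): for f = N(mu, sigma^2), the integral
# h = -Int f(x) ln f(x) dx of formula (13) equals 0.5*ln(2*pi*e*sigma^2).
mu, sigma = 1.0, 2.0
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
dx = x[1] - x[0]
f = np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
h_numeric = -np.sum(f * np.log(f)) * dx        # Riemann-sum approximation
h_closed = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
print(h_numeric, h_closed)                     # both ~2.112
```

Note that the result depends only on σ, not on μ, which is why the differential entropy of a band reduces to a log-variance (log-power) feature.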
And step 3, constructing the machine learning model of combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition, integrating the discriminant subspace obtained by the mapping of projection matrix A and the semi-supervised learning model into a unified framework to obtain a jointly optimized objective function.
3-1. Establish the objective function in the matrix factorization form AB, as shown in formula (15):

    min_{A,B,Y_u}  ||X^T A B − Y||_F^2 + λ ||A B||_{2,1}
    s.t.  A^T A = I,  Y_u ≥ 0,  Y_u 1 = 1                                (15)

In formula (15), X ∈ R^{d×n} is the input sample matrix; the projection matrix A ∈ R^{d×k} projects the raw data into a better discriminant subspace; the projection matrix B ∈ R^{k×c} connects the data in the discriminant subspace with the label information; Y = [Y_l; Y_u] ∈ R^{n×c} is the label matrix, where n = l + u, l is the number of labeled samples and u the number of unlabeled samples, and Y_u ∈ R^{u×c} is the pseudo-label matrix of the unlabeled samples; ||·||_{2,1} denotes the l_{2,1} norm and ||·||_F^2 the squared Frobenius (l_2) norm; λ is the regularization parameter.

3-2. Further rewrite objective function formula (15) as shown in formula (16):

    min_{A,B,Y_u}  tr(B^T A^T (X X^T + λD) A B) − 2 tr(Y^T X^T A B) + tr(Y^T Y)   (16)

In formula (16), tr(·) is the trace of a matrix; D ∈ R^{d×d} is a diagonal matrix whose i-th diagonal element is

    d_ii = 1 / (2 ||G^i||_2)                                             (17)

In formula (17), G^i denotes the i-th row of the matrix G = AB, and ||·||_2 denotes the l_2 norm.
Step 4, first initializing the pseudo labels Y_u and the variable D; then, according to the jointly optimized objective function obtained in step 3, deriving the update rule of each variable by fixing the other two variables and updating it alone; optimizing the pseudo-label matrix Y_u, the projection matrix A and the connection matrix B of the objective function in turn, and repeating this optimization process several times to realize joint iterative optimization.
4-1. Update Y_u by fixing A and B. Letting P = X^T A B, formula (16) is rewritten as formula (18):

    min_{Y_u}  ||P_u − Y_u||_F^2    s.t.  Y_u ≥ 0,  Y_u 1 = 1            (18)

where P_u consists of the rows of P corresponding to the unlabeled samples. Y_u is solved row by row: letting c_i be the i-th row of P_u and y_i the i-th row of Y_u, formula (18) may be represented as formula (19):

    min_{y_i}  ||y_i − c_i||_2^2    s.t.  y_i 1 = 1,  y_i ≥ 0            (19)

According to formula (19) (in the following solution process c_i and y_i denote the transposes of these rows, i.e. column vectors), the Lagrange multiplier method gives formula (20):

    L(y_i, η, β) = ||y_i − c_i||_2^2 − η (y_i^T 1 − 1) − β^T y_i          (20)

where η and β are the Lagrange multipliers corresponding to the constraint terms y_i^T 1 = 1 and y_i ≥ 0 in formula (19), respectively; the optimal solution of y_i is obtained from formula (20) using the KKT conditions.

4-2. Update B by fixing A and Y_u; formula (16) is rewritten as formula (21):

    min_B  tr(B^T A^T (X X^T + λD) A B) − 2 tr(Y^T X^T A B)              (21)

Taking the derivative of formula (21) with respect to the matrix B and setting it to 0, the update rule of B is obtained as formula (22):

    B = (A^T (X X^T + λD) A)^{-1} A^T X Y                                 (22)

4-3. Update A by fixing Y_u and B. Substituting formula (22) into formula (16) yields formula (23):

    max_A  tr((A^T (X X^T + λD) A)^{-1} A^T X Y Y^T X^T A)               (23)

Let S_t = X X^T and S_b = X Y Y^T X^T, where S_t and S_b play the roles of the intra-class scatter and inter-class scatter in linear discriminant analysis, respectively; the update rule of the variable A can then be expressed as follows: A is formed from the eigenvectors of (S_t + λD)^{-1} S_b corresponding to its k largest eigenvalues.

Let G = AB and θ_i = ||g^i||_2 / Σ_j ||g^j||_2, where g^i denotes the i-th row of G; θ_i represents the importance of the i-th feature dimension. Since the importance of each feature dimension can be measured by its normalized l_2 norm, the key frequency bands and brain regions of electroencephalogram emotion recognition can be obtained from θ.
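The closed-form update of formula (22) can be verified numerically: at the B given by formula (22), the gradient of the objective of formula (21) with respect to B vanishes. The sketch below uses randomly generated matrices of small illustrative sizes:

```python
import numpy as np

# Check formula (22): at B* = (A^T (X X^T + lam*D) A)^{-1} A^T X Y, the
# gradient of tr(B^T A^T (XX^T + lam*D) A B) - 2 tr(Y^T X^T A B) w.r.t. B
# is 2*M*B - 2*A^T*X*Y, which should vanish.
rng = np.random.default_rng(0)
d, n, k, c, lam = 6, 20, 3, 4, 0.5
X = rng.standard_normal((d, n))
A = np.linalg.qr(rng.standard_normal((d, k)))[0]   # orthonormal columns
Y = rng.standard_normal((n, c))
D = np.diag(rng.uniform(0.1, 1.0, d))              # positive diagonal D
M = A.T @ (X @ X.T + lam * D) @ A
B = np.linalg.solve(M, A.T @ X @ Y)                # formula (22)
grad = 2 * M @ B - 2 * A.T @ X @ Y                 # gradient at B*
print(np.abs(grad).max())                          # ~0 up to floating point
```

The same kind of check can be applied after every iteration of step 4 as a cheap correctness test of an implementation.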
And step 5, inputting the sample matrix X obtained in step 2 into the objective function iteratively optimized in step 4 to obtain the corresponding predicted labels; each predicted label is the emotional state of the subject at the acquisition time of that sample, and the obtained pseudo labels are added to the training process of the model to realize semi-supervised learning.
Compared with the currently popular semi-supervised emotion recognition model Rescaled Least Square Regression (RLSR), a semi-supervised method that correlates the electroencephalogram data with the labels directly through a single mapping matrix, the embodiment achieves higher recognition accuracy.

Claims (3)

1. A combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method is characterized in that:
step 1, acquiring brain electrical data of a tested person in c different emotion states;
step 2, preprocessing the electroencephalogram data acquired in step 1 and extracting features, wherein each sample matrix X consists of the electroencephalogram features of one subject, and the label vector y holds the emotion labels corresponding to the features in X; selecting two different sample matrices as the labeled data and the unlabeled data, respectively;
Step 3, constructing a machine learning model of a combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method, and integrating discrimination subspace electroencephalogram data obtained by mapping a projection matrix and the semi-supervised emotion recognition model into a unified framework to obtain a combined optimized objective function;
3-1, establishing an objective function in the matrix factorization form AB, as shown in formula (1):

    min_{A,B,Y_u}  ||X^T A B − Y||_F^2 + λ ||A B||_{2,1}
    s.t.  A^T A = I,  Y_u ≥ 0,  Y_u 1 = 1                                (1)

in formula (1), X ∈ R^{d×n} is the input sample matrix; the projection matrix A ∈ R^{d×k} projects the raw data into a better discriminant subspace; the projection matrix B ∈ R^{k×c} connects the data in the discriminant subspace with the label information; Y = [Y_l; Y_u] ∈ R^{n×c} is the label matrix, where n = l + u, l is the number of labeled samples and u the number of unlabeled samples, and Y_u ∈ R^{u×c} is the pseudo-label matrix of the unlabeled samples; ||·||_{2,1} denotes the l_{2,1} norm and ||·||_F^2 the squared Frobenius (l_2) norm; λ is the regularization parameter;

3-2, further rewriting objective function formula (1) as shown in formula (2):

    min_{A,B,Y_u}  tr(B^T A^T (X X^T + λD) A B) − 2 tr(Y^T X^T A B) + tr(Y^T Y)   (2)

in formula (2), tr(·) is the trace of a matrix; D ∈ R^{d×d} is a diagonal matrix in which the i-th diagonal element is

    d_ii = 1 / (2 ||G^i||_2)                                             (3)

in formula (3), G^i denotes the i-th row of the matrix G = AB, and ||·||_2 denotes the l_2 norm;
step 4, first initializing the pseudo labels Y_u and the variable D; then, according to the jointly optimized objective function obtained in step 3, deriving the update rule of each variable by fixing the other two variables and updating it alone; optimizing the pseudo-label matrix Y_u, the projection matrix A and the connection matrix B of the objective function in turn, and repeating this optimization process several times to realize joint iterative optimization;
4-1, updating Y_u by fixing A and B: letting P = X^T A B, formula (2) is rewritten as formula (4):

    min_{Y_u}  ||P_u − Y_u||_F^2    s.t.  Y_u ≥ 0,  Y_u 1 = 1            (4)

wherein P_u consists of the rows of P corresponding to the unlabeled samples; Y_u is solved row by row: letting c_i be the i-th row of P_u and y_i the i-th row of Y_u, formula (4) is represented as formula (5):

    min_{y_i}  ||y_i − c_i||_2^2    s.t.  y_i 1 = 1,  y_i ≥ 0            (5)

according to formula (5), with c_i and y_i denoting the transposes of these rows in the following solution process, the Lagrange multiplier method gives formula (6):

    L(y_i, η, β) = ||y_i − c_i||_2^2 − η (y_i^T 1 − 1) − β^T y_i          (6)

wherein η and β are the Lagrange multipliers corresponding to the constraint terms y_i^T 1 = 1 and y_i ≥ 0 in formula (5), respectively; applying the KKT conditions to formula (6) gives the optimal solution of y_i as formula (7):

    y_i = max(c_i + (η/2) 1, 0),  with η chosen such that y_i^T 1 = 1     (7)

4-2, updating B by fixing A and Y_u: formula (2) is rewritten as shown in formula (8):

    min_B  tr(B^T A^T (X X^T + λD) A B) − 2 tr(Y^T X^T A B)              (8)

taking the derivative of formula (8) with respect to the matrix B and setting it to 0, the update rule of the matrix B is obtained as formula (9):

    B = (A^T (X X^T + λD) A)^{-1} A^T X Y                                 (9)

4-3, updating A by fixing Y_u and B: substituting formula (9) into formula (2) yields formula (10):

    max_A  tr((A^T (X X^T + λD) A)^{-1} A^T X Y Y^T X^T A)               (10)

letting S_t = X X^T and S_b = X Y Y^T X^T, where S_t and S_b play the roles of the intra-class scatter and inter-class scatter in linear discriminant analysis, respectively, the update rule of the variable A is expressed as formula (11):

    A ← the eigenvectors of (S_t + λD)^{-1} S_b corresponding to its k largest eigenvalues   (11)

the projection matrices A and B are used to explore the activation pattern in emotion recognition: letting G = AB and

    θ_i = ||g^i||_2 / Σ_j ||g^j||_2

wherein g^i denotes the i-th row of the matrix G, θ_i represents the importance of the i-th feature dimension; since the importance of each feature dimension is measured by its normalized l_2 norm, the key frequency bands and brain regions of electroencephalogram emotion recognition are obtained from the resulting θ;
and step 5, inputting the sample matrix X obtained in step 2 into the objective function iteratively optimized in step 4 to obtain the corresponding predicted labels, wherein each predicted label is the emotional state of the subject at the acquisition time of that sample; the obtained pseudo labels are added to the training process of the model to realize semi-supervised learning.
2. The method for combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition according to claim 1, wherein the emotion categories comprise: happiness, sadness, neutrality, fear, disgust.
3. The method for jointly distinguishing subspace mining and semi-supervised electroencephalogram emotion recognition according to claim 1, wherein the method is characterized in that: the pre-processing procedure in step 2 comprises the following sub-steps:
2-1, downsampling the electroencephalogram data to 200 Hz, and band-pass filtering them to the 1-50 Hz range; dividing the data, according to the 5-band method, into the five frequency bands Delta, Theta, Alpha, Beta and Gamma;
2-2, respectively performing short-time Fourier transform with a time window of 4 seconds and no overlapping on the electroencephalogram data of the 5 frequency bands, and extracting a differential entropy characteristic h (X) as shown in a formula (13):
h(X)=-∫xf(x)ln(f(x))dx (13)
in the formula (13), X is an input sample matrix, and X is an element in the input sample matrix; f (x) is a probability density function;
for electroencephalogram data obeying the Gaussian distribution N(μ, σ²), the differential entropy feature h(X) simplifies to formula (14):

    h(X) = (1/2) ln(2π e σ²)                                             (14)

in formula (14), σ is the standard deviation of the probability density function; μ is the expectation of the probability density function.
CN202111578215.8A 2021-12-22 2021-12-22 Combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method Active CN114343674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111578215.8A CN114343674B (en) 2021-12-22 2021-12-22 Combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method


Publications (2)

Publication Number Publication Date
CN114343674A CN114343674A (en) 2022-04-15
CN114343674B true CN114343674B (en) 2024-05-03

Family

ID=81101572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111578215.8A Active CN114343674B (en) 2021-12-22 2021-12-22 Combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method

Country Status (1)

Country Link
CN (1) CN114343674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841214B (en) * 2022-05-18 2023-06-02 杭州电子科技大学 Pulse data classification method and device based on semi-supervised discrimination projection

Citations (2)

Publication number Priority date Publication date Assignee Title
CN108009571A (en) * 2017-11-16 2018-05-08 苏州大学 A kind of semi-supervised data classification method of new direct-push and system
CN113157094A (en) * 2021-04-21 2021-07-23 杭州电子科技大学 Electroencephalogram emotion recognition method combining feature migration and graph semi-supervised label propagation

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US11537817B2 (en) * 2018-10-18 2022-12-27 Deepnorth Inc. Semi-supervised person re-identification using multi-view clustering

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN108009571A (en) * 2017-11-16 2018-05-08 苏州大学 A kind of semi-supervised data classification method of new direct-push and system
CN113157094A (en) * 2021-04-21 2021-07-23 杭州电子科技大学 Electroencephalogram emotion recognition method combining feature migration and graph semi-supervised label propagation

Non-Patent Citations (1)

Title
Semi-supervised Feature Selection via Rescaled Linear Regression; Xiaojun Chen et al.; Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17); 2017-08-01; full text *

Also Published As

Publication number Publication date
CN114343674A (en) 2022-04-15

Similar Documents

Publication Publication Date Title
US20230039900A1 (en) Method for realizing a multi-channel convolutional recurrent neural network eeg emotion recognition model using transfer learning
AU2020100027A4 (en) Electroencephalogram-based negative emotion recognition method and system for aggressive behavior prediction
CN110472649B (en) Electroencephalogram emotion classification method and system based on multi-scale analysis and integrated tree model
CN114052735B (en) Deep field self-adaption-based electroencephalogram emotion recognition method and system
CN108056774A (en) Experimental paradigm mood analysis implementation method and its device based on visual transmission material
CN111407243B (en) Pulse signal pressure identification method based on deep learning
CN109871831B (en) Emotion recognition method and system
CN114343674B (en) Combined discrimination subspace mining and semi-supervised electroencephalogram emotion recognition method
An et al. Electroencephalogram emotion recognition based on 3D feature fusion and convolutional autoencoder
CN113157094B (en) Electroencephalogram emotion recognition method combining feature migration and graph semi-supervised label propagation
CN111797747A (en) Potential emotion recognition method based on EEG, BVP and micro-expression
CN109222966A (en) A kind of EEG signals sensibility classification method based on variation self-encoding encoder
CN115770044B (en) Emotion recognition method and device based on electroencephalogram phase amplitude coupling network
CN113180659A (en) Electroencephalogram emotion recognition system based on three-dimensional features and cavity full convolution network
CN113974627A (en) Emotion recognition method based on brain-computer generated confrontation
Xu et al. Lightweight eeg classification model based on eeg-sensor with few channels
Krishna et al. Continuous Silent Speech Recognition using EEG
Wang et al. EEG-based emotion identification using 1-D deep residual shrinkage network with microstate features
Immanuel et al. Analysis of different emotions with bio-signals (EEG) using deep CNN
CN114504331A (en) Mood recognition and classification method fusing CNN and LSTM
CN114186591A (en) Method for improving generalization capability of emotion recognition system
Huynh et al. An investigation of ensemble methods to classify electroencephalogram signaling modes
Li et al. Brain-Inspired Perception Feature and Cognition Model Applied to Safety Patrol Robot
Hagras et al. A biometric system based on single-channel EEG recording in one-second
Wirawan et al. Comparison of Baseline Reduction Methods for Emotion Recognition Based On Electroencephalogram Signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant