CN116701917B - Open set emotion recognition method based on physiological signals - Google Patents


Info

Publication number
CN116701917B
CN116701917B (application CN202310937260.0A)
Authority
CN
China
Prior art keywords
emotion
network
classifier
signals
feature extraction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310937260.0A
Other languages
Chinese (zh)
Other versions
CN116701917A (en)
Inventor
潘桐杰
叶娅兰
周镪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202310937260.0A priority Critical patent/CN116701917B/en
Publication of CN116701917A publication Critical patent/CN116701917A/en
Application granted granted Critical
Publication of CN116701917B publication Critical patent/CN116701917B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an open set emotion recognition method based on physiological signals, belonging to the technical field of emotion type recognition. The method comprises the following steps: collecting original electroencephalogram signals under different discrete emotions, and preprocessing them to obtain training sample signals and their emotion labels; inputting the training sample signals into a feature extraction network to extract signal features and obtain shared features; inputting the shared features into an emotion classification network to obtain predicted emotion types; training the network parameters of the feature extraction network and the emotion classification network based on the prediction results; and obtaining the emotion recognition result of an electroencephalogram signal to be recognized based on the trained networks. The invention gives the recognition model the ability to discover unknown emotions, and can adaptively adjust the open-set decision boundary without prior knowledge of the unknown emotions.

Description

Open set emotion recognition method based on physiological signals
Technical Field
The invention belongs to the technical field of emotion type recognition, and particularly relates to an open set emotion recognition method based on physiological signals.
Background
Discrete emotion recognition (emotion type recognition) has important practical significance in specific scenarios. Although there have been many studies on discrete emotion recognition, the problem of open set recognition (Open Set Recognition, OSR) objectively exists yet has long been neglected. Discrete emotions are of a wide variety and are closely related to a person's subjective cognition: different people, or the same person at different times, may produce different discrete emotions even under the same stimulus. It is therefore difficult to include all possible discrete emotions in the training set in advance, which means the recognition model is prone to encountering new, unknown emotion types during testing and application. Discrete emotion open set recognition aims to identify new emotion types outside the training set as unknown types while still correctly distinguishing the existing emotion types in the training set.
The discrete emotion open set recognition scenario involves two key problems: unknown class identification and threshold setting. Conventional recognition models partition the sample space completely: wherever a sample point falls, the model assigns it to a known class. This excessive generalization makes it impossible for the model to reject a sample from all classes simultaneously, and hence to discover unknown classes. Furthermore, for a given class the number of labeled samples is limited, so the model cannot accurately determine the class's sample boundary; rejecting a sample as an unknown emotion type necessarily involves a threshold decision. However, discrete emotions are diverse and inherently have ambiguous boundaries, so setting the rejection threshold empirically carries extremely high risk in practical applications. These two problems make the discrete emotion open set recognition task extremely difficult.
Some researchers enable models to discover unknown-class samples by setting thresholds or improving the classifier. For example, an open-space risk term was introduced to account for the unknown-class space, which led to the first open set recognition algorithm based on the SVM (support vector machine); a follow-up approach uses two independent SVM classifiers. An open set recognition model based on sparse representation (Sparse Representation, SR) fits the reconstruction-error distribution using extreme value theory (Extreme Value Theory, EVT). The open-set nearest neighbor (Open-Set Nearest Neighbor, OSNN) algorithm sets a threshold with the nearest neighbor distance ratio (Nearest Neighbor Distance Ratio, NNDR) technique and performs open set identification based on the similarity score between the two most similar classes. Yet another approach improves the classifier with a one-vs-rest strategy, proposing the deep open classifier (Deep Open Classifier, DOC), which replaces the normalized exponential function (Softmax) layer.
Other researchers take a different approach: they let the model discover unknown classes by generating pseudo-unknown-class samples during the training phase, for example by synthesizing unknown instances with a conditional generative network, or by directly using a generative adversarial network (Generative Adversarial Networks, GAN) to generate fake samples for training the model to identify unknown classes. Open set recognition by conditional Gaussian distribution learning (Conditional Gaussian Distribution Learning, CGDL) has been proposed based on improvements to the variational auto-encoder (Variational Auto-Encoder, VAE) architecture. The open set recognition placeholder (PROSER) algorithm generates unknown classes by manifold mixup and adaptively sets classifier thresholds during training, achieving good results on open set recognition tasks.
In summary, the above recognition methods cannot fully realize their performance in a discrete emotion recognition scenario based on physiological signals. First, compared with image data, physiological signals are susceptible to environmental noise and have poor discriminability, and conventional methods for extracting handcrafted features from images are not suitable for processing physiological signals. Second, none of the above deep-learning-based discriminative models avoids the threshold setting problem; although this problem is not fatal in image recognition, in emotion recognition tasks the boundary ambiguity of emotion itself exacerbates it, and the rejection threshold should differ between discrete emotions, which further increases the difficulty of setting thresholds manually. Finally, with generative-model-based approaches the quality of generated fake samples cannot be guaranteed, no generative model with good performance on physiological signals is currently recognized, and the labels of generated samples are difficult to determine without prior knowledge of the unknown emotions.
Disclosure of Invention
The invention aims at: aiming at the problems, an open set emotion recognition method based on physiological signals is provided to realize effective recognition of unknown emotion on the basis of guaranteeing the recognition precision of the known emotion.
The invention adopts the technical scheme that:
an open set emotion recognition method based on physiological signals, the method comprising the following steps:
step 1, acquiring original physiological signals (preferably electroencephalogram signals) of a subject under each emotion category based on the set emotion categories, and setting emotion labels corresponding to the original physiological signals;
step 2, carrying out signal preprocessing on the original physiological signals to obtain training sample signals and emotion labels thereof;
step 3, inputting the training sample signal into a feature extraction network to extract signal features, and obtaining shared features of input signals of the feature extraction network;
the feature extraction network comprises two parallel feature extraction branches: one branch extracts the channel information features of the input signal along the channel information dimension, and the other branch extracts the time sequence features of the input signal along the time sequence dimension; the extracted channel information features and time sequence features are flattened and then spliced to obtain the shared features of the input signal;
step 4, inputting the shared features into an emotion classification network, acquiring predicted emotion types, and training network parameters of a feature extraction network and the emotion classification network based on a prediction result;
the emotion classification network comprises n classifiers, n is the emotion category number set in the step 1, each classifier corresponds to one emotion category set in the step 1 and is used for predicting whether a current input signal is the emotion category, each classifier outputs a binary prediction probability of the emotion category corresponding to the classifier, and a prediction result of each classifier is determined based on the maximum value in the binary prediction probability;
constraining the classification boundary of each classifier by using a difficult negative sample class during training, wherein the overall loss of each classifier during training comprises the target loss and the open set boundary loss of each classifier;
when the preset training ending condition is met, the trained feature extraction network and emotion classification network are saved;
step 5, acquiring the emotion recognition result of the physiological signal to be recognized based on the trained feature extraction network and emotion classification network: if the maximum value of the prediction results of the n classifiers is smaller than or equal to a specified threshold, the emotion type of the physiological signal to be recognized is an unknown type; otherwise, the recognition result is determined by the emotion type corresponding to the maximum value.
Further, in step 3, a convolutional neural network is used to extract the channel information features of the input signal, and a long short-term memory network is used to extract its time sequence features.
Further, in step 4, the target loss of the classifier is a binary cross entropy loss.
Further, in step 4, the open set boundary loss of the classifier is obtained as follows:
for any training sample signal x_i currently input to the feature extraction network, the prediction results of the n classifiers are normalized to obtain an n-dimensional vector y, where the subscript i denotes the emotion class number;
denote the emotion label of training sample signal x_i by t (i.e., the true emotion type); after removing from the vector y the prediction result corresponding to the current classifier, the difficult negative sample class h of training sample signal x_i is given by the emotion type corresponding to the maximum value remaining in y;
based on the normalized prediction results ŷ_h corresponding to the difficult negative sample class h of each training sample signal with the same emotion label in the current training batch, the open set boundary loss L_open of the current classifier is calculated;
where ŷ_t denotes the prediction result of the classifier corresponding to the emotion label t, and the minimum value is selected from the ŷ_h of the multiple training sample signals.
Further, in step 5, the specified threshold value when the emotion recognition result of the physiological signal to be recognized is obtained is 0.5.
The technical scheme provided by the invention has at least the following beneficial effects:
according to the invention, the structure of the classification network is changed, so that the identification model has the capacity of finding unknown emotion, and meanwhile, the difficult negative sample classification boundary sampling strategy is adjusted, so that the identification model can adaptively adjust the decision boundary of the open set without prior knowledge of the unknown emotion, and the problem of identifying the unknown emotion in the discrete emotion open set identification scene is effectively solved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a process flow diagram of an open set emotion recognition method based on physiological signals according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the open set emotion recognition method based on physiological signals provided by the embodiment of the invention includes the following steps:
step S1, acquiring original brain electrical signals under different discrete emotions:
acquiring original electroencephalograms under different discrete emotions of a subject by using wearable electroencephalogram acquisition equipment, and setting emotion tags corresponding to the electroencephalograms based on corresponding discrete emotion types;
step S2, carrying out signal preprocessing on the original brain electrical signals to obtain training sample signals and emotion labels thereof:
the acquired original electroencephalogram signals are segmented and denoised to obtain first electroencephalogram signals usable for recognition model training, i.e., a plurality of training sample signals; the sample label of each training sample signal is the emotion label corresponding to its original electroencephalogram signal;
step S3, inputting the training sample signal into a feature extraction network to extract signal features, and obtaining shared features of input signals of the feature extraction network;
inputting the first electroencephalogram signals into a parallel feature extraction network F, and extracting signal features from two dimensions of channel information and a time sequence; the feature extraction network F comprises two parallel feature extraction branches, one branch is used for extracting channel information features of the first electroencephalogram signal from the channel information dimension, and the other branch is used for extracting time sequence features of the first electroencephalogram signal from the time sequence dimension;
in the embodiment of the invention, a convolutional neural network (Convolutional Neural Networks, CNN) is adopted to extract the channel information features, and the output is the extracted channel information feature f_conv: f_conv = CNN(x_i), where CNN(·) denotes the output of the convolutional neural network, x_i denotes any training sample signal, and its corresponding emotion label is denoted y_i, the subscript i denoting the emotion type number.
The time series features are extracted with a long short-term memory network (Long Short-Term Memory, LSTM), and the output is the extracted time series feature f_lstm: f_lstm = LSTM(x_i), where LSTM(·) denotes the output of the long short-term memory network.
Then, the channel information feature f_conv and the time series feature f_lstm output by the feature extraction network F are flattened and spliced into the finally extracted shared feature f, to be input to the emotion classification network designed on the one-vs-rest strategy, namely: f = concat(flatten(f_conv), flatten(f_lstm)).
S4, inputting the shared features into an emotion classification network, acquiring predicted emotion categories, and training network parameters of the feature extraction network and the emotion classification network based on the prediction results to obtain a trained feature extraction network and an emotion classification network;
in the embodiment of the invention, the emotion classification network comprises n classifiers (each classifier is denoted C_i, with emotion category number i ∈ {1, 2, …, n}), where n is the number of collected discrete emotion categories. Each classifier takes the shared feature as input, distinguishes one known class, and outputs the prediction result for the emotion class corresponding to that classifier.
The idea of difficult negative sample classification-boundary sampling is introduced into the training process: for each classifier C_i, the classification boundary is further constrained using the most similar known class (referred to as the difficult negative class).
That is, when the feature extraction network F and the emotion classification network are trained on network parameters, the overall loss of each classifier includes the target loss and the open set boundary loss of each classifier.
When the preset training end condition is met (the number of training iterations reaches its maximum, or the training loss converges), the trained feature extraction network F and emotion classification network are saved.
Step 5, acquiring the emotion recognition result of the electroencephalogram signal to be recognized based on the trained feature extraction network and emotion classification network: if the maximum value of the prediction results of all classifiers is smaller than or equal to a specified threshold (preferably 0.5), the emotion type of the signal to be recognized is an unknown type; otherwise, the recognition result is determined by the emotion type corresponding to the maximum value.
As a possible preferred mode, the open set emotion recognition method based on physiological signals provided by the embodiment of the invention adopts a network structure comprising two parts: the feature extraction network F and the one-vs-rest emotion classification network. The emotion types involved mainly include: happy, sad, calm, etc.
In the discrete emotion open set recognition scenario, all discrete emotion categories contained in the training set and the test set form the complete set U_all; the classes contained in the training set U_train are all known classes. In this embodiment, the test set is taken as U_test = U_all, and the classes that appear only in the test set are the unknown classes.
In this embodiment, a convolutional neural network (Convolutional Neural Network, CNN) is used to extract the inter-channel information feature f_conv of the input signal, and a long short-term memory network (Long Short-Term Memory, LSTM) is used to extract its time sequence information feature f_lstm; finally the features of the two dimensions are spliced to obtain the shared feature, used as the input of the one-vs-rest (One-vs-Rest, OVR) network.
Preferably, in this embodiment, 4 convolution layers (convolution layer 1 to convolution layer 4) are adopted to extract the inter-channel information features, with the channel numbers of convolution layers 1 to 4 set to 64, 128, 256 and 64 respectively, yielding a 128-dimensional inter-channel information feature; the LSTM adopts a 3-layer structure with 64 hidden neurons per layer, yielding a 64-dimensional time sequence information feature; finally, the 128-dimensional inter-channel information feature and the 64-dimensional time sequence feature are spliced to obtain the 1×192-dimensional shared feature.
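As a concrete illustration, the two-branch extractor described above can be sketched as follows. The convolution channel counts (64, 128, 256, 64), the 3-layer LSTM with 64 hidden units, and the 192-dimensional shared feature follow the embodiment; the input shape (32 EEG channels × 128 time samples), kernel sizes, and pooling choices are assumptions made only for this sketch.

```python
import torch
import torch.nn as nn

class ParallelFeatureExtractor(nn.Module):
    """Sketch of the two-branch feature extraction network F.

    Assumed input: (batch, 32 EEG channels, 128 time samples).
    """

    def __init__(self, eeg_channels: int = 32):
        super().__init__()
        convs, in_ch = [], 1
        for out_ch in (64, 128, 256, 64):  # convolution layers 1-4
            convs += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                      nn.ReLU(),
                      nn.MaxPool2d(2)]
            in_ch = out_ch
        self.cnn = nn.Sequential(*convs)
        # Pool the final 64-channel map to 1x2 so flattening yields
        # 64 * 2 = 128 inter-channel features, as stated in the embodiment.
        self.pool = nn.AdaptiveAvgPool2d((1, 2))
        self.lstm = nn.LSTM(input_size=eeg_channels, hidden_size=64,
                            num_layers=3, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel branch: treat the signal as a 1-channel 2-D map.
        f_conv = self.pool(self.cnn(x.unsqueeze(1))).flatten(1)  # (batch, 128)
        # Temporal branch: feed (batch, time, channels); keep last hidden state.
        _, (h_n, _) = self.lstm(x.transpose(1, 2))
        f_lstm = h_n[-1]                                         # (batch, 64)
        return torch.cat([f_conv, f_lstm], dim=1)                # (batch, 192)
```

Running a dummy batch through the module confirms the 1×192 shared-feature dimension claimed in the text.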
The one-vs-rest classification network is designed to comprise n different classifiers C_1, C_2, …, C_n, each of which makes a binary classification decision for one known class.
Preferably, Sigmoid is used as the activation function of each classifier. Given a training sample signal x_i, the Sigmoid layer outputs its classification prediction probability p_i according to: p_i = 1 / (1 + exp(−z_i)), where exp(·) denotes the exponential function with base e and z_i, the input of the Sigmoid layer, is the logit value indicating how strongly the network believes x_i belongs to class i.
output of Sigmoid layerOnly +.>In the related, the approach degree of the sample point characteristic and the target class can be objectively reflected, and the normalized exponential function Softmax is calculated by +.>Occupy->The high confidence samples of the Softmax output are therefore not exactly equivalent to samples that are in absolute proximity to that class. Therefore, in this embodiment, sigmoid is used as the activation function of each classifier to obtain a training sample signal +.>Is>A classification prediction probability based on each classifier +.>The maximum value in (1) is used for obtaining the prediction result of each classifier and is marked as +.>
During training, the feature extraction network F and the classifiers are trained with training samples (x_i, y_i) of the known classes in the training set. For an input training sample signal x_i, the label for the classifier of the i-th class is 1, and the labels for the remaining classifiers are 0. This is supervised training, and the target loss of each classifier is the binary cross entropy loss (Binary Cross Entropy Loss, BCELoss): L_bce = −[y·log(p) + (1−y)·log(1−p)], where p is the classifier's predicted probability and y ∈ {0, 1} is its label.
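A minimal sketch of this per-classifier target loss, with illustrative probabilities (the class indices and values are not from the patent):

```python
import math

def bce_loss(p: float, y: int) -> float:
    """Binary cross entropy for one classifier: y = 1 for the true class, else 0."""
    eps = 1e-12  # numerical guard against log(0)
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# A sample of class i = 0 scored by n = 3 one-vs-rest classifiers:
probs = [0.9, 0.3, 0.2]
labels = [1, 0, 0]        # only classifier 0 sees a positive label

losses = [bce_loss(p, y) for p, y in zip(probs, labels)]
assert losses[0] == min(losses)  # confident correct classifier -> smallest loss
```

Each classifier is optimized independently against its own binary label, which is what lets every classifier later reject a sample on its own.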
through this partial training, decision boundaries for n known classes can be obtained, which can effectively identify corresponding known class samples.
The difficult negative sample classification-boundary strategy adds a training step, with a loss term alongside the target loss, so as to constrain the open set decision boundary of each class to converge between the class itself and its corresponding difficult negative class.
After a training sample signal x_i enters the network, the prediction result of each classifier is obtained. The prediction results of the n classifiers are normalized and arranged into an n-dimensional vector y; the true class t of the input sample is obtained from its label y_i, and the maximum value among the remaining n−1 prediction results determines the difficult negative sample class h: h = argmax_{j≠t} ŷ_j,
where ŷ_j denotes the normalized prediction result of the j-th classifier, and ŷ_h is the normalized prediction result of the difficult negative sample class h of the training sample signal.
After the difficult negative class is determined, the open set boundary loss L_open of the current classifier is calculated from ŷ_t, the prediction result of the classifier corresponding to the emotion label t, and the minimum of the ŷ_h values over the training sample signals of the same emotion label in the current batch.
Thus, the overall target loss L of each classifier during the training phase is defined as: L = L_bce + L_open.
after training is completed, an open set boundary with good constraint can be obtained, so that label prediction of a test sample can be simply performed according to the following formula:
wherein ,representing the predicted outcome of the test sample,/->Represents the maximum value of the prediction results of the n classifiers.
The invention relies only on the outputs of the one-vs-rest classification network, so no unknown-class rejection threshold needs to be set manually.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
What has been described above is merely some embodiments of the present invention. It will be apparent to those skilled in the art that various modifications and improvements can be made without departing from the spirit of the invention.

Claims (4)

1. An open set emotion recognition method based on physiological signals is characterized by comprising the following steps:
step 1, acquiring original physiological signals of a subject under each emotion category based on the set emotion categories, and setting emotion labels corresponding to the original physiological signals;
step 2, carrying out signal preprocessing on the original physiological signals to obtain training sample signals and emotion labels thereof;
step 3, inputting the training sample signal into a feature extraction network to extract signal features, and obtaining shared features of input signals of the feature extraction network;
the characteristic extraction network comprises two parallel characteristic extraction branches, one branch is used for extracting channel information characteristics of an input signal from a channel information dimension, and the other branch is used for extracting time sequence characteristics of the input signal from a time sequence dimension; flattening the extracted channel information features and the time sequence features, and then performing feature stitching to obtain shared features of input signals;
step 4, inputting the shared features into an emotion classification network, acquiring predicted emotion types, and training network parameters of a feature extraction network and the emotion classification network based on a prediction result;
the emotion classification network comprises n classifiers, n is the emotion category number set in the step 1, each classifier corresponds to one emotion category set in the step 1 and is used for predicting whether a current input signal is the emotion category, each classifier outputs a binary prediction probability of the emotion category corresponding to the current input signal, and a prediction result of each classifier is determined based on the maximum value in the binary prediction probability;
constraining the classification boundary of each classifier by using a difficult negative sample class during training, wherein the overall loss of each classifier during training comprises the target loss and the open set boundary loss of each classifier;
when the preset training ending condition is met, the trained feature extraction network and emotion classification network are saved;
step 5, acquiring emotion recognition results of the physiological signals to be recognized based on the trained feature extraction network and the emotion classification network: if the maximum value of the prediction results of the n classifiers is smaller than or equal to a specified threshold value, the emotion type of the physiological signal to be recognized is an unknown type; otherwise, the recognition result of the signal to be recognized is determined based on the emotion type corresponding to the maximum value;
in step 4, the open set boundary loss of the classifier is:
training sample signal x for any current input feature extraction network i Carrying out standardization processing on the prediction results of the n classifiers to obtain n-dimensional vectors y, wherein the subscript i represents emotion class numbers;
the emotion label of training sample signal x_i is denoted t; after removing the entry corresponding to the current classifier from the vector y, the hard negative sample class h of x_i is the emotion category corresponding to the maximum of the remaining entries of y;
based on the standardized prediction results y_h corresponding to the hard negative sample class h of each training sample signal of the current training batch sharing the same emotion label, the open-set boundary loss L_open of the current classifier is calculated [formula not reproduced];
wherein y_t denotes the prediction result of the classifier corresponding to the emotion label t, and the minimum over those training sample signals of the standardized hard-negative prediction results y_h is taken as the hard-negative term of the loss.
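The hard-negative selection described above can be sketched as follows. The normalization used here (dividing by the sum) is an assumption, since the patent only states that the n prediction results are standardized into a vector y.

```python
# Sketch of hard-negative-class selection: standardize the n classifier
# outputs into a vector y, suppress the entry for the sample's own label t,
# and take the largest remaining entry as the hard negative class h.
import numpy as np

def hard_negative_class(raw_scores, t):
    y = np.asarray(raw_scores, dtype=float)
    y = y / y.sum()                  # one possible standardization (assumption)
    y[t] = -np.inf                   # remove the true-label entry
    return int(np.argmax(y))         # most confusing wrong class

h = hard_negative_class([0.1, 0.6, 0.9, 0.2], t=2)
print(h)  # class 1 is the hardest negative once the true class 2 is removed
```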
2. The physiological-signal-based open-set emotion recognition method according to claim 1, wherein in step 3 a convolutional neural network is used to extract the channel-information features of the input signal and a long short-term memory network is used to extract its time-series features.
3. The method according to claim 1, wherein in step 4 the target loss of each classifier is a binary cross-entropy loss.
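Claim 3 states that the per-classifier target loss is binary cross-entropy; a minimal reference implementation over one classifier's positive-class probability p and a 0/1 label:

```python
# Standard binary cross-entropy for a single prediction, as named in claim 3.
import math

def bce(p, label, eps=1e-12):
    p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
    return -(label * math.log(p) + (1 - label) * math.log(1.0 - p))

print(bce(0.5, 1))  # log 2: maximally uncertain prediction of the true label
```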
4. The method according to claim 1, wherein in step 5 the specified threshold used when obtaining the emotion recognition result of the physiological signal to be recognized is 0.5.
CN202310937260.0A 2023-07-28 2023-07-28 Open set emotion recognition method based on physiological signals Active CN116701917B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310937260.0A CN116701917B (en) 2023-07-28 2023-07-28 Open set emotion recognition method based on physiological signals


Publications (2)

Publication Number Publication Date
CN116701917A CN116701917A (en) 2023-09-05
CN116701917B true CN116701917B (en) 2023-10-20

Family

ID=87837708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310937260.0A Active CN116701917B (en) 2023-07-28 2023-07-28 Open set emotion recognition method based on physiological signals

Country Status (1)

Country Link
CN (1) CN116701917B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110084149A (en) * 2019-04-09 2019-08-02 南京邮电大学 A kind of face verification method based on difficult sample four-tuple dynamic boundary loss function
CN110610168A (en) * 2019-09-20 2019-12-24 合肥工业大学 Electroencephalogram emotion recognition method based on attention mechanism
CN111709267A (en) * 2020-03-27 2020-09-25 吉林大学 Electroencephalogram signal emotion recognition method of deep convolutional neural network
CN112101152A (en) * 2020-09-01 2020-12-18 西安电子科技大学 Electroencephalogram emotion recognition method and system, computer equipment and wearable equipment
CN112381008A (en) * 2020-11-17 2021-02-19 天津大学 Electroencephalogram emotion recognition method based on parallel sequence channel mapping network
CN112801182A (en) * 2021-01-27 2021-05-14 安徽大学 RGBT target tracking method based on difficult sample perception
CN113673434A (en) * 2021-08-23 2021-11-19 合肥工业大学 Electroencephalogram emotion recognition method based on efficient convolutional neural network and contrast learning
CN113729707A (en) * 2021-09-06 2021-12-03 桂林理工大学 FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG
CN113887662A (en) * 2021-10-26 2022-01-04 北京理工大学重庆创新中心 Image classification method, device, equipment and medium based on residual error network
CN114052735A (en) * 2021-11-26 2022-02-18 山东大学 Electroencephalogram emotion recognition method and system based on depth field self-adaption
CN114224342A (en) * 2021-12-06 2022-03-25 南京航空航天大学 Multi-channel electroencephalogram emotion recognition method based on space-time fusion feature network
CN114492513A (en) * 2021-07-15 2022-05-13 电子科技大学 Electroencephalogram emotion recognition method for adaptation to immunity domain based on attention mechanism in cross-user scene
CN114578967A (en) * 2022-03-08 2022-06-03 天津理工大学 Emotion recognition method and system based on electroencephalogram signals
CN115238835A (en) * 2022-09-23 2022-10-25 华南理工大学 Electroencephalogram emotion recognition method, medium and equipment based on double-space adaptive fusion
CN116049636A (en) * 2023-02-03 2023-05-02 沈阳工业大学 Electroencephalogram emotion recognition method based on depth residual convolution neural network
CN116432070A (en) * 2021-12-29 2023-07-14 上海交通大学 ECG signal classification system and method based on deep learning neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR102014023780B1 (en) * 2014-09-25 2023-04-18 Universidade Estadual De Campinas - Unicamp (Br/Sp) METHOD FOR MULTICLASS CLASSIFICATION IN OPEN SCENARIOS AND USES OF THE SAME
US11141088B2 (en) * 2018-10-09 2021-10-12 Sony Corporation Electronic device for recognition of mental behavioral attributes based on deep neural networks
CN111460892A (en) * 2020-03-02 2020-07-28 五邑大学 Electroencephalogram mode classification model training method, classification method and system
CN113627518B (en) * 2021-08-07 2023-08-08 福州大学 Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An investigation of deep learning models for EEG-based emotion recognition;Zhang Y 等;;《Frontiers in Neuroscience》;第14卷;622759 *
EEG emotion recognition using fusion model of graph convolutional neural networks and LSTM;Yin Y 等;;《Applied Soft Computing》;第100卷;106954 *
Emotion recognition algorithm based on temporal convolutional networks; Song Zhenzhen et al.; Journal of East China University of Science and Technology (Natural Science Edition) (No. 04); 564-572 *
Emotion recognition method based on instantaneous energy of EEG signals; Chen Tian et al.; Computer Engineering (No. 04); 196-204 *


Similar Documents

Publication Publication Date Title
Tuli et al. Are convolutional neural networks or transformers more like human vision?
US10719780B2 (en) Efficient machine learning method
Jiang et al. Facial expression recognition based on convolutional block attention module and multi-feature fusion
Stafylakis et al. Deep word embeddings for visual speech recognition
Kusrini et al. The effect of Gaussian filter and data preprocessing on the classification of Punakawan puppet images with the convolutional neural network algorithm
CN112733764A (en) Method for recognizing video emotion information based on multiple modes
Rajput et al. A transfer learning-based brain tumor classification using magnetic resonance images
CN111191033A (en) Open set classification method based on classification utility
Kanjanawattana et al. Deep Learning-Based Emotion Recognition through Facial Expressions
CN113496251A (en) Device for determining a classifier for identifying an object in an image, device for identifying an object in an image and corresponding method
CN116701917B (en) Open set emotion recognition method based on physiological signals
CN117079017A (en) Credible small sample image identification and classification method
CN114022698A (en) Multi-tag behavior identification method and device based on binary tree structure
Al-Shareef et al. Face Recognition Using Deep Learning
Ali et al. Finger veins recognition using machine learning techniques
Haoran Face detection using the Siamese network with reconstruction supervision
Vatsa et al. Comparing the Performance of Classification Algorithms for Melanoma Skin Cancer
Prasad et al. Poaceae Family Leaf Disease Identification and Classification Applying Machine Learning
Fan et al. Unsupervised domain adaptation with generative adversarial networks for facial emotion recognition
Al-Obaydy et al. Open-set face recognition in video surveillance: a survey
CN116912921B (en) Expression recognition method and device, electronic equipment and readable storage medium
Zermane et al. Facial recognition and detection using Convolution Neural Networks
Gupta et al. Facial Expression Recognition with Combination of Geometric and Textural Domain Features Extractor using CNN and Machine Learning
Shetty et al. Medical Image Retrieval System for Endoscopy Images Using CNN
US20240290072A1 (en) Method and electrical device for training cross-domain classifier

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant