CN110464366A - A kind of Emotion identification method, system and storage medium - Google Patents
- Publication number
- CN110464366A (application CN201910586342.9A)
- Authority
- CN
- China
- Prior art keywords
- brain wave
- facial image
- emotional state
- user
- emotion identification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
The present invention relates to an emotion recognition method, system and storage medium. By acquiring a user's facial image and brain waves, the invention predicts the user's emotional state from both the facial image and the brain waves using the valence-arousal two-dimensional model. Compared with the prior art, the present invention improves the accuracy of emotion recognition.
Description
Technical field
The present invention relates to the field of information processing, and more particularly to an emotion recognition method, system and storage medium.
Background art
In recent years, emotion recognition has become significant in many fields and is a key factor in human-computer interaction systems. It is applied in many aspects of society, for example an intelligent robot identifying a person's mood so as to provide better interaction feedback, or changing a product's interaction according to the user's mood. Many methods in the prior art perform emotion recognition using various modalities of human information (such as facial images, brain waves, voice, etc.). These methods can recognize a subject's emotion to a certain degree, but they are limited to a single modality and their accuracy is low.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by providing an emotion recognition method, system and storage medium with high accuracy.
An emotion recognition method, comprising the following steps:
Obtain the user's facial image and brain waves;
Predict the user's emotional state separately from the acquired brain waves and facial image, obtaining a brain-wave-based and a facial-image-based emotional state prediction result;
Map the facial-image-based and brain-wave-based emotional state prediction results using the valence-arousal two-dimensional model, and obtain the user's emotional state from the mapping result.
Compared with the prior art, the present invention improves the accuracy of emotion recognition by acquiring the user's facial image and brain waves and predicting the user's emotional state from both using the valence-arousal two-dimensional model.
Further, the step of predicting the user's emotional state from the acquired brain waves to obtain a brain-wave-based emotional state prediction result comprises the following steps:
Apply a wavelet transform to the acquired brain waves and extract power spectral density features;
Select power spectral density features using recursive feature elimination;
Call a preset SVM model to classify the selected power spectral density features;
For the two cases of predicting valence and arousal, call different SVM models to predict the user's emotion values respectively, and obtain the brain-wave-based emotional state prediction result from those values. In this scheme, each user's prediction is made directly on that user's own physiological data, which makes it well targeted.
Alternatively, the step of predicting the user's emotional state from the acquired brain waves comprises the following steps:
Apply a wavelet transform to the acquired brain waves and extract power spectral density features;
Sample the extracted power spectral density features with 10 s per sample to obtain temporal features;
Apply a long short-term memory model to regress the temporal features, obtain the user's emotion values for the valence and arousal cases, and obtain the brain-wave-based emotional state prediction result from those values. In this scheme, the user's own emotional state is predicted from other users' data, which reduces data computation. The two modelling approaches suit different situations respectively, improving the flexibility and practicality of the model.
Further, the step of predicting the user's emotional state from the acquired facial image to obtain a facial-image-based emotional state prediction result comprises the following steps:
Obtain facial feature information from the facial image;
Input the facial feature information into a CNN model to obtain several sub-results;
Obtain the facial-image-based emotional state prediction result from the sub-results. Recognizing the facial feature information by calling a CNN model improves the accuracy of the emotion judgment.
Further, in the step of applying a wavelet transform to the acquired brain waves and extracting power spectral density features, feature extraction is performed with Daubechies wavelet coefficients, which helps extract the various frequency-band information in the EEG signal.
Further, the step of mapping the facial-image-based and brain-wave-based emotional state prediction results using the valence-arousal two-dimensional model and obtaining the user's emotional state from the mapping result specifically comprises:
Preset a parameter k, adjust k according to the accuracy of the fused predicted value computed after each fusion of the two modalities, and choose the k with the highest accuracy. The predicted value is obtained as follows:
S_enum = k·S_face + (1 - k)·S_EEG
where S_enum is the fused predicted value of the two modalities, S_face and S_EEG are the facial-image and brain-wave outputs respectively, k represents the importance of the facial image, and 1 - k represents the importance of the brain waves.
The predicted value is mapped with the valence-arousal two-dimensional model, and the user's emotional state is obtained from the mapping result. Fusing the facial-image and brain-wave prediction results improves the accuracy of the emotional state judgment.
Alternatively, the step of mapping the facial-image-based and brain-wave-based emotional state prediction results using the valence-arousal two-dimensional model and obtaining the user's emotional state from the mapping result specifically comprises:
Fuse the facial-image-based and brain-wave-based emotional state prediction results to obtain the predicted value as follows:
S_boost = Σ_{j=1..n} w_j·s_j
where n is the number of modalities, S_boost is the fused predicted value of the n modalities' emotional state prediction results, s_j is the output of the corresponding modality, and w_j is its weight coefficient.
The predicted value is mapped with the valence-arousal two-dimensional model, and the user's emotional state is obtained from the mapping result. Fusing the facial-image-based and brain-wave-based emotional state prediction results improves the accuracy of the emotional state judgment.
Further, in the step of obtaining the user's facial image and brain waves, a data collection framework is called to acquire the data, performing the following steps:
Enter the experiment information and store it in a database;
After a countdown, play a visual stimulus to the user, and call a camera device and a brain-computer device to obtain the user's facial image and brain waves; loop through several groups of trials;
End the experiment after reaching the set number of loops. By calling the data collection framework, a computer program performs the above process automatically, which facilitates information collection.
The present invention also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above emotion recognition method are implemented.
The present invention also provides an emotion recognition system comprising a camera, a brain-computer device, a memory, a processor, and a computer program stored in the memory and executable by the processor. The camera and the brain-computer device are connected to the processor; the camera acquires facial images, the brain-computer device samples the human EEG, and the processor implements the steps of the above emotion recognition method when executing the computer program.
Further, the emotion recognition system further comprises a display connected to the processor, which shows the current emotion acquisition and analysis prediction status clearly and intuitively.
For a better understanding and implementation, the invention is described in detail below with reference to the accompanying drawings.
Description of the drawings
Fig. 1 is a flow diagram of an emotion recognition method in embodiment 1 of the present invention;
Fig. 2 is a work flow diagram of an emotion recognition method in embodiment 1 of the present invention;
Fig. 3 is the structure chart of the convolutional neural network in embodiment 1 of the present invention;
Fig. 4 is a flow diagram of emotion recognition based on facial images in embodiment 1 of the present invention;
Fig. 5 is the accuracy data chart of the test experiment on the MAHNOB-HCI data set in embodiment 1 of the present invention;
Fig. 6 is the accuracy histogram of the test experiment on the DEAP data set in embodiment 1 of the present invention;
Fig. 7 is the human-brain channel position map in embodiment 1 of the present invention;
Fig. 8 is the accuracy histogram of the online experiment in embodiment 1 of the present invention;
Fig. 9 is the structural schematic diagram of the long short-term memory model in embodiment 2 of the present invention;
Fig. 10 is a flow diagram of a data collection method in embodiment 3 of the present invention;
Fig. 11 is an information collection interactive interface of a data collection method in embodiment 3 of the present invention;
Fig. 12 is the real-time emotion monitoring main interface of the emotion recognition system in embodiment 4 of the present invention;
Fig. 13 is the distribution map of continuous valence and arousal emotion values shown in the emotion recognition system in embodiment 4 of the present invention;
Fig. 14 is the distribution map of discrete emotion values in the emotion recognition system in embodiment 4 of the present invention;
Fig. 15 is the visualization interface for extracting facial feature information in the emotion recognition system in embodiment 4 of the present invention;
Fig. 16 is the brain-wave intensity distribution of each frequency band under normal conditions in embodiment 4 of the present invention;
Fig. 17 is the brain-wave intensity distribution of each frequency band when head movement occurs in embodiment 4 of the present invention.
Specific embodiment
Embodiment 1
Referring to Figs. 1-2, these show the flow of an emotion recognition method in an embodiment of the present invention.
An emotion recognition method, comprising the following steps:
S1: obtain the user's facial image and brain waves;
The user's facial image and brain waves are obtained after the user receives an emotional stimulus. The facial image can be a face picture or a face video. In a preferred embodiment, the facial image is extracted from a face video: the video is down-sampled to 4 Hz, a face detector finds the face in each frame, and after gray-scaling the face is rescaled to 48 pixels wide by 48 pixels high as the facial feature information, to increase accuracy. The brain waves are the raw brain-wave values of each brain-computer electrode channel.
As shown in Fig. 3, figure (a) is the acquired facial image, and the facial image in the box of figure (b) is the target image. To obtain the facial image in the box of (b), the video is first down-sampled to 4 Hz. A face detector is then used to find the face in each frame, and after gray-scaling the face is rescaled to 48 wide by 48 high as the facial feature information. The face detector can be the one commonly used in the prior art in OpenCV, or another face detector. Specifically, the face detector constructs Haar features of image blocks and then searches the image with a sliding window of a certain size to judge whether each window is a face. Haar features are commonly used in image extraction, and extracting face information with Haar features is a common detection approach in the art; the sliding-window size can also be set according to the actual application.
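The pre-processing above (gray-scaling a detected face and rescaling it to 48 × 48) can be sketched as follows. This is a minimal NumPy sketch: the nearest-neighbour resize stands in for OpenCV's `cv2.resize`, the face box is assumed to come from a Haar-cascade detector such as OpenCV's `CascadeClassifier`, and the function names are illustrative.

```python
import numpy as np

def to_gray(frame_rgb: np.ndarray) -> np.ndarray:
    """Luminance-weighted gray-scaling (ITU-R BT.601 weights)."""
    return frame_rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img: np.ndarray, out_h: int = 48, out_w: int = 48) -> np.ndarray:
    """Nearest-neighbour resize; a stand-in for cv2.resize."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess_face(frame_rgb: np.ndarray, box) -> np.ndarray:
    """Crop the detected face box (x, y, w, h), gray-scale, resize to 48x48."""
    x, y, w, h = box
    face = to_gray(frame_rgb[y:y + h, x:x + w])
    return resize_nearest(face)
```

In practice the box would come from a detector call on each frame taken at 4 Hz from the video.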
S2: predict the user's emotional state separately from the acquired brain waves and facial image, obtaining a brain-wave-based and a facial-image-based emotional state prediction result;
In an alternative embodiment, in the step of predicting the user's emotional state from the acquired brain waves, one model is trained for each user, i.e. the model is user-dependent, comprising the following steps:
S201: apply a wavelet transform to the acquired brain waves and extract power spectral density features;
The wavelet transform is suited to multi-scale analysis; with it, the brain-wave data can be examined at different frequency and time scales. In a preferred embodiment, power spectral density feature extraction is performed with Daubechies wavelet coefficients, which helps obtain the information of the various frequency bands contained in the brain-wave data.
S202: select power spectral density features using recursive feature elimination;
Recursive feature elimination selects features from a feature set for better classification. It iteratively computes feature weights using a linear SVM classifier (with the penalty parameter C of the error term set to 1.0) and removes the 10% of features with the lowest weights. The algorithm iterates until half of the features are selected (42 features were selected in the test experiments and 12 in the online experiments). For each subject, feature selection is performed on the training split of the target data set, and the selected features are applied to the test set.
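The elimination loop can be sketched as follows. Least-squares coefficients stand in for the linear-SVM feature weights described above (an assumption made so the sketch stays self-contained), while the 10%-per-round drop and the stop-at-half criterion follow the description.

```python
import numpy as np

def rfe_half(X: np.ndarray, y: np.ndarray) -> list:
    """Recursive feature elimination: drop the 10% lowest-weight features
    per round until half of the original features remain."""
    remaining = list(range(X.shape[1]))
    target = X.shape[1] // 2
    while len(remaining) > target:
        # Stand-in for linear-SVM feature weights: least-squares coefficients.
        w, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        n_drop = max(1, int(len(remaining) * 0.10))
        order = np.argsort(np.abs(w))            # lowest |weight| first
        drop = set(order[:n_drop].tolist())
        remaining = [f for i, f in enumerate(remaining) if i not in drop]
    return remaining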
S203: call a preset SVM model to classify the selected power spectral density features. In a preferred embodiment, the SVM model uses a Gaussian kernel, with hyperparameters: penalty coefficient C = 1.0, and the Gaussian-kernel gamma set to the reciprocal of the number of features. In other embodiments, the kernel function and parameters of the SVM model can also be set according to actual needs.
S204: for the two cases of predicting valence and arousal, call different SVM models to predict the user's emotion values respectively, and obtain the brain-wave-based emotional state prediction result from those values. In this embodiment, the brain-wave-based emotional state prediction result is obtained as follows:
r_EEG = high if S_EEG ≥ 0.5, and low otherwise,
where S_EEG is the prediction score of the brain waves and r_EEG is the brain-wave-based emotional state; high and low respectively represent whether the current emotion value is high or low, which facilitates subsequent comparison.
The step of predicting the user's emotional state from the acquired facial image comprises the following steps:
Obtain facial feature information from the facial image;
Input the facial feature information into a CNN model to obtain several sub-results. The CNN model is built by transfer learning, comprising the following steps:
Pre-train the CNN model on a general face recognition data set;
Fix all convolutional-layer parameters of the model, set a small learning rate (for example 0.001), and train a second time on the collected facial images.
In this embodiment, the facial feature information is input to the CNN model to obtain two sub-results; the specific steps are as follows:
As shown in Fig. 4, for a 48 × 48 gray-scale image, image features are first extracted by the 3 convolutional layers of the CNN model. The first convolutional layer has 32 convolution kernels of size 3 × 3 × 1, the second has 32 kernels of size 3 × 3 × 32, and the third has 64 kernels of size 3 × 3 × 32. The extracted image features are flattened and fed to the fourth layer, which is fully connected with 64 neurons. All convolutional layers and the fully connected layer apply the ReLU activation function, saving computation. The neural network is then separated into two branches. A convolution kernel is a weight function defining a weighted average of pixels in the input image, and 'DO' means the layer zeroes part of its output using dropout. In this embodiment, except for the second convolutional layer, every convolutional layer keeps its size constant using zero padding.
For the user-dependent case, classification is performed as follows.
The first branch learns valence; it contains two fully connected layers of sizes 64 and 1. The output is then fed into a sigmoid function, and the cross-entropy loss L1 is minimized:
L1 = -(1/m) Σ_{i=1..m} [ y1_i·log(ŷ1_i) + (1 - y1_i)·log(1 - ŷ1_i) ]
where y1_i is the ground-truth valence label of the i-th sample, ŷ1_i is the model output for valence on the i-th sample, and m is the number of training samples.
The second branch learns arousal; it contains two fully connected layers of sizes 64 and 1. The output is fed to a sigmoid function, a common neural-network threshold function that maps a variable to between 0 and 1, and the cross-entropy loss L2 is minimized:
L2 = -(1/m) Σ_{i=1..m} [ y2_i·log(ŷ2_i) + (1 - y2_i)·log(1 - ŷ2_i) ]
where y2_i is the ground-truth arousal label of the i-th sample, ŷ2_i is the model output for arousal on the i-th sample, and m is the number of training samples.
The joint loss of L1 and L2 is minimized:
L = α1·L1 + α2·L2
where the α_p are linear weights, hyperparameters the model needs to determine. If the second weight is set to 0, the model degenerates into a traditional single-task learning method.
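The two-branch network described above can be sketched in PyTorch as follows. Layer sizes follow the text (three convolutional layers, a 64-neuron fully connected layer, and two 64 → 1 sigmoid branches); pooling is not mentioned and the exact dropout placement is not fully specified, so a single dropout after the fully connected layer is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    """Sketch of the two-branch valence/arousal CNN described above."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # 48x48, zero-padded
            nn.Conv2d(32, 32, 3), nn.ReLU(),             # 46x46, no padding
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # 46x46, zero-padded
            nn.Flatten(),
            nn.Linear(64 * 46 * 46, 64), nn.ReLU(),
            nn.Dropout(0.5),                             # a 'DO' layer (assumed placement)
        )
        # Two task branches: fully connected layers of sizes 64 and 1.
        self.valence = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                     nn.Linear(64, 1), nn.Sigmoid())
        self.arousal = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                     nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        h = self.features(x)
        return self.valence(h), self.arousal(h)
```

Training would minimize L = α1·BCE(valence) + α2·BCE(arousal), matching the joint loss above.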
The facial-image-based emotional state prediction result is obtained from the sub-results. Specifically, after the model is fully trained, the valence and arousal classification results are obtained from the network output value S_face as follows:
the result is classified as high if S_face ≥ 0.5, and low otherwise.
For example, when the valence branch outputs the score S_face = 0.8, its corresponding valence result is considered to belong to the high class.
S3: map the facial-image-based and brain-wave-based emotional state prediction results using the valence-arousal two-dimensional model, and obtain the user's emotional state from the mapping result.
The valence-arousal two-dimensional model is the most widely used dimensional emotion model. The valence/arousal coordinate system maps discrete emotion labels into a two-dimensional space. Valence is the degree of positivity, ranging from unpleasant to pleasant; arousal is the degree of activation, ranging from calm to activated; together they describe the intensity of an instantaneous emotional state. Since most of the variation in emotion comes from these two dimensions, this embodiment uses them as emotion indicators, with discrete values between 1 and 9 as the valence and arousal values. In this embodiment, for the user-dependent case, one classifier is trained for each subject. The valence and arousal states are binarized into high (levels 6-9) and low (levels 1-5) grades for easy comparison.
In an alternative embodiment, in step S3, the step of mapping the facial-image-based and brain-wave-based emotional state prediction results with the valence-arousal two-dimensional model and obtaining the user's emotional state from the mapping result specifically comprises:
Obtain the facial-image-based and brain-wave-based emotional state prediction results, which are the two classification results output by the two single-modality classifiers for brain waves and facial images respectively;
Adjust the parameter k and compute the accuracy of the fused predicted value after each fusion of the two modalities, choosing the k with the highest accuracy;
Specifically, by enumerating linear combination weights for the outputs of the 2 single-modality classifiers, a parameter k is found such that the linear combination of the two modalities' emotion outputs performs best on the training set. k is enumerated in steps of 0.01, and the performance of the fused model is computed for each value; the k for which the fusion performs best on the training set is chosen. That is, for the user-dependent case, the parameter k that maximizes the prediction accuracy is selected. The k value differs between the two tasks (valence and arousal).
The predicted value is obtained as follows:
S_enum = k·S_face + (1 - k)·S_EEG
where S_enum is the fused predicted value of the two modalities, S_face and S_EEG are the facial-image and brain-wave outputs respectively, k represents the importance of the facial image, and 1 - k represents the importance of the brain waves.
For the user-dependent case, the predicted value S_enum obtained in this way is a continuous value between 0 and 1, representing the probability that the model is biased toward the high class. The fused result r_enum (high or low) is then obtained as follows:
r_enum = high if S_enum ≥ 0.5, and low otherwise,
where r_enum is the classification result (high or low) predicted after the user-dependent fusion.
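The k-enumeration described above can be sketched as follows, assuming both single-modality scores lie in [0, 1], binary high (1) / low (0) training labels, and the 0.5 decision threshold; `best_fusion_k` is an illustrative name.

```python
def best_fusion_k(s_face, s_eeg, labels, step=0.01):
    """Enumerate k in [0, 1] in steps of `step` and return the k whose fused
    score S_enum = k*S_face + (1-k)*S_EEG best classifies the training set."""
    best_k, best_acc = 0.0, -1.0
    n = int(round(1 / step))
    for i in range(n + 1):
        k = i * step
        correct = sum(
            (k * f + (1 - k) * e >= 0.5) == bool(y)
            for f, e, y in zip(s_face, s_eeg, labels)
        )
        acc = correct / len(labels)
        if acc > best_acc:          # keep the first k reaching the best accuracy
            best_k, best_acc = k, acc
    return best_k, best_acc
```

A separate k would be fitted per task (valence and arousal), as the text notes.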
The predicted value is mapped with the valence-arousal two-dimensional model, and the user's emotional state is obtained from the mapping result.
In another alternative embodiment, in step S3, the step of mapping the facial-image-based and brain-wave-based emotional state prediction results with the valence-arousal two-dimensional model and obtaining the user's emotional state from the mapping result specifically comprises:
Obtain the facial-image-based and brain-wave-based emotional state prediction results;
Fuse the facial-image-based and brain-wave-based emotional state prediction results to obtain the predicted value. Specifically, the facial-image-based and brain-wave-based emotional state prediction results output by the facial-image and brain-wave classifiers are fused as the sub-classifiers of AdaBoost, the goal being to find w_j (j = 1, 2, ..., n) for each sub-classifier and obtain the final output:
S_boost = Σ_{j=1..n} w_j·s_j
where n is the number of modalities, S_boost is the predicted value after fusing the n models, s_j is the output of the corresponding modality, and w_j is its weight coefficient.
The result of the second fusion method is r_boost = high if S_boost > 0, and low otherwise, where s_j ∈ {-1, 1} (j = 1, 2, ..., n) is the output of the corresponding sub-classifier. For example, in this embodiment, s_1 is the output of the EEG-based emotion classifier and s_2 is the output of the facial-image-based emotion classifier.
The weights w_j (j = 1, 2, ..., n) are obtained as follows. For a training set containing m samples, let s(x_i)_j ∈ {-1, 1} denote the output of the j-th classifier for the i-th sample and y_i the ground-truth label of the i-th sample. First, the training weight of each sample is initialized as
α_i = 1/m
where α_i is the weight coefficient of the i-th sample and m is the number of samples. The training weight takes effect during training: when the current data point is used, its data are first multiplied by this weight coefficient;
Then the sub-classifiers are trained as described above, after which the error rate ε_j is computed:
ε_j = Σ_{i=1..m} α_i·t_i
where ε_j is the error rate and t_i is obtained as follows: t_i = 1 if s(x_i)_j ≠ y_i, and t_i = 0 otherwise.
The sub-classifier weight to be computed is then:
w_j = ln((1 - ε_j)/ε_j)/2
The weight coefficient of each data point is updated as follows, so that the next classifier is trained in a more targeted way:
α_i ← α_i·exp(-w_j·y_i·s(x_i)_j), followed by normalization so that the weights sum to 1.
This fusion method is applied to the 2 different tasks (valence and arousal), with different parameters trained for the 2 tasks.
The predicted value is mapped with the valence-arousal two-dimensional model, and the user's emotional state is obtained from the mapping result.
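The AdaBoost-style weight computation above can be sketched as follows. The exponential re-weighting with normalization is the standard AdaBoost update assumed to match the text's description, and the sketch further assumes every error rate lies strictly between 0 and 1.

```python
import math

def adaboost_fusion_weights(outputs, y):
    """Compute the sub-classifier weights w_j for the second fusion method.
    outputs[j][i] in {-1, +1} is classifier j's output for sample i;
    y[i] in {-1, +1} is the ground-truth label. Assumes 0 < eps_j < 1."""
    m = len(y)
    alpha = [1.0 / m] * m                      # initial sample weights, 1/m
    ws = []
    for s_j in outputs:
        # Error rate: weighted sum over misclassified samples.
        eps = sum(a for a, s, t in zip(alpha, s_j, y) if s != t)
        w = 0.5 * math.log((1 - eps) / eps)    # w_j = ln((1-eps_j)/eps_j)/2
        ws.append(w)
        # Re-weight samples for the next classifier and normalize.
        alpha = [a * math.exp(-w * t * s) for a, s, t in zip(alpha, s_j, y)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]
    return ws

def fuse(outputs, ws):
    """S_boost = sum_j w_j * s_j; predict high when the score is positive."""
    n_samples = len(outputs[0])
    scores = [sum(w * s_j[i] for w, s_j in zip(ws, outputs))
              for i in range(n_samples)]
    return ["high" if s > 0 else "low" for s in scores]
```

With two modalities (EEG and facial image), `outputs` would hold the two sub-classifiers' ±1 decisions on the training set.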
In this embodiment, the accuracy of the described emotion recognition method is evaluated through a test experiment and an online experiment respectively:
In the test experiment, as shown in Figs. 5-6, the accuracy of the method described in this embodiment is tested using the MAHNOB-HCI and DEAP data sets respectively, both of which are international emotion data sets.
For the test experiments on the MAHNOB-HCI and DEAP data sets, brain-wave feature extraction uses 14 channels (Fp1, T7, CP1, Oz, Fp2, F8, FC6, FC2, Cz, C4, T8, CP6, CP2 and PO4) at the human-brain channel positions shown in Fig. 7, plus 3 symmetric pairs (T7-T8, Fp1-Fp2 and CP1-CP2). The 5 frequency bands used are theta (4 Hz < f < 8 Hz), slow alpha (8 Hz < f < 10 Hz), alpha (8 Hz < f < 12 Hz), beta (12 Hz < f < 30 Hz) and gamma (30 Hz < f < 45 Hz), giving a total of 14 × 5 + 3 × 5 = 85 features. After feature extraction, the brain-wave data are tested with the emotion recognition method described in this embodiment.
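The 85-feature extraction above can be sketched as follows. An FFT periodogram stands in for the wavelet-based power spectral density of the text, the feature of each symmetric pair is assumed to be the difference of the two channels' band powers, and the 128 Hz sampling rate is illustrative.

```python
import numpy as np

BANDS = {"theta": (4, 8), "slow_alpha": (8, 10), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}
CHANNELS = ["Fp1", "T7", "CP1", "Oz", "Fp2", "F8", "FC6", "FC2",
            "Cz", "C4", "T8", "CP6", "CP2", "PO4"]
PAIRS = [("T7", "T8"), ("Fp1", "Fp2"), ("CP1", "CP2")]

def band_powers(signal, fs):
    """Per-band power of one channel via an FFT periodogram
    (a stand-in for the wavelet-based PSD used in the text)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {b: psd[(freqs > lo) & (freqs < hi)].sum()
            for b, (lo, hi) in BANDS.items()}

def eeg_features(eeg, fs=128):
    """14 channels x 5 bands + 3 symmetric pairs x 5 band differences
    = 85 features, matching the count in the text."""
    per_ch = {ch: band_powers(eeg[i], fs) for i, ch in enumerate(CHANNELS)}
    feats = [per_ch[ch][b] for ch in CHANNELS for b in BANDS]
    feats += [per_ch[l][b] - per_ch[r][b] for l, r in PAIRS for b in BANDS]
    return np.array(feats)
```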
Table 1 shows the fusion accuracies obtained with the MAHNOB-HCI data set in the test experiment. The MAHNOB-HCI data set contains the brain waves, face videos, audio, eye movements and peripheral physiological recordings (such as body temperature and heartbeat) of 30 participants. In this data set, each participant watched 20 clips extracted from Hollywood movies and video websites; the stimulus videos last 35 to 117 seconds. After each stimulus, subjects self-assessed valence and arousal on discrete levels from 1 to 9.
In the user-dependent setting, the MAHNOB-HCI data set is tested with leave-one-trial-out cross-validation for each subject. Specifically, each subject has 20 trials, so 20 tests are performed, each time using 19 trials as the training set and leaving one trial as the test set.
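The leave-one-trial-out scheme above can be sketched as a fold generator; the function name is illustrative.

```python
def leave_one_trial_out(n_trials=20):
    """Yield (train_idx, test_idx) folds: each fold holds out one trial
    and trains on the remaining n_trials - 1."""
    for held_out in range(n_trials):
        train = [t for t in range(n_trials) if t != held_out]
        yield train, [held_out]
```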
Table 1: Average emotion recognition accuracy on the MAHNOB-HCI data set
Task | Facial image | Brain wave | First fusion method | Second fusion method
Valence | 72.31±12.02 | 75.38±12.16 | 80.30±11.37 | 80.00±12.40
Arousal | 71.15±11.62 | 68.85±10.02 | 74.23±10.34 | 71.54±11.16
Table 2 shows the fusion accuracies obtained with the DEAP data set in the test experiment. The DEAP data set contains the physiological and peripheral signals of 32 participants watching 40 one-minute music videos, together with each participant's valence and arousal ratings for each video.
On the DEAP data set, for each subject, 20 trials are randomly selected as the training set and the remaining trials are used as the test set. Cross-validation is performed on the training set to select the model's best hyperparameters, i.e. the parameter values set before testing begins; then the trained model is tested on the test set. For each task, the test-set accuracy is used as the measure of model performance.
Table 2: Average emotion recognition accuracy on the DEAP data set
Task | Facial image | Brain wave | First fusion method | Second fusion method
Valence | 73.33±13.59 | 66.25±9.04 | 75.00±11.08 | 75.21±10.94
Arousal | 69.79±15.64 | 71.88±12.48 | 75.63±11.92 | 74.17±14.04
The above results show that in the test experiment the accuracy of all fusion methods is higher than that of any single modality.
The online experiment includes 20 subjects (50% female), aged 17 to 22 (mean = 20.15, standard deviation = 1.19). The experimental procedure is as follows:
At the beginning, videos are used to induce the subject's emotion while face images and EEG signals are recorded. At the end of each video, subjects are asked to report their valence and arousal, i.e. their instantaneous emotional state, as discrete values between 1 and 9.
Before the experiment, material for inducing emotion was selected: 40 videos were manually chosen from a large number of family movies and edited into clips, then divided into 2 parts, one shown when collecting training data and one shown when collecting test data. Each part contains 20 videos. The film clips last 70.52 to 195.12 seconds (mean = 143.04, standard deviation = 33.50).
Before testing, training data were first collected to train the model. For each subject, data from 20 trials were collected. At the start of each trial, a 10-second countdown appeared at the center of the screen to attract the subject's attention and signal the start of the video; after the countdown, the emotion-inducing film clip was played on the screen. During this period, a camera captured 4 facial images per second and a mobile brain-computer device (Emotiv) captured 10 groups of brain wave signals per second. Each film clip lasted 2-3 minutes. At the end of each trial, a SAM (Self-Assessment Manikin) form appeared at the center of the screen to collect the subject's valence and arousal labels; the subject filled in the form and clicked the "Submit" button to proceed to the next trial. Between two consecutive trials there was a further 10-second countdown to let the subject's emotion recover. The collected data (brain wave signals, facial images, and the corresponding valence and arousal labels) were used to train the above model.
In the test phase, each subject performed 20 trials, each following a procedure similar to the training-stage data collection, except that no videos were used to stimulate the subjects: identical videos would induce identical physiological states, making it impossible to tell whether a physiological state was produced by the emotion or by the video. At the end of each trial, results were obtained with four different detectors (the facial-expression detector, the brain wave detector, the first fusion detector, and the second fusion detector), and accuracy was computed by comparing the predictions with the labels submitted by the subjects. Fig. 5 shows the accuracy of the method of this embodiment measured in the online experiment.
For the online experiment, brain wave features were extracted from 5 of the human-brain channels shown in Fig. 7 (AF3, AF4, T7, T8 and Pz). The 5 frequency bands used were theta (4 Hz < f < 8 Hz), slow alpha (8 Hz < f < 10 Hz), alpha (8 Hz < f < 12 Hz), beta (12 Hz < f < 30 Hz) and gamma (30 Hz < f < 45 Hz), giving 5 × 5 = 25 features in total. After feature extraction, the brain wave data were tested with the emotion recognition method described in this embodiment.
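A minimal sketch of the 5-channel × 5-band feature extraction follows. Welch's method stands in here for the PSD estimator (the embodiment uses a wavelet-based PSD), and the 128 Hz sampling rate is an assumption; the band edges follow the text:

```python
# Sketch of the 5 channels x 5 bands = 25 EEG band-power features.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "slow_alpha": (8, 10), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 45)}
CHANNELS = ["AF3", "AF4", "T7", "T8", "Pz"]

def band_power_features(eeg):
    """eeg: array of shape (5 channels, n_samples) -> 25 band-power features."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS)      # psd: (5, n_freqs)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs > lo) & (freqs < hi)
        feats.append(psd[:, mask].sum(axis=1))      # power per channel in this band
    return np.concatenate(feats)                    # shape (25,)

features = band_power_features(np.random.randn(len(CHANNELS), FS * 10))
```
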
Preferably, to guarantee data accuracy during the above experiments, the subjects sat on a comfortable chair and were instructed to avoid blinking or moving their bodies as far as possible, while the equipment was tested and the camera position calibrated in advance to ensure that the subject's face appeared at the center of the screen.
Fig. 8 shows the accuracy for each of the 20 test subjects, and Table 3 the average accuracy of the various methods during testing. Except for the first fusion method relative to the brain wave accuracy in the arousal space of the online experiment, the accuracy of every fusion method is higher than that of the single modalities. Moreover, since no hyperparameter tuning can be performed in the online experiment, the model must generalize well; accordingly, the online experiment was conducted only for the user-dependent case.
For the significance tests of the user-dependent data, a normality test was first applied to the results of the 4 methods (brain wave, facial image, the first fusion method, and the second fusion method); when the normality-test result is less than 0.05, the data are considered to follow a normal distribution. Data satisfying normality were then compared with a t-test: if the p-value of the t-test is less than 0.05, the difference is considered significant. Data not satisfying normality were compared with a Nemenyi test; likewise, a p-value below 0.05 indicates a significant difference. A significant difference means that the accuracy is clearly improved. In the valence space of the MAHNOB-HCI dataset, emotion recognition with the second fusion method differs significantly from emotion recognition with brain waves alone (p = 0.004). For the online experiment, in the valence space the first fusion method differs significantly from the facial-image result (p = 0.026), and the second fusion method differs significantly from the facial-image method in both the valence space (p = 0.026) and the arousal space (p = 0.007).
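The significance-testing procedure above can be sketched as follows. The Shapiro-Wilk test is an assumed choice of normality test (the text names none) and the conventional reading of its p-value is used; the Wilcoxon signed-rank test stands in for the Nemenyi test, which SciPy does not provide. The accuracy arrays are illustrative:

```python
# Sketch: normality check per method, then a paired t-test when normality
# holds, otherwise a non-parametric test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
acc_eeg = rng.normal(66, 9, size=24)             # illustrative per-subject accuracies
acc_fusion = acc_eeg + rng.normal(4, 3, size=24)

def significantly_different(a, b, alpha=0.05):
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        p = stats.ttest_rel(a, b).pvalue         # paired t-test
    else:
        p = stats.wilcoxon(a, b).pvalue          # stand-in for the Nemenyi test
    return p < alpha, p

sig, p = significantly_different(acc_eeg, acc_fusion)
```
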
As for the computational cost and time complexity of the models, the cost of the whole algorithm is concentrated in the convolutional neural network; compared with the parameters of the convolutional neural network, the SVM has few parameters. In fact, the network of this embodiment has 1,019,554 parameters in total, and on a GTX 950 graphics card a single forward pass of one sample takes 0.0647 seconds. As for model fusion, the first fusion method is quite simple, but its computational cost rises exponentially with the number of modalities: its complexity is O(m·100^n), where m is the number of samples and n the number of modalities. The time complexity of the second method is O(nm), whose cost grows linearly with the number of modalities, so the second fusion method is better suited to conditions with more modalities.
Embodiment 2
In this embodiment, the emotion recognition method is substantially the same as in Embodiment 1, differing only in that in step S2, when predicting the user's emotional state from the acquired brain waves, one model is trained for all subjects and the continuous values of valence and arousal are predicted directly, comprising the following steps:
S211: applying a wavelet transform to the acquired brain waves and extracting power spectral density features;
S212: sampling the extracted power spectral density features with 10 s as one sample to obtain temporal features;
Step S212 specifically includes: when constructing the temporal features, the brain waves are sampled with 10 seconds as one sample and a 50% overlap rate, with each second as one time unit. For example, if one second yields 85 features in the test experiment, then one sample is a two-dimensional matrix whose first dimension is 10 and whose second dimension is 85, rather than a one-dimensional vector of size 850.
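The windowing in step S212 can be sketched as follows; the 60-second input is an illustrative placeholder:

```python
# Sketch: build (10, 85) temporal samples from per-second feature vectors
# using a 10 s window with 50% overlap, as described in step S212.
import numpy as np

def make_windows(per_second_feats, window=10, overlap=0.5):
    """per_second_feats: (n_seconds, n_feats) -> (n_windows, window, n_feats)."""
    step = int(window * (1 - overlap))           # 5-second hop for 50% overlap
    n = per_second_feats.shape[0]
    starts = range(0, n - window + 1, step)
    return np.stack([per_second_feats[s:s + window] for s in starts])

feats = np.random.randn(60, 85)                  # 60 s of 85 features per second
samples = make_windows(feats)                    # each sample is a 10 x 85 matrix
```
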
S213: applying long short-term memory (LSTM) regression to the temporal features to obtain the user's emotion values for valence and arousal, and obtaining the brain-wave-based emotional state prediction result from those emotion values. As shown in Fig. 9, the network comprises two LSTM layers A and B, followed by a fully connected layer and then an output layer h1, h2. The first LSTM layer consists of 10 LSTM cells, each containing 128 neurons; the second LSTM layer consists of 10 LSTM cells, each containing 64 neurons. The fully connected layer contains 54 neurons. The output layer consists of 2 neurons h1 and h2, representing valence and arousal. Each layer applies a dropout ratio of 0.5 and a ReLU activation function for linear rectification, and data normalization is performed between the layers; the normalization may follow existing LSTM data-normalization approaches, and the calculation process is not particularly limited. Preferably, the mean squared error is used as the loss function of the model.
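A pure-NumPy sketch of the forward pass of the Fig. 9 network follows (two LSTM layers of 128 and 64 units over the 10 time steps, a 54-neuron dense layer, and a 2-neuron output). The weights are random, so this illustrates only the shapes and data flow, not a trained model; dropout and normalization are omitted:

```python
# Forward-pass sketch of the two-layer LSTM regressor described in S213.
import numpy as np

rng = np.random.default_rng(0)

def lstm_forward(x, units):
    """x: (T, d_in) -> hidden-state sequence (T, units), random weights."""
    d = x.shape[1]
    W = rng.normal(scale=0.1, size=(4, units, d + units))  # gates i, f, g, o
    b = np.zeros((4, units))
    h, c = np.zeros(units), np.zeros(units)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    out = []
    for t in range(x.shape[0]):
        z = np.concatenate([x[t], h])
        i, f, o = (sigmoid(W[k] @ z + b[k]) for k in (0, 1, 3))
        g = np.tanh(W[2] @ z + b[2])
        c = f * c + i * g
        h = o * np.tanh(c)
        out.append(h)
    return np.stack(out)

x = rng.normal(size=(10, 85))            # one sample: 10 s x 85 features
h1 = lstm_forward(x, 128)                # first LSTM layer, 128 neurons
h2 = lstm_forward(h1, 64)                # second LSTM layer, 64 neurons
dense = np.maximum(0, h2[-1] @ rng.normal(scale=0.1, size=(64, 54)))  # ReLU dense
valence_arousal = dense @ rng.normal(scale=0.1, size=(54, 2))         # outputs h1, h2
```
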
After the CNN model is called to obtain the two sub-results, regression is performed on the two sub-results to predict the specific values of valence and arousal. The loss function uses the mean squared error; the calculation may follow the prior-art practice of regression prediction on a CNN model with an MSE loss function, and the calculation process is not particularly limited.
In step S3, the step of mapping the facial-image-based and brain-wave-based emotional state prediction results with the valence-arousal two-dimensional model and obtaining the user's emotional state from the mapping result specifically includes:
obtaining the prediction results of the brain waves and the facial image, the prediction results being the two classification results output by the brain wave and facial-image single-modality classifiers respectively;
adjusting the parameter k, computing the post-fusion prediction accuracy for each value, and choosing the value with the highest accuracy as k; for the user-independent case, finding the value that minimizes the absolute difference between the true value and the predicted value. That is, for the user-independent case, the S_enum obtained as follows is the predicted value, a continuous value between 1 and 9:
S_enum = k·S_face + (1 − k)·S_EEG
For the user-independent case, the predicted value of the model is obtained directly in this way; the predicted value is then mapped with the valence-arousal two-dimensional model, and the user's emotional state is obtained from the mapping result.
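The weight-enumeration fusion above can be sketched as follows; the modality outputs and labels are synthetic placeholders, and k is swept in steps of 0.01 with the least-absolute-error criterion described for the user-independent case:

```python
# Sketch of the first fusion method: enumerate k and keep the value giving
# the smallest mean absolute error of S_enum = k*S_face + (1-k)*S_EEG.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.uniform(1, 9, size=50)              # continuous labels in [1, 9]
s_face = truth + rng.normal(0, 2.0, size=50)    # illustrative modality outputs
s_eeg = truth + rng.normal(0, 1.2, size=50)

best_k, best_err = 0.0, np.inf
for k in np.arange(0.0, 1.01, 0.01):            # 100-step enumeration of k
    fused = k * s_face + (1 - k) * s_eeg
    err = np.abs(fused - truth).mean()          # least absolute error criterion
    if err < best_err:
        best_k, best_err = k, err

fused_best = best_k * s_face + (1 - best_k) * s_eeg
```

By construction the fused error can never exceed that of either single modality, since k = 0 and k = 1 reproduce the single-modality predictions.
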
In the test experiments, the accuracy of the method of this embodiment was evaluated on the MAHNOB-HCI and DEAP datasets respectively. Specifically:
The data of all subjects were selected from the MAHNOB-HCI and DEAP datasets.
For the brain waves, all available brain wave data were used, i.e. 256 groups of data per second, and all features within each second were averaged into one sample. Each dataset was divided into a training set, a validation set and a test set in the ratio 6:2:2; the model was trained and its parameters tuned on the training and validation sets, and the final results were obtained on the test set.
Table 3 compares the root-mean-square error of the fusion method with that of the single-modality methods. When fusing with the first method, for valence only a weight of 0.06 is assigned to the facial image and the remaining 0.94 to the brain waves, i.e. k = 0.06; for arousal, k = 0.22. This shows that as the number of samples increases, the brain waves provide more information for emotion recognition than the facial image in the emotion experiments. Moreover, whether in the user-independent or the user-dependent case, information fusion always improves model performance.
Table 3: Root-mean-square error between the true and predicted values in various situations
 | Brain wave | Facial image | The first fusion method |
Valence | 1.4994 | 2.0715 | 1.4965 |
Arousal | 1.5259 | 1.8501 | 1.4940 |
Compared with the prior art, the present invention acquires the user's facial image and brain wave data and, using the valence-arousal two-dimensional model, predicts the user's emotional state from the facial image and brain waves in both the user-dependent and user-independent cases; meanwhile, the output results of the brain waves and the facial image are fused by enumerating weights or by boosting, improving the accuracy of emotion recognition.
Embodiment 3
The emotion recognition method of this embodiment is substantially the same as in Embodiment 1, differing only in that, in the step of obtaining the user's facial image and brain waves, a data collection framework is called to acquire the data. In this embodiment, the data collection framework includes a camera-device interface and a brain-computer-device interface; in other embodiments, by adding the corresponding code for other modality sensors, the framework can also be used to collect emotion data of multiple other modalities. For example, to introduce an electrocardiograph (ECG) device, one only needs to implement the corresponding ECG-device reading code, and information can then be acquired according to the set procedure.
As shown in Figs. 10-11, calling the data collection framework to acquire data performs the following steps:
entering the experiment information and storing it in the database;
playing the video to the user after a countdown, and calling the camera device and the brain-computer device to obtain the user's facial images and brain waves, looping through several trials;
ending the experiment and entering the end page once the set number of loops is reached.
By using the data collection framework to collect emotion data, an experimenter collecting multimodal emotion data only needs to implement the corresponding sensor code, set the relevant parameters, and call the framework, which then carries out the above procedure automatically, facilitating data collection.
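A hypothetical sketch of such a framework is given below: each modality plugs in by implementing a single read interface, and the framework loops through the configured number of trials before reaching the end page. All class and method names are illustrative assumptions, and the countdown, video playback and label entry are reduced to comments:

```python
# Hypothetical data-collection framework: pluggable sensor interfaces and a
# trial loop, following the steps described above.
from dataclasses import dataclass, field

class Sensor:
    def read(self):                      # each modality implements only this
        raise NotImplementedError

class CameraSensor(Sensor):
    def read(self):
        return "face_frame"

class EEGSensor(Sensor):
    def read(self):
        return "eeg_group"

@dataclass
class CollectionFramework:
    sensors: dict
    trials: int = 3
    database: list = field(default_factory=list)

    def run(self, experiment_info):
        self.database.append(experiment_info)     # store experiment info
        for trial in range(self.trials):          # loop over the set number of trials
            # countdown -> play video -> acquire data -> collect labels
            record = {name: s.read() for name, s in self.sensors.items()}
            record["trial"] = trial
            self.database.append(record)
        return "end_page"                         # enter the end page

fw = CollectionFramework({"camera": CameraSensor(), "eeg": EEGSensor()})
result = fw.run({"subject": "S01"})
```
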
Embodiment 4
The present invention also provides an emotion recognition system comprising a camera, a brain-computer device, a storage, a processor and a computer program stored in the storage and executable by the processor. The camera and the brain-computer device are connected to the processor; the camera acquires facial images, the brain-computer device samples the human brain waves, and the processor implements the steps of the above emotion recognition method when executing the computer program.
In one embodiment, the emotion recognition system further includes a display connected to the processor for showing the current emotion acquisition and analysis/prediction status. Specifically, when the emotion recognition system is applied to the user-dependent case, since the user-dependent case yields more accurate continuous emotion values, the emotion can be mapped from the valence-arousal space to a discrete space (a mapping to discrete emotions such as happy and sad). In total, 16 discrete emotions are mapped: pride, anger, contempt, disgust, envy, guilt, shame, fear, sadness, surprise, interest, hope, relief, satisfaction, joy and elation. The mapping method of the emotion recognition system fits a fully connected neural network on the DEAP dataset, taking the continuous valence and arousal values as input and the discretized emotion labels as output.
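The mapping network can be sketched as follows. scikit-learn's MLPClassifier is an assumed stand-in for the fully connected network, and the training pairs are synthetic placeholders rather than DEAP data:

```python
# Sketch: map continuous (valence, arousal) to one of the 16 discrete
# emotions with a small fully connected network.
import numpy as np
from sklearn.neural_network import MLPClassifier

EMOTIONS = ["pride", "anger", "contempt", "disgust", "envy", "guilt",
            "shame", "fear", "sadness", "surprise", "interest", "hope",
            "relief", "satisfaction", "joy", "elation"]

rng = np.random.default_rng(0)
X = rng.uniform(1, 9, size=(320, 2))             # (valence, arousal) pairs
y = rng.integers(0, len(EMOTIONS), size=320)     # placeholder discrete labels

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
label = EMOTIONS[clf.predict([[7.5, 6.0]])[0]]   # e.g. a high-valence state
```
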
For expression recognition, the emotion recognition system performs expression recognition at a rate of 4 images per second; for brain waves, the system performs brain wave emotion recognition on 10-20 groups of brain waves per second, the exact number depending on the number of non-duplicated brain wave groups the brain-computer device can read. In the real-time monitoring interface, the facial-image panel is updated 4 times per second and the brain wave panel once per second; the emotion display panel can be switched between emotions in the continuous space and discrete emotions. Figs. 12-14 show the main interface for real-time emotion monitoring and its panels. In Fig. 12, the upper left corner shows the input facial-image video, read from the camera, and below it the continuously sampled brain waves, read from the brain-computer device; the upper-right panel can be switched by buttons: as shown in Fig. 13, one panel shows the continuous values of valence and arousal, and as shown in Fig. 14, another panel shows the discrete emotion distribution, displaying the emotion distribution over the past 10 s.
As shown in Fig. 15, the emotion recognition system can also visualize the facial feature maps, i.e. the output of each convolutional layer in the convolutional neural network; the right-hand panels show the image features extracted by the three convolutional layers of the CNN model.
As shown in Figs. 16-17, the emotion recognition system can also visualize the brain waves, using a 3D human-brain model to display the magnitudes of the different brain waves output during the above brain wave acquisition: Fig. 16 shows the intensity distribution of the brain waves in each frequency band under normal conditions, and Fig. 17 shows the distribution when the head moves. In the figures, the brighter the color, the stronger the brain wave intensity.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above emotion recognition method.
The present invention may be implemented in the form of a computer program product on one or more storage media containing program code (including but not limited to disk memory, CD-ROM, optical memory, etc.). Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and information storage can be accomplished by any method or technique. The information can be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other kinds of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The invention is not limited to the above embodiments. Any changes or variations of the invention that do not depart from its spirit and scope, provided they fall within the scope of the claims of the invention and their technical equivalents, are also intended to be encompassed by the invention.
Claims (10)
1. An emotion recognition method, characterized by comprising the following steps:
obtaining a facial image and brain waves of a user;
predicting the emotional state of the user from the acquired brain waves and the facial image respectively, and obtaining brain-wave-based and facial-image-based emotional state prediction results;
mapping the facial-image-based and brain-wave-based emotional state prediction results with a valence-arousal two-dimensional model, and obtaining the emotional state of the user from the mapping result.
2. The emotion recognition method according to claim 1, characterized in that the step of predicting the emotional state of the user from the acquired brain waves and obtaining the brain-wave-based emotional state prediction result comprises the following steps:
applying a wavelet transform to the acquired brain waves and extracting power spectral density features;
selecting power spectral density features by recursive feature selection;
calling a preset SVM model to classify the selected power spectral density features;
for the two cases of predicting valence and predicting arousal, calling different SVM models respectively to predict the user's emotion values, and obtaining the brain-wave-based emotional state prediction result from the emotion values;
alternatively, the step of predicting the emotional state of the user from the acquired brain waves comprises the following steps:
applying a wavelet transform to the acquired brain waves and extracting power spectral density features;
sampling the extracted power spectral density features with 10 s as one sample to obtain temporal features;
applying long short-term memory regression to the temporal features to obtain the user's emotion values for valence and arousal, and obtaining the brain-wave-based emotional state prediction result from the emotion values.
3. The emotion recognition method according to claim 1, characterized in that the step of predicting the emotional state of the user from the acquired facial image and obtaining the facial-image-based emotional state prediction result comprises the following steps:
obtaining facial feature information from the facial image;
inputting the facial feature information into a CNN model to obtain several sub-results;
obtaining the facial-image-based emotional state prediction result from the sub-results.
4. The emotion recognition method according to claim 2, characterized in that in the step of applying a wavelet transform to the acquired brain waves and extracting power spectral density features, feature extraction is performed using Daubechies wavelet transform coefficients.
5. The emotion recognition method according to claim 1, characterized in that the step of mapping the facial-image-based and brain-wave-based emotional state prediction results with the valence-arousal two-dimensional model and obtaining the emotional state of the user from the mapping result specifically comprises:
presetting a parameter k, adjusting k by computing the prediction accuracy after each fusion of the two modalities, and choosing the value with the highest accuracy as the parameter k; the predicted value is obtained as follows:
S_enum = k·S_face + (1 − k)·S_EEG;
wherein S_enum represents the fused predicted value of the two modalities, S_face and S_EEG represent the facial-image-based and brain-wave-based emotional state prediction results respectively, k represents the importance of the facial image, and 1 − k represents the importance of the brain waves;
mapping the predicted value with the valence-arousal two-dimensional model, and obtaining the emotional state of the user from the mapping result.
6. The emotion recognition method according to claim 1, characterized in that the step of mapping the facial-image-based and brain-wave-based emotional state prediction results with the valence-arousal two-dimensional model and obtaining the emotional state of the user from the mapping result specifically comprises:
fusing the facial-image-based and brain-wave-based emotional state prediction results as follows to obtain the predicted value:
S_boost = Σ_{j=1}^{n} w_j·s_j;
wherein n represents the number of modalities, S_boost represents the fused predicted value of the emotional state prediction results of the n modalities, s_j represents the output result of the corresponding modality, and w_j represents the weight coefficient;
mapping the predicted value with the valence-arousal two-dimensional model, and obtaining the emotional state of the user from the mapping result.
7. The emotion recognition method according to claim 1, characterized in that in the step of obtaining the facial image and brain waves of the user, a data collection framework is called to acquire data, performing the following steps:
entering the experiment information and storing it in a database;
playing a video to the user after a countdown, and calling a camera device and a brain-computer device to obtain the user's facial images and brain waves, looping through several trials;
ending the experiment once the set number of loops is reached.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the emotion recognition method according to any one of claims 1-7.
9. An emotion recognition system, characterized by comprising a camera, a brain-computer device, a storage, a processor and a computer program stored in the storage and executable by the processor, the camera and the brain-computer device being connected to the processor, the camera being configured to acquire facial images and the brain-computer device to sample human brain waves, the processor implementing the steps of the emotion recognition method according to any one of claims 1-7 when executing the computer program.
10. The emotion recognition system according to claim 9, characterized in that the emotion recognition system further comprises a display connected to the processor, the display being configured to show the current emotion acquisition and analysis/prediction status.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910586342.9A CN110464366A (en) | 2019-07-01 | 2019-07-01 | A kind of Emotion identification method, system and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910586342.9A CN110464366A (en) | 2019-07-01 | 2019-07-01 | A kind of Emotion identification method, system and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110464366A true CN110464366A (en) | 2019-11-19 |
Family
ID=68507037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910586342.9A Pending CN110464366A (en) | 2019-07-01 | 2019-07-01 | A kind of Emotion identification method, system and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110464366A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111046854A (en) * | 2020-01-10 | 2020-04-21 | 北京服装学院 | Brain wave external identification method, device and system |
CN111080000A (en) * | 2019-12-06 | 2020-04-28 | 南瑞集团有限公司 | Ultra-short term bus load prediction method based on PSR-DBN |
CN111134667A (en) * | 2020-01-19 | 2020-05-12 | 中国人民解放军战略支援部队信息工程大学 | Electroencephalogram signal-based time migration emotion recognition method and system |
CN111339847A (en) * | 2020-02-14 | 2020-06-26 | 福建帝视信息科技有限公司 | Face emotion recognition method based on graph convolution neural network |
CN111401166A (en) * | 2020-03-06 | 2020-07-10 | 中国科学技术大学 | Robust gesture recognition method based on electromyographic information decoding |
CN111738210A (en) * | 2020-07-20 | 2020-10-02 | 平安国际智慧城市科技股份有限公司 | Audio and video based student psychological state analysis method, device, terminal and medium |
CN112220455A (en) * | 2020-10-14 | 2021-01-15 | 深圳大学 | Emotion recognition method and device based on video electroencephalogram signals and computer equipment |
CN112270235A (en) * | 2020-10-20 | 2021-01-26 | 西安工程大学 | Improved SVM electroencephalogram signal emotion recognition method |
CN112465069A (en) * | 2020-12-15 | 2021-03-09 | 杭州电子科技大学 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
CN113017630A (en) * | 2021-03-02 | 2021-06-25 | 贵阳像树岭科技有限公司 | Visual perception emotion recognition method |
CN113421546A (en) * | 2021-06-30 | 2021-09-21 | 平安科技(深圳)有限公司 | Cross-tested multi-mode based speech synthesis method and related equipment |
CN113729710A (en) * | 2021-09-26 | 2021-12-03 | 华南师范大学 | Real-time attention assessment method and system integrating multiple physiological modes |
CN114129163A (en) * | 2021-10-22 | 2022-03-04 | 中央财经大学 | Electroencephalogram signal-based emotion analysis method and system for multi-view deep learning |
WO2022067524A1 (en) * | 2020-09-29 | 2022-04-07 | 香港教育大学 | Automatic emotion recognition method and system, computing device and computer readable storage medium |
CN116077071A (en) * | 2023-02-10 | 2023-05-09 | 湖北工业大学 | Intelligent rehabilitation massage method, robot and storage medium |
CN116935480A (en) * | 2023-09-18 | 2023-10-24 | 四川天地宏华导航设备有限公司 | Emotion recognition method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220591A (en) * | 2017-04-28 | 2017-09-29 | 哈尔滨工业大学深圳研究生院 | Multi-modal intelligent mood sensing system |
CN109730701A (en) * | 2019-01-03 | 2019-05-10 | 中国电子科技集团公司电子科学研究院 | A kind of acquisition methods and device of mood data |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107220591A (en) * | 2017-04-28 | 2017-09-29 | 哈尔滨工业大学深圳研究生院 | Multi-modal intelligent mood sensing system |
CN109730701A (en) * | 2019-01-03 | 2019-05-10 | 中国电子科技集团公司电子科学研究院 | A kind of acquisition methods and device of mood data |
Non-Patent Citations (3)
Title |
---|
SANDER KOELSTRA,ET AL: "Fusion of facial expressions and EEG for implicit affective tagging", 《IMAGE AND VISION COMPUTING》 * |
YISI LIU,ET AL: "Real-Time EEG-Based Emotion Recognition and Its Applications", 《TRANS. ON COMPUT. SCI. XII》 * |
阚威,等: "基于LSTM的脑电情绪识别模型", 《南京大学学报(自然科学)》 * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111080000A (en) * | 2019-12-06 | 2020-04-28 | 南瑞集团有限公司 | Ultra-short term bus load prediction method based on PSR-DBN |
CN111046854A (en) * | 2020-01-10 | 2020-04-21 | 北京服装学院 | Brain wave external identification method, device and system |
CN111046854B (en) * | 2020-01-10 | 2024-01-26 | 北京服装学院 | Brain wave external identification method, device and system |
CN111134667A (en) * | 2020-01-19 | 2020-05-12 | 中国人民解放军战略支援部队信息工程大学 | Electroencephalogram signal-based time migration emotion recognition method and system |
CN111134667B (en) * | 2020-01-19 | 2024-01-26 | 中国人民解放军战略支援部队信息工程大学 | Time migration emotion recognition method and system based on electroencephalogram signals |
CN111339847A (en) * | 2020-02-14 | 2020-06-26 | 福建帝视信息科技有限公司 | Face emotion recognition method based on graph convolution neural network |
CN111339847B (en) * | 2020-02-14 | 2023-04-14 | 福建帝视信息科技有限公司 | Face emotion recognition method based on graph convolution neural network |
CN111401166A (en) * | 2020-03-06 | 2020-07-10 | 中国科学技术大学 | Robust gesture recognition method based on electromyographic information decoding |
CN111738210A (en) * | 2020-07-20 | 2020-10-02 | 平安国际智慧城市科技股份有限公司 | Audio and video based student psychological state analysis method, device, terminal and medium |
WO2022067524A1 (en) * | 2020-09-29 | 2022-04-07 | 香港教育大学 | Automatic emotion recognition method and system, computing device and computer readable storage medium |
CN112220455A (en) * | 2020-10-14 | 2021-01-15 | 深圳大学 | Emotion recognition method and device based on video electroencephalogram signals and computer equipment |
CN112270235A (en) * | 2020-10-20 | 2021-01-26 | 西安工程大学 | Improved SVM electroencephalogram signal emotion recognition method |
CN112465069A (en) * | 2020-12-15 | 2021-03-09 | 杭州电子科技大学 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
CN112465069B (en) * | 2020-12-15 | 2024-02-06 | 杭州电子科技大学 | Electroencephalogram emotion classification method based on multi-scale convolution kernel CNN |
CN113017630A (en) * | 2021-03-02 | 2021-06-25 | 贵阳像树岭科技有限公司 | Visual perception emotion recognition method |
CN113421546A (en) * | 2021-06-30 | 2021-09-21 | 平安科技(深圳)有限公司 | Cross-tested multi-mode based speech synthesis method and related equipment |
CN113421546B (en) * | 2021-06-30 | 2024-03-01 | 平安科技(深圳)有限公司 | Speech synthesis method based on cross-test multi-mode and related equipment |
CN113729710A (en) * | 2021-09-26 | 2021-12-03 | 华南师范大学 | Real-time attention assessment method and system integrating multiple physiological modes |
CN114129163A (en) * | 2021-10-22 | 2022-03-04 | 中央财经大学 | Electroencephalogram signal-based emotion analysis method and system for multi-view deep learning |
CN114129163B (en) * | 2021-10-22 | 2023-08-29 | 中央财经大学 | Emotion analysis method and system for multi-view deep learning based on electroencephalogram signals |
CN116077071B (en) * | 2023-02-10 | 2023-11-17 | 湖北工业大学 | Intelligent rehabilitation massage method, robot and storage medium |
CN116077071A (en) * | 2023-02-10 | 2023-05-09 | 湖北工业大学 | Intelligent rehabilitation massage method, robot and storage medium |
CN116935480B (en) * | 2023-09-18 | 2023-12-29 | 四川天地宏华导航设备有限公司 | Emotion recognition method and device |
CN116935480A (en) * | 2023-09-18 | 2023-10-24 | 四川天地宏华导航设备有限公司 | Emotion recognition method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110464366A (en) | A kind of Emotion identification method, system and storage medium | |
US10043099B2 (en) | Automatically computing emotions aroused from images through shape modeling | |
Littlewort et al. | Automatic coding of facial expressions displayed during posed and genuine pain | |
Zeng et al. | Spontaneous emotional facial expression detection | |
Ma et al. | Contrast-based image attention analysis by using fuzzy growing | |
Wang et al. | Static topographic modeling for facial expression recognition and analysis | |
CN110139597A (en) | The system and method for being iterated classification using neuro-physiological signals | |
KR20190025564A (en) | System and method for facial expression recognition and annotation processing | |
CN110197729A (en) | Tranquillization state fMRI data classification method and device based on deep learning | |
CN111597870B (en) | Human body attribute identification method based on attention mechanism and multi-task learning | |
CN110427881A (en) | The micro- expression recognition method of integration across database and device based on the study of face local features | |
Sarath | Human emotions recognition from thermal images using Yolo algorithm | |
CN115410254A (en) | Multi-feature expression recognition method based on deep learning | |
Fu et al. | Personality trait detection based on ASM localization and deep learning | |
CN116645721B (en) | Sitting posture identification method and system based on deep learning | |
Kwaśniewska et al. | Real-time facial features detection from low resolution thermal images with deep classification models | |
CN113591797B (en) | Depth video behavior recognition method | |
KV et al. | Deep Learning Approach to Nailfold Capillaroscopy Based Diabetes Mellitus Detection | |
Moran | Classifying emotion using convolutional neural networks | |
Tiwari et al. | Personality prediction from Five-Factor Facial Traits using Deep learning | |
Ghalleb et al. | Demographic Face Profiling Based on Age, Gender and Race | |
Abdelhamid et al. | Adaptive gamma correction-based expert system for nonuniform illumination face enhancement | |
Alattab et al. | Efficient method of visual feature extraction for facial image detection and retrieval | |
Salah et al. | Recognize Facial Emotion Using Landmark Technique in Deep Learning | |
CN109214286A (en) | Face identification method based on the fusion of deep neural network multilayer feature |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2019-11-19 |