CN111000556A - Emotion recognition method based on deep fuzzy forest - Google Patents
- Publication number: CN111000556A (application CN201911204760.3A)
- Authority: CN (China)
- Prior art keywords: forest, emotion, fuzzy, electroencephalogram signal, recognition method
- Prior art date
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/24—Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
- A61B5/316—Modalities, i.e. specific diagnostic methods
- A61B5/369—Electroencephalography [EEG]
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/725—Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
Abstract
The invention relates to an emotion recognition method based on a deep fuzzy forest, which comprises the following steps: S1: collecting an electroencephalogram signal; S2: preprocessing the electroencephalogram signal to remove noise; S3: inputting the electroencephalogram signal into a pre-trained deep fuzzy forest model to obtain an emotion recognition result. In step S3, the deep fuzzy forest model uses multi-granularity scanning to obtain probability vectors of electroencephalogram features from the signal, which serve as the input of a cascade forest; the cascade forest then classifies these probability vectors to produce the emotion recognition result. Both the multi-granularity scanning and the cascade forest are constructed from fuzzy decision trees. Compared with the prior art, the method combines fuzzy set theory with the traditional decision-tree learning strategy, and has the advantages of originality, high recognition accuracy, few parameters, applicability to small sample data sets, and accurate, reliable results.
Description
Technical Field
The invention relates to the field of emotion recognition, in particular to an emotion recognition method based on a deep fuzzy forest.
Background
With the development of science and technology, people's lives have become increasingly rich, and the integration of interdisciplinary knowledge and technology has diversified the research means for psychology-related diseases. Emotion is a psychological state arising from a person's feelings and thoughts; it is ubiquitous in daily life, work and study. Adverse emotions can harm physical health and mood, in severe cases leading to depression, anxiety and similar conditions, and greatly affect mental health. Music plays a growing role in the research and treatment of these diseases, and a specialized discipline of "music therapy" has now developed. Music therapy studies the influence of music on the functions of the human body so as to relax the mood and relieve emotion.
Currently, emotion recognition methods fall into two categories, based on non-physiological signals and on physiological signals. Physiological signals comprise autonomic physiological signals (electrocardiogram, electromyogram, skin conductance and respiration) and central nervous signals (electroencephalogram and cerebral blood-oxygen signals). Compared with non-physiological signals (facial expressions, voice, limb actions and the like), physiological signals are not easily controlled or influenced by a person's subjective consciousness, so their objectivity and recognition accuracy are higher. Electroencephalogram signals directly capture brain signals and reflect the activity state of the brain, with the advantages of convenient acquisition, high temporal resolution and strong real-time performance.
However, the electroencephalogram signal is a random, non-stationary and weak signal, usually only 0.2-1 millivolt, with strong randomness, non-stationarity, nonlinearity and complex background noise, and emotion recognition typically requires high-dimensional electroencephalogram data of large volume for comprehensive observation. In addition, because of the complex diversity of emotions, the same person may respond differently to the same piece of music, while different people may respond identically to different pieces of music; these are the difficulties of music-based electroencephalogram emotion recognition.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide the emotion recognition method based on the deep fuzzy forest, which can perform emotion recognition from a small sample data set and has high recognition degree.
The purpose of the invention can be realized by the following technical scheme:
An emotion recognition method based on a deep fuzzy forest comprises the following steps:
1) collecting an electroencephalogram signal;
2) preprocessing the electroencephalogram signal to remove noise;
3) and inputting the electroencephalogram signals into the pre-trained deep fuzzy forest model to obtain emotion recognition results.
Further, in step 3), the deep fuzzy forest model uses multi-granularity scanning to obtain probability vectors of electroencephalogram features from the electroencephalogram signal as the input of the cascade forest; the cascade forest then classifies these probability vectors to obtain the emotion recognition result. Both the multi-granularity scanning and the cascade forest are constructed from fuzzy decision trees.
Further, obtaining the probability vectors of electroencephalogram features as the input of the cascade forest is specifically as follows: the multi-granularity scanning comprises W sliding windows and N_SF fuzzy decision forests. Each sliding window scans the electroencephalogram signal to generate N_1 instances; an instance trained by a fuzzy decision forest generates a probability vector p = [p_1, p_2, ..., p_K]. Each scan of the electroencephalogram signal by a sliding window therefore produces N_2 K-dimensional probability vectors, N_2 = N_SF x N_1. The multi-granularity scanning concatenates these probability vectors into a window vector, and the window vectors generated by all sliding windows are concatenated in series as the input of the cascade forest. The sliding windows are rectangular or square in shape, and the fuzzy decision forest is constructed from fuzzy decision trees. Here W, N_SF, N_1, N_2 and K are positive integers.
The electroencephalogram data acquired in the experiments are sampled at 128 Hz, i.e. one second of signal contains 128 points; an electroencephalogram cap generally has 14, 32 or 64 leads, far fewer than the data points on each lead. In the multi-granularity scanning of electroencephalogram emotion data, and considering that electroencephalogram data, unlike image data, are acquired as long time sequences, the sliding windows of the invention are both rectangular and square in shape, with different sizes for windows of the same shape. Features are thus extracted comprehensively without omission, contain richer information, and better characterize the electroencephalogram signal.
Further, using the cascade forest to classify the electroencephalogram features is specifically as follows: each layer of the cascade forest has a plurality of fuzzy decision forests; the input of the first layer is the probability vector of electroencephalogram features, and the input of every other layer is the output of the previous layer; the output of the last layer of the cascade forest is used to calculate the final accuracy, whose expression is as follows:
Fin(P) = Max{Ave[P_(i x j)]}
P_(i x j) = [P_11, P_12, ..., P_1j; ...; P_i1, P_i2, ..., P_ij]
where Fin(.) is the final accuracy, P is the output of the last layer of the cascade forest, i = 1, 2, ..., N_CF, j = 1, 2, ..., K, N_CF is the number of fuzzy decision forests in each layer of the cascade forest, P_(i x j) is the classification probability of one sample over the N_CF fuzzy decision forests of the last layer, Ave[.] takes the average, and Max takes the maximum. Since the complete fuzzy decision forest output is not fed back into the cascade forest, the deep fuzzy forest model stays simpler while the accuracy of the emotion recognition result is guaranteed.
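The final-accuracy rule above can be restated as a minimal numpy sketch, assuming P is an N_CF x K matrix of per-forest class probabilities for one sample (the array shapes and names here are illustrative, not from the patent):

```python
import numpy as np

def final_decision(P):
    """Fin(P) = Max{Ave[P_ixj]}: average the K-dimensional probability
    vectors of the N_CF forests in the last layer, then take the class
    with the highest mean probability."""
    avg = P.mean(axis=0)                 # Ave over the N_CF forests -> K-vector
    return avg.max(), int(avg.argmax())  # (final accuracy value, class index)

# Two forests (N_CF = 2), two emotion classes (K = 2, e.g. joyful vs sad)
P = np.array([[0.7, 0.3],
              [0.9, 0.1]])
score, label = final_decision(P)  # mean = [0.8, 0.2] -> class 0 wins
```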
Further, in step 3), the pre-training of the deep fuzzy forest model specifically comprises inputting a pre-established electroencephalogram training set into the pre-established deep fuzzy forest model and iteratively optimizing the model based on the evaluation indices until its output meets the preset accuracy requirement.
Further, the deep fuzzy forest model is used to recognize a first emotion and a second emotion from the electroencephalogram signal, the electroencephalogram test set comprises first-emotion samples and second-emotion samples, and the evaluation indices comprise accuracy, precision and recall. The accuracy is calculated as:
accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP is the number of first-emotion samples predicted as the first emotion, TN is the number of second-emotion samples predicted as the second emotion, FP is the number of second-emotion samples predicted as the first emotion, and FN is the number of first-emotion samples predicted as the second emotion;
the precision is calculated as:
precision = TP / (TP + FP)
the recall is calculated as:
recall = TP / (TP + FN)
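Restated as code, a small sketch of the three indices under the standard binary confusion-matrix definitions (the counts below are made-up illustration values):

```python
def evaluation_indices(tp, tn, fp, fn):
    """Accuracy, precision and recall from the binary confusion matrix
    (first emotion = positive class, second emotion = negative class)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)  # of samples predicted "first emotion", fraction correct
    recall = tp / (tp + fn)     # of true "first emotion" samples, fraction found
    return accuracy, precision, recall

acc, prec, rec = evaluation_indices(tp=40, tn=45, fp=5, fn=10)
# acc = 85/100 = 0.85, prec = 40/45, rec = 40/50 = 0.8
```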
further, in the step 2), the preprocessing of the electroencephalogram signal specifically includes filtering, segmenting and/or screening out singular samples of the electroencephalogram signal.
Further, filtering the electroencephalogram signal specifically comprises filtering out power-frequency interference and high- and low-frequency noise, and/or removing electrooculogram, electrocardiogram, electromyogram and electrodermal artifacts.
Further, the step 1) specifically comprises the following steps:
101) selecting a plurality of subjects without emotion expression disorders;
102) having a part of the subjects from step 101) score a plurality of audio segments, wherein the audio can stimulate the subjects to produce the emotion to be recognized;
103) selecting the several highest-scoring audio segments as test audio according to the scoring results;
104) in a pre-established music emotion testing environment, collecting electroencephalogram signals from the remaining subjects using the test audio.
Further, step 104) is specifically: in a pre-established music emotion testing environment, the subject is first brought to a calm state by sitting relaxed with eyes closed; relaxing, soothing music is then played until the subject is completely calm; finally, the electroencephalogram signals of the remaining subjects are collected using the test audio.
Compared with the prior art, the invention has the following advantages:
(1) Recognizing emotion from electroencephalogram data with the deep fuzzy forest method has the advantages of originality, high recognition accuracy, few parameters, applicability to small sample data sets, and ease of understanding. In particular, there are at present no published results on electroencephalogram emotion recognition based on a deep fuzzy forest, so the method is original and offers a new approach to recognizing musical electroencephalogram emotion. The deep fuzzy forest is an ensemble classification algorithm based on fuzzy decision trees; its classification and prediction ability rivals that of a neural network, it can cope with complex and diverse emotions, and it achieves high recognition accuracy. It has few hyperparameters and modest requirements on data volume and computing facilities, so emotion recognition can be performed on small sample data sets. Moreover, introducing fuzzy theory allows fuzzy information in the data to be processed; since much information in everyday life is fuzzy, this brings the method closer to real life and gives it more practical significance.
(2) In the deep fuzzy forest model, the electroencephalogram signal characteristics are acquired by adopting multi-granularity scanning, and the electroencephalogram signal characteristics are identified by adopting cascade forests, so that when the input has high dimensionality, the characteristic learning capability of the deep fuzzy forest model can be further enhanced by the multi-granularity scanning; the cascade forest has structure perception capability, can adaptively determine the number of cascade levels, automatically set the complexity of a model, and has excellent effect even on a small-scale data set.
(3) Although the cascade forest offers a low-cost learning mode that can replace deep neural networks, it was not designed with fuzzy data in mind. The invention proposes a deep fuzzy forest model that introduces fuzzy theory into the deep forest: by changing each decision tree in the cascade forest into a fuzzy decision tree, which improves on the crisp decision-tree algorithm, fuzzy set theory is combined with the traditional decision-tree learning strategy. The model can therefore process fuzzy information in the data, widening the application range of the emotion recognition method and improving the accuracy of the emotion recognition result.
(4) The evaluation indexes comprise accuracy, precision and recall rate, the deep fuzzy forest model is accurately evaluated, and the deep fuzzy forest model is iteratively optimized based on the evaluation indexes, so that the emotion recognition result of the deep fuzzy forest model is more accurate and reliable.
(5) The preprocessing of the electroencephalogram signal comprises filtering, segmentation and screening of singular samples. Filtering removes power-frequency interference, high- and low-frequency noise, and electrooculogram, electrocardiogram, electromyogram and electrodermal interference; segmentation yields more samples; and screening out singular samples excludes outlier data that would mislead emotion recognition. The preprocessed electroencephalogram signal is therefore cleaner, improving the accuracy of subsequent emotion recognition.
(6) By having part of the subjects in the same batch score the audio first, the test audio is obtained with higher reliability, so the collected electroencephalogram signals are of higher quality, the difficulty of subsequent emotion recognition is reduced, and the accuracy of the emotion recognition result is improved.
(7) Before the electroencephalogram signals of the remaining subjects are collected with the test audio, the subjects are brought to complete calm by sitting relaxed with eyes closed while relaxing music is played. Their emotional expression is therefore more accurate, the collected electroencephalogram signals are of higher quality, the difficulty of subsequent emotion recognition is reduced, and the accuracy of the emotion recognition result is improved.
Drawings
FIG. 1 is an overall block diagram of the emotion recognition method based on the deep fuzzy forest of the present invention;
FIG. 2 is a diagram of the hardware connection of the music emotion testing environment according to the present invention;
FIG. 3 is a schematic diagram of a process of inducing pleasant mood in an embodiment of the present invention;
FIG. 4 is a schematic flow chart of filtering an electroencephalogram signal according to the present invention;
FIG. 5 is a schematic view of a process of feature extraction using multi-granularity scanning in the deep fuzzy forest model according to the present invention;
FIG. 6 is a schematic diagram of a cascaded forest structure in the deep fuzzy forest model of the present invention;
FIG. 7 is a schematic diagram illustrating the overall flow of the emotion recognition method based on the deep fuzzy forest according to the present invention;
FIG. 8 is a schematic flow chart of the emotion recognition method based on the deep fuzzy forest according to the present invention, including a training phase.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Example 1
As shown in fig. 1, this embodiment is an emotion recognition method based on a deep fuzzy forest, mainly comprising the steps of music-emotion electroencephalogram data acquisition, electroencephalogram signal preprocessing, and deep fuzzy forest music emotion recognition. The music electroencephalogram data are acquired through an electroencephalogram music emotion experiment: a quiet, dimly lit experimental environment is first selected, and the subjects are then divided into two groups, one for selecting the experimental material and the other for the test that yields the music-emotion electroencephalogram data. The preprocessing of the electroencephalogram signal comprises three parts: filtering, segmentation and sample screening. The deep fuzzy forest emotion recognition classifies the music electroencephalogram data and comprises two processes: multi-granularity scanning for feature extraction and cascade forest emotion recognition.
As shown in fig. 8, the above steps are described in detail as follows:
1. music emotion electroencephalogram data acquisition S1
The music emotion electroencephalogram data acquisition is carried out by the following steps.
(1) Selecting a subject to be tested
The experiment selects subjects who have received no professional music training, all healthy, without emotion expression disorders, and right-handed (the right hand is the habitual hand). Each subject participates voluntarily and is emotionally calm before the experiment, without severe emotional fluctuation.
(2) Building music emotion testing environment
As shown in fig. 2, the environment required for the experiment comprises a notebook computer, electroencephalogram equipment, sound equipment and electroencephalogram recording software. The electroencephalogram cap transmits signals wirelessly to the software on the computer through its matched USB receiver, and the computer controls the electroencephalogram recording software and plays the audio in real time. To ensure good musical effect, the system uses a Bluetooth loudspeaker as the music medium, forming a simple, easily deployed software and hardware environment.
(3) Selection of Experimental Induction Material
The experiment selects N_e individuals, half men and half women. Half of them are used for selecting the musical material: N_m audio segments capable of stimulating the brain to produce joyful or sad emotion are prepared, half for each emotion, and N_e/2 volunteers are first invited to score these N_m audio segments using an emotion scale. According to the final scores of these N_e/2 volunteers, the N_m1 (N_m1 < N_m) joyful and sad segments with the highest scores are selected as the inducing material for the remaining N_e/2 subjects.
(4) Emotional Induction test
As shown in fig. 3, a sound-proof, dimly lit shielded room is selected and equipped with a chair, a PC, a Bluetooth loudspeaker and a set of electroencephalogram testing devices. The subject is informed of the purpose, procedure and precautions of the experiment, then guided to sit down and fitted with the equipment. The subject sits relaxed on the chair with eyes closed for 2 min to reach a calm state, during which two pieces of relaxing music are played. After the relaxation, the environment is kept quiet for 30 s to ensure that the subject is completely calm.
To avoid negative emotions interfering with positive emotion induction, two experiments are performed in the order positive group then negative group. The induction procedures for joyful and sad emotion are identical except for the music played, and after the experiment each subject has N_0 music-emotion recordings in total.
2. Electroencephalogram signal preprocessing S2
The preprocessing of the electroencephalogram signal includes three parts, which are filtering, segmentation and singular sample screening of the electroencephalogram data, and is described in detail below.
2.1 filtering of EEG signals
As shown in fig. 4, the electroencephalogram signal is a non-stationary random signal and is very weak, so some noise is inevitably introduced during acquisition. Different methods are adopted according to the characteristics of the noise, as follows:
(1) a 50Hz notch filter is adopted to remove power frequency interference of the electroencephalogram signals;
(2) removing high and low frequency noise by using a band-pass filter of 0.5-50 Hz;
(3) and removing electrooculogram, electrocardio, myoelectricity and electrodermal electricity by adopting a wavelet denoising method.
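A minimal numpy sketch of steps (1) and (2), approximating the 50 Hz notch and the 0.5-50 Hz band-pass with ideal FFT masks (the patent does not specify the filter design, and the wavelet denoising of step (3) is omitted here; parameter names are assumptions):

```python
import numpy as np

def notch_bandpass(x, fs, band=(0.5, 50.0), notch=50.0, notch_bw=1.0):
    """Zero out spectral bins at the power-line frequency and outside the
    pass band -- an idealized stand-in for the filters in steps (1)-(2)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[np.abs(f - notch) < notch_bw] = 0.0   # 50 Hz notch
    X[(f < band[0]) | (f > band[1])] = 0.0  # 0.5-50 Hz band-pass
    return np.fft.irfft(X, n=len(x))

fs = 128                          # sampling rate used in the experiments
t = np.arange(512) / fs
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
clean = notch_bandpass(raw, fs)   # 10 Hz component kept, 50 Hz removed
```

In practice the patent's notch and band-pass would be realized with IIR/FIR filters rather than spectral masking; the FFT version is used here only because it makes the pass/stop behavior explicit.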
2.2 segmentation and screening of electroencephalogram signals
In order to obtain more samples, the filtered electroencephalogram data are segmented into epochs of duration L (L < L_0); samples with large baseline drift caused by head movement, body movement and the like are then removed, since data containing abnormal points would mislead the classification. Each subject finally has N_s samples.
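The segmentation and screening step can be sketched as follows; the epoch length and the drift-screening rule (per-lead mean offset against a threshold) are illustrative assumptions, since the patent does not give them:

```python
import numpy as np

def segment(eeg, fs, epoch_sec, drift_thresh):
    """Split a leads x samples recording into fixed-length epochs and drop
    epochs whose baseline drift (per-lead mean offset) is too large."""
    step = int(epoch_sec * fs)
    n_epochs = eeg.shape[1] // step
    epochs = [eeg[:, i * step:(i + 1) * step] for i in range(n_epochs)]
    return [e for e in epochs if np.abs(e.mean(axis=1)).max() < drift_thresh]

rng = np.random.default_rng(0)
eeg = rng.standard_normal((14, 128 * 10))  # 14 leads, 10 s at 128 Hz
eeg[:, 128 * 8:] += 5.0                    # simulate drift in the last 2 s
samples = segment(eeg, fs=128, epoch_sec=2, drift_thresh=1.0)
# 5 epochs of 2 s each; the drifted final epoch is screened out -> 4 remain
```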
3. Deep fuzzy forest music emotion recognition S3
The deep fuzzy forest is inspired by neural networks and handles feature relations and classification with multi-granularity scanning and a cascade forest. When the input is high-dimensional, its representation learning ability can be further enhanced by multi-granularity scanning. The cascade forest is structure-aware: it can adaptively determine the number of cascade levels and automatically set the model complexity, performing well even on small-scale data sets. Both the multi-granularity scanning and the cascade forest are constructed from fuzzy decision trees.
3.1 feature extraction based on Multi-granular scanning
As shown in fig. 5, the multi-granularity scanning process includes the following steps:
(1) The preprocessed electroencephalogram signal is S = [s_1, s_2, ..., s_N]^T, S in R^(N x L_0), where N is the number of leads and L_0 is the number of data points on each lead. S is used directly in the multi-granularity scanning.
(2) The window in the multi-granularity scanning is x = [x_1, x_2, ..., x_n]^T, where x_i = [x_i1, x_i2, ..., x_il], i = 1, 2, ..., n, x in R^(n x l), n <= N and l <= L_0. The multi-granularity scanning generates N_1 instances used in the subsequent training of the fuzzy decision forests, N_1 = (N - n + 1) x (L_0 - l + 1).
(3) The multi-granularity scanning produces probability vectors for concatenation: if the music electroencephalogram emotion is recognized among K classes, one instance x trained by a fuzzy decision forest generates a probability vector p = [p_1, p_2, ..., p_K]; the multi-granularity scanning part has N_SF fuzzy decision forests in total, so one sliding-window size produces N_2 K-dimensional probability vectors, N_2 = N_SF x N_1. The multi-granularity scanning concatenates these probability vectors into a new vector P_S (the window vector), and the window vectors generated by all sliding windows are concatenated in series as the input of the cascade forest. In this embodiment the sliding windows are square or rectangular, with different sizes within each shape, and the fuzzy decision forest is constructed by replacing the decision trees in a random forest with fuzzy decision trees.
(4) W windows are selected, so finally N_3 probability vectors are generated, N_3 = sum over i = 1, ..., W of N_2i, where N_2i is the number of P_S vectors generated by the i-th sliding window.
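A minimal sketch of the instance-extraction arithmetic in steps (1)-(2): sliding an n x l window over the N x L_0 signal yields N_1 = (N - n + 1) x (L_0 - l + 1) instances. Forest training itself is omitted, and the concrete shapes below are illustrative:

```python
import numpy as np

def scan_instances(S, n, l):
    """Extract all n x l sub-windows of the N x L0 EEG matrix S, one
    flattened instance per window position, as in the multi-granularity scan."""
    N, L0 = S.shape
    return np.array([S[r:r + n, c:c + l].ravel()
                     for r in range(N - n + 1)
                     for c in range(L0 - l + 1)])

S = np.arange(14 * 128, dtype=float).reshape(14, 128)  # 14 leads, 1 s at 128 Hz
inst = scan_instances(S, n=2, l=16)
# N_1 = (14 - 2 + 1) * (128 - 16 + 1) = 13 * 113 = 1469 instances of length 32
```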
3.2 cascaded forest emotion recognition
As shown in fig. 6, the deep fuzzy forest builds deep learning through the cascade forest: the output of the multi-granularity scanning is used as the input of the cascade forest, which classifies the features generated by the scan, and each layer of the cascade forest processes the probabilities independently. Except for the first layer, the input of every layer depends on the output of the previous layer. The specific cascade process comprises the following steps:
(1) Enhanced feature generation: in the K-class classification of brain emotion, suppose p_c = [p_c1, p_c2, ..., p_cK] is the probability vector generated by fuzzy decision forest c, and each layer of the cascade forest has N_CF fuzzy decision forests; the enhanced feature vector generated by each layer then has dimension N_4 = K x N_CF and is denoted p_crf = [p_1, p_2, ..., p_(N_CF)].
(2) output of each layer of the cascade forest: suppose the cascade has N_L layers in total; then the output of layer N_L − 1 is:
(3) accuracy calculation: the last layer of the cascade forest is not the input of a further layer; it generates N_CF K-dimensional probability vectors, from which the final accuracy is calculated. The final accuracy is determined by the following equations:
Fin(P) = Max{Ave[P_{i×j}]}
P_{i×j} = [P_{11}, P_{12}, …, P_{1j}; …; P_{i1}, P_{i2}, …, P_{ij}]
where i = 1, 2, …, N_CF, j = 1, 2, …, K, and P_{i×j} gathers the classification probabilities that one sample receives from the N_CF forests in the last layer of the cascade forest; Ave denotes averaging. The final classification result is the probability of the happy positive emotion and of the sad negative emotion, and the emotion with the higher probability is taken as the recognized emotion.
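A minimal sketch of the final-accuracy rule Fin(P) = Max{Ave[P_{i×j}]}: average the K-dimensional probability vectors produced by the N_CF forests of the last layer, then take the class with the highest averaged probability. The numbers below are invented for illustration.

```python
import numpy as np

def final_prediction(forest_probs):
    """forest_probs: (N_CF, K) array of class probabilities that one sample
    receives from the N_CF fuzzy decision forests in the last cascade layer."""
    avg = forest_probs.mean(axis=0)               # Ave over forests i = 1..N_CF
    return int(np.argmax(avg)), float(avg.max())  # winning class, Fin(P)

# three forests, two classes: index 0 = happy/positive, 1 = sad/negative
P = np.array([[0.7, 0.3],
              [0.6, 0.4],
              [0.8, 0.2]])
label, fin = final_prediction(P)
print(label, round(fin, 3))   # 0 0.7
```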
3.3 training of deep fuzzy forest models
The system adopts three indexes, accuracy, precision and recall, to optimize the structure and parameters of the deep fuzzy forest model. The calculation formula of each index is as follows:

accuracy = (TP + TN) / (TP + TN + FP + FN)
precision = TP / (TP + FP)
recall = TP / (TP + FN)

where TP is the number of positive samples predicted as positive, TN is the number of negative samples predicted as negative, FP is the number of negative samples predicted as positive, and FN is the number of positive samples predicted as negative.
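The three indexes reduce to the standard confusion-matrix formulas; the helper below computes them from the four counts (the counts in the example are made up).

```python
def confusion_metrics(tp, tn, fp, fn):
    """Accuracy, precision and recall from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # correct predictions / all samples
    precision = tp / (tp + fp)                  # correct positives / predicted positives
    recall = tp / (tp + fn)                     # correct positives / actual positives
    return accuracy, precision, recall

acc, prec, rec = confusion_metrics(tp=40, tn=35, fp=15, fn=10)
print(acc, rec)   # 0.75 0.8
```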
As shown in fig. 7, when evaluating the electroencephalogram music emotion recognition accuracy of the deep fuzzy forest, the accuracy level that the model is expected to reach is set in advance, and the system automatically adjusts the parameters of the deep fuzzy forest. If the accuracy does not meet the set requirement, the following parameters are adjusted: the window size (N × l) and number (W) of the multi-granularity scan, the number of layers (N_L) of the cascade forest, and the number of fuzzy decision forests per layer (N_CF).
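The adjustment loop of fig. 7 can be sketched as a simple search over those four hyperparameters, stopping as soon as the preset accuracy level is met. `train_eval` and the toy scorer below are placeholders for actually building and scoring a deep fuzzy forest; they are assumptions, not the patent's procedure.

```python
import itertools

def tune_deep_forest(train_eval, target_acc, grid):
    """Try window size, window count W, cascade depth N_L and forests per
    layer N_CF until train_eval reports the preset accuracy level."""
    best = None
    for cfg in itertools.product(grid["window_size"], grid["n_windows"],
                                 grid["n_layers"], grid["n_forests"]):
        acc = train_eval(*cfg)
        if best is None or acc > best[0]:
            best = (acc, cfg)
        if acc >= target_acc:   # stop once the preset accuracy is met
            return acc, cfg
    return best                 # otherwise, best configuration seen

# toy stand-in scorer: favors deeper cascades with more forests per layer
fake_eval = lambda w, nw, nl, ncf: 0.5 + 0.05 * nl + 0.02 * ncf
acc, cfg = tune_deep_forest(fake_eval, target_acc=0.75,
                            grid={"window_size": [4, 8], "n_windows": [2],
                                  "n_layers": [2, 4], "n_forests": [2, 4]})
print(cfg)   # (4, 2, 4, 4)
```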
4. Acquisition of emotion recognition result S4
The new, preprocessed music emotion data are loaded into the trained deep fuzzy forest model to obtain the emotion recognition result.
The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.
Claims (10)
1. An emotion recognition method based on a deep fuzzy forest, characterized by comprising the following steps:
1) collecting an electroencephalogram signal;
2) preprocessing the electroencephalogram signal to remove noise;
3) inputting the electroencephalogram signal into a pre-trained deep fuzzy forest model to obtain an emotion recognition result.
2. The emotion recognition method based on the deep fuzzy forest according to claim 1, wherein in step 3), the deep fuzzy forest model adopts multi-granularity scanning to obtain probability vectors of electroencephalogram signal features from the electroencephalogram signal as the input of a cascade forest, and adopts the cascade forest to identify the probability vectors of the electroencephalogram signal features to obtain the emotion recognition result, wherein both the multi-granularity scanning and the cascade forest are constructed with fuzzy decision trees.
3. The emotion recognition method based on the deep fuzzy forest according to claim 2, wherein the probability vectors of the electroencephalogram signal features are obtained as follows: the multi-granularity scanning comprises W sliding windows and N_SF fuzzy decision forests; each sliding window scans the electroencephalogram signal to generate N_1 instances, and each instance trained by a fuzzy decision forest generates a probability vector p = [p_1, p_2, …, p_K]; each scan of the electroencephalogram signal by a sliding window produces N_2 K-dimensional probability vectors, N_2 = N_SF × N_1; the multi-granularity scanning concatenates these probability vectors into a window vector, and the window vectors generated by all sliding windows are concatenated in series as the input of the cascade forest; the shape of the sliding window comprises a rectangle and a square, and the fuzzy decision forest is constructed with fuzzy decision trees.
4. The emotion recognition method based on the deep fuzzy forest according to claim 2, wherein the cascade forest identifies the electroencephalogram signal features by providing a plurality of fuzzy decision forests on each layer of the cascade forest; in the cascade forest, except for the first layer, the input of every layer is the output of the previous layer; the output of the last layer of the cascade forest is used to calculate the final accuracy, whose calculation expressions are:
Fin(P) = Max{Ave[P_{i×j}]}
P_{i×j} = [P_{11}, P_{12}, …, P_{1j}; …; P_{i1}, P_{i2}, …, P_{ij}]
where Fin (.) is the final accuracy and P is the last in the cascading forestOutput of one layer, i ═ 1,2, …, NCF,j=1,2,…,K,NCFFor the number of fuzzy decision forests in each layer of the cascade forest, Pi×jIs that one sample is in the last layer of the cascade forest NCFThe classification probability, Ave [ ] of a fuzzy decision forest]For averaging, Max is taken as the maximum.
5. The emotion recognition method based on the deep fuzzy forest according to claim 1, wherein in step 3), the pre-training of the deep fuzzy forest model specifically comprises inputting a pre-established electroencephalogram signal training set into the pre-established deep fuzzy forest model and iteratively optimizing the model based on evaluation indexes until its output meets a preset accuracy requirement.
6. The emotion recognition method based on the deep fuzzy forest according to claim 5, wherein the deep fuzzy forest model recognizes a first emotion and a second emotion from the electroencephalogram signal, the electroencephalogram signal training set comprises first emotion samples and second emotion samples, and the evaluation indexes comprise accuracy, precision and recall; the calculation expression of the accuracy is:
accuracy = (TP + TN) / (TP + TN + FP + FN)
where TP is the number of first emotion samples predicted as the first emotion, TN is the number of second emotion samples predicted as the second emotion, FP is the number of second emotion samples predicted as the first emotion, and FN is the number of first emotion samples predicted as the second emotion;
the calculation expression of the precision is:
precision = TP / (TP + FP)
the calculation expression of the recall is:
recall = TP / (TP + FN)
7. The emotion recognition method based on the deep fuzzy forest according to claim 1, wherein in step 2), preprocessing the electroencephalogram signal specifically comprises filtering, segmenting and/or screening singular samples of the electroencephalogram signal.
8. The emotion recognition method based on the deep fuzzy forest according to claim 7, wherein filtering the electroencephalogram signal specifically comprises removing power frequency interference, removing high- and low-frequency noise, and/or removing electro-oculogram, electrocardiogram, electromyogram and skin conductance artifacts from the electroencephalogram signal.
9. The emotion recognition method based on the deep fuzzy forest as claimed in claim 1, wherein the step 1) specifically comprises the following steps:
101) selecting a plurality of subjects without emotion expression disorders;
102) selecting a part of the subjects from step 101) to score a plurality of audio segments, wherein the audio is capable of stimulating the subjects to generate the emotion to be recognized;
103) selecting the several highest-scoring audio segments as the test audio according to the scoring results;
104) collecting electroencephalogram signals from the remaining subjects using the test audio in a pre-established music emotion testing environment.
10. The emotion recognition method based on the deep fuzzy forest according to claim 9, wherein step 104) specifically comprises: in the pre-established music emotion testing environment, first calming the subject by having the subject sit relaxed with eyes closed; then playing soothing music until the subject is completely calm; and finally collecting the electroencephalogram signals of the remaining subjects using the test audio.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911204760.3A CN111000556A (en) | 2019-11-29 | 2019-11-29 | Emotion recognition method based on deep fuzzy forest |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111000556A true CN111000556A (en) | 2020-04-14 |
Family
ID=70113455
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911204760.3A Pending CN111000556A (en) | 2019-11-29 | 2019-11-29 | Emotion recognition method based on deep fuzzy forest |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111000556A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109389037A (en) * | 2018-08-30 | 2019-02-26 | China University of Geosciences (Wuhan) | A sentiment classification method based on deep forest and transfer learning
CN109480833A (en) * | 2018-08-30 | 2019-03-19 | Beihang University | Preprocessing and recognition method for epileptic patients' EEG signals based on artificial intelligence
CN110070133A (en) * | 2019-04-24 | 2019-07-30 | Beijing University of Technology | A brain function network classification method based on deep forest
Non-Patent Citations (2)
Title |
---|
曹健: "《深度级联模糊决策森林的设计与研究》", 《中国优秀博硕士学位论文全文数据库(硕士) 信息科技辑》 * |
金雨露 等: "基于深度森林的脑电情绪识别研究", 《软件导刊》 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111696670A (en) * | 2020-06-16 | 2020-09-22 | 广州三瑞医疗器械有限公司 | Intelligent prenatal fetus monitoring interpretation method based on deep forest |
WO2022067524A1 (en) * | 2020-09-29 | 2022-04-07 | 香港教育大学 | Automatic emotion recognition method and system, computing device and computer readable storage medium |
CN113553896A (en) * | 2021-03-25 | 2021-10-26 | 杭州电子科技大学 | Electroencephalogram emotion recognition method based on multi-feature deep forest |
CN113553896B (en) * | 2021-03-25 | 2024-02-09 | 杭州电子科技大学 | Electroencephalogram emotion recognition method based on multi-feature depth forest |
CN113208633A (en) * | 2021-04-07 | 2021-08-06 | 北京脑陆科技有限公司 | Emotion recognition method and system based on EEG brain waves |
CN115919313A (en) * | 2022-11-25 | 2023-04-07 | 合肥工业大学 | Facial myoelectricity emotion recognition method based on space-time characteristics |
CN115919313B (en) * | 2022-11-25 | 2024-04-19 | 合肥工业大学 | Facial myoelectricity emotion recognition method based on space-time characteristics |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110507335B (en) | Multi-mode information based criminal psychological health state assessment method and system | |
CN109157231B (en) | Portable multichannel depression tendency evaluation system based on emotional stimulation task | |
CN111000556A (en) | Emotion recognition method based on deep fuzzy forest | |
Zheng et al. | EEG-based emotion classification using deep belief networks | |
JP3310498B2 (en) | Biological information analyzer and biological information analysis method | |
CN110353702A (en) | A kind of emotion identification method and system based on shallow-layer convolutional neural networks | |
CN109784023B (en) | Steady-state vision-evoked electroencephalogram identity recognition method and system based on deep learning | |
CN107220591A (en) | Multi-modal intelligent mood sensing system | |
CN112656427A (en) | Electroencephalogram emotion recognition method based on dimension model | |
CN110946576A (en) | Visual evoked potential emotion recognition method based on width learning | |
CN113729707A (en) | FECNN-LSTM-based emotion recognition method based on multi-mode fusion of eye movement and PPG | |
CN106073706A (en) | A kind of customized information towards Mini-mental Status Examination and audio data analysis method and system | |
CN112488002B (en) | Emotion recognition method and system based on N170 | |
CN110135285B (en) | Electroencephalogram resting state identity authentication method and device using single-lead equipment | |
CN111920420B (en) | Patient behavior multi-modal analysis and prediction system based on statistical learning | |
Kim et al. | Wedea: A new eeg-based framework for emotion recognition | |
CN110881975A (en) | Emotion recognition method and system based on electroencephalogram signals | |
CN115640827B (en) | Intelligent closed-loop feedback network method and system for processing electrical stimulation data | |
Li et al. | Multi-modal emotion recognition based on deep learning of EEG and audio signals | |
Nguyen et al. | A potential approach for emotion prediction using heart rate signals | |
Pratiwi et al. | EEG-based happy and sad emotions classification using LSTM and bidirectional LSTM | |
CN117883082A (en) | Abnormal emotion recognition method, system, equipment and medium | |
Immanuel et al. | Recognition of emotion with deep learning using EEG signals-the next big wave for stress management in this covid-19 outbreak | |
Wijayanto et al. | Biometric identification based on EEG signal with photo stimuli using Hjorth descriptor | |
CN115690528A (en) | Electroencephalogram signal aesthetic evaluation processing method, device, medium and terminal across main body scene |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200414 |