CN114391846A - Emotion recognition method and system based on filtering type feature selection - Google Patents


Info

Publication number
CN114391846A
CN114391846A (application CN202210069288.2A)
Authority
CN
China
Prior art keywords
signal
sample set
original
emotion recognition
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210069288.2A
Other languages
Chinese (zh)
Other versions
CN114391846B (en)
Inventor
吴万庆
幸运
蒋明哲
张献斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202210069288.2A
Publication of CN114391846A
Application granted
Publication of CN114391846B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for; Heart catheters for measuring blood pressure
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/33 Heart-related electrical modalities specially adapted for cooperation with other devices
    • A61B5/346 Analysis of electrocardiograms
    • A61B5/349 Detecting specific parameters of the electrocardiograph cycle
    • A61B5/352 Detecting R peaks, e.g. for synchronising diagnostic apparatus; Estimating R-R interval
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Physiology (AREA)
  • Social Psychology (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Educational Technology (AREA)
  • Developmental Disabilities (AREA)
  • Pulmonology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses an emotion recognition method and system based on filtering-type feature selection. The method comprises the following steps: acquiring original electrocardiosignals, original pulse wave signals and original skin electric signals to obtain a single-signal sample set; performing signal denoising, down-sampling and decomposition on the single-signal sample set to obtain preprocessed signals; performing time-domain, frequency-domain and nonlinear feature extraction on the preprocessed signals to obtain signal features; performing feature selection and feature fusion on the signal features to construct a fused signal sample set; and training a classification model on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model. The system comprises a data acquisition module, a data preprocessing module, a feature extraction module, a feature selection module and a recognition module. The method and the device can express emotion features more completely and thereby improve the performance of the emotion recognition model. The invention can be widely applied in the field of emotion recognition.

Description

Emotion recognition method and system based on filtering type feature selection
Technical Field
The invention relates to the field of emotion recognition, in particular to an emotion recognition method and system based on filtering type feature selection.
Background
Nowadays, with the accelerating pace of life, people face growing workloads and increasing stress in family life. As a result, more and more people find their mental resilience weakened by mounting pressure, which produces psychological burden. A person's emotional state directly reflects their degree of well-being over a period of time and helps guide subsequent adjustment or intervention. Currently, emotion recognition can be performed from body images, facial expressions, voice and text, but in such approaches the actual emotion is easily hidden under the subject's conscious control, so the recognition results can be inaccurate.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide an emotion recognition method and system based on filtering-type feature selection that can express emotion features more completely and thereby improve the performance of the emotion recognition model.
The first technical scheme adopted by the invention is as follows: a method for emotion recognition based on filtering feature selection, comprising the steps of:
acquiring an original electrocardiosignal, an original pulse wave signal and an original skin electric signal to obtain a single signal sample set;
carrying out signal denoising processing, down-sampling processing and decomposition processing on the single signal sample set to obtain a preprocessed signal;
performing time domain feature extraction, frequency domain feature extraction and nonlinear feature extraction on the preprocessed signals to obtain signal features;
carrying out feature selection and feature fusion on the signal features, and constructing to obtain a fusion signal sample set;
and training a classification model on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model.
Further, the method further comprises:
training a classification model based on a single signal sample set to obtain a single signal emotion recognition model;
and comparing the recognition performance of the single-signal emotion recognition model with that of the multi-physiological-signal emotion recognition model.
Further, the step of acquiring an original electrocardiosignal, an original pulse wave signal and an original skin electric signal to obtain a single signal sample set specifically includes:
synchronously acquiring the original electrocardiosignals, original pulse wave signals and original skin electric signals based on a multi-channel physiological parameter instrument and a Shimmer GSR+ device;
and constructing a single signal sample set according to the original electrocardiosignals, the original pulse wave signals and the original skin electric signals.
Further, the step of performing signal denoising processing, down-sampling processing and decomposition processing on the single signal sample set to obtain a preprocessed signal specifically includes:
denoising the original electrocardiosignals in the single signal sample set based on a Butterworth filter to obtain pure electrocardiosignals;
denoising the original pulse wave signals in the single signal sample set based on a Butterworth filter to obtain pure pulse wave signals;
down-sampling the original skin electrical signal in the single signal sample set to obtain a down-sampled skin electrical signal;
decomposing the downsampled skin electrical signal based on CvxEDA to obtain a skin conductance level and a skin conductance response;
pure electrocardiosignals, pure pulse wave signals, skin conductance level and skin conductance response are taken as preprocessed signals.
Further, the signal features comprise electrocardiosignal features, pulse wave signal features and skin electric signal features. The electrocardiosignal features comprise: RR interval standard deviation; root mean square of successive RR interval differences; RR interval mean; number of adjacent RR interval differences greater than 50 ms; percentage of adjacent RR interval differences greater than 50 ms; multi-scale entropy; heartbeats per minute; ultra-low-frequency power spectrum; low-frequency power spectrum; high-frequency power spectrum; and low-frequency to high-frequency ratio. The pulse wave signal features comprise: standard deviation of adjacent main-wave peak intervals; root mean square of adjacent main-wave peak interval differences; mean of adjacent main-wave peak intervals; first power energy spectrum; second power energy spectrum; main-wave width; main-wave height; main-wave width-to-height ratio; maximum of main-wave peak amplitudes; standard deviation of main-wave peak amplitudes; mean of main-wave peak amplitudes; pulses per minute; and multi-scale entropy. The skin electric signal features comprise: mean of the skin conductance level; standard deviation of the skin conductance level; area under the skin conductance level curve; standard deviation of the skin conductance response; area under the skin conductance response curve; number of peaks of the skin conductance response curve; maximum peak amplitude of the skin conductance response curve; mean peak amplitude of the skin conductance response curve; rise time of the skin conductance response curve; and skin electric power spectrum.
Further, the step of performing feature selection and feature fusion on the signal features and constructing a fused signal sample set includes:
based on a data cutting method, intercepting data before and after the emotion jump moment, performing supervised filtering type feature selection, and constructing a feature subset;
normalizing the signal features in the feature subset to obtain normalized features;
and performing series-stage fusion on the normalized features, and constructing a fusion signal sample set.
Further, the step of intercepting data before and after the emotion jump moment and performing supervised filtering type feature selection based on the data cutting method to construct a feature subset specifically includes:
based on a data cutting method, intercepting data one minute before and after the emotion jump moment and performing feature extraction to obtain a feature sample set;
if the feature sample set follows a normal distribution, determining and retaining significantly different features based on the paired-sample T test;
if the feature sample set follows a non-normal distribution, determining and retaining significantly different features based on the Wilcoxon test;
and performing feature selection on the significant difference features based on Pearson correlation analysis, and constructing a feature subset according to the selected features.
The second technical scheme adopted by the invention is as follows: an emotion recognition system based on filtering feature selection, comprising:
the data acquisition module is used for acquiring an original electrocardiosignal, an original pulse wave signal and an original skin electric signal to obtain a single signal sample set;
the data preprocessing module is used for performing signal denoising processing, down-sampling processing and decomposition processing on the single signal sample set to obtain a preprocessed signal;
the characteristic extraction module is used for carrying out time domain characteristic extraction, frequency domain characteristic extraction and nonlinear characteristic extraction on the preprocessed signals to obtain signal characteristics;
the characteristic selection module is used for carrying out characteristic selection and characteristic fusion on the signal characteristics and constructing a fusion signal sample set;
and the recognition module is used for training a classification model on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model.
The method and the system have the following beneficial effects: the invention recognises that actual emotion changes continuously and uses the physiological signal features that change significantly at emotion jump moments as the input of the machine learning classification model, which improves the accuracy of the emotion recognition result; it also takes the normality of the data distribution into account and applies different statistical tests in different cases, which supports better feature selection and further improves the accuracy of the emotion recognition result.
Drawings
FIG. 1 is a flow chart of the steps of a method of emotion recognition based on filtering feature selection of the present invention;
FIG. 2 is a block diagram of the emotion recognition system based on filtering feature selection according to the present invention;
FIG. 3 is a diagram illustrating steps for creating feature subsets according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
As shown in fig. 1, the present invention provides an emotion recognition method based on filtering feature selection, which includes the following steps:
s1, collecting an original electrocardiosignal, an original pulse wave signal and an original skin electric signal to obtain a single signal sample set;
S1.1, synchronously acquiring the original electrocardiosignals, original pulse wave signals and original skin electric signals based on a multi-channel physiological parameter instrument and a Shimmer GSR+ device;
S1.2, constructing a single-signal sample set from the original electrocardiosignals, original pulse wave signals and original skin electric signals.
Specifically, electrocardiosignals, pulse wave signals and skin electric signals are selected as emotion recognition samples. Several physically healthy volunteers are selected as experimental subjects. Each volunteer wears a multi-channel physiological parameter instrument and a Shimmer GSR+ device fitted with an electrocardiogram sensor (ECG), a photoelectric pulse wave sensor (PPG) and a skin electric sensor (GSR), sampled at 400 Hz, 200 Hz and 200 Hz respectively. The acquired data form the single-signal sample set.
S2, performing signal denoising, down-sampling and decomposition processing on the single signal sample set to obtain a preprocessed signal;
specifically, the human physiological signals are very weak physiological low-frequency electric signals, the maximum amplitude is usually not more than 5mV, and the signal frequency is between 0.05 Hz and 100 Hz. Is susceptible to the surrounding environment and the instrument itself, and therefore requires signal preprocessing.
S2.1, denoising the original electrocardiosignals in the single signal sample set based on a Butterworth filter to obtain pure electrocardiosignals;
specifically, the electrocardiosignal is subjected to Baseline drift filtering below 0.5Hz by a Butterworth high-pass filter, electromyographic interference above 45Hz is filtered by a low-pass filter, and finally power frequency interference is filtered by a notch filter 50 Hz.
S2.2, denoising the original pulse wave signals in the single signal sample set based on the Butterworth filter to obtain pure pulse wave signals;
specifically, the pulse wave signals are filtered by a Butterworth high-pass filter to remove baseline drift below 0.5Hz, and then the low-pass filter is used for removing electromyographic interference above 20 Hz.
S2.3, performing down-sampling on the original skin electrical signal in the single signal sample set to obtain a down-sampled skin electrical signal;
s2.4, decomposing the downsampling skin electrical signal based on CvxEDA to obtain a skin conductance level and a skin conductance response;
specifically, the electrical skin signal is down-sampled to 4Hz and then decomposed into Skin Conductance Levels (SCL) and Skin Conductance Responses (SCRs) by CvxEDA.
And S2.5, taking the pure electrocardiosignals, the pure pulse wave signals, the skin conductance level and the skin conductance reaction as preprocessed signals.
S3, performing time domain feature extraction, frequency domain feature extraction and nonlinear feature extraction on the preprocessed signals to obtain signal features;
specifically, the physiological signal is used as a data source to perform emotion state classification, identification and analysis, representative features need to be extracted from the physiological signal to serve as important indexes of emotion analysis, time domain features, frequency domain features and nonlinear features of the physiological signal are selected, a sliding window of the physiological signal is set to be 60 seconds, and the step length is 1 second.
Electrocardiosignal features (11 in total): RR interval standard deviation, root mean square of successive RR interval differences, RR interval mean, number of adjacent RR interval differences greater than 50 ms, percentage of adjacent RR interval differences greater than 50 ms, multi-scale entropy, heartbeats per minute, ultra-low-frequency power spectrum, low-frequency power spectrum, high-frequency power spectrum, and low-frequency to high-frequency ratio.
Pulse wave signal features (13 in total): standard deviation of adjacent main-wave peak intervals, root mean square of adjacent main-wave peak interval differences, mean of adjacent main-wave peak intervals, power energy spectrum (P < 0.15), power energy spectrum (0.15 < P < 0.4), main-wave width, main-wave height, main-wave width-to-height ratio, maximum of main-wave peak amplitudes, standard deviation of main-wave peak amplitudes, mean of main-wave peak amplitudes, pulses per minute, and multi-scale entropy.
Skin electric signal features (10 in total): mean, standard deviation and curve area of the skin conductance level; standard deviation, curve area, number of peaks, maximum peak amplitude, mean peak amplitude and rise time of the skin conductance response; and the skin electric power spectrum.
In total, 34 time-domain, frequency-domain and nonlinear features are extracted.
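Several of the ECG time-domain features listed above (RR mean, SDNN, RMSSD, NN50, pNN50, beats per minute) can be computed directly from an RR-interval series. This stdlib sketch uses our own function and key names and assumes RR intervals given in milliseconds; the spectral and multi-scale-entropy features would need additional tooling:

```python
import math
from statistics import mean, stdev

def hrv_time_features(rr_ms):
    """Time-domain HRV features from a list of RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    nn50 = sum(1 for d in diffs if abs(d) > 50)          # adjacent diffs > 50 ms
    return {
        "mean_rr": mean(rr_ms),                           # RR interval mean
        "sdnn": stdev(rr_ms),                             # RR interval std deviation
        "rmssd": math.sqrt(mean(d * d for d in diffs)),   # RMS of successive diffs
        "nn50": nn50,
        "pnn50": 100.0 * nn50 / len(diffs),               # percentage of NN50
        "bpm": 60000.0 / mean(rr_ms),                     # heartbeats per minute
    }
```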
S4, performing feature selection and feature fusion on the signal features, and constructing a fusion signal sample set;
specifically, the main purpose of feature selection is to select the most beneficial relevant features to the algorithm from the original data, reduce the dimensionality of the data and the difficulty of the learning task, and improve the efficiency of the model. The method selects a supervised filtering type feature selection method, wherein the filtering type method refers to that when the prediction capability of the selected features is evaluated, certain heuristic criteria based on information statistics are used, the selected feature subsets are different according to different evaluation criteria, but noise features can be eliminated quickly, and the method is high in calculation efficiency and high in universality; supervised refers to measuring the relationship between features and categories and features to determine feature subsets.
S4.1, intercepting data before and after the emotion jump moment based on a data cutting method, performing supervised filtering-type feature selection, and constructing a feature subset;
S4.1.1, intercepting data one minute before and after the emotion jump moment based on the data cutting method, and performing feature extraction to obtain a feature sample set;
S4.1.2, if the feature sample set follows a normal distribution, determining and retaining significantly different features based on the paired-sample T test;
S4.1.3, if the feature sample set follows a non-normal distribution, determining and retaining significantly different features based on the Wilcoxon test;
S4.1.4, performing feature selection on the significantly different features based on Pearson correlation analysis, and constructing the optimal feature subset from the selected features.
Specifically, referring to fig. 3, data from one minute before and after each emotion jump of the physiological signals are intercepted through data segmentation, and features are extracted from them to construct a feature sample set. The Shapiro-Wilk (S-W) statistical test determines whether the sample set follows a normal distribution. If it does, the parametric paired-sample T test (t-test) is used to judge significant differences (P < 0.05) in the physiological signals across three emotion jump states: neutral to negative, negative to neutral, and neutral to positive. If it follows a non-normal distribution, the Wilcoxon test is used to judge significant differences (P < 0.05) at the emotion jump moments. Features without significant differences are eliminated, and Pearson correlation analysis is then used for feature selection: a correlation coefficient R greater than 0.9 is treated as strong correlation, and only one feature of such a pair is kept. In total, 12 physiological signal features are selected. The electrocardiosignal features are: number of adjacent RR interval differences greater than 50 ms, heartbeats per minute, low-frequency power spectrum, and low-frequency to high-frequency ratio. The pulse wave signal features are: standard deviation of adjacent main-wave peak intervals, root mean square of adjacent main-wave peak interval differences, main-wave width-to-height ratio, and maximum of main-wave peak amplitudes. The skin electric signal features are: mean of the skin conductance level, standard deviation of the skin conductance response, maximum peak amplitude of the skin conductance response, and rise time of the skin conductance response curve.
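The Pearson pruning step (keep one feature of any pair with |R| > 0.9) can be sketched in plain Python; the Shapiro-Wilk, paired T and Wilcoxon tests of S4.1.2 and S4.1.3 would come from a statistics package and are omitted here. Names and the keep-the-first-feature tie-break are illustrative assumptions:

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sample lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def prune_correlated(features, threshold=0.9):
    """features: dict of name -> list of sample values.

    For each strongly correlated pair (|R| > threshold), keep only the
    first feature; return the names of the retained features.
    """
    names = list(features)
    dropped = set()
    for i, n1 in enumerate(names):
        for n2 in names[i + 1:]:
            if n1 in dropped or n2 in dropped:
                continue
            if abs(pearson(features[n1], features[n2])) > threshold:
                dropped.add(n2)  # keep the first of the pair
    return [n for n in names if n not in dropped]
```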
S4.2, performing normalization processing on the signal features in the feature subset to obtain normalized features.
Specifically, zero-mean normalization (z-score normalization) is applied to the optimal feature subset of the three physiological signals, so that the processed data have mean 0 and standard deviation 1. The calculation formula is as follows:
x_i' = (x_i - mean{x}) / std{x}
In the above formula, x_i represents an original feature value, and mean{x} and std{x} represent the mean and standard deviation of the original feature set, respectively.
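A minimal sketch of the z-score formula above (the population standard deviation is assumed, matching the stated mean 0 / standard deviation 1 result; the function name is ours):

```python
from statistics import mean, pstdev

def zscore(values):
    """Zero-mean, unit-variance normalisation: x' = (x - mean) / std."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]
```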
And S4.3, performing serial-stage fusion on the normalized features, and constructing a fusion signal sample set.
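Series-stage fusion here amounts to concatenating the normalized per-modality feature vectors into one sample; a trivial sketch (the function name is ours):

```python
def fuse(*feature_vectors):
    """Series-stage fusion: concatenate per-modality feature vectors
    (e.g. ECG, pulse wave and skin electric features) into one vector."""
    fused = []
    for v in feature_vectors:
        fused.extend(v)
    return fused
```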
And S5, training a classification model on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model.
Specifically, the emotion recognition problem is first converted into a three-class classification problem, with positive emotion recorded as +1, neutral emotion as 0, and negative emotion as -1; the output results are +1, 0 and -1, so that the three classes of emotion samples can be identified. A K-nearest-neighbour (KNN) classifier is used: if most of the K nearest neighbours of a sample in the feature space belong to a certain class, the sample is assigned to that class and takes on the characteristics of that class. The classification decision thus depends only on the class of the nearest sample or samples, i.e. on a very small number of neighbouring samples. In this study K is set to 10 and the Euclidean distance is used, calculated as follows:
ρ = sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2),  |X| = sqrt(x_2^2 + y_2^2)
where ρ is the Euclidean distance between point (x_2, y_2) and point (x_1, y_1), and |X| is the Euclidean distance from point (x_2, y_2) to the origin.
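A minimal KNN sketch matching the description (majority vote over the K Euclidean-nearest training samples; K defaults to the patent's 10, although the toy check below uses k=3 for a six-sample set). This is an illustrative stand-in, not the patent's implementation:

```python
import math
from collections import Counter

def knn_predict(train_x, train_y, query, k=10):
    """Classify query by majority vote among its k Euclidean-nearest
    training samples. train_x: feature tuples, train_y: class labels."""
    dists = sorted((math.dist(query, x), y) for x, y in zip(train_x, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```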
Further as a preferred embodiment of the method, the method further comprises:
s6, training a classification model based on a single signal sample set to obtain a single signal emotion recognition model;
and S7, comparing the recognition effect of the single-signal emotion recognition model with that of the multiple physiological emotion recognition models.
The invention carries out multiple groups of experiments on actually collected physiological signals and shows that physiological signal features that change significantly at emotion jump moments yield better emotion recognition, with higher accuracy, than traditional features.
The invention uses three evaluation indexes to measure the performance of the model:
accuracy between predicted and true values (Accuracy):
Accuracy = (TP + TN) / (TP + TN + FP + FN)
sensitivity between predicted and true values (Sensitivity):
Sensitivity = TP / (TP + FN)
and Specificity between predicted and true values (Specificity):
Specificity = TN / (TN + FP)
In the above formulas, TP, TN, FP and FN denote true positives, true negatives, false positives and false negatives, respectively.
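The three evaluation indexes reduce to simple ratios of the confusion-matrix counts; a direct sketch (function and key names are ours):

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall) and specificity from confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```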
According to the simulation results, compared with the traditional feature engineering method, the proposed feature selection method achieves higher accuracy, specificity and sensitivity, with a more pronounced advantage in multi-physiological-signal emotion classification. The results also show that the classification accuracy of a single physiological signal is lower than that of multiple physiological signals, which confirms the prior finding that joint recognition from multiple physiological signals is more accurate than recognition from a single physiological signal parameter.
The invention uses the physiological signal features that differ significantly at emotion jump moments as the input of the machine learning classification model. Traditional feature engineering directly uses significant differences between distinct emotional states and ignores the fact that emotion changes continuously in reality; compared with directly feeding traditionally engineered features into the model, the classification accuracy is clearly improved. The invention also takes the normality of the data distribution into account and applies different statistical tests in different cases, which benefits feature selection. Finally, the invention obtains the emotion recognition result directly from the physiological signal features, which is more objective and accurate than the prior scale (questionnaire) method.
As shown in fig. 2, an emotion recognition system based on filtering-type feature selection includes:
the data acquisition module, used for acquiring an original electrocardiogram signal, an original pulse wave signal and an original skin electrical signal to obtain a single signal sample set;
the data preprocessing module, used for performing signal denoising, down-sampling and decomposition on the single signal sample set to obtain preprocessed signals;
the feature extraction module, used for performing time domain, frequency domain and nonlinear feature extraction on the preprocessed signals to obtain signal features;
the feature selection module, used for performing feature selection and feature fusion on the signal features to construct a fused signal sample set;
and the recognition module, used for training a classification model based on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model.
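A minimal sketch of how these five modules might be wired together. The helper names (`preprocess`, `extract_features`, `train_model`), the filter cutoffs and the choice of an SVM classifier are assumptions for illustration; the specification only speaks of "a classification model".

```python
import numpy as np
from scipy import signal
from sklearn.svm import SVC

def preprocess(raw, fs=500.0):
    # data preprocessing module: 4th-order Butterworth band-pass denoising
    b, a = signal.butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    return signal.filtfilt(b, a, raw)

def extract_features(sig):
    # feature extraction module: a few simple time-domain statistics
    return np.array([sig.mean(), sig.std(), np.abs(np.diff(sig)).mean()])

def train_model(feature_matrix, labels):
    # recognition module: fit a classifier on the fused feature matrix
    return SVC(kernel="rbf").fit(feature_matrix, labels)
```

In the full system the feature selection and fusion modules would sit between `extract_features` and `train_model`.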
The contents of the above method embodiments are all applicable to this system embodiment; the functions specifically implemented by this system embodiment are the same as those of the above method embodiments, and so are the beneficial effects achieved.
An emotion recognition device based on filtering-type feature selection, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the emotion recognition method based on filtering-type feature selection described above.
The contents of the above method embodiments are all applicable to this device embodiment; the functions specifically implemented by this device embodiment are the same as those of the above method embodiments, and so are the beneficial effects achieved.
A storage medium having stored therein processor-executable instructions, wherein the processor-executable instructions, when executed by a processor, are used to implement the emotion recognition method based on filtering-type feature selection described above.
The contents of the above method embodiments are all applicable to this storage medium embodiment; the functions specifically implemented by this storage medium embodiment are the same as those of the above method embodiments, and so are the beneficial effects achieved.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A method for emotion recognition based on filtering-type feature selection, characterized by comprising the following steps:
acquiring an original electrocardiogram signal, an original pulse wave signal and an original skin electrical signal to obtain a single signal sample set;
performing signal denoising, down-sampling and decomposition on the single signal sample set to obtain preprocessed signals;
performing time domain, frequency domain and nonlinear feature extraction on the preprocessed signals to obtain signal features;
performing feature selection and feature fusion on the signal features to construct a fused signal sample set;
training a classification model based on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model.
2. The method for emotion recognition based on filtering-type feature selection according to claim 1, further comprising:
training a classification model based on the single signal sample set to obtain a single-signal emotion recognition model;
and comparing the recognition effect of the single-signal emotion recognition model with that of the multi-physiological-signal emotion recognition model.
3. The method for emotion recognition based on filtering-type feature selection according to claim 2, wherein the step of acquiring the original electrocardiogram signal, the original pulse wave signal and the original skin electrical signal to obtain a single signal sample set specifically comprises:
synchronously acquiring the original electrocardiogram signal, the original pulse wave signal and the original skin electrical signal based on a multi-channel physiological parameter instrument and a Shimmer GSR+ device;
and constructing the single signal sample set from the original electrocardiogram signal, the original pulse wave signal and the original skin electrical signal.
4. The emotion recognition method based on filtering-type feature selection according to claim 3, wherein the step of performing signal denoising, down-sampling and decomposition on the single signal sample set to obtain preprocessed signals specifically comprises:
denoising the original electrocardiogram signal in the single signal sample set with a Butterworth filter to obtain a clean electrocardiogram signal;
denoising the original pulse wave signal in the single signal sample set with a Butterworth filter to obtain a clean pulse wave signal;
down-sampling the original skin electrical signal in the single signal sample set to obtain a down-sampled skin electrical signal;
decomposing the down-sampled skin electrical signal with cvxEDA to obtain a skin conductance level and a skin conductance response;
and taking the clean electrocardiogram signal, the clean pulse wave signal, the skin conductance level and the skin conductance response as the preprocessed signals.
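As an illustrative sketch of the preprocessing steps in claim 4 (not the claimed implementation): the cutoff frequencies and the down-sampling factor below are assumptions, and since cvxEDA requires a convex-optimisation solver, a simple low-pass split into a tonic skin conductance level (SCL) and a phasic skin conductance response (SCR) is shown here as a stand-in.

```python
import numpy as np
from scipy import signal

def denoise_ecg(ecg, fs=500.0):
    # Butterworth band-pass: removes baseline wander and high-frequency noise
    b, a = signal.butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
    return signal.filtfilt(b, a, ecg)

def downsample_eda(eda, factor=8):
    # anti-aliased down-sampling of the skin electrical signal
    return signal.decimate(eda, factor)

def split_eda(eda, fs=62.5, cutoff=0.05):
    # tonic component (skin conductance level) via low-pass filtering;
    # the phasic component (skin conductance response) is the residual
    b, a = signal.butter(2, cutoff, btype="lowpass", fs=fs)
    scl = signal.filtfilt(b, a, eda)
    scr = eda - scl
    return scl, scr
```

The same `denoise_ecg` band-pass can be reused for the pulse wave signal with suitable cutoffs.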
5. The method according to claim 4, wherein the signal features comprise electrocardiogram signal features, pulse wave signal features and skin electrical signal features; the electrocardiogram signal features comprise the standard deviation of RR intervals, the root mean square of RR intervals, the mean value of RR intervals, the number of adjacent RR interval differences greater than 50 ms, the percentage of adjacent RR interval differences greater than 50 ms, multi-scale entropy, the number of heartbeats per minute, the ultra-low-frequency power spectrum, the low-frequency power spectrum, the high-frequency power spectrum and the low-frequency to high-frequency ratio; the pulse wave signal features comprise the standard deviation of intervals between adjacent main wave peaks, the root mean square of the differences between adjacent main wave peak intervals, the mean value of intervals between adjacent main wave peaks, the first power energy spectrum, the second power energy spectrum, the width of the main wave, the height of the main wave, the width-to-height ratio of the main wave and the maximum amplitude of the main wave peaks; and the skin electrical signal features comprise the mean value of the skin conductance level, the standard deviation of the skin conductance level, the area under the skin conductance level curve, the standard deviation of the skin conductance response, the area under the skin conductance response curve, the number of peaks of the skin conductance response curve, the maximum peak amplitude of the skin conductance response curve, the mean peak amplitude of the skin conductance response curve, the rise time of the skin conductance response curve and the skin electrical power spectrum.
6. The method according to claim 5, wherein the step of performing feature selection and feature fusion on the signal features to construct a fused signal sample set specifically comprises:
based on a data segmentation method, intercepting data before and after the emotion jump moment and performing supervised filtering-type feature selection to construct a feature subset;
normalizing the signal features in the feature subset to obtain normalized features;
and performing series-stage (concatenation) fusion on the normalized features to construct the fused signal sample set.
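The normalization and series-stage (concatenation) fusion of claim 6 can be sketched as follows. Min-max scaling is an assumption for illustration, as the claim does not fix a normalization scheme.

```python
import numpy as np

def minmax_normalize(X):
    # scale each feature column into [0, 1]; constant columns map to 0
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def fuse(*modalities):
    # series-stage fusion: column-wise concatenation of the
    # normalized feature blocks from each physiological signal
    return np.hstack([minmax_normalize(m) for m in modalities])
```

Each argument to `fuse` would be the selected feature matrix of one modality (ECG, pulse wave, skin electrical), all with the same number of samples.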
7. The method for emotion recognition based on filtering-type feature selection according to claim 6, wherein the step of intercepting data before and after the emotion jump moment and performing supervised filtering-type feature selection to construct a feature subset based on the data segmentation method specifically comprises:
based on the data segmentation method, intercepting data from one minute before and one minute after the emotion jump moment and performing feature extraction to obtain a feature sample set;
if the feature sample set follows a normal distribution, identifying and retaining significantly different features based on the paired-sample T test;
if the feature sample set follows a non-normal distribution, identifying and retaining significantly different features based on the Wilcoxon test;
and performing feature selection on the significantly different features based on Pearson correlation analysis, and constructing the feature subset from the selected features.
8. An emotion recognition system based on filtering-type feature selection, comprising:
the data acquisition module, used for acquiring an original electrocardiogram signal, an original pulse wave signal and an original skin electrical signal to obtain a single signal sample set;
the data preprocessing module, used for performing signal denoising, down-sampling and decomposition on the single signal sample set to obtain preprocessed signals;
the feature extraction module, used for performing time domain, frequency domain and nonlinear feature extraction on the preprocessed signals to obtain signal features;
the feature selection module, used for performing feature selection and feature fusion on the signal features to construct a fused signal sample set;
and the recognition module, used for training a classification model based on the fused signal sample set to obtain a multi-physiological-signal emotion recognition model.
CN202210069288.2A 2022-01-21 2022-01-21 Emotion recognition method and system based on filtering type feature selection Active CN114391846B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210069288.2A CN114391846B (en) 2022-01-21 2022-01-21 Emotion recognition method and system based on filtering type feature selection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210069288.2A CN114391846B (en) 2022-01-21 2022-01-21 Emotion recognition method and system based on filtering type feature selection

Publications (2)

Publication Number Publication Date
CN114391846A true CN114391846A (en) 2022-04-26
CN114391846B CN114391846B (en) 2023-12-01

Family

ID=81233714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210069288.2A Active CN114391846B (en) 2022-01-21 2022-01-21 Emotion recognition method and system based on filtering type feature selection

Country Status (1)

Country Link
CN (1) CN114391846B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115067909A (en) * 2022-07-21 2022-09-20 中国民用航空总局第二研究所 Remote tower human factor work efficiency determination method based on biological information data processing
CN115568853A * 2022-09-26 2023-01-06 山东大学 Psychological stress state assessment method and system based on skin electrical signals
CN115644872A (en) * 2022-10-26 2023-01-31 广州建友信息科技有限公司 Emotion recognition method, device and medium
CN115715680A (en) * 2022-12-01 2023-02-28 杭州市第七人民医院 Anxiety discrimination method and device based on connective tissue potential

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102512160A (en) * 2011-12-16 2012-06-27 天津大学 Electroencephalogram emotional state feature extraction method based on adaptive tracking in different frequency bands
CN107007291A (en) * 2017-04-05 2017-08-04 天津大学 Intense strain intensity identifying system and information processing method based on multi-physiological-parameter
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN108309328A (en) * 2018-01-31 2018-07-24 南京邮电大学 A kind of Emotion identification method based on adaptive fuzzy support vector machines
CN109620262A (en) * 2018-12-12 2019-04-16 华南理工大学 A kind of Emotion identification system and method based on wearable bracelet
CN110619301A (en) * 2019-09-13 2019-12-27 道和安邦(天津)安防科技有限公司 Emotion automatic identification method based on bimodal signals
CN113197579A (en) * 2021-06-07 2021-08-03 山东大学 Intelligent psychological assessment method and system based on multi-mode information fusion
CN113397546A (en) * 2021-06-24 2021-09-17 福州大学 Method and system for constructing emotion recognition model based on machine learning and physiological signals

Also Published As

Publication number Publication date
CN114391846B (en) 2023-12-01

Similar Documents

Publication Publication Date Title
CN114391846B (en) Emotion recognition method and system based on filtering type feature selection
US20210000426A1 (en) Classification system of epileptic eeg signals based on non-linear dynamics features
CN111329474B (en) Electroencephalogram identity recognition method and system based on deep learning and information updating method
CN110353673B (en) Electroencephalogram channel selection method based on standard mutual information
Lu et al. Classification of single-channel EEG signals for epileptic seizures detection based on hybrid features
CN112674782B (en) Device and method for detecting epileptic-like electrical activity of epileptic during inter-seizure period
Wu et al. Fast, accurate localization of epileptic seizure onset zones based on detection of high-frequency oscillations using improved wavelet transform and matching pursuit methods
Rangappa et al. Classification of cardiac arrhythmia stages using hybrid features extraction with k-nearest neighbour classifier of ecg signals
CN115414051A (en) Emotion classification and recognition method of electroencephalogram signal self-adaptive window
CN112641451A (en) Multi-scale residual error network sleep staging method and system based on single-channel electroencephalogram signal
Zhang et al. Roughness-length-based characteristic analysis of intracranial EEG and epileptic seizure prediction
CN113180696A (en) Intracranial electroencephalogram detection method and device, electronic equipment and storage medium
CN110543831A (en) brain print identification method based on convolutional neural network
CN111067513B (en) Sleep quality detection key brain area judgment method based on characteristic weight self-learning
CN115299963A (en) High-frequency oscillation signal automatic detection algorithm and system based on waveform characteristic template
Lian et al. Spatial enhanced pattern through graph convolutional neural network for epileptic EEG identification
CN115089179A (en) Psychological emotion insights analysis method and system
CN117883082A (en) Abnormal emotion recognition method, system, equipment and medium
CN111613338B (en) Method and system for constructing spike-slow complex wave detection model
CN116392087A (en) Sleep stability quantification and adjustment method, system and device based on modal decomposition
Al-hajjar et al. Epileptic seizure detection using feature importance and ML classifiers
Kaleem et al. Telephone-quality pathological speech classification using empirical mode decomposition
CN115067910A (en) Heart rate variability pressure detection method, device, storage medium and system
CN114532994A (en) Automatic detection method for unsupervised electroencephalogram high-frequency oscillation signals based on convolution variational self-encoder
RU2751137C1 (en) Method for determining sleep phase in long-term eeg recording

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant