CN110991428A - Breathing signal emotion recognition method and system based on multi-scale entropy - Google Patents
Breathing signal emotion recognition method and system based on multi-scale entropy
- Publication number
- CN110991428A CN110991428A CN201911394604.8A CN201911394604A CN110991428A CN 110991428 A CN110991428 A CN 110991428A CN 201911394604 A CN201911394604 A CN 201911394604A CN 110991428 A CN110991428 A CN 110991428A
- Authority
- CN
- China
- Prior art keywords
- scale entropy
- time
- time series
- sequence
- calculating
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2113—Selection of the most significant subset of features by ranking or filtering the set of features, e.g. using a measure of variance or of feature cross-correlation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
Abstract
The invention discloses a respiratory-signal emotion recognition method and system based on multi-scale entropy, comprising the following steps: collecting a respiratory signal bearing the emotion to be recognized; preprocessing the collected respiratory signal; performing feature extraction on the preprocessed respiratory signal based on multi-scale entropy; screening the features to select an optimal feature subset; and inputting the optimal feature subset into a pre-trained random forest classifier, which outputs the emotion recognition classification result.
Description
Technical Field
The disclosure relates to the technical field of signal recognition, and in particular to a respiratory signal emotion recognition method and system based on multi-scale entropy.
Background
The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
With social progress and economic development, the influence of emotion on life and work is attracting more and more attention. Under fast-paced living pressure, people often develop negative emotions, and the incidence of psychological disorders such as anxiety and depression keeps rising, seriously threatening people's health and even their lives. Yet society still pays too little attention to psychological disorders: according to a World Health Organization survey, if mental health continues to be neglected, 12 billion working days will be lost worldwide by 2030, with economic losses reaching 92.5 million dollars. The human psychological world is complex; traditionally, psychological disorders are diagnosed through the subjective judgment of psychologists, lacking a scientific and objective basis for diagnosis and evaluation, so such disorders are difficult to diagnose and treat in time. Scientific and accurate emotion recognition is therefore particularly important for the prevention and diagnosis of psychological disorders, and emotion recognition research has become one of the new focuses of the medical field.
Meanwhile, artificial intelligence technology is developing rapidly, and achieving machine intelligence and machine learning ability is inseparable from human-computer interaction. Affective computing is the foundation of human-computer interaction: the machine perceives and analyses the human emotional state and then gives feedback to complete the interaction, so endowing machines with emotional intelligence is a necessary link in the development of artificial intelligence. Emotion recognition is an important component of affective computing and has great research value in fields such as clinical medicine, criminal investigation, and personalized device development.
In the course of implementing the present disclosure, the inventors found the following technical problems in the prior art:
Currently, researchers commonly use data such as facial expressions, speech, and gestures for emotion recognition. Such signals carry a large amount of emotional information, so they can be used to recognize a person's emotional state; however, they are easily masked by subjective consciousness, so these recognition modalities have inherent shortcomings. Physiological signals, by contrast, are spontaneous signals generated by human organs or tissues and are not influenced by subjective factors; compared with other signals, they are objective, authentic, and highly reliable. Studies have found that physiological signals respond specifically to different emotions, so emotion recognition based on physiological signals is entirely feasible.
In clinical medicine, it is of important practical significance to develop equipment with an emotion recognition function that monitors and recognizes a patient's emotional state accurately, efficiently, and in real time, so that intervention and warnings can be given promptly when the patient is in a negative emotional state. In addition, emotion recognition has broad application prospects in fields such as criminal investigation, traffic safety, and entertainment. Among physiological signals, the respiration signal is an important and easily collected signal source and plays an important role in emotion research.
Disclosure of Invention
To address the shortcomings of the prior art, the present disclosure provides a respiratory signal emotion recognition method and system based on multi-scale entropy, which can effectively improve the precision and accuracy of emotion recognition classification and provide a means for emotion analysis.
In a first aspect, the present disclosure provides a respiratory signal emotion recognition method based on multi-scale entropy;
a respiratory signal emotion recognition method based on multi-scale entropy comprises the following steps:
collecting a respiratory signal bearing the emotion to be recognized; preprocessing the collected respiratory signal;
performing feature extraction on the preprocessed respiratory signals based on multi-scale entropy;
screening the characteristics to screen out an optimal characteristic subset;
and inputting the optimal feature subset into a pre-trained random forest classifier, and outputting emotion recognition classification results.
In a second aspect, the present disclosure also provides a respiratory signal emotion recognition system based on multi-scale entropy;
respiratory signal emotion recognition system based on multi-scale entropy, comprising:
a pre-processing module configured to: collecting a respiratory signal bearing the emotion to be recognized; preprocessing the collected respiratory signal;
a feature extraction module configured to: performing feature extraction on the preprocessed respiratory signals based on multi-scale entropy;
a feature screening module configured to: screening the characteristics to screen out an optimal characteristic subset;
a classification module configured to: and inputting the optimal feature subset into a pre-trained random forest classifier, and outputting emotion recognition classification results.
In a third aspect, the present disclosure also provides an electronic device comprising a memory, a processor, and computer instructions stored on the memory and executable on the processor, wherein the computer instructions, when executed by the processor, perform the steps of the method of the first aspect.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium for storing computer instructions which, when executed by a processor, perform the steps of the method of the first aspect.
Compared with the prior art, the beneficial effects of this disclosure are:
the multi-scale entropy is innovatively applied to feature calculation of respiratory signal emotion recognition, so that the operation time can be effectively reduced, the operation rate can be improved, and the classification precision and accuracy can be improved. The method comprises the steps of collecting respiration signals under 6 emotions through an emotion induction experiment, extracting multi-scale entropy characteristics of the respiration signals after preprocessing the respiration signals, evaluating and screening an original characteristic set by utilizing a Relieff algorithm, constructing an emotion recognition model by adopting a random forest algorithm, optimizing algorithm parameters through cross-folding inspection and a grid optimization algorithm, establishing the emotion recognition model based on an optimal characteristic subset and optimal parameters, and realizing emotion recognition. The method can effectively improve the emotion classification precision and accuracy.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application.
FIG. 1 is a flowchart of a method according to a first embodiment of the disclosure;
FIG. 2 is a schematic diagram of an experimental design according to a first embodiment of the present disclosure;
Figs. 3(a) and 3(b) are waveforms of the respiratory signal before and after preprocessing according to the first embodiment of the disclosure;
FIG. 4 is a flow chart of the random forest algorithm emotion recognition model establishment in the first embodiment of the present disclosure;
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
In the first embodiment, the present embodiment provides a respiratory signal emotion recognition method based on multi-scale entropy;
as shown in FIG. 1, the respiratory signal emotion recognition method based on multi-scale entropy includes:
S1: collecting a respiratory signal bearing the emotion to be recognized; preprocessing the collected respiratory signal;
S2: performing feature extraction on the preprocessed respiratory signal based on multi-scale entropy;
S3: screening the features to select an optimal feature subset;
S4: inputting the optimal feature subset into a pre-trained random forest classifier and outputting the emotion recognition classification result.
As one or more embodiments, in S1, the respiration signal bearing the emotion to be recognized is acquired; specifically, it is collected through a multi-lead physiological acquisition system.
The collection of physiological signals under different emotions is the basis of emotion recognition research, and the establishment of a reasonable experimental scheme and the establishment of a reliable experimental environment are the keys for effectively inducing emotion and acquiring high-quality physiological signals.
As shown in Fig. 2, the experimental setup consists of two computers and a physiological signal acquisition instrument. The notebook computer serves as the emotion elicitation part, playing emotion-eliciting videos and recording video of the subject's experimental session; the desktop computer and the physiological signal acquisition instrument serve as the signal acquisition part, acquiring, recording, and displaying the emotional physiological signals. A multi-lead physiological acquisition system collects the subject's respiration signal at a sampling frequency of 1000 Hz. The emotion-inducing materials are video clips of 7-10 minutes that respectively guide the subject toward six emotions: neutrality, fear, sadness, happiness, anger, and disgust. The subject undergoes emotion elicitation under the different emotion-guiding videos, and the physiological acquisition system synchronously acquires 5 minutes of valid respiration data while the subject is awake under each stimulus. During the experiment, the experimenter is strictly separated from the subject to avoid interference. A 5-minute interval is set between different video playbacks to let the subject's emotion settle and to ensure the validity of each respiration signal segment.
As one or more embodiments, in S1, preprocessing the acquired respiratory signal; the method comprises the following specific steps:
and removing invalid data, resampling and low-pass filtering to remove noise interference.
Figs. 3(a) and 3(b) show the respiration signal before and after preprocessing. The original sampling rate is 1000 Hz; the signal is down-sampled to reduce the amount of computation and thus increase computation speed, with the resampling frequency set to 15 Hz. The frequency of the respiratory signal generally lies between 0.1 Hz and 0.4 Hz; to denoise and filter the original respiration signal, the present disclosure adopts a second-order IIR peak filter with the center frequency set to 0.375 Hz and the bandwidth to 0.5 Hz.
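The preprocessing described above (down-sampling to 15 Hz, then a second-order IIR peak filter centered at 0.375 Hz with 0.5 Hz bandwidth) can be sketched with SciPy. The function name and the zero-phase `filtfilt` choice are assumptions of this sketch, not specified in the patent:

```python
import numpy as np
from scipy.signal import iirpeak, filtfilt, resample

def preprocess_respiration(raw, fs_orig=1000, fs_new=15,
                           f_center=0.375, bandwidth=0.5):
    """Down-sample a raw respiration trace and band-select the
    respiratory component with a second-order IIR peak filter."""
    # Resample from 1000 Hz down to 15 Hz to cut the computational load.
    n_new = int(len(raw) * fs_new / fs_orig)
    sig = resample(raw, n_new)
    # Second-order IIR peak filter: centre 0.375 Hz, bandwidth 0.5 Hz,
    # i.e. quality factor Q = f_center / bandwidth = 0.75.
    b, a = iirpeak(f_center, Q=f_center / bandwidth, fs=fs_new)
    # Zero-phase filtering avoids shifting the breathing waveform.
    return filtfilt(b, a, sig)

if __name__ == "__main__":
    fs = 1000
    t = np.arange(0, 60, 1 / fs)
    # Synthetic breath at 0.3 Hz plus high-frequency noise.
    raw = np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.random.randn(len(t))
    clean = preprocess_respiration(raw)
    print(len(clean))  # 60 s at 15 Hz -> 900 samples
```

A synthetic 0.3 Hz sinusoid stands in for a real respiration recording; any invalid-data removal step would precede this function.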
As one or more embodiments, in S2, feature extraction is performed on the preprocessed respiratory signal based on multi-scale entropy; specifically: extracting multi-scale entropy features, combined multi-scale entropy features, and improved multi-scale entropy features.
Further, extracting the multi-scale entropy features comprises:
S200: coarse-graining;
S201: multi-scale entropy extraction.
Further, the coarse-graining step S200 specifically includes:
S2001: for an original one-dimensional time series {x_1, x_2, x_3, …, x_i, …, x_N}, the time series reconstructed at scale τ is Y = {y_1^{(τ)}, y_2^{(τ)}, …, y_J^{(τ)}}, where τ is the time scale factor and the reconstructed time series has length N/τ; the specific formula is y_j^{(τ)} = (1/τ) Σ_{i=(j−1)τ+1}^{jτ} x_i, 1 ≤ j ≤ N/τ;
S2002: select the time series {x_1, x_2, …, x_τ} of window length τ and compute its mean, obtaining y_1^{(τ)};
S2003: shift the time window forward by τ time units to obtain the new time series {x_{τ+1}, x_{τ+2}, …, x_{2τ}} and compute the mean within the window, namely y_2^{(τ)};
S2004: move the time window forward without overlap until the original sequence has been traversed, obtaining the mean coarse-grained time series Y = {y_1^{(τ)}, y_2^{(τ)}, …, y_J^{(τ)}}, where J = N/τ.
Further, the multi-scale entropy extraction step S201 includes:
S2011: for the one-dimensional discrete time series Y = {y_1, y_2, …, y_J}, select m consecutive points to form the vector A_m(i) = {y_i, y_{i+1}, …, y_{i+m−1}}, where i = 1, 2, 3, …, J−m+1;
S2012: define the distance between two vectors A_m(i) and A_m(j) as d_m(i, j) = max_{k=0,1,…,m−1} |y_{i+k} − y_{j+k}|, where i, j = 1, 2, 3, …, J−m+1 and i ≠ j;
S2013: given a threshold parameter r, count the number of distances d_m(i, j) smaller than r·SD_Y, recorded as the template match count N_m(i), and compute the ratio of N_m(i) to J−m, recorded as B_i^m(r) = N_m(i)/(J−m), whose average over i is B^m(r); here SD_Y is the standard deviation of the one-dimensional discrete series Y;
S2014: following steps S2011-S2013 with the dimension increased from m to m+1, compute B^{m+1}(r);
S2015: the multi-scale entropy of the time series Y is MSE = lim_{J→∞} [−ln(B^{m+1}(r)/B^m(r))].
When J is finite, it is estimated as MSE = −ln(B^{m+1}(r)/B^m(r)).
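Steps S2011-S2015 amount to computing the sample entropy of each coarse-grained series. A minimal sketch follows; the tolerance factor 0.15 and m = 2 are common conventions assumed here, since the patent does not fix r and m:

```python
import numpy as np

def sample_entropy(y, m=2, r_factor=0.15):
    """Sample entropy of a (coarse-grained) series y: negative log of
    the conditional probability that templates matching for m points
    also match for m + 1 points, with tolerance r = r_factor * SD(y)."""
    y = np.asarray(y, dtype=float)
    r = r_factor * y.std()

    def match_count(mm):
        # Template vectors of length mm (steps S2011-S2012).
        templates = np.array([y[i:i + mm] for i in range(len(y) - mm + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance from template i to all templates.
            d = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(d < r) - 1  # exclude the self-match
        return count

    b_m, b_m1 = match_count(m), match_count(m + 1)
    return -np.log(b_m1 / b_m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    regular = np.sin(np.arange(500) * 0.2)
    noise = rng.standard_normal(500)
    # A regular signal should show lower entropy than white noise.
    print(sample_entropy(regular) < sample_entropy(noise))
```

Running this at each scale τ on the output of the coarse-graining step yields the multi-scale entropy curve.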
Further, the step of extracting the combined multi-scale entropy features comprises:
S211: let the original one-dimensional discrete time series be X = {x_1, x_2, x_3, …, x_i, …, x_N}, and let the reconstructed mean coarse-grained time series be Y^{(k)} = {y_{k,1}^{(τ)}, y_{k,2}^{(τ)}, …, y_{k,ρ}^{(τ)}}, where Y^{(k)} is the k-th mean coarse-grained time series, k = 1, 2, …, τ, and ρ is the sequence length after mean coarse-graining; the specific formula is y_{k,j}^{(τ)} = (1/τ) Σ_{i=(j−1)τ+k}^{jτ+k−1} x_i, 1 ≤ j ≤ ρ;
S212: let k = 1; select the time series {x_1, x_2, …, x_τ} of window length τ and compute its mean, obtaining y_{1,1}^{(τ)}; shift the time window forward by τ time units to obtain the new time series {x_{τ+1}, x_{τ+2}, …, x_{2τ}} and compute the mean within the window, namely y_{1,2}^{(τ)}; move the time window forward without overlap and repeat until the original sequence has been traversed, obtaining the mean coarse-grained time series Y^{(1)};
S213: following step S212 for k = 2, 3, … until k = τ, obtain the remaining reconstructed mean coarse-grained time series Y^{(2)}, …, Y^{(τ)};
S214: compute the corresponding multi-scale entropy of each obtained mean coarse-grained time series;
S215: compute the mean of the obtained multi-scale entropies, which is the combined multi-scale entropy value: CMSE = (1/τ) Σ_{k=1}^{τ} MSE(Y^{(k)}).
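A compact sketch of the combined (composite) multi-scale entropy of steps S211-S215: coarse-grain at every starting offset k, compute sample entropy of each series, and average. The embedded `sampen` helper and its parameters (m = 2, tolerance 0.15·SD) are assumptions of this sketch:

```python
import numpy as np

def sampen(y, m=2, r_factor=0.15):
    # Compact sample entropy with tolerance r = r_factor * SD(y).
    y = np.asarray(y, float)
    r = r_factor * y.std()
    def count(mm):
        t = np.array([y[i:i + mm] for i in range(len(y) - mm + 1)])
        return sum(np.sum(np.max(np.abs(t - v), axis=1) < r) - 1 for v in t)
    return -np.log(count(m + 1) / count(m))

def combined_mse(x, tau, m=2, r_factor=0.15):
    """Combined multi-scale entropy: coarse-grain x at scale tau from
    each of the tau possible starting offsets, compute sample entropy
    of each series, and average the results (step S215)."""
    x = np.asarray(x, float)
    entropies = []
    for k in range(tau):
        shifted = x[k:]
        n = len(shifted) // tau
        cg = shifted[:n * tau].reshape(n, tau).mean(axis=1)
        entropies.append(sampen(cg, m, r_factor))
    return float(np.mean(entropies))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.standard_normal(1200)
    print(combined_mse(x, tau=3))
```

Averaging over the τ offsets reduces the variance of the entropy estimate on short records, which is the motivation for this variant.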
Further, the step of extracting the improved multi-scale entropy features comprises:
S221: adopt moving-average coarse-graining, so that the reconstructed time series is only τ−1 shorter than the original. The specific calculation is as follows:
for a given one-dimensional discrete time series {x_1, x_2, x_3, …, x_i, …, x_N}, let the time series reconstructed by moving-average coarse-graining be Z = {z_1^{(τ)}, z_2^{(τ)}, …, z_J^{(τ)}}; the reconstructed time series has length N−τ+1, with the specific formula z_j^{(τ)} = (1/τ) Σ_{i=j}^{j+τ−1} x_i, 1 ≤ j ≤ N−τ+1;
S222: select the time series {x_1, x_2, …, x_τ} of window length τ and compute its mean, obtaining z_1^{(τ)};
S223: shift the time window forward by 1 time unit to obtain the new time series {x_2, x_3, …, x_{τ+1}} and compute the mean within the window, namely z_2^{(τ)};
S224: move the time window forward until the original sequence has been traversed, obtaining the moving-average coarse-grained time series Z, where J = N−τ+1.
It will be appreciated that calculating the multi-scale entropy requires first determining the value of the time scale factor τ, since the degree of coarse-graining of the time series depends on the size of τ. If τ is too large, short-time differences in the original signal are erased; if τ is too small, the coarse-grained time series does not help mine the information in the original sequence. Setting a proper value interval for the scale factor is therefore important for extracting multi-scale entropy features. The value range of τ is set to 1-15. The frequency of the down-sampled respiratory signal is 15 Hz; when τ = 15, the sampling frequency of the coarse-grained time series is 15/15 = 1 Hz, and by the Nyquist sampling theorem its maximum representable frequency is 0.5 Hz, which is above the 0.1-0.4 Hz frequency range of the respiratory signal. Therefore, coarse-grained time series with τ in the interval 1-15 can fully reflect the physiological information of the respiration signal.
As described above, the multi-scale entropy, combined multi-scale entropy, and improved multi-scale entropy under scale factors 1-15 are extracted from the respiration signal to serve as the respiratory-signal emotion feature set.
As one or more embodiments, in S3, the feature set is filtered to screen out an optimal feature subset; the method comprises the following specific steps:
and screening the feature set by adopting a Relieff algorithm to screen out an optimal feature subset.
It should be understood that the features are screened by adopting a Relieff algorithm, and an optimal feature subset is screened out; the method comprises the following specific steps:
It should be understood that feature screening is feature-subset selection: selecting N features from the existing M features so as to optimize a specific system criterion. It is the process of choosing the most effective features from the original feature set to reduce the dimensionality of the data set and remove invalid features, and is an important means of improving the performance of a learning algorithm.
Feature screening is performed based on the ReliefF algorithm. Similar to the K-nearest-neighbour algorithm, a sample R is randomly selected from the training samples; its k nearest-neighbour samples of the same class as R and k nearest-neighbour samples in each class different from R are found; the distances between R and these same-class and different-class nearest neighbours on each feature are computed and used to update the feature weights; finally, the optimal feature subset is selected according to a set threshold.
The algorithm flow is shown in Table 1:
Table 1. ReliefF algorithm flow
In the feature weight update formula, class(R_t) denotes the class of the sample R_t selected at the t-th iteration; P(c) denotes the proportion of class-c samples in the total sample set; H_j denotes the j-th of the k nearest-neighbour samples of the same class as the target sample; M_j(c) denotes the j-th nearest-neighbour sample in a class c different from that of the target sample; diff(a, R_t, H_j) denotes the distance between samples R_t and H_j on feature a. The weight of feature a is updated as W(a) = W(a) − Σ_{j=1}^{k} diff(a, R_t, H_j)/(t·k) + Σ_{c≠class(R_t)} [P(c)/(1−P(class(R_t)))] Σ_{j=1}^{k} diff(a, R_t, M_j(c))/(t·k). The distance calculation is divided into continuous and discrete cases according to the feature attribute: for a continuous feature, diff(a, R_1, R_2) = |R_1[a] − R_2[a]| / (max(a) − min(a)); for a discrete feature, diff(a, R_1, R_2) = 0 if R_1[a] = R_2[a], and 1 otherwise.
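A minimal ReliefF sketch for continuous features, following the hit/miss weight update described above. The iteration count, k = 5, and the synthetic data are assumptions for illustration:

```python
import numpy as np

def relieff(X, y, n_iter=100, k=5, rng=None):
    """Minimal ReliefF sketch: a feature's weight rises when it separates
    a sample from its nearest misses (other classes, weighted by class
    prior) and falls when it separates it from its nearest hits."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, float)
    y = np.asarray(y)
    n, d = X.shape
    # diff() for continuous features is normalised by each feature's range.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    classes, counts = np.unique(y, return_counts=True)
    prior = dict(zip(classes, counts / n))
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        diffs = np.abs(X - X[i]) / span        # per-feature diff to all samples
        dist = diffs.sum(axis=1)
        dist[i] = np.inf                       # never match the target itself
        same = (y == y[i])
        hits = np.argsort(np.where(same, dist, np.inf))[:k]
        w -= diffs[hits].sum(axis=0) / (n_iter * k)
        for c in classes:
            if c == y[i]:
                continue
            misses = np.argsort(np.where(y == c, dist, np.inf))[:k]
            scale = prior[c] / (1.0 - prior[y[i]])
            w += scale * diffs[misses].sum(axis=0) / (n_iter * k)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 300
    y = rng.integers(0, 2, n)
    informative = y + 0.3 * rng.standard_normal(n)   # tracks the label
    noise = rng.standard_normal(n)                   # pure noise
    w = relieff(np.column_stack([informative, noise]), y, rng=1)
    print(w)  # the informative feature should outrank the noise feature
```

Thresholding the resulting weights yields the optimal feature subset described in the text.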
As one or more embodiments, as shown in Fig. 4, in S4 the random forest classifier is pre-trained; the specific training steps include:
constructing a random forest classifier and constructing a training set;
training the random forest classifier by using a training set to obtain a trained random forest classifier;
in the training process, the number of decision trees in the random forest and the number of splitting attributes of each tree are determined on the training set by ten-fold cross-validation and grid search.
The training set includes respiratory-signal features with known emotion class labels; the respiratory-signal feature set comprises multi-scale entropy features, combined multi-scale entropy features, and improved multi-scale entropy features.
The disclosure establishes the respiratory-signal emotion recognition model based on the random forest algorithm. A random forest is a combined classifier that takes classification and regression trees (CART) as base classifiers, and is one of the ensemble learning algorithms. Common ensemble learning methods fall into two categories: boosting and bagging. The random forest is a decision-tree model based on the bagging framework; it contains many trees, each tree gives a classification result, and each tree is generated according to the following rules:
(1) If the training set size is N, then for each tree, N training samples are drawn randomly with replacement from the training set to serve as that tree's training set; this is repeated K times to generate K groups of training sample sets.
(2) If each sample has M features, a constant m << M is specified, and m features are randomly selected from the M features.
(3) Each tree is grown to the greatest extent possible on its m features, and no pruning is performed.
A random forest has two important parameters: the number of decision trees, Ntree, and the size Mtry of the attribute set used when splitting a CART tree. The disclosure determines these on the training set by ten-fold cross-validation and grid search, with the following specific steps:
(1) randomly dividing the data into ten parts, nine of which are used as the training set for random forest model learning, with the remaining part used as the test set for evaluating the model's classification accuracy;
(2) setting the parameters Ntree and Mtry, where Ntree ranges from 50 to 500 with a step of 10, and Mtry ranges from 5 to 10 with a step of 1; a two-dimensional grid is built with Ntree and Mtry as coordinates, each node of the grid being a parameter pair (Ntree, Mtry);
(3) calculating the classification accuracy of the random forest for each (Ntree, Mtry) parameter pair;
(4) averaging the classification accuracy over the ten folds to determine the optimal algorithm parameters.
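The ten-fold grid search above can be sketched with scikit-learn's GridSearchCV (the patent does not name an implementation, so this library choice is an assumption; the data here is a synthetic stand-in, and the Ntree grid is truncated to keep the sketch fast):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 12))        # placeholder respiratory feature vectors
y = rng.integers(0, 2, size=120)      # placeholder emotion labels

# Patent ranges: Ntree 50..500 step 10, Mtry 5..10 step 1.
# The Ntree grid is shortened here for speed.
param_grid = {
    "n_estimators": [50, 100],           # Ntree (full range: 50..500 step 10)
    "max_features": list(range(5, 11)),  # Mtry: 5..10, step 1
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=10,                            # ten-fold cross-validation
    scoring="accuracy",
)
search.fit(X, y)
best = search.best_params_            # parameter pair with highest mean accuracy
```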
In a second embodiment, a respiratory signal emotion recognition system based on multi-scale entropy is provided, comprising:
a preprocessing module configured to: collect a respiratory signal of an emotion to be recognized and preprocess the collected respiratory signal;
a feature extraction module configured to: perform feature extraction on the preprocessed respiratory signal based on multi-scale entropy;
a feature screening module configured to: screen the features to select an optimal feature subset;
a classification module configured to: input the optimal feature subset into a pre-trained random forest classifier and output the emotion recognition classification result.
In a third embodiment, an electronic device is provided, comprising a memory, a processor, and computer instructions stored in the memory and executable on the processor, where the computer instructions, when executed by the processor, implement the steps of the method in the first embodiment.
In a fourth embodiment, a computer-readable storage medium is provided for storing computer instructions which, when executed by a processor, perform the steps of the method in the first embodiment.
The above description covers only preferred embodiments of the present application and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the present application shall fall within its protection scope.
Claims (10)
1. A respiratory signal emotion recognition method based on multi-scale entropy is characterized by comprising the following steps:
collecting a respiratory signal of an emotion to be recognized; preprocessing the collected respiratory signal;
performing feature extraction on the preprocessed respiratory signals based on multi-scale entropy;
screening the features to select an optimal feature subset;
and inputting the optimal feature subset into a pre-trained random forest classifier, and outputting emotion recognition classification results.
2. The method of claim 1, wherein the feature extraction performed on the preprocessed respiratory signals based on multi-scale entropy specifically comprises: extracting multi-scale entropy features, combined multi-scale entropy features, and improved multi-scale entropy features.
3. The method of claim 2, wherein extracting multi-scale entropy features comprises:
S200: coarse graining;
S201: multi-scale entropy extraction.
4. The method of claim 3, wherein extracting multi-scale entropy features comprises:
the coarse graining step S200 specifically comprises:
S2001: for the original one-dimensional discrete time series {x_1, x_2, x_3, …, x_i, …, x_N}, the time series reconstructed on scale τ is {y_1^(τ), y_2^(τ), …, y_{N/τ}^(τ)}, where τ is the time scale factor and the length of the reconstructed time series is N/τ; the specific formula is:
y_j^(τ) = (1/τ) Σ_{i=(j−1)τ+1}^{jτ} x_i, 1 ≤ j ≤ N/τ;
S2002: selecting the time series {x_1, x_2, …, x_τ} of window length τ and calculating its mean, i.e. obtaining y_1^(τ);
S2003: moving the time window forward by τ time units to obtain a new time series {x_{τ+1}, x_{τ+2}, …, x_{2τ}} and calculating the mean within the window, i.e. y_2^(τ).
5. The method as claimed in claim 3, wherein the multi-scale entropy extraction step S201 comprises:
S2011: for the coarse-grained one-dimensional discrete time series Y = {y_1, y_2, …, y_J}, selecting m consecutive points to form the vectors A_m(i) = [y_i, y_{i+1}, …, y_{i+m−1}], where i = 1, 2, 3, …, J−m+1;
S2012: defining the distance between two vectors A_m(i) and A_m(j) as d_m(i, j) = max_k |y_{i+k} − y_{j+k}|, where k = 0, 1, …, m−1; i = 1, 2, 3, …, J−m+1; j = 1, 2, 3, …, J−m+1; and i ≠ j;
S2013: given a threshold parameter r, counting the number of distances d_m(i, j) smaller than r·SD_Y, recording it as the template matching number N_m(i), and calculating the ratio of N_m(i) to J−m, recorded as B_i^m(r) = N_m(i)/(J−m); averaging over all i gives B^m(r); here SD_Y is the standard deviation of the one-dimensional discrete sequence Y;
S2014: according to steps S2011-S2013, increasing the dimension from m to m+1 and calculating B^{m+1}(r);
S2015: the multi-scale entropy (sample entropy) of the time series Y is:
SampEn(m, r) = lim_{J→∞} [−ln(B^{m+1}(r)/B^m(r))];
when the value of J is finite,
SampEn(m, r, J) = −ln(B^{m+1}(r)/B^m(r)).
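Steps S2011-S2015 amount to the sample-entropy computation; a compact sketch follows (an assumed NumPy implementation using the Chebyshev distance of S2012 and the −ln ratio of S2015; for brevity it averages matches over all template pairs at once rather than tracking each N_m(i) separately):

```python
import numpy as np

def sample_entropy(y, m=2, r_factor=0.2):
    """SampEn of S2011-S2015: -ln(B^{m+1}(r) / B^m(r)) with r = r_factor * SD_Y."""
    y = np.asarray(y, dtype=float)
    r = r_factor * y.std()

    def match_ratio(mm):
        # templates A_mm(i) of mm consecutive points (S2011)
        t = np.array([y[i:i + mm] for i in range(len(y) - mm + 1)])
        # Chebyshev distance between every pair of templates (S2012)
        d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        np.fill_diagonal(d, np.inf)       # exclude self-matches (i != j)
        return np.mean(d < r)             # fraction of matching pairs (S2013)

    return -np.log(match_ratio(m + 1) / match_ratio(m))   # S2014-S2015

sig = np.random.default_rng(42).normal(size=300)
se = sample_entropy(sig, m=2)             # entropy of a white-noise series
```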
6. The method of claim 2, wherein the step of extracting the combined (composite) multi-scale entropy features comprises:
S211: letting the original one-dimensional discrete time series be X = {x_1, x_2, x_3, …, x_i, …, x_N}, the reconstructed mean coarse-grained time series are Y_k^(τ) = {y_{k,1}^(τ), y_{k,2}^(τ), …, y_{k,ρ}^(τ)}, where Y_k^(τ) is the k-th mean coarse-grained time series (k = 1, 2, …, τ) and ρ is the sequence length after mean coarse graining; the specific formula is:
y_{k,j}^(τ) = (1/τ) Σ_{i=(j−1)τ+k}^{jτ+k−1} x_i, 1 ≤ j ≤ ρ;
S212: letting k = 1, selecting the time series {x_1, x_2, …, x_τ} of window length τ and calculating its mean, i.e. obtaining y_{1,1}^(τ); moving the time window forward by τ time units to obtain a new time series {x_{τ+1}, x_{τ+2}, …, x_{2τ}} and calculating the mean within the window, i.e. y_{1,2}^(τ); moving the time window forward in this non-overlapping manner and repeating until the original sequence has been traversed, obtaining the mean coarse-grained time series Y_1^(τ);
S213: repeating step S212 for k = 2, …, τ, obtaining the reconstructed mean coarse-grained time series Y_2^(τ), …, Y_τ^(τ);
S214: calculating the corresponding multi-scale entropy for each of the obtained mean coarse-grained time series.
7. The method of claim 2, wherein the step of extracting the improved multi-scale entropy features comprises:
S221: adopting moving-average coarse graining so that the reconstructed time series is only τ−1 shorter than the original, the specific calculation being as follows:
for a given one-dimensional discrete time series {x_1, x_2, x_3, …, x_i, …, x_N}, the time series reconstructed by moving-average coarse graining is {z_1^(τ), z_2^(τ), …, z_J^(τ)}, whose length is N−τ+1; the specific formula is:
z_j^(τ) = (1/τ) Σ_{i=j}^{j+τ−1} x_i, 1 ≤ j ≤ N−τ+1;
S222: selecting the time series {x_1, x_2, …, x_τ} of window length τ and calculating its mean, i.e. obtaining z_1^(τ);
S223: moving the time window forward by 1 time unit to obtain a new time series {x_2, x_3, …, x_{τ+1}} and calculating the mean within the window, i.e. z_2^(τ);
S224: moving the time window forward until the original sequence has been traversed, obtaining the moving-average coarse-grained time series {z_1^(τ), …, z_J^(τ)}, where J = N−τ+1.
8. A respiratory signal emotion recognition system based on multi-scale entropy, characterized by comprising:
a preprocessing module configured to: collect a respiratory signal of an emotion to be recognized and preprocess the collected respiratory signal;
a feature extraction module configured to: perform feature extraction on the preprocessed respiratory signal based on multi-scale entropy;
a feature screening module configured to: screen the features to select an optimal feature subset;
a classification module configured to: input the optimal feature subset into a pre-trained random forest classifier and output the emotion recognition classification result.
9. An electronic device comprising a memory and a processor and computer instructions stored on the memory and executable on the processor, the computer instructions when executed by the processor performing the steps of the method of any of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed by a processor, perform the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911394604.8A CN110991428A (en) | 2019-12-30 | 2019-12-30 | Breathing signal emotion recognition method and system based on multi-scale entropy |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110991428A true CN110991428A (en) | 2020-04-10 |
Family
ID=70078799
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111563451A (en) * | 2020-05-06 | 2020-08-21 | 浙江工业大学 | Mechanical ventilation ineffective inspiration effort identification method based on multi-scale wavelet features |
CN111916066A (en) * | 2020-08-13 | 2020-11-10 | 山东大学 | Random forest based voice tone recognition method and system |
CN112043252A (en) * | 2020-10-10 | 2020-12-08 | 山东大学 | Emotion recognition system and method based on respiratory component in pulse signal |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150216436A1 (en) * | 2012-09-07 | 2015-08-06 | Children's Medical Center Corporation | Detection of epileptogenic brains with non-linear analysis of electromagnetic signals |
CN109190570A (en) * | 2018-09-11 | 2019-01-11 | 河南工业大学 | A kind of brain electricity emotion identification method based on wavelet transform and multi-scale entropy |
CN109770892A (en) * | 2019-02-01 | 2019-05-21 | 中国科学院电子学研究所 | A kind of sleep stage method based on electrocardiosignal |
CN109993093A (en) * | 2019-03-25 | 2019-07-09 | 山东大学 | Road anger monitoring method, system, equipment and medium based on face and respiratory characteristic |
Non-Patent Citations (1)
Title |
---|
李飞: "基于心肺系统的情绪识别研究" * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200410 ||