CN112545513A - Music-induced electroencephalogram-based depression identification method - Google Patents

Music-induced electroencephalogram-based depression identification method

Info

Publication number
CN112545513A
CN112545513A
Authority
CN
China
Prior art keywords
music
electroencephalogram
stimulation
negative
depression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011392523.7A
Other languages
Chinese (zh)
Inventor
张晨洁
陈亮
刘璐
郭滨
张宏源
姜丰
胡延飞
张毅恒
白雪梅
孙佳楠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University of Science and Technology filed Critical Changchun University of Science and Technology
Priority to CN202011392523.7A priority Critical patent/CN112545513A/en
Publication of CN112545513A publication Critical patent/CN112545513A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Mathematical Physics (AREA)
  • Signal Processing (AREA)
  • Fuzzy Systems (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides a depression recognition method based on music-induced electroencephalogram (EEG) signals and relates to the field of depression recognition. The method comprises the following steps: step one, collecting EEG data while a subject receives neutral, negative and positive music stimulation, and then preprocessing the EEG data; step two, extracting linear and nonlinear features of the preprocessed EEG signals under the three different music stimuli; step three, linearly combining the EEG features extracted under the different modalities with a feature-level fusion technique; step four, selecting the linearly combined EEG features of the different modalities with a t-test feature-selection method and weighting them with a genetic algorithm to obtain weighted features; and step five, inputting the weighted features into a classifier for training to construct a depression recognition model. The method classifies the features of the different fusion modalities; the KNN classifier achieves the highest recognition accuracy on the fusion of positive and negative music stimulation, and the method can provide objective indexes and a basis for the auxiliary identification of depression.

Description

Music-induced electroencephalogram-based depression identification method
Technical Field
The invention relates to the technical field of depression detection in medicine, and in particular to a depression identification method based on music-induced electroencephalogram (EEG) signals.
Background
Depression, also called depressive disorder, is an affective mental disorder with a high incidence. Patients with depression typically present with depressed mood, loss of interest and pleasure, and loss of energy or fatigue. The diagnosis of depression is based on medical history, clinical symptoms, disease course, physical examination and laboratory examination. The internationally used diagnostic standards are ICD-10 and DSM-IV, with ICD-10 mainly adopted domestically. Because the number of patients with depression increases year by year, the underlying neural mechanisms and pathological principles remain unclear, and the diagnostic result is influenced by subjective factors, the clinical diagnosis of depression is very difficult. A more objective and convenient diagnostic method is therefore urgently needed to overcome the limitations of traditional medical diagnosis, improve the recognition rate in the prevention and treatment of depression, and help patients receive timely and effective treatment.
Electroencephalogram (EEG) signals record the spontaneous, rhythmic discharge activity of neurons from the scalp surface. They contain a great deal of physiological and pathological information and are a principal basis for diagnosing diseases of the nervous system. EEG is safe, inexpensive, simple to operate and non-invasive, and is therefore widely used in the diagnosis of brain diseases. In recent years, many studies have found that the EEG data of depressed patients and healthy controls follow different patterns in parameters such as frequency band, power and amplitude. Leuchter et al. analyzed resting EEG data from 121 moderately depressed patients and 37 healthy controls and found that the depressed patients showed overall higher values in the delta, theta, alpha and beta bands than the normal controls, and that the power and synchrony of the prefrontal alpha band differed significantly from those of the normal controls.
Music, as a common stimulus material, can significantly induce human emotions. Acting on the cerebral cortex through the regular frequency changes of sound waves, music also affects the hypothalamus and the peripheral system, improves the excitability of cortical nerves, and activates and improves the emotional state. Dharmadhikari AS et al. compared the hemispheric differences in frontal theta power before and during music listening in depressed patients and controls, and found that the average frontal theta power and frontal theta asymmetry increased significantly in the left hemisphere during music listening in controls without depression, whereas in depressed patients the frontal theta asymmetry was reversed when listening to music. Marko Punkanen et al. studied the emotional perception of music in depressed patients and showed that depressed patients perceive more negative emotions in musical stimuli expressing happiness, sadness, fear, anger and tenderness, which provides a means for the identification of depression. The invention uses neutral, negative and positive music stimuli to induce the corresponding EEG signals, establishes an effective depression recognition model, and improves the auxiliary value for the clinical diagnosis of depression.
Disclosure of Invention
The invention provides a music-induced EEG-based depression recognition method, in which corresponding EEG signals are induced by music stimulation and a new model is established to distinguish mildly depressed patients from a normal control group. EEG data from the different modalities are fused with a feature-fusion technique to construct the depression identification model. EEG signals are recorded synchronously for the depressed group and the normal control group under the different music stimuli. Linear and nonlinear features are then extracted from the EEG signals of each modality, and the features are weighted with a genetic algorithm to improve the overall performance of the identification framework.
The technical solution for realizing the purpose of the invention is as follows: a depression identification method based on music-induced electroencephalogram is realized by the following steps:
step 1, synchronously acquiring electroencephalogram (EEG) data with an EEG acquisition device while the subject receives neutral, negative and positive music stimulation, and then preprocessing the EEG data;
step 2, extracting, with a linear analysis method (such as the wavelet transform) and nonlinear analysis methods, linear features such as frequency and power spectrum and nonlinear features such as power spectral entropy, sample entropy and correlation dimension from the EEG signals under the three different music stimuli;
step 3, linearly combining the extracted linear and nonlinear EEG features of the different modalities with a feature-level fusion technique to obtain multi-modal fusion features;
step 4, selecting the fusion features with a t-test feature-selection method and weighting them with a genetic algorithm to obtain weighted features;
and step 5, inputting the weighted features into a classifier for training to construct a depression recognition model.
The invention has the following beneficial effects: the invention induces EEG data with music stimulation, selects new features from the linearly combined feature matrix with a t-test feature-selection method, and weights the features with a genetic algorithm, thereby improving the overall performance of the classification and recognition model and achieving higher classification accuracy. By comparing the classification performance of the classifiers under the different fusion modalities, the KNN classifier is found to have the highest recognition accuracy on the fusion of positive and negative music stimulation and to outperform the other classification methods.
Drawings
FIG. 1 is a flow chart of a method for acquiring a depression recognition model fused with electroencephalogram data under neutral, negative and positive music stimulation;
FIG. 2 is a flow chart of an electroencephalogram (EEG) acquisition experiment;
FIG. 3 is a graph comparing results of electroencephalogram (EEG) before and after removing ocular artifacts (EOG);
FIG. 4 is a comparison graph of mean values of single modality and fused modality classifiers for different musical stimuli;
FIG. 5 is a graph of a comparison of the performance of fusion modalities and their constituent modalities;
FIG. 6 is a graph of accuracy for an optimal individual modality and an optimal fusion modality;
Detailed Description
The invention is described in further detail below with reference to the figures and the specific examples.
The invention provides a depression identification method based on music-induced electroencephalogram signal feature fusion, which comprises the following steps:
step 1, synchronously acquiring EEG data with an EEG acquisition device while the subject receives neutral, negative and positive music stimulation, and then preprocessing the EEG data;
the invention adopts three music stimulations (neutral music stimulation, negative music stimulation and positive music stimulation) with different emotions to record and analyze the electroencephalogram signals of a subject, and synchronously collects the electroencephalogram data when receiving the neutral, negative and positive music stimulation. The experimental process is completed in 5 music stimulations, including 2 neutral stimulations, 2 negative stimulations and 1 positive stimulation, each music stimulation is played in sequence, the time of each music is 45s, the rest is 6s after each stimulation, and the experiment is carried out for about 5 minutes.
The detailed experimental procedure (fig. 2) is as follows:
(1) ensuring that the subject is awake and meets the inclusion criteria;
(2) explaining the experimental content, procedure and relevant precautions to the subject;
(3) fitting the subject with the EEG acquisition device;
(4) ensuring that the electrodes are accurately positioned and in good contact;
(5) pre-acquiring for 1 minute to confirm that correct EEG signals are obtained; if abnormal signals are found, the instrument is adjusted immediately;
(6) once the pre-acquisition is normal, formally starting the experiment;
(7) playing each music stimulus in sequence while acquiring the subject's EEG signals; after each stimulus the subject rests for 6 s; the playing order is neutral music stimuli, then negative music stimuli, and finally positive music stimuli;
(8) informing the participant that the experiment has ended and checking the data quality; if the quality is poor, the data are collected again.
In order to obtain relatively clean EEG data, the raw EEG signal needs to be preprocessed, as follows:
First, power-line noise, mainly caused by the power supply of the device itself, appears at 50 Hz and is removed with a 50 Hz notch filter.
Second, the electrocardiogram (ECG) is produced by the rhythmic activity of the heart and has a large amplitude; however, because the heart is far from the head, the ECG signal is greatly attenuated by the time it reaches the scalp, so ECG interference is usually ignored when preprocessing EEG signals.
Third, muscle contraction produces electromyographic (EMG) activity, whose frequency is mainly concentrated above 100 Hz, whereas the EEG band of interest in the present invention is 0.5-50 Hz; a finite impulse response filter based on a Blackman window is therefore used to remove the high-frequency noise caused by EMG.
Fourth, the electrooculogram (EOG) is inevitably recorded when prefrontal EEG sites are used, and its 0.1-100 Hz frequency range overlaps with the EEG. The invention estimates the pure EOG artifact with a Kalman filtering method that combines a discrete wavelet transform and an adaptive prediction filter. A comparison of the EEG signal before and after removal of the ocular artifact is shown in Fig. 3.
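For illustration, a minimal Python sketch of the first preprocessing stages (50 Hz notch filtering and a Blackman-window FIR band-pass) is given below. The sampling rate, filter length and single-channel input are assumptions not specified in the text, and the Kalman/DWT ocular-artifact step described above is not reproduced.

```python
# Minimal preprocessing sketch; assumes fs = 250 Hz and a 1-D numpy array
# holding one EEG channel. The ocular-artifact (Kalman/DWT) stage is omitted.
import numpy as np
from scipy import signal

def preprocess_eeg(eeg, fs=250.0):
    # 1) 50 Hz notch filter for power-line noise
    b_notch, a_notch = signal.iirnotch(w0=50.0, Q=30.0, fs=fs)
    eeg = signal.filtfilt(b_notch, a_notch, eeg)

    # 2) FIR band-pass (0.5-50 Hz) with a Blackman window to suppress
    #    high-frequency EMG components (filter length is an assumption;
    #    the segment must be longer than about three filter lengths)
    numtaps = 501
    b_fir = signal.firwin(numtaps, [0.5, 50.0], pass_zero=False,
                          window="blackman", fs=fs)
    eeg = signal.filtfilt(b_fir, [1.0], eeg)
    return eeg
```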
Step 2, extracting, with a linear analysis method (such as the wavelet transform) and nonlinear analysis methods, linear features such as frequency and power spectrum and nonlinear features such as power spectral entropy, sample entropy and correlation dimension from the EEG signals under the three different music stimuli;
the invention extracts 60 linear characteristics and 36 nonlinear characteristics of preprocessed electroencephalogram signals in four wave bands of full wave band (0.5-50Hz), theta (4-8Hz), alpha (8-13Hz), beta (13-30Hz) and gamma (30-50 Hz).
The linear characteristics of the brain electrical signals comprise relative center frequency, absolute center frequency, relative power and absolute power of theta, alpha, beta and gamma waves, and absolute power, center frequency, skewness, kurtosis and peak value of the whole wave band.
The brain electrical signal nonlinear characteristics include variance, Hjorth activity, power spectral entropy, sample entropy, Shannon entropy, correlation dimension and C0 complexity of the whole wave band.
(A) Power spectral entropy: the entropy of the power spectrum evaluates the intensity of brain activity; the greater the entropy, the more active the brain. The invention takes the information entropy of the power spectrum of the EEG signal as the power spectral entropy:
p_x(w_i) = P_x(w_i) / Σ_{j=1}^{N} P_x(w_j)    (1)
H_PSE = -Σ_{i=1}^{N} p_x(w_i) ln p_x(w_i)    (2)
where P_x(w_i) is the power spectrum of the signal and p_x(w_i) is its normalized value.
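A minimal sketch of this feature is given below; the Welch spectral estimate, sampling rate and segment length are assumptions not stated in the text.

```python
# Power-spectral-entropy sketch: Shannon entropy of the normalised power
# spectrum of a 1-D EEG segment.
import numpy as np
from scipy import signal

def power_spectral_entropy(x, fs=250.0):
    freqs, pxx = signal.welch(x, fs=fs, nperseg=min(len(x), 1024))
    p = pxx / np.sum(pxx)          # normalise to a probability distribution
    p = p[p > 0]                   # avoid log(0)
    return -np.sum(p * np.log(p))
```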
(B) Sample entropy: sample entropy is a nonlinear index that reflects the complexity of a time series, and statistical differences between depressed patients and the normal control group can be found with it. The specific algorithm is as follows:
Step 1: the sequence x(1), x(2), ..., x(N) is grouped in order into m-dimensional vectors, i.e.
X_m(i) = [x(i), x(i+1), ..., x(i+m-1)]    (3)
where 1 ≤ i ≤ N-m+1.
Step 2: the distance between the vectors X_m(i) and X_m(j) is defined as d[X_m(i), X_m(j)], i.e.
d[X_m(i), X_m(j)] = max_k |x(i+k) - x(j+k)|    (4)
where 0 ≤ k ≤ m-1, 1 ≤ i, j ≤ N-m+1 and i ≠ j.
Step 3: given a similarity tolerance r (r > 0), for each 1 ≤ i ≤ N-m, count the number of j for which d[X_m(i), X_m(j)] < r and record its ratio to the total number of vectors N-m-1 as
B_i^m(r) = num{ d[X_m(i), X_m(j)] < r } / (N-m-1)    (5)
where 1 ≤ j ≤ N-m and j ≠ i. Averaging over all i gives
B^m(r) = (1/(N-m)) Σ_{i=1}^{N-m} B_i^m(r)    (6)
Similarly, for the (m+1)-dimensional vectors,
A_i^m(r) = num{ d[X_{m+1}(i), X_{m+1}(j)] < r } / (N-m-1)    (7)
where 1 ≤ j ≤ N-m and j ≠ i. Averaging over all i gives
A^m(r) = (1/(N-m)) Σ_{i=1}^{N-m} A_i^m(r)    (8)
Step 4: the sample entropy of the sequence is
SampEn(m, r) = lim_{N→∞} { -ln[ A^m(r) / B^m(r) ] }    (9)
In practice, N cannot be infinite; when N takes a finite value, the sample entropy is estimated as
SampEn(m, r, N) = -ln[ A^m(r) / B^m(r) ]    (10)
the value of SampEn (m, r, N) is related to the selection of the parameters m, r, N. In general, the embedding dimension m is 1 or 2, the similarity margin r is between 0.1SD and 0.25SD (SD is the standard deviation of the time series), and the calculated sample entropy has reasonable statistical characteristics. When the entropy of the sample is calculated, m is 2, and r is 0.2 SD. (C) Shannon entropy: shannon entropy is a measure of random variables and random signal uncertainty. The larger the entropy, the greater the uncertainty and randomness. The method adopts Shannon entropy to measure the ordered state of the brain electrical signals. Shannon entropy is defined as follows:
H = -Σ_{i=1}^{n} p_i ln p_i    (11)
where p_i is the probability of the i-th possible amplitude value of the signal.
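Minimal sketches of the sample-entropy steps above (with m = 2 and r = 0.2 SD) and of a Shannon-entropy estimate are given below; the Chebyshev distance follows equation (4), while the histogram-based probability estimate and bin count for the Shannon entropy are assumptions.

```python
# Sample entropy (equations (3)-(10)) and a histogram-based Shannon entropy
# for a 1-D EEG segment given as a numpy array.
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def match_count(dim):
        # First N - m template vectors of length `dim`, so the m- and
        # (m+1)-dimensional counts cover the same index range
        templates = np.array([x[i:i + dim] for i in range(N - m)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev (maximum-coordinate) distance, self-match excluded
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist < r) - 1
        return count

    B = match_count(m)        # number of m-dimensional matches
    A = match_count(m + 1)    # number of (m+1)-dimensional matches
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def shannon_entropy(x, bins=16):
    # Amplitude distribution estimated with a histogram (bin count assumed)
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))
```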
(D) Correlation dimension: the correlation dimension reflects the degree of correlation between points in the state space and measures the complexity of the system; the larger the dimension, the lower the degree of correlation and the higher the complexity of the system. It can be defined as
D = lim_{r→0} ln C(r) / ln r    (12)
where C(r) is the correlation integral and r is the radial distance around each reference point.
(E) C0 complexity: the C0 complexity decomposes the EEG signal into regular and irregular components and reflects the proportion of the irregular component of the EEG time series in the original EEG signal; the higher the C0 complexity, the stronger the randomness of the EEG sequence. The C0 complexity is calculated as
C0 = Σ_t |s(t) - s'(t)|² / Σ_t |s(t)|²    (13)
where s(t) is the original sequence and s'(t) is its regular component, obtained by retaining only the spectral components of the Fourier transform FT(s(t)) whose power exceeds the mean spectral power and applying the inverse transform FT⁻¹.
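A minimal C0-complexity sketch is shown below, assuming the regular component s'(t) is obtained by keeping only the Fourier components whose power exceeds the mean spectral power.

```python
# C0-complexity sketch: energy ratio of the irregular component of the
# signal after removing its "regular" (above-mean-power) spectral part.
import numpy as np

def c0_complexity(s):
    s = np.asarray(s, dtype=float)
    S = np.fft.fft(s)
    mean_power = np.mean(np.abs(S) ** 2)
    S_regular = np.where(np.abs(S) ** 2 > mean_power, S, 0.0)
    s_regular = np.fft.ifft(S_regular).real     # regular component s'(t)
    irregular = s - s_regular
    return np.sum(irregular ** 2) / np.sum(s ** 2)
```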
Step 3, linearly combining the extracted linear and nonlinear EEG features of the different modalities with a feature-level fusion technique to obtain multi-modal fusion features;
the invention uses a characteristic segment fusion method to carry out linear combination on the electroencephalogram characteristics extracted under different modes, thereby realizing mutual supplement among the characteristics.
First, the EEG features of each individual modality are extracted. The feature matrices are as follows:
x_pos = {x_1, x_2, ..., x_m}    (14)
x_neu = {x_1', x_2', ..., x_m'}    (15)
x_neg = {x_1'', x_2'', ..., x_m''}    (16)
where x_pos denotes the feature matrix under the positive music stimulation modality, x_neu the feature matrix under the neutral music stimulation modality, and x_neg the feature matrix under the negative music stimulation modality.
Then, the feature matrices of the three modalities are linearly combined with a feature-level fusion method, and the new matrices are denoted U:
U_1 = {u_1, u_2, ..., u_m}, U_2 = {v_1, v_2, ..., v_m}, U_3 = {w_1, w_2, ..., w_m}    (17)
Finally, the fusion feature matrices are calculated as follows:
U_1 = β·x_pos + γ·x_neg, U_2 = β·x_pos + γ·x_neu, U_3 = β·x_neg + γ·x_neu    (18)
u_i = β·x_i + γ·x_i'', v_i = β·x_i + γ·x_i', w_i = β·x_i'' + γ·x_i'    (19)
where the combination coefficients β and γ are set to 1 and -1, respectively; U_1 denotes the feature matrix of the fusion of positive and negative music stimuli, U_2 the fusion of positive and neutral music stimuli, and U_3 the fusion of negative and neutral music stimuli.
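With β = 1 and γ = -1 the fusion reduces to a difference of single-modality feature matrices; a minimal sketch is given below, assuming each feature matrix is a NumPy array of shape [subjects, features].

```python
# Feature-level fusion sketch: linear combination of two single-modality
# feature matrices with coefficients beta and gamma.
import numpy as np

def fuse_features(x_a, x_b, beta=1.0, gamma=-1.0):
    return beta * np.asarray(x_a) + gamma * np.asarray(x_b)

# U1: positive + negative, U2: positive + neutral, U3: negative + neutral
# U1 = fuse_features(x_pos, x_neg)
# U2 = fuse_features(x_pos, x_neu)
# U3 = fuse_features(x_neg, x_neu)
```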
Step 4, selecting the fusion features with a t-test feature-selection method and weighting them with a genetic algorithm to obtain weighted features;
the invention adopts a t-test feature selection method to select the electroencephalogram signal fusion features under different modes and carry out genetic algorithm feature weighting, so as to better compare the difference of the fusion new features of the depression patients and the normal control group and improve the performance of the classification recognition model.
The genetic algorithm (GA) is a global optimization algorithm. It typically requires tuning four parameters, namely Pop, T, Pc and Pm; each parameter has to be adjusted several times to obtain a reasonable value suited to the problem to be solved.
Pop: the population size. The larger the value, the greater the diversity, but computational efficiency decreases, so a reasonable population size has to be set during tuning;
T: the number of generations to be completed;
Pc: the crossover (mating) probability of each individual in each generation;
Pm: the mutation probability of each individual in each generation.
the final parameter is selected to be Pop-30, T-500, Pc-0.9, and Pm-0.5 by adjusting several times.
Step 5, inputting the weighted features into a classifier for training to construct a depression recognition model;
the classification performance of different classifiers in different modes and the average performance of each classifier in a single mode and a fusion mode are compared by using three traditional classifiers of KNN, SVM and DT, and the results are respectively shown in tables 1 and 2.
TABLE 1 Performance (%) of different classifiers in different modalities
TABLE 2 Average performance (%) of each classifier in individual and fused modalities
The results show that the classification performance of KNN is superior to that of SVM and DT in both the single and the fused modalities; the KNN classifier is therefore more suitable for classifying depression EEG signals.
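A sketch of such a comparison is given below; the classifier hyper-parameters and five-fold cross-validation are assumptions not given in the text.

```python
# Cross-validated accuracy of KNN, SVM and decision-tree classifiers on the
# weighted fusion features X with labels y.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def compare_classifiers(X, y, cv=5):
    classifiers = {
        "KNN": KNeighborsClassifier(n_neighbors=3),
        "SVM": SVC(kernel="rbf", C=1.0),
        "DT": DecisionTreeClassifier(max_depth=5),
    }
    return {name: cross_val_score(clf, X, y, cv=cv).mean()
            for name, clf in classifiers.items()}
```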
In addition, the invention compares the classification performance of the individual and fused modalities under the different music stimuli, and the performance of each fusion modality with that of its constituent individual modalities, as shown in Figs. 4 and 5, respectively. As can be seen from Fig. 4, among the single modalities the features of the positive music stimulation modality achieve the best classification accuracy, and among the fusion modalities the fusion of positive and negative music stimuli performs best. As can be seen from Fig. 5, the fused modality of positive and negative music stimuli outperforms the two individual modalities that compose it. The best fusion is therefore the fusion of positive and negative music stimuli.
In order to determine the best classification scheme, the accuracy of the different classifiers under the fusion of positive and negative music stimuli was also compared, as shown in Fig. 6. The results show that the best individual modality is the KNN classifier under positive music stimulation, and the best fusion modality is the KNN classifier under the fusion of positive and negative music stimulation.
In summary, among the three classifiers the classification accuracy of KNN is the highest in both the individual and the fusion modalities; the KNN classifier achieves the best depression identification in the positive-negative music stimulation fusion modality, with an accuracy of 86.98%. The invention therefore considers the KNN classifier, applied to the fusion of positive and negative music stimuli, the most suitable for distinguishing depressed patients from normal controls.

Claims (5)

1. A depression identification method based on music-induced electroencephalogram is characterized by comprising the following steps:
step 1, synchronously acquiring electroencephalogram data with an electroencephalogram acquisition device while the subject receives neutral, negative and positive music stimulation, and then preprocessing the electroencephalogram data;
step 2, extracting, with a linear analysis method (such as the wavelet transform) and nonlinear analysis methods, linear features such as frequency and power spectrum and nonlinear features such as power spectral entropy, sample entropy and correlation dimension from the electroencephalogram signals under the three different music stimuli;
step 3, linearly combining the extracted linear and nonlinear electroencephalogram features of the different modalities with a feature-level fusion technique to obtain multi-modal fusion features;
step 4, selecting the fusion features with a t-test feature-selection method and weighting them with a genetic algorithm to obtain weighted features;
and step 5, inputting the weighted features into a classifier for training to construct a depression recognition model.
2. The method for identifying depression based on music-induced electroencephalogram according to claim 1, characterized in that: in the first step, the music stimuli are divided into three categories according to a valence-arousal two-dimensional emotion model, namely neutral, negative and positive music stimulation. The electroencephalogram data are the electroencephalogram signals acquired while the subject receives the music stimulation of these three different emotions. The experiment consists of 5 music stimuli, comprising 2 neutral, 2 negative and 1 positive stimulus; the stimuli are played in sequence, each piece of music lasts 45 s with a 6 s rest after each stimulus, and the experiment takes about 5 minutes.
3. The method for identifying depression based on music-induced electroencephalogram according to claim 1, characterized in that: in the first step, the brain electrical data preprocessing process is as follows:
a. removing power frequency noise caused by a power supply device at the frequency of 50Hz by adopting a 50Hz notch filter;
b. removing high-frequency band noise caused by myoelectricity by adopting a finite impulse response filter based on a Blackman time window;
c. removing ocular artifacts (EOG) by adopting a Kalman filtering method in combination with a discrete wavelet transform and an adaptive prediction filter.
4. The method for identifying depression based on music-induced electroencephalogram according to claim 1, characterized in that: in the third step, the EEG features of the single modalities are combined with a feature-level fusion method, so that the features complement one another.
First, the EEG features of each individual modality are extracted. The feature matrices are as follows:
x_pos = {x_1, x_2, ..., x_m}
x_neu = {x_1', x_2', ..., x_m'}
x_neg = {x_1'', x_2'', ..., x_m''}
where x_pos denotes the feature matrix under the positive music stimulation modality, x_neu the feature matrix under the neutral music stimulation modality, and x_neg the feature matrix under the negative music stimulation modality.
Then, the feature matrices of the three modalities are linearly combined with a feature-level fusion method, and the new matrices are denoted U:
U_1 = {u_1, u_2, ..., u_m}, U_2 = {v_1, v_2, ..., v_m}, U_3 = {w_1, w_2, ..., w_m}
Finally, the fusion feature matrices are calculated as follows:
U_1 = β·x_pos + γ·x_neg, U_2 = β·x_pos + γ·x_neu, U_3 = β·x_neg + γ·x_neu
u_i = β·x_i + γ·x_i'', v_i = β·x_i + γ·x_i', w_i = β·x_i'' + γ·x_i'
where the combination coefficients β and γ are set to 1 and -1, respectively; U_1 denotes the feature matrix of the fusion of positive and negative music stimuli, U_2 the fusion of positive and neutral music stimuli, and U_3 the fusion of negative and neutral music stimuli.
5. The method for identifying depression based on music-induced electroencephalogram according to claim 1, characterized in that: in step five, depression is identified from the electroencephalogram signals acquired under the neutral, negative and positive music stimulation with three classifiers, KNN, DT and SVM; it is found that depressed patients are more sensitive to negative emotional stimulation and show enhanced attention to negative emotion, and the KNN classifier achieves the best depression identification in the positive-negative music stimulation fusion modality.
CN202011392523.7A 2020-12-04 2020-12-04 Music-induced electroencephalogram-based depression identification method Pending CN112545513A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011392523.7A CN112545513A (en) 2020-12-04 2020-12-04 Music-induced electroencephalogram-based depression identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011392523.7A CN112545513A (en) 2020-12-04 2020-12-04 Music-induced electroencephalogram-based depression identification method

Publications (1)

Publication Number Publication Date
CN112545513A true CN112545513A (en) 2021-03-26

Family

ID=75047149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011392523.7A Pending CN112545513A (en) 2020-12-04 2020-12-04 Music-induced electroencephalogram-based depression identification method

Country Status (1)

Country Link
CN (1) CN112545513A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407733A (en) * 2016-12-12 2017-02-15 兰州大学 Depression risk screening system and method based on virtual reality scene electroencephalogram signal
CN110876626A (en) * 2019-11-22 2020-03-13 兰州大学 Depression detection system based on optimal lead selection of multi-lead electroencephalogram
CN111068159A (en) * 2019-12-27 2020-04-28 兰州大学 Music feedback depression mood adjusting system based on electroencephalogram signals

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113082447A (en) * 2021-04-02 2021-07-09 电子科技大学 Prediction method for music modulation brain plasticity effect of fMRI brain loop
CN113082447B (en) * 2021-04-02 2021-12-07 电子科技大学 Prediction method for music modulation brain plasticity effect of fMRI brain loop
CN113208617A (en) * 2021-04-06 2021-08-06 北京脑陆科技有限公司 Anxiety state nerve regulation and control method and system based on EEG signal
CN113180662A (en) * 2021-04-07 2021-07-30 北京脑陆科技有限公司 EEG signal-based anxiety state intervention method and system
CN113180661A (en) * 2021-04-07 2021-07-30 北京脑陆科技有限公司 Method and system for regulating and controlling anxiety state based on EEG signal
CN113349795A (en) * 2021-06-15 2021-09-07 杭州电子科技大学 Depression electroencephalogram analysis method based on sparse low-rank tensor decomposition
CN113349795B (en) * 2021-06-15 2022-04-08 杭州电子科技大学 Depression electroencephalogram analysis method based on sparse low-rank tensor decomposition
CN113558636A (en) * 2021-07-05 2021-10-29 杭州电子科技大学 Method for classifying dementia degree of Alzheimer disease patient based on music electroencephalogram signal arrangement entropy
CN113558636B (en) * 2021-07-05 2024-04-02 杭州电子科技大学 Method for classifying dementia degree of Alzheimer disease patient based on musical electroencephalogram permutation entropy
CN113397563A (en) * 2021-07-22 2021-09-17 北京脑陆科技有限公司 Training method, device, terminal and medium for depression classification model
CN113855022A (en) * 2021-10-11 2021-12-31 北京工业大学 Emotion evaluation method and device based on eye movement physiological signals
CN117633667A (en) * 2024-01-26 2024-03-01 吉林大学第一医院 N270 waveform-based depression symptom identification method, device and equipment

Similar Documents

Publication Publication Date Title
CN112545513A (en) Music-induced electroencephalogram-based depression identification method
CN110876626B (en) Depression detection system based on optimal lead selection of multi-lead electroencephalogram
Khosla et al. A comparative analysis of signal processing and classification methods for different applications based on EEG signals
Khare et al. PDCNNet: An automatic framework for the detection of Parkinson’s disease using EEG signals
Hamad et al. Feature extraction of epilepsy EEG using discrete wavelet transform
Blinowska et al. Electroencephalography (eeg)
Poulos, M. Rangoussi, N. Alexandris, A. Evangelou: On the use of EEG features towards person identification via neural networks
EP3737283A1 (en) System and method for classifying and modulating brain behavioral states
Prucnal et al. Effect of feature extraction on automatic sleep stage classification by artificial neural network
Zainuddin et al. Alpha and beta EEG brainwave signal classification technique: A conceptual study
CN115640827B (en) Intelligent closed-loop feedback network method and system for processing electrical stimulation data
Djamal et al. Significant variables extraction of post-stroke EEG signal using wavelet and SOM kohonen
Song et al. Common spatial generative adversarial networks based EEG data augmentation for cross-subject brain-computer interface
Jadhav et al. Automated sleep stage scoring using time-frequency spectra convolution neural network
Akella et al. Classifying multi-level stress responses from brain cortical EEG in nurses and non-health professionals using machine learning auto encoder
KR101498812B1 (en) Insomnia tests and derived indicators using eeg
KR20080107961A (en) User adaptative pattern clinical diagnosis/medical system and method using brain waves and the sense infomation treatment techniques
Furman et al. Finger flexion imagery: EEG classification through physiologically-inspired feature extraction and hierarchical voting
Munavalli et al. Introduction to Brain–Computer Interface: Applications and Challenges
Goshvarpour et al. Fusion framework for emotional electrocardiogram and galvanic skin response recognition: Applying wavelet transform
CN116870366A (en) Repeated transcranial magnetic stimulation individuation data processing method for Alzheimer disease
Fadav et al. A machine learning approach for addiction detection using phase amplitude coupling of EEG signals
Showmik et al. Beat-wise Classification of Arrhythmia Using Novel Combination of ECG Signal-Specific and Signal-Independent Features with ML Algorithms
Sellami et al. Analysis of speech related EEG signals using emotiv epoc+ headset, fast fourier transform, principal component analysis, and K-nearest neighbor methods
Wijayanto et al. Complexity Based Multilevel Signal Analysis for Epileptic Seizure Detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination