WO2018014436A1 - Emotion EEG recognition method for improving the time robustness of an emotion recognition model - Google Patents

Emotion EEG recognition method for improving the time robustness of an emotion recognition model

Info

Publication number
WO2018014436A1
WO2018014436A1 PCT/CN2016/098165 CN2016098165W
Authority
WO
WIPO (PCT)
Prior art keywords
emotional
emotion
recognition model
time
matrix
Prior art date
Application number
PCT/CN2016/098165
Other languages
English (en)
French (fr)
Inventor
刘爽
明东
仝晶晶
安兴伟
许敏鹏
綦宏志
何峰
周鹏
Original Assignee
天津大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 天津大学 filed Critical 天津大学
Publication of WO2018014436A1 publication Critical patent/WO2018014436A1/zh

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7235 Details of waveform analysis
    • A61B5/725 Details of waveform analysis using specific filters therefor, e.g. Kalman or adaptive filters
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7271 Specific aspects of physiological measurement analysis

Definitions

  • The invention relates to the field of EEG recognition, and in particular to an emotion EEG recognition method for improving the time robustness of an emotion recognition model.
  • Emotion is a comprehensive state arising from whether objective things satisfy a person's own needs. As a high-level function of the human brain, it safeguards the survival and adaptation of the organism and influences, to varying degrees, people's learning, memory and decision-making. In people's daily work and life, the role of emotion is everywhere. Negative emotions can affect our physical and mental health, reduce the quality and efficiency of our work, and cause serious work mistakes. Studies have shown that long-term accumulation of negative emotions can impair the function of the immune system, making people more susceptible to infection by surrounding viruses. Therefore, it is necessary to detect negative emotions in a timely manner and give appropriate intervention and regulation, especially for drivers, astronauts and other special workers.
  • Electroencephalography has received the attention of researchers because of its high temporal resolution, its freedom from deliberate human control, and its ability to reflect people's emotional state objectively and truthfully, and it has been introduced into the field of emotion recognition.
  • Newly proposed theoretical methods have improved the accuracy of EEG-based emotion recognition to a certain extent.
  • However, once moved towards practical application, the recognition rate drops sharply and can hardly meet application requirements, so establishing a highly accurate emotion recognition model still faces enormous challenges.
  • One of the difficulties is how to eliminate or reduce the time effect of EEG signals and thereby improve the time universality of the emotion recognition model. It is well known that hormone levels, the external environment (such as temperature and humidity), and diet and sleep can cause physiological signals to differ, so EEG signals differ even under the same emotional state at different times. Moreover, in practical applications there is inevitably a time interval between the establishment of the emotion recognition model and the recognition of the emotional state, and the test data will not take part in building the model, especially in some special application scenarios. For the recognition of an astronaut's emotional state, for example, the recognition model is built during the preparation phase on the ground, while the emotional state is recognized during the working phase in space. It is impractical to build a recognition model on the same day and then immediately put it into use.
  • The invention provides an emotion EEG recognition method for improving the time robustness of the emotion recognition model.
  • The invention can effectively improve the time robustness and universality of the emotion recognition model and solve the bottleneck problem in current emotion recognition.
  • It pushes the model towards practical application and has considerable social and economic benefits, as described below:
  • An emotion EEG recognition method for improving the time robustness of an emotion recognition model comprises the following steps:
  • preprocessing the collected 64-lead EEG signals, including: re-referencing to the binaural (mastoid) average; downsampling to 500 Hz; 1-100 Hz band-pass filtering; and removing ocular artifacts with an independent component analysis algorithm;
  • finding the optimal discriminative frequency band of each user from the preprocessed EEG signals with an adaptive tracking algorithm for discriminative frequency bands, and computing the power spectral density of the optimal band of each lead to form an emotion feature matrix;
  • reducing the dimensionality of the obtained emotion feature matrix by principal component analysis to serve as the final feature matrix;
  • recognizing the features in the final feature matrix with a support vector machine classifier, weakening time-specific features by increasing the number of days of samples in the training set of the emotion model, improving the time robustness of the emotion model, distinguishing different emotional states, and establishing the emotion recognition model.
  • the method further includes:
  • 64-lead EEG signals of the subjects under different emotional states are collected in different time periods.
  • The step of finding the optimal discriminative frequency band of each user from the preprocessed EEG signals with the adaptive band-tracking algorithm is specifically:
  • The separability weight DW(f) can be obtained from the Fisher ratio; the DFCs are computed by band-wise iterative selection, the number of iterations being equal to the number of frequency bands to be obtained; and the optimal discriminative band is obtained from the obtained bands.
  • Obtaining the optimal discriminative band from the obtained bands is specifically:
  • The step of reducing the dimensionality of the obtained emotion feature matrix by principal component analysis to serve as the final feature matrix is specifically:
  • The eigenvalue of each principal component represents the amount of information it carries, and the cumulative contribution rate of the first k principal components is obtained;
  • The data of each day are column-normalized separately to [-1, 1] to obtain a feature matrix.
  • The SVM classifier is used to build the emotion recognition model. During the modeling process, several days of data are put into the training set to improve the time robustness of the classifier.
  • The beneficial effect of the technical solution provided by the present invention is that a new method for improving the time robustness of the emotion recognition model is proposed, in which the optimal discriminative frequency band of each user is found by the adaptive band-tracking method.
  • The invention can effectively improve the time robustness and accuracy of the emotion recognition model and can obtain considerable social and economic benefits.
  • The preferred mode of implementation is patent transfer, technical cooperation or product development.
  • 1 is a flow chart of an emotional brain electrical recognition method for improving time robustness of an emotion recognition model
  • Figure 2 is a 60-lead EEG lead diagram
  • Figure 3 is a schedule of test experiments
  • Figure 4 is a flow chart of adaptive tracking calculation of separable frequency bands
  • Figure 5 is a flow chart of the DFCs algorithm
  • Figure 6 shows the recognition accuracy under different training days.
  • The present invention proposes a new emotion EEG recognition method for improving the time robustness of the emotion recognition model, which finds the optimal discriminative frequency band of each user with the adaptive tracking algorithm for discriminative frequency bands.
  • The power spectral density of the optimal band of each lead is computed separately, which constitutes the emotion feature matrix.
  • Principal component analysis is used to reduce the dimensionality of the obtained feature matrix, which is used as the feature matrix for the final emotion recognition.
  • A support vector machine establishes the emotion recognition model, and time-specific features are weakened by increasing the number of days of samples in the training set of the emotion model to improve its time robustness, so that emotion recognition can be performed accurately and objectively.
  • The method overcomes the above two problems: it neither reduces the number of emotion types recognized, nor lets the test-set data take part in building the emotion recognition model, which satisfies the requirements of practical applications.
  • The embodiment of the invention provides an emotion EEG recognition method for improving the time robustness of the emotion recognition model.
  • The emotion EEG recognition method comprises the following steps:
  • In the data acquisition stage, 64-lead EEG signals are collected in different time periods while the subjects are in different emotional states (positive, neutral, negative);
  • 102: the collected 64-lead EEG signals are preprocessed in four steps, including: re-referencing to the binaural (mastoid) average; downsampling to 500 Hz; 1-100 Hz band-pass filtering; and independent component analysis (ICA) to remove ocular artifacts;
  • 103: the optimal discriminative frequency band of each user is found from the preprocessed EEG signals with the adaptive band-tracking algorithm, and the power spectral density of the optimal band of each lead is computed to form the emotion feature matrix;
  • 105: the features in the final feature matrix are recognized with a support vector machine classifier, time-specific features are weakened by increasing the number of days of samples in the training set of the emotion model, the time robustness of the emotion model is improved, different emotional states are distinguished, and the emotion recognition model is established.
  • The embodiment of the present invention weakens time-specific features by increasing the number of days in the training set. Prior to feature recognition, the data of each day are first normalized separately.
  • The embodiment of the present invention finds the optimal discriminative frequency band of each user with the adaptive band-tracking algorithm and separately computes the power spectral density of the optimal band of each lead to form the emotion feature matrix.
  • Principal component analysis is used to reduce the dimensionality of the obtained feature matrix.
  • A support vector machine is used to establish the emotion recognition model; by increasing the number of days of samples in the training set of the emotion model, time-specific features are weakened and the time robustness of the emotion model is improved, so that emotion recognition is performed accurately and objectively. The invention can effectively improve the time robustness and accuracy of the emotion recognition model.
  • The scheme of Embodiment 1 is described in detail below with reference to FIG. 2, FIG. 3, FIG. 4 and FIG. 5. For details, refer to the following description:
  • The EEG acquisition equipment is Neuroscan's 64-lead amplifier with the Scan 4.5 acquisition system.
  • The electrodes are placed in accordance with the standard 10-20 system specified by the international EEG society.
  • The montage of the 60 electrodes, excluding the EOG and reference electrodes, is shown in FIG. 2.
  • The right mastoid is used as the reference electrode, and the fronto-central midline of the scalp is grounded.
  • The impedance of all electrodes is kept below 5 kΩ, and the sampling frequency is 1000 Hz.
  • Figure 3 is the schedule of the subjects' test sessions. Each subject came to the laboratory at the same time of day for data collection, and videos were used to induce the positive, neutral and negative emotional states.
  • The reference potential during acquisition is at the right mastoid, which results in a low signal amplitude for the leads over the right hemisphere. Therefore, the reference potential is converted first, changing it to the M1 and M2 leads located at the mastoids on both sides, to facilitate subsequent data processing.
  • The sampling frequency of the system is 1000 Hz, mainly to capture the rapid changes of the EEG signal.
  • However, a sampling frequency of 1000 Hz far exceeds the theoretical rate required by the Nyquist theorem, and an excessive sampling frequency leads to an excessive amount of data, which reduces the efficiency of subsequent processing. Therefore, the collected data are downsampled, and the sampling frequency of the EEG signal is reduced from 1000 Hz to 500 Hz.
  • Band-pass filtering from 1 Hz to 100 Hz is performed to remove DC drift and high-frequency components.
  • The collected EEG signals inevitably contain the effects of ocular activity (including vertical and horizontal eye movements and blinking) and of myoelectric signals.
  • The ocular signals, especially those caused by blinking, are particularly strong, and the leads over the forehead area are affected most.
  • In the embodiment of the present invention, these artifacts are filtered out by independent component analysis (ICA).
  • The embodiment of the present invention uses adaptive tracking of discriminative frequency components (ATDFCs) to find the frequency bands that best distinguish the different emotion types. This is important for accurate feature extraction and improved classification accuracy.
  • The computation procedure of the DFC adaptive tracking method is shown in Figure 4.
  • Each lead has a discrete time-frequency matrix I_n(f, t).
  • S_W(f, t), S_B(f, t), m_k(f, t), m(f, t) and F_R(f, t) are two-dimensional matrices.
  • S_W(f, t) and S_B(f, t) represent the within-class and between-class differences, respectively.
  • F_R(f, t) is the Fisher ratio.
  • m_k(f, t) is the mean time-frequency density of the k-th class, and m(f, t) is the mean time-frequency density of all classes.
  • The separability weight DW(f) can be obtained from the Fisher ratio, and is calculated as in formula (4).
  • τ represents the time period over which the STFT is calculated.
  • The DFCs are computed by band-wise iterative selection, and the number of iterations is equal to the number of bands to be obtained.
  • For example, if the most discriminative frequency band is 9 to 14 Hz, the DW(f) values corresponding to 9, 10, 11, 12 and 13 Hz are set to zero, and the second discriminative band is then computed; this process is repeated until the required number of bands is obtained.
  • Step 2: as the frequency window moves along the frequency axis of DW(f), the energy distribution α is calculated according to formula (5).
  • F_i represents the center frequency of the i-th frequency band as the frequency window moves along the frequency axis.
  • For example, when the frequency window width is 3 Hz, 97 frequency bands can be obtained: 1-4 Hz, 2-5 Hz, 3-6 Hz, 4-7 Hz, ..., 97-100 Hz.
  • Step 3: according to the maximum energy distribution α, the best center frequency F_opt^j is selected among all F_i, as in formula (6).
  • Each j corresponds to an optimal center frequency F_opt^j.
  • Step 5: after computing δ_j, a threshold δ_min is set.
  • The power spectrum values of the first, second and third discriminative frequency bands of each lead are selected to establish the characteristic matrix P_{Ni×180} of each day.
  • Ni is the number of samples on the i-th day.
  • 60 leads × 3 bands = 180 dimensions.
  • The embodiment of the present invention first uses PCA to reduce the dimensionality of the feature vectors obtained for each day.
  • PCA represents the rows (or columns) of the original data matrix with a set of linearly independent, mutually orthogonal new vectors, so as to compress the number of variables, eliminate redundant information and preserve as much valid information as possible.
  • The original vector group is (P_1, P_2, ..., P_180), the principal-component vector group is denoted (F_1, F_2, ..., F_m), and usually m is smaller than 180.
  • The relationship between the principal components and the original vector group is:
  • P_{b,h} is the h-th feature of the b-th sample.
  • The eigenvector matrix U_{180×180} serves as the coordinate axes of the principal components and forms a new vector space.
  • U_{180×180}' is the transpose of U_{180×180}.
  • λ_i is the i-th eigenvalue obtained.
  • A total of 7 principal components are obtained.
  • The contribution rate of the first principal component F1 is 48%.
  • The contribution rate of F2 is 32%.
  • The contribution rate of F3 is 15%.
  • The contribution rate of F4, F5, F6 and F7 together is 5% (the seven principal components contribute 100% in total).
  • The cumulative contribution rate of the first three principal components (F1, F2, F3) reaches 95%, i.e. the first three principal components contain 95% of the information of the seven principal components.
  • These three principal components are therefore selected as the new data for pattern recognition, which reduces the dimensionality of the feature matrix while guaranteeing the amount of information.
  • The support vector machine (SVM) [3] is used to establish the emotion recognition model and to identify the user's current emotional state.
  • A portion of the samples is used to build the classifier and is called the training set; the remaining samples are used to test the classifier and are called the test set.
  • Embodiments of the present invention improve the time robustness of the emotion recognition model primarily by increasing the number of days of samples in the training set. In the pattern recognition phase, four days of data are therefore used to train the model, and the samples of the remaining day form the test set. This helps the training set capture emotion-related features while weakening time-specific features.
  • F_j^min is the minimum value of the j-th column of the dimensionality-reduced characteristic matrix F_{Ni×d}.
  • F_j^max is the maximum value of the j-th column of the characteristic matrix F_{Ni×d}.
  • The embodiment of the present invention finds the optimal discriminative frequency band of each user with the adaptive band-tracking method, reduces the feature dimensionality by principal component analysis, and weakens time-specific features by increasing the number of days of samples in the training set, so that emotion recognition can then be performed accurately, stably and in real time.
  • The invention can effectively improve the time robustness and accuracy of the emotion recognition model.
  • Figure 6 shows the recognition accuracy for different numbers of training-sample days for the nine participants.
  • The vertical axis is the average recognition accuracy obtained under the N-day condition.
  • As the number of days in the training set increases, the accuracy rises, and the accuracy is positively correlated with the number of training-sample days.
  • Training the classifier with 4 days of samples instead of 1 day raises the accuracy by approximately 10%, a statistically significant difference (p < 0.01). This also verifies that the method proposed by the present invention is effective.
  • The embodiment of the invention finds the optimal discriminative frequency band of each user with the adaptive band-tracking method, performs feature dimensionality reduction with principal component analysis, and increases the number of days of samples in the training set of the emotion recognition model, thereby strengthening emotion-related features and weakening time-specific features, which improves the time robustness of the emotion recognition model.
  • Figure 6 illustrates that increasing the number of days of samples in the training set can significantly improve the time robustness of the classifier.
  • The invention can effectively improve the time robustness and accuracy of the emotion recognition model, and provides technical support for moving emotion recognition from the laboratory towards application.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Social Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

An emotion EEG recognition method for improving the time robustness of an emotion recognition model, comprising: preprocessing the collected 64-lead EEG signals, including re-referencing to the binaural (mastoid) average, downsampling to 500 Hz, 1-100 Hz band-pass filtering, and removal of ocular artifacts with an independent component analysis algorithm; finding the optimal discriminative frequency bands of the preprocessed EEG signals by adaptive tracking of discriminative frequency bands, and computing the power spectral density of the optimal bands of each lead to form an emotion feature matrix; reducing the dimensionality of the feature matrix by principal component analysis; and recognizing the dimensionality-reduced EEG power-spectrum features with a support vector machine classifier to establish the emotion recognition model. The above scheme finds the optimal discriminative bands by adaptive band tracking and, by increasing the number of days of samples in the training set of the emotion recognition model, strengthens emotion-related features, weakens time-specific features and improves the time robustness of the emotion recognition model.

Description

Emotion EEG recognition method for improving the time robustness of an emotion recognition model
Technical Field
The present invention relates to the field of EEG recognition, and in particular to an emotion EEG recognition method for improving the time robustness of an emotion recognition model.
Background Art
Emotion is a comprehensive state arising from whether objective things satisfy a person's own needs. As a high-level function of the human brain, it safeguards the survival and adaptation of the organism and influences, to varying degrees, people's learning, memory and decision-making. In daily work and life, the role of emotion is everywhere. Negative emotions affect our physical and mental health, lower the quality and efficiency of work, and can cause serious work errors. Studies have shown that the long-term accumulation of negative emotions damages the function of the immune system and makes people more susceptible to infection by surrounding viruses. It is therefore necessary to detect negative emotions in time and apply appropriate intervention and regulation, especially for drivers, astronauts and other special workers. On the other hand, in human-computer interaction systems, if the system can capture the user's emotional state, the interaction becomes friendlier, more natural and more efficient. The analysis and recognition of emotion has become an important interdisciplinary research topic in neuroscience, psychology, cognitive science, computer science and artificial intelligence.
With the development of neurophysiology and the rise of brain-imaging techniques, electroencephalography (EEG) has attracted the attention of researchers and been introduced into the field of emotion recognition because of its high temporal resolution, its freedom from deliberate human control, and its ability to reflect a person's emotional state objectively and truthfully. Newly proposed theoretical methods have, to a certain extent, improved the accuracy of EEG-based emotion recognition. However, once moved towards practical application, the recognition rate drops sharply and can hardly meet application requirements, so building a highly accurate emotion recognition model still faces enormous challenges.
One of the difficulties is how to eliminate or reduce the time effect of EEG signals and thereby improve the time universality of the emotion recognition model. It is well known that hormone levels, the external environment (such as temperature and humidity), and diet and sleep all cause differences in physiological signals, so EEG signals recorded at different times differ even under the same emotional state. Moreover, in practical applications there is inevitably a time interval between building the emotion recognition model and recognizing the emotional state, and the test data will not take part in building the model. This is especially true in special application scenarios such as recognizing the emotional state of astronauts, where the recognition model is built during the preparation phase on the ground while the emotional state is recognized during the working phase in space. Building a recognition model on the same day and immediately putting it into use is impractical.
In summary, it is highly necessary to eliminate or reduce the time effect of EEG signals and thereby improve the time robustness of the emotion recognition model. In existing research, studies on the time universality of emotion classifiers are scarce. In 2001, Picard et al. [1] tried to remove the influence of the time effect on the emotion recognition model by subtracting the calm state from the other emotional states; however, with that method neutral emotion cannot be recognized and the number of emotion types is reduced, while recognizing the neutral emotional state is also very important and indispensable, neutral emotion being an important indicator of emotional stability. In 2012, Chueh, Tung-Hung et al. [2] used multivariate analysis of variance to remove the influence of the time effect and improved classifier performance, but a problem remains: the data in the test set are not independent and are still mixed with data from other times to build the classifier, which is likewise impractical in real applications.
Summary of the Invention
The present invention provides an emotion EEG recognition method for improving the time robustness of an emotion recognition model. The invention can effectively improve the time robustness and universality of the emotion recognition model, solve the current bottleneck in emotion recognition, push the model towards application, and obtain considerable social and economic benefits, as described below.
An emotion EEG recognition method for improving the time robustness of an emotion recognition model comprises the following steps:
preprocessing the collected 64-lead EEG signals, including: re-referencing to the binaural (mastoid) average; downsampling to 500 Hz; 1-100 Hz band-pass filtering; and removing ocular artifacts with an independent component analysis algorithm;
finding the optimal discriminative frequency band of each user from the preprocessed EEG signals with the adaptive tracking algorithm for discriminative frequency bands, and computing the power spectral density of the optimal band of each lead to form an emotion feature matrix;
reducing the dimensionality of the obtained emotion feature matrix by principal component analysis to serve as the final feature matrix;
recognizing the features in the final feature matrix with a support vector machine classifier, weakening time-specific features by increasing the number of days of samples in the training set of the emotion model, improving the time robustness of the emotion model, distinguishing different emotional states, and establishing the emotion recognition model.
The method further comprises:
collecting 64-lead EEG signals of the subjects under different emotional states in different time periods.
The step of finding the optimal discriminative frequency band of each user from the preprocessed EEG signals with the adaptive band-tracking algorithm is specifically:
1) computing the time-frequency matrix of each lead with the short-time Fourier transform;
2) computing the Fisher ratio, which measures the within-class and between-class energy differences;
3) obtaining the separability weight DW(f) from the Fisher ratio; computing the DFCs by band-wise iterative selection, the number of iterations being equal to the number of bands to be obtained; and obtaining the optimal discriminative band from the obtained bands.
Obtaining the optimal discriminative band from the obtained bands is specifically:
computing the energy distribution as the frequency window moves along the frequency axis of DW(f); selecting the best center frequency F_opt^j among all candidates according to the maximum energy distribution α; computing the relative change δ_j of the corresponding maximum energy distribution α_max^j; setting a threshold δ_min and comparing δ_2 with δ_min: if δ_2 is larger than δ_min, δ_3 is compared with δ_min, and so on, until a δ_j smaller than δ_min is found, whereupon position j-1 gives the band with the best separability.
The step of reducing the dimensionality of the obtained emotion feature matrix by principal component analysis to serve as the final feature matrix is specifically:
1) standardizing the raw data to obtain the original matrix; then computing its covariance matrix; performing eigendecomposition of the covariance matrix to obtain the eigenvalue matrix and the eigenvectors;
2) computing the projection of the original matrix in the new vector space, i.e. the principal-component vector group;
3) the eigenvalue of each principal component representing the amount of information it carries, computing the cumulative contribution rate of the first k principal components;
4) selecting a preset cumulative contribution rate so that the first d principal components F_{Ni×d} are used as new data for pattern recognition.
The step of recognizing the features in the final feature matrix with a support vector machine classifier, distinguishing the different emotional states and establishing the emotion recognition model is specifically:
column-normalizing the data of each day separately to [-1, 1] to obtain the feature matrix;
building the emotion recognition model with the SVM classifier, and putting several days of data into the training set during modeling to improve the time robustness of the classifier.
The beneficial effect of the technical solution provided by the present invention is as follows: the gist of the present invention is to propose a new method for improving the time robustness of an emotion recognition model, which finds the optimal discriminative band of each user by adaptive tracking of discriminative frequency bands and weakens time-specific features by increasing the number of days of samples in the training set, so that emotion recognition can be performed accurately, stably and in real time. The invention can effectively improve the time robustness and accuracy of the emotion recognition model and can obtain considerable social and economic benefits. The preferred mode of implementation is patent transfer, technical cooperation or product development.
Brief Description of the Drawings
Fig. 1 is a flow chart of the emotion EEG recognition method for improving the time robustness of an emotion recognition model;
Fig. 2 is the 60-lead EEG electrode montage;
Fig. 3 is the schedule of the subjects' test sessions;
Fig. 4 is the flow chart of the adaptive tracking computation of discriminative frequency bands;
Fig. 5 is the flow chart of the DFCs algorithm;
Fig. 6 shows the recognition accuracy under different numbers of training days.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
To solve the problems in the background art, an embodiment of the present invention proposes a new emotion EEG recognition method for improving the time robustness of an emotion recognition model: the optimal discriminative frequency bands of each user are found with the adaptive tracking algorithm for discriminative frequency bands, the power spectral density of the optimal bands of each lead is computed to form an emotion feature matrix, the obtained feature matrix is reduced in dimensionality by principal component analysis and used as the feature matrix for the final emotion recognition, a support vector machine is used to establish the emotion recognition model, and time-specific features are weakened by increasing the number of days of samples in the training set of the emotion model, improving the time robustness of the model so that emotion recognition can be performed accurately and objectively.
The method overcomes the two problems described above: it neither reduces the number of emotion types recognized, nor lets the test-set data take part in building the emotion recognition model, thereby meeting the requirements of practical applications.
Embodiment 1
An embodiment of the present invention provides an emotion EEG recognition method for improving the time robustness of an emotion recognition model. Referring to Fig. 1, the method comprises the following steps:
101: in the data acquisition stage, 64-lead EEG signals of the subjects under different emotional states (positive, neutral, negative) are collected in different time periods;
102: the collected 64-lead EEG signals are preprocessed in four steps, including: re-referencing to the binaural (mastoid) average; downsampling to 500 Hz; 1-100 Hz band-pass filtering; and removal of ocular artifacts by independent component analysis (ICA);
103: the optimal discriminative frequency bands of each user are found from the preprocessed EEG signals with the adaptive band-tracking algorithm, and the power spectral density of the optimal bands of each lead is computed to form the emotion feature matrix;
104: the obtained emotion feature matrix is reduced in dimensionality by principal component analysis and used as the final feature matrix;
105: the features in the final feature matrix are recognized with a support vector machine classifier, time-specific features are weakened by increasing the number of days of samples in the training set of the emotion model, the time robustness of the emotion model is improved, different emotional states are distinguished, and the emotion recognition model is established.
To improve the time robustness of the emotion recognition model, the embodiment of the present invention weakens time-specific features by increasing the number of days of samples in the training set. Before feature recognition, the data of each day are first normalized separately.
In summary, the embodiment of the present invention finds the optimal discriminative frequency bands of each user with the adaptive band-tracking algorithm, computes the power spectral density of the optimal bands of each lead to form the emotion feature matrix, reduces the dimensionality of the obtained feature matrix by principal component analysis as the feature matrix for the final emotion recognition, establishes the emotion recognition model with a support vector machine, and weakens time-specific features by increasing the number of days of samples in the training set, thereby improving the time robustness of the model and performing emotion recognition accurately and objectively. The invention can effectively improve the time robustness and accuracy of the emotion recognition model.
Embodiment 2
The scheme of Embodiment 1 is described in detail below with reference to Fig. 2, Fig. 3, Fig. 4 and Fig. 5.
201: data acquisition stage.
The EEG acquisition equipment is a Neuroscan 64-channel amplifier with the Scan 4.5 acquisition system. The electrodes are placed according to the standard 10-20 system specified by the international EEG society; the montage of the 60 electrodes, excluding the EOG and reference electrodes, is shown in Fig. 2. During acquisition, the right mastoid serves as the reference electrode and the fronto-central midline of the scalp is grounded; the impedance of all electrodes is kept below 5 kΩ and the sampling rate is 1000 Hz.
Each subject performs five data-acquisition sessions within one month, with intervals of one day, three days, one week and two weeks between successive sessions. Fig. 3 is the schedule of the subjects' sessions. Each subject comes to the laboratory at the same time of day for data collection, and videos are used to induce the positive, neutral and negative emotional states.
202: data preprocessing.
The collected 64-lead EEG signals are preprocessed in four steps: re-referencing to the binaural (mastoid) average; downsampling to 500 Hz; 1-100 Hz band-pass filtering; and ICA-based removal of ocular artifacts.
The reference potential during acquisition is at the right mastoid, which makes the signal amplitude of the leads over the right hemisphere relatively low. Therefore the reference is converted first, changing it to the M1 and M2 leads at the mastoids on both sides, which facilitates subsequent processing. The sampling rate of the system is 1000 Hz, mainly to capture the rapid changes of the EEG signal, but 1000 Hz far exceeds the Nyquist rate required in theory, and an excessive sampling rate produces an excessive amount of data and lowers the efficiency of later processing. The collected data are therefore downsampled, reducing the sampling rate of the EEG from 1000 Hz to 500 Hz.
The embodiment of the present invention applies 1-100 Hz band-pass filtering to remove DC drift and high-frequency components. The collected EEG inevitably contains the influence of ocular activity (vertical and horizontal eye movements and blinks) and of myoelectric signals; the ocular signals, especially those caused by blinking, are particularly strong, and the leads over the forehead are affected most. The influence of the ocular and myoelectric signals mixed into the EEG is removed in this embodiment by independent component analysis (ICA) filtering.
The specific filtering method is well known to those skilled in the art and is not limited in this embodiment of the invention.
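For illustration only, the four preprocessing steps described above might be strung together as in the following Python sketch; the channel labels ('M1', 'M2', and 'FP1' as a frontal proxy for the ocular signal), the use of scikit-learn's FastICA, and the 0.7 correlation threshold for rejecting ocular components are assumptions of the sketch rather than details fixed by the invention.

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate
from sklearn.decomposition import FastICA

def preprocess(eeg, ch_names, fs=1000.0):
    """eeg: channels x samples array sampled at 1000 Hz."""
    # 1) Re-reference: subtract the average of the two mastoid leads (M1, M2).
    mastoid = eeg[[ch_names.index('M1'), ch_names.index('M2')], :].mean(axis=0)
    eeg = eeg - mastoid

    # 2) Downsample 1000 Hz -> 500 Hz (decimate applies an anti-alias filter).
    eeg = decimate(eeg, 2, axis=1, zero_phase=True)
    fs = fs / 2.0

    # 3) 1-100 Hz band-pass filter (4th-order Butterworth, zero phase).
    b, a = butter(4, [1.0 / (fs / 2.0), 100.0 / (fs / 2.0)], btype='band')
    eeg = filtfilt(b, a, eeg, axis=1)

    # 4) ICA-based ocular-artifact removal: decompose, zero the components that
    #    correlate strongly with a frontal lead, then reconstruct the signals.
    ica = FastICA(n_components=20, random_state=0, max_iter=1000)
    sources = ica.fit_transform(eeg.T)                    # samples x components
    frontal = eeg[ch_names.index('FP1'), :]
    corr = np.array([abs(np.corrcoef(s, frontal)[0, 1]) for s in sources.T])
    sources[:, corr > 0.7] = 0.0                          # drop ocular components
    return ica.inverse_transform(sources).T, fs
```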
203: adaptive tracking of discriminative frequency bands.
Because different users have different optimal discriminative frequency bands, the embodiment of the present invention uses adaptive tracking of discriminative frequency components (ATDFCs) to find the bands that best distinguish the different emotion types, which is important for accurate feature extraction and a higher classification accuracy. The computation procedure of the DFC adaptive tracking method is shown in Fig. 4.
1) The time-frequency matrix of each lead is computed with the short-time Fourier transform, so every lead has a discrete time-frequency matrix I_n(f, t).
2) The Fisher ratio is computed; it measures the energy difference within a class (within the same pattern, here the same emotion type) and between classes (between different patterns, here different emotion types). It is calculated as in formulas (1), (2) and (3):

$S_W(f,t)=\sum_{k=1}^{C}\sum_{n\in k}\left(I_n(f,t)-m_k(f,t)\right)^2$   (1)

$S_B(f,t)=\sum_{k=1}^{C}n_k\left(m_k(f,t)-m(f,t)\right)^2$   (2)

$F_R(f,t)=\frac{S_B(f,t)}{S_W(f,t)}$   (3)

where S_W(f,t), S_B(f,t), m_k(f,t), m(f,t) and F_R(f,t) are two-dimensional matrices; S_W(f,t) and S_B(f,t) represent the within-class and between-class differences respectively, F_R(f,t) is the Fisher ratio, m_k(f,t) is the mean time-frequency density of the k-th class, m(f,t) is the mean time-frequency density of all classes, C is the number of classes (C = 3 in this embodiment), and n_k is the number of samples in the k-th class.
3) The separability weight DW(f) is obtained from the Fisher ratio, as in formula (4):

$DW(f)=\sum_{t\in\tau}F_R(f,t)$   (4)

where τ denotes the time period over which the STFT is computed.
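As a rough illustration, the computation of I_n(f, t), the Fisher ratio and DW(f) for a single lead might be sketched as follows with numpy and scipy; equal-length trials, the 256-sample STFT window and the plain summation over the STFT time period follow the forms of formulas (1)-(4) above but are otherwise illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft

def separability_weight(trials, labels, fs=500.0, nperseg=256):
    """trials: equal-length 1-D segments of one lead; labels: emotion class per trial."""
    freqs, tf = None, []
    for x in trials:
        freqs, _, Z = stft(x, fs=fs, nperseg=nperseg)
        tf.append(np.abs(Z) ** 2)                    # time-frequency density I_n(f, t)
    tf = np.stack(tf)                                # trials x freqs x times
    labels = np.asarray(labels)

    m_all = tf.mean(axis=0)                          # m(f, t)
    S_W = np.zeros_like(m_all)
    S_B = np.zeros_like(m_all)
    for k in np.unique(labels):
        tf_k = tf[labels == k]
        m_k = tf_k.mean(axis=0)                      # m_k(f, t)
        S_W += ((tf_k - m_k) ** 2).sum(axis=0)       # within-class difference, formula (1)
        S_B += len(tf_k) * (m_k - m_all) ** 2        # between-class difference, formula (2)

    F_R = S_B / (S_W + 1e-12)                        # Fisher ratio, formula (3)
    DW = F_R.sum(axis=1)                             # separability weight, formula (4)
    return freqs, DW
```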
4) After obtaining DW(f), the DFCs are computed by band-wise iterative selection, the number of iterations being equal to the number of bands to be obtained.
The most discriminative band can be computed with the five steps Step 1 to Step 5 below, as shown in Fig. 5. The weights DW(f) within the most discriminative band are then set to zero and the second most discriminative band is computed. For example, if the most discriminative band is 9-14 Hz, the DW(f) values corresponding to 9, 10, 11, 12 and 13 Hz are set to zero and the second discriminative band is computed; this process is repeated until the required number of bands is obtained.
Step 1: the frequency range to be selected from is set to 1-100 Hz, and the sliding frequency window varies from 3 Hz to 7 Hz in steps of 1 Hz (see Fig. 5), giving five bandwidth parameters BW_j (j = 1, 2, 3, 4, 5).
Step 2: as the frequency window moves along the frequency axis of DW(f), the energy distribution α is computed according to formula (5):

$\alpha(F_i,BW_j)=\frac{\sum_{f\in[F_i-BW_j/2,\,F_i+BW_j/2]}DW(f)}{\sum_{f}DW(f)}$   (5)

where F_i is the center frequency of the i-th band as the window moves along the frequency axis. For example, when the window width is 3 Hz, 97 bands are obtained: 1-4 Hz, 2-5 Hz, 3-6 Hz, 4-7 Hz, ..., 97-100 Hz.
Step 3: according to the maximum energy distribution α, the best center frequency F_opt^j is selected among all F_i, as in formula (6):

$F_{opt}^{j}=\arg\max_{F_i}\alpha(F_i,BW_j)$   (6)

One F_opt^j is obtained for every BW_j, so every j corresponds to an optimal center frequency F_opt^j and an optimal energy distribution α_max^j.
Step 4: to compare the resolving power of each BW_j, the relative change of α_max^j, j = 2, 3, 4, 5, is computed with formula (7):

$\delta_j=\frac{\alpha_{max}^{j}-\alpha_{max}^{j-1}}{\alpha_{max}^{j-1}}$   (7)

Step 5: after computing δ_j, a threshold δ_min is set. Experiments show that for different thresholds, e.g. 10%, 20%, 30%, 40%, ..., the smaller the threshold, the more the algorithm tends to select the band with the 3 Hz frequency window. δ_2 is compared with δ_min; if δ_2 is larger than δ_min, δ_3 is compared with δ_min, and so on, until a δ_j smaller than δ_min is found; position j-1 then gives the most discriminative band.
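Steps 1 to 5 might be sketched as below; the 1 Hz sliding step, the tie handling of the maximum and the default threshold δ_min = 0.1 are illustrative assumptions, not values prescribed by the invention.

```python
import numpy as np

def select_bands(freqs, DW, n_bands=3, widths=(3, 4, 5, 6, 7), delta_min=0.1):
    """freqs, DW: output of the separability-weight step; returns n_bands (low, high) pairs."""
    DW = np.asarray(DW, dtype=float).copy()
    bands = []
    for _ in range(n_bands):
        total = DW.sum()
        best = []                                    # per bandwidth: (alpha, low, high)
        for w in widths:                             # Step 1: BW_j = 3..7 Hz
            alphas = []
            for lo in np.arange(1.0, 100.0 - w + 1e-9, 1.0):
                in_band = (freqs >= lo) & (freqs <= lo + w)
                alphas.append((DW[in_band].sum() / total, lo, lo + w))   # formula (5)
            best.append(max(alphas))                 # Step 3 / formula (6): maximum alpha
        chosen = best[0]
        for j in range(1, len(widths)):              # Steps 4-5: relative change delta_j
            delta_j = (best[j][0] - best[j - 1][0]) / best[j - 1][0]     # formula (7)
            if delta_j < delta_min:
                chosen = best[j - 1]                 # position j-1 is the chosen band
                break
            chosen = best[j]
        _, lo, hi = chosen
        bands.append((lo, hi))
        DW[(freqs >= lo) & (freqs < hi)] = 0.0       # zero DW inside the chosen band
    return bands
```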
The embodiment of the present invention selects the power spectrum values of the first, second and third discriminative bands of every lead to build the feature matrix P_{Ni×180} of each day, where Ni is the number of samples on the i-th day and 60 leads × 3 bands = 180 feature dimensions:

$P_{N_i\times 180}=(P_1,P_2,\ldots,P_{180})$   (8)
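One possible way to assemble this daily feature matrix is sketched below, with Welch's method standing in as the power-spectral-density estimator; the nperseg value and the averaging of the PSD inside each band are assumptions of the sketch, and `band_table[ch]` is a hypothetical per-lead list of the three (low, high) bands returned by the band-selection step.

```python
import numpy as np
from scipy.signal import welch

def feature_matrix(trials, band_table, fs=500.0):
    """trials: (n_trials, 60, n_samples) array; band_table[ch]: three (low, high) bands per lead."""
    rows = []
    for x in trials:
        feats = []
        for ch in range(x.shape[0]):
            f, pxx = welch(x[ch], fs=fs, nperseg=512)
            for lo, hi in band_table[ch]:
                in_band = (f >= lo) & (f <= hi)
                feats.append(pxx[in_band].mean())    # PSD of the discriminative band
        rows.append(feats)
    return np.asarray(rows)                          # Ni x 180 (60 leads x 3 bands)
```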
204: dimensionality reduction by principal component analysis.
In practical use, the information carried by the individual parameters overlaps and is correlated to a certain extent. Using them directly for pattern recognition would over-fit the model parameters, lowering the accuracy and reliability of classification, and the excessive amount of data would slow classification down. Before pattern classification, the embodiment of the present invention therefore first reduces the dimensionality of the feature vectors obtained for each day with PCA.
Based on the principle of variance maximization, PCA represents the rows (or columns) of the original data matrix with a set of linearly independent, mutually orthogonal new vectors, so as to compress the number of variables, remove redundant information and preserve as much valid information as possible. The original vector group is (P_1, P_2, ..., P_180) and the principal-component vector group is written (F_1, F_2, ..., F_m), where usually m is smaller than 180. The relationship between the principal components and the original vector group is:

$F_k=\alpha_{k,1}P_1+\alpha_{k,2}P_2+\cdots+\alpha_{k,180}P_{180},\quad k=1,\ldots,m$   (9)

F_1 carries the most information and has the largest variance and is called the first principal component; F_2, ..., F_m carry successively less information and are called the second, ..., m-th principal components. The PCA procedure can therefore be regarded as the determination of the weight coefficients α_{k,h} (k = 1, ..., m; h = 1, ..., 180).
In the embodiment of the present invention, the Ni samples obtained on the i-th day (i = 1, 2, 3, 4, 5) can be represented by the matrix

$P_{N_i\times 180}=\left[P_{b,h}\right]_{N_i\times 180}$   (10)

where P_{b,h} is the h-th feature of the b-th sample.
The solution procedure of PCA-based feature dimensionality reduction is as follows:
1) The raw data P_{Ni×180} are standardized: each element has the mean of its column subtracted and is divided by the standard deviation of its column, so that every variable has mean 0 and variance 1, giving the matrix P*_{Ni×180}:

$P^{*}_{N_i\times 180}=\left[y_{b,h}\right]_{N_i\times 180},\quad b=1,2,\ldots,N_i;\ h=1,2,\ldots,180$   (11)

$y_{b,h}=\frac{P_{b,h}-\bar{P}_h}{s_h}$   (12)

where $\bar{P}_h$ and $s_h$ are the mean and the standard deviation of the h-th column.
2) Its covariance matrix C_{180×180} is then computed; the covariance between any two columns of P*_{Ni×180} gives one entry of the covariance matrix:

$C_{180\times 180}=\frac{1}{N_i-1}\left(P^{*}_{N_i\times 180}\right)'P^{*}_{N_i\times 180}$   (13)

3) The covariance matrix C_{180×180} is eigendecomposed to obtain the eigenvalue matrix Λ_{180×180} and the eigenvectors U_{180×180}:

$C_{180\times 180}=U_{180\times 180}\,\Lambda_{180\times 180}\,U_{180\times 180}'$   (14)

where the eigenvectors U_{180×180} serve as the coordinate axes of the principal components and form a new vector space,

$\Lambda_{180\times 180}=\mathrm{diag}\left(\lambda_1,\lambda_2,\ldots,\lambda_{180}\right)$

where the magnitude of the eigenvalue λ_r (r = 1, 2, ..., 180) represents the amount of information carried by the r-th principal component, and U_{180×180}' is the transpose of U_{180×180}.
4) The projection of the raw data P_{Ni×180} in the new vector space, i.e. the principal-component vector group F_{Ni×180}, is computed:

$F_{N_i\times 180}=P_{N_i\times 180}U_{180\times 180}$   (15)

5) The cumulative contribution rate is computed. The eigenvalue of each principal component represents the amount of information it carries. The cumulative contribution rate of the first k principal components (k = 1, ..., 180) is

$\frac{\sum_{i=1}^{k}\lambda_i}{\sum_{i=1}^{180}\lambda_i}\times 100\%$   (16)

where λ_i is the i-th eigenvalue obtained.
6) A preset cumulative contribution rate is selected so that the first d principal components F_{Ni×d} are used as new data for pattern recognition (d < 180).
For example, suppose seven principal components are obtained in total, the contribution rate of the first principal component F1 is 48%, that of F2 is 32%, that of F3 is 15%, and F4, F5, F6 and F7 together contribute 5% (the seven principal components contribute 100% in total). The cumulative contribution rate of the first three principal components (F1, F2, F3) then reaches 95%, i.e. the first three principal components carry 95% of the information of the seven; these three principal components are therefore selected as the new data for pattern recognition, which reduces the dimensionality of the feature matrix while preserving the amount of information.
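The PCA procedure of formulas (11)-(16) can be written compactly in numpy as in the sketch below; the 95% cumulative-contribution threshold is taken from the example above, and projecting the standardized (rather than raw) matrix is an assumption of this sketch.

```python
import numpy as np

def pca_reduce(P, threshold=0.95):
    """P: Ni x 180 daily feature matrix; returns the Ni x d projection and the kept axes."""
    # (11)-(12): column-wise standardization
    P_std = (P - P.mean(axis=0)) / P.std(axis=0, ddof=1)
    # (13): covariance matrix of the standardized data
    C = np.cov(P_std, rowvar=False)
    # (14): eigendecomposition, eigenvalues sorted in descending order
    eigvals, U = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    eigvals, U = eigvals[order], U[:, order]
    # (16): cumulative contribution rate of the first k components
    contrib = np.cumsum(eigvals) / eigvals.sum()
    d = int(np.searchsorted(contrib, threshold)) + 1
    # (15): projection onto the new axes, keeping the first d components
    F = P_std @ U[:, :d]
    return F, U[:, :d]
```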
205: establishment of the emotion recognition model.
After feature dimensionality reduction, a support vector machine (SVM) [3] is used to establish the emotion recognition model and recognize the user's current emotional state. In the pattern recognition stage, part of the samples are used to build the classifier and are called the training set; the remaining samples are used to test the classifier and are called the test set.
The embodiment of the present invention improves the time robustness of the emotion recognition model primarily by increasing the number of days of samples in the training set. In the pattern recognition stage, the data of four days are therefore used to train the model and the samples of the remaining day form the test set. This helps the training set capture emotion-related features while weakening time-specific features.
Before the classifier is built, the data of each day are column-normalized separately to [-1, 1], giving the feature matrix PP_{Ni×d} (d being the dimension of the feature matrix after dimensionality reduction):

$PP_{i,j}=(y_{max}-y_{min})\cdot\frac{F_{i,j}-F_j^{min}}{F_j^{max}-F_j^{min}}+y_{min}$

where y_max = 1 and y_min = -1; F_j^min is the minimum of the j-th column of the dimensionality-reduced feature matrix F_{Ni×d}, and likewise F_j^max is the maximum of the j-th column of F_{Ni×d}. After normalization, the emotion recognition model is built with the SVM classifier.
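A compact sketch of this modeling stage is given below, with scikit-learn's SVC standing in for the SVM classifier; the RBF kernel, C = 1.0 and the small constant that guards against division by zero are assumptions of the sketch, and `days` is a hypothetical list of five (features, labels) pairs, one per recording day.

```python
import numpy as np
from sklearn.svm import SVC

def normalize_per_day(F):
    """Column-normalize one day's feature matrix to [-1, 1]."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    return 2.0 * (F - lo) / (hi - lo + 1e-12) - 1.0

def train_and_test(days, test_day=4):
    """days: list of five (features, labels) pairs; four days train, one day tests."""
    train = [(normalize_per_day(F), y) for i, (F, y) in enumerate(days) if i != test_day]
    X_tr = np.vstack([F for F, _ in train])
    y_tr = np.concatenate([y for _, y in train])
    X_te = normalize_per_day(days[test_day][0])
    y_te = days[test_day][1]
    clf = SVC(kernel='rbf', C=1.0).fit(X_tr, y_tr)   # emotion recognition model
    return clf.score(X_te, y_te)                     # accuracy on the held-out day
```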
In summary, the embodiment of the present invention finds the optimal discriminative bands of each user with the adaptive band-tracking method, reduces the feature dimensionality of the feature matrix by principal component analysis, and weakens time-specific features by increasing the number of days of samples in the training set, so that emotion recognition can be performed accurately, stably and in real time. The invention can effectively improve the time robustness and accuracy of the emotion recognition model.
Embodiment 3
The feasibility of the schemes of Embodiments 1 and 2 is verified below with reference to Fig. 6.
Fig. 6 shows the recognition accuracy of the nine subjects under different numbers of training-sample days. The horizontal axis is the number of days N of samples in the training set (N = 1, 2, 3, 4), i.e. the samples of N days are used for training and the samples of the remaining 5-N days for testing; the vertical axis is the resulting average recognition accuracy under the N-day condition. Fig. 6 shows that the accuracy rises as the number of days in the training set increases and is positively correlated with the number of training-sample days; training the classifier with four days of samples instead of one raises the accuracy by roughly 10%, a statistically significant difference (p < 0.01). This verifies the effectiveness of the method proposed by the present invention.
The embodiment of the present invention finds the optimal discriminative bands of each user with the adaptive band-tracking method, reduces the feature dimensionality by principal component analysis, and increases the number of days of samples in the training set of the emotion recognition model, thereby strengthening emotion-related features, weakening time-specific features and improving the time robustness of the emotion recognition model. Fig. 6 shows that increasing the number of days of samples in the training set can significantly improve the time robustness of the classifier. The invention can effectively improve the time robustness and accuracy of the emotion recognition model and provides technical support for moving emotion recognition from the laboratory towards application.
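An evaluation loop in the spirit of Fig. 6 might look like the sketch below: for each N from 1 to 4, the accuracy is averaged over every choice of N training days and every remaining test day. The per-day [-1, 1] scaling is repeated from the previous sketch, and the averaging scheme itself is an assumption, since the text does not spell out how the N-day conditions are aggregated.

```python
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def _norm(F):
    """Per-day column scaling to [-1, 1], as in the modeling sketch above."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    return 2.0 * (F - lo) / (hi - lo + 1e-12) - 1.0

def accuracy_vs_training_days(days):
    """days: list of five (features, labels) pairs; returns {N: mean accuracy}."""
    results = {}
    for n in range(1, 5):
        scores = []
        for train_ids in combinations(range(len(days)), n):
            X_tr = np.vstack([_norm(days[i][0]) for i in train_ids])
            y_tr = np.concatenate([days[i][1] for i in train_ids])
            clf = SVC(kernel='rbf', C=1.0).fit(X_tr, y_tr)
            for t in range(len(days)):
                if t not in train_ids:
                    scores.append(clf.score(_norm(days[t][0]), days[t][1]))
        results[n] = float(np.mean(scores))          # average accuracy under the N-day condition
    return results
```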
References
[1]PICARD R W,VYZAS E,HEALEY J.Toward machine emotional intelligence:Analysis of affective physiological state[J].Pattern Analysis and Machine Intelligence,IEEE Transactions on,2001,23(10):1175-91.
[2]CHUEH T-H,CHEN T-B,LU H H-S,et al.Statistical Prediction of Emotional States by Physiological Signals with Manova and Machine Learning[J].International Journal of Pattern Recognition and Artificial Intelligence,2012,26(04):
[3]HIDALGO-MUÑOZ A R,LÓPEZ M M,SANTOS I M,et al.Application of SVM-RFE on EEG signals for detecting the most relevant scalp regions linked to affective valence processing[J].Expert Systems with Applications,2013,40(6):2102–8.
Those skilled in the art will understand that the drawings are only schematic illustrations of a preferred embodiment, and that the serial numbers of the above embodiments of the invention are for description only and do not indicate the superiority or inferiority of the embodiments.
The above description is only of preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.

Claims (6)

  1. An emotion EEG recognition method for improving the time robustness of an emotion recognition model, characterized in that the emotion EEG recognition method comprises the following steps:
    preprocessing the collected 64-lead EEG signals, including: re-referencing to the binaural (mastoid) average; downsampling to 500 Hz; 1-100 Hz band-pass filtering; and removing ocular artifacts with an independent component analysis algorithm;
    finding the optimal discriminative frequency band of each user from the preprocessed EEG signals with the adaptive tracking algorithm for discriminative frequency bands, and computing the power spectral density of the optimal band of each lead to form an emotion feature matrix;
    reducing the dimensionality of the obtained emotion feature matrix by principal component analysis to serve as the final feature matrix;
    recognizing the features in the final feature matrix with a support vector machine classifier, weakening time-specific features by increasing the number of days of samples in the training set of the emotion model, improving the time robustness of the emotion model, distinguishing different emotional states, and establishing the emotion recognition model.
  2. The emotion EEG recognition method for improving the time robustness of an emotion recognition model according to claim 1, characterized in that the method further comprises:
    collecting 64-lead EEG signals of the subjects under different emotional states in different time periods.
  3. The emotion EEG recognition method for improving the time robustness of an emotion recognition model according to claim 1, characterized in that the step of finding the optimal discriminative frequency band of each user from the preprocessed EEG signals with the adaptive band-tracking algorithm is specifically:
    1) computing the time-frequency matrix of each lead with the short-time Fourier transform;
    2) computing the Fisher ratio, which measures the within-class and between-class energy differences;
    3) obtaining the separability weight DW(f) from the Fisher ratio; computing the DFCs by band-wise iterative selection, the number of iterations being equal to the number of bands to be obtained; and obtaining the optimal discriminative band from the obtained bands.
  4. The emotion EEG recognition method for improving the time robustness of an emotion recognition model according to claim 1, characterized in that obtaining the optimal discriminative band from the obtained bands is specifically:
    computing the energy distribution as the frequency window moves along the frequency axis of DW(f); selecting the best center frequency F_opt^j among all the candidate bands according to the maximum energy distribution α;
    computing the relative change δ_j of the corresponding maximum energy distribution α_max^j;
    setting a threshold δ_min and comparing δ_2 with δ_min: if δ_2 is larger than δ_min, δ_3 is compared with δ_min, and so on, until a δ_j smaller than δ_min is found, whereupon position j-1 gives the band with the best separability.
  5. The emotion EEG recognition method for improving the time robustness of an emotion recognition model according to claim 1, characterized in that the step of reducing the dimensionality of the obtained emotion feature matrix by principal component analysis to serve as the final feature matrix is specifically:
    1) standardizing the raw data to obtain the original matrix; then computing its covariance matrix; performing eigendecomposition of the covariance matrix to obtain the eigenvalue matrix and the eigenvectors;
    2) computing the projection of the original matrix in the new vector space, i.e. the principal-component vector group;
    3) the eigenvalue of each principal component representing the amount of information it carries, computing the cumulative contribution rate of the first k principal components;
    4) selecting a preset cumulative contribution rate so that the first d principal components F_{Ni×d} are used as new data for pattern recognition.
  6. The emotion EEG recognition method for improving the time robustness of an emotion recognition model according to claim 1, characterized in that the step of recognizing the features in the final feature matrix with a support vector machine classifier, weakening time-specific features by increasing the number of days of samples in the training set of the emotion model, improving the time robustness of the emotion model, distinguishing different emotional states, and establishing the emotion recognition model is specifically:
    column-normalizing the data of each day separately to [-1, 1] to obtain the feature matrix;
    building the emotion recognition model with the SVM classifier; during modeling, putting several days of data into the training set to improve the time robustness of the classifier.
PCT/CN2016/098165 2016-07-18 2016-09-06 Emotion EEG recognition method for improving the time robustness of an emotion recognition model WO2018014436A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610574108.0 2016-07-18
CN201610574108.0A CN106108894A (zh) 2016-07-18 2016-07-18 Emotion EEG recognition method for improving the time robustness of an emotion recognition model

Publications (1)

Publication Number Publication Date
WO2018014436A1 true WO2018014436A1 (zh) 2018-01-25

Family

ID=57289670

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/098165 WO2018014436A1 (zh) 2016-07-18 2016-09-06 一种提高情绪识别模型时间鲁棒性的情绪脑电识别方法

Country Status (2)

Country Link
CN (1) CN106108894A (zh)
WO (1) WO2018014436A1 (zh)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106725452A (zh) * 2016-11-29 2017-05-31 太原理工大学 基于情感诱发的脑电信号识别方法
CN106805969B (zh) * 2016-12-20 2019-12-24 广州视源电子科技股份有限公司 基于卡尔曼滤波和小波变换的脑电放松度识别方法及装置
CN106974648B (zh) * 2017-03-27 2020-02-14 广州视源电子科技股份有限公司 基于时域及频域空间的脑电放松度识别方法及装置
CN107411737A (zh) * 2017-04-18 2017-12-01 天津大学 一种基于静息脑电相似性的情绪跨时间识别方法
CN107411738A (zh) * 2017-04-18 2017-12-01 天津大学 一种基于静息脑电相似性的情绪跨个体识别方法
CN107463792B (zh) * 2017-09-21 2023-11-21 北京大智商医疗器械有限公司 神经反馈装置、系统及方法
CN109598180A (zh) * 2017-09-30 2019-04-09 深圳市岩尚科技有限公司 光电容积脉搏波的质量评估方法
CN108042145A (zh) * 2017-11-28 2018-05-18 广州视源电子科技股份有限公司 情绪状态识别方法和系统、情绪状态识别设备
CN109009101B (zh) * 2018-07-27 2021-04-06 杭州电子科技大学 一种脑电信号自适应实时去噪方法
CN109255309B (zh) * 2018-08-28 2021-03-23 中国人民解放军战略支援部队信息工程大学 面向遥感图像目标检测的脑电与眼动融合方法及装置
CN110070105B (zh) * 2019-03-25 2021-03-02 中国科学院自动化研究所 基于元学习实例快速筛选的脑电情绪识别方法、系统
CN110390272B (zh) * 2019-06-30 2023-07-18 天津大学 一种基于加权主成分分析的eeg信号特征降维方法


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101488189A (zh) * 2009-02-04 2009-07-22 天津大学 基于独立分量自动聚类处理的脑电信号处理方法
CN102499677A (zh) * 2011-12-16 2012-06-20 天津大学 基于脑电非线性特征的情绪状态识别方法
CN105395192A (zh) * 2015-12-09 2016-03-16 恒爱高科(北京)科技有限公司 一种基于脑电的可穿戴情感识别方法和系统

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZENG, HONGMEI: "Extraction and Classification of EEG Features Evoked by Emotional Pictures", ELECTRONIC MEDICINE & PUBLIC HEALTH, CHINA MASTER'S THESES FULL-TEXT DATABASE, 31 July 2012 (2012-07-31), ISSN: 1674-0246 *

Cited By (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200245890A1 (en) * 2017-07-24 2020-08-06 Thought Beanie Limited Biofeedback system and wearable device
CN109117787A (zh) * 2018-08-10 2019-01-01 太原理工大学 一种情感脑电信号识别方法及系统
CN109948516A (zh) * 2019-03-18 2019-06-28 湖南大学 一种基于能量最大化与核svm的复合电能质量扰动识别方法及方法
CN109948516B (zh) * 2019-03-18 2022-12-02 湖南大学 一种基于能量最大化与核svm的复合电能质量扰动识别方法及方法
CN111312215A (zh) * 2019-12-20 2020-06-19 台州学院 一种基于卷积神经网络和双耳表征的自然语音情感识别方法
CN111312215B (zh) * 2019-12-20 2023-05-30 台州学院 一种基于卷积神经网络和双耳表征的自然语音情感识别方法
CN111134667A (zh) * 2020-01-19 2020-05-12 中国人民解放军战略支援部队信息工程大学 基于脑电信号的时间迁移情绪识别方法及系统
CN111134667B (zh) * 2020-01-19 2024-01-26 中国人民解放军战略支援部队信息工程大学 基于脑电信号的时间迁移情绪识别方法及系统
CN111528866A (zh) * 2020-04-30 2020-08-14 北京脑陆科技有限公司 一种基于LightGBM模型的EEG信号情绪识别方法
CN111832438A (zh) * 2020-06-27 2020-10-27 西安电子科技大学 一种面向情感识别的脑电信号通道选择方法、系统及应用
CN111832438B (zh) * 2020-06-27 2024-02-06 西安电子科技大学 一种面向情感识别的脑电信号通道选择方法、系统及应用
CN112101152A (zh) * 2020-09-01 2020-12-18 西安电子科技大学 一种脑电情感识别方法、系统、计算机设备、可穿戴设备
CN112101152B (zh) * 2020-09-01 2024-02-02 西安电子科技大学 一种脑电情感识别方法、系统、计算机设备、可穿戴设备
CN112132328A (zh) * 2020-09-04 2020-12-25 国网上海市电力公司 一种光伏输出功率超短期局域情绪重构神经网络预测方法
CN112263252A (zh) * 2020-09-28 2021-01-26 贵州大学 基于hrv特征和三层svr的pad情绪维度预测方法
CN112263252B (zh) * 2020-09-28 2024-05-03 贵州大学 基于hrv特征和三层svr的pad情绪维度预测方法
CN114662524B (zh) * 2020-12-22 2024-05-31 上海零唯一思科技有限公司 基于脑电信号的即插即用式域适应方法
CN114662524A (zh) * 2020-12-22 2022-06-24 上海交通大学 基于脑电信号的即插即用式域适应方法
CN113128552A (zh) * 2021-03-02 2021-07-16 杭州电子科技大学 一种基于深度可分离因果图卷积网络的脑电情绪识别方法
CN113128552B (zh) * 2021-03-02 2024-02-02 杭州电子科技大学 一种基于深度可分离因果图卷积网络的脑电情绪识别方法
CN114779930A (zh) * 2021-04-14 2022-07-22 三峡大学 基于一对多支持向量机的vr用户触觉体验的情绪识别方法
CN114779930B (zh) * 2021-04-14 2024-05-14 三峡大学 基于一对多支持向量机的vr用户触觉体验的情绪识别方法
CN113688673A (zh) * 2021-07-15 2021-11-23 电子科技大学 在线场景下心电信号的跨用户情感识别方法
CN113688673B (zh) * 2021-07-15 2023-05-30 电子科技大学 在线场景下心电信号的跨用户情感识别方法
CN113554110B (zh) * 2021-07-30 2024-03-01 合肥工业大学 一种基于二值胶囊网络的脑电情绪识别方法
CN113554110A (zh) * 2021-07-30 2021-10-26 合肥工业大学 一种基于二值胶囊网络的脑电情绪识别方法
CN114218986A (zh) * 2021-12-10 2022-03-22 中国航空综合技术研究所 基于eeg脑电信号数据的状态分类方法
CN114218986B (zh) * 2021-12-10 2024-05-07 中国航空综合技术研究所 基于eeg脑电信号数据的状态分类方法
CN114190944B (zh) * 2021-12-23 2023-08-22 上海交通大学 基于脑电信号的鲁棒情绪识别方法
CN114190944A (zh) * 2021-12-23 2022-03-18 上海交通大学 基于脑电信号的鲁棒情绪识别方法
CN114638252A (zh) * 2022-02-11 2022-06-17 南京邮电大学 一种基于脑电的身份识别方法
CN114578963B (zh) * 2022-02-23 2024-04-05 华东理工大学 一种基于特征可视化和多模态融合的脑电身份识别方法
CN114578963A (zh) * 2022-02-23 2022-06-03 华东理工大学 一种基于特征可视化和多模态融合的脑电身份识别方法
CN114947852B (zh) * 2022-06-14 2023-01-10 华南师范大学 一种多模态情感识别方法、装置、设备及存储介质
CN114947852A (zh) * 2022-06-14 2022-08-30 华南师范大学 一种多模态情感识别方法、装置、设备及存储介质
CN116369949B (zh) * 2023-06-06 2023-09-15 南昌航空大学 一种脑电信号分级情绪识别方法、系统、电子设备及介质
CN116369949A (zh) * 2023-06-06 2023-07-04 南昌航空大学 一种脑电信号分级情绪识别方法、系统、电子设备及介质
CN118141377A (zh) * 2024-05-10 2024-06-07 吉林大学 患者的负性情绪监测系统及方法

Also Published As

Publication number Publication date
CN106108894A (zh) 2016-11-16

Similar Documents

Publication Publication Date Title
WO2018014436A1 (zh) Emotion EEG recognition method for improving the time robustness of an emotion recognition model
Abo-Zahhad et al. A new multi-level approach to EEG based human authentication using eye blinking
Abo-Zahhad et al. A new EEG acquisition protocol for biometric identification using eye blinking signals
KR102221264B1 (ko) 인간 감정 인식을 위한 딥 생리적 정서 네트워크를 이용한 인간 감정 추정 방법 및 그 시스템
US11221672B2 (en) Asymmetric EEG-based coding and decoding method for brain-computer interfaces
Singh et al. Small sample motor imagery classification using regularized Riemannian features
CN111265212A (zh) 一种运动想象脑电信号分类方法及闭环训练测试交互系统
CN109993068A (zh) 一种基于心率和面部特征的非接触式的人类情感识别方法
Thomas et al. Utilizing individual alpha frequency and delta band power in EEG based biometric recognition
Temiyasathit Increase performance of four-class classification for motor-imagery based brain-computer interface
CN108470182B (zh) 一种用于非对称脑电特征增强与识别的脑-机接口方法
Ramos-Aguilar et al. Analysis of EEG signal processing techniques based on spectrograms
Song et al. Adaptive common spatial pattern for single-trial EEG classification in multisubject BCI
Miao et al. Automated CCA-MWF algorithm for unsupervised identification and removal of EOG artifacts from EEG
Lao et al. Learning prototype spatial filters for subject-independent SSVEP-based brain-computer interface
Katyal et al. EEG signal and video analysis based depression indication
Kaewwit et al. High accuracy EEG biometrics identification using ICA and AR model
Zhang et al. EEG recognition of motor imagery based on SVM ensemble
CN112869743B (zh) 一种考虑认知分心的运动起始意图神经解析方法
Yang et al. Hybrid EEG-EOG system for intelligent prosthesis control based on common spatial pattern algorithm
Singh et al. Motor imagery classification based on subject to subject transfer in Riemannian manifold
Li et al. Classification of imaginary movements in ECoG
Farooq et al. Motor imagery based multivariate EEG signal classification for brain controlled interface applications
Tang et al. L1-norm based discriminative spatial pattern for single-trial EEG classification
Hendrawan et al. Identification of optimum segment in single channel EEG biometric system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16909363

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 11.06.2019)

122 Ep: pct application non-entry in european phase

Ref document number: 16909363

Country of ref document: EP

Kind code of ref document: A1