CN113017628B - Conscious emotion recognition method and system integrating ERP components and nonlinear features - Google Patents
Conscious emotion recognition method and system integrating ERP components and nonlinear features
- Publication number
- CN113017628B CN202110156080.XA CN202110156080A
- Authority
- CN
- China
- Prior art keywords
- features
- erp
- feature
- emotion
- conscious
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7225—Details of analogue processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Artificial Intelligence (AREA)
- Psychiatry (AREA)
- Animal Behavior & Ethology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Physiology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Child & Adolescent Psychology (AREA)
- Developmental Disabilities (AREA)
- Educational Technology (AREA)
- Hospice & Palliative Care (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Power Engineering (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
Description
Technical Field
The present application relates to the technical field of emotion recognition, and in particular to a conscious emotion recognition method and system that integrates ERP components and nonlinear features.
Background
The statements in this section merely provide background information related to the present application and do not necessarily constitute prior art.
The quality of our emotions shapes our quality of life and can even affect our health. Positive emotions fill us with hope and thereby promote our daily life, work and study; negative emotions do the opposite and may even cause serious mental illness and cost people their lives. With the development of brain and cognitive neuroscience, electroencephalogram (EEG) signals have attracted attention because they are non-invasive, objective, hard to disguise, inexpensive to record and high in temporal resolution, and because they carry information that truly reflects human emotional states. EEG-based emotion recognition has therefore gradually become one of the research hotspots in the field of human-computer interaction.
How to automatically extract features that reflect the essence of EEG signals and how to improve the accuracy of emotion recognition are the primary concerns of researchers and long-standing research goals. Time-frequency analysis can represent the instantaneous changes of a signal in both the time and frequency domains and is a powerful tool for processing non-stationary and nonlinear time series. In order to effectively identify different emotions, the present application combines time-frequency analysis with traditional feature extraction methods and proposes a conscious emotion recognition method that fuses ERP components with the nonlinear feature MMSE.
In the process of realizing the present disclosure, the inventors found the following technical problems in the prior art:
With the development and refinement of feature extraction methods, a single feature extraction method can no longer meet current research needs, and researchers have begun to propose combination strategies; for example, combining time-frequency domain methods with nonlinear methods can achieve a better recognition effect. Cognitive neuroscience research has found that not all frequency bands are effective for emotion recognition and that different EEG frequency bands are closely related to different brain activities. The selection of EEG frequency bands has mostly been based on previous experience. In the band extraction stage, the discrete wavelet transform (DWT) does not decompose high-frequency signals finely enough, so emotional information is easily lost. Compared with DWT, wavelet packet decomposition (WPD) can achieve a more detailed signal decomposition and a better resolution of the high-frequency part. Studies have shown that hybrid time-frequency analysis methods can effectively analyze physiological signals such as EEG, electrooculogram, electrocardiogram and galvanic skin response signals. Kabir and Das et al. showed that combining empirical mode decomposition (EMD) with DWT achieves higher classification accuracy than EMD or DWT alone. Rahman et al. found that combining variational mode decomposition (VMD) with DWT yields greater time-frequency resolution than using VMD or DWT alone and is more likely to capture the nonlinear brain characteristics reflected in EEG signals.
Psychological studies have shown that ERP components such as N200, P300, N300 and LPC differ significantly between conscious and unconscious processing of emotion, but ERP components have not been used as features for emotion recognition; they are identified and analyzed only by psychology professionals, which is time-consuming and labor-intensive, so automatic ERP component identification and extraction is particularly important. Accordingly, an automatic ERP feature matching and extraction method based on the Shapelet technique is proposed.
In summary, it is of great significance to propose a new feature extraction method for conscious and unconscious emotion recognition.
Summary of the Invention
In order to overcome the deficiencies of the prior art, the present application provides a conscious emotion recognition method and system that integrates ERP components and nonlinear features. First, the raw EEG signal is preprocessed; the preprocessing steps include artifact removal, superposition averaging and channel selection. Second, an ERP feature matching and extraction method based on the Shapelet technique is proposed to automatically extract ERP features such as N200, P300 and N300. Third, an integrated time-frequency analysis method combining variational mode decomposition (VMD) and wavelet packet decomposition (WPD) is used to obtain a better time-frequency resolution and the nonlinear characteristics of the EEG signal; the nonlinear feature MMSE is then extracted and fused with the ERP features to form an emotion feature vector. Finally, the fused feature vector is fed into a random forest classifier to recognize conscious and unconscious emotions.
In a first aspect, the present application provides a conscious emotion recognition method integrating ERP components and nonlinear features.
The conscious emotion recognition method integrating ERP components and nonlinear features includes:
acquiring a raw EEG signal and preprocessing the raw EEG signal;
extracting ERP features from the preprocessed EEG signal;
extracting multi-scale sample entropy (MMSE) features from the preprocessed EEG signal;
fusing the ERP features and the MMSE features to obtain fused features;
inputting the fused features into a trained classifier and outputting an emotion recognition result.
In a second aspect, the present application provides a conscious emotion recognition system integrating ERP components and nonlinear features.
The conscious emotion recognition system integrating ERP components and nonlinear features includes:
an acquisition module configured to acquire a raw EEG signal and preprocess the raw EEG signal;
an ERP feature extraction module configured to extract ERP features from the preprocessed EEG signal;
a nonlinear feature extraction module configured to extract multi-scale sample entropy (MMSE) features from the preprocessed EEG signal;
a feature fusion module configured to fuse the ERP features and the MMSE features to obtain fused features;
an emotion recognition module configured to input the fused features into a trained classifier and output an emotion recognition result.
In a third aspect, the present application further provides an electronic device including one or more processors, one or more memories, and one or more computer programs, wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory so that the electronic device performs the method of the first aspect.
In a fourth aspect, the present application further provides a computer-readable storage medium for storing computer instructions that, when executed by a processor, carry out the method of the first aspect.
In a fifth aspect, the present application further provides a computer program (product) comprising a computer program that, when run on one or more processors, implements the method of any one of the foregoing first aspects.
Compared with the prior art, the beneficial effects of the present application are as follows:
The disclosed method consists of five parts: data preprocessing, ERP feature extraction, extraction of the nonlinear feature MMSE, feature vector fusion, and classifier-based classification and prediction. Analysis shows that not all EEG frequency bands are suitable for emotion recognition research; the present disclosure therefore extracts the EEG bands most relevant to emotion, namely the β and γ bands, and reconstructs them into a new emotion band. Because manual extraction of ERP features is time-consuming, labor-intensive and difficult, an automatic ERP feature matching and extraction method based on the Shapelet technique is proposed to extract ERP features related to conscious and unconscious emotions such as N200, P300 and N300. Because integrated time-frequency analysis can better capture nonlinear features and thereby achieve a better recognition effect, a nonlinear feature extraction method based on VMD-WPD is proposed, that is, the nonlinear feature MMSE is extracted on the basis of VMD-WPD.
According to the artifact removal criterion, artifacts are removed from the raw EEG signal, superposition averaging is performed, and the channels related to conscious emotion are selected to obtain the preprocessed EEG signal of each subject. ERP feature extraction applies Shapelet processing to the preprocessed EEG data to obtain ERP features related to conscious and unconscious emotions such as N200, P300 and N300. To extract the nonlinear feature MMSE, the preprocessed EEG data are first decomposed by VMD into variational mode components (VMFs); the resulting VMFs are then decomposed and reconstructed by wavelet packet decomposition to obtain the emotion band VMFβ+γ, whose MMSE is computed. The feature fusion part fuses the ERP features with the nonlinear feature MMSE to form a new feature vector for conscious emotion recognition. Finally, the feature vector is fed into the classifier for classification and prediction, thereby recognizing conscious and unconscious emotional states.
Advantages of additional aspects of the invention will be set forth in part in the description that follows and in part will become apparent from the description or may be learned by practice of the invention.
Brief Description of the Drawings
The accompanying drawings, which form a part of the present application, are provided for further understanding of the present application; the illustrative embodiments of the present application and their descriptions are used to explain the present application and do not constitute an improper limitation of the present application.
Fig. 1 is a flowchart of the conscious emotion recognition method combining ERP components with the nonlinear feature MMSE according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of the matching and extraction of ERP features in an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of the WPD binary tree at a sampling frequency of 1000 Hz in an embodiment of the present disclosure;
Fig. 4 is a diagram of the moving-average coarse-graining process of MMSE in an embodiment of the present disclosure.
Detailed Description
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present application. Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
It should be noted that the terminology used herein is for the purpose of describing specific embodiments only and is not intended to limit the exemplary embodiments according to the present application. As used herein, unless the context clearly indicates otherwise, the singular forms are intended to include the plural forms as well. Furthermore, it should be understood that the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.
The embodiments of the invention and the features of the embodiments may be combined with each other without conflict.
Embodiment 1
This embodiment provides a conscious emotion recognition method integrating ERP components and nonlinear features.
As shown in Fig. 1, the conscious emotion recognition method integrating ERP components and nonlinear features includes:
S101: acquiring a raw EEG signal and preprocessing the raw EEG signal;
S102: extracting ERP features from the preprocessed EEG signal;
S103: extracting multi-scale sample entropy (MMSE) features from the preprocessed EEG signal;
S104: fusing the ERP features and the MMSE features to obtain fused features;
S105: inputting the fused features into a trained classifier and outputting an emotion recognition result.
In one or more embodiments, S101, acquiring a raw EEG signal and preprocessing the raw EEG signal, specifically includes:
removing artifacts from the acquired raw EEG signal, performing superposition averaging, and then selecting the channels related to conscious emotion to obtain the preprocessed EEG signal.
The channels related to conscious emotion include F3, FZ, F4, FC3, FCZ, FC4, C3, CZ and C4, where F denotes the frontal lobe, C denotes the central line, and Z denotes the zero point, i.e., the midline between the left and right hemispheres.
The electrode naming follows the 10-20 electrode placement system, where F denotes the frontal lobe, FP the frontal poles, T the temporal lobes, O the occipital lobes, P the parietal lobes, C the central line, and Z the zero point, i.e., the midline of the brain; FCZ denotes the junction of the frontal region, the central line and the brain midline; odd numbers denote the left hemisphere and even numbers the right hemisphere.
In step S101 of this embodiment, the present disclosure uses a self-collected conscious and unconscious emotion dataset; data preprocessing mainly includes artifact removal, superposition averaging and channel selection.
The artifact exclusion criterion is the percentage of the artifact signal length relative to the total length of a trial; generally, if it exceeds 20%, the data are considered invalid.
The EEG data of the selected subjects are superimposed and averaged, and the ERPs under each condition are obtained according to the emotional type of the stimulus and the level of consciousness. Superposition averaging cancels out most artifacts, so the data become cleaner and more useful information can be obtained.
Regarding channel selection, using all channels would introduce redundant information on the one hand and increase computational complexity on the other, because not all channels are related to emotion and cognition; the relevant activity is distributed over specific regions, and full-channel computation increases the computational burden.
The N2 and N3 components are mainly distributed over the forehead and central regions, while P3 is mainly distributed over the central, frontal and parietal regions. Combined with research findings on the psychological mechanism of facial expression awareness, the present invention uses data from 18 channels, namely F3, FZ, F4, FC3, FCZ, FC4, C3, CZ, C4, CP3, CPZ, CP4, P3, PZ, P4, PO3, POZ and PO4.
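A minimal sketch of this preprocessing stage is shown below, using plain NumPy; the array layout, the per-sample artifact mask, the variable names and the 20% rejection ratio are illustrative assumptions rather than details fixed by the patent.

```python
import numpy as np

# data:     (n_trials, n_channels, n_samples) epoched EEG for one subject and condition
# artifact: boolean mask of the same shape marking artifact-contaminated samples
# ch_names: list of channel labels matching axis 1 of `data`

SELECTED = ["F3", "FZ", "F4", "FC3", "FCZ", "FC4", "C3", "CZ", "C4",
            "CP3", "CPZ", "CP4", "P3", "PZ", "P4", "PO3", "POZ", "PO4"]

def preprocess(data, artifact, ch_names, reject_ratio=0.2):
    # 1) Reject trials whose artifact samples exceed 20% of the trial length.
    frac = artifact.mean(axis=(1, 2))          # artifact fraction per trial
    clean = data[frac <= reject_ratio]

    # 2) Superposition averaging: average the clean trials of one condition
    #    so that random noise and residual artifacts largely cancel out.
    erp = clean.mean(axis=0)                   # (n_channels, n_samples)

    # 3) Channel selection: keep only the emotion-related electrodes.
    idx = [ch_names.index(ch) for ch in SELECTED if ch in ch_names]
    return erp[idx]                            # (n_selected_channels, n_samples)
```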
In one or more embodiments, S102, extracting ERP features from the preprocessed EEG signal, specifically includes:
performing ERP feature matching and extraction on the preprocessed EEG signal based on the Shapelet algorithm to extract several ERP features.
The ERP features include N200, P300 and N300, where N stands for negative and N200 refers to the negative deflection appearing around 200 ms; likewise, P stands for positive and P300 refers to the positive deflection appearing around 300 ms.
All extracted ERP features are combined into a feature vector x1.
Further, S102, extracting ERP features from the preprocessed EEG signal, includes the following detailed steps:
S1020: manually extracting ERP features and taking the manually extracted ERP features as known features;
S1021: generating candidate subsequences T of the same length as the known features;
S1022: computing in turn the slopes between any two points of a known feature Sk and storing them in a set S'k;
S1023: computing in turn the slopes of a candidate subsequence T and storing them in a set T';
S1024: computing the absolute value of the difference between corresponding slopes in S'k and T' and judging whether each absolute value is smaller than a user-defined similarity threshold ε; if so, proceeding to the next step; if not, taking twice the absolute value of the slope difference as a penalty value;
S1025: finding, according to the sum of all slope differences, the subsequences similar to the known feature, and marking and outputting the subsequence whose sum of slope differences is smallest as the most similar one;
S1026: storing the most similar subsequence in x1.
In step S102 of this embodiment, the Shapelet technique is used to extract ERP features and form a feature vector x1. ERP components such as N200, P300, N300 and LPC differ significantly between conscious and unconscious processing of emotion, but they have not previously been used as features for emotion recognition. Therefore, the present invention proposes a feature matching and extraction algorithm based on the Shapelet technique, which automatically identifies and extracts ERP components. The inputs are the preprocessed EEG signal S, the known feature subsequences Sk and the user-defined similarity threshold ε.
The function of the algorithm is to retrieve, within a specific time window, unknown features similar to a specified shape. In other words, an unknown P300 is matched and output based on a known P300, where the known feature subsequences are the N200, P300 and N300 components manually extracted from the ERP. These subsequences are used to match and extract the unknown corresponding ERP components. The criterion for manually extracting the known ERP subsequences is the time window in which the corresponding ERP component appears. To keep the dimensionality of the feature vector consistent, the window length of each ERP feature is uniformly set to 50 data points. Fig. 2 shows a schematic diagram of the matching and extraction process of the N300 feature.
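The sketch below illustrates the slope-based matching of steps S1021 to S1026, assuming 1-D NumPy arrays and slopes taken between successive samples; the function and variable names, the default threshold and the sliding-window search are illustrative choices, not details fixed by the patent.

```python
import numpy as np

def slope_distance(known, candidate, eps):
    """Sum of slope differences between a known ERP shape and a candidate window.
    Differences below the similarity threshold eps count as-is; larger ones are
    penalised with twice the absolute slope difference (step S1024)."""
    s_known = np.diff(known)      # successive-point slopes of the known feature (S'k)
    s_cand = np.diff(candidate)   # successive-point slopes of the candidate (T')
    d = np.abs(s_known - s_cand)
    return np.where(d < eps, d, 2.0 * d).sum()

def match_erp(signal, known, eps=0.5):
    """Slide a window of the known component's length (50 points in the patent)
    over the preprocessed signal and return the most similar subsequence (S1025)."""
    win = len(known)
    best_cost, best_seq = np.inf, None
    for start in range(len(signal) - win + 1):
        candidate = signal[start:start + win]
        cost = slope_distance(known, candidate, eps)
        if cost < best_cost:
            best_cost, best_seq = cost, candidate
    return best_seq               # appended to the ERP feature vector x1 (S1026)
```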
In one or more embodiments, S103, extracting multi-scale sample entropy (MMSE) features from the preprocessed EEG signal, includes:
performing variational mode decomposition and wavelet packet decomposition and reconstruction on the preprocessed EEG signal to obtain a reconstructed signal based on the β and γ bands, and computing the multi-scale sample entropy (MMSE) of the reconstructed signal.
Further, S103, extracting multi-scale sample entropy (MMSE) features from the preprocessed EEG signal, includes the following detailed steps:
decomposing the preprocessed EEG signal with variational mode decomposition (VMD) to obtain variational mode components (VMFs) of different levels;
decomposing each VMF with wavelet packet decomposition (WPD) and reconstructing the result to obtain the emotion band VMFβ+γ;
computing the multi-scale sample entropy (MMSE) feature of each emotion band VMFβ+γ;
storing the MMSE features in x2.
Further, computing the multi-scale sample entropy (MMSE) feature of each emotion band VMFβ+γ specifically includes:
(1) Let Sθ denote the moving-average time series with scale factor θ. The generation of the moving-average time series is shown in Fig. 4; for an original series s = {s(1), s(2), ..., s(N)}, Sθ is defined as
Sθ(j) = (1/θ) · Σ_{i=j}^{j+θ-1} s(i),  1 ≤ j ≤ N − θ + 1    (1)
(2) Compute the sample entropy of the moving-average time series Sθ with delay time δ = θ, which is named MMSE and calculated as follows:
MMSE(s, m, θ, r) = SampleEn(Sθ, m, δ = θ, r)    (2)
where SampleEn denotes the sample entropy function, m is the embedding dimension, and r is the similarity tolerance.
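A sketch of this computation is given below, assuming the standard moving-average coarse-graining for Eq. (1) and a plain Chebyshev-distance sample entropy for Eq. (2); the parameter defaults and function names are illustrative.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2, delay=1):
    """Plain sample entropy SampleEn(x, m, delay, r); r is taken as a fraction
    of the standard deviation of the series."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def matches(mm):
        n = len(x) - (mm - 1) * delay
        emb = np.array([x[i:i + (mm - 1) * delay + 1:delay] for i in range(n)])
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        return ((dist <= tol).sum() - n) / 2      # count pairs, excluding self-matches

    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def mmse(s, m=2, theta=3, r=0.2):
    """Moving-average multi-scale sample entropy MMSE(s, m, theta, r), Eq. (2)."""
    s = np.asarray(s, dtype=float)
    # Eq. (1): moving-average coarse-graining with scale factor theta
    s_theta = np.convolve(s, np.ones(theta) / theta, mode="valid")
    # sample entropy of the averaged series with delay delta = theta
    return sample_entropy(s_theta, m=m, r=r, delay=theta)
```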
In step S103 of this embodiment, the EEG bands with the best emotion recognition performance are the β and γ bands, which is consistent with neuropsychological research on emotion-related frequency bands.
In WPD, the input signal is decomposed into a low-frequency part and a high-frequency part, each of which is further decomposed into its own low-frequency and high-frequency parts; repeating this process yields a complete wavelet packet binary tree.
The present disclosure uses a five-level wavelet packet decomposition; a schematic diagram of the wavelet packet binary tree is shown in Fig. 3. The db4 wavelet basis is selected for the WPD because it is close to normal EEG signals and therefore produces more accurate decomposition results.
Since the integrated time-frequency analysis method can better capture the nonlinear characteristics of EEG signals, a band reconstruction method based on VMD-WPD is proposed: the β and γ bands are extracted and reconstructed into an emotion band VMFβ+γ, on which the nonlinear feature MMSE is further extracted. The inputs are the preprocessed EEG signal S, the number of VMFs K, the number of WPD decomposition levels i, and the scale factor θ.
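A sketch of the VMD-WPD band reconstruction for one channel is shown below. It assumes the vmdpy package for VMD and PyWavelets for the wavelet packet step; the VMD parameters, the number of modes K, and the 13-45 Hz range used to stand in for β+γ are illustrative assumptions, and the mapping from packet leaves to frequency ranges is only nominal.

```python
import numpy as np
import pywt
from vmdpy import VMD   # assumed VMD implementation: VMD(f, alpha, tau, K, DC, init, tol)

def emotion_band(signal, fs=1000, K=4, wavelet="db4", level=5, band=(13.0, 45.0)):
    """Decompose one EEG channel with VMD, keep only the wavelet-packet leaves of
    each mode that fall inside the beta+gamma range, and reconstruct VMF_beta+gamma."""
    # 1) Variational mode decomposition into K variational mode components.
    u, _, _ = VMD(signal, alpha=2000, tau=0.0, K=K, DC=0, init=1, tol=1e-7)

    leaf_bw = (fs / 2) / (2 ** level)   # nominal bandwidth of one level-5 leaf
    reconstructed = []
    for vmf in u:
        wp = pywt.WaveletPacket(data=vmf, wavelet=wavelet, mode="symmetric", maxlevel=level)
        new_wp = pywt.WaveletPacket(data=None, wavelet=wavelet, mode="symmetric")
        # 2) Keep only the leaves whose nominal frequency range overlaps beta+gamma.
        for k, node in enumerate(wp.get_level(level, order="freq")):
            lo, hi = k * leaf_bw, (k + 1) * leaf_bw
            if hi > band[0] and lo < band[1]:
                new_wp[node.path] = node.data
        reconstructed.append(new_wp.reconstruct(update=False)[: len(vmf)])
    # 3) Sum the band-limited modes into one emotion-band signal, whose MMSE is computed next.
    return np.sum(reconstructed, axis=0)
```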
In one or more embodiments, S104, fusing the ERP features and the MMSE features to obtain fused features, specifically includes:
fusing the feature vector x1 and the feature vector x2 to form a new emotion feature vector x, i.e., x = (x1, x2).
In one or more embodiments, S105, inputting the fused features into a trained classifier and outputting an emotion recognition result, specifically includes:
feeding the emotion feature vector into a trained random forest classifier to classify and recognize conscious and unconscious emotional states.
Further, training the random forest classifier includes:
constructing a random forest classifier;
constructing a training set comprising emotion feature vectors with known emotion recognition results;
inputting the training set into the random forest classifier and training it to obtain the trained random forest classifier.
In step S105 of this embodiment, the feature vector x is fed into the random forest classifier to recognize conscious and unconscious emotions. Traditional classifiers are prone to overfitting, which lowers classification accuracy, so many researchers improve accuracy by combining classifiers. Against this background the random forest classifier emerged; it is one of the most important bagging-based ensemble learning methods. Numerous studies have shown that random forests are among the more popular classifiers in the field of emotion recognition.
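A minimal fusion-and-classification sketch with scikit-learn follows; the placeholder feature dimensions, labels and hyperparameters are illustrative and stand in for the x1, x2 and labels produced by the earlier stages.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
x1 = rng.normal(size=(200, 150))    # placeholder ERP features (e.g. 3 components x 50 points)
x2 = rng.normal(size=(200, 18))     # placeholder MMSE features (one per selected channel)
y = rng.integers(0, 2, size=200)    # placeholder labels: 0 = unconscious, 1 = conscious

x = np.hstack([x1, x2])             # fused emotion feature vector x = (x1, x2), step S104

x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.2, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)   # bagging-based ensemble
clf.fit(x_tr, y_tr)                 # training on feature vectors with known labels
print("held-out accuracy:", clf.score(x_te, y_te))               # step S105
```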
Embodiment 2
This embodiment provides a conscious emotion recognition system integrating ERP components and nonlinear features.
The conscious emotion recognition system integrating ERP components and nonlinear features includes:
an acquisition module configured to acquire a raw EEG signal and preprocess the raw EEG signal;
an ERP feature extraction module configured to extract ERP features from the preprocessed EEG signal;
a nonlinear feature extraction module configured to extract multi-scale sample entropy (MMSE) features from the preprocessed EEG signal;
a feature fusion module configured to fuse the ERP features and the MMSE features to obtain fused features;
an emotion recognition module configured to input the fused features into a trained classifier and output an emotion recognition result.
It should be noted here that the acquisition module, the ERP feature extraction module, the nonlinear feature extraction module, the feature fusion module and the emotion recognition module correspond to steps S101 to S105 in Embodiment 1; the examples and application scenarios implemented by these modules and the corresponding steps are the same, but are not limited to the content disclosed in Embodiment 1. It should be noted that, as part of the system, the above modules may be executed in a computer system such as a set of computer-executable instructions.
The descriptions of the above embodiments each have their own emphasis; for any part not described in detail in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
The proposed system may be implemented in other ways. For example, the system embodiments described above are merely illustrative; the division into modules is only a logical functional division, and other divisions are possible in actual implementation, for example, multiple modules may be combined or integrated into another system, or some features may be omitted or not executed.
Embodiment 3
This embodiment further provides an electronic device including one or more processors, one or more memories, and one or more computer programs, wherein the processor is connected to the memory, the one or more computer programs are stored in the memory, and when the electronic device runs, the processor executes the one or more computer programs stored in the memory so that the electronic device performs the method described in Embodiment 1.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may include a read-only memory and a random access memory and provides instructions and data to the processor; part of the memory may also include a non-volatile random access memory. For example, the memory may also store information on the device type.
In implementation, the steps of the above method may be carried out by integrated logic circuits of hardware in the processor or by instructions in the form of software.
The method of Embodiment 1 may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor. The software modules may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it is not described in detail here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with this embodiment can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
Embodiment 4
This embodiment further provides a computer-readable storage medium for storing computer instructions that, when executed by a processor, carry out the method described in Embodiment 1.
The above are only preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, various modifications and changes may be made to the present application. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the protection scope of the present application.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110156080.XA CN113017628B (en) | 2021-02-04 | 2021-02-04 | Conscious emotion recognition method and system integrating ERP components and nonlinear features |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110156080.XA CN113017628B (en) | 2021-02-04 | 2021-02-04 | Conscious emotion recognition method and system integrating ERP components and nonlinear features |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113017628A CN113017628A (en) | 2021-06-25 |
CN113017628B true CN113017628B (en) | 2022-06-10 |
Family
ID=76459993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110156080.XA Active CN113017628B (en) | 2021-02-04 | 2021-02-04 | Conscious emotion recognition method and system integrating ERP components and nonlinear features |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113017628B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114429174B (en) * | 2021-12-17 | 2025-05-06 | 安徽银边医疗科技有限公司 | Emotion recognition method of EEG signals based on brain network and multi-scale permutation entropy |
CN115192040B (en) * | 2022-07-18 | 2023-08-11 | 天津大学 | EEG emotion recognition method and device based on Poincaré graph and second-order difference graph |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8392255B2 (en) * | 2007-08-29 | 2013-03-05 | The Nielsen Company (Us), Llc | Content based selection and meta tagging of advertisement breaks |
US20150045700A1 (en) * | 2013-08-09 | 2015-02-12 | University Of Washington Through Its Center For Commercialization | Patient activity monitoring systems and associated methods |
CN105342596A (en) * | 2015-12-15 | 2016-02-24 | 深圳市珑骧智能科技有限公司 | Method and system capable of increasing heart rate detection accuracy |
CN110585590A (en) * | 2019-09-24 | 2019-12-20 | 中国人民解放军陆军军医大学第一附属医院 | Transcranial direct current stimulation device and data processing method |
CN111134667B (en) * | 2020-01-19 | 2024-01-26 | 中国人民解放军战略支援部队信息工程大学 | Time migration emotion recognition method and system based on EEG signals |
- 2021
- 2021-02-04 CN CN202110156080.XA patent/CN113017628B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103230276A (en) * | 2013-02-01 | 2013-08-07 | 上海中医药大学附属岳阳中西医结合医院 | Device and method for quantitatively evaluating and recording human body subjective feeling |
CN103577562A (en) * | 2013-10-24 | 2014-02-12 | 河海大学 | Multi-measurement time series similarity analysis method |
CN104020845A (en) * | 2014-03-27 | 2014-09-03 | 浙江大学 | Acceleration transducer placement-unrelated movement recognition method based on shapelet characteristic |
CN107871140A (en) * | 2017-11-07 | 2018-04-03 | 哈尔滨工程大学 | A Method of Similarity Measurement Based on Slope Elasticity |
CN108959704A (en) * | 2018-05-28 | 2018-12-07 | 华北电力大学 | A kind of rewards and punishments weight type simulation sequence similarity analysis method considering metamorphosis |
CN110032585A (en) * | 2019-04-02 | 2019-07-19 | 北京科技大学 | A kind of time series bilayer symbolism method and device |
CN110781945A (en) * | 2019-10-22 | 2020-02-11 | 太原理工大学 | Electroencephalogram signal emotion recognition method and system integrating multiple features |
CN111310570A (en) * | 2020-01-16 | 2020-06-19 | 山东师范大学 | A method and system for EEG emotion recognition based on VMD and WPD |
CN111766473A (en) * | 2020-06-30 | 2020-10-13 | 济南轨道交通集团有限公司 | Power distribution network single-phase earth fault positioning method and system based on slope distance |
Non-Patent Citations (2)
Title |
---|
Time series shapelets: a novel technique that allows accurate, interpretable and fast classification;Lexiang Ye,Eamonn Keogh;《Data Min Knowl Disc》;20111231;全文 * |
- Shapelet classification method based on trend feature representation; Yan Xinming, Meng Fanrong, Yan Qiuyan; Journal of Computer Applications; 2017-12-31; Vol. 37, No. 8; full text *
Also Published As
Publication number | Publication date |
---|---|
CN113017628A (en) | 2021-06-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Autthasan et al. | MIN2Net: End-to-end multi-task learning for subject-independent motor imagery EEG classification | |
Wang et al. | Channel selection method for EEG emotion recognition using normalized mutual information | |
Abo‐Zahhad et al. | State‐of‐the‐art methods and future perspectives for personal recognition based on electroencephalogram signals | |
CN111310570B (en) | A method and system for EEG emotion recognition based on VMD and WPD | |
Chen et al. | Emotion recognition based on fusion of long short-term memory networks and SVMs | |
Kumar et al. | Automated Schizophrenia detection using local descriptors with EEG signals | |
Qiu et al. | LightSeizureNet: A lightweight deep learning model for real-time epileptic seizure detection | |
CN113017628B (en) | Conscious emotion recognition method and system integrating ERP components and nonlinear features | |
Xu et al. | AMDET: Attention based multiple dimensions EEG transformer for emotion recognition | |
Gümüslü et al. | Emotion recognition using EEG and physiological data for robot-assisted rehabilitation systems | |
CN117158970B (en) | Emotion recognition method, system, medium and computer | |
Chen et al. | BAFNet: bottleneck attention based fusion network for sleep apnea detection | |
Saini et al. | DSCNN-CAU: Deep-learning-based mental activity classification for IoT implementation toward portable BCI | |
Gunda et al. | Lightweight attention mechanisms for EEG emotion recognition for brain computer interface | |
Priyadarshini et al. | Emotion Recognition based on fusion of multimodal physiological signals using LSTM and GRU | |
Nirabi et al. | Machine learning-based stress level detection from EEG signals | |
Wang et al. | Wearable wireless dual-channel EEG system for emotion recognition based on machine learning | |
Hassan et al. | Review of EEG signals classification using machine learning and deep-learning techniques | |
Zachariah et al. | Automatic EEG artifact removal by independent component analysis using critical EEG rhythms | |
Guo et al. | Eeg emotion recognition based on dynamical graph attention network | |
Li et al. | MS-FTSCNN: An EEG emotion recognition method from the combination of multi-domain features | |
CN114676720B (en) | Mental state recognition method and system based on graph neural network | |
Rajendran et al. | Machine learning based human mental state classification using wavelet packet decomposition-an EEG study | |
CN116269386B (en) | Multichannel physiological time sequence emotion recognition method based on ordinal division network | |
CN116531001A (en) | Method and device for generating multi-listener electroencephalogram signals and identifying emotion of cross-listener |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240315 Address after: 401329 No. 99, Xinfeng Avenue, Jinfeng Town, Gaoxin District, Jiulongpo District, Chongqing Patentee after: Chongqing Science City Intellectual Property Operation Center Co.,Ltd. Country or region after: China Address before: 250014 No. 88, Wenhua East Road, Lixia District, Shandong, Ji'nan Patentee before: SHANDONG NORMAL University Country or region before: China |
|
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 20240819 Address after: Room 344, 3rd Floor, Building 6, No. 1 Courtyard, Shangdi 10th Street, Haidian District, Beijing 100080 Patentee after: Zhonghao Liying Technology (Beijing) Co.,Ltd. Country or region after: China Address before: 401329 No. 99, Xinfeng Avenue, Jinfeng Town, Gaoxin District, Jiulongpo District, Chongqing Patentee before: Chongqing Science City Intellectual Property Operation Center Co.,Ltd. Country or region before: China |
|
TR01 | Transfer of patent right |