CN114209319A - fNIRS emotion recognition method and system based on graph network and adaptive denoising - Google Patents
fNIRS emotion recognition method and system based on graph network and adaptive denoising
- Publication number
- CN114209319A CN114209319A CN202111315105.2A CN202111315105A CN114209319A CN 114209319 A CN114209319 A CN 114209319A CN 202111315105 A CN202111315105 A CN 202111315105A CN 114209319 A CN114209319 A CN 114209319A
- Authority
- CN
- China
- Prior art keywords
- fnirs
- graph
- emotion recognition
- emotion
- variation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 47
- 230000008909 emotion recognition Effects 0.000 title claims abstract description 43
- 230000003044 adaptive effect Effects 0.000 title claims description 21
- 108010064719 Oxyhemoglobins Proteins 0.000 claims abstract description 18
- 108010002255 deoxyhemoglobin Proteins 0.000 claims abstract description 18
- 230000008451 emotion Effects 0.000 claims abstract description 17
- 238000002835 absorbance Methods 0.000 claims abstract description 12
- 238000013507 mapping Methods 0.000 claims abstract description 11
- 210000004556 brain Anatomy 0.000 claims abstract description 9
- 239000000523 sample Substances 0.000 claims abstract description 8
- 230000008859 change Effects 0.000 claims description 28
- 238000004590 computer program Methods 0.000 claims description 12
- 239000011159 matrix material Substances 0.000 claims description 10
- 230000006870 function Effects 0.000 claims description 8
- 230000008569 process Effects 0.000 claims description 8
- 238000000605 extraction Methods 0.000 claims description 6
- 230000007246 mechanism Effects 0.000 claims description 6
- 230000015654 memory Effects 0.000 claims description 6
- 210000005013 brain tissue Anatomy 0.000 claims description 4
- 238000011176 pooling Methods 0.000 claims description 4
- 230000009467 reduction Effects 0.000 claims description 3
- 238000011423 initialization method Methods 0.000 claims description 2
- 230000010354 integration Effects 0.000 claims description 2
- 108010054147 Hemoglobins Proteins 0.000 abstract 2
- 102000001554 Hemoglobins Human genes 0.000 abstract 2
- 230000002996 emotional effect Effects 0.000 description 10
- 238000004422 calculation algorithm Methods 0.000 description 5
- 238000001514 detection method Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 4
- 238000012549 training Methods 0.000 description 4
- 230000004913 activation Effects 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 3
- 238000004364 calculation method Methods 0.000 description 3
- 238000013135 deep learning Methods 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 238000000537 electroencephalography Methods 0.000 description 3
- 239000000284 extract Substances 0.000 description 3
- 238000010801 machine learning Methods 0.000 description 3
- 230000000638 stimulation Effects 0.000 description 3
- 238000004497 NIR spectroscopy Methods 0.000 description 2
- 230000008033 biological extinction Effects 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 238000011161 development Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000013528 artificial neural network Methods 0.000 description 1
- 230000003542 behavioural effect Effects 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 230000001149 cognitive effect Effects 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 230000007812 deficiency Effects 0.000 description 1
- 238000002599 functional magnetic resonance imaging Methods 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 239000000463 material Substances 0.000 description 1
- 230000004630 mental health Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000000306 recurrent effect Effects 0.000 description 1
- 230000004044 response Effects 0.000 description 1
- 230000006403 short-term memory Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
Images
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/145—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
- A61B5/1455—Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value ; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203—Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
Abstract
Description
Technical Field
The present invention relates to the field of human-machine signal recognition, and in particular to an fNIRS emotion recognition method and system based on a graph network and adaptive denoising.
Background Art
Emotions influence human cognition and behavior and are an important factor in mental health. Emotion recognition, a research hotspot in this area, can be divided into approaches based on non-physiological signals and those based on physiological signals. Traditional physiological measurement methods such as electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI) have made some progress in emotion recognition. At the same time, their limitations have gradually become apparent: low temporal or spatial resolution, expensive acquisition equipment, susceptibility to interference, and poor portability.
In recent years, with the development of near-infrared technology and upgrades to acquisition equipment, functional near-infrared spectroscopy (fNIRS) has emerged as a non-invasive brain measurement technique with high compliance, strong interference resistance, portability, ease of implementation, and low cost, making it suitable for virtually all subject groups and experimental settings. With the continuing development of 5G, the Internet of Things, human-computer interaction, and machine learning, fNIRS-based emotion analysis has important significance and broad application prospects in healthcare, media and entertainment, information retrieval, education, and smart wearable devices. An emotion recognition method and system based on functional near-infrared spectroscopy therefore addresses a wide demand.
Summary of the Invention
To overcome the shortcomings and deficiencies of the prior art, the present invention provides an fNIRS emotion recognition method and system based on a graph network and adaptive denoising.

The present invention adopts the following technical solution:

As shown in Figure 1, an fNIRS emotion recognition method based on a graph network and adaptive denoising includes the following steps:
S1: The fNIRS acquisition device continuously records the change in light intensity between emission and reception and converts this change into a change in absorbance. Using the Beer-Lambert law, the equation relating the absorbance change to the concentration changes of the light-absorbing chromophores in brain tissue, chiefly oxyhemoglobin and deoxyhemoglobin, is obtained; solving this equation yields the relative changes in oxyhemoglobin and deoxyhemoglobin concentration.

The detailed procedure is as follows:
S1.1: Obtain the continuous-wave raw near-infrared light-intensity changes at two different wavelengths from the acquisition device, denoted ΔI(λ1, t) and ΔI(λ2, t).

S1.2: Convert the raw light-intensity information into the absorbance changes ΔA(λ1, t) and ΔA(λ2, t).
S1.3: Using the Beer-Lambert law, obtain the equation relating the absorbance change to the relative concentration changes of the light-absorbing chromophores in brain tissue: ΔA(λ) = [ε_HbO(λ)·ΔC_HbO + ε_HbR(λ)·ΔC_HbR] × d × DPF, where ε is the molar extinction coefficient, d is the detection depth, and DPF is the differential path factor.
S1.4: Solving this system of equations yields the relative changes in oxyhemoglobin and deoxyhemoglobin concentration, denoted ΔC_HbO(t) and ΔC_HbR(t).
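For two wavelengths and two chromophores, S1.1-S1.4 amount to solving a 2x2 linear system per time sample. The sketch below illustrates this; the extinction coefficients, detection depth, and DPF are illustrative placeholders, not values from the patent:

```python
import numpy as np

# Illustrative extinction coefficients [1/(mM*cm)] for (HbO, HbR) at two
# wavelengths -- placeholder values, not the patent's calibration data.
EPS = np.array([[1.4866, 3.8437],   # wavelength 1: [eps_HbO, eps_HbR]
                [2.5264, 1.7986]])  # wavelength 2
d = 3.0    # detection depth [cm] (assumed)
DPF = 6.0  # differential path factor (assumed)

def intensity_to_dOD(I, I0):
    """S1.2: convert raw light intensity to an absorbance (optical density) change."""
    return -np.log10(I / I0)

def dOD_to_concentration(dA):
    """S1.3-S1.4: solve the modified Beer-Lambert system
    dA(lambda_i) = (eps_HbO_i * dC_HbO + eps_HbR_i * dC_HbR) * d * DPF
    for the concentration changes (dC_HbO, dC_HbR)."""
    return np.linalg.solve(EPS * d * DPF, dA)

# Example: light dims slightly at both wavelengths relative to baseline
I0 = np.array([1.0, 1.0])
I = np.array([0.95, 0.97])
dA = intensity_to_dOD(I, I0)
dC_HbO, dC_HbR = dOD_to_concentration(dA)
```

Repeating the solve for every sample of every channel produces the time series ΔC_HbO(t) and ΔC_HbR(t) used in the following steps.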
S2: A pure signal is obtained by denoising with an adaptive denoising network model. The model's input is the relative concentration changes of oxyhemoglobin and deoxyhemoglobin obtained in the previous step; its output is the denoised relative concentration-change data for oxyhemoglobin and deoxyhemoglobin.

More specifically:
As shown in Figure 2, the adaptive denoising network model comprises a generator built from several convolution and deconvolution blocks of different sizes, denoted G_p. Its input is the noisy data ΔC_HbO(t) and ΔC_HbR(t); the generator's output is the generated pure relative concentration-change data for oxyhemoglobin and deoxyhemoglobin.
The difference between the denoising network's output and its input is taken as the generated pure-noise signal.

Adding this generated pure-noise signal to the clean data P_HbO(t) and P_HbR(t) yields the generated noisy data, which is fed to the discriminator.
Writing ΔC for the pair ΔC_HbO(t), ΔC_HbR(t) and P for the pair P_HbO(t), P_HbR(t), the total loss of the model is defined as the sum of three terms: the noise-discriminator loss, the clean-signal-discriminator loss, and the cycle-consistency loss.
After iterative training, denoising reduces to a single forward pass through the generator:

P_HbO(t), P_HbR(t) = G_p(ΔC_HbO(t), ΔC_HbR(t)) (8)

This yields the pure relative concentration-change data P_HbO(t) and P_HbR(t) for oxyhemoglobin and deoxyhemoglobin.
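The signal bookkeeping in S2 (generator output, extracted noise, regenerated noisy data, cycle-style reconstruction error) can be sketched with plain array arithmetic. In the sketch below, `fake_generator` is a hypothetical moving-average smoother standing in for the trained convolution/deconvolution generator G_p, and the sine wave stands in for a clean hemodynamic trace:

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_generator(noisy):
    """Placeholder for the trained conv/deconv generator G_p:
    a simple moving-average smoother stands in for denoising."""
    kernel = np.ones(5) / 5.0
    return np.convolve(noisy, kernel, mode="same")

clean = np.sin(np.linspace(0, 4 * np.pi, 200))   # stand-in for P(t)
noisy = clean + 0.1 * rng.standard_normal(200)   # stand-in for dC(t)

denoised = fake_generator(noisy)                 # generator output
gen_noise = denoised - noisy                     # output minus input, per the text
regen_noisy = gen_noise + clean                  # generated noisy data for the discriminator
cycle_l1 = np.abs(fake_generator(regen_noisy) - clean).mean()  # cycle-style error
```

In training, `regen_noisy` and `denoised` would be scored by the two discriminators, while `cycle_l1` illustrates the kind of reconstruction term the cycle-consistency loss penalizes.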
S3: Graph nodes are mapped using the probe and channel characteristics, the brain topology is reconstructed with a graph network, and emotion labels are output by a dynamic graph-attention emotion recognition network model. The model comprises graph convolution and an attention mechanism; the probes are the fNIRS optodes, comprising emitters and detectors.

As shown in Figure 3, the specific procedure is:
S3.1: First define the graph, denoted G(V, E, W), where V is the set of graph nodes, with |V| = n nodes corresponding to the n fNIRS channels whose data sequences ΔC_HbO(t), ΔC_HbR(t) are written X; E is the set of edges in the graph, i.e., the pairings of different fNIRS channels; and W is the adjacency matrix defining node connectivity, i.e., the correlation between channels. The entries w_ij of the adjacency matrix describe the importance of the relation between nodes and are initialized with a Gaussian kernel function of the inter-node distance, where dist is the Gaussian distance between nodes and θ and τ are fixed parameters of the Gaussian-distance algorithm.
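A common thresholded form of such a kernel sets w_ij = exp(-dist_ij^2 / θ^2) when dist_ij ≤ τ and 0 otherwise; this exact functional form, and the channel coordinates below, are assumptions for illustration:

```python
import numpy as np

def gaussian_kernel_adjacency(coords, theta=1.0, tau=2.0):
    """Initialize w_ij from pairwise channel distances with a thresholded
    Gaussian kernel: w_ij = exp(-dist_ij**2 / theta**2) when dist_ij <= tau,
    else 0.  theta and tau are the fixed kernel parameters."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    W = np.exp(-(dist ** 2) / theta ** 2)
    W[dist > tau] = 0.0
    return W

# Four hypothetical optode-channel positions on a 2-D scalp layout
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
W = gaussian_kernel_adjacency(coords)
```

Nearby channels get strong initial edge weights, while the distant fourth channel starts disconnected; the later back-propagation step then adapts W dynamically.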
S3.2: To introduce graph attention, first compute the similarity coefficient e_ij between each node and its neighboring nodes, where a is the global mapping matrix and W_i and W_j are the weight matrices of node i and node j, respectively.
S3.3: Compute the attention coefficients between graph nodes and normalize them, denoted α_ij, where LeakyReLU() is the nonlinear activation function applied before the softmax normalization.
S3.4: In the graph convolution, multi-head attention performs a weighted sum for parameter integration, producing the new feature X′_i, where σ is the nonlinear mapping, K is the number of attention heads, and ∥ denotes concatenation.
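S3.2-S3.4 can be sketched as a single NumPy graph-attention layer: per head, a raw coefficient e_ij is formed from the mapped features of nodes i and j, passed through LeakyReLU, softmax-normalized over the neighborhood, and the head outputs are concatenated. The weight matrices, the attention vector `a`, and the choice of tanh for σ are random illustrative stand-ins for learned parameters, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(1)

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(X, A, n_heads=2, d_out=4):
    """One graph-attention layer in the spirit of S3.2-S3.4:
    e_ij from mapped node features, alpha_ij = softmax_j(LeakyReLU(e_ij))
    over the neighborhood given by adjacency A, multi-head outputs concatenated."""
    n, d_in = X.shape
    heads = []
    for _ in range(n_heads):
        Wm = rng.standard_normal((d_in, d_out)) * 0.1   # stand-in weight matrix
        a = rng.standard_normal(2 * d_out) * 0.1        # stand-in attention vector
        H = X @ Wm                                      # (n, d_out) mapped features
        s = (H @ a[:d_out])[:, None] + (H @ a[d_out:])[None, :]  # raw e_ij
        e = np.where(A > 0, leaky_relu(s), -np.inf)     # attend only over neighbours
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha = alpha / alpha.sum(axis=1, keepdims=True)  # normalized alpha_ij
        heads.append(np.tanh(alpha @ H))                # sigma = tanh (illustrative)
    return np.concatenate(heads, axis=1)

X = rng.standard_normal((4, 6))          # 4 channels, 6-sample feature windows
A = np.eye(4) + 0.5 * np.ones((4, 4))    # dense toy adjacency with self-loops
X_new = gat_layer(X, A)                  # shape (4, n_heads * d_out)
```

Each row of `alpha` sums to 1, so every node's new feature is a convex combination of its neighbours' mapped features, one combination per head.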
S3.5: Apply pooling for dimensionality reduction, then pass through flattening and fully connected layers and output the emotion class via the classifier; the model framework is shown in Figure 3.
S3.6: The model uses cross-entropy plus a regularization term as its loss function, expressed as Loss = CrossEntropy(l, l̂) + λ‖Θ‖², where CrossEntropy() denotes the cross-entropy computation, l and l̂ are the true label and the predicted value, λ is the regularization coefficient, and Θ denotes all parameters of the model.
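The loss in S3.6 can be sketched as follows; the softmax head over six classes and the value of the regularization weight λ are assumptions for illustration:

```python
import numpy as np

def model_loss(logits, label, params, lam=1e-3):
    """Cross-entropy on the softmax output plus an L2 regularization term
    lam * ||Theta||^2 summed over all model parameters (S3.6)."""
    z = logits - logits.max()                 # stabilized softmax
    probs = np.exp(z) / np.exp(z).sum()
    ce = -np.log(probs[label])                # cross-entropy with true label
    l2 = sum((p ** 2).sum() for p in params)  # ||Theta||^2 over all parameters
    return ce + lam * l2

logits = np.array([2.0, 0.5, 0.1, -1.0, 0.0, 0.3])  # six emotion classes (toy)
params = [np.ones((3, 3)), np.ones(3)]              # stand-in model parameters
loss = model_loss(logits, label=0, params=params)
```

The regularization term grows with the parameter norms, so λ trades classification fit against parameter magnitude.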
S3.7: Back-propagation is used to make the adjacency matrix dynamic: the partial derivative of the loss function with respect to the adjacency matrix is computed and used to iteratively update the network, with each iteration stepping the adjacency matrix along the negative gradient of the loss.
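The dynamic-adjacency update in S3.7 is ordinary gradient descent on the adjacency matrix W. A minimal sketch with a toy differentiable loss (in the real model the gradient would come from the classification loss above, and the step size `eta` is an assumed hyperparameter):

```python
import numpy as np

def toy_loss(W, target):
    """Stand-in differentiable loss over the adjacency matrix."""
    return 0.5 * ((W - target) ** 2).sum()

def toy_grad(W, target):
    return W - target            # analytic dLoss/dW for the toy loss

W = np.full((3, 3), 0.5)         # initial adjacency (e.g. from the Gaussian kernel)
target = np.eye(3)
eta = 0.1                        # update step size (assumed)
for _ in range(100):
    W = W - eta * toy_grad(W, target)   # W_next = W - eta * dLoss/dW
```

After enough iterations W converges toward the loss minimizer, mirroring how the learned adjacency drifts away from its kernel initialization during training.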
An fNIRS emotion recognition system based on a graph network and adaptive denoising, comprising:

An fNIRS acquisition module, which uses the fNIRS acquisition device to continuously record the change in light intensity between emission and reception, converts it into an absorbance change, and further obtains the relative concentration changes of oxyhemoglobin and deoxyhemoglobin;

An fNIRS adaptive denoising network module, used to obtain the pure signal;

An fNIRS dynamic graph-attention emotion recognition network module, which outputs an emotion label from the pure signal.
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the fNIRS emotion recognition method when executing the computer program.
A storage medium storing a computer program which, when executed by a processor, implements the steps of the fNIRS emotion recognition method.
Beneficial effects of the present invention:
(1) In the data denoising process, an adaptive denoising model based on a generative adversarial network is adopted. Compared with traditional machine-learning-based denoising, the present invention largely avoids manual intervention and empirical analysis during denoising, overcomes the strong task dependence of traditional methods, and is highly adaptive under multi-task conditions. Meanwhile, generative adversarial learning resolves the difference in fNIRS noise between "static tasks" and "dynamic tasks" without specific assumptions, giving the denoising model strong generalization ability.
(2) A deep-learning network model is used to extract emotional representations from the fNIRS signal. Compared with traditional hand-crafted feature extraction, the present invention extracts emotional features from fNIRS data in a data-driven manner through network learning, overcoming the limited dimensionality and uncertain effectiveness of fixed feature computations in traditional methods. Learning and extracting features with a deep network effectively captures emotional representations across different dimensions and strengthens the extraction and use of emotional features in fNIRS data.
(3) A dynamic graph convolutional neural network is used to effectively model fNIRS data carrying probe-position information and channel signals. In contrast to traditional approaches that treat the data simply as time-series signals and analyze them with machine-learning methods based on support vector machines or Bayesian classifiers, or with recurrent neural networks based on long short-term memory, the present invention maps fNIRS data onto a graph topology: different probes become graph nodes, the time-series data become node features, and the relations between channels are represented as graph edges through the adjacency matrix. This method fully exploits the characteristics of the data, faithfully restores the topology of the brain structure, and characterizes the correlations between channel data, improving the accuracy of the network model in emotion recognition from fNIRS brain signals.
(4) A graph attention mechanism is introduced: attention coefficients between nodes are obtained by computing similarity coefficients between each probe node and its neighboring nodes, and node features are updated during graph convolution by the weighted summation of a multi-head attention mechanism. This introduces the feature relations of neighboring nodes during model training, allowing the model to better extract the correlated features of different fNIRS channels and to capture the activation responses of different brain regions to different emotions, which plays a significant role in emotion recognition from brain signals.
Brief Description of the Drawings
Figure 1 is the workflow diagram of the present invention;

Figure 2 is the structure diagram of the fNIRS adaptive denoising network model of the present invention;

Figure 3 is the structure diagram of the fNIRS dynamic graph-attention emotion recognition network model of the present invention;

Figure 4 is a schematic diagram of the fNIRS acquisition module of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.
Embodiment 1
An fNIRS emotion recognition method based on a graph network and adaptive denoising, suitable for emotion recognition tasks with fNIRS acquisition equipment, mainly comprising an external emotional stimulation step, an fNIRS acquisition step, an fNIRS adaptive denoising step, and an fNIRS emotion recognition step.
External emotional stimulation step: the stimulus material consists of videos covering six emotion labels (anger, disgust, fear, pleasure, sadness, surprise); the user's emotions are induced by watching the videos.
As shown in Figure 4, in the fNIRS acquisition step the acquisition device consists of multi-channel dual-wavelength near-infrared continuous-wave emitter-receiver sources.
First, wearing the fNIRS acquisition device, the light-intensity changes at the different optodes between emission and reception are collected and recorded in real time.
Let the single-channel light-intensity changes at any time for the two wavelengths be ΔI(λ1, t) and ΔI(λ2, t). These are converted to the absorbance changes ΔA(λ1, t) and ΔA(λ2, t).
According to the Beer-Lambert law, ΔA = ε × ΔC × d × DPF, where ε is the molar extinction coefficient, ΔC the chromophore concentration change, d the detection depth, and DPF the differential path factor.
By solving the equation relating the absorbance change to the relative change in the concentration of the light-absorbing chromophores in brain tissue, the relative changes in oxyhemoglobin and deoxyhemoglobin concentration are obtained, denoted ΔC_HbO and ΔC_HbR.
fNIRS adaptive denoising step
The data are input into the fNIRS adaptive denoising module for denoising enhancement; the trained generator network yields the pure relative concentration-change data P_HbO and P_HbR of oxyhemoglobin and deoxyhemoglobin: P_HbO, P_HbR = G_p(ΔC_HbO, ΔC_HbR).
The fNIRS adaptive denoising module is based on a generative adversarial network: convolution extracts features from the fNIRS signal and deconvolution generates the pure signal. During training, paired noisy and clean signals are input and the network is trained with the adversarial losses of two discriminators, while a cycle-consistency loss is introduced to constrain the spatial mapping and improve training efficiency.
fNIRS emotion recognition step
The pure data of all channels are mapped to graph nodes: P_HbO, P_HbR → V, where V is the set of graph nodes. The adjacency matrix W is initialized with a Gaussian kernel function of the inter-node distance, where dist is the Gaussian distance between nodes and θ and τ are fixed parameters of the Gaussian-distance algorithm.
The similarity coefficient e_ij between each node and its neighboring nodes is computed, where a is the global mapping matrix and W_i and W_j are the weight matrices of node i and node j, respectively.
The graph-node attention coefficients are normalized to α_ij using the LeakyReLU nonlinear activation function.
New features X′_i are obtained by graph-convolution weighted summation and concatenation of the multi-head attention outputs, where σ is the nonlinear mapping, K is the number of attention heads, and ∥ denotes concatenation.
Pooling reduces the dimensionality; after flattening and fully connected layers, the classifier outputs the emotion recognition probabilities.
In this embodiment, the emotion recognition probabilities are the probability values for the six Ekman emotion labels (anger, disgust, fear, happiness, sadness, surprise), and the six probabilities sum to 1.
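The classifier head described here outputs a six-way probability vector over the Ekman labels; a minimal sketch with hypothetical logits:

```python
import numpy as np

LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def softmax(logits):
    """Convert raw classifier scores to probabilities that sum to 1."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

p = softmax(np.array([1.2, -0.3, 0.1, 2.0, -1.0, 0.4]))  # toy logits
predicted = LABELS[int(p.argmax())]
```

The predicted emotion is simply the label with the largest probability.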
Existing physiological-signal emotion recognition methods are mostly based on EEG and fMRI. The present invention proposes an emotion recognition method based on functional near-infrared spectroscopy, fully exploiting the role and potential of this new non-invasive brain measurement technique in emotion research, which is not only of great practical significance but also opens a new approach to emotion analysis from physiological signals.
The signal denoising method adopted in this application achieves end-to-end adaptive denoising of multi-channel fNIRS signals, and the algorithm has strong generalization ability and broad applicability.
The present invention proposes a dynamic graph-attention model that uses dynamic graph convolution to construct the brain topology and extract features, while introducing an attention mechanism to extract the correlated features between fNIRS channels, improving the model's learning capacity and achieving higher emotion recognition accuracy.
The present invention adopts deep learning to learn and extract features in a data-driven manner, improving the expressiveness of the emotional features while avoiding manual intervention.
Embodiment 2
An fNIRS emotion recognition system based on a graph network and adaptive denoising, comprising:

An fNIRS acquisition module, which uses the fNIRS acquisition device to continuously record the change in light intensity between emission and reception, converts it into an absorbance change, and further obtains the relative concentration changes of oxyhemoglobin and deoxyhemoglobin;

An fNIRS adaptive denoising network module, used to obtain the pure signal;

An fNIRS dynamic graph-attention emotion recognition network module, which outputs an emotion label from the pure signal.
Embodiment 3
A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the fNIRS emotion recognition method when executing the computer program.
Embodiment 4
A storage medium storing a computer program which, when executed by a processor, implements the steps of the fNIRS emotion recognition method.
The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other change, modification, substitution, combination, or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111315105.2A CN114209319B (en) | 2021-11-08 | 2021-11-08 | fNIRS emotion recognition method and system based on graph network and self-adaptive denoising |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114209319A (en) | 2022-03-22 |
CN114209319B (en) | 2024-03-29 |
Family
ID=80696655
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111315105.2A Active CN114209319B (en) | 2021-11-08 | 2021-11-08 | fNIRS emotion recognition method and system based on graph network and self-adaptive denoising |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114209319B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070202477A1 (en) * | 2004-09-02 | 2007-08-30 | Nagaoka University Of Technology | Emotional state determination method |
US20170172479A1 (en) * | 2015-12-21 | 2017-06-22 | Outerfacing Technology LLC | Acquiring and processing non-contact functional near-infrared spectroscopy data |
CN107280685A (en) * | 2017-07-21 | 2017-10-24 | 国家康复辅具研究中心 | Top layer physiological noise minimizing technology and system |
US20190239792A1 (en) * | 2018-02-07 | 2019-08-08 | Denso Corporation | Emotion identification apparatus |
CN111466876A (en) * | 2020-03-24 | 2020-07-31 | 山东大学 | An auxiliary diagnosis system for Alzheimer's disease based on fNIRS and graph neural network |
WO2020166091A1 (en) * | 2019-02-15 | 2020-08-20 | 俊徳 加藤 | Biological function measurement device, and biological function measurement method, and program |
WO2021067464A1 (en) * | 2019-10-01 | 2021-04-08 | The Board Of Trustees Of The Leland Stanford Junior University | Joint dynamic causal modeling and biophysics modeling to enable multi-scale brain network function modeling |
CN113180650A (en) * | 2021-01-25 | 2021-07-30 | 北京不器科技发展有限公司 | Near-infrared brain imaging atlas identification method |
KR102288267B1 (en) * | 2020-07-22 | 2021-08-11 | 액티브레인바이오(주) | AI(Artificial Intelligence) BASED METHOD OF PROVIDING BRAIN INFORMATION |
CN113598774A (en) * | 2021-07-16 | 2021-11-05 | 中国科学院软件研究所 | Active emotion multi-label classification method and device based on multi-channel electroencephalogram data |
Non-Patent Citations (1)
Title |
---|
LEMONQC: "Graph Attention Network (GAT)", pages 1-3, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/118605260?utm_id=0> * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114782449A (en) * | 2022-06-23 | 2022-07-22 | 中国科学技术大学 | Method, system, equipment and storage medium for extracting key points in lower limb X-ray image |
WO2025028393A1 (en) * | 2023-07-28 | 2025-02-06 | 憲吾 田代 | Brain activity measuring device |
CN117156072A (en) * | 2023-11-01 | 2023-12-01 | 慧创科仪(北京)科技有限公司 | Device for processing near infrared data of multiple persons, processing equipment and storage medium |
CN117156072B (en) * | 2023-11-01 | 2024-02-13 | 慧创科仪(北京)科技有限公司 | Device for processing near infrared data of multiple persons, processing equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114209319B (en) | 2024-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Chen et al. | Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks | |
Dalvi et al. | A survey of ai-based facial emotion recognition: Features, ml & dl techniques, age-wise datasets and future directions | |
CN114209319A (en) | fNIRS emotion recognition method and system based on graph network and adaptive denoising | |
Kumar Arora et al. | Optimal facial feature based emotional recognition using deep learning algorithm | |
CN110765873B (en) | Facial expression recognition method and device based on expression intensity label distribution | |
CN110367967B (en) | Portable lightweight human brain state detection method based on data fusion | |
CN112766355B (en) | A method for EEG emotion recognition under label noise | |
CN113343860A (en) | Bimodal fusion emotion recognition method based on video image and voice | |
CN113157094A (en) | Electroencephalogram emotion recognition method combining feature migration and graph semi-supervised label propagation | |
CN115349860A (en) | A multi-modal emotion recognition method, system, device and medium | |
Jinliang et al. | EEG emotion recognition based on granger causality and capsnet neural network | |
CN117033638A (en) | Text emotion classification method based on EEG cognition alignment knowledge graph | |
CN117725367A (en) | Speech imagination brain electrolysis code method for source domain mobility self-adaptive learning | |
CN115238835A (en) | Electroencephalogram emotion recognition method, medium and equipment based on double-space adaptive fusion | |
Younis et al. | Machine learning for human emotion recognition: a comprehensive review | |
CN111709284B (en) | Dance Emotion Recognition Method Based on CNN-LSTM | |
ALISAWI et al. | Real-time emotion recognition using deep learning methods: systematic review | |
CN115909438A (en) | Pain expression recognition system based on deep spatio-temporal convolutional neural network | |
CN116738330A (en) | Semi-supervision domain self-adaptive electroencephalogram signal classification method | |
Jayasekara et al. | Timecaps: Capturing time series data with capsule networks | |
CN115169386A (en) | Weak supervision increasing activity identification method based on meta-attention mechanism | |
CN117942079B (en) | Emotion intelligence classification method and system based on multidimensional sensing and fusion | |
Zhu et al. | Annotation efficiency in multimodal emotion recognition with deep learning | |
Lu et al. | LGL-BCI: A lightweight geometric learning framework for motor imagery-based brain-computer interfaces | |
Aljaloud et al. | Facial Emotion Recognition using Neighborhood Features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||