CN114209319A - fNIRS emotion recognition method and system based on graph network and adaptive denoising - Google Patents


Publication number
CN114209319A
Authority
CN
China
Legal status
Granted
Application number
CN202111315105.2A
Other languages
Chinese (zh)
Other versions
CN114209319B
Inventor
青春美
岑敬伦
徐向民
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Application filed by South China University of Technology (SCUT)
Priority to CN202111315105.2A
Publication of CN114209319A
Application granted
Publication of CN114209319B
Legal status: Active


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/145: Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B 5/1455: Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometrical oximeters
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7203: Signal processing for noise prevention, reduction or removal
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267: Classification of physiological signals or data involving training the classification device


Abstract

The invention discloses an fNIRS emotion recognition method and system based on a graph network and adaptive denoising. An fNIRS acquisition device continuously records the change in light intensity between emission and reception, converts it into a change in absorbance, and from this derives the relative changes in oxyhemoglobin and deoxyhemoglobin concentration. An adaptive denoising network model then produces a clean signal: its input is the relative concentration changes obtained in the previous step, and its output is the clean relative concentration-change data. Finally, graph nodes are mapped by combining probe and channel characteristics, a graph network restores the brain topology, and a dynamic graph attention emotion recognition network model classifies the signal and outputs an emotion label. The invention addresses problems such as the cumbersome wearing and difficult operation of current brain-computer interfaces in practical applications.

Description

fNIRS emotion recognition method and system based on graph network and adaptive denoising

Technical Field

The invention relates to the field of human-machine signal recognition, and in particular to an fNIRS emotion recognition method and system based on a graph network and adaptive denoising.

Background

Emotion influences human cognition and behavior and is an important factor in mental health. Emotion recognition, a research hotspot in this area, can be divided into approaches based on non-physiological signals and on physiological signals. As traditional physiological measurement methods, EEG, MEG, and fMRI have made progress in emotion recognition, but their limitations have gradually become apparent: low temporal or spatial resolution, expensive acquisition equipment, susceptibility to interference, and poor portability.

In recent years, with the development of near-infrared technology and upgrades to acquisition equipment, functional near-infrared spectroscopy (fNIRS) has emerged as a non-invasive brain-monitoring technique. It offers high compliance, strong resistance to interference, portability, ease of implementation, and low cost, and is applicable to virtually all subject groups and experimental scenarios. With the continuing development of 5G, the Internet of Things, human-computer interaction, and machine learning, fNIRS-based emotion analysis has important significance and broad application prospects in healthcare, media and entertainment, information retrieval, education, and smart wearable devices. An emotion recognition method and system based on functional near-infrared spectroscopy therefore meets a wide demand.

Summary of the Invention

To overcome the shortcomings and deficiencies of the prior art, the present invention provides an fNIRS emotion recognition method and system based on a graph network and adaptive denoising.

The present invention adopts the following technical solution:

As shown in Fig. 1, an fNIRS emotion recognition method based on a graph network and adaptive denoising comprises the following steps:

S1. The fNIRS acquisition device continuously records the change in light intensity between emission and reception and converts it into a change in absorbance. Using the Beer-Lambert law, the relationship equation between the absorbance change and the change in the concentrations of the light-absorbing chromophores in brain tissue (chiefly oxyhemoglobin and deoxyhemoglobin) is obtained, and solving this equation yields the relative changes in oxyhemoglobin and deoxyhemoglobin concentration.

The detailed process is as follows:

S1.1. Obtain the raw continuous-wave near-infrared light-intensity changes at two different wavelengths from the acquisition device, denoted I_λ1(t) and I_λ2(t).

S1.2. Convert the raw light-intensity information into absorbance changes ΔA_λ1(t) and ΔA_λ2(t):

ΔA_λ1(t) = -log10( I_λ1(t) / I_λ1(0) )

ΔA_λ2(t) = -log10( I_λ2(t) / I_λ2(0) )

where I_λ(0) is the baseline light intensity.

S1.3. According to the Beer-Lambert law, obtain the relationship equation between the absorbance change and the relative change in chromophore concentration in brain tissue:

ΔA_λ(t) = ( ε_HbO,λ ΔC_HbO(t) + ε_HbR,λ ΔC_HbR(t) ) × d × DPF

where ε is the molar extinction coefficient, d is the detection depth, and DPF is the differential path factor.

S1.4. Solving this equation at the two wavelengths yields the relative changes in oxyhemoglobin and deoxyhemoglobin concentration, denoted ΔC_HbO(t) and ΔC_HbR(t).
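The two-wavelength procedure in S1.3 and S1.4 reduces to a 2x2 linear solve. The sketch below illustrates this; the extinction coefficients, depth, and DPF values are illustrative placeholders, not values taken from the patent:

```python
import numpy as np

# Modified Beer-Lambert law: for each wavelength lambda,
#   dA_lambda = (eps_HbO_lambda * dC_HbO + eps_HbR_lambda * dC_HbR) * d * DPF
# With two wavelengths this is a 2x2 linear system in (dC_HbO, dC_HbR).

def solve_mbll(dA, eps, d, dpf):
    """dA: (2,) absorbance changes at the two wavelengths.
    eps: (2, 2) molar extinction coefficients, rows = wavelengths,
         columns = (HbO, HbR). d: detection depth, dpf: differential
    path factor. Returns (dC_HbO, dC_HbR)."""
    A = eps * d * dpf  # path-scaled coefficient matrix
    return np.linalg.solve(A, np.asarray(dA, dtype=float))

# Illustrative (not calibrated) extinction coefficients for 760 nm / 850 nm:
eps = np.array([[1.4866, 3.8437],    # 760 nm: (HbO, HbR)
                [2.5264, 1.7986]])   # 850 nm: (HbO, HbR)
d, dpf = 3.0, 6.0                    # assumed depth (cm) and DPF

dC = solve_mbll([0.01, 0.02], eps, d, dpf)  # (dC_HbO, dC_HbR)
```

Because HbR absorbs more strongly below the isosbestic point and HbO above it, the two wavelengths give a well-conditioned system and a unique solution.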

S2. A clean signal is obtained by denoising with an adaptive denoising network model. The model's input is the relative concentration changes of oxyhemoglobin and deoxyhemoglobin obtained in the previous step; its output is the clean relative concentration-change data for oxyhemoglobin and deoxyhemoglobin.

More specifically:

As shown in Fig. 2, the adaptive denoising network model contains a generator, denoted G_p, built from multiple convolution and deconvolution blocks of different sizes and trained as deep convolutional adversarial pairs. Its input is the noisy data ΔC_HbO(t) and ΔC_HbR(t), and the generator's output is the generated clean relative concentration-change data for oxyhemoglobin and deoxyhemoglobin, denoted P̂_HbO(t) and P̂_HbR(t).

The difference between the output and the input of the denoising network model is the generated pure noise signal, denoted N̂(t).

The generated pure noise signal N̂(t) is added to clean data P_HbO(t) and P_HbR(t) to obtain the generated noisy data, denoted ΔĈ_HbO(t) and ΔĈ_HbR(t).

Writing ΔC for ΔC_HbO(t), ΔC_HbR(t) and P for P_HbO(t), P_HbR(t), the total loss of the model is defined as:

L_total = L_Dn + L_Dp + L_cyc

where L_Dn is the loss of the noise discriminator D_n, which distinguishes the real noise ΔC - P from the generated noise N̂:

L_Dn = E[ log D_n(ΔC - P) ] + E[ log(1 - D_n(N̂)) ]

L_Dp is the loss of the clean-signal discriminator D_p, which distinguishes clean data P from the generated clean data P̂:

L_Dp = E[ log D_p(P) ] + E[ log(1 - D_p(P̂)) ]

and L_cyc is the cycle-consistency loss between the generated noisy data ΔĈ and the original noisy data ΔC:

L_cyc = E[ ||ΔĈ - ΔC||_1 ]

After iterative training, the denoising operation is:

P_HbO(t), P_HbR(t) = G_p(ΔC_HbO(t), ΔC_HbR(t))    (8)

which yields the clean relative concentration-change data P_HbO(t) and P_HbR(t).
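The structure of the three training terms can be illustrated numerically. In the sketch below the discriminators are trivial sigmoid stand-ins, and the adversarial terms use the standard GAN log-loss form; none of this reproduces the patent's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

def d_stub(x, w):
    """Stand-in discriminator: sigmoid over a weighted mean.
    (The real discriminators are convolutional networks.)"""
    return 1.0 / (1.0 + np.exp(-w * np.mean(x)))

def gan_losses(dC, P, P_hat, w_n=1.0, w_p=1.0):
    """dC: noisy input, P: clean reference, P_hat: generator output.
    Returns the total loss and its three components."""
    n_real = dC - P        # true noise
    n_gen = dC - P_hat     # generated noise (input-minus-output residual)
    dC_gen = P + n_gen     # generated noisy data
    # noise-discriminator loss: real noise vs. generated noise
    L_Dn = -np.log(d_stub(n_real, w_n)) - np.log(1 - d_stub(n_gen, w_n) + 1e-12)
    # clean-discriminator loss: clean data vs. generated clean data
    L_Dp = -np.log(d_stub(P, w_p)) - np.log(1 - d_stub(P_hat, w_p) + 1e-12)
    # cycle-consistency loss: generated noisy data vs. original noisy data
    L_cyc = np.mean(np.abs(dC_gen - dC))
    return L_Dn + L_Dp + L_cyc, (L_Dn, L_Dp, L_cyc)

dC = rng.normal(size=100)            # synthetic noisy channel
P = dC - 0.1 * rng.normal(size=100)  # synthetic clean reference
total, parts = gan_losses(dC, P, P_hat=P.copy())  # a "perfect" generator
```

With a perfect generator the cycle term vanishes while the adversarial terms remain, which is what drives the discriminators during training.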

S3. Graph nodes are mapped by combining probe and channel characteristics, a graph network restores the brain topology, and a dynamic graph attention emotion recognition network model classifies the signal and outputs an emotion label. The dynamic graph attention emotion recognition network model comprises graph convolution and an attention mechanism; the probes are the fNIRS optodes, including sources and detectors.

As shown in Fig. 3, the specific process is as follows:

S3.1. First define the graph, denoted G(V, E, W). V is the set of graph nodes; |V| = n means there are n nodes in total, corresponding to the data sequences ΔC_HbO(t), ΔC_HbR(t) of the n fNIRS channels, collectively denoted X. E is the set of edges in the graph, i.e. the connections between different fNIRS channels. W is the adjacency matrix, which defines the connection relationships between nodes, i.e. the correlations between different fNIRS channels; its entries describe the importance of the relationship between nodes. Each value w_ij is initialized with a Gaussian kernel:

w_ij = exp( -dist(i, j)^2 / (2θ^2) ) if dist(i, j) ≤ τ, and 0 otherwise

where dist is the Gaussian distance between nodes, and θ and τ are fixed parameters of the Gaussian-distance algorithm.
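The Gaussian-kernel initialization can be sketched as follows; the 2-D channel coordinates, the Euclidean distance, and the hard threshold at τ are assumptions for illustration:

```python
import numpy as np

def init_adjacency(pos, theta=1.0, tau=2.0):
    """Initialize the adjacency matrix with a thresholded Gaussian kernel:
    w_ij = exp(-dist(i,j)^2 / (2*theta^2)) if dist(i,j) <= tau, else 0.
    pos: (n, 2) channel coordinates (illustrative probe layout)."""
    n = len(pos)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist = np.linalg.norm(pos[i] - pos[j])
            if dist <= tau:
                W[i, j] = np.exp(-dist**2 / (2 * theta**2))
    return W

# Four hypothetical channel positions; the last one is an isolated outlier.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
W = init_adjacency(pos)
```

Nearby channels get weights close to 1, distant channels are pruned to 0, so the initial graph already reflects the spatial layout of the optodes.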

S3.2. To introduce graph attention, first compute the similarity coefficient e_ij between each node and its neighboring nodes:

e_ij = a( [ W_i X_i || W_j X_j ] )

where a is the global mapping matrix, W_i and W_j are the weight matrices of node i and node j respectively, and || denotes concatenation.

S3.3. Compute the attention coefficients between graph nodes and normalize them, denoted α_ij:

α_ij = exp( LeakyReLU(e_ij) ) / Σ_{k∈N_i} exp( LeakyReLU(e_ik) )

where LeakyReLU() is a nonlinear activation function and N_i is the set of neighbors of node i.

S3.4. In the graph convolution, multi-head attention is used for weighted summation and parameter integration to obtain the new features X'_i:

X'_i = ||_{k=1}^{K} σ( Σ_{j∈N_i} α_ij^k W^k X_j )

where σ is a nonlinear mapping, K is the number of attention heads, and || denotes concatenation.
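Steps S3.2 to S3.4 follow the standard graph-attention computation, which can be sketched in NumPy. The random projection matrices, attention vectors, toy graph, and the choice of tanh for σ are all illustrative assumptions:

```python
import numpy as np

def leaky_relu(x, a=0.2):
    return np.where(x > 0, x, a * x)

def gat_layer(X, A, W_heads, a_heads):
    """One multi-head graph-attention update (standard GAT form, a sketch
    of the described steps rather than the patent's exact implementation).
    X: (n, f) node features; A: (n, n) adjacency (nonzero = neighbor);
    W_heads: list of (f, f') projections; a_heads: list of (2*f',) vectors."""
    outs = []
    for W, a in zip(W_heads, a_heads):
        H = X @ W                                  # projected features (n, f')
        n = H.shape[0]
        # e_ij = a^T [W x_i || W x_j]: similarity coefficients
        e = np.array([[a @ np.concatenate([H[i], H[j]]) for j in range(n)]
                      for i in range(n)])
        e = leaky_relu(e)
        e = np.where(A > 0, e, -np.inf)            # attend only over neighbors
        alpha = np.exp(e - e.max(axis=1, keepdims=True))
        alpha = alpha / alpha.sum(axis=1, keepdims=True)  # normalized attention
        outs.append(np.tanh(alpha @ H))            # sigma = tanh (assumed)
    return np.concatenate(outs, axis=1)            # concatenate the K heads

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 3))                        # 4 channels, 3 features each
A = np.ones((4, 4))                                # fully connected toy graph
K = 2
W_heads = [rng.normal(size=(3, 5)) for _ in range(K)]
a_heads = [rng.normal(size=10) for _ in range(K)]
X_new = gat_layer(X, A, W_heads, a_heads)          # (4, K*5) new features
```

Concatenating the K heads multiplies the feature width by K, matching the || operation in the update formula.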

S3.5. Apply pooling for dimensionality reduction; after flattening and fully connected layers, a classifier outputs the emotion category. The model framework is shown in Fig. 3.

S3.6. The model uses cross-entropy plus a regularization term as the loss function, expressed as:

L = CrossEntropy(l, l̂) + λ ||Θ||^2

where CrossEntropy() denotes the cross-entropy computation, l and l̂ are the true label and the predicted value respectively, λ is the regularization coefficient, and Θ denotes all parameters of the model.

S3.7. The back-propagation algorithm is used to realize the dynamic change of the adjacency matrix. The partial derivative ∂L/∂W of the loss function with respect to the adjacency matrix is computed, and the network is iteratively updated with:

W ← W - η ∂L/∂W

where η is the learning rate.
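The iterative adjacency update of S3.7 can be demonstrated on a toy differentiable loss; in the real model the gradient with respect to W comes from back-propagating the classification loss through the attention layers:

```python
import numpy as np

def loss_fn(W, target):
    """Toy quadratic surrogate for the classification loss; its gradient
    stands in for the back-propagated gradient of the real model."""
    return 0.5 * np.sum((W - target) ** 2)

def update_adjacency(W, target, lr=0.5, steps=20):
    """Iterative update  W <- W - lr * dL/dW."""
    for _ in range(steps):
        grad = W - target          # analytic dL/dW for the toy loss
        W = W - lr * grad
    return W

W0 = np.zeros((3, 3))                               # initial adjacency
target = np.array([[1.0, 0.2, 0.0],
                   [0.2, 1.0, 0.5],
                   [0.0, 0.5, 1.0]])                # hypothetical optimum
W_learned = update_adjacency(W0, target)
```

Under gradient descent the adjacency matrix drifts toward whatever channel-correlation structure minimizes the loss, which is the sense in which the graph is "dynamic".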

An fNIRS emotion recognition system based on a graph network and adaptive denoising comprises:

fNIRS acquisition module: uses the fNIRS acquisition device to continuously record the change in light intensity between emission and reception, converts it into a change in absorbance, and derives the relative changes in oxyhemoglobin and deoxyhemoglobin concentration;

fNIRS adaptive denoising network module: used to obtain the clean signal;

fNIRS dynamic graph attention emotion recognition network module: outputs the emotion label from the clean signal.

A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the fNIRS emotion recognition method are implemented.

A storage medium stores a computer program which, when executed by a processor, implements the steps of the fNIRS emotion recognition method.

Beneficial effects of the present invention:

(1) In the data denoising process, an adaptive denoising model based on a generative adversarial network is adopted. Compared with traditional machine-learning-based denoising methods, the invention largely avoids manual participation and empirical analysis during denoising, overcomes the strong task dependence of traditional methods, and is highly adaptive under multi-task conditions. At the same time, through generative adversarial learning, the difference in noise between "static tasks" and "dynamic tasks" in fNIRS can be handled without specific assumptions, and the denoising model generalizes well.

(2) For extracting emotional representations from fNIRS signals, a deep-learning network model is used. Compared with traditional hand-crafted feature extraction, the invention extracts emotional features from fNIRS in a data-driven way through network learning, overcoming the limited dimensionality and uncertain effectiveness of fixed features in traditional methods. Learning and extracting features with a deep network effectively captures emotional representations in different dimensions and enhances the extraction and use of emotional features in fNIRS data.

(3) A dynamic graph convolutional neural network is used to effectively model fNIRS data carrying probe-position information and channel signals. In contrast to traditional approaches that simply treat the data as time-series signals and analyze them with support vector machines, Bayesian classifiers, or LSTM-based recurrent neural networks, the invention performs a topological mapping of the fNIRS data with a graph: different probes are mapped to graph nodes, the time-series data become node features, and the relationships between different channels are represented as graph edges through the adjacency matrix. The method makes full use of the characteristics of the data, preserves the topology of the brain structure, and characterizes the correlations between different channels, improving the accuracy of the network model in emotion recognition from fNIRS brain signals.

(4) The method introduces a graph attention mechanism: attention coefficients between nodes are obtained by computing similarity coefficients between probe nodes and their neighbors, and node features are updated during graph convolution by the weighted summation of a multi-head attention mechanism. This introduces the feature relationships of neighboring nodes during training, so that the model better extracts the correlated features of different fNIRS channels and also captures the activation responses of different brain regions to different emotions, which plays a significant role in emotion recognition from brain signals.

Description of the Drawings

Fig. 1 is the workflow diagram of the present invention;

Fig. 2 is the structure diagram of the fNIRS adaptive denoising network model of the present invention;

Fig. 3 is the structure diagram of the fNIRS dynamic graph attention emotion recognition network model of the present invention;

Fig. 4 is a schematic diagram of the fNIRS acquisition module of the present invention.

Detailed Description

The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but the embodiments of the present invention are not limited thereto.

Example 1

An fNIRS emotion recognition method based on a graph network and adaptive denoising is suitable for emotion recognition tasks with fNIRS acquisition equipment and mainly comprises an external emotional stimulation step, an fNIRS acquisition step, an fNIRS adaptive denoising step, and an fNIRS emotion recognition step.

In the external emotional stimulation step, the stimulation material consists of videos with six emotion labels: anger, disgust, fear, pleasure, sadness, and surprise. The user's emotions are induced by watching the videos.

As shown in Fig. 4, in the fNIRS acquisition step, the fNIRS acquisition device consists of multi-channel dual-wavelength near-infrared continuous-wave source-detector pairs.

First, the user wears the fNIRS acquisition device, which collects and records in real time the change in light intensity of the different optodes between emission and reception.

Let the single-channel light-intensity changes at any time be I_λ1(t) and I_λ2(t).

These are converted to absorbance: ΔA_λ(t) = -log10( I_λ(t) / I_λ(0) ).

According to the Beer-Lambert law, ΔA = ε × ΔC × d × DPF, where ε is the molar extinction coefficient, ΔC is the chromophore concentration change, d is the detection depth, and DPF is the differential path factor.

By solving the relationship equation between the absorbance change and the relative change in chromophore concentration in brain tissue, the relative changes in oxyhemoglobin and deoxyhemoglobin concentration are obtained, denoted ΔC_HbO and ΔC_HbR.

fNIRS adaptive denoising step

The data are input into the fNIRS adaptive denoising module for denoising enhancement; the trained generator network produces the clean relative concentration-change data P_HbO and P_HbR: P_HbO, P_HbR = G_p(ΔC_HbO, ΔC_HbR).

The fNIRS adaptive denoising module is based on a generative adversarial network: features of the fNIRS signal are extracted by convolution, and the clean signal is generated by deconvolution. During training, pairs of noisy and clean signals are input and trained against the adversarial losses of the two discriminators, while a cycle-consistency loss is introduced to constrain the spatial mapping and improve training efficiency.

fNIRS emotion recognition step

The clean data of all channels are mapped to graph nodes: P_HbO, P_HbR → V, where V is the set of graph nodes. The adjacency matrix W is initialized with the Gaussian kernel:

w_ij = exp( -dist(i, j)^2 / (2θ^2) ) if dist(i, j) ≤ τ, and 0 otherwise

where dist is the Gaussian distance between nodes, and θ and τ are fixed parameters of the Gaussian-distance algorithm.

The similarity coefficient between each node and its neighboring nodes is computed as e_ij = a( [ W_i X_i || W_j X_j ] ), where a is the global mapping matrix and W_i, W_j are the weight matrices of node i and node j, respectively.

Using the LeakyReLU nonlinear activation function, the graph-node attention coefficients are normalized as α_ij = exp( LeakyReLU(e_ij) ) / Σ_k exp( LeakyReLU(e_ik) ).

New features are obtained by graph-convolution weighted summation and concatenation of the multi-head attention parameters: X'_i = ||_{k=1}^{K} σ( Σ_{j∈N_i} α_ij^k W^k X_j ), where σ is a nonlinear mapping, K is the number of attention heads, and || denotes concatenation.

Pooling is applied for dimensionality reduction; after flattening and fully connected layers, the classifier outputs the emotion recognition probabilities.

In this embodiment, the emotion recognition probabilities are the probability values p_i of the six Ekman emotion labels (anger, disgust, fear, pleasure, sadness, surprise), with Σ_{i=1}^{6} p_i = 1.

Existing physiological-signal emotion recognition methods are mostly based on EEG and fMRI. The present invention proposes an emotion recognition method based on functional near-infrared spectroscopy, fully exploiting the role and potential of this new non-invasive brain-monitoring technique in emotion research; it is not only of great significance in practical applications but also opens up a new approach to the emotional analysis of physiological signals.

The signal denoising method adopted in this application achieves end-to-end adaptive denoising of multi-channel fNIRS signals; the algorithm has strong generalization ability and universality.

The invention proposes a dynamic graph attention model that uses dynamic graph convolution to construct the brain topology and extract features, while introducing an attention mechanism to extract the correlated features between fNIRS channels, improving the model's learning ability and achieving higher emotion recognition accuracy.

The invention adopts deep learning to learn and extract features in a data-driven way, improving the expressiveness of the emotional features while avoiding manual intervention.

Example 2

An fNIRS emotion recognition system based on a graph network and adaptive denoising comprises:

fNIRS acquisition module: uses the fNIRS acquisition device to continuously record the change in light intensity between emission and reception, converts it into a change in absorbance, and derives the relative changes in oxyhemoglobin and deoxyhemoglobin concentration;

fNIRS adaptive denoising network module: used to obtain the clean signal;

fNIRS dynamic graph attention emotion recognition network module: outputs the emotion label from the clean signal.

Example 3

A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the steps of the fNIRS emotion recognition method are implemented.

Example 4

A storage medium storing a computer program which, when executed by a processor, implements the steps of the fNIRS emotion recognition method.

The above embodiments are preferred embodiments of the present invention, but the embodiments of the present invention are not limited thereto; any other changes, modifications, substitutions, combinations, and simplifications made without departing from the spirit and principles of the present invention shall be regarded as equivalent replacements and are included within the protection scope of the present invention.

Claims (10)

1. An fNIRS emotion recognition method based on graph network and adaptive denoising, characterized by comprising the following steps:
continuously acquiring, by an fNIRS acquisition device, the variation of light intensity between emission and reception, converting the variation of light intensity into the variation of absorbance, and further obtaining the relative variation of the concentrations of oxyhemoglobin and deoxyhemoglobin;
denoising through an adaptive denoising network model to obtain a clean signal, wherein the input of the adaptive denoising network model is the relative variation of the concentrations of oxyhemoglobin and deoxyhemoglobin obtained in the previous step, and the output is the relative variation data of the clean oxyhemoglobin and deoxyhemoglobin concentrations;
mapping graph nodes by combining probe and channel characteristics, restoring the brain topology with a graph network, and classifying and outputting emotion labels through a dynamic graph attention emotion recognition network model.
2. The fNIRS emotion recognition method of claim 1, wherein the adaptive denoising network model comprises a plurality of paired convolution and deconvolution blocks of different sizes.
3. The fNIRS emotion recognition method of claim 2, wherein the convolution blocks perform feature extraction on the fNIRS signal and the deconvolution blocks generate the clean signal.
4. The fNIRS emotion recognition method of claim 1, wherein the dynamic graph attention emotion recognition network model comprises graph convolution and an attention mechanism.
5. The fNIRS emotion recognition method of claim 4, wherein the recognition process of the dynamic graph attention emotion recognition network model is as follows: a graph network is constructed and the data are mapped onto the graph, with the clean fNIRS signals mapped to the nodes of the graph and the characteristics of probes and channels mapped to the edges of the graph; features are extracted by dynamic graph convolution while an attention mechanism is introduced to learn channel correlations; finally, classification output is produced after graph-pooling dimensionality reduction, flattening, and concatenation, achieving accurate emotion recognition.
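The paired convolution/deconvolution structure described in claims 2 and 3 can be sketched at the shape level as follows. The weights are random (untrained), and the kernel size, stride, and signal length are illustrative assumptions; a real model would stack several such pairs of different sizes and learn the weights end-to-end against clean reference signals.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, stride=2):
    """Strided 1-D convolution (encoder block): extracts features and downsamples."""
    k = len(w)
    n = (len(x) - k) // stride + 1
    return np.array([x[i * stride:i * stride + k] @ w for i in range(n)])

def deconv1d(y, w, stride=2):
    """Transposed 1-D convolution (decoder block): upsamples back to the input length."""
    k = len(w)
    out = np.zeros((len(y) - 1) * stride + k)
    for i, v in enumerate(y):
        out[i * stride:i * stride + k] += v * w
    return out

# One conv/deconv pair applied to a single hypothetical fNIRS channel
x = rng.standard_normal(64)      # e.g. a dHbO time series of 64 samples
w_enc = rng.standard_normal(4)
w_dec = rng.standard_normal(4)
h = conv1d(x, w_enc)             # latent features, length 31
x_hat = deconv1d(h, w_dec)       # reconstructed "clean" signal, length 64
```

The key property shown is that the deconvolution block exactly restores the temporal length that the convolution block reduced, so the denoised output aligns sample-for-sample with the noisy input.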
6. The fNIRS emotion recognition method of claim 5, wherein the dynamic graph attention emotion recognition network model classifies and outputs the emotion labels as follows:
the graph is defined as G(V, E, W), where V denotes the set of graph nodes with |V| = n, i.e., n nodes in total corresponding to the n channels of fNIRS, whose data sequences ΔC_HbO(t), ΔC_HbR(t) are denoted as X; E denotes the set of edges in the graph, i.e., the connections between different channels in fNIRS; W is the adjacency matrix defining the connection relations of the nodes, i.e., the correlations between different channels in fNIRS, wherein the values in the adjacency matrix describe the importance of the relation between nodes, and each value W_ij is initialized using a Gaussian kernel function;
graph attention: calculating the similarity coefficient between each node and its neighboring nodes;
calculating the attention coefficients between graph nodes and normalizing them;
in the graph convolution, performing weighted summation with multi-head attention for parameter integration to obtain new features;
performing pooling for dimensionality reduction, and outputting the emotion categories through a classifier after flattening and fully connected layers.
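The attention steps of claim 6 (similarity coefficients, normalized attention, multi-head weighted summation) can be sketched in the style of a standard graph attention layer. The feature dimensions, the LeakyReLU slope, the two-head configuration, and the fully connected adjacency are illustrative assumptions for the example, not parameters taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

def gat_layer(X, A, heads=2, d_out=4):
    """One multi-head graph-attention layer: score neighbor pairs, softmax-normalize
    per node, aggregate features with the attention weights, then average the heads."""
    n, d_in = X.shape
    outs = []
    for _ in range(heads):
        Wp = rng.standard_normal((d_in, d_out)) * 0.1   # per-head projection
        a = rng.standard_normal(2 * d_out) * 0.1        # attention vector
        H = X @ Wp                                      # projected node features
        E = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                # similarity coefficient e_ij = LeakyReLU(a . [h_i || h_j])
                z = np.concatenate([H[i], H[j]]) @ a
                E[i, j] = z if z > 0 else 0.2 * z
        E = np.where(A > 0, E, -np.inf)                 # attend only to neighbors
        E -= E.max(axis=1, keepdims=True)               # numerically stable softmax
        alpha = np.exp(E)
        alpha /= alpha.sum(axis=1, keepdims=True)       # normalized attention
        outs.append(alpha @ H)                          # weighted summation
    return np.mean(outs, axis=0), alpha

X = rng.standard_normal((4, 6))   # 4 fNIRS channels, 6 features per node
A = np.ones((4, 4))               # hypothetical fully connected channel graph
H_new, alpha = gat_layer(X, A)
```

After one or more such layers, graph pooling would reduce the node dimension and the flattened result would pass through fully connected layers to the emotion classifier, as the claim describes.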
7. The fNIRS emotion recognition method of any one of claims 1 to 6, wherein an equation relating the change in absorbance to the change in concentration of the light-absorbing chromophores in brain tissue is obtained using the Beer-Lambert law, and the relative variations of the oxyhemoglobin and deoxyhemoglobin concentrations are obtained by solving the equation.
8. A system for implementing the fNIRS emotion recognition method of any one of claims 1 to 6, comprising:
the fNIRS acquisition module: continuously acquiring the variation of light intensity between emission and reception using an fNIRS acquisition device, converting the variation of light intensity into the variation of absorbance, and further obtaining the relative variation of the concentrations of oxyhemoglobin and deoxyhemoglobin;
the fNIRS adaptive denoising network module: for obtaining the clean signal;
the fNIRS dynamic graph attention emotion recognition network module: for outputting the emotion label according to the clean signal.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the computer program implements the steps of the fNIRS emotion recognition method of any of claims 1 to 7.
10. A storage medium having stored thereon a computer program, wherein the computer program, when executed by a processor, performs the steps of the fNIRS emotion recognition method of any of claims 1 to 7.
CN202111315105.2A 2021-11-08 2021-11-08 fNIRS emotion recognition method and system based on graph network and self-adaptive denoising Active CN114209319B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111315105.2A CN114209319B (en) 2021-11-08 2021-11-08 fNIRS emotion recognition method and system based on graph network and self-adaptive denoising


Publications (2)

Publication Number Publication Date
CN114209319A true CN114209319A (en) 2022-03-22
CN114209319B CN114209319B (en) 2024-03-29

Family

ID=80696655

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111315105.2A Active CN114209319B (en) 2021-11-08 2021-11-08 fNIRS emotion recognition method and system based on graph network and self-adaptive denoising

Country Status (1)

Country Link
CN (1) CN114209319B (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070202477A1 (en) * 2004-09-02 2007-08-30 Nagaoka University Of Technology Emotional state determination method
US20170172479A1 (en) * 2015-12-21 2017-06-22 Outerfacing Technology LLC Acquiring and processing non-contact functional near-infrared spectroscopy data
CN107280685A (en) * 2017-07-21 2017-10-24 国家康复辅具研究中心 Top layer physiological noise minimizing technology and system
US20190239792A1 (en) * 2018-02-07 2019-08-08 Denso Corporation Emotion identification apparatus
CN111466876A (en) * 2020-03-24 2020-07-31 山东大学 An auxiliary diagnosis system for Alzheimer's disease based on fNIRS and graph neural network
WO2020166091A1 (en) * 2019-02-15 2020-08-20 俊徳 加藤 Biological function measurement device, and biological function measurement method, and program
WO2021067464A1 (en) * 2019-10-01 2021-04-08 The Board Of Trustees Of The Leland Stanford Junior University Joint dynamic causal modeling and biophysics modeling to enable multi-scale brain network function modeling
CN113180650A (en) * 2021-01-25 2021-07-30 北京不器科技发展有限公司 Near-infrared brain imaging atlas identification method
KR102288267B1 (en) * 2020-07-22 2021-08-11 액티브레인바이오(주) AI(Artificial Intelligence) BASED METHOD OF PROVIDING BRAIN INFORMATION
CN113598774A (en) * 2021-07-16 2021-11-05 中国科学院软件研究所 Active emotion multi-label classification method and device based on multi-channel electroencephalogram data


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEMONQC: "Graph Attention Network (GAT)", pages 1 - 3, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/118605260?utm_id=0> *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114782449A (en) * 2022-06-23 2022-07-22 中国科学技术大学 Method, system, equipment and storage medium for extracting key points in lower limb X-ray image
WO2025028393A1 (en) * 2023-07-28 2025-02-06 憲吾 田代 Brain activity measuring device
CN117156072A (en) * 2023-11-01 2023-12-01 慧创科仪(北京)科技有限公司 Device for processing near infrared data of multiple persons, processing equipment and storage medium
CN117156072B (en) * 2023-11-01 2024-02-13 慧创科仪(北京)科技有限公司 Device for processing near infrared data of multiple persons, processing equipment and storage medium

Also Published As

Publication number Publication date
CN114209319B (en) 2024-03-29

Similar Documents

Publication Publication Date Title
Chen et al. Accurate EEG-based emotion recognition on combined features using deep convolutional neural networks
Dalvi et al. A survey of ai-based facial emotion recognition: Features, ml & dl techniques, age-wise datasets and future directions
CN114209319A (en) fNIRS emotion recognition method and system based on graph network and adaptive denoising
Kumar Arora et al. Optimal facial feature based emotional recognition using deep learning algorithm
CN110765873B (en) Facial expression recognition method and device based on expression intensity label distribution
CN110367967B (en) Portable lightweight human brain state detection method based on data fusion
CN112766355B (en) A method for EEG emotion recognition under label noise
CN113343860A (en) Bimodal fusion emotion recognition method based on video image and voice
CN113157094A (en) Electroencephalogram emotion recognition method combining feature migration and graph semi-supervised label propagation
CN115349860A (en) A multi-modal emotion recognition method, system, device and medium
Jinliang et al. EEG emotion recognition based on granger causality and capsnet neural network
CN117033638A (en) Text emotion classification method based on EEG cognition alignment knowledge graph
CN117725367A (en) Speech imagination brain electrolysis code method for source domain mobility self-adaptive learning
CN115238835A (en) Electroencephalogram emotion recognition method, medium and equipment based on double-space adaptive fusion
Younis et al. Machine learning for human emotion recognition: a comprehensive review
CN111709284B (en) Dance Emotion Recognition Method Based on CNN-LSTM
ALISAWI et al. Real-time emotion recognition using deep learning methods: systematic review
CN115909438A (en) Pain expression recognition system based on deep spatio-temporal convolutional neural network
CN116738330A (en) Semi-supervision domain self-adaptive electroencephalogram signal classification method
Jayasekara et al. Timecaps: Capturing time series data with capsule networks
CN115169386A (en) Weak supervision increasing activity identification method based on meta-attention mechanism
CN117942079B (en) Emotion intelligence classification method and system based on multidimensional sensing and fusion
Zhu et al. Annotation efficiency in multimodal emotion recognition with deep learning
Lu et al. LGL-BCI: A lightweight geometric learning framework for motor imagery-based brain-computer interfaces
Aljaloud et al. Facial Emotion Recognition using Neighborhood Features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant