CN115659259A - Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space - Google Patents

Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space

Info

Publication number
CN115659259A
CN115659259A
Authority
CN
China
Prior art keywords
features
layer
spatial
data
local
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211678810.3A
Other languages
Chinese (zh)
Inventor
陈俊龙
叶梦晴
张通
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT
Priority to CN202211678810.3A
Publication of CN115659259A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method, a medium and equipment for electroencephalogram emotion recognition based on a hierarchical multi-dimensional space. The method comprises the following steps: preprocessing input electroencephalogram data; inputting the preprocessed data into a hierarchical dynamic graph convolution module of a hierarchical multidimensional space network, and extracting spatial features based on the channel space; inputting the preprocessed data in parallel into an auxiliary information module of the hierarchical multidimensional space network, and extracting data-dependent discriminant features; and carrying out multi-dimensional adaptive fusion of the spatial features and the discriminant features through an early-late adaptive fusion mechanism to obtain the final features, thereby realizing emotion classification. The method can capture spatial features based on brain functional connection relations, integrates data-dependent discriminant features into the spatial features through the early-late adaptive fusion mechanism, extracts the common representation most beneficial to emotion recognition for the emotion classification task, and improves the performance of emotion recognition based on electroencephalogram signals.

Description

Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space
Technical Field
The invention relates to the technical field of electroencephalogram emotion recognition, in particular to an electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space.
Background
The human brain stores emotional experiences accumulated over a lifetime. By directly analyzing electroencephalogram signals, the emotional response of a person exposed to a specific environment can be studied. Emotion recognition based on electroencephalogram signals is applicable not only to human-computer interaction but also to patients with expression disorders. Many electroencephalogram-based emotion recognition algorithms have been proposed with the aim of extracting the features most relevant to emotion. The field has broad prospects; existing methods are often based on convolutional neural networks or recurrent neural networks that capture and analyze electroencephalogram signals in the time-frequency domain, and most of them extract features from single channels independently.
In recent years, more and more research has revealed the correlation between electroencephalogram signals and emotions. Certain brain regions, such as the amygdala and frontal lobe, support a large number of cognitive-emotional interactions. Studies have shown strong connectivity between brain regions important for emotional processing; that is, high emotion-related connectivity exists both within individual areas and between various areas throughout the brain, such as the frontal, temporal, occipital and parietal lobes. This phenomenon means that, in addition to fixed structural connections, the strength of functional connections is also important. Furthermore, emotion is dominated more by interactions among multiple brain regions, which form a functionally integrated system with cognition. However, most methods do not construct the relationships between electroencephalogram channels based on brain mechanisms and cannot reflect the real channel distribution. Because the distribution of electroencephalogram channels is not regular grid data, it is difficult for traditional methods to capture the potential connections between channels. One way to solve this problem is to use a graph convolution network. A graph is a representation of irregularly distributed entities and their relationships. Graph convolution networks have been proposed to handle graph structures such as electroencephalogram channel distributions, and fall into several types: recursive graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. For electroencephalogram signals, the functional dependence between electrode channels is closely related to emotional and cognitive processes. The graph structure is well suited to representing the spatial relationships among electroencephalogram channels, and most research that captures the spatial relationships of electroencephalogram channels adopts a graph convolution network, treating the channels as nodes on the graph.
However, the existing electroencephalogram emotion recognition technology based on the convolutional neural network has the following defects:
(1) Studies have shown that specific emotional patterns occur in neuronal populations that are locally distributed within a region, or in neural networks on a larger spatial scale. Fixed physical connections may not reflect the actual dependencies, and an incomplete graph structure may cause some information about those dependencies to be lost. In addition, functional connections change with the environment and are modulated by variables such as cognition, emotion and motivation; that is, the functional connections among electroencephalogram channels change dynamically instead of being statically fixed. Recently, some approaches have proposed dynamically building adjacency matrices based on the global dynamics of the channel connections. One major problem with these studies is that they do not consider the stable activation patterns between emotion and specific brain functional areas: they only build global network connections and ignore the enhancement of specific areas. In reality, however, there are specific brain regions that are continuously activated during emotional tasks.
(2) Methods based on graph convolution networks can capture effective information from electroencephalogram data in the spatial domain, but a model that relies only on spatial pattern learning loses the ability to capture data-dependent discriminant features from the original electroencephalogram data. Such auxiliary information is crucial to the electroencephalogram emotion recognition task. Therefore, how to better extract discriminant features from electroencephalogram data and fuse this auxiliary information into the spatial features remains a major challenge. Most previous methods use a late fusion mechanism for information fusion. A late fusion mechanism extracts, from different prediction results, the key information most beneficial to classification. However, the features extracted by these methods lose high-dimensional detailed information. Early fusion mechanisms, which capture high-dimensional fusion features, solve this problem but find it difficult to focus on the abstract features of the electroencephalogram data. In fact, the two types of information captured by the early and late fusion mechanisms may be complementary, but most approaches do not allow for such fusion of multidimensional features.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention aims to provide an electroencephalogram emotion recognition method, medium and equipment based on a hierarchical multidimensional space. The method can capture spatial features based on brain functional connection relations, integrates data-dependent discriminant features into the spatial features through an early-late adaptive fusion mechanism, extracts the common representation most beneficial to emotion recognition for the emotion classification task, and improves the performance of emotion recognition based on electroencephalogram signals.
In order to achieve the purpose, the invention is realized by the following technical scheme: an electroencephalogram emotion recognition method based on a hierarchical multidimensional space comprises the following steps:
preprocessing input electroencephalogram data;
inputting the preprocessed data X into a hierarchical dynamic graph convolution module of the hierarchical multidimensional space network, and extracting the spatial features Z_G based on the channel space;
inputting the preprocessed data X in parallel into an auxiliary information module of the hierarchical multidimensional space network, and extracting the data-dependent discriminant features Z_C;
carrying out multi-dimensional adaptive fusion of the spatial features Z_G and the discriminant features Z_C through an early-late adaptive fusion mechanism to obtain the final feature Z, thereby realizing emotion classification;
wherein the spatial features Z_G are extracted as follows:
The hierarchical dynamic graph convolution module comprises a global dynamic branch and a local functional branch. The global dynamic branch comprises a global dynamic connection graph A_D and K global channel spatial feature extraction layers, wherein the global dynamic connection graph A_D is set as a learnable parameter of the hierarchical multidimensional space network and is obtained through training and optimization of the network. The local functional branch comprises a local functional connection graph A_R and K local channel spatial feature extraction layers. The local functional connection graph A_R is constructed as follows: the electrode channels contained in the preprocessed data X are divided into N areas according to brain functional regions, the channel connection relations within each area are calculated using a Gaussian kernel function, and the results are combined to obtain the local functional connection graph A_R.
The preprocessed data X and the global dynamic connection graph A_D are input into the K global channel spatial feature extraction layers of the global dynamic branch, and K-step node information diffusion is performed to obtain the output features Z_D^(l+1) of the global dynamic branch; the preprocessed data X and the local functional connection graph A_R are input into the K local channel spatial feature extraction layers of the local functional branch, and K-step node information diffusion is performed to obtain the output features Z_R^(l+1) of the local functional branch:
Z_D^(l+1) = f_θ^(l)(Ã_D^(l)·Z_D^(l))
Z_R^(l+1) = f_θ^(l)(Ã_R^(l)·Z_R^(l))
wherein Z_D^(l+1) and Z_D^(l) respectively represent the output features of the (l+1)th and lth global channel spatial feature extraction layers; Z_R^(l+1) and Z_R^(l) respectively represent the output features of the (l+1)th and lth local channel spatial feature extraction layers; initially Z_D^(0) = X and Z_R^(0) = X; Ã_D^(l)·Z_D^(l) and Ã_R^(l)·Z_R^(l) respectively represent the filtering performed by the lth global and local channel spatial feature extraction layers; f_θ^(l) respectively denote the lth global and local channel spatial feature extraction layers with parameters θ;
Ã_D^(l) and Ã_R^(l) respectively represent the normalized adjacency matrices of the lth-layer global dynamic connection graph and local functional connection graph; initially
Ã_D^(0) = (D_D^(0))^(-1)·A_D
Ã_R^(0) = (D_R^(0))^(-1)·A_R
where D_D^(0) and D_R^(0) are diagonal matrices whose diagonal elements are the column sums of A_D and A_R respectively; the lth-layer normalized adjacency matrices Ã_D^(l) and Ã_R^(l) are then computed from the previous layer as:
Ã_D^(l) = Φ_D^(l-1)(Ã_D^(l-1))
Ã_R^(l) = Φ_R^(l-1)(Ã_R^(l-1))
wherein Φ_D(·) and Φ_R(·) respectively represent fully connected layers that map the (l-1)th-layer normalized adjacency matrix to the lth layer;
The output features Z_D^(l+1) of the global dynamic branch and the output features Z_R^(l+1) of the local functional branch are adaptively fused to obtain the spatial features Z_G output by the hierarchical dynamic graph convolution module:
Z_G = Agg(Z_D^(l+1), Z_R^(l+1))
wherein Agg(·) is an attention mechanism.
Preferably, in the auxiliary information module, a gated convolution network is used to denoise the preprocessed data X and to extract the data-dependent discriminant features Z_C:
Z_C = g(Θ_1·X + b_1) ⊙ σ(Θ_2·X + b_2)
wherein Θ_1 and Θ_2 are two one-dimensional convolution operations; b_1 and b_2 are bias parameters; g(·) is the tanh function; σ(·) is the sigmoid function; ⊙ denotes the element-wise product.
Preferably, carrying out multi-dimensional adaptive fusion of the spatial features Z_G and the discriminant features Z_C through the early-late adaptive fusion mechanism to obtain the final embedding means that:
the high-dimensional spatial features Z_G and discriminant features Z_C are each subjected to feature transformation and mapped to the same dimension to obtain the features Z'_G and Z'_C; the features Z'_G and Z'_C are fused to obtain the high-dimensional features Z'_F:
Z'_F = Ω(Z'_G, Z'_C)
wherein Ω(·) represents the early fusion function;
fully connected layers are then used to reduce the dimensions of Z'_G, Z'_C and Z'_F respectively, yielding the spatial features Z''_G, the discriminant features Z''_C and the fusion features Z''_F; the spatial features Z''_G, discriminant features Z''_C and fusion features Z''_F are fed into an adaptive late fusion network to obtain complementary information, and the feature Z used for emotion recognition is finally obtained:
Z = φ_G·Z''_G + φ_C·Z''_C + φ_F·Z''_F
wherein φ_G, φ_C and φ_F respectively represent attention weights.
Preferably, the hierarchical multidimensional space network realizes network updating by minimizing a loss function, and a back propagation algorithm is adopted to optimize the parameter weight of the network.
Preferably, the feature Z, the spatial features Z''_G and the discriminant features Z''_C are normalized with the softmax(·) function to compute the prediction probabilities Y'_i, Y'_Gi and Y'_Ci for each class, i = 1, 2, ..., c, where c is the number of categories;
wherein Y'_i = {y'_1, y'_2, ..., y'_s} are the prediction probabilities of all s samples for the ith class computed from Z, y'_j being the prediction probability of the jth sample for the ith class, j = 1, 2, ..., s;
Y'_Gi = {y'_G1, y'_G2, ..., y'_Gs} are the prediction probabilities of all s samples for the ith class computed from the normalized spatial features Z''_G, y'_Gj being the prediction probability of the jth sample for the ith class;
Y'_Ci = {y'_C1, y'_C2, ..., y'_Cs} are the prediction probabilities of all s samples for the ith class computed from the normalized discriminant features Z''_C, y'_Cj being the prediction probability of the jth sample for the ith class;
The loss function consists of a classification optimization loss function L_cla and a space constraint loss function L_reg:
L_cla = −Σ_{i=1}^{c} Y_i·log(Y'_i)
L_reg = −Σ_{i=1}^{c} Y_i·log(Y'_Gi) − Σ_{i=1}^{c} Y_i·log(Y'_Ci)
wherein Y_i = {y_1, y_2, ..., y_s} are the true probabilities of all s samples for the ith class, y_j being the true probability of the jth sample for the ith class;
the total loss L_total is:
L_total = λ·L_cla + μ·L_reg
wherein λ and μ are the coefficients corresponding to the classification optimization loss function L_cla and the space constraint loss function L_reg respectively.
A storage medium, wherein the storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the above electroencephalogram emotion recognition method based on a hierarchical multidimensional space.
A computing device comprises a processor and a memory for storing an executable program of the processor, and when the processor executes the program stored in the memory, the electroencephalogram emotion recognition method based on the hierarchical multidimensional space is realized.
Compared with the prior art, the invention has the following advantages and beneficial effects:
1. The invention provides a hierarchical multi-dimensional space network that integrates a hierarchical dynamic graph convolution module and an auxiliary information module to extract features of electroencephalogram signals and perform emotion recognition. The channel spatial features can be captured based on brain functional connection relations, the data-dependent discriminant features extracted by the auxiliary information module are fused into the spatial features through an early-late adaptive fusion mechanism, the common representation most beneficial to emotion recognition is extracted for the emotion classification task, and the performance of emotion recognition based on electroencephalogram signals is finally improved;
2. The invention provides a multi-graph construction mechanism, which constructs a global dynamic connection graph and a local functional connection graph according to brain activity mechanisms and, combined with a multi-level graph convolution network, captures multi-level multi-graph information of the electroencephalogram data. The electroencephalogram signal is an overall reflection of electrophysiological signals in the cerebral cortex, and the multi-graph construction mechanism provided by the invention takes the real brain activity mechanism into account. Specifically, the local functional connection graph considers the actual physical distribution of the channels, divides several local channel areas according to actual brain functional regions, and establishes the connection relations within each area; the global dynamic connection graph considers the connections among channels from a global view, based on functional connections that are independent of physical location. The multi-graph construction mechanism aims at constructing functional connection graphs that reflect the real spatial relationships of the electroencephalogram channels according to the brain science theory related to emotion generation. The multi-level graph convolution network extracts the spatial interaction information among channels based on brain functional network connections, and adopts the global dynamic branch and the local functional branch to extract global dynamic spatial features and locally enhanced spatial features;
3. The invention provides an early-late adaptive fusion mechanism for extracting multi-dimensional fusion features, which can select the most relevant abstract representation without losing the semantic information contained in the high-dimensional features and better fuses the discriminant features with the spatial features; by adaptively fusing the three feature embeddings, the most relevant collaborative information is extracted for the final classification.
Drawings
FIG. 1 is an overall framework diagram of a hierarchical multi-dimensional spatial network in the electroencephalogram emotion recognition method;
FIG. 2 is a framework diagram of the hierarchical dynamic graph convolution module in the electroencephalogram emotion recognition method;
FIG. 3 is a schematic diagram of electrode channel area division in the electroencephalogram emotion recognition method.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
Example one
According to the invention, a method based on a hierarchical multidimensional space network is used; on the basis of brain mechanism research, the construction of brain functional network connections is fully considered, and the otherwise missing data-dependent discriminant features are compensated for. The aim is to construct functional connection graphs that reflect the real spatial relationships of the electroencephalogram channels according to the brain science theory related to emotion generation, so that the spatial interaction information among channels can be extracted on the basis of these functional connection graphs. In addition, the method selects the abstract representation most relevant to emotion without losing the semantic information contained in the high-dimensional features, and can better fuse the discriminant features with the spatial features.
The design idea of the invention is as follows: the hierarchical multidimensional space network consists of a hierarchical dynamic graph convolution module and an auxiliary information module. First, the invention designs a multi-graph construction mechanism and constructs channel spatial connections that reflect real brain activity relationships, based on the connection modes that coexist in the brain. Because strong connection relations exist among channels within the same electrode area, the invention proposes a local functional connection graph based on the division of brain functional regions, dividing the electrode channels into electrode areas according to the brain functional regions in which they are actually located. Second, brain activity also manifests in interactions across brain regions, and the connections of neurons are not fixed but dynamically changing. Therefore, a global dynamic connection graph that accounts for global dynamic connections is established to complement the local functional connection graph. After the construction of the functional connections has been comprehensively considered, the global dynamic branch and the local functional branch are adopted to extract the channel spatial interaction information based on the global and local connection relations. In addition, since the hierarchical dynamic graph convolution module mainly focuses on the construction of functional connections and the capture of spatial relationships, the information of each node after aggregation by the adjacency matrix may lose some of the data dependence and discriminant features of the original data. Therefore, the invention also constructs an auxiliary information module to adaptively compensate the spatial features, and selects the relevant abstract representation containing the semantic information of the high-dimensional features for emotion recognition.
In order to implement the above idea, this embodiment provides an electroencephalogram emotion recognition method based on a hierarchical multidimensional space, comprising the following steps:
s1, preprocessing input electroencephalogram data.
S2, the preprocessed data X is input into the hierarchical multidimensional space network, which is shown in FIG. 1 and comprises a hierarchical dynamic graph convolution module and an auxiliary information module.
S21, the preprocessed data X is input into the hierarchical dynamic graph convolution module of the hierarchical multidimensional space network, and the spatial features Z_G based on the channel space are extracted. The hierarchical dynamic graph convolution module focuses on capturing spatial features and provides a multi-graph construction mechanism for constructing functional connection graphs from different angles. A parallel multi-level diffusion graph convolution network is then introduced to capture the multi-level representation of the multiple graphs by adaptively fusing local and global spatial features; the structure of the hierarchical dynamic graph convolution module is shown in FIG. 2.
The spatial features Z_G are extracted as follows:
The hierarchical dynamic graph convolution module comprises a global dynamic branch and a local functional branch. The global dynamic branch comprises a global dynamic connection graph A_D and K global channel spatial feature extraction layers; the global dynamic connection graph A_D is set as a learnable parameter of the hierarchical multidimensional space network and is obtained through training and optimization of the network. The local functional branch comprises a local functional connection graph A_R and K local channel spatial feature extraction layers. The local functional connection graph A_R is constructed as follows: the electrode channels contained in the preprocessed data X are divided into N areas according to brain functional regions, the channel connection relations within each area are calculated using a Gaussian kernel function, and the results are combined to obtain the local functional connection graph A_R. Here the electrode channels are divided into N regions with N = 7, and the seven regions, shown in FIG. 3, are listed below (an illustrative construction sketch follows the list):
1. F7, F5, AF3, FP1, FPZ, FP2, AF4, F6, F8;
2. FT7, FC5, T7, C5, TP7, CP5, P7, P5;
3. F3, F1, FZ, F2, F4, FC3, FC1, FCZ, FC2, FC4;
4. C3, C1, CZ, C2, C4, CP3, CP1, CPZ, CP2, CP4;
5. P3, P1, PZ, P2, P4, PO3, POZ, PO4;
6. FC6, FT8, C6, T8, CP6, TP8, P6, P8;
7. PO7, PO5, CB1, O1, OZ, O2, PO6, CB2, PO8.
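As a concrete illustration of the above construction, the following Python sketch builds a block-diagonal local functional connection graph from electrode coordinates and a region partition. It is a minimal sketch under stated assumptions: the Gaussian kernel is applied to pairwise electrode distances, the coordinates are assumed to come from the montage, and the bandwidth sigma is a hypothetical hyper-parameter not specified in the text.

import numpy as np

def local_functional_graph(coords, regions, sigma=1.0):
    # coords : (C, 3) array of electrode positions (assumed available from the montage)
    # regions: list of lists, each holding the channel indices of one brain functional region
    # sigma  : Gaussian kernel bandwidth (illustrative value)
    C = coords.shape[0]
    A_R = np.zeros((C, C))
    for region in regions:
        for i in region:
            for j in region:
                if i != j:
                    d2 = np.sum((coords[i] - coords[j]) ** 2)
                    A_R[i, j] = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian kernel weight within the region
    return A_R  # zero entries between different regions keep the graph block-diagonal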
After the multiple graphs are constructed, the information of the global dynamic connection graph and the local functional connection graph is aggregated, and the dynamic connections of the global electroencephalogram channels and the functional connections of the local brain areas are learned. Furthermore, if every layer used the same invariant adjacency matrix, it would be difficult to fully explore the multi-level representations in a multi-layer graph convolution network. A multi-level graph convolution network is therefore established. It can capture spatial information of the electroencephalogram channels at multiple levels and dimensions, so as to extract the topological spatial features most beneficial to electroencephalogram-based emotion recognition.
The preprocessed data X and the global dynamic connection graph A_D are input into the K global channel spatial feature extraction layers of the global dynamic branch, and K-step node information diffusion is performed to obtain the output features Z_D^(l+1) of the global dynamic branch; the preprocessed data X and the local functional connection graph A_R are input into the K local channel spatial feature extraction layers of the local functional branch, and K-step node information diffusion is performed to obtain the output features Z_R^(l+1) of the local functional branch:
Z_D^(l+1) = f_θ^(l)(Ã_D^(l)·Z_D^(l))
Z_R^(l+1) = f_θ^(l)(Ã_R^(l)·Z_R^(l))
wherein Z_D^(l+1) and Z_D^(l) respectively represent the output features of the (l+1)th and lth global channel spatial feature extraction layers; Z_R^(l+1) and Z_R^(l) respectively represent the output features of the (l+1)th and lth local channel spatial feature extraction layers; initially Z_D^(0) = X and Z_R^(0) = X; Ã_D^(l)·Z_D^(l) and Ã_R^(l)·Z_R^(l) respectively represent the filtering performed by the lth global and local channel spatial feature extraction layers; f_θ^(l) respectively denote the lth global and local channel spatial feature extraction layers with parameters θ;
Ã_D^(l) and Ã_R^(l) respectively represent the normalized adjacency matrices of the lth-layer global dynamic connection graph and local functional connection graph; initially
Ã_D^(0) = (D_D^(0))^(-1)·A_D
Ã_R^(0) = (D_R^(0))^(-1)·A_R
where D_D^(0) and D_R^(0) are diagonal matrices whose diagonal elements are the column sums of A_D and A_R respectively; the lth-layer normalized adjacency matrices Ã_D^(l) and Ã_R^(l) are then computed from the previous layer as:
Ã_D^(l) = Φ_D^(l-1)(Ã_D^(l-1))
Ã_R^(l) = Φ_R^(l-1)(Ã_R^(l-1))
wherein Φ_D(·) and Φ_R(·) respectively represent fully connected layers that map the (l-1)th-layer normalized adjacency matrix to the lth layer;
The output features Z_D^(l+1) of the global dynamic branch and the output features Z_R^(l+1) of the local functional branch are adaptively fused to obtain the spatial features Z_G output by the hierarchical dynamic graph convolution module:
Z_G = Agg(Z_D^(l+1), Z_R^(l+1))
wherein Agg(·) is an attention mechanism.
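The following Python (PyTorch) sketch illustrates one possible reading of the K-step diffusion described above: each branch applies a feature extraction layer f_θ^(l) to the adjacency-filtered features and re-maps the normalized adjacency matrix with a fully connected layer Φ between layers, and the two branch outputs are combined with a simple attention-based Agg. The layer sizes, the ReLU non-linearity and the exact form of the attention are assumptions made only for illustration, not the definitive implementation.

import torch
import torch.nn as nn

class DiffusionBranch(nn.Module):
    # One branch (global dynamic or local functional): K channel spatial feature extraction layers.
    def __init__(self, n_channels, in_dim, hidden_dim, K):
        super().__init__()
        self.K = K
        self.filters = nn.ModuleList(
            [nn.Linear(in_dim if k == 0 else hidden_dim, hidden_dim) for k in range(K)])
        # Phi^(l): fully connected layer mapping the (l-1)th normalized adjacency to the lth layer
        self.phi = nn.ModuleList([nn.Linear(n_channels, n_channels) for _ in range(K - 1)])

    def forward(self, X, A):
        deg = A.sum(dim=0).clamp(min=1e-8)          # column sums of A, the diagonal of D^(0)
        A_norm = torch.diag(1.0 / deg) @ A          # A~^(0) = (D^(0))^-1 A
        Z = X                                       # Z^(0) = X, shape (batch, channels, in_dim)
        for l in range(self.K):
            Z = torch.relu(self.filters[l](A_norm @ Z))   # Z^(l+1) = f_theta^(l)(A~^(l) Z^(l))
            if l < self.K - 1:
                A_norm = self.phi[l](A_norm)              # A~^(l+1) = Phi^(l)(A~^(l))
        return Z

class HierarchicalDGC(nn.Module):
    # Two parallel branches plus an attention aggregation Z_G = Agg(Z_D, Z_R).
    def __init__(self, n_channels, in_dim, hidden_dim, K, A_R):
        super().__init__()
        self.A_D = nn.Parameter(torch.rand(n_channels, n_channels))  # learnable global dynamic graph
        self.register_buffer("A_R", A_R)                              # fixed local functional graph
        self.global_branch = DiffusionBranch(n_channels, in_dim, hidden_dim, K)
        self.local_branch = DiffusionBranch(n_channels, in_dim, hidden_dim, K)
        self.att = nn.Linear(hidden_dim, 1)

    def forward(self, X):
        Z_D = self.global_branch(X, torch.relu(self.A_D))   # keep learnable edge weights non-negative
        Z_R = self.local_branch(X, self.A_R)
        w = torch.softmax(torch.cat([self.att(Z_D), self.att(Z_R)], dim=-1), dim=-1)
        Z_G = w[..., :1] * Z_D + w[..., 1:] * Z_R            # attention-weighted fusion of the two branches
        return Z_G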
S22, the preprocessed data X is input in parallel into the auxiliary information module of the hierarchical multidimensional space network, and the data-dependent discriminant features Z_C are extracted.
Specifically, a gated convolution network is used to denoise the preprocessed data X and to extract the data-dependent discriminant features Z_C. The gated convolution network adopts two activation functions, and its gated linear unit preserves the non-linear capacity:
Z_C = g(Θ_1·X + b_1) ⊙ σ(Θ_2·X + b_2)
wherein Θ_1 and Θ_2 are two one-dimensional convolution operations; b_1 and b_2 are bias parameters; g(·) is the tanh function; σ(·) is the sigmoid function; ⊙ denotes the element-wise product.
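A minimal PyTorch sketch of such a gated one-dimensional convolution is given below. The kernel size, padding and channel counts are illustrative assumptions; only the gating structure tanh(conv1(X)) ⊙ sigmoid(conv2(X)) follows the formula above.

import torch
import torch.nn as nn

class AuxiliaryInformationModule(nn.Module):
    # Gated 1-D convolution: Z_C = tanh(Theta_1 * X + b_1) element-wise-multiplied by sigmoid(Theta_2 * X + b_2)
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_value = nn.Conv1d(in_channels, out_channels, kernel_size, padding=pad)
        self.conv_gate = nn.Conv1d(in_channels, out_channels, kernel_size, padding=pad)

    def forward(self, X):
        # X: (batch, electrode channels, feature length), convolved along the feature axis
        return torch.tanh(self.conv_value(X)) * torch.sigmoid(self.conv_gate(X))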
S3, the spatial features Z_G and the discriminant features Z_C are subjected to multi-dimensional adaptive fusion through the early-late adaptive fusion mechanism to obtain the final feature Z, realizing emotion classification.
Carrying out multi-dimensional adaptive fusion of the spatial features Z_G and the discriminant features Z_C through the early-late adaptive fusion mechanism to obtain the final embedding means that:
the high-dimensional spatial features Z_G and discriminant features Z_C are each subjected to feature transformation and mapped to the same dimension to obtain the features Z'_G and Z'_C; the features Z'_G and Z'_C are fused to obtain the high-dimensional features Z'_F:
Z'_F = Ω(Z'_G, Z'_C)
wherein Ω(·) represents the early fusion function;
fully connected layers are then used to reduce the dimensions of Z'_G, Z'_C and Z'_F respectively, yielding the spatial features Z''_G, the discriminant features Z''_C and the fusion features Z''_F; the spatial features Z''_G, discriminant features Z''_C and fusion features Z''_F are fed into an adaptive late fusion network to obtain complementary information, and the feature Z used for emotion recognition is finally obtained:
Z = φ_G·Z''_G + φ_C·Z''_C + φ_F·Z''_F
wherein φ_G, φ_C and φ_F respectively represent attention weights.
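One possible PyTorch sketch of this early-late adaptive fusion is shown below. The choice of element-wise addition for the early fusion function Ω, the use of a single linear scorer for the attention weights, and the flattening of inputs to (batch, dim) vectors are assumptions for illustration only.

import torch
import torch.nn as nn

class EarlyLateFusion(nn.Module):
    def __init__(self, dim_g, dim_c, shared_dim, out_dim):
        super().__init__()
        self.to_shared_g = nn.Linear(dim_g, shared_dim)   # feature transformation of Z_G
        self.to_shared_c = nn.Linear(dim_c, shared_dim)   # feature transformation of Z_C
        self.reduce_g = nn.Linear(shared_dim, out_dim)    # fully connected reduction -> Z''_G
        self.reduce_c = nn.Linear(shared_dim, out_dim)    # fully connected reduction -> Z''_C
        self.reduce_f = nn.Linear(shared_dim, out_dim)    # fully connected reduction -> Z''_F
        self.att = nn.Linear(out_dim, 1)                  # produces the weights phi_G, phi_C, phi_F

    def forward(self, Z_G, Z_C):
        # Z_G, Z_C: (batch, dim_g) and (batch, dim_c), assumed already flattened
        Zg = self.to_shared_g(Z_G)                        # Z'_G
        Zc = self.to_shared_c(Z_C)                        # Z'_C
        Zf = Zg + Zc                                      # Z'_F = Omega(Z'_G, Z'_C), assumed additive
        Zg2, Zc2, Zf2 = self.reduce_g(Zg), self.reduce_c(Zc), self.reduce_f(Zf)
        stacked = torch.stack([Zg2, Zc2, Zf2], dim=1)     # (batch, 3, out_dim)
        phi = torch.softmax(self.att(stacked), dim=1)     # attention weights over the three features
        Z = (phi * stacked).sum(dim=1)                    # Z = phi_G Z''_G + phi_C Z''_C + phi_F Z''_F
        return Z, Zg2, Zc2                                # Z''_G and Z''_C are also returned for the loss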
The hierarchical multidimensional space network realizes network updating by minimizing a loss function, and optimizes the parameter weight of the network by adopting a back propagation algorithm.
The feature Z, the spatial features Z''_G and the discriminant features Z''_C are normalized with the softmax(·) function to compute the prediction probabilities Y'_i, Y'_Gi and Y'_Ci for each class, i = 1, 2, ..., c, where c is the number of categories;
wherein Y'_i = {y'_1, y'_2, ..., y'_s} are the prediction probabilities of all s samples for the ith class computed from Z, y'_j being the prediction probability of the jth sample for the ith class, j = 1, 2, ..., s;
Y'_Gi = {y'_G1, y'_G2, ..., y'_Gs} are the prediction probabilities of all s samples for the ith class computed from the normalized spatial features Z''_G, y'_Gj being the prediction probability of the jth sample for the ith class;
Y'_Ci = {y'_C1, y'_C2, ..., y'_Cs} are the prediction probabilities of all s samples for the ith class computed from the normalized discriminant features Z''_C, y'_Cj being the prediction probability of the jth sample for the ith class.
The loss function consists of a classification optimization loss function L_cla and a space constraint loss function L_reg:
L_cla = −Σ_{i=1}^{c} Y_i·log(Y'_i)
L_reg = −Σ_{i=1}^{c} Y_i·log(Y'_Gi) − Σ_{i=1}^{c} Y_i·log(Y'_Ci)
wherein Y_i = {y_1, y_2, ..., y_s} are the true probabilities of all s samples for the ith class, y_j being the true probability of the jth sample for the ith class;
the total loss L_total is:
L_total = λ·L_cla + μ·L_reg
wherein λ and μ are the coefficients corresponding to the classification optimization loss function L_cla and the space constraint loss function L_reg respectively.
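The total loss can be assembled as in the following Python sketch. It assumes L_cla is a cross-entropy on the fused prediction and L_reg is the sum of cross-entropies on the two branch predictions; the exact form of the space constraint loss is not spelled out above, so this pairing is an assumption, as are the example coefficient values.

import torch.nn.functional as F

def total_loss(logits, logits_g, logits_c, labels, lam=1.0, mu=0.5):
    # logits   : class scores computed from the fused feature Z
    # logits_g : class scores computed from the spatial features Z''_G
    # logits_c : class scores computed from the discriminant features Z''_C
    # labels   : integer emotion labels
    # lam, mu  : illustrative values of the coefficients lambda and mu
    L_cla = F.cross_entropy(logits, labels)                                        # classification optimization loss
    L_reg = F.cross_entropy(logits_g, labels) + F.cross_entropy(logits_c, labels)  # space constraint loss (assumed form)
    return lam * L_cla + mu * L_reg                                                # L_total = lambda*L_cla + mu*L_reg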
In order to verify the effect of the electroencephalogram emotion recognition method, experiments were carried out on two widely used electroencephalogram data sets, the DREAMER data set and the SEED data set. The DREAMER data set uses movie clips to induce emotion in the subjects; one experiment consists of 18 movie clips, and 23 subjects (14 males and 9 females) participated. The data set provides Power Spectral Density (PSD) features, and all data labels are binary (low/high valence, low/high arousal). The SEED data set consists of electroencephalogram signals of positive, neutral and negative emotions elicited by Chinese movie clips; each emotion corresponds to five movie clips, so one experiment contains 15 movie clips. 15 subjects (7 males and 8 females) participated, each performing the same experiment at three different periods to explore stable emotional patterns, yielding 45 experiments in total. The data set provides five features, including Differential Entropy (DE), Power Spectral Density (PSD), Differential Asymmetry (DASM) and Rational Asymmetry (RASM), extracted from five frequency bands (delta, theta, alpha, beta, gamma). Compared with other methods on the DREAMER data set, the electroencephalogram emotion recognition method provided by this embodiment achieves the highest accuracy, reaching 93.95% and 94.64% on valence and arousal respectively. On the SEED data set, the method achieves 96.40% accuracy on the DE features over all bands, the highest among all compared methods.
Example two
The storage medium of this embodiment is characterized in that the storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the electroencephalogram emotion recognition method based on a hierarchical multidimensional space according to the first embodiment.
EXAMPLE III
The computing device of the embodiment includes a processor and a memory for storing an executable program of the processor, and is characterized in that when the processor executes the program stored in the memory, the electroencephalogram emotion recognition method based on the hierarchical multidimensional space described in the first embodiment is implemented.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (7)

1. An electroencephalogram emotion recognition method based on a hierarchical multidimensional space, characterized by comprising the following steps:
preprocessing input electroencephalogram data;
inputting the preprocessed data X into a hierarchical dynamic graph convolution module of the hierarchical multidimensional space network, and extracting the spatial features Z_G based on the channel space;
inputting the preprocessed data X in parallel into an auxiliary information module of the hierarchical multidimensional space network, and extracting the data-dependent discriminant features Z_C;
carrying out multi-dimensional adaptive fusion of the spatial features Z_G and the discriminant features Z_C through an early-late adaptive fusion mechanism to obtain the final feature Z, thereby realizing emotion classification;
wherein the spatial features Z_G are extracted as follows:
the hierarchical dynamic graph convolution module comprises a global dynamic branch and a local functional branch; the global dynamic branch comprises a global dynamic connection graph A_D and K global channel spatial feature extraction layers, wherein the global dynamic connection graph A_D is set as a learnable parameter of the hierarchical multidimensional space network and is obtained through training and optimization of the network; the local functional branch comprises a local functional connection graph A_R and K local channel spatial feature extraction layers; the local functional connection graph A_R is constructed as follows: the electrode channels contained in the preprocessed data X are divided into N areas according to brain functional regions, the channel connection relations within each area are calculated using a Gaussian kernel function, and the results are combined to obtain the local functional connection graph A_R;
the preprocessed data X and the global dynamic connection graph A_D are input into the K global channel spatial feature extraction layers of the global dynamic branch, and K-step node information diffusion is performed to obtain the output features Z_D^(l+1) of the global dynamic branch; the preprocessed data X and the local functional connection graph A_R are input into the K local channel spatial feature extraction layers of the local functional branch, and K-step node information diffusion is performed to obtain the output features Z_R^(l+1) of the local functional branch:
Z_D^(l+1) = f_θ^(l)(Ã_D^(l)·Z_D^(l))
Z_R^(l+1) = f_θ^(l)(Ã_R^(l)·Z_R^(l))
wherein Z_D^(l+1) and Z_D^(l) respectively represent the output features of the (l+1)th and lth global channel spatial feature extraction layers; Z_R^(l+1) and Z_R^(l) respectively represent the output features of the (l+1)th and lth local channel spatial feature extraction layers; initially Z_D^(0) = X and Z_R^(0) = X; Ã_D^(l)·Z_D^(l) and Ã_R^(l)·Z_R^(l) respectively represent the filtering performed by the lth global and local channel spatial feature extraction layers; f_θ^(l) respectively denote the lth global and local channel spatial feature extraction layers with parameters θ;
Ã_D^(l) and Ã_R^(l) respectively represent the normalized adjacency matrices of the lth-layer global dynamic connection graph and local functional connection graph; initially
Ã_D^(0) = (D_D^(0))^(-1)·A_D
Ã_R^(0) = (D_R^(0))^(-1)·A_R
where D_D^(0) and D_R^(0) are diagonal matrices whose diagonal elements are the column sums of A_D and A_R respectively; the lth-layer normalized adjacency matrices Ã_D^(l) and Ã_R^(l) are then computed from the previous layer as:
Ã_D^(l) = Φ_D^(l-1)(Ã_D^(l-1))
Ã_R^(l) = Φ_R^(l-1)(Ã_R^(l-1))
wherein Φ_D(·) and Φ_R(·) respectively represent fully connected layers that map the (l-1)th-layer normalized adjacency matrix to the lth layer;
the output features Z_D^(l+1) of the global dynamic branch and the output features Z_R^(l+1) of the local functional branch are adaptively fused to obtain the spatial features Z_G output by the hierarchical dynamic graph convolution module:
Z_G = Agg(Z_D^(l+1), Z_R^(l+1))
wherein Agg(·) is an attention mechanism.
2. The electroencephalogram emotion recognition method based on the hierarchical multi-dimensional space as recited in claim 1, wherein: in the auxiliary information module, a gated convolution network is used to denoise the preprocessed data X and to extract the data-dependent discriminant features Z_C:
Z_C = g(Θ_1·X + b_1) ⊙ σ(Θ_2·X + b_2)
wherein Θ_1 and Θ_2 are two one-dimensional convolution operations; b_1 and b_2 are bias parameters; g(·) is the tanh function; σ(·) is the sigmoid function; ⊙ denotes the element-wise product.
3. The electroencephalogram emotion recognition method based on the hierarchical multi-dimensional space as recited in claim 1, wherein: carrying out multi-dimensional adaptive fusion of the spatial features Z_G and the discriminant features Z_C through the early-late adaptive fusion mechanism to obtain the final embedding means that:
the high-dimensional spatial features Z_G and discriminant features Z_C are each subjected to feature transformation and mapped to the same dimension to obtain the features Z'_G and Z'_C; the features Z'_G and Z'_C are fused to obtain the high-dimensional features Z'_F:
Z'_F = Ω(Z'_G, Z'_C)
wherein Ω(·) represents the early fusion function;
fully connected layers are then used to reduce the dimensions of Z'_G, Z'_C and Z'_F respectively, yielding the spatial features Z''_G, the discriminant features Z''_C and the fusion features Z''_F; the spatial features Z''_G, discriminant features Z''_C and fusion features Z''_F are fed into an adaptive late fusion network to obtain the feature Z used for emotion recognition:
Z = φ_G·Z''_G + φ_C·Z''_C + φ_F·Z''_F
wherein φ_G, φ_C and φ_F respectively represent attention weights.
4. The electroencephalogram emotion recognition method based on the hierarchical multi-dimensional space, which is characterized in that: the hierarchical multidimensional space network realizes network updating by minimizing a loss function, and optimizes the parameter weight of the network by adopting a back propagation algorithm.
5. The electroencephalogram emotion recognition method based on the hierarchical multi-dimensional space, wherein: the feature Z, the spatial features Z''_G and the discriminant features Z''_C are normalized with the softmax(·) function to compute the prediction probabilities Y'_i, Y'_Gi and Y'_Ci for each class, i = 1, 2, ..., c, where c is the number of categories;
wherein Y'_i = {y'_1, y'_2, ..., y'_s} are the prediction probabilities of all s samples for the ith class computed from Z, y'_j being the prediction probability of the jth sample for the ith class, j = 1, 2, ..., s;
Y'_Gi = {y'_G1, y'_G2, ..., y'_Gs} are the prediction probabilities of all s samples for the ith class computed from the normalized spatial features Z''_G, y'_Gj being the prediction probability of the jth sample for the ith class;
Y'_Ci = {y'_C1, y'_C2, ..., y'_Cs} are the prediction probabilities of all s samples for the ith class computed from the normalized discriminant features Z''_C, y'_Cj being the prediction probability of the jth sample for the ith class;
the loss function consists of a classification optimization loss function L_cla and a space constraint loss function L_reg:
L_cla = −Σ_{i=1}^{c} Y_i·log(Y'_i)
L_reg = −Σ_{i=1}^{c} Y_i·log(Y'_Gi) − Σ_{i=1}^{c} Y_i·log(Y'_Ci)
wherein Y_i = {y_1, y_2, ..., y_s} are the true probabilities of all s samples for the ith class, y_j being the true probability of the jth sample for the ith class;
the total loss L_total is:
L_total = λ·L_cla + μ·L_reg
wherein λ and μ are the coefficients corresponding to the classification optimization loss function L_cla and the space constraint loss function L_reg respectively.
6. A storage medium storing a computer program, wherein the computer program when executed by a processor causes the processor to execute the hierarchical multidimensional space-based electroencephalogram emotion recognition method of any one of claims 1 to 5.
7. A computing device comprising a processor and a memory for storing a program executable by the processor, wherein the processor implements the hierarchical multidimensional space-based electroencephalogram emotion recognition method according to any one of claims 1 to 5 when executing the program stored in the memory.
CN202211678810.3A 2022-12-27 2022-12-27 Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space Pending CN115659259A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211678810.3A CN115659259A (en) 2022-12-27 2022-12-27 Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211678810.3A CN115659259A (en) 2022-12-27 2022-12-27 Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space

Publications (1)

Publication Number Publication Date
CN115659259A true CN115659259A (en) 2023-01-31

Family

ID=85022347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211678810.3A Pending CN115659259A (en) 2022-12-27 2022-12-27 Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space

Country Status (1)

Country Link
CN (1) CN115659259A (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115238835A (en) * 2022-09-23 2022-10-25 华南理工大学 Electroencephalogram emotion recognition method, medium and equipment based on double-space adaptive fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MENGQING YE et al.: "Hierarchical Dynamic Graph Convolutional Network With Interpretability for EEG-Based Emotion Recognition", IEEE Transactions on Neural Networks and Learning Systems, pages 2-3 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116098621A (en) * 2023-02-14 2023-05-12 平顶山学院 Emotion face and physiological response recognition method based on attention mechanism

Similar Documents

Publication Publication Date Title
Ye et al. Coupled layer-wise graph convolution for transportation demand prediction
Zhao et al. Multiple classifiers fusion and CNN feature extraction for handwritten digits recognition
CN107516110B (en) Medical question-answer semantic clustering method based on integrated convolutional coding
Chen et al. A dual-attention-based stock price trend prediction model with dual features
Oloulade et al. Graph neural architecture search: A survey
Fischer et al. GeoComputational modelling: techniques and applications
Shi et al. A binary harmony search algorithm as channel selection method for motor imagery-based BCI
CN115659259A (en) Electroencephalogram emotion recognition method, medium and equipment based on hierarchical multi-dimensional space
Li et al. Embedded stacked group sparse autoencoder ensemble with L1 regularization and manifold reduction
Zhang et al. Node-feature convolution for graph convolutional networks
Bai et al. HVAE: A deep generative model via hierarchical variational auto-encoder for multi-view document modeling
Weng et al. Adversarial attention-based variational graph autoencoder
Pattanayak et al. CURATING: A multi-objective based pruning technique for CNNs
Shao et al. Heterogeneous graph neural network with multi-view representation learning
Hruschka et al. Feature selection by Bayesian networks
Sun et al. Triplet attention multiple spacetime-semantic graph convolutional network for skeleton-based action recognition
Wu et al. AAE-SC: A scRNA-seq clustering framework based on adversarial autoencoder
Abpeykar et al. Neural trees with peer-to-peer and server-to-client knowledge transferring models for high-dimensional data classification
Mezzah et al. Practical hyperparameters tuning of convolutional neural networks for EEG emotional features classification
Padole et al. Graph wavelet-based multilevel graph coarsening and its application in graph-CNN for alzheimer’s disease detection
Chen et al. Deep self-supervised graph attention convolution autoencoder for networks clustering
Wen et al. Block-sparse CNN: towards a fast and memory-efficient framework for convolutional neural networks
Zhang et al. Clustering optimization algorithm for data mining based on artificial intelligence neural network
Lv et al. Deep ensemble network based on multi-path fusion
da Silva et al. A novel multi-objective grammar-based framework for the generation of convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination