CN113951883B - Gender difference detection method based on electroencephalogram signal emotion recognition - Google Patents

Info

Publication number
CN113951883B
CN113951883B (application CN202111335757.2A)
Authority
CN
China
Prior art keywords: data, long, time, short, neural network
Prior art date
Legal status: Active
Application number
CN202111335757.2A
Other languages
Chinese (zh)
Other versions
CN113951883A (en)
Inventor
吕宝粮
朱懿晖
Current Assignee
Shanghai Zero Unique Technology Co ltd
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202111335757.2A
Publication of CN113951883A
Application granted
Publication of CN113951883B
Status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A61B5/374 Detecting the frequency distribution of signals, e.g. detecting delta, theta, alpha, beta or gamma waves
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7253 Details of waveform analysis characterised by using transforms
    • A61B5/7257 Details of waveform analysis characterised by using transforms using Fourier transforms
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device


Abstract

An electroencephalogram (EEG) feature recognition method based on a long short-term memory graph neural network: differential entropy features are extracted from the EEG data sets, converted into feature matrices representing graphs, and used to train a long short-term memory graph neural network model that captures brain functional connectivity information and the temporal structure of the feature data simultaneously; emotion recognition is then performed with the trained model. By fully exploiting brain functional connectivity information and temporal information through the long short-term memory graph neural network, the invention achieves recognition on several commonly used and representative data sets, analyzes the most common forms of difference expression, and verifies the gender difference characteristics of emotion-related EEG activity.

Description

Gender difference detection method based on electroencephalogram signal emotion recognition
Technical Field
The invention relates to a technology in the field of electroencephalogram signals, in particular to a gender difference detection method based on electroencephalogram signal emotion recognition.
Background
Compared with signals such as facial expressions, speech and body posture that researchers have often used in the past, EEG signals characterize a subject's emotion-related brain activity in much finer detail and are therefore considered the most effective signal modality for emotion recognition; accordingly, EEG-based emotion recognition has attracted growing attention in both academia and industry. However, individuals differ not only in physical structure, such as scalp impedance and head shape, but also in mental mechanisms, such as ways of thinking, psychological state and cognitive ability. The feature patterns of EEG signals therefore depend heavily on the individual characteristics of each subject, and this individual-difference problem seriously hinders the practical application of affective brain-computer interfaces.
To address individual differences in EEG signals, one approach is to use transfer learning to handle inter-subject variability. Another line of research seeks the causes of these individual differences, so as to understand the underlying mechanisms and their scope and then address each kind of difference according to its nature. Gender is a simple and clear-cut individual difference, and the incidence of affective disorders shows a marked gender gap: statistics indicate, for example, that the number of female patients with depression is approximately 1.7 times that of male patients. Affective disorders are closely related to the patient's emotional state. A method for detecting whether gender differences exist in EEG-based emotion recognition therefore provides important theoretical and technical support for building accurate emotion models and for developing diagnosis-and-treatment assistance systems for affective disorders based on an affective brain-computer interface.
Although some results on gender differences in EEG-based emotion recognition exist, prior work has serious limitations. First, most studies use only a single data set and their models perform poorly, so the conclusions do not generalize. Second, the data-processing methods and experimental procedures differ across studies, and the same conclusions cannot be reproduced stably under other experimental configurations. In addition, although various forms of EEG difference expression have been reported, the findings are inconsistent across studies and cannot be accepted as general results. A new method is therefore needed to find more stable expressions of gender difference and to verify the gender difference of EEG activity during emotional experience.
Disclosure of Invention
To address the above shortcomings of the prior art, the invention provides a gender difference detection method based on EEG emotion recognition. By fully exploiting brain functional connectivity information and temporal information through a long short-term memory graph neural network, it recognizes emotions on several commonly used and representative data sets, analyzes the most common forms of difference expression, and verifies the gender difference characteristics of emotion-related EEG activity.
The invention is realized by the following technical scheme:
the invention relates to a brain electrical characteristic identification method based on a long-time and short-time memory diagram neural network, which comprises the following steps: the method comprises the steps of extracting differential entropy characteristics from electroencephalogram data set, converting the differential entropy characteristics into a characteristic matrix of a representation diagram, training a long-time diagram neural network model and a short-time diagram neural network model, collecting brain function connection information and time sequence relation of the characteristic data at the same time, and finally realizing emotion recognition by using the trained network model.
The experiments are conducted on five commonly used, representative data sets: the Chinese data sets SEED, SEED-IV and SEED-V and the European data sets DEAP and DREAMER. All five collect EEG modality data and all use video clips to elicit emotion.
For training, the raw data are preprocessed and differential entropy frequency-domain features are extracted as experimental input. A general same-sex training strategy and a cross-sex training strategy are designed; with leave-one-subject-out cross-validation, a recognizer is trained for each subject of each data set, yielding a corresponding same-sex model and cross-sex model, whose hyperparameters are tuned over iterations to obtain the final experimental results.
On this basis, the question of how the differences are expressed is addressed. The invention analyzes and compares the experimental results on each data set against common forms of gender difference expression, such as neural patterns, key brain regions and key frequency bands, to reach a more objective and stable conclusion and to verify the gender difference characteristics of emotion-related EEG signals.
The preprocessing is as follows: baseline correction is applied to the raw EEG data in each data set, and the data are down-sampled to 200 Hz to speed up analysis; band-pass filtering in the 1-75 Hz range then removes noise and artifacts.
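As a minimal sketch of this preprocessing step, the band-pass filter and down-sampling can be illustrated with an FFT mask in NumPy; the 1000 Hz input rate and the mask-based filter are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def preprocess(eeg, fs_in=1000, fs_out=200, band=(1.0, 75.0)):
    """Baseline-correct, band-pass filter (FFT mask) and down-sample one channel."""
    x = eeg - eeg.mean()                      # baseline correction (remove DC offset)
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs_in)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0   # keep only 1-75 Hz
    x = np.fft.irfft(spec, n=x.size)
    step = fs_in // fs_out                    # 1000 Hz -> 200 Hz: keep every 5th sample
    return x[::step]                          # safe: content above 75 Hz was zeroed
```

Because everything above 75 Hz is zeroed before decimation, the naive stride-5 down-sampling introduces no aliasing at the 100 Hz Nyquist limit of the 200 Hz output.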
The differential entropy feature extraction is as follows: a short-time Fourier transform is applied to the preprocessed data, and differential entropy features are extracted for each lead in five frequency bands: delta (1-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-31 Hz) and gamma (31-50 Hz). Because SEED, SEED-IV and SEED-V use 62-lead EEG caps, they yield 310-dimensional differential entropy features; because the raw EEG of DEAP and DREAMER has the delta band filtered out and the recordings use 32-lead and 14-lead caps, they yield 128-dimensional and 56-dimensional features respectively. Finally, the extracted EEG features are smoothed with a linear dynamical system to remove rapid jitter unrelated to emotion.
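Under the Gaussian assumption commonly used for band-limited EEG, the differential entropy of a band reduces to 0.5·log(2πeσ²), with σ² estimated from the band power of the analysis window. A hedged NumPy sketch (the window length and the simple periodogram power estimator are illustrative choices):

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def differential_entropy(window, fs=200, bands=BANDS):
    """DE per band for one lead, assuming the band-limited signal is Gaussian:
    DE = 0.5 * log(2*pi*e*sigma^2), with sigma^2 taken as the mean band power."""
    spec = np.abs(np.fft.rfft(window)) ** 2 / window.size   # periodogram
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    out = {}
    for name, (lo, hi) in bands.items():
        power = spec[(freqs >= lo) & (freqs < hi)].mean()   # variance estimate
        out[name] = 0.5 * np.log(2 * np.pi * np.e * power)
    return out
```

On a 4-second window at 200 Hz this yields the five per-lead values that, stacked over leads, give the 310-/128-/56-dimensional features described above.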
The feature transformation is as follows: using the InMemoryDataset base class of the PyTorch Geometric library, the leads of the extracted frequency-domain features are taken as nodes and the connections between leads as edges, converting the feature data into feature matrices of graphs G = (V, E); a time window T then turns the feature matrices into time series data, forming the input data of the experiment.
The recognizer model is a long short-term memory graph neural network, i.e. a graph convolutional neural network with an added memory module. The network comprises a long short-term memory module, several graph convolution modules, a domain classifier, a gradient reversal layer, an emotion-aware learner, a pooling layer and a fully connected layer, wherein: the long short-term memory module captures the temporal dependencies between feature matrices; the graph convolution modules extract brain functional connectivity features related to emotional experience; the domain classifier addresses the performance problem in cross-subject scenarios; the gradient reversal layer reverses the domain classifier's gradient during back-propagation; the emotion-aware learner targets label noise in the data; the pooling layer pools the output features; and finally the fully connected layer decodes the pooled features and predicts an emotion label.
The long short-term memory module comprises one or more self-connected memory cells and three gate units. For the data of each time step, the memory cells extract information from the result of the previous step, so the module can retain the temporal dependencies of the data over long spans. Because EEG signals are themselves time series, the memory module can capture and exploit the temporal information in them, improving recognition accuracy.
The graph convolution module captures local and global connectivity information between leads with a sparse adjacency matrix that follows the structure of the brain network; the matrix is computed from the reciprocal of the physical distance between lead channels. Local connections reflect the anatomical connectivity of brain regions, while global connections reflect the emotion-related functional connectivity between the left and right hemispheres.
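A sketch of how such a sparse adjacency matrix could be built from electrode coordinates. The reciprocal-distance weights and the A_ij - 1 offset for the cross-hemisphere global pairs follow the text; the normalization and the sparsification threshold tau are illustrative assumptions:

```python
import numpy as np

def build_adjacency(coords, global_pairs, tau=0.3):
    """Sparse adjacency from reciprocal channel distance, plus negative-valued
    global connections for the cross-hemisphere pairs (A_ij = A_ij - 1)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    with np.errstate(divide="ignore"):
        a = np.where(d > 0, 1.0 / d, 0.0)   # closer leads -> stronger local links
    a = a / a.max()                          # normalize weights into (0, 1]
    a[a < tau] = 0.0                         # sparsify: drop weak (distant) links
    for i, j in global_pairs:                # cross-hemisphere global connections
        a[i, j] -= 1.0
        a[j, i] -= 1.0
    return a
```

The negative initial values for the 9 global pairs let the network emphasize differential (lateralized) activity between hemispheres rather than similarity.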
The domain classifier combines transfer learning with adversarial training to reduce the discrepancy between the source and target domains, enhancing the generalization ability of the model and remedying the poor recognition performance in cross-subject scenarios.
The emotion-aware learner works as follows: a noise-level factor converts each individual label into a prior probability distribution according to the emotion-eliciting properties of the stimulus, thereby mitigating the label-noise problem inside the data set.
The training process comprises the following specific contents:
1) First, each data set is wrapped with its configuration information so that experimental configurations can be switched easily; each data set is then split into male data and female data, and the experimental data are normalized and used as the input of model training.
2) Second, the adjacency matrix learned by the graph convolution network is initialized as A_ij = 1/d_ij, the reciprocal of the physical distance d_ij between lead channels i and j, and the initial value of each global connection is set to A_ij = A_ij - 1. There are 9 pairs of global connections in total; they span the left and right cerebral hemispheres, maximizing the lateralization of the EEG signal and exposing the functional connectivity between hemispheres.
3) With leave-one-subject-out cross-validation, a same-sex model and a cross-sex model are trained for each subject's data: any one subject's data is taken as the test set; the EEG data of all other subjects of the same sex form the training set X_i_same for the same-sex model, while the data of all subjects of the opposite sex form the training set X_i_cross for the cross-sex model.
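The same-sex / cross-sex leave-one-subject-out strategy of step 3) can be sketched in plain Python; subject identifiers and sex labels are illustrative:

```python
def loso_splits(subject_ids, sexes):
    """Leave-one-subject-out splits: for each held-out test subject i, yield
    the same-sex training set (X_i_same) and cross-sex training set (X_i_cross)."""
    for i, sid in enumerate(subject_ids):
        same = [s for j, s in enumerate(subject_ids)
                if j != i and sexes[j] == sexes[i]]   # all other same-sex subjects
        cross = [s for j, s in enumerate(subject_ids)
                 if sexes[j] != sexes[i]]             # all opposite-sex subjects
        yield sid, same, cross
```

Each yielded triple drives one training run of the same-sex model and one of the cross-sex model for the held-out subject.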
4) In the long short-term memory graph neural network, the feature matrix sequence X_i = {x_1, x_2, ..., x_T} is fed into the long short-term memory module. For the elements of the input sequence, the memory module is updated as follows. Input gate: i_t = σ(W_ix x_t + b_ii + W_ih h_(t-1) + b_i); forget gate: f_t = σ(W_fx x_t + b_if + W_fh h_(t-1) + b_f); memory gate: g_t = tanh(W_cx x_t + W_ch h_(t-1) + b_c); memory cell state: c_t = f_t ∘ c_(t-1) + i_t ∘ g_t; output gate: o_t = σ(W_ox x_t + W_oh h_(t-1) + b_o), with h_t = o_t ∘ tanh(c_t); where h_t is the hidden-layer state at time t, h_(t-1) is the state at time t-1 (or the initial hidden state), σ is the logistic sigmoid function, and ∘ denotes the Hadamard product.
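The gate updates of step 4) can be written out directly in NumPy. This is a single-cell sketch with caller-supplied parameters, not the trained module; the two per-gate bias terms of the equations are merged into one here:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One update of the memory module, following the gate equations of step 4.
    p maps parameter names (W_ix, W_ih, b_i, ...) to arrays."""
    i_t = sigmoid(p["W_ix"] @ x_t + p["W_ih"] @ h_prev + p["b_i"])   # input gate
    f_t = sigmoid(p["W_fx"] @ x_t + p["W_fh"] @ h_prev + p["b_f"])   # forget gate
    g_t = np.tanh(p["W_cx"] @ x_t + p["W_ch"] @ h_prev + p["b_c"])   # memory gate
    c_t = f_t * c_prev + i_t * g_t       # cell state: Hadamard products
    o_t = sigmoid(p["W_ox"] @ x_t + p["W_oh"] @ h_prev + p["b_o"])   # output gate
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t
```

Iterating this step over the T feature matrices of a window produces the temporally-aware representation that the graph convolution modules then consume.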
5) For the output X_i of the long short-term memory module, each graph convolution module computes Z_i = S_L X_i W and outputs Z_i, learning the importance of each brain functional connection.
6) The output of the graph convolution modules then passes through the pooling layer and the fully connected layer, which outputs the probability distribution of the emotion label Y_i as P(Y_i) = softmax(fc(pool(σ(Z_i)))), where the fully connected layer fc uses the softmax function as its activation, pool(·) denotes global sum pooling, and σ(Z_i) applies the nonlinear transformation σ(x) = max(0, x) to each element of Z_i.
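The ReLU, global sum pooling and softmax readout of step 6) amount to a few lines of NumPy; W_fc and b_fc stand in for the fully connected layer's parameters and are illustrative names:

```python
import numpy as np

def readout(Z, W_fc, b_fc):
    """Pooling + fully connected readout of step 6: element-wise ReLU,
    global sum pooling over graph nodes, then softmax over emotion classes."""
    h = np.maximum(Z, 0.0)          # sigma(x) = max(0, x), element-wise
    pooled = h.sum(axis=0)          # global sum pooling over the nodes (leads)
    logits = pooled @ W_fc + b_fc
    e = np.exp(logits - logits.max())
    return e / e.sum()              # softmax: probability over emotion labels
```

Subtracting the maximum logit before exponentiation is the standard numerically stable softmax and does not change the resulting distribution.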
7) During node representation learning, a domain classifier is trained to learn domain-invariant features and reduce the discrepancy between the source domain X_S and the target domain X_T. The main task of the domain classifier is to minimize the cross-entropy losses of two binary classification tasks (distinguishing source from target samples), which in the standard domain-adversarial form read Φ_D = -(1/n_S) Σ_(x∈X_S) log D(x) - (1/n_T) Σ_(x∈X_T) log(1 - D(x)), where D(x) is the predicted probability that a sample belongs to the source domain. This enhances the generalization ability of the model and improves robustness in cross-subject experiments.
8) The learning of the domain classifier is then assisted by the gradient reversal layer. Following the standard domain-adversarial schedule, the factor of the gradient reversal layer is computed as λ_p = 2/(1 + exp(-γ p)) - 1, where p ∈ [0, 1] represents the progress of model training and γ is a scheduling constant.
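The reversal-factor schedule can be sketched as follows; γ = 10 is the value commonly used in domain-adversarial training and is an assumption here:

```python
import math

def grl_factor(p, gamma=10.0):
    """Gradient reversal factor of step 8: ramps smoothly from 0 (start of
    training, p = 0) toward 1 (end of training, p = 1)."""
    return 2.0 / (1.0 + math.exp(-gamma * p)) - 1.0
```

Ramping the factor up keeps noisy domain-classifier gradients from dominating early training, then lets the adversarial signal take full effect as the features stabilize.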
9) The emotion-aware learner uses the noise-level factor to convert each single emotion label into a prior probability distribution q(Y_i), and replaces the optimization objective of the graph convolution module with minimizing the KL divergence Φ' = KL(q(Y_i) || P(Y_i)) between this prior and the model's predicted distribution P(Y_i), mitigating label noise within the data set.
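A sketch of the label-prior conversion and KL objective of step 9); the noise level eps = 0.1 is an illustrative value:

```python
import numpy as np

def label_prior(label, n_classes, eps=0.1):
    """Turn a hard emotion label into a prior distribution: the labelled class
    gets 1 - eps, the remaining mass is spread over the other classes."""
    q = np.full(n_classes, eps / (n_classes - 1))
    q[label] = 1.0 - eps
    return q

def kl_divergence(q, p):
    """KL(q || p): the training objective that replaces plain cross-entropy."""
    return float(np.sum(q * np.log(q / p)))
```

With eps = 0, the prior degenerates to a one-hot vector and the KL objective reduces to the usual cross-entropy up to a constant.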
10) At this point the loss function of the entire model becomes Φ'' = Φ' + Φ_D. Finally, a single-layer fully connected network outputs the prediction result, the recognition accuracies of the same-sex and cross-sex models are computed separately, and the training process is iterated.
Technical effects
The invention combines a long short-term memory module with a graph neural network, captures brain functional connectivity information and temporal information simultaneously for signal pattern recognition, and introduces a domain classifier and an emotion-aware learner to reduce the discrepancy between training and test sets and to reduce data label noise, enhancing the performance of the network model and improving EEG-based pattern recognition accuracy. Compared with the prior art, the method performs emotion recognition on several commonly used, representative data sets, overcomes the limitations of existing work as a whole, improves the recognition rate of EEG signal patterns, and verifies the gender differences of EEG activity during emotional experience.
Drawings
FIG. 1 is a schematic flow diagram of the invention;
FIG. 2 shows the network structure of the long short-term memory graph neural network of the invention;
FIG. 3 shows the statistics of the experimental results of the support vector machine;
FIG. 4 shows the statistics of the experimental results of the long short-term memory network;
FIG. 5 analyzes the experimental results of the long short-term memory network of the invention under different emotions;
FIG. 6 is a topographic map visualizing the male-female brain energy differences for 5 emotions in 5 frequency bands.
Detailed Description
As shown in fig. 1, this embodiment relates to a gender difference detection method based on EEG emotion recognition, which specifically comprises:
Step 1: organize and package the configuration information of the 5 data sets, including EEG cap leads, subject information and label content, so that experimental configurations can be switched easily, and obtain the raw EEG data of each data set.
Step 2: preprocess the raw data: down-sample to 200 Hz and band-pass filter at 1-75 Hz to remove noise and artifacts.
Step 3: extract differential entropy features: compute a short-time Fourier transform on the preprocessed data, extract differential entropy features in the 5 frequency bands for each lead with a non-overlapping 4-second time window, then smooth the features with a linear dynamical system to remove rapid jitter, obtaining 310-, 128- and 56-dimensional experimental data respectively.
Step 4: convert the experimental data of the 5 data sets into feature matrices of graphs, then into time series of feature matrices, which serve as the input for training the long short-term memory graph neural network; the network structure is shown in fig. 2.
Step 5: train the same-sex and cross-sex models for each subject's data with leave-one-subject-out cross-validation, and output the result data of both models.
Step 6: iterate the model training process, tune the hyperparameters, correct the neural network and optimize model performance.
Step 7: fix the network parameters of the same-sex and cross-sex models, feed the test subject's data into the network, output the predicted emotion labels and compute the classification accuracy.
Step 8: compile the final recognition accuracies, analyze and evaluate the experimental results, analyze the EEG patterns and brain regions showing gender differences from the brain energy topographic maps, and evaluate key brain regions and key frequency bands using the network weight distribution and other measures.
As shown in fig. 3 and fig. 4, the classification accuracy of the same-sex models of both the support vector machine and the long short-term memory network is generally higher than that of the cross-sex models, indicating that same-sex data have a more consistent distribution while cross-sex training data are mismatched with the test set. The accuracy of the long short-term memory network is also generally higher than that of the support vector machine, showing that fully exploiting temporal information improves recognizer performance; moreover, as model performance improves, the gap between the same-sex and cross-sex models becomes more pronounced. Since graph neural networks are known to achieve excellent performance in emotion recognition, the invention combines the long short-term memory module with a graph neural network to exploit brain functional connectivity and temporal information, which is even more favorable for detecting gender differences between male and female emotional EEG signals.
As shown in fig. 5, the accuracy of the same-sex model is higher than that of the cross-sex model under the different emotions, indicating that male and female EEG signals differ across emotions. Since the EEG data of men and women exhibit different signal patterns under different emotions, the method can be applied to scenarios such as assisting the diagnosis and treatment of affective disorders in men and women.
From fig. 6 it can be seen that, compared with the female brain, the male brain is activated more within restricted areas, and larger brain-energy differences appear for the disgust emotion. Gender differences related to emotion recognition are more significant in the high frequency bands, such as the beta and gamma bands, which can assist the recognition task.
In concrete experiments using the PyTorch neural network framework, with a unified experimental configuration across the data sets, a learning rate of 0.01 and the Adam optimizer, the classification accuracy of the same-sex models was generally higher than that of the cross-sex models, indicating a large mismatch between cross-sex training data and the test subject's data; under most emotions the same-sex model was more accurate than the cross-sex model, confirming that same-sex data are more similar.
Compared with the prior art, the performance improvements of the method are as follows: combining the long short-term memory module with the graph neural network captures brain functional connectivity information and temporal information simultaneously, enhancing network model performance, improving EEG-based pattern recognition accuracy and highlighting the gender differences in the EEG data.
In summary, the invention uses the long short-term memory graph neural network to capture brain functional connectivity information and temporal information simultaneously, introduces a domain classifier to reduce the discrepancy between source and target domains, and converts emotion labels into probability distributions to reduce label noise; it performs the emotion recognition task on several commonly used, representative data sets, overcomes the limitations of existing research as a whole, and verifies the gender differences of EEG activity during emotional experience.
Compared with the prior art, the unique new functions/effects of the invention comprise:
1) First, an EEG feature recognition method based on the long short-term memory graph neural network is proposed, which captures brain functional connectivity information and temporal information of the feature data simultaneously and improves the accuracy of EEG emotion recognition. A support vector machine and a long short-term memory network are also trained as experimental baselines, highlighting the performance of the new model and showing that the gender difference characteristics are generally present in emotion-related EEG activity.
2) Second, the data sets commonly used by the international affective brain-computer interface community, including the Chinese (SEED, SEED-IV, SEED-V) and European (DEAP, DREAMER) data sets, are studied, making the experiments more representative and enhancing the generality and feasibility of the detection method.
3) Finally, on the basis of multiple data sets and high-performance models, the experimental results are analyzed against the common forms of gender difference expression, including neural patterns, key brain regions and key frequency bands, so a more objective, more stable and more robust conclusion can be drawn.
The foregoing embodiments may be modified in many different ways by those skilled in the art without departing from the spirit and scope of the invention, which is defined by the appended claims; all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (8)

1. An EEG feature recognition method based on a long short-term memory graph neural network, characterized in that differential entropy features are extracted from the EEG data sets, the differential entropy features are converted into feature matrices representing graphs, a long short-term memory graph neural network model is then trained, capturing brain functional connectivity information and the temporal structure of the feature data simultaneously, and emotion recognition is finally performed with the trained network model;
the long and short time memory diagram neural network model is a diagram convolution neural network added with a memory module, and the network comprises: memory module, a plurality of picture convolution module, domain classifier, gradient reversal layer, emotion perception learner, pooling layer, full tie-layer when long and short term, wherein: the long-time and short-time memory module captures time sequence dependence information among the characteristic matrixes, the image volume module extracts brain function connection characteristic information related to emotional experience, the domain classifier is used for solving the efficiency problem of a scene crossing a tested scene, the gradient of the domain classifier is reversed by the gradient reversal layer during the back propagation period, the emotion perception learner performs pooling on output characteristics aiming at data label noise by the pooling layer, and finally, the full-connection layer is used for decoding the pooled characteristics and predicting an emotion label;
the training preprocesses the raw data, i.e., extracts differential entropy frequency-domain features as the experimental input data, designs general same-sex and opposite-sex training strategies, trains a recognizer on each subject's data in each dataset using leave-one-out cross-validation to obtain corresponding same-sex and opposite-sex models, tunes the hyper-parameters of each group of models across iterations, and obtains the final experimental results; the training specifically comprises the following steps:
1) first, encapsulate each dataset, divide each dataset into male data and female data, normalize the experimental data, and use the normalized data as input for model training;
2) second, initialize the adjacency matrix A learned by the graph convolutional network, and set the global-connection initial values to A_ij = A_ji = 1 for 9 pairs of global connections that span the left and right hemispheres of the brain, maximizing use of the lateralization of the EEG signal and discovering functional connectivity between the hemispheres;
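A minimal Python sketch (illustrative, not part of the claims) of this initialization, combining the global-connection rule of step 2 with the reciprocal-of-physical-distance rule stated in claim 7; the electrode positions and the chosen global pair below are toy assumptions:

```python
import numpy as np

def init_adjacency(positions, global_pairs):
    """Initialise the adjacency matrix: local connections from the
    reciprocal of the physical distance between electrodes (claim 7),
    global connections A_ij = A_ji = 1 for the given pairs (step 2)."""
    n = len(positions)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = 1.0 / np.linalg.norm(positions[i] - positions[j])
    for i, j in global_pairs:      # symmetric global cross-hemisphere links
        A[i, j] = A[j, i] = 1.0
    return A

# toy example: 4 electrodes on a unit square, one assumed "global" pair
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
A = init_adjacency(pos, global_pairs=[(0, 3)])
```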
3) train a same-sex model and an opposite-sex model for each subject using leave-one-out cross-validation: specifically, take any one subject's data as the test set, use the EEG data of all other subjects of the same sex as the training set X_i_same to train a same-sex model, and use the data of all subjects of the opposite sex as the training set X_i_cross to train an opposite-sex model;
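The splitting rule above can be sketched as follows; only the rule itself comes from the claim, while the subject-id and gender-map structures are illustrative:

```python
def loso_splits(subjects, genders, test_subject):
    """Leave-one-subject-out split into same-sex and opposite-sex
    training sets for the given held-out test subject."""
    g = genders[test_subject]
    x_same = [s for s in subjects if s != test_subject and genders[s] == g]
    x_cross = [s for s in subjects if genders[s] != g]
    return x_same, x_cross

# toy roster: test on s1 (female), train same-sex on s3, cross-sex on s2/s4
subs = ['s1', 's2', 's3', 's4']
gen = {'s1': 'F', 's2': 'M', 's3': 'F', 's4': 'M'}
same, cross = loso_splits(subs, gen, 's1')
```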
4) in the long short-term memory graph neural network, the feature matrix sequence {X_1, X_2, ..., X_T} is fed into the long short-term memory module; for each element of the input sequence, the memory module updates as follows:
input gate: i_t = σ(W_ix·x_t + W_ih·h_{t−1} + b_i);
forget gate: f_t = σ(W_fx·x_t + W_fh·h_{t−1} + b_f);
memory gate: g_t = tanh(W_cx·x_t + W_ch·h_{t−1} + b_c);
memory cell state: c_t = f_t ∘ c_{t−1} + i_t ∘ g_t;
output gate: o_t = σ(W_ox·x_t + W_oh·h_{t−1} + b_o), with h_t = o_t ∘ tanh(c_t);
where h_t is the hidden-layer state at time t, h_{t−1} is the state at time t−1 or the initial hidden-layer state, σ is the logistic sigmoid function, and ∘ denotes the Hadamard product;
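A minimal NumPy sketch (illustrative, not part of the claims) of one update step of the memory module, following the gate equations of step 4; the weight shapes and random initialization are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update: input, forget, memory, and output gates,
    with Hadamard products for the cell-state update."""
    i = sigmoid(W['ix'] @ x_t + W['ih'] @ h_prev + b['i'])   # input gate
    f = sigmoid(W['fx'] @ x_t + W['fh'] @ h_prev + b['f'])   # forget gate
    g = np.tanh(W['cx'] @ x_t + W['ch'] @ h_prev + b['c'])   # memory gate
    c = f * c_prev + i * g                                   # cell state
    o = sigmoid(W['ox'] @ x_t + W['oh'] @ h_prev + b['o'])   # output gate
    h = o * np.tanh(c)                                       # hidden state
    return h, c

rng = np.random.default_rng(0)
d = 3   # toy hidden size
W = {k: rng.standard_normal((d, d)) * 0.1
     for k in ('ix', 'ih', 'fx', 'fh', 'cx', 'ch', 'ox', 'oh')}
b = {k: np.zeros(d) for k in ('i', 'f', 'c', 'o')}
h, c = lstm_step(rng.standard_normal(d), np.zeros(d), np.zeros(d), W, b)
```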
5) for the output X_i of the long short-term memory module, each graph convolution module computes Z_i = S^L · X_i · W, and the output Z_i learns the importance of each brain functional connection;
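A sketch of this graph convolution, reading S as the symmetrically normalized adjacency D^{−1/2}(A + I)D^{−1/2} raised to the power L; this normalization is a common GCN convention and an assumption here, since the claim does not spell it out:

```python
import numpy as np

def graph_conv(A, X, W, L=1):
    """Graph convolution Z = S^L X W with S the symmetrically
    normalised adjacency (assumed convention)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D^{-1/2}
    S = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.linalg.matrix_power(S, L) @ X @ W

# toy 2-node graph with identity features and weights
A = np.array([[0.0, 1.0], [1.0, 0.0]])
X = np.eye(2)
W = np.eye(2)
Z = graph_conv(A, X, W)
```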
6) the output of the graph convolution module then passes through the pooling layer and the fully connected layer, which output the probability distribution of the emotion label Y_i:
Ŷ_i = softmax(FC(Pool(σ(Z_i)))),
where the fully connected layer uses the softmax function as its activation, Pool(·) is global sum pooling, and σ(Z_i) applies the nonlinear transformation σ(x) = max(0, x) to each element of Z_i;
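This decoding head can be sketched as follows; the fully connected weights are illustrative, while the ReLU, global sum pooling, and softmax come from step 6:

```python
import numpy as np

def predict(Z, W_fc, b_fc):
    """ReLU the node features, global-sum-pool over nodes, then a
    softmax-activated fully connected layer."""
    h = np.maximum(0.0, Z)            # sigma(x) = max(0, x)
    pooled = h.sum(axis=0)            # global sum pooling, Pool(.)
    logits = W_fc @ pooled + b_fc
    e = np.exp(logits - logits.max()) # numerically stable softmax
    return e / e.sum()

# toy 2-node, 2-class example with identity FC weights
Z = np.array([[1.0, -2.0], [0.5, 3.0]])
p = predict(Z, W_fc=np.eye(2), b_fc=np.zeros(2))
```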
7) while learning the node representations, a domain classifier is trained to learn domain-invariant features and reduce the difference between the source domain X_S and the target domain X_T; the main job of the domain classifier is to minimize the cross-entropy loss function of the binary classification task:
L_D = −(1/|X_S|) Σ_{x∈X_S} log D(x) − (1/|X_T|) Σ_{x∈X_T} log(1 − D(x)),
which enhances the generalization ability of the model and improves robustness in cross-subject experiments;
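A sketch of this domain-classification loss; the claim's own formula is only an image in the source, so the DANN-style binary cross-entropy below (source labeled 1, target labeled 0) is an assumed reconstruction:

```python
import numpy as np

def domain_loss(d_src, d_tgt, eps=1e-12):
    """Binary cross-entropy minimised by the domain classifier:
    d_src / d_tgt are its predicted probabilities of 'source' for
    source- and target-domain samples respectively."""
    loss_s = -np.log(np.clip(d_src, eps, 1.0)).mean()
    loss_t = -np.log(np.clip(1.0 - d_tgt, eps, 1.0)).mean()
    return loss_s + loss_t

# imperfect predictions give a positive loss
loss = domain_loss(np.array([0.9, 0.8]), np.array([0.2, 0.1]))
```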
8) the learning process of the domain classifier is then assisted by a gradient reversal layer, whose factor is computed as:
λ_p = 2 / (1 + e^{−10p}) − 1,
where p ∈ [0, 1] represents the progress of model training;
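The factor schedule can be sketched as below; the claim's formula is an image in the source, so the schedule with γ = 10 follows the standard DANN recipe and is an assumption:

```python
import numpy as np

def grl_factor(p, gamma=10.0):
    """Gradient-reversal-layer factor as a function of training
    progress p in [0, 1]: 0 at the start, approaching 1 at the end,
    so adversarial gradients ramp up gradually."""
    return 2.0 / (1.0 + np.exp(-gamma * p)) - 1.0

start, end = grl_factor(0.0), grl_factor(1.0)
```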
9) the emotion-aware learner converts the single emotion label into a prior probability distribution using a noise-level factor, and replaces the optimization problem of the graph convolution module with minimizing the KL divergence KL(q(Y_i) ‖ P(Y_i)) between the prior distribution q and the predicted distribution, mitigating label-noise issues within the dataset;
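A sketch of this label-to-prior conversion and KL objective; the exact prior form and the noise level 0.1 are assumptions (the claim fixes only the use of a noise-level factor and a KL divergence):

```python
import numpy as np

def label_prior(y, n_classes, noise=0.1):
    """Turn a hard emotion label into a prior distribution: the true
    class keeps 1 - noise, the remaining mass is shared evenly."""
    q = np.full(n_classes, noise / (n_classes - 1))
    q[y] = 1.0 - noise
    return q

def kl_div(q, p, eps=1e-12):
    """KL(q || p), minimised in place of plain cross-entropy."""
    p = np.clip(p, eps, 1.0)
    return float(np.sum(q * np.log(q / p)))

q = label_prior(1, n_classes=3)            # prior for label y = 1
loss = kl_div(q, np.array([0.1, 0.8, 0.1]))
```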
10) the loss function of the entire model becomes Φ″ = Φ′ + Φ_D; finally, a single-layer fully connected network outputs the prediction results, the recognition accuracies of the same-sex model and the opposite-sex model are computed separately, and the training process is iterated.
2. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 1, wherein the EEG signal modality data used for training comprises: the Chinese SEED, SEED-IV, and SEED-V datasets and the European DEAP and DREAMER datasets.
3. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 1, wherein the preprocessing comprises: performing baseline correction on the raw EEG data in the datasets, down-sampling the data to 200 Hz to speed up data analysis, and band-pass filtering in the 1-75 Hz range to remove noise and artifacts from the data.
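A single-channel sketch of this preprocessing; the filter order, zero-phase filtering, and the assumed 1 kHz original sampling rate are illustrative choices, only the 200 Hz target and 1-75 Hz band come from the claim:

```python
import numpy as np
from scipy import signal

def preprocess(raw, fs_in=1000, fs_out=200, band=(1.0, 75.0)):
    """Baseline-correct, downsample to 200 Hz, and band-pass filter
    1-75 Hz (order-4 Butterworth, zero-phase; both assumptions)."""
    x = raw - raw.mean()                        # baseline correction
    x = signal.resample_poly(x, fs_out, fs_in)  # downsample to 200 Hz
    sos = signal.butter(4, band, btype='bandpass', fs=fs_out, output='sos')
    return signal.sosfiltfilt(sos, x)

raw = np.random.default_rng(1).standard_normal(5000)  # 5 s at 1 kHz
x = preprocess(raw)
```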
4. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 2, wherein the differential entropy feature extraction comprises: applying a short-time Fourier transform to the preprocessed data and extracting the differential entropy feature of each lead in 5 frequency bands: δ: 1-4 Hz, θ: 4-8 Hz, α: 8-14 Hz, β: 14-31 Hz, and γ: 31-50 Hz; because SEED, SEED-IV, and SEED-V use 62-lead EEG caps, they each yield 310-dimensional differential entropy features; because the raw EEG data of DEAP and DREAMER filter out the δ band and use 32-lead and 14-lead EEG caps respectively, they yield 128-dimensional and 56-dimensional differential entropy features; finally, the extracted EEG features are smoothed with a linear dynamical system to remove rapid jitter unrelated to emotion.
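The differential entropy of a band-filtered EEG segment is commonly computed under a Gaussian assumption as 0.5·ln(2πe·σ²); a minimal sketch (the Gaussian closed form is the standard convention for this feature, not spelled out in the claim):

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-filtered segment, Gaussian assumption:
    0.5 * ln(2 * pi * e * var(x))."""
    return 0.5 * np.log(2.0 * np.pi * np.e * np.var(x))

# the five bands named in claim 4, in Hz
bands = {'delta': (1, 4), 'theta': (4, 8), 'alpha': (8, 14),
         'beta': (14, 31), 'gamma': (31, 50)}

x = np.random.default_rng(0).standard_normal(200)  # toy unit-variance segment
de = differential_entropy(x)
```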
5. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 1, wherein the feature transformation comprises: using the InMemoryDataset base class of the PyTorch Geometric library, taking each lead of the extracted frequency-domain features as a node and the connections between leads as edges, converting the feature data into feature matrices, and then converting these into time-series data over a time window T to form the experimental input data.
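The windowing step can be sketched as follows; the non-overlapping window handling and the toy array shapes are assumptions, only the grouping over a time window T comes from the claim:

```python
import numpy as np

def to_sequences(features, T):
    """Stack per-step feature matrices (steps x nodes x bands) into
    non-overlapping windows of length T, dropping any remainder."""
    n = features.shape[0] // T
    return features[:n * T].reshape(n, T, *features.shape[1:])

feats = np.arange(24.0).reshape(6, 2, 2)  # 6 steps, 2 nodes, 2 bands
seqs = to_sequences(feats, T=3)           # 2 sequences of length 3
```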
6. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 1, wherein the long short-term memory module comprises one or more self-connected memory cells and three gate units; for the data of each time step, the memory cells of the neural network extract information from the result of the previous step, so the memory module can retain the temporal dependencies of the data over long spans; since EEG signals are themselves time-series data, the memory module captures and exploits their temporal information, improving recognition accuracy.
7. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 1, wherein the graph convolution module captures local and global connection information among different leads using a sparse adjacency matrix that fits the brain network structure; the matrix is computed from the reciprocal of the physical distance between lead channels; the local connections reflect the anatomical connectivity of brain regions, and the global connections represent the emotion-related functional connectivity of the left and right hemispheres.
8. The electroencephalogram feature recognition method based on the long short-term memory graph neural network according to claim 1, wherein the domain classifier combines transfer learning and adversarial training to reduce the difference between the source and target domains, enhance the generalization ability of the model, and solve the problem of poor recognition performance in cross-subject scenarios.
CN202111335757.2A 2021-11-12 2021-11-12 Gender difference detection method based on electroencephalogram signal emotion recognition Active CN113951883B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111335757.2A CN113951883B (en) 2021-11-12 2021-11-12 Gender difference detection method based on electroencephalogram signal emotion recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111335757.2A CN113951883B (en) 2021-11-12 2021-11-12 Gender difference detection method based on electroencephalogram signal emotion recognition

Publications (2)

Publication Number Publication Date
CN113951883A CN113951883A (en) 2022-01-21
CN113951883B true CN113951883B (en) 2022-08-12

Family

ID=79470158

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111335757.2A Active CN113951883B (en) 2021-11-12 2021-11-12 Gender difference detection method based on electroencephalogram signal emotion recognition

Country Status (1)

Country Link
CN (1) CN113951883B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115038B (en) * 2022-08-30 2022-11-08 合肥心之声健康科技有限公司 Model construction method based on single lead electrocardiosignal and gender identification method
CN116700206B (en) * 2023-05-24 2023-12-05 浙江大学 Industrial control system anomaly detection method and device based on multi-modal neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190130808A (en) * 2018-05-15 2019-11-25 연세대학교 산학협력단 Emotion Classification Device and Method using Convergence of Features of EEG and Face
CN111949812A (en) * 2020-07-10 2020-11-17 上海联影智能医疗科技有限公司 Brain image classification method and storage medium
CN113191225A (en) * 2021-04-19 2021-07-30 华南师范大学 Emotional electroencephalogram recognition method and system based on graph attention network
CN113288146A (en) * 2021-05-26 2021-08-24 杭州电子科技大学 Electroencephalogram emotion classification method based on time-space-frequency combined characteristics
CN113598774A (en) * 2021-07-16 2021-11-05 中国科学院软件研究所 Active emotion multi-label classification method and device based on multi-channel electroencephalogram data

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111419221A (en) * 2020-02-14 2020-07-17 广东司法警官职业学院 Electroencephalogram signal analysis method based on graph convolution network

Also Published As

Publication number Publication date
CN113951883A (en) 2022-01-21

Similar Documents

Publication Publication Date Title
Liu et al. Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network
CN110598793B (en) Brain function network feature classification method
Mensch et al. Learning neural representations of human cognition across many fMRI studies
CN113951883B (en) Gender difference detection method based on electroencephalogram signal emotion recognition
CN114052735B (en) Deep field self-adaption-based electroencephalogram emotion recognition method and system
CN113947127A (en) Multi-mode emotion recognition method and system for accompanying robot
Zheng et al. Adaptive neural decision tree for EEG based emotion recognition
Prasetio et al. The facial stress recognition based on multi-histogram features and convolutional neural network
Liu et al. Voxelhop: Successive subspace learning for als disease classification using structural mri
Pusarla et al. Learning DenseNet features from EEG based spectrograms for subject independent emotion recognition
Zhang et al. Diffusion kernel attention network for brain disorder classification
Deng et al. SFE-Net: EEG-based emotion recognition with symmetrical spatial feature extraction
Yang et al. Mlp with riemannian covariance for motor imagery based eeg analysis
Jinliang et al. EEG emotion recognition based on granger causality and capsnet neural network
Boloukian et al. Recognition of words from brain-generated signals of speech-impaired people: Application of autoencoders as a neural Turing machine controller in deep neural networks
Kumari et al. Automated visual stimuli evoked multi-channel EEG signal classification using EEGCapsNet
Peng et al. MSFF-Net: multi-stream feature fusion network for surface electromyography gesture recognition
Latha et al. Brain tumour detection using neural network classifier and kmeans clustering algorithm for classification and segmentation
Chen et al. DCTNet: Hybrid deep neural network-based EEG signal for detecting depression
Liu et al. A cross-session motor imagery classification method based on Riemannian geometry and deep domain adaptation
Lu et al. Deep learning solutions for motor imagery classification: A Comparison Study
CN116821764A (en) Knowledge distillation-based multi-source domain adaptive EEG emotion state classification method
Kwaśniewska et al. Real-time facial features detection from low resolution thermal images with deep classification models
Chopparapu et al. An efficient multi-modal facial gesture-based ensemble classification and reaction to sound framework for large video sequences
Rammy et al. Sequence-to-sequence deep neural network with spatio-spectro and temporal features for motor imagery classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20220801

Address after: Room 23a, No. 19, Lane 99, Nandan East Road, Xuhui District, Shanghai 200030

Applicant after: Lv Baoliang

Address before: 200240 No. 800, Dongchuan Road, Shanghai, Minhang District

Applicant before: SHANGHAI JIAO TONG University

TR01 Transfer of patent right

Effective date of registration: 20220913

Address after: Room 901, Building A, SOHO Fuxing Plaza, No. 388 Madang Road, Huangpu District, Shanghai, 200025

Patentee after: Shanghai Zero Unique Technology Co.,Ltd.

Address before: Room 23a, No. 19, Lane 99, Nandan East Road, Xuhui District, Shanghai 200030

Patentee before: Lv Baoliang
