CN115105079B - Electroencephalogram emotion recognition method based on self-attention mechanism and application thereof - Google Patents


Info

Publication number
CN115105079B
CN115105079B (application CN202210880984.1A)
Authority
CN
China
Prior art keywords
data
training model
original electroencephalogram
channel
electroencephalogram data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210880984.1A
Other languages
Chinese (zh)
Other versions
CN115105079A (en
Inventor
王忠泉
曾虹
张�杰
何丹娜
占丰平
顾立明
王�锋
敬文磊
陈锐
仲建跃
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Roledith Technology Co ltd
Original Assignee
Hangzhou Roledith Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Roledith Technology Co ltd filed Critical Hangzhou Roledith Technology Co ltd
Priority to CN202210880984.1A priority Critical patent/CN115105079B/en
Publication of CN115105079A publication Critical patent/CN115105079A/en
Application granted granted Critical
Publication of CN115105079B publication Critical patent/CN115105079B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing for noise prevention, reduction or removal
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Public Health (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Power Engineering (AREA)
  • Social Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application provides an electroencephalogram emotion recognition method based on a self-attention mechanism and application thereof, and the method comprises the following steps: S1, acquiring original electroencephalogram data of a testee, obtaining different emotional states of the testee according to the original electroencephalogram data, and assigning corresponding emotion labels; S2, removing interference signals in the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data; S3, establishing a pre-training model based on a self-attention mechanism according to the characteristics of each channel; S4, training the pre-training model by using original electroencephalogram data under different stimuli; and S5, fine-tuning the trained pre-training model through the emotion data set and outputting a detection result. The method improves the emotion recognition accuracy of the model in cross-subject experiments, shortens the network training time, and is also suitable for downstream tasks with small sample data.

Description

Electroencephalogram emotion recognition method based on self-attention mechanism and application thereof
Technical Field
The application relates to a neural electrophysiological signal analysis technology in the field of brain cognition, in particular to an electroencephalogram emotion recognition method based on a self-attention mechanism and application thereof.
Background
Emotion is a complex psychological and physiological state that plays a vital role in life through the brain's responses to physiological changes. In recent years, more and more research has focused on emotion, not only to create emotional user interfaces for Human Machine Interaction (HMI) applications, but also to assess the psychological condition of patients with neurological disorders, such as Parkinson's disease, autism spectrum disorders, schizophrenia, depression, and the like. Emotion recognition therefore employs a variety of methods, mainly focusing on external behavioral characterization, such as face/voice detection, and on internal electrophysiological signal analysis.
Although external behavioral characterizations can sometimes yield better emotion classification performance, they are subjective and frequently ineffective for neurological disorders such as autism.
Since cortical activity directly and objectively reflects changes in emotional state, measurements of its electrophysiological signals, such as electroencephalogram, electrocardiogram, and electromyogram, are often used as the signal source for emotion classification. In addition, although the electroencephalogram signal has a low signal-to-noise ratio (SNR), it offers high temporal resolution, acceptable spatial resolution, and low acquisition cost with convenient, portable acquisition equipment; the electroencephalogram signal has therefore attracted increasing attention in emotion recognition research.
Disclosure of Invention
The embodiment of the application provides an electroencephalogram emotion recognition method based on a self-attention mechanism and application thereof, and aims to solve the problems that the prior art is sensitive to individual differences in electroencephalogram signals and has low accuracy in cross-subject experiments.
The core technology of the invention is mainly that a pre-training model is generated through pre-training and is constructed based on a self-attention mechanism, and fine-tuning is then carried out on the basis of the pre-training model. This not only improves the emotion recognition accuracy of the model in cross-subject experiments, but also reduces the network training time, and is also suitable for downstream tasks with small sample data.
In a first aspect, the present application provides an electroencephalogram emotion recognition method based on a self-attention mechanism, including the following steps:
s1, acquiring original electroencephalogram data of a testee, obtaining different emotional states of the testee according to the original electroencephalogram data, and giving corresponding emotional labels;
s2, removing interference signals in the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data;
s3, creating a pre-training model based on an attention mechanism according to the characteristics of each channel;
s4, training the pre-training model by using original electroencephalogram data under different stimuli;
and S5, finely adjusting the trained pre-training model through the emotion data set and outputting a detection result.
Further, in step S1, an integrated E-prime experimental paradigm is designed around emotional-cognition factors, and multiple spontaneous and evoked EEG signals of the testee are collected to obtain the original electroencephalogram data. Because blink signals are present in the acquired EEG data, removing them yields cleaner EEG data.
Further, in step S2, the blink artifacts in the original electroencephalogram data are removed, and the characteristics of each channel are then extracted via band-pass filtering and the power spectral density.
Further, in step S2, an independent component analysis method is used to remove blink artifacts from the original electroencephalogram data. Blink signals caused by blinking are present in the acquired EEG data; to obtain clean EEG data, such blink signals can be removed using the independent component analysis method.
Further, in step S3, the characteristics of each channel, together with three special channels for start, separation and end, are input into the pre-training model as a feature matrix.
Further, in step S3, each row in the feature matrix represents all features of one channel.
Further, in step S4, the pre-training model is converged using a back propagation algorithm so that its LOSS value becomes small and stable. The back propagation algorithm corrects the model parameters so that the model obtains better classification parameters and achieves better classification performance.
Further, in step S5, the original electroencephalogram data of a plurality of channels are input each time, and the data of all the channels are classified by the trained pre-training model.
In a second aspect, the present application provides an electroencephalogram emotion recognition apparatus based on a self-attention mechanism, including:
the data acquisition module is used for acquiring original electroencephalogram data of the testee, obtaining different emotion states of the testee according to the original electroencephalogram data and endowing the testee with corresponding emotion labels to obtain an emotion data set;
the data processing module is used for removing interference signals in the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data;
the model creating and training module is used for creating a pre-training model based on an attention mechanism according to the characteristics of each channel and training the pre-training model by using original electroencephalogram data under different stimuli;
the fine tuning module is used for finely tuning the trained pre-training model through the emotion data set;
and the output module is used for outputting the detection result.
In a third aspect, the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the computer program to execute the electroencephalogram emotion recognition method based on the self-attention mechanism.
In a fourth aspect, the present application provides a readable storage medium having stored therein a computer program comprising program code for controlling a process to execute a process, the process comprising a method for electroencephalogram emotion recognition based on a self-attention mechanism according to the above.
The main contributions and innovation points of the invention are as follows: 1. Compared with the prior art, the method and the device generate a pre-training model through pre-training, construct it on the basis of a self-attention mechanism, and then carry out fine-tuning on the basis of the pre-training model. This not only improves the model's emotion recognition accuracy in cross-subject experiments but also reduces the network training time; it is suitable for downstream tasks with small sample data and better addresses the problem of low cross-subject accuracy. Moreover, good results can still be achieved with less data.
2. Compared with the prior art, a self-attention residual model, a deep learning model, is constructed by combining the advantages of the self-attention mechanism and residual connections;
3. Compared with the prior art, a self-attention mechanism is applied directly to the EEG features to obtain better features, which greatly simplifies the model complexity and reduces the model pre-training time. Secondly, a residual network is introduced, which has been shown to effectively improve the classification performance of the model and alleviate gradient explosion or gradient disappearance during training, whereas models in the prior art make no provision for these potential problems (gradient explosion or gradient disappearance);
4. the existing technology for extracting features by adopting a combination mode of an LSTM and a CNN increases the complexity of the model, and the problem of gradient disappearance or gradient explosion in the training process is caused by the complexity of the model. The model of the application introduces a residual block, which is a recognized method for solving the problems of gradient disappearance and gradient explosion. The stability of model training is fundamentally ensured.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flow chart of an electroencephalogram emotion recognition method based on a self-attention mechanism according to an embodiment of the present application;
FIG. 2 is a model creation flow according to an embodiment of the present application;
fig. 3 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the methods may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
In emotion recognition, the frequency bands of brain electricity, such as delta (1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz), and gamma (31-50 Hz), remain the most popular features. In addition, features such as Power Spectral Density (PSD), event-related desynchronization/synchronization (ERD/ERS), and event-related potentials (ERP) perform well in electroencephalogram emotion analysis. In recent years, in emotion recognition based on electroencephalogram signals, various methods have been used for feature extraction and classification. For example, Fourier-transform-based methods are widely used for EEG emotion analysis. Researchers have combined short-time Fourier transform (STFT) and Mutual Information (MI) for short-time emotion assessment in a recall paradigm, obtaining better classification accuracy than traditional methods, and have identified music-induced emotional responses from brain activity using the short-time Fourier transform (STFT) and an Asymmetry Index (AI). In addition, the correlation between brain electrodynamics and music-induced emotional states has been further evaluated via the Fast Fourier Transform (FFT). Wavelet Transform (WT), Common Spatial Pattern (CSP), nonlinear analysis methods, and other analysis methods are also widely used for emotional characteristic analysis of electroencephalogram signals.
Based on this, the present invention identifies emotional states based on EEG techniques.
Example one
The application aims to provide a self-attention-based emotion recognition method for electroencephalograms. First, in the pre-training stage, the model learns general knowledge about the signals, and the trained pre-training model is then used as the initialization model for the downstream task. That is, in the invention, the emotional electroencephalogram signals evoked by the stimulation task serve as the downstream task, and the model is pre-trained before being fine-tuned on these data. Pre-training on the data and then fine-tuning the model on a specific task alleviates, to some extent, the model's sensitivity to individual differences in electroencephalogram signals and improves the model's accuracy in cross-subject experiments.
The method mainly comprises: collecting EEG data, chiefly spontaneous and evoked EEG signals, on the basis of a designed experimental paradigm; preprocessing and analyzing the collected raw data; extracting attention-weighted signal features; and inputting them into the model so as to identify the emotional state of a child while the intelligence-testing instrument is used.
Specifically, an embodiment of the present application provides an electroencephalogram emotion recognition method based on an attention-based mechanism, which may specifically refer to fig. 1, and the method includes the following steps:
s1, acquiring original electroencephalogram data of a testee, obtaining different emotional states of the testee according to the original electroencephalogram data, and giving corresponding emotional labels;
In this step, an integrated E-prime experimental paradigm is designed around emotional-cognition factors, and multiple spontaneous and evoked EEG signals of the tested person are collected to obtain the original EEG data of the tested person;
in this embodiment, an experimental paradigm is designed first, an integrated E-prime experimental paradigm is designed for emotional cognition factors, and multiple numbers of tested spontaneous and evoked EEG signals are collected to provide a data base for subsequent research. Using DSI-24 brain electrical acquisition equipment to acquire the tested original brain electrical data, and marking corresponding emotion labels (such as positive, neutral and negative) according to different emotion states.
S2, removing interference signals in the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data;
In this step, an independent component analysis method is used to remove blink artifacts from the original electroencephalogram data, the data are filtered by a band-pass filter, and the characteristics of each channel are extracted via the power spectral density;
in this embodiment, data preprocessing, using a band pass filter of 0-50Hz, independent analysis (ICA) is widely used to eliminate artifacts caused by blinking, and therefore, the ICA is used first to remove the blinking artifacts in the original electroencephalogram data; the EEG data is then filtered by a band pass filter to between 0.1 Hz and 30Hz, and finally the Power Spectral Density (PSD) is used to extract the features of each channel.
The specific conversion formula is as follows:
X(f) = ∫ x(t)·e^(−i2πft) dt
where X(f) = F.T.{x(t)} is the continuous Fourier transform of x(t), f is a frequency component, t is time, x(t) is the EEG signal voltage at time t, and X(f) is the corresponding spectral amplitude at frequency f; the power spectral density is then obtained from |X(f)|².
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that the subcomponents are non-Gaussian signals and statistically independent of each other. ICA is a special case of blind source separation.
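To make this preprocessing concrete, the band-pass filtering and PSD feature extraction can be sketched in Python with SciPy. The sampling rate, filter order, Welch segment length, and feature count are illustrative assumptions (the patent specifies only the 0.1-30 Hz band), and the ICA blink-removal stage is omitted for brevity.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

def preprocess_channel(x, fs=300.0, band=(0.1, 30.0), n_features=30):
    """Band-pass filter one EEG channel, then extract PSD features.

    fs and n_features are illustrative; the patent specifies the
    0.1-30 Hz band but not the sampling rate or feature count.
    """
    nyq = fs / 2.0
    # 4th-order Butterworth band-pass in second-order sections for stability
    sos = butter(4, [band[0] / nyq, band[1] / nyq], btype="band", output="sos")
    filtered = sosfiltfilt(sos, x)
    # Welch power spectral density; keep the first n_features bins as features
    _freqs, psd = welch(filtered, fs=fs, nperseg=min(256, len(filtered)))
    return psd[:n_features]

rng = np.random.default_rng(0)
x = rng.standard_normal(3000)      # 10 s of synthetic "EEG" at 300 Hz
features = preprocess_channel(x)
print(features.shape)              # (30,)
```

In practice the ICA step would run before this filter on the multi-channel recording; only the per-channel part of the pipeline is shown here.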
S3, according to the characteristics of each channel, establishing a pre-training model based on an attention mechanism;
In this step, the characteristics of each channel, together with the three start, separation and end channels, form the feature matrix input to the pre-training model; each row of the feature matrix represents all the characteristics of one channel;
in the present embodiment, the following are input: after the extraction of the eye electrical and PSD characteristics, each channel of the processed EEG data comprises n PSD characteristics, and the PSD characteristics of 2m EEG channels are input each time, so that the input data can be represented by a characteristic matrix. In addition, three special channels are added, representing start/slice/end, respectively, so each input to the model is a (2m + 3) × n feature matrix.
E.g., for n = 30, each row of the matrix input to the model represents the 30 PSD features of one channel.
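The (2m + 3) × n input described above can be assembled as in the following sketch. Encoding the start/separator/end channels as constant rows is an assumption for illustration only; the patent says three special channels are added but does not specify their values.

```python
import numpy as np

def build_input(psd_a, psd_b):
    """Stack two m-channel PSD feature blocks into a (2m + 3) x n matrix.

    psd_a and psd_b are the PSD features of the first m and last m
    channels; three marker rows (start/separator/end) are interleaved.
    """
    m, n = psd_a.shape
    assert psd_b.shape == (m, n)
    start = np.full((1, n), 1.0)   # hypothetical start-marker row
    sep   = np.full((1, n), 2.0)   # hypothetical separator row
    end   = np.full((1, n), 3.0)   # hypothetical end-marker row
    return np.vstack([start, psd_a, sep, psd_b, end])

a = np.zeros((12, 30))   # m = 12 channels, n = 30 PSD features each
b = np.ones((12, 30))
X = build_input(a, b)
print(X.shape)           # (27, 30) = (2*12 + 3, 30)
```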
As shown in fig. 2, the model creating step includes:
S31, feature mapping layer: first, X undergoes a feature transformation; the transformation matrices W_q, W_k and W_v are multiplied with X to obtain the three matrices Q, K and V, with the formulas:
Q = X·W_q
K = X·W_k
V = X·W_v
S32, Self-attention (self-attention mechanism) layer: the attention weight of each channel is calculated from Q and K and multiplied with V to obtain the output of the attention layer. This lets the model automatically learn to keep high attention on important channels and low attention on noisy or irrelevant channels. The formula is as follows:
Attention(Q, K, V) = softmax(Q·Kᵀ / √d_k)·V
where d_k denotes the dimension of the input vectors;
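Steps S31-S32 amount to standard scaled dot-product self-attention. A minimal NumPy sketch, with illustrative matrix sizes, is:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax along the given axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """S31-S32: project X to Q, K, V, then apply scaled dot-product
    attention so each channel row is re-weighted by its relevance to
    the other channels."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # channel-to-channel weights
    return weights @ V

rng = np.random.default_rng(1)
X = rng.standard_normal((27, 30))              # (2m + 3) x n feature matrix
Wq, Wk, Wv = (rng.standard_normal((30, 16)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                               # (27, 16)
```

The projection width 16 is arbitrary; in the patent's residual layout the value projection would keep the input width so the residual addition in S33 is well-defined.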
S33, residual network layer: to alleviate the gradient disappearance problem, a residual network layer is added after the Self-attention layer;
S34, layer normalization layer: to let the model better learn higher-level ideal characteristics and to keep the data distribution from drifting too much, a layer normalization layer is added together with the residual network layer after the Self-attention layer;
S35, Feed Forward layer: this layer is a structure comprising three fully connected layers; the output of the Self-attention layer, after residual connection and layer normalization, is used as the input of the Feed Forward layer, and the output of the last fully connected layer is the output of the Feed Forward layer;
S36, repeating steps S31-S35 five or six times (or more, such as 10 or 20 times) forms the whole pre-training model;
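Steps S31-S36 can be sketched as a stack of encoder blocks. This is a minimal NumPy illustration under stated assumptions: the layer normalization is applied after each residual addition (the patent does not fix the ordering), the feed-forward sub-layer uses three fully connected layers as described, and the weight shapes and scales are arbitrary.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-5):
    # Normalize each row (channel) to zero mean and unit variance
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def encoder_block(X, p):
    # S31-S32: Q/K/V projection and scaled dot-product self-attention
    Q, K, V = X @ p["Wq"], X @ p["Wk"], X @ p["Wv"]
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V
    # S33-S34: residual connection followed by layer normalization
    X = layer_norm(X + attn)
    # S35: feed-forward sub-layer with three fully connected layers,
    # again with residual connection and layer normalization
    h = np.maximum(0, X @ p["W1"])
    h = np.maximum(0, h @ p["W2"])
    ff = h @ p["W3"]
    return layer_norm(X + ff)

rng = np.random.default_rng(2)
d = 30
p = {k: rng.standard_normal((d, d)) * 0.1
     for k in ("Wq", "Wk", "Wv", "W1", "W2", "W3")}
X = rng.standard_normal((27, d))
for _ in range(6):            # S36: stack the block 5-6 times
    X = encoder_block(X, p)
print(X.shape)                # (27, 30)
```

A real implementation would use a deep-learning framework with learned, per-block parameters; sharing one parameter set across blocks here is purely for brevity.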
s4, training the pre-training model by using original electroencephalogram data under different stimuli (a large number of different tasks);
In this step, the pre-training model is converged using a back propagation algorithm so that its LOSS value becomes small and stable;
in this embodiment, in the pre-training phase, the LOSS value of the objective function has two LOSSs, where one LOSS is a LOSS value obtained by classifying output corresponding to the start of a particular channel and a real label, and is a binary output of 0/1, where the output 0 represents that the first m channels and the last m channels belong to the same task, and when the output is 1, the output represents that the first m channels and the last m channels belong to different tasks. Regarding another LOSS value, 2m electroencephalogram channels are randomly inactivated with a probability of 0.05, namely the input of the channel is 0, the input data of the inactivated channel needs to be predicted by the pre-trained model according to other channels which are not inactivated, and the second LOSS value can be obtained by the real value and the predicted value of the pre-trained model. By back propagation, the LOSS value becomes small and stable after the model converges.
The back propagation algorithm, called the BP algorithm for short, is a learning algorithm suitable for multilayer neural networks and is built on the gradient descent method. The input-output relationship of a BP network is essentially a mapping: an n-input, m-output BP neural network performs a continuous mapping from n-dimensional Euclidean space to a finite field in m-dimensional Euclidean space, and this mapping is highly nonlinear. Its information-processing ability comes from the repeated composition of simple nonlinear functions, so it has strong function-approximation ability. It can better achieve signal identification and signal-noise separation under conditions of wide frequency bands, low signal-to-noise ratios, and few signal modes, making the model of the application more accurate.
And S5, finely adjusting the trained pre-training model through the emotion data set and outputting a detection result.
In this step, the original electroencephalogram data of a plurality of channels are input each time, and the data of all channels are classified by the trained pre-training model.
Wherein, the emotion data set (including the emotion-induced EEG signal and corresponding different emotion labels, such as happy, neutral, sad, etc.) is data disclosed in the art, and belongs to the prior art, and can be obtained from a network or other approaches.
In this embodiment, in the fine-tuning stage, the electroencephalogram channels are no longer randomly deactivated; electroencephalogram data of M channels are input each time, and the pre-trained model's start-channel output classifies the emotional state of the M-channel data. The detection result (e.g., positive, neutral, negative) is then output.
Finally, cross-subject testing is performed. The test results are given in Table 1 below:
[Table 1 is provided as an image in the original patent; the numerical results are not reproducible here.]
TABLE 1
As can be seen from Table 1, the accuracy was high.
Statistically, the expected accuracy of a three-way classification is 33.3%, and the accuracy of traditional machine learning methods is about 60%; the classification results of this method are far higher than these values. Table 1 is a statistical summary of the test results recorded for the method of the present application; SEED and DEAP are the published names of the emotion data sets used.
Example two
Based on the same conception, the application also provides an electroencephalogram emotion recognition device based on a self-attention mechanism, which comprises:
the data acquisition module is used for acquiring original electroencephalogram data of a testee, obtaining different emotion states of the testee according to the original electroencephalogram data and endowing corresponding emotion labels to the testee so as to obtain an emotion data set;
the data processing module is used for removing interference signals in the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data;
the model creating and training module is used for creating a pre-training model based on an attention mechanism according to the characteristics of each channel and training the pre-training model by using original electroencephalogram data under different stimuli;
the fine tuning module is used for finely tuning the trained pre-training model through the emotion data set;
and the output module is used for outputting the detection result.
EXAMPLE III
The present embodiment also provides an electronic device, referring to fig. 3, comprising a memory 404 and a processor 402, wherein the memory 404 stores a computer program, and the processor 402 is configured to execute the computer program to perform the steps of any of the above method embodiments.
Specifically, the processor 402 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present application.
Memory 404 may include mass storage for data or instructions. By way of example and not limitation, memory 404 may include a hard disk drive (HDD), a floppy disk drive, a solid-state drive (SSD), flash memory, an optical disk, a magneto-optical disk, tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 404 may include removable or non-removable (or fixed) media, where appropriate. Memory 404 may be internal or external to the data processing apparatus, where appropriate. In a particular embodiment, memory 404 is non-volatile memory. In certain embodiments, memory 404 includes read-only memory (ROM) and random-access memory (RAM). The ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory (FLASH), or a combination of two or more of these, where appropriate. The RAM may be static random-access memory (SRAM) or dynamic random-access memory (DRAM), where the DRAM may be fast page mode DRAM (FPMDRAM), extended data output DRAM (EDODRAM), synchronous DRAM (SDRAM), or the like.
Memory 404 may be used to store or cache various data files for processing and/or communication use, as well as possibly computer program instructions for execution by processor 402.
The processor 402 reads and executes the computer program instructions stored in the memory 404 to implement any one of the self-attention mechanism-based electroencephalogram emotion recognition methods in the above embodiments.
Optionally, the electronic apparatus may further include a transmission device 406 and an input/output device 408, where the transmission device 406 is connected to the processor 402, and the input/output device 408 is connected to the processor 402.
The transmission device 406 may be used to receive or transmit data via a network. Specific examples of the above network may include wired or wireless networks provided by the communication provider of the electronic device. In one example, the transmission device includes a Network Interface Controller (NIC), which can connect to other network devices through a base station so as to communicate with the internet. In another example, the transmission device 406 may be a Radio Frequency (RF) module used to communicate with the internet wirelessly.
The input-output device 408 is used to input or output information. In the present embodiment, the input information may be raw electroencephalogram data or the like, and the output information may be a detection result of a subject or the like.
Example four
This embodiment also provides a readable storage medium storing a computer program, the computer program comprising program code for controlling a process to execute the process, the process comprising the electroencephalogram emotion recognition method based on a self-attention mechanism according to the first embodiment.
It should be noted that, for specific examples in this embodiment, reference may be made to examples described in the foregoing embodiments and optional implementations, and details of this embodiment are not described herein again.
In general, the various embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects of the invention may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
Embodiments of the invention may be implemented by computer software executable by a data processor of the mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Computer software or programs (also referred to as program products) including software routines, applets and/or macros can be stored in any device-readable data storage medium and they include program instructions for performing particular tasks. The computer program product may comprise one or more computer-executable components configured to perform embodiments when the program is run. The one or more computer-executable components may be at least one software code or a portion thereof. Further in this regard it should be noted that any block of the logic flow as in the figures may represent a program step, or an interconnected logic circuit, block and function, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as, for example, DVDs and data variants thereof, CDs. The physical medium is a non-transitory medium.
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above examples merely illustrate several embodiments of the present application; while the description is specific and detailed, it is not to be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present application should be subject to the appended claims.

Claims (4)

1. An electroencephalogram emotion recognition method based on a self-attention mechanism is characterized by comprising the following steps:
s1, acquiring original electroencephalogram data of a testee, obtaining different emotional states of the testee according to the original electroencephalogram data, and giving corresponding emotional labels;
wherein an integrated E-Prime experimental paradigm is designed around emotional-cognitive factors, and spontaneous and evoked EEG signals of a plurality of testees are collected to obtain the original EEG data of the testees;
s2, removing interference signals in the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data;
removing blink artifacts from the original electroencephalogram data by independent component analysis, filtering the data to between 0.1 and 30 Hz with a 0 to 50 Hz band-pass filter, and extracting the characteristics of each channel through power spectral density;
s3, inputting the characteristics of each channel and the three channels of the beginning, the segmentation and the ending into a pre-training model as a characteristic matrix, wherein each row in the characteristic matrix represents all the characteristics of one channel, and the pre-training model is created based on an attention mechanism;
s4, training the pre-training model by using original electroencephalogram data under different stimuli, and converging the pre-training model by using a back propagation algorithm so as to enable the LOSS value of the pre-training model to be small and stable;
and S5, fine-tuning the trained pre-training model on the emotion data set and outputting a detection result, wherein original electroencephalogram data of a plurality of channels are input each time, and the data of all channels are classified by the trained pre-training model.
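The preprocessing in step S2 (band-pass filtering followed by per-channel power-spectral-density features) can be sketched as follows. This is a minimal illustration, not the patented implementation: the sampling rate, filter order, and frequency bands are assumptions chosen for the example, and the independent-component-analysis blink removal (typically done with a dedicated EEG toolbox) is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

def bandpass(data, fs, lo=0.1, hi=30.0, order=4):
    """Zero-phase band-pass filter applied along the time axis (last axis)."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, data, axis=-1)

def psd_features(data, fs, bands=((1, 4), (4, 8), (8, 13), (13, 30))):
    """Per-channel band powers from a Welch power spectral density estimate.

    `bands` are illustrative delta/theta/alpha/beta ranges in Hz."""
    freqs, pxx = welch(data, fs=fs, nperseg=min(256, data.shape[-1]))
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(pxx[:, mask].mean(axis=-1))
    return np.column_stack(feats)          # shape: (n_channels, n_bands)

# Illustrative use: 32 channels, 4 s of synthetic EEG sampled at 250 Hz.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((32, 1000))
features = psd_features(bandpass(eeg, fs=250.0), fs=250.0)
print(features.shape)                      # (32, 4)
```

Each row of `features` would then become one row of the characteristic matrix fed to the pre-training model in step S3.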
2. An electroencephalogram emotion recognition device based on a self-attention mechanism, characterized by comprising:
the data acquisition module, used for acquiring original electroencephalogram data of a testee, obtaining different emotional states of the testee from the original electroencephalogram data, and assigning corresponding emotion labels so as to obtain an emotion data set;
the data processing module, used for removing interference signals from the original electroencephalogram data and extracting the characteristics of each channel in the original electroencephalogram data, by removing blink artifacts through independent component analysis, filtering the data to between 0.1 and 30 Hz with a 0 to 50 Hz band-pass filter, and extracting the characteristics of each channel through power spectral density;
the model creating and training module, used for inputting the characteristics of each channel, together with three special channels marking the beginning, the segmentation and the ending, into a pre-training model as a characteristic matrix, wherein each row of the characteristic matrix represents all the characteristics of one channel and the pre-training model is created based on a self-attention mechanism, training the pre-training model with original electroencephalogram data collected under different stimuli, and converging the pre-training model with a back propagation algorithm until the loss value of the pre-training model becomes small and stable;
the fine tuning module, used for fine-tuning the trained pre-training model on the emotion data set, wherein original electroencephalogram data of a plurality of channels are input each time and the data of all channels are classified by the trained pre-training model;
and the output module is used for outputting the detection result.
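The input layout described in the claims, per-channel feature rows plus begin/segmentation/end channels fed to an attention-based model, resembles BERT-style special tokens. Below is a minimal numpy sketch of single-head scaled dot-product self-attention over such a matrix; all dimensions, the random weights, and the placeholder special-row embeddings are illustrative assumptions, not the patented model (in training, the special rows and projection matrices would be learned parameters).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over the rows of X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # row i attends over all rows
    return A @ V, A

rng = np.random.default_rng(1)
n_channels, n_feats, d = 32, 4, 16
chan_feats = rng.standard_normal((n_channels, n_feats))

# Placeholder embeddings for the begin / segmentation / end special channels.
special = rng.standard_normal((3, n_feats))
X = np.vstack([special[:1], chan_feats, special[1:]])  # begin, channels, seg, end

Wq, Wk, Wv = (rng.standard_normal((n_feats, d)) * 0.1 for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)   # (35, 16) (35, 35)
```

Each attention row is a probability distribution over all 35 rows, so every channel's output mixes information from every other channel, which is the cross-channel interaction the self-attention mechanism contributes here.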
3. An electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the method for electroencephalogram emotion recognition based on a self-attention mechanism of claim 1.
4. A readable storage medium having stored therein a computer program, the computer program comprising program code for controlling a process to execute a process, the process comprising the electroencephalogram emotion recognition method based on a self-attention mechanism according to claim 1.
CN202210880984.1A 2022-07-26 2022-07-26 Electroencephalogram emotion recognition method based on self-attention mechanism and application thereof Active CN115105079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210880984.1A CN115105079B (en) 2022-07-26 2022-07-26 Electroencephalogram emotion recognition method based on self-attention mechanism and application thereof


Publications (2)

Publication Number Publication Date
CN115105079A CN115105079A (en) 2022-09-27
CN115105079B true CN115105079B (en) 2022-12-09

Family

ID=83333307


Country Status (1)

Country Link
CN (1) CN115105079B (en)




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant