CN115770043A - Dream emotion recognition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115770043A
CN115770043A (application number CN202211468838.4A)
Authority
CN
China
Prior art keywords
lead
signal
dream
time
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211468838.4A
Other languages
Chinese (zh)
Inventor
马鹏程
卢正毅
王晓岸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Brain Up Technology Co ltd
Original Assignee
Beijing Brain Up Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Brain Up Technology Co ltd filed Critical Beijing Brain Up Technology Co ltd
Priority to CN202211468838.4A priority Critical patent/CN115770043A/en
Publication of CN115770043A publication Critical patent/CN115770043A/en
Pending legal-status Critical Current

Classifications

    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 - Reducing energy consumption in communication networks
    • Y02D30/70 - Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application discloses a dream emotion recognition method and device, electronic equipment, a storage medium and a dream emotion recognition system, belonging to the technical field of new-generation information technology. The method comprises the following steps: acquiring a multi-lead electroencephalogram signal of a user during sleep; preprocessing the multi-lead electroencephalogram signal to obtain a multi-lead preprocessed signal; performing empirical mode decomposition on the multi-lead preprocessed signal to obtain a multi-lead multi-order intrinsic mode function; carrying out sleep staging on the multi-lead electroencephalogram signal according to the multi-lead multi-order intrinsic mode function to obtain a sleep staging result; determining a REM period signal according to the sleep staging result; and recognizing the REM period signal to obtain a dream emotion recognition result. This technical scheme improves the reliability of dream emotion recognition.

Description

Dream emotion recognition method and device, electronic equipment and storage medium
Technical Field
The application belongs to the technical field of new-generation information, and particularly relates to a dreaming emotion recognition method and device, electronic equipment and a storage medium.
Background
Emotions are the psychological and physiological responses that a person produces to external things. Accurately identifying emotions occupies an important position in human-computer interaction research. Because electroencephalogram signals are objective and difficult to disguise, their application in the field of waking emotion recognition has attracted wide attention. Because the time at which dreams occur is uncertain, collecting dream electroencephalogram signals together with the corresponding emotional states is difficult, and related research on dream electroencephalogram signals is relatively scarce.
When dream emotion is assessed, it is usually inferred indirectly or collected through questionnaires after waking, which greatly reduces the reliability of dream emotion analysis.
Disclosure of Invention
In order to at least solve the problem that the reliability of dreaming emotion analysis is low, the application provides a dreaming emotion recognition method, a dreaming emotion recognition device, an electronic device, a storage medium and a dreaming emotion recognition system.
In a first aspect, the present application provides a method for recognizing a mood of dream, comprising:
acquiring a multi-lead electroencephalogram signal of a user during sleep;
preprocessing the multi-lead electroencephalogram signals to obtain multi-lead preprocessed signals;
carrying out empirical mode decomposition on the multi-lead preprocessed signal to obtain a multi-lead multi-order eigenmode function;
carrying out sleep staging on the multi-lead electroencephalogram signals according to the multi-lead multi-order intrinsic mode functions to obtain sleep staging results;
determining a Rapid Eye Movement (REM) period signal according to a sleep stage result;
and identifying the REM period signal to obtain a dream emotion identification result.
In a second aspect, the present application provides a dreaming emotion recognition apparatus, comprising:
the electroencephalogram signal acquisition module is used for acquiring multi-lead electroencephalogram signals of a user during sleep;
the signal preprocessing module is used for preprocessing the multi-lead electroencephalogram signals to obtain multi-lead preprocessing signals;
a mode function obtaining module for performing empirical mode decomposition on the multi-lead preprocessing signal to obtain a multi-lead multi-order intrinsic mode function;
the sleep stage obtaining module is used for carrying out sleep stage on the multi-lead electroencephalogram signals according to the multi-lead multi-order intrinsic mode functions to obtain sleep stage results;
the REM period signal determining module is used for determining a rapid eye movement REM period signal according to a sleep staging result; and the dream emotion recognition module is used for recognizing the REM period signal to obtain a dream emotion recognition result.
In a third aspect, the present application provides an electronic device, comprising:
a memory and a processor; the processor is connected with the memory and is configured to execute the dream emotion recognition method based on the instructions stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the above-mentioned dream emotion recognition method.
The beneficial effect that technical scheme that this application provided brought is:
the method comprises the steps of obtaining multi-lead electroencephalogram signals when a user sleeps, preprocessing the electroencephalogram signals to obtain preprocessed signals, carrying out empirical mode decomposition on the preprocessed signals to obtain multi-level intrinsic mode functions, carrying out sleep staging on the multi-lead electroencephalogram signals according to the multi-level intrinsic mode functions of the leads to obtain sleep staging results, determining REM-stage signals according to the sleep staging results, and identifying the REM-stage signals to obtain dream emotion identification results. The dreams in the REM period have rich emotional information, so the dreams are identified by adopting the REM period signals, and the reliability of dreams identification is improved. When the REM period is analyzed, the multi-lead multi-order eigenmode function is used as a judgment signal instead of using an original electroencephalogram signal, signals strongly related to the REM period are further extracted, the accuracy of REM period identification can be improved, and therefore the reliability of dreaming emotion identification is improved.
Drawings
Fig. 1 is a schematic flowchart of a dream emotion recognition method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating a result of empirical mode decomposition according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a dream emotion recognition apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a dream emotion recognition system according to an embodiment of the present application.
Detailed Description
The present application will be described in detail below with reference to the embodiments and the attached drawings.
Referring to fig. 1, an embodiment of the present application provides a method for recognizing a dream mood, which includes the following steps:
step 101, acquiring a multi-lead electroencephalogram signal of a user during sleep.
An electroencephalogram (EEG) is the overall reflection, on the cerebral cortex or scalp surface, of the electrophysiological activity of brain nerve cells, and it contains a large amount of physiological and disease information. Diagnostic results can therefore be obtained by studying and analyzing the EEG and applied to assist treatment supervision, for example of people with sleep disorders or mental illness. While sleeping, a user typically experiences two sleep periods: the NREM (Non-Rapid Eye Movement) period and the REM (Rapid Eye Movement) period. The NREM period can be divided into three stages: the N1, N2 and N3 sleep stages. The human brain responds differently in different sleep stages.
The electroencephalogram signals are acquired by wearing a brain-computer device. The device may be attached to or worn on the head, which is not limited in this embodiment. The device is multi-channel, and the electroencephalogram signals collected in this way can be called multi-lead electroencephalogram signals.
The following describes the brain-computer device configuration in detail with an example configuration:
1) Gold-cup electrodes are adopted, the impedance is adjusted to be less than 20 kilo-ohms, and the electrodes are designed in a wearable form.
2) The electrode distribution covers multiple sites, i.e. the electroencephalogram signals have multiple acquisition positions, specifically the frontal lobe, frontotemporal lobe and temporal lobe, and the reference electrodes are the left and right mastoids behind the ears. The sampling electrodes are F3, F4, FT7, FT8, T7 and T8, where F denotes the frontal lobe, T the temporal lobe, and FT the frontotemporal region. The electrode sites are referenced to the 64-channel international 10-20 system (10-20 electrode system), with a sampling rate of 200 Hz. Each electrode corresponds to one channel, so the electroencephalogram signal has 6 channels (acquisition positions); using 6 channels reduces the computational complexity of the classifier that recognizes the REM period signal to obtain the dream emotion recognition result.
3) The acquisition time is preferably an entire night of sleep, i.e. the user is in a sleep state, lying still with the head and body kept from shaking and the eyes closed, while the ambient temperature and noise are kept at normal, appropriate levels.
4) The relevant parameters at acquisition may be set as follows: the scalp EEG frequency range is 0 Hz to 50 Hz, with amplitude values from 0.5 uV to 100 uV.
The electrodes are distributed over the frontal lobe, frontotemporal lobe and temporal lobe, so the collected electroencephalogram signals concentrate on emotion-related electroencephalogram activity in the REM period, provide an accurate depiction of the left- and right-hemisphere electroencephalogram signals, and can to a certain extent reflect the neural activity of the hippocampus during sleep.
Step 102, preprocessing the multi-lead electroencephalogram signal to obtain a multi-lead preprocessed signal.
During signal acquisition, many interference signals are encountered; to reduce their influence, the acquired signals are preprocessed. The preprocessing includes: down-sampling, amplification, filtering, encoding, Independent Component Analysis (ICA) and normalization; in other embodiments it may further include removal of bad segments, which is not limited in this embodiment. In the processing, each lead electroencephalogram signal is preprocessed to obtain the corresponding lead preprocessed signal, and all lead preprocessed signals are collectively called the multi-lead preprocessed signal.
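A minimal preprocessing sketch in the spirit of the steps above, using SciPy and scikit-learn; the filter band, filter order, down-sampling factor and the choice of FastICA are illustrative assumptions rather than values fixed by the application:

```python
import numpy as np
from scipy.signal import butter, filtfilt, decimate
from sklearn.decomposition import FastICA

def preprocess_multilead(eeg, fs=200, band=(0.5, 50.0), down_factor=1):
    """eeg: array of shape (n_leads, n_samples). Returns the multi-lead preprocessed signal."""
    # Band-pass filter each lead to the scalp-EEG range of interest (assumed 0.5-50 Hz).
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, eeg, axis=1)
    # Optional down-sampling (factor 1 keeps the original 200 Hz rate).
    if down_factor > 1:
        filtered = decimate(filtered, down_factor, axis=1)
    # Independent component analysis; artifact components could be zeroed before reconstruction.
    ica = FastICA(n_components=filtered.shape[0], random_state=0)
    sources = ica.fit_transform(filtered.T)        # (n_samples, n_components)
    cleaned = ica.inverse_transform(sources).T
    # Per-lead normalization (zero mean, unit variance).
    cleaned = (cleaned - cleaned.mean(axis=1, keepdims=True)) / cleaned.std(axis=1, keepdims=True)
    return cleaned
```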
And 103, carrying out empirical mode decomposition on the multi-lead preprocessing signal to obtain a multi-lead multi-order intrinsic mode function.
Empirical Mode Decomposition (EMD) is performed according to the time-scale features of the signal itself, decomposing the signal into a finite number of Intrinsic Mode Functions (IMF). The criterion for stopping the decomposition may be that the standard deviation between two consecutive screening results is less than 0.05. Fig. 2 illustrates a preprocessed signal, a reference signal, and the intrinsic mode functions up to the fourth order, IMF1, IMF2, IMF3 and IMF4, where IMFM denotes the Mth-order intrinsic mode function. In the processing, empirical mode decomposition is performed on each lead preprocessed signal to obtain the multi-order intrinsic mode functions of that lead, and the multi-order intrinsic mode functions of all leads are collectively called the multi-lead multi-order intrinsic mode functions.
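As a sketch, the per-lead decomposition can be done with the PyEMD package (distributed as EMD-signal); the function name is illustrative, and configuring the 0.05 standard-deviation stopping criterion exactly is treated as a library-version-dependent setting rather than reproduced here:

```python
from PyEMD import EMD  # pip install EMD-signal

def decompose_multilead(preprocessed, max_imf=4):
    """preprocessed: (n_leads, n_samples). Returns a list with the multi-order IMFs of each lead."""
    emd = EMD()  # the SD-based stopping threshold (e.g. 0.05) is a solver setting in the library
    all_imfs = []
    for lead in preprocessed:
        imfs = emd(lead, max_imf=max_imf)   # shape (n_imfs, n_samples), IMF1 first
        all_imfs.append(imfs)
    return all_imfs
```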
And step 104, carrying out sleep staging on the multi-lead electroencephalogram signals according to the multi-lead multi-order intrinsic mode functions to obtain sleep staging results.
The Mth-order intrinsic mode function among the multi-order intrinsic mode functions is determined to be the sleep staging signal, M being a natural number. Power spectral density analysis is then performed on the sleep staging signal to obtain its power spectral density. This operation is carried out on the multi-order intrinsic mode functions of all leads to obtain the power spectral density of each lead, collectively called the multi-lead power spectral density. The power spectral density feature set contains the power spectral density of each lead, and a pre-constructed classifier classifies the power spectral density feature set to obtain the sleep staging result.
In order to better distinguish the REM period from the NREM period and improve the recognition accuracy of the REM period, the 4th-order eigenmode function is selected as the base signal for sleep stage recognition, i.e. the sleep staging signal.
The Welch method may be selected for the power spectral density analysis. For example, the signal is decomposed with a Hamming window of length 256 to obtain a frequency-domain signal, with the relevant parameters set to overlap = 0.5, fs = 200 and nfft = 512. A 10·log10(x) operation is applied to the decomposed frequency-domain signal, and the result is converted to dB/Hz units, thereby obtaining the power spectral density. In application, the first N sampling points may be taken as the features of a channel; these N points reflect the frequency components of the signal related to emotion recognition, which reduces the amount of computation. N is a natural number, for example 20. Power spectral density analysis is performed on the sleep staging signal of each lead to obtain the power spectral density of each lead, and the power spectral density of each lead is taken as an element of the power spectral feature set, i.e. the power spectral feature set contains the power spectral density of each lead. If the number of channels is 6, the power spectral feature set collects 6 features.
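A sketch of the per-lead power-spectral-density features described above, using SciPy's Welch implementation with the stated window, overlap and FFT parameters; the use of IMF4 and of the first 20 bins follows the text, while the function name and feature layout are illustrative:

```python
import numpy as np
from scipy.signal import welch

def psd_features(lead_imfs_list, fs=200, n_bins=20):
    """lead_imfs_list: list of per-lead IMF arrays; IMF4 (index 3) is the sleep staging signal."""
    features = []
    for imfs in lead_imfs_list:
        staging_signal = imfs[3]                       # 4th-order IMF
        f, pxx = welch(staging_signal, fs=fs, window="hamming",
                       nperseg=256, noverlap=128, nfft=512)   # overlap = 0.5 * 256
        psd_db = 10 * np.log10(pxx)                    # dB/Hz
        features.append(psd_db[:n_bins])               # first N sampling points as the channel features
    return np.concatenate(features)                    # power spectral density feature set (6 leads * 20 values)
```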
The pre-constructed classifier may be a GBDT classifier; in other embodiments it may also be a classifier constructed based on a convolutional neural network, a Gaussian classifier, a support vector machine, etc. The classification principle and kind of classifier are not limited in this embodiment. Before the classifier is used, it usually needs to be trained: the labelled electroencephalogram signals are input into the classifier, and iterative training is performed according to the prediction results and the labels until the model converges, yielding the trained classifier. A labelled electroencephalogram signal is one whose corresponding sleep staging result is known.
Specifically, the features of the 6-channel labelled EEG data in the REM and NREM periods are compression-encoded and divided into a training set and a test set in a 4:1 ratio, and a GBDT classifier is constructed. The GBDT parameters ('learning_rate', 'n_estimators', 'max_depth', 'max_features') are tuned with a Bayesian optimization method so as to minimize the REM recognition loss. GBDT is an ensemble learning algorithm constructed with the Boosting method: a series of weak classifiers is trained iteratively, a gradient boosting tree accumulates the fitting residual of each iteration, and the weak classifiers are CART regression trees.
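A sketch of the staging classifier described above using scikit-learn's gradient boosting implementation; the plain grid search here is an illustrative stand-in for the Bayesian-style optimization mentioned in the text, and the specific parameter values are assumptions:

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

def train_staging_classifier(X, y):
    """X: power spectral density feature sets, y: REM / NREM labels. Returns a fitted GBDT classifier."""
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)  # 4:1 split
    param_grid = {
        "learning_rate": [0.05, 0.1],
        "n_estimators": [100, 300],
        "max_depth": [3, 5],
        "max_features": ["sqrt", None],
    }
    search = GridSearchCV(GradientBoostingClassifier(), param_grid, scoring="accuracy", cv=5)
    search.fit(X_train, y_train)
    print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
    return search.best_estimator_
```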
And step 105, determining a rapid eye movement REM period signal according to the sleep staging result.
Generally, most dreams occur in the REM period, and dreams in this stage carry richer emotional information. Therefore, in order to recognize dream emotion better, the electroencephalogram signals corresponding to the REM period (REM period signals for short) need to be extracted, and the extracted signals are used for dream emotion recognition. Through the preceding steps, the REM period is extracted from the multi-lead electroencephalogram signals to obtain the REM period signal.
When the sleep staging result is the REM period, the electroencephalogram signal corresponding to that period is determined to be the REM signal. When the REM period occurs, it usually lasts for a while, such as 1 min or longer. During this time a recognition error is possible, that is, a sleep staging result that is actually the REM period may be identified as the NREM period. Because such a segment would then be staged as NREM, its electroencephalogram signal would not be used for subsequent emotion recognition, which could affect the emotion recognition result. To reduce the influence of such recognition errors, this step includes the following sub-steps: if, within a preset time period, the sleep staging results contain the REM period more times than a preset count threshold, it is determined that the user enters the REM period within that preset time period, and the multi-lead electroencephalogram signal is extracted for the preset time period in which the user enters the REM period to obtain the REM period signal.
The preset time period is divided into segments of a preset duration, yielding electroencephalogram segments corresponding to a plurality of sleep staging results. Sleep staging is performed on each electroencephalogram segment of the preset duration to obtain its staging result, and the staging results are then summarized. If the number of occurrences of the REM period in the statistics meets the preset count threshold, it is determined that the user enters the REM period within the preset time period, so the electroencephalogram signal corresponding to that time period can be extracted to obtain the REM period signal. If the number of occurrences of the REM period in the statistics is smaller than the preset count threshold, it is determined that the user does not enter the REM period within the preset time period, so the corresponding electroencephalogram signal is not extracted.
In order to further reduce the influence of REM-period recognition errors on the emotion recognition result, the step of extracting the multi-lead electroencephalogram signal for the preset time period in which the user enters the REM period to obtain the REM period signal may be implemented as follows: determine the start time point of the multi-lead electroencephalogram signal corresponding to the first REM period within the preset time period in which the user enters the REM period, determine the end time point of that preset time period, and extract the multi-lead electroencephalogram signal within the preset time period according to the start time point and the end time point to obtain the REM period signal. The first REM period is the REM period that appears first in the staging results within the preset time period.
After the start time point of the REM period signal is determined, the end time point of the preset time period is taken as the end time point of the REM period signal, and the signal within the preset time period is then extracted according to the start and end time points to obtain the REM period signal.
The method is illustrated with a preset time period of 1 min and an electroencephalogram segment of 10 s per sleep staging result. The 6 sleep staging results of the past 1 min are summarized; if more than 3 REM periods appear in the statistics, it is judged that the REM period occurred in the past 1 min. The start time point of the REM period signal is the start time point of the electroencephalogram segment corresponding to the first identified REM period, the end time point of the 1-min preset time period is taken as the end time point of the REM signal, and the extracted original electroencephalogram signal of the REM period is then passed to the next step. In this embodiment the count threshold is set to 3; in other embodiments it may be another value, which is not limited here.
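A sketch of the windowed REM decision just described: the 1-minute window, 10-second epochs and count threshold of 3 follow the example in the text, while the function and variable names are illustrative:

```python
def extract_rem_segment(stage_labels, eeg, fs=200, epoch_s=10, window_epochs=6, threshold=3):
    """stage_labels: list of per-epoch staging results ('REM' / 'NREM') for one 1-minute window.
    eeg: (n_leads, n_samples) raw signal of the same window. Returns the REM period signal or None."""
    if stage_labels.count("REM") <= threshold:
        return None                                   # the user is not considered to be in REM
    first_rem_epoch = stage_labels.index("REM")       # first epoch staged as REM
    start_sample = first_rem_epoch * epoch_s * fs     # start time point of the REM period signal
    end_sample = window_epochs * epoch_s * fs         # end of the 1-minute window
    return eeg[:, start_sample:end_sample]
```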
And 106, identifying the REM period signal to obtain a dream emotion identification result.
When the dream emotion is recognized, the signal used is the electroencephalogram signal corresponding to the REM period. The recognition method may use a classifier, such as a convolutional neural network model or a support vector machine, which is not limited in this embodiment.
After the dream emotion recognition result is obtained, it is used to generate a dream report. The result is usually sent to the client, and the client generates the dream report so that the user can consult it as needed. In other embodiments, after the dream emotion recognition result is obtained, the method provided in this embodiment further includes: determining the start time point and duration of the REM period, converting the electroencephalogram signal into a waveform diagram, generating report-related information for the user according to the dream emotion recognition result and at least one of the start time point, duration and waveform diagram of the REM period, and outputting the report-related information. The report-related information is used to enable the user's client to generate the dream report, so that the user can understand the dream-related information more comprehensively.
In summary, multi-lead electroencephalogram signals are acquired while the user sleeps and preprocessed to obtain preprocessed signals; empirical mode decomposition is performed on the preprocessed signals to obtain multi-order intrinsic mode functions; sleep staging is performed on the multi-lead electroencephalogram signals according to the multi-order intrinsic mode functions of the leads to obtain sleep staging results; the REM period signal is determined according to the sleep staging results; and the REM period signal is recognized to obtain the dream emotion recognition result. Dreams in the REM period carry rich emotional information, so using the REM period signal for recognition improves the reliability of dream emotion recognition. When the REM period is analyzed, the multi-lead multi-order eigenmode functions rather than the original electroencephalogram signal are used as the decision signal, further extracting signals strongly related to the REM period; this can improve the accuracy of REM period identification and therefore the reliability of dream emotion recognition.
In some embodiments, the implementation of step 106 may be:
and identifying the REM period signal by adopting a transformer model constructed by a time self-attention mechanism and a channel self-attention mechanism to obtain a dream emotion identification result based on the time self-attention mechanism and the channel self-attention mechanism.
Since the correlation between sampling points reflects the temporal interrelations of the EEG, the temporal information of the EEG is encoded with a self-attention mechanism, taking the long-term dependence of the EEG time domain into account. Different channels of the EEG represent electrodes at different locations on the scalp, and by considering the dependencies between different channels, the functional connectivity between different brain regions can be calculated; an attention mechanism is therefore used to model the spatial information between different channels. When the dream emotion is recognized, the recognition model is constructed based on the time self-attention mechanism and the channel self-attention mechanism; no additional EEG emotion features are needed during recognition, only the original EEG is used, so the correlation of the time-domain information and the correlation between EEG channels can be fully utilized, improving the accuracy of emotion recognition. The states in the dream emotion recognition result can be classified into 4 types: no dream, positive, neutral and negative emotion; in other embodiments they may instead be classified into 2 categories. This embodiment does not limit the kinds of states in the dream emotion recognition result.
The model comprises an encoder and a decoder. The encoder is based on two self-attention mechanisms: the time self-attention mechanism and the channel self-attention mechanism. The correlation between different sampling points in the electroencephalogram signal is calculated with the time self-attention mechanism, and the correlation (or coupling relation) between different channel signals is calculated with the channel self-attention mechanism. Accordingly, the encoder is divided into a time-domain encoder constructed based on the time self-attention mechanism and a spatial-domain encoder constructed based on the channel self-attention mechanism. Compared with the existing transformer model, the recognition model uses only the Encoder part: after processing by the encoder, a single MLP (Multi-Layer Perceptron) layer is used directly for classification. This MLP layer can also be understood as the Decoder, and the multi-layer perceptron used as the decoder is called the second multi-layer perceptron.
Specifically, in implementing this step, the method used comprises:
adding position coding based on a channel to the multi-lead electroencephalogram signal to obtain fusion coding, coding the fusion coding by using a time domain coder to obtain time domain coding, coding the time domain coding by using a space domain coder to obtain space domain coding, and decoding the space domain coding by using a decoder to obtain a dream emotion recognition result.
The electroencephalogram signal is a continuous time signal with strong correlation across channels, so during sample recognition a position code is computed for the electroencephalogram signal and added to it. The position encoding method may be trigonometric-function position encoding; in other embodiments other encoding methods may also be adopted, which is not limited in this embodiment.
When the position encoding method is trigonometric-function position encoding, it involves the following formula (1):
P(t)_(2k+1) = sin(w_k · t), P(t)_(2k+2) = cos(w_k · t), with w_k = 1 / 10000^(2k/d), k = 0, 1, ..., d/2 - 1 (1)
In the formula, P(t) reflects the position information of each channel of the input data (the multi-lead electroencephalogram signal); t denotes the position and in this embodiment takes the values 1 to 6 (6 channels); w_1, w_2, ... correspond to the data dimensions of each channel; d is the number of sampling points of an input sample; a sin operation is applied to the odd positions of the input data and a cos operation to the even positions; and in this embodiment d may be 360.
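A sketch of the channel-based sinusoidal position code, assuming the standard Transformer sine/cosine form with t as the channel index (1 to 6) and d = 360 sampling points; the exact frequency terms w_k used in the application may differ:

```python
import numpy as np

def channel_position_encoding(n_channels=6, d=360):
    """Returns P of shape (n_channels, d): sin on one half of the dimensions, cos on the other."""
    t = np.arange(1, n_channels + 1)[:, None]         # channel positions 1..6
    k = np.arange(d)[None, :]
    w = 1.0 / np.power(10000.0, (2 * (k // 2)) / d)   # per-dimension frequency terms w_k (assumed)
    return np.where(k % 2 == 0, np.sin(t * w), np.cos(t * w))

# fusion code: raw multi-lead EEG sample (6 x 360) plus its position code
# fused = eeg_sample + channel_position_encoding()
```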
The time-domain encoder includes an MHA (Multi-Head Attention) mechanism and an MLP (Multi-Layer Perceptron). For the sake of distinction, the MLP in the time-domain encoder is referred to as the first multi-layer perceptron, and the MLP serving as the decoder is referred to as the second multi-layer perceptron. In the time-domain encoder, the head output result of each head in the multi-head attention mechanism is calculated with the following formulas (2) and (3):
h_l = LN(MHA(z_{l-1}) + z_{l-1}) (2)
z_l = LN(MLP(h_l) + h_l), l = 1, 2, 3, ..., L (3)
where MHA denotes the multi-head attention mechanism, MLP denotes the first multi-layer perceptron, LN denotes layer normalization, L denotes the number of heads in the multi-head attention mechanism, h_l denotes the hidden state of the current head l, z_l denotes the head output result of the current head l, and z_0 denotes the multi-lead electroencephalogram signal after the position code has been added. To improve the training speed and the robustness of the model, residual connection and layer normalization are applied to each part.
And then splicing the head output results of all the heads, and taking the obtained result as the output result of the time domain encoder, namely, time domain encoding.
The MHA(z_0) calculation uses the following formula (4):
Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V (4)
where Q, K and V are matrices obtained by linear projection of the position-encoded multi-lead electroencephalogram signal, called the time-domain query linear transformation matrix, the time-domain key-value linear transformation matrix and the time-domain value linear transformation matrix respectively; d_k is a scaling factor; and softmax is a regression function.
The spatial-domain encoder also includes an MHA (Multi-Head Attention) mechanism and an MLP (Multi-Layer Perceptron). The processing of the multi-head attention mechanism and the multi-layer perceptron in the spatial-domain encoder is the same as in the time-domain encoder; the difference is that the input of the spatial-domain encoder is the time-domain code, whereas the input of the time-domain encoder is the fusion code. Details are not repeated here.
The calculation is usually carried out in matrix form. A sample matrix is composed of channels and sampling points; when there are 6 channels and 360 sampling points, the sample matrix is 6 × 360. A multi-head self-attention mechanism is computed on the input, which avoids the errors of a single self-attention mechanism.
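A condensed PyTorch sketch of formulas (2)-(4): one self-attention encoder block with residual connections and layer normalization, applied first along time (time-domain encoder) and then along channels (spatial-domain encoder), followed by an MLP decoder. Head counts, the MLP expansion and the class list are illustrative assumptions, not the application's exact architecture:

```python
import torch
import torch.nn as nn

class SelfAttentionBlock(nn.Module):
    """h = LN(MHA(z) + z); z' = LN(MLP(h) + h), as in formulas (2) and (3)."""
    def __init__(self, dim, heads):
        super().__init__()
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln1, self.ln2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))  # first MLP

    def forward(self, z):
        h = self.ln1(self.mha(z, z, z)[0] + z)   # scaled dot-product attention of formula (4) inside MHA
        return self.ln2(self.mlp(h) + h)

class DreamEmotionModel(nn.Module):
    def __init__(self, channels=6, samples=360, n_classes=4):
        super().__init__()
        self.time_encoder = SelfAttentionBlock(channels, heads=2)   # tokens = sampling points
        self.space_encoder = SelfAttentionBlock(samples, heads=4)   # tokens = channels
        self.decoder = nn.Sequential(nn.Flatten(), nn.Linear(channels * samples, n_classes))  # second MLP

    def forward(self, fused):                         # fused code: (batch, 6, 360) EEG sample + position code
        t = self.time_encoder(fused.transpose(1, 2))  # correlations between sampling points (time-domain code)
        s = self.space_encoder(t.transpose(1, 2))     # coupling between channels (spatial-domain code)
        return self.decoder(s)                        # logits, e.g. no-dream / positive / neutral / negative
```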
Before the model is used, it usually needs to be trained. The REM period EEG data labelled with emotion features are divided into a training set and a test set in a given ratio (such as 4:1). Iterative training is performed according to the predicted dream emotion recognition results and the labelled dream emotion recognition results until the model converges, yielding the trained model, whose accuracy is then tested on the test set. If the requirement is met, training ends; otherwise training continues.
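A minimal training-loop sketch for the model above, assuming labelled REM-period samples split 4:1 as in the text; the optimizer, learning rate, batch size and epoch count are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, X, y, epochs=50, lr=1e-4):
    """X: (n_samples, 6, 360) fused codes; y: labelled dream emotion classes."""
    n_train = int(0.8 * len(X))                                 # 4:1 train/test split
    train_dl = DataLoader(TensorDataset(X[:n_train], y[:n_train]), batch_size=32, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):                                     # iterate until the model converges
        for xb, yb in train_dl:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    with torch.no_grad():                                       # test-set accuracy check
        acc = (model(X[n_train:]).argmax(1) == y[n_train:]).float().mean()
    print("test accuracy:", acc.item())
```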
Multi-lead electroencephalogram signals are acquired while the user sleeps; REM period extraction is performed on the multi-lead electroencephalogram signals to obtain the REM period signal; and the REM period signal is recognized with the transformer model constructed from the time self-attention mechanism and the channel self-attention mechanism to obtain a dream emotion recognition result based on the two mechanisms. Because the EEG is continuous in time and functionally correlated across channels, the time self-attention mechanism of the transformer model is used to calculate the correlation between different sampling points in a sample, and the channel self-attention mechanism is used to calculate the coupling relation between different channel signals; no additional hand-crafted emotion features are required and the original EEG is used directly for recognition, which improves the accuracy and timeliness of dream emotion recognition.
Referring to fig. 3, an embodiment of the present application provides a dream emotion recognition apparatus for performing the dream emotion recognition method in the above embodiment. The dream emotion recognition apparatus includes: the system comprises an electroencephalogram signal acquisition module 301, a signal preprocessing module 302, a mode function acquisition module 303, a sleep stage acquisition module 304, an REM stage signal determination module 305 and a dream emotion recognition module 306.
The electroencephalogram signal acquisition module 301 is used for acquiring multi-lead electroencephalogram signals of a user during sleep. The signal preprocessing module 302 is configured to preprocess the multi-lead electroencephalogram signal to obtain a multi-lead preprocessed signal. The mode function obtaining module 303 is configured to perform empirical mode decomposition on the multi-lead preprocessed signal to obtain a multi-lead multi-order eigenmode function. The sleep stage obtaining module 304 is configured to perform sleep stage on the multi-lead electroencephalogram signal according to the multi-lead multi-stage eigenmode function, so as to obtain a sleep stage result. The REM period signal determining module 305 is configured to determine a REM period signal according to the sleep staging result. The dream emotion recognition module 306 is used for recognizing the REM period signal to obtain a dream emotion recognition result.
Optionally, the sleep stage obtaining module 304 includes a staging signal determining unit, a power spectral density obtaining unit, a density feature set obtaining unit and a sleep staging obtaining unit. The staging signal determining unit is used for determining that the Mth-order intrinsic mode function among the multi-order intrinsic mode functions is the sleep staging signal. The power spectral density obtaining unit is used for performing power spectral density analysis on the sleep staging signal of each lead to obtain the power spectral density of each lead. The density feature set obtaining unit is used for determining the power spectral density feature set corresponding to the multi-lead electroencephalogram signal according to the power spectral density of each lead. The sleep staging obtaining unit is used for classifying the power spectral density feature set with the pre-constructed classifier to obtain the sleep staging result.
Optionally, the REM period signal determining module 305 includes: a REM period determining unit and a signal extracting unit. The REM period determining unit is used for determining that the user enters the REM period in the preset period if the sleep staging result is that the times of the REM period are larger than a preset time threshold in the preset period. The signal extraction unit is used for carrying out signal extraction on the multi-lead electroencephalogram signal according to the preset time period when the user enters the REM period, so as to obtain the REM period signal.
Optionally, the signal extraction unit comprises: a start time determining subunit, an end time determining subunit, and a REM period signal extracting subunit. The start time determining subunit is used for determining a start time point of the multi-lead electroencephalogram signal corresponding to the first REM period when the user enters a preset time period of the REM period. The end time determining subunit is configured to determine an end time point of the preset period when the user enters the REM period. And the REM period signal extraction subunit is used for carrying out signal extraction on the multi-lead electroencephalogram signals in the preset time period according to the starting time point and the ending time point to obtain REM period signals.
Optionally, the system further includes a related information generating module, configured to determine a start time point and a duration of an REM period corresponding to the REM period signal; converting the multi-lead electroencephalogram signals into oscillograms; and generating report related information aiming at the user according to the dream emotion recognition result and at least one of the starting time point, the duration and the oscillogram of the REM period so as to output and process the report related information, wherein the report related information is used for enabling a client of the user to generate a dream report.
Optionally, the dream emotion recognition module 306 is configured to recognize the REM period signal by using a transform model constructed by a time self-attention mechanism and a channel self-attention mechanism, and obtain a dream emotion recognition result based on the time self-attention mechanism and the channel self-attention mechanism.
Optionally, the dream emotion recognition module 306 includes a transform model and a fusion coding unit, the transform model includes a time-domain encoder constructed based on a time-domain self-attention mechanism, a spatial-domain encoder constructed based on a channel self-attention mechanism, and a decoder, and the fusion coding unit is configured to add a channel-based position code to the multi-lead electroencephalogram signal to obtain a fusion code. And the time domain encoder encodes the fusion code to obtain a time domain code. And the space-domain coder codes the time-domain code to obtain the space-domain code. And the decoder decodes the airspace codes to obtain a dream emotion recognition result.
Optionally, the time-domain encoder and the spatial-domain encoder each comprise a multi-head attention mechanism and a first multi-layer perceptron; calculating the fusion code by using a multi-head attention mechanism of a time domain coder to obtain a first head time domain self-attention output result, sequentially performing residual connection and layer normalization on the first head time domain self-attention output result to obtain a first head time domain hidden state, calculating the first head time domain hidden state by using a first multilayer perceptron of the time domain coder to obtain a first head time domain perception output result, sequentially performing residual connection and layer normalization on the first head time domain perception output result to obtain a first head time domain output result, sequentially circulating to obtain time domain output results of all heads of the time domain coder, and obtaining time domain codes according to the time domain output results of all heads; calculating time domain coding by using a multi-head attention mechanism of a space domain coder to obtain a first head space domain self-attention output result, sequentially performing residual connection and layer normalization on the first head space domain self-attention output result to obtain a first head space domain hidden state, calculating the first head space domain hidden state by using a first multilayer perceptron of the space domain coder to obtain a first head space domain perception output result, sequentially performing residual connection and layer normalization on the first head space domain perception output result to obtain a first head space domain output result, sequentially circulating to obtain space domain output results of all heads of the space domain coder, and obtaining space domain coding according to the space domain output results of all the heads.
Optionally, the decoder is a second multi-layer perceptron.
Optionally, in the time domain encoder, performing linear projection on the fused codes respectively to obtain a time domain query linear transformation matrix, a time domain key value linear transformation matrix and a time domain value linear transformation matrix; obtaining a time domain self-attention output result of the first head according to the time domain query linear transformation matrix, the time domain key value linear transformation matrix and the time domain value linear transformation matrix; in the space-domain encoder, linear projection is respectively carried out on time-domain codes to obtain a space-domain query linear transformation matrix, a space-domain key value linear transformation matrix and a space-domain value linear transformation matrix; and inquiring the linear transformation matrix, the spatial key value linear transformation matrix and the spatial value linear transformation matrix according to the spatial domain to obtain a first head spatial domain self-attention output result.
In practical application, the dream emotion recognition device can be integrated into an existing sleep monitoring system. If the existing sleep monitoring system already has a sleep staging function, only the dream emotion analysis function needs to be added in a compatible manner; if it has no sleep staging function, the REM recognition and emotion recognition functions can both be added in a compatible manner.
It should be noted that: in the above embodiments, the division into the above functional modules is merely used as an example of dream emotion recognition; in practical applications, the above functions may be distributed to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the dream emotion recognition device and the dream emotion recognition method provided by the embodiments belong to the same concept; their specific implementation processes are detailed in the method embodiments and are not repeated here.
An embodiment of the present application provides an electronic device, which includes a memory and a processor. The processor is connected to the memory and is configured to execute the dream emotion recognition method based on the instructions stored in the memory. There may be one or more processors, and each processor may be single-core or multi-core. The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM), and the memory includes at least one memory chip. The memory may be an example of the computer-readable medium described below.
An embodiment of the present application provides a computer-readable storage medium on which at least one instruction, at least one program, a code set or an instruction set is stored; the at least one instruction, program, code set or instruction set is loaded and executed by a processor to implement the above dream emotion recognition method. Computer-readable storage media include permanent and non-permanent, removable and non-removable media, which may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
An embodiment of the present application provides a dream emotion recognition system, which is based on the dream emotion recognition device of the above embodiment and includes: an acquisition device 401, a dream emotion recognition device 402, an upper computer 403 and a client 404.
The acquisition device 401 is used to acquire the electroencephalogram signal; for its specific content, reference may be made to the description of step 101 in the above embodiment. Referring to fig. 4, the dream emotion recognition device 402 may be integrated with the acquisition device 401 (which may then be referred to as a BCI device) and perform real-time acquisition and real-time analysis. In this case the acquisition device 401 preferably adopts a wearable design: it can support monitoring in a home environment, record remotely over multiple nights, and objectively evaluate the REM-period emotional state; the built-in large-capacity storage and long-endurance battery design avoid the influence of the first-night effect, and the insomnia condition can be objectively assessed. The dream emotion recognition device 402 can also be designed separately from the acquisition device 401, i.e. disposed on the upper computer 403 side. After the dream emotion recognition device 402 completes the classification and recognition, the result is transmitted to the upper computer 403 and stored for the user to check. The upper computer 403 automatically draws and generates a report according to the judged dream situation and outputs it to the user; the report includes one or more of whether a dream occurred, the emotion type of the dream, the sleep time, and the start, end and duration of the REM period. The upper computer 403 continuously supervises the dream emotion, supports continuous multi-night sleep monitoring, provides data management for the subject, can provide various report template designs, and also receives and displays the time-domain waveform of the EEG signal. In other embodiments, a related user operating system may also be developed based on user needs. The upper computer 403 is further provided with a patient and data management system, which makes it convenient to screen patient data effectively, evaluate the long-term curative effect for insomnia patients, and remind subjects who have negative dreams over a long period to undergo a physical examination or psychological consultation in time, so as to avoid further physical changes. The upper computer 403 may receive the dream emotion recognition result in a wired or wireless manner, which is not limited in this embodiment. Depending on the operation mechanism of the upper computer 403, the client 404 may take the form of a mobile phone app, which is convenient for the user to view. Based on the above mechanisms, the modules are integrated into a portable BCI (Brain-Computer Interface) device and the upper computer 403; data transmission between the BCI device and the upper computer 403 can be wired or wireless, data acquisition is realized by the BCI device, data processing and analysis are completed by the upper computer 403, and emotion recognition in the REM period is realized.
It will be appreciated by those skilled in the art that the invention can be embodied in other specific forms without departing from the spirit or essential attributes thereof. The embodiments disclosed above are therefore to be considered in all respects as illustrative and not restrictive. All changes which come within the scope of or are equivalent to the terms of the claims are intended to be embraced therein.

Claims (10)

1. A dream emotion recognition method, comprising:
acquiring a multi-lead electroencephalogram signal of a user during sleeping;
preprocessing the multi-lead electroencephalogram signal to obtain a multi-lead preprocessed signal;
carrying out empirical mode decomposition on the multi-lead preprocessing signal to obtain a multi-lead multi-order eigenmode function;
carrying out sleep staging on the multi-lead electroencephalogram signals according to the multi-lead multi-order intrinsic mode functions to obtain sleep staging results;
determining a rapid eye movement REM period signal according to the sleep staging result;
and identifying the REM period signal to obtain a dream emotion identification result.
2. The method of claim 1, wherein the determining the REM period signal according to the sleep stage result comprises:
if the sleep staging result is that the number of REM periods is greater than a preset number threshold value within a preset time period, determining that the user enters the REM period within the preset time period;
and carrying out signal extraction on the multi-lead electroencephalogram signal according to a preset time period when a user enters an REM period to obtain the REM period signal.
3. The dream emotion recognition method of claim 2, wherein the extracting the multi-lead electroencephalogram signal according to the preset time period when the user enters the REM period to obtain the REM period signal comprises:
determining the starting time point of the multi-lead electroencephalogram signal corresponding to the first REM period in a preset time period when a user enters the REM period;
determining an end time point of a preset time period when a user enters an REM period;
and carrying out signal extraction on the multi-lead electroencephalogram signals in the preset time period according to the starting time point and the ending time point to obtain REM period signals.
4. The method of claim 1, wherein after determining the REM period signal according to the sleep stage result, further comprising:
determining a starting time point and a duration of a REM period corresponding to the REM period signal;
converting the multi-lead electroencephalogram signal into an oscillogram;
and generating report related information aiming at the user according to the dream emotion recognition result and at least one item of the starting time point, the duration and the oscillogram of the REM period so as to output the report related information.
5. The dream emotion recognition method of claim 1, wherein the recognizing the REM period signal to obtain a dream emotion recognition result comprises:
and identifying the REM period signal by adopting a transformer model constructed by a time self-attention mechanism and a channel self-attention mechanism to obtain a dreaming emotion identification result based on the time self-attention mechanism and the channel self-attention mechanism.
6. The method of claim 5, wherein the transformer model comprises a time-domain encoder constructed based on the time self-attention mechanism, a spatial-domain encoder constructed based on the channel self-attention mechanism, and a decoder;
the method for recognizing the REM period signal by adopting the transform model constructed by the time self-attention mechanism and the channel self-attention mechanism to obtain the dream emotion recognition result based on the time self-attention mechanism and the channel self-attention mechanism comprises the following steps of:
adding channel-based position coding to the multi-lead electroencephalogram signal to obtain fusion coding;
and encoding the fusion code by using the time-domain encoder to obtain a time-domain code;
coding the time domain code by using the space domain coder to obtain a space domain code;
and decoding the airspace code by using the decoder to obtain a dream emotion recognition result.
7. The method of claim 6, wherein the time-domain encoder and the spatial-domain encoder each comprise a multi-head attention mechanism and a first multi-layered perceptron; before the transform model constructed by the time self-attention mechanism and the channel self-attention mechanism is used for identifying the REM period signal and obtaining the dream emotion identification result based on the time self-attention mechanism and the channel self-attention mechanism, the method further comprises the following steps of:
adding channel-based position coding to the marked multi-lead electroencephalogram signal to obtain fusion coding;
calculating the fusion code by using a multi-head attention mechanism of the time domain coder to obtain a first head time domain self-attention output result, sequentially performing residual connection and layer normalization on the first head time domain self-attention output result to obtain a first head time domain hidden state, calculating the first head time domain hidden state by using a first multilayer perceptron of the time domain coder to obtain a first head time domain perception output result, sequentially performing residual connection and layer normalization on the first head time domain perception output result to obtain a first head time domain output result, sequentially circulating to obtain time domain output results of all heads of the time domain coder, and obtaining time domain codes according to the time domain output results of all heads;
calculating the time domain code by using a multi-head attention mechanism of the space domain encoder to obtain a first space domain self-attention output result, sequentially performing residual connection and layer normalization on the first space domain self-attention output result to obtain a first space domain hidden state, calculating the first space domain hidden state by using a first multilayer perceptron of the space domain encoder to obtain a first space domain perception output result, sequentially performing residual connection and layer normalization on the first space domain perception output result to obtain a first space domain output result, sequentially circulating to obtain space domain output results of all heads of the space domain encoder, and obtaining space domain code according to the space domain output results of all heads;
decoding the airspace code by using the decoder to obtain a recognition result of the predicted dream mood;
and (4) carrying out model training iteratively according to the predicted dream emotion recognition result and the marked dream emotion recognition result corresponding to the marked multi-lead electroencephalogram signal until the model is converged to obtain a trained transformer model.
8. A dream emotion recognition device, comprising:
the electroencephalogram signal acquisition module is used for acquiring multi-lead electroencephalogram signals when a user sleeps;
the signal preprocessing module is used for preprocessing the multi-lead electroencephalogram signals to obtain multi-lead preprocessing signals;
a mode function obtaining module, configured to perform empirical mode decomposition on the multi-lead preprocessed signal to obtain a multi-lead multi-order intrinsic mode function;
a sleep stage obtaining module for performing sleep stage on the multi-lead electroencephalogram signal according to the multi-lead multi-stage intrinsic mode function to obtain a sleep stage result;
the REM period signal determining module is used for determining a rapid eye movement REM period signal according to the sleep staging result; and the dream emotion recognition module is used for recognizing the REM period signal to obtain a dream emotion recognition result.
9. An electronic device, comprising: a memory and a processor; the processor is coupled to the memory and configured to perform the method of any one of claims 1-7 based on instructions stored in the memory.
10. A computer-readable storage medium having stored thereon at least one instruction, at least one program, set of codes, or set of instructions, wherein the at least one instruction, at least one program, set of codes, or set of instructions is loaded and executed by a processor to implement the dream emotion recognition method according to any one of claims 1-7.
CN202211468838.4A 2022-11-22 2022-11-22 Dream emotion recognition method and device, electronic equipment and storage medium Pending CN115770043A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211468838.4A CN115770043A (en) 2022-11-22 2022-11-22 Dream emotion recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115770043A true CN115770043A (en) 2023-03-10

Family

ID=85389812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211468838.4A Pending CN115770043A (en) 2022-11-22 2022-11-22 Dream emotion recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115770043A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116965817A (en) * 2023-07-28 2023-10-31 长江大学 EEG emotion recognition method based on one-dimensional convolution network and transducer
CN116965817B (en) * 2023-07-28 2024-03-15 长江大学 EEG emotion recognition method based on one-dimensional convolution network and transducer

Similar Documents

Publication Publication Date Title
Wang et al. Channel selection method for EEG emotion recognition using normalized mutual information
Yuan et al. A multi-view deep learning framework for EEG seizure detection
Cui et al. Automatic Sleep Stage Classification Based on Convolutional Neural Network and Fine‐Grained Segments
CN112656427A (en) Electroencephalogram emotion recognition method based on dimension model
CN114533086B (en) Motor imagery brain electrolysis code method based on airspace characteristic time-frequency transformation
Pan et al. Emotion recognition based on EEG using generative adversarial nets and convolutional neural network
CN110664395A (en) Image processing method, image processing apparatus, and storage medium
CN113558644B (en) Emotion classification method, medium and equipment for 3D matrix and multidimensional convolution network
CN111317446B (en) Sleep structure automatic analysis method based on human muscle surface electric signals
Mini et al. EEG based direct speech BCI system using a fusion of SMRT and MFCC/LPCC features with ANN classifier
CN115804602A (en) Electroencephalogram emotion signal detection method, equipment and medium based on attention mechanism and with multi-channel feature fusion
Li et al. Multi-modal emotion recognition based on deep learning of EEG and audio signals
CN115770043A (en) Dream emotion recognition method and device, electronic equipment and storage medium
CN114209323A (en) Method for recognizing emotion and emotion recognition model based on electroencephalogram data
Ingolfsson et al. Energy-efficient tree-based EEG artifact detection
Ma et al. TSD: Transformers for seizure detection
CN115659207A (en) Electroencephalogram emotion recognition method and system
Pratiwi et al. EEG-based happy and sad emotions classification using LSTM and bidirectional LSTM
Kapfo et al. LSTM based Synthesis of 12-lead ECG Signal from a Reduced Lead Set
CN117407748A (en) Electroencephalogram emotion recognition method based on graph convolution and attention fusion
Kim et al. Electrocardiogram authentication method robust to dynamic morphological conditions
CN114742107A (en) Method for identifying perception signal in information service and related equipment
Xu et al. Sleep Stage Classification With Multi-Modal Fusion and Denoising Diffusion Model
Tan et al. EEG signal recognition algorithm with sample entropy and pattern recognition
Singh et al. Emotion recognition using deep convolutional neural network on temporal representations of physiological signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination