CN116439706A - Identification method and identification system based on electroencephalogram and eye movement

Identification method and identification system based on electroencephalogram and eye movement

Info

Publication number
CN116439706A
Authority
CN
China
Prior art keywords
eye movement
signals
electroencephalogram
signal
feature
Prior art date
Legal status
Pending
Application number
CN202310341713.3A
Other languages
Chinese (zh)
Inventor
于淑月
张忠海
周林
Current Assignee
Beijing Aerospace Measurement and Control Technology Co Ltd
Original Assignee
Beijing Aerospace Measurement and Control Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Aerospace Measurement and Control Technology Co Ltd
Priority to CN202310341713.3A
Publication of CN116439706A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
        • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
            • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
                • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
                    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
                        • A61B 3/113: Objective types for determining or recording eye movement
                • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
                    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
                        • A61B 5/163: Devices for psychotechnics by tracking eye movement, gaze, or pupil change
                        • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
                    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
                        • A61B 5/316: Modalities, i.e. specific diagnostic methods
                            • A61B 5/369: Electroencephalography [EEG]
                                • A61B 5/372: Analysis of electroencephalograms
                    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
                        • A61B 5/7235: Details of waveform analysis
                            • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06F: ELECTRIC DIGITAL DATA PROCESSING
                • G06F 18/00: Pattern recognition
                    • G06F 18/20: Analysing
                        • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
                            • G06F 18/213: Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
                        • G06F 18/24: Classification techniques
                            • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
                        • G06F 18/25: Fusion techniques
                            • G06F 18/253: Fusion techniques of extracted features
                • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
                    • G06F 2218/08: Feature extraction
                    • G06F 2218/12: Classification; Matching
            • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 3/00: Computing arrangements based on biological models
                    • G06N 3/02: Neural networks
                        • G06N 3/04: Architecture, e.g. interconnection topology
                            • G06N 3/0464: Convolutional networks [CNN, ConvNet]
                            • G06N 3/047: Probabilistic or stochastic networks
                            • G06N 3/048: Activation functions
                            • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
                        • G06N 3/08: Learning methods

Abstract

The embodiment of the invention relates to an identification method and an identification system based on electroencephalogram and eye movement. The method comprises the following steps: creating an acquisition paradigm for the electroencephalogram signal and the eye movement signal, and acquiring the signals according to the acquisition paradigm; creating a feature fusion network model based on an attention mechanism; and inputting the acquired electroencephalogram and eye movement signals into the feature fusion network model for feature extraction and classification processing to obtain a recognition result of the psychological states corresponding to the signals. The electroencephalogram and eye movement signals are acquired under the designed acquisition paradigm, the acquired signals are input into the created feature fusion network model for feature extraction to obtain feature signals, and the feature signals are further classified to obtain the corresponding recognition result. Because recognition is performed on two acquired signal modalities, the corresponding emotional states can be recognized, achieving the technical effect of improving the emotion recognition rate.

Description

Identification method and identification system based on electroencephalogram and eye movement
Technical Field
The embodiment of the invention relates to the technical field of intelligent recognition for brain-computer interfaces, and in particular to an identification method and an identification system based on electroencephalography and eye movement.
Background
Emotion is a fundamental factor in human daily life, affecting decision making, perception, interpersonal interaction, and human intelligence. Emotion recognition helps to better understand language processing and nonverbal interactions in man-machine interaction environments, and recent studies have shown that emotional states can be predicted from biomedical signals.
At present, emotion recognition technologies mainly fall into three types: (1) facial expressions and sounds; (2) peripheral physiological signals; (3) brain signals generated by the central nervous system. Among these measurements, audiovisual-based detectors that interpret facial expressions and sounds enable non-contact recognition of emotion, but they do not always return reliable results, since people can easily disguise their emotions without being noticed. In contrast, physiological signals exhibit relatively high recognition accuracy because the user cannot control them. Features extracted from peripheral physiological signals such as the Electrocardiogram (ECG), Skin Conductance (SC), and pulse can provide detailed and complex information for identifying emotional states. Compared with peripheral physiological signals, the ElectroEncephaloGram (EEG) signal captured from the central nervous system directly reflects brain activity and has an inherent link to human emotional states.
Disclosure of Invention
In view of this, in order to solve the technical problem of low emotion recognition rate, an embodiment of the present invention provides a recognition method and a recognition system based on electroencephalogram and eye movement.
In a first aspect, an embodiment of the present invention provides an identification method based on electroencephalogram and eye movement, including:
creating an acquisition paradigm of an electroencephalogram signal and an eye movement signal, and acquiring the electroencephalogram signal and the eye movement signal according to the acquisition paradigm;
creating a feature fusion network model based on an attention mechanism;
and inputting the acquired electroencephalogram signals and the eye movement signals into the feature fusion network model to perform feature extraction and classification processing, so as to obtain recognition results of psychological states corresponding to the electroencephalogram signals and the eye movement signals.
In one possible embodiment, the creating an acquisition paradigm of the electroencephalogram signal and the eye movement signal, and the acquiring the electroencephalogram signal and the eye movement signal according to the acquisition paradigm, includes:
establishing an acquisition paradigm of an electroencephalogram signal and an eye movement signal according to a set experimental paradigm, wherein the acquisition paradigm comprises target stimulation and interference stimulation;
collecting a target electroencephalogram signal and a target eye movement signal based on the target stimulus;
and acquiring an interference electroencephalogram signal and an interference eye movement signal based on the interference stimulus.
In one possible implementation manner, the creating the feature fusion network model based on the attention mechanism includes:
creating a deep time sequence convolution layer network, a multi-spectral convolution layer network and a feature fusion classification layer network;
and performing attention mechanism processing based on the deep time sequence convolution layer network, the multi-spectral convolution layer network and the feature fusion classification layer network, and creating a corresponding feature fusion network model.
In one possible implementation manner, the inputting the collected electroencephalogram signal and the eye movement signal into the feature fusion network model to perform feature extraction and classification processing, to obtain a recognition result of psychological states corresponding to the electroencephalogram signal and the eye movement signal, includes:
inputting the acquired electroencephalogram signals into the corresponding deep time sequence convolution layer network in the feature fusion network model to perform time sequence convolution processing, obtaining a high-dimensional time sequence characterization corresponding to the electroencephalogram signals;
inputting the acquired eye movement signals into the corresponding multi-spectral convolution layer network in the feature fusion network model to carry out wavelet convolution processing, obtaining multi-spectral features corresponding to the eye movement signals;
and inputting the high-dimensional time sequence characterization and the multi-spectral features into the feature fusion classification layer network for fusion processing, and classifying the fused features to obtain a recognition result of the psychological states corresponding to the electroencephalogram signal and the eye movement signal.
In one possible implementation manner, the inputting the high-dimensional time sequence characterization and the multi-spectral features into the feature fusion classification layer network to perform fusion processing, and performing classification processing on the fused features to obtain a recognition result of the psychological states corresponding to the electroencephalogram signal and the eye movement signal, includes:
inputting the high-dimensional time sequence characterization and the multi-spectral features into the feature fusion classification layer network and carrying out global average pooling processing to obtain dimension statistical features;
performing first full-connection processing, nonlinear processing and second full-connection processing on the dimension statistical characteristics to obtain fusion feature vectors;
and classifying the fusion feature vector based on a softmax function to obtain a preset psychological state recognition result.
In one possible embodiment, the psychological states comprise a positive emotional state and a negative emotional state.
In a second aspect, an embodiment of the present invention provides an identification system applying the identification method based on electroencephalogram and eye movement described in the first aspect, including:
the system comprises a synchronous acquisition signal module, a data management module, a state online detection module and a display module;
the synchronous acquisition signal module is used for synchronously acquiring brain electrical signals and eye movement signals in real time;
the data management module is used for storing multi-mode data corresponding to the electroencephalogram signals and the eye movement signals;
the state online detection module is used for carrying out online classification detection and analysis processing on the acquired electroencephalogram signals and eye movement signals and determining psychological states and analysis results corresponding to the electroencephalogram signals and the eye movement signals;
the display module is used for displaying the psychological state and the analysis result obtained by the on-line detection.
In one possible implementation manner, the data management module is further configured to perform quality analysis processing on multi-mode data corresponding to the electroencephalogram signal and the eye movement signal, and screen the multi-mode data, so that an original database and a feature database in the data management module are updated in a feedback manner.
In a possible implementation manner, the state online detection module is further used for preprocessing online multi-mode data to obtain online multi-mode information; and carrying out fusion classification processing based on the online multi-mode information, and determining the online psychological state and the online analysis result of the online multi-mode data.
In one possible embodiment, the identification system further comprises an offline detection module;
the off-line detection module is used for preprocessing the original multi-mode data to obtain off-line multi-mode information;
and carrying out fusion classification processing based on the offline multi-mode information, and determining the offline psychological state and the offline analysis result of the offline multi-mode data.
According to the identification scheme based on electroencephalogram and eye movement provided by the embodiment of the invention, an acquisition paradigm for the electroencephalogram signal and the eye movement signal is created, and the signals are acquired according to the acquisition paradigm; a feature fusion network model based on an attention mechanism is created; and the acquired electroencephalogram and eye movement signals are input into the feature fusion network model for feature extraction and classification processing to obtain a recognition result of the psychological states corresponding to the signals. The electroencephalogram and eye movement signals are acquired under the designed acquisition paradigm, the acquired signals are input into the created feature fusion network model for feature extraction to obtain feature signals, and the feature signals are further classified to obtain the corresponding recognition result. Because recognition is performed on the two acquired signals together, the corresponding emotional states can be recognized, achieving the technical effect of improving the emotion recognition rate.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a schematic flow chart of an identification method based on electroencephalogram and eye movement according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of another recognition method based on brain electricity and eye movement according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of an example scenario provided by an embodiment of the present invention;
FIG. 4 is a schematic flow chart of another example scenario provided by the embodiment of the present invention;
FIG. 5 is a schematic diagram of an identification system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an identification system in an example scenario provided by an embodiment of the present invention;
FIG. 7 is an effect diagram of an identification system in another example scenario provided by an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "comprising" and "having" in the embodiments of the present invention are used in an open-ended sense, meaning that there may be additional elements/components/etc. in addition to the listed elements/components/etc.; the terms "first" and "second" and the like are used merely as labels and do not limit the number of their objects. Furthermore, the various elements and regions in the figures are only schematically illustrated, and the present invention is thus not limited to the dimensions or distances illustrated in the figures.
For the purpose of facilitating an understanding of the embodiments of the present invention, reference will now be made to the following description of specific embodiments, taken in conjunction with the accompanying drawings, which are not intended to limit the embodiments of the invention.
The mechanism of attention (Attention Mechanism) stems from the study of human vision. In cognitive sciences, due to bottlenecks in information processing, humans may selectively focus on a portion of all information while ignoring other visible information. The above mechanism is often referred to as an attention mechanism.
Fig. 1 is a schematic flow chart of an identification method based on electroencephalogram and eye movement according to an embodiment of the present invention. The execution subject of the invention is an emotion recognition system. According to the diagram provided in fig. 1, the identification method based on electroencephalogram and eye movement specifically comprises the following steps:
S101, creating an acquisition paradigm of the electroencephalogram signal and the eye movement signal, and acquiring the electroencephalogram signal and the eye movement signal according to the acquisition paradigm.
The invention is applied to emotion recognition: the electroencephalogram signal and the eye movement signal of a tester are collected, the collected signals are input into the created feature fusion network model for feature extraction to obtain feature signals, and the feature signals are further classified to obtain the corresponding recognition result, so that recognition is performed on the basis of two kinds of collected signals.
The electroencephalogram signal is understood here to be the brain electrical activity collected from the brain of the subject. The eye movement signal is understood to include information such as the retinal focusing state and the pupil dilation of the subject's eyes. The acquisition paradigm described herein is understood to be a predetermined signal acquisition procedure.
Further, before the electroencephalogram signal and the eye movement signal of the detection object are collected, a signal collection mode is designed, and the brain electric signal and the eye pupil state or the eye movement signal of the detection object are collected according to the designed collection mode, so that processing data are provided for identifying the emotion type of the detection object.
S102, creating a feature fusion network model based on an attention mechanism.
Attention mechanisms as referred to herein may be understood as processes that selectively focus on a portion of all information. The feature fusion network model is understood as an identification model for classifying after feature aggregation.
Further, a recognition model is built according to the principle of the attention mechanism, a network model for aggregating characteristic features is created, and a processing model is provided for emotion recognition.
And S103, inputting the acquired electroencephalogram signals and the eye movement signals into a feature fusion network model for feature extraction and classification processing, and obtaining recognition results of psychological states corresponding to the electroencephalogram signals and the eye movement signals.
Feature extraction is understood here as the process of extracting feature vectors in a signal. The classification process is understood herein as a process of identifying the collected signals separately for the emotion classification. Psychological states as referred to herein may be understood as emotional categories, including negative and positive emotions. The recognition result is understood as a specific category judged for emotion classification.
Further, the obtained electroencephalogram signals and eye movement signals are preprocessed and then input into a trained feature fusion network model for feature extraction, emotion classification and recognition are carried out on the extracted feature data, emotion categories represented by the currently collected electroencephalogram signals and eye movement signals are obtained and are output as recognition results of psychological states, recognition processing of corresponding emotional states is achieved, and the technical effect of improving emotion recognition rate is achieved.
According to the identification method based on electroencephalogram and eye movement provided by the embodiment of the invention, an acquisition paradigm for the electroencephalogram signal and the eye movement signal is created, and the signals are acquired according to the acquisition paradigm; a feature fusion network model based on an attention mechanism is created; and the acquired electroencephalogram and eye movement signals are input into the feature fusion network model for feature extraction and classification processing to obtain a recognition result of the psychological states corresponding to the signals. The electroencephalogram and eye movement signals are acquired under the designed acquisition paradigm, the acquired signals are input into the created feature fusion network model for feature extraction to obtain feature signals, and the feature signals are further classified to obtain the corresponding recognition result. Because recognition is performed on the two acquired signals together, the corresponding emotional states can be recognized, achieving the technical effect of improving the emotion recognition rate.
Fig. 2 is a flow chart of another identification method based on electroencephalogram and eye movement according to an embodiment of the present invention. The execution subject of the invention is an emotion recognition system. Fig. 2 is presented on the basis of the above embodiment. Referring to the diagram provided in fig. 2, the identification method based on electroencephalogram and eye movement specifically further includes:
S201, creating an acquisition paradigm of brain electrical signals and eye movement signals according to a set experimental paradigm, wherein the acquisition paradigm comprises target stimulation and interference stimulation.
The invention is applied to emotion recognition: the electroencephalogram signal and the eye movement signal of a tester are collected, the collected signals are input into the created feature fusion network model for feature extraction to obtain feature signals, and the feature signals are further classified to obtain the corresponding recognition result, so that recognition is performed on the basis of two kinds of collected signals.
The electroencephalogram signals are understood to be the brain electrical activity acquired from the brain of the subject. The subject may be, but is not limited to, a human or an animal. The eye movement signal is understood to include information such as the retinal focusing state and the pupil dilation of the subject's eyes. The acquisition paradigm described herein is understood to be a predetermined signal acquisition procedure, for example a cycle of presenting a still picture, then watching specified video content, and finally closing the video and resting the eyes. The target stimulus is understood here as a visual stimulus producing an intense, conflicting visual sensation, for example a racing car accelerating over rough terrain, or a video clip of natural phenomena such as a tsunami or a mudslide. The interference stimulus is understood here as a visual stimulus with a gentle visual perception, for example a clear blue sky or a mountain stream with gently flowing water.
Further, before the electroencephalogram signal and the eye movement signal of the detection object are acquired, a signal acquisition paradigm is designed, and a set of complete experimental paradigm is created according to different categories of target stimulation and interference stimulation so as to prepare for the next acquisition of the electroencephalogram signal and the eye movement signal.
S202, acquiring target brain electrical signals and target eye movement signals based on target stimulation.
S203, acquiring an interference brain electrical signal and an interference eye movement signal based on the interference stimulus.
The target brain electrical signal is understood as an brain electrical signal generated by the brain after watching the target stimulation video. The target eye movement signal is understood as an eye movement signal generated by looking at the pupil and focusing conditions corresponding to the eye after the target stimulus. The term "interferential brain electrical signal" is understood to mean an brain electrical signal generated by the brain after viewing the interferential stimulation video. The disturbance eye movement signal is understood as an eye movement signal generated by looking at the pupil and focusing conditions corresponding to the eye after the disturbance stimulus.
Further, when the detected object watches different visual stimuli, corresponding brain electrical signals are collected aiming at the target stimuli to serve as target brain electrical signals, and corresponding eye movement signals are collected to serve as target eye movement signals; similarly, for the interference stimulus in the visual stimulus, the interference brain electrical signal corresponding to the brain electrical signal is collected, and the interference eye movement signal corresponding to the eye movement signal is collected, so that the brain electrical signal of the detection object and the eye pupil state or the eye movement signal of the focusing state are collected according to a designed collection paradigm, and a data basis is provided for identifying the emotion category of the detection object.
In one possible example scenario, based on the go/no-go experimental paradigm, the experiment is designed using E-Prime software with 2 classes of stimuli (i.e., target stimuli and interference stimuli). The subject wears the electrode cap and directly faces the video screen on which the stimuli are played; each trial begins with a fixation cross presented at the screen center for 1 s, followed by a visual stimulus presented at random for 2 s, and ends with a 1 s rest. After each round of testing, the subject performs a timely self-evaluation, and the true emotional experience is recorded.
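The paradigm itself is built in E-Prime; purely as an illustration of the trial timing described above, the following is a minimal Python sketch in which the presentation routines are hypothetical console stand-ins rather than E-Prime calls:

```python
import random
import time

TARGET, INTERFERENCE = "target", "interference"

def present(event: str, duration_s: float) -> None:
    """Hypothetical stand-in for on-screen presentation: print and wait."""
    print(f"{event} for {duration_s} s")
    time.sleep(duration_s)

def run_block(n_trials: int = 10) -> None:
    for _ in range(n_trials):
        kind = random.choice([TARGET, INTERFERENCE])          # 2 stimulus classes
        present("fixation cross '+' at screen center", 1.0)   # 1 s fixation
        present(f"randomly selected {kind} stimulus", 2.0)    # 2 s visual stimulus
        present("rest (blank screen)", 1.0)                   # 1 s rest
        print("record self-reported emotional experience")    # per-trial self-evaluation

if __name__ == "__main__":
    run_block()
```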
S204, creating a deep time sequence convolution layer network, a multi-spectral convolution layer network and a feature fusion classification layer network.
S205, performing attention mechanism processing based on the deep time sequence convolution layer network, the multi-spectral convolution layer network and the feature fusion classification layer network, and creating a corresponding feature fusion network model.
The deep time sequence convolution layer network can be understood as a network model that processes time sequence data features to obtain temporal correlation features. The multi-spectral convolution layer network is understood here as a model that performs spectral analysis to extract spectral features. The attention mechanism referred to here may be understood as a process of selectively focusing on a portion of all information. The feature fusion classification layer network is understood as a recognition model that performs classification processing after fusion processing.
Further, a deep time sequence convolution layer network, a multi-spectrum convolution layer network and a feature fusion classification layer network are respectively established, attention mechanism processing is conducted on the three network models, the degree of association among the three network models is established, a feature fusion network model for feature fusion of a plurality of feature data and achieving the purpose of feature classification recognition after fusion processing is established, and preparation is made for the next emotion classification recognition.
S206, inputting the acquired electroencephalogram signals into a corresponding deep time sequence convolution layer network in the characteristic fusion network model to perform time sequence convolution processing, and obtaining high-dimensional time sequence representation corresponding to the electroencephalogram signals.
The convolution process described herein can be understood as a multi-layer convolution operation. The high-dimensional timing characterization is understood herein as multi-dimensional timing characterization data.
Further, time sequence convolution operation is carried out on the collected electroencephalogram signals through the created feature fusion network model, feature dimensions of the electroencephalogram signals are improved, and high-dimensional time sequence characterization corresponding to the electroencephalogram signals is obtained after multi-dimensional transformation, so that preparation is made for identifying emotion categories.
In one possible example scenario, the deep time sequence convolution layer network in the feature fusion network model extracts high-dimensional temporal features of the electroencephalogram signal, with a plurality of subunits designed as time sequence convolution units. Each time sequence convolution unit begins with a max pooling layer whose kernel size is set to (1×3), generating the corresponding electroencephalogram representation from the electroencephalogram signals of the acquisition channels. A temporal convolution with a kernel size of (1×11) is then used to generate deeper information, raising the feature dimension. Meanwhile, a nonlinear activation function is applied, increasing the nonlinearity of the representation and yielding the high-dimensional time sequence characterization corresponding to the electroencephalogram signal.
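As an illustration, a minimal PyTorch sketch of one such time sequence convolution unit follows. Only the (1×3) max pooling and (1×11) temporal convolution kernels come from the description above; the channel counts, the ELU activation, the padding, and the input shape (62 electrodes, 800 samples) are assumptions:

```python
import torch
import torch.nn as nn

class TemporalConvUnit(nn.Module):
    """One time sequence convolution unit: (1x3) max pooling, then a (1x11)
    temporal convolution, then a nonlinear activation (ELU assumed here)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=(1, 3))        # (1x3) max pooling
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=(1, 11),
                              padding=(0, 5))               # (1x11) temporal conv
        self.act = nn.ELU()                                 # nonlinear activation

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feature maps, EEG electrodes, time samples)
        return self.act(self.conv(self.pool(x)))

# e.g. a 62-electrode segment of 800 samples -> higher-dimensional representation
x = torch.randn(1, 1, 62, 800)
print(TemporalConvUnit(1, 16)(x).shape)  # torch.Size([1, 16, 62, 266])
```

Stacking several such units progressively raises the feature dimension, yielding the high-dimensional time sequence characterization.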
S207, inputting the acquired eye movement signals into the corresponding multi-spectral convolution layer network in the feature fusion network model to carry out wavelet convolution processing, obtaining the multi-spectral features corresponding to the eye movement signals.
Wavelet convolution is understood here to mean an operation method of spectrum processing. The multi-spectral feature is understood here to be feature data that more highlights the main frequency components and more easily identifies the individual constituent components of the signal.
Further, wavelet convolution operation is carried out on the collected eye movement signals through the created feature fusion network model, spectrum features are extracted, and multi-spectrum features corresponding to the eye movement signals are obtained through improving feature dimensions of the eye movement signals, so that preparation is made for identifying emotion types.
In one possible example scenario, the function of frequency domain feature extraction is incorporated into the feature fusion network model using a convolutional operator, the wavelet convolution layer. The eye movement signal is subjected to wavelet decomposition using the Db4 (Daubechies order-4) wavelet, and the shallow eye movement signal is decomposed into wavelet coefficients of a plurality of frequency bands. After a series of wavelet convolutions, these spectral representations are further stitched into a compact signature for multi-spectral analysis. Defining x as an eye movement signal of T sampling points, the wavelet transform at sampling point t is expressed by formulas 1 and 2:

$$x_A(t) = \sum_{k=1}^{K} u(k)\, x(st - k), \qquad x_D(t) = \sum_{k=1}^{K} v(k)\, x(st - k) \qquad \text{(formulas 1 and 2)}$$

where u and v represent a pair of wavelet filters, named the approximation filter and the detail filter respectively; $x_A$ and $x_D$ are the approximation coefficients and the detail coefficients; and K and s are the kernel size and step size of the wavelet convolution. The wavelet layer number V is determined by the original sampling rate $f_s$ of the eye movement signal (formula 3). The step size of each shift of the wavelet convolution kernel is set to 2 and the kernel size to 8, consistent with the order of the Db4 wavelet filter. Since the wavelet transform is implemented by convolution, the number of output channels is twice the number of input channels, which ensures that the two channel groups obtained after separation are the same size. Assuming that the number of input channels is N, the 2N output channels are separated into approximation coefficients and detail coefficients by the following selection method:

$$x_A = \{\, x_W(c) \mid c = 1, 3, \ldots, 2N-1 \,\}, \qquad x_D = \{\, x_W(c) \mid c = 2, 4, \ldots, 2N \,\} \qquad \text{(formulas 4 and 5)}$$

where $x_W$ is the output of each wavelet convolution layer and c is the selected channel index. For each input channel, applying the pair of wavelet filters (u, v) yields two output channels, so the channels of $x_A$ and $x_D$ alternate. A padding method is applied when selecting $x_A$ for output, so that the result of formula 2 is represented periodically and smoothly, alleviating the distortion at the head and tail of the signal after wavelet convolution (formulas 6 and 7), where K is the length of the signal and h is the kernel size of the wavelet convolution. To integrate all frequency domain features into a compact model architecture, the resulting spectral features are stitched together to yield a set of multi-spectral features.
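A minimal PyTorch sketch of such a wavelet convolution layer follows, assuming fixed (non-trainable) Db4 decomposition filters taken from the PyWavelets package and a zero padding of 4; the stride of 2, the kernel size of 8, and the doubling of channels come from the description above:

```python
import torch
import torch.nn as nn
import pywt  # PyWavelets, used only to obtain the Db4 filter coefficients

class WaveletConv1d(nn.Module):
    """Stride-2, kernel-8 wavelet convolution: each input channel is filtered
    by the Db4 pair (u, v), so the output has twice as many channels, which
    are then separated into approximation and detail coefficients."""

    def __init__(self, in_ch: int):
        super().__init__()
        w = pywt.Wavelet("db4")
        lo = torch.tensor(w.dec_lo, dtype=torch.float32)  # approximation filter u
        hi = torch.tensor(w.dec_hi, dtype=torch.float32)  # detail filter v
        weight = torch.zeros(2 * in_ch, 1, 8)
        weight[0::2, 0, :] = lo          # odd channels (1-based): x_A per formula 4
        weight[1::2, 0, :] = hi          # even channels (1-based): x_D per formula 5
        self.conv = nn.Conv1d(in_ch, 2 * in_ch, kernel_size=8, stride=2,
                              padding=4, groups=in_ch, bias=False)
        self.conv.weight = nn.Parameter(weight, requires_grad=False)

    def forward(self, x: torch.Tensor):
        y = self.conv(x)                      # (batch, 2N, T/2): outputs alternate
        return y[:, 0::2, :], y[:, 1::2, :]   # x_A, x_D

x = torch.randn(1, 4, 256)       # 4 eye movement channels, 256 samples
x_a, x_d = WaveletConv1d(4)(x)
print(x_a.shape, x_d.shape)      # both torch.Size([1, 4, 129])
```

Applying the layer repeatedly to the approximation output yields the multi-band coefficients that are stitched into the multi-spectral features.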
S208, inputting the high-dimensional time sequence characterization and the multi-spectral features into the feature fusion classification layer network for fusion processing, and classifying the fused features to obtain a recognition result of the psychological states corresponding to the electroencephalogram signals and the eye movement signals.
Wherein the psychological states comprise a positive emotional state and a negative emotional state.
Further, the obtained high-dimensional time sequence representation representing the electroencephalogram signal and the multi-spectral feature representing the eye movement signal are input into a feature fusion classification layer network to perform feature fusion, the obtained fusion features are subjected to classification recognition, the obtained recognition result is used as the psychological states of the electroencephalogram signal and the eye movement signal, the emotion type is further obtained, and emotion recognition based on the electroencephalogram signal and the eye movement signal is completed.
Further, the specific steps of implementing the classification and identification process in step S208 include:
step one: and inputting the high-dimensional time sequence characterization and the multi-spectral features into a feature fusion classification layer network to perform global average pooling treatment to obtain the dimension statistical features.
Step two: and performing first full-connection processing, nonlinear processing and second full-connection processing on the dimension statistical characteristics to obtain a fusion feature vector.
Step three: and classifying the fusion feature vector based on a softmax function to obtain a preset psychological state recognition result.
In one possible example scenario, for the input feature data in which the electroencephalogram signal corresponds to the eye movement signal, the attention module performs a "compression" (i.e., squeeze) operation to recalibrate the features, aggregating the feature data along the time dimension. The "excitation" operation takes the output of the preceding compression block as input, explores channel correlation, calculates weights for all channels, and applies the weights to the feature data, implementing the channel attention mechanism. To exploit channel correlation, the compression operation performs global average pooling over the input feature data to compress the global information into channel dimension statistics. Formally, the statistic computed by global average pooling is defined in formula 8:

$$z_{sq}(M, c) = \frac{1}{T} \sum_{t=1}^{T} x_s(M, c, t) \qquad \text{(formula 8)}$$

where $x_s$ is the input feature data, T is the time series length, and M, c, t index the three dimensions (number, width, and length) of the feature data. The subsequent "excitation" operation makes full use of the channel correlation extracted by the compression operation. This is accomplished by two consecutive fully connected layers, a nonlinear layer, and a softmax function, and the output of the attention module is defined in formula 9:

$$z_{se} = \sigma\left(W_2\, \varepsilon(W_1 z_{sq})\right) x_s \qquad \text{(formula 9)}$$

where $W_1$ and $W_2$ are the first and second fully connected layers respectively, $\varepsilon$ is a nonlinear function, and $\sigma$ is a softmax function. All features are then mapped into a one-dimensional feature vector and input into the full connection layer. The softmax function yields the probabilities of the different classes, and the class with the highest probability is taken as the final recognition result of the electroencephalogram and eye movement emotion recognition, fulfilling the aim of detecting emotional change based on the electroencephalogram signal and the eye movement signal.
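A minimal PyTorch sketch of this attention-based fusion and classification head follows. The structure (global average pooling, two fully connected layers $W_1$ and $W_2$ with a nonlinearity $\varepsilon$ and softmax $\sigma$, then a flatten, full connection, and softmax classifier) comes from formulas 8 and 9, while the reduction ratio r, the ReLU choice for $\varepsilon$, and the feature sizes are assumptions:

```python
import torch
import torch.nn as nn

class AttentionFusionClassifier(nn.Module):
    def __init__(self, channels: int, time_len: int, n_classes: int = 2, r: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // r)   # W1 (reduction r assumed)
        self.eps = nn.ReLU()                            # nonlinear function epsilon
        self.fc2 = nn.Linear(channels // r, channels)   # W2
        self.sigma = nn.Softmax(dim=1)                  # sigma over channels
        self.classifier = nn.Linear(channels * time_len, n_classes)

    def forward(self, x_s: torch.Tensor) -> torch.Tensor:
        # x_s: (batch, channels, time), the fused EEG / eye movement features
        z_sq = x_s.mean(dim=2)                          # formula 8: squeeze
        w = self.sigma(self.fc2(self.eps(self.fc1(z_sq))))
        z_se = w.unsqueeze(-1) * x_s                    # formula 9: recalibration
        logits = self.classifier(z_se.flatten(1))       # 1-D vector -> FC layer
        return torch.softmax(logits, dim=1)             # class probabilities

probs = AttentionFusionClassifier(channels=32, time_len=100)(torch.randn(2, 32, 100))
print(probs.shape)  # torch.Size([2, 2]): positive vs. negative emotional state
```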
In one possible example scenario, fig. 3 presents a flowchart of an example scenario provided by an embodiment of the present invention. Referring to fig. 3, the detection subject wears the electroencephalogram and eye movement acquisition devices and watches a visual stimulus video or image; the electroencephalogram of the subject is acquired, the eye movement signal is acquired by an eye tracker, and the EEG signal and the eye movement signal are thereby obtained. The acquired signals are separately preprocessed to obtain multi-layer dynamic graph convolution data corresponding to the electroencephalogram signal and time sequence and spectrum convolution data corresponding to the eye movement signal. The resulting convolution data are input into the attention-based feature fusion network model for feature fusion and classification, yielding a psychological state recognition result. The recognition result can be classified into two major categories, positive emotion and negative emotion, with positive emotion refined into calm, relaxation, or happiness, and negative emotion refined into tiredness, tension, or fear. After the recognition result is obtained, model performance evaluation can optionally be added: if the evaluation is unqualified, the feature extraction method is corrected; if qualified, the current feature fusion network model is used for real-time detection, the multi-mode data management module is used to obtain signal data, the classification recognition data are adjusted through online detection, and the detection result is displayed through the display module, achieving the purpose of real-time detection.
Alternatively, in one possible example scenario, fig. 4 presents a schematic flow chart of another example scenario provided by an embodiment of the present invention. Referring to fig. 4, the electroencephalogram signal and the eye movement signal are collected; the electroencephalogram signal undergoes multi-layer convolution, pooling, and nonlinear-function processing in the deep time sequence convolution layer, yielding the high-dimensional time sequence features corresponding to the electroencephalogram signal, while the eye movement signal undergoes multi-layer wavelet convolution in the multi-spectral convolution layer, yielding the multi-spectral features. The resulting high-dimensional time sequence features and multi-spectral features are input into the attention-based feature fusion network model and fused, and the fused features are classified, including global pooling processing, two full-connection operations, nonlinear processing, and classification processing, then compared against the preset emotion categories; the recognition result is finally output, realizing recognition of the corresponding emotional states and improving the emotion recognition rate.
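To show how the stages connect, here is a compact, hypothetical shape trace of the fig. 4 flow; the three sub-networks are reduced to simplified stand-ins (the attention fusion is replaced by plain concatenation here) and every size is an assumption chosen only to make the shapes line up:

```python
import torch
import torch.nn as nn

eeg = torch.randn(1, 1, 62, 800)   # EEG: (batch, 1, electrodes, samples)
eye = torch.randn(1, 4, 256)       # eye movement: (batch, channels, samples)

temporal_net = nn.Sequential(      # stand-in for the deep temporal conv branch
    nn.MaxPool2d((1, 3)), nn.Conv2d(1, 16, (1, 11), padding=(0, 5)), nn.ELU())
spectral_net = nn.Conv1d(4, 8, kernel_size=8, stride=2, padding=4)  # wavelet-style

h_eeg = temporal_net(eeg).flatten(2).mean(2)   # -> (1, 16) temporal summary
h_eye = spectral_net(eye).mean(2)              # -> (1, 8) spectral summary
fused = torch.cat([h_eeg, h_eye], dim=1)       # simplified fusion -> (1, 24)
probs = torch.softmax(nn.Linear(24, 2)(fused), dim=1)
print(probs)  # probabilities for positive vs. negative emotional state
```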
According to the identification method based on electroencephalogram and eye movement provided by the embodiment of the invention, the electroencephalogram and eye movement signals are acquired under the designed paradigm; the electroencephalogram signals undergo feature extraction in the deep time sequence convolution layer network to obtain high-dimensional time sequence features, and the eye movement signals undergo feature extraction in the multi-spectral convolution layer network to obtain multi-spectral features. The resulting high-dimensional time sequence features and multi-spectral features are input into the attention-based feature fusion classification layer network for feature fusion, and the fused feature data are classified to obtain the emotion classification recognition result, realizing recognition of the corresponding emotional states and improving the emotion recognition rate.
Fig. 5 is a schematic structural diagram of an identification system according to an embodiment of the present invention. The execution subject of the invention is an emotion recognition system. Referring to the diagram provided in fig. 5, the identification system specifically includes:
the system comprises a synchronous acquisition signal module 51, a data management module 52, a state online detection module 53 and a display module 54.
Further, the synchronous acquisition signal module is used for synchronously acquiring the brain electrical signals and the eye movement signals in real time; the data management module is used for storing multi-mode data corresponding to the electroencephalogram signals and the eye movement signals; the state online detection module is used for carrying out online classification detection and analysis processing on the acquired electroencephalogram signals and eye movement signals and determining psychological states and analysis results corresponding to the electroencephalogram signals and the eye movement signals; the display module is used for displaying the psychological state and the analysis result obtained by the on-line detection.
In one possible example scenario, the data management module is further configured to perform quality analysis processing on multi-mode data corresponding to the electroencephalogram signal and the eye movement signal, and filter the multi-mode data, so that an original database and a feature database in the data management module are updated in a feedback manner.
In a possible example scenario, the state online detection module is further configured to preprocess online multi-mode data to obtain online multi-mode information; and carrying out fusion classification processing based on the online multi-mode information, and determining the online psychological state and the online analysis result of the online multi-mode data.
In one possible example scenario, the identification system further comprises an offline detection module; the off-line detection module is used for preprocessing the original multi-mode data to obtain off-line multi-mode information; and carrying out fusion classification processing based on the offline multi-mode information, and determining the offline psychological state and the offline analysis result of the offline multi-mode data.
Alternatively, in one possible example scenario, fig. 6 is a schematic structural diagram of the identification system in one example scenario provided by an embodiment of the present invention. Firstly, the synchronous acquisition module worn by the test object integrates a high-sampling-rate Neuroscan electroencephalogram acquisition device and a portable Tobii Pro Glasses eye tracker, realizing synchronous acquisition of electroencephalogram and eye movement physiological information with high temporal resolution and high stability. The data management module corresponding to the electroencephalogram and eye movement signals stores the collected and recorded physiological information, such as electroencephalogram signals and eye tracking data, in the electroencephalogram and eye movement database at the server end for management; it quantitatively analyzes and judges the data quality according to the multi-mode feature processing results, screens and optimizes the data, and keeps the database continuously fed back and updated, thereby realizing dynamic storage management of the big data of the physiological information under test. Next, the psychological state online detection module integrates the multi-mode data preprocessing and multi-mode information fusion processing of the electroencephalogram and eye movement signals by means of the strong computing capacity of the server. A multi-mode time-frequency feature extraction and fusion decoding network model is first built offline based on the historical data stored by the data management module; online detection and analysis are then performed on the multi-mode physiological data synchronously acquired in real time, and the decoding network parameters are corrected and updated according to the detection results, so that the psychological state is detected rapidly and accurately. Finally, the dynamic psychological state online detection and analysis results, synchronized with the model processing, are presented through the client platform display module, realizing recognition of the emotional state and achieving the technical effect of improving the emotion recognition rate.
Alternatively, in one possible example scenario, fig. 7 is an effect diagram of the recognition system in another example scenario provided by an embodiment of the present invention. Referring to fig. 7, the real-time psychological state detection system lets the data of the current detected object be seen intuitively: the user's basic information is entered and the status of the sampling devices is displayed; the current emotion category dimension, here a negative emotion, is shown through the current psychological state display module, the emotion is refined into a fatigue state, and the position of the detected object is accurately acquired by locking its coordinates. Meanwhile, through the history detection report display interface, the historical detection results of the detected object are displayed and used as a basis for judging changes in the detected object's emotion, realizing recognition of the corresponding emotional states and achieving the technical effect of improving the emotion recognition rate.
According to the recognition system provided by the embodiment of the invention, the synchronous signal acquisition module, the data management module, the state online detection module, and the display module are provided; features are extracted from the electroencephalogram and eye movement signals, and feature fusion and online classification detection are performed through the attention-based feature fusion network model, realizing real-time online psychological state detection. The emotion recognition system based on electroencephalogram and eye movement signals detects psychological states such as fatigue, tension, and excitement by combining the neural mechanisms of visual cognition with the characteristics of the cognitive process, and combines the visual attention mechanism, human experiential knowledge, and machine intelligence to form a human-machine fusion system platform, thereby realizing recognition of emotional states and achieving the technical effect of improving the emotion recognition rate.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and the electronic device 800 shown in fig. 8 includes: at least one processor 801, memory 802, at least one network interface 804, and other user interfaces 803. The various components in the electronic device 800 are coupled together by a bus system 805. It is appreciated that the bus system 805 is used to enable connected communications between these components. The bus system 805 includes a power bus, a control bus, and a status signal bus in addition to the data bus. But for clarity of illustration, the various buses are labeled as bus system 805 in fig. 8.
The user interface 803 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, a trackball, a touch pad, or a touch screen, etc.).
It will be appreciated that the memory 802 in embodiments of the invention can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be Random Access Memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 802 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some implementations, the memory 802 stores the following elements, executable units or data structures, or a subset thereof, or an extended set thereof: an operating system 8021 and application programs 8022.
The operating system 8021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 8022 includes various application programs such as a Media Player (Media Player), a Browser (Browser), and the like for realizing various application services. The program for implementing the method of the embodiment of the present invention may be contained in the application program 8022.
In the embodiment of the present invention, by calling a program or an instruction stored in the memory 802, specifically a program or an instruction stored in the application program 8022, the processor 801 is configured to perform the method steps provided by each method embodiment, for example including:
creating an acquisition paradigm of the electroencephalogram signals and the eye movement signals, and acquiring the electroencephalogram signals and the eye movement signals according to the acquisition paradigm; creating a feature fusion network model based on an attention mechanism; and inputting the acquired electroencephalogram signals and eye movement signals into the feature fusion network model for feature extraction and classification processing, to obtain recognition results of the psychological states corresponding to the electroencephalogram signals and the eye movement signals.
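For instance, the acquisition paradigm mixing target and interference stimuli (detailed in claim 2 below) might be scripted as follows; the trial count, target ratio and timings are hypothetical values chosen purely for illustration:

```python
# Hypothetical sketch of an acquisition paradigm with target and interference
# stimuli; the patent does not disclose trial counts, ratios, or timings.
import random

def build_acquisition_paradigm(n_trials=40, target_ratio=0.25, seed=0):
    """Return a randomized trial list mixing target and interference stimuli."""
    rng = random.Random(seed)
    n_target = int(n_trials * target_ratio)
    trials = ["target"] * n_target + ["interference"] * (n_trials - n_target)
    rng.shuffle(trials)
    # Each trial: stimulus type, stimulus duration (s), inter-trial interval (s)
    return [{"stimulus": t, "duration_s": 2.0, "iti_s": 1.0} for t in trials]

paradigm = build_acquisition_paradigm()
# EEG/eye movement epochs recorded during "target" trials become the target
# signals; epochs from "interference" trials become the interference signals.
```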
The method disclosed in the above embodiment of the present invention may be applied to the processor 801 or implemented by the processor 801. The processor 801 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be performed by integrated logic circuits in hardware in the processor 801 or by instructions in the form of software. The processor 801 may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps and logical blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be embodied directly as being performed by a hardware decoding processor, or performed by a combination of hardware and software units in a decoding processor. The software units may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or registers. The storage medium is located in the memory 802, and the processor 801 reads the information in the memory 802 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing units may be implemented within one or more application specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processor, DSP), digital signal processing devices (Digital Signal Processing Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field programmable gate arrays (Field-Programmable Gate Array, FPGA), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in Fig. 8, and may perform all the steps of the electroencephalogram- and eye-movement-based recognition method shown in Figs. 1-2 and Fig. 4, thereby achieving the technical effects of that method; for brevity, a detailed description is omitted here.
The embodiment of the invention also provides a storage medium (a computer readable storage medium). The storage medium stores one or more programs. The storage medium may comprise volatile memory, such as a random access memory; it may also comprise non-volatile memory, such as a read-only memory, a flash memory, a hard disk or a solid state disk; or it may comprise a combination of the above types of memory.
When the one or more programs in the storage medium are executed by one or more processors, the above-described electroencephalogram- and eye-movement-based identification method performed on the identification device side is implemented.
The processor is configured to execute the electroencephalogram- and eye-movement-based identification program stored in the memory, so as to implement the following steps of the electroencephalogram- and eye-movement-based identification method performed on the identification device side:
creating an acquisition paradigm of the electroencephalogram signals and the eye movement signals, and acquiring the electroencephalogram signals and the eye movement signals according to the acquisition paradigm; creating a feature fusion network model based on an attention mechanism; and inputting the acquired electroencephalogram signals and eye movement signals into the feature fusion network model for feature extraction and classification processing, to obtain recognition results of the psychological states corresponding to the electroencephalogram signals and the eye movement signals.
Those of skill would further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative units and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may be disposed in a random access memory (RAM), a flash memory, a read-only memory (ROM), an electrically programmable ROM, an electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The foregoing description of the embodiments has been provided to illustrate the general principles of the invention and is not intended to limit the invention to the particular embodiments disclosed or to otherwise restrict its scope; any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the invention are intended to be included within the scope of the invention.

Claims (10)

1. An identification method based on electroencephalogram and eye movement is characterized by comprising the following steps:
creating an acquisition paradigm of electroencephalogram signals and eye movement signals, and acquiring the electroencephalogram signals and the eye movement signals according to the acquisition paradigm;
creating a feature fusion network model based on an attention mechanism;
and inputting the acquired electroencephalogram signals and the eye movement signals into the feature fusion network model to perform feature extraction and classification processing, so as to obtain recognition results of psychological states corresponding to the electroencephalogram signals and the eye movement signals.
2. The method of claim 1, wherein the creating an acquisition paradigm of electroencephalogram signals and eye movement signals and the acquiring the electroencephalogram signals and the eye movement signals according to the acquisition paradigm comprises:
establishing an acquisition paradigm of electroencephalogram signals and eye movement signals according to a set experimental paradigm, wherein the acquisition paradigm comprises a target stimulus and an interference stimulus;
collecting a target electroencephalogram signal and a target eye movement signal based on the target stimulus;
and collecting an interference electroencephalogram signal and an interference eye movement signal based on the interference stimulus.
3. The method of claim 2, wherein the creating a feature fusion network model based on an attention mechanism comprises:
creating a deep temporal convolutional layer network, a multi-spectral convolutional layer network and a feature fusion classification layer network;
and performing attention-mechanism processing based on the deep temporal convolutional layer network, the multi-spectral convolutional layer network and the feature fusion classification layer network, to create the corresponding feature fusion network model.
4. The method according to claim 3, wherein the inputting the acquired electroencephalogram signals and the eye movement signals into the feature fusion network model for feature extraction and classification processing, to obtain the recognition result of the psychological states corresponding to the electroencephalogram signals and the eye movement signals, comprises:
inputting the acquired electroencephalogram signals into the corresponding deep temporal convolutional layer network in the feature fusion network model for temporal convolution processing, to obtain a high-dimensional temporal representation corresponding to the electroencephalogram signals;
inputting the acquired eye movement signals into the corresponding multi-spectral convolutional layer network in the feature fusion network model for wavelet convolution processing, to obtain multi-spectral features corresponding to the eye movement signals;
and inputting the high-dimensional temporal representation and the multi-spectral features into the feature fusion classification layer network for fusion processing, and classifying the fused features, to obtain a recognition result of the psychological states corresponding to the electroencephalogram signals and the eye movement signals.
5. The method according to claim 4, wherein the inputting the high-dimensional temporal representation and the multi-spectral features into the feature fusion classification layer network for fusion processing, and classifying the fused features to obtain the recognition result of the psychological states corresponding to the electroencephalogram signals and the eye movement signals, comprises:
inputting the high-dimensional temporal representation and the multi-spectral features into the feature fusion classification layer network and performing global average pooling processing, to obtain dimension-wise statistical features;
performing first fully-connected processing, nonlinear processing and second fully-connected processing on the dimension-wise statistical features, to obtain a fused feature vector;
and classifying the fused feature vector based on a softmax function, to obtain a recognition result among the preset psychological states.
6. The method of claim 1, wherein the mental states comprise a positive emotional state and a negative emotional state.
7. An identification system applying the electroencephalogram- and eye-movement-based identification method according to claim 1, characterized by comprising:
a synchronous signal acquisition module, a data management module, a state online detection module and a display module;
the synchronous signal acquisition module is used for synchronously acquiring electroencephalogram signals and eye movement signals in real time;
the data management module is used for storing multi-modal data corresponding to the electroencephalogram signals and the eye movement signals;
the state online detection module is used for performing online classification detection and analysis processing on the acquired electroencephalogram signals and eye movement signals, and determining the psychological states and analysis results corresponding to the electroencephalogram signals and the eye movement signals;
the display module is used for displaying the psychological state and the analysis result obtained by the online detection.
8. The identification system of claim 7, wherein the data management module is further configured to perform quality analysis processing on the multi-modal data corresponding to the electroencephalogram signals and the eye movement signals, and to screen the multi-modal data, so that an original database and a feature database in the data management module are updated in a feedback manner.
9. The identification system of claim 7, wherein the state online detection module is further configured to preprocess online multi-modal data to obtain online multi-modal information; and to perform fusion classification processing based on the online multi-modal information, to determine the online psychological state and the online analysis result of the online multi-modal data.
10. The identification system of claim 7, further comprising an offline detection module;
the off-line detection module is used for preprocessing the original multi-mode data to obtain off-line multi-mode information;
and carrying out fusion classification processing based on the offline multi-mode information, and determining the offline psychological state and the offline analysis result of the offline multi-mode data.
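To make claims 3-5 concrete, the following PyTorch sketch shows one plausible shape of the feature fusion network: a deep temporal convolutional branch for the EEG signals, a multi-spectral branch for the eye movement signals (ordinary multi-kernel Conv1d layers stand in for the wavelet convolution, which the claims do not specify), and a fusion classification head that applies global average pooling, a first fully-connected layer, a nonlinearity, a second fully-connected layer and a softmax. All layer sizes, kernel widths and channel counts are assumptions:

```python
# A minimal, hypothetical reading of the attention-based feature fusion network
# of claims 3-5; the patent discloses the structure but no hyperparameters.
import torch
import torch.nn as nn

class FeatureFusionNet(nn.Module):
    def __init__(self, eeg_channels=32, eye_channels=4, n_states=2):
        super().__init__()
        # Deep temporal convolutional branch (claim 4): stacked 1-D
        # convolutions over time yield a high-dimensional temporal
        # representation of the EEG signals.
        self.eeg_branch = nn.Sequential(
            nn.Conv1d(eeg_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=7, padding=3),
            nn.ReLU(),
        )
        # Multi-spectral branch (claim 4): parallel convolutions with
        # different kernel widths approximate a multi-band (wavelet-style)
        # decomposition of the eye movement signals.
        self.eye_branches = nn.ModuleList([
            nn.Conv1d(eye_channels, 32, kernel_size=k, padding=k // 2)
            for k in (3, 7, 15)
        ])
        fused_dim = 128 + 3 * 32  # concatenated channel dimension
        # Fusion classification head (claim 5): global average pooling ->
        # first fully-connected layer -> nonlinearity -> second
        # fully-connected layer -> softmax.
        self.fc1 = nn.Linear(fused_dim, 64)
        self.act = nn.ReLU()
        self.fc2 = nn.Linear(64, n_states)

    def forward(self, eeg, eye):
        # eeg: (batch, eeg_channels, T_eeg); eye: (batch, eye_channels, T_eye)
        h_eeg = self.eeg_branch(eeg)  # high-dimensional temporal representation
        h_eye = torch.cat([b(eye) for b in self.eye_branches], dim=1)
        # Global average pooling over time gives dimension-wise statistics.
        stats = torch.cat([h_eeg.mean(dim=-1), h_eye.mean(dim=-1)], dim=1)
        fused = self.fc2(self.act(self.fc1(stats)))  # fused feature vector
        return torch.softmax(fused, dim=1)           # state probabilities

model = FeatureFusionNet()
probs = model(torch.randn(8, 32, 256), torch.randn(8, 4, 60))  # (8, n_states)
```

Under this reading, the global-average-pooled statistics serve as the attention-style channel summary that the two fully-connected layers turn into a fused feature vector before softmax classification.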
CN202310341713.3A 2023-03-31 2023-03-31 Identification method and identification system based on electroencephalogram and eye movement Pending CN116439706A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310341713.3A CN116439706A (en) 2023-03-31 2023-03-31 Identification method and identification system based on electroencephalogram and eye movement

Publications (1)

Publication Number Publication Date
CN116439706A 2023-07-18

Family

ID=87131411

Country Status (1)

CN: CN116439706A

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination