CN112656431A - Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN112656431A
CN112656431A (application CN202011472619.4A)
Authority
CN
China
Prior art keywords
signal
attention recognition
attention
brain wave
model
Prior art date
Legal status (the status listed is an assumption, not a legal conclusion)
Pending
Application number
CN202011472619.4A
Other languages
Chinese (zh)
Inventor
余小新
王怡珊
李烨
Current Assignee
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS filed Critical Shenzhen Institute of Advanced Technology of CAS
Priority to CN202011472619.4A priority Critical patent/CN112656431A/en
Publication of CN112656431A publication Critical patent/CN112656431A/en
Pending legal-status Critical Current

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application relates to the technical field of brain-computer interfaces, and provides an attention recognition method and device based on electroencephalogram, a terminal device, and a computer storage medium. The attention recognition method includes: acquiring a brain wave signal to be identified; preprocessing the brain wave signal to obtain a target signal; adjusting parameters of a pre-constructed attention recognition model according to the target signal, wherein the attention recognition model is a convolutional neural network model trained on a preset source domain data set; and identifying the target signal with the parameter-adjusted attention recognition model to obtain an attention recognition result for the brain wave signal. By introducing a transfer learning mechanism, the problem of the limited number of samples in electroencephalogram data sets can be alleviated and the accuracy of attention recognition improved.

Description

Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium
Technical Field
The application relates to the technical field of brain-computer interfaces, in particular to an attention recognition method and device based on electroencephalogram, terminal equipment and a storage medium.
Background
With the development and progress of society, mental labor accounts for an ever larger proportion of human activity. Attention has an important influence on brain activity, so real-time detection and recognition of attention from brain signals has become a focus of current technology development.
At present, attention recognition from the electroencephalogram is mainly realized by methods based on traditional machine learning. The basic principle is as follows: electroencephalogram data are collected by electroencephalogram equipment; a series of data-processing steps such as filtering, noise reduction, and decomposition are performed on the data; features are extracted manually and input into a Support Vector Machine (SVM) classification model; and a result of attention recognition is obtained.
However, this approach suffers from tedious manual feature extraction and low accuracy of the recognition result.
Disclosure of Invention
In view of this, embodiments of the present application provide an attention recognition method and apparatus based on electroencephalogram, a terminal device, and a storage medium, which can improve accuracy of attention recognition.
A first aspect of an embodiment of the present application provides an attention recognition method based on electroencephalogram, including:
acquiring a brain wave signal to be identified;
preprocessing the brain wave signal to obtain a target signal;
adjusting parameters of a pre-constructed attention recognition model according to the target signal, wherein the attention recognition model is a convolutional neural network model obtained by adopting a preset source domain data set for training;
and identifying the target signal by adopting the attention identification model after parameter adjustment to obtain an attention identification result of the brain wave signal.
The method and the device train a convolutional neural network model on a preset source domain data set, preprocess the brain wave signal to be recognized, and input the preprocessed signal into the model to adjust the model parameters, thereby achieving migration of the model from the source domain to the target domain. By introducing this transfer learning mechanism, the problem of the limited number of samples in electroencephalogram data sets can be alleviated and the accuracy of attention recognition improved.
In an embodiment of the present application, the pre-processing the brain wave signal to obtain the target signal may include:
performing band-pass filtering processing on the brain wave signals to obtain specified frequency band signals;
denoising the specified frequency band signal by adopting a multi-level wavelet decomposition method;
and executing Euclidean space alignment processing on the denoised specified frequency band signal and the source domain data set to obtain the target signal.
Because signals acquired by electroencephalogram equipment are inevitably mixed with a large number of noise signals, which mainly include self-noise of the equipment, interference of an electro-oculogram signal or an electromyogram signal, and the like, after acquiring a brain wave signal, a series of preprocessing processes such as filtering, noise reduction and the like are firstly performed on the brain wave signal.
Further, the method for performing denoising processing on the specified frequency band signal by using a multi-level wavelet decomposition may include:
performing low-pass filtering processing on the specified frequency band signal, and sampling the signal subjected to the low-pass filtering processing by adopting a preset first impulse response function to obtain a low-frequency component signal;
performing high-pass filtering processing on the specified frequency band signal, and sampling the signal subjected to the high-pass filtering processing by adopting a preset second impulse response function to obtain a high-frequency component signal;
and if the low-frequency component signal and the high-frequency component signal are both in a specified frequency range, determining to finish the denoising processing of the specified frequency band signal.
For a large amount of noise signals mixed in the brain wave signals, a multi-level wavelet decomposition method can be adopted for denoising, so that noise interference in the brain wave signals is reduced.
In one embodiment of the present application, the attention recognition model may be obtained by training:
acquiring the source domain data set, wherein the source domain data set comprises electroencephalogram data with class labels acquired from a plurality of different subjects at different times and in different states;
and training by taking the source domain data set as a training set to obtain a convolutional neural network model containing a depth separable convolutional layer and a residual error module, wherein the convolutional neural network model is used as the attention recognition model.
By combining depthwise separable convolutional layers with a residual module, a lightweight convolutional neural network can be trained whose computational cost is greatly reduced at the price of only a slight drop in classification accuracy.
Specifically, adjusting parameters of a pre-constructed attention recognition model according to the target signal may include:
and adjusting the softmax output layer parameter and the batchsize parameter of the attention recognition model according to the target signal.
In order to prevent overfitting, only the softmax output layer parameter and the batchsize parameter are adjusted; the other model parameters are kept unchanged.
Further, adjusting the softmax output layer parameter and the batchsize parameter of the attention recognition model according to the target signal may include:
adjusting the output category number of the softmax output layer parameter to a designated value according to the target signal;
and reducing the value of the batch size parameter according to a preset proportion.
Assuming that the initial softmax output layer parameter of the model is 4, if the target domain is classified only by two, the softmax output layer parameter can be adjusted to 2. In addition, since the target domain data set is small, to accommodate this feature, the batchsize parameter of the model may be gradually decreased from large to small to determine the most appropriate parameter value.
In an embodiment of the present application, after the preprocessing operation is performed on the brain wave signals to obtain the target signals, the method may further include:
performing segmentation processing on the target signal according to a specified time length to obtain a plurality of target signal segments;
calculating a time-frequency diagram of a plurality of specified wave bands of the brain wave signal according to the target signal segments;
the identifying the target signal by using the attention identification model after parameter adjustment to obtain the attention identification result of the brain wave signal may specifically be:
and inputting the time-frequency graphs of the plurality of specified wave bands into the attention recognition model after the parameters are adjusted to obtain the attention recognition result of the brain wave signal.
For the preprocessed target signal, the target signal may be segmented according to a certain specified time length to obtain a plurality of signal segments. And then, calculating time-frequency graphs of a plurality of specified wave bands of the brain wave signal according to the signal segments, and using each calculated time-frequency graph as an input feature of an attention recognition model to finish attention recognition.
A second aspect of the embodiments of the present application provides an attention recognition device based on electroencephalogram, including:
the signal acquisition module is used for acquiring brain wave signals to be identified;
the signal preprocessing module is used for preprocessing the brain wave signal to obtain a target signal;
the model parameter adjusting module is used for adjusting parameters of a pre-constructed attention recognition model according to the target signal, wherein the attention recognition model is a convolutional neural network model obtained by adopting a preset source domain data set for training;
and the attention recognition module is used for recognizing the target signal by adopting the attention recognition model after parameter adjustment to obtain an attention recognition result of the brain wave signal.
A third aspect of an embodiment of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the electroencephalogram-based attention recognition method as provided in the first aspect of the embodiment of the present application when executing the computer program.
A fourth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, which when executed by a processor implements the steps of the electroencephalogram-based attention recognition method as provided in the first aspect of embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the steps of the electroencephalogram-based attention recognition method according to the first aspect of the embodiments of the present application.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a basic flow adopted by an electroencephalogram-based attention recognition method provided by an embodiment of the present application;
FIG. 2 is a flowchart of an embodiment of an electroencephalogram-based attention recognition method provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a multi-level wavelet decomposition denoising principle provided in the embodiment of the present application;
fig. 4 is a schematic structural diagram of a residual module included in the attention recognition model according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of an embodiment of an electroencephalogram-based attention recognition device provided by an embodiment of the present application;
fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail. Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
At present, schemes that use brain-computer interface technology for attention recognition mainly comprise invasive electroencephalogram systems, and schemes that collect electroencephalogram signals through an electroencephalogram-cap device and then apply machine learning. However, these solutions suffer from high cost, complex operation, poor practicability, and limited accuracy.
In order to solve the above problems, the present application provides an attention recognition method suitable for portable electroencephalogram devices: electroencephalogram data are collected from the subject's forehead with a portable device such as an electroencephalogram headband, the data are processed, and the attention state is then recognized with a transfer learning method, improving the generalization ability of the model and the accuracy of attention recognition.
It should be understood that the main subjects of the embodiments of the methods of the present application are various types of terminal devices or servers, such as mobile phones, tablet computers, notebook computers, desktop computers, wearable devices, various types of electroencephalogram detection devices, and the like.
Fig. 1 is a basic flow diagram adopted by an electroencephalogram-based attention recognition method provided in an embodiment of the present application, and mainly includes the following key steps:
(1) after the brain wave signals of the testee are collected, data preprocessing operations are carried out on the signals, wherein the data preprocessing operations can comprise operations of denoising, artifact removing, data alignment and the like;
(2) performing segmentation processing on the brain wave signals, namely dividing the brain wave signals subjected to data preprocessing according to a certain specified time (for example, 5 seconds) to obtain a plurality of signal segments;
(3) a lightweight convolutional neural network model is constructed to serve as an attention recognition model, the network structure of the model is simple, the calculated amount can be reduced, and the model is suitable for being adopted by portable electroencephalogram equipment;
(4) pre-training the model by adopting a source domain data set, and storing the trained model;
(5) inputting the brain wave signals subjected to the segmentation processing into the trained model, adjusting model parameters, and completing the migration of the model from a source domain to a target domain;
(6) and identifying the brain wave signal by adopting the model after parameter adjustment to obtain the result of attention identification.
For a more detailed introduction and description of the above steps, reference may be made to the examples described below.
Referring to fig. 2, a method for recognizing attention based on electroencephalogram according to an embodiment of the present application is shown, including:
201. acquiring a brain wave signal to be identified;
first, brain wave signals to be recognized are acquired. Specifically, a portable brain wave acquisition device may be employed to acquire a brain wave signal of the forehead of the brain of the subject as a brain wave signal for which attention recognition is to be performed.
202. Preprocessing the brain wave signal to obtain a target signal;
because signals acquired by electroencephalogram equipment are inevitably mixed with a large number of noise signals, which mainly include self-noise of the equipment, interference of an electro-oculogram signal or an electromyogram signal, and the like, after acquiring a brain wave signal, a series of preprocessing processes such as filtering, noise reduction and the like are firstly performed on the brain wave signal.
In an embodiment of the present application, the pre-processing the brain wave signal to obtain the target signal may include:
(1) performing band-pass filtering processing on the brain wave signals to obtain specified frequency band signals;
(2) denoising the specified frequency band signal by adopting a multi-level wavelet decomposition method;
(3) and executing Euclidean space alignment processing on the denoised specified frequency band signal and the source domain data set to obtain the target signal.
In the step (1), since the frequency band range of the acquired electroencephalogram signal is large and data in many frequencies is useless for attention recognition, a band-pass filter may be provided to process the electroencephalogram signal and obtain a signal in a certain specified frequency band. For example, a 0.5Hz-32Hz bandpass filtering process may be performed to obtain signals within a specified frequency band that are useful for attention recognition.
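As a minimal sketch of this step, the 0.5 Hz–32 Hz band-pass operation can be illustrated with a simple FFT-mask filter (the patent does not specify a filter design, so the FFT approach, sampling rate, and test signal below are assumptions for illustration only):

```python
import numpy as np

def bandpass_fft(signal, fs, low=0.5, high=32.0):
    """Zero out FFT bins outside [low, high] Hz.

    Illustrative only: the patent specifies the 0.5-32 Hz band but not the
    filter design, so a simple frequency-domain mask is used here.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 256                       # assumed sampling rate
t = np.arange(fs * 4) / fs     # 4-second test signal
# 10 Hz alpha-band component (kept) plus 50 Hz mains interference (removed)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
y = bandpass_fft(x, fs)
```

After filtering, only the in-band 10 Hz component survives while the 50 Hz interference is suppressed.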
In the step (2), a multi-level wavelet decomposition method can be adopted to perform denoising processing on a large number of noise signals included in the brain wave signals, so as to reduce noise interference in the brain wave signals. Specifically, the performing denoising processing on the specified frequency band signal by using a multi-level wavelet decomposition method may include:
(2.1) performing low-pass filtering processing on the specified frequency band signal, and sampling the signal subjected to the low-pass filtering processing by adopting a preset first impulse response function to obtain a low-frequency component signal;
(2.2) performing high-pass filtering processing on the specified frequency band signal, and sampling the signal subjected to the high-pass filtering processing by adopting a preset second impulse response function to obtain a high-frequency component signal;
and (2.3) if the low-frequency component signal and the high-frequency component signal are both in a specified frequency range, determining that the de-noising processing of the specified frequency band signal is finished.
For the description of steps (2.1)-(2.3), reference may be made to the multi-level wavelet decomposition denoising schematic shown in fig. 3. A0 in fig. 3 represents the non-denoised signal in the specified frequency band. First, A0 is low-pass filtered, and the low-pass filtered signal is sampled with a preset impulse response function h(n) to obtain a low-frequency component signal A1; in parallel, A0 is high-pass filtered, and the high-pass filtered signal is sampled with a preset impulse response function g(n) to obtain a high-frequency component signal D1. Then, it is judged whether the obtained low-frequency component signal A1 and high-frequency component signal D1 are both within a certain specified frequency range. If so, the denoising of the signal is determined to be finished, and A1 is taken as the denoised signal. If not, A1 is taken as the new initial signal and the same processing as for A0 is repeated until a denoised signal meeting the requirement is obtained.
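One level of this decomposition scheme can be sketched as follows. The patent does not name the wavelet, so Haar filters are assumed here purely for illustration of the low-pass/high-pass split plus downsampling:

```python
import numpy as np

def wavelet_level(a_prev):
    """One decomposition level of the Fig. 3 scheme: convolve the input
    with a low-pass filter h(n) and a high-pass filter g(n), then
    downsample by 2. Haar filters are assumed (the patent does not
    specify the wavelet)."""
    h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass impulse response h(n)
    g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass impulse response g(n)
    low = np.convolve(a_prev, h)[1::2]   # approximation A_{k+1}
    high = np.convolve(a_prev, g)[1::2]  # detail D_{k+1}
    return low, high

a0 = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])  # toy A0 signal
a1, d1 = wavelet_level(a0)  # A1 (low frequency) and D1 (high frequency)
```

If A1 is not yet within the specified frequency range, the same call is applied again with A1 as input, which is exactly the recursion described above. With the orthonormal Haar filters the signal energy is preserved across the split.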
For step (3), after the brain wave signal is denoised, an alignment of the target domain signal with the source domain signal can be performed; the present application adopts Euclidean-space alignment. Model-based transfer learning, which utilizes knowledge learned in the old domain (source domain) to help the model solve problems in the new domain (target domain), rests on the premise that source domain data and target domain data are similarly but not identically distributed. The Euclidean-space alignment technique reduces the difference between the data distributions of the source domain and the target domain, bringing them closer together. The specific alignment operation is as follows:
First, the arithmetic mean of the covariance matrices of all $n$ trials of one subject's electroencephalogram data in the data set is calculated:

$$\bar{R} = \frac{1}{n}\sum_{i=1}^{n} X_i X_i^{T}$$

In the above formula, $X_i$ is the brain wave data collected in the $i$-th trial of the subject. Performing the data alignment operation then yields the aligned data:

$$\tilde{X}_i = \bar{R}^{-1/2} X_i$$

After data alignment, the mean covariance matrix of all $n$ aligned trials is:

$$\frac{1}{n}\sum_{i=1}^{n} \tilde{X}_i \tilde{X}_i^{T} = \bar{R}^{-1/2}\left(\frac{1}{n}\sum_{i=1}^{n} X_i X_i^{T}\right)\bar{R}^{-1/2} = \bar{R}^{-1/2}\,\bar{R}\,\bar{R}^{-1/2} = I$$
it can be seen that the mean covariance matrices of all subjects in the data set are aligned to be equal to the identity matrix, so that the variance distributions of different subjects are more similar, i.e., the data distributions of the source domain and the target domain are closer.
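The alignment step above can be sketched in a few lines (the channel count, trial length, and random test data below are arbitrary stand-ins):

```python
import numpy as np

def euclidean_alignment(trials):
    """Euclidean-space alignment: whiten all trials of one subject by the
    inverse square root of their mean covariance matrix, so that the mean
    covariance of the aligned trials equals the identity matrix."""
    r_bar = np.mean([x @ x.T for x in trials], axis=0)
    # inverse matrix square root via eigendecomposition of the
    # symmetric positive-definite mean covariance
    w, v = np.linalg.eigh(r_bar)
    r_inv_sqrt = v @ np.diag(1.0 / np.sqrt(w)) @ v.T
    return [r_inv_sqrt @ x for x in trials]

rng = np.random.default_rng(0)
trials = [rng.standard_normal((4, 256)) for _ in range(20)]  # 4 channels
aligned = euclidean_alignment(trials)
mean_cov = np.mean([x @ x.T for x in aligned], axis=0)  # should be ~identity
```

Applying the same whitening per subject in both the source and target domains is what brings the two data distributions closer together.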
In an embodiment of the present application, after the preprocessing operation is performed on the brain wave signals to obtain the target signals, the method may further include:
(1) performing segmentation processing on the target signal according to a specified time length to obtain a plurality of target signal segments;
(2) and calculating a time-frequency diagram of a plurality of specified wave bands of the brain wave signal according to the target signal segments.
The identifying the target signal by using the attention identification model after parameter adjustment to obtain the attention identification result of the brain wave signal may specifically be:
and inputting the time-frequency graphs of the plurality of specified wave bands into the attention recognition model after the parameters are adjusted to obtain the attention recognition result of the brain wave signal.
For the preprocessed target signal, the target signal may be segmented according to a certain specified time length to obtain a plurality of signal segments. For example, a plurality of signal segments may be divided in a time length of 5 seconds, and adjacent signal segments may have an overlapping portion of 1 second. And then, calculating time-frequency graphs of a plurality of specified wave bands of the brain wave signal according to the signal segments, and using each calculated time-frequency graph as an input feature of an attention recognition model to finish attention recognition. Specifically, a time-frequency map of each designated band, which may be a band of four rhythms (α, β, θ, and δ waves) divided in the brain wave signal, may be calculated based on three functions cwt (), centfrq (), and scal2frq () in the matlab wavelet toolbox.
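The segmentation with overlap described above can be sketched as follows (the 5 s window and 1 s overlap are the example figures from the text; the sampling rate and recording length are assumptions):

```python
import numpy as np

def segment(signal, fs, win_s=5, overlap_s=1):
    """Split a (channels, samples) array into win_s-second segments in
    which adjacent segments share overlap_s seconds, as in the example."""
    win = win_s * fs
    step = (win_s - overlap_s) * fs
    return [signal[:, i:i + win]
            for i in range(0, signal.shape[1] - win + 1, step)]

fs = 128                                    # assumed sampling rate
x = np.arange(4 * fs * 60).reshape(4, -1)   # 4-channel, 60-second recording
segs = segment(x, fs)                       # 5 s windows, 1 s overlap
```

Each segment is then converted into time-frequency maps of the specified bands and fed to the model as input features.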
203. Adjusting parameters of a pre-constructed attention recognition model according to the target signal, wherein the attention recognition model is a convolutional neural network model obtained by adopting a preset source domain data set for training;
after the target signal is obtained, parameters of a pre-constructed attention recognition model are adjusted according to the target signal so as to complete the migration of the model from the source domain to the target domain.
In one embodiment of the present application, the attention recognition model may be obtained by training:
(1) acquiring the source domain data set, wherein the source domain data set comprises electroencephalogram data with class labels acquired by a plurality of different subjects at different time and in different states;
(2) and training by taking the source domain data set as a training set to obtain a convolutional neural network model containing a depth separable convolutional layer and a residual error module, wherein the convolutional neural network model is used as the attention recognition model.
For the source domain data set, it may contain electroencephalogram data with class labels acquired from a plurality of different subjects at different times and in different states. For example, the SEED fatigue-driving EEG data set of Shanghai Jiao Tong University may be used as source domain data. That data set was collected and labeled with a simulated driving system; EEG and electro-oculogram were recorded during the experiments, and the subjects wore SMI eye-tracking glasses so that eye movements could be recorded as an index of fatigue. The source data set comprises a plurality of sub-data sets; the present application may adopt the EEG_Feature_5Bands data set in Forehead_EEG, which is closest to the applicants' self-collected data. That data set was completed by 25 volunteers and contains only signals from four electrodes on the forehead; the electroencephalogram signals are divided into 5 rhythms by frequency band, each divided into a number of segments. The present application adopts only the four electroencephalogram bands closely related to attention activity: alpha, beta, theta, and delta waves.
Training with the source domain data set yields a convolutional neural network model comprising a depthwise separable convolutional layer and a residual module, which serves as the attention recognition model adopted by the present application. The depthwise separable convolutional layer is a network structure widely applied in lightweight networks and network optimization, and is an improvement on the classical convolutional layer. The residual module combines the input of a preceding layer with its output, so that data can flow across layers while an identity mapping is added, giving the deep network the possibility of behaving like a shallower one. The residual module can thus ease the training process, and the better-fitted function can improve classification accuracy; a structural schematic of the residual module is shown in fig. 4. The present application provides a lightweight convolutional neural network which greatly reduces the computational cost at the price of only a slight reduction in classification accuracy.
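The parameter saving that motivates the depthwise separable design can be shown with a simple count (the 16-channel, 1x16-kernel sizes below are illustrative values matching the scale of Table 1, not figures stated by the patent):

```python
def conv2d_params(c_in, c_out, kh, kw):
    """Parameter count of a standard 2-D convolution (bias omitted)."""
    return c_in * c_out * kh * kw

def depthwise_separable_params(c_in, c_out, kh, kw):
    """Depthwise convolution (one kh x kw filter per input channel)
    followed by a 1x1 pointwise convolution across channels."""
    return c_in * kh * kw + c_in * c_out

# Illustrative sizes (assumed, not from the patent): 16 -> 16 channels,
# 1x16 temporal kernel, as in the scale of Table 1
standard = conv2d_params(16, 16, 1, 16)
separable = depthwise_separable_params(16, 16, 1, 16)
```

Here the separable layer needs 512 parameters against 4096 for the standard convolution, an eight-fold reduction, which is why the resulting network is suitable for portable electroencephalogram devices.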
In one embodiment of the present application, the hierarchy of the trained lightweight convolutional neural network may be as shown in table 1 below:
TABLE 1
Network layer           Structure   Output size
Input                               (C, T)
Conv2D                  (1, 64)     (16, C, T)
BatchNorm                           (16, C, T)
DepthwiseConv2D         (C, 1)      (16, 1, T)
BatchNorm                           (16, 1, T)
Activation              ReLU        (16, 1, T)
AveragePool2D           (1, 4)      (16, 1, T/4)
DepthSeparableConv2D    (1, 16)     (16, 1, T/4)
BatchNorm                           (16, 1, T/4)
Activation              ReLU        (16, 1, T/4)
AveragePool2D           (1, 4)      (16, 1, T/16)
Residual module                     (16, 1, T/16)
Dropout                             (16, 1, T/16)
Softmax                             N
In table 1, C is the number of data channels, T is the number of time points, and N is the number of classes. Input denotes the input layer; Conv2D is a two-dimensional convolutional layer; DepthwiseConv2D is a two-dimensional depthwise convolutional layer; DepthSeparableConv2D is a two-dimensional depthwise separable convolutional layer; BatchNorm is batch normalization, an algorithm often used in deep networks to accelerate training and improve convergence speed and stability; Activation is an activation function; AveragePool2D is a two-dimensional average-pooling layer; Residual module is the residual module; Dropout is a random-discard operation; and Softmax is the N-way classifier.
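The output sizes in Table 1 can be traced symbolically: each average-pooling layer divides the time axis by 4 (so T becomes T/16 overall), the depthwise layer collapses the channel axis, and Softmax emits N class scores. A small sketch (the concrete C, T, N values below are assumed example inputs):

```python
def trace_shapes(C, T, N):
    """Follow the output sizes listed in Table 1; BatchNorm, Activation,
    and Dropout layers are shape-preserving and omitted here."""
    return [("Input", (C, T)),
            ("Conv2D (1,64)", (16, C, T)),
            ("DepthwiseConv2D (C,1)", (16, 1, T)),
            ("AveragePool2D (1,4)", (16, 1, T // 4)),
            ("DepthSeparableConv2D (1,16)", (16, 1, T // 4)),
            ("AveragePool2D (1,4)", (16, 1, T // 16)),
            ("Residual module", (16, 1, T // 16)),
            ("Softmax", (N,))]

# e.g. 4 forehead channels, a 5 s segment at 256 Hz, 4 output classes
shapes = trace_shapes(C=4, T=1280, N=4)
```

This makes it easy to check that a given segment length is divisible by 16 before it reaches the classifier.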
Specifically, for the parameter selection of the attention recognition model, the Softmax layer may output 4 categories and the batchsize parameter may be 256; to ensure that the model can still converge after the batchsize is adjusted, a small learning rate such as 0.01 may be chosen.
Further, adjusting parameters of a pre-constructed attention recognition model according to the target signal may include:
and adjusting the softmax output layer parameter and the batchsize parameter of the attention recognition model according to the target signal.
By adjusting the parameters of the attention recognition model, the model can be adapted to the target data field, thereby improving the accuracy of attention recognition. In model-based transfer learning, parameters of a model can be shared between a source domain and a target domain, and the parameter sharing process is a process in which the model transfers knowledge learned in the source domain to the target domain. Specifically, the shared parameters need to be properly adjusted on the target domain data to adapt to different data characteristics, in the present application, the target domain data is similar to the source domain data, and in order to prevent overfitting, only the adjustment of the softmax output layer parameters and the blocksize parameters is performed, and other model parameters remain unchanged.
Specifically, adjusting the softmax output layer parameter and the batch-size parameter of the attention recognition model according to the target signal may include:
(1) adjusting the output category number of the softmax output layer parameter to a designated value according to the target signal;
(2) and reducing the value of the batch size parameter according to a preset proportion.
According to the characteristics of the target-domain signal, the number of output categories of the model's softmax output layer is adjusted to a designated value, and the value of the model's batch-size parameter is gradually reduced according to a preset proportion. For example, if the initial softmax output layer of the model has 4 categories and the target domain requires only binary classification, the softmax output layer parameter can be adjusted to 2. In addition, since the target-domain data set is small, the batch-size parameter of the model may be gradually decreased from large to small to determine the most suitable value.
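The fine-tuning steps described above can be sketched as follows. This is a minimal, hypothetical illustration in plain NumPy — the patent discloses no code, and the weight shapes, initialisation scale, and schedule floor are all assumptions: the softmax output layer is re-initialised for the new class count while the shared feature layers are left untouched, and the batch size is shrunk by a preset ratio.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adapt_output_layer(w_src, n_target_classes, seed=None):
    """Replace a (d, n_src) softmax weight matrix with a freshly
    initialised (d, n_target) one; shared feature layers are untouched."""
    rng = np.random.default_rng(seed)
    d = w_src.shape[0]
    return rng.normal(0.0, 0.01, size=(d, n_target_classes))

def batchsize_schedule(initial=256, ratio=0.5, floor=16):
    """Shrink the batch size from large to small by a preset ratio,
    stopping at a floor, to search for the most suitable value."""
    sizes, b = [], initial
    while b >= floor:
        sizes.append(b)
        b = int(b * ratio)
    return sizes
```

For example, `batchsize_schedule(256, 0.5, 16)` yields the candidate batch sizes `[256, 128, 64, 32, 16]`; each candidate would be tried while fine-tuning only the new output layer on the target-domain data.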
204. And identifying the target signal by adopting the attention identification model after parameter adjustment to obtain an attention identification result of the brain wave signal.
Finally, the target signal is recognized by the model after parameter adjustment, thereby obtaining the attention recognition result of the brain wave signal. Existing transfer learning methods are mostly applied in the field of image recognition, where knowledge obtained from source-domain training effectively assists the training of a model in the target domain. The present application applies transfer learning to attention recognition from electroencephalogram signals: the model is trained on a public source-domain electroencephalogram data set, and the model parameters are then adjusted so that the model adapts to the target-domain data, thereby improving the accuracy of attention recognition.
The method and the device adopt a preset source domain data set to train to obtain a convolutional neural network model, preprocess brain wave signals to be recognized and input the preprocessed brain wave signals into the model to adjust model parameters, and therefore migration of the model from a source domain to a target domain is achieved. By introducing a transfer learning mechanism, the problem of limited number of electroencephalogram data set samples can be solved, and the accuracy of attention recognition can be improved.
The key points of the electroencephalogram-based attention recognition method are summarized as follows:
(1) The application provides a lightweight convolutional neural network that combines depthwise separable convolutional layers with a residual module. A depthwise separable convolution splits a conventional convolution into two steps, greatly reducing the model's computation, while the residual module allows the network to fit the target function better and obtain a better classification result. These characteristics make the network convenient to deploy on embedded or portable electroencephalogram acquisition equipment.
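The computational saving from the depthwise separable factorisation can be checked with a quick parameter count (a standard illustration, not taken from the patent): a k×k standard convolution needs c_in·c_out·k² weights, while the depthwise-plus-pointwise version needs only c_in·k² + c_in·c_out.

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution mapping c_in -> c_out channels."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Weights in the factorised version: one k x k depthwise filter per
    input channel, followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out
```

For a 16-to-32-channel layer with 3×3 kernels this gives 4608 versus 656 weights, roughly a 7× reduction, which is what makes the network practical on embedded acquisition devices.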
(2) The application provides a transfer learning algorithm suitable for electroencephalogram data classification. The acquired target-domain data is preprocessed and input into the model, and the parameters are fine-tuned to realize the migration of the model. This overcomes the problem of insufficient target-domain data in practice and achieves accurate identification of attention.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
The above mainly describes an electroencephalogram-based attention recognition method; an electroencephalogram-based attention recognition apparatus will be described below.
Referring to fig. 5, an embodiment of an electroencephalogram-based attention recognition apparatus in an embodiment of the present application includes:
a signal acquiring module 501, configured to acquire a brain wave signal to be identified;
a signal preprocessing module 502, configured to perform preprocessing operation on the brain wave signal to obtain a target signal;
a model parameter adjusting module 503, configured to adjust a parameter of a pre-constructed attention recognition model according to the target signal, where the attention recognition model is a convolutional neural network model obtained by training using a preset source domain data set;
an attention recognition module 504, configured to recognize the target signal by using the attention recognition model after parameter adjustment, so as to obtain an attention recognition result of the brain wave signal.
In one embodiment of the present application, the signal preprocessing module may include:
the band-pass filtering unit is used for performing band-pass filtering processing on the brain wave signals to obtain specified frequency band signals;
the signal denoising unit is used for performing denoising processing on the specified frequency band signal by adopting a multi-level wavelet decomposition method;
and the signal alignment unit is used for executing Euclidean space alignment processing on the denoised specified frequency band signal and the source domain data set to obtain the target signal.
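The Euclidean space alignment performed by the signal alignment unit is not spelled out in the patent; the sketch below follows the standard Euclidean-alignment (EA) procedure for EEG transfer learning, as an assumed implementation: each trial is whitened by the inverse square root of the mean spatial covariance, so that source- and target-domain data share a common reference.

```python
import numpy as np

def euclidean_alignment(trials):
    """Euclidean alignment of EEG trials.

    `trials` has shape (n_trials, C, T). Each trial is whitened by the
    inverse square root of the mean spatial covariance, so that after
    alignment the average covariance of the set is the identity matrix."""
    r_mean = np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)
    vals, vecs = np.linalg.eigh(r_mean)            # r_mean is symmetric PSD
    r_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return np.array([r_inv_sqrt @ x for x in trials])
```

Applying the same whitening to the source data set and to the denoised target signal puts both in a common reference space before the model parameters are adjusted.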
Further, the signal denoising unit may include:
the low-pass filtering subunit is used for performing low-pass filtering processing on the specified frequency band signal and sampling the signal subjected to the low-pass filtering processing by adopting a preset first impulse response function to obtain a low-frequency component signal;
the high-pass filtering subunit is used for performing high-pass filtering processing on the specified frequency band signal and sampling the signal subjected to the high-pass filtering processing by adopting a preset second impulse response function to obtain a high-frequency component signal;
and the signal denoising subunit is configured to determine to complete denoising processing of the specified frequency band signal if the low-frequency component signal and the high-frequency component signal are both within a specified frequency range.
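The low-pass/high-pass filter-and-sample decomposition performed by these subunits is the classic one-level discrete wavelet transform. As a hypothetical illustration (the patent does not specify the wavelet or the impulse response functions), a Haar-wavelet version can be written directly in NumPy:

```python
import numpy as np

def haar_dwt_level(signal):
    """One decomposition level: low-pass / high-pass filter the signal and
    downsample by 2, giving the low-frequency (approximation) and
    high-frequency (detail) component signals."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = np.append(s, s[-1])                # pad to even length
    lo = (s[0::2] + s[1::2]) / np.sqrt(2)      # low-pass branch + sampling
    hi = (s[0::2] - s[1::2]) / np.sqrt(2)      # high-pass branch + sampling
    return lo, hi

def multilevel_dwt(signal, levels=3):
    """Multi-level wavelet decomposition: re-decompose the low-frequency
    component at each level, collecting the high-frequency details."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        approx, d = haar_dwt_level(approx)
        details.append(d)
    return approx, details
```

Denoising then amounts to shrinking or discarding detail coefficients outside the frequency range of interest before reconstructing the signal.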
In one embodiment of the present application, the attention recognition device may further include:
the source domain data set acquisition module is used for acquiring the source domain data set, wherein the source domain data set comprises electroencephalogram data with class labels acquired from a plurality of different subjects at different times and in different states;
and the model training module is used for training to obtain a convolutional neural network model containing a depth separable convolutional layer and a residual error module as the attention recognition model by taking the source domain data set as a training set.
In one embodiment of the present application, the model parameter adjustment module may include:
and the target parameter adjusting unit is used for adjusting the softmax output layer parameter and the batch size parameter of the attention recognition model according to the target signal.
Specifically, the target parameter adjusting unit may include:
a first parameter adjusting subunit, configured to adjust the number of output categories of the softmax output layer parameter to a specified value according to the target signal;
and the second parameter adjusting subunit is used for reducing the value of the batch size parameter according to a preset proportion.
In one embodiment of the present application, the attention recognition device may further include:
the signal segmentation module is used for performing segmentation processing on the target signal according to a specified time length to obtain a plurality of target signal segments;
the time-frequency diagram acquisition module is used for calculating and obtaining time-frequency diagrams of a plurality of specified wave bands of the brain wave signals according to the target signal fragments;
the attention identification module may be specifically configured to:
and inputting the time-frequency graphs of the plurality of specified wave bands into the attention recognition model after the parameters are adjusted to obtain the attention recognition result of the brain wave signal.
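The segmentation and time-frequency computation performed by these modules can be sketched as follows. The band definitions (theta, alpha, beta) and the FFT-based power estimate are illustrative assumptions, since the patent does not list its exact wave bands:

```python
import numpy as np

def segment_signal(x, fs, seg_seconds):
    """Split a 1-D signal into consecutive fixed-length segments."""
    n = int(fs * seg_seconds)
    return [x[i:i + n] for i in range(0, len(x) - n + 1, n)]

def band_power_map(segments, fs, bands=None):
    """Per-segment spectral power in each band: a coarse time-frequency
    representation (one power value per band per time segment)."""
    if bands is None:                      # assumed bands, in Hz
        bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
    freqs = np.fft.rfftfreq(len(segments[0]), d=1.0 / fs)
    out = {name: [] for name in bands}
    for seg in segments:
        power = np.abs(np.fft.rfft(seg)) ** 2
        for name, (f_lo, f_hi) in bands.items():
            out[name].append(power[(freqs >= f_lo) & (freqs < f_hi)].sum())
    return {name: np.array(v) for name, v in out.items()}
```

For a 10 Hz test tone the alpha-band power dominates in every segment, which is the kind of band-wise pattern the recognition model receives as input.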
An embodiment of the present application further provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of any one of the electroencephalogram-based attention recognition methods shown in fig. 2.
An embodiment of the present application further provides a computer program product, which, when running on a terminal device, causes the terminal device to execute the steps of implementing any one of the electroencephalogram-based attention recognition methods shown in fig. 2.
Fig. 6 is a schematic diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the various electroencephalogram-based attention recognition method embodiments described above, such as steps 201 to 204 shown in fig. 2. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of modules 501 to 504 shown in fig. 5.
The computer program 62 may be divided into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 62 in the terminal device 6.
The processor 60 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program and other programs and data required by the terminal device. The memory 61 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described system embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An electroencephalogram-based attention recognition method is characterized by comprising the following steps:
acquiring a brain wave signal to be identified;
preprocessing the brain wave signal to obtain a target signal;
adjusting parameters of a pre-constructed attention recognition model according to the target signal, wherein the attention recognition model is a convolutional neural network model obtained by adopting a preset source domain data set for training;
and identifying the target signal by adopting the attention identification model after parameter adjustment to obtain an attention identification result of the brain wave signal.
2. The attention recognition method as claimed in claim 1, wherein the pre-processing operation of the brain wave signals to obtain target signals comprises:
performing band-pass filtering processing on the brain wave signals to obtain specified frequency band signals;
denoising the specified frequency band signal by adopting a multi-level wavelet decomposition method;
and executing Euclidean space alignment processing on the denoised specified frequency band signal and the source domain data set to obtain the target signal.
3. The attention recognition method of claim 2, wherein the denoising processing is performed on the specified band signal by using a multi-level wavelet decomposition method, and comprises:
performing low-pass filtering processing on the specified frequency band signal, and sampling the signal subjected to the low-pass filtering processing by adopting a preset first impulse response function to obtain a low-frequency component signal;
performing high-pass filtering processing on the specified frequency band signal, and sampling the signal subjected to the high-pass filtering processing by adopting a preset second impulse response function to obtain a high-frequency component signal;
and if the low-frequency component signal and the high-frequency component signal are both in a specified frequency range, determining to finish the denoising processing of the specified frequency band signal.
4. The attention recognition method of claim 1, wherein the attention recognition model is obtained by training:
acquiring the source domain data set, wherein the source domain data set comprises electroencephalogram data with class labels acquired from a plurality of different subjects at different times and in different states;
and training by taking the source domain data set as a training set to obtain a convolutional neural network model containing a depth separable convolutional layer and a residual error module, wherein the convolutional neural network model is used as the attention recognition model.
5. The attention recognition method of claim 4, wherein adjusting parameters of a pre-constructed attention recognition model based on the target signal comprises:
and adjusting the softmax output layer parameter and the batchsize parameter of the attention recognition model according to the target signal.
6. The attention recognition method of claim 5, wherein adjusting the softmax output layer parameter and the batchsize parameter of the attention recognition model according to the target signal comprises:
adjusting the output category number of the softmax output layer parameter to a designated value according to the target signal;
and reducing the value of the batch size parameter according to a preset proportion.
7. The attention recognition method as claimed in any one of claims 1 to 6, further comprising, after performing a preprocessing operation on the brain wave signals to obtain target signals:
performing segmentation processing on the target signal according to a specified time length to obtain a plurality of target signal segments;
calculating a time-frequency diagram of a plurality of specified wave bands of the brain wave signal according to the target signal segments;
the recognizing the target signal by using the attention recognition model after the parameter adjustment to obtain the attention recognition result of the brain wave signal specifically comprises:
and inputting the time-frequency graphs of the plurality of specified wave bands into the attention recognition model after the parameters are adjusted to obtain the attention recognition result of the brain wave signal.
8. An attention recognition device based on electroencephalogram, comprising:
the signal acquisition module is used for acquiring brain wave signals to be identified;
the signal preprocessing module is used for preprocessing the brain wave signal to obtain a target signal;
the model parameter adjusting module is used for adjusting parameters of a pre-constructed attention recognition model according to the target signal, wherein the attention recognition model is a convolutional neural network model obtained by adopting a preset source domain data set for training;
and the attention recognition module is used for recognizing the target signal by adopting the attention recognition model after parameter adjustment to obtain an attention recognition result of the brain wave signal.
9. A terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor, when executing said computer program, implements the steps of the electroencephalogram-based attention recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the electroencephalogram-based attention recognition method according to any one of claims 1 to 7.
CN202011472619.4A 2020-12-15 2020-12-15 Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium Pending CN112656431A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011472619.4A CN112656431A (en) 2020-12-15 2020-12-15 Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112656431A true CN112656431A (en) 2021-04-16

Family

ID=75405944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011472619.4A Pending CN112656431A (en) 2020-12-15 2020-12-15 Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112656431A (en)


Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070038382A1 (en) * 2005-08-09 2007-02-15 Barry Keenan Method and system for limiting interference in electroencephalographic signals
CN101049236A (en) * 2007-05-09 2007-10-10 西安电子科技大学 Instant detection system and detection method for state of attention based on interaction between brain and computer
CN101304480A (en) * 2008-06-26 2008-11-12 湖南大学 Method and system for eliminating ghost of television signal based on wavelet preprocessing GCR
CN104887224A (en) * 2015-05-29 2015-09-09 北京航空航天大学 Epileptic feature extraction and automatic identification method based on electroencephalogram signal
CN106295506A (en) * 2016-07-25 2017-01-04 华南理工大学 A kind of age recognition methods based on integrated convolutional neural networks
CN106570516A (en) * 2016-09-06 2017-04-19 国网重庆市电力公司电力科学研究院 Obstacle recognition method using convolution neural network
CN106778820A (en) * 2016-11-25 2017-05-31 北京小米移动软件有限公司 Identification model determines method and device
CN108606798A (en) * 2018-05-10 2018-10-02 东北大学 Contactless atrial fibrillation intelligent checking system based on depth convolution residual error network
CN108670276A (en) * 2018-05-29 2018-10-19 南京邮电大学 Study attention evaluation system based on EEG signals
CN109359539A (en) * 2018-09-17 2019-02-19 中国科学院深圳先进技术研究院 Attention appraisal procedure, device, terminal device and computer readable storage medium
CN109480833A (en) * 2018-08-30 2019-03-19 北京航空航天大学 The pretreatment and recognition methods of epileptic's EEG signals based on artificial intelligence
US20190147335A1 (en) * 2017-11-15 2019-05-16 Uber Technologies, Inc. Continuous Convolution and Fusion in Neural Networks
US20190325203A1 (en) * 2017-01-20 2019-10-24 Intel Corporation Dynamic emotion recognition in unconstrained scenarios
CN110598638A (en) * 2019-09-12 2019-12-20 Oppo广东移动通信有限公司 Model training method, face gender prediction method, device and storage medium
CN110772249A (en) * 2019-11-25 2020-02-11 华南脑控(广东)智能科技有限公司 Attention feature identification method and application
US20200053257A1 (en) * 2019-10-22 2020-02-13 Intel Corporation User detection and user attention detection using multi-zone depth sensing
CN111144453A (en) * 2019-12-11 2020-05-12 中科院计算技术研究所大数据研究院 Method and equipment for constructing multi-model fusion calculation model and method and equipment for identifying website data
CN111222574A (en) * 2020-01-07 2020-06-02 西北工业大学 Ship and civil ship target detection and classification method based on multi-model decision-level fusion
CN111460892A (en) * 2020-03-02 2020-07-28 五邑大学 Electroencephalogram mode classification model training method, classification method and system
WO2020186883A1 (en) * 2019-03-18 2020-09-24 北京市商汤科技开发有限公司 Methods, devices and apparatuses for gaze area detection and neural network training
CN111985396A (en) * 2020-08-20 2020-11-24 南京师范大学 Pregnant woman emotion monitoring and recognition system


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505632A (en) * 2021-05-12 2021-10-15 Hangzhou Huiche Electronic Technology Co., Ltd. Model training method, model training device, electronic device and storage medium
CN113712571A (en) * 2021-06-18 2021-11-30 Shaanxi Normal University Abnormal electroencephalogram signal detection method based on Rényi phase transfer entropy and lightweight convolutional neural network
CN113253850A (en) * 2021-07-05 2021-08-13 Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences Multitask cooperative operation method based on eye movement tracking and electroencephalogram signals
CN113655884A (en) * 2021-08-17 2021-11-16 Hebei Normal University Equipment control method, terminal and system
CN113925509A (en) * 2021-09-09 2022-01-14 Hangzhou Huiche Electronic Technology Co., Ltd. Electroencephalogram signal based attention value calculation method and device and electronic device
CN113925509B (en) * 2021-09-09 2024-01-23 Hangzhou Huiche Electronic Technology Co., Ltd. Attention value calculation method and device based on electroencephalogram signals and electronic device
WO2024074037A1 (en) * 2022-10-08 2024-04-11 Shanghai Qianzhan Innovation Research Institute Co., Ltd. Motor imagery brain-computer interface communication method, apparatus and system, and medium and device
CN116778969A (en) * 2023-06-25 2023-09-19 Shandong Artificial Intelligence Institute Domain-adaptive heart sound classification method based on double-channel cross attention
CN116778969B (en) * 2023-06-25 2024-03-01 Shandong Artificial Intelligence Institute Domain-adaptive heart sound classification method based on double-channel cross attention
CN117520925A (en) * 2024-01-02 2024-02-06 Xiaozhou Technology Co., Ltd. Personalized man-machine interaction method, device, equipment and medium based on electroencephalogram signals
CN117520925B (en) * 2024-01-02 2024-04-16 Xiaozhou Technology Co., Ltd. Personalized man-machine interaction method, device, equipment and medium based on electroencephalogram signals

Similar Documents

Publication Publication Date Title
CN112656431A (en) Electroencephalogram-based attention recognition method and device, terminal equipment and storage medium
Wang et al. Unilateral sensorineural hearing loss identification based on double-density dual-tree complex wavelet transform and multinomial logistic regression
CN109726751B (en) Method for recognizing electroencephalogram based on deep convolutional neural network
Vrbancic et al. Automatic classification of motor impairment neural disorders from EEG signals using deep convolutional neural networks
CN110163180A (en) Mental imagery eeg data classification method and system
CN110353673B (en) Electroencephalogram channel selection method based on standard mutual information
EP4212100A1 (en) Electroencephalogram signal classification method and apparatus, and device, storage medium and program product
CN106709450A (en) Recognition method and system for fingerprint images
CN113191225B (en) Emotion electroencephalogram recognition method and system based on graph attention network
Hsu Application of quantum-behaved particle swarm optimization to motor imagery EEG classification
Taqi et al. Classification and discrimination of focal and non-focal EEG signals based on deep neural network
CN113768519B (en) Method for analyzing consciousness level of patient based on deep learning and resting state electroencephalogram data
CN114578963B (en) Electroencephalogram identity recognition method based on feature visualization and multi-mode fusion
CN113143261B (en) Myoelectric signal-based identity recognition system, method and equipment
CN111671420A (en) Method for extracting features from resting electroencephalogram data and terminal equipment
CN114548166A (en) Electroencephalogram signal heterogeneous tag space migration learning method based on Riemann manifold
Wei et al. Cross-subject EEG channel selection method for lower limb brain-computer interface
CN112438741A (en) Driving state detection method and system based on electroencephalogram feature transfer learning
CN116340825A (en) Method for classifying cross-tested RSVP (respiratory tract protocol) electroencephalogram signals based on transfer learning
CN117058584A (en) Deep learning-based infant spasticity clinical episode video identification method
CN115169384A (en) Electroencephalogram classification model training method, intention identification method, equipment and medium
CN114831652A (en) Electroencephalogram signal processing method based on synchronous compression wavelet transform and MLF-CNN
CN113907722A (en) HHT-based intelligent pulse pathological feature screening, classifying and identifying system and method
Wang et al. EEG Artifact Removal Based on Independent Component Analysis and Outlier Detection
CN112464711A (en) MFDC-based electroencephalogram identity identification method, storage medium and identification device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210416