CN115886840B - Epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network

Epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network

Info

Publication number: CN115886840B
Application number: CN202310147446.6A
Other versions: CN115886840A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 李阳, 严伟栋, 向岩松, 崔渭刚
Original and current assignee: Beihang University
Application filed by Beihang University; priority to CN202310147446.6A
Publication of application CN115886840A; application granted; publication of grant CN115886840B
Legal status: Active


Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention provides an epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network. The method is validated on the publicly available epileptic EEG dataset CHB-MIT: first, the dataset is preprocessed and split into a training set and a test set; second, the model structure of the domain-adversarial multi-level deep convolutional feature fusion network is constructed; the preprocessed training set is then fed into the model, which is trained with the domain-adversarial scheme; finally, the test set is fed into the trained model to evaluate its performance. The advantages of the invention include: a multi-level deep feature extraction module is designed, which overcomes the insufficiency of single-level feature extraction and achieves complementary multi-domain representation of the EEG; a multi-level self-attention feature fusion module is designed, and the prediction accuracy is markedly improved through time-space-frequency feature fusion. The method is verified on CHB-MIT: the average prediction accuracy for a single patient reaches 95.4% and the false prediction rate per hour is below 0.11, outperforming existing state-of-the-art methods.

Description

Epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network
Technical Field
The invention provides an epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network.
Background
Epilepsy is a serious chronic neurological disorder whose sudden, recurrent and high-risk seizures severely affect patients' daily life and health. If the pre-seizure (preictal) period can be predicted accurately and intervention applied in time, the treatment success rate can be improved substantially and the risk to patients greatly reduced; accurate prediction also promotes deeper exploration of the pathogenesis of epilepsy and the study of new neuromodulation methods.
Electroencephalogram (EEG) signals are an important basis for seizure prediction. Existing clinical EEG-based prediction still relies mainly on physicians' empirical judgment, which yields low prediction accuracy; because seizure onset times are uncertain, physicians must observe and judge continuously, so prediction is poorly timed. In recent years, seizure prediction with deep learning has become a research hotspot: by automatically extracting sample features and mapping low-dimensional features into high-dimensional representations, intelligent prediction of epileptic seizures can be realized. However, most convolutional neural network (CNN) prediction methods extract EEG features from a single time domain or frequency domain only, neglect the spatial-domain and non-stationary characteristics of the EEG, and fail to achieve complementary multi-domain feature fusion. To compensate for the limitation of single-feature EEG representations, researchers have proposed multi-stream model structures, but these usually amount to a simple superposition of multi-domain analyses: they lack effective feature fusion, easily produce redundant features, cannot obtain the most discriminative features, and therefore offer limited gains in prediction accuracy. Meanwhile, ensuring the predictive performance of a deep model usually requires a large amount of training data, whereas the EEG data that can be collected from an individual patient in clinical practice are relatively scarce, so models trained on small samples are prone to overfitting. Some researchers have tried to enlarge the data volume through data augmentation and data generation, but these methods often increase the model's sensitivity to noise and still struggle to reach high prediction accuracy.
Disclosure of Invention
The invention provides an epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network, which aims to solve the problems of existing deep-learning-based seizure prediction methods: the lack of multi-domain EEG feature extraction and the low prediction accuracy. High-accuracy epilepsy prediction is realized through four steps: data processing, model construction, model training and model testing. The method is validated on the internationally published epileptic EEG dataset CHB-MIT; the average prediction accuracy for a single patient reaches 95.4% and the false prediction rate per hour is below 0.11, which is superior to existing state-of-the-art methods.
To solve the problems in the prior art, the invention provides an epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network. The inventors verify the effectiveness of the method on the internationally published epileptic EEG dataset CHB-MIT. After preprocessing, the CHB-MIT dataset is divided into source-domain data and target-domain data; the source-domain data together with part of the target-domain data form the training set, and the remaining target-domain data serve as the test set. These sets are used as input during model training, and the model finally outputs the inferred pre-seizure duration and the pre-seizure prediction indices, from which the seizure prediction results are compared.
According to one embodiment of the invention, the epilepsy prediction method based on the domain-adversarial multi-level deep convolutional feature fusion network comprises the following steps:
Step 1: acquire the internationally published epileptic EEG dataset CHB-MIT, preprocess it, and divide it into source-domain data and target-domain data; use the source-domain data and part of the target-domain data as the training set and the remaining target-domain data as the test set;
Step 2: construct the model structure of the domain-adversarial multi-level deep convolutional feature fusion network using the PyTorch deep learning framework;
Step 3: feed the training set established in Step 1 into the model, train it with the domain-adversarial method, and fine-tune the model parameters with the target-domain training data to generate the final trained model;
Step 4: feed the test-set EEG data preprocessed in Step 1 into the final trained model to obtain the inferred pre-seizure duration and the pre-seizure prediction indices.
In Step 1, data preprocessing comprises EEG channel selection, baseline correction, artifact removal, low-pass filtering and sliding-window segmentation, where the sliding window divides the continuous EEG signal into non-overlapping segments of length 5 s.
In Step 2, the constructed domain-adversarial multi-level deep convolutional feature fusion network comprises: a multi-level deep feature extraction module, a multi-level feature self-attention fusion module, and a domain-discrimination and seizure-prediction module. The multi-level deep feature extraction module is connected in series with the multi-level feature self-attention fusion module, and the fusion result is fed into the domain-discrimination and seizure-prediction module. The multi-level deep feature extraction module extracts deep time-domain, frequency-domain and spatial-domain features from the EEG signal; the multi-level feature self-attention fusion module deeply fuses the time-space and frequency-space features and extracts the time-space-frequency fusion features used for classification; the domain-discrimination and seizure-prediction module outputs the classification result of the fusion features, computing the probabilities that an input EEG segment comes from the source domain or the target domain and the probabilities that it belongs to the pre-seizure (preictal) period or the inter-seizure (interictal) period.
In Step 3, when the model is trained with the domain-adversarial method, the error between the output of the domain-discrimination and seizure-prediction module and the labels is computed with a cross-entropy function, and the model parameters are updated iteratively by error back-propagation and gradient descent. The training batch size is set to 256, the learning rate to 0.0001, the Adam optimizer is used for parameter optimization, the loss function is the cross-entropy loss, and the model parameters are saved after 160 training iterations.
In Step 3, when the target-domain training data are used to fine-tune the model parameters, the parameters of the multi-level deep feature extraction module and the multi-level feature self-attention fusion module are frozen, the error between the output of the domain-discrimination and seizure-prediction module and the labels is computed with the cross-entropy function, and the error is back-propagated to fine-tune the parameters of the domain-discrimination and seizure-prediction module.
In Step 4, when inferring the pre-seizure duration, a generalized learning strategy is adopted to automatically and iteratively derive the personalized pre-seizure duration.
In Step 4, when computing the pre-seizure prediction indices, the continuous EEG data and labels of the test set are fed into the trained model in chronological order to obtain the continuous prediction curve output by the model, and the seizure prediction performance is evaluated with the area under the ROC curve (AUC), the prediction accuracy (sensitivity, Sn) and the false prediction rate per hour (FPR).
The main advantages of the epilepsy prediction method based on the domain-adversarial multi-level deep convolutional feature fusion network are:
1. The multi-level deep feature extraction module designed by the invention solves the problem that a single level extracts insufficient EEG information, and realizes complementary time-domain, frequency-domain and spatial-domain features of the EEG;
2. The multi-level self-attention feature fusion module provides the most discriminative classification features for epilepsy prediction through time-space, frequency-space and time-space-frequency feature fusion, markedly improving the prediction accuracy: the average prediction accuracy for a single patient reaches 95.4% and the false prediction rate per hour is below 0.11.
Drawings
Fig. 1 is a flow chart of an epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network according to one embodiment of the present invention.
Fig. 2 is a network structure diagram of the multi-level deep feature extraction module according to one embodiment of the invention.
Fig. 3 is a schematic diagram of the multi-level feature self-attention fusion module according to one embodiment of the invention.
Fig. 4 is a flow chart of domain-adversarial training according to one embodiment of the invention.
Detailed Description
In an epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network according to one embodiment of the invention, the multi-level deep feature extraction module, the multi-level feature self-attention fusion module and the domain-discrimination and seizure-prediction module are designed to perform deep extraction and fusion of multi-domain EEG features; the model is trained and its parameters fine-tuned with the domain-adversarial learning method; and, combined with a generalized learning strategy, the personalized pre-seizure duration is derived automatically and iteratively to finally obtain the epilepsy prediction result.
The epilepsy prediction method based on the domain-adversarial multi-level deep convolutional feature fusion network provided by the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The overall flow of the method according to an embodiment of the present invention is shown in Fig. 1 and comprises:
Step S1: preprocess the CHB-MIT dataset and divide it into source-domain data and target-domain data; use the source-domain data and part of the target-domain data as the training set and the remaining target-domain data as the test set.
According to a specific embodiment, the CHB-MIT dataset is an internationally published epileptic EEG dataset comprising 23 case records from 22 subjects (5 males aged 3-22 years; 17 females aged 1.5-19 years). Following the internationally accepted data-partitioning convention, the period from 1.5 hours before each seizure to 15 minutes before the seizure is taken as the pre-seizure (preictal) period, and the period from 2 hours after the previous seizure to 1.5 hours before the next seizure is taken as the inter-seizure (interictal) period. To avoid the influence of one seizure on the next, only data with seizure intervals exceeding two hours are used. To ensure that the performance of the model can be evaluated for each patient with a leave-one-out method, only the data of patients containing 3 qualified seizures (19 patients in total) are used in one embodiment of the present invention.
The method specifically comprises the following steps:
step S1.1: channel selection, comprising: selecting CHB-MIT epileptic electroencephalogram as data input, wherein to ensure data quality, part of damaged channels and repeated channels are removed, and 'FP1-F7','F7-T7','T7-P7','P7-O1','FP1-F3','F3-C3','C3-P3','P3-O1','FP2-F4','F4-C4','C4-P4','P4-O2','FP2-F8','F8-T8','T8-P8-0','P8-O2','FZ-CZ','CZ-PZ','P7-T7','T7-FT9','FT9-FT10','FT10-T8' channels, namely 22 channels, are reserved as input data of a model;
step S1.2: calibrating the baseline, removing artifacts, comprising:
In order to avoid baseline drift phenomenon existing in long-time acquisition of brain electricity, an empirical mode decomposition algorithm is adopted to calibrate a baseline, and meanwhile, a direct removal method is adopted to remove noise and artifacts in the brain electricity signals;
Step S1.3: filtering at 0-64Hz, including: adopting 0-64Hz low-pass filtering to remove high-frequency interference of the brain electrical signals;
Step S1.4: dividing the data segments using sliding windows, comprising: dividing the original brain electrical signal by a sliding window, wherein the length of the sliding window is set to be 5s, the sliding step length is set to be 2.5s, dividing the epileptic signal and the non-epileptic signal into matrix data segments containing 28160 sampling points, namely 22 x 5s x 256Hz,
In a specific embodiment, the brain electricity of 19 epileptic patients is selected as data input, the data comprises 98 epileptic seizure brain electricity signals, the total signal duration is 451.56 hours, the total signal duration of the epileptic seizure is 23.78 hours, the number of test samples is 17121, the total signal duration of the non-epileptic seizure is 427.78 hours, and the number of test samples is 308000. The results of the pretreatment of each patient data are shown in table 1.
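As an illustration of the sliding-window segmentation of step S1.4, the Python sketch below cuts a preprocessed 22-channel recording into 5 s windows with a 2.5 s step; the array and function names, and the assumption that the input is already filtered and baseline-corrected, are illustrative and not taken from the patent.

```python
import numpy as np

FS = 256                     # sampling rate of CHB-MIT (Hz)
WIN_S, STEP_S = 5.0, 2.5     # window length and sliding step in seconds

def segment_recording(eeg: np.ndarray) -> np.ndarray:
    """Cut a (22, n_samples) EEG array into (n_segments, 22, 1280) windows.

    Each window covers 5 s (22 x 5 s x 256 Hz = 28160 sampling points) and
    consecutive windows start 2.5 s apart, as in step S1.4.
    """
    win, step = int(WIN_S * FS), int(STEP_S * FS)
    _, n_samples = eeg.shape
    starts = range(0, n_samples - win + 1, step)
    return np.stack([eeg[:, s:s + win] for s in starts])

# Example: 10 minutes of synthetic 22-channel EEG
dummy = np.random.randn(22, 10 * 60 * FS).astype(np.float32)
segments = segment_recording(dummy)
print(segments.shape)        # (n_segments, 22, 1280)
```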
Step S1.5: constructing a training set and a testing set, comprising: s1.5.1: when analyzing the patient i, all data of the patient i are divided into target domain data, all data except the patient i are divided into source domain data, the source domain data and part of data in the target domain are used as training sets, and the other part of data in the target domain is used as a test set.
S1.5.2: the procedure of S1.5.1 was repeated N times, where N is the number of i seizures in the patient. Wherein at the jth repetition, continuous data of the jth complete seizure of patient i (including preseizure and inter-seizure intervals) is taken as test data, and the remaining data and source domain data are taken together as training data. After repeating N times, each episode was tested as test data, and the average of the N episodes was taken as the result of the invention on patient i, i.e. the result in table 2 was the average of N.
Table 1 results after pretreatment of each patient data
Step S2: building a model structure of a domain countermeasure multi-level deep convolution feature fusion network, comprising:
Using Pytorch deep learning frame function library in Python to establish domain countermeasure multi-level deep convolution feature fusion network, which comprises a multi-level deep feature extraction module, a multi-level feature self-attention fusion module and a domain discrimination attack prediction module, wherein the multi-level deep feature extraction module and the multi-level feature self-attention fusion module are connected in series, and the feature fusion result is input into the domain discrimination attack prediction module, and the method specifically comprises the following steps:
(a) A multi-level depth feature extraction module for
The structure of the method is shown in fig. 2, and the multi-level depth feature extraction module comprises the following steps:
a time-domain convolution layer, which uses convolution kernels of different sizes and different convolution strides to extract time-domain features of the EEG signal at different scales. One time-domain convolution layer can be expressed as:
x_t = ReLU(Conv(x))   (1)
where Conv denotes convolution and ReLU the linear rectification (rectified linear unit) function. To further extract time-domain features, time-domain convolutions with 5 different kernels are used to extract features at different scales, with the parameters set as follows: the kernel sizes are 1×4, 1×8, 1×16, 1×32 and 1×32, the stride of each layer matches its kernel size, and no padding is used. Through the time-domain convolution layers, 5 time-domain feature maps x_1, x_2, x_3, x_4 and x_5 are obtained, of sizes 22×320, 22×160, 22×80, 22×40 and 22×40, respectively;
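A minimal PyTorch sketch of the multi-scale time-domain branch described above: five convolutions whose kernel size equals their stride (1×4, 1×8, 1×16, 1×32, 1×32) applied to a 22×1280 segment, yielding feature maps of widths 320, 160, 80, 40 and 40. Treating the convolutions as depthwise (one filter per EEG channel) is an assumption made here so that the outputs keep the 22-row shape stated in the text; the module name is likewise illustrative.

```python
import torch
import torch.nn as nn

class TemporalBranch(nn.Module):
    """Multi-scale time-domain convolutions (Eq. (1)): x_t = ReLU(Conv(x))."""
    def __init__(self, n_channels: int = 22):
        super().__init__()
        # kernel size == stride, no padding, depthwise so each EEG channel
        # keeps its own row, matching the 22 x L feature-map sizes in the text
        self.convs = nn.ModuleList([
            nn.Conv1d(n_channels, n_channels, kernel_size=k, stride=k,
                      groups=n_channels, bias=False)
            for k in (4, 8, 16, 32, 32)
        ])
        self.act = nn.ReLU()

    def forward(self, x):                  # x: (batch, 22, 1280)
        return [self.act(conv(x)) for conv in self.convs]

feats = TemporalBranch()(torch.randn(2, 22, 1280))
print([f.shape[-1] for f in feats])        # [320, 160, 80, 40, 40]
```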
a frequency-domain convolution layer, which extracts the spectral features of the EEG signal using wavelet decomposition, wherein:
the Daubechies 4th-order wavelet (db4) is selected as the wavelet decomposition basis. The signal is fed into the frequency-domain convolution layer, whose output consists of two parts: the even-indexed outputs form the low-frequency signal and the odd-indexed outputs the high-frequency signal.
One frequency-domain convolution layer can be expressed as:
x_L(t) = Conv(x)(2t)     (2)
x_H(t) = Conv(x)(2t+1)   (3)
where x_L and x_H denote the low-frequency and high-frequency signals of the wavelet decomposition, respectively. The frequency-domain convolution splits the signal by frequency into a low-frequency approximation signal and a high-frequency detail signal; the low-frequency signal is fed into the frequency-domain convolution layer again to obtain a decomposition at lower frequencies.
The data sampling frequency is 256 Hz, so the highest frequency contained in the signal is 128 Hz. To extract the 5 corresponding clinical frequency-band signals, namely the delta band (0-4 Hz), theta band (4-8 Hz), alpha band (8-12 Hz), beta band (13-30 Hz) and gamma band (30-50 Hz), the decomposition must reach the lowest 0-4 Hz band, so 5 wavelet convolution layers are used. The parameters are set as follows: the kernel size is 1×8 and the stride of each layer is 1×2.
Through the frequency-domain convolution layers, 5 frequency-domain feature maps x_γ, x_β, x_α, x_θ and x_δ are obtained, of sizes 22×320, 22×160, 22×80, 22×40 and 22×40, respectively;
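To illustrate the cascaded decomposition of Eqs. (2)-(3), the sketch below uses PyWavelets to perform a 5-level db4 decomposition of each channel and maps the coefficient sets to the five clinical bands; in the patent the same operation is realized as strided 1×8 convolutions, so this is only a functional stand-in, and the band-to-coefficient mapping and function names are assumptions.

```python
import numpy as np
import pywt

BANDS = ["gamma", "beta", "alpha", "theta", "delta"]

def wavelet_bands(segment, wavelet="db4", level=5):
    """Decompose a (22, 1280) EEG segment into 5 band-limited coefficient maps.

    pywt.wavedec returns [cA5, cD5, cD4, cD3, cD2, cD1]; here cD2..cD5 and cA5
    are mapped to the gamma/beta/alpha/theta/delta bands (an assumed mapping).
    """
    cA5, cD5, cD4, cD3, cD2, _cD1 = pywt.wavedec(segment, wavelet,
                                                 level=level, axis=-1)
    return dict(zip(BANDS, [cD2, cD3, cD4, cD5, cA5]))

bands = wavelet_bands(np.random.randn(22, 1280))
# coefficient lengths are close to (but not exactly) 320/160/80/40/40 because of
# wavelet boundary handling; the patent's strided-convolution layers avoid this
print({k: v.shape for k, v in bands.items()})
```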
a spatial-domain convolution layer, which first computes the Pearson correlation coefficients of the signals between channels to obtain an initial inter-channel connection matrix A_0 of size 22×22. For variables X and Y the Pearson correlation coefficient is computed as:
Pearson(X, Y) = (E[XY] − E[X]E[Y]) / ( sqrt(E[X²] − E[X]²) · sqrt(E[Y²] − E[Y]²) )   (4)
where E(·) denotes the expected value of a variable.
The Pearson correlation coefficient A_0(i, j) between channel i and channel j is computed as:
A_0(i, j) = A_0(j, i) = Pearson(X_i, X_j)   (5)
where X_i and X_j denote the EEG signals acquired on channels i and j, and A_0(i, j) is the element in row i and column j of the matrix A_0.
To filter out useless spatial information in the initial inter-channel connection matrix A_0 and amplify the important spatial information, an attention mechanism is used to optimize the connection matrix: first, the original two-dimensional connection matrix is flattened into a one-dimensional connection vector; this vector is then compressed by a first linear fully connected (FC) layer and activated with the nonlinear ReLU function, so that useless information is filtered out by the compression and only useful information is retained; a second linear FC layer restores the original number of elements, yielding channel weights according to importance; finally, a Tanh activation assigns weights to the different channels, so that the model attends more to the features of channels carrying much information and suppresses the features of unimportant channels. The calculation is:
A_c = Tanh(FC(ReLU(FC(A_0))))   (6)
where A_c is the channel spatial-domain feature matrix, of size 22×22;
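The spatial branch can be sketched as follows: the Pearson connection matrix A_0 of Eqs. (4)-(5) is computed with numpy, and the FC-ReLU-FC-Tanh attention of Eq. (6) is a small PyTorch block. The compression ratio of the first FC layer is an assumption, since the patent does not state it.

```python
import numpy as np
import torch
import torch.nn as nn

def connection_matrix(segment: np.ndarray) -> np.ndarray:
    """A_0(i, j) = Pearson(X_i, X_j) for a (22, n_samples) segment (Eqs. (4)-(5))."""
    return np.corrcoef(segment)            # symmetric 22 x 22 matrix

class ChannelAttention(nn.Module):
    """A_c = Tanh(FC(ReLU(FC(A_0)))), Eq. (6): flatten, compress, restore, weight."""
    def __init__(self, n_channels: int = 22, reduction: int = 4):
        super().__init__()
        n = n_channels * n_channels
        self.fc1 = nn.Linear(n, n // reduction)   # compression ratio is assumed
        self.fc2 = nn.Linear(n // reduction, n)
        self.act = nn.ReLU()
        self.out = nn.Tanh()
        self.n_channels = n_channels

    def forward(self, a0: torch.Tensor) -> torch.Tensor:   # a0: (batch, 22, 22)
        flat = a0.flatten(1)                                # 2-D matrix -> 1-D vector
        ac = self.out(self.fc2(self.act(self.fc1(flat))))
        return ac.view(-1, self.n_channels, self.n_channels)

a0 = torch.tensor(connection_matrix(np.random.randn(22, 1280)),
                  dtype=torch.float32).unsqueeze(0)
print(ChannelAttention()(a0).shape)         # torch.Size([1, 22, 22])
```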
(b) Multi-level feature self-attention fusion module, used to fuse the time-domain and spatial-domain information into time-space features, fuse the frequency-domain and spatial-domain information into frequency-space features, and finally realize the time-space-frequency feature fusion. Its structure is shown in Fig. 3. Specifically:
first, the multi-scale time-domain features and the multi-band frequency-domain features obtained above are concatenated channel-wise:
x_tem = Concat(x_1, x_2, x_3, x_4, x_5),  x_fre = Concat(x_γ, x_β, x_α, x_θ, x_δ)
where Concat(·) denotes concatenation of the matrices along the EEG-channel dimension. After the concatenation, the time-domain feature x_tem and the frequency-domain feature x_fre are obtained, both of size 22×640. The channel spatial-domain feature matrix is then multiplied with the time-domain feature and with the frequency-domain feature, respectively, to obtain the fused time-space and frequency-space features:
x_ts = A_c · x_tem,  x_fs = A_c · x_fre
After fusion, x_ts and x_fs keep the size 22×640;
According to one embodiment of the invention, a multi-level feature self-attention fusion method is used to deeply fuse the time-domain and frequency-domain features while taking the correlation between different feature points into account. The time-space and frequency-space features are each convolved (kernel size 22×1, stride 22×1) and the results are multiplied with transposition to obtain the cross-domain attention matrix A_att of size 640×640:
A_att = Softmax( Conv(x_ts)ᵀ · Conv(x_fs) )
where Softmax is a normalization function used to normalize the output and ᵀ denotes the transpose of a feature matrix. To reduce the mutual interference between features of different frequency bands/different scales and to reduce the computational cost of the model, the attention matrix is sparsified: only the diagonal blocks of sizes 40×40, 80×80, 160×160 and 320×320 along the main diagonal are retained, and all other elements of the matrix are set to zero.
After the attention matrix is obtained, the time-domain and frequency-domain features are fused across domains to obtain the time-space-frequency fusion feature:
x_fus = [x_ts ; x_fs] · (A_att + I)
where x_fus is the fusion feature of size 44×640, [· ; ·] denotes concatenation along the channel dimension, and I is the identity matrix;
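The following sketch shows one way to realize the cross-domain attention and fusion described above, under the assumptions spelled out in the reconstruction: A_att = Softmax(Conv(x_ts)ᵀ·Conv(x_fs)), block-diagonal sparsification, and x_fus = [x_ts; x_fs]·(A_att + I). The 22→1 convolution stands in for the 22×1 kernel, a second 40×40 diagonal block is assumed so that the blocks cover all 640 columns, and none of the names come from the patent.

```python
import torch
import torch.nn.functional as F

def block_diagonal_mask(block_sizes, device=None):
    """Boolean mask keeping only the diagonal blocks of the given sizes."""
    n = sum(block_sizes)
    mask = torch.zeros(n, n, dtype=torch.bool, device=device)
    start = 0
    for b in block_sizes:
        mask[start:start + b, start:start + b] = True
        start += b
    return mask

def cross_domain_fusion(x_ts, x_fs, conv_ts, conv_fs,
                        block_sizes=(320, 160, 80, 40, 40)):
    """Sketch: A_att = Softmax(Conv(x_ts)^T @ Conv(x_fs)), sparsified to diagonal
    blocks, then x_fus = [x_ts; x_fs] @ (A_att + I), giving a (batch, 44, 640) map."""
    q = conv_ts(x_ts)                                   # (batch, 1, 640)
    k = conv_fs(x_fs)                                   # (batch, 1, 640)
    att = F.softmax(q.transpose(1, 2) @ k, dim=-1)      # (batch, 640, 640)
    att = att * block_diagonal_mask(block_sizes, att.device)   # sparsification
    eye = torch.eye(att.shape[-1], device=att.device)
    x_cat = torch.cat([x_ts, x_fs], dim=1)              # (batch, 44, 640)
    return x_cat @ (att + eye)

# the 22 -> 1 convolutions stand in for the 22x1 kernels mentioned in the text
conv_ts = torch.nn.Conv1d(22, 1, kernel_size=1)
conv_fs = torch.nn.Conv1d(22, 1, kernel_size=1)
x_ts, x_fs = torch.randn(2, 22, 640), torch.randn(2, 22, 640)
print(cross_domain_fusion(x_ts, x_fs, conv_ts, conv_fs).shape)   # (2, 44, 640)
```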
For x_fus, self-attention blocks are further used to deeply fuse the time-domain and frequency-domain features. Specifically, the fusion feature x_fus is refined with a self-attention mechanism; the multi-head self-attention module computes:
x_mu = Softmax( Q·Kᵀ / sqrt(d) ) · V
where Q, K and V are the query, key and value projections of x_fus, x_mu is the feature map output by the multi-head attention module, and d is the hyper-parameter controlling the scaling.
Finally, a residual structure with layer normalization is used to fuse the features output by the different self-attention modules, where LN(·) denotes layer normalization, which normalizes the input to reduce the influence of the input amplitude on the model and to reduce overfitting, and D(·) is the standard-deviation function of a variable. After the deep fusion, the finally output deep fusion feature, denoted x_out, has size 2×192;
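A compact sketch of the self-attention refinement stage, using PyTorch's nn.MultiheadAttention as a stand-in for the multi-head block x_mu = Softmax(QKᵀ/√d)·V, followed by a residual connection with layer normalization. The number of heads and the pooling/projection used to reach the stated 2×192 output size are assumptions, since the patent only gives the final size.

```python
import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    """Refine x_fus (batch, 44, 640) with multi-head self-attention, a residual
    connection and layer normalization, then project to the stated 2 x 192 size."""
    def __init__(self, dim: int = 640, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # the reduction to 2 x 192 is an assumed pooling + linear projection
        self.pool = nn.AdaptiveAvgPool1d(2)        # 44 rows -> 2 rows (via transpose)
        self.proj = nn.Linear(dim, 192)

    def forward(self, x_fus):                      # (batch, 44, 640)
        x_mu, _ = self.attn(x_fus, x_fus, x_fus)   # multi-head self-attention
        x = self.norm(x_fus + x_mu)                # residual + LayerNorm
        x = self.pool(x.transpose(1, 2)).transpose(1, 2)   # (batch, 2, 640)
        return self.proj(x)                        # (batch, 2, 192)

print(SelfAttentionFusion()(torch.randn(2, 44, 640)).shape)   # (2, 2, 192)
```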
(c) Domain-discrimination and seizure-prediction module, comprising:
a seizure-prediction layer, which, after the most discriminative deep fused time-space-frequency features are obtained, classifies them with two fully connected layers:
out_pre = Softmax( FC( FC(x_out) ) )
where out_pre is a 2-dimensional vector whose components represent the probabilities that the input signal belongs to the pre-seizure and the inter-seizure period. Through the seizure-prediction layer, the input signal can be classified into pre-seizure or inter-seizure data on the basis of the deep fused time-space-frequency features;
a domain-discrimination layer, which classifies the deep fusion features coming from the different domains with two fully connected layers:
out_dom = Softmax( FC( GELU( FC(x_out) ) ) )
where out_dom is a 2-dimensional vector whose components represent the probabilities that the input signal comes from the source domain and the target domain, and GELU is the Gaussian error linear unit used as activation function. Through the domain-discrimination layer, whether the input signal comes from the source domain or the target domain can be accurately distinguished on the basis of the deep fused time-space-frequency features;
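A minimal sketch of the two classification heads: a seizure-prediction head with two FC layers and a domain-discrimination head with two FC layers and a GELU activation, each emitting a 2-dimensional probability vector. The hidden width and the Softmax at the output are assumptions consistent with the cross-entropy losses used in Step S3.

```python
import torch
import torch.nn as nn

class PredictionHeads(nn.Module):
    """Seizure-prediction and domain-discrimination heads on the fused feature."""
    def __init__(self, in_dim: int = 2 * 192, hidden: int = 64):
        super().__init__()
        self.seizure_head = nn.Sequential(         # out_pre: preictal vs interictal
            nn.Linear(in_dim, hidden), nn.Linear(hidden, 2))
        self.domain_head = nn.Sequential(          # out_dom: source vs target domain
            nn.Linear(in_dim, hidden), nn.GELU(), nn.Linear(hidden, 2))

    def forward(self, fused):                      # fused: (batch, 2, 192)
        flat = fused.flatten(1)
        out_pre = self.seizure_head(flat).softmax(dim=-1)
        out_dom = self.domain_head(flat).softmax(dim=-1)
        return out_pre, out_dom

out_pre, out_dom = PredictionHeads()(torch.randn(4, 2, 192))
print(out_pre.shape, out_dom.shape)                # torch.Size([4, 2]) twice
```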
Step S3: inputting the training set data established in the step S1 into a model, training by a domain countermeasure method, and fine-tuning model parameters by using target domain training data to generate a final training model, wherein the method comprises the following steps:
The classification error of the model classification is calculated using the source domain data, and the classification loss is calculated using a Cross entropy loss function (Cross-Entropy, CE) as follows:
Where x s represents the source domain sample, y i represents the label of the source domain data, the pre-seizure data is 1, the inter-seizure data is 0, and p i represents the probability that the sample is predicted to be positive. L cls is mainly used for constraining model parameters so that the model classifies pre-seizure data and inter-seizure data as accurately as possible;
Calculating the distribution error of the characteristics by using the source domain data and the target domain data, and calculating the classification loss by adopting a cross entropy function, wherein the calculation formula is as follows:
Where x s+xt represents source and target domain data, y i represents a domain label of the data, if the data is from the source domain, the label is 1, if the data is from the target domain, the label is 0, and p i represents a probability that the sample is predicted to be from the source domain. L dom is mainly used for constraining feature extraction and fusing network parameters, so that the distribution of the features extracted by the model on a source domain and a target domain is as close as possible;
The training flow of the domain-adversarial method is shown in Fig. 4 and specifically comprises:
Step S3.1: freeze the domain-discrimination layer parameters, compute the loss function L = L_cls + GRL(L_dom), and update the parameters of the multi-level deep feature extraction module, the multi-level feature self-attention fusion module and the domain-discrimination and seizure-prediction module by back-propagation. Here L_cls measures whether the model classifies pre-seizure and inter-seizure data accurately, and L_dom measures whether the time-space-frequency fusion features are distributed uniformly over the source domain and the target domain: the larger the distribution difference, the smaller L_dom. To make the fusion features distributed as closely as possible over the source and target domains, L_dom should be as large as possible while L_cls is as small as possible; a gradient reversal layer (GRL) is therefore used so that the gradient-descent direction of L_dom is consistent with that of L_cls (see the sketch after Step S3.3).
Step S3.2: freeze the parameters of the multi-level deep feature extraction module, the multi-level feature self-attention fusion module and the seizure-prediction layer, compute the loss function L = L_dom, and update the domain-discrimination layer parameters by back-propagation, so that the domain-discrimination layer distinguishes features from the source domain and from the target domain more accurately.
Step S3.3: if the classification accuracy of the epilepsy prediction no longer increases and the classification accuracy of the domain-discrimination layer is above 50% and no longer increases, training ends; otherwise Steps S3.1 and S3.2 are repeated. Step S3.1 pulls the fusion-feature distributions together so that the domain discriminator cannot separate source-domain and target-domain data, while Step S3.2 makes the domain-discrimination layer discriminate the fusion features on the source and target domains accurately; through this adversarial interplay, the distributions of the extracted fusion features on the source and target domains become ever closer.
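The alternation of Steps S3.1-S3.3 can be sketched as follows. The gradient reversal layer (GRL) is the identity in the forward pass and flips the sign of the gradient in the backward pass, so that minimizing L = L_cls + GRL(L_dom) drives the feature extractor to confuse the domain head while the seizure head stays accurate. The optimizer settings follow the text (Adam, learning rate 0.0001); `feature_net` and a `heads` object with `seizure_head`/`domain_head` sub-modules (as in the head sketch above) are assumed, as is the use of one optimizer per phase.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass, gradient sign flip in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)

def adversarial_round(feature_net, heads, loader, device="cpu"):
    """One pass over the data implementing the alternation of Steps S3.1-S3.2."""
    ce = torch.nn.CrossEntropyLoss()
    opt_feat = torch.optim.Adam(list(feature_net.parameters())
                                + list(heads.seizure_head.parameters()), lr=1e-4)
    opt_dom = torch.optim.Adam(heads.domain_head.parameters(), lr=1e-4)
    for x, y_seizure, y_domain in loader:   # labels: preictal/interictal, source/target
        x = x.to(device)
        y_seizure, y_domain = y_seizure.to(device), y_domain.to(device)
        # Step S3.1: domain head not updated; feature extractor + seizure head
        # trained on L = L_cls + GRL(L_dom)
        feat = feature_net(x).flatten(1)
        loss = ce(heads.seizure_head(feat), y_seizure) \
             + ce(heads.domain_head(grl(feat)), y_domain)
        opt_feat.zero_grad(); loss.backward(); opt_feat.step()
        # Step S3.2: feature extractor frozen; domain head trained on L = L_dom
        with torch.no_grad():
            feat = feature_net(x).flatten(1)
        loss_dom = ce(heads.domain_head(feat), y_domain)
        opt_dom.zero_grad(); loss_dom.backward(); opt_dom.step()
```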
Step S3.4: defining a test set as a set D, defining a pre-seizure start time tau *, giving an initial value 15, defining a seizure interval stop time tau e, and giving an initial value 90; loading the model parameters trained in the step S3.3, and freezing parameters of a multi-level depth feature extraction module model and a multi-level feature self-attention fusion module; inputting the target domain test data divided in the step S1 into a model, iteratively fine-tuning model parameters and adaptively deriving the personalized pre-seizure period tau * by the following steps:
(a) Calculating a classification error L cls by using the data fine tuning model in the step D, back-propagating fine tuning attack prediction layer parameters, repeatedly training the model until the accuracy of the verification set starts to be reduced or the training times are more than 100 times, stopping training, and storing the model parameters;
(b) Taking one minute of unlabeled data of [ tau ** +1], dividing the unlabeled data into 12 sections of samples through a sliding window with the length of 5 seconds, inputting the 12 sections of samples into a trained model, obtaining the average output probability p * of the 12 sections of samples, marking the 12 sections of samples as pre-seizure samples if p * is more than 0.6, adding the pre-seizure samples into a set D, simultaneously updating the value of tau *, enabling tau *=τ* +1 to be the same time, and repeating the second step; if p * is less than or equal to 0.6, performing the step (c);
(c) Taking one minute of unlabeled data of [ tau e-1,τe ], dividing the unlabeled data into 12 sections of samples through a sliding window with the length of 5 seconds, inputting the 12 sections of samples into a trained model, obtaining the average output probability p e of the 12 sections of samples, marking the 12 sections of samples as inter-seizure interval samples if p e is less than or equal to 0.6, adding the inter-seizure interval samples into a set D, simultaneously updating the value of tau e, enabling tau e=τe -1 to repeat the third step; if p e > 0.6, then performing step (d);
(d) Ending the iteration if τ e is equal to τ *, or neither τ e nor τ * is changed, the patient's personalized early-stage episode time being τ *, otherwise repeating steps (a) through (c);
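The adaptive derivation of the personalized pre-seizure period in steps (a)-(d) can be summarized by the sketch below; `predict_minute` stands for feeding one minute of data (12 five-second segments) through the trained model and returning the average pre-seizure probability, `fine_tune` stands for step (a), and all names, the convergence guards and the round limit are illustrative assumptions.

```python
def derive_preseizure_period(predict_minute, fine_tune, tau_star=15, tau_e=90,
                             threshold=0.6, max_rounds=50):
    """Infer the personalized pre-seizure start time tau_star (minutes before
    onset), following steps (a)-(d); tau_e is the inter-seizure cut-off time.

    predict_minute(lo, hi): average pre-seizure probability of the minute [lo, hi].
    fine_tune(): step (a), fine-tune the seizure-prediction layer on set D.
    """
    for _ in range(max_rounds):
        prev = (tau_star, tau_e)
        fine_tune()                                           # step (a)
        # step (b): extend the pre-seizure boundary while the minute looks preictal
        while tau_star < tau_e and predict_minute(tau_star, tau_star + 1) > threshold:
            tau_star += 1                 # the 12 segments would be added to set D
        # step (c): shrink the inter-seizure boundary while the minute looks interictal
        while tau_e > tau_star and predict_minute(tau_e - 1, tau_e) <= threshold:
            tau_e -= 1                    # the 12 segments would be added to set D
        # step (d): stop when the boundaries meet or nothing changed this round
        if tau_e <= tau_star or (tau_star, tau_e) == prev:
            break
    return tau_star

# toy check with an assumed "true" boundary at 35 minutes before onset
probe = lambda lo, hi: 0.9 if hi <= 35 else 0.2
print(derive_preseizure_period(probe, fine_tune=lambda: None))   # 35
```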
Step S4: loading the last saved model parameters in the step S3 to obtain a trained epileptic prediction model, inputting the continuous electroencephalogram data preprocessed in the step S1 into the model to obtain a continuous epileptic seizure prediction probability curve, wherein the method comprises the following steps of:
Firstly, carrying out moving average on a continuous epileptic seizure prediction probability curve by adopting a window with the length of 60 seconds, and calculating a smooth prediction probability to reduce spike and avoid false alarm caused by single spike; the prediction time point with the smooth prediction probability larger than the threshold value of 0.6 is considered as an epileptic seizure; if the predicted time point is within 15 minutes before the attack to 30 seconds before the attack, the correct prediction is considered; the method comprises the steps of testing the attack prediction effect of a model by adopting the Area Under the ROC Curve (AUC), the prediction accuracy (Sn) and the false positive rate (FALSE PREDICTING RATE, FPR) per hour, wherein the ROC Curve is drawn by taking the false positive rate and the true positive rate as the horizontal and vertical coordinates according to a series of two classification modes; the AUC values are calculated through the ROC curves, and the classification performance of different methods is compared, wherein the AUC values corresponding to the ROC curves which are closer to the upper left corner are larger, and the classification performance is better.
The prediction accuracy, also called sensitivity, is the percentage of the correct prediction times to the total attack times; defining prediction accuracy:
where N is the correct number of predictions and N is the total number of seizures;
the false alarm rate per hour is the number of false alarms of the epilepsy prediction algorithm in unit time, and the unit is the number of false alarms per hour; defining a false alarm rate per hour:
wherein T is the false alarm number, and T is the total time length (units/h) of the intracranial brain electrical signals;
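The evaluation of Step S4 can be sketched as follows: the per-segment pre-seizure probabilities are smoothed with a 60 s moving average, alarms are taken as upward crossings of the 0.6 threshold, and AUC, Sn and FPR per hour are computed. The 2.5 s hop between consecutive predictions and the helper names are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

HOP_S = 2.5                        # assumed time step between consecutive predictions

def smooth(probs, window_s=60.0):
    """60-second moving average of the per-segment pre-seizure probabilities."""
    w = max(1, int(round(window_s / HOP_S)))
    return np.convolve(probs, np.ones(w) / w, mode="same")

def alarm_onsets(probs, threshold=0.6):
    """Indices where the smoothed probability rises above the 0.6 threshold."""
    above = smooth(probs) > threshold
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def metrics(labels, probs, n_correct, n_seizures, n_false_alarms, total_hours):
    """AUC on the segment labels plus the event-level Sn and FPR/h of the text."""
    auc = roc_auc_score(labels, smooth(probs))
    sn = 100.0 * n_correct / n_seizures            # Sn = n / N x 100 %
    fpr_per_hour = n_false_alarms / total_hours    # FPR = false alarms / hours
    return auc, sn, fpr_per_hour

probs = np.random.rand(2000)
labels = (np.arange(2000) > 1500).astype(int)      # toy segment labels
print(len(alarm_onsets(probs)),
      metrics(labels, probs, n_correct=3, n_seizures=3,
              n_false_alarms=1, total_hours=20.0))
```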
The performance indices of the seizure prediction method based on domain-adversarial deep multi-scale feature fusion and transfer learning on the test set are shown in Table 2, where bold values denote the best result in each column.
As can be seen from Table 2, the average accuracy of the epilepsy prediction method based on the domain-adversarial multi-level deep convolutional feature fusion network on the public CHB-MIT dataset is 95.4%, the sensitivity is 95.92%, the false prediction rate per hour is below 0.11, and the inferred personalized pre-seizure duration is 26.368 minutes. The prediction accuracy for many patients reaches 95%, showing that the proposed method has high prediction accuracy, and the prediction sensitivity for many patients reaches 100%, showing that the proposed method has a strong ability to predict epileptic seizures.
Table 2. Performance indices of the epilepsy prediction method based on the domain-adversarial multi-level deep convolutional feature fusion network on the test set
In addition, to illustrate the advantages of the proposed method, Table 3 compares its performance indices with those of recent state-of-the-art deep-learning models for epilepsy prediction on the same test set.
Table 3. Comparison of epilepsy prediction performance indices of different methods
As can be seen from Table 3, the proposed epilepsy prediction method based on the domain-adversarial multi-level deep convolutional feature fusion network outperforms the existing advanced methods on all three indices: AUC, Sn and FPR/h.
The domain-adversarial multi-level deep convolutional feature fusion network epilepsy prediction method provided by the invention has been described in detail above, but the scope of the invention is obviously not limited thereto. Various modifications of the embodiments described above fall within the scope of the invention without departing from the scope of protection defined by the appended claims.

Claims (3)

1. An epilepsy prediction method based on a domain-adversarial multi-level deep convolutional feature fusion network, characterized by comprising the following steps:
Step S1: preprocessing the pre-acquired public epileptic EEG dataset CHB-MIT and dividing it into source-domain data and target-domain data, using the source-domain data and part of the target-domain data as the training set and the remaining target-domain data as the test set;
Step S2: constructing the model structure of the domain-adversarial multi-level deep convolutional feature fusion network with the PyTorch deep learning framework;
Step S3: feeding the training-set data established in step S1 into the model, training with the domain-adversarial method, and fine-tuning the model parameters with the target-domain training data to generate the final trained model;
Step S4: feeding the test-set EEG data preprocessed in step S1 into the final trained model to obtain the inferred pre-seizure duration and the pre-seizure prediction indices,
wherein:
in the step S1, data preprocessing comprises EEG channel selection, baseline correction, artifact removal, low-pass filtering and sliding-window segmentation, where the sliding window divides the continuous EEG signal into non-overlapping segments of length 5 s;
in the step S2, the constructed domain-adversarial multi-level deep convolutional feature fusion network comprises: a multi-level deep feature extraction module, a multi-level feature self-attention fusion module and a domain-discrimination and seizure-prediction module, wherein the multi-level deep feature extraction module is connected in series with the multi-level feature self-attention fusion module and the fusion result is fed into the domain-discrimination and seizure-prediction module; the multi-level deep feature extraction module extracts deep time-domain, frequency-domain and spatial-domain features from the EEG signal; the multi-level feature self-attention fusion module deeply fuses the time-space and frequency-space features and extracts the time-space-frequency fusion features used for classification; the domain-discrimination and seizure-prediction module outputs the classification result of the fusion features, computing the probabilities that an input EEG segment comes from the source domain or the target domain and the probabilities that it belongs to the pre-seizure period or the inter-seizure period;
in the step S3:
when the model is trained with the domain-adversarial method, the error between the output of the domain-discrimination and seizure-prediction module and the labels is computed with a cross-entropy function and the model parameters are updated iteratively by error back-propagation and gradient descent; the training batch size is set to 256, the learning rate to 0.0001, an Adam optimizer is used for parameter optimization, the loss function is the cross-entropy loss, and the model parameters are saved after 160 training iterations;
when the target-domain training data are used to fine-tune the model parameters, the parameters of the multi-level deep feature extraction module and the multi-level feature self-attention fusion module are frozen, the error between the output of the domain-discrimination and seizure-prediction module and the labels is computed with the cross-entropy function, and the error is back-propagated to fine-tune the parameters of the domain-discrimination and seizure-prediction module;
in the step S4:
when inferring the pre-seizure duration, a generalized learning strategy is adopted to automatically and iteratively derive the personalized pre-seizure duration;
when computing the pre-seizure prediction indices, the continuous EEG data and labels of the test set are fed into the trained model in chronological order to obtain the continuous prediction curve output by the model, and the seizure prediction performance is evaluated with the area under the ROC curve, the prediction accuracy and the false prediction rate per hour,
in the step S1, the specific steps of data preprocessing comprise:
step S1.1: channel selection, comprising: selecting the CHB-MIT epileptic EEG as data input; to ensure data quality, partially damaged channels and repeated channels are removed, and the following 22 channels are retained as model input: FP1-F7, F7-T7, T7-P7, P7-O1, FP1-F3, F3-C3, C3-P3, P3-O1, FP2-F4, F4-C4, C4-P4, P4-O2, FP2-F8, F8-T8, T8-P8-0, P8-O2, FZ-CZ, CZ-PZ, P7-T7, T7-FT9, FT9-FT10, FT10-T8;
step S1.2: baseline correction and artifact removal, comprising:
to avoid the baseline drift that occurs during long-term EEG acquisition, an empirical mode decomposition algorithm is used to correct the baseline, and noise and artifacts in the EEG signal are removed with a direct removal method;
step S1.3: 0-64 Hz filtering, comprising: applying 0-64 Hz low-pass filtering to remove high-frequency interference from the EEG signal;
step S1.4: segmenting the data with a sliding window, comprising: segmenting the raw EEG signal with a sliding window of length 5 s and sliding step 2.5 s, dividing the epileptic and non-epileptic signals into matrix data segments of 28160 sampling points each, i.e. 22 channels × 5 s × 256 Hz;
step S1.5: constructing the training set and the test set, comprising: S1.5.1: when patient i is analyzed, all data of patient i are assigned to the target domain and all data of the other patients to the source domain; the source-domain data together with part of the target-domain data form the training set and the remaining target-domain data form the test set;
S1.5.2: the operation of S1.5.1 is repeated N times, where N is the number of seizures of patient i; at the j-th repetition, the continuous data of the j-th complete seizure of patient i, including the pre-seizure and inter-seizure periods, are used as test data and the remaining data together with the source-domain data are used as training data; after the N repetitions every seizure has served once as test data, and the average test result over the N seizures is taken as the seizure-prediction result for patient i,
in the step S2, the constructed domain-adversarial multi-level deep convolutional feature fusion network specifically comprises:
(a) the multi-level deep feature extraction module, used to
extract the time-domain information, frequency-domain information and channel-distribution spatial-domain information of the EEG signal, wherein the multi-level deep feature extraction module comprises:
a time-domain convolution layer, which uses convolution kernels of different sizes and different convolution strides to extract time-domain features of the EEG signal at different scales; one time-domain convolution layer can be expressed as:
x_t = ReLU(Conv(x))   (1),
where Conv denotes convolution and ReLU the linear rectification function; to further extract time-domain features, time-domain convolutions with 5 kernels are used to extract features at different scales, with the parameters set as follows: the kernel sizes are 1×4, 1×8, 1×16, 1×32 and 1×32, the stride of each layer matches its kernel size, and no padding is used; through the time-domain convolution layers, 5 time-domain feature maps x_1, x_2, x_3, x_4 and x_5 are obtained, of sizes 22×320, 22×160, 22×80, 22×40 and 22×40, respectively;
a frequency-domain convolution layer, which extracts the spectral features of the EEG signal using wavelet decomposition, wherein:
the Daubechies 4th-order wavelet (db4) is selected as the wavelet decomposition basis; the signal is fed into the frequency-domain convolution layer, whose output consists of two parts, the even-indexed outputs forming the low-frequency signal and the odd-indexed outputs the high-frequency signal;
one frequency-domain convolution layer can be expressed as:
x_L(t) = Conv(x)(2t)   (2),
x_H(t) = Conv(x)(2t+1)   (3),
where x_L and x_H denote the low-frequency and high-frequency signals of the wavelet decomposition, respectively; the frequency-domain convolution splits the signal by frequency into a low-frequency approximation signal and a high-frequency detail signal, and the low-frequency signal is fed into the frequency-domain convolution layer again to obtain a decomposition at lower frequencies;
the data sampling frequency is 256 Hz, so the highest frequency contained in the signal is 128 Hz; to extract the 5 corresponding clinical frequency-band signals, namely the delta band (0-4 Hz), theta band (4-8 Hz), alpha band (8-12 Hz), beta band (13-30 Hz) and gamma band (30-50 Hz), the decomposition must reach the lowest 0-4 Hz band, so 5 wavelet convolution layers are used; the parameters are set as follows: the kernel size is 1×8 and the stride of each layer is 1×2;
through the frequency-domain convolution layers, 5 frequency-domain feature maps x_γ, x_β, x_α, x_θ and x_δ are obtained, of sizes 22×320, 22×160, 22×80, 22×40 and 22×40, respectively;
a spatial-domain convolution layer, which first computes the Pearson correlation coefficients of the signals between channels to obtain an initial inter-channel connection matrix A_0 of size 22×22; for variables X and Y the Pearson correlation coefficient is computed as:
Pearson(X, Y) = (E[XY] − E[X]E[Y]) / ( sqrt(E[X²] − E[X]²) · sqrt(E[Y²] − E[Y]²) )   (4),
where E(·) denotes the expected value of a variable;
the Pearson correlation coefficient A_0(i, j) between channel i and channel j is computed as:
A_0(i, j) = A_0(j, i) = Pearson(X_i, X_j)   (5),
where X_i and X_j denote the EEG signals acquired on channels i and j, and A_0(i, j) is the element in row i and column j of the matrix A_0;
to filter out useless spatial information in the initial inter-channel connection matrix A_0 and amplify the important spatial information, an attention mechanism is used to optimize the connection matrix: first, the original two-dimensional connection matrix is flattened into a one-dimensional connection vector, which is then compressed by a first linear fully connected (FC) layer and activated with the nonlinear ReLU function, so that useless information is filtered out by the compression and only useful information is retained; a second linear FC layer restores the original number of elements, yielding channel weights according to importance; finally a Tanh activation assigns weights to the different channels, so that the model attends more to the features of channels carrying much information and suppresses the features of unimportant channels; the calculation is:
A_c = Tanh(FC(ReLU(FC(A_0))))   (6),
where A_c is the channel spatial-domain feature matrix, of size 22×22;
(b) the multi-level feature self-attention fusion module, used to
fuse the time-domain and spatial-domain information into time-space features, fuse the frequency-domain and spatial-domain information into frequency-space features, and finally realize the time-space-frequency feature fusion, specifically comprising:
first, the multi-scale time-domain features and the multi-band frequency-domain features obtained above are concatenated channel-wise:
x_tem = Concat(x_1, x_2, x_3, x_4, x_5),  x_fre = Concat(x_γ, x_β, x_α, x_θ, x_δ),
where Concat(·) denotes concatenation of the matrices along the EEG-channel dimension; after the concatenation, the time-domain feature x_tem and the frequency-domain feature x_fre are obtained, both of size 22×640; the channel spatial-domain feature matrix is then multiplied with the time-domain feature and with the frequency-domain feature, respectively, to obtain the fused time-space and frequency-space features:
x_ts = A_c · x_tem,  x_fs = A_c · x_fre,
and after the fusion x_ts and x_fs keep the size 22×640;
(c) the domain-discrimination and seizure-prediction module, comprising:
a seizure-prediction layer, which, after the most discriminative deep fused time-space-frequency features are obtained, classifies the features with two fully connected layers:
out_pre = Softmax(FC(FC(x_fusion))),
where x_fusion denotes the deep fused time-space-frequency feature and out_pre is a 2-dimensional vector whose components represent the probabilities that the input signal belongs to the pre-seizure and the inter-seizure period; through the seizure-prediction layer, the input signal can be classified into pre-seizure or inter-seizure data on the basis of the deep fused time-space-frequency features;
a domain-discrimination layer, which classifies the deep fusion features coming from the different domains with two fully connected layers:
out_dom = Softmax(FC(GELU(FC(x_fusion)))),
where out_dom is a 2-dimensional vector whose components represent the probabilities that the input signal comes from the source domain and the target domain, and GELU is the Gaussian error linear unit used as activation function; through the domain-discrimination layer, whether the input signal comes from the source domain or the target domain can be accurately distinguished on the basis of the deep fused time-space-frequency features,
in the step S3, the training-set data established in step S1 are fed into the model, the model is trained with the domain-adversarial method, and the model parameters are fine-tuned with the target-domain training data to generate the final trained model, comprising:
the classification error of the model is computed on the source-domain data with the cross-entropy (CE) loss:
L_cls = −Σ_i [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ],
where x_s denotes a source-domain sample, y_i is the label of the source-domain data, being 1 for pre-seizure data and 0 for inter-seizure data, and p_i is the probability that the sample is predicted to be positive; L_cls mainly constrains the model parameters so that the model classifies pre-seizure and inter-seizure data as accurately as possible;
the distribution error of the features is computed on the source-domain and target-domain data, again with a cross-entropy loss:
L_dom = −Σ_i [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ],
where x_s + x_t denotes the source-domain and target-domain data, y_i is the domain label of the data, being 1 if the data come from the source domain and 0 if from the target domain, and p_i is the probability that the sample is predicted to come from the source domain; L_dom mainly constrains the feature-extraction and fusion network parameters so that the distributions of the features extracted by the model on the source domain and the target domain are as close as possible,
wherein the training process of the domain countermeasure method comprises the following steps:
Step S3.1: freezing domain discrimination layer parameters, calculating a loss function L=L cls+GRL(Ldom), and updating parameters of a multi-level depth feature extraction module, a multi-level feature self-attention fusion module and a domain discrimination attack prediction module by back propagation; wherein L cls is used to measure whether the model is accurate for pre-and inter-seizure data classification, L dom is used to measure whether the time-space-frequency fusion features are uniformly distributed on the source domain and the target domain, the larger the distribution difference is, the smaller L dom is, in order to make the fusion features distributed as close as possible on the source domain and the target domain, L dom is expected to be as large as possible and L cls is expected to be as small as possible; gradient inversion (GRL) is therefore used to ensure that the gradient descent direction of L dom is consistent with L cls;
Step S3.2: freezing the parameters of the multi-level depth feature extraction module, the multi-level feature self-attention fusion module and the seizure prediction layer, calculating the loss function L = L_dom, and updating the domain discrimination layer parameters by back-propagation, so that the domain discrimination layer can more accurately distinguish features from the source domain and features from the target domain;
Step S3.3: if the classification accuracy of the seizure prediction no longer increases, and the classification accuracy of the domain discrimination layer is above 50% and no longer increases, training ends; otherwise, steps S3.1 and S3.2 are repeated. Step S3.1 pulls the fused feature distributions together so that the domain discriminator cannot separate source-domain and target-domain data, while step S3.2 trains the domain discrimination layer to distinguish the fused features of the source domain and the target domain as accurately as possible; through this adversarial interplay, the distributions of the extracted fused features in the source domain and the target domain are continuously brought closer;
Step S3.4: defining a test set as a set D, defining a pre-seizure start time tau *, giving an initial value 15, defining a seizure interval stop time tau E, and giving an initial value 90; loading the model parameters trained in the step S3.3, and freezing parameters of a multi-level depth feature extraction module model and a multi-level feature self-attention fusion module; inputting the target domain test data divided in the step S1 into a model, iteratively fine-tuning model parameters and adaptively deriving the personalized pre-seizure period tau * by the following steps:
(a) calculating the classification error L_cls with the data in the set D to fine-tune the model, back-propagating to fine-tune the seizure prediction layer parameters, and training repeatedly until the accuracy on the validation set starts to decrease or the number of training iterations exceeds 100, then stopping training and saving the model parameters;
(b) taking one minute of unlabeled data in [τ*, τ*+1], dividing it into 12 segments with a 5-second sliding window, inputting the 12 segments into the trained model, and obtaining their average output probability p*; if p* > 0.6, marking the 12 segments as pre-seizure samples, adding them to the set D, updating τ* as τ* = τ* + 1, and repeating step (b); if p* ≤ 0.6, performing step (c);
(c) taking one minute of unlabeled data in [τ_e − 1, τ_e], dividing it into 12 segments with a 5-second sliding window, inputting the 12 segments into the trained model, and obtaining their average output probability p_e; if p_e ≤ 0.6, marking the 12 segments as inter-seizure samples, adding them to the set D, updating τ_e as τ_e = τ_e − 1, and repeating step (c); if p_e > 0.6, performing step (d);
(d) if τ_e equals τ*, or neither τ_e nor τ* changes any further, the iteration ends and the patient's personalized pre-seizure period is τ*; otherwise, steps (a) to (c) are repeated.
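The alternating procedure of steps S3.1-S3.3 can be sketched as follows. This is a minimal PyTorch sketch under explicit assumptions: feature_net, fusion_net, seizure_head, domain_head, the optimizers and the batch variables are hypothetical placeholders standing in for the modules described in the text, and the gradient reversal layer is written in the standard way (identity in the forward pass, negated gradient in the backward pass); it is not the patent's own code.

import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Gradient reversal layer: identity forward, negated gradient backward.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def grl(x):
    return GradReverse.apply(x)

def adversarial_step(nets, src_x, src_y, tgt_x, feat_opt, disc_opt):
    feature_net, fusion_net, seizure_head, domain_head = nets

    # Step S3.1: freeze the domain discrimination layer, update everything else
    # with L = L_cls + GRL(L_dom).
    for p in domain_head.parameters():
        p.requires_grad_(False)
    fused_src = fusion_net(feature_net(src_x))
    fused_tgt = fusion_net(feature_net(tgt_x))
    l_cls = F.cross_entropy(seizure_head(fused_src), src_y)
    fused_all = torch.cat([fused_src, fused_tgt], dim=0)
    dom_y = torch.cat([torch.ones(len(src_x)), torch.zeros(len(tgt_x))]).long().to(src_x.device)
    l_dom = F.cross_entropy(domain_head(grl(fused_all)), dom_y)  # GRL pushes L_dom upward
    feat_opt.zero_grad()
    (l_cls + l_dom).backward()
    feat_opt.step()
    for p in domain_head.parameters():
        p.requires_grad_(True)

    # Step S3.2: freeze the extractor, fusion module and prediction layer,
    # update only the domain discrimination layer with L = L_dom.
    with torch.no_grad():
        fused_all = torch.cat([fusion_net(feature_net(src_x)),
                               fusion_net(feature_net(tgt_x))], dim=0)
    l_disc = F.cross_entropy(domain_head(fused_all), dom_y)
    disc_opt.zero_grad()
    l_disc.backward()
    disc_opt.step()
    return l_cls.item(), l_disc.item()

The stopping condition described in step S3.3 is evaluated outside this function, over training epochs.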
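The adaptive derivation of the personalized pre-seizure period in step S3.4 can likewise be sketched in Python. Here get_minute_segments, predict_proba, fine_tune_on, label_as_preictal and label_as_interictal are hypothetical helpers standing in for the operations described in steps (a)-(d); the sketch only mirrors the control flow of the text.

def adapt_preictal_period(model, D, tau_star=15, tau_e=90, thr=0.6):
    # D is the growing sample set; tau_star and tau_e are the current pre-seizure
    # start time and inter-seizure stop time, with the initial values given above.
    while True:
        fine_tune_on(model, D)                     # step (a): fine-tune the prediction layer on D
        prev = (tau_star, tau_e)

        # step (b): extend the pre-seizure window one minute at a time
        while tau_star < tau_e:
            segs = get_minute_segments(tau_star)   # 12 five-second samples in [tau_star, tau_star + 1]
            if predict_proba(model, segs) > thr:
                D.extend(label_as_preictal(segs))
                tau_star += 1
            else:
                break

        # step (c): pull the inter-seizure boundary back one minute at a time
        while tau_e > tau_star:
            segs = get_minute_segments(tau_e - 1)  # 12 five-second samples in [tau_e - 1, tau_e]
            if predict_proba(model, segs) <= thr:
                D.extend(label_as_interictal(segs))
                tau_e -= 1
            else:
                break

        # step (d): stop when the two boundaries meet or no longer move
        if tau_e == tau_star or (tau_star, tau_e) == prev:
            return tau_star                        # personalized pre-seizure period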
In step S4, the model parameters finally saved in step S3 are loaded to obtain the trained epilepsy prediction model, and the continuous electroencephalogram data preprocessed in step S1 are input into the model to obtain a continuous seizure prediction probability curve. This includes:
First, the continuous seizure prediction probability curve is smoothed with a 60-second moving-average window to obtain a smoothed prediction probability, which suppresses spikes and avoids false alarms caused by a single spike. A prediction time point whose smoothed probability exceeds the threshold of 0.6 is regarded as a predicted seizure; if this prediction time point falls within the interval from 15 minutes before the seizure to 30 seconds before the seizure, the prediction is considered correct. The area under the ROC curve (AUC), the prediction accuracy and the hourly false-alarm rate are used to test the seizure prediction performance of the model; the ROC curve is drawn with the false positive rate as the abscissa and the true positive rate as the ordinate over a series of binary classification thresholds. The AUC is then calculated from the ROC curve to compare the classification performance of different methods: a curve closer to the upper-left corner has a larger AUC and represents better classification performance.
The prediction accuracy, i.e. the sensitivity, is the percentage of correctly predicted seizures among the total number of seizures, defined as n/N expressed as a percentage, where n is the number of correct predictions and N is the total number of seizures;
The hourly false-alarm rate is the number of false alarms raised by the epilepsy prediction algorithm per unit time, in units of times/h, and is defined as t/T, where t is the number of false alarms and T is the total duration of the intracranial electroencephalogram signals in hours.
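As an illustration of the post-processing and metrics of step S4, the following sketch smooths a per-window probability curve with a 60-second moving average, raises an alarm when the smoothed probability exceeds 0.6, and computes the sensitivity n/N and the hourly false-alarm rate t/T. The 5-second window step, the helper names and the input format are assumptions made for the sketch, and consecutive alarm windows are counted individually here for simplicity.

import numpy as np

WIN_SEC = 5        # assumed step of the prediction probability curve (one value per 5-second window)
SMOOTH_SEC = 60    # moving-average length stated in the text
THR = 0.6          # alarm threshold stated in the text

def smooth_probabilities(probs):
    k = SMOOTH_SEC // WIN_SEC
    return np.convolve(np.asarray(probs), np.ones(k) / k, mode="same")

def evaluate(probs, seizure_onsets_sec, total_hours):
    smoothed = smooth_probabilities(probs)
    alarm_times = np.flatnonzero(smoothed > THR) * WIN_SEC   # alarm times in seconds
    in_pre_window = lambda t, onset: onset - 15 * 60 <= t <= onset - 30
    false_alarms = sum(
        not any(in_pre_window(t, onset) for onset in seizure_onsets_sec)
        for t in alarm_times
    )
    predicted_seizures = sum(
        any(in_pre_window(t, onset) for t in alarm_times)
        for onset in seizure_onsets_sec
    )
    sensitivity = predicted_seizures / max(len(seizure_onsets_sec), 1)   # n / N
    false_alarm_rate = false_alarms / total_hours                        # t / T, per hour
    return sensitivity, false_alarm_rate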
2. The method for predicting epilepsy based on the domain-adversarial multi-level deep convolution feature fusion network according to claim 1, wherein, in the multi-level feature self-attention fusion module:
For deep fusion of the time-domain and frequency-domain features, a multi-level feature self-attention fusion method is adopted that takes the correlation between different feature points into account. The temporal-spatial features and the frequency-spatial features are each convolved with a kernel of size 22×1 and a stride of 22×1 and then multiplied with a transpose, yielding the attention matrix A_att of the cross-domain features, whose size is 640×640; A_att is:
where Softmax denotes the normalization function used to normalize the output and T denotes the transpose of the feature matrix; this reduces the mutual interference between features of different frequency bands and different scales, and also reduces the computational cost of the model;
After the attention matrix is obtained, the time-domain and frequency-domain features are fused across domains to obtain the time-space-frequency fusion feature:
where x_fus is the fusion feature, of size 44×640, and I is the identity matrix;
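The two formulas referenced above appear as images in the original publication; a reconstruction that is consistent with the stated sizes (22×1 convolutions with stride 22×1, a 640×640 attention matrix, a 44×640 fusion feature and the identity matrix I), and that should be read as an assumption rather than the patent's exact expressions, is:

A_{att} = \mathrm{Softmax}\left( \mathrm{Conv}(x_{t})^{T} \, \mathrm{Conv}(x_{f}) \right) \in \mathbb{R}^{640 \times 640}

x_{fus} = \begin{bmatrix} x_{t} \\ x_{f} \end{bmatrix} \left( A_{att} + I \right) \in \mathbb{R}^{44 \times 640}

Here x_t and x_f are taken to be the 22×640 temporal-spatial and frequency-spatial feature maps, so that each 22×1 convolution collapses the channel dimension and the transposed product yields the 640×640 attention matrix.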
For x_fus, a self-attention block is further used to deeply fuse the time-domain and frequency-domain features; specifically, the fusion feature x_fus is refined with a self-attention mechanism, and the multi-head self-attention module is computed as follows:
where x_mu denotes the feature map output by the multi-head attention module and d denotes the hyper-parameter that controls the scaling;
Finally, a residual structure is adopted to fuse the outputs of the different self-attention modules; the specific calculation formula is as follows:
where LN(·) is layer normalization, which normalizes the input, reduces the influence of the input amplitude on the model and reduces overfitting, and D(·) is the standard-deviation function of the computed variable; the deep fusion feature finally output after deep fusion has a size of 2×192.
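As an approximation of the self-attention refinement and residual fusion described above, the following sketch applies standard scaled dot-product multi-head attention to x_fus and combines the output with a residual connection and layer normalization. The head count, the use of PyTorch's built-in MultiheadAttention and the omission of the standard-deviation term D(·) from the residual formula are simplifications made for the sketch.

import torch
import torch.nn as nn

class SelfAttentionFusion(nn.Module):
    def __init__(self, dim=640, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x_fus):
        # x_fus: (batch, 44, 640) time-space-frequency fusion feature
        attn_out, _ = self.attn(x_fus, x_fus, x_fus)   # Softmax(Q K^T / sqrt(d)) V
        return self.norm(x_fus + attn_out)             # residual connection + layer normalization

x = torch.randn(8, 44, 640)
refined = SelfAttentionFusion()(x)   # (8, 44, 640); a further projection would map to the 2 x 192 output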
3. A computer readable storage medium storing a computer program enabling a processor to perform the method according to one of claims 1-2.
CN202310147446.6A 2023-02-15 2023-02-15 Epileptic prediction method based on domain-oriented multi-level deep convolution feature fusion network Active CN115886840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310147446.6A CN115886840B (en) 2023-02-15 2023-02-15 Epileptic prediction method based on domain-oriented multi-level deep convolution feature fusion network


Publications (2)

Publication Number Publication Date
CN115886840A CN115886840A (en) 2023-04-04
CN115886840B true CN115886840B (en) 2024-05-03

Family

ID=86496434

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310147446.6A Active CN115886840B (en) 2023-02-15 2023-02-15 Epileptic prediction method based on domain-oriented multi-level deep convolution feature fusion network

Country Status (1)

Country Link
CN (1) CN115886840B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117752308B (en) * 2024-02-21 2024-05-24 中国科学院自动化研究所 Epilepsy prediction method and device


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2571265A (en) * 2018-02-16 2019-08-28 Neuro Event Labs Oy Method for detecting and classifying a motor seizure
US11612750B2 (en) * 2019-03-19 2023-03-28 Neuropace, Inc. Methods and systems for optimizing therapy using stimulation mimicking natural seizures

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110960191A (en) * 2019-11-29 2020-04-07 杭州电子科技大学 Epilepsia electroencephalogram signal classification method based on frequency spectrum energy diagram
CN113786204A (en) * 2021-09-03 2021-12-14 北京航空航天大学 Epilepsia intracranial electroencephalogram early warning method based on deep convolution attention network
CN114224363A (en) * 2022-01-25 2022-03-25 杭州电子科技大学 Child epilepsy syndrome auxiliary analysis method based on double-flow 3D deep neural network
CN114366124A (en) * 2022-01-25 2022-04-19 北京航空航天大学 Epilepsia electroencephalogram identification method based on semi-supervised deep convolution channel attention single classification network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
EEG abnormality detection based on deep convolutional neural networks; Du Yunmei; Huang Shuai; Liang Huiying; Journal of South China Normal University (Natural Science Edition); 2020-04-15 (No. 02); full text *


Similar Documents

Publication Publication Date Title
CN113627518B (en) Method for realizing neural network brain electricity emotion recognition model by utilizing transfer learning
CN110693493B (en) Epilepsia electroencephalogram prediction feature extraction method based on convolution and recurrent neural network combined time multiscale
CN113786204B (en) Epileptic intracranial brain electrical signal early warning method based on deep convolution attention network
CN111340142B (en) Epilepsia magnetoencephalogram spike automatic detection method and tracing positioning system
Anuragi et al. Automated FBSE-EWT based learning framework for detection of epileptic seizures using time-segmented EEG signals
CN114366124B (en) Epileptic electroencephalogram identification method based on semi-supervised deep convolution channel attention list classification network
Yildiz et al. Classification and analysis of epileptic EEG recordings using convolutional neural network and class activation mapping
CN111166327A (en) Epilepsy diagnosis device based on single-channel electroencephalogram signal and convolutional neural network
CN113647962B (en) Epileptic positioning and seizure prediction method based on deep learning integrated model
CN115886840B (en) Epileptic prediction method based on domain-oriented multi-level deep convolution feature fusion network
CN112426161B (en) Time-varying electroencephalogram feature extraction method based on domain self-adaptation
CN112800928B (en) Epileptic seizure prediction method of global self-attention residual error network integrating channel and spectrum characteristics
CN117503057B (en) Epileptic seizure detection device and medium for constructing brain network based on high-order tensor decomposition
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
Soleimani-B et al. Adaptive prediction of epileptic seizures from intracranial recordings
Li et al. GCNs–FSMI: EEG recognition of mental illness based on fine-grained signal features and graph mutual information maximization
CN117193537A (en) Double-branch convolutional neural network motor imagery intention decoding method based on self-adaptive transfer learning
CN112057068A (en) Epilepsia pathological data classification method and device and storage medium
Mühlberg et al. Seizure prediction by multivariate autoregressive model order optimization
Chavan et al. EEG signals classification and diagnosis using wavelet transform and artificial neural network
CN115017960A (en) Electroencephalogram signal classification method based on space-time combined MLP network and application
Zhang et al. Tiny CNN for seizure prediction in wearable biomedical devices
El-Fequi et al. Prediction of epileptic seizures: A statistical approach with DCT compression
CN116616800B (en) Scalp electroencephalogram high-frequency oscillation signal identification method and device based on meta-shift learning
Praveena et al. Improved Artificial Bee Colony Based Feature Selection for Epileptic Seizure Detection.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant