CN116712090A - Epileptic electroencephalogram signal automatic detection and classification model establishment method and application - Google Patents

Epileptic electroencephalogram signal automatic detection and classification model establishment method and application

Info

Publication number
CN116712090A
CN116712090A (Application No. CN202311001225.4A)
Authority
CN
China
Prior art keywords
epileptic
classification
epileptic electroencephalogram
encoder
classification model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311001225.4A
Other languages
Chinese (zh)
Inventor
刘伟奇
马学升
陈金钢
陈凯乐
王肖玮
龚哲晰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongxin Zhiyi Technology Beijing Co ltd
Original Assignee
Tongxin Zhiyi Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongxin Zhiyi Technology Beijing Co ltd filed Critical Tongxin Zhiyi Technology Beijing Co ltd
Priority to CN202311001225.4A priority Critical patent/CN116712090A/en
Publication of CN116712090A publication Critical patent/CN116712090A/en
Pending legal-status Critical Current


Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/372Analysis of electroencephalograms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/40Detecting, measuring or recording for evaluating the nervous system
    • A61B5/4076Diagnosing or monitoring particular conditions of the nervous system
    • A61B5/4094Diagnosing or monitoring seizure diseases, e.g. epilepsy
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Neurology (AREA)
  • Artificial Intelligence (AREA)
  • Psychology (AREA)
  • Neurosurgery (AREA)
  • Physiology (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The embodiments of the present application disclose a method for establishing an automatic epileptic electroencephalogram (EEG) signal detection and classification model, and an application thereof. The establishment method comprises the following steps: extracting epileptic EEG signals and performing data preprocessing, wherein the data preprocessing includes feature selection using the coyote optimization algorithm to obtain an optimal feature subset; and passing the selected optimal feature subset to a deep canonical correlation sparse autoencoder model for classification training, the trained model serving as the established automatic epileptic EEG signal detection and classification model. The method addresses the problems that conventional epileptic EEG classification techniques still rely on manual interpretation, which consumes a great deal of time and makes the result subjective, and that the classification efficiency of existing epileptic EEG classification models still needs to be improved further.

Description

Epileptic electroencephalogram signal automatic detection and classification model establishment method and application
Technical Field
The present application relates to the technical field of medical image processing, and in particular to a method for establishing an automatic epileptic electroencephalogram (EEG) signal detection and classification model and to an application thereof.
Background
Epilepsy is regarded as one of the most refractory and severe neurological diseases affecting the human brain. It is a nervous-system disorder in which abnormal excitability of the cerebral cortex arises from the discharge of large populations of cerebral nerve cells, and seizures occur abruptly. At the molecular level, several pathways are involved in the apoptosis of pre-myelinating oligodendrocytes or subplate neurons during perinatal brain development. In hypoxic-ischemic encephalopathy, elevated concentrations of glutamate or free-radical species, inflammatory cytokines released by activated microglia and astrocytes (such as TNF-α, IL-1β, IL-6, IL-12, IL-15 and IL-18), low pH during infection, and free iron secondary to cerebral hemorrhage are widely recognized as important triggers of epileptic events. Seizures strongly affect patients' social interaction, physical activity and subsequent emotional state, so timely diagnosis and treatment of epilepsy is of great significance.
During seizures the EEG waveform may show spikes, sharp waves and similar patterns, and long-term EEG monitoring is commonly used in clinical medicine to determine whether a patient has the disease. Electroencephalography (EEG) records brain neural activity through the electrical potentials it generates and occupies an irreplaceable position in the detection of epileptic disease. The widespread use of EEG can be attributed to its low cost, availability and non-invasiveness. By visually interpreting recorded EEG signals, a neurologist can largely distinguish normal brain activity between seizures (the interictal period) from brain activity during a seizure (the ictal period). However, diagnosing epilepsy from EEG signals is laborious and time-consuming, because a neurologist or epileptologist must carefully screen the recordings, and human error may occur. To reduce the likelihood of misinterpretation, an effective, objective and rapid scheme for processing large volumes of EEG recordings is required. Several machine learning (ML) methods are currently available for diagnosing seizures using nonlinear, statistical and frequency-domain parameters. In conventional machine learning, the classifier and the feature selection are obtained by trial and error, and sophisticated data mining and signal processing are needed to design a method that works well with small amounts of information. As the volume of data increases, such machine learning techniques may no longer work effectively, so deep learning (DL) is undoubtedly a powerful means of automatically identifying epileptic EEG signals.
Deep learning is a machine learning approach that builds multi-layer neural networks capable of discovering hidden distributed feature representations in data. In its basic structure, deep learning is equivalent to a deep neural network: the more layers the network has, the better it can approximate complex functions through multi-level feature learning, giving it a representational capability that shallow networks do not possess. A basic back-propagation (BP) neural network consists of three parts: an L1 input layer, an L2 hidden layer and an L3 output layer. Deep learning combines low-level features to form more abstract high-level attribute categories or features, thereby discovering hidden distributed feature representations in the data. It therefore places low demands on the researchers' prior domain knowledge; the key requirement is sufficient data at the input layer, so that the fit of the algorithm can be optimized and better results obtained through repeated training. Compared with the prior art, deep learning is thus better suited to settings with large amounts of information.
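As an illustration of the three-layer structure just described (this sketch is not part of the patent; the layer sizes, the 178-sample segment length and the use of PyTorch are assumptions chosen only for demonstration), a minimal BP network and one training step could look like this:

```python
import torch
import torch.nn as nn

# Minimal three-layer BP network: L1 input layer -> L2 hidden layer -> L3 output layer.
# n_features, n_hidden and n_classes are illustrative assumptions, not values from the patent.
class BPNetwork(nn.Module):
    def __init__(self, n_features=178, n_hidden=64, n_classes=5):
        super().__init__()
        self.hidden = nn.Linear(n_features, n_hidden)   # L1 -> L2
        self.output = nn.Linear(n_hidden, n_classes)    # L2 -> L3
        self.act = nn.Sigmoid()

    def forward(self, x):
        return self.output(self.act(self.hidden(x)))

# One training step: the output error is back-propagated to update the weights.
model = BPNetwork()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
x = torch.randn(32, 178)            # a batch of 32 hypothetical EEG feature vectors
y = torch.randint(0, 5, (32,))      # multi-class labels 0..4
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```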
A variety of classification models are available in the literature for detecting and classifying seizures using EEG signals. Despite the many advantages of the machine learning and deep learning models reported in the literature, their classification efficiency still needs to be improved further.
Disclosure of Invention
The embodiments of the present application aim to provide a method for establishing an automatic epileptic EEG signal detection and classification model, and an application thereof, to solve the problems in the prior art that conventional epileptic EEG classification still relies on manual interpretation, which consumes a great deal of time and makes the result subjective, and that the classification efficiency of existing epileptic EEG classification models still needs further improvement.
In order to achieve the above object, an embodiment of the present application provides a method for establishing an automatic epileptic EEG signal detection and classification model, comprising: extracting epileptic EEG signals and performing data preprocessing, wherein the data preprocessing includes feature selection using the coyote optimization algorithm to obtain an optimal feature subset;
and passing the selected optimal feature subset to a deep canonical correlation sparse autoencoder model for classification training, the deep canonical correlation sparse autoencoder model serving, after training is finished, as the established automatic epileptic EEG signal detection and classification model.
Optionally, the method further comprises:
adjusting the parameters involved in the deep canonical correlation sparse autoencoder model by applying the krill herd algorithm, so as to improve the overall classification efficiency.
Optionally, the extracting of epileptic EEG signals includes:
capturing data features through an autoencoder neural network, so as to extract the epileptic EEG signals.
Optionally, the data preprocessing further includes:
first, normalizing the epileptic EEG data set using a linear normalization method;
then, performing class-label processing, in which each instance in the epileptic EEG data set is assigned an appropriate class label.
Optionally, the deep canonical correlation sparse autoencoder model is determined by combining the canonical correlation analysis of deep neural networks with a sparse autoencoder, where N denotes the total amount of data; X and Y denote the input matrices of the two data sets, and x, y denote individual data samples in those sets; f and g denote the deep neural networks used to extract the nonlinear features of all data sets while encoding all inputs, with parameters W_f and W_g; U = [u_1, …, u_L] and V = [v_1, …, v_L] are the canonical correlation analysis directions that project the deep-network outputs onto the L units of the top layer; f(x) and g(y) denote the nonlinear representations used at test time; x̂ and ŷ denote the reconstructions (updates) of the inputs x and y; and ρ̂_j denotes the average activation of hidden unit j, used in the sparsity term.
Optionally, the krill herd algorithm derives a fitness function to characterize the performance of a candidate solution and thereby obtain enhanced classification accuracy; the fitness is defined as a positive integer, and minimization of the classification error rate (the proportion of misclassified instances) is taken as the fitness function.
in order to achieve the above purpose, the present application also provides an epileptic electroencephalogram signal automatic detection and classification method, which comprises: and acquiring an electroencephalogram signal, inputting the electroencephalogram signal into an epileptic electroencephalogram signal automatic detection and classification model established by the epileptic electroencephalogram signal automatic detection and classification model establishment method, and automatically detecting and classifying the epileptic electroencephalogram signal by the epileptic electroencephalogram signal automatic detection and classification model.
In order to achieve the above object, the present application further provides an apparatus for automatically detecting and classifying epileptic brain electrical signals, comprising: a memory; and
a processor coupled to the memory, the processor configured to perform the steps of the method as described above.
To achieve the above object, the present application also provides a computer storage medium having stored thereon a computer program which, when executed by a machine, implements the steps of the method as described above.
The embodiment of the application has the following advantages:
the embodiment of the application provides a method for automatically detecting epileptic brain electrical signals and establishing a classification model, which comprises the following steps: extracting epileptic electroencephalogram signals, and performing data preprocessing, wherein the data preprocessing comprises the steps of applying suburban wolf optimization algorithm to perform feature selection so as to obtain optimal subclass features; and transmitting the selected optimal subclass characteristics to a depth typical correlation sparse self-encoder model for classification training, and taking the depth typical correlation sparse self-encoder model as an established epileptic electroencephalogram signal automatic detection and classification model after training is finished.
By the above method, a deep-learning-based method for automatically detecting and classifying epileptic EEG signals is provided which does not rely on manual interpretation, is more efficient, performs better than existing classification-model techniques, can identify epilepsy in time at an early stage, and improves classification efficiency and diagnostic accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for the description of the embodiments or the prior art are briefly introduced below. It will be apparent to those skilled in the art that the drawings described below are merely exemplary and that other drawings may be derived from the drawings provided without inventive effort.
Fig. 1 is a flowchart of a method for automatically detecting epileptic brain electrical signals and establishing a classification model according to an embodiment of the present application;
Fig. 2 is an overall block diagram of the deep canonical correlation sparse autoencoder based epileptic seizure detection and classification model of the model establishment method according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an autoencoder used in the model establishment method according to an embodiment of the present application;
Fig. 4 is a flowchart of the krill herd algorithm used in the model establishment method according to an embodiment of the present application;
Fig. 5 is a schematic diagram of the ROC analysis of the deep canonical correlation sparse autoencoder based seizure detection and classification technique under the binary class, for the model establishment method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the ROC analysis of the deep canonical correlation sparse autoencoder based seizure detection and classification technique under multiple classes, for the model establishment method according to an embodiment of the present application;
Fig. 7 is a block diagram of an apparatus for establishing an automatic epileptic EEG signal detection and classification model according to an embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the following description, which illustrates the application by way of specific embodiments; the embodiments described are only some, not all, of the embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of protection of the application.
In addition, the technical features involved in the different embodiments of the present application described below may be combined with each other as long as they do not conflict with one another.
An embodiment of the present application provides a method for establishing an automatic epileptic EEG signal detection and classification model. Referring to fig. 1 and fig. 2, fig. 1 is a flowchart of the method provided by an embodiment of the present application, and fig. 2 depicts the overall block diagram of the deep canonical correlation sparse autoencoder based epileptic seizure detection and classification (DCSAE-ESDC) model. It should be understood that the method may further include additional blocks not shown and/or that blocks shown may be omitted, and the scope of the present application is not limited in this respect.
At step 101, epileptic EEG signals are extracted and data preprocessing is performed, the data preprocessing including feature selection using the coyote optimization algorithm to obtain an optimal feature subset.
Specifically, (I) feature extraction of epileptic EEG signals:
Extracting features from epileptic EEG signals is the core of EEG classification. The main feature extraction methods for epileptic EEG signals include time-domain analysis, frequency-domain analysis, time-frequency analysis and nonlinear analysis. With the development of artificial intelligence, epileptic EEG feature extraction methods from various analytical perspectives have appeared. In deep learning, the neural network that captures the most important features of the data is the autoencoder neural network.
The autoencoder neural network is an unsupervised learning technique that uses a neural network for representation learning. That is, the input features x_1, x_2, …, x_n have particular relationships among them, but these relationships do not need to be extracted manually; instead, they are put into the network to be learned and finally concentrated into a smaller number of more refined features a_1, a_2, …, a_m, where m < n. Here x_1, …, x_n are the input data and a_1, …, a_m are the so-called encoding, i.e. the "bottleneck" data. The general structure of the autoencoder is shown in fig. 3.
The unlabeled data set is handled within this framework as a supervised learning task whose target output is new data x̂, a reconstruction of the original input x. The network can be trained by minimizing the reconstruction error. The "bottleneck" is a key attribute of the network design; without an information bottleneck, the network would simply pass the values through and only learn to memorize the inputs, and such an encoder would be meaningless.
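To make the bottleneck idea concrete, the following is a minimal sketch of an autoencoder trained by minimizing the reconstruction error. It is not taken from the patent: the dimensions (n = 178 inputs compressed to m = 32 bottleneck features) and the use of PyTorch are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Autoencoder: n input features x_1..x_n are compressed into m < n bottleneck
# features a_1..a_m (the encoding) and then decoded back into a reconstruction x_hat.
class AutoEncoder(nn.Module):
    def __init__(self, n_in=178, m_bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, m_bottleneck), nn.Sigmoid())
        self.decoder = nn.Linear(m_bottleneck, n_in)

    def forward(self, x):
        a = self.encoder(x)               # bottleneck code a_1..a_m
        return self.decoder(a), a

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 178)                  # a batch of unlabeled EEG segments (illustrative)
x_hat, a = model(x)
loss = nn.functional.mse_loss(x_hat, x)   # reconstruction error to be minimized
loss.backward()
optimizer.step()
```

Without the bottleneck (m < n) the network could simply copy its input, which is exactly the degenerate case described above.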
And (II) data preprocessing:
In the initial stage, the EEG signals are preprocessed to improve the signal quality; the specific steps are as follows:
1. linear normalization processing and class label processing:
In the data preprocessing flow, class-label processing and linear normalization are usually combined. First, linear normalization is performed to map the original feature data into the range [0, 1] and eliminate dimensional differences between features. The class labels are then processed so that the original labels are mapped to a class representation suitable for model training.
In the initial stage, a linear normalization method is applied to normalize the data set. The lowest and highest values in the information set are determined, and each piece of information is normalized with respect to them, so that the maximum maps to 1, the minimum maps to 0, and all other values fall within [0, 1]. The process of linear normalization is defined by equation (1):

$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}} \qquad (1)$$

where x_min denotes the minimum value in the information set, x_max denotes the maximum value in the information set, x denotes a value in the information set, and x_norm denotes that value after normalization to [0, 1].
Then, class-label processing is performed. In class-label processing, the common cases are the multi-class case and the binary-class case, which represent different kinds of classification problems. In the multi-class problem, the class label has several possible values and each sample belongs to exactly one class; here the classes are represented by 0 to 4. In the binary-class problem, the class label has only two possible values, usually represented by 0 and 1.
Specifically, the instances in the EEG data set are assigned appropriate class labels, e.g. 0, 1, 2, 3 and 4 for the multi-class case, and 0 and 1 for the binary case.
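A minimal sketch of this preprocessing step follows. It is not part of the patent; the array shapes and the particular label values are assumptions used only to illustrate min-max normalization (equation (1)) and class-label assignment.

```python
import numpy as np

def linear_normalize(X):
    """Min-max normalization per feature: maps every value into [0, 1] (equation (1))."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min + 1e-12)   # small epsilon avoids division by zero

# Hypothetical EEG feature matrix: 1000 instances x 178 values each (assumed sizes).
X = np.random.randn(1000, 178)
X_norm = linear_normalize(X)

# Class-label processing: multi-class labels 0..4, or a binary seizure / non-seizure label.
y_multi = np.random.randint(0, 5, size=1000)       # one label in {0,...,4} per instance
y_binary = (y_multi == 0).astype(int)              # illustrative mapping of one class to 1
```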
2. Feature selection based on the coyote optimization algorithm (COA):
Feature selection refers to the process of selecting N features from the existing M features so as to optimize a specific criterion of the system. It selects the most effective features from the original features in order to reduce the dimensionality of the data set, is an important means of improving the performance of a learning algorithm, and is also a key data preprocessing step in pattern recognition.
Specifically, the coyote optimization algorithm (COA) is inspired by the dynamic behavior of coyotes in their environment and by the way coyotes exchange experience. First, the COA randomly initializes the coyote positions (social conditions) using equation (2):

$$soc = lb + r\,(ub - lb) \qquad (2)$$

where lb and ub denote the minimum and maximum limits of the search space, r denotes a random number in [0, 1], and soc denotes the position of a coyote. Here the number of coyotes per pack is limited to 14, which supports the search capability of the COA. The optimal coyote is taken as the one that adapts best to environmental changes, i.e. the coyote with the smallest cost function. In the COA, coyotes are arranged to participate in maintaining the pack and to share their social conditions. The social behavior (cultural tendency) of the pack is determined by equation (3), in which the cultural tendency of the p-th pack at time t is computed from C, the ranked social conditions of the coyotes in that pack. Taking births and deaths into account, the birth of a new coyote (pup) can be determined using equation (4), where i1 and i2 denote two random dimensions, r1 and r2 denote two coyotes selected at random from pack p, a random number is generated in [0, 1] for each dimension, P_a denotes the association (connection) probability and P_s denotes the scatter probability; P_a and P_s can be determined by equations (5) and (6).
In each iteration, each coyote in the p-th pack updates its social condition using equation (7), where σ1 and σ2 express the influence of the alpha coyote and of the pack, respectively, and are defined by equations (8) and (9), in which the alpha coyote of the pack appears explicitly. The cost of the resulting social condition is then evaluated and can be determined using equation (10).
Finally, the best coyote, judged by its social condition, is selected as the optimal solution of the problem. The fitness function (FF) of the coyote optimization algorithm (COA) aims to find a solution that balances two objectives, namely the classification error and the size of the selected feature subset:

$$FF = \alpha\,\gamma_R + \beta\,\frac{|R|}{|C|}$$

where γ_R denotes the classifier error rate, |R| denotes the size of the feature subset selected by the method, |C| denotes the total number of features contained in the data set, α ∈ [0, 1] is the weight given to the classifier error rate, and β = 1 − α expresses the significance of the reduction in the number of features. The classification performance is given the dominant weight before the feature-selection term is evaluated.
If the evaluation function represented classification accuracy alone, solutions with identical accuracy could not be distinguished; the smallest selected feature subset is therefore also taken as an important criterion, which alleviates the dimensionality problem.
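As a rough illustration of how the wrapper fitness described above might be evaluated for one candidate feature subset (one "coyote"), the following sketch can be considered. It is not from the patent: the k-nearest-neighbour classifier, the weight value and the random data are assumptions chosen only to show the structure of the computation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def subset_fitness(X, y, mask, alpha=0.99):
    """FF = alpha * classifier error rate + (1 - alpha) * |R| / |C| (smaller is better)."""
    if mask.sum() == 0:                            # an empty subset is invalid
        return 1.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=5).mean()
    error_rate = 1.0 - acc
    beta = 1.0 - alpha
    return alpha * error_rate + beta * mask.sum() / mask.size

# Illustrative use with random data and one random candidate subset.
X = np.random.randn(200, 30)
y = np.random.randint(0, 2, 200)
mask = np.random.rand(30) > 0.5                    # a coyote's position thresholded to a feature mask
print(subset_fitness(X, y, mask))
```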
At step 102, the selected optimal feature subset is passed to the deep canonical correlation sparse autoencoder model for classification training; after training is finished, the deep canonical correlation sparse autoencoder model serves as the established automatic epileptic EEG signal detection and classification model.
Specifically, at this stage the optimal feature subset obtained by feature selection is used as the input of the deep canonical correlation sparse autoencoder model. Because the model only needs to process the most relevant subset of features rather than all of the original features, this helps to improve the performance of the model. The selected features (the optimal feature subset) are passed into the deep canonical correlation sparse autoencoder model for classification.
The deep canonical correlation sparse autoencoder (DCCSAE) is a deep learning model that combines canonical correlation analysis (CCA) with the sparse autoencoder. The model aims to learn a representation of the data by jointly training multiple deep autoencoders and exploiting the idea of CCA to obtain an efficient, meaningful low-dimensional representation.
From a basic perspective, the autoencoder is an axisymmetric single-hidden-layer neural network. It encodes the input sensor data using the hidden layer, estimates the minimum error and arrives at the hidden-layer representation of the best features. The autoencoder model originates from an unsupervised computational model of human perceptual learning and has a functional drawback: it cannot learn truly useful features simply by copying the input into the hidden-layer memory, although it can reconstruct the input data with high accuracy. The sparse autoencoder inherits the idea of the autoencoder and adds a sparsity penalty term that constrains feature learning so that a concise representation of the input data is obtained. Canonical correlation analysis (CCA) is a technique for finding the correlation between two sets of data, but it cannot capture complex nonlinear links. To address this limitation, a deep-neural-network version of canonical correlation analysis, called deep canonical correlation analysis (DCCA), was developed; it overcomes the inability of classical CCA to identify complex nonlinear relations. In DCCA, two deep neural networks f and g learn nonlinear representations of the two data sets. DCCA is derived by maximizing the canonical correlation between the two network outputs f(X) and g(Y), as follows:

$$\max_{W_f,\,W_g,\,U,\,V}\;\frac{1}{N}\,\operatorname{tr}\!\left(U^{\top} f(X)\, g(Y)^{\top} V\right)$$

$$\text{s.t.}\quad U^{\top}\!\left(\frac{1}{N} f(X) f(X)^{\top} + r_x I\right) U = I,$$

$$\qquad V^{\top}\!\left(\frac{1}{N} g(Y) g(Y)^{\top} + r_y I\right) V = I,$$

$$\qquad u_i^{\top} f(X)\, g(Y)^{\top} v_j = 0 \quad \text{for } i \neq j,$$

where N denotes the total amount of data, X and Y denote the input matrices of the two data sets, I denotes the identity matrix, f and g define the nonlinear representations of the two deep neural networks with parameters W_f and W_g, U = [u_1, …, u_L] and V = [v_1, …, v_L] are the canonical correlation analysis directions that project the deep-network outputs onto the top layer of L units, (r_x, r_y) > 0 are the regularization parameters of the sample covariance estimation, and f(x), g(y) denote the nonlinear representations used at test time. DCCA obtains the best results when searching for a nonlinear mapping between the two types of data. The sparse autoencoder, in turn, has been very successful in seeking a nonlinear compact representation of a single data set. However, DCCA does not achieve efficient nonlinear dimensionality reduction, and the sparse autoencoder does not search for correlations across modalities. Combining the deep-neural-network canonical correlation analysis with the sparse autoencoder therefore yields the best representation of both data types: the deep canonical correlation sparse autoencoder (DCSAE) is built to seek deep-network representations of both data sets that maximize the canonical correlation between the two extracted top-level features while minimizing the reconstruction error of the sparse autoencoder.
In some embodiments, the deep canonical correlation sparse autoencoder is determined as follows.
The deep canonical correlation sparse autoencoder is the epileptic seizure detection and classification model provided by the present application, in which N denotes the total amount of data; X and Y denote the input matrices of the two data sets, and x, y denote individual data samples in those sets; f and g denote the deep neural networks used to extract the nonlinear features of all data sets while encoding all inputs, with parameters W_f and W_g; U = [u_1, …, u_L] and V = [v_1, …, v_L] are the canonical correlation analysis directions that project the deep-network outputs onto the L units of the top layer; f(x) and g(y) denote the nonlinear representations used at test time; x̂ and ŷ denote the reconstructions (updates) of the inputs x and y; and ρ̂_j, as in the sparse autoencoder, denotes the average activation of hidden unit j.
In some embodiments, the krill herd algorithm is applied to adjust the parameters involved in the deep canonical correlation sparse autoencoder model, so as to improve the overall classification efficiency.
Specifically, to tune the parameters involved in the deep canonical correlation sparse autoencoder technique, the krill herd algorithm (KHA) is applied to improve the overall classification efficiency. Antarctic krill is one of the dominant animal species on earth, and its most important characteristic is its ability to form large swarms. When predators such as seals, whales or other species attack a krill swarm, individual krill are separated from the swarm and the density of the swarm is reduced. The re-formation of the krill swarm after predation is influenced by several factors. Important goals of the individuals' herding activity are to increase krill density and to reach food. The krill herd technique uses this multi-objective setting to solve global optimization problems: the density-dependent attraction of krill toward the regions of maximum food concentration and maximum density is exploited as the target, so that once individuals find those regions the swarm converges around a near-optimal solution, driving the population toward the global optimum of the optimization problem. In short, given a parameter space of several dimensions, the motion-related fitness of the krill swarm is determined by the objective function of the deep canonical correlation sparse autoencoder model, and the behavior of each krill is simulated according to its fitness, so that the region occupied by the swarm can change, the parameter space can shrink or expand, and the values of the parameters can be adjusted.
The specific algorithm is as follows: the time-dependent position of an individual krill on a two-dimensional surface is governed by three subsequent important activities:
1. motion induced by other krill individuals;
2. foraging;
3. physical (random) diffusion.
The following Lagrangian model can be generalized to an n-dimensional parameter space:

$$\frac{dX_i}{dt} = N_i + F_i + D_i$$

where N_i refers to the motion induced by other krill individuals, F_i represents the foraging motion, and D_i indicates the physical (random) diffusion of the i-th krill individual.
The motion induced by other krill individuals is determined as:

$$N_i^{new} = N^{max}\,\alpha_i + \omega_n\,N_i^{old}, \qquad \alpha_i = \alpha_i^{local} + \alpha_i^{target}$$

where N^max indicates the maximum induced speed, which according to measured values can be taken as 0.01 (m/s); ω_n represents the inertia weight of the induced motion, in the range [0, 1]; α_i^local represents the local effect provided by the neighbours; α_i^target represents the target effect provided by the best krill individual; and N_i^old represents the previously induced motion. The inertia weight, initially equal to 0.9, can be reduced linearly to 0.1 over the course of the optimization. Fig. 4 shows the flow chart of the krill herd technique.
The effect of the neighbours can be regarded as an attractive or repulsive tendency between individuals during a local search. The target effect α_i^target is provided by the best krill individual and is determined as:

$$\alpha_i^{target} = C^{best}\,\hat{K}_{i,best}\,\hat{X}_{i,best}$$

where K̂_{i,best} and X̂_{i,best} denote the normalized fitness difference and normalized position difference between the i-th krill and the best krill individual, and C^best refers to the control coefficient of the best krill individual, determined as follows:

$$C^{best} = 2\left(rand + \frac{I}{I_{max}}\right)$$

where rand represents a random number between 0 and 1, I represents the actual iteration number, and I_max is defined as the maximum number of iterations.
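For illustration only (not from the patent), the following sketch shows how a greatly simplified krill-herd-style search could be used to tune two DCCSAE hyperparameters, for example a sparsity weight and a learning rate. The simplified motion rule (induced motion toward the best krill plus inertia, with foraging and diffusion omitted), the parameter bounds and the placeholder evaluation function are all assumptions.

```python
import numpy as np

def evaluate(params):
    """Placeholder fitness: classification error of a model trained with `params`.

    In a real setting this would train and validate the DCCSAE; here it is a stand-in."""
    return float(np.sum((params - np.array([0.3, 0.01])) ** 2))

rng = np.random.default_rng(0)
lb, ub = np.array([0.0, 1e-4]), np.array([1.0, 0.1])   # assumed bounds for the two parameters
krill = rng.uniform(lb, ub, size=(10, 2))               # 10 krill, each a candidate parameter pair
motion = np.zeros_like(krill)
n_max = 0.01                                            # maximum induced speed N^max
i_max = 50                                              # maximum number of iterations I_max

for it in range(i_max):
    fitness = np.array([evaluate(k) for k in krill])
    best = krill[np.argmin(fitness)]
    c_best = 2 * (rng.random() + it / i_max)            # C^best = 2(rand + I / I_max)
    omega = 0.9 - (0.9 - 0.1) * it / (i_max - 1)        # inertia weight decreased linearly 0.9 -> 0.1
    motion = n_max * c_best * (best - krill) + omega * motion
    krill = np.clip(krill + motion, lb, ub)

best_params = krill[np.argmin([evaluate(k) for k in krill])]
print("tuned parameters:", best_params)
```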
In some embodiments, the krill herd algorithm (KHA) derives a fitness function in order to obtain enhanced classification accuracy. The fitness defines a positive integer that characterizes the performance of a candidate solution. In this work, minimization of the classification error rate is considered as the fitness function: a poor solution increases the error rate, while the best solution has the smallest error rate.
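The fitness equation itself is not reproduced above; a form consistent with this description (an assumed reconstruction, not a quotation of the patent) is:

$$\mathrm{fitness}(x_i) = \text{classifier error rate} = \frac{\text{number of misclassified instances}}{\text{total number of instances}} \times 100\,\%.$$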
The ROC (receiver operating characteristic) curve is used to demonstrate the performance of the classification model at all classification thresholds. Fig. 5 shows the ROC analysis of the deep canonical correlation sparse autoencoder based epileptic seizure detection and classification (DCSAE-ESDC) technique for the binary seizure classification of EEG signals; it shows that the approach of the application effectively recognizes the presence of seizures and reaches a maximum ROC value of 99.5023. Fig. 6 shows the ROC analysis of the DCSAE-ESDC method for the multi-class seizure classification of EEG signals; here too the application effectively recognizes the presence of seizures, achieving a maximum ROC value of 99.5023.
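A minimal sketch of how such ROC/AUC values might be computed for the binary and multi-class settings is given below. It is not part of the patent; the labels and scores are random placeholders, and scikit-learn is an assumed tool choice.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Binary class: true labels 0/1 and predicted seizure probabilities.
y_bin = rng.integers(0, 2, 500)
p_bin = np.clip(0.7 * y_bin + rng.normal(0.15, 0.2, 500), 0.0, 1.0)
print("binary ROC AUC:", roc_auc_score(y_bin, p_bin))

# Multi-class: true labels 0..4 and a probability distribution over the 5 classes per instance.
y_multi = rng.integers(0, 5, 500)
scores = rng.random((500, 5))
scores[np.arange(500), y_multi] += 1.0                 # make the true class score higher on average
scores /= scores.sum(axis=1, keepdims=True)
print("multi-class ROC AUC (one-vs-rest):", roc_auc_score(y_multi, scores, multi_class="ovr"))
```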
The embodiments of the present application also provide an automatic epileptic EEG signal detection and classification method, which comprises the following steps:
acquiring an EEG signal, inputting the EEG signal into an automatic epileptic EEG signal detection and classification model established by the model establishment method described above, and automatically detecting and classifying the epileptic EEG signal by means of that model.
Reference is made to the foregoing method embodiments for specific implementation methods, and details are not repeated here.
In summary, the above embodiments derive an efficient deep canonical correlation sparse autoencoder based seizure detection and classification model for identifying and classifying seizures from EEG signals. The proposed model comprises several stages of operation: data preprocessing, feature selection based on the coyote optimization algorithm, classification based on the deep canonical correlation sparse autoencoder, and parameter tuning based on the krill herd algorithm.
Specifically, the present application provides an intelligent EEG-based epileptic seizure detection and classification model built on the deep canonical correlation sparse autoencoder. A novel feature selection model based on the coyote optimization algorithm is designed to select an optimal feature subset. In addition, a classifier based on the deep canonical correlation sparse autoencoder is derived for detecting and classifying different types of epileptic seizures, and the parameters of the seizure detection and classification model are tuned by the krill herd algorithm.
The deep-learning-based method for automatically detecting and classifying epileptic EEG signals provided by the present application does not rely on manual interpretation, is more efficient, performs better than existing classification-model techniques, can identify epilepsy in time at an early stage, and improves classification efficiency and diagnostic accuracy.
1. As a highly accurate automatic algorithm, the present application reduces manual errors, avoids misdiagnosis, greatly shortens the diagnosis time and improves the diagnostic accuracy. Applying the deep learning method to the routine diagnostic workflow for epilepsy can optimize the monitoring of patients, enable early diagnosis and improve patient treatment.
2. The feature selection method based on the coyote optimization algorithm (COA) eliminates the curse-of-dimensionality problem and enhances the classification result. Compared with other FS models (such as SA-FS, PSO-FS and GA-FS), the deep canonical correlation sparse autoencoder based epileptic seizure detection and classification (DCSAE-ESDC) model achieves effective seizure classification performance in all cases, reduces computational complexity, improves classification accuracy, and performs better than the other FS models.
Fig. 7 is a block diagram of an apparatus for establishing an automatic epileptic EEG signal detection and classification model according to an embodiment of the present application. The apparatus comprises:
a memory 201; and a processor 202 connected to the memory 201, the processor 202 being configured to: extract epileptic EEG signals and perform data preprocessing, wherein the data preprocessing includes feature selection using the coyote optimization algorithm to obtain an optimal feature subset;
and pass the selected optimal feature subset to a deep canonical correlation sparse autoencoder model for classification training, the trained model serving, after training is finished, as the established automatic epileptic EEG signal detection and classification model.
In some embodiments, the processor 202 is further configured to:
adjust the parameters involved in the deep canonical correlation sparse autoencoder model by applying the krill herd algorithm, so as to improve the overall classification efficiency.
In some embodiments, the processor 202 is further configured such that the extraction of epileptic EEG signals includes:
capturing data features through an autoencoder neural network, thereby extracting the epileptic EEG signals.
In some embodiments, the processor 202 is further configured such that the data preprocessing includes:
first, normalizing the epileptic EEG data set using a linear normalization method;
then, performing class-label processing, in which each instance in the epileptic EEG data set is assigned an appropriate class label.
In some embodiments, the processor 202 is further configured such that the deep canonical correlation sparse autoencoder model is determined by combining the canonical correlation analysis of deep neural networks with a sparse autoencoder, where N denotes the total amount of data; X and Y denote the input matrices of the two data sets, and x, y denote individual data samples in those sets; f and g denote the deep neural networks used to extract the nonlinear features of all data sets while encoding all inputs, with parameters W_f and W_g; U = [u_1, …, u_L] and V = [v_1, …, v_L] are the canonical correlation analysis directions that project the deep-network outputs onto the L units of the top layer; f(x) and g(y) denote the nonlinear representations used at test time; x̂ and ŷ denote the reconstructions (updates) of the inputs x and y; and ρ̂_j denotes the average activation of hidden unit j.
In some embodiments, the processor 202 is further configured such that the krill herd algorithm derives a fitness function to characterize the performance of a candidate solution and thereby obtain enhanced classification accuracy; the fitness is defined as a positive integer, and minimization of the classification error rate (the proportion of misclassified instances) is considered as the fitness function.
in some embodiments, the processor 202 is further configured to: and acquiring an electroencephalogram signal, inputting the electroencephalogram signal into an epileptic electroencephalogram signal automatic detection and classification model established by the epileptic electroencephalogram signal automatic detection and classification model establishment method, and automatically detecting and classifying the epileptic electroencephalogram signal by the epileptic electroencephalogram signal automatic detection and classification model.
Reference is made to the foregoing method embodiments for specific implementation methods, and details are not repeated here.
The present application may be a method, apparatus, system, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for performing various aspects of the present application.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static Random Access Memory (SRAM), portable compact disk read-only memory (CD-ROM), digital Versatile Disks (DVD), memory sticks, floppy disks, mechanical coding devices, punch cards or in-groove structures such as punch cards or grooves having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, c++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information for computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Note that all features disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is only one example of a generic series of equivalent or similar features. Where expressions such as "further", "preferably" or "still further" are used, the content following them is described on the basis of the preceding embodiment and, combined with that embodiment, constitutes the complete construction of a further embodiment; several such "further" or "preferable" arrangements following the same embodiment may be combined arbitrarily.
While the application has been described in detail above by way of a general description and specific embodiments, it will be apparent to those skilled in the art that modifications and improvements can be made on the basis of the application. Accordingly, such modifications or improvements made without departing from the spirit of the application fall within the scope of protection claimed by the application.

Claims (9)

1. A method for establishing an automatic epileptic electroencephalogram (EEG) signal detection and classification model, characterized by comprising the following steps:
extracting epileptic EEG signals and performing data preprocessing, wherein the data preprocessing comprises feature selection using the coyote optimization algorithm to obtain an optimal feature subset;
and passing the selected optimal feature subset to a deep canonical correlation sparse autoencoder model for classification training, the deep canonical correlation sparse autoencoder model serving, after training is finished, as the established automatic epileptic EEG signal detection and classification model.
2. The method for establishing an automatic epileptic EEG signal detection and classification model according to claim 1, further comprising:
adjusting the parameters involved in the deep canonical correlation sparse autoencoder model by applying the krill herd algorithm, so as to improve the overall classification efficiency.
3. The method for establishing an automatic epileptic EEG signal detection and classification model according to claim 1, wherein the extracting of epileptic EEG signals comprises:
capturing data features through an autoencoder neural network, so as to extract the epileptic EEG signals.
4. The method for establishing an automatic epileptic EEG signal detection and classification model according to claim 1, wherein the performing of data preprocessing further comprises:
first, normalizing the epileptic EEG data set using a linear normalization method;
then, performing class-label processing, in which each instance in the epileptic EEG data set is assigned an appropriate class label.
5. The method for establishing an automatic epileptic EEG signal detection and classification model according to claim 1, wherein
the deep canonical correlation sparse autoencoder model is determined by combining the canonical correlation analysis of deep neural networks with a sparse autoencoder, where N denotes the total amount of data; X and Y denote the input matrices of the two data sets, and x, y denote individual data samples in those sets; f and g denote the deep neural networks used to extract the nonlinear features of all data sets while encoding all inputs, with parameters W_f and W_g; U = [u_1, …, u_L] and V = [v_1, …, v_L] are the canonical correlation analysis directions that project the deep-network outputs onto the L units of the top layer; f(x) and g(y) denote the nonlinear representations used at test time; x̂ and ŷ denote the reconstructions (updates) of the inputs x and y; and ρ̂_j denotes the average activation of hidden unit j.
6. The method for establishing an automatic detection and classification model for epileptic electroencephalogram signals according to claim 2, wherein
the krill herd algorithm characterizes the performance of the candidate solutions by means of a derived fitness function so as to obtain enhanced classification accuracy; a positive integer is defined, and minimizing the classification error rate is taken as the fitness function:
$$\text{fitness} = \min\left(\text{classification error rate}\right),\qquad \text{classification error rate} = \frac{\text{number of misclassified instances}}{\text{total number of instances}} \times 100\%.$$
7. A method for automatically detecting and classifying epileptic electroencephalogram signals, characterized by comprising:
acquiring an electroencephalogram signal, and inputting the electroencephalogram signal into an epileptic electroencephalogram signal automatic detection and classification model established by the establishment method according to any one of claims 1 to 6, the model then automatically detecting and classifying the epileptic electroencephalogram signal.
8. An epileptic electroencephalogram signal automatic detection and classification model establishment device, characterized by comprising:
a memory; and
a processor connected to the memory, the processor being configured to perform the steps of the method of any one of claims 1 to 7.
9. A computer storage medium having stored thereon a computer program which, when executed by a machine, performs the steps of the method according to any one of claims 1 to 7.
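For illustration only, the following is a minimal Python sketch of the processing pipeline recited in claims 1 to 6, under the assumption of synthetic data and greatly simplified stand-ins for the coyote optimization algorithm, the deep canonical correlation sparse autoencoder and the krill herd parameter search; all names (normalize, select_features, kl_sparsity, fitness) are hypothetical, and a 1-nearest-neighbour classifier is used in place of the trained autoencoder model, so the sketch does not reproduce the claimed method itself.

```python
import numpy as np

rng = np.random.default_rng(0)


def normalize(X):
    # Linear (min-max) normalization of every feature to [0, 1] (cf. claim 4).
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.maximum(hi - lo, 1e-12)


def error_rate(y_true, y_pred):
    # Classification error rate, the quantity to be minimized (cf. claim 6).
    return float(np.mean(y_true != y_pred))


def kl_sparsity(rho, rho_hat):
    # KL-divergence sparsity penalty over the average hidden-unit activations
    # rho_hat of a sparse autoencoder (cf. claim 5).
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))


def fitness(X, y, mask):
    # Fitness of a binary feature mask: error rate of a 1-nearest-neighbour
    # classifier on a held-out half (stand-in for the trained DCCSAE classifier).
    if mask.sum() == 0:
        return 1.0
    Xs = X[:, mask.astype(bool)]
    n = len(y)
    tr, te = np.arange(n) < n // 2, np.arange(n) >= n // 2
    d = np.linalg.norm(Xs[te][:, None, :] - Xs[tr][None, :, :], axis=-1)
    return error_rate(y[te], y[tr][d.argmin(axis=1)])


def select_features(X, y, n_iter=30, pop=10):
    # Population-based binary feature selection, loosely in the spirit of the
    # coyote optimization algorithm of claim 1: candidates drift toward the
    # best solution found so far and undergo random bit-flip mutation.
    n_feat = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n_feat))
    for _ in range(n_iter):
        fit = np.array([fitness(X, y, m) for m in P])
        best = P[fit.argmin()].copy()
        for i in range(pop):
            copy_bits = rng.random(n_feat) < 0.3
            P[i, copy_bits] = best[copy_bits]
            mutate = rng.random(n_feat) < 0.05
            P[i, mutate] ^= 1
    fit = np.array([fitness(X, y, m) for m in P])
    return P[fit.argmin()].astype(bool)


# Synthetic stand-in for an epileptic EEG feature matrix: 200 epochs x 16
# features, where only features 3 and 7 actually carry class information.
X = rng.normal(size=(200, 16))
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)
X = normalize(X)

mask = select_features(X, y)
print("selected features:", np.flatnonzero(mask))
print("error rate with selected subset:", fitness(X, y, mask))
print("example KL sparsity penalty:", kl_sparsity(0.05, rng.uniform(0.01, 0.2, size=8)))
```

A krill-herd-style parameter search in the sense of claim 2 would wrap the same error-rate fitness around the model's hyperparameters instead of the feature mask.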
CN202311001225.4A 2023-08-10 2023-08-10 Epileptic electroencephalogram signal automatic detection and classification model establishment method and application Pending CN116712090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311001225.4A CN116712090A (en) 2023-08-10 2023-08-10 Epileptic electroencephalogram signal automatic detection and classification model establishment method and application

Publications (1)

Publication Number Publication Date
CN116712090A true CN116712090A (en) 2023-09-08

Family

ID=87866459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311001225.4A Pending CN116712090A (en) 2023-08-10 2023-08-10 Epileptic electroencephalogram signal automatic detection and classification model establishment method and application

Country Status (1)

Country Link
CN (1) CN116712090A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130060167A1 (en) * 2011-09-02 2013-03-07 Jeffrey Albert Dracup Method for prediction, detection, monitoring, analysis and alerting of seizures and other potentially injurious or life-threatening states
CN209186698U (en) * 2018-10-11 2019-08-02 河北大学 A kind of epilepsy early warning device based on FPGA
CN113598792A (en) * 2021-08-04 2021-11-05 杭州电子科技大学 Epilepsy electroencephalogram classification method based on supervised feature fusion algorithm

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QIN LIN et al.: "Classification of Epileptic EEG Signals with Stacked Sparse Autoencoder Based on Deep Learning", 12th International Conference on Intelligent Computing (ICIC), pages 802-810 *
JIA PENGFEI et al.: "Theory and Application of Machine Olfaction Technology", Shaanxi Xinhua Publishing & Media Group, Shaanxi Science and Technology Press, pages 112-113 *
HAN DEPENG: "Application of the Deep Canonical Correlation Sparse Autoencoder to the Classification of Schizophrenia Imaging-Genetic Data", Medicine and Health Sciences series, no. 06, pages 35-50 *

Similar Documents

Publication Publication Date Title
CN109389059B (en) P300 detection method based on CNN-LSTM network
CN113693613B (en) Electroencephalogram signal classification method, electroencephalogram signal classification device, computer equipment and storage medium
CN108319928B (en) Deep learning method and system based on multi-target particle swarm optimization algorithm
CN112766355B (en) Electroencephalogram signal emotion recognition method under label noise
Kumar et al. CNN-SSPSO: a hybrid and optimized CNN approach for peripheral blood cell image recognition and classification
CN112084891B (en) Cross-domain human body action recognition method based on multi-modal characteristics and countermeasure learning
CN110135244B (en) Expression recognition method based on brain-computer collaborative intelligence
CN110942825A (en) Electrocardiogram diagnosis method based on combination of convolutional neural network and cyclic neural network
Rapoport et al. Efficient universal computing architectures for decoding neural activity
Radmanesh et al. Online spike sorting via deep contractive autoencoder
Arulkumar et al. A novel usage of artificial intelligence and internet of things in remote‐based healthcare applications
CN111105877A (en) Chronic disease accurate intervention method and system based on deep belief network
CN116386899A (en) Graph learning-based medicine disease association relation prediction method and related equipment
Bollens et al. Learning subject-invariant representations from speech-evoked EEG using variational autoencoders
CN114983343A (en) Sleep staging method and system, computer-readable storage medium and electronic device
Deng et al. A GAN model encoded by CapsEEGNet for visual EEG encoding and image reproduction
Beniaguev et al. Multiple synaptic contacts combined with dendritic filtering enhance spatio-temporal pattern recognition capabilities of single neurons
Serbaya [Retracted] Analyzing the Role of Emotional Intelligence on the Performance of Small and Medium Enterprises (SMEs) Using AI‐Based Convolutional Neural Networks (CNNs)
CN116712090A (en) Epileptic electroencephalogram signal automatic detection and classification model establishment method and application
Agarwal et al. Classification of EEG signal using LSTMs under Audiovisual Stimuli
CN114626408A (en) Electroencephalogram signal classification method and device, electronic equipment, medium and product
CN114936583A (en) Teacher-student model-based two-step field self-adaptive cross-user electromyogram pattern recognition method
CN114548239A (en) Image identification and classification method based on artificial neural network of mammal-like retina structure
Ul Hassan et al. Efficient neural spike sorting using data subdivision and unification
Müller et al. A Deep and Recurrent Architecture for Primate Vocalization Classification.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination