CN116725553A - Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics - Google Patents

Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics

Info

Publication number
CN116725553A
CN116725553A (application CN202310587185.XA)
Authority
CN
China
Prior art keywords
frequency domain
cap
phase
time
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310587185.XA
Other languages
Chinese (zh)
Inventor
陈丹
明哲锴
高腾飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN202310587185.XA
Publication of CN116725553A
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/369Electroencephalography [EEG]
    • A61B5/372Analysis of electroencephalograms
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4806Sleep evaluation
    • A61B5/4809Sleep detection, i.e. determining whether a subject is asleep or not
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • G06N3/0442Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Evolutionary Computation (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Physiology (AREA)
  • Psychology (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Fuzzy Systems (AREA)
  • Anesthesiology (AREA)
  • Power Engineering (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application discloses a sleep cyclic alternating pattern (CAP) detection method and device based on time-frequency domain correlation features. Single-channel electroencephalogram signals serve as the data basis for CAP detection; the time-domain morphological features and the frequency-domain distribution features of the signals are extracted in parallel, and the macroscopic transition pattern of the feature states is captured to identify CAP. The method comprises three stages: (1) time-domain morphological feature learning: sub-bands of the complete electroencephalogram signal are extracted, the shallow signal fluctuation relations of the whole signal are embedded with a bidirectional gated recurrent unit, and morphological features are extracted segment by segment with a one-dimensional convolutional neural network; (2) frequency-domain feature calculation: the power spectral density of the electroencephalogram segments is computed segment by segment, and its distribution is measured with kurtosis and skewness; (3) feature fusion and macroscopic evolution characterization: after the time- and frequency-domain features are fused, a bidirectional gated recurrent unit is applied again to capture the macroscopic dependence and transition relations among the CAP phases; the CAP phase of each second of the input signal is finally determined from the extracted features.

Description

Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics
Technical Field
The application relates to the technical field of computers, in particular to a sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics.
Background
The cyclic alternating pattern (Cyclic Alternating Pattern, CAP) is a periodic pattern of electroencephalographic changes occurring during non-rapid eye movement sleep. It reflects sleep instability and sleep disturbance and can be recorded by electroencephalography (EEG). Studies show that CAP is associated with primary insomnia, sleep-disordered breathing, epilepsy, depression and other conditions; because CAP can be detected in both adult and pediatric EEG, it is a sensitive tool for investigating sleep disorders across the whole life span.
CAP is characterized by a series of transient electroencephalographic events that recur at intervals and stand out clearly from the background: each CAP cycle consists of an A phase, a segment that stands out from the background, followed by a B phase, the background segment, with neither phase lasting more than 60 seconds; the A phase can be further divided into three subtypes according to its frequency distribution. The composition of the CAP sequence reflects the degree of sleep arousal and cortical activation. Indices such as the total duration of the CAP sequence and its proportion of the non-rapid eye movement period, the proportion of each A-phase subtype, and the average duration of the A/B phases are commonly analysed in clinical practice to characterize a patient's sleep quality and structure and to assist diagnosis; accurate identification of the A and B phases and of each A-phase subtype in the CAP sequence is therefore the key to a CAP recognition model. However, because the A phase contains various characteristic electroencephalographic waves and is therefore morphologically diverse, and the A-phase subtypes, which are divided by frequency distribution, show only subtle morphological differences, existing waveform-based CAP detection methods still face the problems of difficult extraction of typical features and insufficient detection accuracy.
Disclosure of Invention
The application aims to provide a sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics, which are used for solving the technical problem of low detection accuracy in the prior art.
In order to solve the technical problems, the technical scheme of the application is as follows:
the first aspect provides a sleep cycle alternating pattern detection method based on time-frequency domain correlation characteristics, comprising the following steps:
s1: collecting an electroencephalogram signal of a subject in a sleep non-rapid eye movement period, and taking the electroencephalogram signal as an original electroencephalogram signal;
s2: preprocessing the acquired original electroencephalogram signals, and labeling the CAP phase category to which each second of the segments belongs, wherein the CAP phase categories comprise the A phase and the B phase, the A phase comprises three subtypes, and the B phase is the background;
s3: constructing a CAP detection framework, wherein the CAP detection framework comprises a morphological feature learning module for extracting depth signal waveform features, a frequency domain feature calculation module for extracting signal frequency domain features and a time-frequency feature fusion and evolution association module for fusing the depth signal waveform features and the signal frequency domain features;
s4: acquiring training data from the preprocessed and marked data, and training the CAP detection frame by using the training data;
s5: and detecting the electroencephalogram signals to be identified by using the trained CAP detection framework.
In one embodiment, step S1 includes:
acquiring the electroencephalogram signals of the subject during the non-rapid eye movement period of sleep with a scalp electroencephalograph using the 10-20 electrode placement system.
In one embodiment, step S2 includes:
s2.1: downsampling the acquired electroencephalogram signal to 100Hz based on a polyphase filtering algorithm, and then applying 0.3-30Hz band-pass filtering to the downsampled electroencephalogram signal;
s2.2: carrying out Robust scaler normalization processing on each lead channel of the electroencephalogram signal obtained in the step S2.1;
s2.3: segmenting and marking the electroencephalogram signals obtained in the step S2.2, wherein the length of each segment is 30S, and marking each subtype of the A phase or the B phase in the CAP sequence corresponding to each second in the segment every second.
In one implementation, in the CAP detection framework constructed in step S3, the morphological feature learning module consists, in order, of sub-band extraction, a bidirectional gated recurrent unit (GRU) and a convolutional neural network (CNN). The sub-band extraction band-pass filters the original single-channel input sequence into the 0.3-4.5 Hz, 4.5-12 Hz and 12-30 Hz bands, decomposing it into a three-channel time series carrying low-, mid- and high-frequency information; the resulting three-channel time series is then processed by the bidirectional gated recurrent unit to associate the shallow fluctuation characteristics of the signal; finally, the hidden-layer output of the bidirectional gated recurrent unit is split second by second, and the same one-dimensional convolutional neural network is applied to each one-second slice for morphological feature learning, yielding morphological feature vectors that serve as the extracted deep signal waveform features.
In one embodiment, in the CAP detection framework constructed in step S3, the frequency-domain feature calculation module explicitly computes the frequency-domain features of the electroencephalogram signal using the power spectral density, specifically: after the power spectral density of each one-second slice of the original input signal is computed, the portion in the 1-30 Hz interval is retained and its distribution is measured with spectral kurtosis, spectral skewness, variance and mean; meanwhile, the original power spectral density information is preserved and smoothed by a moving average with a 3 Hz window width and step; finally, the power spectral density of each one-second slice, together with its kurtosis, skewness, variance and mean, forms a one-dimensional feature vector that serves as the extracted frequency-domain feature of the signal.
In one implementation, in the CAP detection framework constructed in step S3, the feature fusion and evolution association module concatenates the one-dimensional time-domain feature vector and the frequency-domain feature vector of each one-second slice along the feature dimension to form a fused feature vector, and the fused feature vectors form a high-dimensional time series along the time dimension; for this feature-vector time series, the module applies a bidirectional GRU again to capture the evolution of its macroscopic features, and the hidden state of each output node is finally classified by a multi-layer perceptron network; the output classification result is the CAP sequence category of each one-second slice of the original input signal, i.e. one of the three A-phase subtypes A1, A2 and A3 or the B phase.
In one embodiment, during the training of step S4, the electroencephalogram signal of the subject is fed into the model as single-channel segments of 30 s each, and label softening is applied: for sample labels in the regions immediately before and after a phase transition, the confidence of the labelled category in the one-hot code is set to 0.7 and the confidence of the category of the adjacent segment is set to 0.3; the classification loss is computed with weighted cross entropy and back-propagated.
Based on the same inventive concept, a second aspect of the present application provides a sleep cycle alternating pattern detection apparatus based on time-frequency domain correlation characteristics, comprising:
the data acquisition module is used for acquiring brain electrical signals of the sleep non-rapid eye movement period of the subject and taking the brain electrical signals as original brain electrical signals;
the preprocessing module is used for preprocessing the acquired original electroencephalogram signals and labeling CAP phase categories subordinate to each second in the fragments, wherein the CAP phase categories comprise A phase and B phase, the A phase comprises three subtypes, and the B phase is a background;
the detection frame construction module is used for constructing a CAP detection frame, and the CAP detection frame comprises a morphological feature learning module used for extracting depth signal waveform features, a frequency domain feature calculation module used for extracting signal frequency domain features and a time-frequency feature fusion and evolution association module used for fusing the depth signal waveform features and the signal frequency domain features;
the training module is used for acquiring training data from the preprocessed and marked data and training the CAP detection frame by utilizing the training data;
and the detection module is used for detecting the electroencephalogram signals to be identified by using the trained CAP detection frame.
Based on the same inventive concept, a third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed, implements the method of the first aspect.
Based on the same inventive concept, a fourth aspect of the present application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, said processor implementing the method according to the first aspect when executing said program.
Compared with the prior art, the application has the following advantages and beneficial technical effects:
according to the sleep cycle alternating mode detection method based on the time-frequency domain correlation features, a CAP detection frame is constructed, depth signal waveform features can be extracted through a morphological feature learning module, signal frequency domain features can be extracted through a spatial feature learning module, and the two features can be fused through a feature fusion and evolution correlation module. Training the CAP detection frame by using training data; CAP phase labeling of signal data to be identified can be achieved by using a trained CAP detection framework. Compared with the existing deep learning detection model in the sleep cycle alternating mode, the method strengthens the characterization of the frequency domain features, and enables the model to have outstanding performance when distinguishing A, B phases in the CAP sequence, particularly each subtype of the A phase with weak time domain feature difference and different frequency distribution characteristics through the fusion of the time domain features and the characterization of the macroscopic evolution of the features, so that the detection accuracy can be improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of the structure of a CAP detection framework constructed in the practice of the present application.
Detailed Description
CAP is a periodic electroencephalographic change occurring during non-rapid eye movement sleep, generally composed of alternating occurrences of transient electroencephalographic bursts protruding from the background (A phases) and background electroencephalogram (B phases); accurate identification of the A phase and accurate subdivision of the A-phase subtypes (A1, A2 and A3) in CAP are important for the clinical analysis of sleep disorders. Much work has attempted to detect CAP by means of feature engineering or deep learning; most of these methods extract time-domain or frequency-domain features of the EEG signal manually or automatically and detect the background-prominent A phase and its subtypes in the CAP sequence. However, compared with other electroencephalographic microstructures, the CAP sequence has no clear morphological definition: the A phase covers a variety of electroencephalographic discharge activities such as K-complexes, delta bursts and polyphasic bursts, and the subdivision into the three A-phase subtypes is made empirically according to the proportion of high- and low-frequency content. Algorithms that rely on a subset of statistical signal features, or on deep features of a single dimension, may therefore be limited by their representational capability and face performance bottlenecks when distinguishing the A phase from the B phase in a CAP sequence, and especially when subdividing the three A-phase subtypes.
On this basis, the application extracts the time- and frequency-domain features of the electroencephalogram signal separately while capturing the transition pattern of its macroscopic state, so that a long input electroencephalogram segment is labelled second by second as A phase or B phase, with the A phase further subdivided into its subtypes.
The main inventive concept of the present application is as follows:
a single-channel electroencephalogram signal is selected as a data basis for CAP detection, and a detection framework for fusing the time domain morphological characteristics and the frequency domain characteristics of the electroencephalogram signal is designed: firstly, embedding shallow time sequence dependency relations through bidirectional GRUs, and independently extracting time domain morphological characteristics of each segment by using a multilayer one-dimensional convolution network; then calculating the power spectrum density of each segment as the frequency domain characteristic; finally, the two are spliced and the bidirectional GRU is applied again to capture macroscopic state transition between the two. And classifying the depth features corresponding to the every second segment extracted by the network through a multi-layer perceptron, and finally classifying three subtypes of A phase and B phase in a cyclic alternating mode in sleep electroencephalogram.
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Example 1
The application provides a sleep cycle alternating mode detection method based on time-frequency domain correlation characteristics, which comprises the following steps:
s1: collecting an electroencephalogram signal of a subject in a sleep non-rapid eye movement period, and taking the electroencephalogram signal as an original electroencephalogram signal;
s2: preprocessing the acquired original electroencephalogram signals, and labeling the CAP phase category to which each second of the segments belongs, wherein the CAP phase categories comprise the A phase and the B phase, the A phase comprises three subtypes, and the B phase is the background;
s3: constructing a CAP detection framework, wherein the CAP detection framework comprises a morphological feature learning module for extracting depth signal waveform features, a frequency domain feature calculation module for extracting signal frequency domain features and a time-frequency feature fusion and evolution association module for fusing the depth signal waveform features and the signal frequency domain features;
s4: acquiring training data from the preprocessed and marked data, and training the CAP detection frame by using the training data;
s5: and detecting the electroencephalogram signals to be identified by using the trained CAP detection framework.
Specifically, the sleep cyclic alternating pattern consists of an A phase, which contains several different abnormal electroencephalographic microstructures, and a B phase, which resembles the background signal; the A phase can be further divided into three subtypes according to its frequency distribution. Identification of the A phase and its subtypes in a CAP sequence therefore faces diverse features and blurred feature differences, so in addition to distinguishing the A and B phases from time-domain morphological features, frequency-domain features are introduced to subdivide the A-phase subtypes. On this basis, a detection framework based on time-frequency domain correlation features is designed, which labels, second by second, the CAP-sequence phase to which the input electroencephalogram signal belongs.
Please refer to fig. 1, which is a schematic diagram of a CAP detection frame constructed in the implementation of the present application. The framework mainly comprises the following three stages or modules:
(1) Time-domain morphological feature extraction: sub-bands of the complete electroencephalogram signal are extracted, the shallow signal fluctuation relations of the whole signal are embedded with a bidirectional gated recurrent unit, and morphological features are extracted segment by segment with a one-dimensional convolutional neural network; (2) frequency-domain feature calculation: the power spectral density of the electroencephalogram segments is computed segment by segment, and its distribution is measured with indices such as kurtosis and skewness; (3) feature fusion and macroscopic evolution association: after the time- and frequency-domain features are fused, a bidirectional gated recurrent unit is applied again to capture the macroscopic dependence and transition relations among the CAP phases; the CAP phase of each second of the input signal is finally determined from the extracted features.
In one embodiment, step S1 includes:
acquiring the electroencephalogram signals of the subject during the non-rapid eye movement period of sleep with a scalp electroencephalograph using the 10-20 electrode placement system.
In one embodiment, step S2 includes:
s2.1: downsampling the acquired electroencephalogram signal to 100Hz based on a polyphase filtering algorithm, and then applying 0.3-30Hz band-pass filtering to the downsampled electroencephalogram signal;
s2.2: carrying out Robust scaler normalization processing on each lead channel of the electroencephalogram signal obtained in the step S2.1;
s2.3: segmenting and marking the electroencephalogram signals obtained in the step S2.2, wherein the length of each segment is 30S, and marking each subtype of the A phase or the B phase in the CAP sequence corresponding to each second in the segment every second.
Specifically, the RobustScaler algorithm scales the signal according to the range between the first quartile and the third quartile in the original signal, which is more robust to outliers than other normalization methods.
In the specific implementation process, firstly, sleep stage is carried out on the acquired brain electrical signals, and only the part of the non-rapid eye movement stage of sleep is reserved; the portion is then downsampled and bandpass filtered and the signal for each lead channel is normalized using the RobustScaler algorithm.
Then, the signal is cut into segments of 30 s, yielding samples of length 3000 (30 s × 100 Hz), and the CAP phase (A1, A2, A3 or B) of each second of the segment is labelled with one-hot encoding. Since the background portion of the electroencephalogram signal is far longer than the A phases of the CAP sequence, only half of the pure background segments between the first and the last A phase of each channel, chosen at random, are retained in order to moderately reduce the imbalance between the two phases.
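For illustration only, the following is a minimal Python sketch of the preprocessing described above (polyphase downsampling to 100 Hz, 0.3-30 Hz band-pass filtering, RobustScaler normalization and cutting into 30 s segments). The filter order and helper names are assumptions and are not taken from the application.

```python
import numpy as np
from scipy.signal import resample_poly, butter, filtfilt
from sklearn.preprocessing import RobustScaler

def preprocess_channel(sig, fs_orig, fs_target=100, band=(0.3, 30.0), seg_sec=30):
    """Downsample, band-pass filter, normalize and cut one EEG channel into 30 s segments."""
    # Polyphase resampling to 100 Hz (fs_orig is assumed to be an integer sampling rate)
    sig = resample_poly(sig, up=fs_target, down=fs_orig)
    # 0.3-30 Hz band-pass (4th-order Butterworth, zero-phase; the order is an assumption)
    b, a = butter(4, list(band), btype="bandpass", fs=fs_target)
    sig = filtfilt(b, a, sig)
    # RobustScaler: scales by the interquartile range, hence robust to outliers
    sig = RobustScaler().fit_transform(sig.reshape(-1, 1)).ravel()
    # Cut into non-overlapping segments of length 3000 (30 s x 100 Hz)
    seg_len = seg_sec * fs_target
    n_seg = len(sig) // seg_len
    return sig[: n_seg * seg_len].reshape(n_seg, seg_len)
```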
In one implementation, in the CAP detection framework constructed in step S3, the morphological feature learning module consists, in order, of sub-band extraction, a bidirectional gated recurrent unit (GRU) and a convolutional neural network (CNN). The sub-band extraction band-pass filters the original single-channel input sequence into the 0.3-4.5 Hz, 4.5-12 Hz and 12-30 Hz bands, decomposing it into a three-channel time series carrying low-, mid- and high-frequency information; the resulting three-channel time series is then processed by the bidirectional gated recurrent unit to associate the shallow fluctuation characteristics of the signal; finally, the hidden-layer output of the bidirectional gated recurrent unit is split second by second, and the same one-dimensional convolutional neural network is applied to each one-second slice for morphological feature learning, yielding morphological feature vectors that serve as the extracted deep signal waveform features.
Specifically, after band-pass filtering of the original electroencephalogram signal, a multichannel signal with dimension [3, 3000] is obtained, and this three-channel signal is then processed by a bidirectional GRU. Because the input segment is long and the sampling rate relatively high, a unidirectional GRU can only extract temporally dependent features in one direction along the time axis, so the earlier part of the input sequence receives limited global evolution information and some early features may be overwritten during propagation; a bidirectional GRU considers the temporal information in both directions simultaneously, ensuring that every node of the signal receives sufficient global time-domain dependency information and improving the representational capability of the model. Finally, the hidden-layer output of the bidirectional GRU network is split second by second, and the same one-dimensional convolutional neural network is applied to each one-second slice to extract features. The aim is to extract, from the long electroencephalogram sequence in which the shallow temporal dependencies have already been embedded, a deep feature for each one-second sub-slice; the feature of each one-second slice is finally output as a one-dimensional feature vector. The detailed network structure parameters of the morphological feature learning module are shown in Table 1.
Table 1 network parameters of time domain feature extraction section
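The following PyTorch sketch illustrates the structure of the morphological feature learning module described above (a bidirectional GRU over the three sub-band channels, followed by a shared one-dimensional CNN applied to each one-second slice). The layer widths, kernel sizes and output feature dimension are illustrative placeholders, not the values of Table 1.

```python
import torch
import torch.nn as nn

class MorphologicalFeatureLearner(nn.Module):
    """Input: [batch, 3, 3000] tensor holding the low/mid/high sub-bands of a 30 s segment."""

    def __init__(self, hidden=32, feat_dim=64):
        super().__init__()
        # Bidirectional GRU over the raw three-channel sequence (one step per sample)
        self.bigru = nn.GRU(input_size=3, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        # The same 1-D CNN is applied independently to every 1 s (100-step) slice
        self.cnn = nn.Sequential(
            nn.Conv1d(2 * hidden, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, x):                                    # x: [B, 3, 3000]
        h, _ = self.bigru(x.transpose(1, 2))                 # [B, 3000, 2*hidden]
        B, T, C = h.shape
        h = h.reshape(B * 30, T // 30, C).transpose(1, 2)    # 30 one-second slices per sample
        f = self.cnn(h).squeeze(-1)                          # [B*30, 64]
        return self.proj(f).reshape(B, 30, -1)               # one feature vector per second
```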
In one embodiment, in the CAP detection framework constructed in step S3, the frequency-domain feature calculation module explicitly computes the frequency-domain features of the electroencephalogram signal using the power spectral density (PSD), specifically: after the power spectral density of each one-second slice of the original input signal is computed, the portion in the 1-30 Hz interval is retained and its distribution is measured with spectral kurtosis, spectral skewness, variance and mean; meanwhile, the module preserves the original power spectral density information and smooths it by a moving average with a 3 Hz window width and step; finally, the power spectral density of each one-second slice, together with its kurtosis, skewness, variance and mean, forms a one-dimensional feature vector that serves as the extracted frequency-domain feature of the signal.
Specifically, the A phase and the B phase of a CAP sequence differ considerably in time-domain waveform and frequency-domain energy, but the differences between the three A-phase subtypes in the time-domain waveform are much weaker, and in clinical practice the subtypes are classified mainly according to frequency-domain characteristics. To strengthen the characterization of frequency-domain features, the frequency-domain feature calculation module in this embodiment therefore explicitly computes the frequency-domain features of the electroencephalogram signal using the power spectral density (PSD), estimated with the periodogram method. The periodogram estimate is given by formula (1), P̂(k) = |X(k)|² / N, where P̂(k) is the estimated power spectral density, X(k) is the discrete Fourier transform of the signal sequence of length N, and k is the discrete frequency index.
After the power spectral density of each one-second slice of the original input electroencephalogram signal has been computed, the module keeps only the portion in the 1-30 Hz interval. Since the three A-phase subtypes are distinguished by different proportions of high-voltage low-frequency slow waves (reflecting electroencephalographic synchronization) and low-amplitude fast rhythms (reflecting desynchronization) within the A phase, the framework further measures the distribution of the power spectral density explicitly with kurtosis, skewness, variance and mean. The spectral kurtosis describes the peakedness of the signal's spectral distribution and reflects how steep the single-peaked power spectral density is; its value β is computed as in formula (2). The spectral skewness describes the asymmetry of the spectral distribution; its value γ is computed as in formula (3). In formulas (2) and (3), f_i denotes the i-th frequency point of the signal, f̄ the mean of all frequency points, P(f_i) the power spectral density at the i-th frequency point, and N the total number of frequency points.
Meanwhile, the framework retains the original power spectral density information and smooths it with a moving average using 3 Hz as both window width and step, which reduces the feature dimension of the power spectral density and improves the robustness of the model to fine-grained frequency-domain variations. The power spectral density of each one-second slice, together with its kurtosis, skewness, variance and mean, forms a one-dimensional feature vector of length 14.
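As an illustration, the following Python sketch computes the per-second frequency-domain feature vector described above with SciPy. The exact definitions of the spectral kurtosis and skewness in formulas (2) and (3) may differ from the scipy.stats moments used here, and the 3 Hz binning is one plausible reading of the moving average; treat these details as assumptions.

```python
import numpy as np
from scipy.signal import periodogram
from scipy.stats import kurtosis, skew

def frequency_features(seg_1s, fs=100, band=(1.0, 30.0), win_hz=3.0):
    """Frequency-domain feature vector for one 1 s EEG slice (length fs samples)."""
    f, psd = periodogram(seg_1s, fs=fs)                 # periodogram estimate of the PSD
    mask = (f >= band[0]) & (f <= band[1])              # keep only the 1-30 Hz portion
    f, psd = f[mask], psd[mask]
    # Distribution statistics of the PSD (one plausible reading of formulas (2) and (3))
    stats = np.array([kurtosis(psd), skew(psd), psd.var(), psd.mean()])
    # Moving average with 3 Hz window width and step, compressing the PSD
    edges = np.arange(band[0], band[1], win_hz)
    smoothed = np.array([psd[(f >= lo) & (f < lo + win_hz)].mean() for lo in edges])
    return np.concatenate([smoothed, stats])            # roughly 10 + 4 = 14 values
```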
In one implementation, in the CAP detection framework constructed in step S3, the feature fusion and evolution association module concatenates the one-dimensional time-domain feature vector and the frequency-domain feature vector of each one-second slice along the feature dimension to form a fused feature vector, and the fused feature vectors form a high-dimensional time series along the time dimension; for this feature-vector time series, the module applies a bidirectional GRU again to capture the evolution of its macroscopic features, and the hidden state of each output node is finally classified by a multi-layer perceptron network; the output classification result is the CAP sequence category of each one-second slice of the original input signal, i.e. one of the three A-phase subtypes A1, A2 and A3 or the B phase.
Specifically, compared with the bidirectional GRU used in the morphological feature learning module, this macroscopic evolution characterization module operates on a sequence with a lower sampling rate and a deeper degree of feature embedding; its purpose is to capture the evolution of macroscopic semantic features, and its outputs correspond directly to the classification result of each slice. The detailed network structure parameters of the feature fusion and evolution association module are shown in Table 2.
Table 2 network parameters of time-frequency feature fusion and evolution association
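A PyTorch sketch of the feature fusion and evolution association module described above: the per-second time-domain and frequency-domain vectors are concatenated, a second bidirectional GRU models their macroscopic evolution, and a multi-layer perceptron classifies every second into A1, A2, A3 or B. The hidden size and MLP width are placeholders, not the values of Table 2.

```python
import torch
import torch.nn as nn

class FusionEvolutionClassifier(nn.Module):
    def __init__(self, time_dim=64, freq_dim=14, hidden=64, n_classes=4):
        super().__init__()
        self.bigru = nn.GRU(input_size=time_dim + freq_dim, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, time_feat, freq_feat):
        # time_feat: [B, 30, time_dim], freq_feat: [B, 30, freq_dim]
        fused = torch.cat([time_feat, freq_feat], dim=-1)  # concatenate along the feature dim
        h, _ = self.bigru(fused)                           # [B, 30, 2*hidden]
        return self.mlp(h)                                 # per-second class logits (A1/A2/A3/B)
```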
In one embodiment, during the training of step S4, the electroencephalogram signal of the subject is fed into the model as single-channel segments of 30 s each, and label softening is applied: for sample labels in the regions immediately before and after a phase transition, the confidence of the labelled category in the one-hot code is set to 0.7 and the confidence of the category of the adjacent segment is set to 0.3; the classification loss is computed with weighted cross entropy and back-propagated.
Specifically, softening the labels avoids the interference caused during training by the ambiguous features of the intervals around CAP-sequence phase transitions.
The classification loss is computed with weighted cross entropy and back-propagated, with the weight ratio w_A1 : w_A2 : w_A3 : w_B = 5 : 5 : 5 : 1, which raises the loss weight of the A-phase samples, whose number is small. The classification loss L is given by formula (4), L = -(1/N′) Σ_{i=1}^{N′} Σ_{j=1}^{C} w_j · y_ij · log(p_ij), where N′ is the number of samples, C the number of classes, w_j the weight of the j-th class, y_ij the label probability that the i-th sample belongs to the j-th class, and p_ij the predicted probability that the i-th sample belongs to the j-th class.
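A minimal PyTorch-style sketch of the label softening and weighted cross-entropy loss described above. The definition of the "region before and after a phase transition" (here, the single second on each side of a transition) is an assumption.

```python
import torch
import torch.nn.functional as F

# Class weights w_A1 : w_A2 : w_A3 : w_B = 5 : 5 : 5 : 1 (class order: A1, A2, A3, B)
CLASS_WEIGHTS = torch.tensor([5.0, 5.0, 5.0, 1.0])

def soften_labels(onehot):
    """Soften per-second one-hot labels [T, C]: around a transition the labelled class
    keeps confidence 0.7 and the class of the adjacent second receives 0.3."""
    soft = onehot.clone().float()
    hard = onehot.argmax(dim=-1)
    for t in range(1, len(hard)):
        if hard[t] != hard[t - 1]:                       # phase transition between t-1 and t
            for s, other in ((t - 1, hard[t]), (t, hard[t - 1])):
                soft[s] = 0.0
                soft[s, hard[s]] = 0.7
                soft[s, other] = 0.3
    return soft

def weighted_cross_entropy(logits, soft_labels, weights=CLASS_WEIGHTS):
    """L = -(1/N') * sum_i sum_j w_j * y_ij * log p_ij, with soft labels y_ij allowed."""
    log_p = F.log_softmax(logits, dim=-1)
    loss = -(weights.to(logits.device) * soft_labels * log_p).sum(dim=-1)
    return loss.mean()
```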
In the detection process, inputting a sample to be classified into a model to obtain the confidence coefficient of each second segment belonging to each phase in the CAP sequence, and taking the category corresponding to the maximum confidence coefficient as a detection result.
The method provided by the application is illustrated by the following specific examples.
Step 1: the data acquisition means that the scalp electroencephalograph is used for acquiring the electroencephalogram of a tested person in a sleep state, and sleep stage preservation is carried out on signals of a non-rapid eye movement period of sleep.
Step 2: the data preprocessing refers to preprocessing the acquired data set and marking the phase of the CAP sequence. Resampling the acquired brain electricity data to 100Hz, carrying out band-pass filtering on the resampled brain electricity data according to the range of 0.3-30Hz, and carrying out standardized processing on the signals of each lead channel by using a RobustScaler algorithm. Then, the signal is cut with the length of the segment of 30s, and the CAP phase corresponding to each second in the segment is marked by using the single thermal coding.
Step 3: constructing a CAP sequence detection framework;
(3.1) in order to extract morphological features contained in the signal, the present embodiment designs a morphological feature extraction module composed of a bidirectional GRU and a convolutional neural network. Firstly, extracting sub-bands of 0.3-4.5Hz, 4.5-12Hz and 12-30Hz from an input 30s segment, and then embedding three-channel signals formed by the bands into shallow signal fluctuation characteristics by using a bidirectional GRU; since the final classification of the framework is in seconds, the deep morphology features are then extracted on a second-by-second basis using convolutional neural networks.
(3.2) in order to extract the frequency domain features contained in the signal, the embodiment designs a frequency domain feature calculation module based on signal statistics features, which calculates the power spectral density for each second of the original signal, measures the distribution features of the power spectral density by using the mean, the variance, the spectral kurtosis and the spectral skewness, and uses the distribution features as the frequency domain features of the segments per second.
(3.3) To fuse the morphological and frequency-domain features and capture their macroscopic evolution, this embodiment designs a feature fusion and evolution association module based on a bidirectional GRU and a multi-layer perceptron. The one-dimensional time-domain feature vector and the frequency-domain feature vector of each one-second slice are concatenated along the feature dimension to form its fused feature vector, and a bidirectional GRU is applied again to capture the evolution of the macroscopic features. Finally, a multi-layer perceptron network performs a four-class classification for each one-second slice of the original input signal into the three A-phase subtypes A1, A2 and A3 and the B phase of the CAP sequence.
Step 4: training the CAP sequence detection framework using training data;
training the CAP detection framework by using training data, and in order to avoid model bias caused by unbalance of training samples, using weighted cross entropy as a loss function and properly improving the classification weight of each subtype of the A phase.
Step 5: and detecting the data to be identified by using the trained CAP sequence detection framework.
To verify the effect of this embodiment, data of the same data set that were not used for training were preprocessed and cut in the same way, yielding a series of data to be identified. These data are input into the detection framework to obtain the final recognition results. In addition, in this embodiment the original signal is cut into 30 s segments with 50% overlap, and for each prediction only the result for the central 15 s of the segment is kept, which helps to alleviate the degradation of model performance caused by missing features at the segment edges.
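A small Python sketch of the overlapped inference described above: 30 s windows with 50% overlap are predicted, and only the labels for the central 15 s of each window are kept. The handling of the sequence edges is an assumption.

```python
import numpy as np

def predict_overlapped(model_predict, signal, fs=100, seg_sec=30, keep_sec=15):
    """model_predict(segment) is assumed to return one class label per second of a 30 s segment."""
    seg_len, hop = seg_sec * fs, (seg_sec // 2) * fs
    labels = np.full(len(signal) // fs, -1, dtype=int)        # -1 marks seconds never covered
    for start in range(0, len(signal) - seg_len + 1, hop):
        per_sec = np.asarray(model_predict(signal[start:start + seg_len]))   # 30 labels
        centre = (seg_sec - keep_sec) // 2                     # keep seconds 7..21 of the window
        first_sec = start // fs
        labels[first_sec + centre: first_sec + centre + keep_sec] = per_sec[centre: centre + keep_sec]
    return labels
```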
Compared with the prior art, the application has the beneficial effects that:
(1) Compared with similar methods, the CAP-sequence detection scheme of the application achieves a better overall effect in the recognition task and is expected to reduce the workload of experts in this tedious task;
(2) The application enhances the signal characterization capability by capturing the features of the CAP sequence in both the time domain and the frequency domain, which makes the model perform particularly well when distinguishing the A-phase subtypes in the CAP sequence.
Example two
Based on the same inventive concept, the application discloses a sleep cycle alternating mode detection device based on time-frequency domain correlation characteristics, which comprises:
the data acquisition module is used for acquiring brain electrical signals of the sleep non-rapid eye movement period of the subject and taking the brain electrical signals as original brain electrical signals;
the preprocessing module is used for preprocessing the acquired original electroencephalogram signals and labeling CAP phase categories subordinate to each second in the fragments, wherein the CAP phase categories comprise A phase and B phase, the A phase comprises three subtypes, and the B phase is a background;
the detection frame construction module is used for constructing a CAP detection frame, and the CAP detection frame comprises a morphological feature learning module used for extracting depth signal waveform features, a frequency domain feature calculation module used for extracting signal frequency domain features and a time-frequency feature fusion and evolution association module used for fusing the depth signal waveform features and the signal frequency domain features;
the training module is used for acquiring training data from the preprocessed and marked data and training the CAP detection frame by utilizing the training data;
and the detection module is used for detecting the electroencephalogram signals to be identified by using the trained CAP detection frame.
Since the device described in the second embodiment of the present application is a device for implementing the sleep cycle alternating mode detection method based on the time-frequency domain correlation feature in the first embodiment of the present application, based on the method described in the first embodiment of the present application, a person skilled in the art can know the specific structure and deformation of the device, and therefore, the detailed description thereof is omitted herein. All devices used in the method of the first embodiment of the present application are within the scope of the present application.
Example III
Based on the same inventive concept, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed, implements the method as described in embodiment one.
Since the computer readable storage medium described in the third embodiment of the present application is a computer readable storage medium used for implementing the sleep cycle alternating pattern detection method based on the time-frequency domain correlation feature in the first embodiment of the present application, based on the method described in the first embodiment of the present application, a person skilled in the art can understand the specific structure and the modification of the computer readable storage medium, and therefore, the description thereof is omitted here. All computer readable storage media used in the method according to the first embodiment of the present application are included in the scope of protection.
Example IV
Based on the same inventive concept, the application also provides a computer device, comprising a memory, a processor and a computer program stored on the memory and running on the processor, wherein the processor executes the program to implement the method in the first embodiment.
Because the computer device described in the fourth embodiment of the present application is a computer device used for implementing the sleep cycle alternating mode detection method based on the time-frequency domain correlation feature in the first embodiment of the present application, based on the method described in the first embodiment of the present application, a person skilled in the art can understand the specific structure and deformation of the computer device, and therefore, the description thereof is omitted here. All computer devices used in the method of the first embodiment of the present application are within the scope of the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit or scope of the embodiments of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims and the equivalents thereof, the present application is also intended to include such modifications and variations.

Claims (10)

1. The sleep cycle alternating mode detection method based on the time-frequency domain correlation characteristics is characterized by comprising the following steps of:
s1: collecting an electroencephalogram signal of a subject in a sleep non-rapid eye movement period, and taking the electroencephalogram signal as an original electroencephalogram signal;
s2: preprocessing the acquired original electroencephalogram signals, and labeling CAP phase categories subordinate to each second in the fragments, wherein the CAP phase categories comprise A phase and B phase, the A phase comprises three subtypes, and the B phase is background;
s3: constructing a CAP detection framework, wherein the CAP detection framework comprises a morphological feature learning module for extracting depth signal waveform features, a frequency domain feature calculation module for extracting signal frequency domain features and a time-frequency feature fusion and evolution association module for fusing the depth signal waveform features and the signal frequency domain features;
s4: acquiring training data from the preprocessed and marked data, and training the CAP detection frame by using the training data;
s5: and detecting the electroencephalogram signals to be identified by using the trained CAP detection framework.
2. The sleep cycle alternating pattern detection method based on the time-frequency domain correlation characteristic as claimed in claim 1, wherein step S1 comprises:
and acquiring brain electrical signals of the sleep non-rapid eye movement period of the subject by using a 10-20 scalp electroencephalograph.
3. The sleep cycle alternating pattern detection method based on the time-frequency domain correlation characteristic as claimed in claim 1, characterized in that, step S2 comprises:
s2.1: downsampling the acquired electroencephalogram signal to 100Hz based on a polyphase filtering algorithm, and then applying 0.3-30Hz band-pass filtering to the downsampled electroencephalogram signal;
s2.2: carrying out Robust scaler normalization processing on each lead channel of the electroencephalogram signal obtained in the step S2.1;
s2.3: segmenting and marking the electroencephalogram signals obtained in the step S2.2, wherein the length of each segment is 30S, and marking each subtype of the A phase or the B phase in the CAP sequence corresponding to each second in the segment every second.
4. The sleep cycle alternating pattern detection method based on time-frequency domain correlation characteristics according to claim 1, wherein in the CAP detection framework constructed in the step S3, a morphological feature learning module is sequentially composed of a sub-band extraction, a bidirectional gating cycle unit GRU and a convolutional neural network CNN, wherein the sub-band extraction is used for carrying out band-pass filtering on an original input single-channel sequence according to frequency bands of 0.3-4.5Hz, 4.5-12Hz and 12-30Hz, and an original input single-channel sequence signal is decomposed into a three-channel time sequence with low, medium and high frequency band information; then, the obtained three-channel time sequence is processed by using a two-way gating circulating unit, and the fluctuation characteristics of the shallow layers of the signals are associated; and finally, segmenting the hidden layer output of the bidirectional gating circulation unit one by one, and respectively applying the same one-dimensional convolutional neural network to each second segment to perform morphological feature learning to obtain morphological feature vectors serving as extracted depth signal waveform features.
5. The sleep cycle alternating pattern detection method based on time-frequency domain correlation characteristics according to claim 1, wherein in the CAP detection framework constructed in step S3, the frequency domain characteristic calculation module explicitly calculates the frequency domain characteristics of the electroencephalogram signal by using the power spectral density, and specifically includes: after calculating the power spectral density of each second segment of the initial input electroencephalogram signal, intercepting a part of a 1-30Hz interval on the power spectral density and measuring the distribution characteristics of the power spectral density by using spectral kurtosis, spectral skewness, variance and mean; meanwhile, original power spectrum density information is reserved, and 3Hz is used as window width and step length for carrying out moving average; and finally, the power spectral density of each second signal segment, the kurtosis, the skewness, the variance and the mean value of the power spectral density jointly form a one-dimensional feature vector which is used as the frequency domain feature of the extracted signal.
6. The sleep cycle alternating pattern detection method based on time-frequency domain correlation characteristics as claimed in claim 1, wherein in the CAP detection framework constructed in step S3, the feature fusion and evolution association module concatenates, along the feature dimension, the one-dimensional time domain feature vector and the frequency domain feature vector corresponding to each one-second slice to form a fused feature vector, and the fused feature vectors form a high-dimensional time series along the time dimension; to this time series of feature vectors the module again applies a bidirectional GRU to capture the evolution of their macroscopic features, and the hidden state of each output node is finally classified by a multi-layer perceptron classifier; the output classification result is the CAP type of each one-second slice of the original input signal, covering the three A-phase subtypes A1, A2 and A3 as well as the B phase.
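A minimal PyTorch sketch of the time-frequency feature fusion and evolution association module of claim 6 follows. The feature dimensions and hidden size are placeholders (the frequency feature dimension of 14 matches the sketch under claim 5); only the structure (concatenation along the feature dimension, a bidirectional GRU over the per-second sequence, and a multi-layer perceptron classifier over each hidden state) follows the claim.

import torch
import torch.nn as nn

class FusionEvolutionModule(nn.Module):
    def __init__(self, morph_dim=64, freq_dim=14, hidden=64, n_classes=4):
        super().__init__()
        self.gru = nn.GRU(morph_dim + freq_dim, hidden,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(           # multi-layer perceptron classifier
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),           # A1, A2, A3 or B phase per second
        )

    def forward(self, morph_feat, freq_feat):
        """morph_feat: (batch, 30, morph_dim); freq_feat: (batch, 30, freq_dim)."""
        fused = torch.cat([morph_feat, freq_feat], dim=-1)  # fuse along feature dimension
        h, _ = self.gru(fused)                               # evolution of macroscopic features
        return self.classifier(h)                            # (batch, 30, n_classes) logits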
7. The sleep cycle alternating pattern detection method based on time-frequency domain correlation characteristics as claimed in claim 1, wherein during the training of step S4, the electroencephalogram signal of the subject is fed into the model as single-channel 30 s segments, and label softening is applied: for sample labels in the regions immediately before and after a phase transition, the confidence of the label's own class in the one-hot code is set to 0.7 and the confidence of the class of the adjacent segment is set to 0.3; the classification loss is computed with weighted cross entropy and back-propagated.
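The label softening and weighted cross-entropy loss of claim 7 can be sketched as below. As an assumption, "the region before and after a phase transition" is interpreted as the single second on each side of the transition, and the class weights passed to the loss are placeholders.

import torch
import torch.nn.functional as F

def soften_labels(labels, n_classes=4, own=0.7, adj=0.3):
    """labels: (n_seconds,) integer CAP classes of one segment.
    Seconds straddling a phase transition get 0.7 on their own class and
    0.3 on the class of the adjacent (different) second."""
    target = F.one_hot(labels, n_classes).float()
    change = torch.nonzero(labels[1:] != labels[:-1]).squeeze(-1)
    for i in change:                                  # i and i+1 straddle a transition
        target[i] = own * F.one_hot(labels[i], n_classes) + \
                    adj * F.one_hot(labels[i + 1], n_classes)
        target[i + 1] = own * F.one_hot(labels[i + 1], n_classes) + \
                        adj * F.one_hot(labels[i], n_classes)
    return target

def weighted_soft_cross_entropy(logits, soft_target, class_weight):
    """logits: (N, C); soft_target: (N, C); class_weight: (C,) per-class weights."""
    log_p = F.log_softmax(logits, dim=-1)
    loss = -(soft_target * log_p * class_weight).sum(dim=-1)
    return loss.mean()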
8. A sleep cycle alternating pattern detection device based on time-frequency domain correlation characteristics, characterized by comprising:
a data acquisition module for acquiring electroencephalogram signals of the subject during non-rapid eye movement sleep as the original electroencephalogram signals;
a preprocessing module for preprocessing the acquired original electroencephalogram signals and labeling the CAP phase category of each second within the segments, wherein the CAP phase categories comprise the A phase and the B phase, the A phase comprises three subtypes, and the B phase is the background;
a detection framework construction module for constructing a CAP detection framework, the CAP detection framework comprising a morphological feature learning module for extracting depth signal waveform features, a frequency domain feature calculation module for extracting signal frequency domain features, and a time-frequency feature fusion and evolution association module for fusing the depth signal waveform features and the signal frequency domain features;
a training module for obtaining training data from the preprocessed and labeled data and training the CAP detection framework with the training data;
and a detection module for detecting the electroencephalogram signal to be identified using the trained CAP detection framework.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed, implements the method of any one of claims 1 to 7.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when the program is executed.
CN202310587185.XA 2023-05-19 2023-05-19 Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics Pending CN116725553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310587185.XA CN116725553A (en) 2023-05-19 2023-05-19 Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310587185.XA CN116725553A (en) 2023-05-19 2023-05-19 Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics

Publications (1)

Publication Number Publication Date
CN116725553A true CN116725553A (en) 2023-09-12

Family

ID=87902075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310587185.XA Pending CN116725553A (en) 2023-05-19 2023-05-19 Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics

Country Status (1)

Country Link
CN (1) CN116725553A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117838153A (en) * 2024-01-19 2024-04-09 首都医科大学宣武医院 Clinical discrimination evaluation system and equipment based on dual classifier
CN117838153B (en) * 2024-01-19 2024-07-12 首都医科大学宣武医院 Clinical discrimination evaluation system and equipment based on dual classifier

Similar Documents

Publication Publication Date Title
Tiwari et al. Automated diagnosis of epilepsy using key-point-based local binary pattern of EEG signals
CN110353673B (en) Electroencephalogram channel selection method based on standard mutual information
CN107095669B (en) A kind of processing method and system of epileptic's EEG signals
Kayikcioglu et al. Fast and accurate PLS-based classification of EEG sleep using single channel data
CN106108894A (en) A kind of emotion electroencephalogramrecognition recognition method improving Emotion identification model time robustness
CN111340142A (en) Epilepsia magnetoencephalogram spike automatic detection method and tracing positioning system
CN110598608B (en) Non-contact and contact cooperative psychological and physiological state intelligent monitoring system
CN114093501B (en) Intelligent auxiliary analysis method for child movement epilepsy based on synchronous video and electroencephalogram
Wang et al. A novel multi-scale dilated 3D CNN for epileptic seizure prediction
CN116725553A (en) Sleep cycle alternating mode detection method and device based on time-frequency domain correlation characteristics
CN111920420A (en) Patient behavior multi-modal analysis and prediction system based on statistical learning
CN110141258A (en) A kind of emotional state detection method, equipment and terminal
CN110717461A (en) Fatigue state identification method, device and equipment
CN106580350A (en) Fatigue condition monitoring method and device
CN114492513A (en) Electroencephalogram emotion recognition method for adaptation to immunity domain based on attention mechanism in cross-user scene
CN108836322B (en) Naked eye 3D display vision-induced motion sickness detection method
CN113768519A (en) Method for analyzing consciousness level of patient based on deep learning and resting state electroencephalogram data
CN113749619A (en) Mental fatigue assessment method based on K-TRCA
CN106388780A (en) Sleep state detection method and system based on fusion of two classifiers and detector
Miranda et al. Classification of EEG signals using genetic programming for feature construction
Jiang et al. Analytical comparison of two emotion classification models based on convolutional neural networks
Thilagaraj et al. Identification of drivers drowsiness based on features extracted from EEG signal using SVM classifier
CN115137374A (en) Sleep stage oriented electroencephalogram interpretability analysis method and related equipment
Jibon et al. Epileptic seizure detection from electroencephalogram (EEG) signals using linear graph convolutional network and DenseNet based hybrid framework
CN106333677A (en) Method and system for detecting blink activity in sleep state analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination