CN112438738A - Sleep stage dividing method and device based on single-channel electroencephalogram signal and storage medium - Google Patents
- Publication number
- CN112438738A CN112438738A CN201910828706.XA CN201910828706A CN112438738A CN 112438738 A CN112438738 A CN 112438738A CN 201910828706 A CN201910828706 A CN 201910828706A CN 112438738 A CN112438738 A CN 112438738A
- Authority
- CN
- China
- Prior art keywords
- signal
- electroencephalogram
- sleep
- electroencephalogram signal
- long
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/00 — Measuring for diagnostic purposes; Identification of persons
- A61B5/48 — Other medical applications
- A61B5/4806 — Sleep evaluation
- A61B5/4812 — Detecting sleep stages or cycles
- A61B5/72 — Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7203 — Signal processing for noise prevention, reduction or removal
- A61B5/7235 — Details of waveform analysis
- A61B5/725 — Waveform analysis using specific filters, e.g. Kalman or adaptive filters
- A61B5/7264 — Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267 — Classification involving training the classification device
Abstract
The application relates to the technical field of application equipment identification, and in particular to a method, a device and a storage medium for sleep staging based on a single-channel electroencephalogram signal. The sleep staging method provided by the application mainly comprises the following steps: S1, acquiring a first electroencephalogram signal X(n) of a single channel, wherein the signal comprises a normal electroencephalogram signal and an electro-oculogram signal; S2, acquiring a second electroencephalogram signal X(n) without ocular artifacts based on the first electroencephalogram signal X(n); S3, based on the second electroencephalogram signal X(n), obtaining its sleep features {a1, a2, …, aN} by means of a neural network model; S4, inputting the sleep features {a1, a2, …, aN} into a bidirectional long short-term memory network model (BLSTM) for training, and acquiring the sleep cycle classification of the first electroencephalogram signal X(n).
Description
Technical Field
The application relates to the technical field of application equipment identification, in particular to a sleep staging method and device based on a single-channel electroencephalogram signal and a storage medium.
Background
In 1937, Loomis first proposed the EEG (electroencephalogram) method to replace behavioral observation as the standard for judging sleep depth. In 1953, the American researchers Aserinsky and Kleitman discovered the REM (rapid eye movement) phenomenon. Rechtschaffen and Kales proposed a standard for sleep staging in 1968, which was recommended by the American physiological society as the first international classification standard for sleep staging. The sleep state is generally divided into 5 stages: W (wake), N1 (first non-rapid-eye-movement sleep stage), N2 (second non-rapid-eye-movement sleep stage), N3 (third non-rapid-eye-movement sleep stage) and REM (rapid-eye-movement sleep stage); the latter four belong to the sleep state.
When analyzing the human sleep state from electroencephalogram signals, the acquired signals contain considerable noise, in particular the electro-oculogram signal, which is the main interference and causes certain difficulty. In some existing implementations of ocular-artifact removal for sleep staging, a wavelet transform is applied to the electroencephalogram signal to decompose it into several components; the components without ocular artifacts are regarded as pure electroencephalogram signal, while the components containing ocular artifacts undergo EMD (empirical mode decomposition), after which ICA (independent component analysis) separates the pure electroencephalogram and electro-oculogram parts of the decomposed components; the pure electroencephalogram signals obtained in the two branches are then recombined to restore an electroencephalogram signal free of ocular artifacts. The energies of different brain-wave bands are subsequently extracted and input into an adaptive fuzzy neural inference network, which after training produces the sleep staging result. However, the ICA algorithm requires the sources to be mutually independent, which the wavelet decomposition cannot guarantee; the ICA algorithm cannot automatically determine which of the separated signals is the electro-oculogram signal, so manual judgment is needed; and human sleep is a temporally correlated process, whereas the adaptive fuzzy neural inference network cannot use the electroencephalogram information of preceding and following periods for a comprehensive judgment of the sleep state.
Therefore, improving the stability of the ocular-artifact removal algorithm, reducing manual intervention, strengthening the link between the brain-wave information of preceding and following sleep periods and the sleep stage classification, and improving the accuracy and robustness of the sleep staging system have become problems to be solved.
Disclosure of Invention
The application aims to provide a method, a device and a storage medium for sleep staging based on a single-channel electroencephalogram signal, including a method for removing ocular artifacts under a single channel.
The embodiment of the application is realized as follows:
The first aspect of the embodiments of the application provides a sleep staging method based on a single-channel electroencephalogram signal, which mainly comprises the following steps:
S1, acquiring a first electroencephalogram signal X(n) of a single channel, wherein the signal comprises a normal electroencephalogram signal and an electro-oculogram signal;
S2, acquiring a second electroencephalogram signal X(n) without ocular artifacts based on the first electroencephalogram signal X(n);
S3, based on the second electroencephalogram signal X(n), obtaining the sleep features {a1, a2, …, aN} of the second electroencephalogram signal X(n) by means of a neural network model;
S4, inputting the sleep features {a1, a2, …, aN} into a bidirectional long short-term memory network model (BLSTM) for training, and acquiring the sleep cycle classification of the first electroencephalogram signal X(n).
Optionally, obtaining the second electroencephalogram signal X(n) without ocular artifacts comprises the following steps:
S2.1, acquiring the long-term difference signal of the first electroencephalogram signal X(n), wherein the long-term difference signal comprises the long-term difference of the electroencephalogram signal and the long-term difference of the electro-oculogram signal;
S2.2, extracting the amplitude envelope signal E(n) of the long-term difference signal of the first electroencephalogram signal X(n);
S2.3, performing double-threshold endpoint detection of the electro-oculogram interference interval based on the amplitude envelope signal E(n), and obtaining a final electro-oculogram interval [N1, N2];
S2.4, removing the electro-oculogram signal within the final electro-oculogram interval [N1, N2] from the first electroencephalogram signal X(n).
Optionally, extracting the amplitude envelope signal E(n) of the long-term difference signal comprises the following steps:
S2.2.1, squaring the long-term difference signal to obtain a long-term difference energy signal;
S2.2.2, multiplying the amplitude of the long-term difference energy signal by 2 to obtain a second long-term difference energy signal, so as to compensate the low-frequency energy loss caused by the squaring;
S2.2.3, passing the second long-term difference energy signal through an N-th order low-pass filter h(N) to obtain an energy envelope signal;
S2.2.4, taking the square root of the energy envelope signal to obtain the amplitude envelope signal E(n).
Optionally, obtaining the final electro-oculogram interval [N1, N2] further comprises the following step:
after the search is finished, rejecting as non-ocular any interval shorter than 50 ms.
Optionally, removing the electro-oculogram signal within the final electro-oculogram interval [N1, N2] from the first electroencephalogram signal X(n) comprises the following steps:
S2.4.1, generating a signal X'(n) with the same length as the first electroencephalogram signal X(n), configuring X'(n) to equal X(n) within the electro-oculogram interval of the original signal, and configuring the values of the remaining intervals to 0;
S2.4.2, smoothing the generated signal X'(n) so that the interval [N1, N2] of X'(n) coincides with the interval [N1, N2] of X(n), obtaining X''(n); the smoothing filters out the relatively high-frequency electroencephalogram components and keeps only the low-frequency electro-oculogram components below 1 Hz;
S2.4.3, removing the electro-oculogram signal by computing X(n) − X''(n).
Optionally, obtaining the features {a1, a2, …, aN} of the second electroencephalogram signal X(n) by means of a CNN (convolutional neural network) model comprises the following steps:
S3.1, establishing CNN models with filters of different scales;
S3.2, performing feature extraction on the second electroencephalogram signal X(n) with each CNN model, obtaining the feature vector extracted by the small-scale-filter CNN model and the feature vector extracted by the large-scale-filter CNN model;
S3.3, splicing the feature vector extracted by the small-scale-filter CNN model with the feature vector extracted by the large-scale-filter CNN model to obtain the sleep features {a1, a2, …, aN}.
Optionally, the convolution step of the CNN model is as follows:
convolution with a given window size;
batch standardization;
activation using a linear rectification function (ReLU).
Optionally, obtaining the sleep cycle classification of the first electroencephalogram signal based on the features comprises the following steps:
S4.1, establishing a forward LSTM function with parameter θf:
(h_t^f, c_t^f) = LSTM_θf(a_t, h_{t-1}^f, c_{t-1}^f),
wherein h is the hidden state and c is the cell state in the LSTM network;
S4.2, establishing a backward LSTM function with parameter θb:
(h_t^b, c_t^b) = LSTM_θb(a_t, h_{t+1}^b, c_{t+1}^b),
wherein h is the hidden state and c is the cell state in the LSTM network;
S4.3, splicing the outputs of the forward and backward LSTM functions and outputting the sleep cycle classification:
o_t = (h_t^f ‖ h_t^b) + FC(a_t),
wherein h is the hidden state, c is the cell state in the LSTM network, and FC denotes a fully connected transformation that converts a_t into a vector that can be added to the spliced output.
A second aspect of an embodiment of the present application provides an apparatus for sleep staging based on a single-channel electroencephalogram signal, the apparatus including at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least a portion of the computer instructions to implement the method for sleep staging based on single-channel electroencephalogram provided in the first aspect of the present disclosure.
A third aspect of the embodiments of the present application provides a computer-readable storage medium, which stores computer instructions, and when at least part of the computer instructions are executed by a processor, the method for sleep staging based on single-channel electroencephalogram provided in the first aspect of the present disclosure is implemented.
The beneficial effects of the embodiment of the application include: the method can improve the stability of an algorithm in the step of removing the electro-oculogram, reduce manual intervention, increase the relevance of brain wave information and sleep stage classification of the period before and after sleep, and improve the accuracy and the robustness of a sleep stage system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained from the drawings without inventive effort.
FIG. 1 shows a flow diagram of a sleep staging method based on a single channel electroencephalogram signal according to one embodiment of the present application;
FIG. 2 illustrates a flow diagram of a method of removing ocular artifacts according to an embodiment of the present application;
FIG. 3 shows a flow diagram of a method of extracting an amplitude envelope signal according to an embodiment of the present application;
FIG. 4 shows a flow diagram of a method of electro-ocular signal removal in the electro-ocular region according to an embodiment of the present application;
FIG. 5 shows a flow diagram of a method for obtaining brain electrical signal features using a CNN convolutional neural network model according to an embodiment of the present application;
FIG. 6 illustrates a flow diagram of a method for obtaining sleep cycle classifications using the BLSTM model according to an embodiment of the present application.
FIG. 7 illustrates a sleep data staging block diagram according to one embodiment of the present application.
Detailed Description
Certain exemplary embodiments will now be described to provide an overall understanding of the principles of the structure, function, manufacture, and use of the devices and methods disclosed herein. One or more examples of these embodiments are illustrated in the accompanying drawings. Those of ordinary skill in the art will understand that the devices and methods specifically described herein and illustrated in the accompanying drawings are non-limiting exemplary embodiments and that the scope of the various embodiments of the present application is defined solely by the claims. Features illustrated or described in connection with one exemplary embodiment may be combined with features of other embodiments. Such modifications and variations are intended to be included within the scope of the present application.
Reference throughout this specification to "embodiments," "some embodiments," "one embodiment," or "an embodiment," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, appearances of the phrases "in various embodiments," "in some embodiments," "in at least one other embodiment," or "in an embodiment," or the like, throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Thus, the particular features, structures, or characteristics shown or described in connection with one embodiment may be combined, in whole or in part, with the features, structures, or characteristics of one or more other embodiments, without limitation. Such modifications and variations are intended to be included within the scope of the present application.
Example 1
Traditionally, sleep staging is performed according to the waveform and proportion of brain waves. When the alpha wave is reduced by 50% relative to the eyes-closed waking state and low-amplitude mixed-frequency waves appear, the recording is considered to enter stage N1; vertex sharp waves appear in deeper N1. When spindle waves, K-complexes and a small amount of delta waves appear in the waveform, stage N2 sleep begins. When the proportion of delta waves exceeds 20% and spindle waves have almost disappeared, stage N3 sleep is entered. The waveform of REM sleep is similar to that of N1: low-amplitude mixed-frequency waves accompanied by rapid eye movements.
FIG. 1 shows a flow chart of a sleep staging method based on a single channel electroencephalogram signal according to one embodiment of the present application.
In step S1, a first electroencephalogram signal X(n) of a single channel is acquired, expressed as follows:
X(n)=S(n)+A(n)
wherein X(n) is the single-channel electroencephalogram signal containing electro-ocular interference, S(n) denotes the normal electroencephalogram signal, and A(n) denotes the electro-oculogram signal.
In step S2, based on the first electroencephalogram signal X(n), the ocular interference is removed and a pure second electroencephalogram signal X(n) without the electro-oculogram component is obtained.
As shown in FIG. 2, obtaining the second electroencephalogram signal X(n), i.e. a pure electroencephalogram signal without the electro-oculogram component, comprises the following steps:
In step S2.1, the long-term difference signal of the first electroencephalogram signal X(n) is obtained according to the following formula:
Dk(n)=X(n)-X(n-k)=[S(n)-S(n-k)]+[A(n)-A(n-k)]
where k is the time delay, typically chosen as half a blink period, i.e. about 160 ms; S(n)-S(n-k) represents the long-term difference of the electroencephalogram signal, and A(n)-A(n-k) represents the long-term difference of the electro-oculogram signal.
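The long-term difference step above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the patent's code: the 200 Hz sampling rate and the synthetic blink-like signal are assumptions.

```python
# Sketch of the long-term difference step (S2.1), assuming a 200 Hz sampling
# rate so that k = 32 samples is roughly 160 ms, half a blink period.
import numpy as np

fs = 200                       # assumed sampling frequency (Hz)
k = int(0.160 * fs)            # delay of ~160 ms

rng = np.random.default_rng(0)
n = np.arange(10 * fs)
eeg = 0.1 * rng.standard_normal(n.size)      # stand-in "normal EEG" S(n)
eog = np.zeros(n.size)
eog[400:500] = 5.0 * np.hanning(100)         # one slow blink-like bump A(n)
X = eeg + eog                                # X(n) = S(n) + A(n)

# D_k(n) = X(n) - X(n - k); the first k samples have no past value
D = np.empty_like(X)
D[:k] = 0.0
D[k:] = X[k:] - X[:-k]
```

Because the blink changes slowly but by a large amount over the 160 ms lag, the difference amplifies the ocular bump relative to the fast, low-amplitude EEG background.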
In step S2.2, the amplitude envelope signal E(n) of the long-term difference signal Dk(n) is extracted.
As shown in fig. 3, the long-term difference signal is squared to obtain a long-term difference energy signal, whose amplitude is then multiplied by 2 to compensate the low-frequency energy loss caused by the squaring. The resulting second long-term difference energy signal is passed through an N-th order low-pass filter h(N) to obtain an energy envelope signal, and finally the square root of the energy envelope signal is taken to convert it into the amplitude envelope signal E(n).
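The four envelope-extraction operations (square, double, low-pass, square root) can be sketched as follows. A moving-average filter stands in for the patent's unspecified N-th order low-pass filter h(N), and the toy input signal is made up.

```python
# Minimal sketch of the envelope extraction in S2.2.1-S2.2.4.
import numpy as np

def amplitude_envelope(D, filter_len=65):
    energy = D ** 2                         # S2.2.1: square -> energy signal
    energy = 2.0 * energy                   # S2.2.2: x2 to offset low-freq loss
    h = np.ones(filter_len) / filter_len    # stand-in low-pass filter h(N)
    smoothed = np.convolve(energy, h, mode="same")  # S2.2.3: energy envelope
    return np.sqrt(smoothed)                # S2.2.4: back to amplitude units

fs = 200
t = np.arange(2 * fs) / fs
D = np.sin(2 * np.pi * 10 * t)             # toy 10 Hz "difference signal"
E = amplitude_envelope(D)
```

For a unit-amplitude sine, the doubled mean energy is 1, so the envelope sits near 1 away from the edges, which is the amplitude-restoring effect the x2 step is meant to achieve.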
In step S2.3, double-threshold endpoint detection of the electro-oculogram interference interval is performed based on the amplitude envelope signal E(n), as follows:
First, a higher threshold Th = λh·σ is set to preliminarily determine a starting point N′1 and an end point N′2, where σ is the standard deviation of the amplitude of the normal electroencephalogram signal and λh is an empirical constant, chosen so that Th exceeds the envelope of high-amplitude normal electroencephalogram activity.
Secondly, a lower threshold Tl = λl·σ is set, where σ is again the standard deviation of the amplitude of the normal electroencephalogram signal and λl is an empirical constant, used to delimit the electro-oculogram interval.
Then, starting from N′1 and N′2, a backward and a forward search are performed, extending the endpoints outward until the envelope falls below Tl, which determines the final electro-oculogram interval [N1, N2].
In this embodiment, intervals shorter than 50 ms are rejected as non-ocular after the search is finished.
In step S2.4, the electro-oculogram signal within the final electro-oculogram interval [N1, N2] is removed so that the pure electroencephalogram signal can be extracted; the following method can be used to remove the electro-oculogram signal within that interval.
As shown in fig. 4, in step S2.4.1, a signal X'(n) with the same length as the first electroencephalogram signal X(n) containing the electro-ocular interference is generated; X'(n) is configured to equal X(n) within the electro-oculogram interval of the original signal, and the values of the remaining intervals are configured to 0.
In step S2.4.2, the generated signal X'(n) is smoothed so that the interval [N1, N2] of X'(n) coincides with the interval [N1, N2] of X(n); the smoothing filters out the relatively high-frequency electroencephalogram components and retains only the low-frequency electro-oculogram components below 1 Hz, yielding a signal X''(n) that contains only the electro-oculogram amplitude.
In step S2.4.3, since the two signals coincide over the interval, the electro-oculogram signal is removed by computing X(n) − X''(n), which yields the second electroencephalogram signal X(n), i.e. a pure electroencephalogram signal without the electro-oculogram component.
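Steps S2.4.1 to S2.4.3 can be sketched as follows. The gated copy X'(n) is smoothed into X''(n) and subtracted from X(n). A moving average stands in for the patent's sub-1 Hz smoothing, and all signals and the detected interval are synthetic.

```python
# Sketch of S2.4: gate, smooth, subtract.
import numpy as np

fs = 200
n = np.arange(5 * fs)
eeg = 0.1 * np.sin(2 * np.pi * 10 * n / fs)     # fast "EEG" component
eog = np.zeros(n.size)
eog[200:600] = 2.0 * np.hanning(400)            # slow ocular bump (~2 s)
X = eeg + eog                                   # contaminated signal X(n)
N1, N2 = 200, 600                               # assumed detected interval

Xp = np.zeros_like(X)                           # X'(n): gated copy of X(n)
Xp[N1:N2] = X[N1:N2]

win = int(0.25 * fs)                            # smoothing window (~0.25 s)
Xpp = np.convolve(Xp, np.ones(win) / win, mode="same")  # X''(n): slow part
clean = X - Xpp                                 # second EEG signal
```

Outside the interval the signal is untouched; inside it the slow ocular bump is cancelled while the fast EEG component survives the subtraction.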
In step S3, since the ocular interference has been removed from the second electroencephalogram signal X(n), the sleep features {a1, a2, …, aN} of the second electroencephalogram signal X(n) can now be obtained with the CNN convolutional neural network model, for judging the sleep stages.
As shown in fig. 5, CNN models with filters of different scales first extract features from the processed EEG signal; each CNN model comprises 4 convolutional layers and two max-pooling layers.
Convolution is the summation, over a certain range, of the product of two variables. If the convolved variables are the sequences X(n) and h(n), the result of the convolution is:
y(n) = Σk X(k)·h(n−k)
Each CNN model performs in sequence: convolution with a given window size; batch standardization; and activation using a linear rectification function (ReLU).
As shown in the sleep data staging diagram of fig. 7, the CNN model block diagram indicates the following parameters:
[Fs/2] Conv, 64, /[Fs/16]: the CNN filter size is half the sampling frequency, with 64 filters and a filter stride of 1/16 of the sampling frequency;
8 max-pool, /8: max pooling of size 8 with stride 8;
8 Conv, 128: CNN filter size 8, 128 filters, stride defaulting to 1;
dropout 0.5: half of the units are randomly dropped during training;
1024 Fc: a fully connected layer with output size 1024;
512/512 BLSTM: bidirectional LSTM with 512 units in each direction;
5 softmax: a normalized exponential function over the 5 classes.
Then, feature extraction is performed on the second electroencephalogram signal X(n) with each CNN model, obtaining the feature vector extracted by the small-scale-filter CNN model and the feature vector extracted by the large-scale-filter CNN model.
Finally, the feature vectors extracted by the small-scale-filter CNN model and by the large-scale-filter CNN model are spliced to obtain the features {a1, a2, …, aN}.
As shown in FIG. 7, suppose we have N 30-second EEG signal samples {x1, x2, …, xN} from a single channel. Two CNNs are used to extract the i-th feature ai from the i-th EEG sample xi:
ai = CNNθs(xi) ‖ CNNθl(xi)
Here CNN(xi) denotes extracting a feature vector hi from a 30-second EEG signal with a convolutional neural network, θs and θl denote the parameters of the small-scale and large-scale filters respectively, and ‖ denotes splicing the two feature vectors together. The spliced features {a1, a2, …, aN} form the features of the pure electroencephalogram signal and are fed into the bidirectional long short-term memory network model BLSTM.
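The two-scale extraction and splicing ai = CNNθs(xi) ‖ CNNθl(xi) can be sketched as follows, with each "CNN" reduced to a single random convolution plus average pooling purely for illustration; the real models have 4 convolutional layers and learned weights.

```python
# Toy sketch of dual-scale feature extraction and splicing (S3.1-S3.3).
import numpy as np

def tiny_cnn(x, kernel_len, out_dim=4, seed=0):
    # one random convolution + ReLU, then average-pool into out_dim values
    rng = np.random.default_rng(seed)
    kernel = rng.standard_normal(kernel_len)
    y = np.maximum(np.convolve(x, kernel, mode="valid"), 0.0)
    chunks = np.array_split(y, out_dim)
    return np.array([c.mean() for c in chunks])

fs = 100                                               # assumed sampling rate
x_i = np.random.default_rng(3).standard_normal(30 * fs)  # one 30 s epoch
feat_small = tiny_cnn(x_i, kernel_len=fs // 2)         # small-scale filter
feat_large = tiny_cnn(x_i, kernel_len=4 * fs)          # large-scale filter
a_i = np.concatenate([feat_small, feat_large])         # a_i = f_s || f_l
```

The small-scale filter responds to fast waveform detail while the large-scale filter summarizes slower trends; splicing keeps both views in one feature vector.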
In step S4, the features {a1, a2, …, aN} are input into the bidirectional long short-term memory network model BLSTM for training, and the sleep cycle classification of the first electroencephalogram signal X(n) is acquired.
In manual sleep staging, scorers often interpret the current sleep state based on the preceding sleep stage, as described in the AASM scoring manual: if the previous epoch was scored as stage N2 and the current data shows a low-amplitude mixed-frequency EEG waveform, it is also scored as stage N2, even if no K-complexes or spindle waves are present.
The essential feature of the bidirectional long short-term memory network model BLSTM is that there are both internal feedback connections and feedforward connections between the processing units, and its internal memory can be used to process input sequences of arbitrary length, which makes it well suited to data that is continuous and regular in time.
The two-layer bidirectional long-short term memory network model BLSTM is used for extracting temporal information, and meanwhile, features obtained by CNN convolutional neural network processing are fused with features obtained after BLSTM processing in a direct connection mode, so that the generalization of the model is greatly improved, and overfitting of the model is avoided.
The mathematics are described as follows. After CNN processing we have N features {a1, a2, …, aN}; the processing steps are as shown in FIG. 6. A forward LSTM function is established, where θf is the forward parameter of the LSTM function:

(h_t^f, c_t^f) = LSTM_f(a_t, h_{t-1}^f, c_{t-1}^f; θf)

where h is the hidden state in the LSTM network and c is the cell state in the LSTM network. A backward LSTM function is established, where θb is the backward parameter of the LSTM function:

(h_t^b, c_t^b) = LSTM_b(a_t, h_{t+1}^b, c_{t+1}^b; θb)

where h is the hidden state in the LSTM network and c is the cell state in the LSTM network. The outputs of the forward and backward LSTM functions are concatenated, and the output sleep cycle classification is expressed as

y_t = (h_t^f || h_t^b) + FC(a_t)

where h is the hidden state in the LSTM network, c is the cell state in the LSTM network, and FC denotes a fully connected layer that converts a_t into a vector that can be added to the concatenated outputs.
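A minimal sketch of the forward and backward passes and the per-step pairing of hidden states can be written in plain Python. The scalar states and the weight dictionary `w` are purely hypothetical stand-ins for the real parameter matrices θf and θb:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(a_t, h, c, w):
    """One LSTM step with scalar hidden state h and cell state c."""
    i = sigmoid(w['wi'] * a_t + w['ui'] * h)    # input gate
    f = sigmoid(w['wf'] * a_t + w['uf'] * h)    # forget gate
    o = sigmoid(w['wo'] * a_t + w['uo'] * h)    # output gate
    g = math.tanh(w['wg'] * a_t + w['ug'] * h)  # candidate cell value
    c = f * c + i * g
    h = o * math.tanh(c)
    return h, c

def blstm(features, w):
    """Run the sequence forward and backward, then pair h_t^f with h_t^b.
    States start at zero, as required for each new subject."""
    h, c, fwd = 0.0, 0.0, []
    for a in features:
        h, c = lstm_step(a, h, c, w)
        fwd.append(h)
    h, c, bwd = 0.0, 0.0, []
    for a in reversed(features):
        h, c = lstm_step(a, h, c, w)
        bwd.append(h)
    bwd.reverse()
    return list(zip(fwd, bwd))  # each entry is (h_t^f, h_t^b)
```

The real model uses vector-valued states, a shared parameter set per direction, and a fully connected layer applied to each concatenated pair.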
The specific number of hidden layers and the configuration of the fully connected layers of the BLSTM are shown in FIG. 7. The label on each BLSTM bidirectional long short-term memory box indicates the number of hidden nodes in the forward and backward directions, and the label on each FC box indicates its number of hidden nodes.
Whenever a group of samples is input into the BLSTM network, whether for training or testing, the hidden state h and the cell state c must first be reinitialized to 0, so as to ensure that the model is trained and tested using only the temporal information of the current subject.
The method improves the stability of the algorithm in the electro-oculogram removal step, reduces manual intervention, strengthens the association between the brain-wave information of preceding and following sleep periods and the sleep stage classification, and improves the accuracy and robustness of the sleep staging system.
It should be appreciated that the present application provides an apparatus for removing ocular artifacts under a single channel, the apparatus comprising at least one processor and at least one memory. In some embodiments, the electronic device may be implemented by hardware, software, or a combination of software and hardware. Wherein the hardware portion may be implemented using dedicated logic; the software portions may be stored in a memory for execution by a suitable instruction execution system, such as a microprocessor or specially designed hardware. The above described methods and systems will be understood by those skilled in the art and may be implemented using computer executable instructions and/or embodied in processor control code, such code being provided, for example, on a carrier medium such as a disk, CD or DVD-ROM, programmable memory such as read only memory (firmware), or a data carrier such as an optical or electronic signal carrier. The electronic device of the present application may be implemented not only by a hardware circuit of a semiconductor such as a very large scale integrated circuit or a gate array, a logic chip, a transistor, or the like, or a programmable hardware device such as a field programmable gate array, a programmable logic device, or the like, but also by software executed by various types of processors, for example, and may also be implemented by a combination of the above hardware circuit and software (for example, firmware).
Moreover, those skilled in the art will appreciate that aspects of the present application may be illustrated and described in terms of several patentable species or situations, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereon. Accordingly, various aspects of the present application may be embodied entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present application may be represented as a computer product, including computer readable program code, embodied in one or more computer readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of the present application may be written in any one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, and the like, a conventional programming language such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, a dynamic programming language such as Python, Ruby, and Groovy, or other programming languages, and the like. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any network format, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), or in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, the order in which elements and sequences of the processes described herein are processed, the use of alphanumeric characters, or the use of other designations, is not intended to limit the order of the processes and methods described herein, unless explicitly claimed. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the application, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to imply that more features are required than are expressly recited in the claims. Indeed, an embodiment may be characterized by fewer than all of the features of a single embodiment disclosed above.
The entire contents of each patent, patent application publication, and other material cited in this application, such as articles, books, specifications, publications, and documents, are hereby incorporated by reference into this application, except for any prosecution history that is inconsistent with or conflicts with the present disclosure, and except for any such material that would limit the broadest scope of the claims (whether now or later appended to this application). If the description, definition, and/or use of a term in the material accompanying this application is inconsistent or conflicts with the statements herein, the description, definition, and/or use of the term in this application shall control.
Claims (10)
1. A sleep stage classification method based on a single-channel electroencephalogram signal, characterized in that it mainly comprises the following steps:
s1, acquiring a first electroencephalogram signal X (n) of a single channel, wherein the electroencephalogram signal comprises a normal electroencephalogram signal and an electro-oculogram signal;
s2, acquiring a second electroencephalogram signal X (n) without ocular artifacts based on the first electroencephalogram signal X (n);
S3, based on the second electroencephalogram signal X(n), acquiring sleep characteristics {a1, a2, …, aN} of the second electroencephalogram signal X(n) by using a neural network model;
S4, inputting the sleep characteristics {a1, a2, …, aN} into a bidirectional long-short term memory network model BLSTM for training, and acquiring the sleep cycle classification of the first electroencephalogram signal X(n).
2. The method for sleep staging based on single-channel electroencephalogram signals, according to claim 1, characterized in that the step of obtaining the second electroencephalogram signal X (n) without ocular artifacts comprises the following steps:
s2.1, acquiring a long-term differential signal of the first electroencephalogram signal X (n), wherein the long-term differential signal comprises a long-term differential signal of an electroencephalogram signal and a long-term differential signal of an electro-oculogram signal;
s2.2, extracting an amplitude envelope signal E (n) of the first electroencephalogram signal X (n) long-term differential signal;
S2.3, carrying out double-threshold electro-oculogram interference interval endpoint detection based on the amplitude envelope signal E(n), and acquiring a final electro-oculogram interval [N1, N2];
S2.4, removing the ocular electrical signal in the final electro-oculogram interval [N1, N2] from the first electroencephalogram signal X(n).
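The double-threshold endpoint detection of step S2.3 can be sketched as follows. The two threshold values and the outward search from a high-threshold trigger to the low-threshold boundaries are illustrative assumptions, since the claim does not fix the exact search rule:

```python
def double_threshold_intervals(envelope, high, low):
    """Double-threshold endpoint detection: an interval is triggered wherever
    the envelope exceeds `high`, then extended outward in both directions to
    the points where it falls back below `low`."""
    intervals = []
    i, n = 0, len(envelope)
    while i < n:
        if envelope[i] > high:
            start = i
            while start > 0 and envelope[start - 1] > low:
                start -= 1              # extend left to the low threshold
            end = i
            while end + 1 < n and envelope[end + 1] > low:
                end += 1                # extend right to the low threshold
            intervals.append((start, end))
            i = end + 1                 # resume after the detected interval
        else:
            i += 1
    return intervals
```

Each returned pair plays the role of a candidate electro-oculogram interval [N1, N2] to be screened further (e.g. by the 50 ms duration rule of claim 4).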
3. The method for sleep staging based on single-channel electroencephalogram signals according to claim 2, characterized in that the step of extracting the amplitude envelope signal E(n) of the long-term differential signal comprises the following steps:
s2.2.1, performing square operation on the long-term differential signal to obtain a long-term differential energy signal;
S2.2.2, multiplying the amplitude of the long-term differential energy signal by 2 to obtain a second long-term differential energy signal, so as to compensate for the low-frequency energy loss caused by the squaring operation;
s2.2.3, passing the second long-term differential energy signal through an nth-order low-pass filter h (N) to obtain an energy envelope signal;
S2.2.4, performing a square-root operation on the energy envelope signal to obtain the amplitude envelope signal E(n).
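The envelope pipeline of steps S2.2.1 through S2.2.4 can be sketched in plain Python; a simple moving average is used here as a stand-in for the N-th order low-pass filter h(N), whose actual design the claim leaves open:

```python
import math

def amplitude_envelope(diff_signal, order=5):
    """Envelope of a long-term differential signal: square -> x2 gain ->
    low-pass (moving average stands in for h(N)) -> square root."""
    energy = [v * v for v in diff_signal]      # S2.2.1: squaring operation
    energy = [2.0 * v for v in energy]         # S2.2.2: compensate low-freq loss
    half = order // 2
    smoothed = []
    for i in range(len(energy)):               # S2.2.3: simple low-pass filter
        lo, hi = max(0, i - half), min(len(energy), i + half + 1)
        smoothed.append(sum(energy[lo:hi]) / (hi - lo))
    return [math.sqrt(v) for v in smoothed]    # S2.2.4: square-root operation
```

The output E(n) is what the double-threshold endpoint detection of step S2.3 operates on.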
4. The method for sleep staging based on single-channel electroencephalogram signals according to claim 2, characterized in that acquiring the final electro-oculogram interval [N1, N2] further comprises the following step:
after the search is finished, rejecting non-ocular signals shorter than 50 ms.
5. The method for sleep staging based on single-channel electroencephalogram signals according to claim 2, characterized in that removing the ocular electrical signal in the final electro-oculogram interval [N1, N2] from the first electroencephalogram signal X(n) comprises the following steps:
S2.4.1, generating a signal X'(n) with the same length as the first electroencephalogram signal X(n), configuring X'(n) to coincide with X(n) over the ocular electrical interval of the original signal, and setting the values of the remaining intervals to 0;
S2.4.2, performing moving smoothing on the generated signal X'(n) so that the interval [N1, N2] of X'(n) coincides with the interval [N1, N2] of X(n), obtaining X''(n), wherein X''(n) filters out the relatively high-frequency electroencephalogram signal and retains only the low-frequency ocular electrical signal below 1 Hz;
S2.4.3, removing the ocular electrical signal by computing X(n) - X''(n).
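The three steps of claim 5 can be sketched in plain Python; the window length of the moving smoothing is an illustrative assumption, since the claim specifies only that the smoothed copy retains the low-frequency ocular component:

```python
def remove_ocular(x, n1, n2, win=7):
    """Copy the ocular interval [n1, n2] into a zero signal (X'), smooth it
    with a moving average to keep only the low-frequency EOG trend (X''),
    then subtract: X(n) - X''(n)."""
    x_prime = [0.0] * len(x)
    x_prime[n1:n2 + 1] = x[n1:n2 + 1]          # S2.4.1: build X'(n)
    half = win // 2
    x_dbl = []
    for i in range(len(x)):                    # S2.4.2: moving smoothing
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        x_dbl.append(sum(x_prime[lo:hi]) / (hi - lo))
    return [a - b for a, b in zip(x, x_dbl)]   # S2.4.3: X(n) - X''(n)
```

Outside the ocular interval X''(n) is (near) zero, so the EEG there passes through unchanged; inside it, only the smoothed low-frequency trend is removed.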
6. The method for sleep staging based on single-channel electroencephalogram signals according to claim 1, characterized in that acquiring the characteristics {a1, a2, …, aN} of the second electroencephalogram signal X(n) by using a CNN convolutional neural network model comprises the following steps:
s3.1, establishing CNN convolutional neural network models with different filter scales;
s3.2, respectively performing feature extraction on the second electroencephalogram signal X (n) by using each CNN convolutional neural network model to obtain a feature vector extracted by the small-scale filter CNN convolutional neural network model and a feature vector extracted by the large-scale filter CNN convolutional neural network model;
S3.3, splicing the feature vector extracted by the small-scale filter CNN convolutional neural network model and the feature vector extracted by the large-scale filter CNN convolutional neural network model to obtain the sleep features {a1, a2, …, aN}.
7. The method for sleep staging based on single-channel electroencephalogram signals according to claim 6, characterized in that the convolution step of the CNN convolutional neural network model is as follows:
performing convolution with a given window size;
carrying out a batch normalization operation;
performing the activation operation using a linear rectification function.
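The three-part convolution block of claim 7 (convolution, batch normalization, linear rectification) can be sketched for a single 1-D feature map; normalizing over one map of one sample is a simplification of true batch normalization, which the claim does not spell out:

```python
import math

def conv_bn_relu(x, kernel, eps=1e-5):
    """One convolution block: windowed convolution, normalization of the
    resulting feature map, then ReLU (linear rectification)."""
    k = len(kernel)
    conv = [sum(x[i + j] * kernel[j] for j in range(k))   # convolution
            for i in range(len(x) - k + 1)]
    mean = sum(conv) / len(conv)                          # batch normalization
    var = sum((v - mean) ** 2 for v in conv) / len(conv)
    bn = [(v - mean) / math.sqrt(var + eps) for v in conv]
    return [max(0.0, v) for v in bn]                      # linear rectification
```

In a trained network the normalization would also apply learned scale and shift parameters, omitted here for brevity.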
8. The method for sleep staging based on single-channel electroencephalogram signals according to claim 1, characterized in that the step of obtaining the sleep cycle classification of the first electroencephalogram signal based on the characteristics comprises the following steps:
S4.1, establishing a forward LSTM function, wherein θf is the forward parameter of the LSTM function:

(h_t^f, c_t^f) = LSTM_f(a_t, h_{t-1}^f, c_{t-1}^f; θf)

wherein h is the hidden state in the LSTM network and c is the cell state in the LSTM network;

S4.2, establishing a backward LSTM function, wherein θb is the backward parameter of the LSTM function:

(h_t^b, c_t^b) = LSTM_b(a_t, h_{t+1}^b, c_{t+1}^b; θb)

wherein h is the hidden state in the LSTM network and c is the cell state in the LSTM network;

S4.3, splicing the outputs of the forward LSTM function and the backward LSTM function, and outputting the sleep cycle classification

y_t = (h_t^f || h_t^b) + FC(a_t).
9. An apparatus for sleep staging based on single channel electroencephalogram signals, the apparatus comprising at least one processor and at least one memory;
the at least one memory is for storing computer instructions;
the at least one processor is configured to execute at least some of the computer instructions to implement the operations of any of claims 1-8.
10. A computer-readable storage medium having stored thereon computer instructions, at least some of which, when executed by a processor, perform operations according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910828706.XA CN112438738A (en) | 2019-09-03 | 2019-09-03 | Sleep stage dividing method and device based on single-channel electroencephalogram signal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112438738A true CN112438738A (en) | 2021-03-05 |
Family
ID=74734011
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910828706.XA Pending CN112438738A (en) | 2019-09-03 | 2019-09-03 | Sleep stage dividing method and device based on single-channel electroencephalogram signal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112438738A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113303814A (en) * | 2021-06-13 | 2021-08-27 | 大连理工大学 | Single-channel ear electroencephalogram automatic sleep staging method based on deep transfer learning |
CN113855049A (en) * | 2021-10-22 | 2021-12-31 | 上海电机学院 | Electroencephalogram sleep staging method based on EMD-XGboost |
CN115251845A (en) * | 2022-07-28 | 2022-11-01 | 纽锐思(苏州)医疗科技有限公司 | Sleep monitoring method for processing brain wave signals based on TB-TF-BiGRU model |
CN116269244A (en) * | 2023-05-18 | 2023-06-23 | 安徽星辰智跃科技有限责任公司 | Method, system and device for quantifying sleep memory emotion tension based on eye movement |
CN116712035A (en) * | 2023-05-31 | 2023-09-08 | 苏州海神联合医疗器械有限公司 | Sleep stage method and system based on CNN-PSO-BiLSTM |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013121489A (en) * | 2011-11-11 | 2013-06-20 | Midori Anzen Co Ltd | Sleep stage detection device, sleep stage calculation device, and sleep stage detection system |
CN107495962A (en) * | 2017-09-18 | 2017-12-22 | 北京大学 | A kind of automatic method by stages of sleep of single lead brain electricity |
CN107961007A (en) * | 2018-01-05 | 2018-04-27 | 重庆邮电大学 | A kind of electroencephalogramrecognition recognition method of combination convolutional neural networks and long memory network in short-term |
CN108542386A (en) * | 2018-04-23 | 2018-09-18 | 长沙学院 | A kind of sleep state detection method and system based on single channel EEG signal |
CN109157214A (en) * | 2018-09-11 | 2019-01-08 | 河南工业大学 | A method of the online removal eye electricity artefact suitable for single channel EEG signals |
CN109820525A (en) * | 2019-01-23 | 2019-05-31 | 五邑大学 | A kind of driving fatigue recognition methods based on CNN-LSTM deep learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20210526
Address after: 710075, Room 2501, Block D, Tsinghua Science Park, Keji 2nd Road, High-tech Zone, Xi'an City, Shaanxi Province
Applicant after: Xi'an leading network media Technology Co.,Ltd.
Address before: No. 2004, Block D, Tsinghua Science Park, Keji 2nd Road, High-tech Zone, Xi'an City, Shaanxi Province, 710075
Applicant before: Xi'an Huinao Intelligent Technology Co.,Ltd.
|
TA01 | Transfer of patent application right |