CN115778390A - Mixed modal fatigue detection method based on linear prediction analysis and stacking fusion - Google Patents
- Publication number: CN115778390A
- Application number: CN202310046600.0A
- Authority: CN (China)
- Legal status: Granted
Landscapes
- Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)
Abstract
The invention discloses a mixed modal fatigue detection method based on linear predictive analysis and stacking fusion, which comprises the following steps: collecting EEG signals and EOG signals; carrying out blind source separation based on the fast independent component analysis (FastICA) algorithm to obtain multi-channel forehead EEG signals and multi-channel pure EOG signals; performing linear prediction analysis to solve linear prediction cepstrum coefficients; detecting peaks with the Mexican hat continuous wavelet transform, coding positive and negative peaks, and extracting the fixation, saccade and blink statistical characteristics reflected by the EOG signal from the coding sequence; and performing regression training to obtain a fatigue degree prediction result. On the basis of exploring the fatigue-state detection rules contained in EEG and EOG physiological signals, the invention develops signal processing, feature extraction and regression training methods for EEG- and EOG-based fatigue detection, fully utilizes the fatigue-related feature information reflected in the EEG and EOG, and effectively improves the accuracy of fatigue detection based on the mixed EEG and EOG modality.
Description
Technical Field
The invention relates to the technical field of electroencephalogram signal processing, in particular to a mixed modal fatigue detection method based on linear predictive analysis and stacking fusion.
Background
Mental fatigue is typically a state of decreased brain activity or efficiency resulting from prolonged or overly intense brain activity. Mental fatigue is one of the main causes of many safety accidents, especially for practitioners who must remain constantly vigilant, such as drivers of automobiles and airplanes: it can lead to inadequate attention, slow response, a reduced ability to cope with uncontrollable dangerous events, and increased driving risk. How to effectively detect the fatigue state of a worker is therefore a significant research problem for providing early warning of overloaded, dangerous work behavior.
The current methods for assessing mental fatigue can be broadly divided into two categories: detection methods based on subjective surveys and objective detection methods based on physiological signals. The subjective survey method mainly evaluates the fatigue degree of a subject by having the subject, or an observer, fill in a questionnaire (such as the Chalder Fatigue Scale or the Fatigue Scale-14). Although this form is simple to implement, it is overly subjective, lacks real-time capability, and cannot accurately gauge the subject's fatigue level. The objective detection method based on physiological signals mainly collects physiological signals such as the electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG) and electromyogram (EMG) from the subject, studies how the signals change when fatigue occurs, and extracts fatigue-related signal features to reflect the mental fatigue level. Although detection methods based on physiological signal analysis place specific requirements on the acquisition equipment and are complex to process, their fatigue indices are objective, more accurate and available in real time, giving them broad application prospects. EEG signals are collected from the surface of the scalp and directly record neurophysiological signals related to alertness; they have the advantages of strong objectivity and high real-time performance as an indicator of mental state, and EEG is generally considered a reliable method for alertness estimation.
The EOG signal contains eyelid and eyeball movement information. When people are tired, their blinking frequency usually decreases unconsciously and eyeball activity is reduced, so analyzing the EOG signal to extract eye movement features is one of the most common and effective means of fatigue detection.
Although the EEG-based fatigue detection index is objective and has high time resolution, EEG signals vary greatly between individuals: signals collected from different subjects at the same fatigue level exhibit different characteristics, so the resulting models generalize poorly and are unsuitable for large-scale use across a population. EOG-based fatigue detection, on the other hand, suffers from poor time resolution, because the features reflected by the EOG are statistically meaningful only over long data segments; this means EOG-based detection performs poorly in the early stages of a subject's fatigue. The accuracy of existing physiological-signal fatigue detection algorithms therefore urgently needs to be improved.
Disclosure of Invention
In order to solve the technical problems, the invention provides a mixed modal fatigue detection method based on Linear Prediction Cepstral Coefficients (LPCCs) and stacking fusion, which fuses an electroencephalogram mode and an electrooculogram mode by using a stacking fusion algorithm, fully utilizes information of respective physiological fatigue in the electroencephalogram and the electrooculogram, and improves the accuracy of fatigue detection based on the electroencephalogram and electrooculogram mixed modal signals.
In order to achieve the above object, the present invention provides a mixed modal fatigue detection method based on linear predictive analysis and stacking fusion, comprising the following steps:
S1: collecting EEG signals and EOG signals from a subject;
S2: processing the EEG signals and EOG signals through band-pass filters to obtain multi-channel EEG and EOG signals;
S3: performing blind source separation on the multi-channel EOG signals based on the fast independent component analysis (FastICA) algorithm, decomposing the original multi-channel EOG signals into multi-channel forehead EEG signals and multi-channel pure EOG signals;
S4: merging the multi-channel EEG signals with the multi-channel forehead EEG signals, performing linear prediction analysis on the merged multi-channel EEG signal, solving the linear prediction cepstrum coefficients, and smoothing them with a moving average (MA) algorithm;
S5: detecting peaks in the multi-channel pure EOG signals with the Mexican hat wavelet transform (MHWT), coding positive and negative peaks, and extracting the fixation, saccade and blink statistical characteristics reflected by the EOG from the coding sequence;
S6: inputting the features extracted from the EEG and EOG signals in steps S4 and S5 into a stacking fusion algorithm model for regression training to obtain a fatigue degree prediction result.
Preferably, in step S1), the EEG signal is acquired from the temporal and occipital lobes of the subject's brain, and the EOG signal is acquired from the forehead above the eyes.
Preferably, in step S2), after band-pass filtering, a rectangular window function is used to divide the EEG signal and the EOG signal into non-overlapping data frame segments.
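As an illustrative sketch (the function name and frame length are our own, not from the patent), dividing a signal into non-overlapping rectangular-window frames amounts to a simple reshape:

```python
import numpy as np

def frame_signal(x, frame_len):
    """Split a 1-D signal into non-overlapping frames of frame_len samples.

    The rectangular window is implicit: samples are taken as-is, and any
    trailing samples that do not fill a whole frame are discarded.
    """
    x = np.asarray(x)
    n_frames = len(x) // frame_len
    return x[:n_frames * frame_len].reshape(n_frames, frame_len)

# Example: 10 samples, frames of 4 -> 2 frames; the last 2 samples are dropped
frames = frame_signal(np.arange(10), 4)
```

In the patent's setting each frame would hold 8 s of samples, so frame_len would be 8 times the (unstated) sampling rate.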
Preferably, in step S3), the blind source separation specifically includes:
S3.1: centering the data of each EOG acquisition channel: record the acquired original multi-channel EOG data matrix as S_{M×L}, where M is the number of channels and L is the signal length; subtracting the corresponding row mean from each element of S_{M×L} yields the centered matrix S_C;
S3.2: whitening the mean-removed multi-channel matrix S_C: X = E D^{-1/2} E^T S_C, where X is the whitened data matrix, E = [c_1, c_2, …, c_M] is the M×M eigenvector matrix, D^{-1/2} = diag[d_1^{-1/2}, d_2^{-1/2}, …, d_M^{-1/2}] is the M×M diagonal eigenvalue matrix, d_i is the i-th eigenvalue of the covariance matrix of S_C, and c_i is the corresponding eigenvector, i = 1, 2, …, M;
S3.3: setting the unmixing matrix as W, an M-dimensional vector, and decomposing the multi-channel data S_{M×L} into a sum of independent component variables: U = WX. The non-Gaussianity of a signal component is measured by the negentropy J_G(W) = [E{G(W^T X)} − E{G(V)}]^2, where J_G(W) is the non-Gaussianity of the component computed under a non-linear function G, and V is a Gaussian variable with the same mean and covariance matrix as X; the unmixing matrix W that maximizes the non-Gaussianity of each component is solved iteratively using Newton's method;
S3.4: reconstructing the forehead EEG signal from the unmixing matrix: Ŝ = W^{-1}U*, where U* is the matrix of independent component variables in which each component is maximally non-Gaussian, Ŝ is the separated forehead EEG signal, and W^{-1} is the inverse of the unmixing matrix W; the difference between the original EOG signal and the separated forehead EEG is the pure EOG signal.
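The centering–whitening–negentropy procedure of S3.1–S3.4 is what the FastICA algorithm implements; as a hedged sketch, scikit-learn's FastICA can stand in for the hand-rolled Newton iteration (the synthetic two-channel mixture below is our own illustration, not the patent's data):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(7 * 2 * np.pi * t))   # "EOG-like" square-wave source
s2 = rng.laplace(size=t.size)             # "EEG-like" super-Gaussian source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # mixing matrix
X = S @ A.T                               # observed multi-channel forehead signal

# Centering and whitening happen inside fit_transform; a Newton-type
# fixed-point iteration maximizes the negentropy of each component.
ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
U = ica.fit_transform(X)                  # estimated independent components
X_rec = ica.inverse_transform(U)          # analogous to reconstructing with W^-1
```

With as many components as channels, the reconstruction recovers the observed mixture exactly, mirroring the role of W^{-1} in S3.4.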
Preferably, the specific steps of step S4) include:
S4.1: merging the head EEG signal E1 = [p_1, p_2, …, p_{m1}] with the separated forehead EEG signal E2 = [q_1, q_2, …, q_{m2}] into a complete electroencephalogram signal:
E3 = [p_1, p_2, …, p_{m1}, q_1, q_2, …, q_{m2}];
where p_i is an EEG channel signal from the head, m1 is the number of channels of the head electrodes, q_i is a forehead EEG channel signal, and m2 is the number of channels of the forehead electrodes;
S4.2: assume a linear prediction system of order p, i.e. the n-th signal sample s(n) is estimated by a linear combination of its previous p samples:
ŝ(n) = Σ_{i=1}^{p} a_i s(n−i);
where ŝ(n) is the predicted sample; minimizing the prediction error between s(n) and ŝ(n) yields the linear prediction coefficients a_i;
S4.3: according to the defined relation between the linear prediction coefficients and the linear prediction cepstrum coefficients, the linear prediction coefficients a_i are converted to linear prediction cepstrum coefficients iteratively by the recursion:
ĉ_1 = a_1;
ĉ_n = a_n + Σ_{k=1}^{n−1} (k/n) ĉ_k a_{n−k}, for 1 < n ≤ p;
ĉ_n = Σ_{k=n−p}^{n−1} (k/n) ĉ_k a_{n−k}, for n > p;
where ĉ_n is the n-th linear prediction cepstrum coefficient and a_1, …, a_p are the p-order linear prediction coefficients; in this way an unlimited number of linear prediction cepstrum coefficients can be obtained from a finite number of linear prediction coefficients;
S4.4: performing feature smoothing on the extracted linear prediction cepstrum coefficients with a moving average algorithm: the smoothed linear prediction cepstrum coefficient sLPCC_x of the x-th data frame is computed as the average of all linear prediction cepstrum coefficients within a smoothing window of length win centered on x along the time-frame dimension.
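A minimal sketch of steps S4.3–S4.4 (function names are ours): the LPC-to-LPCC recursion given above, plus moving-average smoothing along the time-frame axis.

```python
import numpy as np

def lpc_to_lpcc(a, n_ceps):
    """Convert LPC coefficients a = [a_1..a_p] to n_ceps cepstral coefficients
    using c_1 = a_1 and c_n = a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k},
    where terms with a_{n-k} for n-k > p are absent."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if 1 <= n - k <= p:
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = acc
    return c

def moving_average_smooth(feat, win):
    """Smooth each LPCC dimension along the time-frame axis (axis 0)."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, feat)
```

For a single pole a_1 = 0.5 the recursion reproduces the known cepstrum c_n = a_1^n / n, which is a handy sanity check.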
Preferably, the specific steps at step S4.2 include:
S4.2.1: the difference between the actual sample and the predicted sample is called the prediction error:
e(n) = s(n) − ŝ(n) = s(n) − Σ_{i=1}^{p} a_i s(n−i);
based on the minimum mean square error criterion, the average prediction error is E{e²(n)};
to minimize E{e²(n)}, take the partial derivative with respect to each a_i and set it to 0, which gives the system of equations:
E{ s(n−j) [ s(n) − Σ_{i=1}^{p} a_i s(n−i) ] } = 0, j = 1, 2, …, p;
S4.2.2: rewriting the linear prediction equations in their equivalent form in terms of the autocorrelation function:
Σ_{i=1}^{p} a_i R_n(|j−i|) = R_n(j), j = 1, 2, …, p;
where R_n(j) = Σ_{m=j}^{N−1} s_n(m) s_n(m−j) is the autocorrelation between the n-th data frame segment and the same segment delayed by j data points, m is the sample index within the n-th data frame segment, and N is the length of the data frame segment;
in matrix form, R a = r, where R is the p×p Toeplitz matrix with entries R_{ji} = R_n(|j−i|), a = [a_1, a_2, …, a_p]^T and r = [R_n(1), R_n(2), …, R_n(p)]^T;
S4.2.3: solving the linear prediction equations with the Levinson-Durbin algorithm to compute the linear prediction coefficients a_i.
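The autocorrelation computation and Levinson-Durbin recursion of S4.2.2–S4.2.3 can be sketched as follows (a minimal illustration, not the patent's exact implementation):

```python
import numpy as np

def frame_autocorr(frame, p):
    """Autocorrelation R(j) = sum_m s(m) s(m - j) of one frame, j = 0..p."""
    n = len(frame)
    return np.array([np.dot(frame[j:], frame[:n - j]) for j in range(p + 1)])

def levinson_durbin(r, p):
    """Solve the Toeplitz normal equations for the LPC coefficients a_1..a_p,
    given autocorrelation values r[0..p]."""
    a = np.zeros(p)
    err = r[0]                                            # error power E_0
    for i in range(p):
        k = (r[i + 1] - np.dot(a[:i], r[i:0:-1])) / err   # reflection coeff.
        a_prev = a[:i].copy()
        a[i] = k
        a[:i] = a_prev - k * a_prev[::-1]                 # order-update a_1..a_i
        err *= (1.0 - k * k)                              # shrink error power
    return a
```

For an ideal AR(1) autocorrelation r = [1, 0.9, 0.81] the solver returns a ≈ [0.9, 0], as expected for s(n) = 0.9 s(n−1).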
Preferably, the specific steps of step S5) include:
S5.1: performing a Mexican hat continuous wavelet transform on the multi-channel pure EOG signal, i.e. convolving the signal with the Mexican hat wavelet
ψ(t) = (2 / (√(3σ) · π^{1/4})) · (1 − t²/σ²) · e^{−t²/(2σ²)};
where t is the time point, e is the base of the natural logarithm, and σ is the standard deviation of the data;
S5.2: detecting signal peaks with a peak-searching algorithm over non-overlapping sliding windows of fixed length D; a positive peak detected in a window is coded as 1 and a negative peak as 0; a window in which no peak is found is identified as a fixation feature; each 01 or 10 segment appearing in the coding sequence is identified as a candidate for one saccade feature, and each 010 segment as one blink feature;
S5.3: counting, on each data frame, the number of saccades and the variance, maximum, minimum, mean, power and average power of the saccade amplitude as the saccade eye movement features; the total blink duration with its mean, maximum and minimum, the blink frequency, and the maximum, minimum, mean, power and average power of the blink amplitude as the blink eye movement features; and the total fixation duration with the mean, maximum and minimum of the fixation durations as the fixation eye movement features.
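The wavelet and peak-coding logic of S5.1–S5.2 can be sketched as follows; the Mexican hat formula matches the one above, while the window length and threshold are our own illustrative choices (the wavelet-signal convolution is omitted for brevity):

```python
import numpy as np

def mexican_hat(t, sigma):
    """Mexican hat (Ricker) wavelet: a normalized negative second
    derivative of a Gaussian with standard deviation sigma."""
    c = 2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
    return c * (1.0 - (t / sigma) ** 2) * np.exp(-(t ** 2) / (2.0 * sigma ** 2))

def code_peaks(signal, win_len, threshold):
    """Scan non-overlapping windows of length win_len; code a positive peak
    as 1 and a negative peak as 0. Windows with no peak above threshold
    emit no symbol (they correspond to fixation)."""
    codes = []
    for start in range(0, len(signal) - win_len + 1, win_len):
        seg = signal[start:start + win_len]
        i = int(np.argmax(np.abs(seg)))
        if abs(seg[i]) >= threshold:
            codes.append(1 if seg[i] > 0 else 0)
    return codes

# Toy trace: positive spike, negative spike, positive spike -> codes 1, 0, 1
sig = np.zeros(30)
sig[5], sig[15], sig[25] = 2.0, -3.0, 2.5
codes = code_peaks(sig, win_len=10, threshold=1.0)
```

In the toy trace the 0-surrounded-by-1s pattern is exactly the 010 blink candidate described in S5.2.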
Preferably, the specific steps of step S6) include:
S6.1: inputting the EEG features into a Bayesian ridge regression model for training, recording the training result as Y1; inputting the EOG features into a light gradient boosting machine (LightGBM) regression model for training, recording the training result as Y2; concatenating the EEG features and EOG features in series and inputting the combined features into a Bayesian ridge regression model and a LightGBM regression model respectively, recording the training results as Y3 and Y4, thereby completing the training of the base regression layer of the stacking fusion algorithm;
S6.2: concatenating the training results of the base regression layer, NewX = [Y1, Y2, Y3, Y4], as the input of the secondary regression layer of the stacking fusion algorithm; the secondary regression layer is a meta-regression layer, and a simple linear regression model is adopted there to avoid overfitting of the fusion model;
S6.3: evaluating the predicted value output by the model; if the evaluation passes, training is finished, otherwise return to step S6.1.
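As a hedged sketch of the two-layer stack in S6.1–S6.2 (the synthetic features and target are our own; scikit-learn's GradientBoostingRegressor stands in for LightGBM to keep the sketch dependency-light):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(0)
eeg_feat = rng.normal(size=(200, 8))   # stand-in for EEG LPCC features
eog_feat = rng.normal(size=(200, 5))   # stand-in for EOG eye-movement features
y = eeg_feat[:, 0] + eog_feat[:, 0]    # synthetic PERCLOS-like target

both = np.hstack([eeg_feat, eog_feat])
base = [
    (BayesianRidge(), eeg_feat),                             # -> Y1
    (GradientBoostingRegressor(random_state=0), eog_feat),   # -> Y2
    (BayesianRidge(), both),                                 # -> Y3
    (GradientBoostingRegressor(random_state=0), both),       # -> Y4
]
# Base regression layer: NewX = [Y1, Y2, Y3, Y4]
new_x = np.column_stack([m.fit(X, y).predict(X) for m, X in base])
# Meta (secondary) layer: plain linear regression to avoid overfitting
meta = LinearRegression().fit(new_x, y)
y_hat = meta.predict(new_x)
```

A production stack would build NewX from out-of-fold predictions rather than in-sample fits; the in-sample version above only illustrates the data flow between the base and meta layers.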
Preferably, the predicted value output by the model is evaluated by computing the correlation coefficient between the predicted value Ŷ output by the model and the actual PERCLOS fatigue index label value Y, which measures the model's fatigue detection performance:
ρ = [ Σ_{i=1}^{N} (Y_i − mean(Y)) · (Ŷ_i − mean(Ŷ)) ] / √[ Σ_{i=1}^{N} (Y_i − mean(Y))² · Σ_{i=1}^{N} (Ŷ_i − mean(Ŷ))² ];
where N is the length of the actual PERCLOS fatigue index label value Y; Y_i and Ŷ_i are respectively the actual PERCLOS fatigue index label value and the model's predicted fatigue value for the i-th data frame segment, i = 1, 2, …, N; and mean(Y) and mean(Ŷ) are respectively the mean of the actual label values Y and the mean of the model's predicted values Ŷ;
If the correlation coefficient reaches a preset value, the evaluation passes; otherwise, it does not pass.
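The evaluation metric above is the Pearson correlation coefficient; a minimal sketch (the function name is ours):

```python
import numpy as np

def fatigue_corr(y_true, y_pred):
    """Pearson correlation between PERCLOS labels and model predictions."""
    yt = y_true - y_true.mean()
    yp = y_pred - y_pred.mean()
    return float(np.sum(yt * yp) / np.sqrt(np.sum(yt ** 2) * np.sum(yp ** 2)))
```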
The present invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the hybrid modal fatigue detection method based on linear predictive analysis and stack fusion.
With the development of brain-computer interface (BCI) technology, the use of EEG signals to detect the mental fatigue state has received sustained attention from researchers. EEG signals are collected directly from the surface of the scalp and have the advantages of strong objectivity and high real-time performance as an indicator of mental state. Because the EEG directly records neurophysiological signals related to alertness, it is generally considered a reliable alertness estimation method. EEG-based fatigue detection reflects how brain activity changes when fatigue occurs and has high time resolution; if the EEG and EOG signal modalities are fused, the advantages of each single modality can be combined to achieve a better detection effect. The invention develops its research on this basis and provides an EEG and EOG hybrid fatigue detection method based on linear prediction cepstrum coefficients and a stacking fusion algorithm.
The invention provides a mixed modal fatigue detection method based on linear predictive analysis and stacking fusion. First, the EEG signals and EOG signals acquired for fatigue detection are filtered through band-pass filters and divided into non-overlapping data frames with a rectangular window function. The original EOG is then decomposed into forehead EEG and pure EOG with the FastICA blind source separation algorithm, and the separated forehead EEG is merged with the EEG collected from the head into a complete EEG signal. Linear prediction analysis is applied to the whole EEG signal, and the Levinson-Durbin algorithm, based on the autocorrelation idea, is used to solve the linear prediction cepstrum coefficients as the features extracted from the EEG. MHWT peak searching is performed on the EOG signal, positive and negative peaks are coded, the saccade, blink and fixation eye movement features reflected by the EOG are identified from the coding sequence, and the statistical parameters of each eye movement feature are calculated as the features extracted from the EOG. The obtained features are input into a stacking fusion algorithm model: in the base regression layer, a Bayesian ridge regression model is used for the EEG features, a light gradient boosting machine (LightGBM) regression model for the EOG features, and a Bayesian ridge regression model and a LightGBM regression model respectively for the concatenated EEG and EOG features; the secondary regression layer, i.e. the meta-regression layer, is then trained to obtain the weight coefficients of each base-layer model. The meta-regression layer adopts a linear regression model, takes the prediction results of the base regression layer as input, and after training outputs the final fatigue index prediction of the stacked hybrid model.
The invention has the following beneficial effects:
1) The linear prediction method adopted by the invention reflects well the correlation between earlier and later parts of the signal; fatigue is a gradually changing physiological state that accumulates dynamically over time, so the idea of linear prediction captures such temporally correlated fatigue characteristics well.
2) The LPC calculated by linear prediction is further converted into the LPCC, because common features in the EEG fatigue detection field, such as power spectral density (PSD) and differential entropy (DE), usually adopt a log measurement with dB as the unit. According to the Wiener-Khinchin theorem, the autocorrelation sequence of a signal can be obtained by taking the inverse Fourier transform of its power spectrum. If the logarithm of the power spectrum is taken before the inverse Fourier transform, the cepstrum of the signal is obtained. The cepstrum coefficient (LPCC) feature is therefore the linear prediction feature corresponding to a log measurement, and it characterizes the signal more accurately.
3) The extracted EEG features are smoothed with the MA smoothing algorithm: compared with the EOG, the EEG is less stationary and changes quickly, while the EEG features reflecting fatigue change dynamically and gradually, so smoothing filters out rapid fluctuations and retains the fatigue-related EEG features.
4) According to the invention, a stacked fusion algorithm model is adopted: in the base regression layer, a Bayesian ridge regression model suitable for EEG fatigue detection is used for the EEG modality; a light gradient boosting machine (LightGBM) regression model suitable for EOG-based fatigue detection is used for the EOG modality; a Bayesian ridge regression model and a LightGBM regression model are respectively adopted for the concatenated EEG and EOG features; and the task of model weight training is handed to the meta-regression layer, so that the fatigue-related feature information in the EEG and EOG can be comprehensively and fully utilized.
5) The stacking fusion algorithm model used in the invention selects regressors with large differences in model type and complexity for training in the base regression layer, ensuring both the depth and the breadth with which the EEG and EOG features are utilized, and adopts a simple linear regression model in the meta-regression layer to train and learn the weight of each base regressor, preventing overfitting; the final fusion model combines the advantages of the model obtained in each modality and has better generalization capability.
Drawings
Figure 1 is an illustration of EEG signal acquisition electrode position.
Fig. 2 is an illustration of the position of an EOG signal acquisition electrode.
FIG. 3 is a diagram of the steps of the method of the present invention.
Fig. 4 is a process flow diagram illustrating the method of the present invention.
Fig. 5 is a graph comparing the linear prediction power spectrum with the fatigue index label.
FIG. 6 is a graph comparing predicted fatigue index values with actual label values using the single-modality and mixed-modality approaches, in one example.
Figure 7 is a line graph of the correlation coefficients obtained by the models on each subject's data using the single-modality and mixed-modality approaches.
Detailed Description
In order to make the technical problems, technical solutions and advantages to be solved by the present invention clearer, the following will explain in detail the processing procedure of the method proposed by the present invention in a specific embodiment with reference to the accompanying drawings.
It should be noted that the following examples are merely illustrative, and the scope of the present invention is not limited by these examples.
The embodiment adopts a mixed EEG and EOG dataset from 23 subjects. While the EEG and EOG were acquired, an eye tracker recorded the eyeball activity, which is used to label the fatigue degree reflected in the collected signals. From the eye tracker recordings, the percentage of eyelid closure (PERCLOS) of the subject is calculated every 8 s as the true fatigue level label value for that 8 s interval; that is, every 8 s constitutes one test data frame for fatigue detection, and in the following, feature extraction and fatigue level prediction are performed on 8 s data frames when applying the proposed method.
The embodiment of the invention provides a mixed modal fatigue detection method based on linear prediction analysis and stacking fusion, which extracts linear prediction cepstrum coefficient features from the EEG signals of the temporal lobes, occipital lobes and forehead, extracts eye movement statistical features from the forehead EOG signals, and analyzes and predicts the subject's fatigue degree, thereby improving the accuracy of fatigue detection based on mixed EEG and EOG signals.
In order to achieve the technical effects, the general idea of the invention is as follows:
collecting EEG signal data from temporal and occipital lobes of the brain, EOG signals from the forehead for a fatigue detection regression prediction task; filtering the collected EEG and EOG data by a band-pass filter and dividing the EEG and EOG data into non-overlapping data frames by adopting a rectangular window function; separating the original EOG into frontal EEG and pure EOG using a fast independent principal component analysis algorithm on EOG signals acquired from the forehead; merging EEG collected by the head and the separated forehead EOG, extracting linear prediction cepstrum coefficient characteristics and smoothing the characteristics; carrying out Mexican hat continuous wavelet transform peak searching on the separated EOG signal without EEG interference, coding positive and negative peaks, and extracting statistical parameter values of saccades, blinks and fixation characteristics as eye movement characteristic values reflected by the eye electricity; and inputting the EEG characteristics and the EOG characteristics into a stacking fusion regression model for training, wherein the output of the meta regression layer of the stacking fusion model is the predicted value of the fatigue degree.
As shown in fig. 3 and 4, the electroencephalogram and electro-oculogram hybrid fatigue detection method based on the linear prediction cepstrum coefficient and the stacking fusion algorithm provided by the invention comprises the following steps:
S1: collecting the EEG signals and EOG signals used for fatigue detection; the EEG is collected from the temporal and occipital lobes of the brain and the EOG from the forehead above the eyes; the specific positions of the EEG electrodes are shown in figure 1 and those of the EOG electrodes in figure 2.
S2: processing the collected EEG signals through a band-pass filter of 0.5Hz to 50.5Hz, processing the EOG signals through a band-pass filter of 0.5Hz to 30.5Hz, and dividing the EEG signals and the EOG signals into non-overlapping 885 data frames by adopting a rectangular window function with the length of 8 s;
s3: blind source separation is carried out on the EOG signal based on a fast independent principal component analysis algorithm FastICA, and the original multi-channel EOG signal is decomposed into a multi-channel forehead EEG signal and a multi-channel pure EOG signal;
the method comprises the following specific steps:
S3.1: Centralize the data of each EOG acquisition channel: record the acquired original multi-channel EOG data matrix as S_{M×L}, where M is the number of channels and L is the signal length, and subtract the corresponding channel (row) mean from each element of S_{M×L} to obtain the centered matrix S_C.
S3.2: central processing matrix S for multi-channel mean value removal C Whitening treatment, X = ED -1/2 E T S C Where X is the whitened data matrix, E = [ c = 1 ,c 2 ,…,c L ]Is an M × L matrix of eigenvectors, D -1/2 =diag[d -1/2 d 1 , d -1/2 d 2 ,…, d -1/2 d L ]Is a matrix of L × L diagonal eigenvalues, d i Processing the matrix S for the center C Of the covariance matrix of c i Is the corresponding feature vector.
S3.3: setting the unmixing matrix as W, W is an M-dimensional vector, and converting the multi-channel data S M×L The decomposition is the sum of a series of independent constituent variables: u = W × X, non-Gaussian, J, of the signal component is calculated using negative entropy G (W) = [E{G(W T X)}-E{G(V)}] 2 Wherein J G (W) is a non-Gaussian result of a signal component obtained by calculation when a non-linear function G is adopted, the function G (X) = -exp (-X2/2) is selected, V is a Gaussian variable with the same mean value and covariance matrix as X, and a maximum time unmixing matrix W which enables each component to be non-Gaussian is solved by using a Newton method in an iteration mode.
S3.4: the frontal EEG signal is reconstructed from the unmixing matrix,whereinTo satisfy the matrix of independent component variables where the non-gaussian property of each component is the greatest,is the separated forehead electroencephalogram,is the inverse of the unmixing matrix W. The difference signal of the original EOG and the separated forehead EEG is a clean EOG signal.
S4: merging the multichannel EEG signal with the separated multichannel forehead EEG signal, performing linear predictive analysis on the merged whole multichannel EEG signal, solving the LPCC characteristic, and performing characteristic smoothing on the LPCC by adopting an MA smoothing algorithm;
the method comprises the following specific steps:
S4.1: Merge the EEG signal E1 = [p_1, p_2, …, p_{m1}] and the separated forehead EEG signal E2 = [q_1, q_2, …, q_{m2}] into a complete electroencephalogram signal:

E3 = [p_1, p_2, …, p_{m1}, q_1, q_2, …, q_{m2}]

where p_i is an EEG channel signal from the scalp electrodes, m1 is the number of scalp channels, q_i is a forehead EEG channel signal, and m2 is the number of channels corresponding to the forehead electrodes.
S4.2: assume a linear prediction system of order p, i.e. the nth signal sample s (n) can be linearly combined by its previous p samplesTo estimate:
wherein,are assumed to be constants on a segment of the signal sample data frame, which are linear prediction coefficients. Minimizing s (n) andthe linear prediction coefficient at this time is obtained as the prediction error therebetween. In this embodiment, a 14 th order linear prediction system is used.
S4.2 comprises the following specific steps:
S4.2.1: The difference between the actual sample and the predicted sample is called the prediction error:

e(n) = s(n) − ŝ(n) = s(n) − Σ_{i=1}^{p} a_i s(n−i)

Based on the minimum mean square error criterion, the average prediction error is

E{e²(n)} = E{[s(n) − Σ_{i=1}^{p} a_i s(n−i)]²}

To minimize E{e²(n)}, take the partial derivative with respect to each a_i and set it to zero, giving the system of equations

Σ_{i=1}^{p} a_i E{s(n−i) s(n−j)} = E{s(n) s(n−j)},  j = 1, 2, …, p

S4.2.2: The system of linear prediction equations is rewritten in its equivalent form using the autocorrelation function. For convenience of expression, denote

φ_n(j, i) = E{s(n−j) s(n−i)}

so that the equations become

Σ_{i=1}^{p} a_i φ_n(j, i) = φ_n(j, 0),  j = 1, 2, …, p

The autocorrelation function of a sample signal s_n(m) of length N is defined as

R_n(j) = Σ_{m=j}^{N−1} s_n(m) s_n(m−j)

Combining this with the equations of S4.2.1 gives φ_n(j, i) = R_n(j−i). Since R_n(j) is an even function that depends only on the relative offset of i and j, R_n(j−i) = R_n(|j−i|), and the system becomes

Σ_{i=1}^{p} a_i R_n(|j−i|) = R_n(j),  j = 1, 2, …, p

which in matrix form is

| R_n(0)    R_n(1)    …  R_n(p−1) | | a_1 |   | R_n(1) |
| R_n(1)    R_n(0)    …  R_n(p−2) | | a_2 | = | R_n(2) |
|   ⋮         ⋮       ⋱     ⋮     | |  ⋮  |   |   ⋮    |
| R_n(p−1)  R_n(p−2)  …  R_n(0)   | | a_p |   | R_n(p) |

S4.2.3: Solve this Toeplitz system from S4.2.2 with the Levinson-Durbin algorithm to calculate the linear prediction coefficients a_i.
S4.3: According to the defining relation between linear prediction coefficients and linear prediction cepstrum coefficients, the linear prediction coefficients a_i are converted to linear prediction cepstrum coefficients iteratively with the recursion:

c_1 = a_1
c_n = a_n + Σ_{k=1}^{n−1} (k/n) c_k a_{n−k},  1 < n ≤ p
c_n = Σ_{k=n−p}^{n−1} (k/n) c_k a_{n−k},  n > p

where c_n is the nth linear prediction cepstrum coefficient and a_1, …, a_p are the p-order linear prediction coefficients. Infinitely many linear prediction cepstrum coefficients can be obtained from the finite set of linear prediction coefficients, but taking n between 12 and 20 is generally sufficient; in this embodiment n is 14.
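The LPC-to-LPCC recursion of S4.3 can be sketched as follows; the toy coefficient values are illustrative only:

```python
import numpy as np

def lpc_to_lpcc(a, n_ceps):
    """Convert p linear prediction coefficients a[0..p-1] (a_1..a_p)
    into n_ceps cepstral coefficients via the standard recursion."""
    p = len(a)
    c = np.zeros(n_ceps + 1)      # c[0] unused; c[n] is the nth LPCC
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= p else 0.0   # the a_n term drops out for n > p
        for k in range(max(1, n - p), n):
            acc += (k / n) * c[k] * a[n - k - 1]
        c[n] = acc
    return c[1:]

a = np.array([0.5, -0.2, 0.1])    # toy 3rd-order LPC coefficients
ceps = lpc_to_lpcc(a, 5)          # more cepstra than LPCs is allowed
```

By the first step of the recursion, c_1 equals a_1, and coefficients beyond the LPC order are built purely from earlier cepstra.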
S4.4: Smooth the extracted linear prediction cepstrum coefficients with a moving average algorithm: the smoothed coefficient of the xth data frame is the mean, along the time-frame dimension, of all linear prediction cepstrum coefficients inside a window of length win centered on x:

sLPCC_x = (1/win) Σ_{i=x−(win−1)/2}^{x+(win−1)/2} LPCC_i

where LPCC_i is the unsmoothed linear prediction cepstrum coefficient computed on the ith data frame. The smoothing window length win must be odd; in this embodiment it is set to 29.
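The moving-average smoothing of S4.4 might look like the sketch below; how edge frames with incomplete windows are handled is not specified in the text, so this version simply shrinks the window there (an assumption):

```python
import numpy as np

def ma_smooth(feat, win=29):
    """Centred moving-average smoothing along the time-frame axis.
    feat: (n_frames, n_coeffs); win must be odd (29 in the embodiment).
    Edge frames average over the part of the window that fits."""
    assert win % 2 == 1
    half = win // 2
    out = np.empty_like(feat, dtype=float)
    for x in range(feat.shape[0]):
        lo, hi = max(0, x - half), min(feat.shape[0], x + half + 1)
        out[x] = feat[lo:hi].mean(axis=0)
    return out

lpcc = np.arange(20, dtype=float).reshape(10, 2)  # 10 frames, 2 coefficients
smoothed = ma_smooth(lpcc, win=3)
```

On a linear sequence the interior frames are unchanged, which makes the edge behaviour easy to inspect.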
As shown in fig. 5, which plots, from top to bottom, the PERCLOS fatigue index of a subject's data, the power spectrum waterfall of the corresponding data frames, and the MA-smoothed linear prediction power spectrum: compared with the conventional power spectrum features, the spectrum obtained by the proposed linear prediction method tracks the PERCLOS fatigue index label more closely in the alpha (8-14 Hz) band, and the smoothed features correlate with the change pattern of the true label more strongly than the unsmoothed ones.
S5: Apply Mexican hat wavelet transform (MHWT) peak detection with a window length of 8 to the interference-free multi-channel EOG signal, detect and code the positive and negative peaks, and extract the fixation, saccade and blink statistics reflected by the EOG signal;
the method comprises the following specific steps:
S5.1: Perform a Mexican hat continuous wavelet transform on the multichannel clean EOG signal, using the mother wavelet

ψ(t) = (2 / (√(3σ) π^{1/4})) (1 − t²/σ²) e^{−t²/(2σ²)}

where ψ(t) is the wavelet value, t is the time point, e is the base of the natural logarithm, and σ is the standard deviation of the data. The peaks of the transformed eye movement signal are more pronounced.
S5.2: Detect signal peaks with a non-overlapping sliding window of fixed length 8. A positive peak detected in a window is coded as 1, a negative peak as 0, and a window without a peak is directly identified as a fixation feature. Coded segments of "01" or "10" are identified as candidates for a single saccade feature, and an occurrence of "010" as a single blink feature.
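Steps S5.1-S5.2 can be sketched as below: the Mexican hat wavelet sharpens blink and saccade deflections, and non-overlapping windows code the peaks. The detection threshold, spike positions and toy signal are assumptions for illustration, not values from the patent:

```python
import numpy as np

def mexican_hat(t, sigma):
    """Mexican hat (Ricker) mother wavelet psi(t)."""
    a = 2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
    return a * (1.0 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

def code_windows(sig, win=8, thresh=2.0):
    """Scan non-overlapping windows of the transformed signal: emit '1'
    for a positive peak, '0' for a negative peak, nothing for windows
    without a peak (those are fixation candidates)."""
    codes = []
    for i in range(0, len(sig) - win + 1, win):
        seg = sig[i:i + win]
        if seg.max() > thresh:
            codes.append("1")
        elif seg.min() < -thresh:
            codes.append("0")
    return "".join(codes)

t = np.arange(-8, 9, dtype=float)
kernel = mexican_hat(t, sigma=2.0)
eog = np.zeros(64)
eog[20] = 5.0     # upward deflection
eog[44] = -5.0    # downward deflection
transformed = np.convolve(eog, kernel, mode="same")
codes = code_windows(transformed)   # a "10" segment is a saccade candidate
```

Per S5.2, a resulting "10" (or "01") segment would be read as a saccade candidate, and "010" as a blink.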
S5.3: On each data frame, count as saccade features: the number of saccades and the variance, maximum, minimum, mean, power and average power of the saccade amplitude; as blink features: the total blink duration with its mean, maximum and minimum, the blink frequency, and the maximum, minimum, mean, power and average power of the blink amplitude; as fixation features: the fixation duration and the mean, maximum and minimum of the durations. All eye movement features together constitute the feature sequence of the EOG signals.
S6: inputting the characteristics extracted from the electroencephalogram signal and the electro-oculogram signal into a stacking fusion algorithm model for regression training to obtain a fatigue degree prediction result;
the method comprises the following specific steps:
S6.1: Input the EEG features into a Bayesian ridge regression model for training and record the training result as Y1; input the EOG features into a LightGBM (light gradient boosting machine) regression model for training and record the training result as Y2; concatenate the EEG features and the EOG features and input the fused features into a Bayesian ridge regression model and a LightGBM regression model respectively, recording the training results as Y3 and Y4. These operations complete the base regression layer training of the stacking fusion algorithm.
S6.2: Concatenate the results of the base regression layer as the input of the secondary regression layer of the stacking fusion algorithm: NewX = [Y1, Y2, Y3, Y4], where NewX is the input of the secondary regression layer and Y1, Y2, Y3, Y4 are the training results of the primary regression layer. The secondary regression layer is the meta regression layer; it uses a simple linear regression model to avoid overfitting of the fused model, and its output is the final prediction of the subject's fatigue degree by the stacked fusion model.
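The stacking fusion of S6.1-S6.2 might be sketched as follows. Scikit-learn's GradientBoostingRegressor stands in for LightGBM here, the features and labels are synthetic, and for brevity the base models predict on their own training data, whereas a production stacker would use out-of-fold predictions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import BayesianRidge, LinearRegression

rng = np.random.default_rng(0)
n = 300
eeg_feat = rng.standard_normal((n, 14))        # stand-in LPCC features
eog_feat = rng.standard_normal((n, 6))         # stand-in eye-movement stats
y = eeg_feat[:, 0] * 0.8 + eog_feat[:, 0] * 0.5 + rng.normal(0, 0.1, n)
both = np.hstack([eeg_feat, eog_feat])         # serial (concatenated) fusion

# Base regression layer: Y1..Y4 as in S6.1
base_preds = []
for X, model in [(eeg_feat, BayesianRidge()),              # Y1
                 (eog_feat, GradientBoostingRegressor()),  # Y2
                 (both, BayesianRidge()),                  # Y3
                 (both, GradientBoostingRegressor())]:     # Y4
    base_preds.append(model.fit(X, y).predict(X))

# Meta regression layer: plain linear regression to limit overfitting
new_x = np.column_stack(base_preds)            # NewX = [Y1, Y2, Y3, Y4]
meta = LinearRegression().fit(new_x, y)
fatigue_pred = meta.predict(new_x)
```

The simple linear meta-learner mirrors the patent's choice of a low-capacity second layer over the four base predictions.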
The model's performance on fatigue detection is evaluated with the correlation coefficient ρ between the predicted values Ŷ output by the model and the actual PERCLOS fatigue index label values Y:

ρ = Σ_{i=1}^{N} (Y_i − Ȳ)(Ŷ_i − Ŷ̄) / √( Σ_{i=1}^{N} (Y_i − Ȳ)² · Σ_{i=1}^{N} (Ŷ_i − Ŷ̄)² )

where N is the length of the actual PERCLOS fatigue index label sequence Y; Y_i and Ŷ_i are the actual PERCLOS fatigue index label value and the model's fatigue degree prediction for the ith data frame segment, i = 1, 2, …, N; and Ȳ and Ŷ̄ are the means of Y and Ŷ respectively. If the correlation coefficient reaches a preset value the evaluation passes; otherwise it fails.
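The evaluation metric is the Pearson correlation coefficient, which can be written out directly from its definition; the toy label values below are illustrative:

```python
import numpy as np

def pearson_corr(y_true, y_pred):
    """Pearson correlation between PERCLOS labels and fatigue predictions,
    written out term by term as in the evaluation formula."""
    yt = y_true - y_true.mean()
    yp = y_pred - y_pred.mean()
    return float((yt * yp).sum() / np.sqrt((yt ** 2).sum() * (yp ** 2).sum()))

y = np.array([0.1, 0.3, 0.2, 0.6, 0.8])        # toy PERCLOS labels
y_hat = np.array([0.15, 0.25, 0.3, 0.55, 0.75])  # toy model predictions
rho = pearson_corr(y, y_hat)
```

This matches `np.corrcoef(y, y_hat)[0, 1]`, so the built-in routine can be used interchangeably.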
Fig. 6 shows line graphs comparing the predicted fatigue index values with the actual label values for one subject under the EEG single modality, the EOG single modality and the EEG-EOG mixed modality; the predictions of the mixed modality used in the invention are closer to the actual label values than those of either single modality. Fig. 7 shows the correlation coefficients between predicted and actual fatigue levels for the data of 23 subjects under the three modalities. The mixed modality performs better on almost all subjects' data, indicating that the proposed method improves the accuracy of fatigue detection and yields a model with stronger generalization capability.
The present invention has been described above with a specific case data set for ease of understanding only and is not intended to be limiting. A person skilled in the art may make several simple deductions, modifications or substitutions according to the idea of the invention; the foregoing is only an example and does not limit the scope of the invention, which is defined by the appended claims.
Details not described in this specification are well known to those skilled in the art.
Claims (10)
1. A mixed modal fatigue detection method based on linear predictive analysis and stacking fusion is characterized in that: the method comprises the following steps:
S1: collecting EEG signals and EOG signals of a detected person;
S2: processing the EEG signal and the EOG signal through band-pass filters to obtain a multi-channel EEG signal and a multi-channel EOG signal;
S3: performing blind source separation based on the fast independent component analysis (FastICA) algorithm on the multichannel EOG signal, decomposing the original multichannel EOG signal into a multichannel forehead EEG signal and a multichannel pure EOG signal;
S4: merging the multichannel EEG signal and the multichannel forehead EEG signal, performing linear prediction analysis on the merged multichannel EEG signal to solve the linear prediction cepstrum coefficients, and smoothing the coefficients with a moving average algorithm;
S5: detecting peaks in the multichannel pure EOG signal by Mexican hat continuous wavelet transform, coding positive and negative peaks, and extracting the fixation, saccade and blink statistical features reflected by the EOG signal from the coded sequence;
S6: inputting the features extracted from the EEG signal and the EOG signal in steps S4 and S5 into a stacking fusion algorithm model for regression training to obtain a fatigue degree prediction result.
2. The method according to claim 1, wherein the method comprises the following steps: in the step S1, the EEG signals are collected from temporal lobe and occipital lobe of brain of the detected person, and the eye electric EOG signals are collected at forehead position above eyes of the detected person.
3. The method according to claim 1, wherein the method comprises the following steps: in step S2, after band-pass filtering, a rectangular window function is adopted to divide the EEG signal and the EOG signal into non-overlapping data frame segments.
4. The method according to claim 1, wherein the method comprises the following steps: in step S3, the blind source separation specifically includes:
S3.1: centralizing the data of each EOG signal acquisition channel: recording the acquired original multi-channel EOG data matrix as S_{M×L}, where M is the number of channels and L is the signal length, and subtracting the corresponding channel (row) mean from each element of S_{M×L} to obtain the centered matrix S_C;
S3.2: whitening the mean-removed centered matrix S_C: X = E D^{-1/2} E^T S_C, where X is the whitened data matrix, E = [c_1, c_2, …, c_M] is the matrix of eigenvectors of the covariance matrix of S_C, D^{-1/2} = diag[d_1^{-1/2}, d_2^{-1/2}, …, d_M^{-1/2}] is the diagonal matrix built from the corresponding eigenvalues d_i, and c_i is the eigenvector corresponding to d_i, i = 1, 2, …, M;
S3.3: setting the unmixing matrix as W, an M-dimensional vector per component, and decomposing the multi-channel data S_{M×L} into a sum of independent component variables: U = W X; measuring the non-Gaussianity of a signal component with negentropy J_G(W) = [E{G(W^T X)} − E{G(V)}]^2, where J_G(W) is the non-Gaussianity computed with a non-linear function G and V is a Gaussian variable with the same mean and covariance matrix as X; iteratively solving for the unmixing matrix W that maximizes the non-Gaussianity of each component by Newton's method;
S3.4: reconstructing the forehead EEG signal from the unmixing matrix: Ŝ = W^{-1} U, where U is the matrix of independent component variables at which the non-Gaussianity of each component is maximal, Ŝ is the separated forehead EEG, and W^{-1} is the inverse of the unmixing matrix W; the difference between the original EOG signal and the separated forehead EEG is the clean EOG signal.
5. The method according to claim 1, wherein the method comprises the following steps: the specific steps of step S4 include:
S4.1: merging the EEG signal E1 = [p_1, p_2, …, p_{m1}] with the separated forehead EEG signal E2 = [q_1, q_2, …, q_{m2}] into a complete electroencephalogram signal:
E3 = [p_1, p_2, …, p_{m1}, q_1, q_2, …, q_{m2}]
where p_i is an EEG channel signal from the scalp electrodes, m1 is the number of scalp channels, q_i is a forehead EEG channel signal, and m2 is the number of channels corresponding to the forehead electrodes;
S4.2: assuming a linear prediction system of order p, i.e. the nth signal sample s(n) is estimated by a linear combination of its previous p samples:
ŝ(n) = Σ_{i=1}^{p} a_i s(n−i)
where a_i (i = 1, 2, …, p) are the linear prediction coefficients; the coefficients are obtained by minimizing the prediction error between s(n) and ŝ(n);
S4.3: according to the defining relation between linear prediction coefficients and linear prediction cepstrum coefficients, converting the linear prediction coefficients a_i into linear prediction cepstrum coefficients iteratively with the recursion:
c_1 = a_1
c_n = a_n + Σ_{k=1}^{n−1} (k/n) c_k a_{n−k},  1 < n ≤ p
c_n = Σ_{k=n−p}^{n−1} (k/n) c_k a_{n−k},  n > p
where c_n is the nth linear prediction cepstrum coefficient and a_1, …, a_p are the p-order linear prediction coefficients; infinitely many linear prediction cepstrum coefficients can be obtained from the finite set of linear prediction coefficients;
S4.4: smoothing the extracted linear prediction cepstrum coefficients with a moving average algorithm: the smoothed coefficient sLPCC_x of the xth data frame is the mean, along the time-frame dimension, of all linear prediction cepstrum coefficients in a window of length win centered on x:
sLPCC_x = (1/win) Σ_{i=x−(win−1)/2}^{x+(win−1)/2} LPCC_i.
6. The method according to claim 5, wherein the method comprises the following steps: the specific steps at step S4.2 include:
S4.2.1: based on the minimum mean square error criterion, letting the average prediction error be
E{e²(n)} = E{[s(n) − Σ_{i=1}^{p} a_i s(n−i)]²}
and, to minimize E{e²(n)}, taking the partial derivative with respect to each a_i and setting it to zero, obtaining the system of equations
Σ_{i=1}^{p} a_i E{s(n−i) s(n−j)} = E{s(n) s(n−j)},  j = 1, 2, …, p;
S4.2.2: rewriting the linear prediction equation system to its equivalent autocorrelation form:
Σ_{i=1}^{p} a_i R_n(|j−i|) = R_n(j),  j = 1, 2, …, p,  with  R_n(j) = Σ_{m=j}^{N−1} s_n(m) s_n(m−j)
where R_n(j) represents the autocorrelation value calculated between the nth data frame segment and the same segment delayed by j data points, m is the sample index within the nth data frame segment, and N is the length of the data frame segment; the matrix form is
| R_n(0)    R_n(1)    …  R_n(p−1) | | a_1 |   | R_n(1) |
| R_n(1)    R_n(0)    …  R_n(p−2) | | a_2 | = | R_n(2) |
|   ⋮         ⋮       ⋱     ⋮     | |  ⋮  |   |   ⋮    |
| R_n(p−1)  R_n(p−2)  …  R_n(0)   | | a_p |   | R_n(p) |
S4.2.3: solving the linear prediction equation system by adopting the Levinson-Durbin algorithm to calculate the linear prediction coefficients a_i.
7. The method according to claim 1, wherein the method comprises the following steps: the specific steps of step S5 include:
S5.1: performing a Mexican hat continuous wavelet transform on the multichannel clean EOG signal, using the mother wavelet
ψ(t) = (2 / (√(3σ) π^{1/4})) (1 − t²/σ²) e^{−t²/(2σ²)}
where ψ(t) is the wavelet value, t is the time point, e is the base of the natural logarithm, and σ is the standard deviation of the data;
S5.2: detecting signal peaks with a non-overlapping sliding window of fixed length D; coding a positive peak detected in a window as 1 and a negative peak as 0, identifying a window without a peak as a fixation feature, identifying coded segments of "01" or "10" as candidates for a single saccade feature, and identifying an occurrence of "010" as a single blink feature;
s5.3: counting the number of saccades, the variance of the saccades, the maximum value, the minimum value, the average value, the power and the average power of the saccade amplitude on each data frame as the saccade electro-ocular characteristics; the total blink duration, the average value, the maximum value, the minimum value, the blink frequency, the maximum value, the minimum value, the average value, the power and the average power of the blink amplitude on each data frame are used as the blink eye movement characteristics; the total duration of the fixation, the average of the durations, the maximum and the minimum on each data frame are used as eye movement characteristics of the fixation.
8. The method according to claim 1, wherein the method comprises the following steps: the specific steps of step S6 include:
S6.1: inputting the EEG features into a Bayesian ridge regression model for training and recording the training result as Y1; inputting the EOG features into a LightGBM (light gradient boosting machine) regression model for training and recording the training result as Y2; concatenating the EEG features and the EOG features and inputting the fused features into a Bayesian ridge regression model and a LightGBM regression model respectively, recording the training results as Y3 and Y4, which completes the base regression layer training of the stacking fusion algorithm;
s6.2: connecting the training results of the base regression layers in parallel, and using the training results as input NewX = [ Y1, Y2, Y3, Y4] of a secondary regression layer of the stacked fusion algorithm, wherein NewX is input of the secondary regression layer, and Y1, Y2, Y3, Y4 are training results of a primary regression layer;
s6.3: and (4) evaluating the predicted value output by the model, finishing training if the evaluation is passed, and returning to the step S6.1 if the evaluation is not passed.
9. The method according to claim 8, wherein the method comprises the following steps: the predicted value Ŷ output by the model is evaluated with the correlation coefficient ρ between Ŷ and the actual PERCLOS fatigue index label value Y, which measures the model's performance on fatigue detection; the correlation coefficient formula is:
ρ = Σ_{i=1}^{N} (Y_i − Ȳ)(Ŷ_i − Ŷ̄) / √( Σ_{i=1}^{N} (Y_i − Ȳ)² · Σ_{i=1}^{N} (Ŷ_i − Ŷ̄)² )
where N is the length of the actual PERCLOS fatigue index label sequence Y; Y_i and Ŷ_i are respectively the actual PERCLOS fatigue index label value and the model's fatigue degree prediction for the ith data frame segment, i = 1, 2, …, N; Ȳ and Ŷ̄ are respectively the means of Y and Ŷ;
and if the correlation coefficient reaches a preset value, the evaluation is passed, otherwise, the evaluation is failed.
10. A computer-readable storage medium, storing a computer program, characterized in that the computer program, when being executed by a processor, carries out the method of any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310046600.0A CN115778390B (en) | 2023-01-31 | 2023-01-31 | Mixed mode fatigue detection method based on linear prediction analysis and stacking fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115778390A true CN115778390A (en) | 2023-03-14 |
CN115778390B CN115778390B (en) | 2023-05-26 |
Family
ID=85429314
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101599127A (en) * | 2009-06-26 | 2009-12-09 | 安徽大学 | The feature extraction of electro-ocular signal and recognition methods |
CN109512442A (en) * | 2018-12-21 | 2019-03-26 | 杭州电子科技大学 | A kind of EEG fatigue state classification method based on LightGBM |
CN113080986A (en) * | 2021-05-07 | 2021-07-09 | 中国科学院深圳先进技术研究院 | Method and system for detecting exercise fatigue based on wearable equipment |
US20220022805A1 (en) * | 2020-07-22 | 2022-01-27 | Eysz Inc. | Seizure detection via electrooculography (eog) |
CN114246593A (en) * | 2021-12-15 | 2022-03-29 | 山东中科先进技术研究院有限公司 | Electroencephalogram, electrooculogram and heart rate fused fatigue detection method and system |
KR20220063952A (en) * | 2020-11-11 | 2022-05-18 | 라이트하우스(주) | System for preventing drowsy driving based on brain engineering |
Non-Patent Citations (2)
Title |
---|
S. Pazhanirajan: "EEG Signal Classification using Linear Predictive Cepstral Coefficient Features" * |
Huang Yakang: "Driving fatigue state detection based on EEG signals and EOG signals" * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||