CN115778390A - Mixed modal fatigue detection method based on linear prediction analysis and stacking fusion - Google Patents

Mixed modal fatigue detection method based on linear prediction analysis and stacking fusion

Info

Publication number
CN115778390A
Authority
CN
China
Prior art keywords
signal
eeg
eog
channel
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310046600.0A
Other languages
Chinese (zh)
Other versions
CN115778390B (en)
Inventor
陈昆
刘智勇
马力
刘泉
艾青松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202310046600.0A priority Critical patent/CN115778390B/en
Publication of CN115778390A publication Critical patent/CN115778390A/en
Application granted granted Critical
Publication of CN115778390B publication Critical patent/CN115778390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion, comprising the steps of: collecting EEG and EOG signals; performing blind source separation based on the fast independent component analysis algorithm to obtain a multi-channel forehead EOG signal and a multi-channel pure EOG signal; performing linear predictive analysis to solve for the linear prediction cepstral coefficients; performing a Mexican hat continuous wavelet transform to detect peaks, encoding the positive and negative peaks, and extracting from the coded sequence the statistical features of fixation, saccade and blink reflected by the EOG signal; and performing regression training to obtain a fatigue level prediction result. Building on an exploration of how EEG and EOG physiological signals reflect the fatigue state, the invention studies EEG/EOG signal processing, feature extraction and regression training methods for fatigue detection, makes full use of the fatigue-related feature information carried by the EEG and EOG, and effectively improves the accuracy of fatigue detection based on the mixed EEG-EOG modality.

Description

Mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion

Technical field

The invention relates to the technical field of EEG signal processing, and in particular to a mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion.

Background art

Mental fatigue is a state of reduced mental capacity or efficiency caused by prolonged or overly intense mental activity. It is one of the main causes of many safety accidents, especially for practitioners who must remain continuously vigilant, such as car and airplane drivers: mental fatigue leads to insufficient attention, slowed reactions and a reduced ability to cope with uncontrollable hazardous events, increasing the potential driving risk. Effectively detecting a worker's fatigue state and giving early warning of overloaded, dangerous work behaviour is therefore a highly meaningful research topic.

Current methods for assessing the state of mental fatigue fall into two main categories: detection methods based on subjective surveys and objective detection methods based on physiological signals. Subjective survey methods assess the subject's degree of fatigue through self-judgement or judgement by others, in the form of questionnaires (such as the Chalder Fatigue Scale or the Fatigue Scale-14). Although simple to administer, this approach is too subjective and lacks real-time capability, so it cannot accurately determine the subject's fatigue level. Objective detection methods based on physiological signals collect electroencephalogram (EEG), electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG) and similar signals from the subject, study how these signals change as fatigue develops, and extract fatigue-related signal features that reflect the level of mental fatigue. Although such methods place specific requirements on the acquisition equipment and involve complex processing, the resulting fatigue indicators are objective, more accurate and available in real time, giving them broader application prospects. EEG signals are collected from the scalp surface and directly record the neurophysiological activity related to alertness; they are highly objective and real-time indicators of mental state and are generally considered a reliable means of estimating alertness. EOG signals contain eyelid and eyeball movement information, and people who are fatigued usually and unconsciously blink less often and move their eyes less, so analysing the EOG signal to extract eye-movement features is one of the most commonly used and effective means of fatigue detection.

Although EEG-based fatigue detection indicators are objective and have high temporal resolution, EEG signals vary greatly between subjects: the features that reflect the same fatigue level may differ so much across subjects that the resulting model generalizes poorly and cannot be applied broadly across a population. The drawback of EOG-based fatigue detection is its poor temporal resolution, because the features reflected by the EOG are statistically meaningful only over a relatively long data segment, which means that EOG-based detection performs poorly in the early stages of a subject's fatigue. The accuracy of existing physiological-signal-based fatigue detection algorithms therefore urgently needs to be improved.

Summary of the invention

To solve the above technical problems, the present invention proposes a mixed-modal fatigue detection method based on linear prediction cepstral coefficients (LPCC) and stacking fusion, which uses a stacking fusion algorithm to fuse the EEG modality with the EOG modality, makes full use of the information in the EEG and EOG that reflects physiological fatigue, and improves the accuracy of fatigue detection based on mixed EEG-EOG modality signals.

To achieve the above purpose, the mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion designed by the present invention is distinguished in that the method comprises the following steps:

S1: collect the subject's EEG signal and EOG signal;

S2: process the EEG signal and the EOG signal with band-pass filters to obtain multi-channel EEG and EOG signals;

S3: perform blind source separation based on the fast independent component analysis (FastICA) algorithm on the multi-channel EOG signal, decomposing the original multi-channel electrooculogram signal into a multi-channel forehead EOG signal and a multi-channel pure EOG signal;

S4: merge the multi-channel EEG signal with the multi-channel forehead EOG signal, perform linear predictive analysis on the whole merged multi-channel EEG signal, solve for the linear prediction cepstral coefficients, and smooth the linear prediction cepstral coefficients with a moving average (MA) algorithm;

S5: perform the Mexican hat continuous wavelet transform (MHWT) on the multi-channel pure EOG signal to detect peaks, encode the positive and negative peaks, and extract from the coded sequence the statistical features of fixation, saccade and blink reflected by the electrooculogram signal;

S6: input the features respectively extracted from the EEG signal and the EOG signal in the preceding steps into the stacking fusion algorithm model for regression training to obtain the fatigue level prediction result.

Preferably, in step S1, the EEG signal is collected from the temporal and occipital lobes of the subject's brain, and the EOG signal is collected from the forehead position above the subject's eyes.

Preferably, in step S2, the band-pass filtering uses a rectangular window function to divide the EEG signal and the EOG signal into non-overlapping data frame segments.

Preferably, in step S3, the specific steps of blind source separation include:

S3.1: centre the data of each EOG acquisition channel; denote the collected original multi-channel EOG data matrix as S_{M×L}, where M is the number of channels and L is the signal length; subtract from every element of S_{M×L} the mean of its row to obtain the centred matrix S_C;

S3.2: whiten the multi-channel, mean-removed matrix S_C: X = ED^{-1/2}E^T S_C, where X is the whitened data matrix, E = [c_1, c_2, …, c_L] is the M×L eigenvector matrix, D^{-1/2} = diag[d_1^{-1/2}, d_2^{-1/2}, …, d_L^{-1/2}] is the L×L diagonal eigenvalue matrix, d_i is the i-th eigenvalue of the covariance matrix of the centred matrix S_C, and c_i is the corresponding eigenvector, i = 1, 2, …, L;

S3.3: let the unmixing matrix be W, an M-dimensional vector, and decompose the multi-channel data S_{M×L} into a sum of independent component variables: U = W×X; use negentropy to measure the non-Gaussianity of the signal components, J_G(W) = [E{G(W^T X)} − E{G(V)}]^2, where J_G(W) is the non-Gaussianity of the signal components computed with the nonlinear function G, and V is a Gaussian variable with the same mean and covariance matrix as X; use Newton's method to iteratively solve for the unmixing matrix W that maximizes the non-Gaussianity of each component;

S3.4: reconstruct the forehead EEG signal from the unmixing matrix,

$$\hat{S} = W^{-1}\hat{U}$$

where $\hat{U}$ is the matrix of independent component variables that maximizes the non-Gaussianity of each component, $\hat{S}$ is the separated forehead EEG signal, and $W^{-1}$ is the inverse of the unmixing matrix W; the difference between the original EOG signal and the separated forehead EEG is the pure EOG signal.

Preferably, the specific steps of step S4 include:

S4.1: merge the EEG signal E1 = [p_1, p_2, …, p_{m1}] and the separated forehead EOG signal E2 = [q_1, q_2, …, q_{m2}] into a complete EEG signal:

E3 = [p_1, p_2, …, p_{m1}, q_1, q_2, …, q_{m2}];

where p_i is an EEG channel signal from the head, m1 is the number of channels corresponding to the head electrodes, q_i is a forehead EEG channel signal, and m2 is the number of channels corresponding to the forehead electrodes;

S4.2: assume a p-th order linear prediction system, i.e. the n-th signal sample s(n) is estimated by a linear combination $\hat{s}(n)$ of its previous p samples:

$$\hat{s}(n) = \sum_{i=1}^{p} a_i\, s(n-i)$$

where the coefficients $a_i$ are constants over a signal data frame segment, called the p-th order linear prediction coefficients; minimize the prediction error between s(n) and $\hat{s}(n)$ to obtain the linear prediction coefficients;

S4.3: according to the defining relationship between the linear prediction coefficients and the linear prediction cepstral coefficients, convert the linear prediction coefficients $a_i$ iteratively into the linear prediction cepstral coefficients $c_n$ with the following recursion:

$$c_n = \begin{cases} a_1, & n = 1 \\[4pt] a_n + \sum\limits_{k=1}^{n-1} \frac{k}{n}\, c_k\, a_{n-k}, & 1 < n \le p \\[4pt] \sum\limits_{k=n-p}^{n-1} \frac{k}{n}\, c_k\, a_{n-k}, & n > p \end{cases}$$

where $c_n$ denotes the n-th linear prediction cepstral coefficient and $a_i$ are the p-th order linear prediction coefficients; an unlimited number of linear prediction cepstral coefficients can be obtained from the finite set of linear prediction coefficients;

S4.4: smooth the extracted linear prediction cepstral coefficients with a moving average algorithm, i.e. the smoothed linear prediction cepstral coefficients sLPCC_x of the x-th data frame are computed by averaging, along the time-frame dimension, all linear prediction cepstral coefficients within a smoothing window of length win centred at x.
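
As a concrete illustration of steps S4.3-S4.4, the sketch below converts a set of LPC coefficients into cepstral coefficients with the recursion above and applies a centred moving average over the frame axis. It is a minimal sketch in Python; the function names, the convolution-based smoothing and its edge handling are assumptions rather than the patent's prescribed implementation.

```python
# Sketch of S4.3 (LPC -> LPCC recursion) and S4.4 (moving-average smoothing).
import numpy as np

def lpc_to_lpcc(a, n_ceps):
    """a: LPC coefficients a_1..a_p (1-based in the text, 0-based here);
    returns c_1..c_{n_ceps} following the recursion in S4.3."""
    p = len(a)
    c = np.zeros(n_ceps)
    c[0] = a[0]                                   # c_1 = a_1
    for n in range(1, n_ceps):                    # c[n] holds c_{n+1}
        acc = sum((k + 1) / (n + 1) * c[k] * a[n - k - 1]
                  for k in range(max(0, n - p), n))
        c[n] = acc + (a[n] if n < p else 0.0)
    return c

def smooth_lpcc(lpcc_frames, win=29):
    """lpcc_frames: (n_frames, n_ceps); centred moving average over frames (win odd)."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"),
                               0, lpcc_frames)
```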

Preferably, the specific steps of step S4.2 include:

S4.2.1: the difference between the actual sample and the predicted sample is called the prediction error, expressed as:

$$e(n) = s(n) - \hat{s}(n) = s(n) - \sum_{i=1}^{p} a_i\, s(n-i)$$

Based on the minimum mean-square error criterion, let the average prediction error be:

$$E\{e^{2}(n)\} = E\Big\{\Big[s(n) - \sum_{i=1}^{p} a_i\, s(n-i)\Big]^{2}\Big\}$$

To minimize E{e^2(n)}, take the partial derivative with respect to each a_i and set it to zero, giving the system of equations:

$$E\Big\{\Big[s(n) - \sum_{i=1}^{p} a_i\, s(n-i)\Big]\, s(n-j)\Big\} = 0,\quad j = 1, 2, \dots, p;$$

S4.2.2: rewrite the linear prediction equations in their equivalent form in terms of the autocorrelation function:

$$R_n(j) = \sum_{m=j}^{N-1} s_n(m)\, s_n(m-j)$$

$$\sum_{i=1}^{p} a_i\, R_n(|i-j|) = R_n(j),\quad j = 1, 2, \dots, p$$

where R_n(j) is the autocorrelation computed between the n-th data frame segment and the same segment delayed by j data points, m indexes the samples of the n-th data frame segment, and N is the length of the data frame segment;

in matrix form:

$$\begin{bmatrix} R_n(0) & R_n(1) & \cdots & R_n(p-1) \\ R_n(1) & R_n(0) & \cdots & R_n(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_n(p-1) & R_n(p-2) & \cdots & R_n(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} R_n(1) \\ R_n(2) \\ \vdots \\ R_n(p) \end{bmatrix};$$

S4.2.3: solve the linear prediction equations with the Levinson-Durbin algorithm to compute the linear prediction coefficients a_i.
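
A minimal sketch of steps S4.2.1-S4.2.3 for one windowed data frame follows. It relies on scipy's Toeplitz solver (which uses a Levinson-type recursion internally) rather than a hand-written Levinson-Durbin loop; the prediction order and function name are illustrative assumptions.

```python
# Sketch of the autocorrelation method of S4.2: build R_n(0..p) and solve the
# Toeplitz system [R_n(|i-j|)] a = [R_n(1)..R_n(p)].
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(frame, order=14):
    frame = np.asarray(frame, dtype=float)
    # Autocorrelation values R_n(0) .. R_n(order) over the frame
    r = np.array([np.dot(frame[: len(frame) - j], frame[j:]) for j in range(order + 1)])
    # First column of the symmetric Toeplitz matrix is [R_n(0) .. R_n(order-1)]
    a = solve_toeplitz(r[:order], r[1: order + 1])
    return a   # a_1 .. a_p
```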

Preferably, the specific steps of step S5 include:

S5.1: apply the Mexican hat continuous wavelet transform to the multi-channel pure EOG signal:

$$\psi(t) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}}\left(1 - \frac{t^{2}}{\sigma^{2}}\right) e^{-\frac{t^{2}}{2\sigma^{2}}}$$

where ψ(t) is the wavelet transform result, t is the time-point index, e is the base of the natural logarithm, and σ is the standard deviation of the data;

S5.2: with a sliding window of fixed length D, use a peak-finding algorithm over non-overlapping windows to detect signal peaks; encode a positive peak detected in a window as 1 and a negative peak as 0; a window with no peak is identified as a fixation feature, a segment of 01 or 10 is identified as a candidate for a saccade feature, and an occurrence of 010 is identified as a blink feature;

S5.3: for each data frame, count the number of saccades and compute the variance of the saccade count and the maximum, minimum, mean, power and average power of the saccade amplitude as saccade EOG features; take the total blink duration per data frame, the mean, maximum and minimum duration, the number of blinks, and the maximum, minimum, mean, power and average power of the blink amplitude as blink eye-movement features; and take the total fixation duration per data frame and the mean, maximum and minimum duration as fixation eye-movement features.
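
The sketch below illustrates one possible realisation of steps S5.1-S5.2: convolving a pure-EOG frame with the Mexican hat wavelet defined above and coding non-overlapping windows by the sign of detected peaks. The wavelet support, the peak-height threshold and the default parameter values are assumptions for illustration only.

```python
# Sketch of S5.1-S5.2: Mexican hat transform, then per-window peak coding.
import numpy as np
from scipy.signal import find_peaks

def mexican_hat(t, sigma):
    return (2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
            * (1.0 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2.0 * sigma ** 2)))

def encode_eog_frame(frame, win=8, sigma=16.0):
    """Return the '1'/'0'/'f' code string for one pure-EOG frame
    (win: window length D; sigma: wavelet scale, an assumed value)."""
    support = np.arange(-4 * sigma, 4 * sigma + 1)
    transformed = np.convolve(frame, mexican_hat(support, sigma), mode="same")
    thr = np.std(transformed)                     # assumed peak-height threshold
    codes = []
    for start in range(0, len(transformed) - win + 1, win):
        seg = transformed[start:start + win]
        pos, _ = find_peaks(seg, height=thr)
        neg, _ = find_peaks(-seg, height=thr)
        codes.append("1" if len(pos) else ("0" if len(neg) else "f"))
    code = "".join(codes)
    n_blinks = code.count("010")                  # '010' pattern -> one blink (S5.2)
    return code, n_blinks
```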

Preferably, the specific steps of step S6 include:

S6.1: input the EEG features into a Bayesian ridge regression model for training and record the training result as Y1; input the EOG features into a light gradient boosting machine (LightGBM) regression model for training and record the training result as Y2; concatenate the EEG features and the EOG features, input the fused features into a Bayesian ridge regression model and a LightGBM regression model respectively, and record the training results as Y3 and Y4, completing the training of the base regression layer of the stacking fusion algorithm;

S6.2: concatenate the results of the base regression layer as the input of the secondary regression layer of the stacking fusion algorithm, NewX = [Y1, Y2, Y3, Y4], where NewX is the input of the secondary regression layer and Y1, Y2, Y3, Y4 are the training results of the primary regression layer; the secondary regression layer, i.e. the meta-regression layer, uses a simple linear regression model to avoid overfitting of the fused model;

S6.3: evaluate the predicted values output by the model; if the evaluation passes, training is complete; if it fails, return to step S6.1.
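
A minimal sketch of the stacking structure of step S6 is shown below, assuming scikit-learn and the lightgbm package are available. It fits the four base regressors and a linear meta-regressor directly on the training predictions; in practice out-of-fold base predictions would normally be fed to the meta layer, and the function names are illustrative.

```python
# Sketch of S6.1-S6.2: base layer (BayesianRidge on EEG, LightGBM on EOG,
# both models on the concatenated features) and a linear meta-regression layer.
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression
from lightgbm import LGBMRegressor

def fit_stacking(eeg_feat, eog_feat, y):
    both = np.hstack([eeg_feat, eog_feat])
    base = [(BayesianRidge(), eeg_feat),   # -> Y1
            (LGBMRegressor(), eog_feat),   # -> Y2
            (BayesianRidge(), both),       # -> Y3
            (LGBMRegressor(), both)]       # -> Y4
    preds = []
    for model, X in base:
        model.fit(X, y)
        preds.append(model.predict(X))
    new_x = np.column_stack(preds)         # NewX = [Y1, Y2, Y3, Y4]
    meta = LinearRegression().fit(new_x, y)
    return [m for m, _ in base], meta

def predict_stacking(base_models, meta, eeg_feat, eog_feat):
    both = np.hstack([eeg_feat, eog_feat])
    inputs = [eeg_feat, eog_feat, both, both]
    new_x = np.column_stack([m.predict(X) for m, X in zip(base_models, inputs)])
    return meta.predict(new_x)             # predicted fatigue index
```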

Preferably, the predicted values output by the model are evaluated using the correlation coefficient between the model's predicted values $\hat{Y}$ and the actual PERCLOS fatigue index label values Y, which measures the model's fatigue detection performance; the correlation coefficient is:

$$\rho = \frac{\sum_{i=1}^{N}\big(Y_i-\bar{Y}\big)\big(\hat{Y}_i-\bar{\hat{Y}}\big)}{\sqrt{\sum_{i=1}^{N}\big(Y_i-\bar{Y}\big)^{2}}\;\sqrt{\sum_{i=1}^{N}\big(\hat{Y}_i-\bar{\hat{Y}}\big)^{2}}}$$

where N is the length of the actual PERCLOS fatigue index label vector Y; $Y_i$ and $\hat{Y}_i$ are, respectively, the actual PERCLOS fatigue index label value of the i-th data frame segment and the model's predicted fatigue level, i = 1, 2, …, N; $\bar{Y}$ and $\bar{\hat{Y}}$ are, respectively, the mean of the actual PERCLOS fatigue index label values Y and the mean of the model's predicted values $\hat{Y}$;

the evaluation passes if the correlation coefficient reaches a preset value, and fails otherwise.
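
The evaluation metric above is the ordinary Pearson correlation; a small sketch follows, in which the pass threshold argument is an assumed preset value, not one stated in the text.

```python
# Sketch of the evaluation in S6.3: Pearson correlation between predictions and PERCLOS labels.
import numpy as np

def correlation(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    yt, yp = y_true - y_true.mean(), y_pred - y_pred.mean()
    return np.sum(yt * yp) / np.sqrt(np.sum(yt ** 2) * np.sum(yp ** 2))

def evaluation_passes(y_true, y_pred, threshold=0.7):   # threshold is an assumed preset value
    return correlation(y_true, y_pred) >= threshold
```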

The invention further provides a computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion described above is implemented.

With the rise of brain-computer interface (BCI) technology, using EEG signals to detect the state of mental fatigue has received sustained attention from researchers. EEG signals are collected directly from the scalp surface and have the advantages of strong objectivity and high real-time capability for indicating mental state. Because the EEG directly records the neurophysiological signals associated with alertness, it is generally regarded as a reliable means of estimating alertness. EEG-based fatigue detection reflects how brain activity changes as fatigue develops and has high temporal resolution; if the EEG and EOG modalities are used in combination, the advantages of each single modality can be fused to achieve better detection results. On this basis the present invention proposes a mixed EEG-EOG fatigue detection method based on linear prediction cepstral coefficients and a stacking fusion algorithm.

In the mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion proposed by the invention, the EEG and EOG signals collected for fatigue detection are first band-pass filtered and divided into non-overlapping data frames with a rectangular window function; the FastICA blind source separation algorithm then decomposes the original EOG into forehead EEG and pure EOG, and the separated EEG is merged with the EEG collected from the head into a complete EEG signal; linear predictive analysis is applied to the whole EEG signal, and the Levinson-Durbin algorithm, based on the autocorrelation idea, computes the linear prediction cepstral coefficients as the EEG features; MHWT peak finding is performed on the EOG signal, the positive and negative peaks are encoded, the saccade, blink and fixation eye-movement events reflected by the EOG are identified from the coded sequence, and the statistical parameters of these eye-movement events are computed as the EOG features; the resulting features are input into the stacking fusion algorithm model, where the base regression layer uses a Bayesian ridge regression model for the EEG features, a light gradient boosting machine regression model for the EOG features, and both a Bayesian ridge regression model and a light gradient boosting machine regression model for the concatenated EEG-EOG features, and the weight coefficients of the base-layer models are obtained by training the secondary regression layer, i.e. the meta-regression layer. The meta-regression layer uses a linear regression model that takes the predictions of the base regression layer as input and, after training, outputs the final fatigue index prediction of the stacked hybrid model.

The present invention has the following beneficial effects:

1) The linear prediction method adopted by the invention reflects well the dependence between earlier and later parts of the signal; since fatigue, as a gradually changing physiological state, accumulates dynamically over time, the linear prediction approach captures well the temporally correlated fatigue characteristics.

2) The invention computes the LPC using the linear prediction idea and then further converts it into the LPCC, because features commonly used in EEG fatigue detection, such as the power spectral density (PSD) and differential entropy (DE), are usually expressed on a logarithmic scale in dB. According to the Wiener-Khinchin theorem, the autocorrelation sequence of a signal can be obtained by taking the inverse Fourier transform of its power spectrum; if the logarithm of the power spectrum is taken before the inverse Fourier transform, the cepstrum of the signal is obtained. The cepstral coefficient (LPCC) feature is therefore the linear prediction feature corresponding to the logarithmic measure, and it represents the features more accurately.

3) The invention smooths the extracted EEG features with the MA smoothing algorithm. Since the EEG is less stationary than the EOG and changes rapidly, while the EEG features that reflect fatigue vary dynamically and gradually, smoothing filters out the fatigue-related EEG features.

4) The invention adopts a stacking fusion algorithm model: in the base regression layer, the EEG modality uses a Bayesian ridge regression model suited to EEG-based fatigue detection, the EOG modality uses a light gradient boosting machine regression model suited to EOG-based fatigue detection, and the concatenated EEG-EOG features use both a Bayesian ridge regression model and a light gradient boosting machine regression model; the task of training the model weights is handed to the meta-regression layer, so the fatigue-related feature information in the EEG and EOG can be exploited comprehensively and fully.

5) In the stacking fusion algorithm model used by the invention, the base regression layer trains regressors that differ considerably from each other and are relatively complex, which guarantees the depth and breadth with which the EEG and EOG features are exploited, while the meta-regression layer uses a simple linear regression model to learn the weights of the individual base regressors, preventing overfitting; the final fused model combines the advantages of the models obtained from each modality and has better generalization ability.

Description of the drawings

Figure 1 illustrates the EEG signal acquisition electrode positions.

Figure 2 illustrates the EOG signal acquisition electrode positions.

Figure 3 shows the implementation steps of the method proposed by the invention.

Figure 4 shows the processing flow of the method proposed by the invention.

Figure 5 compares the linear prediction power spectrum with the fatigue index labels.

Figure 6 is a line chart comparing, in one example, the fatigue index values predicted by the single-modal and mixed-modal methods with the actual label values.

Figure 7 is a line chart of the correlation coefficients of the models obtained with the single-modal and mixed-modal methods on each subject's data.

Detailed description of embodiments

To make the technical problems to be solved, the technical solutions and the advantages of the invention clearer, the processing of the proposed method in a specific implementation case is described in detail below with reference to the accompanying drawings.

It should be noted that the following embodiments are only illustrative, and the protection scope of the invention is not limited by them.

This embodiment uses a mixed EEG-EOG data set composed of data from 23 subjects. While the EEG and EOG were being collected, an eye tracker recorded eye activity, which is used to label the fatigue level reflected by the collected EEG and EOG. From the eye tracker's recording, the subject's percentage of eyelid closure (PERCLOS) is computed every 8 s as the ground-truth fatigue level label for that 8 s interval, i.e. every 8 s constitutes one fatigue detection trial data frame. In the following application of the proposed method, feature extraction and fatigue level prediction are carried out on these 8 s data frames.

The embodiment of the invention provides a mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion. By extracting linear prediction cepstral coefficient features from the EEG signals from the temporal lobes, occipital lobes and forehead, and extracting eye-movement statistical features from the forehead EOG signal, the subject's degree of fatigue is analysed and predicted, achieving the technical effect of improving the accuracy of fatigue detection based on mixed EEG and EOG signals.

To achieve the above technical effect, the overall idea of the invention is as follows:

Collect EEG signal data from the temporal and occipital lobes and EOG signals from the forehead for the fatigue detection regression prediction task; filter the collected EEG and EOG data with band-pass filters and divide them into non-overlapping data frames with a rectangular window function; apply the fast independent component analysis algorithm to the EOG signal collected from the forehead to separate the original EOG into forehead EEG and pure EOG; merge the EEG collected from the head with the separated forehead EOG, extract the linear prediction cepstral coefficient features and smooth them; apply the Mexican hat continuous wavelet transform to the separated, EEG-free EOG signal to find peaks, encode the positive and negative peaks, and extract the statistical parameter values of the saccade, blink and fixation features as the eye-movement feature values reflected by the EOG; input the EEG features and EOG features into the stacking fusion regression model for training, and the output of the meta-regression layer of the stacking fusion model is the predicted fatigue level.

As shown in Figures 3 and 4, the mixed EEG-EOG fatigue detection method based on linear prediction cepstral coefficients and the stacking fusion algorithm proposed by the invention comprises the following steps:

S1: collect the EEG and EOG signals for fatigue detection; the EEG is collected from the temporal and occipital lobes, and the EOG is collected from the forehead position above the eyes; the specific positions of the EEG electrodes are shown in Figure 1, and the specific positions of the EOG electrodes are shown in Figure 2.

S2: process the collected EEG signal with a 0.5 Hz-50.5 Hz band-pass filter and the EOG signal with a 0.5 Hz-30.5 Hz band-pass filter, and divide the EEG and EOG signals with an 8 s rectangular window function into 885 non-overlapping data frames;
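
A small sketch of the filtering and framing in step S2 follows; the sampling rate and the Butterworth filter order are assumptions, since they are not specified in the text.

```python
# Sketch of S2: band-pass filtering and cutting into non-overlapping 8 s frames.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_and_frame(sig, fs, low, high, frame_sec=8.0, order=4):
    """sig: (channels, samples); returns (channels, n_frames, frame_len)."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, sig, axis=-1)
    frame_len = int(frame_sec * fs)
    n_frames = filtered.shape[-1] // frame_len
    trimmed = filtered[..., : n_frames * frame_len]
    return trimmed.reshape(sig.shape[0], n_frames, frame_len)

# e.g. (fs = 200 Hz is an assumed sampling rate, not stated in the text):
# eeg_frames = bandpass_and_frame(eeg, fs=200, low=0.5, high=50.5)
# eog_frames = bandpass_and_frame(eog, fs=200, low=0.5, high=30.5)
```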

S3: perform blind source separation on the EOG signal with the fast independent component analysis algorithm FastICA, decomposing the original multi-channel EOG signal into a multi-channel forehead EEG signal and a multi-channel pure EOG signal;

The specific steps include:

S3.1: centre the data of each EOG acquisition channel; denote the collected original multi-channel EOG data matrix as S_{M×L}, where M is the number of channels and L is the signal length; subtract from every element of S_{M×L} the mean of its row to obtain the centred matrix S_C.

S3.2: whiten the multi-channel, mean-removed matrix S_C: X = ED^{-1/2}E^T S_C, where X is the whitened data matrix, E = [c_1, c_2, …, c_L] is the M×L eigenvector matrix, D^{-1/2} = diag[d_1^{-1/2}, d_2^{-1/2}, …, d_L^{-1/2}] is the L×L diagonal eigenvalue matrix, d_i is the i-th eigenvalue of the covariance matrix of the centred matrix S_C, and c_i is the corresponding eigenvector.

S3.3: let the unmixing matrix be W, an M-dimensional vector, and decompose the multi-channel data S_{M×L} into a sum of independent component variables: U = W×X; use negentropy to measure the non-Gaussianity of the signal components, J_G(W) = [E{G(W^T X)} − E{G(V)}]^2, where J_G(W) is the non-Gaussianity of the signal components computed with the nonlinear function G; the invention uses the function G(x) = −exp(−x²/2); V is a Gaussian variable with the same mean and covariance matrix as X; use Newton's method to iteratively solve for the unmixing matrix W that maximizes the non-Gaussianity of each component.

S3.4: reconstruct the forehead EEG signal from the unmixing matrix,

$$\hat{S} = W^{-1}\hat{U}$$

where $\hat{U}$ is the matrix of independent component variables that maximizes the non-Gaussianity of each component, $\hat{S}$ is the separated forehead EEG, and $W^{-1}$ is the inverse of the unmixing matrix W. The difference between the original EOG and the separated forehead EEG is the pure EOG signal.

S4: merge the multi-channel EEG signal with the separated multi-channel forehead EEG signal, perform linear predictive analysis on the whole merged multi-channel EEG signal, solve for the LPCC features, and smooth the LPCC features with the MA smoothing algorithm;

The specific steps include:

S4.1: merge the EEG signal E1 = [p_1, p_2, …, p_{m1}] and the separated forehead EEG signal E2 = [q_1, q_2, …, q_{m2}] into a complete EEG signal:

E3 = [p_1, p_2, …, p_{m1}, q_1, q_2, …, q_{m2}]

where p_i is an EEG channel signal from the head, m1 is the number of channels corresponding to the head electrodes, q_i is a forehead EEG channel signal, and m2 is the number of channels corresponding to the forehead electrodes.

S4.2: assume a p-th order linear prediction system, i.e. the n-th signal sample s(n) can be estimated by a linear combination $\hat{s}(n)$ of its previous p samples:

$$\hat{s}(n) = \sum_{i=1}^{p} a_i\, s(n-i)$$

where the coefficients $a_i$ are assumed to be constants over a signal data frame segment; these constants are the linear prediction coefficients. Based on the minimum mean-square error criterion, minimize the prediction error between s(n) and $\hat{s}(n)$ to obtain the linear prediction coefficients. In this implementation case, a 14th-order linear prediction system is used.

The specific steps of S4.2 include:

S4.2.1: the difference between the actual sample and the predicted sample is called the prediction error, which can be expressed as:

$$e(n) = s(n) - \hat{s}(n) = s(n) - \sum_{i=1}^{p} a_i\, s(n-i)$$

Based on the minimum mean-square error criterion, let the average prediction error be:

$$E\{e^{2}(n)\} = E\Big\{\Big[s(n) - \sum_{i=1}^{p} a_i\, s(n-i)\Big]^{2}\Big\}$$

To minimize E{e^2(n)}, take the partial derivative with respect to each a_i and set it to zero, giving the system of equations:

$$E\Big\{\Big[s(n) - \sum_{i=1}^{p} a_i\, s(n-i)\Big]\, s(n-j)\Big\} = 0,\quad j = 1, 2, \dots, p;$$

S4.2.2: rewrite the linear prediction equations in their equivalent form in terms of the autocorrelation function. Rearranging the linear prediction equations obtained in S4.2.1 gives:

$$\sum_{i=1}^{p} a_i\, E\{s(n-i)\, s(n-j)\} = E\{s(n)\, s(n-j)\},\quad j = 1, 2, \dots, p$$

For convenience of expression, write:

$$\phi_n(j,i) = E\{s(n-i)\, s(n-j)\}$$

so the equations can be rewritten as:

$$\sum_{i=1}^{p} a_i\, \phi_n(j,i) = \phi_n(j,0),\quad j = 1, 2, \dots, p$$

The autocorrelation function of a sample signal s(n) is defined as:

$$R_n(j) = \sum_{m=j}^{N-1} s_n(m)\, s_n(m-j)$$

Combining this with the equations obtained in S4.2.1 gives:

$$\phi_n(j,i) = R_n(j-i)$$

Since R_n(j) is an even function and R_n(j) depends only on the relative difference between i and j:

$$\phi_n(j,i) = R_n(|j-i|)$$

The linear prediction equations are then rewritten as:

$$\sum_{i=1}^{p} a_i\, R_n(|j-i|) = R_n(j),\quad j = 1, 2, \dots, p$$

In matrix form:

$$\begin{bmatrix} R_n(0) & R_n(1) & \cdots & R_n(p-1) \\ R_n(1) & R_n(0) & \cdots & R_n(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_n(p-1) & R_n(p-2) & \cdots & R_n(0) \end{bmatrix} \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = \begin{bmatrix} R_n(1) \\ R_n(2) \\ \vdots \\ R_n(p) \end{bmatrix};$$

S4.2.3: solve the linear prediction equations obtained in S4.2.2 with the Levinson-Durbin algorithm to compute the linear prediction coefficients a_i.

S4.3: according to the defining relationship between the linear prediction coefficients and the linear prediction cepstral coefficients, convert the linear prediction coefficients $a_i$ iteratively into the linear prediction cepstral coefficients $c_n$ with the following recursion:

$$c_n = \begin{cases} a_1, & n = 1 \\[4pt] a_n + \sum\limits_{k=1}^{n-1} \frac{k}{n}\, c_k\, a_{n-k}, & 1 < n \le p \\[4pt] \sum\limits_{k=n-p}^{n-1} \frac{k}{n}\, c_k\, a_{n-k}, & n > p \end{cases}$$

where $c_n$ denotes the n-th linear prediction cepstral coefficient and $a_i$ are the p-th order linear prediction coefficients; an unlimited number of linear prediction cepstral coefficients can be obtained from the finite set of linear prediction coefficients, but in general taking n up to 12-20 is sufficient; in this embodiment n is taken up to 14.

S4.4: smooth the extracted linear prediction cepstral coefficients with a moving average algorithm, i.e. the smoothed linear prediction cepstral coefficients sLPCC_x of the x-th data frame are computed by averaging, along the time-frame dimension, all the linear prediction cepstral coefficients within a smoothing window of length win centred at x:

$$sLPCC_x = \frac{1}{win}\sum_{i=x-\frac{win-1}{2}}^{\,x+\frac{win-1}{2}} LPCC_i$$

where LPCC_i denotes the unsmoothed linear prediction cepstral coefficients computed on the i-th data frame; the smoothing window length win must be set to an odd number, and in this implementation case the smoothing window length is set to 29.

As shown in Figure 5, from top to bottom are the PERCLOS fatigue index of one subject's data, the power spectrum waterfall plot computed on the corresponding data frames, the linear prediction power spectrogram, and the linear prediction power spectrogram smoothed by the MA algorithm. Compared with the traditionally used power spectrum features, the variation of the spectrogram obtained by the proposed linear prediction method in the alpha band (8-14 Hz) is more similar to the variation of the PERCLOS fatigue index labels, and after smoothing the variation reflected by the features correlates more strongly with the variation of the true labels than before smoothing.

S5: apply MHWT peak detection with a window length of 8 to the de-interfered multi-channel EOG signal, encode the detected positive and negative peaks, and extract the statistical features of fixation, saccade and blink reflected by the EOG signal;

The specific steps include:

S5.1: apply the Mexican hat continuous wavelet transform to the multi-channel pure electrooculogram signal:

$$\psi(t) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}}\left(1 - \frac{t^{2}}{\sigma^{2}}\right) e^{-\frac{t^{2}}{2\sigma^{2}}}$$

where ψ(t) is the wavelet transform result, t is the time-point index, e is the base of the natural logarithm, and σ is the standard deviation of the data; after the transform the peaks of the eye-movement signal become more pronounced.

S5.2: with a sliding window of fixed length 8, use a peak-finding algorithm over non-overlapping windows to detect signal peaks; encode a positive peak detected in a window as 1 and a negative peak as 0; a window with no peak is directly identified as a fixation feature; segments of "01" or "10" are identified as candidates for a saccade feature, and an occurrence of "010" is identified as a blink feature.

S5.3: for each data frame, count the number of saccades and compute the variance of the saccade count and the maximum, minimum, mean, power and average power of the saccade amplitude as saccade EOG features; take the total blink duration per data frame, the mean, maximum and minimum duration, the number of blinks, and the maximum, minimum, mean, power and average power of the blink amplitude as blink eye-movement features; take the total fixation duration per data frame and the mean, maximum and minimum duration as fixation eye-movement features. All the eye-movement features together constitute the feature sequence of the electrooculogram signal.

S6: input the features extracted from the EEG signal and from the EOG signal into the stacking fusion algorithm model for regression training to obtain the fatigue level prediction result;

The specific steps include:

S6.1: input the EEG features into a Bayesian ridge regression model for training and record the training result as Y1; input the EOG features into a lightweight gradient boosting machine regression model for training and record the training result as Y2; concatenate the EEG features and the EOG features, input the fused features into a Bayesian ridge regression model and a lightweight gradient boosting machine regression model respectively, and record the training results as Y3 and Y4. These operations complete the training of the base regression layer of the stacking fusion algorithm.

S6.2: concatenate the results of the base regression layer as the input of the secondary regression layer of the stacking fusion algorithm: NewX = [Y1, Y2, Y3, Y4], where NewX is the input of the secondary regression layer and Y1, Y2, Y3, Y4 are the training results of the primary regression layer. The secondary regression layer, i.e. the meta-regression layer, uses a simple linear regression model to avoid overfitting of the fused model; the output of the meta-regression layer is the stacked fusion model's final predicted value of the subject's fatigue level.

The correlation coefficient between the predicted values $\hat{Y}$ output by the model and the actual PERCLOS fatigue index label values Y is used to evaluate the model's fatigue detection performance; the correlation coefficient is:

$$\rho = \frac{\sum_{i=1}^{N}\big(Y_i-\bar{Y}\big)\big(\hat{Y}_i-\bar{\hat{Y}}\big)}{\sqrt{\sum_{i=1}^{N}\big(Y_i-\bar{Y}\big)^{2}}\;\sqrt{\sum_{i=1}^{N}\big(\hat{Y}_i-\bar{\hat{Y}}\big)^{2}}}$$

where N is the length of the actual PERCLOS fatigue index label vector Y; $Y_i$ and $\hat{Y}_i$ are, respectively, the actual PERCLOS fatigue index label value of the i-th data frame segment and the model's predicted fatigue level, i = 1, 2, …, N; $\bar{Y}$ and $\bar{\hat{Y}}$ are, respectively, the mean of the actual PERCLOS fatigue index label values Y and the mean of the model's predicted values $\hat{Y}$; the evaluation passes if the correlation coefficient reaches a preset value, and fails otherwise.

Figure 6 shows, for one subject, a line chart comparing the fatigue index values predicted with the EEG single modality, the EOG single modality and the EEG-EOG mixed modality against the actual label values; compared with the single modalities, the fatigue level prediction obtained with the mixed modality adopted by the invention is closer to the true label values. Figure 7 shows the correlation coefficients between the predicted and actual fatigue levels obtained with the EEG single modality, the EOG single modality and the EEG-EOG mixed modality on the data of the 23 subjects in the case. The mixed modality performs better on almost all subjects' data, which shows that the proposed method can indeed improve the accuracy of fatigue detection and that the resulting model generalizes better.

The above describes the invention using a specific case data set; this is only intended to help understand the invention and is not intended to limit it. Those skilled in the art to which the invention belongs can make several simple deductions, modifications or substitutions based on the idea of the invention. Those skilled in the art will readily understand that the above is only one implementation case of the invention and is not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the invention shall be included within the protection scope of the invention.

Content not described in detail in this specification belongs to the prior art known to those skilled in the art.

Claims (10)

1.一种基于线性预测分析与堆叠融合的混合模态疲劳检测方法,其特征在于:所述方法包括如下步骤:1. A mixed-mode fatigue detection method based on linear predictive analysis and stack fusion, characterized in that: the method comprises the steps of: S1:采集被检测者的脑电EEG信号和眼电EOG信号;S1: collect the EEG signal and EOG signal of the subject; S2:将所述脑电EEG信号和眼电EOG信号通过带通滤波器处理,得到多通道EEG信号和EOG信号;S2: Process the EEG signal and EOG signal through a band-pass filter to obtain a multi-channel EEG signal and EOG signal; S3:对所述多通道的EOG信号进行基于快速独立主成分分析算法的盲源分离,将原始的多通道眼电信号分解为多通道前额EOG信号和多通道纯净的EOG信号;S3: performing blind source separation based on a fast independent principal component analysis algorithm on the multi-channel EOG signal, decomposing the original multi-channel electro-oculogram signal into a multi-channel forehead EOG signal and a multi-channel pure EOG signal; S4:将所述多通道EEG信号与所述多通道前额EOG信号归并,对归并后的整个多通道EEG信号进行线性预测分析,求解线性预测倒谱系数,采用滑动平均算法对线性预测倒谱系数进行特征平滑;S4: Merge the multi-channel EEG signal and the multi-channel forehead EOG signal, perform linear predictive analysis on the entire merged multi-channel EEG signal, solve the linear predictive cepstral coefficient, and use the moving average algorithm to predict the linear predictive cepstral coefficient Perform feature smoothing; S5:对所述多通道纯净的EOG信号进行墨西哥帽连续小波变换检测峰值,对正负峰值进行编码,从编码序列中提取眼电信号反映的注视、扫视、眨眼的统计特征;S5: performing Mexican hat continuous wavelet transform on the multi-channel pure EOG signal to detect the peak value, encoding the positive and negative peak values, and extracting the statistical characteristics of gaze, saccade, and blink reflected by the electrooculogram signal from the encoding sequence; S6:将步骤S5中对脑电EEG信号和眼电EOG信号各自提取到的特征输入到堆叠融合算法模型中进行回归训练,获得疲劳程度预测结果。S6: Input the features extracted from the EEG signal and the EOG signal in step S5 into the stacking fusion algorithm model for regression training, and obtain the fatigue degree prediction result. 2.根据权利要求1所述的一种基于线性预测分析与堆叠融合的混合模态疲劳检测方法,其特征在于:步骤S1中,所述脑电EEG信号采集自被检测者的大脑颞叶、枕叶,所述眼电EOG信号采集于被检测者的眼睛上方的前额位置。2. A kind of mixed mode fatigue detection method based on linear predictive analysis and stacking fusion according to claim 1, characterized in that: in step S1, the EEG signal is collected from the brain temporal lobe of the subject, Occipital lobe, the EOG signal is collected at the forehead position above the eyes of the subject. 3.根据权利要求1所述的一种基于线性预测分析与堆叠融合的混合模态疲劳检测方法,其特征在于:步骤S2中,带通滤波器处理采用矩形窗函数将所述脑电EEG信号和眼电EOG信号分成不重叠的数据帧片段。3. A kind of mixed mode fatigue detection method based on linear predictive analysis and stacking fusion according to claim 1, characterized in that: in step S2, bandpass filter processing adopts rectangular window function to convert the EEG signal and EOG signals are divided into non-overlapping data frame segments. 4.根据权利要求1所述的一种基于线性预测分析与堆叠融合的混合模态疲劳检测方法,其特征在于:步骤S3中,盲源分离的具体步骤包括:4. 
4. The mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion according to claim 1, characterized in that: in step S3, the specific steps of the blind source separation include:

S3.1: centering the data of each EOG acquisition channel: denoting the collected original multi-channel EOG data matrix as S_{M×L}, where M is the number of channels and L is the signal length, and subtracting from each element of S_{M×L} the mean of its row to obtain the centered matrix S_C;

S3.2: whitening the de-meaned multi-channel centered matrix S_C: X = E·D^(-1/2)·E^T·S_C, where X is the whitened data matrix, E = [c_1, c_2, …, c_L] is the M×L eigenvector matrix, D^(-1/2) = diag(d_1^(-1/2), d_2^(-1/2), …, d_L^(-1/2)) is the L×L diagonal eigenvalue matrix, d_i is the i-th eigenvalue of the covariance matrix of the centered matrix S_C, and c_i is the corresponding eigenvector, i = 1, 2, …, L;

S3.3: letting the unmixing matrix be W, an M-dimensional vector, and decomposing the multi-channel data S_{M×L} into a sum of independent component variables: U = W·X; measuring the non-Gaussianity of the signal components with negentropy, J_G(W) = [E{G(W^T·X)} − E{G(V)}]^2, where J_G(W) is the non-Gaussianity of the signal components obtained with the nonlinear function G, and V is a Gaussian variable with the same mean and covariance matrix as X; solving iteratively with Newton's method for the unmixing matrix W that maximizes the non-Gaussianity of each component;

S3.4: reconstructing the forehead EEG signal from the unmixing matrix as

Ŝ = W^(-1)·Û,

where Û is the matrix of independent component variables obtained when the non-Gaussianity of each component is maximized, Ŝ is the separated forehead EOG signal, and W^(-1) is the inverse of the unmixing matrix W; the difference signal between the original EOG signal and the separated forehead EEG is the pure EOG signal.
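For illustration only, the blind source separation of claim 4 could be sketched in Python with scikit-learn's FastICA as below; the rule for deciding which independent components belong to the forehead EEG is a placeholder assumption, not the selection criterion of the patent.

```python
# Sketch of claim 4 (S3.1-S3.4): FastICA performs the centering, whitening and
# negentropy-based Newton iteration internally.
import numpy as np
from sklearn.decomposition import FastICA

def separate_forehead_sources(eog, eeg_component_mask=None):
    """eog: band-passed forehead recordings, shape (M channels, L samples)."""
    X = eog.T                                       # FastICA expects (samples, channels)
    ica = FastICA(fun="logcosh", whiten="unit-variance",
                  max_iter=1000, random_state=0)
    U = ica.fit_transform(X)                        # independent components, shape (L, k)
    A = ica.mixing_                                 # mixing matrix (inverse role of W)

    if eeg_component_mask is None:                  # placeholder: caller marks which
        eeg_component_mask = np.zeros(U.shape[1], dtype=bool)   # components are cerebral
        eeg_component_mask[0] = True                # assumption for illustration only

    U_sel = U * eeg_component_mask                  # keep only the selected components
    forehead_eeg = (U_sel @ A.T + ica.mean_).T      # back-project to channel space, (M, L)
    pure_eog = eog - forehead_eeg                   # claim 4: difference = pure EOG signal
    return forehead_eeg, pure_eog
```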
5. The mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion according to claim 1, characterized in that: the specific steps of step S4 include:

S4.1: merging the EEG signal E1 = [p_1, p_2, …, p_m1] and the separated forehead EOG signal E2 = [q_1, q_2, …, q_m2] into a complete EEG signal:

E3 = [p_1, p_2, …, p_m1, q_1, q_2, …, q_m2],

where p_i is an EEG channel signal of the head, m1 is the number of channels corresponding to the head electrodes, q_i is a forehead EEG channel signal, and m2 is the number of channels corresponding to the forehead electrodes;

S4.2: assuming a p-order linear prediction system, i.e., estimating the n-th signal sample s(n) by the linear combination ŝ(n) of its previous p samples:

ŝ(n) = Σ_{i=1}^{p} a_i·s(n−i),

where the a_i are constants over a signal-sample data frame segment, called the p-order linear prediction coefficients; minimizing the prediction error between s(n) and ŝ(n) and solving for the linear prediction coefficients at that minimum;
S4.3: according to the defining relationship between linear prediction coefficients and linear predictive cepstral coefficients, converting the linear prediction coefficients a_i into the linear predictive cepstral coefficients ĉ_n iteratively with the following recursion:

ĉ_1 = a_1,
ĉ_n = a_n + Σ_{k=1}^{n−1} (k/n)·ĉ_k·a_{n−k},  for 1 < n ≤ p,
ĉ_n = Σ_{k=n−p}^{n−1} (k/n)·ĉ_k·a_{n−k},  for n > p,

where ĉ_n denotes the n-th linear predictive cepstral coefficient and a_i are the p-order linear prediction coefficients; an infinite number of linear predictive cepstral coefficients can be obtained from a finite number of linear prediction coefficients;

S4.4: smoothing the extracted linear predictive cepstral coefficients with a moving-average algorithm, i.e., the smoothed linear predictive cepstral coefficient sLPCC_x of the x-th data frame is obtained by averaging, along the time-frame dimension, all the linear predictive cepstral coefficients inside a smoothing window of length win centered on x.
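A minimal Python sketch of steps S4.3 and S4.4 of claim 5, assuming the linear prediction coefficients a_1…a_p are already available (e.g. from the Levinson-Durbin procedure of claim 6); the number of cepstral coefficients and the window length win are illustrative values, not taken from the patent.

```python
import numpy as np

def lpc_to_lpcc(a, n_ceps):
    """Standard recursion from LP coefficients a_1..a_p to cepstral coefficients (S4.3)."""
    p = len(a)
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = sum((k / n) * c[k - 1] * a[n - k - 1]   # only terms with 1 <= n-k <= p
                  for k in range(max(1, n - p), n))
        c[n - 1] = (a[n - 1] if n <= p else 0.0) + acc
    return c

def smooth_lpcc(lpcc_frames, win=5):
    """Moving-average smoothing along the time-frame dimension (S4.4).
    lpcc_frames: array of shape (num_frames, num_coefficients)."""
    kernel = np.ones(win) / win
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, lpcc_frames)
```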
6. The mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion according to claim 5, characterized in that: the specific steps of step S4.2 include:

S4.2.1: defining the prediction error

e(n) = s(n) − ŝ(n) = s(n) − Σ_{i=1}^{p} a_i·s(n−i);

based on the minimum mean-square-error criterion, letting the average prediction error be

E{e²(n)} = E{[s(n) − Σ_{i=1}^{p} a_i·s(n−i)]²};

to minimize E{e²(n)}, taking the partial derivative with respect to each a_i and setting it to 0, which yields the system of equations

E{s(n−j)·s(n)} = Σ_{i=1}^{p} a_i·E{s(n−j)·s(n−i)},  j = 1, 2, …, p;

S4.2.2: rewriting the linear prediction equations in their equivalent form based on the autocorrelation function:

Σ_{i=1}^{p} a_i·R_n(|j−i|) = R_n(j),  j = 1, 2, …, p,

R_n(j) = Σ_{m=j}^{N−1} s_n(m)·s_n(m−j),

where R_n(j) is the autocorrelation value computed between the n-th data frame segment and that segment delayed by j data points, m is the sample index within the n-th data frame segment, and N is the length of the data frame segment; the matrix form is

[ R_n(0)    R_n(1)    …  R_n(p−1) ]   [ a_1 ]   [ R_n(1) ]
[ R_n(1)    R_n(0)    …  R_n(p−2) ]   [ a_2 ]   [ R_n(2) ]
[   ⋮          ⋮      ⋱      ⋮    ] · [  ⋮  ] = [   ⋮    ]
[ R_n(p−1)  R_n(p−2)  …  R_n(0)   ]   [ a_p ]   [ R_n(p) ];

S4.2.3: solving the linear prediction equations with the Levinson-Durbin algorithm to compute the linear prediction coefficients a_i.
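The autocorrelation computation and Levinson-Durbin recursion of claim 6 might be sketched as follows; the frame is assumed to be one of the non-overlapping data frame segments of claim 3, and the order p = 12 is an illustrative choice only.

```python
import numpy as np

def autocorrelation(frame, p):
    """R_n(0..p) of one data frame segment, as defined in step S4.2.2."""
    N = len(frame)
    return np.array([np.dot(frame[j:], frame[:N - j]) for j in range(p + 1)])

def levinson_durbin(R, p):
    """Solve the Toeplitz normal equations for the LP coefficients a_1..a_p (S4.2.3)."""
    a = np.zeros(p)
    E = R[0]                                           # prediction error energy (assumed > 0)
    for i in range(1, p + 1):
        prev = a[:i - 1].copy()
        k = (R[i] - np.dot(prev, R[i - 1:0:-1])) / E   # reflection coefficient
        a[i - 1] = k
        a[:i - 1] = prev - k * prev[::-1]
        E *= (1.0 - k * k)
    return a

# Example use on one frame of the merged signal E3:
# a = levinson_durbin(autocorrelation(frame, p=12), p=12)
```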
7. The mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion according to claim 1, characterized in that: the specific steps of step S5 include:

S5.1: performing a Mexican-hat continuous wavelet transform on the multi-channel pure EOG signal:

ψ(t) = (2 / (√(3σ)·π^(1/4)))·(1 − t²/σ²)·e^(−t²/(2σ²)),

where ψ(t) is the wavelet transform result, t is the time point, e is the base of the natural logarithm, and σ is the standard deviation of the data;

S5.2: with a sliding window of fixed length D, detecting signal peaks with a peak-seeking algorithm over non-overlapping moving windows; a positive peak detected in a window is encoded as 1 and a negative peak as 0, a window without a peak is identified as a fixation feature, a segment coded 01 or 10 is identified as a candidate for a saccade feature, and an occurrence of 010 is identified as a blink feature;

S5.3: taking, on each data frame, the number of saccades, the variance of the number of saccades, and the maximum, minimum, mean, power and average power of the saccade amplitude as the saccade EOG features; the total blink duration on each data frame, the mean, maximum and minimum of the blink duration, the number of blinks, and the maximum, minimum, mean, power and average power of the blink amplitude as the blink eye-movement features; and the total fixation duration on each data frame and the mean, maximum and minimum of the fixation duration as the fixation eye-movement features.
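A hedged Python sketch of claim 7's wavelet-based peak coding: the Mexican-hat wavelet is applied by direct convolution and peaks are detected per non-overlapping window; the window length D, the wavelet width sigma and the peak threshold are illustrative assumptions rather than values fixed by the patent, and the event counting is deliberately simplified.

```python
import numpy as np
from scipy.signal import find_peaks

def mexican_hat(half_width, sigma):
    t = np.arange(-half_width, half_width + 1)
    norm = 2.0 / (np.sqrt(3.0 * sigma) * np.pi ** 0.25)
    return norm * (1.0 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2.0 * sigma ** 2))

def encode_windows(pure_eog, D=64, sigma=8.0, thr=1.0):
    """pure_eog: one channel of the pure EOG signal (1-D array).
    Returns one symbol per non-overlapping window: '1' positive peak, '0' negative peak,
    '-' no peak (treated as a fixation window), as in step S5.2."""
    w = np.convolve(pure_eog, mexican_hat(4 * int(sigma), sigma), mode="same")
    symbols = []
    for start in range(0, len(w) - D + 1, D):
        seg = w[start:start + D]
        pos, _ = find_peaks(seg, height=thr)
        neg, _ = find_peaks(-seg, height=thr)
        if len(pos) == 0 and len(neg) == 0:
            symbols.append("-")
        else:
            symbols.append("1" if len(pos) >= len(neg) else "0")
    return "".join(symbols)

def count_events(code):
    """Blink = '010'; '01'/'10' pairs outside blinks are saccade candidates (simplified)."""
    blinks = code.count("010")
    saccade_candidates = code.count("01") + code.count("10") - 2 * blinks
    fixation_windows = code.count("-")
    return {"blinks": blinks, "saccade_candidates": saccade_candidates,
            "fixation_windows": fixation_windows}
```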
8. The mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion according to claim 1, characterized in that: the specific steps of step S6 include:

S6.1: inputting the EEG features into a Bayesian ridge regression model for training and recording the training result as Y1; inputting the EOG features into a light gradient boosting machine (LightGBM) regression model for training and recording the training result as Y2; concatenating the EEG features and the EOG features, inputting the fused features into the Bayesian ridge regression model and the LightGBM regression model respectively, and recording the training results as Y3 and Y4, thereby completing the training of the base regression layer of the stacking fusion algorithm;

S6.2: combining the results obtained from training the base regression layer as the input of the secondary regression layer of the stacking fusion algorithm, NewX = [Y1, Y2, Y3, Y4], where NewX is the input of the secondary regression layer and Y1, Y2, Y3, Y4 are the training results of the primary regression layer;

S6.3: evaluating the predicted values output by the model; if the evaluation passes, the training is complete; if the evaluation fails, returning to step S6.1.

9. The mixed-modal fatigue detection method based on linear predictive analysis and stacking fusion according to claim 8, characterized in that: the predicted values output by the model are evaluated with the correlation coefficient between the predicted value Ŷ output by the model and the actual PERCLOS fatigue index label value Y, which measures the performance of the model for fatigue detection; the correlation coefficient formula is

ρ = Σ_{i=1}^{N} (Y_i − mean(Y))·(Ŷ_i − mean(Ŷ)) / ( √(Σ_{i=1}^{N} (Y_i − mean(Y))²) · √(Σ_{i=1}^{N} (Ŷ_i − mean(Ŷ))²) ),

where N is the length of the actual PERCLOS fatigue index label value Y; Y_i and Ŷ_i are, respectively, the actual PERCLOS fatigue index label value of the i-th data frame segment and the fatigue degree predicted by the model, i = 1, 2, …, N; mean(Y) and mean(Ŷ) are, respectively, the mean of the actual PERCLOS fatigue index label values Y and the mean of the predicted values Ŷ output by the model;

if the correlation coefficient reaches a preset value, the evaluation passes; otherwise, the evaluation fails.
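The stacking fusion of claims 8 and 9 could be sketched in Python as below; BayesianRidge and LightGBM match the models named in claim 8, while the plain linear model used for the secondary regression layer and the 0.85 correlation threshold are assumptions made only for illustration, since the claims do not fix them. The sketch follows the claim text literally and feeds training-set predictions to the secondary layer; in practice out-of-fold predictions would usually be preferred.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression
from lightgbm import LGBMRegressor

def stacking_fusion(eeg_feats, eog_feats, y, corr_threshold=0.85):
    """eeg_feats, eog_feats: 2-D feature arrays (frames x features); y: PERCLOS labels."""
    fused = np.hstack([eeg_feats, eog_feats])             # serial fusion of both modalities
    base = [
        (BayesianRidge(), eeg_feats),                     # Y1: EEG features -> Bayesian ridge
        (LGBMRegressor(n_estimators=200), eog_feats),     # Y2: EOG features -> LightGBM
        (BayesianRidge(), fused),                         # Y3: fused features -> Bayesian ridge
        (LGBMRegressor(n_estimators=200), fused),         # Y4: fused features -> LightGBM
    ]
    preds = []
    for model, X in base:
        model.fit(X, y)
        preds.append(model.predict(X))
    new_x = np.column_stack(preds)                        # NewX = [Y1, Y2, Y3, Y4]

    meta = LinearRegression().fit(new_x, y)               # assumed secondary-layer regressor
    y_hat = meta.predict(new_x)
    rho = np.corrcoef(y, y_hat)[0, 1]                     # correlation with PERCLOS labels
    return (base, meta) if rho >= corr_threshold else None   # pass/fail as in step S6.3
```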
10. A computer-readable storage medium storing a computer program, characterized in that, when the computer program is executed by a processor, the method according to any one of claims 1 to 9 is implemented.
CN202310046600.0A 2023-01-31 2023-01-31 Hybrid-modal fatigue detection method based on linear predictive analysis and stack fusion Active CN115778390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310046600.0A CN115778390B (en) 2023-01-31 2023-01-31 Hybrid-modal fatigue detection method based on linear predictive analysis and stack fusion


Publications (2)

Publication Number Publication Date
CN115778390A true CN115778390A (en) 2023-03-14
CN115778390B CN115778390B (en) 2023-05-26

Family

ID=85429314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310046600.0A Active CN115778390B (en) 2023-01-31 2023-01-31 Hybrid-modal fatigue detection method based on linear predictive analysis and stack fusion

Country Status (1)

Country Link
CN (1) CN115778390B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599127A (en) * 2009-06-26 2009-12-09 安徽大学 Feature Extraction and Recognition Method of Electro-oculogram Signal
CN109512442A (en) * 2018-12-21 2019-03-26 杭州电子科技大学 A kind of EEG fatigue state classification method based on LightGBM
US20220022805A1 (en) * 2020-07-22 2022-01-27 Eysz Inc. Seizure detection via electrooculography (eog)
KR20220063952A (en) * 2020-11-11 2022-05-18 라이트하우스(주) System for preventing drowsy driving based on brain engineering
CN113080986A (en) * 2021-05-07 2021-07-09 中国科学院深圳先进技术研究院 Method and system for detecting exercise fatigue based on wearable equipment
CN114246593A (en) * 2021-12-15 2022-03-29 山东中科先进技术研究院有限公司 Electroencephalogram, electrooculogram and heart rate fused fatigue detection method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
S. PAZHANIRAJAN: "EEG Signal Classification using Linear Predictive Cepstral Coefficient Features" *
黄亚康: "基于脑电信号和眼电信号驾驶疲劳状态检测" *

Also Published As

Publication number Publication date
CN115778390B (en) 2023-05-26

Similar Documents

Publication Publication Date Title
Khare et al. PDCNNet: An automatic framework for the detection of Parkinson’s disease using EEG signals
Zhai et al. Automated ECG classification using dual heartbeat coupling based on convolutional neural network
He et al. Spatial–temporal seizure detection with graph attention network and bi-directional LSTM architecture
WO2019161610A1 (en) Electrocardiogram information processing method and electrocardiogram workstation system
CN103610447B (en) An online detection method of mental load based on forehead EEG signal
CN114391846B (en) An emotion recognition method and system based on filter feature selection
CN113576481B (en) A mental load assessment method, device, equipment and medium
CN106236027B (en) Depressed crowd's decision method that a kind of brain electricity is combined with temperature
CN109875552A (en) A fatigue detection method, device and storage medium thereof
CN113208613B (en) Multimodal BCI timing optimization method based on FHLS feature selection
CN112704503B (en) Electrocardiosignal noise processing method
CN117918863A (en) A method and system for real-time artifact processing and feature extraction of electroencephalogram signals
CN115017996B (en) Mental load prediction method and system based on multiple physiological parameters
CN115736920A (en) Depression state identification method and system based on bimodal fusion
Shao et al. Eeg-based mental workload classification method based on hybrid deep learning model under iot
Ragu et al. Post-Traumatic Stress Disorder (PTSD) Analysis using Machine Learning Techniques
CN115778390B (en) Hybrid-modal fatigue detection method based on linear predictive analysis and stack fusion
Chen et al. Quantitative identification of daily mental fatigue levels based on multimodal parameters
Yun-Mei et al. The abnormal detection of electroencephalogram with three-dimensional deep convolutional neural networks
Liu et al. Health warning based on 3R ECG Sample's combined features and LSTM
CN117243607A (en) Orthogonal fusion and mental state assessment method for emotion, fatigue and subjective willingness
Velusamy et al. Comprehensive survey on ECG signal denoising, feature extraction and classification methods for heart disease diagnosis
Kulkarni et al. Driver state analysis for ADAS using EEG signals
Li et al. Assessment of firefighter-training effectiveness in China based on human-factor parameters and machine learning
CN116035576A (en) Attention mechanism-based depression electroencephalogram signal identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant