CN113850192A - Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion

Info

Publication number: CN113850192A
Application number: CN202111129277.0A
Authority: CN (China)
Prior art keywords: channel, data, blink, mean, calculating
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 曹九稳, 王猛, 崔小南, 郑润泽, 王建中, 蒋铁甲, 高峰
Current and original assignee: Hangzhou Dianzi University
Application filed by Hangzhou Dianzi University

Classifications

    • G06F18/214 (pattern recognition; generating training patterns): bootstrap methods, e.g. bagging or boosting
    • G06F18/2411 (classification techniques): based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/253 (fusion techniques): fusion of extracted features
    • G06F2218/02 (pattern recognition specially adapted for signal processing): preprocessing
    • G06F2218/12 (pattern recognition specially adapted for signal processing): classification; matching


Abstract

The invention discloses a blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion. The method, which combines electroencephalogram feature learning based on empirical mode decomposition and the common spatial pattern algorithm with intelligent feature fusion, achieves accurate detection of blink artifact and non-artifact signals in combination with a support vector machine classification model. The features extracted in steps 2 and 3 carry the spatial-domain and frequency-domain information of the electroencephalogram signal, the features extracted in steps 4-2 to 4-12 mainly carry its time-domain and frequency-domain information, and the features extracted in steps 4-13 to 4-15 ensure the spatiality and completeness of the multi-channel electroencephalogram information. The method can overcome the problem of damaged multi-channel data in clinical electroencephalogram (EEG) signals, remedies the existing models' lack of spatial filtering and frequency-domain information as well as their high complexity, and on this basis achieves accurate detection of blink artifacts.

Description

Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion
Technical Field
The invention belongs to the field of electroencephalogram signal processing and intelligent medical auxiliary analysis, and relates to a blink artifact detection method for real-time multi-channel electroencephalogram signals subject to strong noise interference, based on the support vector machine and on intelligent fusion optimization of multi-dimensional features from the Empirical Mode Decomposition (EMD) and Common Spatial Pattern (CSP) algorithms.
Background
Electroencephalograms record spontaneous brain electrical activity, provide voltage-fluctuation measurements of brain activity, and play an important role in understanding brain activity and behavior. In recent years, with the development of non-invasive technology, electroencephalography has become one of the main tools for analyzing brain activity: it reflects the functional state of the brain related to a person's mental state, from which important information can be extracted and a patient's health monitored, thereby effectively assisting the analysis and diagnosis of diseases of the central nervous system. However, although the temporal resolution of the electroencephalogram is very high, the acquired electroencephalogram (EEG) is easily contaminated by blink artifacts; this physiological phenomenon may interfere with the neural information or even be mistaken for normal brain activity, so that the underlying neural information is misinterpreted, seriously hindering subsequent analysis, research and application of the electroencephalogram. An effective blink artifact detection algorithm therefore has great theoretical and practical significance for brain science research and clinical application. Existing blink artifact detection models mainly derive from five classes of algorithms: linear filters, parametric filters, blind source separation methods, source decomposition methods, and combinations of the above. However, these methods all have certain limitations:
1. linear filters assume that the desired signal and the artifact do not overlap in the same frequency band; since blink artifacts do overlap with the EEG, suppressing them also distorts the underlying neural information;
2. parametric filters require an additional reference channel;
3. the effectiveness of blind source separation depends on the statistical independence of the EEG signal and the artifact sources;
4. source decomposition methods are not robust to noise and are easily affected by white noise.
In practical clinical applications, electroencephalogram signals rarely satisfy the basic assumptions of these methods: owing to patients' ages and individual differences, the data of several channels can be extremely noisy and fail to reflect the true electroencephalogram, and when data from different individuals are mixed and analyzed, age differences and the like invalidate some features. Conventional detection algorithms based on a single feature cannot solve such problems. In addition, since the spatial resolution of the EEG signal is low, efficient spatial filtering is of paramount importance. Aiming at these problems, the invention provides a blink artifact detection method based on electroencephalogram feature learning combining Empirical Mode Decomposition (EMD) with the Common Spatial Pattern (CSP) algorithm (EMD-CSP) and intelligent fusion; meanwhile, against the high complexity, lack of automation and other shortcomings of traditional blink artifact detection algorithms, an accurate blink artifact detection model based on the Support Vector Machine (SVM) algorithm is constructed.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion.
The invention provides a blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion that can be realized with only a few channels and, combined with the Support Vector Machine (SVM) algorithm, constructs an accurate classification model for blink artifact and non-blink artifact signals. It addresses the facts that electroencephalogram signals contain a large number of blink artifacts, that these artifacts appear mainly in the forehead channels, that traditional features lack spatial and frequency-domain information, and that individual differences cause large data fluctuations when several individuals are mixed. The invention offers the following improvements: 1) only a few channel signals are needed, reducing the hardware requirement; 2) the problem of individual differences in the data is resolved, giving the model strong generalization capability; 3) the CSP algorithm spatially filters the signals, giving strong anti-interference capability.
The technical scheme of the invention mainly comprises the following steps:
step 1, filtering preprocessing and category labeling of a multichannel EEG signal;
step 2, applying EMD to the FP1 and FP2 signals of the EEG processed in step 1 to obtain two groups of 3-6 dimensional intrinsic mode functions (IMFs), and selecting the corresponding 3-dimensional low-frequency signals in each group of IMFs according to the frequency band information to form a 6-dimensional EEG signal;
step 3, dividing the processed electroencephalogram signals into training and test sets of blink artifacts and non-blink artifacts, designing a spatial filter W, and applying the CSP algorithm to extract 4-dimensional, highly discriminative feature vectors;
step 4, performing feature extraction on the EEG signal processed in step 1 to obtain 16-dimensional time-frequency domain key features;
step 5, fusing the 16-dimensional features obtained in step 4 with the 4-dimensional feature vectors obtained in step 3 into 20-dimensional features, and selecting features by applying a variance method to obtain 12-dimensional features;
step 6, establishing an artifact detection model on the data sets of artifact and non-artifact samples by applying the support vector machine algorithm;
step 7, applying the artifact detection model built in step 6 to carry out electroencephalogram blink artifact detection.
Further, the step 1 comprises the following specific steps:
Filter the raw 21-channel, 1000 Hz EEG signal with a 50 Hz notch filter and a 0.5-30 Hz band-pass filter to obtain the filtered signal. Since a blink artifact potential normally lasts about 0.5 s, cut the filtered signal with a sliding window of size 1 s and step 0.5 s to obtain the experimental segments. Then label each segment: segments containing blink artifacts are labeled as the blink class (eyeblink) with class label 1; segments without blink artifacts are labeled as the non-blink class (normal) with class label 0. Take 20% of each of the two classes of samples as test samples, the remaining 80% being training samples.
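For reference, a minimal Python sketch of this preprocessing (NumPy/SciPy assumed; the notch quality factor and the Butterworth order are assumptions the patent does not specify, and all function names are illustrative):

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 1000  # sampling rate in Hz (step 1)

def preprocess(raw):
    """raw: (21, n_samples) EEG array. Returns the filtered signal."""
    # 50 Hz notch filter (Q = 30 is an assumed quality factor)
    b_notch, a_notch = iirnotch(w0=50, Q=30, fs=FS)
    x = filtfilt(b_notch, a_notch, raw, axis=1)
    # 0.5-30 Hz band-pass filter (4th-order Butterworth, an assumed order)
    b_bp, a_bp = butter(4, [0.5, 30], btype="bandpass", fs=FS)
    return filtfilt(b_bp, a_bp, x, axis=1)

def segment(x, win=FS, step=FS // 2):
    """Slide a 1 s window with a 0.5 s step (50% overlap) over the signal."""
    return [x[:, s:s + win] for s in range(0, x.shape[1] - win + 1, step)]
```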
Further, the step 2 comprises the following specific steps:
2-1, because the common spatial pattern (CSP) needs a large number of input channels and lacks frequency-domain information, the empirical mode decomposition (EMD) algorithm is applied to the FP1 channel and the FP2 channel of the training samples, and the relatively low-frequency 3-dimensional IMF components are extracted from each to form a 6-channel signal.
Assume that the sample data is $S\in\mathbb{R}^{2\times1000}$, where $S_{i}$ denotes all data of the i-th channel, i = 1, 2;
2-2, for a given $S_{i}$, the EMD decomposition procedure is performed as follows:
2-2-1, find all extreme points of $S_{i}$;
2-2-2, from the extreme points obtained in step 2-2-1, form the lower envelope EMIN(t) through the minima and the upper envelope EMAX(t) through the maxima by interpolation;
2-2-3, calculate the mean M(t) from the result of step 2-2-2:
M(t)=(EMIN(t)+EMAX(t))/2 (1)
2-2-4, extract the detail signal D(t):
D(t)=S_i(t)-M(t) (2)
2-2-5, judge whether D(t) has a zero mean or satisfies the specified iteration criterion; if not, repeat steps 2-2-1 through 2-2-4 with D(t) as the input; if so, D(t) is an IMF component.
2-2-6, after an IMF is obtained, subtract it from $S_{i}$ to obtain the residual signal; take the residual as the new $S_{i}$ and repeat steps 2-2-1 through 2-2-5, and so on, until the residual signal is a monotone value or a monotone function, completing the EMD decomposition.
2-2-7, for each $S_{i}$, examine the obtained IMFs: if their dimension is 3, extract all of them; if the dimension is larger than 3, extract dimensions 2-4; these finally constitute the 6-dimensional input signal X.
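A compact Python sketch of this sifting procedure is given below (cubic-spline envelopes; the stopping tolerance, iteration cap and endpoint handling are assumptions, since the patent only requires a near-zero mean or a specified iteration criterion):

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(s, max_iter=50, tol=1e-3):
    """Extract one IMF from s by repeated envelope-mean subtraction (steps 2-2-1 to 2-2-5)."""
    d = s.astype(float).copy()
    t = np.arange(len(d))
    for _ in range(max_iter):
        maxima = argrelextrema(d, np.greater)[0]   # step 2-2-1: extreme points
        minima = argrelextrema(d, np.less)[0]
        if len(maxima) < 4 or len(minima) < 4:     # need enough points for cubic splines
            break
        emax = CubicSpline(maxima, d[maxima])(t)   # step 2-2-2: upper envelope EMAX(t)
        emin = CubicSpline(minima, d[minima])(t)   # lower envelope EMIN(t)
        m = (emax + emin) / 2                      # step 2-2-3: Eq. (1)
        d = d - m                                  # step 2-2-4: Eq. (2)
        if np.abs(m).mean() < tol:                 # step 2-2-5: near-zero-mean criterion
            break
    return d

def emd(s, n_imfs=6):
    """Step 2-2-6: peel off IMFs until the residue is (near-)monotonic."""
    imfs, residue = [], s.astype(float).copy()
    for _ in range(n_imfs):
        imf = sift(residue)
        imfs.append(imf)
        residue = residue - imf
        if len(argrelextrema(residue, np.greater)[0]) < 2:  # residue (near-)monotonic
            break
    return np.array(imfs)

def select_low_freq(imfs):
    """Step 2-2-7: keep 3 relatively low-frequency IMFs (dimensions 2-4 when more than 3)."""
    return imfs if len(imfs) == 3 else imfs[1:4]
```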
Further, the step 3 comprises the following specific steps:
3-1, apply the common spatial pattern (CSP) feature extraction algorithm to the 6-dimensional input signal X obtained in step 2: divide the training set TR and test set TE, design the spatial filter W, and extract 4-dimensional feature vectors.
Assume that the sample data is $X\in\mathbb{R}^{6\times1000}$. Let $X_{i,j}$ denote the normalized matrix of the j-th sample of class i (blink or non-blink), $K_{i}$ the number of class-i samples, $R_{i,j}$ the covariance matrix of a sample, trace(X) the sum of the elements on the diagonal of a matrix, and $\bar{R}_{i}$ the mean of the covariance matrices of class i; i = 1, 2;
3-2, for given signal data, the spatial filter is designed as follows:
3-2-1, compute the covariance matrix of each sample:
$$R_{1,j}=\frac{X_{1,j}X_{1,j}^{T}}{\operatorname{trace}\left(X_{1,j}X_{1,j}^{T}\right)}\qquad(3)$$
$$R_{2,j}=\frac{X_{2,j}X_{2,j}^{T}}{\operatorname{trace}\left(X_{2,j}X_{2,j}^{T}\right)}\qquad(4)$$
3-2-2, average the spatial covariance matrices of the two classes of samples:
$$\bar{R}_{i}=\frac{1}{K_{i}}\sum_{j=1}^{K_{i}}R_{i,j},\quad i=1,2\qquad(5)$$
3-2-3, solving a mixed space covariance matrix R:
$$R=\bar{R}_{1}+\bar{R}_{2}\qquad(6)$$
3-2-4, calculating a whitening feature matrix P:
$$R=U\lambda U^{T}\qquad(7)$$
$$P=\lambda^{-1/2}U^{T}\qquad(8)$$
where $X_{1,j}$ denotes the j-th sample of class 1 (the blink class) and $X_{2,j}$ the j-th sample of class 2 (the non-blink class); U is the eigenvector matrix of R, and λ is the diagonal matrix formed by the corresponding eigenvalues.
3-2-5, transform the mean covariance matrices of the two classes as follows:
$$T_{1}=P\bar{R}_{1}P^{T},\quad T_{2}=P\bar{R}_{2}P^{T}\qquad(9)$$
$$T_{1}=B_{1}\lambda_{1}B_{1}^{T},\quad T_{2}=B_{2}\lambda_{2}B_{2}^{T}\qquad(10)$$
$$\lambda_{1}+\lambda_{2}=I,\quad B_{1}=B_{2}=B\qquad(11)$$
where $\lambda_{1}$, $\lambda_{2}$ are the corresponding eigenvalue diagonal matrices, I is the identity matrix, and B is the matrix of the corresponding eigenvectors.
3-2-6, solve for the spatial filter W:
arrange the eigenvalues in $\lambda_{1}$ of step 3-2-5 in descending order and those in $\lambda_{2}$ in ascending order; from $\lambda_{1}$ select the 2 largest and the 2 smallest eigenvalues, and collect the eigenvectors corresponding to these 4 eigenvalues as
$$\tilde{B}=\left[b_{1},b_{2},b_{N-1},b_{N}\right]$$
then obtain W:
$$W=\tilde{B}^{T}P\qquad(12)$$
3-3, apply the obtained spatial filter W to obtain the required feature vectors; the specific steps are as follows:
3-3-1, spatially filter the original data with W:
$$Z_{4\times M}=W_{4\times N}X_{N\times M}\qquad(13)$$
where N is the number of sample channels (here 6) and M is the data length;
3-3-2, select the required feature vectors FPs:
$$f_{p}^{j}=\log\left(\frac{\operatorname{var}\left(Z_{p}^{j}\right)}{\sum_{p=1}^{4}\operatorname{var}\left(Z_{p}^{j}\right)}\right),\quad p=1,\ldots,4\qquad(14)$$
where $\operatorname{var}(Z_{p}^{j})$ is the variance of the p-th row of the filtered data $Z_{4\times M}$ of the j-th sample; integrating all $f_{p}^{j}$ into FPs yields the feature vector extracted from each EEG sample.
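Under the equations above, the filter design of steps 3-2-1 through 3-3-2 condenses to a short NumPy sketch (eigenvalues near zero would need regularization in practice, an implementation detail the patent does not address):

```python
import numpy as np

def norm_cov(x):
    """Eqs. (3)/(4): trace-normalized covariance of one N x M sample."""
    c = x @ x.T
    return c / np.trace(c)

def csp_filter(class1, class2):
    """class1/class2: lists of 6 x M samples; returns the 4 x 6 spatial filter W."""
    r1 = np.mean([norm_cov(x) for x in class1], axis=0)  # Eq. (5): class-mean covariances
    r2 = np.mean([norm_cov(x) for x in class2], axis=0)
    lam, u = np.linalg.eigh(r1 + r2)                     # Eqs. (6)-(7): R = U diag(lam) U^T
    p = np.diag(lam ** -0.5) @ u.T                       # Eq. (8): whitening matrix P
    lam1, b = np.linalg.eigh(p @ r1 @ p.T)               # Eqs. (9)-(11), eigenvalues ascending
    b_sel = b[:, [-1, -2, 0, 1]]                         # 2 largest + 2 smallest eigenvalues
    return b_sel.T @ p                                   # Eq. (12): W = B~^T P

def csp_features(w, x):
    """Eqs. (13)-(14): filter one sample and take log-normalized row variances."""
    z = w @ x
    v = z.var(axis=1)
    return np.log(v / v.sum())
```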
Further, the step 4 comprises the following specific steps:
4-1, from the FP1 channel and FP2 channel of the training sample, calculate the range mean MAD, average power mean MAP, variance mean MV, kurtosis coefficient mean MKC, skewness coefficient mean MSC, extreme point number mean MEN, fractal dimension mean MFD, correlation coefficient CC, sample entropy mean MSE, multi-scale entropy mean MME, and, on the smoothed nonlinear energy operator, the kurtosis coefficient mean MNKC, skewness coefficient mean MNSC and correlation coefficient NCC; in addition, calculate the average power difference APD between the FP1 and FP2 channels and channels 5-8 (the C3, C4, P3 and P4 channels), the correlation coefficient difference CCD between the FP1 and FP2 channels and channels 9-10 (the O1 and O2 channels), and the difference PSDD between the mean power spectral densities of channels 1-4 (the FP1, FP2, F3 and F4 channels) and those of channels 7-10 (the P3, P4, O1 and O2 channels). Together these constitute the 16-dimensional features;
Assume that the sample data is $S\in\mathbb{R}^{21\times1000}$, where $S_{k}$ denotes all data of the k-th channel and N denotes the data length; k = 1, 2, …, 21;
4-2 calculating the range mean MAD by the following formula:
$$MAD=\frac{1}{2}\sum_{k=1}^{2}\left(\max\left(S_{k}\right)-\min\left(S_{k}\right)\right)\qquad(15)$$
in the formula, max () represents taking the maximum value, and min () represents taking the minimum value;
4-3 the mean of variance MV is calculated by the following formula:
$$MV=\frac{1}{2}\sum_{k=1}^{2}\frac{1}{N}\sum_{t=1}^{N}\left(S_{k}(t)-\mu_{k}\right)^{2}\qquad(16)$$
where $\mu_{k}$ denotes the mean of all data of the k-th channel and $S_{k}(t)$ the t-th data of the k-th channel;
4-4 the average power mean MAP is calculated by the following equation:
$$MAP=\frac{1}{2}\sum_{k=1}^{2}\frac{1}{N}\sum_{t=1}^{N}S_{k}(t)^{2}\qquad(17)$$
4-5 calculating the peak coefficient mean value MKC by the following formula:
$$MKC=\frac{1}{2}\sum_{k=1}^{2}\frac{\frac{1}{N}\sum_{t=1}^{N}\left(S_{k}(t)-\mu_{k}\right)^{4}}{\sigma_{k}^{4}}\qquad(18)$$
where $\sigma_{k}$ denotes the standard deviation of all data of the k-th channel;
4-6 calculating the mean skewness coefficient MSC by the following formula:
$$MSC=\frac{1}{2}\sum_{k=1}^{2}\frac{\frac{1}{N}\sum_{t=1}^{N}\left(S_{k}(t)-\mu_{k}\right)^{3}}{\sigma_{k}^{3}}\qquad(19)$$
4-7, calculating an extreme point mean MEN by the following method:
First find all maximum and minimum points of $S_{k}(t)$, k = 1, 2, then count the total number of maximum and minimum points of each channel, denoted $en_{k}$, k = 1, 2; then
$$MEN=\frac{en_{1}+en_{2}}{2}\qquad(20)$$
4-8 calculating the fractal dimension mean MFD by the following formula and method:
$$MFD=\frac{1}{2}\sum_{k=1}^{2}FD\left(S_{k}\right),\qquad FD\left(S_{k}\right)=\lim_{\varepsilon\rightarrow0}\frac{\ln G(\varepsilon)}{\ln\left(1/\varepsilon\right)}\qquad(21)$$
where FD() denotes the box-counting fractal dimension, ε the grid side length used in the box-counting computation, and G(ε) the accumulated count of grid boxes covering the signal data; the fractal dimensions of the first two channels are obtained by this formula and averaged to obtain MFD;
4-9 the coefficient of correlation CC is given by the formula:
$$CC\left(S_{1},S_{2}\right)=\frac{\operatorname{Cov}\left(S_{1},S_{2}\right)}{\sqrt{\operatorname{var}\left(S_{1}\right)\operatorname{var}\left(S_{2}\right)}}\qquad(22)$$
where var () represents the variance calculation and Cov () represents the covariance calculation;
4-10, calculating a sample entropy mean value MSE by the following formula and method:
$$SE\left(S_{k},m,\delta,r\right)=-\ln\frac{n(m+1)}{n(m)},\qquad MSE=\frac{1}{2}\sum_{k=1}^{2}SE\left(S_{k},m,\delta,r\right)$$
where SE () represents the signal sample entropy, m represents the reconstructed signal dimension, δ represents the time delay, r represents a predefined threshold parameter, and n () represents the number of matching m-dimensional vector pairs; calculating sample entropy values of the first two channels through the formula and averaging to obtain MSE;
4-11, calculate the multi-scale entropy mean MME by the following formula and method:
$$ME(S,m,\tau,r)=SE\left(S^{\prime},m,\delta=\tau,r\right)\qquad(23)$$
where ME() denotes the multi-scale entropy and $S^{\prime}$ is the coarse-grained series
$$S^{\prime}(i)=\frac{1}{\tau}\sum_{t=(i-1)\tau+1}^{i\tau}S(t),\qquad1\leq i\leq N/\tau;$$
the multi-scale entropies of the first two channels are obtained by this formula and averaged to obtain MME;
4-12, calculate the smoothed nonlinear energy operator of all data of the k-th channel $S_{k}$, denoted $ES_{k}$:
$$ES_{k}(t)=\omega(t)\ast\left(S_{k}(t)\,S_{k}(t)-S_{k}(t-1)\,S_{k}(t+1)\right)\qquad(24)$$
where ω(t) denotes a window function and $ES_{k}(t)$ the t-th data of $ES_{k}$; then calculate the kurtosis coefficient mean MNKC, the skewness coefficient mean MNSC and the correlation coefficient NCC of the smoothed nonlinear energy operator as follows:
$$MNKC=\frac{1}{2}\sum_{k=1}^{2}\frac{\frac{1}{N}\sum_{t=1}^{N}\left(ES_{k}(t)-E\mu_{k}\right)^{4}}{E\sigma_{k}^{4}}\qquad(25)$$
where $E\mu_{k}$ and $E\sigma_{k}$ denote the mean and standard deviation of $ES_{k}$, respectively;
$$MNSC=\frac{1}{2}\sum_{k=1}^{2}\frac{\frac{1}{N}\sum_{t=1}^{N}\left(ES_{k}(t)-E\mu_{k}\right)^{3}}{E\sigma_{k}^{3}}\qquad(26)$$
$$NCC=CC\left(ES_{1},ES_{2}\right)\qquad(27)$$
4-13, in order to compensate for the spatial information of the multichannel electroencephalogram signals, calculate the average power difference APD by the following formula and method:
$$APD=2\left(P\left(S_{1}\right)+P\left(S_{2}\right)\right)-\left(P\left(S_{5}\right)+P\left(S_{6}\right)+P\left(S_{7}\right)+P\left(S_{8}\right)\right)\qquad(28)$$
wherein P () is the calculated average power value, and the calculation formula is:
$$P\left(S_{k}\right)=\frac{1}{N}\sum_{t=1}^{N}S_{k}(t)^{2}\qquad(29)$$
4-14 calculating the correlation coefficient difference CCD by the following formula and method:
$$CCD=2\,CC\left(S_{1},S_{2}\right)-CC\left(S_{1},S_{9}\right)-CC\left(S_{2},S_{10}\right)\qquad(30)$$
wherein CC () is the value of the correlation coefficient calculated by steps 4-9;
4-15 calculating the power spectral density difference PSDD by the following formula and method:
$$PSDD=\frac{1}{4}\sum_{k=1}^{4}\operatorname{mean}\left(PSD\left(S_{k}\right)\right)-\frac{1}{4}\sum_{k=7}^{10}\operatorname{mean}\left(PSD\left(S_{k}\right)\right)\qquad(31)$$
PSD () represents the power spectral density of the channel signal, mean () represents the mean calculation;
4-16, assuming the two classes together contain T training samples, apply the 16-dimensional feature extraction of steps 4-2 through 4-15 to all training samples and construct the T × 16-dimensional feature vectors FVs.
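By way of illustration, a partial NumPy/SciPy sketch of several of the 16 features (SciPy's kurtosis, skew and Welch PSD are one assumed realization of Eqs. (18), (19) and (31)):

```python
import numpy as np
from scipy.stats import kurtosis, skew
from scipy.signal import welch

def some_features(s, fs=1000):
    """s: (21, N) segment; returns a subset of the 16 features of step 4.
    Channel indices are 0-based, so S_1..S_10 of the text become s[0]..s[9]."""
    fp1, fp2 = s[0], s[1]
    mad = ((fp1.max() - fp1.min()) + (fp2.max() - fp2.min())) / 2         # Eq. (15)
    mv = (fp1.var() + fp2.var()) / 2                                       # Eq. (16)
    map_ = (np.mean(fp1 ** 2) + np.mean(fp2 ** 2)) / 2                     # Eq. (17)
    mkc = (kurtosis(fp1, fisher=False) + kurtosis(fp2, fisher=False)) / 2  # Eq. (18)
    msc = (skew(fp1) + skew(fp2)) / 2                                      # Eq. (19)
    cc = np.corrcoef(fp1, fp2)[0, 1]                                       # Eq. (22)
    power = lambda x: np.mean(x ** 2)                                      # Eq. (29)
    apd = 2 * (power(fp1) + power(fp2)) - sum(power(s[k]) for k in range(4, 8))  # Eq. (28)
    psd_mean = lambda x: welch(x, fs=fs)[1].mean()
    psdd = (np.mean([psd_mean(s[k]) for k in range(0, 4)])
            - np.mean([psd_mean(s[k]) for k in range(6, 10)]))             # Eq. (31)
    return np.array([mad, mv, map_, mkc, msc, cc, apd, psdd])
```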
Further, the step 5 comprises the following steps:
fuse FPs and FVs into a 20-dimensional feature vector Fs, then select features from Fs by a variance filtering method;
5-1, calculate the variance $Var_{i}$ of each dimension of the feature vector Fs, i = 1, 2, …, 20:
$$Var_{i}=\frac{1}{N}\sum_{n=1}^{N}\left(V_{i}(n)-\bar{V}_{i}\right)^{2}\qquad(32)$$
where N is the total number of data in each dimension (the number of samples), $\bar{V}_{i}$ is the mean of all data of the i-th dimension, and $V_{i}(n)$ is the n-th data of the i-th dimension;
5-2, sorting the variance results obtained in the step 5-1 according to a descending order, and storing feature dimension index values corresponding to the sorted variance values;
5-3, according to the feature dimension index values obtained in step 5-2, take, in order, the features corresponding to the first M index values to form the new T × M-dimensional feature vectors TRFs.
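A sketch of this variance filter, with M = 12 as in the embodiment:

```python
import numpy as np

def select_by_variance(fs_matrix, m=12):
    """fs_matrix: (T, 20) fused feature matrix Fs; keep the m highest-variance dims."""
    var = fs_matrix.var(axis=0)          # Eq. (32): variance of each dimension
    idx = np.argsort(var)[::-1][:m]      # step 5-2: sort descending, keep index values
    return fs_matrix[:, idx], idx        # step 5-3: the T x M feature matrix TRFs
```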
Further, the step 6 comprises the following specific steps:
6-1, perform model training with the feature vectors TRFs obtained in step 5, the labels of the two classes, and the Support Vector Machine (SVM) algorithm, obtaining a classifier model;
6-2, apply the feature extraction of steps 2 through 5 to the divided test samples, taking as features the M dimensions retained by feature selection, to obtain the test-sample feature vectors TEFs; perform prediction classification on TEFs with the classifier model obtained in step 6-1 and calculate the detection result;
6-3, according to the detection result of step 6-2, construct the optimal classifier model, namely the artifact detection model, by adjusting the kernel function type and kernel function parameters of the classifier model.
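A training and evaluation sketch assuming scikit-learn; the embodiment's rbf kernel with sigma = 1 and C = 2 corresponds to gamma = 0.5 under the exp(-||x-y||²/(2σ²)) convention, which is an assumption about the parameterization:

```python
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

def train_and_evaluate(trfs, y_train, tefs, y_test):
    """Steps 6-1 to 6-3: fit the SVM on TRFs and score the test vectors TEFs."""
    clf = SVC(kernel="rbf", C=2, gamma=0.5)  # kernel type/parameters are tuned in step 6-3
    clf.fit(trfs, y_train)                   # step 6-1: train with the two class labels
    y_pred = clf.predict(tefs)               # step 6-2: prediction classification
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return clf, tp / (tp + fn), tn / (tn + fp)  # sensitivity, specificity
```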
The invention has the following beneficial effects:
Aiming at the problems that the multi-channel EEG data of individuals, particularly children, are easily damaged and that signals differ greatly between individuals and across ages within the same individual, the invention uses only a few channels of data and, through the blink artifact detection method based on electroencephalogram feature learning combining Empirical Mode Decomposition (EMD) with the Common Spatial Pattern (CSP) algorithm (EMD-CSP) and intelligent fusion, together with the support vector machine algorithm, constructs an accurate classification model for blink artifact and non-blink artifact signals. This not only overcomes the damage of multi-channel data in clinical EEG signals, but also remedies the existing models' lack of spatial filtering and frequency-domain information and their high complexity, and on this basis achieves accurate detection of blink artifacts.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of the feature extraction method and screening of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1 and fig. 2, the implementation steps of the blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion, combined with the support vector machine, are as described in detail in the Disclosure of Invention above (steps 1 through 7) and are not repeated here.
In order to achieve better accuracy in detecting blink artifact and non-blink artifact signals, parameter selection and design in practical application are described below as a reference for other applications of the invention:
when the method is used for processing a multi-channel EEG signal, each section of input data is set to be 1 second, and the data overlapping rate is set to be 50%. Aiming at the problems that individual characteristics fail due to clinical individual difference, children signal acquisition is noisy, multi-channel data is damaged, and the traditional method lacks spatial filtering and frequency domain information, the invention is based on the blink artifact detection method based on the combination of the Empirical Mode Decomposition (EMD) and the common spatial mode (CSP) algorithm (EMD-CSP) electroencephalogram characteristic learning and intelligent fusion, and can achieve accurate blink artifact and non-artifact signal detection by combining a support vector machine classification model. The features extracted in the steps 2 to 4 comprise spatial domain and frequency domain information of the electroencephalogram signals, the features extracted in the steps 5-2 to 5-11 mainly comprise time domain and frequency domain information of the electroencephalogram signals, and the features extracted in the steps 5-12 to 5-15 are used for ensuring the spatiality and the completeness of the multi-channel electroencephalogram information.
During feature selection, the features whose variances rank in the first 12 positions are retained. Finally, the support vector machine algorithm uses an rbf kernel to realize nonlinear classification, with the sigma parameter set to 1 and the C parameter set to 2.
In order to truly test the detection effect of the invention on blink artifact signals, comparison experiments against several current mainstream detection algorithms were carried out on EEG data of real patients from the Children's Hospital affiliated to Zhejiang University School of Medicine:
The experimental data have 21 channels at a sampling frequency of 1000 Hz, divided into 20 individual patient data sets and one mixed data set, with an average recording length of 20 minutes. The multi-dimensional feature optimization algorithm proposed by Wang Jianhui et al. in 2021 achieved an average sensitivity of 90.64% and an average specificity of 91.94% on the individual data sets, and a sensitivity of 83.26% and a specificity of 91.04% on the mixed data set; the present invention achieved an average sensitivity of 93.70% and an average specificity of 93.89% on the individual data sets, and a sensitivity of 93.42% and a specificity of 96.31% on the mixed data set. Compared with the reference algorithm, the method improves the average sensitivity on individual data by 3.06 percentage points and the average specificity by 1.95 percentage points, and improves the sensitivity on the mixed data set by 10.16 percentage points and the specificity by 5.27 percentage points, fully demonstrating the superior performance of the method on real data compared with current mainstream blink artifact detection models.
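For reference, the sensitivity and specificity reported above follow their standard confusion-matrix definitions, with true positives counted on the blink class:

$$\text{sensitivity}=\frac{TP}{TP+FN},\qquad\text{specificity}=\frac{TN}{TN+FP}$$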
The method can be used to solve the problems that individualized features fail due to clinical individual differences, that signal acquisition from children is noisy, that multi-channel data are damaged, and that traditional methods lack spatial filtering and frequency-domain information.

Claims (7)

1. The blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion is characterized by comprising the following steps of:
step 1, filtering preprocessing and category labeling of a multichannel EEG signal;
step 2, applying EMD to the FP1 and FP2 signals of the EEG processed in step 1 to obtain two groups of 3-6 dimensional intrinsic mode functions (IMFs), and selecting the corresponding 3-dimensional low-frequency signals in each group of IMFs according to the frequency band information to form a 6-dimensional EEG signal;
step 3, dividing the processed electroencephalogram signals into training and test sets of blink artifacts and non-blink artifacts, designing a spatial filter W, and applying the CSP algorithm to extract 4-dimensional, highly discriminative feature vectors;
step 4, performing feature extraction on the EEG signal processed in step 1 to obtain 16-dimensional time-frequency domain key features;
step 5, fusing the 16-dimensional features obtained in step 4 with the 4-dimensional feature vectors obtained in step 3 into 20-dimensional features, and selecting features by applying a variance method to obtain 12-dimensional features;
step 6, establishing an artifact detection model on the data sets of artifact and non-artifact samples by applying the support vector machine algorithm;
step 7, applying the artifact detection model built in step 6 to carry out electroencephalogram blink artifact detection.
2. The method for detecting the blink artifact based on EMD-CSP electroencephalogram feature learning and intelligent fusion as claimed in claim 1, wherein the step 1 comprises the following steps:
Filter the raw 21-channel, 1000 Hz EEG signal with a 50 Hz notch filter and a 0.5-30 Hz band-pass filter to obtain the filtered signal; since a blink artifact potential normally lasts about 0.5 s, cut the filtered signal with a sliding window of size 1 s and step 0.5 s to obtain the experimental segments; then label each segment: segments containing blink artifacts are labeled as the blink class (eyeblink) with class label 1, segments without blink artifacts as the non-blink class (normal) with class label 0; 20% of each of the two classes of samples are taken as test samples, the remaining 80% being training samples.
3. The method for detecting the blink artifact based on EMD-CSP electroencephalogram feature learning and intelligent fusion as claimed in claim 2, wherein the step 2 comprises the following specific steps:
2-1, because the common spatial pattern (CSP) needs a large number of input channels and lacks frequency-domain information, the empirical mode decomposition (EMD) algorithm is applied to the FP1 channel and the FP2 channel of the training samples, and the relatively low-frequency 3-dimensional IMF components are extracted from each to form a 6-channel signal.
Assume that the sample data is $S\in\mathbb{R}^{2\times1000}$, where $S_{i}$ denotes all data of the i-th channel, i = 1, 2;
2-2, for a given $S_{i}$, the EMD decomposition procedure is performed as follows:
2-2-1, find all extreme points of $S_{i}$;
2-2-2, from the extreme points obtained in step 2-2-1, form the lower envelope EMIN(t) through the minima and the upper envelope EMAX(t) through the maxima by interpolation;
2-2-3, calculate the mean M(t) from the result of step 2-2-2:
M(t)=(EMIN(t)+EMAX(t))/2 (1)
2-2-4, extract the detail signal D(t):
D(t)=S_i(t)-M(t) (2)
2-2-5, judge whether D(t) has a zero mean or satisfies the specified iteration criterion; if not, repeat steps 2-2-1 through 2-2-4 with D(t) as the input; if so, D(t) is an IMF component.
2-2-6, after an IMF is obtained, subtract it from $S_{i}$ to obtain the residual signal; take the residual as the new $S_{i}$ and repeat steps 2-2-1 through 2-2-5, and so on, until the residual signal is a monotone value or a monotone function, completing the EMD decomposition.
2-2-7, for each $S_{i}$, examine the obtained IMFs: if their dimension is 3, extract all of them; if the dimension is larger than 3, extract dimensions 2-4; these finally constitute the 6-dimensional input signal X.
4. The method for detecting the blink artifact based on EMD-CSP electroencephalogram feature learning and intelligent fusion as claimed in claim 3, wherein the step 3 comprises the following steps:
3-1, apply the common spatial pattern (CSP) feature extraction algorithm to the 6-dimensional input signal X obtained in step 2: divide the training set TR and test set TE, design the spatial filter W, and extract 4-dimensional feature vectors.
Assume that the sample data is $X\in\mathbb{R}^{6\times1000}$. Let $X_{i,j}$ denote the normalized matrix of the j-th sample of class i (blink or non-blink), $K_{i}$ the number of class-i samples, $R_{i,j}$ the covariance matrix of a sample, trace(X) the sum of the elements on the diagonal of a matrix, and $\bar{R}_{i}$ the mean of the covariance matrices of class i; i = 1, 2;
3-2, for given signal data, the spatial filter is designed as follows:
3-2-1, compute the covariance matrix of each sample:
$$R_{1,j}=\frac{X_{1,j}X_{1,j}^{T}}{\operatorname{trace}\left(X_{1,j}X_{1,j}^{T}\right)}\qquad(3)$$
$$R_{2,j}=\frac{X_{2,j}X_{2,j}^{T}}{\operatorname{trace}\left(X_{2,j}X_{2,j}^{T}\right)}\qquad(4)$$
3-2-2, average the spatial covariance matrices of the two classes of samples:
$$\bar{R}_{i}=\frac{1}{K_{i}}\sum_{j=1}^{K_{i}}R_{i,j},\quad i=1,2\qquad(5)$$
3-2-3, solving a mixed space covariance matrix R:
$$R=\bar{R}_{1}+\bar{R}_{2}\qquad(6)$$
3-2-4, calculating a whitening feature matrix P:
$$R=U\lambda U^{T}\qquad(7)$$
$$P=\lambda^{-1/2}U^{T}\qquad(8)$$
where $X_{1,j}$ denotes the j-th sample of class 1 (the blink class) and $X_{2,j}$ the j-th sample of class 2 (the non-blink class); U is the eigenvector matrix of R, and λ is the diagonal matrix formed by the corresponding eigenvalues.
3-2-5, transform the mean covariance matrices of the two classes as follows:
$$T_{1}=P\bar{R}_{1}P^{T},\quad T_{2}=P\bar{R}_{2}P^{T}\qquad(9)$$
$$T_{1}=B_{1}\lambda_{1}B_{1}^{T},\quad T_{2}=B_{2}\lambda_{2}B_{2}^{T}\qquad(10)$$
$$\lambda_{1}+\lambda_{2}=I,\quad B_{1}=B_{2}=B\qquad(11)$$
where $\lambda_{1}$, $\lambda_{2}$ are the corresponding eigenvalue diagonal matrices, I is the identity matrix, and B is the matrix of the corresponding eigenvectors.
3-2-6, solve for the spatial filter W:
arrange the eigenvalues in $\lambda_{1}$ of step 3-2-5 in descending order and those in $\lambda_{2}$ in ascending order; from $\lambda_{1}$ select the 2 largest and the 2 smallest eigenvalues, and collect the eigenvectors corresponding to these 4 eigenvalues as
$$\tilde{B}=\left[b_{1},b_{2},b_{N-1},b_{N}\right]$$
then obtain W:
$$W=\tilde{B}^{T}P\qquad(12)$$
3-3, apply the obtained spatial filter W to obtain the required feature vectors; the specific steps are as follows:
3-3-1, spatially filter the original data with W:
$$Z_{4\times M}=W_{4\times N}X_{N\times M}\qquad(13)$$
where N is the number of sample channels (here 6) and M is the data length;
3-3-2, select the required feature vectors FPs:
$$f_{p}^{j}=\log\left(\frac{\operatorname{var}\left(Z_{p}^{j}\right)}{\sum_{p=1}^{4}\operatorname{var}\left(Z_{p}^{j}\right)}\right),\quad p=1,\ldots,4\qquad(14)$$
where $\operatorname{var}(Z_{p}^{j})$ is the variance of the p-th row of the filtered data $Z_{4\times M}$ of the j-th sample; integrating all $f_{p}^{j}$ into FPs yields the feature vector extracted from each EEG sample.
5. The method for detecting the blink artifact based on EMD-CSP electroencephalogram feature learning and intelligent fusion as claimed in claim 2, wherein the step 4 comprises the following steps:
4-1, from the FP1 channel and FP2 channel of the training sample, calculate the range mean MAD, average power mean MAP, variance mean MV, kurtosis coefficient mean MKC, skewness coefficient mean MSC, extreme point number mean MEN, fractal dimension mean MFD, correlation coefficient CC, sample entropy mean MSE, multi-scale entropy mean MME, and, on the smoothed nonlinear energy operator, the kurtosis coefficient mean MNKC, skewness coefficient mean MNSC and correlation coefficient NCC; in addition, calculate the average power difference APD between the FP1 and FP2 channels and channels 5-8 (the C3, C4, P3 and P4 channels), the correlation coefficient difference CCD between the FP1 and FP2 channels and channels 9-10 (the O1 and O2 channels), and the difference PSDD between the mean power spectral densities of channels 1-4 (the FP1, FP2, F3 and F4 channels) and those of channels 7-10 (the P3, P4, O1 and O2 channels); together these constitute the 16-dimensional features;
Assume that the sample data is $S\in\mathbb{R}^{21\times1000}$, where $S_{k}$ denotes all data of the k-th channel and N denotes the data length; k = 1, 2, …, 21;
4-2 calculating the range mean MAD by the following formula:
$$MAD=\frac{1}{2}\sum_{k=1}^{2}\left(\max\left(S_{k}\right)-\min\left(S_{k}\right)\right)\qquad(15)$$
in the formula, max () represents taking the maximum value, and min () represents taking the minimum value;
4-3 the mean of variance MV is calculated by the following formula:
$$MV=\frac{1}{2}\sum_{k=1}^{2}\frac{1}{N}\sum_{t=1}^{N}\left(S_{k}(t)-\mu_{k}\right)^{2}\qquad(16)$$
where $\mu_{k}$ denotes the mean of all data of the k-th channel and $S_{k}(t)$ the t-th data of the k-th channel;
4-4 the average power mean MAP is calculated by the following equation:
$$MAP=\frac{1}{2}\sum_{k=1}^{2}\frac{1}{N}\sum_{t=1}^{N}S_{k}(t)^{2}\qquad(17)$$
4-5 calculating the peak coefficient mean value MKC by the following formula:
$$MKC=\frac{1}{2}\sum_{k=1}^{2}\frac{\frac{1}{N}\sum_{t=1}^{N}\left(S_{k}(t)-\mu_{k}\right)^{4}}{\sigma_{k}^{4}}\qquad(18)$$
where $\sigma_{k}$ denotes the standard deviation of all data of the k-th channel;
4-6 calculating the mean skewness coefficient MSC by the following formula:
$$MSC=\frac{1}{2}\sum_{k=1}^{2}\frac{\frac{1}{N}\sum_{t=1}^{N}\left(S_{k}(t)-\mu_{k}\right)^{3}}{\sigma_{k}^{3}}\qquad(19)$$
4-7, calculating an extreme point mean MEN by the following method:
First find all maximum and minimum points of $S_{k}(t)$, k = 1, 2, then count the total number of maximum and minimum points of each channel, denoted $en_{k}$, k = 1, 2; then
$$MEN=\frac{en_{1}+en_{2}}{2}\qquad(20)$$
4-8 calculating the fractal dimension mean MFD by the following formula and method:
$$MFD=\frac{1}{2}\sum_{k=1}^{2}FD\left(S_{k}\right),\qquad FD\left(S_{k}\right)=\lim_{\varepsilon\rightarrow0}\frac{\ln G(\varepsilon)}{\ln\left(1/\varepsilon\right)}\qquad(21)$$
where FD() denotes the box-counting fractal dimension, ε the grid side length used in the box-counting computation, and G(ε) the accumulated count of grid boxes covering the signal data; the fractal dimensions of the first two channels are obtained by this formula and averaged to obtain MFD;
4-9 the coefficient of correlation CC is given by the formula:
$$CC\left(S_{1},S_{2}\right)=\frac{\operatorname{Cov}\left(S_{1},S_{2}\right)}{\sqrt{\operatorname{var}\left(S_{1}\right)\operatorname{var}\left(S_{2}\right)}}\qquad(22)$$
where var () represents the variance calculation and Cov () represents the covariance calculation;
4-10, calculating a sample entropy mean value MSE by the following formula and method:
$$SE\left(S_{k},m,\delta,r\right)=-\ln\frac{n(m+1)}{n(m)},\qquad MSE=\frac{1}{2}\sum_{k=1}^{2}SE\left(S_{k},m,\delta,r\right)$$
where SE () represents the signal sample entropy, m represents the reconstructed signal dimension, δ represents the time delay, r represents a predefined threshold parameter, and n () represents the number of matching m-dimensional vector pairs; calculating sample entropy values of the first two channels through the formula and averaging to obtain MSE;
4-11, calculate the multi-scale entropy mean MME by the following formula and method:
$$ME(S,m,\tau,r)=SE\left(S^{\prime},m,\delta=\tau,r\right)\qquad(23)$$
where ME() denotes the multi-scale entropy and $S^{\prime}$ is the coarse-grained series
$$S^{\prime}(i)=\frac{1}{\tau}\sum_{t=(i-1)\tau+1}^{i\tau}S(t),\qquad1\leq i\leq N/\tau;$$
the multi-scale entropies of the first two channels are obtained by this formula and averaged to obtain MME;
4-12. Calculate the smoothed nonlinear energy operator data of all data of the k-th channel S_k, denoted ES_k, by the following formula:
ES_k(t) = ω(t) * (S_k(t)·S_k(t) - S_k(t-1)·S_k(t+1)) (24)
where ω(t) denotes a window function and ES_k(t) represents the t-th data of ES_k; then calculate the kurtosis coefficient mean MNKC, the skewness coefficient mean MNSC and the correlation coefficient NCC of the smoothed nonlinear energy operator, respectively, by the following formulas:
MNKC = (1/2) Σ_{k=1}^{2} (1/N) Σ_{t=1}^{N} ((ES_k(t) - Eμ_k)/Eσ_k)^4 (25)
where Eμ_k and Eσ_k respectively represent the mean and standard deviation of ES_k;
MNSC = (1/2) Σ_{k=1}^{2} (1/N) Σ_{t=1}^{N} ((ES_k(t) - Eμ_k)/Eσ_k)^3 (26)
NCC = Cov(ES_1, ES_2) / sqrt(var(ES_1) · var(ES_2)) (27)
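A sketch of step 4-12 and equations (25)-(27); the Hamming window is an assumed choice of ω(t), and the window length is illustrative:

import numpy as np

def smoothed_neo(x, win_len=7):
    psi = x[1:-1] ** 2 - x[:-2] * x[2:]            # nonlinear (Teager) energy operator
    w = np.hamming(win_len)
    return np.convolve(psi, w / w.sum(), mode='same')   # smoothing by window convolution

def neo_features(S):
    es = [smoothed_neo(S[k]) for k in range(2)]    # ES_1, ES_2
    z = [(e - e.mean()) / e.std() for e in es]     # standardize by E mu_k, E sigma_k
    MNKC = float(np.mean([(zk ** 4).mean() for zk in z]))   # eq. (25)
    MNSC = float(np.mean([(zk ** 3).mean() for zk in z]))   # eq. (26)
    NCC = float(np.corrcoef(es[0], es[1])[0, 1])            # eq. (27)
    return MNKC, MNSC, NCC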
4-13. To compensate for the spatial information of the multichannel electroencephalogram signals, calculate the average power difference APD by the following formula and method:
APD = 2·(P(S_1) + P(S_2)) - (P(S_5) + P(S_6) + P(S_7) + P(S_8)) (28)
where P(·) is the calculated average power value, computed as:
P(S_k) = (1/N) Σ_{t=1}^{N} S_k(t)^2 (29)
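Equations (28) and (29) in numpy, assuming 0-based rows so that S_1, S_2 map to rows 0, 1 and S_5..S_8 to rows 4..7:

import numpy as np

def apd(S):
    P = lambda x: float(np.mean(x ** 2))     # average power, eq. (29)
    return 2 * (P(S[0]) + P(S[1])) - (P(S[4]) + P(S[5]) + P(S[6]) + P(S[7]))   # eq. (28)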
4-14. Calculate the correlation coefficient difference CCD by the following formula and method:
CCD = 2·CC(S_1, S_2) - CC(S_1, S_9) - CC(S_2, S_10) (30)
where CC(·) is the correlation coefficient value calculated as in step 4-9;
4-15. Calculate the power spectral density difference PSDD by the following formula and method:
PSDD = (mean(PSD(S_1)) + mean(PSD(S_2))) - (mean(PSD(S_9)) + mean(PSD(S_10))) (31)
where PSD(·) represents the power spectral density of the channel signal and mean(·) represents the mean calculation;
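A combined sketch of steps 4-14 and 4-15; the Welch PSD estimate, the sampling rate fs, and the O1/O2 channel rows (8, 9 in 0-based indexing) are assumptions, as is reconstructed equation (31) itself:

import numpy as np
from scipy.signal import welch

def ccd(S):
    cc = lambda x, y: np.corrcoef(x, y)[0, 1]
    return 2 * cc(S[0], S[1]) - cc(S[0], S[8]) - cc(S[1], S[9])   # eq. (30)

def psdd(S, fs=250):
    mean_psd = lambda x: float(np.mean(welch(x, fs=fs)[1]))   # mean of the PSD estimate
    return (mean_psd(S[0]) + mean_psd(S[1])) - (mean_psd(S[8]) + mean_psd(S[9]))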
4-16. Assuming the total number of training samples of the two classes is T, perform the 16-dimensional feature extraction of steps 4-2 through 4-15 on all training samples and construct a T×16-dimensional feature vector FVs.
6. The method for detecting the blink artifact based on EMD-CSP electroencephalogram feature learning and intelligent fusion as claimed in claim 2, wherein the step 5 comprises the following steps:
perform feature fusion on FPs and FVs to form a 20-dimensional feature vector Fs, then apply a variance filtering method to the feature vector Fs for feature selection;
5-1. Calculate the variance Var_i (i = 1, 2, …, 20) of each dimension of the feature vector Fs by the following formula:
Var_i = (1/N) Σ_{n=1}^{N} (V_i(n) - V̄_i)^2 (32)
where N is the total number of data in each dimension and there are 20 feature dimensions in total, V̄_i is the mean of all data of the i-th dimension, and V_i(n) is the n-th data of the i-th dimension;
5-2. Sort the variance results obtained in step 5-1 in descending order and store the feature dimension index values corresponding to the sorted variance values;
5-3. According to the feature dimension index values obtained in step 5-2, select the features corresponding to the first M index values, in order, as the new feature vector TRFs of dimension T×M.
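Steps 5-1 through 5-3 amount to variance-based filtering; a minimal sketch on the fused T×20 matrix Fs (M is the free parameter of step 5-3):

import numpy as np

def variance_filter(Fs, M):
    variances = Fs.var(axis=0)            # Var_i per feature dimension, eq. (32)
    order = np.argsort(variances)[::-1]   # dimension indices, descending variance
    idx = order[:M]                       # first M index values
    return Fs[:, idx], idx                # TRFs of shape (T, M), plus the indices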
7. The method for detecting the blink artifact based on EMD-CSP electroencephalogram feature learning and intelligent fusion as claimed in claim 6, wherein the step 6 comprises the following steps:
6-1. Perform model training using the feature vectors TRFs obtained in step 5, the labels of the two categories, and a support vector machine (SVM) algorithm to obtain a classifier model;
6-2. Apply the feature extraction of steps 2-5 to the held-out test samples, take the M dimensions retained by feature selection to obtain the test sample feature vectors TEFs, perform prediction classification on TEFs with the classifier model obtained in step 6-1, and calculate the detection result;
6-3. According to the detection result of step 6-2, construct the optimal classifier model, i.e. the artifact detection model, by adjusting the kernel function type and kernel function parameters of the classifier model.
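A scikit-learn sketch of steps 6-1 through 6-3 (the grid of kernels and parameters is illustrative; the claim only requires that kernel type and parameters be tuned against the detection result):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score

def train_artifact_detector(TRFs, train_labels, TEFs, test_labels):
    search = GridSearchCV(
        SVC(),
        {'kernel': ['rbf', 'linear'], 'C': [0.1, 1, 10], 'gamma': ['scale', 0.01]},
        cv=5,
    )
    search.fit(TRFs, train_labels)                  # step 6-1: train the classifier
    pred = search.best_estimator_.predict(TEFs)     # step 6-2: predict on test features
    return search.best_estimator_, accuracy_score(test_labels, pred)   # step 6-3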
CN202111129277.0A 2021-09-26 2021-09-26 Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion Pending CN113850192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111129277.0A CN113850192A (en) 2021-09-26 2021-09-26 Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111129277.0A CN113850192A (en) 2021-09-26 2021-09-26 Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion

Publications (1)

Publication Number Publication Date
CN113850192A true CN113850192A (en) 2021-12-28

Family

ID=78979781

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111129277.0A Pending CN113850192A (en) 2021-09-26 2021-09-26 Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion

Country Status (1)

Country Link
CN (1) CN113850192A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115227504A (en) * 2022-07-18 2022-10-25 浙江师范大学 Automatic lifting sickbed system based on electroencephalogram and electrooculogram signals
CN114358090B (en) * 2022-02-14 2024-06-07 重庆邮电大学 Motor imagery electroencephalogram signal classification method based on PSD and CSP


Similar Documents

Publication Publication Date Title
CN104586387B (en) Method for extracting and fusing time, frequency and space domain multi-parameter electroencephalogram characters
CN105956624B Motor imagery EEG classification method based on spatio-temporal-frequency optimized feature sparse representation
WO2018014436A1 (en) Emotion eeg recognition method providing emotion recognition model time robustness
CN110969108B (en) Limb action recognition method based on autonomic motor imagery electroencephalogram
CN107260166A A practical online EEG artifact elimination method
CN110584660B (en) Electrode selection method based on brain source imaging and correlation analysis
CN112656427A (en) Electroencephalogram emotion recognition method based on dimension model
CN110269609B (en) Method for separating ocular artifacts from electroencephalogram signals based on single channel
CN110151203B (en) Fatigue driving identification method based on multistage avalanche convolution recursive network EEG analysis
CN110537929B (en) SSVEP-based attention assessment method, training method and brain-computer interface
CN114533086B Motor imagery EEG decoding method based on spatial-domain feature time-frequency transformation
CN112464902B (en) Electroencephalogram blink artifact detection method based on multichannel multidimensional feature optimization
CN103919565A (en) Fatigue driving electroencephalogram signal feature extraction and identification method
CN106236027A Method for identifying depression based on EEG combined with temperature
CN113010013A (en) Wasserstein distance-based motor imagery electroencephalogram migration learning method
CN114732409A (en) Emotion recognition method based on electroencephalogram signals
CN108836327A (en) Intelligent outlet terminal and EEG signal identification method based on brain-computer interface
CN113850192A (en) Blink artifact detection method based on EMD-CSP electroencephalogram feature learning and intelligent fusion
CN106073767B (en) Phase synchronization measurement, coupling feature extraction and the signal recognition method of EEG signal
CN113208613B (en) Multi-mode BCI (binary coded decimal) timing optimization method based on FHLS (FHLS) feature selection
CN111067513A (en) Sleep quality detection key brain area judgment method based on characteristic weight self-learning
CN111671421B (en) Electroencephalogram-based children demand sensing method
CN112560703B (en) Multi-mode BCI feature extraction method based on PF coefficient
CN117883082A (en) Abnormal emotion recognition method, system, equipment and medium
Ahmed et al. Effective hybrid method for the detection and rejection of electrooculogram (EOG) and power line noise artefacts from electroencephalogram (EEG) mixtures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination