Disclosure of Invention
The present invention is directed to solving the above problems of the prior art. A method for removing ocular artifacts from electroencephalogram (EEG) signals based on a stacked sparse denoising autoencoder (SSDA) is provided. The technical scheme of the invention is as follows:
An SSDA-based ocular artifact removal method for EEG signals comprises the following steps:
S1, in an offline stage, take clean EEG signals as the training set, normalize them, and input them into a stacked sparse denoising autoencoder (SSDA) model for pre-training. The SSDA is composed of two sparse denoising autoencoders (SDAs) connected end to end: the output of the first SDA is the input of the second SDA, and the output of the second SDA is inverse-normalized to obtain the reconstructed EEG signal.
S2, compute the error between the reconstructed EEG signal and the clean EEG signal, continuously train the SSDA so as to minimize this error, and fine-tune the model parameters by gradient descent;
S3, in an online stage, acquire EEG signals containing ocular artifacts, normalize them, input the normalized EEG signals into the SSDA model trained in step S2, and inverse-normalize the output data to obtain EEG signals with the ocular artifacts removed.
Further, the normalization of the clean EEG signal in step S1 is calculated as:
EEGstd(i) = (EEGorg(i) − min_j EEGorg(j)) / (max_j EEGorg(j) − min_j EEGorg(j))
where EEGstd(i) represents the value after normalization, EEGorg(i) represents the original value before normalization, i represents the index of the EEG sampling point, and j ∈ {1, 2, …, n} ranges over the sampling points.
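The normalization described above, and the inverse normalization used later in step S3, can be sketched as follows. This is a minimal illustration assuming min-max scaling; the function names `normalize` and `denormalize` are illustrative, not from the patent.

```python
import numpy as np

def normalize(eeg_org):
    # Map the EEG segment into [0, 1] using its own min and max.
    lo, hi = eeg_org.min(), eeg_org.max()
    return (eeg_org - lo) / (hi - lo), lo, hi

def denormalize(eeg_out, lo, hi):
    # Invert the min-max mapping to restore the original amplitude scale.
    return eeg_out * (hi - lo) + lo

eeg = np.array([2.0, 4.0, 6.0, 10.0])   # toy EEG samples
eeg_std, lo, hi = normalize(eeg)
eeg_rec = denormalize(eeg_std, lo, hi)  # round-trips back to eeg
```

Because the min and max of the input segment are stored, the inverse mapping recovers the original amplitude scale exactly when applied to the model output.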
Further, in step S2, the SSDA is trained according to the error between the reconstructed EEG signal and the clean electroencephalogram signal, which is as follows:
the pre-training process comprises the following steps:
(1) randomly zero a proportion of the normalized EEG signal to corrupt it, obtaining the corrupted input x̃ = {x̃1, x̃2, …, x̃n}, where x̃i represents one sample and n represents the number of samples;
(2) randomly initialize w and b, and obtain the mapping h(1) of the first hidden layer of the model according to formulas (3)-(5);
h = f(wx̃ + b) (3)
y = g(wTh + b') (4)
JDAE(w, b) = (1/(2n)) Σ_{i=1}^{n} ||xi − yi||^2 (5)
In formulas (3)-(5), h represents the value of the hidden layer, x represents the electroencephalogram sequence and x̃ represents the contaminated input data, JDAE(w, b) represents the loss function of the sparse denoising autoencoder, f(·) and g(·) represent the mapping functions for encoding and decoding respectively and are usually nonlinear, w represents the weight matrix between the input layer and the hidden layer, wT represents the weight matrix between the hidden layer and the output layer, b and b' represent the bias vectors of the hidden layer and the output layer respectively, n represents the number of input samples, and hi represents the hidden-layer vector of the i-th sample;
(3) perform sparsification on the hidden layer h(1) according to formulas (6) and (7), obtain the sparsified h(1), and determine the model parameters {w1, b1}, finishing the training of the first SDA;
Jsparse(w, b) = JDAE(w, b) + β Σ_{j=1}^{s2} KL(ρ || ρ̂j) (6)
JSDA(w, b) = Jsparse(w, b) + (λ/2) Σ_{i,j} wij^2 (7)
In the formulas, β is the weight of the sparsity penalty factor, s2 is the number of hidden-layer neurons after the sparse layer, KL is the relative entropy, ρ is the sparsity parameter, ρ̂j is the average activation of the j-th hidden neuron over the training set, λ is the regularization weight, wij is a weight between the input layer and the hidden layer, and sl is the number of hidden-layer neurons after regularization. EEGnstd(i) denotes the inverse-normalized EEG signal, EEGorg(i) and EEGout(i) respectively represent the signals input to and output by the model, and n represents the number of signal sampling points;
(4) take the hidden layer h(1) as the input of the second SDA, and use the values {w1, b1} trained by the first SDA to replace the random parameters, where w1 and b1 respectively represent the weight and bias between the input layer and the hidden layer of the first SDA; repeat steps (2) and (3) to determine the output and parameters {w2, b2} of the second SDA, where w2 and b2 respectively represent the weights and biases between the input layer and the hidden layer of the second SDA. At this point, the two SDAs in the model are trained, i.e., the pre-training process of the SSDA is completed; the SSDA is then fine-tuned to make the parameter values optimal over the whole network.
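The pre-training of one SDA under formulas (3)-(7) can be sketched as a loss computation. This is a sketch under stated assumptions: sigmoid encoder and tied-weight sigmoid decoder, a corruption ratio of 0.3, and illustrative values of β, ρ, and λ; none of these hyperparameter values come from the patent, and gradients for the actual training step would be obtained by backpropagation through this loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, ratio=0.3):
    # Step (1): randomly zero a proportion of the input entries.
    mask = rng.random(x.shape) >= ratio
    return x * mask

def sda_loss(x, w, b, b_prime, beta=0.1, rho=0.05, lam=1e-4):
    # Reconstructed SDA loss: reconstruction error + beta * KL sparsity
    # penalty + (lam/2) * weight decay, with tied decoder weights w.T.
    x_tilde = corrupt(x)
    h = sigmoid(x_tilde @ w + b)                  # encoder, formula (3)
    y = sigmoid(h @ w.T + b_prime)                # decoder, formula (4)
    mse = np.sum((x - y) ** 2) / (2 * x.shape[0])  # formula (5)
    rho_hat = np.clip(h.mean(axis=0), 1e-8, 1 - 1e-8)  # average activation
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))  # formula (6)
    return mse + beta * kl + 0.5 * lam * np.sum(w ** 2)  # formula (7)

n_samples, n_in, n_hidden = 8, 16, 6
x = rng.random((n_samples, n_in))             # stand-in for normalized EEG
w = 0.1 * rng.standard_normal((n_in, n_hidden))
loss = sda_loss(x, w, np.zeros(n_hidden), np.zeros(n_in))
```

The KL term pushes the average activation of each hidden neuron toward the small target ρ, which is what makes the learned code sparse.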
Further, the fine tuning process specifically includes:
(1) initialize Δw1 and Δb1, where Δw1 and Δb1 respectively represent the increment values of the weight and bias between the input layer and the hidden layer of the first SDA; let Δw1 = 0, Δb1 = 0;
(2) compute ∂J/∂wl and ∂J/∂bl by the back-propagation algorithm, which respectively represent the gradient values of the loss function with respect to the weights and the biases;
(3) let Δwl = Δwl + ∂J/∂wl and Δbl = Δbl + ∂J/∂bl, where Δwl and Δbl respectively represent the increment values of the weights and biases between the input layer and the hidden layer of the l-th SDA, l ∈ {1, 2};
(5) update the parameters as wl = wl − α·Δwl, bl = bl − α·Δbl, where α represents the learning rate.
At this point, the fine-tuning process of the SSDA model is completed, that is, the training process of the entire model is completed, and the parameters at this time are optimal over the entire model.
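The fine-tuning update of steps (1), (3), and (5) above can be sketched as follows. The gradients here are stand-in constants in place of real backpropagation outputs, and all names and shapes are illustrative.

```python
import numpy as np

def finetune_step(w, b, grad_w, grad_b, d_w, d_b, alpha):
    # Step (3): accumulate the backpropagated gradients into the increments.
    d_w = d_w + grad_w
    d_b = d_b + grad_b
    # Step (5): gradient-descent parameter update with learning rate alpha.
    w = w - alpha * d_w
    b = b - alpha * d_b
    return w, b, d_w, d_b

# Step (1): initialize the increments to zero.
d_w = np.zeros((4, 3))
d_b = np.zeros(3)
alpha = 0.01  # learning rate

w = np.ones((4, 3))
b = np.zeros(3)
grad_w = np.full((4, 3), 2.0)  # stand-in for a backprop gradient
grad_b = np.full(3, 1.0)
w, b, d_w, d_b = finetune_step(w, b, grad_w, grad_b, d_w, d_b, alpha)
```

With these stand-in gradients, every weight moves from 1.0 to 1.0 − 0.01·2.0 = 0.98, showing the direction and magnitude of a single update.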
Further, in step S3, the EEG signal containing the electrooculogram (EOG) artifact is normalized, input into the trained SSDA model, and the output data is inverse-normalized, so as to obtain the EEG with the EOG removed, specifically as follows:
normalize the EEG containing the EOG artifact, the normalization being calculated as shown in formula (8):
EEGstd(i) = (EEGorg(i) − min_j EEGorg(j)) / (max_j EEGorg(j) − min_j EEGorg(j)) (8)
where EEGstd(i) represents the value after normalization and EEGorg(i) represents the original value before normalization, with j ∈ {1, 2, …, n};
input the normalized signal into the trained SSDA model, and then inverse-normalize the output value of the model to obtain the EEG signal with the EOG removed, the inverse normalization being calculated as shown in formula (9):
EEGnstd(i) = EEGout(i) · (max_j EEGin(j) − min_j EEGin(j)) + min_j EEGin(j) (9)
where EEGnstd(i) represents the inverse-normalized EEG signal, i.e., the EEG signal after removal of ocular artifacts, EEGin and EEGout(i) respectively represent the signals input to and output by the model, i represents the index of the EEG sampling point, and j ∈ {1, 2, …, n}.
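The online stage of step S3 chains formulas (8) and (9) around the trained model. In this sketch, `ssda_forward` is a stand-in identity function in place of the trained SSDA (whose learned encoders and decoder would be applied at that point); the function names are illustrative.

```python
import numpy as np

def ssda_forward(x_std):
    # Placeholder for the trained SSDA: identity instead of learned weights.
    return x_std

def remove_ocular_artifacts(eeg_in, model=ssda_forward):
    lo, hi = eeg_in.min(), eeg_in.max()
    eeg_std = (eeg_in - lo) / (hi - lo)  # normalization, formula (8)
    eeg_out = model(eeg_std)             # trained SSDA reconstruction
    return eeg_out * (hi - lo) + lo      # inverse normalization, formula (9)

eeg_in = np.array([1.0, 3.0, 5.0, 9.0])  # toy contaminated EEG segment
eeg_clean = remove_ocular_artifacts(eeg_in)
```

With the identity placeholder, the output equals the input, which confirms that the normalize/denormalize pair is lossless; with a trained SSDA in its place, the output would be the reconstructed clean EEG.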
The invention has the following advantages and beneficial effects:
the invention provides an SSDA-based electroencephalogram signal ocular artifact removal method, which can learn the characteristics of a pure electroencephalogram signal through strong learning capability and signal reconstruction capability of a self-coding network on the premise of not using an ocular artifact signal as a reference signal, further reconstruct a pure electroencephalogram signal from an electroencephalogram signal containing the ocular artifact, achieve the purpose of removing the ocular artifact and remove the ocular artifact of the electroencephalogram signal of any channel. The method comprises the following specific steps: firstly, taking a pure brain electrical signal as a training set, carrying out normalization processing, inputting a stack-type sparse denoising self-coding (SSDA) model and carrying out pre-training, wherein the SSDA model consists of two sparse denoising self-coding Systems (SDAs) in a head-to-tail connection mode, the output of the first SDA is the input of the second SDA, and the output of the second SDA is subjected to inverse normalization processing, so that the reconstructed brain electrical signal is obtained. And secondly, acquiring an error between the reconstructed electroencephalogram signal and the pure electroencephalogram signal to minimize the error, continuously training the SSDA, and finely adjusting the model parameters according to a gradient descent method to finish the training of the SSDA model. And finally, normalizing the electroencephalogram signals containing the ocular artifacts, inputting the normalized electroencephalogram signals into the trained SSDA model, and performing inverse normalization processing on output data to obtain the electroencephalogram signals without the ocular artifacts. 
Compared with other methods, the present method not only reduces the time required to remove ocular artifacts from the EEG signal, but also improves the signal-to-noise ratio of the EEG signal.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
An SSDA-based ocular artifact removal method for EEG signals comprises the following steps:
S1, take the clean EEG signal as the training set, normalize it, and input it into the SSDA model;
EEGstd(i) = (EEGorg(i) − min_j EEGorg(j)) / (max_j EEGorg(j) − min_j EEGorg(j))
where EEGstd(i) represents the value after normalization, EEGorg(i) represents the original value before normalization, i represents the index of the EEG sampling point, and j ∈ {1, 2, …, n}.
S2, continuously train the SSDA according to the error between the reconstructed EEG signal and the clean EEG signal, minimizing this error and adjusting the model parameters accordingly.
The pre-training process comprises the following steps:
(1) randomly zero a certain proportion of the normalized EEG signal to corrupt it, obtaining the corrupted input x̃ = {x̃1, x̃2, …, x̃n}, where x̃i represents one sample and n represents the number of samples;
(2) randomly initialize w and b, and obtain the mapping h(1) of the first hidden layer of the model according to formulas (19)-(21).
h = f(wx̃ + b) (19)
y = g(wTh + b') (20)
JDAE(w, b) = (1/(2n)) Σ_{i=1}^{n} ||xi − yi||^2 (21)
In formulas (19)-(21), h represents the value of the hidden layer, x represents the electroencephalogram sequence and x̃ represents the contaminated input data, JDAE(w, b) represents the loss function of the sparse denoising autoencoder, f(·) and g(·) represent the mapping functions for encoding and decoding respectively and are usually nonlinear, w represents the weight matrix between the input layer and the hidden layer, wT represents the weight matrix between the hidden layer and the output layer, b and b' represent the bias vectors of the hidden layer and the output layer respectively, n represents the number of input samples, and hi represents the hidden-layer vector of the i-th sample;
(3) perform sparsification on the hidden layer h(1) according to formulas (22) and (23), obtain the sparsified h(1), and determine the model parameters {w1, b1}, completing the training of the first SDA.
Jsparse(w, b) = JDAE(w, b) + β Σ_{j=1}^{s2} KL(ρ || ρ̂j) (22)
JSDA(w, b) = Jsparse(w, b) + (λ/2) Σ_{i,j} wij^2 (23)
In the formulas, β is the weight of the sparsity penalty factor, s2 is the number of hidden-layer neurons after the sparse layer, KL is the relative entropy, ρ is the sparsity parameter, ρ̂j is the average activation of the j-th hidden neuron over the training set, λ is the regularization weight, wij is a weight between the input layer and the hidden layer, and sl is the number of hidden-layer neurons after regularization. EEGnstd(i) denotes the inverse-normalized EEG signal, EEGorg(i) and EEGout(i) respectively represent the signals input to and output by the model, and n represents the number of signal sampling points;
(4) take the hidden layer h(1) as the input of the second SDA, use the values {w1, b1} trained by the first SDA to replace the random parameters, and repeat steps (2) and (3) to determine the output and the parameters {w2, b2} of the second SDA. At this point, the two SDAs in the model are trained, and the pre-training process of the SSDA is completed. The SSDA is then fine-tuned to make the parameter values optimal over the whole network.
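The stacking in step (4), where the first hidden layer becomes the second SDA's input, can be sketched as follows. The weights here are random stand-ins for the pre-trained parameters {w1, b1} and {w2, b2}, and all dimensions are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x, w, b):
    # Encoder mapping h = f(wx + b) of one SDA.
    return sigmoid(x @ w + b)

rng = np.random.default_rng(1)
n_samples, n_in, h1_dim, h2_dim = 8, 16, 10, 6
x = rng.random((n_samples, n_in))  # stand-in for normalized EEG samples

# {w1, b1}: parameters of the first pre-trained SDA (random stand-ins here).
w1, b1 = 0.1 * rng.standard_normal((n_in, h1_dim)), np.zeros(h1_dim)
h1 = encode(x, w1, b1)  # first hidden layer h(1)

# The first hidden layer h(1) is the input of the second SDA.
w2, b2 = 0.1 * rng.standard_normal((h1_dim, h2_dim)), np.zeros(h2_dim)
h2 = encode(h1, w2, b2)  # second hidden layer
```

Each SDA is pre-trained on its own input (the raw samples for the first, h(1) for the second), which is exactly the layer-by-layer scheme steps (2)-(4) describe.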
The fine-tuning process is as follows:
(1) initialize Δw1 and Δb1, where Δw1 and Δb1 respectively represent the increment values of the weight and bias between the input layer and the hidden layer of the first SDA; let Δw1 = 0, Δb1 = 0;
(2) compute ∂J/∂wl and ∂J/∂bl by the back-propagation algorithm, which respectively represent the gradient values of the loss function with respect to the weights and the biases;
(3) let Δwl = Δwl + ∂J/∂wl and Δbl = Δbl + ∂J/∂bl, where Δwl and Δbl respectively represent the increment values of the weights and biases between the input layer and the hidden layer of the l-th SDA, l ∈ {1, 2};
(5) update the parameters as wl = wl − α·Δwl, bl = bl − α·Δbl, where α represents the learning rate.
At this point, the fine-tuning process of the SSDA model is completed, that is, the training process of the entire model is completed, and the parameters at this time are optimal over the entire model.
S3, the EEG signal containing the electrooculogram (EOG) artifact is normalized, input into the trained SSDA model, and the output data is inverse-normalized, so that the EEG with the EOG removed is obtained;
EEGstd(i) = (EEGorg(i) − min_j EEGorg(j)) / (max_j EEGorg(j) − min_j EEGorg(j))
where EEGstd(i) represents the value after normalization and EEGorg(i) represents the original value before normalization.
The normalized signal is input into the trained SSDA model, and the output value of the model is then inverse-normalized to obtain the EEG signal with the EOG removed. The inverse normalization is calculated as shown in formula (25):
EEGnstd(i) = EEGout(i) · (max_j EEGorg(j) − min_j EEGorg(j)) + min_j EEGorg(j) (25)
where EEGnstd(i) represents the inverse-normalized EEG signal, i.e., the EEG signal after removal of ocular artifacts, EEGorg(i) and EEGout(i) respectively represent the signals input to and output by the model, i represents the index of the EEG sampling point, and j ∈ {1, 2, …, n}.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.