CN111860306B - Electroencephalogram signal denoising method based on a width-depth echo state network - Google Patents

Publication number: CN111860306B (application CN202010695317.7A; earlier publication CN111860306A)
Authority: CN (China)
Inventors: 吴晓军 (Wu Xiaojun), 孙维彤 (Sun Weitong), 苏玉萍 (Su Yuping)
Current and original assignee: Shaanxi Normal University
Legal status: Active

Classifications

    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G06F 2218/04 Denoising
    • G06F 2218/12 Classification; Matching
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

An electroencephalogram signal denoising method based on a width-depth echo state network comprises the steps of selecting electroencephalogram signal samples, simulating noise-containing electroencephalogram samples, dividing the data into a network training set and a test set, constructing the network model, training the network model and verifying it on the test set. Because the invention uses an echo state network, the learning process computes only the output weight W_out, so there are few training parameters and the method is easy to implement; increasing the number of reservoirs raises the complexity of the linear combination and improves the denoising performance for electroencephalogram signals. The width-and-depth topology increases the feature-extraction capacity of the reservoirs, retains more useful information during feature extraction, captures the multi-scale dynamics of the time-series data and extracts more complex features. The method has the advantages of high denoising performance, few training parameters, ease of implementation and preservation of the nonlinear characteristics of the original electroencephalogram signal, and can be used for signal preprocessing and signal denoising.

Description

Electroencephalogram signal denoising method based on width depth echo state network
Technical Field
The invention belongs to the technical field of electroencephalogram signal processing, and particularly relates to an electroencephalogram signal denoising method based on a wide-depth echo state network.
Background Art
An electroencephalogram signal is a recording of the electrical activity of the human brain made with several scalp-mounted electrodes. During acquisition it is extremely susceptible to various noise disturbances such as baseline wander, electromyographic signals and electrooculographic signals. Eye-blink artifacts are especially common in electroencephalogram signals: the low-frequency, high-amplitude signals they generate are much larger than the electroencephalogram itself, and the superposition of these unwanted signals seriously degrades the electroencephalogram, reducing the accuracy of feature extraction and affecting subsequent studies. Because the time-frequency characteristics of some of the noise in electroencephalogram signals are complex and their distributions are unknown, traditional methods have difficulty filtering them out.
The echo state network, proposed by Jaeger et al., is a typical recurrent neural network. It is regarded as a tool for modeling the temporal correlation between input and output sequences, and it can be trained either by offline linear regression or by online methods. An echo state network trains faster and has stronger nonlinear approximation capability than classical recurrent neural networks. The electroencephalogram signal is a multi-scale, nonlinear, fluctuating and random time series. A traditional echo state network contains only one reservoir, which limits its range of application, especially when the data exhibit multi-scale and highly nonlinear dynamics; for a multivariate time series, the increase in feature information means that a traditional echo state network cannot meet the required denoising performance. To obtain higher denoising accuracy on multivariate time series, richer features must be extracted from the data, the multi-scale dynamics of the time-series data captured, and more complex features extracted, which requires increasing both the depth and the width of the reservoir.
Considering the decomposition mechanism of the traditional reservoir, a novel echo state network composed of multiple reservoirs in a parallel-and-stacked topology is proposed for multivariate time-series denoising, called the width-depth echo state network; with multiple reservoirs it can fully reflect the dynamic characteristics of a multivariate time series.
In the technical field of electroencephalogram signal processing, providing an electroencephalogram signal denoising method based on a width-depth echo state network is a technical problem urgently awaiting a solution.
Disclosure of Invention
The invention aims to overcome the above deficiencies of the prior art by providing a signal denoising method based on a width-depth echo state network that has high denoising performance, few training parameters, is easy to implement, and preserves the nonlinear characteristics of the original electroencephalogram signal.
The technical scheme adopted for solving the technical problems comprises the following steps:
(1) Selecting an electroencephalogram signal sample
s electroencephalogram signal samples are selected from the Physionet database and used as the output of the width-depth echo state network;
normalizing each electroencephalogram signal sample according to the following formula:
where x_i is the sample data, 1 ≤ i ≤ s, and s is a finite positive integer.
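The normalization of step (1) can be sketched as follows. Since the patent's formula (1) is reproduced only as an image, min-max scaling to [0, 1] is assumed here purely for illustration; the function name is hypothetical.

```python
import numpy as np

def normalize(x):
    """Min-max normalize a 1-D signal to [0, 1].

    The patent's normalization formula (1) appears only as an image, so
    min-max scaling is assumed here for illustration.
    """
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())
```

For example, `normalize([0, 5, 10])` yields `[0.0, 0.5, 1.0]`.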
(2) Analog noise-containing electroencephalogram signal sample
Noise at different signal-to-noise ratios is added to the electroencephalogram signal samples to simulate noise-containing electroencephalogram samples; the noise is baseline noise, Gaussian white noise or electro-oculogram noise. The noise-containing electroencephalogram samples are normalized according to formula (1), and these data serve as the input of the width-depth echo state network.
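The noise mixing of step (2) can be sketched as follows. Only the Gaussian-white-noise case is shown, because it can be synthesized directly; the recorded baseline and electro-oculogram noise the patent also uses would be loaded from data instead. The function name and scaling routine are illustrative, not taken from the patent.

```python
import numpy as np

def add_noise(signal, snr_db, seed=None):
    """Mix additive Gaussian white noise into a clean signal so that the
    result has the requested signal-to-noise ratio in dB."""
    rng = np.random.default_rng(seed)
    signal = np.asarray(signal, dtype=float)
    noise = rng.standard_normal(signal.shape)
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise so that 10*log10(p_signal / p_scaled_noise) == snr_db.
    scale = np.sqrt(p_signal / (p_noise * 10.0 ** (snr_db / 10.0)))
    return signal + scale * noise
```

By construction the achieved SNR matches the requested value exactly (up to floating-point error).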
(3) Partitioning a training set and a testing set of a network
70%-90% of the electroencephalogram signal samples and noise-containing electroencephalogram samples serve as the network training set and 10%-30% as the network test set; the test set and the training set do not overlap.
(4) Construction of network model
The width-depth echo state network consists of an input layer, a hidden layer and an output layer. The hidden layer consists of L × M reservoirs, where L and M are finite positive integers: it comprises L layers of reservoirs with M reservoirs per layer, the M reservoirs of the same layer connected in parallel and the reservoirs of different layers connected in series. The input layer has K input neurons, each reservoir of the hidden layer has N neurons, and the output layer has H output neurons; the input-layer state matrix is u(t), the hidden-layer state matrix is x^(lm)(t), and the output-layer state matrix is y(t), where K, N and H are finite positive integers and H equals K. The internal reservoir parameters are initialized: the sparsity SD IS 1%-5%, the spectral radius SR is 0.01-0.99, and the input scale IS is 0.01-0.99. The connection relations among the input layer, hidden layer and output layer are as follows:
The connection weight matrix between the input layer and the m-th reservoir of the first layer is W_in^(m), an N × K matrix, with m an integer and 1 ≤ m ≤ M; the internal connection weight matrix of the lm-th reservoir is W^(lm), with l an integer and 1 ≤ l ≤ L; the connection weight matrix between the (l−1)m-th reservoir and the lm-th reservoir is W^((l−1)m,lm); and the connection weight matrix between the hidden layer and the output layer is W_out. W_in^(m), W^(lm) and W^((l−1)m,lm) are parameters randomly initialized before the network is built and remain unchanged throughout the training of the width-depth echo state network.
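A minimal sketch of the reservoir initialization of step (4), using the patent's parameter names (SD sparsity, SR spectral radius, IS input scale). The uniform random distributions are assumptions, since the patent only says the matrices are randomly initialized and then held fixed.

```python
import numpy as np

def init_reservoir(N, K, SD=0.02, SR=0.48, IS=0.58, seed=None):
    """Randomly initialize the fixed weights of one reservoir."""
    rng = np.random.default_rng(seed)
    # Input weights W_in^(m): an N x K matrix, scaled to [-IS, IS].
    W_in = rng.uniform(-IS, IS, size=(N, K))
    # Internal weights W^(lm): keep only a fraction SD of the links,
    # then rescale so the largest eigenvalue magnitude equals SR.
    W = rng.uniform(-1.0, 1.0, size=(N, N))
    W[rng.random((N, N)) > SD] = 0.0
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    if radius > 0:
        W *= SR / radius
    return W_in, W
```

Rescaling to a spectral radius below 1 is the usual way of encouraging the echo-state property.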
(5) Training network model
From the training set, 100-500 samples are drawn to idle the model; the hidden-layer state of the model is initialized to 0, and the remaining data in the training set are used for denoising training to obtain the trained network model.
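The idling (washout) of step (5), i.e. driving the network with the first 100-500 samples and discarding the resulting states, can be sketched as follows; `update_fn` is a hypothetical stand-in for the actual reservoir update.

```python
import numpy as np

def run_with_washout(update_fn, inputs, x0, n_washout=200):
    """Drive the reservoir through the input sequence but discard the
    first n_washout states (the "idling" of step (5)).

    update_fn(x, u) -> next state; x0 is the zero-initialized state.
    """
    x = x0
    kept = []
    for t, u in enumerate(inputs):
        x = update_fn(x, u)
        if t >= n_washout:          # states before this are thrown away
            kept.append(x)
    return np.array(kept)
```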
(6) Verification test set
The electroencephalogram data of the test set are input into the trained width-depth echo state network to obtain the output (denoised) electroencephalogram signal.
In the network-model construction step (4), at time t the input-layer state matrix u(t) is as follows:
u(t) = [u_1(t), u_2(t), ..., u_K(t)]^T
The hidden-layer state matrix x^(lm)(t) is as follows:
x^(lm)(t) = [x_1^(lm)(t), x_2^(lm)(t), ..., x_N^(lm)(t)]^T
The output-layer state matrix y(t) is as follows:
y(t) = [y_1(t), y_2(t), ..., y_H(t)]^T
where t = 1, 2, ..., T, T being the length of the input sequence; u_k(t) is the state of the k-th input-layer neuron at time t, x_n^(lm)(t) is the state of the n-th hidden-layer neuron at time t, and y_h(t) is the state of the h-th output-layer neuron at time t.
In the network-model training step (5), the denoising training of the model with the remaining data in the training set comprises:
the remaining training-set data are input to update the reservoir states and obtain the reservoir state values; the joint state collection matrix of the input-layer neurons and the hidden layer is obtained and used as the input for computing the weight matrix W_out connecting the hidden layer and the output layer of the width-depth echo state network.
The reservoir state update comprises:
the first-layer reservoir state is updated as
x^(1m)(t+1) = (1 − γ^(1m)) ⊙ x^(1m)(t) + γ^(1m) ⊙ f^(1m)(W_in^(m) u(t+1) + W^(1m) x^(1m)(t))
where u(t+1) and x^(1m)(t+1) are the states of the current input layer and hidden layer respectively, and x^(1m)(t) is the previous hidden-layer state; f^(lm)(·) is the neuron activation function, which in the width-depth echo state network is a hyperbolic tangent; γ^(lm) is the reservoir leakage parameter matrix, each element of which takes a value of 0.0001-0.99.
The l-th layer (2 ≤ l ≤ L) reservoir state is updated as
x^(lm)(t+1) = (1 − γ^(lm)) ⊙ x^(lm)(t) + γ^(lm) ⊙ f^(lm)(W^((l−1)m,lm) x^((l−1)m)(t+1) + W^(lm) x^(lm)(t))
where x^(lm)(t+1) is the current hidden-layer state and x^(lm)(t) is the previous hidden-layer state.
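The layer updates rely on a leaky-integrator reservoir step. Since the patent's update equations appear only as images, the standard leaky echo-state form is assumed here as a sketch:

```python
import numpy as np

def update_state(x, u_next, W_in, W, leak=0.5):
    """One leaky-integrator reservoir update:

        x(t+1) = (1 - a) * x(t) + a * tanh(W_in @ u(t+1) + W @ x(t))

    This standard leaky echo-state form is assumed to correspond to the
    patent's image-only equation; `leak` plays the role of the leakage
    parameter (0.0001-0.99) and tanh is the activation f(.). For layers
    l >= 2 the same form applies with the previous layer's state in
    place of the external input u.
    """
    return (1.0 - leak) * x + leak * np.tanh(W_in @ u_next + W @ x)
```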
The joint state collection matrix is given by:
z^(lm)(t) = [u(t); x^(lm)(t)]
The hidden-layer-to-output-layer connection weight matrix W_out is obtained by ridge regression:
W_out = (ZZ^T + λI)^(−1) Z^T Y*
where λ is the regularization coefficient, taking a value between 0.00000001 (10^-8) and 1, I is the identity matrix, and Z is the state collection matrix:
Z = (z^(lm)(1), z^(lm)(2), ..., z^(lm)(T))^T
where Z collects the joint states z^(lm)(t) at all times t = 1, 2, ..., T.
Y* is the expected network output matrix, determined as follows:
Y* = (y*(1), y*(2), ..., y*(T))^T
where y*(t) is the clean electroencephalogram signal at time t and Y* collects y*(t) at all times t = 1, 2, ..., T.
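The ridge-regression readout can be sketched as follows. The patent prints W_out = (ZZ^T + λI)^(-1) Z^T Y*; for a tall T × (K+N) state matrix the conventional orientation below is assumed, since the printed form's orientation is ambiguous.

```python
import numpy as np

def ridge_readout(Z, Y_star, lam=1e-6):
    """Solve for the readout weights by ridge regression.

    Z stacks the joint states z(t) = [u(t); x(t)] as rows (T rows),
    Y_star stacks the clean target signals y*(t) as rows. Computes
        W_out = (Z^T Z + lam * I)^(-1) Z^T Y*
    via a linear solve rather than an explicit inverse.
    """
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y_star)
```

Solving the normal equations directly is numerically preferable to forming the matrix inverse.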
In the test-set verification step (6), inputting the electroencephalogram data of the test set into the trained width-depth echo state network comprises:
the output-layer state matrix y(t) is determined as follows:
y(t) = g^(lm)(W_out × z^(lm)(t))
where g^(lm)(·) is the activation function of the output layer, a sigmoid or hyperbolic tangent function.
For all times t = 1, 2, ..., T:
Y = g^(lm)(W_out × Z)
where Y, the electroencephalogram signal output by the network, collects the output-layer states y(t) at all times.
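Applying the trained readout in step (6) can be sketched as follows; the (K+N) × H orientation of W_out is an assumption matching a row-wise state matrix Z.

```python
import numpy as np

def readout(Z, W_out, activation=np.tanh):
    """Apply the trained readout to the joint-state matrix Z to obtain
    the denoised output Y = g(Z @ W_out), with g the output activation
    (hyperbolic tangent in embodiment 1; a sigmoid is also allowed)."""
    return activation(Z @ W_out)
```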
In step (2) of simulating the noise-containing electroencephalogram samples, the noise is added at signal-to-noise ratios from −5 dB to 5 dB.
Compared with the prior art, the invention has the following advantages:
Because the invention uses an echo state network, the learning process computes only the output weight W_out, so there are few training parameters and the method is easy to implement; adding multiple reservoirs raises the complexity of the linear combination and improves the denoising performance for electroencephalogram signals. The width-and-depth topology increases the feature-extraction capacity of the reservoirs, allows richer features to be extracted from the data, retains as much useful information as possible during feature extraction, captures the multi-scale dynamics of the time-series data and extracts more complex features. The invention has the advantages of high denoising performance, few training parameters, ease of implementation and preservation of the nonlinear characteristics of the original electroencephalogram signal, and can be used in signal preprocessing and in the technical field of signal denoising.
Drawings
Fig. 1 is a flow chart of embodiment 1 of the present invention.
Fig. 2 is a schematic diagram of the breadth-depth echo state network of fig. 1.
Fig. 3 is a graph of the denoising result of the wavelet-transform denoising method for electro-oculogram noise with a noise level of 0 dB.
Fig. 4 is a graph of the denoising result of the wiener filtering denoising method for the electro-oculogram noise with the noise level of 0 dB.
Fig. 5 is a graph of the results of an echo state network denoising method for denoising electro-oculogram noise having a noise level of 0 dB.
Fig. 6 is a graph of the denoising result of the method of embodiment 1 of the present invention for the electro-oculogram noise with the noise level of 0 dB.
Fig. 7 is a graph comparing the power spectral density of electro-oculogram noise with a noise level of 0dB for the method of example 1 and the wavelet transform denoising method.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples, but the present invention is not limited to the examples described below.
Example 1
Taking as an example 10 electroencephalogram samples with a sampling frequency of 256 Hz, a sampling time of 30 minutes, 460800 sampling points and 23 electrode channels per sample, the signal denoising method based on the width-depth echo state network of this embodiment comprises the following steps (see figs. 1 and 2):
(1) Selecting an electroencephalogram signal sample
s electroencephalogram signal samples, s being 10 in this embodiment, are selected from the Physionet database and used as the output of the width-depth echo state network;
normalizing each electroencephalogram signal sample according to the following formula:
where x_i is the sample data and 1 ≤ i ≤ 10.
(2) Analog noise-containing electroencephalogram signal sample
Noise with a signal-to-noise ratio of 0 dB is added to the electroencephalogram signal samples to simulate noise-containing electroencephalogram samples; the noise in this embodiment is electro-oculogram noise. The noise-containing samples are normalized according to formula (1), and these data serve as the input of the width-depth echo state network.
(3) Partitioning a training set and a testing set of a network
By the hold-out method, 80% of the electroencephalogram signal samples and noise-containing electroencephalogram samples serve as the network training set and 20% as the network test set; the test set and the training set do not overlap.
(4) Construction of network model
The width-depth echo state network comprises an input layer, a hidden layer and an output layer. The hidden layer comprises L × M reservoirs; in this embodiment L is 2 and M is 4, i.e. the hidden layer has L layers of reservoirs with M reservoirs per layer, the M reservoirs of the same layer connected in parallel and the reservoirs of different layers connected in series. The input layer has K input neurons (K is 23 in this embodiment), each reservoir of the hidden layer has N neurons (N is 100), and the output layer has H output neurons (H is 23); the input-layer state matrix is u(t), the hidden-layer state matrix is x^(lm)(t), and the output-layer state matrix is y(t). The internal reservoir parameters are initialized: the sparsity SD is 1%-5%, the spectral radius SR is 0.01-0.99, and the input scale IS is 0.01-0.99; each reservoir in this embodiment uses the same parameters, with an SD of 2%, an SR of 0.48 and an IS of 0.58. The connection relations among the input layer, hidden layer and output layer are as follows:
The connection weight matrix between the input layer and the m-th reservoir of the first layer is W_in^(m), an N × K matrix, with m an integer and 1 ≤ m ≤ 4 in this embodiment; the internal connection weight matrix of the lm-th reservoir is W^(lm), with l being 1 or 2 and m an integer from 1 to 4 in this embodiment; the connection weight matrix between the (l−1)m-th reservoir and the lm-th reservoir is W^((l−1)m,lm); and the connection weight matrix between the hidden layer and the output layer is W_out. W_in^(m), W^(lm) and W^((l−1)m,lm) are parameters randomly initialized before the network is built and remain unchanged throughout the training of the width-depth echo state network.
In this embodiment, at time t, the input-layer state matrix u(t) is as follows:
u(t) = [u_1(t), u_2(t), ..., u_K(t)]^T
The hidden-layer state matrix x^(lm)(t) is as follows:
x^(lm)(t) = [x_1^(lm)(t), x_2^(lm)(t), ..., x_N^(lm)(t)]^T
The output-layer state matrix y(t) is as follows:
y(t) = [y_1(t), y_2(t), ..., y_H(t)]^T
where t = 1, 2, ..., T, T being the number of input sampling points (30 minutes of samples in this embodiment); u_k(t) is the state of the k-th input-layer neuron at time t, x_n^(lm)(t) is the state of the n-th hidden-layer neuron at time t, and y_h(t) is the state of the h-th output-layer neuron at time t.
Because the invention adopts a hidden layer containing reservoirs connected both in series and in parallel, the depth and width of the network are increased and the denoising effect is improved. In the hidden layer, the parallel-and-stacked multi-reservoir topology strengthens the learning capability of the width-depth echo state network; the noise-containing electroencephalogram signals are processed into high-quality electroencephalogram signals, more nonlinear characteristics are retained, the signal-to-noise ratio and root-mean-square error of the electroencephalogram signals are markedly improved, and the efficiency, quality and robustness of electroencephalogram denoising are improved.
The invention uses an echo state network and computes only the output weight W_out during learning, so there are few training parameters and the method is easy to implement; the multiple added reservoirs raise the complexity of the linear combination and improve the denoising performance for electroencephalogram signals.
(5) Training network model
From the training set, 200 samples are drawn to idle the model; the hidden-layer state of the model is initialized to 0, and the remaining data in the training set are used for denoising training to obtain the trained network model.
The denoising training with the remaining training-set data in this embodiment comprises:
the remaining training-set data are input to update the reservoir states and obtain the reservoir state values; the joint state collection matrix of the input-layer neurons and the hidden layer is obtained and used as the input for computing the weight matrix W_out connecting the hidden layer and the output layer of the width-depth echo state network;
the reservoir state update comprises:
the first-layer reservoir state is updated as
x^(1m)(t+1) = (1 − γ^(1m)) ⊙ x^(1m)(t) + γ^(1m) ⊙ f^(1m)(W_in^(m) u(t+1) + W^(1m) x^(1m)(t))
where u(t+1) and x^(1m)(t+1) are the states of the current input layer and hidden layer respectively, and x^(1m)(t) is the previous hidden-layer state; f^(lm)(·) is the neuron activation function, which in the width-depth echo state network is a hyperbolic tangent; γ^(lm) is the reservoir leakage parameter matrix.
The l-th layer (l being 2 in this embodiment) reservoir state is updated as
x^(lm)(t+1) = (1 − γ^(lm)) ⊙ x^(lm)(t) + γ^(lm) ⊙ f^(lm)(W^((l−1)m,lm) x^((l−1)m)(t+1) + W^(lm) x^(lm)(t))
where x^(lm)(t+1) is the current hidden-layer state and x^(lm)(t) is the previous hidden-layer state.
The joint state collection matrix of this embodiment is as follows:
z^(lm)(t) = [u(t); x^(lm)(t)]
The hidden-layer-to-output-layer connection weight matrix W_out is obtained by ridge regression:
W_out = (ZZ^T + λI)^(−1) Z^T Y*
where λ is the regularization coefficient, taking the value 0.000001 (10^-6) in this embodiment, I is the identity matrix, and Z is the state collection matrix:
Z = (z^(lm)(1), z^(lm)(2), ..., z^(lm)(T))^T
where Z collects the joint states z^(lm)(t) at all times t = 1, 2, ..., T.
Y* is the expected network output matrix, determined as follows:
Y* = (y*(1), y*(2), ..., y*(T))^T
where y*(t) is the clean electroencephalogram signal at time t and Y* collects y*(t) at all times t = 1, 2, ..., T.
In the training stage the invention updates only the output weight matrix W_out, keeping the training of the echo state network model lightweight and improving the efficiency and quality of electroencephalogram denoising in signal preprocessing and signal denoising.
(6) Verification test set
And inputting the electroencephalogram data of the test set into a trained depth-width echo state network to obtain an output electroencephalogram signal.
Applying the trained width-depth echo state network of this embodiment comprises:
the output-layer state matrix y(t) is determined as follows:
y(t) = g^(lm)(W_out × z^(lm)(t))
where g^(lm)(·) is the activation function of the output layer, which in this embodiment is the hyperbolic tangent function;
for all times t = 1, 2, ..., T:
Y = g^(lm)(W_out × Z)
where Y, the electroencephalogram signal output by the network, collects the output-layer states y(t) at all times. The output electroencephalogram signal is thus obtained.
Example 2
Taking as an example 10 electroencephalogram samples with a sampling frequency of 256 Hz, a sampling time of 30 minutes, 460800 sampling points and 23 electrode channels per sample, the signal denoising method based on the width-depth echo state network of this embodiment comprises the following steps:
(1) Selecting an electroencephalogram signal sample
This step is the same as in example 1.
(2) Analog noise-containing electroencephalogram signal sample
Noise with a signal-to-noise ratio of −5 dB is added to the electroencephalogram signal samples to simulate noise-containing electroencephalogram samples. The rest of this step is the same as in Example 1.
(3) Partitioning a training set and a testing set of a network
By the hold-out method, 70% of the electroencephalogram samples and noise-containing electroencephalogram samples serve as the network training set and 30% as the network test set; the test set and the training set do not overlap.
(4) Construction of network model
The width-depth echo state network comprises an input layer, a hidden layer and an output layer. The hidden layer comprises L × M reservoirs; in this embodiment L is 2 and M is 4, i.e. the hidden layer has L layers of reservoirs with M reservoirs per layer, the M reservoirs of the same layer connected in parallel and the reservoirs of different layers connected in series. The input layer has K input neurons (K is 23 in this embodiment), each reservoir of the hidden layer has N neurons (N is 100), and the output layer has H output neurons (H is 23); the input-layer state matrix is u(t), the hidden-layer state matrix is x^(lm)(t), and the output-layer state matrix is y(t). The internal reservoir parameters are initialized: the sparsity SD is 1%-5%, the spectral radius SR is 0.01-0.99, and the input scale IS is 0.01-0.99; each reservoir in this embodiment uses the same parameters, with an SD of 1%, an SR of 0.01 and an IS of 0.01. The connection relations among the input layer, hidden layer and output layer are as follows:
The connection weight matrix between the input layer and the m-th reservoir of the first layer is W_in^(m), an N × K matrix, with m an integer and 1 ≤ m ≤ 4 in this embodiment; the internal connection weight matrix of the lm-th reservoir is W^(lm), with l being 1 or 2 and m an integer from 1 to 4 in this embodiment; the connection weight matrix between the (l−1)m-th reservoir and the lm-th reservoir is W^((l−1)m,lm); and the connection weight matrix between the hidden layer and the output layer is W_out. W_in^(m), W^(lm) and W^((l−1)m,lm) are parameters randomly initialized before the network is built and remain unchanged throughout the training of the width-depth echo state network.
The other steps of this step are the same as those of example 1.
(5) Training network model
From the training set, 100 samples are drawn to idle the model; the hidden-layer state of the model is initialized to 0, and the remaining data in the training set are used for denoising training to obtain the trained network model.
The denoising training with the remaining training-set data in this embodiment comprises:
the remaining training-set data are input to update the reservoir states and obtain the reservoir state values; the joint state collection matrix of the input-layer neurons and the hidden layer is obtained and used as the input for computing the weight matrix W_out connecting the hidden layer and the output layer of the width-depth echo state network;
the reservoir state update comprises:
the first-layer reservoir state is updated as
x^(1m)(t+1) = (1 − γ^(1m)) ⊙ x^(1m)(t) + γ^(1m) ⊙ f^(1m)(W_in^(m) u(t+1) + W^(1m) x^(1m)(t))
where u(t+1) and x^(1m)(t+1) are the states of the current input layer and hidden layer respectively, and x^(1m)(t) is the previous hidden-layer state; f^(lm)(·) is the neuron activation function, which in the width-depth echo state network is a hyperbolic tangent; γ^(lm) is the reservoir leakage parameter matrix.
The l-th layer (l being 2 in this embodiment) reservoir state is updated as
x^(lm)(t+1) = (1 − γ^(lm)) ⊙ x^(lm)(t) + γ^(lm) ⊙ f^(lm)(W^((l−1)m,lm) x^((l−1)m)(t+1) + W^(lm) x^(lm)(t))
where x^(lm)(t+1) is the current hidden-layer state and x^(lm)(t) is the previous hidden-layer state.
The joint state collection matrix of this embodiment is as follows:
z(lm)(t)=[u(t);x(lm)(t)]
The hidden-layer-to-output-layer connection weight matrix W out is obtained by ridge regression:
W out=(ZᵀZ+λI)⁻¹ZᵀY*
where λ is the regularization coefficient, taking the value 0.00000001 in this example, I is the identity matrix, and Z is the state collection matrix given by:
Z=(z (lm)(1),z (lm)(2),…,z (lm)(T))ᵀ
where Z is the data-set matrix of the joint state collection matrix z (lm) (t) over all times, z (lm) (t) is the joint state collection matrix at time t, and t is 1,2, …, T.
Y* is the expected output matrix of the network, determined as follows:
Y*=(y*(1),y*(2),…,y*(T))ᵀ
where y* (t) is the electroencephalogram signal at time t, Y* is the data-set matrix of y* (t) over all times, and t is 1,2, …, T.
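The ridge-regression readout above can be sketched as follows. Illustrative only: `Z` is stacked as a T×(K+N) matrix whose rows are the joint states z(t)ᵀ, and `Y_star` as a T×H matrix of targets, matching the definitions above.

```python
import numpy as np

def ridge_readout(Z, Y_star, lam=1e-8):
    # W = (Z^T Z + lam * I)^(-1) Z^T Y*, the ridge solution of Z @ W ≈ Y*;
    # rows of Z are joint states z(t)^T, rows of Y* are targets y*(t)^T.
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ Y_star)
```

Solving the regularized normal equations with `np.linalg.solve` avoids forming an explicit matrix inverse, which is both faster and numerically safer.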
(6) Verification test set
This step is the same as in Example 1, and yields the output electroencephalogram signal.
Example 3
Taking 10 electroencephalogram signal samples as an example, each with a sampling frequency of 256 Hz, a sampling duration of 30 minutes (460800 sampling points per sample) and 23 electrode channels, the signal denoising method based on the width-depth echo state network in this embodiment comprises the following steps:
(1) Selecting an electroencephalogram signal sample
This step is the same as in example 1.
(2) Analog noise-containing electroencephalogram signal sample
Noise with a signal-to-noise ratio of 5 dB is added to the electroencephalogram signal samples to simulate noisy electroencephalogram signal samples. The other operations in this step are the same as in Example 1.
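The patent does not spell out how the noise is mixed in; one common way to add noise at a prescribed signal-to-noise ratio is to scale the noise power before adding it, sketched below (illustrative; `add_noise_at_snr` is a hypothetical helper name, not from the patent).

```python
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    # Scale the noise so that 10*log10(P_signal / P_noise) equals snr_db,
    # then return the noisy mixture clean + k * noise.
    p_signal = np.mean(clean ** 2)
    p_noise = np.mean(noise ** 2)
    k = np.sqrt(p_signal / (p_noise * 10.0 ** (snr_db / 10.0)))
    return clean + k * noise
```

The same helper covers all three noise types in the claims (baseline noise, Gaussian white noise, electro-oculogram noise): only the `noise` array changes.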
(3) Partitioning a training set and a testing set of a network
Using the hold-out method, 90% of the electroencephalogram signal samples and the noisy electroencephalogram samples are taken as the network training set and 10% as the network test set, respectively; the test set and the training set do not overlap.
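The hold-out split can be sketched as follows (illustrative; `holdout_split` is a hypothetical helper — the split is over whole samples, so the two sets share nothing).

```python
import numpy as np

def holdout_split(samples, train_frac=0.9, seed=0):
    # Shuffle whole samples, then cut once: train and test share no samples.
    idx = np.random.default_rng(seed).permutation(len(samples))
    n_train = int(round(train_frac * len(samples)))
    train = [samples[i] for i in idx[:n_train]]
    test = [samples[i] for i in idx[n_train:]]
    return train, test
```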
(4) Construction of network model
The width-depth echo state network comprises an input layer, a hidden layer and an output layer. The hidden layer comprises L×M reservoirs (L is 2 and M is 4 in this embodiment), arranged in L layers of M reservoirs each; the M reservoirs in the same layer are connected in parallel, and reservoirs in different layers are connected in series. The input layer has K input neurons (K is 23 in this embodiment), each reservoir in the hidden layer has N neurons (N is 100 in this embodiment), and the output layer has H output neurons (H is 23 in this embodiment); the state matrix of the input layer is u (t), the state matrix of the hidden layer is x (lm) (t), and the state matrix of the output layer is y (t). The internal parameters of the reservoirs are initialized, comprising the sparsity SD, the spectral radius SR and the input scale IS, with the sparsity SD being 1%-5%, the spectral radius SR being 0.01-0.99 and the input scale IS being 0.01-0.99; every reservoir in this embodiment uses the same parameters, with a sparsity SD of 5%, a spectral radius SR of 0.99 and an input scale IS of 0.99. The connection relations among the input layer, the hidden layer and the output layer are as follows:
The connection weight matrix between the input layer and the m-th reservoir of the first layer is W (m) in, W (m) in ∈ R N×K, where m in this embodiment is an integer with 1≤m≤4; the internal connection weight matrix of the lm-th reservoir is W (lm), W (lm) ∈ R N×N, where in this embodiment l is 1 or 2 and m is an integer from 1 to 4; the connection weight matrix between the (l−1)m-th reservoir and the lm-th reservoir is W ((l−1)m), W ((l−1)m) ∈ R N×N; and the connection weight matrix between the hidden layer and the output layer is W out, W out ∈ R H×(K+N). W (m) in, W (lm) and W ((l−1)m) are parameters that are randomly initialized before the network is built and remain unchanged throughout the training of the width-depth echo state network.
The other steps of this step are the same as those of example 1.
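The random reservoir initialization described in this step (sparsity SD, spectral radius SR, input scale IS) can be sketched as follows. An illustrative sketch under the stated parameter ranges, with `init_reservoir` a hypothetical helper name, not the patented implementation.

```python
import numpy as np

def init_reservoir(N, K, SD=0.05, SR=0.99, IS=0.99, seed=0):
    # Sparse internal matrix: keep roughly a fraction SD of the connections,
    # then rescale so the spectral radius (largest |eigenvalue|) equals SR.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(N, N))
    W = W * (rng.random((N, N)) < SD)
    radius = np.max(np.abs(np.linalg.eigvals(W)))
    if radius > 0:
        W = W * (SR / radius)
    # Input weights scaled by the input scale IS.
    W_in = IS * rng.uniform(-1.0, 1.0, size=(N, K))
    return W, W_in
```

Keeping the spectral radius below 1 is the usual heuristic for the echo state property, which is why the claims bound SR by 0.99.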
(5) Training network model
Extract 500 samples from the training set for an idle (washout) run of the model, initializing the hidden-layer state of the model to 0; the remaining data in the training set are then used for noise-reduction training of the model to obtain the trained network model.
The noise-reduction training on the remaining training-set data in this embodiment proceeds as follows:
The remaining training-set data are fed in as the input of the reservoir state update to obtain the reservoir state values; the joint state collection matrix of the input-layer and hidden-layer neurons is then formed and used as the input for computing the weight matrix W out that connects the hidden layer of the width-depth echo state network to the output layer.
The reservoir state update proceeds as follows:
The first-layer reservoir state is updated as:
x (1m)(t+1)=(1−a (1m))x (1m)(t)+a (1m)f (1m)(W (m) in u(t+1)+W (1m)x (1m)(t))
where u (t+1) and x (1m) (t+1) are the current states of the input layer and the hidden layer, respectively, and x (1m) (t) is the previous hidden-layer state; f (lm) (·) is the neuron activation function, taken as the hyperbolic tangent in the width-depth echo state network; a (lm) is the reservoir leakage parameter matrix, each element of which takes a value in 0.0001-0.99 in this embodiment.
The l-th layer (l≥2) reservoir state is updated as:
x (lm)(t+1)=(1−a (lm))x (lm)(t)+a (lm)f (lm)(W ((l−1)m)x ((l−1)m)(t+1)+W (lm)x (lm)(t))
where l is 2 in this embodiment, x (lm) (t+1) is the current hidden-layer state, and x (lm) (t) is the previous hidden-layer state.
The joint state collection matrix of this embodiment is as follows:
z(lm)(t)=[u(t);x(lm)(t)]
The hidden-layer-to-output-layer connection weight matrix W out is obtained by ridge regression:
W out=(ZᵀZ+λI)⁻¹ZᵀY*
where λ is the regularization coefficient, taking the value 1 in this example, I is the identity matrix, and Z is the state collection matrix given by:
Z=(z (lm)(1),z (lm)(2),…,z (lm)(T))ᵀ
where Z is the data-set matrix of the joint state collection matrix z (lm) (t) over all times, z (lm) (t) is the joint state collection matrix at time t, and t is 1,2, …, T.
Y* is the expected output matrix of the network, determined as follows:
Y*=(y*(1),y*(2),…,y*(T))ᵀ
where y* (t) is the electroencephalogram signal at time t, Y* is the data-set matrix of y* (t) over all times, and t is 1,2, …, T.
(6) Verification test set
This step is the same as in Example 1, and yields the output electroencephalogram signal.
Example 4
In step (2) of simulating the noisy electroencephalogram signal samples in Examples 1 to 3 above, baseline noise is used as the noise; the other operations in this step are the same as in the corresponding examples.
The other steps are the same as in the corresponding examples, yielding the output electroencephalogram signal.
Example 5
In step (2) of simulating the noisy electroencephalogram signal samples in Examples 1 to 3 above, Gaussian white noise is used as the noise; the other operations in this step are the same as in the corresponding examples.
The other steps are the same as in the corresponding examples, yielding the output electroencephalogram signal.
Example 6
Steps (1) to (5) of this example are the same as in Examples 1 to 5 above.
(6) Verification test set
The electroencephalogram data of the test set are input into the trained width-depth echo state network to obtain the output electroencephalogram signal.
In this embodiment, applying the trained width-depth echo state network comprises:
The output layer state matrix y (t) is determined as follows:
y(t)=g (lm)(W out×z (lm)(t))
where g (lm) (·) is the activation function of the output layer; in this embodiment it is a sigmoid function;
for all times t=1,2, …, T, the batched output is Y=g (lm)(W out×Z), where Y is the electroencephalogram signal output by the network, i.e. the data-set matrix of the output-layer state matrix y (t) over all times.
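The output computation y(t)=g (lm)(W out×z (lm)(t)) can be sketched as follows. Illustrative only; it assumes a sigmoid g and a T×(K+N) state matrix `Z` whose rows are the joint states z(t)ᵀ, so all time steps are computed at once.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def network_output(W_out, Z):
    # y(t) = g(W_out @ z(t)) for every time step at once: Z stacks the
    # joint states z(t)^T as rows, so the T x H output matrix is g(Z @ W_out^T).
    return sigmoid(Z @ W_out.T)
```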
Other steps in this step are the same as those of the corresponding embodiment.
The other steps are the same as the corresponding embodiments. Obtaining the output brain electrical signal.
To verify the beneficial effects of the invention, the inventors conducted comparative experiments and simulation experiments using the method of Example 1 of the invention. The experimental conditions were as follows:
1. Comparative experiments
The inventors compared the method of Example 1 of the invention against a wavelet transform method, a Wiener filtering method and a standard echo state network method; the experimental results are shown in Figures 3 to 6.
In Figures 3 to 6, the abscissa is the time of the electroencephalogram signal in s and the ordinate is the signal amplitude in mV; the dotted line is the electroencephalogram curve contaminated by electro-oculogram noise, and the solid lines are the electroencephalogram curves processed by the wavelet transform method, the Wiener filtering method, the echo state network method and the method of Example 1, respectively. As can be seen from Figures 3 to 6, the method of Example 1 removes the noise in the electroencephalogram signal more thoroughly.
Figure 7 compares the power spectral density after noise removal; the abscissa is the frequency in Hz and the ordinate is the signal power spectral density in dB. The four curves, from top to bottom, are the power spectral density curves of the electroencephalogram signals denoised by the wavelet transform method, the Wiener filtering method, the echo state network method and the method of Example 1, respectively. As can be seen from Figure 7, the power spectral density of the electroencephalogram signal denoised by the method of this embodiment is the lowest, while the nonlinear characteristics of the electroencephalogram signal are preserved.
To verify the beneficial effects of the invention, the inventors conducted simulation experiments using the method of the embodiments of the invention. The experimental conditions were as follows:
2. Simulation experiment
(1) Simulation conditions
The hardware conditions were: four Nvidia 1080 Ti graphics cards and 128 GB of memory.
(2) Simulation content and results
The experiments were carried out under the above simulation conditions using the method of Example 1, with the results shown in Figures 3 to 7. Compared with the prior art, the invention has the following advantages:
The electroencephalogram denoising method of the invention builds on echo state network learning and strengthens it by introducing a deep-learning framework: the topology of multiple parallel and stacked reservoirs yields distinguishable nonlinear features, a mapping from noisy electroencephalogram signals to clean electroencephalogram signals is established, and real-time denoising is achieved. The width-depth echo state network model only needs to update the output weight matrix W out in the training stage, which keeps the training of the echo state network model lightweight and improves the efficiency and quality of electroencephalogram signal denoising in both signal preprocessing and signal denoising.
The hidden layer of the constructed width-depth echo state network model comprises reservoirs connected both in series and in parallel, which increases the depth and width of the network and improves the noise-reduction effect. In the hidden layer, the topology of parallel and stacked reservoirs strengthens the learning capability of the width-depth echo state network; the noisy electroencephalogram signals are processed through the learned nonlinear mapping from noisy electroencephalogram signals to clean electroencephalogram signals, yielding high-quality electroencephalogram signals that retain more nonlinear characteristics, markedly improving the signal-to-noise ratio and root-mean-square error of the electroencephalogram signals and improving the efficiency, quality and robustness of electroencephalogram signal denoising.

Claims (3)

1. An electroencephalogram signal denoising method based on a width depth echo state network is characterized by comprising the following steps of:
(1) Selecting an electroencephalogram signal sample
S electroencephalogram signal samples are selected from the Physionet database and used as the output of the width-depth echo state network;
each electroencephalogram signal sample is normalized according to the following formula:
where x i is the sample data, 1≤i≤s, and s is a finite positive integer;
(2) Analog noise-containing electroencephalogram signal sample
Noise with different signal-to-noise ratios is added to the electroencephalogram signal samples to simulate noisy electroencephalogram signal samples; the noise is baseline noise, Gaussian white noise or electro-oculogram noise; the noisy electroencephalogram samples are normalized according to formula (1), and these data are used as the input of the width-depth echo state network;
(3) Partitioning a training set and a testing set of a network
Using the hold-out method, 70%-90% of the electroencephalogram signal samples and the noisy electroencephalogram samples are taken as the network training set and 10%-30% as the network test set, respectively; the test set and the training set do not overlap;
(4) Construction of network model
The width-depth echo state network consists of an input layer, a hidden layer and an output layer; the hidden layer consists of L×M reservoirs, where L and M are finite positive integers, arranged in L layers of M reservoirs each; the M reservoirs in the same layer are connected in parallel, and reservoirs in different layers are connected in series; the input layer has K input neurons, each reservoir of the hidden layer has N neurons, and the output layer has H output neurons; the state matrix of the input layer is u (t), the state matrix of the hidden layer is x (lm) (t), and the state matrix of the output layer is y (t), where K, N and H are finite positive integers and H is equal to K; the internal parameters of the reservoirs are initialized, comprising the sparsity SD, the spectral radius SR and the input scale IS, the sparsity SD being 1%-5%, the spectral radius SR being 0.01-0.99 and the input scale IS being 0.01-0.99; the connection relations among the input layer, the hidden layer and the output layer are as follows:
The connection weight matrix between the input layer and the m-th reservoir of the first layer is W (m) in, W (m) in ∈ R N×K, where R N×K is the set of N×K matrices and m is an integer with 1≤m≤M; the internal connection weight matrix of the lm-th reservoir is W (lm), W (lm) ∈ R N×N, where l is an integer with 1≤l≤L; the connection weight matrix between the (l−1)m-th reservoir and the lm-th reservoir is W ((l−1)m), W ((l−1)m) ∈ R N×N; the connection weight matrix between the hidden layer and the output layer is W out, W out ∈ R H×(K+N); W (m) in, W (lm) and W ((l−1)m) are parameters that are randomly initialized before the network is built and remain unchanged throughout the training of the width-depth echo state network;
At time t, the input layer state matrix u (t) is as follows:
u(t)=[u1(t),u2(t),...,uK(t)]T
The hidden layer state matrix x (lm) (t) is as follows:
x (lm)(t)=[x (lm) 1(t),x (lm) 2(t),...,x (lm) N(t)]ᵀ
the output layer state matrix y (t) is as follows:
y(t)=[y1(t),y2(t),...,yH(t)]T
where t is 1,2, …, T 0, T 0 is the length of the input sequence, u K (t) is the state of the K-th input-layer neuron at time t, x (lm) N (t) is the state of the N-th hidden-layer neuron at time t, and y H (t) is the state of the H-th output-layer neuron at time t;
(5) Training network model
100-500 samples are extracted from the training set for an idle (washout) run of the model, the hidden-layer state of the model is initialized to 0, and the remaining data in the training set are used for noise-reduction training of the model to obtain the trained network model;
the noise-reduction training of the model on the remaining training-set data comprises:
the remaining training-set data are fed in as the input of the reservoir state update to obtain the reservoir state values; the joint state collection matrix of the input-layer and hidden-layer neurons is then formed and used as the input for computing the weight matrix W out connecting the hidden layer of the width-depth echo state network to the output layer;
the reservoir state update comprises:
the first-layer reservoir state is updated as:
x (1m)(t+1)=(1−a (1m))x (1m)(t)+a (1m)f (1m)(W (m) in u(t+1)+W (1m)x (1m)(t))
where u (t+1) and x (1m) (t+1) are the current states of the input layer and the hidden layer, respectively, and x (1m) (t) is the previous hidden-layer state; f (lm) (·) is the neuron activation function, a hyperbolic tangent function in the width-depth echo state network; a (lm) is the reservoir leakage parameter matrix, each element of which takes a value in 0.0001-0.99;
the l-th layer reservoir state is updated as:
x (lm)(t+1)=(1−a (lm))x (lm)(t)+a (lm)f (lm)(W ((l−1)m)x ((l−1)m)(t+1)+W (lm)x (lm)(t))
where 2≤l≤L, x (lm) (t+1) is the current hidden-layer state, and x (lm) (t) is the previous hidden-layer state;
the joint state collection matrix is of the formula:
z(lm)(t)=[u(t);x(lm)(t)]
the hidden-layer-to-output-layer connection weight matrix W out is obtained by ridge regression:
W out=(ZᵀZ+λI)⁻¹ZᵀY*
where λ is the regularization coefficient, taking a value of 0.00000001-1, I is the identity matrix, and Z is the state collection matrix given by:
Z=(z (lm)(1),z (lm)(2),…,z (lm)(T 0))ᵀ
where Z is the data-set matrix of the joint state collection matrix z (lm) (t) over all times, z (lm) (t) is the joint state collection matrix at time t, and t is 1,2, …, T 0;
Y* is the expected output matrix of the network, determined as follows:
Y*=(y*(1),y*(2),…,y*(T 0))ᵀ
where y* (t) is the electroencephalogram signal at time t, Y* is the data-set matrix of y* (t) over all times, and t is 1,2, …, T 0;
(6) Verification test set
The electroencephalogram data of the test set are input into the trained width-depth echo state network to obtain the output electroencephalogram signal.
2. The electroencephalogram signal denoising method based on a width-depth echo state network according to claim 1, wherein in step (6) of verifying the test set, inputting the electroencephalogram data of the test set into the trained width-depth echo state network comprises:
The output layer state matrix y (t) is determined as follows:
y(t)=g (lm)(W out×z (lm)(t))
where g (lm) (·) is the activation function of the output layer, a sigmoid function or a hyperbolic tangent function;
for all times t=1,2, …, T 0:
Y=g (lm)(W out×Z)
where Y is the electroencephalogram signal output by the network, i.e. the data-set matrix of the output-layer state matrix y (t) over all times.
3. The electroencephalogram signal denoising method based on a width-depth echo state network according to claim 1, wherein in step (2) of simulating the noisy electroencephalogram signal samples, the signal-to-noise ratio of the added noise is −5 dB.
CN202010695317.7A 2020-07-19 2020-07-19 Electroencephalogram signal denoising method based on width depth echo state network Active CN111860306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010695317.7A CN111860306B (en) 2020-07-19 2020-07-19 Electroencephalogram signal denoising method based on width depth echo state network

Publications (2)

Publication Number Publication Date
CN111860306A CN111860306A (en) 2020-10-30
CN111860306B (en) 2024-06-14





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant