CN113343869A - Electroencephalogram signal automatic classification and identification method based on NTFT and CNN - Google Patents

Electroencephalogram signal automatic classification and identification method based on NTFT and CNN

Info

Publication number
CN113343869A
Authority
CN
China
Prior art keywords
electroencephalogram
training
neural network
convolutional neural
network model
Prior art date
Legal status
Pending
Application number
CN202110672321.6A
Other languages
Chinese (zh)
Inventor
姚彦吉
柳林涛
王国成
Current Assignee
Institute of Precision Measurement Science and Technology Innovation of CAS
Original Assignee
Institute of Precision Measurement Science and Technology Innovation of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Precision Measurement Science and Technology Innovation of CAS filed Critical Institute of Precision Measurement Science and Technology Innovation of CAS
Priority to CN202110672321.6A priority Critical patent/CN113343869A/en
Publication of CN113343869A publication Critical patent/CN113343869A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12 Classification; Matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/02 Preprocessing
    • G06F 2218/04 Denoising
    • G06F 2218/06 Denoising by applying a scale-space analysis, e.g. using wavelet analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/08 Feature extraction

Abstract

The invention discloses an automatic classification and identification method for electroencephalogram signals based on NTFT and CNN, which collects a sample data set; establishes a training sample set and a verification sample set; trains a convolutional neural network model with the training sample set and the verification sample set; obtains an optimal convolutional neural network model; and calculates the identification accuracy corresponding to the optimal convolutional neural network model as the optimal identification accuracy. The invention combines the instantaneous characterization ability of NTFT with the learning ability of CNN, optimizes the CNN through the NTFT, and improves the identification accuracy.

Description

Electroencephalogram signal automatic classification and identification method based on NTFT and CNN
Technical Field
The invention belongs to the technical field of signal processing and machine learning, and particularly relates to an automatic classification and identification method of electroencephalogram signals based on NTFT (Normal Time-Frequency Transform) and CNN (Convolutional Neural Networks).
Background
Brain dysfunction can cause nervous system diseases that endanger health; such diseases affect a wide range of people and have a high mortality rate. Electroencephalogram-based diagnosis relies mainly on a clinician's observation of, and experience with, the patient's conventional electroencephalogram, but the clinician's examination workload is heavy and the process is time-consuming and error-prone, which is unfavorable for the diagnosis of electroencephalogram signals.
With the development of computer science, data signal processing and recognition techniques have been widely applied to medical signal processing. Electroencephalogram signal processing and analysis methods can be roughly divided into time-domain analysis, frequency-domain analysis, time-frequency analysis and neural-network methods. The electroencephalogram signal is a random, non-stationary signal, so a common analysis approach is time-frequency analysis, such as the short-time Fourier transform, wavelet packet decomposition and empirical mode decomposition, but these suffer from low time-frequency resolution, frequency offset and mode aliasing. At present, researchers at home and abroad have applied neural networks effectively to electroencephalogram signal processing, using neural network models to detect, classify and identify electroencephalogram signals in the time domain. However, when there is noise interference in the time domain, the characteristics of the electroencephalogram signal are distorted or even submerged by the noise, which affects the accuracy of classification and identification.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides an automatic electroencephalogram signal classification and identification method based on NTFT and CNN that can process electroencephalogram signals stably, accurately and automatically, and solves the problem of noise interference degrading the accuracy of electroencephalogram signal classification and identification. The NTFT is introduced into the CNN algorithm: the characteristics of the electroencephalogram signal are clearly highlighted, the important characteristics of the electroencephalogram signal are well preserved in the frequency domain, random noise and other incoherent components are relatively weakened in the frequency domain, and part of the noise is even separated from the electroencephalogram signal in the frequency domain. The NTFT expresses the instantaneous characteristics of the electroencephalogram signal accurately and thus better optimizes the performance of the CNN.
The above object of the present invention is achieved by the following technical solutions:
an electroencephalogram signal automatic classification and identification method based on NTFT and CNN comprises the following steps:
step 1, collecting different types of electroencephalogram signals as a sample data set;
step 2, performing the standard time-frequency transformation on the electroencephalogram signals in the sample data set to obtain standard time-frequency spectra of the electroencephalogram signals, taking part of the standard time-frequency spectra as a training sample set and the remaining standard time-frequency spectra as a verification sample set; training the convolutional neural network model with the standard time-frequency spectra of the electroencephalogram signals in the training sample set, and verifying the convolutional neural network model with the standard time-frequency spectra of the electroencephalogram signals in the verification sample set;
step 3, calculating a training loss value through a training loss function, calculating a verification loss value through a verification loss function, and obtaining an optimal convolutional neural network model when the variation value of the training loss value is smaller than a training loss threshold value and the variation value of the verification loss value is smaller than a verification loss threshold value;
and step 4, calculating the identification accuracy corresponding to the optimal convolutional neural network model as the optimal identification accuracy (a pipeline sketch of steps 1 to 4 follows below).
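For clarity, the four steps above can be outlined in the following sketch. It is only an illustration under stated assumptions: every helper (ntft_spectrum, build_cnn, train_until_converged, identification_accuracy) is a hypothetical placeholder callable, some of which are sketched later in this description; the 80% / 20% split ratio is the one named in the text.

```python
import numpy as np

def run_pipeline(segments, labels, fs, ntft_spectrum, build_cnn,
                 train_until_converged, identification_accuracy):
    """Illustrative outline of steps 1-4; every helper is passed in as a placeholder callable."""
    # Step 2: transform every collected EEG segment into its standard time-frequency spectrum
    spectra = np.stack([ntft_spectrum(x, fs) for x in segments])
    n_train = int(0.8 * len(labels))                      # 80% training / 20% verification
    x_tr, y_tr = spectra[:n_train], labels[:n_train]
    x_va, y_va = spectra[n_train:], labels[n_train:]

    # Step 3: train until the change of both loss values falls below its threshold
    model = build_cnn(n_classes=len(set(labels)))
    best_model = train_until_converged(model, (x_tr, y_tr), (x_va, y_va))

    # Step 4: identification accuracy of the optimal model on the verification sample set
    return identification_accuracy(best_model, x_va, y_va)
```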
The standard time-frequency transformation in step 2 described above is based on the following formula:

Ψf(τ, ω̃) = ∫_R f(t) ψ*_ω̃(t - τ) dt

where Ψf(τ, ω̃) denotes the standard time-frequency spectrum of the electroencephalogram signal after the standard time-frequency transformation, ψ_ω̃ denotes the Gaussian-window kernel function (a complex exponential in jω̃t modulated by a Gaussian window) and the asterisk denotes complex conjugation, τ and ω̃ denote the instantaneous time and the instantaneous circular frequency respectively, σ denotes the window-width parameter of the Gaussian window, R denotes the real-number domain, f(t) denotes the electroencephalogram signal in the time domain, t denotes time, and j denotes the imaginary unit.
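A minimal numerical sketch of such a transform is given below, assuming a Gaussian-windowed complex-exponential kernel whose effective width scales inversely with the instantaneous circular frequency (a wavelet-type choice). The frequency grid, the scaling of the window width by σ and the normalization are illustrative assumptions, not the exact kernel of the patented transformation above.

```python
import numpy as np

def ntft_spectrum(f, fs, freqs=None, sigma=1.0):
    """Sketch of a standard (normal) time-frequency spectrum of a 1-D signal.

    f     : EEG segment sampled at fs Hz.
    freqs : analysed frequencies in Hz (assumed grid).
    sigma : window-width parameter of the Gaussian window.
    The Gaussian-windowed complex-exponential kernel below is an assumed realization."""
    f = np.asarray(f, dtype=float)
    n = f.size
    t = np.arange(n) / fs
    if freqs is None:
        freqs = np.linspace(1.0, fs / 2.0, 64)
    spec = np.zeros((len(freqs), n), dtype=complex)
    for i, fr in enumerate(freqs):
        w = 2.0 * np.pi * fr                 # instantaneous circular frequency
        width = sigma / w                    # window narrows as frequency grows (wavelet-type)
        for k, tau in enumerate(t):          # convolution of f(t) with the conjugated kernel
            g = np.exp(-0.5 * ((t - tau) / width) ** 2)
            spec[i, k] = np.sum(f * g * np.exp(-1j * w * (t - tau))) / fs
    return np.abs(spec)                      # magnitude spectrum used as the CNN input image
```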
The training loss function train_loss described above is defined as follows:

train_loss = - Σ_{p=1}^{q} y_p ln(a_p)

where y_p is the standard time-frequency spectrum of the p-th electroencephalogram signal in the training sample set, a_p is the output value of the activation function of the p-th neuron of the output layer, and q is the total number of standard time-frequency spectra of electroencephalogram signals in the training sample set.
The verification loss function validation_loss is defined as follows:

validation_loss = - Σ_{k=1}^{l} y_k ln(a_k)

where y_k is the standard time-frequency spectrum of the k-th electroencephalogram signal in the verification sample set, a_k is the output value of the activation function of the k-th neuron of the output layer, and l is the total number of standard time-frequency spectra of electroencephalogram signals in the verification sample set.
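Both losses can be evaluated with the same negative log-likelihood routine. A minimal sketch follows, assuming one-hot encoded labels and softmax outputs; the one-hot encoding and the epsilon guard are assumptions, not details given in the text.

```python
import numpy as np

def log_likelihood_loss(y_true, a_out, eps=1e-12):
    """Negative log-likelihood used for both train_loss and validation_loss.

    y_true : one-hot labels of the spectra, shape (num_samples, n_classes) -- assumed encoding
    a_out  : softmax outputs a of the network, same shape
    """
    return float(-np.sum(y_true * np.log(a_out + eps)))

# train_loss      over the q training spectra:      log_likelihood_loss(y_train, model_out_train)
# validation_loss over the l verification spectra:  log_likelihood_loss(y_val,   model_out_val)
```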
In the convolutional neural network model described above, each neuron in the fully connected layer is connected to all neurons in the previous layer and uses the ReLU function as its activation function, and the output-layer neurons use the softmax activation function, defined as follows:

a_i = e^(Z_i) / Σ_{m=1}^{n} e^(Z_m)

where n is the number of neurons in the output layer, a_i is the output value of the i-th neuron of the output layer, Z_i is the linear weighted sum of the i-th neuron, Z_m is the linear weighted sum of the m-th neuron, and e is the base of the natural logarithm.
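The text fixes only the activation functions (ReLU in the fully connected layers, softmax at the output); the convolutional stack, the layer widths and the input size in the PyTorch sketch below are illustrative assumptions, not the architecture claimed by the invention.

```python
import torch
import torch.nn as nn

class EEGSpectrumCNN(nn.Module):
    """Illustrative CNN over standard time-frequency spectra (layer sizes are assumptions)."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 128), nn.ReLU(),   # fully connected neurons with ReLU activation
            nn.Linear(128, n_classes),               # linear weighted sums Z_i
        )

    def forward(self, x):
        z = self.classifier(self.features(x))
        return torch.softmax(z, dim=1)               # output a_i of the softmax activation
```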
Compared with the prior art, the invention has the following advantages:
the invention combines NTFT and CNN, exerts the advantages of two methods, firstly carries out NTFT transformation on the electroencephalogram signal, and the effective signal and noise characteristics are obviously distributed in the time frequency spectrum, thereby being beneficial to the CNN to learn the characteristics of the effective signal. Different electroencephalograms are used as data samples, an NTFT + CNN model is trained through different electroencephalogram sample sets, and the accuracy of classification and identification of the electroencephalograms by the NTFT + CNN model can reach 99.9%.
The NTFT + CNN model classifies and identifies electroencephalogram signals without any prior filtering of the sample data. The NTFT works directly from the data, needs no prior information, and retains the information of the original signal to the greatest extent. The NTFT satisfies dimension conservation; in the standard time-frequency spectrum, the noise and the effective electroencephalogram signal are displayed in different frequency bands and correspond one-to-one with the original waveform. The NTFT thus provides favorable conditions for CNN feature learning and allows the CNN's learning capability to be exercised effectively. The NTFT + CNN model is robust to noise, which has little influence on the identification accuracy, and the model is simple and stable.
Drawings
FIG. 1 shows the waveform of the Z electroencephalogram signal (abbreviated as the Z signal) and the corresponding time-frequency spectrum (Z time-frequency spectrum); the left panel is the waveform of the Z electroencephalogram signal and the right panel is the Z time-frequency spectrum;
FIG. 2 shows the waveform of the O electroencephalogram signal (abbreviated as the O signal) and the corresponding time-frequency spectrum (O time-frequency spectrum); the left panel is the waveform of the O electroencephalogram signal and the right panel is the O time-frequency spectrum;
FIG. 3 shows the waveform of the N electroencephalogram signal (abbreviated as the N signal) and the corresponding time-frequency spectrum (N time-frequency spectrum); the left panel is the waveform of the N electroencephalogram signal and the right panel is the N time-frequency spectrum;
FIG. 4 shows the waveform of the F electroencephalogram signal (abbreviated as the F signal) and the corresponding time-frequency spectrum (F time-frequency spectrum); the left panel is the waveform of the F electroencephalogram signal and the right panel is the F time-frequency spectrum;
FIG. 5 shows the waveform of the S electroencephalogram signal (abbreviated as the S signal) and the corresponding time-frequency spectrum (S time-frequency spectrum); the left panel is the waveform of the S electroencephalogram signal and the right panel is the S time-frequency spectrum;
FIG. 6 shows the identification accuracy, training loss value and verification loss value of Example 1 (CNN); the left panel shows the identification accuracy and the right panel shows the training loss value and the verification loss value;
FIG. 7 shows the identification accuracy, training loss value and verification loss value of Example 2 (NTFT + CNN); the left panel shows the identification accuracy and the right panel shows the training loss value and the verification loss value.
FIG. 8 is a flow chart of the present invention;
the electroencephalogram signals shown in fig. 1-5 are electroencephalogram databases clinically collected by epilepsy research laboratories of university of bourne, germany, and are a public electroencephalogram signal database which is widely applied at present. The electroencephalograms with different characteristics in fig. 1 to 5 are respectively marked as a Z electroencephalogram, an O electroencephalogram, an N electroencephalogram, an F electroencephalogram and an S electroencephalogram. The brain electrical signals in fig. 1-5 each contain 100 segments.
Detailed Description
The present invention will be described in further detail with reference to examples for the purpose of facilitating understanding and practice of the invention by those of ordinary skill in the art, and it is to be understood that the present invention has been described in the illustrative embodiments and is not to be construed as limited thereto.
Example 1:
an automatic classification and identification method for electroencephalogram signals comprises the following steps:
in the time domain, the Z electroencephalogram signal, the O electroencephalogram signal, the N electroencephalogram signal, the F electroencephalogram signal and the S electroencephalogram signal are detected, identified and classified through a convolutional neural network model (CNN) according to waveform characteristics, and the method specifically comprises the following steps:
Step 1, collecting Z electroencephalogram signals and O electroencephalogram signals to form a first sample data set, and collecting N electroencephalogram signals, F electroencephalogram signals and S electroencephalogram signals to form a second sample data set; according to the waveform characteristics:
recording a time sequence corresponding to the Z electroencephalogram signal as a Z oscillogram;
the time sequence corresponding to the O electroencephalogram signal is an O oscillogram;
the time sequence corresponding to the N electroencephalogram signals is an N oscillogram;
the time sequence corresponding to the F electroencephalogram signal is an F oscillogram;
the time sequence corresponding to the S electroencephalogram signal is an S oscillogram;
step 2, respectively taking 80% of electroencephalogram signals in the first sample data set and 80% of electroencephalogram signals in the second sample data set as training sample sets, and taking the remaining 20% of electroencephalogram signals in the first sample data set and the remaining 20% of electroencephalogram signals in the second sample data set as verification sample sets;
training a convolutional neural network model (CNN model) through a training sample set, extracting and learning time sequence characteristics of a Z electroencephalogram signal, an O electroencephalogram signal, an N electroencephalogram signal, an F electroencephalogram signal and an S electroencephalogram signal in the training sample set, testing the trained convolutional neural network model (CNN model) through a verification sample set, and counting to obtain the identification accuracy. Each neuron in the fully-connected layer in the convolutional neural network model (CNN model) is connected with all neurons in the previous layer, and each neuron takes a ReLU function as an activation function. The output layer neurons use the softmax activation function, defined as follows:
a_i = e^(Z_i) / Σ_{m=1}^{n} e^(Z_m)

where n is the number of neurons in the output layer and a_i is the output value of the i-th neuron of the output layer; the softmax activation function is used for classification, the classification parameters are condensed in the fully connected layer, and the log-likelihood function is selected as the loss function; Z_i denotes the linear weighted sum of the i-th neuron, Z_m denotes the linear weighted sum of the m-th neuron, and e is the base of the natural logarithm.
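A compact training-pass sketch, reusing the illustrative EEGSpectrumCNN model and the log-likelihood loss from the earlier sketches, is given below; the Adam optimizer, the learning rate and the full-batch update are assumptions for illustration only.

```python
import torch

def train_one_epoch(model, optimizer, x_train, y_train_onehot):
    """One illustrative training pass; optimizer choice and batch handling are assumptions."""
    model.train()
    optimizer.zero_grad()
    a = model(x_train)                                        # softmax outputs a_p
    loss = -(y_train_onehot * torch.log(a + 1e-12)).sum()     # log-likelihood (train_loss)
    loss.backward()
    optimizer.step()
    return float(loss)

# usage sketch (hypothetical tensors x_tr, y_tr_onehot):
# model = EEGSpectrumCNN(n_classes=5)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# for epoch in range(num_epochs):
#     train_losses.append(train_one_epoch(model, optimizer, x_tr, y_tr_onehot))
```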
Step 3, judging whether the convolutional neural network model is overfitted.
Overfitting has two main causes: the first and second sample data sets contain too few electroencephalogram signals, or the convolutional neural network model is too complex. It can be handled by acquiring a larger sample data set, either by collecting more data from the data source or by data augmentation; alternatively, an appropriately sized convolutional neural network model can be used, reducing the number of layers, the number of neurons and so on, thereby limiting the fitting capacity of the model.
Identification accuracy:
P=Te/(Te+Fe) (2)
and P is the identification accuracy rate and represents the accuracy of the trained convolutional neural network model for identifying and verifying the electroencephalogram signal type of the sample set. Te is the number of times of identifying the correct type of the electroencephalogram signal of the verification sample set by the trained convolutional neural network model, and Fe is the number of times of identifying the wrong type of the electroencephalogram signal of the verification sample set by the trained convolutional neural network model.
When the convolutional neural network model is stable in convergence, the identification accuracy of the verification sample set is synchronously increased along with the increase of the training times of the training sample set, and the log-likelihood function is selected as the loss function to judge the optimal convolutional neural network model.
The training loss function is defined as follows:
train_loss = - Σ_{p=1}^{q} y_p ln(a_p)

where y_p corresponds to the p-th electroencephalogram signal in the training sample set, a_p is the output value of the activation function of the p-th neuron of the output layer, and q is the total number of electroencephalogram signals in the training sample set.
The validation loss function is defined as follows:
validation_loss = - Σ_{k=1}^{l} y_k ln(a_k)

where y_k corresponds to the k-th electroencephalogram signal in the verification sample set, a_k is the output value of the activation function of the k-th neuron of the output layer, and l is the total number of verification samples.
A training loss value is calculated with the training loss function and a verification loss value with the verification loss function; both decrease gradually as the convolutional neural network model is trained and eventually level off. When the training loss function and the verification loss function have converged and stabilized, that is, when the training loss value has stabilized and its change is smaller than the training loss threshold, and the verification loss value has stabilized and its change is smaller than the verification loss threshold, the optimal convolutional neural network model is obtained.
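The stopping rule can be checked after each training epoch as in the sketch below; the two thresholds are not quantified in the text, so epsilon_train and epsilon_val are placeholders.

```python
import copy

def has_converged(loss_history, threshold):
    """True when the change of the most recent loss value is smaller than the threshold."""
    return len(loss_history) >= 2 and abs(loss_history[-1] - loss_history[-2]) < threshold

# inside the training loop (epsilon_train and epsilon_val are unspecified placeholders):
# if has_converged(train_losses, epsilon_train) and has_converged(val_losses, epsilon_val):
#     best_model = copy.deepcopy(model)   # the optimal convolutional neural network model
```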
Step 4, judging the identification accuracy of the optimal convolutional neural network model: when the convolutional neural network model has converged, the identification accuracy corresponding to the optimal convolutional neural network model is the optimal identification accuracy.
Example 2:
an electroencephalogram signal automatic classification and identification method based on NTFT and CNN comprises the following steps:
standard time-frequency transformation (NTFT) is carried out on the electroencephalogram signals in the first sample data set and the second sample data set, and detection, classification and identification are carried out in a time-frequency domain through a convolutional neural network model, and the method specifically comprises the following steps:
step 1, collecting Z electroencephalogram signals and O electroencephalogram signals to form a first sample data set, collecting N electroencephalogram signals, F electroencephalogram signals and S electroencephalogram signals to form a second sample data set,
recording a time frequency spectrum corresponding to the Z electroencephalogram signal as a Z time frequency spectrum;
the time frequency spectrum corresponding to the O electroencephalogram signal is an O time frequency spectrum;
the time frequency spectrum corresponding to the N electroencephalogram signals is an N time frequency spectrum;
the time frequency spectrum corresponding to the F electroencephalogram signal is an F time frequency spectrum;
the time frequency spectrum corresponding to the S electroencephalogram signal is an S time frequency spectrum;
the Z electroencephalogram signal, the O electroencephalogram signal, the N electroencephalogram signal, the F electroencephalogram signal and the S electroencephalogram signal are not subjected to pre-filtering processing.
Step 2, performing the standard time-frequency transform (NTFT) on the electroencephalogram signals in the first sample data set and the second sample data set to obtain the standard time-frequency spectra of the electroencephalogram signals. The standard time-frequency spectra corresponding to 80% of the electroencephalogram signals in the first sample data set and 80% of those in the second sample data set are taken as the training sample set, and the standard time-frequency spectra corresponding to the remaining 20% of the electroencephalogram signals in each data set are taken as the verification sample set. The convolutional neural network model is trained with the standard time-frequency spectra of the electroencephalogram signals in the training sample set and verified with the standard time-frequency spectra of the electroencephalogram signals in the verification sample set; the model extracts and learns the standard time-frequency spectrum characteristics of the electroencephalogram signals, then judges, identifies and classifies the standard time-frequency spectra in the verification sample set, and the identification accuracy is counted after the whole verification sample set has been identified.
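A sketch of this data-preparation step is given below; it reuses the illustrative ntft_spectrum function from the earlier sketch, and the shuffling before the 80% / 20% split is an assumption.

```python
import numpy as np

def build_spectrum_datasets(segments, labels, fs, ratio=0.8, seed=0):
    """Transform raw EEG segments into standard time-frequency spectra and split them 80/20."""
    spectra = np.stack([ntft_spectrum(x, fs) for x in segments])
    spectra = spectra[:, None, :, :]                     # add a channel axis for the CNN input
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)                    # assumed shuffle before splitting
    idx = rng.permutation(len(labels))
    n_train = int(ratio * len(labels))
    train_idx, val_idx = idx[:n_train], idx[n_train:]
    return (spectra[train_idx], labels[train_idx]), (spectra[val_idx], labels[val_idx])
```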
The standard time-frequency spectrum of the electroencephalogram signal is obtained with formula (1); the standard time-frequency transform of the electroencephalogram signal is expressed as:

Ψf(τ, ω̃) = ∫_R f(t) ψ*_ω̃(t - τ) dt    (1)

The essence of formula (1) is a series of convolutions, where τ and ω̃ denote the instantaneous time and the instantaneous circular frequency respectively; the asterisk denotes complex conjugation; R denotes the real-number domain; ψ_ω̃(t - τ) is the kernel function, obtained from the classical kernel function of formula (2); t denotes time, and the shift τ in t - τ can be understood as the step by which the kernel is moved during its convolution with f(t), that is, the transform is localized (made instantaneous). Ψ denotes the standard time-frequency transform, f(t) denotes the electroencephalogram signal in the time domain, and Ψf(τ, ω̃) denotes the electroencephalogram signal in the frequency domain after the standard time-frequency transform, i.e. the standard time-frequency spectrum of the electroencephalogram signal.
The expression of the classical kernel function ψ_ω̃(t) in the standard time-frequency transform (NTFT) is given by formula (2). It contains a real function called the time-frequency resolution adapter (TFRA); different expressions of the TFRA give different types of NTFT. For one choice of the TFRA the NTFT is the standard Gabor transform; since the electroencephalogram signal is regarded as a wideband signal, the electroencephalogram signal identification method adopts the choice for which the standard time-frequency transform (NTFT) is the standard wavelet transform. w(t) denotes a Gaussian window function.
In the standard time-frequency transform, the Fourier transform of the classical kernel function ψ_ω̃, given by formula (3), is required to satisfy the condition of formula (4), where | | denotes the modulus, j denotes the imaginary unit, ω denotes frequency, and "^" denotes the Fourier-transform operator.
In processing the electroencephalogram signal, the fitted classical kernel function ψ_ω̃ is substituted into formula (1) to obtain the standard time-frequency transform of formula (5). Its kernel function is obtained by specializing formula (2) as in formula (6), i.e. the time-frequency resolution adapter and w(t) satisfy formula (6), where w(t) denotes a Gaussian window and σ denotes the window-width parameter of the Gaussian window.
After the electroencephalogram signals in the time domain are processed by the NTFT, their characteristics become markedly uniform. Each neuron in the fully connected layer of the convolutional neural network model is connected to all neurons in the previous layer and uses the ReLU function as its activation function. The output-layer neurons use the softmax activation function, defined as follows:

a_i = e^(Z_i) / Σ_{m=1}^{n} e^(Z_m)

where n is the number of neurons in the output layer and a_i is the output value of the i-th neuron of the output layer; the softmax activation function is used for classification, the classification parameters are condensed in the fully connected layer, and the log-likelihood function is selected as the loss function; Z_i denotes the linear weighted sum of the i-th neuron, Z_m denotes the linear weighted sum of the m-th neuron, and e is the base of the natural logarithm.
Step 3, judging whether the convolutional neural network model is over-fitted,
overfitting has two main causes: the standard time-frequency spectra of the electroencephalogram signals corresponding to the first and second sample data sets are too few, or the convolutional neural network model is too complex. It can be handled by acquiring a larger set of standard time-frequency spectrum samples, either by collecting more data from the data source or by data augmentation; alternatively, an appropriately sized convolutional neural network model can be used, reducing the number of layers, the number of neurons and so on, thereby limiting the fitting capacity of the model.
Identification accuracy:
P=Te/(Te+Fe) (8)
and P is the identification accuracy rate and represents the accuracy of the type of the electroencephalogram signals of the verification sample set identified by the trained convolutional neural network model. Te is the number of times that the trained convolutional neural network model identifies and verifies the correct type of the electroencephalogram signal of the sample set, and Fe is the number of times that the trained convolutional neural network model identifies and verifies the wrong type of the electroencephalogram signal of the sample set.
When the convolutional neural network model is stable in convergence, the identification accuracy of the verification sample set is synchronously increased along with the increase of the training times of the training sample set, and the log-likelihood function is selected as the loss function to judge the optimal convolutional neural network model.
The training loss function train _ loss is defined as follows:
train_loss = - Σ_{p=1}^{q} y_p ln(a_p)

where y_p is the standard time-frequency spectrum of the p-th electroencephalogram signal in the training sample set, a_p is the output value of the activation function of the p-th neuron of the output layer, and q is the total number of standard time-frequency spectra of electroencephalogram signals in the training sample set.
The verification loss function validation _ loss is defined as follows:
validation_loss = - Σ_{k=1}^{l} y_k ln(a_k)

where y_k is the standard time-frequency spectrum of the k-th electroencephalogram signal in the verification sample set, a_k is the output value of the activation function of the k-th neuron of the output layer, and l is the total number of standard time-frequency spectra of electroencephalogram signals in the verification sample set.
A training loss value is calculated with the training loss function and a verification loss value with the verification loss function; both decrease gradually as the convolutional neural network model is trained and eventually level off. When the training loss function and the verification loss function have converged and stabilized, that is, when the training loss value has stabilized and its change is smaller than the training loss threshold, and the verification loss value has stabilized and its change is smaller than the verification loss threshold, the optimal convolutional neural network model is obtained.
Step 4, judging the optimal identification accuracy of the optimal convolutional neural network model: the identification accuracy corresponding to the optimal convolutional neural network model obtained when the convolutional neural network model converges is the optimal identification accuracy.
Comparing the identification accuracy of the convolutional neural network model (CNN model) of Example 1 with that of the model of Example 2 (NTFT + CNN model) shows that introducing the NTFT markedly enhances the signal characteristics: the characteristics become uniform, the noise resistance is strong, the model can extract and learn the signal characteristics more easily, the identification accuracy is improved, and the NTFT optimizes the CNN model.
The classification identification accuracy, stability and convergence speed of the CNN model and the NTFT + CNN model are compared below:

Detection method | Identification accuracy | Convergence behavior
CNN              | 93%                     | Fluctuates widely
NTFT + CNN       | 99.9%                   | Fast and stable
When different electroencephalogram signals are disturbed by different environmental noises, the noise superimposed on the effective electroencephalogram signal in the time domain obscures the distinguishing characteristics of the different signals and increases the difficulty of classifying and identifying them with the CNN. The NTFT can represent the instantaneous frequency, instantaneous amplitude and instantaneous phase of a signal without bias, and its introduction highlights the distinctive features of different signals. After the electroencephalogram signals are transformed into the frequency domain by the NTFT, their characteristics are evident: the noise and the effective signal are displayed in different frequency bands, the noise interferes little with the effective signal, and the CNN can learn the characteristics of the effective signal more easily.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (4)

1. An electroencephalogram signal automatic classification and identification method based on NTFT and CNN is characterized by comprising the following steps:
step 1, collecting different types of electroencephalogram signals as a sample data set;
step 2, performing the standard time-frequency transformation on the electroencephalogram signals in the sample data set to obtain standard time-frequency spectra of the electroencephalogram signals, taking part of the standard time-frequency spectra as a training sample set and the remaining standard time-frequency spectra as a verification sample set; training the convolutional neural network model with the standard time-frequency spectra of the electroencephalogram signals in the training sample set, and verifying the convolutional neural network model with the standard time-frequency spectra of the electroencephalogram signals in the verification sample set;
step 3, calculating a training loss value through a training loss function, calculating a verification loss value through a verification loss function, and obtaining an optimal convolutional neural network model when the variation value of the training loss value is smaller than a training loss threshold value and the variation value of the verification loss value is smaller than a verification loss threshold value;
and step 4, calculating the identification accuracy corresponding to the optimal convolutional neural network model as the optimal identification accuracy.
2. The NTFT and CNN based electroencephalogram signal automatic classification and identification method according to claim 1, wherein the standard time-frequency transformation in step 2 is based on the following formula:

Ψf(τ, ω̃) = ∫_R f(t) ψ*_ω̃(t - τ) dt

where Ψf(τ, ω̃) denotes the standard time-frequency spectrum of the electroencephalogram signal after the standard time-frequency transformation, ψ_ω̃ denotes the Gaussian-window kernel function and the asterisk denotes complex conjugation, τ and ω̃ denote the instantaneous time and the instantaneous circular frequency respectively, σ denotes the window-width parameter of the Gaussian window, R denotes the real-number domain, f(t) denotes the electroencephalogram signal in the time domain, t denotes time, and j denotes the imaginary unit.
3. The NTFT and CNN based electroencephalogram signal automatic classification and identification method according to claim 1, wherein the training loss function train_loss is defined as follows:

train_loss = - Σ_{p=1}^{q} y_p ln(a_p)

where y_p is the standard time-frequency spectrum of the p-th electroencephalogram signal in the training sample set, a_p is the output value of the activation function of the p-th neuron of the output layer, and q is the total number of standard time-frequency spectra of electroencephalogram signals in the training sample set;

and the verification loss function validation_loss is defined as follows:

validation_loss = - Σ_{k=1}^{l} y_k ln(a_k)

where y_k is the standard time-frequency spectrum of the k-th electroencephalogram signal in the verification sample set, a_k is the output value of the activation function of the k-th neuron of the output layer, and l is the total number of standard time-frequency spectra of electroencephalogram signals in the verification sample set.
4. The NTFT and CNN-based electroencephalogram signal automatic classification and identification method according to claim 1, wherein each neuron in the fully connected layer of the convolutional neural network model is connected to all neurons in the previous layer, each neuron uses the ReLU function as an activation function, and the output-layer neurons use the softmax activation function, defined as follows:

a_i = e^(Z_i) / Σ_{m=1}^{n} e^(Z_m)

where n is the number of neurons in the output layer, a_i is the output value of the i-th neuron of the output layer, Z_i is the linear weighted sum of the i-th neuron, and Z_m is the linear weighted sum of the m-th neuron.
CN202110672321.6A 2021-06-17 2021-06-17 Electroencephalogram signal automatic classification and identification method based on NTFT and CNN Pending CN113343869A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110672321.6A CN113343869A (en) 2021-06-17 2021-06-17 Electroencephalogram signal automatic classification and identification method based on NTFT and CNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110672321.6A CN113343869A (en) 2021-06-17 2021-06-17 Electroencephalogram signal automatic classification and identification method based on NTFT and CNN

Publications (1)

Publication Number Publication Date
CN113343869A true CN113343869A (en) 2021-09-03

Family

ID=77475979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110672321.6A Pending CN113343869A (en) 2021-06-17 2021-06-17 Electroencephalogram signal automatic classification and identification method based on NTFT and CNN

Country Status (1)

Country Link
CN (1) CN113343869A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113988138A (en) * 2021-11-05 2022-01-28 青岛农业大学 Signal extraction and classification method based on deep learning
CN114129169A (en) * 2021-11-22 2022-03-04 中节能风力发电股份有限公司 Bioelectric signal data identification method, system, medium, and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059565A (en) * 2019-03-20 2019-07-26 杭州电子科技大学 A kind of P300 EEG signal identification method based on improvement convolutional neural networks
CN110929581A (en) * 2019-10-25 2020-03-27 重庆邮电大学 Electroencephalogram signal identification method based on space-time feature weighted convolutional neural network
CN111832416A (en) * 2020-06-16 2020-10-27 杭州电子科技大学 Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059565A (en) * 2019-03-20 2019-07-26 杭州电子科技大学 A kind of P300 EEG signal identification method based on improvement convolutional neural networks
CN110929581A (en) * 2019-10-25 2020-03-27 重庆邮电大学 Electroencephalogram signal identification method based on space-time feature weighted convolutional neural network
CN111832416A (en) * 2020-06-16 2020-10-27 杭州电子科技大学 Motor imagery electroencephalogram signal identification method based on enhanced convolutional neural network

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
周淑伊 et al., "基于标准时频变换的脑电信号分析方法" (Electroencephalogram signal analysis method based on the standard time-frequency transform), 《电子测量与仪器学报》 (Journal of Electronic Measurement and Instrumentation), vol. 32, no. 12, 31 December 2018, pages 127-133 *
周淑伊, "基于标准时频变换的酗酒者脑电信号分析方法" (Analysis method of alcoholic subjects' electroencephalogram signals based on the standard time-frequency transform), 《中国优秀博硕士学位论文全文数据库(硕士)医药卫生科技辑》 (China Master's Theses Full-text Database, Medicine and Health Sciences), no. 06, 15 June 2020, pages 071-66 *
姚彦吉, "地震事件自动识别的标准时频变换方法" (Standard time-frequency transform method for automatic identification of seismic events), 《武汉大学学报(信息科学版)》 (Geomatics and Information Science of Wuhan University), 5 December 2020, pages 1-11 *
胡章芳 et al., "基于时频域的卷积神经网络运动想象脑电信号识别方法" (Motor imagery electroencephalogram signal recognition method based on a convolutional neural network in the time-frequency domain), 《计算机应用》 (Journal of Computer Applications), vol. 39, no. 8, 10 August 2019, pages 2480-2483 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113988138A (en) * 2021-11-05 2022-01-28 青岛农业大学 Signal extraction and classification method based on deep learning
CN114129169A (en) * 2021-11-22 2022-03-04 中节能风力发电股份有限公司 Bioelectric signal data identification method, system, medium, and device

Similar Documents

Publication Publication Date Title
CN108714026B (en) Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion
CN106709469B (en) Automatic sleep staging method based on electroencephalogram and myoelectricity multiple characteristics
CN110786850B (en) Electrocardiosignal identity recognition method and system based on multi-feature sparse representation
CN105841961A (en) Bearing fault diagnosis method based on Morlet wavelet transformation and convolutional neural network
CN107811649B (en) Heart sound multi-classification method based on deep convolutional neural network
CN110598676B (en) Deep learning gesture electromyographic signal identification method based on confidence score model
CN112426160A (en) Electrocardiosignal type identification method and device
CN110292377B (en) Electroencephalogram signal analysis method based on instantaneous frequency and power spectrum entropy fusion characteristics
CN113343869A (en) Electroencephalogram signal automatic classification and identification method based on NTFT and CNN
CN112487945B (en) Pulse condition identification method based on double-path convolution neural network fusion
CN114190944B (en) Robust emotion recognition method based on electroencephalogram signals
CN108567418A (en) A kind of pulse signal inferior health detection method and detecting system based on PCANet
CN110543831A (en) brain print identification method based on convolutional neural network
CN112754502A (en) Automatic music switching method based on electroencephalogram signals
CN114595728A (en) Signal denoising method based on self-supervision learning
CN111938691A (en) Basic heart sound identification method and equipment
CN115905827A (en) Steam turbine fault diagnosis method and device based on neural network
CN115017960B (en) Electroencephalogram signal classification method based on space-time combined MLP network and application
CN116211322A (en) Depression recognition method and system based on machine learning electroencephalogram signals
CN116758922A (en) Voiceprint monitoring and diagnosing method for transformer
CN114118157A (en) Illumination information diagnosis method and system based on plant electric signals
CN115374815A (en) Automatic sleep staging method based on visual Transformer
Wang et al. A novel approach for the pattern recognition of hand movements based on EMG and VPMCD
CN112149724B (en) Electroencephalogram data feature extraction method based on intra-class compactness
CN116570295B (en) Electrocardiogram low-voltage T-wave end point positioning method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210903

RJ01 Rejection of invention patent application after publication