CN108714026B - Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion - Google Patents

Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion

Info

Publication number
CN108714026B
Authority
CN
China
Prior art keywords
samples
neural network
sample
models
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810255649.6A
Other languages
Chinese (zh)
Other versions
CN108714026A (en
Inventor
张敬
田婧
徐晓滨
文成林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201810255649.6A priority Critical patent/CN108714026B/en
Publication of CN108714026A publication Critical patent/CN108714026A/en
Application granted granted Critical
Publication of CN108714026B publication Critical patent/CN108714026B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316: Modalities, i.e. specific diagnostic methods
    • A61B5/318: Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7253: Details of waveform analysis characterised by using transforms
    • A61B5/7257: Details of waveform analysis characterised by using transforms using Fourier transforms
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235: Details of waveform analysis
    • A61B5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Psychiatry (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Physiology (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Cardiology (AREA)
  • Fuzzy Systems (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses an electrocardiogram (ECG) classification method based on a deep convolutional neural network (DCNN) and online decision fusion. Unlike previous methods that use hand-crafted features or learn features from the raw signal domain, the proposed DCNN-based method learns features and the classifier from the time-frequency domain in an end-to-end manner. The invention first converts the electrocardiographic waveform signal into the time-frequency domain using the short-time Fourier transform. Next, a dedicated DCNN model is trained from training samples of a given length. Finally, an online decision fusion method is proposed to fuse past and present decisions from different models into a more accurate decision. Experimental results on a synthesized 20-class ECG data set demonstrate the effectiveness of the proposed method.

Description

Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion
Technical Field
The invention belongs to the field of signal classification, and particularly relates to a fine-grained electrocardiosignal classification method based on a deep convolutional neural network and on-line decision fusion.
Background
Electrocardiograms (ECGs), which record the depolarization-repolarization process of the heart's electrical activity during the cardiac cycle, are widely used to monitor or diagnose the condition of a patient's heart. Patients are often required to go to a hospital and be diagnosed by a trained, experienced cardiologist, which is expensive and inconvenient. Therefore, automated monitoring and diagnostic systems are highly desirable in clinics, community medical centers, and home healthcare programs. Although great progress has been made over the past decades in the filtering, detection and classification of electrocardiosignals, effective and accurate classification remains a challenging problem because of noise and because the types of symptoms vary from patient to patient.
Prior to classification, filtering is typically required to remove various kinds of noise from the ECG signal, including power-line interference, baseline wander, muscle contraction noise, etc. Conventional approaches such as low-pass filters and filter banks can reduce noise but also introduce artifacts. Combining signal modeling and filtering can alleviate this problem, but such methods are limited to a single type of noise. In recent years, different noise-cancellation methods based on the wavelet transform have been proposed, since the wavelet transform shows great advantages in multi-resolution signal analysis. For example, S. Poungponsri and X. H. Yu proposed an adaptive filtering method based on the wavelet transform and an artificial neural network, which can effectively remove different types of noise.
For ECG classification, the classical approach typically comprises two parts: feature extraction and classifier training. Feature extraction is usually performed in the time or frequency domain and covers amplitudes, intervals, higher-order statistics, and the like. Common methods include filter banks, Kalman filters, principal component analysis, frequency analysis, wavelet transforms and statistical methods. Classification methods generally include support vector machines, artificial neural networks, hidden Markov models, and mixture-of-experts methods. Among them, a large number of methods rely on the strong modeling capability of artificial neural networks. For example, M. Engin proposed an electrocardiographic signal classification method based on a fuzzy neural network, using autoregressive model coefficients, higher-order cumulants and wavelet transform variances as features. L. Y. Shyu et al. proposed a new method for detecting ventricular premature contractions (VPCs) using the wavelet transform and a fuzzy neural network, whose computational complexity is reduced because the same wavelets are used for QRS detection and VPC classification. I. Guler and E. D. Ubeyli suggested ECG signal classification using a combined neural network model: statistical features based on the discrete wavelet transform are extracted as the input of the first-stage networks, and the second-stage network is then trained using the outputs of the first-stage networks as its input. T. Ince et al. presented a new approach that uses a robust and versatile artificial neural network architecture and trains a patient-specific model with morphological wavelet-transform features and temporal features for each patient.
Although the above methods work well, there are some common drawbacks:
1) Hand-crafted features rely on expert knowledge or experience and require careful design and testing. Moreover, the classifier must have adequate modeling capability for these features.
2) The number of ECG signal types handled is typically small or the classification is coarse-grained, e.g., 2-5 classes. On the one hand, for a new type of electrocardiographic waveform, one has to first examine the existing discriminative features and design new ones. On the other hand, such methods still have problems with fine-grained classification, which requires more discriminative features and classifiers with stronger modeling capability.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a fine-grained electrocardiosignal classification method based on a deep convolutional neural network and online decision fusion. The invention converts the raw ECG signal to the time-frequency domain by a short-time Fourier transform (STFT). Then, the time-frequency characteristics of the signal are learned by a CNN with two-dimensional convolutions. Finally, an online decision fusion method is proposed to fuse past and current decisions of different models into a more accurate result.
The invention comprises the following steps:
Step (1) acquiring and processing the electrocardiosignal waveform:
step (1-1) acquisition of a data set: an SKX-2000ECG signal simulator is used for ECG waveform generation. The simulator can simulate electrocardiographic waveforms that produce different symptoms of various amplitudes and frequencies, including but not limited to normal, coarse atrial fibrillation, fine atrial fibrillation, atrial flutter and more than 20 electrocardiographic waveforms. The 20 types of ECG waveform signals are acquired by acquiring a certain amount of waveform signals corresponding to the types of ECG waveform signals under different parameters. The mathematical representation is:
X = {(x_i(t), y_i) | i ∈ Λ}    (1)
where x_i(t) is the i-th sample, y_i ∈ {0, ..., C−1} is the class label of the electrocardiosignal x_i(t) (there are C classes in total), and Λ is the index set of the samples. x_i(t) = [x_i(0), x_i(1), ..., x_i(N−1)]^T is the time-series representation of the i-th signal with N sampling points.
Step (1-2) short-time Fourier transform: the original electrocardiosignal is first converted into the time-frequency domain by the short-time Fourier transform to obtain its electro-spectrogram. The mathematical representation is:
s_i(k, m) = Σ_{n=0}^{N_w−1} x_i(n + mH) · w(n) · e^{−j2πkn/N_w}    (2)

where w(·) is the window function; the invention adopts a Hamming window of N_w = 256 sampling points with an overlap of 128 sampling points (hop size H = 128). s_i(k, m) is the electro-spectrogram of x_i(t) and has a 2-dimensional structure.
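As a small illustration of this step, the following Python sketch (a minimal example, not the patent's implementation) converts one signal into a spectrogram with the stated 256-point Hamming window and 128-point overlap; the sampling rate fs, the synthetic input waveform and the log-magnitude scaling are assumptions made only for the example.

```python
import numpy as np
from scipy import signal

fs = 512                               # assumed sampling rate in Hz (not specified in the text)
t = np.arange(0, 4, 1.0 / fs)
x = np.sin(2 * np.pi * 1.2 * t)        # placeholder waveform standing in for one ECG sample x_i(t)

# Short-time Fourier transform: 256-point Hamming window, 128-point overlap,
# as stated in step (1-2).
f, tau, Z = signal.stft(x, fs=fs, window='hamming', nperseg=256, noverlap=128)

# 2-D electro-spectrogram s_i(k, m); log-magnitude is one common scaling choice.
spectrogram = np.log1p(np.abs(Z))
print(spectrogram.shape)               # (frequency bins, time frames)
```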
Step (2) design of the network structure: since 2-dimensional spectrograms are used as the network input, a deep convolutional neural network with 2-dimensional convolutions is designed; the network contains 3 convolutional layers, 2 fully-connected layers and 1 max-pooling layer. The concrete structure is listed in Table 1 of the patent:
(The network architecture table is provided as an image in the original patent and is not reproduced in this text version.)
taking the electrocardiogram as input, and predicting a probability vector p for the deep neural network structure for the classification problemi=h(sih)∈RcAnd Pi||1=1,θhRepresenting a parameter to be learned of the network, which can be trained by minimizing a cross entropy loss function, represented as follows:
Figure BDA0001608874880000033
wherein q isiIs corresponding to the category label yiThe one-hot vector of (c).
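Since the exact layer hyper-parameters appear only in the unreproduced architecture table, the following PyTorch sketch is merely an illustrative stand-in: it follows the stated layout of 3 convolutional layers, 1 max-pooling layer and 2 fully-connected layers and is trained with the cross-entropy loss of Equation (3); all kernel sizes, channel counts and the assumed 128×16 spectrogram input are placeholders, not the patent's actual values.

```python
import torch
import torch.nn as nn

class ECGSpectrogramCNN(nn.Module):
    """Illustrative 3-conv / 1-maxpool / 2-FC network; all layer sizes are assumed."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),           # the single max-pooling layer
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 8, 128), nn.ReLU(),  # assumes a 128x16 input spectrogram
            nn.Linear(128, num_classes),             # logits; softmax gives p_i
        )

    def forward(self, s):
        return self.classifier(self.features(s))

model = ECGSpectrogramCNN()
dummy = torch.randn(4, 1, 128, 16)                  # batch of 4 assumed 128x16 spectrograms
logits = model(dummy)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 20, (4,)))  # Eq. (3) with one-hot targets
loss.backward()
```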
In practice, for a given window function, the width of the electro-spectrogram is determined by the length of the ECG waveform signal; for a given sampling rate, longer signals contain more heartbeats. Typically, detection and classification operate on single-beat signals, but more beats carry more information and therefore allow higher detection and classification accuracy. In the invention, once the sequence length and the sampling rate are determined, each sample is divided into several subsamples of equal length, and the designed deep neural network model is trained on this data set. In addition, in order to compare the performance of the models when testing samples of different lengths, the samples are divided into subsamples of different lengths and a corresponding deep neural network model is trained for each length; these models are denoted h1-h6, ordered from the shortest to the longest subsample. Although training samples of different lengths correspond to spectrograms of different widths, the same architecture is used for all of the above models: only the pooling stride along the columns is changed accordingly, while the fully-connected layers remain fixed.
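For concreteness, a minimal splitting sketch follows; the assumption that h1 uses 512-point subsamples and that each level doubles the length is made only so that the 2048-point example used in step (3) below is reproduced, and is not stated in the patent.

```python
import numpy as np

def split_into_subsamples(x, level, base_len=512):
    """Split signal x into equal-length subsamples for model h_level.

    Assumption (for illustration only): h1 uses base_len-point subsamples and
    each higher level doubles the length, i.e. len_s = base_len * 2**(level-1).
    """
    seg_len = base_len * 2 ** (level - 1)
    k_s = len(x) // seg_len                      # number of subsamples k_s at this level
    return [x[i * seg_len:(i + 1) * seg_len] for i in range(k_s)]

x = np.random.randn(2048)                        # sample of total length 2048
for s in (1, 2, 3):                              # s_l = 3 for this length
    parts = split_into_subsamples(x, s)
    print(f"h{s}: k_{s} = {len(parts)} subsamples of length {len(parts[0])}")
# -> k_1 = 4, k_2 = 2, k_3 = 1, matching the example given in step (3)
```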
Step (3) online decision fusion algorithm: for online testing, the above models can be used to test the signal sequentially at different times as its length grows. When the sample is short, the decision is based only on this short segment; when the sample is longer, the decision is determined by the whole signal. These models can be viewed as different experts focusing on different amounts of information. The decisions of different experts may be complementary and can potentially be fused into a more accurate decision. Therefore, an online fusion scheme is proposed, expressed as:
p̂ = Σ_{s=1}^{s_l} ω_s · (1/k_s) Σ_{k=1}^{k_s} h_s(x_{sk})    (4)

where p̂ is the fused result; s_l ∈ {1, 2, 3, 4, 5, 6} is the index of the model corresponding to the longest subsample length into which a signal x of the given length can be divided; x_{sk} is the k-th part of sample x when the model is h_s, and k_s is the number of subsamples of x corresponding to h_s. For example, if the total length of the sample at the current time is 2048, then s_l = 3, k_1 = 4, k_2 = 2 and k_3 = 1. ω_s is the fusion weight of model h_s, and Σ_{s=1}^{s_l} ω_s = 1.
as can be seen from equation 4, the weights of the fused results for each part in the same subsample are equal and averaged. This result is reasonable because each part uses the same model, with no priority. When the subsamples are of different lengths, a weight is given to each model and their effects are compared in the final fused result.
To verify the effectiveness of the proposed algorithm, an SKX-2000 ECG signal simulator was used for ECG waveform generation. The simulator can generate electrocardiographic waveforms of different symptoms with various amplitudes and frequencies, including, but not limited to, normal rhythm, coarse atrial fibrillation, fine atrial fibrillation and atrial flutter, more than 20 waveform types in total. Nineteen types of symptomatic electrocardiographic waveforms plus the normal electrocardiographic waveform were selected for the simulation experiments.
For the above 20 types of ECG waveform signals, a certain number of waveforms were collected for each type under different parameter settings. After removing signals that were too short or otherwise unusable, a total of 2426 samples were collected, about 120 samples per class on average, each sample containing at most 16384 points. In the following experiments, 3-fold cross-validation was used to evaluate the proposed method.
In the short-time Fourier transform, a Hamming window of 256 sampling points is adopted, with an overlap of 128 sampling points. The designed network is then trained on the resulting electro-spectrograms. The CNN models are trained for 20000 iterations with a batch size of 128, a base learning rate of 0.01 that is multiplied by 0.5 every 5000 iterations, and momentum and weight-decay parameters set to 0.9 and 5×10^-6, respectively. The results of the different models are then fused with the fusion method described above.
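The experiments themselves were run in Caffe; the snippet below is only a rough PyTorch analogue of the stated schedule (20000 iterations, batch size 128, base learning rate 0.01 halved every 5000 iterations, momentum 0.9, weight decay 5×10^-6) and reuses the illustrative network sketched in step (2), with random tensors standing in for the spectrogram batches.

```python
import torch
from torch import nn, optim

model = ECGSpectrogramCNN()                      # illustrative network from the step (2) sketch
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-6)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5000, gamma=0.5)

for it in range(20000):                          # 20000 training iterations (schematic loop)
    spectrograms = torch.randn(128, 1, 128, 16)  # placeholder batch, batch size 128
    labels = torch.randint(0, 20, (128,))
    optimizer.zero_grad()
    loss = criterion(model(spectrograms), labels)
    loss.backward()
    optimizer.step()
    scheduler.step()                             # stepped per iteration: lr *= 0.5 every 5000 iters
```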
All of the above methods were implemented in Caffe, and all experiments were run on a workstation with an Nvidia GeForce GTX Titan X (Maxwell) GPU.
The invention has the following beneficial effects: by computing the short-time Fourier transform of the original signal, a discriminative feature representation can be learned from the time-frequency domain. An online decision fusion method is proposed to fuse past and present decisions from different models into a more accurate decision. Experimental results on the synthesized 20-class ECG data set demonstrate the effectiveness and efficiency of the proposed method. Furthermore, the proposed method is computationally efficient and can potentially be integrated into portable ECG monitoring instruments with limited computational resources.
Drawings
FIG. 1 is a flow chart of an algorithmic implementation of the method.
FIG. 2 is a waveform diagram of an electrocardiosignal.
FIG. 3 is the electro-spectrogram corresponding to FIG. 2.
FIG. 4 shows the fusion results and the single-model results corresponding to each s_l.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
As shown in FIG. 1, the present invention provides a fine-grained electrocardiosignal classification method based on a deep convolutional neural network and online decision fusion, which includes the following steps:
Step (1) acquiring and processing the electrocardiosignal waveform:
step (1-1) acquisition of a data set: an SKX-2000ECG signal simulator is used for ECG waveform generation. The simulator can simulate electrocardiographic waveforms that produce different symptoms of various amplitudes and frequencies, including but not limited to normal, coarse atrial fibrillation, fine atrial fibrillation, atrial flutter and more than 20 electrocardiographic waveforms. The 20 types of ECG waveform signals are acquired by acquiring a certain amount of waveform signals corresponding to the types of ECG waveform signals under different parameters. The mathematical representation is:
X = {(x_i(t), y_i) | i ∈ Λ}    (1)
where x_i(t) is the i-th sample, y_i ∈ {0, ..., C−1} is the class label of the electrocardiosignal x_i(t) (there are C classes in total), and Λ is the index set of the samples. x_i(t) = [x_i(0), x_i(1), ..., x_i(N−1)]^T is the time-series representation of the i-th signal with N sampling points. Specifically, an electrocardiosignal waveform is shown in FIG. 2, where the horizontal axis represents time in s and the vertical axis represents amplitude in μV.
Step (1-2) short-time Fourier transform: the original electrocardiosignal is first converted into the time-frequency domain by the short-time Fourier transform to obtain its electro-spectrogram. The mathematical representation is:
s_i(k, m) = Σ_{n=0}^{N_w−1} x_i(n + mH) · w(n) · e^{−j2πkn/N_w}    (2)

where w(·) is the window function; the invention adopts a Hamming window of N_w = 256 sampling points with an overlap of 128 sampling points (hop size H = 128). s_i(k, m) is the electro-spectrogram of x_i(t) and has a 2-dimensional structure. FIG. 3 shows the electro-spectrogram corresponding to FIG. 2, where the horizontal axis represents time and the vertical axis represents frequency; the time-frequency characteristics of the signal can be observed simultaneously.
Step (2) design of the network structure: since 2-dimensional spectrograms are used as the network input, a deep convolutional neural network with 2-dimensional convolutions is designed; the network contains 3 convolutional layers, 2 fully-connected layers and 1 max-pooling layer.
Table 1: the designed network architecture
(The table is provided as an image in the original patent and is not reproduced in this text version.)
Taking the electro-spectrogram as input, the deep neural network predicts, for the classification problem, a probability vector p_i = h(s_i; θ_h) ∈ R^C with ||p_i||_1 = 1, where θ_h denotes the network parameters to be learned. The network can be trained by minimizing the cross-entropy loss function:

L(θ_h) = − Σ_{i∈Λ} q_i^T log(p_i)    (3)

where q_i is the one-hot vector corresponding to the class label y_i.
in effect, the width of the electrocardiogram is related to the length of the electrocardiographic waveform signal for a given window function. Given a sampling rate, longer signals contain more beats. Generally, the detected and classified signals are single beat signals, but the more beats contain more information, the higher the accuracy of detection and classification. In the invention, after the sequence length and the sampling rate are determined, each sample is divided into a plurality of subsamples, and the length of each subsample is the same. The designed deep neural network model (CNN model) is then trained from the data set. In addition, in order to compare the performance of the model when testing longer samples, the samples are divided into sub-samples with different lengths, and corresponding CNN models are trained by the sub-samples, and the models designed according to the different lengths of the samples are respectively expressed as h1-h6 according to the sizes of the sub-samples from small to large. Although training samples of different lengths will correspond to different widths of the spectrogram, the same architecture is used for all of the above models, as long as the pooling step size along the column is changed accordingly, while keeping the fully connected layer fixed.
To verify the effectiveness of the method in learning feature representations, the learned features are inspected. For example, the learned features of all training data are obtained by computing the responses of the penultimate layer, and the first three principal component vectors are then obtained by principal component analysis.
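A sketch of this inspection, assuming the penultimate-layer responses of all training samples have already been collected into a (num_samples × feature_dim) array (random values are used here as a stand-in), might be:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for the penultimate-layer responses of all training samples;
# both the sample count and the feature dimension are assumed values.
features = np.random.randn(2426, 128)

pca = PCA(n_components=3)                 # first three principal components
projected = pca.fit_transform(features)   # coordinates used for visual inspection
print(pca.explained_variance_ratio_)
```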
Step (3) online decision fusion algorithm: for online testing, the above models can be used to test the signal sequentially at different times as its length grows. When the sample is short, the decision is based only on this short segment; when the sample is longer, the decision is determined by the whole signal. These models can be viewed as different experts focusing on different amounts of information. The decisions of different experts may be complementary and can potentially be fused into a more accurate decision. Therefore, an online fusion scheme is proposed, expressed as:
p̂ = Σ_{s=1}^{s_l} ω_s · (1/k_s) Σ_{k=1}^{k_s} h_s(x_{sk})    (4)

where p̂ is the fused result; s_l ∈ {1, 2, 3, 4, 5, 6} is the index of the model corresponding to the longest subsample length into which a signal x of the given length can be divided; x_{sk} is the k-th part of sample x when the model is h_s, and k_s is the number of subsamples of x corresponding to h_s. For example, if the total length of the sample at the current time is 2048, then s_l = 3, k_1 = 4, k_2 = 2 and k_3 = 1. ω_s is the fusion weight of model h_s, and Σ_{s=1}^{s_l} ω_s = 1.
as can be seen from equation 4, the weights of the fused results for each part in the same subsample are equal and averaged. This result is reasonable because each part uses the same model, with no priority. When the subsamples are of different lengths, a weight is given to each model and their effects are compared in the final fused result.
Two fusion-weight definitions are considered: equal weights, and weights that increase with the model level (i.e., with the subsample length). The fusion weights in the latter case are calculated according to an equation that is provided as an image in the original patent and is not reproduced in this text version.
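Purely to illustrate the two strategies (the patent's actual increasing-weight formula is the unreproduced equation above), the sketch below builds uniform weights and one plausible increasing scheme with weights proportional to the model level, both normalized to sum to 1 as Equation (4) requires:

```python
import numpy as np

def uniform_weights(s_l):
    """Equal fusion weights over models h_1..h_{s_l}."""
    return np.full(s_l, 1.0 / s_l)

def increasing_weights(s_l):
    """Illustrative increasing scheme (NOT the patent's formula): w_s proportional to s."""
    w = np.arange(1, s_l + 1, dtype=float)
    return w / w.sum()

print(uniform_weights(3))      # [0.333 0.333 0.333]
print(increasing_weights(3))   # [0.167 0.333 0.5  ]
```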
FIG. 4 shows the fusion results and the single-model results corresponding to each s_l. First, it can be seen that model h4 achieves the best performance among the models at all levels, perhaps because it balances the amount of data per decision against the number of decisions. Compared with model h1, its input data is 16 times longer; compared with model h6, which can make only a single decision on the entire sequence, model h4 can make 4 decisions on four different subsequences, which can be merged into a more accurate decision. It can then be seen that the fused results are consistently better than the single-model results, and that performance keeps improving as the data length grows. This demonstrates that fusing the decisions of different models yields a more robust and more accurate decision, because the models cover different ranges of the original data during training. Furthermore, using non-uniform weights shows no advantage over using uniform weights. A possible reason is that the non-uniform weight strategy is biased toward the model with the largest amount of data and neglects the decisions of the models with less data. Although it is still superior to the single-model case, the improvement is very limited, especially at higher levels, where the performance depends heavily on the highest-level model.
To verify the effectiveness of the algorithm proposed by the present invention, an SKX-2000 ECG signal simulator was used for ECG waveform generation. The simulator can generate electrocardiographic waveforms of different symptoms with various amplitudes and frequencies, including, but not limited to, normal rhythm, coarse atrial fibrillation, fine atrial fibrillation and atrial flutter, more than 20 waveform types in total. Nineteen types of symptomatic electrocardiographic waveforms plus the normal electrocardiographic waveform were selected for the simulation experiments.
For the above 20 types of ECG waveform signals, a certain number of waveforms were collected for each type under different parameter settings. After removing signals that were too short or otherwise unusable, a total of 2426 samples were collected, about 120 samples per class on average, each sample containing at most 16384 points. In the following experiments, 3-fold cross-validation was used to evaluate the proposed method.
In the short-time Fourier transform, a Hamming window of 256 sampling points is adopted, with an overlap of 128 sampling points. The designed network is then trained on the resulting electro-spectrograms. The CNN models are trained for 20000 iterations with a batch size of 128, a base learning rate of 0.01 that is multiplied by 0.5 every 5000 iterations, and momentum and weight-decay parameters set to 0.9 and 5×10^-6, respectively. The results of the different models are then fused with the fusion method described above.
All of the above methods were implemented in Caffe, and all experiments were run on a workstation with an Nvidia GeForce GTX Titan X (Maxwell) GPU.
The simulation results are as follows: the test results of the proposed method and other models are shown in Table 2, and the run times of the proposed models at different levels are shown in Table 3.
TABLE 2 test results of different models
(Table 2 is provided as an image in the original patent and is not reproduced in this text version.)
TABLE 3 run time of different level models
(Table 3 is provided as an image in the original patent and is not reproduced in this text version.)

Claims (1)

1. The fine-grained electrocardiosignal classification method based on the deep convolutional neural network and the online decision fusion is characterized by comprising the following specific steps of:
acquiring and processing an electrocardiosignal waveform:
step (1-1) acquisition of a data set: generating an ECG waveform by adopting an SKX-2000ECG signal simulator; the mathematical representation of the cardiac signal is:
X = {(x_i(t), y_i) | i ∈ Λ}
where x_i(t) is the i-th sample, y_i ∈ {0, ..., C−1} is the class label of the electrocardiosignal x_i(t) (there are C classes in total), and Λ is the index set of the samples; x_i(t) = [x_i(0), x_i(1), ..., x_i(N−1)]^T is the time-series representation of the i-th signal with N sampling points;
step (1-2) short-time Fourier transform: converting the original electrocardiosignals into a time-frequency domain through short-time Fourier transform to obtain an electrocardio-spectrogram of the electrocardiosignals; the mathematical representation process is as follows:
s_i(k, m) = Σ_{n=0}^{N_w−1} x_i(n + mH) · w(n) · e^{−j2πkn/N_w}

where w(·) is the window function; a Hamming window of N_w = 256 sampling points is adopted, with an overlap of 128 sampling points (hop size H = 128); s_i(k, m) is the electro-spectrogram of x_i(t) and has a 2-dimensional structure;
step (2) design of the network structure: considering that the 2-dimensional electro-spectrogram is used as the network input, designing a deep convolutional neural network structure comprising 2-dimensional convolutions, wherein the designed network structure comprises 3 convolutional layers, 2 fully-connected layers and 1 max-pooling layer;
taking the electrocardiogram as input, and predicting a probability vector p for the deep neural network structure for the classification problemi=h(sih)∈RcAnd is
Figure FDA0003053452760000012
θhRepresenting a parameter to be learned of the network, and training the parameter by a method of minimizing a cross entropy loss function, wherein the parameter is represented as follows:
Figure FDA0003053452760000013
wherein q isiIs corresponding to the category label yiThe one-hot vector of (a);
after determining the sequence length and the sampling rate, dividing each sample into a plurality of sub-samples, wherein the length of each sub-sample is the same; secondly, training the designed deep neural network model according to the data set; in addition, in order to compare the performances of the models with different test sample lengths, the samples are divided into sub-samples with different lengths, the corresponding deep neural network models are trained by the sub-samples, and the models designed according to the different sample lengths are respectively expressed as h1-h6 from small to large;
and (3) an online decision fusion algorithm, which is specifically expressed as:
p̂ = Σ_{s=1}^{s_l} ω_s · (1/k_s) Σ_{k=1}^{k_s} h_s(x_{sk})

where p̂ is the fused result; s_l ∈ {1, 2, 3, 4, 5, 6} is the model index corresponding to the longest subsample length into which a sample x of the set length can be divided; x_{sk} is the k-th part of sample x when the model is h_s, and k_s is the number of subsamples of x corresponding to h_s; ω_s is the fusion weight of model h_s, and Σ_{s=1}^{s_l} ω_s = 1.
CN201810255649.6A 2018-03-27 2018-03-27 Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion Expired - Fee Related CN108714026B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810255649.6A CN108714026B (en) 2018-03-27 2018-03-27 Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810255649.6A CN108714026B (en) 2018-03-27 2018-03-27 Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion

Publications (2)

Publication Number Publication Date
CN108714026A CN108714026A (en) 2018-10-30
CN108714026B true CN108714026B (en) 2021-09-03

Family

ID=63898874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810255649.6A Expired - Fee Related CN108714026B (en) 2018-03-27 2018-03-27 Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion

Country Status (1)

Country Link
CN (1) CN108714026B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109745033A (en) * 2018-12-25 2019-05-14 东南大学 Dynamic electrocardiogram method for evaluating quality based on time-frequency two-dimensional image and machine learning
CN109793511A (en) * 2019-01-16 2019-05-24 成都蓝景信息技术有限公司 Electrocardiosignal noise detection algorithm based on depth learning technology
CN109998523A (en) * 2019-03-27 2019-07-12 苏州平稳芯跳医疗科技有限公司 It is a kind of singly to lead electrocardiosignal classification method and singly lead electrocardiosignal categorizing system
CN110269625B (en) * 2019-05-31 2022-02-11 杭州电子科技大学 Novel multi-feature fusion electrocardio authentication method and system
CN110192864B (en) * 2019-06-12 2020-09-22 北京交通大学 Cross-domain electrocardiogram biological characteristic identity recognition method
CN110507318A (en) * 2019-08-16 2019-11-29 武汉中旗生物医疗电子有限公司 A kind of electrocardiosignal QRS wave group localization method and device
CN110664395B (en) * 2019-09-29 2022-04-12 京东方科技集团股份有限公司 Image processing method, image processing apparatus, and storage medium
CN111000551A (en) * 2019-12-05 2020-04-14 常州市第一人民医院 Heart disease risk diagnosis method based on deep convolutional neural network model
CN111160139B (en) * 2019-12-13 2023-10-24 中国科学院深圳先进技术研究院 Electrocardiosignal processing method and device and terminal equipment
CN111000553B (en) * 2019-12-30 2022-09-27 山东省计算中心(国家超级计算济南中心) Intelligent classification method for electrocardiogram data based on voting ensemble learning
CN113384277B (en) * 2020-02-26 2022-09-20 京东方科技集团股份有限公司 Electrocardiogram data classification method and classification system
CN111202512A (en) * 2020-03-05 2020-05-29 齐鲁工业大学 Electrocardiogram classification method and device based on wavelet transformation and DCNN
CN111419213A (en) * 2020-03-11 2020-07-17 哈尔滨工业大学 ECG electrocardiosignal generation method based on deep learning
CN111444832A (en) * 2020-03-25 2020-07-24 哈尔滨工程大学 Whale cry classification method based on convolutional neural network
CN111543977B (en) * 2020-05-09 2023-04-07 益体康(北京)科技有限公司 Multi-cascade artificial intelligence vagina discharge method based on 12-lead resting electrocardiogram
CN111901267B (en) * 2020-07-27 2021-07-02 重庆大学 Multi-antenna blind modulation identification method based on short-time Fourier transform time-frequency analysis
CN112426160A (en) * 2020-11-30 2021-03-02 贵州省人民医院 Electrocardiosignal type identification method and device
CN112545526A (en) * 2020-12-10 2021-03-26 青岛大爱慈康智能医疗科技有限公司 Electrocardiosignal detection method and device
CN112545501A (en) * 2020-12-10 2021-03-26 徐岩 Blood component concentration detection method and device
CN112750097B (en) * 2021-01-14 2022-04-05 中北大学 Multi-modal medical image fusion based on multi-CNN combination and fuzzy neural network
CN112842348B (en) * 2021-02-07 2021-09-14 山东省人工智能研究院 Automatic classification method for electrocardiosignals based on feature extraction and deep learning
CN113171103A (en) * 2021-04-20 2021-07-27 中国科学院高能物理研究所 Electrocardiogram classification method and device, equipment and storage medium
CN114469127B (en) * 2022-03-28 2022-07-29 电子科技大学 Electrocardiosignal artificial intelligence processing circuit based on heart beat differential coding
CN114587378A (en) * 2022-04-11 2022-06-07 平安科技(深圳)有限公司 Deep learning-based electrocardiogram classification method, device, equipment and storage medium
WO2024014821A1 (en) * 2022-07-14 2024-01-18 주식회사 메디컬에이아이 Method, program, and device for training neural network model based on electrocardiogram

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060100534A1 (en) * 2004-07-09 2006-05-11 Ansar, Inc. Methods for real-time autonomic nervous system monitoring using total heart rate variability, and notched windowing
CN105105743A (en) * 2015-08-21 2015-12-02 山东省计算中心(国家超级计算济南中心) Intelligent electrocardiogram diagnosis method based on deep neural network
CN105997055A (en) * 2016-07-11 2016-10-12 吉林大学 Automatic classification method, system and device of electrocardiosignal ST band
CN106951753A (en) * 2016-01-06 2017-07-14 北京三星通信技术研究有限公司 The authentication method and authentication device of a kind of electrocardiosignal
CN107203692A (en) * 2017-05-09 2017-09-26 哈尔滨工业大学(威海) The implementation method of atrial fibrillation detection based on depth convolutional neural networks

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040260188A1 (en) * 2003-06-17 2004-12-23 The General Hospital Corporation Automated auscultation system
US7908197B2 (en) * 2008-02-06 2011-03-15 Algorithmics Software Llc Systems and methods for compound risk factor sampling with integrated market and credit risk
CN103000172A (en) * 2011-09-09 2013-03-27 中兴通讯股份有限公司 Signal classification method and device
CN102715902A (en) * 2012-06-15 2012-10-10 天津大学 Emotion monitoring method for special people
CN106725420A (en) * 2015-11-18 2017-05-31 中国科学院苏州纳米技术与纳米仿生研究所 VPB recognition methods and VPB identifying system
CN107123019A (en) * 2017-03-28 2017-09-01 华南理工大学 A kind of VR shopping commending systems and method based on physiological data and Emotion identification
CN107168945B (en) * 2017-04-13 2020-07-14 广东工业大学 Bidirectional cyclic neural network fine-grained opinion mining method integrating multiple features
CN107247703A (en) * 2017-06-08 2017-10-13 天津大学 Microblog emotional analysis method based on convolutional neural networks and integrated study
CN107341506A (en) * 2017-06-12 2017-11-10 华南理工大学 A kind of Image emotional semantic classification method based on the expression of many-sided deep learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060100534A1 (en) * 2004-07-09 2006-05-11 Ansar, Inc. Methods for real-time autonomic nervous system monitoring using total heart rate variability, and notched windowing
CN105105743A (en) * 2015-08-21 2015-12-02 山东省计算中心(国家超级计算济南中心) Intelligent electrocardiogram diagnosis method based on deep neural network
CN106951753A (en) * 2016-01-06 2017-07-14 北京三星通信技术研究有限公司 The authentication method and authentication device of a kind of electrocardiosignal
CN105997055A (en) * 2016-07-11 2016-10-12 吉林大学 Automatic classification method, system and device of electrocardiosignal ST band
CN107203692A (en) * 2017-05-09 2017-09-26 哈尔滨工业大学(威海) The implementation method of atrial fibrillation detection based on depth convolutional neural networks

Also Published As

Publication number Publication date
CN108714026A (en) 2018-10-30

Similar Documents

Publication Publication Date Title
CN108714026B (en) Fine-grained electrocardiosignal classification method based on deep convolutional neural network and online decision fusion
Sangaiah et al. An intelligent learning approach for improving ECG signal classification and arrhythmia analysis
CN110840402B (en) Atrial fibrillation signal identification method and system based on machine learning
CN107822622B (en) Electrocardiogram diagnosis method and system based on deep convolutional neural network
CN104523266B (en) A kind of electrocardiosignal automatic classification method
CN107837082A (en) Electrocardiogram automatic analysis method and device based on artificial intelligence self study
Cao et al. Atrial fibrillation detection using an improved multi-scale decomposition enhanced residual convolutional neural network
CN111449644A (en) Bioelectricity signal classification method based on time-frequency transformation and data enhancement technology
CN112932498B (en) T waveform state classification system with generalization capability based on deep learning
CN109645983A (en) A kind of uneven beat classification method based on multimode neural network
CN109948396B (en) Heart beat classification method, heart beat classification device and electronic equipment
CN110693489A (en) Myocardial infarction detection method based on multi-classifier reinforcement learning
CN111202512A (en) Electrocardiogram classification method and device based on wavelet transformation and DCNN
CN113180685B (en) Electrocardio abnormity discrimination system and method based on morphological filtering and wavelet threshold
CN116361688A (en) Multi-mode feature fusion model construction method for automatic classification of electrocardiographic rhythms
JP7487965B2 (en) Prediction method of electrocardiogram heart rate multi-type based on graph convolution
CN114469124A (en) Method for identifying abnormal electrocardiosignals in motion process
CN111419213A (en) ECG electrocardiosignal generation method based on deep learning
Antczak A generative adversarial approach to ECG synthesis and denoising
CN117582235A (en) Electrocardiosignal classification method based on CNN-LSTM model
Khandait et al. ECG signal processing using classifier to analyses cardiovascular disease
Qin et al. Multi-classification of cardiac diseases utilizing wavelet thresholding and support vector machine
CN113837139A (en) Chaotic neural network with complex value weight and application thereof in electrocardiogram classification
Murthy et al. Design and implementation of hybrid techniques and DA-based reconfigurable FIR filter design for noise removal in EEG signals on FPGA
Manimegalai et al. Comparison on denoising of electro cardiogram signal using deep learning techniques

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210903

CF01 Termination of patent right due to non-payment of annual fee