CN111956208B - ECG signal classification method based on ultra-lightweight convolutional neural network - Google Patents


Info

Publication number: CN111956208B (application CN202010875217.2A)
Authority: CN (China)
Prior art keywords: layer, dscemp, full, convolutional, classification model
Legal status: Active
Application number: CN202010875217.2A
Other languages: Chinese (zh)
Other versions: CN111956208A
Inventors: 周军, 肖剑彪
Current Assignee: University of Electronic Science and Technology of China
Original Assignee: University of Electronic Science and Technology of China
Application filed by University of Electronic Science and Technology of China
Priority to CN202010875217.2A
Publication of CN111956208A
Application granted; publication of CN111956208B

Abstract

The invention discloses an ECG signal classification method based on an ultra-lightweight convolutional neural network, comprising the following steps: S1, acquiring an ECG data set and preparing a training set and a verification set; S2, training and verifying the ECG signal classification model with the training set and verification set to obtain an optimal ECG signal classification model; S3, collecting ECG heartbeat signals in real time and preprocessing them to obtain multiple segments of ECG data; S4, sequentially inputting the segments of ECG data into the optimal ECG signal classification model to obtain the classification results. The invention addresses the huge parameter count of conventional fully-connected layers and the huge computation of conventional convolutional layers in the CNNs used by end-to-end detection algorithms, reducing the computational complexity of the algorithm by orders of magnitude to a level suitable for devices with extremely limited storage resources.

Description

ECG signal classification method based on ultra-lightweight convolutional neural network
Technical Field
The invention relates to the field of ECG signal classification, in particular to an ECG signal classification method based on an ultra-lightweight convolutional neural network.
Background
An Electrocardiogram (ECG) records the electrical activity of the heart and is commonly used clinically to diagnose conditions associated with cardiac arrhythmia. Conventional arrhythmia detection uses a cumbersome hospital electrocardiograph to acquire a patient's short-term ECG signal, which is then visually diagnosed by a cardiologist. However, because arrhythmias occur intermittently, especially in the early stages of the problem, they are difficult to detect within a short time window, so the best opportunity to treat a cardiac patient may be missed. Long-term ECG monitoring with real-time arrhythmia detection capability is necessary to detect potential problems early.
Long-term, practical ECG monitoring requires an economical and portable device, and in recent years wearable ECG monitoring devices have been proposed and studied as an economical and effective solution. A wearable ECG monitoring device with automated detection capability can provide real-time cardiac health advice to the user while recording abnormal ECG signals. Real-time arrhythmia detection matters to potential heart-disease patients: detecting true pre-morbid signs from the ECG signal means the patient has more time to consult a physician or receive treatment. However, the resources of such an edge device are extremely limited, and for a long-term real-time monitoring task, relying on the cloud for the core service is often unsatisfactory in practice, so the requirement must be met through edge computing. Wearable ECG monitoring devices therefore share some basic characteristics: low cost, high energy efficiency, and real-time performance, which require the core detection algorithm to be both sufficiently lightweight and accurate.
There are two main approaches to arrhythmia detection. The first is feature-based classification: features are extracted during data preprocessing and then sent to a classifier. The method splits into two mutually independent parts, a feature extractor and a classifier. This structure means the overall performance depends largely on the quality and number of the extracted features, which leads to excessive reliance on feature engineering; that requires substantial manual effort to reach good algorithm performance and is limited by the level of human knowledge and experience. Moreover, improving detection accuracy by extracting more features significantly raises the algorithm's computational complexity and implementation power consumption, which works against the goal of a high-energy-efficiency device. The second approach sends the raw signal directly to a neural network for classification and is therefore called end-to-end classification. The advantage of using a neural network as the classifier is that, given only training samples, it automatically learns to extract features and synthesize the feature maps into a classification result. Because feature extraction is learned by the training algorithm, this method avoids the limits of human knowledge and experience and can achieve higher accuracy.
However, the main disadvantage of neural networks is their computational and spatial complexity: such methods are difficult to apply directly on existing hardware platforms suitable for portable devices.
To sum up, current wearable ECG monitoring devices face two main dilemmas: 1) traditional arrhythmia detection algorithms are difficult to design, and pursuing classification performance generally leads to excessive computational complexity (as in feature-based classification); the newer end-to-end detection methods overcome this design difficulty and perform better, but are still limited by enormous computational and parameter complexity; 2) to meet real-time requirements, developers either adopt high-end platforms such as a Field Programmable Gate Array (FPGA) combined with a Digital Signal Processor (DSP), which results in high product cost and a high power-consumption level, or, to further pursue energy efficiency, design a dedicated chip for the detection algorithm as an Application Specific Integrated Circuit (ASIC); but ASICs have high development cost, a high entry threshold, and a long development cycle, and are only suitable for large companies and high-end markets.
Besides these, the most common solution in the portable device market is based on a Microcontroller Unit (MCU). Unlike FPGAs and DSPs, a low-power MCU has a very low power-consumption level and very low device cost; compared with an ASIC, its advantages are very low development cost and threshold and a short development cycle. However, existing end-to-end detection algorithms are difficult to implement completely on this platform because its resources are extremely limited and poorly customizable.
Disclosure of Invention
To address the defects in the prior art, the ECG signal classification method based on an ultra-lightweight convolutional neural network solves the problems of the huge parameter count of conventional fully-connected layers and the huge computation of conventional convolutional layers in the CNNs used by end-to-end detection algorithms, and reduces the computational complexity of the algorithm by orders of magnitude to a level suitable for devices with extremely limited storage resources.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: an ECG signal classification method based on an ultra-lightweight convolutional neural network comprises the following steps:
S1, acquiring an ECG data set and preparing a training set and a verification set;
S2, training and verifying the ECG signal classification model with the training set and verification set to obtain an optimal ECG signal classification model;
S3, collecting ECG heartbeat signals in real time and preprocessing them to obtain multiple segments of ECG data;
S4, sequentially inputting the multiple segments of ECG data into the optimal ECG signal classification model to obtain the classification results of the ECG data.
Further, the ECG signal classification model trained in step S2 includes: a first binary classification model and a first five-class classification model;
the first binary model comprises: the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer and the first softmax layer;
the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer and the first softmax layer are sequentially connected; the input end of the first DSCEMP convolutional layer is used as the input end of a first binary classification model; an output end of the first softmax layer is used as an output end of a first binary classification model;
the first five-class classification model includes: a third DSCEMP convolutional layer, a fourth DSCEMP convolutional layer, a first LDDCP full-connection layer, a second full-connection layer and a second softmax layer;
the third DSCEMP convolutional layer, the fourth DSCEMP convolutional layer, the first LDDCP full-connection layer, the second full-connection layer and the second softmax layer are sequentially connected; the input end of the third DSCEMP convolutional layer serves as the input end of the first five-class classification model, and the output end of the second softmax layer serves as the output end of the first five-class classification model;
the training and verifying process of step S2 is:
the training and verification process of the first and second classification models is as follows:
A1, processing the training set with a sample-balancing strategy to obtain a class-balanced training set in which each class is sampled with similar probability;
A2, inputting the class-balanced training set into the first binary classification model and training it through a back-propagation training algorithm to obtain the trained first binary classification model, wherein the loss function of the back-propagation training algorithm is calculated as:
J(ω) = −(1/K) · Σ_{k=1}^{K} [ M · y(k) · log φ(k) + (1 − y(k)) · log(1 − φ(k)) ]
wherein J(ω) is the loss function, K is the total number of samples in the class-balanced training set, y(k) is the label value of the k-th sample, φ(k) is the prediction result for the k-th sample, and M is the bias factor that up-weights the loss terms of minority-class (abnormal) samples;
A3, verifying the trained first binary classification model with the verification set and retaining the optimal first binary classification model;
the training and verification process of the first five classification models is as follows:
a4, inputting training sets with similar probabilities into a first five-classification model, and training the first five-classification model to obtain a trained first five-classification model;
a5, screening a verification set with abnormal heart rate from the verification set, inputting the verification set with abnormal heart rate into the trained first five classification models, verifying the first five classification models, and reserving the optimal first five classification models;
A6, removing the softmax layers of the optimal first binary classification model and the optimal first five-class classification model, and splicing the two models together to obtain the optimal ECG signal classification model;
the beneficial effects of the above further scheme are: the probability that a few samples and a plurality of samples are sent to a network for training is similar through a sample balancing strategy, so that the influence of different types of samples on the model training process is balanced; and the loss function used by the back propagation training algorithm is modified, and the loss terms corresponding to a few sample classes are multiplied by a certain weight, so that the attention of the model to the abnormal samples is improved.
Further, the structure of the optimal ECG signal classification model in step S2 is: the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer, the third DSCEMP convolutional layer, the fourth DSCEMP convolutional layer, the first LDDCP full-connection layer and the second full-connection layer are sequentially connected; the input end of the first DSCEMP convolutional layer serves as the input end of the optimal ECG signal classification model; the output end of the second full-connection layer serves as the output end of the optimal ECG signal classification model.
Further, the first and third DSCEMP convolutional layers each comprise: a first depth convolution layer, a first embedded maximum pooling layer, and a first point-by-point convolution layer;
the first depth convolutional layer, the first embedded maximum pooling layer and the first point-by-point convolutional layer are sequentially connected; the input end of the first depth convolutional layer serves as the input end of the first DSCEMP convolutional layer or the third DSCEMP convolutional layer; the output end of the first point-by-point convolutional layer serves as the output end of the first DSCEMP convolutional layer or the third DSCEMP convolutional layer;
the second and fourth DSCEMP convolutional layers each comprise: a second depth convolution layer, a second embedded maximum pooling layer, and a second point-by-point convolution layer;
the second deep convolutional layer, the second embedded maximum pooling layer and the second point-by-point convolutional layer are sequentially connected; an input end of the second deep convolutional layer serves as an input end of a second DSCEMP convolutional layer or a fourth DSCEMP convolutional layer; the output end of the second point-by-point convolutional layer is used as the output end of the second DSCEMP convolutional layer or the fourth DSCEMP convolutional layer;
the number of channels of the first DSCEMP convolutional layer is half of the number of channels of the third DSCEMP convolutional layer; the number of channels of the second DSCEMP convolutional layer is half of the number of channels of the fourth DSCEMP convolutional layer;
the convolution kernel of the first depth convolutional layer is a PADC convolution kernel of size 15 × 1, where 15 is the kernel length and 1 is the kernel width;
the expansion (dilation) ratio of the PADC convolution kernel of the first depth convolutional layer is 4 × 1, where 4 is the expansion ratio along the kernel length and 1 is the expansion ratio along the kernel width;
the pooling size of the first embedded maximum pooling layer is 4 × 1, where 4 is the pooling window and stride along the length of the input feature map and 1 is the pooling window and stride along its width;
the convolution kernel of the second depth convolutional layer is a PADC convolution kernel of size 9 × 1, where 9 is the kernel length and 1 is the kernel width;
the expansion (dilation) ratio of the PADC convolution kernel of the second depth convolutional layer is 3 × 1, where 3 is the expansion ratio along the kernel length and 1 is the expansion ratio along the kernel width;
the pooling size of the second embedded maximum pooling layer is 2 × 1, where 2 is the pooling window and stride along the length of the input feature map and 1 is the pooling window and stride along its width.
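The hyperparameters above determine the receptive field of each dilated (PADC) kernel and how quickly the feature map shrinks. A small sanity check of that arithmetic follows; the 'same'-padding assumption, under which only the pooling layers shrink the map, is ours and is not stated in the patent.

```python
def effective_kernel(k, d):
    """Effective receptive field of a kernel of size k with dilation rate d."""
    return (k - 1) * d + 1

def pooled_length(n, s):
    """Output length of non-overlapping pooling with window and stride s."""
    return n // s

# First DSCEMP stage: 15-tap kernel, dilation 4 -> spans 57 input samples
assert effective_kernel(15, 4) == 57
# Second DSCEMP stage: 9-tap kernel, dilation 3 -> spans 25 input samples
assert effective_kernel(9, 3) == 25

# Length of a 400-sample heartbeat after the two embedded poolings (4 then 2),
# assuming 'same' padding so that only the pooling layers shrink the map
print(pooled_length(pooled_length(400, 4), 2))  # 50
```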
Further, the processing of the class-balanced training set by the first DSCEMP convolutional layer during training comprises the following steps:
B1, applying the first depth convolutional layer, with a separate convolution kernel for each single channel of each original input, to perform spatial perception and obtain an original feature map;
B2, down-sampling the original feature map with the first embedded maximum pooling layer to obtain a salient feature map:
F'_MP(i, l) = max{ F_DC(j, l) : i·s ≤ j < (i + 1)·s }, i = 0, 1, …, ⌊N / s⌋ − 1
wherein F'_MP is the salient feature map output by the first embedded maximum pooling layer, F_DC is the original feature map of length N, s is the pooling window length (and stride) of the first embedded maximum pooling layer, and l indexes the channels of the first embedded maximum pooling layer;
B3, performing point-by-point convolution on the salient feature map with the first point-by-point convolutional layer, and projecting the output channels onto a higher-dimensional output tensor to obtain the feature map output by the first DSCEMP convolutional layer.
The beneficial effects of the above further scheme are: the depth convolution and point-by-point convolution effectively reduce the amount of computation, and adding the embedded maximum pooling layer to down-sample the original feature map further relieves computational pressure by retaining the salient features without losing too much feature information.
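A minimal NumPy sketch of the DSCEMP processing in steps B1 to B3: a depthwise (per-channel) convolution, max pooling embedded before the channel-mixing step, then a pointwise (1 × 1) convolution. The shapes and the 'valid' convolution mode are illustrative assumptions rather than the patent's exact configuration.

```python
import numpy as np

def dscemp(x, depth_kernels, point_weights, pool=4):
    """x: (length, channels); depth_kernels: (k, channels);
    point_weights: (channels_in, channels_out)."""
    n, c = x.shape
    # B1: depthwise convolution -- one kernel per input channel ('valid' mode)
    conv = np.stack(
        [np.convolve(x[:, ch], depth_kernels[:, ch], mode="valid")
         for ch in range(c)],
        axis=1,
    )
    # B2: embedded max pooling -- keep the salient value in each window
    m = conv.shape[0] // pool
    pooled = conv[: m * pool].reshape(m, pool, c).max(axis=1)
    # B3: pointwise (1x1) convolution -- project channels onto the output tensor
    return pooled @ point_weights
```

Pooling before the pointwise step is what distinguishes DSCEMP from a plain depthwise separable convolution: the pointwise matrix multiply then runs on a map that is already `pool` times shorter.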
Further, the first LDSCP full-link layer comprises: a first projection layer, a first global average pooling layer, and a first L2-normalization layer;
the first projection layer, the first global average pooling layer and the first L2-normalization layer are connected in sequence;
the input end of the first projection layer is used as the input end of the first LDSCP full-connection layer;
the output end of the first L2-normalization layer serves as the output end of the first LDSCP full-connection layer;
the first LDDCP full link layer includes: a second projection layer, a first activation layer, a first weighted global averaging pooling layer WGAP, a first global maximum pooling layer GMP, a second L2-normalization layer and a third L2-normalization layer;
the input end of the second projection layer is used as the input end of the first LDDCP full-connection layer, and the output end of the second projection layer is connected with the input end of the first activation layer;
the output end of the first active layer is respectively connected with the input end of a first weighted global average pooling layer WGAP and the input end of a first global maximum pooling layer GMP;
the output end of the first weighted global average pooling layer WGAP is connected with the input end of a second L2-normalization layer;
the output of the first global max pooling layer GMP is connected to the input of a third L2-normalization layer;
the output end of the second L2-normalization layer and the output end of the third L2-normalization layer together serve as the output end of the first LDDCP full-connection layer.
Further, the processing of the feature map extracted by the fourth DSCEMP convolutional layer by the first LDDCP full-connection layer during training comprises the following steps:
C1, performing inter-channel association on the feature map through the second projection layer and projecting the feature vectors onto a higher-dimensional tensor to obtain a high-dimensional projected feature map;
C2, processing the high-dimensional projected feature map through the activation function of the first activation layer to obtain a nonlinear feature map;
C3, performing weighted aggregation over the global scope on each feature vector of the nonlinear feature map through the first weighted global average pooling layer WGAP to obtain a first feature vector:
Out_i = Σ_{j=1}^{n} w_ij · f_ij, i = 1, 2, …, m
wherein Out_i is the i-th output value of the first feature vector, n is the size (length) of the nonlinear feature map, m is the number of channels of the nonlinear feature map, f_ij is the input feature value at row i and column j of the m × n matrix formed by the nonlinear feature map, and w_ij is the weight corresponding to f_ij;
C4, performing global aggregation on each feature vector of the nonlinear feature map through the first global maximum pooling layer GMP to obtain a second feature vector;
C5, normalizing the first feature vector with the L2 norm of the second L2-normalization layer to obtain a normalized first feature vector;
C6, normalizing the second feature vector with the L2 norm of the third L2-normalization layer to obtain a normalized second feature vector;
and C7, splicing and fusing the normalized first feature vector and the normalized second feature vector to obtain the feature vector output by the first LDDCP full-link layer.
The beneficial effects of the above further scheme are: the second projection layer, built from 1 × 1 convolution kernels, first associates the channels of the feature map and projects the feature vectors onto a higher-dimensional tensor; the activation function then improves the nonlinear fitting capability. The first weighted global average pooling layer WGAP performs weighted aggregation of each feature vector over the global scope, reducing the number of parameters, while the first global maximum pooling layer GMP performs global aggregation that retains the most salient feature value in each channel, preserving the richness of the features. The results of the weighted aggregation and the global aggregation are normalized to avoid bias caused by their distribution difference, and the normalized feature vectors are fused into one feature vector as the output of the first LDDCP full-connection layer, which reduces the number of parameters while preserving the feature richness of the output.
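The C1-to-C7 pipeline can be sketched in NumPy as follows: a 1 × 1 projection, a nonlinearity, WGAP and GMP in parallel, L2 normalization of each branch, then splicing. The ReLU choice and all shapes are assumptions for illustration; the patent specifies only "an activation function".

```python
import numpy as np

def l2_normalize(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def lddcp(feat, proj, wgap_weights):
    """feat: (n, c_in) feature map; proj: (c_in, c_out) 1x1 projection;
    wgap_weights: (n, c_out) per-position weights for WGAP."""
    # C1: inter-channel association via 1x1 projection to a higher dimension
    h = feat @ proj
    # C2: nonlinearity (ReLU assumed; the patent only names an activation)
    h = np.maximum(h, 0.0)
    # C3: weighted global average pooling -- one weighted sum per channel
    wgap = (h * wgap_weights).sum(axis=0)
    # C4: global max pooling -- keep the most salient value per channel
    gmp = h.max(axis=0)
    # C5/C6: L2-normalize each pooled vector to remove distribution bias
    # C7: splice the two normalized vectors into one output vector
    return np.concatenate([l2_normalize(wgap), l2_normalize(gmp)])
```

Replacing a dense weight matrix over the whole flattened feature map with two pooled vectors is what makes this "lightweight": the parameter count scales with the channel count, not with the map length.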
Further, in a second embodiment, the ECG signal classification model trained in step S2 includes: a second binary classification model and a second five-class classification model;
the second binary classification model includes: a fifth DSCEMP convolutional layer, a sixth DSCEMP convolutional layer, a second LDSCP full-connection layer, a third full-connection layer and a third softmax layer;
the fifth DSCEMP convolutional layer, the sixth DSCEMP convolutional layer, the second LDSCP full-connection layer, the third full-connection layer and the third softmax layer are sequentially connected; the input end of the fifth DSCEMP convolutional layer serves as the input end of the second binary classification model; the output end of the third softmax layer serves as the output end of the second binary classification model;
the second five-class classification model comprises: a seventh DSCEMP convolutional layer, an eighth DSCEMP convolutional layer, a second LDDCP full-connection layer, a fourth full-connection layer and a fourth softmax layer;
the seventh DSCEMP convolutional layer, the eighth DSCEMP convolutional layer, the second LDDCP full-connection layer, the fourth full-connection layer and the fourth softmax layer are sequentially connected; the input end of the seventh DSCEMP convolutional layer serves as the input end of the second five-class classification model; and the output end of the fourth softmax layer serves as the output end of the second five-class classification model.
Further, the corresponding optimal ECG signal classification model includes: a fifth DSCEMP convolutional layer, a sixth DSCEMP convolutional layer, a second LDSCP full-connection layer, a third full-connection layer, a seventh DSCEMP convolutional layer, an eighth DSCEMP convolutional layer, a second LDDCP full-connection layer and a fourth full-connection layer;
the input end of the fifth DSCEMP convolutional layer is used as the input end of the optimal ECG signal classification model, and the output end of the fifth DSCEMP convolutional layer is connected with the input end of the sixth DSCEMP convolutional layer;
the output end of the sixth DSCEMP convolutional layer is respectively connected with the input end of the second LDSCP full-connection layer, the input end of the second LDDCP full-connection layer and the output end of the eighth DSCEMP convolutional layer;
the output end of the second LDSCP full-connection layer is connected with the input end of a third full-connection layer;
the output end of the third full connection layer is connected with the input end of a seventh DSCEMP convolutional layer;
the output end of the seventh DSCEMP convolutional layer is connected with the input end of the eighth DSCEMP convolutional layer;
the output end of the second LDDCP full-connection layer is connected with the input end of a fourth full-connection layer;
the output end of the fourth full connection layer is used as the output end of the optimal ECG signal classification model;
the second LDSCP full-link layer comprises: a third projection layer, a second global average pooling layer, and a fourth L2-normalization layer; the third projection layer, the second global average pooling layer and the fourth L2-normalization layer are connected in sequence; the input end of the third projection layer is used as the input end of the second LDSCP full-connection layer; the output end of the fourth L2-standardized layer is used as the output end of the second LDSCP full-connection layer;
the second LDDCP full connectivity layer comprises: a fourth projection layer, a second activation layer, a second weighted global average pooling layer WGAP, a second global maximum pooling layer GMP, a fifth L2-normalization layer and a sixth L2-normalization layer;
the input end of the fourth projection layer is used as the input end of the second LDDCP full-connection layer, and the output end of the fourth projection layer is connected with the input end of the second activation layer; the output end of the second active layer is respectively connected with the input end of a second weighted global average pooling layer WGAP and the input end of a second global maximum pooling layer GMP; the output end of the second weighted global average pooling layer WGAP is connected with the input end of a fifth L2-normalization layer; an output of the second global maximum pooling layer GMP is connected to an input of a sixth L2-normalization layer; and the output end of the fifth L2-standardized layer and the output end of the sixth L2-standardized layer are used as the output end of the second LDDCP full-connection layer.
The beneficial effects of the above further scheme are: every input ECG heartbeat signal first passes through the second binary classification model, and the output feature map of its sixth DSCEMP convolutional layer is retained. When a heartbeat is judged normal, the classification result is output directly; otherwise, the heartbeat is sent into the second five-class classification model, the output feature map of the eighth DSCEMP convolutional layer is spliced with the previously retained output feature map of the sixth DSCEMP convolutional layer, and the result is sent into the subsequent structure to produce the final five-class result. In this way, the second five-class classification model, once activated, saves about half of its convolutional-layer computation.
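The cascade described above reduces to simple control flow: run the binary screen on every beat, keep its convolutional feature map, and invoke the five-class stage only for abnormal beats. The function names and return shapes below are hypothetical placeholders, not the patent's API:

```python
def classify_beat(beat, binary_model, five_class_model):
    """Two-stage cascade: binary screen, then five-class refinement.

    binary_model(beat) -> (label, conv_features)        # hypothetical API
    five_class_model(beat, reused_features) -> label    # hypothetical API
    """
    label, conv_features = binary_model(beat)  # always runs
    if label == "normal":
        return label                           # five-class stage stays off
    # Abnormal beat: run the five-class model; splicing its convolutional
    # output with conv_features is what saves roughly half the conv work.
    return five_class_model(beat, conv_features)
```

Since most beats in long-term monitoring are normal, the expensive five-class stage is activated only rarely, which is where the power saving comes from.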
Further, step S3 includes the following steps:
S31, filtering the ECG data with a band-pass filter to obtain filtered ECG data;
S32, performing R-peak detection on the filtered ECG data with the Pan-Tompkins algorithm to locate the R peak in each cardiac cycle waveform;
S33, segmenting the filtered ECG data according to the R-peak position in each cardiac cycle waveform to obtain multiple segments of ECG data.
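Step S31 can be sketched with a standard zero-phase Butterworth band-pass. The 0.5–40 Hz cut-offs follow the embodiment described later; the filter order and the use of SciPy's `butter`/`filtfilt` are our assumptions, not the patent's specification.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(ecg, fs, low=0.5, high=40.0, order=3):
    """Zero-phase Butterworth band-pass: removes baseline wander (< low Hz)
    and power-line / EMG noise (> high Hz). fs is the sampling rate in Hz."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, ecg)  # forward-backward pass, no phase distortion
```

`filtfilt` is used rather than a causal filter so the R-peak positions are not shifted by filter delay; a real-time device would instead use a causal filter and compensate the delay.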
In conclusion, the beneficial effects of the invention are as follows: the invention designs two-stage ECG signal classification models, in which the binary classification model screens for abnormal data and the five-class classification model then classifies the abnormal data precisely to obtain its classification result; the preliminary screening by the binary model reduces the activation frequency of the five-class model, thereby reducing the actual power consumption of the algorithm.
Drawings
FIG. 1 is a flow chart of a method for classifying ECG signals based on ultra lightweight convolutional neural networks;
FIG. 2 is a block diagram of the first binary classification model;
FIG. 3 is a block diagram of the first five-class classification model;
FIG. 4 is a detailed block diagram of the optimal ECG signal classification model in embodiment 1;
FIG. 5 is a detailed block diagram of the optimal ECG signal classification model in embodiment 2;
FIG. 6 is a schematic diagram of a conventional convolutional layer without an expansion ratio;
FIG. 7 is a schematic diagram of a convolutional layer of the present invention with the expansion ratio set;
FIG. 8 is a schematic structural diagram of a conventional convolutional layer;
FIG. 9 is a schematic diagram of a DSCEMP convolutional layer structure;
FIG. 10 is a schematic structural diagram of a conventional fully-connected layer;
fig. 11 is a schematic structural diagram of an LDDCP full link layer.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and everything produced using the inventive concept is protected.
The abbreviations used below are explained as follows:
LDDCP: layer-decomposition-based dual-channel pooling;
LDSCP: layer-decomposition-based single-channel pooling;
DSCEMP: depthwise separable convolution with embedded maximum pooling;
PADC: pooling-aware dilated convolution.
As shown in fig. 1, an ECG signal classification method based on an ultra-lightweight convolutional neural network includes the following steps:
S1, acquiring an ECG data set and dividing it into a training set and a verification set;
s2, training and verifying the ECG signal classification model by adopting a training set and a verification set to obtain an optimal ECG signal classification model;
s3, collecting ECG heartbeat signals in real time, and preprocessing the ECG heartbeat signals to obtain a plurality of sections of ECG data;
step S3 includes the following steps:
s31, filtering the ECG data by adopting a band-pass filter to obtain filtered ECG data;
in this embodiment: a digital band-pass filter with the cut-off frequency of 0.5Hz and 40Hz is used as a denoising module to remove baseline shift, power frequency interference and myoelectric interference noise which are possibly contained in ECG data;
s32, performing R peak detection on the filtered ECG data by adopting a Pan-Tompkins algorithm, and positioning the position of an R peak in each cardiac cycle waveform;
and S33, segmenting the filtered ECG data according to the R peak position in each cardiac cycle waveform to obtain a plurality of segments of ECG data.
In this embodiment: heartbeat segmentation is performed according to the R-peak position; the waveform of each heartbeat consists of the 133 sampling points to the left of the R peak, the 266 sampling points to its right, and the sampling point at the R peak itself, 400 sampling points in total.
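As an illustration of the heartbeat segmentation of step S33, the following sketch cuts fixed-length windows around pre-located R peaks (the Pan-Tompkins detection itself is not shown); `segment_beats`, its arguments, and the toy signal are illustrative names and values, not part of the patent.

```python
# Sketch of heartbeat segmentation, assuming R-peak indices are already known
# (e.g. from a Pan-Tompkins detector). Window sizes follow the embodiment:
# 133 samples left of the R peak, the R-peak sample itself, and 266 samples
# to its right, i.e. 400 samples per beat.
LEFT, RIGHT = 133, 266  # samples kept on each side of the R peak

def segment_beats(signal, r_peaks, left=LEFT, right=RIGHT):
    """Cut one fixed-length window per R peak; skip peaks too close to the edges."""
    beats = []
    for r in r_peaks:
        if r - left >= 0 and r + right < len(signal):
            beats.append(signal[r - left : r + right + 1])
    return beats

ecg = [0.0] * 2000  # placeholder for a filtered ECG record
beats = segment_beats(ecg, r_peaks=[150, 600, 1990])
# The first two peaks fit inside the record; the last runs past its end.
```

Each retained window is exactly 400 samples long, matching the fixed input length the classification models expect.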
S4, sequentially inputting the multiple sections of ECG data into an optimal ECG signal classification model to obtain a classification result of the ECG data;
in this embodiment: the classification result comprises: n (normal or bundle branch block heart beat), S (supraventricular abnormal heart beat), V (ventricular abnormal heart beat), F (fusion heart beat), and Q (unclassified heart beat).
Embodiment 1: the ECG signal classification model trained in step S2 includes a first binary classification model and a first five-class classification model;
as shown in fig. 2, the first binary classification model includes: a first DSCEMP convolutional layer, a second DSCEMP convolutional layer, a first LDSCP full-connection layer, a first full-connection layer and a first softmax layer;
the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer and the first softmax layer are sequentially connected; the input end of the first DSCEMP convolutional layer serves as the input end of the first binary classification model, and the output end of the first softmax layer serves as the output end of the first binary classification model;
as shown in fig. 3, the first five-class classification model includes: a third DSCEMP convolutional layer, a fourth DSCEMP convolutional layer, a first LDDCP full-connection layer, a second full-connection layer and a second softmax layer;
the third DSCEMP convolutional layer, the fourth DSCEMP convolutional layer, the first LDDCP full-connection layer, the second full-connection layer and the second softmax layer are sequentially connected; the input end of the third DSCEMP convolutional layer serves as the input end of the first five-class classification model, and the output end of the second softmax layer serves as the output end of the first five-class classification model;
the training and verifying process of step S2 is:
the training and verification process of the first binary classification model is as follows:
A1, processing the training set with a sample-balancing strategy to obtain a class-balanced training set;
A2, inputting the class-balanced training set into the first binary classification model and training it through a back-propagation training algorithm to obtain the trained first binary classification model, wherein the loss function of the back-propagation training algorithm is computed as follows:
J(ω) = −(1/K) · Σ_{k=1}^{K} [ M·y(k)·log φ(k) + (1 − y(k))·log(1 − φ(k)) ]
wherein J(ω) is the loss function, K is the total number of samples in the class-balanced training set, y(k) is the label value of the kth sample, φ(k) is the prediction result for the kth sample, and M is a bias factor;
A3, verifying the trained first binary classification model on the verification set, using the F2 score as the performance index, and retaining the model with the highest F2 score as the optimal first binary classification model;
the training and verification process of the first five-class classification model is as follows:
A4, inputting the class-balanced training set into the first five-class classification model and training it to obtain the trained first five-class classification model;
A5, screening a verification subset with abnormal heart rate out of the verification set, inputting it into the trained first five-class classification model for verification, and retaining the optimal first five-class classification model;
A6, removing the softmax layers of the optimal first binary classification model and the optimal first five-class classification model, and splicing the two models thus obtained to get the optimal ECG signal classification model;
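The cascade behaviour obtained by splicing the two models in step A6 can be sketched as follows; the two `predict` callables are placeholders standing in for the trained networks, not the patent's actual models, and the toy lambdas exist only to make the sketch runnable.

```python
# Hedged sketch of the two-stage cascade: the binary model screens every beat,
# and the heavier five-class model runs only on beats flagged as abnormal,
# which lowers the average per-beat computation (and hence power).
def classify_beat(beat, binary_predict, five_class_predict):
    """Return 'N' directly for normal beats; otherwise refine with the 5-class model."""
    if binary_predict(beat) == "normal":
        return "N"
    return five_class_predict(beat)  # one of 'N', 'S', 'V', 'F', 'Q'

# Toy stand-ins: 'all-zero means normal' screener, constant five-class model.
labels = [classify_beat(b,
                        binary_predict=lambda x: "normal" if sum(x) == 0 else "abnormal",
                        five_class_predict=lambda x: "V")
          for b in ([0, 0, 0], [1, 2, 3])]
```

Only the second toy beat triggers the five-class stage, mirroring how preliminary screening reduces the five-class model's activation frequency.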
as shown in FIGS. 4-5, the optimal ECG signal classification model in step S2 has the following structure: the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer, the third DSCEMP convolutional layer, the fourth DSCEMP convolutional layer, the first LDDCP full-connection layer and the second full-connection layer are sequentially connected; the input end of the first DSCEMP convolutional layer is used as the input end of an optimal ECG signal classification model; the second fully connected output serves as the output of the optimized ECG signal classification model.
The first and third DSCEMP convolutional layers each comprise: a first depth convolution layer, a first embedded maximum pooling layer, and a first point-by-point convolution layer;
the first deep convolutional layer, the first embedded maximum pooling layer and the first point-by-point convolutional layer are sequentially connected; the input end of the first deep convolutional layer serves as the input end of the first DSCEMP convolutional layer or of the third DSCEMP convolutional layer, and the output end of the first point-by-point convolutional layer serves as the output end of the first DSCEMP convolutional layer or of the third DSCEMP convolutional layer;
the second and fourth DSCEMP convolutional layers each comprise: a second depth convolution layer, a second embedded maximum pooling layer, and a second point-by-point convolution layer;
the second deep convolutional layer, the second embedded maximum pooling layer and the second point-by-point convolutional layer are sequentially connected; an input end of the second deep convolutional layer serves as an input end of a second DSCEMP convolutional layer or a fourth DSCEMP convolutional layer; the output end of the second point-by-point convolutional layer is used as the output end of the second DSCEMP convolutional layer or the fourth DSCEMP convolutional layer;
the number of channels of the first DSCEMP convolutional layer is half of the number of channels of the third DSCEMP convolutional layer; the number of channels of the second DSCEMP convolutional layer is half of the number of channels of the fourth DSCEMP convolutional layer;
the convolution kernel of the first depth convolution layer is a PADC convolution kernel of size 15 × 1, where 15 is the length of the PADC convolution kernel and 1 is its width;
the dilation rate of the PADC convolution kernel of the first depth convolution layer is 4 × 1, where 4 is the dilation rate along the kernel length and 1 is the dilation rate along the kernel width;
the pooling size of the first embedded maximum pooling layer is 4 × 1, where 4 is the pooling window and stride along the length of the input feature map and 1 is the pooling window and stride along its width;
the convolution kernel of the second depth convolution layer is a PADC convolution kernel of size 9 × 1, where 9 is the length of the PADC convolution kernel and 1 is its width;
the dilation rate of the PADC convolution kernel of the second depth convolution layer is 3 × 1, where 3 is the dilation rate along the kernel length and 1 is the dilation rate along the kernel width;
the pooling size of the second embedded maximum pooling layer is 2 × 1, where 2 is the pooling window and stride along the length of the input feature map and 1 is the pooling window and stride along its width.
As shown in figs. 6 to 7: fig. 6 shows the conventional approach without a dilation rate, which enlarges the receptive field of an output neuron by cross-stacking convolutional and pooling layers but suffers from excessive receptive-field overlap. Fig. 7 shows the approach of the present invention, which dilates the convolution kernel by inserting holes so that the sensing regions of adjacent convolutional neurons within a certain range are staggered with respect to each other. This not only rapidly enlarges the receptive field of neurons in subsequent convolutional layers, but also reduces the degree of receptive-field overlap during layer-by-layer propagation; less overlap means fewer neurons per convolutional layer and a lower network computation load.
The reasoning behind the dilation rate of the PADC convolution kernel of the first depth convolution layer is as follows:
in the first depth convolution layer, the dilation rate of the convolution kernel is designed jointly with the subsequent embedded maximum pooling layer so as to minimize the receptive-field overlap of adjacent neurons within a certain range, and thereby maximize the receptive field of a single output neuron of that embedded maximum pooling layer.
To maximize the receptive field of the max-pooling neurons while keeping the region of the input signal onto which their outputs map continuous (avoiding unnecessary information loss during subsequent transfers), the dilation rate of the depth convolution layer should equal the pooling window size of the maximum pooling layer immediately following it.
The reasoning behind the dilation rate of the PADC convolution kernel of the second depth convolution layer is as follows:
for the second depth convolution layer the situation differs: its direct input is the output of the preceding DSCEMP convolutional layer, and those outputs already overlap in receptive field to a certain degree. The dilation rate can therefore be fine-tuned to a value slightly larger than the pooling window of the subsequent embedded pooling layer (but not much larger, which would degrade the classification accuracy of the neural network).
In conclusion, setting the dilation rate of the depth convolution kernel turns the dense network structure into a sparse one: the network contains fewer neurons, each with a wider receptive field, which reduces the computation load of the network and improves classification performance.
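The receptive-field arithmetic behind this design can be checked with the standard span formula for a dilated 1-D kernel; the helper below is illustrative, not part of the patent.

```python
def effective_extent(kernel_len, dilation):
    """Input samples spanned by a dilated 1-D convolution kernel: (k - 1) * d + 1."""
    return (kernel_len - 1) * dilation + 1

# First depth convolution layer: 15-tap kernel, dilation 4 (matching its
# 4-wide embedded pooling window), so one neuron spans 57 input samples.
span_layer1 = effective_extent(15, 4)
# Second depth convolution layer: 9-tap kernel, dilation 3 (slightly above
# its 2-wide pooling window), spanning 25 samples.
span_layer2 = effective_extent(9, 3)
```

With dilation 1 the same 15-tap kernel would span only 15 samples, so dilation widens the receptive field without adding any weights.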
During training, the first DSCEMP convolutional layer processes the class-balanced training set as follows:
B1, applying a distinct convolution kernel to each single channel of every input sample through the first depth convolution layer for spatial perception, obtaining an original feature map;
B2, down-sampling the original feature map through the first embedded maximum pooling layer to obtain a salient feature map:
F′_MP(i, c) = max_{j = 0, …, s−1} F_DC(s·i + j, c),  c = 1, 2, …, l
wherein F′_MP is the salient feature map output by the first embedded maximum pooling layer, F_DC is the original feature map, s is the pooling window length of the first embedded maximum pooling layer, and l is the number of channels of the first embedded maximum pooling layer;
B3, performing point-by-point convolution on the salient feature map through the first point-by-point convolution layer and projecting its output channels onto a high-dimensional output tensor, to obtain the feature map output by the first DSCEMP convolutional layer.
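The decomposition of steps B1-B3 can be sketched with toy pure-Python helpers; dilation is omitted for brevity, and all names, kernels and shapes here are illustrative, not the patent's 15 × 1 / 9 × 1 configuration.

```python
# Minimal single-example sketch of DSCEMP: a per-channel (depthwise) 1-D
# convolution, an embedded max pooling that shrinks the map *before* any
# channel mixing, and a 1x1 (pointwise) convolution that mixes channels.

def depthwise_conv1d(x, kernels):        # x: [channels][length], one kernel per channel
    out = []
    for ch, k in zip(x, kernels):
        n = len(ch) - len(k) + 1
        out.append([sum(ch[i + j] * k[j] for j in range(len(k))) for i in range(n)])
    return out

def max_pool1d(x, window):               # non-overlapping windows, stride == window
    return [[max(ch[i:i + window]) for i in range(0, len(ch) - window + 1, window)]
            for ch in x]

def pointwise_conv1d(x, weights):        # weights: [out_ch][in_ch], 1x1 mixing
    length = len(x[0])
    return [[sum(w[c] * x[c][i] for c in range(len(x))) for i in range(length)]
            for w in weights]

x = [[1.0, 2.0, 3.0, 4.0, 5.0, 6.0], [6.0, 5.0, 4.0, 3.0, 2.0, 1.0]]
y = pointwise_conv1d(max_pool1d(depthwise_conv1d(x, [[1.0, 1.0], [1.0, -1.0]]), 2),
                     [[1.0, 1.0]])
```

Because the max pooling is embedded between the depthwise and pointwise stages, the pointwise convolution runs on a feature map already shortened by the pooling stride, which is where the computation saving comes from.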
The first LDSCP full-connection layer includes: a first projection layer, a first global average pooling layer and a first L2-normalization layer;
the first projection layer, the first global average pooling layer and the first L2-normalization layer are sequentially connected;
the input end of the first projection layer is used as the input end of the first LDSCP full-connection layer;
the output end of the first L2-normalization layer is used as the output end of the first LDSCP full-connection layer;
the first LDDCP full-connection layer includes: a second projection layer, a first activation layer, a first weighted global average pooling layer WGAP, a first global maximum pooling layer GMP, a second L2-normalization layer and a third L2-normalization layer;
the input end of the second projection layer is used as the input end of the first LDDCP full-connection layer, and the output end of the second projection layer is connected with the input end of the first activation layer;
the output end of the first activation layer is connected with the input end of the first weighted global average pooling layer WGAP and with the input end of the first global maximum pooling layer GMP;
the output end of the first weighted global average pooling layer WGAP is connected with the input end of the second L2-normalization layer;
the output end of the first global maximum pooling layer GMP is connected with the input end of the third L2-normalization layer;
the output end of the second L2-normalization layer and the output end of the third L2-normalization layer together serve as the output end of the first LDDCP full-connection layer.
During training, the first LDDCP full-connection layer processes the feature map extracted by the fourth DSCEMP convolutional layer as follows:
C1, correlating the channels of the feature map through the second projection layer and projecting the feature vectors onto a high-dimensional tensor, to obtain a high-dimensionally projected feature map;
c2, processing the feature map of the high-dimensional projection through the activation function of the first activation layer to obtain a nonlinear feature map;
c3, performing weighted aggregation in the global scope on each feature vector of the nonlinear feature map through the first weighted global average pooling layer WGAP to obtain a first feature vector:
Out_i = Σ_{j=1}^{n} w_ij · f_ij,  i = 1, 2, …, m
wherein Out_i is the ith output value of the first feature vector, n is the size of the nonlinear feature map, m is the number of channels of the nonlinear feature map, f_ij is the input feature value in row i and column j of the m × n matrix formed by the nonlinear feature map, and w_ij is the weight corresponding to f_ij;
c4, performing global aggregation on each feature vector of the nonlinear feature map through the first global maximum pooling layer GMP to obtain a second feature vector;
c5, normalizing the first feature vector through the L2 norm of the second L2-normalization layer to obtain a normalized first feature vector;
c6, normalizing the second feature vector through the L2 norm of a third L2-normalization layer to obtain a normalized second feature vector;
and C7, splicing and fusing the normalized first feature vector and the normalized second feature vector to obtain the feature vector output by the first LDDCP full-link layer.
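Steps C3-C7 can be sketched as follows; the projection and activation of steps C1-C2 are assumed already applied, and the feature values and WGAP weights are toy numbers, not learned parameters from the patent.

```python
# Hedged sketch of the dual-channel pooling of the LDDCP layer: each channel
# of the feature map is reduced twice, once by a learnable weighted global
# average (WGAP) and once by a global max (GMP); each branch is L2-normalized
# and the two results are concatenated (step C7).
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def lddcp_pool(features, wgap_weights):
    # features, wgap_weights: [channels][length]
    wgap = [sum(w * f for w, f in zip(ws, fs))           # steps C3, C5
            for ws, fs in zip(wgap_weights, features)]
    gmp = [max(fs) for fs in features]                   # steps C4, C6
    return l2_normalize(wgap) + l2_normalize(gmp)        # step C7: concatenation

feats = [[1.0, 3.0], [2.0, 2.0]]
out = lddcp_pool(feats, wgap_weights=[[0.5, 0.5], [0.5, 0.5]])
```

The output length is twice the channel count: one L2-normalized value per channel from each of the WGAP and GMP branches.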
Embodiment 2: the ECG signal classification model trained in step S2 includes a second binary classification model and a second five-class classification model;
the second binary classification model includes: a fifth DSCEMP convolutional layer, a sixth DSCEMP convolutional layer, a second LDSCP full-connection layer, a third full-connection layer and a third softmax layer;
the fifth DSCEMP convolutional layer, the sixth DSCEMP convolutional layer, the second LDSCP full-connection layer, the third full-connection layer and the third softmax layer are sequentially connected; the input end of the fifth DSCEMP convolutional layer serves as the input end of the second binary classification model, and the output end of the third softmax layer serves as the output end of the second binary classification model;
the second five-class classification model includes: a seventh DSCEMP convolutional layer, an eighth DSCEMP convolutional layer, a second LDDCP full-connection layer, a fourth full-connection layer and a fourth softmax layer;
the seventh DSCEMP convolutional layer, the eighth DSCEMP convolutional layer, the second LDDCP full-connection layer, the fourth full-connection layer and the fourth softmax layer are sequentially connected; the input end of the seventh DSCEMP convolutional layer serves as the input end of the second five-class classification model, and the output end of the fourth softmax layer serves as the output end of the second five-class classification model.
As shown in fig. 5, the optimal ECG signal classification model in step D6 includes: a fifth DSCEMP convolutional layer, a sixth DSCEMP convolutional layer, a second LDSCP full-connection layer, a third full-connection layer, a seventh DSCEMP convolutional layer, an eighth DSCEMP convolutional layer, a second LDDCP full-connection layer and a fourth full-connection layer;
the input end of the fifth DSCEMP convolutional layer is used as the input end of the optimal ECG signal classification model, and the output end of the fifth DSCEMP convolutional layer is connected with the input end of the sixth DSCEMP convolutional layer;
the output end of the sixth DSCEMP convolutional layer is respectively connected with the input end of the second LDSCP full-connection layer, the input end of the second LDDCP full-connection layer and the output end of the eighth DSCEMP convolutional layer;
the output end of the second LDSCP full-connection layer is connected with the input end of a third full-connection layer;
the output end of the third full connection layer is connected with the input end of a seventh DSCEMP convolutional layer;
the output end of the seventh DSCEMP convolutional layer is connected with the input end of the eighth DSCEMP convolutional layer;
the output end of the second LDDCP full-connection layer is connected with the input end of a fourth full-connection layer;
the output end of the fourth full connection layer is used as the output end of the optimal ECG signal classification model;
the fifth and seventh DSCEMP convolutional layers have the same structure as the first DSCEMP convolutional layer of embodiment 1; the sixth and eighth DSCEMP convolutional layers have the same structure as the second DSCEMP convolutional layer of embodiment 1;
the structure common to the first through eighth DSCEMP convolutional layers is shown in fig. 9.
The second LDSCP full-connection layer includes: a third projection layer, a second global average pooling layer and a fourth L2-normalization layer; the third projection layer, the second global average pooling layer and the fourth L2-normalization layer are sequentially connected; the input end of the third projection layer is used as the input end of the second LDSCP full-connection layer, and the output end of the fourth L2-normalization layer is used as the output end of the second LDSCP full-connection layer;
the second LDDCP full-connection layer includes: a fourth projection layer, a second activation layer, a second weighted global average pooling layer WGAP, a second global maximum pooling layer GMP, a fifth L2-normalization layer and a sixth L2-normalization layer;
the input end of the fourth projection layer is used as the input end of the second LDDCP full-connection layer, and the output end of the fourth projection layer is connected with the input end of the second activation layer; the output end of the second activation layer is connected with the input end of the second weighted global average pooling layer WGAP and with the input end of the second global maximum pooling layer GMP; the output end of the second weighted global average pooling layer WGAP is connected with the input end of the fifth L2-normalization layer; the output end of the second global maximum pooling layer GMP is connected with the input end of the sixth L2-normalization layer; and the output end of the fifth L2-normalization layer and the output end of the sixth L2-normalization layer together serve as the output end of the second LDDCP full-connection layer.
Fig. 10 is a schematic structural diagram of a conventional fully-connected layer, and fig. 11 shows the detailed structure of the first and second LDDCP full-connection layers. Fig. 8 is a schematic structural diagram of a conventional convolutional layer, in which a pooling layer follows the convolutional layer, and fig. 9 is a schematic structural diagram of a DSCEMP convolutional layer.
In order to reduce the computation of the convolutional layer, the DSCEMP convolutional layer designed by the invention first decomposes a conventional convolutional layer into a depth convolution layer and a point-by-point convolution layer: the depth convolution layer convolves only along the spatial dimension, while the point-by-point convolution layer combines information across channels.
The characters in figs. 8 to 11 have the following meanings:
In fig. 8: input (m @ n × 1): m is the number of channels of the input feature map and n × 1 is its size; convolutional-layer output (h @ f × 1): h is the number of channels and f × 1 the size of the feature map output by the convolutional layer; max-pooling-layer output (h @ g × 1): h is the number of channels and g × 1 the size of the feature map output by the maximum pooling layer.
In fig. 9: input (m @ n × 1): m is the number of channels of the input feature map and n × 1 is its size; depth-convolution output (m @ f × 1): m is the number of channels and f × 1 the size of the feature map output by the depth convolution layer; embedded-max-pooling output (m @ g × 1): m is the number of channels and g × 1 the size of the feature map output by the embedded maximum pooling layer; point-by-point-convolution output (h @ g × 1): h is the number of channels and g × 1 the size of the feature map output by the point-by-point convolution layer; kernel (m @ k × 1): m is the number of output channels of the depth convolution layer and k × 1 the size of the depth convolution kernel; step (s × 1): s × 1 is the stride of the embedded maximum pooling layer; kernel (h @ 1 × 1): h is the number of output channels of the point-by-point convolution layer and 1 × 1 the size of the point-by-point convolution kernel.
In fig. 10: input (m @ n × 1): m is the number of channels of the input feature map and n × 1 is its size; kernel (k @ n × 1): k is the number of fully-connected-layer neurons, n × 1 the size of the input feature map each neuron connects to, and m the number of channels it connects to; output: k is the length of the output feature vector.
In fig. 11: input (m @ n × 1): m is the number of channels of the input feature map and n × 1 is its size; projection-layer output (k @ n × 1): k is the number of channels and n × 1 the size of the feature map output by the projection layer; kernel (k @ 1 × 1): k is the number of projection-layer convolution kernels, 1 × 1 their size, and m the number of input channels of the projection layer; kernel (k @ n × 1): k is the number of weighting kernels of the weighted global average pooling layer WGAP and n × 1 their size; p is the length of the feature vector output individually by the weighted global average pooling layer WGAP and by the global maximum pooling layer GMP.
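A rough parameter-count comparison suggested by the kernel shapes in figs. 10-11 can be worked through as follows; the dimensions are illustrative toy values, not sizes fixed by the patent.

```python
# A conventional fully-connected layer needs one weight per neuron per input
# value (k * m * n, per fig. 10), while the LDDCP decomposition only needs the
# 1x1 projection kernels (k * m) plus the WGAP weighting kernels (k * n),
# per fig. 11.
def fc_params(k, m, n):
    return k * m * n                 # k neurons, each over an m-channel, n-long map

def lddcp_params(k, m, n):
    return k * m + k * n             # projection (k @ 1x1 over m ch) + WGAP kernels (k @ nx1)

k, m, n = 5, 16, 100                 # assumed toy dimensions
ratio = fc_params(k, m, n) / lddcp_params(k, m, n)
```

For these toy dimensions the decomposition stores over ten times fewer weights, illustrating why the LDDCP layer suits devices with extremely limited storage.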

Claims (6)

1. An ECG signal classification method based on an ultra-lightweight convolutional neural network is characterized by comprising the following steps:
S1, acquiring an ECG data set and dividing it into a training set and a verification set;
s2, training and verifying the ECG signal classification model by adopting a training set and a verification set to obtain an optimal ECG signal classification model;
s3, collecting ECG heartbeat signals in real time, and preprocessing the ECG heartbeat signals to obtain a plurality of sections of ECG data;
s4, sequentially inputting the multiple sections of ECG data into an optimal ECG signal classification model to obtain a classification result of the ECG data;
the ECG signal classification model trained in step S2 includes a first binary classification model and a first five-class classification model;
the first binary classification model includes: a first DSCEMP convolutional layer, a second DSCEMP convolutional layer, a first LDSCP full-connection layer, a first full-connection layer and a first softmax layer;
the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer and the first softmax layer are sequentially connected; the input end of the first DSCEMP convolutional layer serves as the input end of the first binary classification model, and the output end of the first softmax layer serves as the output end of the first binary classification model;
the first five-class classification model includes: a third DSCEMP convolutional layer, a fourth DSCEMP convolutional layer, a first LDDCP full-connection layer, a second full-connection layer and a second softmax layer;
the third DSCEMP convolutional layer, the fourth DSCEMP convolutional layer, the first LDDCP full-connection layer, the second full-connection layer and the second softmax layer are sequentially connected; the input end of the third DSCEMP convolutional layer serves as the input end of the first five-class classification model, and the output end of the second softmax layer serves as the output end of the first five-class classification model;
the training and verifying process of step S2 is:
the training and verification process of the first binary classification model is as follows:
A1, processing the training set with a sample-balancing strategy to obtain a class-balanced training set;
A2, inputting the class-balanced training set into the first binary classification model and training it through a back-propagation training algorithm to obtain the trained first binary classification model, wherein the loss function of the back-propagation training algorithm is computed as follows:
J(ω) = −(1/K) · Σ_{k=1}^{K} [ M·y(k)·log φ(k) + (1 − y(k))·log(1 − φ(k)) ]
wherein J(ω) is the loss function, K is the total number of samples in the class-balanced training set, y(k) is the label value of the kth sample, φ(k) is the prediction result for the kth sample, and M is a bias factor;
A3, verifying the trained first binary classification model on the verification set and retaining the optimal first binary classification model;
the training and verification process of the first five-class classification model is as follows:
A4, inputting the class-balanced training set into the first five-class classification model and training it to obtain the trained first five-class classification model;
A5, screening a verification subset with abnormal heart rate out of the verification set, inputting it into the trained first five-class classification model for verification, and retaining the optimal first five-class classification model;
A6, removing the softmax layers of the optimal first binary classification model and the optimal first five-class classification model, and splicing the two models thus obtained to get the optimal ECG signal classification model;
the first and third DSCEMP convolutional layers each comprise: a first depth convolution layer, a first embedded maximum pooling layer, and a first point-by-point convolution layer;
the first deep convolutional layer, the first embedded maximum pooling layer and the first point-by-point convolutional layer are sequentially connected; the input end of the first deep convolutional layer serves as the input end of the first DSCEMP convolutional layer or of the third DSCEMP convolutional layer, and the output end of the first point-by-point convolutional layer serves as the output end of the first DSCEMP convolutional layer or of the third DSCEMP convolutional layer;
the second and fourth DSCEMP convolutional layers each comprise: a second depth convolution layer, a second embedded maximum pooling layer, and a second point-by-point convolution layer;
the second deep convolutional layer, the second embedded maximum pooling layer and the second point-by-point convolutional layer are sequentially connected; an input end of the second deep convolutional layer serves as an input end of a second DSCEMP convolutional layer or a fourth DSCEMP convolutional layer; the output end of the second point-by-point convolutional layer is used as the output end of the second DSCEMP convolutional layer or the fourth DSCEMP convolutional layer;
the number of channels of the first DSCEMP convolutional layer is half of the number of channels of the third DSCEMP convolutional layer; the number of channels of the second DSCEMP convolutional layer is half of the number of channels of the fourth DSCEMP convolutional layer;
the convolution kernel of the first depthwise convolutional layer is a PADC convolution kernel of size 15 × 1, where 15 is the length of the PADC convolution kernel and 1 is its width;
the dilation rate of the PADC convolution kernel of the first depthwise convolutional layer is 4 × 1, where 4 is the dilation rate along the length of the PADC convolution kernel and 1 is the dilation rate along its width;
the embedded max pooling of the first embedded max pooling layer is 4 × 1, where 4 is the size of the pooling window and stride along the length of the input feature map, and 1 is the size of the pooling window and stride along its width;
the convolution kernel of the second depthwise convolutional layer is a PADC convolution kernel of size 9 × 1, where 9 is the length of the PADC convolution kernel and 1 is its width;
the dilation rate of the PADC convolution kernel of the second depthwise convolutional layer is 3 × 1, where 3 is the dilation rate along the length of the PADC convolution kernel and 1 is the dilation rate along its width;
the embedded max pooling of the second embedded max pooling layer is 2 × 1, where 2 is the size of the pooling window and stride along the length of the input feature map, and 1 is the size of the pooling window and stride along its width;
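Put together, one DSCEMP block with the first set of parameters above (a 15 × 1 depthwise kernel dilated 4 × 1, followed by 4 × 1 embedded max pooling and a pointwise convolution) can be sketched in plain NumPy. This is an illustrative reconstruction, not the patented implementation: the "same" padding, the random weights, and the 4-to-8 channel counts are assumptions for demonstration.

```python
import numpy as np

def dscemp_block(x, dw_kernels, pw_weights, dilation, pool):
    """Sketch of one DSCEMP block: depthwise dilated convolution ->
    embedded max pooling -> pointwise convolution. x: (length, channels)."""
    L, C = x.shape
    k = dw_kernels.shape[0]            # dw_kernels: (k, C), one kernel per channel
    span = (k - 1) * dilation          # receptive-field span of the dilated kernel
    # depthwise dilated convolution ("same" padding assumed)
    xp = np.pad(x, ((span // 2, span - span // 2), (0, 0)))
    dw = np.zeros((L, C))
    for i in range(L):
        taps = xp[i : i + span + 1 : dilation, :]   # k dilated taps per position
        dw[i] = np.sum(taps * dw_kernels, axis=0)
    # embedded max pooling (window == stride == pool) before the pointwise stage
    Lp = L // pool
    mp = dw[: Lp * pool].reshape(Lp, pool, C).max(axis=1)
    # pointwise (1x1) convolution mixes channels; pw_weights: (C, C_out)
    return mp @ pw_weights

rng = np.random.default_rng(0)
x = rng.standard_normal((128, 4))                   # 4-channel input segment
out = dscemp_block(x, rng.standard_normal((15, 4)),
                   rng.standard_normal((4, 8)), dilation=4, pool=4)
print(out.shape)  # (32, 8): length shrunk 4x by pooling, channels 4 -> 8
```

Because the pooling sits between the depthwise and pointwise stages, the pointwise convolution operates on a feature map already shortened by the pooling factor.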
the structure of the optimal ECG signal classification model in step S2 is: the first DSCEMP convolutional layer, the second DSCEMP convolutional layer, the first LDSCP full-connection layer, the first full-connection layer, the third DSCEMP convolutional layer, the fourth DSCEMP convolutional layer, the first LDDCP full-connection layer, and the second full-connection layer are connected in sequence; the input end of the first DSCEMP convolutional layer serves as the input end of the optimal ECG signal classification model; the output end of the second full-connection layer serves as the output end of the optimal ECG signal classification model.
2. The ECG signal classification method based on an ultra-lightweight convolutional neural network according to claim 1, wherein the processing of the training set by the first DSCEMP convolutional layer during training comprises the following steps:
B1, applying, by the first depthwise convolutional layer, a separate convolution kernel to each single channel of every original image in the training set for spatial perception, to obtain an original feature map;
B2, down-sampling the original feature map by the first embedded max pooling layer to obtain a salient feature map:
F'_MP(i, l) = max{ F_DC(i·s + k, l) : k = 0, 1, …, s − 1 }

wherein F'_MP is the salient feature map output by the first embedded max pooling layer, F_DC is the original feature map, s is the pooling window length (and stride) of the first embedded max pooling layer, and l indexes the channels of the first embedded max pooling layer;
B3, performing pointwise convolution on the salient feature map by the first pointwise convolutional layer, and projecting the output channels of the pointwise-convolved salient feature map onto a higher-dimensional output tensor, to obtain the feature map output by the first DSCEMP convolutional layer.
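The point of embedding the max pooling between steps B1 and B3 is computational: the pointwise convolution of B3 runs on a feature map already shrunk by the pooling factor in B2. A back-of-the-envelope multiply-accumulate (MAC) count, under assumed illustrative sizes (a 128-sample map, 16 input and output channels, kernel length 15, pooling factor 4), shows the order-of-magnitude saving over a standard convolution:

```python
# assumed illustrative sizes, not values taken from the patent
n, k, cin, cout, pool = 128, 15, 16, 16, 4
standard = n * k * cin * cout                 # standard convolution MACs
depthwise = n * k * cin                       # B1: one kernel per channel
pointwise = (n // pool) * cin * cout          # B3: runs on the pooled (n/4) map
dscemp = depthwise + pointwise
print(standard, dscemp, round(standard / dscemp, 1))  # 491520 38912 12.6
```

Under these assumptions the DSCEMP arrangement needs roughly one twelfth of the MACs of a standard convolution of the same kernel size.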
3. The ECG signal classification method based on an ultra-lightweight convolutional neural network according to claim 1, wherein the first LDSCP full-connection layer comprises: a first projection layer, a first global average pooling layer, and a first L2-normalization layer;
the first projection layer, the first global average pooling layer, and the first L2-normalization layer are connected in sequence;
the input end of the first projection layer serves as the input end of the first LDSCP full-connection layer;
the output end of the first L2-normalization layer serves as the output end of the first LDSCP full-connection layer;
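Functionally, the LDSCP layer replaces a dense fully connected layer with three cheap stages: a pointwise projection, global average pooling, and L2 normalization. A minimal NumPy sketch, with the shapes and random weights assumed purely for illustration:

```python
import numpy as np

def ldscp(x, proj):
    """Sketch of the LDSCP full-connection replacement: projection layer ->
    global average pooling -> L2 normalization.
    x: (length, channels); proj: (channels, out_dim)."""
    h = x @ proj                          # projection layer (a 1x1 convolution)
    v = h.mean(axis=0)                    # global average pooling over length
    return v / np.linalg.norm(v)          # L2-normalization layer

rng = np.random.default_rng(1)
v = ldscp(rng.standard_normal((32, 8)), rng.standard_normal((8, 16)))
print(v.shape, round(float(np.linalg.norm(v)), 6))  # (16,) 1.0
```

The projection holds the only weights, so the parameter count is channels × out_dim instead of the length × channels × out_dim of a conventional full-connection layer.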
the first LDDCP full-connection layer comprises: a second projection layer, a first activation layer, a first weighted global average pooling layer (WGAP), a first global max pooling layer (GMP), a second L2-normalization layer, and a third L2-normalization layer;
the input end of the second projection layer serves as the input end of the first LDDCP full-connection layer, and the output end of the second projection layer is connected with the input end of the first activation layer;
the output end of the first activation layer is connected with the input end of the first weighted global average pooling layer (WGAP) and the input end of the first global max pooling layer (GMP), respectively;
the output end of the first weighted global average pooling layer (WGAP) is connected with the input end of the second L2-normalization layer;
the output end of the first global max pooling layer (GMP) is connected with the input end of the third L2-normalization layer;
the output end of the second L2-normalization layer and the output end of the third L2-normalization layer together serve as the output end of the first LDDCP full-connection layer.
4. The ECG signal classification method based on an ultra-lightweight convolutional neural network according to claim 3, wherein the processing, by the first LDDCP full-connection layer during training, of the feature map extracted by the fourth DSCEMP convolutional layer comprises the following steps:
C1, associating channels of the feature map through the second projection layer, and projecting the feature vectors onto a higher-dimensional tensor to obtain a high-dimensionally projected feature map;
C2, processing the high-dimensionally projected feature map through the activation function of the first activation layer to obtain a nonlinear feature map;
C3, performing globally weighted aggregation on each feature vector of the nonlinear feature map through the first weighted global average pooling layer (WGAP) to obtain a first feature vector:
Out_i = Σ_{j=1}^{n} w_ij · f_ij

wherein Out_i is the i-th output value in the first feature vector, n is the size of the nonlinear feature map, m is the number of channels of the nonlinear feature map, f_ij is the input feature value in row i and column j of the m × n feature-value matrix formed by the nonlinear feature map, and w_ij is the weight corresponding to f_ij;
C4, performing global aggregation on each feature vector of the nonlinear feature map through the first global max pooling layer (GMP) to obtain a second feature vector;
C5, normalizing the first feature vector by the L2 norm of the second L2-normalization layer to obtain a normalized first feature vector;
C6, normalizing the second feature vector by the L2 norm of the third L2-normalization layer to obtain a normalized second feature vector;
C7, concatenating and fusing the normalized first feature vector and the normalized second feature vector to obtain the feature vector output by the first LDDCP full-connection layer.
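Steps C1-C7 can be sketched end to end in NumPy. This is an illustrative reconstruction under stated assumptions: ReLU stands in for the unspecified activation function, and the shapes, random weights, and WGAP weight matrix are invented for demonstration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def lddcp(x, proj, w):
    """Sketch of the LDDCP layer (steps C1-C7): projection -> activation
    -> parallel WGAP and GMP branches -> per-branch L2 norm -> concatenation.
    x: (n, c) feature map; proj: (c, m); w: (m, n) learned WGAP weights."""
    f = relu(x @ proj)                    # C1-C2: projection + nonlinearity, (n, m)
    wgap = np.sum(w * f.T, axis=1)        # C3: Out_i = sum_j w_ij * f_ij, (m,)
    gmp = f.max(axis=0)                   # C4: global max pooling, (m,)
    wgap = wgap / np.linalg.norm(wgap)    # C5: L2-normalize first vector
    gmp = gmp / np.linalg.norm(gmp)       # C6: L2-normalize second vector
    return np.concatenate([wgap, gmp])    # C7: fused output, (2m,)

rng = np.random.default_rng(2)
out = lddcp(rng.standard_normal((32, 8)),
            rng.standard_normal((8, 16)), rng.standard_normal((16, 32)))
print(out.shape)  # (32,)
```

Because the fused vector concatenates two unit-norm halves, its overall L2 norm is √2 by construction, which keeps the scale of the dual descriptor stable regardless of the input.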
5. The ECG signal classification method based on an ultra-lightweight convolutional neural network according to claim 1, wherein the ECG signal classification model trained in step S2 comprises a second binary classification model and a second five-class classification model;
the second binary classification model comprises: a fifth DSCEMP convolutional layer, a sixth DSCEMP convolutional layer, a second LDSCP full-connection layer, a third full-connection layer, and a third softmax layer;
the fifth DSCEMP convolutional layer, the sixth DSCEMP convolutional layer, the second LDSCP full-connection layer, the third full-connection layer, and the third softmax layer are connected in sequence; the input end of the fifth DSCEMP convolutional layer serves as the input end of the second binary classification model; the output end of the third softmax layer serves as the output end of the second binary classification model;
the second five-class classification model comprises: a seventh DSCEMP convolutional layer, an eighth DSCEMP convolutional layer, a second LDDCP full-connection layer, a fourth full-connection layer, and a fourth softmax layer;
the seventh DSCEMP convolutional layer, the eighth DSCEMP convolutional layer, the second LDDCP full-connection layer, the fourth full-connection layer, and the fourth softmax layer are connected in sequence; the input end of the seventh DSCEMP convolutional layer serves as the input end of the second five-class classification model; the output end of the fourth softmax layer serves as the output end of the second five-class classification model;
the optimal ECG signal classification model in step S2 comprises: the fifth DSCEMP convolutional layer, the sixth DSCEMP convolutional layer, the second LDSCP full-connection layer, the third full-connection layer, the seventh DSCEMP convolutional layer, the eighth DSCEMP convolutional layer, the second LDDCP full-connection layer, and the fourth full-connection layer;
the input end of the fifth DSCEMP convolutional layer is used as the input end of the optimal ECG signal classification model, and the output end of the fifth DSCEMP convolutional layer is connected with the input end of the sixth DSCEMP convolutional layer;
the output end of the sixth DSCEMP convolutional layer is connected with the input end of the second LDSCP full-connection layer and with the input end of the second LDDCP full-connection layer; the input end of the second LDDCP full-connection layer is also connected with the output end of the eighth DSCEMP convolutional layer;
the output end of the second LDSCP full-connection layer is connected with the input end of the third full-connection layer;
the output end of the third full-connection layer is connected with the input end of the seventh DSCEMP convolutional layer;
the output end of the seventh DSCEMP convolutional layer is connected with the input end of the eighth DSCEMP convolutional layer;
the output end of the second LDDCP full-connection layer is connected with the input end of the fourth full-connection layer;
the output end of the fourth full-connection layer serves as the output end of the optimal ECG signal classification model;
the second LDSCP full-connection layer comprises: a third projection layer, a second global average pooling layer, and a fourth L2-normalization layer; the third projection layer, the second global average pooling layer, and the fourth L2-normalization layer are connected in sequence; the input end of the third projection layer serves as the input end of the second LDSCP full-connection layer; the output end of the fourth L2-normalization layer serves as the output end of the second LDSCP full-connection layer;
the second LDDCP full-connection layer comprises: a fourth projection layer, a second activation layer, a second weighted global average pooling layer (WGAP), a second global max pooling layer (GMP), a fifth L2-normalization layer, and a sixth L2-normalization layer;
the input end of the fourth projection layer serves as the input end of the second LDDCP full-connection layer, and the output end of the fourth projection layer is connected with the input end of the second activation layer; the output end of the second activation layer is connected with the input end of the second weighted global average pooling layer (WGAP) and the input end of the second global max pooling layer (GMP), respectively; the output end of the second weighted global average pooling layer (WGAP) is connected with the input end of the fifth L2-normalization layer; the output end of the second global max pooling layer (GMP) is connected with the input end of the sixth L2-normalization layer; and the output end of the fifth L2-normalization layer and the output end of the sixth L2-normalization layer together serve as the output end of the second LDDCP full-connection layer.
6. The ECG signal classification method based on an ultra-lightweight convolutional neural network according to claim 1, wherein step S3 comprises the following steps:
S31, filtering the acquired ECG signal with a band-pass filter to obtain filtered ECG data;
S32, performing R-peak detection on the filtered ECG data with the Pan-Tompkins algorithm, and locating the R-peak position in each cardiac-cycle waveform;
S33, segmenting the filtered ECG data according to the R-peak position in each cardiac-cycle waveform to obtain multiple segments of ECG data.
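Steps S31-S33 can be prototyped in a few lines. The sketch below is a deliberately simplified stand-in, not the claimed method: it omits the S31 band-pass filter, keeps only the derivative / squaring / moving-window-integration / thresholding core of the Pan-Tompkins detector, and assumes a 360 Hz sampling rate and a fixed segment window; the "ECG" is a synthetic signal with spikes at known beat positions.

```python
import numpy as np

def detect_r_peaks(sig, fs):
    """Greatly simplified Pan-Tompkins-style R-peak detector (step S32):
    derivative -> squaring -> moving-window integration -> thresholding
    with a refractory period. The patent uses the full algorithm."""
    d = np.diff(sig, prepend=sig[0])            # derivative stage
    e = d * d                                   # squaring stage
    win = int(0.15 * fs)                        # ~150 ms integration window
    integ = np.convolve(e, np.ones(win) / win, mode="same")
    thresh = 0.5 * integ.max()                  # crude fixed threshold
    peaks, last = [], -fs
    for i in range(1, len(integ) - 1):
        if (integ[i] >= thresh and integ[i] >= integ[i - 1]
                and integ[i] > integ[i + 1] and i - last > int(0.2 * fs)):
            peaks.append(i)                     # local max past refractory period
            last = i
    return peaks

def segment_beats(sig, peaks, fs, pre=0.25, post=0.45):
    """Step S33: one fixed window per located R peak (window sizes assumed)."""
    a, b = int(pre * fs), int(post * fs)
    return [sig[p - a : p + b] for p in peaks if p - a >= 0 and p + b <= len(sig)]

fs = 360                                        # MIT-BIH sampling rate
t = np.arange(10 * fs) / fs
true_beats = np.arange(fs, 9 * fs, fs)          # one synthetic beat per second
ecg = 0.05 * np.sin(2 * np.pi * 1.7 * t)        # baseline-wander stand-in
spike = np.array([0.1, 0.4, 0.9, 1.5, 0.9, 0.4, 0.1])
for beat in true_beats:                         # crude R-wave spikes
    ecg[beat - 3 : beat + 4] += spike
peaks = detect_r_peaks(ecg, fs)
segments = segment_beats(ecg, peaks, fs)
print(len(peaks), len(segments))
```

Each detected peak yields one fixed-length segment that would then be fed, one at a time, to the classification model of step S4.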
CN202010875217.2A 2020-08-27 2020-08-27 ECG signal classification method based on ultra-lightweight convolutional neural network Active CN111956208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010875217.2A CN111956208B (en) 2020-08-27 2020-08-27 ECG signal classification method based on ultra-lightweight convolutional neural network


Publications (2)

Publication Number Publication Date
CN111956208A CN111956208A (en) 2020-11-20
CN111956208B true CN111956208B (en) 2021-04-20

Family

ID=73389889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010875217.2A Active CN111956208B (en) 2020-08-27 2020-08-27 ECG signal classification method based on ultra-lightweight convolutional neural network

Country Status (1)

Country Link
CN (1) CN111956208B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112690802B (en) * 2020-12-25 2023-03-03 平安科技(深圳)有限公司 Method, device, terminal and storage medium for detecting electrocardiosignals
CN113965632A (en) * 2021-11-03 2022-01-21 武汉大学 ECG analysis system and method based on cloud-equipment terminal cooperation
CN114098757B (en) * 2021-11-12 2024-02-09 南京海量物联科技有限公司 ECG signal monitoring method based on quantum particle swarm optimization
CN114504326B (en) * 2022-01-17 2023-07-18 电子科技大学 Binary amplitude coding method for electrocardiosignal

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109077715A (en) * 2018-09-03 2018-12-25 北京工业大学 A kind of electrocardiosignal automatic classification method based on single lead
CN109222963A (en) * 2018-11-21 2019-01-18 燕山大学 A kind of anomalous ecg method for identifying and classifying based on convolutional neural networks
CN109259756A (en) * 2018-09-04 2019-01-25 周军 The ECG signal processing method of Secondary Neural Networks based on non-equilibrium training
CN109645983A (en) * 2019-01-09 2019-04-19 南京航空航天大学 A kind of uneven beat classification method based on multimode neural network
CN110263684A (en) * 2019-06-06 2019-09-20 山东省计算中心(国家超级计算济南中心) Electrocardiogram classification method based on lightweight neural network

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180032689A1 (en) * 2016-07-29 2018-02-01 Qatar University Method and apparatus for performing feature classification on electrocardiogram data
CN107657318A (en) * 2017-11-13 2018-02-02 成都蓝景信息技术有限公司 A kind of electrocardiogram sorting technique based on deep learning model


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Yifei. Application of Machine Learning Methods in Early Identification of Sudden Cardiac Death. China Masters' Theses Full-text Database, Medicine & Health Sciences. 2020. *
Zhang Yifei. Application of Machine Learning Methods in Early Identification of Sudden Cardiac Death. China Masters' Theses Full-text Database, Medicine & Health Sciences. 20200715. pp. 54-59, Fig. 5-3. *

Also Published As

Publication number Publication date
CN111956208A (en) 2020-11-20

Similar Documents

Publication Publication Date Title
CN111956208B (en) ECG signal classification method based on ultra-lightweight convolutional neural network
US11564612B2 (en) Automatic recognition and classification method for electrocardiogram heartbeat based on artificial intelligence
CN109543526B (en) True and false facial paralysis recognition system based on depth difference characteristics
CN109544518B (en) Method and system applied to bone maturity assessment
CN110840402A (en) Atrial fibrillation signal identification method and system based on machine learning
CN109480824B (en) Method and device for processing electrocardio waveform data and server
CN110638430B (en) Method for building cascade neural network ECG signal arrhythmia classification model
CN113065526B (en) Electroencephalogram signal classification method based on improved depth residual error grouping convolution network
CN110619322A (en) Multi-lead electrocardio abnormal signal identification method and system based on multi-flow convolution cyclic neural network
CN112508110A (en) Deep learning-based electrocardiosignal graph classification method
CN108090509B (en) Data length self-adaptive electrocardiogram classification method
Cao et al. Atrial fibrillation detection using an improved multi-scale decomposition enhanced residual convolutional neural network
CN109567789B (en) Electrocardiogram data segmentation processing method and device and readable storage medium
CN110558975B (en) Electrocardiosignal classification method and system
CN112043260B (en) Electrocardiogram classification method based on local mode transformation
CN113057648A (en) ECG signal classification method based on composite LSTM structure
CN112022141B (en) Electrocardiosignal class detection method, electrocardiosignal class detection device and storage medium
CN116361688A (en) Multi-mode feature fusion model construction method for automatic classification of electrocardiographic rhythms
CN113080996B (en) Electrocardiogram analysis method and device based on target detection
CN113627391B (en) Cross-mode electroencephalogram signal identification method considering individual difference
CN112932431B (en) Heart rate identification method based on 1DCNN + Inception Net + GRU fusion network
CN113128585B (en) Deep neural network based multi-size convolution kernel method for realizing electrocardiographic abnormality detection and classification
CN111839502B (en) Electrocardiogram data anomaly detection method, device, equipment and storage medium
CN114041800A (en) Electrocardiosignal real-time classification method and device and readable storage medium
CN116616792B (en) Atrial fibrillation detection system based on lightweight design and feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant