CN113768515A - Electrocardiosignal classification method based on deep convolutional neural network - Google Patents

Electrocardiosignal classification method based on deep convolutional neural network

Info

Publication number
CN113768515A
CN113768515A (Application CN202111092161.4A)
Authority
CN
China
Prior art keywords
neural network
convolutional neural
electrocardiosignal
classification
deep convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111092161.4A
Other languages
Chinese (zh)
Inventor
王涌超
肖斌
毕秀丽
聂茜茜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN202111092161.4A priority Critical patent/CN113768515A/en
Publication of CN113768515A publication Critical patent/CN113768515A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346 Analysis of electrocardiograms
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device

Abstract

The invention discloses an electrocardiosignal classification method based on a deep convolutional neural network, and relates to the technical field of deep learning and pattern recognition. The method comprises the following specific steps: 1) directly cutting the original electrocardiosignals into overlapping electrocardio segments 30 seconds in length; 2) sending the segmented electrocardiosignal segments into a convolutional neural network model; 3) training the network model with the electrocardio labels using a stochastic gradient descent algorithm; 4) comprehensively analyzing the classification result of each electrocardio segment with a voting mechanism, and voting to obtain the classification result of the whole electrocardiogram record. The invention requires no complex preprocessing process and removes the dependence on accurate segmentation of heartbeat cycles and on the design of complex hand-crafted features. The convolutional neural network automatically extracts the features related to heart disease from the complex electrocardiosignals, improving the effect of electrocardio classification, enabling auxiliary diagnosis of electrocardiosignals, and allowing real-time heart health monitoring when combined with intelligent wearable devices.

Description

Electrocardiosignal classification method based on deep convolutional neural network
Technical Field
The invention relates to the technical field of deep learning and pattern recognition, in particular to an electrocardiosignal classification method of a deep convolutional neural network.
Background
Data from the World Health Organization show that non-infectious diseases cause more deaths than infectious diseases. Cardiovascular disease (CVD), a non-infectious disease that has afflicted humans for many years, causes about half of all deaths from non-infectious diseases (about 17.5 million). Electrocardiographic diagnosis is a well-established non-invasive technique for detecting cardiovascular disease; it senses and records the electrical activity of the heart through electrodes placed at specific locations on the body. Since an electrocardiogram directly reflects the operating state of the heart, it is widely used in the medical field.
Traditional classification of electrocardiosignals can be roughly divided into the following stages: (1) denoising the electrocardiosignal; (2) segmenting the electrocardiosignal; (3) feature extraction and classification. Electrocardiosignals are very sensitive to noise, and common noise includes baseline drift, power line interference, motion artifacts and the like. Generally, denoising the electrocardiosignal is the first step of electrocardio analysis. Segmentation of the electrocardiosignal is generally divided into two steps: waveform detection and range definition. Waveform detection means locating certain representative points or waveforms in the electrocardiosignal. Range definition means that, after the landmark waveform position is found, a specific segmentation range of the electrocardio segment is obtained. Feature extraction is the most important step in electrocardiosignal classification research. Over the years, researchers have designed many features to improve the effect of electrocardio classification. These features can be broadly classified into three categories: (1) features related to statistics; (2) features related to electrocardiosignal morphology; (3) other features. The choice of classification method or classifier is mainly related to the type and characteristics of the features. Classifiers commonly used in traditional electrocardio classification methods based on hand-crafted feature extraction include linear discriminant classifiers, K-nearest neighbor classifiers (KNN), random forests, support vector machines, and the like.
In some respects, a convolutional neural network can be considered an algorithmic model of the mammalian visual system. In 1980, Fukushima proposed a neural network named the neocognitron to simulate the operating mechanism of the visual system; the neocognitron is considered a precursor of convolutional neural networks. In 1990, LeCun applied the back-propagation algorithm to the field of neural networks and proposed a convolutional neural network named LeNet-5. In 2012, A. Krizhevsky et al. made a breakthrough in the field of image classification with the AlexNet neural network. Not only does AlexNet have more network layers than previous neural networks, it also introduced many new techniques such as dropout and the ReLU activation function. AlexNet revealed the enormous potential of neural networks, and a series of deep convolutional neural network models have been developed since then. Among those with a huge impact are VGG and GoogLeNet, proposed in 2014, ResNet in 2016, and so on. At present, convolutional neural networks play an important role in fields such as computer vision and natural language processing.
Disclosure of Invention
The invention aims to solve the problem that the effect of traditional electrocardio classification methods depends to a great extent on a complex data preprocessing process and on the design and extraction of hand-crafted features. The invention provides an electrocardiosignal classification method based on a deep convolutional neural network that requires neither a complex electrocardiosignal preprocessing process nor a dedicated feature extraction scheme.
In order to achieve the purpose, the invention adopts the technical scheme that the electrocardiosignal classification method based on the deep convolutional neural network comprises the following steps:
1) reading original electrocardiosignals, and in order to reduce calculation consumption and ensure that the electrocardio data input into the neural network contains enough heartbeat cycles, overlapping and dividing the original electrocardiosignals into electrocardiosignal segments with equal length. Specifically, the original signal is divided into 30s electrocardiogram segments, and 5s signal overlap exists between each segment.
2) Without any complex preprocessing, each electrocardiosignal segment is sent into an electrocardio classification model based on a convolutional neural network to obtain a classification result for each segment.
3) And collecting the classification result of each segment, comprehensively analyzing the classification result of each segment by using a voting mechanism, and voting to obtain the classification result of each record.
Further, the electrocardio classification model combines a deep convolutional neural network and a long short-term memory network; the convolutional neural network automatically extracts spatial-domain features from the electrocardiosignals, and a combined channel attention mechanism and spatial attention mechanism let the network enhance the expression of effective features in both the spatial and channel dimensions; features in the time domain are extracted by constructing a long short-term memory network module.
Further, seven residual attention modules are stacked in the deep convolutional neural network.
In the above scheme, the processing flow of the deep convolutional neural network includes: first, the input feature map undergoes one ordinary convolution; next, the features are screened sequentially by a channel attention module and a spatial attention module; then, the dimensionality of the input data is expanded 2× by a 1 × 1 convolution and added to the feature map processed by the channel attention module and the spatial attention module in the previous step; then, batch normalization is used to accelerate network training; finally, the output of the network is obtained with a ReLU activation function and max pooling.
In the above solution, the channel attention module comprises the following processing steps:
the first step is as follows: and constructing two pooling branches, and compressing the spatial dimension of the input feature map by using global maximum pooling and global average pooling respectively.
The second step is that: and constructing two multilayer perceptrons to respectively process the feature vectors obtained by the two pooling branches, adding the feature vectors of the two pooling branches processed by the multilayer perceptrons element by element, and normalizing the feature vectors by using a sigmoid activation function to obtain the attention weight vector. And introducing a hyper-parameter r to set the number of hidden layer neurons of the multilayer perceptron, and if the dimensionality of the pooled feature map is 1 × 1 × 2C, setting the number of the hidden neurons to be 2C/r.
The third step: and multiplying the attention weight vector by the original input feature map channel by channel to obtain the feature map subjected to attention mechanism enhancement.
Further, the space attention module comprises the following processing steps:
the first step is as follows: compressing the channel dimension of the input data by utilizing global maximum pooling and global average pooling respectively in the two pooling branches;
the second step is that: splicing the feature vectors obtained by the two pooling branches on a channel dimension, carrying out conventional convolution operation once to obtain the feature vectors, and normalizing the feature graph after convolution by using a sigmoid activation function to obtain a space attention weight vector;
the third step: and multiplying the spatial attention weight vector and the original input feature map element by element in the spatial dimension to obtain the feature map enhanced by the spatial attention mechanism.
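A matching PyTorch sketch of the spatial attention module is given below; the class name SpatialAttention1d and the convolution kernel size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention1d(nn.Module):
    """Spatial attention over 1-D ECG feature maps (sketch; name and kernel size assumed)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv1d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                                # x: (batch, C, L)
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # compress channels by max pooling -> (batch, 1, L)
        avg_map = torch.mean(x, dim=1, keepdim=True)     # compress channels by average pooling -> (batch, 1, L)
        s = torch.cat([max_map, avg_map], dim=1)         # concatenate the two branches on the channel axis
        w = torch.sigmoid(self.conv(s))                  # one ordinary convolution + sigmoid -> spatial weights
        return x * w                                     # re-weight the input position by position
```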
Further, the voting mechanism in step 3) comprises: collecting and summarizing all segment classification results; if the vote counts are not tied, the category with the most votes is taken as the category of the whole electrocardiogram record; if there is a tie, the electrocardio segment with the largest category predicted value is found, and the category of that segment is taken as the category of the whole record.
The invention has the following advantages and beneficial effects:
the invention provides an electrocardiosignal classification method based on a deep convolutional neural network aiming at the defects that the traditional electrocardiosignal classification method needs complex data preprocessing and the design and extraction of fine artificial features, the features of electrocardiosignals are automatically extracted by the convolutional neural network for classification in a learning mode, the simple and fast electrocardiosignal classification method is realized, and the electrocardiosignal classification method has the potential of being further applied to wearable equipment and realizing the real-time monitoring of the electrocardiosignals. In addition, the method provided by the invention utilizes the residual attention module to automatically extract the useful features of the electrocardiosignals in the spatial domain, eliminates the features harmful to classification, and simultaneously utilizes the long-term and short-term attention network to extract the time-related features hidden in the electrocardiosignals. Is innovation and breakthrough of deep learning in the application of the electrocardiosignal classification field. The result shows that compared with the traditional electrocardio classification method based on artificial feature design and extraction, the method provided by the invention has the advantages that the electrocardio classification is more accurate, the excellent effect can be obtained under the electrocardio signal multi-classification scene, and meanwhile, the robustness is good in multi-database test.
In practical application, in step 1, the invention does not need complex preprocessing steps such as signal denoising, waveform detection, extraction of electrocardiosignal morphological characteristics and the like. In step 2, extracting the spatial features and time-related features of the electrocardiosignals by using a convolutional neural network under the guidance of training data and labels, and classifying the features. And in step 3, the results obtained in the last step are integrated, and the classification result of the whole electrocardiogram record is obtained through a voting mechanism. Compared with the traditional electrocardiogram classification method, the electrocardiogram classification method based on the deep convolutional neural network is more accurate in electrocardiogram classification.
Drawings
FIG. 1 is a flow chart of a classification scheme for a preferred embodiment provided by the present invention;
FIG. 2 is a diagram of a neural network architecture according to the present invention;
FIG. 3 is an internal block diagram of a residual attention module;
FIG. 4 is an internal block diagram of a channel attention module;
FIG. 5 is an internal block diagram of a spatial attention module;
FIG. 6 is a network architecture diagram of a two-tiered long and short term memory network;
fig. 7 is a flow chart of the voting mechanism.
Detailed Description
The technical solutions in the embodiments of the present invention will be described in detail and clearly with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme for solving the technical problems is as follows:
referring to fig. 1: the technical scheme of the invention is explained in detail, and the electrocardio classification method of the deep convolutional neural network comprises the following steps:
1) firstly, directly reading an original electrocardiosignal, and dividing a complete electrocardiorecord into electrocardio segments with the length of 30 seconds in a manner of overlapping for 5 seconds without other complex data preprocessing processes;
2) respectively sending each electrocardiogram fragment into network training to obtain a classification result of each fragment;
3) and collecting and analyzing classification results of all the segments recorded by the electrocardiogram, and obtaining a recording-level classification result through a voting mechanism.
Compared with traditional methods, the proposed method has no complex preprocessing processes other than the segmentation of the electrocardiogram record, such as segmenting heartbeat cycles or designing and extracting hand-crafted features. Also, because of the sporadic nature of certain heart diseases, not every electrocardio segment can reflect such diseases. The classification result of each electrocardiogram record is obtained from the segment classification results through a voting mechanism. Therefore, to avoid the whole record being classified incorrectly because there are too many normal electrocardio segments and too few abnormal segments, the electrocardio segments should be as long as possible. However, when the neural network extracts features, long segments increase the resource consumption of network operation and lower the calculation efficiency. Considering that the heart rate of a normal person is 60 to 100 beats per minute, the invention segments the electrocardiogram recording into segments 30 s long. Combined with the average length of the data in the database, a 30 s electrocardio segment not only improves the effectiveness of the voting mechanism but also contains enough complete heartbeat cycles.
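As an illustration of this segmentation rule (30 s windows with a 5 s overlap between neighbouring segments), a minimal Python sketch is given below. The function name, the zero-padding of records shorter than one window, and the dropping of an uncovered tail are assumptions not specified in the text.

```python
import numpy as np

def segment_ecg(record: np.ndarray, fs: int, seg_sec: int = 30, overlap_sec: int = 5) -> np.ndarray:
    """Cut one ECG record into fixed-length, overlapping segments (sketch)."""
    seg_len = seg_sec * fs                  # samples per segment, e.g. 30 s x 300 Hz = 9000
    stride = (seg_sec - overlap_sec) * fs   # adjacent segments share overlap_sec seconds
    if len(record) < seg_len:               # short record: zero-pad to one full window (assumption)
        padded = np.zeros(seg_len, dtype=record.dtype)
        padded[:len(record)] = record
        return padded[np.newaxis, :]
    starts = range(0, len(record) - seg_len + 1, stride)
    return np.stack([record[s:s + seg_len] for s in starts])

# Example: a 70 s single-lead record sampled at 300 Hz (21000 samples) yields two
# segments covering 0-30 s and 25-55 s.
```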
Referring to fig. 2: the invention combines a deep convolutional neural network and a long short-term memory network to construct the electrocardiosignal classification model. The convolutional neural network automatically extracts spatial-domain features from the electrocardiosignals, and the combination of a channel attention mechanism and a spatial attention mechanism lets the network automatically strengthen the expression of effective features in both the spatial and channel dimensions, reducing the influence of useless features. Meanwhile, electrocardiosignals carry a great number of time-related characteristics in the time domain, which traditional methods do not exploit. Therefore, to make full use of the characteristics of the time-series signal, the method further improves the effect by constructing a long short-term memory network module to extract features in the time domain.
The network mentioned in step 2) can be divided into a convolutional neural network part and a recurrent neural network part. The convolutional neural network part extracts features of the electrocardiosignals in the spatial domain, refining the feature map layer by layer with residual attention modules. Moreover, the residual attention module deepens the network and relieves the gradient vanishing problem. The recurrent neural network part extracts time-domain features from the time-series feature map with a long short-term memory network module. Meanwhile, the long short-term memory network module also reduces the dimensionality of the feature map, which benefits the later processing stages.
Referring to fig. 3: the residual attention module is an important component of the network. The module combines residual connections with the attention mechanism, relieving the gradient vanishing problem, deepening the network and helping extract high-level abstract features. A total of seven residual attention modules are stacked in the network. The overall flow of the module is as follows. First, the input feature map undergoes one ordinary convolution; after convolution, the number of channels of the feature map becomes twice as large as before. Next, the features are screened sequentially by a channel attention module and a spatial attention module. Then, the dimensionality of the input data is expanded 2× by a 1 × 1 convolution and added to the feature map processed by the attention mechanisms in the previous step. Then, batch normalization is used to accelerate network training. Finally, the output of the module is obtained with a ReLU activation function and max pooling.
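A PyTorch sketch of one such residual attention block is given below, reusing the ChannelAttention1d and SpatialAttention1d classes sketched earlier in the disclosure section; the convolution kernel size and the pooling size are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Assumes ChannelAttention1d and SpatialAttention1d from the earlier sketches are in scope.
class ResidualAttentionBlock(nn.Module):
    """One residual attention block (sketch; kernel and pool sizes are assumptions)."""
    def __init__(self, in_ch: int, kernel_size: int = 15):
        super().__init__()
        out_ch = in_ch * 2                                      # the ordinary convolution doubles the channels
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.ca = ChannelAttention1d(out_ch)
        self.sa = SpatialAttention1d()
        self.shortcut = nn.Conv1d(in_ch, out_ch, kernel_size=1) # 1x1 conv expands the input 2x
        self.bn = nn.BatchNorm1d(out_ch)
        self.act = nn.ReLU()
        self.pool = nn.MaxPool1d(2)

    def forward(self, x):
        y = self.sa(self.ca(self.conv(x)))       # ordinary convolution, then channel and spatial attention
        y = y + self.shortcut(x)                  # residual connection with the expanded input
        return self.pool(self.act(self.bn(y)))   # batch normalization, ReLU, max pooling
```

With the sizes assumed here, each block doubles the channel count and halves the temporal length; seven such blocks are stacked in the network described above.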
Referring to fig. 4: the extraction of the channel attention is mainly divided into three steps:
the first step is as follows: the spatial dimensions of the input feature map are compressed using global pooling. The common attention mechanism usually uses only one pooling method, and the method constructs two pooling branches, using global maximal pooling and global average pooling for dimension compression, respectively. Assuming that the dimension of the input feature map is 1 × W × 2C, the feature map after pooling is 1 × 1 × 2C, where C represents the number of channels of the feature map and W represents the length.
The second step is that: and constructing two multilayer perceptrons (MLPs) to respectively process the feature vectors obtained by the two pooling branches. Here, a hyper-parameter (r) is introduced to set the number of hidden layer neurons of the multi-layer perceptron. Assuming that the dimension of the pooled feature map is 1 × 1 × 2C, the number of hidden neurons is set to 2C/r. And then, adding the feature vectors of the two pooled branches processed by the multilayer perceptron element by element, and normalizing the feature vectors by using a sigmoid activation function to obtain an attention weight vector.
The third step: and multiplying the attention weight vector by the original input feature map channel by channel to obtain the feature map subjected to attention mechanism enhancement.
Referring to fig. 5: similar to the channel attention, the spatial attention module has three steps as well:
the first step is as follows: the channel dimensions of the input data are compressed in two pooling branches using global maximum pooling and global average pooling, respectively. Assuming that the input feature map is 1 × W × 2C, the feature map dimension after pooling becomes 1 × W × 1.
The second step is that: and splicing the eigenvectors obtained by the two pooling branches on the channel dimension, and performing a conventional convolution operation. The feature vectors obtained from the two pooled branches are integrated using a convolution operation. And then, normalizing the feature graph after convolution by using a sigmoid activation function to obtain a spatial attention weight vector.
The third step: the spatial attention weight vector and the original input feature map are multiplied element by element in the spatial dimension. Data after spatial attention mechanism enhancement are obtained.
Referring to fig. 6: the recurrent neural network used in the method is specifically a two-layer long short-term memory network. The electrocardiogram reflects the electrical activity of the heart in real time, and with the continuous beating of the heart it periodically produces waveforms such as the P wave, QRS complex and T wave. It follows that there are many time-related features in the electrocardiosignal. Traditional electrocardio classification methods ignore the relation between the electrocardiosignal and time; the invention proposes to extract the time-related features hidden in the electrocardiosignal with a long short-term memory network. A traditional recurrent neural network cannot extract time-series features over a longer time span because of problems such as vanishing gradients; the long short-term memory network greatly alleviates this problem. The long short-term memory network controls the data update of each hidden neuron through a gating mechanism, adding a memory function to the neurons. More specifically, the gating mechanism controls how much old information the neuron retains and how much new information is added. In the long short-term memory network, each hidden neuron contains a forget gate, an input gate and an output gate. The inputs of a convolutional neural network are independent of each other and temporal order is not considered, whereas the long short-term memory network consumes its data in temporal order when it operates.
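A minimal PyTorch sketch of the two-layer long short-term memory module is given below; the hidden size and the use of the last time step as the output (which also reduces the dimensionality of the feature map) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalBlock(nn.Module):
    """Two-layer LSTM reading the CNN feature map as a time sequence (sketch; sizes assumed)."""
    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=in_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)

    def forward(self, feat):                  # feat: (batch, C, L) from the convolutional part
        seq = feat.permute(0, 2, 1)           # reorder to (batch, L, C): one C-dim vector per time step
        out, _ = self.lstm(seq)               # gated updates give each hidden unit a memory
        return out[:, -1, :]                  # keep the last time step; also reduces dimensionality
```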
Referring to fig. 7: step 3) adopts a voting mechanism to comprehensively analyze the segment results. The method comprises the following steps: the electrocardio segments are sent into the network in turn to obtain the predicted category of each segment and the predicted value of that category. All segment classification results are collected and summarized. If the vote counts are not tied, the category with the most votes is taken as the category of the whole electrocardiogram record. If there is a tie, the electrocardio segment with the largest category predicted value is found, and the category of that segment is taken as the category of the whole record.
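A short Python sketch of this voting rule is given below; the function name and the array layout of the per-segment scores are assumptions for illustration.

```python
from collections import Counter
import numpy as np

def vote_record(segment_probs: np.ndarray) -> int:
    """Fuse per-segment predictions into one record-level label (sketch of the voting rule).

    segment_probs: (num_segments, num_classes) predicted scores for each 30 s segment.
    """
    seg_labels = segment_probs.argmax(axis=1)        # predicted category of each segment
    top_two = Counter(seg_labels).most_common(2)
    if len(top_two) == 1 or top_two[0][1] > top_two[1][1]:
        return int(top_two[0][0])                    # clear majority: most-voted class wins
    # Tie: take the category of the segment with the largest predicted value.
    best_seg = int(segment_probs.max(axis=1).argmax())
    return int(seg_labels[best_seg])
```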
To verify the effect of the present invention, the following experiment was performed:
in the experiment, a network is built by utilizing a Pythroch deep learning framework and operated on a window10 computer of Intel Xeon E5-2603 v4@1.70GHz CPU, NVIDIA GeForce GTX TITAN X GPU and 32GB RAM.
The experimental method comprises the following steps:
in order to fully verify the electrocardio classification effect of the network, experiments are carried out on two databases of CPSCDB and CINCDB. The CINCDB comprises 8528 single lead electrocardiogram records, the sampling frequency is 300 Hz, and the CINCDB comprises four categories including a normal category. The CPSCDB database included 6877 standard 12-lead electrocardiographic recordings (female 3178, male 3699) collected from 11 hospitals with a sampling frequency of 500 hz for a total of nine categories.
The experiment uses the F1 score as the indicator to measure the effectiveness of the algorithm. The F1 score is an index of model accuracy that considers both the precision and the recall of the model. Let TP denote the number of true positive samples, FP the number of false positive samples, and FN the number of false negative samples. The F1 score can be formulated as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × Precision × Recall / (Precision + Recall)
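As a small worked example of these formulas, the following Python sketch computes precision, recall and F1 from the counts defined above; the helper name f1_score is an assumption.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Precision, recall and F1 from counts, as in the formulas above (sketch)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Example: 90 true positives, 10 false positives and 20 false negatives give
# precision 0.9, recall ~0.818 and F1 ~0.857.
```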
the results of the experiments on the CINCDB database are shown in the following table
Table 1 comparative experimental results on database CINCDB
In the experiments on the CINCDB database, F1 scores were calculated for the normal class (normal), atrial fibrillation (atrial fibrillation) and the other classes (other rhythms), denoted F_N, F_A and F_O respectively; the overall effectiveness of the model is measured by the mean of these three F1 scores, denoted F_M. On the CINCDB database, the proposed method is compared with three other methods. Two of them were proposed by Martin Zihlmann et al. [1], who convert the electrocardiosignal into a spectrogram and construct a convolutional neural network (denoted the CNN method) and a convolutional recurrent neural network (denoted the CRNN method), respectively, to extract and classify features from the spectrogram. Shreyasi Datta et al. [2] propose a traditional method based on hand-crafted feature extraction, carefully designing and extracting a large number of features and cascading three binary classifiers to classify the electrocardiosignals. As can be seen, the proposed method outperforms the comparison methods both on individual categories and in overall model effect.
The results of the experiments on the database CPSCDB are shown in the following table:
TABLE 2 Experimental results on database CPSCDB
The experimental indices used on the database CPSCDB and the corresponding categories are as follows:
table 3 experimental indices performed on the database CPSCD and the corresponding categories
In the experiments on the CPSCDB database, the F1 score was calculated for each category, and related diseases were additionally merged into sub-categories for which F1 scores were also computed. The overall effect of the network is represented by the mean F1 score over the 9 classes. Since the CPSCDB database was released only recently, no related literature has been published on it; therefore, only the ten-fold cross-validation results of the proposed method on this database are given. The average F1 score of the proposed method over all categories is 0.782. The F1 score of the 'ST-segment-elevation' category is clearly lower than those of the other categories; the main reason is probably that this category accounts for too small a proportion of the samples in the database, so it cannot be identified correctly during model training.
The above examples are to be construed as merely illustrative and not limitative of the remainder of the disclosure. After reading the description of the invention, the skilled person can make various changes or modifications to the invention, and these equivalent changes and modifications also fall into the scope of the invention defined by the claims.
[1] Zihlmann M, Perekrestenko D, Tschannen M. Convolutional recurrent neural networks for electrocardiogram classification[C]//2017 Computing in Cardiology (CinC). IEEE, 2017: 1-4.
[2] Datta S, Puri C, Mukherjee A, et al. Identifying normal, AF and other abnormal ECG rhythms using a cascaded binary classifier[C]//2017 Computing in Cardiology (CinC). IEEE, 2017: 1-4.

Claims (9)

1. An electrocardiosignal classification method based on a deep convolutional neural network is characterized by comprising the following steps:
1) reading an original electrocardiosignal, and cutting the original electrocardiosignal into electrocardiosignal segments with equal length in an overlapped manner;
2) sending each electrocardiosignal segment into an electrocardio classification model based on a convolutional neural network to obtain a classification result of each segment;
3) and collecting the classification result of each segment, comprehensively analyzing the classification result of each segment by using a voting mechanism, and voting to obtain the classification result of each record.
2. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 1, wherein: step 1) the original signal segmentation specifically comprises: the electrocardiogram is divided into 30s electrocardiogram segments, and 5s of signals are overlapped among the segments.
3. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 1, wherein: the electrocardio classification model combines a deep convolution neural network and a long-term and short-term network; automatically extracting spatial domain features in the electrocardiosignals by using a convolutional neural network, and enhancing the expression of effective features from two dimensions of space and channels by the network by combining a channel attention mechanism and a space attention mechanism; and extracting features in a time domain by constructing a long-short term memory network module.
4. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 3, wherein: seven residual attention modules are stacked in the deep convolutional neural network.
5. The method for classifying electrocardiosignals based on the deep convolutional neural network as claimed in claim 3 or 4, wherein: the processing flow of the deep convolutional neural network comprises the following steps: firstly, inputting a characteristic diagram to carry out ordinary convolution once; then, sequentially screening the characteristics through a channel attention module and a space attention module; then, after the dimensionality of the input data is expanded by 2 times by utilizing 1 × 1 convolution, adding the dimensionality of the input data to the feature map processed by the channel attention module and the spatial attention module in the previous step; then, the network training is accelerated by using batch normalization; and finally, obtaining the output of the network by utilizing a Relu activation function and maximum pooling.
6. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 5, wherein: the channel attention module comprises the following processing steps:
the first step is as follows: constructing two pooling branches, and compressing the spatial dimension of the input feature map by respectively using global maximum pooling and global average pooling;
the second step is that: constructing two multilayer perceptrons to respectively process the feature vectors obtained by the two pooling branches, adding the feature vectors processed by the two pooling branches by the multilayer perceptrons element by element, and normalizing the feature vectors by a sigmoid activation function to obtain an attention weight vector;
the third step: and multiplying the attention weight vector by the original input feature map channel by channel to obtain the feature map subjected to attention mechanism enhancement.
7. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 6, wherein: and in the second step, introducing a hyper-parameter r to set the number of hidden layer neurons of the multilayer perceptron, and if the dimension of the pooled feature map is 1 multiplied by 2C, setting the number of the hidden neurons as 2C/r, wherein C represents the channel number of the feature map.
8. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 5, wherein: the space attention module comprises the following processing steps:
the first step is as follows: compressing the channel dimension of the input data by utilizing global maximum pooling and global average pooling respectively in the two pooling branches;
the second step is that: splicing the feature vectors obtained by the two pooling branches on a channel dimension, carrying out conventional convolution operation once to obtain the feature vectors, and normalizing the feature graph after convolution by using a sigmoid activation function to obtain a space attention weight vector;
the third step: and multiplying the spatial attention weight vector and the original input feature map element by element in the spatial dimension to obtain the feature map enhanced by the spatial attention mechanism.
9. The electrocardiosignal classification method based on the deep convolutional neural network as claimed in claim 1, wherein: step 3) the voting mechanism comprises: collecting and summarizing all segment classification results, and if the vote counts are not tied, taking the category with the largest number of votes as the category of the whole electrocardiogram record; and if a tie appears, finding the electrocardiogram segment with the maximum category predicted value, and taking the category of that segment as the category of the whole record.
CN202111092161.4A 2021-09-17 2021-09-17 Electrocardiosignal classification method based on deep convolutional neural network Pending CN113768515A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111092161.4A CN113768515A (en) 2021-09-17 2021-09-17 Electrocardiosignal classification method based on deep convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111092161.4A CN113768515A (en) 2021-09-17 2021-09-17 Electrocardiosignal classification method based on deep convolutional neural network

Publications (1)

Publication Number Publication Date
CN113768515A true CN113768515A (en) 2021-12-10

Family

ID=78851841

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111092161.4A Pending CN113768515A (en) 2021-09-17 2021-09-17 Electrocardiosignal classification method based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN113768515A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110179453A (en) * 2018-06-01 2019-08-30 山东省计算中心(国家超级计算济南中心) Electrocardiogram classification method based on convolutional neural networks and shot and long term memory network
CN109620152A (en) * 2018-12-16 2019-04-16 北京工业大学 A kind of electrocardiosignal classification method based on MutiFacolLoss-Densenet
US20210117737A1 (en) * 2019-10-18 2021-04-22 Korea University Research And Business Foundation Earthquake event classification method using attention-based convolutional neural network, recording medium and device for performing the method
US20210232813A1 (en) * 2020-01-23 2021-07-29 Tongji University Person re-identification method combining reverse attention and multi-scale deep supervision
CN112957054A (en) * 2021-01-25 2021-06-15 北京工业大学 12-lead electrocardiosignal classification method based on channel attention grouping residual error network
CN113288162A (en) * 2021-06-03 2021-08-24 北京航空航天大学 Short-term electrocardiosignal atrial fibrillation automatic detection system based on self-adaptive attention mechanism
CN113229825A (en) * 2021-06-22 2021-08-10 郑州大学 Deep neural network-based multi-label multi-lead electrocardiogram classification method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115553787A (en) * 2022-09-30 2023-01-03 哈尔滨理工大学 Fetal electrocardiosignal extraction method based on multi-scale residual shrinkage U-Net
CN116842329A (en) * 2023-07-10 2023-10-03 湖北大学 Motor imagery task classification method and system based on electroencephalogram signals and deep learning
CN117281528A (en) * 2023-11-27 2023-12-26 山东锋士信息技术有限公司 Multi-lead pulse signal intelligent identification method and system based on deep learning

Similar Documents

Publication Publication Date Title
Essa et al. An ensemble of deep learning-based multi-model for ECG heartbeats arrhythmia classification
Gunduz Deep learning-based Parkinson’s disease classification using vocal feature sets
Shi et al. Automated heartbeat classification based on deep neural network with multiple input layers
Pourbabaee et al. Feature leaning with deep convolutional neural networks for screening patients with paroxysmal atrial fibrillation
CN113768515A (en) Electrocardiosignal classification method based on deep convolutional neural network
Sharma et al. Automated pre-screening of arrhythmia using hybrid combination of Fourier–Bessel expansion and LSTM
Luo et al. Multi-classification of arrhythmias using a HCRNet on imbalanced ECG datasets
Tripathi et al. Multilevel classification and detection of cardiac arrhythmias with high-resolution superlet transform and deep convolution neural network
Ma et al. An automatic system for atrial fibrillation by using a CNN-LSTM Model
Banerjee et al. A hybrid CNN-LSTM architecture for detection of coronary artery disease from ECG
Ullah et al. An effective and lightweight deep electrocardiography arrhythmia recognition model using novel special and native structural regularization techniques on cardiac signal
Prakash et al. A system for automatic cardiac arrhythmia recognition using electrocardiogram signal
Boda et al. An automated patient-specific ECG beat classification using LSTM-based recurrent neural networks
Hassan et al. Performance comparison of CNN and LSTM algorithms for arrhythmia classification
Khan et al. Congenital Heart Disease Prediction based on Hybrid Approach of CNN-GRU-AM
Mangathayaru et al. An attention based neural architecture for arrhythmia detection and classification from ecg signals
Zhang et al. D2afnet: A dual-domain attention cascade network for accurate and interpretable atrial fibrillation detection
Xu et al. Inter-patient ECG classification with intra-class coherence based weighted kernel extreme learning machine
Gao et al. A novel approach for atrial fibrillation signal identification based on temporal attention mechanism
Qiao et al. HS-Vectors: Heart sound embeddings for abnormal heart sound detection based on time-compressed and frequency-expanded TDNN with dynamic mask encoder
Qiu et al. STCT: Spatial-temporal conv-transformer network for cardiac arrhythmias recognition
Liu et al. Automatic classification of arrhythmias using multi-branch convolutional neural networks based on channel-based attention and bidirectional LSTM
Liu et al. Ensemble learning-based atrial fibrillation detection from single lead ECG wave for wireless body sensor network
Nehary et al. A deep convolutional neural network classification of heart sounds using fractional fourier transform
BALCI A hybrid attention-based LSTM-XGBoost model for detection of ECG-based atrial fibrillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination