CN108720831B - Automatic arrhythmia analysis method based on lead deep neural network - Google Patents

Automatic arrhythmia analysis method based on lead deep neural network

Info

Publication number
CN108720831B
Authority
CN
China
Prior art keywords
layer
neural network
deep neural
unit
lead
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810451726.5A
Other languages
Chinese (zh)
Other versions
CN108720831A (en)
Inventor
臧睦君
危义民
刘通
邹海林
柳婵娟
周树森
贾世祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fengxiang (Shandong) Medical Technology Co.,Ltd.
Original Assignee
Ludong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ludong University filed Critical Ludong University
Priority to CN201810451726.5A priority Critical patent/CN108720831B/en
Publication of CN108720831A publication Critical patent/CN108720831A/en
Application granted granted Critical
Publication of CN108720831B publication Critical patent/CN108720831B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346 Analysis of electrocardiograms
    • A61B5/349 Detecting specific parameters of the electrocardiograph cycle
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques

Abstract

The invention discloses an automatic arrhythmia analysis method based on a lead deep neural network, which comprises the following steps: a multi-channel electrocardiogram sample is generated by three sampling modes, and the resulting 600-dimensional electrocardiosignals are spliced along the second dimension; when the original electrocardiosignal is two-lead, this is equivalent to forming six 600 × 1-dimensional electrocardiosignal samples, which are fed respectively into a plurality of lead channels arranged in parallel. Each lead channel consists of convolution layer units connected in series; a merging layer at the output end merges the feature maps of the lead channels along the last dimension, and an attention layer is arranged between the merging layer and an LSTM layer unit. Each convolution layer unit comprises a convolution layer that extracts one-dimensional electrocardiosignal features with one-dimensional convolution, followed in series by an excitation unit operation and a pooling layer operation. The LSTM layer unit is connected in series with a fully connected layer whose excitation unit is softmax, which produces the output. The parameters of the deep neural network are learned and the samples are identified automatically. The method addresses the problem that existing arrhythmia analysis systems are not accurate enough to meet the requirements of clinical application.

Description

Automatic arrhythmia analysis method based on lead deep neural network
Technical Field
The invention relates to the technical field of medical signal processing, in particular to an automatic arrhythmia analysis method based on a lead deep neural network.
Background
In recent years, with the rapid development of auxiliary electrocardiogram diagnostic equipment, technological progress in the information field and, in particular, advances in pattern recognition, electrocardiogram equipment no longer merely acquires electrocardiosignals and prints electrocardiograms; it is developing towards mining the effective data in electrocardiograms and automatically recognizing and counting heartbeat information. Analysis equipment with an automatic heartbeat identification function can provide doctors with more intuitive and effective electrocardiogram information, save diagnosis time and improve diagnostic efficiency, making it one of the important items of auxiliary medical equipment.
The automatic arrhythmia analysis system running on a computing device is the core of such equipment, and there are two technical approaches: first, extracting a feature vector that represents the effective information of the electrocardiogram and feeding it into a classifier algorithm to obtain the heartbeat category; second, automatically learning and identifying the features with deep learning technology to obtain the heartbeat category.
An arrhythmia analysis system based on deep learning technology can effectively improve identification accuracy by exploiting large amounts of data, but current arrhythmia analysis systems still fall short of the accuracy required for clinical application.
Disclosure of Invention
The invention aims to provide an automatic arrhythmia analysis method based on a lead deep neural network, in order to solve the problem that existing arrhythmia analysis systems are not accurate enough for clinical application.
An automatic arrhythmia analysis method based on a lead deep neural network comprises the following steps:
1) performing composite sampling by adopting three sampling modes to generate a multi-channel electrocardiogram sample;
a. for the electrocardiosignal of each lead, 100 points are taken before and 100 points after the heartbeat, and the segment is resampled to a vector of fixed dimension 600;
b. for the electrocardiosignal of each lead, the R-R interval spanning 2 periods before the heartbeat and the R-R interval of 1 period after the heartbeat are taken and resampled to a vector of fixed dimension 600;
c. for the electrocardiosignal of each lead, the R-R interval spanning 2 periods before the heartbeat is taken and resampled to a 300-dimensional vector, the R-R interval of 1 period after the heartbeat is taken and resampled to a 300-dimensional vector, and the two resampled signals are spliced to form a 600-dimensional signal;
The 600-dimensional electrocardiosignals obtained by the three sampling modes are spliced along the second dimension, amplifying the electrocardiosignal of each lead from 600 × 1 dimensions to 600 × 3 dimensions, where 3 is the number of channels of that lead's electrocardiosignal. When the original electrocardiosignal has 2 leads, the electrocardiosignal has 600 × 6 dimensions; taking each 600-dimensional vector as the electrocardiosignal of one channel, this is equivalent to forming, from the original electrocardio data of each lead under the composite sampling mode, 6 electrocardiosignal samples X of dimension 600 × 1, which serve as the Input of the deep neural network model;
2) building a deep neural network
The deep neural network comprises a plurality of lead channels arranged in parallel. Each lead channel consists of convolution layer units connected in series; a merging layer at the output end of the lead channels merges the feature maps of the individual lead channels along the last dimension, that is, the dimension along which the depth of the feature map lies. Each convolution layer unit comprises a convolution layer followed in series by an excitation unit operation and a pooling layer operation; the convolution layer unit uses one-dimensional convolution to extract the features of the one-dimensional electrocardiosignal;
an attention layer is arranged between the merging layer and an LSTM layer unit, and the output of the LSTM layer unit is connected in series with a fully connected layer whose excitation unit is softmax, which produces the output;
3) learning parameters of a deep neural network;
4) automatically identifying the sample;
when the electrocardiogram data set has two lead signals, the dimension of the input signal is 6 × 600 × 1; each 600 × 1-dimensional signal among the inputs is fed into two convolution layer units connected in series; the output end of each convolution layer unit in each lead channel is connected in series with an excitation unit operation and a pooling layer operation in turn; the number of convolution kernels of the first convolution layer unit is 32 and the kernel size is 4, the excitation unit after the first convolution layer unit is the relu function, the pooling kernel size of the pooling layer unit is 6 and the pooling step size is 3, and the feature map dimension after the first pooling unit is 200 × 32; the number of convolution kernels of the second convolution layer unit is 64 and the kernel size is 5, the excitation unit after the second convolution layer unit is the relu function, the pooling kernel size of the pooling layer unit is 6 and the pooling step size is 3, and the feature map dimension after the second pooling unit is 67 × 64;
the deep neural network comprises a convolutional layer unit and an LSTM layer unit which are sequentially connected in series;
the outputs of the two convolution layer units in each lead channel are merged by the merging layer along the last dimension, that is, the dimension where the depth of the feature map lies; the merged output feature map is 67 × 384;
an attention unit is connected in series after the merging layer. The attention unit constructs a weight matrix of the same dimension, 67 × 384, which is multiplied element-wise with the corresponding elements of the convolved feature map; the weighted feature map output is 67 × 384. The elements of this weight matrix are trained by the neural network, and their initial values are random numbers in the range 0 to 1. The weighted feature map is input to an LSTM layer unit with 128 hidden units, so the output feature dimension of the LSTM layer unit is 128. The output of the LSTM layer unit is connected in series with a fully connected layer whose excitation unit is softmax; the output dimension of the fully connected layer is 4, namely the number of categories. Finally, the deep neural network model outputs a prediction vector;
the dimensionality of the prediction vector output by the deep neural network is 4; the network is constructed with the keras open-source framework and the python language, uses cross entropy as the loss function, and optimizes the loss function with the Adam optimizer;
the parameters of the deep neural network are learned as follows: the training parameters of the deep neural network are initialized, and the sampled signals are divided into training set samples and test set samples; that is, a part of the samples is randomly selected from the total samples as the training set, and the unselected samples form the test set. The multi-channel electrocardiosignals X of the training set are input into the initialized deep neural network, which is iterated with the goal of minimizing the cost function to generate and store the deep neural network; the training parameters are updated once per iteration until the loss value and accuracy of the deep neural network stabilize near a certain value, at which point training can be stopped and the training parameters and model structure information of the current network saved;
the samples are identified automatically as follows: all the divided test set samples are input into the stored neural network, and the deep neural network is run to obtain the 4-dimensional prediction vector output corresponding to each test set sample; the labels of the test set samples are converted into 4-dimensional label vectors by one-hot coding, and the output predictions are then compared with the test set labels to check whether the classification is correct.
Drawings
Fig. 1 is a diagram of a deep neural network architecture.
Detailed Description
Example 1: an automatic arrhythmia analysis method based on a lead deep neural network
The invention is further described with reference to the following figures and detailed description of embodiments.
The specific example uses the internationally used MIT-BIH Arrhythmia Database (mitdb), whose data and instructions are published on the physionet.org website known in the industry. The database contains half-hour, 360 Hz, two-lead electrocardiogram recordings of 47 patients and has been manually annotated by cardiologists. Four heartbeat category groups defined by the AAMI standard are selected from the data set as the basis for effect evaluation: class N (normal heartbeats or bundle branch block heartbeats), class S (supraventricular abnormal heartbeats), class V (ventricular abnormal heartbeats) and class F (fusion heartbeats). The labels of the four categories and their correspondence with the categories in the mitdb data set are shown in table 1. In this example, the method is implemented as a software system running on a computer, using the Matlab and python simulation environments known in the art.
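For reference only, the mitdb signals and cardiologist annotations can be read in python with the third-party wfdb package; the package, the record number 100 and the variable names below are illustrative assumptions and are not named in the patent.

    import wfdb

    # read one half-hour, two-lead, 360 Hz record and its beat annotations
    # directly from PhysioNet's 'mitdb' directory
    record = wfdb.rdrecord('100', pn_dir='mitdb')
    annotation = wfdb.rdann('100', 'atr', pn_dir='mitdb')

    signals = record.p_signal      # numpy array, one column per lead
    r_peaks = annotation.sample    # sample indices of the annotated beats (R peaks)
    symbols = annotation.symbol    # beat symbols, later grouped into the AAMI classes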
The detailed steps of this example are as follows:
First, generating multi-channel electrocardiogram samples by composite sampling
After denoising the original signals in the mitdb data set, the number of channels of the signals is amplified by the following three sampling modes:
(1) the first sampling mode: for the electrocardiosignal of each lead, 100 points are taken before and 100 points after the heartbeat, and the segment is resampled to a vector of fixed dimension 600, generating Input1 and Input4 in FIG. 1;
(2) the second sampling mode: for the electrocardiosignal of each lead, the R-R interval spanning 2 periods before the heartbeat and the R-R interval of 1 period after the heartbeat are taken and resampled to a vector of fixed dimension 600, generating Input2 and Input5 in FIG. 1;
(3) the third sampling mode: for the electrocardiosignal of each lead, the R-R interval spanning 2 periods before the heartbeat is resampled to a 300-dimensional vector, the R-R interval of 1 period after the heartbeat is resampled to a 300-dimensional vector, and the two resampled signals are spliced to form a 600-dimensional signal, generating Input3 and Input6 in FIG. 1;
The electrocardiosignal of each lead is thus amplified from 600 × 1 dimensions to 600 × 3 dimensions by the three sampling modes. Since 2 leads of electrocardiosignals are collected in the mitdb data set, the electrocardiosignal at this point has 6 × (600 × 1) dimensions; taking each (600 × 1)-dimensional vector as the electrocardiosignal of one channel, this is equivalent to forming, from the original electrocardio data of each lead under the composite sampling mode, 6 samples X of dimension 600 × 1, which serve as the input of the neural network model.
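A rough sketch of the three sampling modes for a single lead is given below; the denoising step is omitted, resampling is done with scipy.signal.resample, and the function and variable names are assumptions of this example rather than text from the patent.

    import numpy as np
    from scipy.signal import resample

    def composite_sample(sig, r_peaks, i):
        """Return one 600 x 3 sample for the i-th annotated beat of a single lead."""
        r = r_peaks[i]
        # mode 1: 100 points before and 100 points after the beat, resampled to 600
        m1 = resample(sig[r - 100:r + 100], 600)
        # mode 2: 2 R-R periods before and 1 R-R period after the beat, resampled to 600
        m2 = resample(sig[r_peaks[i - 2]:r_peaks[i + 1]], 600)
        # mode 3: the same two spans resampled to 300 points each and spliced to 600
        m3 = np.concatenate([resample(sig[r_peaks[i - 2]:r], 300),
                             resample(sig[r:r_peaks[i + 1]], 300)])
        return np.stack([m1, m2, m3], axis=-1)   # three channels per lead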
Second, building the deep neural network
(1) Concrete structure of the deep neural network
The depth model takes as input the electrocardiosignal X of each channel of each lead; since the mitdb data set has two leads, the input signal dimension is 6 × 600 × 1. Each 600 × 1-dimensional signal among the inputs is fed into one lead channel, and the output end of each convolution layer unit in each lead channel is connected in series with an excitation unit operation and a pooling layer operation in turn. The number of convolution kernels of the first convolution layer unit is 32 and the kernel size is 4; the excitation unit after the first convolution layer unit is the relu function; the pooling kernel size of its pooling layer unit is 6 and the pooling step size is 3; the feature map dimension after the first pooling unit is 200 × 32. The number of convolution kernels of the second convolution layer unit is 64 and the kernel size is 5; the excitation unit after the second convolution layer unit is the relu function; the pooling kernel size of its pooling layer unit is 6 and the pooling step size is 3; the feature map dimension after the second pooling unit is 67 × 64.
The outputs of the two convolution layer units in each lead channel are merged by a merging layer (Merge layer) along the last dimension, that is, the dimension where the depth of the feature map lies; the size of the merged feature map is 67 × 384. An attention unit is connected in series after the merging layer: the attention unit constructs within the neural network a weight matrix of the same dimension, 67 × 384, which is multiplied element-wise with the corresponding elements of the convolved feature map, and the weighted feature map output is 67 × 384. The elements of this weight matrix are trained by the neural network, and their initial values are random numbers in the range 0 to 1. The weighted feature map is input to an LSTM layer unit with 128 hidden units, so the output feature dimension of the LSTM layer unit is 128. The output of the LSTM layer unit is connected in series with a fully connected layer whose excitation unit is softmax; the output dimension of the fully connected layer is 4, namely the number of categories. Finally, the prediction vector y_pred output by the deep neural network model has dimension 4.
(2) Specific implementation of the deep neural network
The model is built with the keras open-source framework and the python language, with the network structure parameters as in table 3; a sketch of the assembly is given below. The deep neural network is built with the functional Model in the keras framework: the Model function is imported from the keras.models module, the input of the Model is set to the composite-sampled multi-channel electrocardiosignal X, and the output is the prediction vector y_pred of dimension 4; the one-dimensional convolution layers are constructed by importing the Conv1D function from the keras.layers module.
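The following is a minimal sketch of this assembly in the Keras 2.x functional API. The AttentionWeight class name, the use of 'same' padding (assumed so that the stated 200 × 32 and 67 × 64 feature-map sizes are reproduced) and the other variable names are illustrative assumptions, not text from the patent.

    from keras.layers import Input, Conv1D, Activation, MaxPooling1D, LSTM, Dense, Layer, concatenate
    from keras.initializers import RandomUniform
    from keras.models import Model

    class AttentionWeight(Layer):
        """Trainable weight matrix (here 67 x 384) multiplied element-wise with its input."""
        def build(self, input_shape):
            self.w = self.add_weight(name='att_w', shape=input_shape[1:],
                                     initializer=RandomUniform(0.0, 1.0), trainable=True)
            super(AttentionWeight, self).build(input_shape)
        def call(self, inputs):
            return inputs * self.w          # point-wise weighting, shape is unchanged
        def compute_output_shape(self, input_shape):
            return input_shape

    def lead_channel(x):
        # two Conv1D units, each followed by relu and max pooling (600x1 -> 200x32 -> 67x64)
        x = Conv1D(32, 4, padding='same')(x)
        x = Activation('relu')(x)
        x = MaxPooling1D(pool_size=6, strides=3, padding='same')(x)
        x = Conv1D(64, 5, padding='same')(x)
        x = Activation('relu')(x)
        x = MaxPooling1D(pool_size=6, strides=3, padding='same')(x)
        return x

    inputs = [Input(shape=(600, 1)) for _ in range(6)]        # 2 leads x 3 sampling modes
    merged = concatenate([lead_channel(x) for x in inputs])   # 67 x 384 merged feature map
    weighted = AttentionWeight()(merged)                      # trainable element-wise attention
    features = LSTM(128)(weighted)                            # 128-dimensional feature vector
    y_pred = Dense(4, activation='softmax')(features)         # N, S, V, F class scores
    model = Model(inputs=inputs, outputs=y_pred)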
Third, learning the parameters of the deep neural network
First, the training parameters of the deep neural network are initialized and the sampled signals are divided into training set samples and test set samples; the division of the data set is shown in table 2. The composite-sampled multi-channel electrocardiosignals of the training set are input into the initialized deep neural network, in which cross entropy is used as the cost function: the categorical_crossentropy function in Keras is used, the model object is instantiated from the constructed functional Model, and the loss parameter is set to 'categorical_crossentropy'. Iteration is performed with the Adam optimizer with the goal of minimizing the cost function, by setting the optimizer parameter of the model.compile function to 'Adam'; the generated deep neural network is stored as the file model1.hd5, with the .hd5 suffix. The training parameters are updated once per iteration; training stops when the loss value and accuracy of the deep neural network stabilize near a certain value, and the training parameters and model structure information of the current network are saved. The deep neural network was trained for a total of 3000 batches of 64 samples each.
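A sketch of this training step is shown below, assuming X_train of shape (n, 6, 600, 1) after composite sampling and one-hot labels y_train of shape (n, 4); the epoch count is an arbitrary stand-in for "iterate until the loss and accuracy stabilise" (3000 batches of 64 samples in total in the embodiment).

    model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
    model.fit([X_train[:, i] for i in range(6)],   # six 600x1 input channels per sample
              y_train, batch_size=64, epochs=10)
    model.save('model1.hd5')                       # stores the weights and the model structure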
Fourth, automatically identifying the samples
All the divided test set samples are input into the stored neural network model1.hd5, and the deep neural network is run to obtain the 4-dimensional prediction vector output y_pred corresponding to the test set samples. The labels of the test set samples are converted into the 4-dimensional label vector y_label by one-hot coding, using the np_utils.to_categorical function provided in the keras utilities module. The output predictions are then compared with the test set labels to check whether the classification is correct: the number num of samples whose y_pred and y_label agree at the corresponding position is counted, and num divided by the total number of test set samples gives the final accuracy.
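A corresponding evaluation sketch, with X_test and test_labels as assumed variable names and AttentionWeight referring to the custom layer sketched above:

    import numpy as np
    from keras.models import load_model
    from keras.utils import to_categorical

    # the custom attention layer must be passed back in when reloading the model
    model = load_model('model1.hd5', custom_objects={'AttentionWeight': AttentionWeight})

    y_pred = model.predict([X_test[:, i] for i in range(6)])   # (n_test, 4) class scores
    y_label = to_categorical(test_labels, num_classes=4)       # one-hot ground truth

    num = np.sum(np.argmax(y_pred, axis=1) == np.argmax(y_label, axis=1))
    accuracy = num / float(len(test_labels))                   # fraction classified correctly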
Four types of heartbeats under the AAMI standard were labeled according to table 1.
(Table 1: label assignment for the four AAMI heartbeat classes in the mitdb data set; rendered as an image in the original.)
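Because Table 1 is only available as an image, the grouping below is shown purely for orientation: it is the commonly used AAMI EC57 mapping of mitdb beat symbols, which matches the class descriptions in the text, and is an assumption rather than a reproduction of the patent's table.

    AAMI_CLASSES = {
        'N': ['N', 'L', 'R', 'e', 'j'],   # normal and bundle branch block beats
        'S': ['A', 'a', 'J', 'S'],        # supraventricular ectopic beats
        'V': ['V', 'E'],                  # ventricular ectopic beats
        'F': ['F'],                       # fusion beats
    }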
A part of all heartbeats is randomly extracted as the classification training set of this embodiment, in the quantities shown in table 2; all remaining heartbeats, i.e. those not in the training set, are taken as the test set.
(Table 2: division of the heartbeats into training set and test set; rendered as an image in the original.)
According to the network structure parameters listed in table 3, the layer type of each network layer is set to the value at the corresponding position in the Type column, the output size of each layer to the value at the corresponding position in the Output column, the kernel size of each layer to the value at the corresponding position in the Kernel size column, and the stride of each layer to the value at the corresponding position in the Strides column.
(Table 3: network structure parameters, i.e. layer type, output size, kernel size and stride for each layer; rendered as an image in the original.)

Claims (6)

1. An automatic arrhythmia analysis method based on a lead deep neural network comprises the following steps:
1) performing composite sampling by adopting three sampling modes to generate a multi-channel electrocardiogram sample
a. For the electrocardiosignal of each lead, 100 points are taken before and 100 points after the heartbeat, and the segment is resampled to a vector of fixed dimension 600;
b. for the electrocardiosignal of each lead, the R-R interval spanning 2 periods before the heartbeat and the R-R interval of 1 period after the heartbeat are taken and resampled to a vector of fixed dimension 600;
c. for the electrocardiosignal of each lead, the R-R interval spanning 2 periods before the heartbeat is taken and resampled to a 300-dimensional vector, the R-R interval of 1 period after the heartbeat is taken and resampled to a 300-dimensional vector, and the two resampled signals are spliced to form a 600-dimensional signal;
splicing the 600-dimensional electrocardiosignals obtained by the three sampling modes along the second dimension, amplifying the electrocardiosignal of each lead from 600 × 1 dimensions to 600 × 3 dimensions, where 3 is the number of channels of that lead's electrocardiosignal; when the original electrocardiosignal has 2 leads, the electrocardiosignal has 600 × 6 dimensions; taking each 600-dimensional vector as the electrocardiosignal of one channel, this is equivalent to forming, from the original electrocardio data of each lead under the composite sampling mode, 6 electrocardiosignal samples X of dimension 600 × 1, which serve as the Input of the deep neural network model;
2) building a deep neural network
The deep neural network comprises a plurality of convolutional layer units and LSTM layer units connected in series in sequence, with an attention layer arranged between the convolutional layer units and the LSTM layer units as a connecting unit; each convolution layer unit comprises a convolution layer followed in series by an excitation unit operation and a pooling layer operation connected to the output end of the convolution layer; the convolution layer unit uses one-dimensional convolution to extract the features of the one-dimensional electrocardiosignal;
combining the electrocardiosignals X of all channels, and inputting the electrocardiosignals X as input signals into the convolution layer units connected in series;
the output of the LSTM layer unit is connected in series with a full connection layer whose excitation unit is softmax, which produces the output;
3) learning parameters of a deep neural network;
4) and automatically identifying the sample.
2. The method of claim 1, wherein the method comprises:
when the electrocardiogram data set has two lead signals, the dimension of the input signal is 6 × 600 × 1; each 600 × 1-dimensional signal among the inputs is fed into two convolution layer units connected in series; the output end of each convolution layer unit in each lead channel is connected in series with an excitation unit operation and a pooling layer operation in turn; the number of convolution kernels of the first convolution layer unit is 32 and the kernel size is 4, the excitation unit after the first convolution layer unit is the relu function, the pooling kernel size of the pooling layer unit is 6 and the pooling step size is 3, and the feature map dimension after the first pooling unit is 200 × 32; the number of convolution kernels of the second convolution layer unit is 64 and the kernel size is 5, the excitation unit after the second convolution layer unit is the relu function, the pooling kernel size of the pooling layer unit is 6 and the pooling step size is 3, and the feature map dimension after the second pooling unit is 67 × 64.
3. The method of claim 2, wherein the method comprises: the deep neural network comprises a convolutional layer unit and an LSTM layer unit which are sequentially connected in series;
the outputs of the two convolution layer units in each lead channel are merged by the merging layer along the last dimension, that is, the dimension where the depth of the feature map lies, and the merged output feature map is 67 × 384;
an attention unit is connected in series after the merging layer, wherein the attention unit constructs a weight matrix of the same dimension, 67 × 384, which is multiplied element-wise with the corresponding elements of the convolved feature map, and the weighted feature map output is 67 × 384;
the weighted feature map is input into an LSTM layer unit, the number of hidden units of the LSTM layer unit is 128, and the output feature dimension of the LSTM layer unit is 128;
the output of the LSTM layer unit is connected in series with a full connection layer whose excitation unit is softmax, and the output dimension of the full connection layer is 4, namely the number of categories;
finally, the deep neural network model outputs the prediction vector.
4. The method of claim 3, wherein the method comprises: the dimensionality of a prediction vector output by the deep neural network is 4; the method is constructed by using a keras open source framework and a python language, cross entropy is used as a loss function, and an Adam optimizer is used for optimizing the loss function.
5. The automatic arrhythmia analysis method based on a lead deep neural network of claim 1, 2, 3 or 4, wherein the parameters of the deep neural network are learned as follows: the training parameters of the deep neural network are initialized, and the sampled signals are divided into training set samples and test set samples; a part of the samples is randomly extracted from the total samples as the training set, and the other, unselected samples are regarded as the test set;
inputting the multichannel electrocardiosignals X in the training set into the initialized deep neural network, and performing iteration by taking a minimized cost function as a target to generate and store the deep neural network; and updating the training parameters once every iteration until the loss value and the accuracy of the deep neural network are stabilized near a certain value, and stopping training and storing the training parameters and the model structure information of the current network.
6. The method of claim 5, wherein the method comprises: the automatic identification of the sample is as follows: inputting all the divided test set samples into the stored neural network, operating the deep neural network to obtain 4-dimensional predicted value vector output corresponding to the test set samples, generating 4-dimensional label vectors by using a one-hot coding method for labels of the test set samples, and then comparing the output predicted values with the labels of the test set samples to check whether the classification is correct.
CN201810451726.5A 2018-05-12 2018-05-12 Automatic arrhythmia analysis method based on lead deep neural network Active CN108720831B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810451726.5A CN108720831B (en) 2018-05-12 2018-05-12 Automatic arrhythmia analysis method based on lead deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810451726.5A CN108720831B (en) 2018-05-12 2018-05-12 Automatic arrhythmia analysis method based on lead deep neural network

Publications (2)

Publication Number Publication Date
CN108720831A CN108720831A (en) 2018-11-02
CN108720831B true CN108720831B (en) 2021-01-15

Family

ID=63938246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810451726.5A Active CN108720831B (en) 2018-05-12 2018-05-12 Automatic arrhythmia analysis method based on lead deep neural network

Country Status (1)

Country Link
CN (1) CN108720831B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109602414B (en) * 2018-11-12 2022-01-28 安徽心之声医疗科技有限公司 Multi-view-angle conversion electrocardiosignal data enhancement method
CN110236521A (en) * 2019-05-20 2019-09-17 上海数创医疗科技有限公司 Double convolutional neural networks for ECG type identification
CN110251120B (en) * 2019-06-06 2021-03-30 山东省计算中心(国家超级计算济南中心) Electrocardiogram classification method of depth residual error neural network based on attention mechanism
CN110263684B (en) * 2019-06-06 2021-02-09 山东省计算中心(国家超级计算济南中心) Electrocardiogram classification method based on lightweight neural network
CN110141220B (en) * 2019-06-20 2022-03-15 鲁东大学 Myocardial infarction automatic detection system based on multi-mode fusion neural network
CN110558975B (en) * 2019-10-14 2020-12-01 齐鲁工业大学 Electrocardiosignal classification method and system
CN110890155B (en) * 2019-11-25 2022-10-28 中国科学技术大学 Multi-class arrhythmia detection method based on lead attention mechanism
CN111297350B (en) * 2020-02-27 2021-08-31 福州大学 Three-heart beat multi-model comprehensive decision-making electrocardiogram feature classification method integrating source end influence
CN112587148B (en) * 2020-12-01 2023-02-17 上海数创医疗科技有限公司 Template generation method and device comprising fuzzification similarity measurement method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8010471B2 (en) * 2007-07-13 2011-08-30 Microsoft Corporation Multiple-instance pruning for learning efficient cascade detectors
US8668649B2 (en) * 2010-02-04 2014-03-11 Siemens Medical Solutions Usa, Inc. System for cardiac status determination
US9737229B1 (en) * 2013-06-04 2017-08-22 Analytics For Life Noninvasive electrocardiographic method for estimating mammalian cardiac chamber size and mechanical function
US10426364B2 (en) * 2015-10-27 2019-10-01 Cardiologs Technologies Sas Automatic method to delineate or categorize an electrocardiogram
CN106901723A (en) * 2017-04-20 2017-06-30 济南浪潮高新科技投资发展有限公司 A kind of electrocardiographic abnormality automatic diagnosis method
CN107616791B (en) * 2017-07-26 2021-02-19 浙江大学 Standard 12-lead electrocardio piecewise linear reconstruction method based on self-adaptive electrocardiosignal region segmentation
CN107456227B (en) * 2017-08-16 2021-03-30 北京蓬阳丰业医疗设备有限公司 Full-lead electrocardiogram clustering template system and method
CN107811626A (en) * 2017-09-10 2018-03-20 天津大学 A kind of arrhythmia classification method based on one-dimensional convolutional neural networks and S-transformation
CN107510452B (en) * 2017-09-30 2019-10-08 扬美慧普(北京)科技有限公司 A kind of ECG detecting method based on multiple dimensioned deep learning neural network
CN107981858B (en) * 2017-11-27 2020-12-01 上海优加利健康管理有限公司 Automatic electrocardiogram heart beat identification and classification method based on artificial intelligence
CN107944493A (en) * 2017-11-30 2018-04-20 天津大学 A kind of the first lead electrocardiogram beat classification method based on convolutional neural networks

Also Published As

Publication number Publication date
CN108720831A (en) 2018-11-02

Similar Documents

Publication Publication Date Title
CN108875575B (en) Automatic arrhythmia analysis method based on multi-channel signal fusion deep neural network
CN108720831B (en) Automatic arrhythmia analysis method based on lead deep neural network
CN108937912B (en) Automatic arrhythmia analysis method based on deep neural network
CN108766557B (en) Automatic arrhythmia analysis method based on channel signal fusion neural network
CN110141219B (en) Lead fusion deep neural network-based myocardial infarction automatic detection method
CN110141220B (en) Myocardial infarction automatic detection system based on multi-mode fusion neural network
US10758139B2 (en) Automatic method to delineate or categorize an electrocardiogram
CN108764457B (en) Automatic arrhythmia analysis method based on sampling channel fusion deep neural network
CN111956212B (en) Inter-group atrial fibrillation recognition method based on frequency domain filtering-multi-mode deep neural network
EP3692901A1 (en) Automatic recognition and classification method for electrocardiogram heartbeat based on artificial intelligence
CN108962393B (en) Automatic arrhythmia analysis method based on compression figure neural network
CN111160139A (en) Electrocardiosignal processing method and device and terminal equipment
CN111275093B (en) Heart beat classification method and device for labeling electrocardiosignals with multiple labels
CN109091138B (en) Arrhythmia origin point judging device and mapping system
CN106725426A (en) A kind of method and system of electrocardiosignal classification
CN111785366B (en) Patient treatment scheme determination method and device and computer equipment
CN111626114B (en) Electrocardiosignal arrhythmia classification system based on convolutional neural network
CN111772619A (en) Electrocardiogram heart beat identification method, terminal device and storage medium
CN112270240B (en) Signal processing method, device, electronic equipment and storage medium
CN111956214A (en) QRS wave automatic detection method based on U-net end-to-end neural network model
CN110897634A (en) Electrocardiosignal generation method based on generation countermeasure network
CN112932433A (en) Arrhythmia automatic classification method of electrocardiogram
CN110192851B (en) Arrhythmia analysis system based on machine learning technology
CN113080984B (en) Myocardial infarction identification and positioning method based on CNN and LSTM
WO2023110477A1 (en) A computer implemented method and a system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221205

Address after: 264006 No.1, No. 10, Hengshan Road, Yantai District, China (Shandong) Pilot Free Trade Zone, Yantai City, Shandong Province

Patentee after: Fengxiang (Shandong) Medical Technology Co.,Ltd.

Address before: 264025 No. 186 Hongqi Middle Road, Zhifu District, Shandong, Yantai

Patentee before: LUDONG University
