CN111956212A - Inter-group atrial fibrillation identification method based on frequency domain filtering-multi-mode deep neural network - Google Patents

Publication number
CN111956212A
CN111956212A (application CN202010745422.7A)
Authority
CN
China
Prior art keywords
layer
neural network
convolution
electrocardiosignal
deep neural
Prior art date
Legal status
Granted
Application number
CN202010745422.7A
Other languages
Chinese (zh)
Other versions
CN111956212B (en)
Inventor
臧睦君
魏小晨
刘通
刘澳伟
刘胜强
Current Assignee
Ludong University
Original Assignee
Ludong University
Priority date
Filing date
Publication date
Application filed by Ludong University
Priority to CN202010745422.7A
Publication of CN111956212A
Application granted
Publication of CN111956212B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7203 Signal processing specially adapted for physiological signals or for diagnostic purposes for noise prevention, reduction or removal
    • A61B5/7225 Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses an inter-group atrial fibrillation identification method based on a frequency-domain-filtering multi-modal deep neural network, comprising the following steps: 1) searching for the R point of the electrocardiosignal and performing single-heartbeat detection on the single-lead electrocardiosignal sample; 2) performing composite sampling with three sampling modes to generate a multi-channel electrocardiosignal; 3) building a neural network model; 4) training the parameters of the convolutional neural network; 5) automatically identifying the test-set samples. The method addresses the weak generalization ability of traditional machine-learning frameworks, which arises because electrocardiograms differ across information management systems and across patients' pathology, age, and sex.

Description

Inter-group atrial fibrillation identification method based on frequency domain filtering-multi-mode deep neural network
Technical Field
The invention relates to the technical field of medical signal processing, in particular to an inter-group atrial fibrillation identification method based on a frequency-domain-filtering multi-modal deep neural network.
Background
In recent years, with advances in information technology, and in pattern recognition in particular, computer-aided diagnosis systems have become a promising clinical diagnosis solution owing to their fast and reliable analysis. With modern hardware, a patient's electrocardiogram (ECG) signal can be acquired easily. Clinical prevention and diagnosis of heart disease still rely primarily on a clinician's subjective, experience-based reading of the electrocardiogram; however, manually inferring subtle morphological changes across long, continuous runs of heartbeats is time-consuming and, because of fatigue, error-prone. A real-time computer-aided diagnosis system is therefore essential to help doctors monitor patients in real time and overcome these limitations of electrocardiogram evaluation.
A computer-aided diagnosis system can analyze the information in the electrocardiogram in real time. The technical approach is to extract a feature vector that represents the effective information of the electrocardiogram, input it into a classifier algorithm to obtain the category of the heartbeat, and then judge whether the heartbeat indicates atrial fibrillation; the automatic heartbeat identification system running on computer hardware is the core of such equipment. The main technical difficulty in the feature-extraction step is extracting morphological features, and whether reasonable features can be extracted directly affects the accuracy and reliability of the result. The invention therefore combines several sampling modes into a feature vector that is input to a classifier, which outputs a classification result indicating whether the heartbeat extracted in real time is a healthy heartbeat or an atrial fibrillation heartbeat; a doctor then makes a deeper diagnosis on that basis.
Disclosure of Invention
The invention aims to provide an inter-group atrial fibrillation identification method based on a frequency-domain-filtering multi-modal deep neural network, in order to address the weak generalization ability of traditional machine-learning frameworks, which arises because electrocardiograms differ across information management systems and across patients' pathology, age, and sex.
The inter-group atrial fibrillation identification method based on the frequency-domain-filtering multi-modal deep neural network comprises the following steps:
1) Search for the R point of the electrocardiosignal and perform single-heartbeat detection on the single-lead electrocardiosignal sample
Read in the single-lead electrocardiosignal data, pass it through a band-pass filter to obtain frequency-domain single-lead electrocardiosignal data, apply the Hilbert transform to the filtered data to obtain the envelope, and then find the peaks; set the peak threshold to 75% of the maximum peak value, screen all found peaks, and select all candidate R-point positions.
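This step can be sketched with SciPy as follows; the 5-15 Hz band edges, second-order Butterworth filter, and 0.2 s minimum peak spacing are illustrative assumptions, not values taken from the patent:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def detect_r_points(sig, fs, band=(5.0, 15.0), rel_height=0.75):
    """Band-pass filter, take the Hilbert envelope, then pick peaks above
    75% of the maximum envelope value (the threshold stated in the patent)."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, sig)            # zero-phase band-pass filtering
    envelope = np.abs(hilbert(filtered))      # Hilbert-transform envelope
    peaks, _ = find_peaks(envelope,
                          height=rel_height * envelope.max(),
                          distance=int(0.2 * fs))  # refractory spacing (assumed)
    return peaks
```

The returned indices are the candidate R-point positions to be screened further.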
2) Adopt three sampling modes for composite sampling to generate a multi-channel electrocardiosignal
a. For the electrocardiosignal of one lead, take 150 points before and 150 points after the R point as the center, fixing the dimension at 300;
b. For the electrocardiosignal of one lead, take 300 points before and after the R point as the center, then resample to the fixed dimension 300;
c. For the electrocardiosignal of one lead, take 450 points before and after the R point as the center, then resample to the fixed dimension 300;
Splice the 300-dimensional electrocardiosignals obtained by the three sampling modes side by side, so that each lead's signal grows from 300 × 1 to 300 × 3 dimensions. Perform operations a, b, and c on the data of every lead of the original signal and splice them side by side into a 300 × 3 × 12-dimensional electrocardiosignal sample X, where 12 refers to the 12 leads of the original signal; X serves as the input of the deep neural network model.
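For one lead and one detected R point, the composite sampling above can be sketched with SciPy's `resample` (boundary handling is omitted; the sketch assumes the R point lies at least 450 samples from the record edges):

```python
import numpy as np
from scipy.signal import resample

def composite_sample(lead, r, widths=(150, 300, 450), dim=300):
    """Cut three windows centred on the R point and bring each to `dim` points."""
    chans = []
    for w in widths:
        seg = lead[r - w: r + w]       # 2*w samples around the R point
        if len(seg) != dim:            # modes b and c: resample back to 300
            seg = resample(seg, dim)
        chans.append(seg)
    return np.stack(chans, axis=-1)    # shape (300, 3): one column per mode
```

Applying this to every lead and stacking the results gives the 300 × 3 × (number of leads) sample X.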
3) Building a neural network model
The deep neural network comprises three lead channels connected in parallel, each consisting of three serially connected bottom convolutional-layer units. A merging layer at the output of the lead channels merges their feature maps along the depth direction, i.e., the third dimension of the TensorFlow framework's conventional feature-map storage format. The merged output of the bottom convolutional layers is fed into a high-level fusion convolutional structure comprising two serially connected convolutional layers. Each bottom and high-level convolutional-layer unit contains a convolutional layer followed in series by an excitation-unit operation and a pooling-layer operation; the convolutional-layer units use one-dimensional convolution to extract one-dimensional electrocardiosignal features.
The output of the high-level fusion convolutional structure is connected in series to an attention layer and an LSTM layer; the output of the LSTM layer unit is connected in series to a fully connected layer whose excitation unit is softmax, which outputs the analysis result.
4) Train the parameters of the convolutional neural network
Initialize the convolutional-neural-network parameters and divide the whole electrocardiosignal data set U into data sets U1 and U2, with the samples of U1 as the training set and the samples of U2 as the test set. Input the training-set electrocardiosignal samples into the initialized neural network, iterate with minimization of the cost function as the objective, and generate and store the parameters of the convolutional neural network.
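The inter-group division of U into U1 and U2 assigns whole records, not individual beats, to disjoint sets; a minimal sketch (the 80/20 split fraction is an illustrative assumption):

```python
import numpy as np

def split_inter_group(n_records, train_frac=0.8, seed=0):
    """Assign whole ECG records to disjoint train/test groups (inter-group split)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_records)
    cut = int(train_frac * n_records)
    return idx[:cut], idx[cut:]   # U1 (training records), U2 (test records)
```

Because every beat sampled from a record inherits that record's group, no patient contributes beats to both the training set and the test set.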
5) Automatic identification of test set samples
When the electrocardiogram data set has a single lead, the input signal dimension is 300 × 1. The electrocardiosignal samples obtained by the three different sampling modes are input into 3 bottom convolutional layers respectively; each bottom convolutional layer comprises three convolutional-layer units, and the output of each unit is connected in series to an excitation-unit operation followed by a pooling-layer operation. The first convolutional-layer unit has 8 convolution kernels of size 16, a relu excitation unit, a pooling kernel of size 2, and pooling stride 1; the feature map after the first pooling unit has dimension 300 × 8. The second unit has 16 kernels of size 32, relu excitation, pooling kernel 2, stride 1; the feature map after the second pooling unit is 300 × 16. The third unit has 32 kernels of size 8, relu excitation, pooling kernel 2, stride 1; the feature map after the third pooling unit is 300 × 32.
These operations are applied to the three single-lead channels, and the output feature maps are spliced into a feature map of dimension 300 × 32 × 3, which is input into the high-level fusion convolutional layer. The high-level fusion convolutional layer comprises two convolutional layers that fuse the features of the three channels into one block, yielding a feature map of dimension 300 × 128.
The deep neural network thus consists of the bottom convolutional layers, the high-level fusion convolutional layer, and the LSTM layer unit.
The feature-map output of the high-level fusion convolutional layer is connected in series to an attention unit. The attention unit in the deep neural network constructs a weight matrix of the same 300 × 128 dimension, which is multiplied element-wise with the convolved feature map; the weighted feature map again has dimension 300 × 128. The elements of this weight matrix are trained by the neural network, with initial values drawn uniformly at random from 0 to 1. The weighted feature map is input into an LSTM layer unit with 128 hidden units, whose output feature dimension is 128. The output of the LSTM layer unit is connected in series to a fully connected layer of size 5 whose excitation unit is softmax. Finally, the prediction vector y_pred output by the deep neural network model has dimension 4, i.e., the number of categories.
The model is built with the Keras open-source framework in Python, uses cross entropy as the loss function, and optimizes it with the Adam optimizer.
the parameters of the learning deep neural network are as follows: initializing training parameters of the neural network, and dividing the sampled signals into training set samples and test set samples; that is, a part of the samples are randomly selected from the total samples to be used as a training set, and other unselected samples are used as a testing set. Inputting the multichannel electrocardiosignals X in the training set into the initialized deep neural network, and performing iteration by taking a minimized cost function as a target to generate and store the deep neural network; the training parameters are updated once every iteration until the loss value and the accuracy of the deep neural network are stabilized near a certain value, and then the training can be stopped and the training parameters and the model structure information of the current network can be stored;
the automatic identification of the sample is as follows: inputting all the divided test set samples into the stored neural network, operating the deep neural network to obtain 4-dimensional predicted value vector output corresponding to the test set samples, generating 4-dimensional label vectors by using a one-hot coding method for labels of the test set samples, and then checking whether the classification is correct or not by comparing the output predicted values with the labels of the test set samples;
the step 5) of automatically identifying the test set sample comprises the following steps:
inputting all the divided test set samples into the stored neural network model1.hd5, operating the deep neural network to obtain 4-dimensional predicted value vector output y _ pred corresponding to the test set samples, generating 4-dimensional label vector y _ label by using a one-hot coding method for labels of the test set samples, providing np _ utilis.to _ category function in a keras.utilis module to perform one-hot coding on the input test set labels, comparing the output predicted values with the labels of the test set samples to check whether the classification is correct, namely counting the number n of samples with the same corresponding position values of y _ pred and y _ label, and dividing n by the total number of the test set samples to obtain the final accuracy.
The invention provides an inter-group atrial fibrillation identification method based on a frequency-domain-filtering multi-modal deep neural network, comprising the following steps: 1) searching for the R point of the electrocardiosignal and performing single-heartbeat detection on the single-lead electrocardiosignal sample; 2) performing composite sampling with three sampling modes to generate a multi-channel electrocardiosignal; 3) building a neural network model; 4) training the parameters of the convolutional neural network; 5) automatically identifying the test-set samples. The method addresses the weak generalization ability of traditional machine-learning frameworks, which arises because electrocardiograms differ across information management systems and across patients' pathology, age, and sex.
Drawings
Fig. 1 is a diagram of a deep neural network architecture.
Detailed Description
Embodiment 1: inter-group atrial fibrillation identification method based on the frequency-domain-filtering multi-modal deep neural network
The invention is further described with reference to the following figures and detailed description of embodiments.
The specific example uses a data set provided by a BOE-TopCoder programming competition, which contains 8528 single-lead electrocardiosignals in four categories: N (normal electrocardiosignals), A (atrial fibrillation electrocardiosignals), O (non-atrial-fibrillation abnormal electrocardiosignals), and ~ (noise-containing electrocardiosignals). The labels of the four categories and their correspondence in the given data set are shown in Table 1. In this example, the method is implemented by software running on a computer in the matlab and python environments well known in the art.
[Table 1: category labels and their correspondence in the data set (image not reproduced)]
The detailed steps of this example are as follows:
First, the implementation of finding the R point and single-heartbeat detection
Read the single-lead electrocardiosignal data with matlab, pass it through a band-pass filter to obtain frequency-domain single-lead electrocardiosignal data, and apply the Hilbert transform to obtain the envelope. Search for peaks with matlab's built-in findpeaks function, set the peak threshold to 75% of the maximum peak value, screen all found peaks, and select and store all candidate R-point positions.
Second, the realization of generating multi-channel electrocardiosignal samples by adopting various sampling modes
The number of channels of the electrocardiosignal is amplified by the following three sampling modes:
(1) First sampling mode: for the electrocardiosignal of each lead, take 150 points before and after the R point as the center, and resample to dimension 300;
(2) Second sampling mode: for the electrocardiosignal of each lead, take 300 points before and after the R point as the center, and resample to dimension 300;
(3) Third sampling mode: for the electrocardiosignal of each lead, take 450 points before and after the R point as the center, and resample to dimension 300;
Splice the 300-dimensional electrocardiosignals obtained by the three sampling modes side by side, so that each lead's signal is amplified from 300 × 1 to 300 × 3 dimensions, where 3 is the number of channels of that lead's signal. The original electrocardio data of each lead, composite-sampled in this way, forms the 300 × 3-dimensional electrocardiosignal sample X used as the input of the deep neural network model.
Thirdly, building a deep neural network
(1) Concrete structure of the deep neural network
The input of the deep model is the electrocardiosignal X with its channels merged; because the given competition data is a single-lead signal, the input dimension is 300 × 3, where 3 denotes the number of channels. Each 300 × 1-dimensional channel of an input sample, i.e., the data of one lead's signal, is input into one of 3 bottom convolutional layers. Each bottom convolutional layer comprises three convolutional-layer units, and the output of each unit is connected in series to an excitation-unit operation followed by a pooling-unit operation. The first convolutional-layer unit has 8 convolution kernels of size 16, a relu excitation unit, a pooling kernel of size 2, and pooling stride 1; the feature map after the first pooling unit has dimension 300 × 8. The second unit has 16 kernels of size 32, relu excitation, pooling kernel 2, stride 1; the feature map after the second pooling unit is 300 × 16. The third unit has 32 kernels of size 8, relu excitation, pooling kernel 2, stride 1; the feature map after the third pooling unit is 300 × 32. The feature maps obtained by passing the samples from the 3 sampling modes through the 3 identical bottom convolutional layers are then combined into a final 300 × 32 × 3 feature map; the network parameters of the bottom convolutional layers are listed in Table 2.
[Table 2: network parameters of the bottom convolutional layers (image not reproduced)]
The 300 × 32 × 3-dimensional feature map output by the bottom convolutional layers is input into the high-level fusion convolutional layer, which comprises two convolutional-layer units. The first has 64 convolution kernels of size 8, a relu excitation unit, a pooling kernel of size 2, and pooling stride 1; the feature map after the first pooling unit is 300 × 64. The second has 128 kernels of size 16, a relu excitation unit, pooling kernel 2, and stride 1; the feature map after the second pooling unit is 300 × 128. The parameters of the high-level fusion convolutional layer are listed in Table 3.
[Table 3: parameters of the high-level fusion convolutional layer (image not reproduced)]
The feature-map output of the high-level fusion convolutional layer is connected in series to an attention unit. The attention unit in the deep neural network constructs a weight matrix of the same 300 × 128 dimension, which is multiplied element-wise with the convolved feature map; the weighted feature map again has dimension 300 × 128. The elements of this weight matrix are trained by the neural network, with initial values drawn uniformly at random from 0 to 1. The weighted feature map is input into an LSTM layer unit with 128 hidden units, whose output feature dimension is 128. The output of the LSTM layer unit is connected in series to a fully connected layer of size 5 whose excitation unit is softmax. Finally, the prediction vector y_pred output by the deep neural network model has dimension 4. The specific attention and LSTM parameters are listed in Table 4.
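The attention weighting described above is an element-wise (Hadamard) product of the feature map with a trainable matrix of the same shape; a NumPy sketch with random stand-in values:

```python
import numpy as np

rng = np.random.default_rng(0)
feat = rng.standard_normal((300, 128))      # feature map after the fusion layers
W = rng.uniform(0.0, 1.0, size=(300, 128))  # weight matrix, initialised U(0, 1)
weighted = feat * W                         # element-wise weighting; shape preserved
```

In the real network W is a trainable parameter updated by backpropagation; here it only illustrates the shape-preserving point multiplication.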
[Table 4: attention and LSTM parameters (image not reproduced)]
(2) Specific implementation of the deep neural network
The deep neural network is built with the functional Model in the Keras framework: the Model function is imported from the keras.models module, the model input is set to the composite-sampled multi-channel electrocardiosignal X, and the output is the prediction vector y_pred of dimension 4. The one-dimensional convolutional layers are constructed by importing the Convolution1D function from keras.layers, and the LSTM layer unit by importing the LSTM function from keras.layers.
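A runnable sketch of the architecture just described, using the TensorFlow/Keras functional API. Two details are assumptions made for illustration: `padding="same"` (so the stated 300-length feature maps work out), and an output layer of 4 units to match the stated dimensionality of y_pred:

```python
import numpy as np
from tensorflow.keras import layers, models, initializers

class ElementwiseAttention(layers.Layer):
    """Trainable weight matrix, same shape as the feature map, applied element-wise."""
    def build(self, input_shape):
        self.w = self.add_weight(name="w", shape=input_shape[1:],
                                 initializer=initializers.RandomUniform(0.0, 1.0),
                                 trainable=True)
    def call(self, x):
        return x * self.w

def bottom_branch(x):
    """Three Conv1D units: (8 kernels, size 16), (16, 32), (32, 8), relu + pooling."""
    for filters, ksize in [(8, 16), (16, 32), (32, 8)]:
        x = layers.Conv1D(filters, ksize, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(x)
    return x  # (None, 300, 32)

inputs = layers.Input(shape=(300, 3))
# one bottom branch per sampling-mode channel
branches = [bottom_branch(layers.Lambda(lambda t, i=i: t[:, :, i:i + 1])(inputs))
            for i in range(3)]
x = layers.Concatenate(axis=-1)(branches)      # merge along depth: (None, 300, 96)
# high-level fusion: (64 kernels, size 8) then (128, 16)
for filters, ksize in [(64, 8), (128, 16)]:
    x = layers.Conv1D(filters, ksize, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(pool_size=2, strides=1, padding="same")(x)
x = ElementwiseAttention()(x)                  # (None, 300, 128), weighted
x = layers.LSTM(128)(x)                        # (None, 128)
y_pred = layers.Dense(4, activation="softmax")(x)
model = models.Model(inputs, y_pred)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```

Calling `model` on a batch of 300 × 3 samples yields softmax probability vectors over the four categories.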
Fourthly, training the parameters of the deep neural network
First initialize the training parameters of the deep neural network and divide the sampled signals into training-set and test-set samples; the divided data sets are shown in Table 5. Input the composite-sampled multi-channel electrocardiosignals of the training set into the initialized deep neural network, which uses cross entropy as the cost function: with the categorical_crossentropy function in Keras, instantiate the model object from the constructed functional Model and set the parameter loss to 'categorical_crossentropy' in model.compile. Iterate with the Adam optimizer and minimization of the cost function as the objective, by setting the parameter optimizer in the model.compile function to 'adam', to generate the deep neural network and store it as the file model1.hd5 (suffix hd5). The training parameters are updated once per iteration; training stops once the loss value and accuracy of the deep neural network stabilize near fixed values, and the training parameters and model-structure information of the current network are stored. The deep neural network was trained for 8000 batches of 600 samples each. (Table 5 shows that the training and test samples are divided in inter-group fashion: the whole electrocardiosignal data set U is divided into data sets U1 and U2; the samples obtained from U1 by the three sampling modes form the training set, and the samples obtained from U2 by the three sampling modes form the test set.)
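The compile/fit/save calls can be sketched as follows. A small stand-in model replaces the full network so the fragment is self-contained; the batch size of 8 and single epoch are illustrative (the embodiment uses batches of 600 over 8000 iterations), and the file is saved as model1.h5 since current Keras rejects the .hd5 suffix:

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.utils import to_categorical

# stand-in for the frequency-domain-filtering multi-modal network described above
model = models.Sequential([layers.Input(shape=(300, 3)),
                           layers.Flatten(),
                           layers.Dense(4, activation="softmax")])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

X_train = np.random.randn(32, 300, 3).astype("float32")
y_train = to_categorical(np.random.randint(0, 4, size=32), num_classes=4)
history = model.fit(X_train, y_train, batch_size=8, epochs=1, verbose=0)
model.save("model1.h5")   # the embodiment names this file model1.hd5
```

The saved file holds both the trained parameters and the model-structure information, as the embodiment requires.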
[Table 5: inter-group division of the training and test sets (image not reproduced)]
Fifthly, automatically identifying the sample
Input all the divided test-set samples into the stored neural network model1.hd5 and run the deep neural network to obtain the 4-dimensional prediction-vector output y_pred for the test-set samples. Generate the 4-dimensional label vector y_label from the test-set labels by one-hot coding; the keras.utils module provides the np_utils.to_categorical function to one-hot code the input test-set labels. Compare the output predictions with the test-set labels to check whether the classification is correct: count the number n of samples for which the corresponding positions of y_pred and y_label agree, and divide n by the total number of test-set samples to obtain the final accuracy.

Claims (7)

1. An inter-group atrial fibrillation identification method based on a frequency-domain-filtering multi-modal deep neural network, comprising the following steps:
1) Searching for the R point of the electrocardiosignal and performing single-heartbeat detection on the single-lead electrocardiosignal sample:
reading in the single-lead electrocardiosignal data, passing it through a band-pass filter to obtain frequency-domain single-lead electrocardiosignal data, applying the Hilbert transform to obtain the envelope, then finding the peaks, setting the peak threshold to 75% of the maximum peak value, screening all found peaks, and selecting all candidate R-point positions;
2) Adopting three sampling modes for composite sampling to generate a multi-channel electrocardiosignal:
a. for the electrocardiosignal of one lead, taking 150 points before and after the R point as the center, fixing the dimension at 300;
b. for the electrocardiosignal of one lead, taking 300 points before and after the R point as the center, then resampling to the fixed dimension 300;
c. for the electrocardiosignal of one lead, taking 450 points before and after the R point as the center, then resampling to the fixed dimension 300;
splicing the 300-dimensional electrocardiosignals obtained by the three sampling modes side by side, so that each lead's signal grows from 300 × 1 to 300 × 3 dimensions; performing operations a, b, and c on the data of every lead of the original signal and splicing them side by side into a 300 × 3 × 12-dimensional electrocardiosignal sample X, where 12 refers to the 12 leads of the original signal, used as the input of the deep neural network model;
3) building a neural network model
The deep neural network comprises three lead channels which are sequentially connected in parallel, each lead channel consists of three serially connected bottom layer convolution layer units, a merging layer is arranged at the output end of each lead channel, the characteristic diagrams of each lead channel are merged along the depth direction, and the depth direction is the third dimension direction of the tensorflow frame general characteristic diagram storage form; the output results after passing through the bottom convolution layer and merging are input into a high-level fusion convolution layer structure comprising two convolution layers connected in series;
Each bottom-layer and high-layer convolutional layer unit comprises a convolutional layer whose output is followed in series by an excitation unit operation and a pooling layer operation; the convolutional layer units use one-dimensional convolution to extract one-dimensional electrocardiosignal features;
The output of the high-level fusion convolutional layer structure is connected in series to an attention layer and an LSTM layer; the output of the LSTM layer unit is connected in series to a fully connected layer whose excitation unit is softmax, which outputs the analysis result;
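A minimal Keras sketch of this three-channel architecture for the single-lead case. The claims fix the bottom-layer configuration (8/16/32 kernels of sizes 16/32/8, pool size 2 with stride 1) and the 300 × 128 fusion output; the 'same' padding that keeps the length at 300, the filter count of the first fusion layer (64), and the 4-class output are assumptions filled in for illustration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

class ElementwiseAttention(layers.Layer):
    """Trainable weight matrix with the same shape as the feature map,
    multiplied element-wise with it; initialised with values in [0, 1)."""
    def build(self, input_shape):
        self.w = self.add_weight(
            name='w', shape=tuple(input_shape[1:]),
            initializer=tf.keras.initializers.RandomUniform(0.0, 1.0),
            trainable=True)
    def call(self, x):
        return x * self.w

def bottom_channel(x):
    # three conv units: (filters, kernel) = (8, 16), (16, 32), (32, 8);
    # 'same' padding and pooling stride 1 keep the length at 300
    for f, k in [(8, 16), (16, 32), (32, 8)]:
        x = layers.Conv1D(f, k, padding='same', activation='relu')(x)
        x = layers.MaxPooling1D(pool_size=2, strides=1, padding='same')(x)
    return x

inputs = [layers.Input(shape=(300, 1)) for _ in range(3)]  # three sampling modes
merged = layers.Concatenate(axis=-1)(
    [bottom_channel(i) for i in inputs])                   # 300 x (32*3)
x = layers.Conv1D(64, 8, padding='same', activation='relu')(merged)  # assumed
x = layers.Conv1D(128, 8, padding='same', activation='relu')(x)      # 300 x 128
x = ElementwiseAttention()(x)
x = layers.LSTM(128)(x)
outputs = layers.Dense(4, activation='softmax')(x)  # 4-dim y_pred
model = Model(inputs, outputs)
```

The model would then be compiled with cross entropy and the Adam optimizer, as stated in claim 4.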
4) training the parameters of the convolutional neural network
Initializing the convolutional neural network parameters; dividing the whole electrocardiosignal data set U into data sets U1 and U2, taking the samples of U1 as the training set and the samples of U2 as the test set; inputting the electrocardiosignal samples of the training set into the initialized neural network and iterating with the goal of minimizing the cost function, thereby generating and saving the parameters of the convolutional neural network;
5) automatically identifying the test set samples.
2. The method for inter-group atrial fibrillation identification based on the frequency domain filtering-multi-modal deep neural network as claimed in claim 1, wherein:
when the electrocardiogram data set has a single lead, the dimension of the input signal is 300 × 1; the electrocardiosignal samples obtained by the three different sampling modes are respectively input into 3 bottom-layer convolutional channels, each comprising three convolutional layer units whose outputs are followed in series by an excitation unit operation and a pooling layer operation; the first convolutional layer unit has 8 convolution kernels of size 16, its excitation unit is the relu function, the pooling kernel size is 2 and the pooling stride is 1, and the feature map after the first pooling unit has dimension 300 × 8; the second convolutional layer unit has 16 convolution kernels of size 32, its excitation unit is the relu function, the pooling kernel size is 2 and the pooling stride is 1, and the feature map after the second pooling unit has dimension 300 × 16; the third convolutional layer unit has 32 convolution kernels of size 8, its excitation unit is the relu function, the pooling kernel size is 2 and the pooling stride is 1, and the feature map after the third pooling unit has dimension 300 × 32;
the finally output feature maps of the three single-lead channels are combined into a feature map of dimension 300 × 32 × 3 and input into the high-level fusion convolutional layer, which comprises two convolutional layers; the features of the three channels are fused into a feature map of dimension 300 × 128.
3. The method for inter-group atrial fibrillation identification based on the frequency domain filtering-multi-modal deep neural network as claimed in claim 2, wherein: the deep neural network consists of the bottom-layer convolutional layers, the high-level fusion convolutional layers and an LSTM layer unit; the output feature map of the high-level fusion convolutional layer is connected in series to an attention unit; the attention unit of the deep neural network constructs a weight matrix of the same 300 × 128 dimension, which is multiplied element-wise with the convolved feature map, so that the weighted feature map keeps the output dimension 300 × 128; the elements of the weight matrix are obtained by training the neural network, their initial values being random numbers in the range 0-1; the weighted feature map is input into the LSTM layer unit, whose number of hidden units is 128 and whose output feature dimension is 128; the output of the LSTM layer unit is connected in series to a fully connected layer whose excitation unit is softmax and whose number of units is 5; the prediction vector y_pred finally output by the deep neural network model has dimension 4, i.e. the number of categories.
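The weighting step described here is simply an element-wise (Hadamard) product that preserves the 300 × 128 shape; a numpy illustration with random stand-in values (in training, the weights would be learned rather than fixed):

```python
import numpy as np

def apply_attention(feature_map, weights):
    """Element-wise weighting of the fused feature map;
    output shape equals input shape (300 x 128)."""
    return feature_map * weights

rng = np.random.default_rng(0)
F = rng.normal(size=(300, 128))             # stand-in for the fused feature map
W = rng.uniform(0.0, 1.0, size=(300, 128))  # initial weights: random in [0, 1)
```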
4. The method for identifying inter-group atrial fibrillation according to claim 3, wherein: the prediction vector output by the deep neural network has dimension 4; the method is built with the Keras open-source framework and the Python language, uses cross entropy as the loss function, and optimizes the loss function with the Adam optimizer.
5. The method for identifying inter-group atrial fibrillation according to claim 4, wherein the parameters of the deep neural network are learned as follows: initialize the training parameters of the neural network and divide the sampled signals into training set samples and test set samples, randomly drawing a portion of the total samples as the training set and regarding the remaining unselected samples as the test set; input the multi-channel electrocardiosignals X of the training set into the initialized deep neural network and iterate with the goal of minimizing the cost function, generating and saving the deep neural network; update the training parameters once per iteration until the loss value and accuracy of the deep neural network stabilize near a certain value, then stop training and save the training parameters and model structure information of the current network.
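The random draw of a training subset described here might be sketched as follows (the claim does not fix the split proportion; the 80/20 default is an illustrative assumption):

```python
import numpy as np

def split_dataset(n_samples, train_frac=0.8, seed=0):
    """Randomly draw a training subset U1 of the sample indices;
    the remaining, unselected indices form the test set U2."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_samples)
    n_train = int(round(train_frac * n_samples))
    return perm[:n_train], perm[n_train:]
```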
6. The method for inter-group atrial fibrillation identification based on the frequency domain filtering-multi-modal deep neural network as claimed in claim 5, wherein the automatic identification of the samples is as follows: input all the divided test set samples into the saved neural network and run the deep neural network to obtain the 4-dimensional prediction vector output corresponding to the test set samples; generate 4-dimensional label vectors from the test set sample labels by one-hot encoding; then compare the output predictions with the test set sample labels to check whether the classification is correct.
7. The method for identifying inter-group atrial fibrillation according to any one of claims 1 to 6, wherein:
the step 5) of automatically identifying the test set samples comprises the following steps:
inputting all the divided test set samples into the saved neural network model1.hd5 and running the deep neural network to obtain the 4-dimensional prediction vector output y_pred corresponding to the test set samples; generating the 4-dimensional label vector y_label from the test set labels by one-hot encoding, using the to_categorical function provided in the keras.utils.np_utils module to one-hot encode the input test set labels; comparing the output predictions with the test set sample labels to check whether the classification is correct, i.e. counting the number n of samples whose values at corresponding positions of y_pred and y_label are the same, and dividing n by the total number of test set samples to obtain the final accuracy.
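The accuracy count at the end of this claim can be expressed in plain numpy (the hypothetical `one_hot` helper mirrors what to_categorical produces; "same values at corresponding positions" is read as agreement of the predicted and labelled classes):

```python
import numpy as np

def one_hot(labels, n_classes=4):
    """4-dimensional one-hot label vectors, as to_categorical would return."""
    return np.eye(n_classes)[labels]

def accuracy(y_pred, y_label):
    """Count the samples n whose predicted class (argmax of y_pred)
    matches the one-hot label, and divide by the total sample count."""
    n = np.sum(np.argmax(y_pred, axis=1) == np.argmax(y_label, axis=1))
    return n / len(y_label)
```

For example, four test samples with true classes 0, 1, 2, 3 and predictions whose argmaxes are 0, 1, 2, 0 give n = 3 and an accuracy of 0.75.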
CN202010745422.7A 2020-07-29 2020-07-29 Inter-group atrial fibrillation recognition method based on frequency domain filtering-multi-mode deep neural network Active CN111956212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010745422.7A CN111956212B (en) 2020-07-29 2020-07-29 Inter-group atrial fibrillation recognition method based on frequency domain filtering-multi-mode deep neural network

Publications (2)

Publication Number Publication Date
CN111956212A true CN111956212A (en) 2020-11-20
CN111956212B CN111956212B (en) 2023-08-01

Family

ID=73363465

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010745422.7A Active CN111956212B (en) 2020-07-29 2020-07-29 Inter-group atrial fibrillation recognition method based on frequency domain filtering-multi-mode deep neural network

Country Status (1)

Country Link
CN (1) CN111956212B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080103403A1 (en) * 2004-11-08 2008-05-01 Eyal Cohen Method and System for Diagnosis of Cardiac Diseases Utilizing Neural Networks
CN107184198A (en) * 2017-06-01 2017-09-22 广州城市职业学院 A kind of electrocardiosignal classifying identification method
CN108937912A (en) * 2018-05-12 2018-12-07 鲁东大学 A kind of automatic arrhythmia analysis method based on deep neural network
CN109303560A (en) * 2018-11-01 2019-02-05 杭州质子科技有限公司 A kind of atrial fibrillation recognition methods of electrocardiosignal in short-term based on convolution residual error network and transfer learning
US20190374166A1 (en) * 2018-06-08 2019-12-12 Cambridge Heartwear Limited Detecting abnormalities in ecg signals
CN110680310A (en) * 2019-10-21 2020-01-14 北京航空航天大学 Electrocardiosignal atrial fibrillation detection method based on one-dimensional dense connection convolution network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
韦张跃昊, 钱升谊 (Wei Zhangyuehao, Qian Shengyi): "Electrocardiosignal classification based on filtering reconstruction and convolutional neural networks" *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112577747A (en) * 2020-12-07 2021-03-30 东南大学 Rolling bearing fault diagnosis method based on space pooling network
CN112577747B (en) * 2020-12-07 2023-08-08 东南大学 Rolling bearing fault diagnosis method based on space pooling network
CN112633195A (en) * 2020-12-28 2021-04-09 杭州电子科技大学 Myocardial infarction identification and classification method based on frequency domain features and deep learning
CN112633195B (en) * 2020-12-28 2024-02-13 杭州电子科技大学 Myocardial infarction recognition and classification method based on frequency domain features and deep learning
CN113080992A (en) * 2021-03-30 2021-07-09 北京芯动卫士科技有限公司 Atrial fibrillation identification method based on DCGAN and deep neural network method
CN113180690A (en) * 2021-04-21 2021-07-30 上海数创医疗科技有限公司 Electrocardio diagnosis model and electrocardio detection device
CN113288162A (en) * 2021-06-03 2021-08-24 北京航空航天大学 Short-term electrocardiosignal atrial fibrillation automatic detection system based on self-adaptive attention mechanism
CN113288162B (en) * 2021-06-03 2022-06-28 北京航空航天大学 Short-term electrocardiosignal atrial fibrillation automatic detection system based on self-adaptive attention mechanism
TWI827239B (en) * 2021-10-06 2023-12-21 長庚大學 Method for assisting identification of symptoms
CN114224351A (en) * 2022-01-13 2022-03-25 浙江好络维医疗技术有限公司 Atrial fibrillation identification method based on fusion of multiple deep learning models

Also Published As

Publication number Publication date
CN111956212B (en) 2023-08-01

Similar Documents

Publication Publication Date Title
CN111956212B (en) Inter-group atrial fibrillation recognition method based on frequency domain filtering-multi-mode deep neural network
CN108875575B (en) Automatic arrhythmia analysis method based on multi-channel signal fusion deep neural network
CN110141220B (en) Myocardial infarction automatic detection system based on multi-mode fusion neural network
CN111160139B (en) Electrocardiosignal processing method and device and terminal equipment
CN108720831B (en) Automatic arrhythmia analysis method based on lead deep neural network
CN110141219B (en) Lead fusion deep neural network-based myocardial infarction automatic detection method
CN108937912B (en) Automatic arrhythmia analysis method based on deep neural network
EP3306500A1 (en) Method for analysing medical treatment data based on deep learning, and intelligent analyser thereof
CN109544518B (en) Method and system applied to bone maturity assessment
CN108766557B (en) Automatic arrhythmia analysis method based on channel signal fusion neural network
Ahishali et al. Advance warning methodologies for covid-19 using chest x-ray images
CN111133526A (en) Discovering novel features that can be used in machine learning techniques, e.g. for diagnosing medical conditions
US11062792B2 (en) Discovering genomes to use in machine learning techniques
CN107967686B (en) Epilepsy recognition device combining dynamic brain network and long-time and short-time memory network
CN113080994A (en) Multi-lead electrocardiosignal classification method based on convolutional neural network
CN111956214A (en) QRS wave automatic detection method based on U-net end-to-end neural network model
Chen et al. Heart sound segmentation via duration long–short term memory neural network
KR20220077583A (en) Method and apparatus of explainable multi electrocardiogram arrhythmia diagnosis
Feng et al. Unsupervised semantic-aware adaptive feature fusion network for arrhythmia detection
CN110897634A (en) Electrocardiosignal generation method based on generation countermeasure network
CN113662560A (en) Method for detecting seizure-like discharge between attacks, storage medium and device
Xu et al. Interpatient ECG arrhythmia detection by residual attention CNN
CN115281688A (en) Cardiac hypertrophy multi-label detection system based on multi-mode deep learning
CN114983343A (en) Sleep staging method and system, computer-readable storage medium and electronic device
Nagadeepa et al. Artificial Intelligence based Cervical Cancer Risk Prediction Using M1 Algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant