CN110942825A - Electrocardiogram diagnosis method based on combination of convolutional neural network and recurrent neural network - Google Patents

Electrocardiogram diagnosis method based on combination of convolutional neural network and recurrent neural network

Info

Publication number
CN110942825A
CN110942825A (application number CN201911177079.4A)
Authority
CN
China
Prior art keywords
neural network
data
training
layer
convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911177079.4A
Other languages
Chinese (zh)
Inventor
李晓华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huayi Sharing Medical Technology Co Ltd
Original Assignee
Beijing Huayi Sharing Medical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huayi Sharing Medical Technology Co Ltd filed Critical Beijing Huayi Sharing Medical Technology Co Ltd
Priority to CN201911177079.4A priority Critical patent/CN110942825A/en
Publication of CN110942825A publication Critical patent/CN110942825A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention relates to the technical field of electrocardiosignal processing, and in particular to an electrocardiographic diagnosis method based on the combination of a convolutional neural network and a recurrent neural network. The method comprises the following steps: S1, acquiring electrocardiosignal training data, attaching a label to each training record, and performing data preprocessing; S2, performing data enhancement on the preprocessed training data; S3, constructing a joint neural network model and training it with the enhanced training data to obtain a trained model; S4, acquiring a target electrocardiosignal, inputting it into the trained model for calculation, and outputting a probability value; and S5, judging positive or negative according to the output probability value to obtain the classification result. The invention effectively improves the efficiency and accuracy of electrocardiographic diagnosis, and the trained model can cover a variety of complex electrocardiographic characteristics, which facilitates transfer learning on new data.

Description

Electrocardiogram diagnosis method based on combination of convolutional neural network and recurrent neural network
Technical Field
The invention relates to the technical field of electrocardiosignal processing, and in particular to an electrocardiographic diagnosis method based on the combination of a convolutional neural network and a recurrent neural network.
Background
Electrocardiographic examination is a routine examination item in hospitals, and the electrocardiogram is the most basic evidence on which doctors judge the heart condition of a patient. Electrocardiosignals are electrical signals converted from the non-stationary, periodic biological signals produced by cardiac activity and contain a large amount of complex information about the heart's activity. The current field of electrocardiographic diagnosis still has several problems: 1. the training period for diagnosing physicians is long and costly, and there is a large shortage of highly skilled specialists; 2. subjective differences between doctors are large, and the diagnostic standard is difficult to unify completely; 3. doctors observe only with the naked eye, so a lot of low-level information remains invisible and the information utilization rate is low; 4. there are many similar diseases, and reading the electrocardiograms involves a great deal of repetitive labor; 5. reading an electrocardiogram takes a long time, from at least several minutes up to several hours. The current solutions to these problems are traditional digital signal processing methods and processing methods based on machine learning or statistical learning models.
With the traditional digital signal processing methods, features are extracted manually for the electrocardiogram of a specific disease and then judged against a threshold. This approach requires a large amount of medical and signal-processing experience, has no generality and, once the disease changes, its accuracy is low or it cannot work at all. Methods using machine learning or statistical learning models achieve a certain degree of automation and generality, and their accuracy is greatly improved compared with the first approach, but such models still lack expressive power, cannot cover a variety of complex conditions, and are difficult to use for transfer learning.
Disclosure of Invention
To address the defects of the prior art, the invention provides an electrocardiographic diagnosis method based on the combination of a convolutional neural network and a recurrent neural network. When the method is applied, the efficiency and accuracy of electrocardiographic diagnosis can be effectively improved, and the trained model can cover a variety of complex electrocardiographic characteristics, which facilitates transfer learning on new data.
The technical scheme adopted by the invention is as follows:
The electrocardiographic diagnosis method based on the combination of a convolutional neural network and a recurrent neural network comprises the following steps:
S1, acquiring electrocardiosignal training data, attaching a label to each training record, and performing data preprocessing;
S2, performing data enhancement on the preprocessed training data;
S3, constructing a joint neural network model combining a convolutional neural network and a recurrent neural network, and training the joint neural network model with the enhanced training data to obtain a trained model;
S4, acquiring a target electrocardiosignal, inputting it into the trained model for calculation, and outputting a probability value;
S5, judging positive or negative according to the output probability value to obtain the classification result.
Preferably, in step S1, the acquired training data of electrocardiographic signals includes at least 600 case data sets, each case data set having 5000 time steps and 12 channels.
Preferably, in step S1, the step of preprocessing the training data includes:
S11, reading the electrocardiosignal training data of all channels;
S12, constructing a multi-channel data matrix: arranging the read multi-channel electrocardiographic data in a matrix of the form [time_step, channel], where time_step is the time step, i.e. the number of sampling points in time order, and channel is the number of channels;
S13, performing data normalization on the data matrix: in the feature dimension, subtracting from each feature value the mean of all features at that time step and then dividing the result by the standard deviation of all features at that time step;
S14, 0-1 encoding the label corresponding to the normalized data: a positive case is 1 and a negative case is 0.
Preferably, in step S13, the formula for the data normalization of the data matrix is:
F_new = (F_old - μ) / σ
where F_new is the feature value after normalization, F_old is the feature value before normalization, μ is the mean of all feature values at that time step, and σ is the standard deviation of all feature values at that time step.
Preferably, in step S2, the step of enhancing the preprocessed training data includes:
S21, advancing or delaying the data by a set range in the time-step dimension;
S22, adding Gaussian noise to the data;
S23, reversing the time order of the data.
Preferably, in step S3, the constructed joint neural network model comprises, in the forward-propagation direction:
a first one-dimensional convolutional layer Conv1D with 32 output channels, a convolution kernel of 15, a stride of 7, relu activation and valid padding;
a second one-dimensional convolutional layer Conv1D with 64 output channels, a convolution kernel of 11, a stride of 3, relu activation and valid padding;
a one-dimensional max-pooling layer MaxPooling1D with a pooling kernel of 2;
a third one-dimensional convolutional layer Conv1D with 128 output channels, a convolution kernel of 7, a stride of 2, relu activation and valid padding;
a first long short-term memory layer LSTM with 128 output channels;
a second long short-term memory layer LSTM with 256 output channels;
a third long short-term memory layer LSTM with 512 output channels;
a random-inactivation layer Dropout that randomly deactivates half of the neurons;
a first fully connected layer Dense with 256 neurons and relu activation;
and a second fully connected layer Dense with 2 neurons and softmax activation, which outputs the probabilities of the negative and positive electrocardiographic cases.
Preferably, in step S3, the training of the joint neural network model includes:
S31, reading a batch of electrocardiogram sample data from the training data;
S32, inputting the sample data into the joint neural network model for forward propagation to obtain the prediction probabilities;
S33, computing the cross-entropy loss between the prediction probabilities and the true label probabilities to obtain an average loss value;
S34, back-propagating the average loss layer by layer and updating the parameters by gradient descent;
S35, repeating steps S31 to S34 until the average loss value falls within the required range.
Preferably, in step S33, the cross entropy loss function calculation formula is:
loss = -(1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]
where loss is the average loss value of a batch, N is the number of samples in the batch, y_i is the true label probability of the i-th sample, and ŷ_i is the predicted probability of the i-th sample.
Preferably, in step S5, the positive/negative example judgment formula based on the output probability value is:
result = positive if p ≥ p_0, and negative if p < p_0
where p is the probability value output by the trained model and p_0 is the division threshold between positive and negative examples, set to 0.5.
The invention has the beneficial effects that:
according to the invention, through constructing the joint neural network model, extracting a large amount of electrocardiosignal training data for the joint neural network model to carry out various complex characteristic training, obtaining the training model with high compatibility and wide coverage, and then utilizing the training model to realize end-to-end automatic electrocardio diagnosis, the diagnosis efficiency and accuracy are greatly improved compared with the prior art, the diagnosis is wide, various complex conditions can be covered, simultaneously, the newly added data can be migrated and learned, the model upgrading is simple and easy to operate, and the maximum compatibility of the original knowledge can be ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a block diagram of a step implementation of the present invention;
FIG. 2 is a schematic diagram of a joint neural network model structure.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It should be understood that the terms first, second, etc. are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. The term "/and" herein describes another association, indicating that two relationships may exist; for example, A/and B may mean: A exists alone, or A and B exist at the same time. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
It is to be understood that in the description of the present invention, the terms "upper", "vertical", "inside", "outside", and the like, refer to an orientation or positional relationship that is conventionally used for placing the product of the present invention, or that is conventionally understood by those skilled in the art, and are used merely for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and therefore should not be considered as limiting the present invention.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "disposed," "mounted," and "connected" are to be construed broadly, e.g., as meaning fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
In the following description, specific details are provided to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
Example 1:
This embodiment provides an electrocardiographic diagnosis method based on the combination of a convolutional neural network and a recurrent neural network, as shown in Fig. 1.
the method comprises the following steps:
S1, acquiring electrocardiosignal training data, attaching a label to each training record, and performing data preprocessing;
S2, performing data enhancement on the preprocessed training data;
S3, constructing a joint neural network model combining a convolutional neural network and a recurrent neural network, and training the joint neural network model with the enhanced training data to obtain a trained model;
S4, acquiring a target electrocardiosignal, inputting it into the trained model for calculation, and outputting a probability value;
S5, judging positive or negative according to the output probability value to obtain the classification result.
As shown in Fig. 2, in step S3, the joint neural network model comprises, in the forward-propagation direction (a Keras-style sketch is given after this list):
a first one-dimensional convolutional layer Conv1D with 32 output channels, a convolution kernel of 15, a stride of 7, relu activation and valid padding;
a second one-dimensional convolutional layer Conv1D with 64 output channels, a convolution kernel of 11, a stride of 3, relu activation and valid padding;
a one-dimensional max-pooling layer MaxPooling1D with a pooling kernel of 2;
a third one-dimensional convolutional layer Conv1D with 128 output channels, a convolution kernel of 7, a stride of 2, relu activation and valid padding;
a first long short-term memory layer LSTM with 128 output channels;
a second long short-term memory layer LSTM with 256 output channels;
a third long short-term memory layer LSTM with 512 output channels;
a random-inactivation layer Dropout that randomly deactivates half of the neurons to avoid overfitting;
a first fully connected layer Dense with 256 neurons and relu activation;
and a second fully connected layer Dense with 2 neurons and softmax activation, which outputs the probabilities of the negative and positive electrocardiographic cases.
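The layer stack above maps directly onto a few lines of Keras. The following is a minimal sketch only: it assumes a TensorFlow/Keras implementation (the patent does not name a framework), takes the input shape (5000, 12) from the 5000 time steps and 12 channels of the training data, and adds return_sequences flags, which are an implementation detail required to stack the LSTM layers.

```python
# Minimal Keras sketch of the joint CNN + LSTM model described above.
# Assumption: TensorFlow/Keras; the patent itself does not prescribe a framework.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dropout, Dense

def build_joint_model(time_steps=5000, channels=12):
    model = Sequential([
        # Three 1-D convolutional layers extract local waveform features
        Conv1D(32, 15, strides=7, activation='relu', padding='valid',
               input_shape=(time_steps, channels)),
        Conv1D(64, 11, strides=3, activation='relu', padding='valid'),
        MaxPooling1D(pool_size=2),
        Conv1D(128, 7, strides=2, activation='relu', padding='valid'),
        # Three stacked LSTM layers model the temporal dependencies
        LSTM(128, return_sequences=True),
        LSTM(256, return_sequences=True),
        LSTM(512),
        # Randomly deactivate half of the neurons to reduce overfitting
        Dropout(0.5),
        Dense(256, activation='relu'),
        # Two output neurons: probabilities of the negative and positive ECG cases
        Dense(2, activation='softmax'),
    ])
    return model
```

Calling build_joint_model().summary() reproduces the layer order of Fig. 2 and makes it easy to check the intermediate sequence lengths produced by the strides and the pooling.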
Convolutional neural network structures generally include:
An input layer: the input layer of a convolutional neural network can process multidimensional data. The input layer of a one-dimensional convolutional neural network receives a one-dimensional or two-dimensional array, where a one-dimensional array is usually a time-domain or spectral sample and a two-dimensional array may contain multiple channels; the input layer of a two-dimensional convolutional neural network receives a two-dimensional or three-dimensional array; the input layer of a three-dimensional convolutional neural network receives a four-dimensional array. Because convolutional neural networks are widely applied in computer vision, many studies assume three-dimensional input data, i.e. two-dimensional pixels on a plane plus RGB channels, when introducing their structure. As with other neural network algorithms, the input features of a convolutional neural network require normalization because learning uses a gradient descent algorithm. Specifically, before the learning data are fed into the convolutional neural network, the input data need to be normalized in the channel or time/frequency dimension; if the input data are pixels, the raw pixel values can be normalized to a fixed interval. Standardizing the input features helps improve the learning efficiency and the performance of the convolutional neural network.
Hidden layers: the hidden layers of a convolutional neural network comprise three common types of structure, namely convolutional layers, pooling layers and fully connected layers; some more modern architectures may contain more complicated structures such as Inception modules and residual blocks. In common architectures, the convolutional and pooling layers are what characterize a convolutional neural network. The convolution kernels in a convolutional layer contain weight coefficients whereas a pooling layer does not, so in the literature the pooling layer is sometimes not counted as a separate layer. Taking LeNet-5 as an example, the usual order of the three common structures in the hidden part is: input → convolutional layer → pooling layer → fully connected layer → output. After the convolutional layer performs feature extraction, the output feature map is passed to the pooling layer for feature selection and information filtering. The pooling layer contains a preset pooling function whose role is to replace the value of a single point in the feature map with a statistic of its neighbouring region. The way the pooling layer selects pooling regions is the same as the way the convolution kernel scans the feature map, controlled by the pooling size, the stride and the padding.
An output layer: in a convolutional neural network the output layer is usually preceded by a fully connected layer, so its structure and working principle are the same as the output layer of a conventional feedforward neural network. For image classification problems, the output layer outputs the classification labels using a logistic function or a normalized exponential function (softmax). In object detection problems, the output layer can be designed to output the centre coordinates, size and class of the object. In image semantic segmentation, the output layer directly outputs the classification result of each pixel.
A recurrent neural network (RNN) is a class of neural networks that takes sequence data as input, recurses along the evolution direction of the sequence, and whose recurrent units are connected in a chain. Recurrent neural networks have memory, share parameters and are Turing complete, and therefore have certain advantages in learning the nonlinear characteristics of a sequence. Recurrent neural networks are applied in natural language processing (NLP) tasks such as speech recognition, language modelling and machine translation, and are also used for various kinds of time-series prediction. A recurrent neural network combined with a convolutional neural network (CNN) can also handle computer-vision problems that involve sequence input.
Example 2:
As an optimization of the above embodiment, in step S1, the step of performing data preprocessing on the training data includes:
S11, reading the electrocardiosignal training data of all channels; common configurations are 1, 3, 6, 12 or 18 channels;
S12, constructing a multi-channel data matrix: arranging the read multi-channel electrocardiographic data in a matrix of the form [time_step, channel], where time_step is the time step, i.e. the number of sampling points in time order, and channel is the number of channels;
S13, performing data normalization on the data matrix: in the feature dimension (i.e. the channel dimension), subtracting from each feature value the mean of all features at that time step and then dividing the result by the standard deviation of all features at that time step; the normalization is expressed by the following formula:
F_new = (F_old - μ) / σ
where F_new is the feature value after normalization, F_old is the feature value before normalization, μ is the mean of all feature values at that time step, and σ is the standard deviation of all feature values at that time step;
S14, 0-1 encoding the label corresponding to the normalized data: a positive case is 1 and a negative case is 0.
The collected electrocardiosignal training data comprise at least 600 cases, each with 5000 time steps and 12 channels.
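As a concrete illustration of steps S11 to S14, the per-time-step standardization and the 0-1 label encoding can be sketched in Python/NumPy as follows. This is a sketch only: it assumes one recording is already available as a [time_step, channel] array, and the small eps term is an extra safeguard of ours against division by zero that is not mentioned in the patent.

```python
import numpy as np

def preprocess_record(ecg, label_is_positive, eps=1e-8):
    """Standardize one ECG recording of shape [time_step, channel] per time step."""
    ecg = np.asarray(ecg, dtype=np.float32)
    # Mean and standard deviation over the feature (channel) dimension,
    # computed separately for every time step.
    mu = ecg.mean(axis=1, keepdims=True)           # shape [time_step, 1]
    sigma = ecg.std(axis=1, keepdims=True) + eps   # eps avoids division by zero
    normalized = (ecg - mu) / sigma                # F_new = (F_old - mu) / sigma
    # 0-1 encode the label: positive case -> 1, negative case -> 0
    label = 1 if label_is_positive else 0
    return normalized, label
```

For a 12-lead recording of 5000 sampling points, preprocess_record would be called on a 5000 x 12 matrix and would return the standardized matrix together with its 0/1 label.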
Example 3:
As an optimization of the above embodiment, in step S2, the step of performing data enhancement on the preprocessed training data includes:
S21, advancing or delaying the data by a set range in the time-step dimension;
S22, adding Gaussian noise to the data;
S23, reversing the time order of the data.
The new data generated by data enhancement are added to the data set as new samples.
Data enhancement improves the final generalization ability of the network by increasing the diversity and completeness of the data; if the original amount of data is sufficient, data enhancement can be omitted.
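A NumPy sketch of the three operations follows. The shift range max_shift and the noise level noise_std are illustrative placeholders, since the patent only speaks of a "set range" for the shift and of Gaussian noise without giving magnitudes; np.roll is used here as one simple way to realize the advance/delay.

```python
import numpy as np

def augment_record(ecg, max_shift=250, noise_std=0.05, rng=np.random):
    """Return three augmented copies of one [time_step, channel] ECG record."""
    augmented = []
    # S21: advance or delay the signal along the time-step dimension
    shift = rng.randint(-max_shift, max_shift + 1)
    augmented.append(np.roll(ecg, shift, axis=0))
    # S22: add Gaussian noise
    augmented.append(ecg + rng.normal(0.0, noise_std, size=ecg.shape))
    # S23: reverse the time order
    augmented.append(ecg[::-1].copy())
    return augmented
```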
Example 4:
As an optimization of the above embodiment, in step S3, the training of the joint neural network model includes:
S31, reading a batch of electrocardiogram sample data from the training data;
S32, inputting the sample data into the joint neural network model for forward propagation to obtain the prediction probabilities;
S33, computing the cross-entropy loss between the prediction probabilities and the true label probabilities to obtain an average loss value, where the cross-entropy loss is calculated as:
loss = -(1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]
where loss is the average loss value of a batch, N is the number of samples in the batch, y_i is the true label probability of the i-th sample, and ŷ_i is the predicted probability of the i-th sample;
S34, back-propagating the average loss layer by layer and updating the parameters by gradient descent;
S35, repeating steps S31 to S34 until the average loss value falls within the required range.
Training parameter configuration:
Optimizer: RMSprop
Loss function: binary_crossentropy
Learning rate: 0.01
Number of iterations: about 20 epochs.
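Combined with the build_joint_model sketch from Example 1, this configuration could be expressed roughly as follows. Only the optimizer, loss function, learning rate and epoch count come from the table above; the batch size, the validation split and the random stand-in arrays are illustrative assumptions.

```python
import numpy as np
from tensorflow.keras.optimizers import RMSprop

model = build_joint_model()  # Keras sketch from Example 1

model.compile(optimizer=RMSprop(learning_rate=0.01),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# Stand-in data with the documented shape (replace with real preprocessed records):
# at least 600 cases of 5000 time steps x 12 channels, labels one-hot encoded over 2 classes.
x_train = np.random.randn(600, 5000, 12).astype('float32')
y_train = np.eye(2)[np.random.randint(0, 2, size=600)]

model.fit(x_train, y_train,
          batch_size=32,         # not specified in the patent; illustrative value
          epochs=20,             # "about 20 rounds" of iteration
          validation_split=0.1)  # illustrative hold-out split
```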
Example 5:
As an optimization of the above embodiment, in step S5, the positive/negative example judgment based on the output probability value is:
result = positive if p ≥ p_0, and negative if p < p_0
where p is the probability value output by the trained model and p_0 is the division threshold between positive and negative examples, set to 0.5.
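In code the decision rule is a single comparison against the threshold p_0. The helper below is a hypothetical illustration in which p is taken to be the positive-class probability from the two-element softmax output.

```python
def classify(p, p0=0.5):
    """Map the positive-class probability p to a positive/negative verdict."""
    return 'positive' if p >= p0 else 'negative'

# e.g. classify(0.73) -> 'positive', classify(0.12) -> 'negative'
```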
When the method is applied, a joint neural network model is constructed and a large amount of electrocardiosignal training data is used to train it on a variety of complex characteristics, yielding a trained model with high compatibility and wide coverage; this trained model is then used to realize end-to-end automatic electrocardiographic diagnosis. Compared with the prior art, the diagnosis efficiency and accuracy are greatly improved, the diagnostic scope is wide and covers a variety of complex conditions, newly added data can be incorporated through transfer learning, model upgrading is simple and easy to operate, and maximum compatibility with the original knowledge is ensured.
The present invention is not limited to the above-described optional embodiments, and anyone may derive products in various other forms in light of the present invention. The above detailed description should not be construed as limiting the scope of the invention, which is defined by the claims, and the description is to be interpreted accordingly.

Claims (8)

1. An electrocardiographic diagnosis method based on the combination of a convolutional neural network and a recurrent neural network, characterized by comprising the following steps:
S1, acquiring electrocardiosignal training data, attaching a label to each training record, and performing data preprocessing;
S2, performing data enhancement on the preprocessed training data;
S3, constructing a joint neural network model combining a convolutional neural network and a recurrent neural network, and training the joint neural network model with the enhanced training data to obtain a trained model;
S4, acquiring a target electrocardiosignal, inputting it into the trained model for calculation, and outputting a probability value;
S5, judging positive or negative according to the output probability value to obtain the classification result;
in step S3, the constructed joint neural network model comprises, in the forward-propagation direction:
a first one-dimensional convolutional layer Conv1D with 32 output channels, a convolution kernel of 15, a stride of 7, relu activation and valid padding;
a second one-dimensional convolutional layer Conv1D with 64 output channels, a convolution kernel of 11, a stride of 3, relu activation and valid padding;
a one-dimensional max-pooling layer MaxPooling1D with a pooling kernel of 2;
a third one-dimensional convolutional layer Conv1D with 128 output channels, a convolution kernel of 7, a stride of 2, relu activation and valid padding;
a first long short-term memory layer LSTM with 128 output channels;
a second long short-term memory layer LSTM with 256 output channels;
a third long short-term memory layer LSTM with 512 output channels;
a random-inactivation layer Dropout that randomly deactivates half of the neurons;
a first fully connected layer Dense with 256 neurons and relu activation;
and a second fully connected layer Dense with 2 neurons and softmax activation, which outputs the probabilities of the negative and positive electrocardiographic cases.
2. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 1, wherein: in step S1, the acquired electrocardiosignal training data includes at least 600 cases, each case having 5000 time steps and 12 channels.
3. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 1, wherein: in step S1, the step of performing data preprocessing on the training data includes:
S11, reading the electrocardiosignal training data of all channels;
S12, constructing a multi-channel data matrix: arranging the read multi-channel electrocardiographic data in a matrix of the form [time_step, channel], where time_step is the time step, i.e. the number of sampling points in time order, and channel is the number of channels;
S13, performing data normalization on the data matrix: in the feature dimension, subtracting from each feature value the mean of all features at that time step and then dividing the result by the standard deviation of all features at that time step;
S14, 0-1 encoding the label corresponding to the normalized data: a positive case is 1 and a negative case is 0.
4. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 3, wherein: in step S13, the formula for the data normalization of the data matrix is:
F_new = (F_old - μ) / σ
where F_new is the feature value after normalization, F_old is the feature value before normalization, μ is the mean of all feature values at that time step, and σ is the standard deviation of all feature values at that time step.
5. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 1, wherein: in step S2, the step of enhancing the preprocessed training data includes:
S21, advancing or delaying the data by a set range in the time-step dimension;
S22, adding Gaussian noise to the data;
S23, reversing the time order of the data.
6. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 1, wherein: in step S3, the training of the joint neural network model includes:
S31, reading a batch of electrocardiogram sample data from the training data;
S32, inputting the sample data into the joint neural network model for forward propagation to obtain the prediction probabilities;
S33, computing the cross-entropy loss between the prediction probabilities and the true label probabilities to obtain an average loss value;
S34, back-propagating the average loss layer by layer and updating the parameters by gradient descent;
S35, repeating steps S31 to S34 until the average loss value falls within the required range.
7. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 6, wherein: in step S33, the cross-entropy loss function is calculated as:
loss = -(1/N) Σ_{i=1}^{N} [ y_i log(ŷ_i) + (1 - y_i) log(1 - ŷ_i) ]
where loss is the average loss value of a batch, N is the number of samples in the batch, y_i is the true label probability of the i-th sample, and ŷ_i is the predicted probability of the i-th sample.
8. The electrocardiographic diagnosis method based on the combination of the convolutional neural network and the recurrent neural network as claimed in claim 1, wherein: in step S5, the positive/negative example judgment based on the output probability value is:
result = positive if p ≥ p_0, and negative if p < p_0
where p is the probability value output by the trained model and p_0 is the division threshold between positive and negative examples, set to 0.5.
CN201911177079.4A 2019-11-26 2019-11-26 Electrocardiogram diagnosis method based on combination of convolutional neural network and cyclic neural network Pending CN110942825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911177079.4A CN110942825A (en) 2019-11-26 2019-11-26 Electrocardiogram diagnosis method based on combination of convolutional neural network and cyclic neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911177079.4A CN110942825A (en) 2019-11-26 2019-11-26 Electrocardiogram diagnosis method based on combination of convolutional neural network and cyclic neural network

Publications (1)

Publication Number Publication Date
CN110942825A true CN110942825A (en) 2020-03-31

Family

ID=69908945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911177079.4A Pending CN110942825A (en) 2019-11-26 2019-11-26 Electrocardiogram diagnosis method based on combination of convolutional neural network and cyclic neural network

Country Status (1)

Country Link
CN (1) CN110942825A (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111710386A (en) * 2020-04-30 2020-09-25 上海数创医疗科技有限公司 Quality control system for electrocardiogram diagnosis report
CN111709785B (en) * 2020-06-18 2023-08-22 抖音视界有限公司 Method, apparatus, device and medium for determining user retention time
CN111709787A (en) * 2020-06-18 2020-09-25 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and medium for generating user retention time
CN111709786A (en) * 2020-06-18 2020-09-25 北京字节跳动网络技术有限公司 Method, apparatus, device and medium for generating user retention time
CN111709785A (en) * 2020-06-18 2020-09-25 北京字节跳动网络技术有限公司 Method, apparatus, device and medium for determining user retention time
CN111709787B (en) * 2020-06-18 2023-08-22 抖音视界有限公司 Method, device, electronic equipment and medium for generating user retention time
CN111709786B (en) * 2020-06-18 2024-04-30 抖音视界有限公司 Method, apparatus, device and medium for generating user retention time
CN111798980A (en) * 2020-07-10 2020-10-20 哈尔滨工业大学(深圳) Complex medical biological signal processing method and device based on deep learning network
CN116189902A (en) * 2023-01-19 2023-05-30 北京未磁科技有限公司 Myocardial ischemia prediction model based on magnetocardiogram video data and construction method thereof
CN116189902B (en) * 2023-01-19 2024-01-02 北京未磁科技有限公司 Myocardial ischemia prediction model based on magnetocardiogram video data and construction method thereof
CN117438096A (en) * 2023-12-20 2024-01-23 天津医科大学第二医院 Modeling method of allergic rhinitis prediction model
CN117438096B (en) * 2023-12-20 2024-04-19 天津医科大学第二医院 Modeling method of allergic rhinitis prediction model
CN117594227A (en) * 2024-01-18 2024-02-23 微脉技术有限公司 Health state monitoring method, device, medium and equipment based on wearable equipment
CN117594227B (en) * 2024-01-18 2024-04-30 微脉技术有限公司 Health state monitoring method, device, medium and equipment based on wearable equipment

Similar Documents

Publication Publication Date Title
CN110889448A (en) Electrocardiogram classification method based on convolutional neural network
CN110942825A (en) Electrocardiogram diagnosis method based on combination of convolutional neural network and cyclic neural network
Wu et al. A study on arrhythmia via ECG signal classification using the convolutional neural network
CN108133188A (en) A kind of Activity recognition method based on motion history image and convolutional neural networks
US20230334632A1 (en) Image recognition method and device, and computer-readable storage medium
WO2018052587A1 (en) Method and system for cell image segmentation using multi-stage convolutional neural networks
CN109993100B (en) Method for realizing facial expression recognition based on deep feature clustering
CN107944410B (en) Cross-domain facial feature analysis method based on convolutional neural network
Al-Kharraz et al. Automated system for chromosome karyotyping to recognize the most common numerical abnormalities using deep learning
CN112766355B (en) Electroencephalogram signal emotion recognition method under label noise
CN114564990B (en) Electroencephalogram signal classification method based on multichannel feedback capsule network
CN113011386B (en) Expression recognition method and system based on equally divided characteristic graphs
Klyuchko Application of artificial neural networks method in biotechnology
CN113221913A (en) Agriculture and forestry disease and pest fine-grained identification method and device based on Gaussian probability decision-level fusion
CN111540467A (en) Schizophrenia classification identification method, operation control device and medical equipment
CN114595725B (en) Electroencephalogram signal classification method based on addition network and supervised contrast learning
CN104331892B (en) Morphology-based neuron recognizing and analyzing method
Li et al. SeedSortNet: a rapid and highly efficient lightweight CNN based on visual attention for seed sorting
CN116189902B (en) Myocardial ischemia prediction model based on magnetocardiogram video data and construction method thereof
CN116959101A (en) Pig behavior intelligent analysis method and system based on multi-mode semantics
CN116369945A (en) Electroencephalogram cognitive recognition method based on 4D pulse neural network
CN113313185B (en) Hyperspectral image classification method based on self-adaptive spatial spectrum feature extraction
CN117523626A (en) Pseudo RGB-D face recognition method
CN114366116A (en) Parameter acquisition method based on Mask R-CNN network and electrocardiogram
Ebanesar et al. Human Ear Recognition Using Convolutional Neural Network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200331