CN114366116B - Parameter acquisition method based on Mask R-CNN network and electrocardiogram - Google Patents


Info

Publication number
CN114366116B
Authority
CN
China
Prior art keywords
electrocardiosignal
curve
unit group
training set
mask
Prior art date
Legal status: Active (assumption, not a legal conclusion)
Application number
CN202210106655.1A
Other languages
Chinese (zh)
Other versions
CN114366116A
Inventor
罗华丽
徐圆
康静
杨华才
Current Assignee
Southern Medical University
Original Assignee
Southern Medical University
Priority date
Filing date
Publication date
Application filed by Southern Medical University
Priority claimed from CN202210106655.1A
Publication of CN114366116A
Application granted
Publication of CN114366116B
Status: Active


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346 Analysis of electrocardiograms
    • A61B5/349 Detecting specific parameters of the electrocardiograph cycle
    • A61B5/352 Detecting R peaks, e.g. for synchronising diagnostic apparatus; Estimating R-R interval
    • A61B5/366 Detecting abnormal QRS complex, e.g. widening
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The invention discloses a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram. Through seven steps, an electrocardiosignal is first converted into a two-dimensional black-and-white image, and a Mask R-CNN network is then used to obtain a total prediction loss function and a mask image.

Description

Parameter acquisition method based on Mask R-CNN network and electrocardiogram
Technical Field
The invention relates to the technical field of electrocardiograms, in particular to a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram.
Background
An electrocardiogram (ECG) is a time-series signal recording heart activity. Because the ECG signal captures heart activity only indirectly, it is easily corrupted by various interference signals, and the interference must usually be ignored before the electrocardiographic waveform can be judged.
Although electrocardiographic detection is susceptible to signal interference, it is non-invasive and inexpensive, which has made it one of the most commonly used clinical examinations. Automatic electrocardiographic parameter acquisition can identify and classify electrocardiosignals; it is an indispensable technique for computer-aided electrocardiographic analysis and one of the research hotspots in the electrocardiographic field.
Current automatic ECG parameter acquisition falls into two categories. The first is the traditional, mathematical-feature-based approach. Common mathematical features include wavelet features, various higher-order statistical indicators, the power spectrum, and so on. These feature quantities and time-domain feature indexes are combined with traditional analysis methods such as principal component analysis and independent component analysis. Traditional methods must first overcome ECG interference and adapt poorly to noisy signals; because there is no direct mapping between the mathematical features and the abnormal characteristics of the ECG signal, they cannot fully extract the abnormal features of the ECG. Traditional time-series methods also depend heavily on the extracted features, yet it is difficult to extract the correct, critical information from the intrinsic properties of the captured time-series data. The steps are therefore complicated and the practicability poor.
The second category is artificial-intelligence-based automatic ECG parameter acquisition. Its recognition steps are relatively simple, its accuracy is high, and processing efficiency can be improved. One recent application of deep neural networks is time-series classification, which specializes in handling large amounts of data and is therefore widely used in medical care systems, natural language processing, bioinformatics, and other applications. For example, Feng et al. (Feng Y R, Chen W, Cai G Y. Biometric Extraction and Recognition based on ECG Signals [J]. Computer & Digital Engineering, 2016, 46(6): 1099-1103) propose filtering the ECG signal with a multi-wavelet algorithm and then acquiring parameters with an SVM classifier. Venkatesan et al. (Venkatesan C, Karthigaikumar P, Varatharajan R. A novel LMS algorithm for ECG signal preprocessing and KNN classifier based abnormality detection [J]. Multimedia Tools & Applications, 2018) propose an arrhythmia parameter acquisition method for ECG signals based on the K-nearest-neighbor classification algorithm. Zhang et al. (Zhang K, Li X, Xie X J, et al. Study on Arrhythmia Detection Algorithm based on Deep Learning [J]. Chinese Medical Equipment Journal, 2018, 39(12): 6-9+31) performed automatic parameter acquisition for 5 different beat types using a hybrid of a convolutional neural network (CNN) and a long short-term memory network (LSTM). Rajpurkar et al. (Rajpurkar P, Hannun A Y, Haghpanahi M, et al. Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks [J]. 2017) and Hannun et al. (Hannun A Y, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network [J]. Nature Medicine, 2019, 25(1): 65-69)
employ a 34-layer convolutional neural network, trained on a large dataset of 91,232 records from more than 50,000 subjects, to divide the electrocardiosignal into 11 heart rhythms including sinus rhythm. Artificial-intelligence-based ECG parameter acquisition generally comprises three steps: signal preprocessing, feature learning, and ECG classification. Electrocardiosignal parameters can be detected rapidly by the trained network architecture. The biggest problem of this approach is that it treats ECG signal classification as a time-series classification problem and does not adequately simulate the ability of the human eye to recognize the electrocardiographic wave image, so its adaptability is poor.
Therefore, to remedy the deficiencies of the prior art, it is necessary to provide a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram.
Disclosure of Invention
The invention aims to avoid the defects of the prior art and provide a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram. The parameter acquisition method based on the Mask R-CNN network and the electrocardiogram has the advantages of simple processing and high practicability.
The above object of the present invention is achieved by the following technical measures:
the parameter obtaining method based on Mask R-CNN network and electrocardiogram comprises the following steps:
step one, acquiring a data set of electrocardiographic data, wherein each electrocardiographic data is provided with a plurality of electrocardiographic signals, and dividing the data set into a training set and a testing set;
step two, for all the electrocardiographic data in the training set, selecting any one electrocardiosignal from each piece of electrocardiographic data to obtain an electrocardiosignal group of the training set; for all the electrocardiographic data in the test set, selecting any one electrocardiosignal from each piece of electrocardiographic data to obtain an electrocardiosignal group of the test set;
step three, respectively obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the testing set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the testing set;
then, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set;
step four, respectively obtaining an electrocardiosignal graph unit group of the training set and an electrocardiosignal graph unit group of the test set according to the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set;
fifthly, respectively obtaining an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set according to the electrocardiosignal graph unit group of the training set and the electrocardiosignal graph unit group of the test set; wherein the electrocardiosignal curve graph units in the electrocardiosignal curve graph unit group of the training set and in the electrocardiosignal curve graph unit group of the test set are two-dimensional black-and-white graphs;
step six, an electrocardiosignal curve unit group of a test set, an electrocardiosignal curve unit group of a training set, an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set are input into a Mask-RCNN network for training by taking a classification type group of the training set as label data, so that a trained Mask-RCNN network is obtained;
and step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graph unit group of the training set and the electrocardiosignal curve graph unit group of the test set into the trained Mask-RCNN network obtained in the step six to obtain a total prediction loss function and a Mask image.
Preferably, the third step is specifically divided into:
step 3.1, setting a constant H as a threshold for the training set and the test set; taking the threshold H as a starting point, when the electrocardiosignal rises to a highest point and then falls back to the threshold H, the value of that highest point is defined as an R-wave peak of the QRS wave, the value of H ranging from 0.7 to 0.9;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into an electric signal curve unit according to the width of three adjacent R wave peaks, and correspondingly obtaining an electrocardiosignal curve unit group of the training set; dividing each electrocardiosignal in the electrocardiosignal group of the test set into an electrical signal curve unit according to the wave crest width of three adjacent R waves, and correspondingly obtaining the electrocardiosignal curve unit group of the test set;
and 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set.
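For illustration, the R-peak rule of step 3.1 and the three-adjacent-peak segmentation of step 3.2 can be sketched in pure Python. This is a minimal sketch, not the patent's implementation: the function names, the toy input, and the assumption that the signal is already scaled so that H in [0.7, 0.9] is meaningful are all mine.

```python
import math

def detect_r_peaks(signal, h=0.8):
    """Threshold rule of step 3.1: starting from threshold H, once the signal
    rises above H, the highest point reached before it falls back below H is
    taken as an R-wave peak of the QRS wave. Returns peak indices."""
    peaks = []
    above = False
    best_i, best_v = None, -math.inf
    for i, v in enumerate(signal):
        if v >= h:
            if not above:
                above, best_i, best_v = True, i, v
            elif v > best_v:
                best_i, best_v = i, v
        elif above:  # fell back below H: commit the maximum seen while above
            peaks.append(best_i)
            above, best_v = False, -math.inf
    if above:
        peaks.append(best_i)
    return peaks

def split_into_units(signal, peaks):
    """Step 3.2: cut the signal into curve units, each spanning the width of
    three adjacent R-wave peaks."""
    return [signal[peaks[i]:peaks[i + 2] + 1] for i in range(len(peaks) - 2)]
```

On a toy trace with three QRS-like excursions, `detect_r_peaks` returns the three local maxima above the threshold and `split_into_units` yields one three-peak curve unit.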
Preferably, the fourth step specifically comprises:
step 4.1, counting the average number of sampling points between R-wave peaks over the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and taking n times that average number as the total sampling length, n being greater than 1; when the sampling length exceeds the length of an electrocardiosignal curve unit, the y value of the sampling point is set to 0;
step 4.2, carrying out linear normalization processing on each sampling point of each electrocardiosignal curve unit in step 4.1, so that the y values of the sampling points are distributed over the interval [-1, 1];
step 4.3, discretizing the sampling points distributed over the interval [-1, 1] in step 4.2, correspondingly obtaining a discrete electrocardiosignal curve unit group of the training set and a discrete electrocardiosignal curve unit group of the test set;
and step 4.4, performing N-fold sparse sampling on the discrete electrocardiosignal curve units obtained in step 4.3, correspondingly obtaining an electrocardiosignal graph unit group of the training set and an electrocardiosignal graph unit group of the test set, N ranging from 2 to 4.
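Steps 4.1, 4.2 and 4.4 above (padding to a fixed sampling length with y = 0, linear normalization into [-1, 1], and N-fold sparse sampling) might look like the following sketch. The helper names and the list-based representation are assumptions for illustration only.

```python
def normalize_unit(unit, total_len):
    """Pad/truncate a curve unit to `total_len` samples (y = 0 beyond the
    unit's end, step 4.1), then linearly rescale y into [-1, 1] (step 4.2)."""
    ys = list(unit[:total_len]) + [0.0] * max(0, total_len - len(unit))
    lo, hi = min(ys), max(ys)
    if hi == lo:                      # flat unit: map everything to 0
        return [0.0] * total_len
    return [2.0 * (y - lo) / (hi - lo) - 1.0 for y in ys]

def sparse_sample(ys, n):
    """Step 4.4: keep every n-th sample (N in 2..4 in the patent) to reduce
    memory use and speed up processing."""
    return ys[::n]
```

For example, a four-sample unit padded to length 6 maps its minimum to -1, its maximum to 1, and the zero padding back to -1 before downsampling.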
Preferably, in the fifth step, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the test set, the pixel point of the electrocardiosignal curve in each column is found and all points below that pixel point are set to black, correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set.
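The fill-below-the-curve rasterisation of the fifth step can be sketched as follows. The fixed image height, the row orientation, and the one-curve-pixel-per-column convention are assumptions, since the patent gives no pixel dimensions.

```python
def render_black_white(ys, height=64):
    """Rasterise a normalised curve (y in [-1, 1]) into a two-dimensional
    black-and-white image: in each column, locate the pixel of the curve and
    set every pixel at or below it to black (1), leaving the rest white (0)."""
    img = [[0] * len(ys) for _ in range(height)]
    for col, y in enumerate(ys):
        # map y = 1 to the top row (0) and y = -1 to the bottom row (height-1)
        row = round((1.0 - y) / 2.0 * (height - 1))
        for r in range(row, height):
            img[r][col] = 1
    return img
```

A column at y = -1 gets a single black pixel at the bottom, while a column at y = 1 is black from top to bottom, producing the silhouette-style two-dimensional black-and-white graph described above.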
Preferably, the classification types include normal sinus rhythm, sinus bradycardia, sinus tachycardia, left axis deviation, right axis deviation, sinus arrhythmia, right bundle branch block, premature ventricular beats, complete right bundle branch block, left ventricular high voltage, ST-T changes, ST segment changes, first-degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation.
Preferably, the total prediction loss function L is represented by formula (I):
L = L_cls + L_box + L_mask … (I),
where L_cls is the classification prediction loss function, L_box is the regression (bounding-box) prediction loss function, and L_mask is the mask output prediction loss function.
Preferably, the mask image is an electrocardiosignal curve graphic unit rendered in different colors.
Preferably, the statistical method for the average number of sampling points in step 4.1 is to count all sampling points in all the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and then divide that total by the total number of electrocardiosignal curve units.
Preferably, the Mask R-CNN network is a Mask R-CNN network that introduces a residual network and a feature pyramid network.
Preferably, the residual network is a ResNet50 or ResNet101 network.
The parameter acquisition method based on the Mask R-CNN network and the electrocardiogram of the present invention comprises the seven steps described above. Through these seven steps, the invention first converts the electrocardiosignal into a two-dimensional black-and-white image, and then uses the Mask R-CNN network to obtain the total prediction loss function and the mask image.
Drawings
The invention is further illustrated by the accompanying drawings, which are not to be construed as limiting the invention in any way.
Fig. 1 is a flowchart of a parameter obtaining method based on Mask R-CNN network and electrocardiogram according to the present invention.
Fig. 2 is a schematic diagram of the structure of the Mask R-CNN network of the present invention.
Fig. 3 is a schematic diagram of electrocardiographic data.
Fig. 4 is a schematic diagram of an electrocardiographic signal.
Fig. 5 is a schematic diagram of an electrocardiographic signal curve unit.
Fig. 6 is a schematic diagram of an electrocardiographic signal graphic unit.
Fig. 7 is a schematic diagram of an electrocardiographic signal curve graphic unit.
Detailed Description
The technical scheme of the invention is further described with reference to the following examples.
Example 1.
A parameter acquisition method based on Mask R-CNN network and electrocardiogram is shown in figure 1, comprising the following steps:
step one, acquiring a data set of electrocardiographic data, wherein each electrocardiographic data is provided with a plurality of electrocardiographic signals, and dividing the data set into a training set and a testing set;
step two, for all the electrocardiographic data in the training set, selecting any one electrocardiosignal from each piece of electrocardiographic data to obtain an electrocardiosignal group of the training set; for all the electrocardiographic data in the test set, selecting any one electrocardiosignal from each piece of electrocardiographic data to obtain an electrocardiosignal group of the test set;
step three, respectively obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the testing set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the testing set;
then, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set;
step four, respectively obtaining an electrocardiosignal graph unit group of the training set and an electrocardiosignal graph unit group of the test set according to the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set;
fifthly, respectively obtaining an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set according to the electrocardiosignal graph unit group of the training set and the electrocardiosignal graph unit group of the test set; wherein the electrocardiosignal curve graph units in the electrocardiosignal curve graph unit group of the training set and in the electrocardiosignal curve graph unit group of the test set are two-dimensional black-and-white graphs;
step six, an electrocardiosignal curve unit group of a test set, an electrocardiosignal curve unit group of a training set, an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set are input into a Mask-RCNN network for training by taking a classification type group of the training set as label data, so that a trained Mask-RCNN network is obtained;
and step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graph unit group of the training set and the electrocardiosignal curve graph unit group of the test set into the trained Mask-RCNN network obtained in the step six to obtain a total prediction loss function and a Mask image.
Wherein, the third step is specifically divided into:
step 3.1, setting a constant H as a threshold for the training set and the test set; taking the threshold H as a starting point, when the electrocardiosignal rises to a highest point and then falls back to the threshold H, the value of that highest point is defined as an R-wave peak of the QRS wave, the value of H ranging from 0.7 to 0.9, preferably 0.8;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into an electric signal curve unit according to the width of three adjacent R wave peaks, and correspondingly obtaining an electrocardiosignal curve unit group of the training set; dividing each electrocardiosignal in the electrocardiosignal group of the test set into an electrical signal curve unit according to the wave crest width of three adjacent R waves, and correspondingly obtaining the electrocardiosignal curve unit group of the test set;
and 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set.
The fourth step is specifically as follows:
step 4.1, counting the average number of sampling points between R-wave peaks over the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and taking n times that average number as the total sampling length, n being greater than 1; when the sampling length exceeds the length of an electrocardiosignal curve unit, the y value of the sampling point is set to 0;
step 4.2, carrying out linear normalization processing on each sampling point of each electrocardiosignal curve unit in step 4.1, so that the y values of the sampling points are distributed over the interval [-1, 1];
step 4.3, discretizing the sampling points distributed over the interval [-1, 1] in step 4.2, correspondingly obtaining a discrete electrocardiosignal curve unit group of the training set and a discrete electrocardiosignal curve unit group of the test set;
and step 4.4, performing N-fold sparse sampling on the discrete electrocardiosignal curve units obtained in step 4.3, correspondingly obtaining an electrocardiosignal graph unit group of the training set and an electrocardiosignal graph unit group of the test set, N ranging from 2 to 4, preferably 4.
The invention can reduce the memory capacity and improve the operation speed through N times of sparse sampling.
The statistical method for the average number of sampling points in step 4.1 is to count all sampling points in all the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and then divide that total by the total number of electrocardiosignal curve units.
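This statistic can be written as a one-screen helper; the nested-list representation of the unit groups is an assumption for illustration.

```python
def average_sampling_points(unit_groups):
    """Average number of sampling points per curve unit (step 4.1): the total
    number of sampling points across all units of all given groups, divided
    by the total number of curve units."""
    units = [u for group in unit_groups for u in group]
    return sum(len(u) for u in units) / len(units)
```

Passing both the training-set and test-set curve unit groups yields the single pooled average the patent uses to fix the total sampling length.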
In the fifth step, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the test set, the pixel point of the electrocardiosignal curve in each column is found and all points below that pixel point are set to black, correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set.
The classification types of the present invention include normal sinus rhythm, sinus bradycardia, sinus tachycardia, left axis deviation, right axis deviation, sinus arrhythmia, right bundle branch block, premature ventricular beats, complete right bundle branch block, left ventricular high voltage, ST-T changes, ST segment changes, first-degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation.
The total prediction loss function L is represented by formula (I):
L = L_cls + L_box + L_mask … (I),
where L_cls is the classification prediction loss function, L_box is the regression (bounding-box) prediction loss function, and L_mask is the mask output prediction loss function.
It should be noted that training a multi-layer deep convolutional neural network usually requires a large number of training samples. To counter the risk of overfitting with a small sample size, the invention loads weights pre-trained on the COCO dataset for transfer learning. In the transfer-learning training process, only the head layers of the Mask R-CNN are trained first, for 200 iterations. All layers of the network are then fine-tuned for 300 iterations, yielding weight parameters suited to the electrocardiosignal curve. In the iterative process, the basis and object of the optimization is the output loss function L of the Mask R-CNN. The loss function L of formula (I) is the sum of the classification, regression, and mask output prediction loss functions: L_cls is the classification loss of the network in each iteration, L_box is the target-box offset loss, and L_mask is the pixel-wise mask segmentation loss, which in this example is computed as the average binary cross-entropy of the mask image.
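Formula (I) and the average-binary-cross-entropy rule used here for L_mask can be sketched in plain Python. The mask shapes and the clamping epsilon are assumptions, and a real Mask R-CNN computes these losses inside the network rather than as free functions.

```python
import math

def mask_bce(pred, target, eps=1e-7):
    """Average binary cross-entropy between a predicted mask (probabilities)
    and a binary ground-truth mask, the rule this example uses for L_mask."""
    pairs = [(p, t) for pr, tr in zip(pred, target) for p, t in zip(pr, tr)]
    total = 0.0
    for p, t in pairs:
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1.0 - p))
    return total / len(pairs)

def total_loss(l_cls, l_box, l_mask):
    """Total prediction loss of formula (I): L = L_cls + L_box + L_mask."""
    return l_cls + l_box + l_mask
```

A perfectly predicted mask gives an L_mask near zero, while a maximally uncertain prediction (p = 0.5) gives -log(0.5), about 0.693 per pixel.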
The mask image obtained by the invention is an electrocardiosignal curve graph unit rendered in different colors.
The Mask R-CNN network of the invention is a Mask R-CNN network that introduces a residual network and a feature pyramid network, wherein the residual network is a ResNet50 or ResNet101 network, as shown in FIG. 2.
The Mask R-CNN network of the invention introduces a residual network (ResNet) with stronger feature-extraction capability. The main idea of the residual network is to add a direct connection channel in the network, i.e., the idea of the Highway Network: whereas earlier network structures apply a nonlinear transformation to the input, the Highway Network allows a certain proportion of the output of a previous network layer to be retained. The concept of ResNet is very similar to that of Highway Networks, allowing the original input information to pass directly into later layers.
While ResNet is available at different depths, this implementation specifically selects the commonly used 50-layer variant, ResNet50. In this framework, the multi-scale detection problem is solved with a feature pyramid network (FPN); the ResNet and FPN networks are combined to extract shared convolution features (feature maps). The region proposal network (RPN) is then used to generate candidate box regions to be detected on the extracted convolution features. The Mask R-CNN network can distinguish, at the pixel level, different objects belonging to the same category in a scene; here it is used to identify the electrocardiosignal curve in an electrocardiosignal image unit and to acquire a mask image of the curve automatically.
The Mask R-CNN network of the invention solves the multi-scale detection problem with a feature pyramid network (FPN) and combines ResNet and FPN to extract shared convolution features (feature maps). The region proposal network (RPN) then generates candidate box regions on the extracted convolution features. Because the region-of-interest (ROI) pooling process contains a two-step quantization that introduces pixel-level bias between input and output, the Mask R-CNN network uses an ROI Align layer to correct this quantization bias at the pixel level; the feature maps and candidate box regions corrected by the ROI Align layer are then classified, regressed, and mask-segmented. Compared with Faster R-CNN, the Mask R-CNN network adds a mask output branch for obtaining the mask image of each target.
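The pixel-level correction performed by the ROI Align layer rests on sampling the feature map at fractional coordinates via bilinear interpolation rather than rounding them. A minimal sketch of that sampling step (only the interpolation, not the full ROI Align layer) might look like:

```python
import numpy as np

def bilinear_sample(feature, y, x):
    """Sample a 2-D feature map at a fractional (y, x) position.
    ROI Align uses bilinear interpolation like this instead of rounding
    coordinates, avoiding the quantization bias of ROI Pooling."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, feature.shape[0] - 1)
    x1 = min(x0 + 1, feature.shape[1] - 1)
    wy, wx = y - y0, x - x0
    # interpolate along x on the two rows, then along y between them
    top = feature[y0, x0] * (1 - wx) + feature[y0, x1] * wx
    bottom = feature[y1, x0] * (1 - wx) + feature[y1, x1] * wx
    return top * (1 - wy) + bottom * wy
```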
The parameter acquisition method based on the Mask R-CNN network and the electrocardiogram first converts the electrocardiosignal into a two-dimensional black-and-white image, and then obtains the total prediction loss function and the mask image using the Mask R-CNN network.
Example 2.
A parameter acquisition method based on Mask R-CNN network and electrocardiogram is carried out using MATLAB R2014a on a PC with an Intel(R) 2.5 GHz CPU and 32 GB of memory; the programming environment for deep learning is based on Python, and the GPU is an NVIDIA GeForce GTX TITAN Xp.
Step one, acquiring a data set of electrocardiographic data, wherein each electrocardiographic data is provided with a plurality of electrocardiographic signals, and dividing the data set into a training set and a testing set;
in this embodiment, the data used comprise 20072 samples; after desensitization, only the waveform data, the name of the electrocardiographic abnormal event, and partial age and sex information are retained. Each sample was acquired at a sampling frequency of 500 Hz with a length of 10 s and a unit voltage of 4.88 μV. Each sample corresponds to several heart-rhythm types, as shown in fig. 3. The sample set covers 16 classification types: normal, sinus rhythm, sinus bradycardia, sinus tachycardia, left axis deviation, right axis deviation, sinus arrhythmia, right bundle branch block, premature ventricular beat, complete right bundle branch block, left ventricular high voltage, ST-T change, ST-segment change, first-degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation. 80% of the samples (16057) are assigned to the training set and 20% (4015) to the validation set, keeping the class ratios of the two sets consistent.
Step two, selecting any electrocardiosignal for each electrocardiosignal data in the training set respectively to obtain an electrocardiosignal group of the training set; selecting any one electrocardiosignal for each electrocardiosignal data in the test set respectively to obtain an electrocardiosignal group of the test set, wherein one electrocardiosignal is shown in figure 4;
the third step is as follows:
step 3.1, setting a constant H as a threshold value for the training set and the test set, H being specifically 0.6 here; taking the crossing of the threshold H as a starting point, when the electrocardiosignal first rises to a highest point and then falls back to the threshold H, the value of that highest point is defined as the R-wave peak of the QRS wave;
step 3.2, for each electrocardiosignal in the electrocardiosignal group of the training set, dividing the signal into electrical signal curve units by the width of three adjacent R-wave peaks, correspondingly obtaining the electrocardiosignal curve unit group of the training set; for each electrocardiosignal in the electrocardiosignal group of the test set, dividing the signal into electrical signal curve units by the width of three adjacent R-wave peaks, correspondingly obtaining the electrocardiosignal curve unit group of the test set. Specifically, the abscissa of the middle one of the three adjacent R-wave peaks is H_(i+1), the abscissa of the R-wave peak at the left boundary is H_i, and the abscissa of the R-wave peak at the right boundary is H_(i+2), as in fig. 5;
and 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set.
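Steps 3.1 and 3.2 above can be sketched as follows. This assumes an amplitude-normalized signal and uses the embodiment's illustrative threshold H = 0.6; the function names are assumptions for illustration, not the patent's implementation.

```python
def find_r_peaks(signal, h=0.6):
    """Step 3.1 sketch: starting where the signal rises above the
    threshold h, the maximum value reached before it falls back below h
    is taken as an R-wave peak (its sample index is recorded)."""
    peaks = []
    above = False
    peak_idx = None
    for i, v in enumerate(signal):
        if v > h:
            if not above or v > signal[peak_idx]:
                peak_idx = i
            above = True
        elif above:  # fell back below h: the excursion yields one R peak
            peaks.append(peak_idx)
            above = False
    return peaks

def split_into_units(signal, peaks):
    """Step 3.2 sketch: cut the signal into curve units each spanning
    three adjacent R peaks (from peak H_i to peak H_(i+2))."""
    return [signal[peaks[i]:peaks[i + 2] + 1] for i in range(len(peaks) - 2)]
```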
The fourth step is:
step 4.1, counting the average sampling points among R wave peaks in each electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, wherein n times of the average sampling points among the R wave peaks is taken as the total sampling length, n is greater than 1, and when the sampling length exceeds the length in an electrocardiosignal image unit, the y value of the sampling point is 0;
step 4.2, carrying out linear normalization processing on each sampling point of each electrocardiosignal curve unit in step 4.1, so that the y values of the sampling points are distributed over the interval [-1, 1];
step 4.3, discretizing the sampling points distributed over the interval [-1, 1] in step 4.2, correspondingly obtaining the discrete electrocardiosignal curve unit group of the training set and the discrete electrocardiosignal curve unit group of the test set, wherein one discrete electrocardiosignal curve unit is shown in figure 6;
and 4.4, performing N times sparse sampling on the discrete electrocardiosignal curve units obtained in the step 4.3, and correspondingly obtaining an electrocardiosignal graphic unit group of a training set and an electrocardiosignal graphic unit group of a test set, wherein N is greater than 1.
Specifically, in the training-set stage, the average number of sampling points between R-wave peaks in the electrocardiosignal curve units is counted as 415 after rounding, and 2.5 times this, a total sampling length of 1037, is taken as the number of sampling points. The y values corresponding to each sampling point of the electrocardiosignal curve unit are then distributed over the interval [-1, 1], which is divided into 1024 points for discretization; during processing, if the sampling length exceeds that of the electrocardiosignal curve unit, the sampled value defaults to 0. The discretized image then undergoes 4-times sparse sampling, finally forming an electrocardiosignal curve graph unit with a resolution of 258 × 256. In the test-set stage, the same procedure as in the training-set stage is applied: the total sampling length 1037 of the training stage is used as the sampling length, and the same multiple of sparse sampling is performed.
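Using the embodiment's numbers (total sampling length 1037, 1024 discretization points, 4-times sparse sampling), the step-four processing of a single curve unit could be sketched as follows; `curve_to_grid` and its parameter names are illustrative assumptions:

```python
import numpy as np

def curve_to_grid(y, total_len=1037, levels=1024, sparse=4):
    """Step 4 sketch: pad/truncate the unit's y values to `total_len`
    samples (out-of-range samples default to 0), linearly normalize to
    [-1, 1], discretize into `levels` integer bins, then keep every
    `sparse`-th sample."""
    y = np.asarray(y, dtype=float)
    padded = np.zeros(total_len)
    n = min(len(y), total_len)
    padded[:n] = y[:n]
    lo, hi = padded.min(), padded.max()
    # linear normalization to [-1, 1] (a flat signal maps to all zeros)
    norm = np.zeros_like(padded) if hi == lo else 2 * (padded - lo) / (hi - lo) - 1
    # map [-1, 1] onto integer bins 0 .. levels-1
    bins = np.clip(((norm + 1) / 2 * (levels - 1)).round().astype(int), 0, levels - 1)
    return bins[::sparse]
```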
Fifthly, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and in the electrocardiosignal graphic unit group of the test set, find the pixel point of the electrocardiosignal curve in each column of the electrocardiosignal graphic unit and set all points below that pixel point to black, correspondingly obtaining the electrocardiosignal curve graph unit group of the training set and the electrocardiosignal curve graph unit group of the test set, wherein one electrocardiosignal curve graph unit is shown in figure 7.
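The per-column blackening of step five can be sketched as below, assuming the curve's pixel row has already been located in each column (row 0 at the top); the function is a hypothetical illustration:

```python
import numpy as np

def fill_below_curve(curve_rows, height):
    """Step 5 sketch: given the row index of the curve pixel in each
    column, build a binary image in which the curve pixel and everything
    below it are black (1) and the remaining pixels are white (0)."""
    img = np.zeros((height, len(curve_rows)), dtype=np.uint8)
    for col, row in enumerate(curve_rows):
        img[row:, col] = 1  # blacken from the curve pixel downwards
    return img
```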
Step six, taking the classification type group of the training set as label data, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graph unit group of the training set with a resolution of 258 × 256, and the electrocardiosignal curve graph unit group of the test set with a resolution of 258 × 256 into the Mask R-CNN network for training, to obtain a trained Mask R-CNN network;
and step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graph unit group of the training set with a resolution of 258 × 256, and the electrocardiosignal curve graph unit group of the test set with a resolution of 258 × 256 into the trained Mask R-CNN network obtained in step six, to obtain the total prediction loss function and the mask image.
The parameter acquisition method based on the Mask R-CNN network and the electrocardiogram first converts the electrocardiosignal into a two-dimensional black-and-white image, and then obtains the total prediction loss function and the mask image using the Mask R-CNN network.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solution of the present invention and not to limit its scope. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical solution of the present invention may be modified or equivalently substituted without departing from its spirit and scope.

Claims (9)

1. A parameter acquisition method based on Mask R-CNN network and electrocardiogram is characterized by comprising the following steps:
step one, acquiring a data set of electrocardiographic data, wherein each electrocardiographic data is provided with a plurality of electrocardiographic signals, and dividing the data set into a training set and a testing set;
step two, for all the electrocardiosignal data in the training set, selecting any one electrocardiosignal from each electrocardiosignal data to obtain the electrocardiosignal group of the training set; for all the electrocardiosignal data in the test set, selecting any one electrocardiosignal from each electrocardiosignal data to obtain the electrocardiosignal group of the test set;
step three, respectively obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the testing set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the testing set;
then, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set;
step four, respectively obtaining an electrocardiosignal graph unit group of the training set and an electrocardiosignal graph unit group of the testing set according to the electrocardiosignal graph unit group of the training set and the electrocardiosignal graph unit group of the testing set;
fifthly, respectively obtaining an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set according to the electrocardiosignal graphic unit group of the training set and the electrocardiosignal graphic unit group of the test set; the electrocardiosignal curve graph units in the electrocardiosignal curve graph unit group of the training set and in the electrocardiosignal curve graph unit group of the test set are two-dimensional black-and-white graphs;
step six, an electrocardiosignal curve unit group of a test set, an electrocardiosignal curve unit group of a training set, an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set are input into a Mask R-CNN network for training by taking a classification type group of the training set as label data, so that a trained Mask R-CNN network is obtained;
step seven, inputting an electrocardiosignal curve unit group of a test set, an electrocardiosignal curve unit group of a training set, an electrocardiosignal curve graph unit group of the training set and an electrocardiosignal curve graph unit group of the test set into the trained Mask R-CNN network obtained in the step six to obtain a total prediction loss function and a Mask image;
the third step is specifically divided into:
step 3.1, setting a constant H as a threshold value for the training set and the test set; taking the threshold H as a starting point, when the electrocardiosignal first rises to a highest point and then falls back to the threshold H, the value of that highest point is defined as the R-wave peak of the QRS wave, wherein the H value ranges from 0.7 to 0.9;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into an electric signal curve unit according to the width of three adjacent R wave peaks, and correspondingly obtaining an electrocardiosignal curve unit group of the training set; dividing each electrocardiosignal in the electrocardiosignal group of the test set into an electrical signal curve unit according to the wave crest width of three adjacent R waves, and correspondingly obtaining the electrocardiosignal curve unit group of the test set;
and 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set, and correspondingly obtaining a classification type group of the training set.
2. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 1, wherein the fourth step is specifically:
step 4.1, counting the average sampling points among R wave peaks in each electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, wherein n times of the average sampling points among the R wave peaks is taken as the total sampling length, n is greater than 1, and when the sampling length exceeds the length in an electrocardiosignal image unit, the y value of the sampling point is 0;
step 4.2, carrying out linear normalization processing on each sampling point of each electrocardiosignal curve unit in step 4.1, so that the y values of the sampling points are distributed over the interval [-1, 1];
step 4.3, discretizing the sampling points distributed over the interval [-1, 1] in step 4.2, correspondingly obtaining a discrete electrocardiosignal curve unit group of the training set and a discrete electrocardiosignal curve unit group of the test set;
and 4.4, performing N times sparse sampling on the discrete electrocardiosignal curve units obtained in the step 4.3, and correspondingly obtaining an electrocardiosignal graph unit group of a training set and an electrocardiosignal graph unit group of a test set, wherein the range of N is 2-4.
3. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 2, wherein: and step five, respectively finding out each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and the electrocardiosignal graphic unit group of the test set, finding out the pixel point of each column of electrocardiosignal curve in each electrocardiosignal graphic unit, setting all points below the pixel point as black, and correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set.
4. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 3, wherein: the classification types include normal, sinus rhythm, sinus bradycardia, sinus tachycardia, left axis deviation, right axis deviation, sinus arrhythmia, right bundle branch block, premature ventricular beat, complete right bundle branch block, left ventricular high voltage, ST-T change, ST-segment change, first-degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation.
5. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 4, wherein: the total prediction loss function L is represented by formula (I),
L = L_cls + L_box + L_mask …… (I),
where L_cls is the classification prediction loss function, L_box is the regression prediction loss function, and L_mask is the mask output prediction loss function.
6. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 5, wherein: the mask image is an electrocardiosignal curve graph unit with different colors.
7. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 6, wherein: the statistical method of the average sampling points in the step 4.1 is to count all sampling points in all electrocardiosignal curve units in the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and then divide all sampling points by the total number of the electrocardiosignal curve units.
8. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 7, wherein: the Mask R-CNN network is a Mask R-CNN network that introduces a residual network and a feature pyramid network.
9. The parameter obtaining method based on Mask R-CNN network and electrocardiogram according to claim 8, wherein: the residual network is a ResNet50 or ResNet101 network.
CN202210106655.1A 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram Active CN114366116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210106655.1A CN114366116B (en) 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram

Publications (2)

Publication Number Publication Date
CN114366116A CN114366116A (en) 2022-04-19
CN114366116B true CN114366116B (en) 2023-08-25

Family

ID=81146540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210106655.1A Active CN114366116B (en) 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram

Country Status (1)

Country Link
CN (1) CN114366116B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862843B (en) * 2022-12-12 2024-02-02 哈尔滨医科大学 Auxiliary identification system and equipment for myocardial troponin elevation type and cardiovascular diseases

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256453A (en) * 2018-01-06 2018-07-06 天津大学 A kind of method based on one-dimensional ECG signal extraction two dimension CNN features
CN110522444A (en) * 2019-09-03 2019-12-03 西安邮电大学 A kind of electrocardiosignal method for identifying and classifying based on Kernel-CNN
KR20190141326A (en) * 2018-06-14 2019-12-24 한국과학기술원 Method and Apparatus for ECG Arrhythmia Classification using a Deep Convolutional Neural Network
WO2020006939A1 (en) * 2018-07-06 2020-01-09 苏州大学张家港工业技术研究院 Electrocardiogram generation and classification method based on generative adversarial network
CN111882559A (en) * 2020-01-20 2020-11-03 深圳数字生命研究院 ECG signal acquisition method and device, storage medium and electronic device
CN112686217A (en) * 2020-11-02 2021-04-20 坝道工程医院(平舆) Mask R-CNN-based detection method for disease pixel level of underground drainage pipeline
CN112826513A (en) * 2021-01-05 2021-05-25 华中科技大学 Fetal heart rate detection system based on deep learning and specificity correction on FECG
CN113057648A (en) * 2021-03-22 2021-07-02 山西三友和智慧信息技术股份有限公司 ECG signal classification method based on composite LSTM structure
CN113274031A (en) * 2021-04-30 2021-08-20 西安理工大学 Arrhythmia classification method based on deep convolution residual error network

Also Published As

Publication number Publication date
CN114366116A (en) 2022-04-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant