CN114366116A - Parameter acquisition method based on Mask R-CNN network and electrocardiogram - Google Patents

Parameter acquisition method based on Mask R-CNN network and electrocardiogram

Info

Publication number
CN114366116A
Authority
CN
China
Prior art keywords
electrocardiosignal
curve
unit group
training set
mask
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210106655.1A
Other languages
Chinese (zh)
Other versions
CN114366116B (en)
Inventor
罗华丽
徐圆
康静
杨华才
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southern Medical University
Original Assignee
Southern Medical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southern Medical University filed Critical Southern Medical University
Priority to CN202210106655.1A priority Critical patent/CN114366116B/en
Publication of CN114366116A publication Critical patent/CN114366116A/en
Application granted granted Critical
Publication of CN114366116B publication Critical patent/CN114366116B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/318Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/318Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346Analysis of electrocardiograms
    • A61B5/349Detecting specific parameters of the electrocardiograph cycle
    • A61B5/352Detecting R peaks, e.g. for synchronising diagnostic apparatus; Estimating R-R interval
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/24Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316Modalities, i.e. specific diagnostic methods
    • A61B5/318Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346Analysis of electrocardiograms
    • A61B5/349Detecting specific parameters of the electrocardiograph cycle
    • A61B5/366Detecting abnormal QRS complex, e.g. widening
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

A parameter acquisition method based on a Mask R-CNN network and an electrocardiogram. Through seven steps, the method converts the electrocardiosignal into a two-dimensional black-and-white image for the first time and then uses a Mask R-CNN network to obtain a total prediction loss function and a mask image. By converting the electrocardiosignal into a two-dimensional black-and-white image, the invention imitates the way the human eye recognizes an electrocardiographic waveform image, overcomes problems of existing electrocardiographic classification models such as complex implementation and poor adaptability, and has the advantages of simple processing and high practicability.

Description

Parameter acquisition method based on Mask R-CNN network and electrocardiogram
Technical Field
The invention relates to the technical field of electrocardiograms, in particular to a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram.
Background
An electrocardiogram (ECG) is a time-series signal that records cardiac activity. Because the ECG signal captures cardiac activity only indirectly, it is easily corrupted by various kinds of interference, and the interfering signals usually have to be suppressed before the ECG waveform can be determined.
Although ECG detection is susceptible to interference, it is non-invasive and inexpensive, which makes it one of the most common clinical examinations. Automatic parameter acquisition from the electrocardiogram enables identification and classification of ECG signals, is an indispensable technical means for computer-aided ECG analysis, and is one of the research hotspots in the ECG field.
Current automatic ECG parameter acquisition methods fall into two categories. The first is the traditional approach based on mathematical and physical features. Common mathematical features include wavelet features, various higher-order statistical indicators, power spectra, and so on. These feature quantities are combined with time-domain indicators and traditional analysis methods, such as principal component analysis and independent component analysis, to acquire ECG parameters. Traditional classification-based parameter acquisition methods must first overcome the influence of ECG interference signals and adapt poorly to noise; moreover, there is no direct mapping between such mathematical and physical feature quantities and the abnormal characteristics of the ECG signal, so these features cannot fully capture ECG abnormalities. Traditional time-series parameter acquisition methods are highly dependent on the extracted features, yet it is difficult to extract the correct and critical information that captures the intrinsic properties of time-series data. As a result, these methods involve complicated steps and have poor practicability.
The second category is artificial-intelligence-based automatic ECG parameter acquisition. AI-based recognition is relatively simple, has high accuracy, and can improve processing efficiency. One recent application of deep neural networks is time-series classification, which handles large amounts of sequential data and is therefore widely used in healthcare systems, natural language processing, bioinformatics, and other applications. For example, Feng et al. (FENG Y R, CHEN W, CAI G Y. Biometric Extraction and Correlation based on ECG Signals [J]. Computer & Digital Engineering, 2016, 46(6): 1099-.) studied biometric feature extraction and correlation based on ECG signals. Venkatesan et al. (Venkatesan C, Karthikaikumar P, Varatharajan R. A novel LMS algorithm for ECG signal preprocessing and KNN classifier based arrhythmia detection [J]. Multimedia Tools & Applications, 2018.) proposed an arrhythmia parameter acquisition method based on a K-nearest-neighbor classifier for processing ECG signals. Zhang et al. (Zhang K, LI X, XIE X J, et al. Study on Arrhythmia Detection Algorithm based on Deep Learning [J]. Chinese Medical Equipment Journal, 2018, 39(12): 6-9+31.) used a composite algorithm of a convolutional neural network (CNN) and a long short-term memory network (LSTM) to automatically acquire parameters for 5 different heartbeat types. Rajpurkar et al. (Rajpurkar P, Hannun A Y, Haghpanahi M, et al. Cardiologist-Level Arrhythmia Detection with Convolutional Neural Networks [J]. 2017.) and Hannun et al. (Hannun A Y, Rajpurkar P, Haghpanahi M, et al. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network [J]. Nat Med, 2019, 25(1): 65-69.) used a 34-layer convolutional neural network on a large dataset of 91232 records from more than 50,000 patients and classified the signals into 11 classes including sinus rhythm. AI-based ECG parameter acquisition generally detects ECG signals in three steps: signal preprocessing, feature learning, and ECG classification. A trained network can rapidly detect electrocardiosignal parameters. The main limitation of the AI-based ECG parameter acquisition methods above is that, by treating ECG classification purely as a time-series classification problem, they lack the ability to imitate how the human eye recognizes an electrocardiographic waveform image, and their adaptability is therefore poor.
Therefore, it is necessary to provide a Mask R-CNN network and electrocardiogram-based parameter acquisition method to solve the deficiencies of the prior art.
Disclosure of Invention
The invention aims to avoid the defects of the prior art and provides a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram. The parameter acquisition method based on the Mask R-CNN network and the electrocardiogram has the advantages of simple processing and high practicability.
The above object of the present invention is achieved by the following technical measures:
a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram is provided, which comprises the following steps:
Step one, acquiring a data set of electrocardiogram data, wherein each piece of electrocardiogram data contains a plurality of electrocardiosignals, and dividing the data set into a training set and a test set;
Step two, for all the electrocardio data in the training set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the training set; and for all the electrocardio data in the test set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the test set;
Step three, correspondingly obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the test set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the test set, respectively; and then classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set;
Step four, correspondingly obtaining an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set according to the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, respectively;
Step five, correspondingly obtaining an electrocardiosignal curve graphic unit group of the training set and an electrocardiosignal curve graphic unit group of the test set according to the electrocardiosignal graphic unit group of the training set and the electrocardiosignal graphic unit group of the test set, respectively, wherein the electrocardiosignal curve graphic units in the electrocardiosignal curve graphic unit group of the training set and in the electrocardiosignal curve graphic unit group of the test set are both two-dimensional black-and-white images;
Step six, taking the classification type group of the training set as label data, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into a Mask-RCNN network for training to obtain a trained Mask-RCNN network;
Step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into the trained Mask-RCNN network obtained in step six to obtain a total prediction loss function and a mask image.
Preferably, the third step specifically comprises:
step 3.1, setting a constant H as a threshold for the training set and the test set; taking the threshold H as a starting point, when the electrocardiosignal first rises above the threshold to a highest point and then falls back below the threshold H, defining the value of the highest point as the R wave peak of the QRS complex, wherein H ranges from 0.7 to 0.9;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into electrocardiosignal curve units, each spanning the width of three adjacent R wave peaks, to correspondingly obtain an electrocardiosignal curve unit group of the training set; and dividing each electrocardiosignal in the electrocardiosignal group of the test set in the same way to correspondingly obtain an electrocardiosignal curve unit group of the test set;
and step 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set.
Preferably, the step four is specifically:
step 4.1, counting the average number of sampling points between R wave peaks in the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, and taking n times this average number as the total sampling length, wherein n is greater than 1; when the sampling length exceeds the length of the electrocardiosignal curve unit, the y value of the excess sampling points is set to 0;
step 4.2, performing linear normalization on each sampling point of each electrocardiosignal curve unit from step 4.1, so that the y values of the sampling points are distributed over the range [-1, 1];
step 4.3, discretizing the sampling points distributed over the range [-1, 1] in step 4.2 to correspondingly obtain a discretized electrocardiosignal curve unit group of the training set and a discretized electrocardiosignal curve unit group of the test set;
and step 4.4, performing N-fold sparse sampling on the discretized electrocardiosignal curve units obtained in step 4.3 to correspondingly obtain an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set, wherein N ranges from 2 to 4.
Preferably, in the fifth step, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the test set, the pixel point of the electrocardio curve is found in each column of the electrocardiosignal graphic unit and all points below that pixel point are set to black, thereby correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set.
Preferably, the categories include normal and sinus rhythm, sinus bradycardia, sinus tachycardia, electrical axis left deviation, electrical axis right deviation, sinus arrhythmia, right bundle branch block, premature ventricular contraction, complete right bundle branch block, left ventricular high voltage, ST-T change, ST segment change, first degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation.
Preferably, the total prediction loss function L is represented by formula (I),
L = L_cls + L_box + L_mask      formula (I),
wherein L_cls is the classification prediction loss function, L_box is the regression prediction loss function, and L_mask is the mask output prediction loss function.
Preferably, the mask image is an electrocardiographic signal curve graph unit with different colors.
Preferably, the average number of sampling points in step 4.1 is obtained by counting all the sampling points in all the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and then dividing this total number of sampling points by the total number of electrocardiosignal curve units.
Preferably, the Mask-RCNN network is a Mask-RCNN network with a residual network and a feature pyramid network introduced.
Preferably, the residual network is a ResNet50 or a ResNet101 network.
The invention relates to a parameter acquisition method based on a Mask R-CNN network and an electrocardiogram, comprising the following steps: step one, acquiring a data set of electrocardiogram data, wherein each piece of electrocardiogram data contains a plurality of electrocardiosignals, and dividing the data set into a training set and a test set; step two, for all the electrocardio data in the training set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the training set, and for all the electrocardio data in the test set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the test set; step three, correspondingly obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the test set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the test set, respectively, and then classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set; step four, correspondingly obtaining an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set according to the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, respectively; step five, correspondingly obtaining an electrocardiosignal curve graphic unit group of the training set and an electrocardiosignal curve graphic unit group of the test set according to the electrocardiosignal graphic unit group of the training set and the electrocardiosignal graphic unit group of the test set, respectively, wherein the electrocardiosignal curve graphic units in the electrocardiosignal curve graphic unit group of the training set and in the electrocardiosignal curve graphic unit group of the test set are both two-dimensional black-and-white images; step six, taking the classification type group of the training set as label data, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into a Mask-RCNN network for training to obtain a trained Mask-RCNN network; and step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into the trained Mask-RCNN network obtained in step six to obtain a total prediction loss function and a mask image. Through these seven steps, the method converts the electrocardiosignal into a two-dimensional black-and-white image for the first time and then uses the Mask R-CNN network to obtain the total prediction loss function and the mask image.
Drawings
The invention is further illustrated by means of the attached drawings, the content of which is not in any way limiting.
FIG. 1 is a flow chart of a Mask R-CNN network and electrocardiogram-based parameter acquisition method of the present invention.
FIG. 2 is a schematic structural diagram of a Mask R-CNN network according to the present invention.
Fig. 3 is a schematic diagram of electrocardiogram data.
Fig. 4 is a schematic diagram of an electrocardiographic signal.
Fig. 5 is a schematic diagram of an electrocardiographic signal curve unit.
FIG. 6 is a schematic diagram of an electrocardiosignal graphic unit.
Fig. 7 is a schematic diagram of an electrocardiographic signal curve graph unit.
Detailed Description
The technical solution of the present invention is further illustrated by the following examples.
Example 1.
A parameter acquisition method based on a Mask R-CNN network and an electrocardiogram comprises the following steps, as shown in Fig. 1:
Step one, acquiring a data set of electrocardiogram data, wherein each piece of electrocardiogram data contains a plurality of electrocardiosignals, and dividing the data set into a training set and a test set;
Step two, for all the electrocardio data in the training set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the training set; and for all the electrocardio data in the test set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the test set;
Step three, correspondingly obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the test set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the test set, respectively; and then classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set;
Step four, correspondingly obtaining an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set according to the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, respectively;
Step five, correspondingly obtaining an electrocardiosignal curve graphic unit group of the training set and an electrocardiosignal curve graphic unit group of the test set according to the electrocardiosignal graphic unit group of the training set and the electrocardiosignal graphic unit group of the test set, respectively, wherein the electrocardiosignal curve graphic units in the electrocardiosignal curve graphic unit group of the training set and in the electrocardiosignal curve graphic unit group of the test set are both two-dimensional black-and-white images;
Step six, taking the classification type group of the training set as label data, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into a Mask-RCNN network for training to obtain a trained Mask-RCNN network;
Step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into the trained Mask-RCNN network obtained in step six to obtain a total prediction loss function and a mask image.
Wherein, the third step is specifically divided into:
step 3.1, setting a constant H as a threshold for the training set and the test set; taking the threshold H as a starting point, when the electrocardiosignal first rises above the threshold to a highest point and then falls back below the threshold H, defining the value of the highest point as the R wave peak of the QRS complex, wherein H ranges from 0.7 to 0.9 and is preferably 0.8;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into electrocardiosignal curve units, each spanning the width of three adjacent R wave peaks, to correspondingly obtain an electrocardiosignal curve unit group of the training set; and dividing each electrocardiosignal in the electrocardiosignal group of the test set in the same way to correspondingly obtain an electrocardiosignal curve unit group of the test set;
and step 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set.
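For illustration only, the following is a minimal numpy sketch of the threshold-based R-peak detection and three-peak segmentation described in the third step; the function names, the amplitude-normalization assumption, and the stride between consecutive units are assumptions of this sketch, not details taken from the patent.

```python
import numpy as np

def detect_r_peaks(signal, h=0.8):
    """Step 3.1 sketch: starting from the threshold h, whenever the signal rises
    above h and later falls back below it, the maximum of that excursion is taken
    as an R-wave peak. Assumes the signal amplitude has been normalized so that a
    threshold in the 0.7-0.9 range is meaningful."""
    peaks, start = [], None
    for i, above in enumerate(signal >= h):
        if above and start is None:
            start = i                                        # upward threshold crossing
        elif not above and start is not None:
            peaks.append(start + int(np.argmax(signal[start:i])))
            start = None
    return np.asarray(peaks, dtype=int)

def split_into_curve_units(signal, peaks):
    """Step 3.2 sketch: each curve unit spans three adjacent R peaks, i.e. from
    peak H_i to peak H_{i+2}. A stride of two peaks is used here; the patent does
    not state whether consecutive units overlap."""
    return [signal[peaks[i]:peaks[i + 2] + 1] for i in range(0, len(peaks) - 2, 2)]
```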
The fourth step is specifically as follows:
step 4.1, counting the average number of sampling points between R wave peaks in the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, and taking n times this average number as the total sampling length, wherein n is greater than 1; when the sampling length exceeds the length of the electrocardiosignal curve unit, the y value of the excess sampling points is set to 0;
step 4.2, performing linear normalization on each sampling point of each electrocardiosignal curve unit from step 4.1, so that the y values of the sampling points are distributed over the range [-1, 1];
step 4.3, discretizing the sampling points distributed over the range [-1, 1] in step 4.2 to correspondingly obtain a discretized electrocardiosignal curve unit group of the training set and a discretized electrocardiosignal curve unit group of the test set;
and step 4.4, performing N-fold sparse sampling on the discretized electrocardiosignal curve units obtained in step 4.3 to correspondingly obtain an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set, wherein N ranges from 2 to 4 and is preferably 4.
It should be noted that the N-fold sparse sampling reduces memory usage and increases the operation speed.
The average number of sampling points in step 4.1 is obtained by counting all the sampling points in all the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and then dividing this total number of sampling points by the total number of electrocardiosignal curve units.
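A hedged numpy sketch of step four (total sampling length, zero-padding, normalization to [-1, 1], discretization, and N-fold sparse sampling) follows; the parameter names, and the choice of applying sparse sampling along the time axis, are assumptions of this sketch, with `mean_points` standing for the average computed as described in the preceding paragraph.

```python
import numpy as np

def curve_unit_to_graphic_unit(unit, mean_points, n=2.5, levels=1024, sparse_n=4):
    """Step four under one reading of the text:
    - total sampling length = n * mean_points, with samples beyond the end of the
      curve unit set to y = 0 (step 4.1);
    - linear normalization of the y values to [-1, 1] (step 4.2);
    - discretization into `levels` integer bins (step 4.3);
    - N-fold sparse sampling along the time axis (step 4.4)."""
    unit = np.asarray(unit, dtype=float)
    total_len = int(n * mean_points)
    padded = np.zeros(total_len)
    m = min(len(unit), total_len)
    padded[:m] = unit[:m]

    lo, hi = padded.min(), padded.max()
    norm = 2.0 * (padded - lo) / (hi - lo + 1e-12) - 1.0            # y values in [-1, 1]

    bins = np.clip(np.round((norm + 1.0) / 2.0 * (levels - 1)), 0, levels - 1).astype(int)
    return bins[::sparse_n]                                         # N-fold sparse sampling
```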
In step five, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the test set, the pixel point of the electrocardio curve is found in each column of the electrocardiosignal graphic unit and all points below that pixel point are set to black, thereby correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set.
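Step five can be illustrated with the sketch below, which fills everything at or below the curve pixel in each column; the column-wise interpretation and the black = 1 / white = 0 convention are assumptions of this sketch rather than details specified in the patent.

```python
import numpy as np

def graphic_unit_to_bw_image(bins, levels=1024, height=256):
    """Step five sketch: build a two-dimensional black-and-white image in which,
    for every column (time position), the pixel on the electrocardio curve and all
    pixels below it are black (1) and the remaining pixels are white (0).
    Row 0 corresponds to the lowest amplitude; flip vertically for display."""
    bins = np.asarray(bins, dtype=int)
    rows = np.clip(bins * height // levels, 0, height - 1)   # curve pixel per column
    img = np.zeros((height, len(bins)), dtype=np.uint8)
    for col, row in enumerate(rows):
        img[:row + 1, col] = 1                               # fill up to and including the curve
    return img
```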
Types of classifications of the present invention include normal and sinus rhythms, sinus bradycardia, sinus tachycardia, electrical axis left deviation, electrical axis right deviation, sinus arrhythmia, right bundle branch block, premature ventricular contraction, complete right bundle branch block, left ventricular high voltage, ST-T change, ST segment change, first degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation.
The total prediction loss function L is represented by formula (I),
L = L_cls + L_box + L_mask      formula (I),
wherein L_cls is the classification prediction loss function, L_box is the regression prediction loss function, and L_mask is the mask output prediction loss function.
It should be noted that training a multi-layer deep convolutional neural network usually requires a large number of training samples. To avoid network overfitting caused by the small sample size, the invention loads open-source weights pre-trained on the COCO data set and performs transfer learning. In the transfer learning training process, only the head layers of the Mask R-CNN are trained first, with 200 iterations. Finally, all layers of the network are fine-tuned, with 300 iterations, to obtain weight parameters suited to the electrocardiosignal curves. In particular, the basis and objective of optimization during the iterations is the output loss function L of the Mask-RCNN. The loss function L of formula (I) is the sum of the prediction loss functions of the classification, regression, and mask outputs, wherein L_cls is the classification loss of the network's FPN branch at each iteration, L_box is the target-frame offset loss of the FPN branch, and L_mask is the pixel-segmentation mask loss of the FPN branch, which in this example is computed as the mean binary cross entropy of the mask image.
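The two-stage transfer learning schedule and the total loss of formula (I) can be sketched as follows; the torchvision model constructor, the loss-dictionary keys, the learning rate, and the layer-name test are assumptions about one common PyTorch implementation of Mask R-CNN, not necessarily the implementation used by the inventors.

```python
import torch
import torchvision

# COCO-pretrained Mask R-CNN with a ResNet-50 + FPN backbone
# (older torchvision versions use pretrained=True instead of weights="DEFAULT")
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

def total_loss(loss_dict):
    """Formula (I): L = L_cls + L_box + L_mask. torchvision also reports RPN
    losses, which formula (I) does not list and which are omitted here."""
    return loss_dict["loss_classifier"] + loss_dict["loss_box_reg"] + loss_dict["loss_mask"]

def set_trainable(model, heads_only):
    """Stage 1: freeze the backbone and train only the head layers;
    stage 2: fine-tune all layers."""
    for name, p in model.named_parameters():
        p.requires_grad = (not heads_only) or (not name.startswith("backbone"))

# heads-only stage (200 iterations in the patent), then full fine-tuning (300 iterations)
for heads_only, n_iter in [(True, 200), (False, 300)]:
    set_trainable(model, heads_only)
    optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                                lr=1e-3, momentum=0.9)
    # run n_iter iterations over an assumed data loader yielding (images, targets):
    # for images, targets in data_loader:
    #     model.train()
    #     loss = total_loss(model(images, targets))   # training mode returns a dict of losses
    #     optimizer.zero_grad(); loss.backward(); optimizer.step()
```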
The mask image obtained by the invention is an electrocardiosignal curve graphic unit with different colors.
The Mask-RCNN network of the invention is a Mask-RCNN network into which a residual network and a feature pyramid network are introduced, wherein the residual network is a ResNet50 or ResNet101 network, as shown in Fig. 2.
The Mask-RCNN network introduces the residual network ResNet, which has stronger feature extraction capability. The main idea of the residual network is to add a direct (shortcut) connection to the network, in the spirit of the Highway Network: whereas earlier network structures apply a nonlinear transformation to the input, the Highway Network allows a certain proportion of the output of a previous layer to be preserved. The idea of ResNet is very similar to that of the Highway Network, allowing the original input information to pass directly to later layers.
ResNet comes in versions with different numbers of layers; the common 50-layer version is ResNet50. The algorithm framework uses a Feature Pyramid Network (FPN) to address multi-scale detection, combining the ResNet network with the FPN network to extract shared convolutional features (feature maps). A region proposal network (RPN) then generates candidate box regions to be detected on the extracted convolutional features. The Mask R-CNN network can distinguish different objects belonging to the same category at the pixel level within a scene.
The Mask-RCNN network of the invention uses a Feature Pyramid Network (FPN) to handle multi-scale detection and combines ResNet with the FPN to extract shared convolutional features (feature maps); the region proposal network (RPN) then generates candidate box regions to be detected on the extracted features. Because the region-of-interest (ROI) pooling process involves two quantization steps, a pixel-level deviation can arise between input and output. For this reason, the Mask R-CNN network uses an RoIAlign layer to correct the pixel-level deviation caused by quantization, and then performs classification, regression, and mask segmentation on the feature maps and candidate box regions corrected by the RoIAlign layer. Compared with Faster R-CNN, the Mask-RCNN network adds a mask output branch for obtaining a mask image of each object.
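As one concrete, merely illustrative way of assembling the components described above, the snippet below adapts a torchvision Mask R-CNN with a ResNet-50 + FPN backbone (RPN, RoIAlign, and mask branch prebuilt) to the 16 ECG categories plus background; the class count and the choice of library are assumptions, since the patent does not name its implementation.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 16 + 1   # the 16 ECG categories listed above, plus background

# ResNet-50 + FPN backbone, RPN, RoIAlign and the mask branch come prebuilt here
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# replace the box classification / regression head for the ECG classes
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# replace the mask prediction head -- the branch Mask R-CNN adds on top of Faster R-CNN
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)
```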
According to the parameter acquisition method based on the Mask R-CNN network and the electrocardiogram, the electrocardiosignals are firstly converted into the two-dimensional black-and-white image, and then the Mask image and the total prediction loss function are obtained by utilizing the Mask R-CNN network.
Example 2.
A parameter acquisition method based on a Mask R-CNN network and an electrocardiogram is carried out on a PC with an Intel(R) 2.5 GHz processor and 32 GB of memory under MATLAB R2014a; the deep learning programming environment is based on Python, and the GPU is an NVIDIA GeForce GTX TITAN Xp.
Step one, acquiring a data set of electrocardiogram data, wherein each piece of electrocardiogram data contains a plurality of electrocardiosignals, and dividing the data set into a training set and a test set;
specifically, the data used in the embodiment is 20072 samples, and the data is subjected to desensitization processing, and only waveform data, the name of the abnormal cardiac electrical event, and part of the abnormal cardiac electrical event are reserved, such as age and gender information. Each sample has a sampling frequency of 500Hz, a length of 10s and a unit voltage of 4.88 uV. Each sample corresponds to a plurality of heart rate types, as in fig. 3. Types of classifications in the sample set include 16 types of normal, sinus rhythm, sinus bradycardia, sinus tachycardia, electrical axis left deviation, electrical axis right deviation, sinus arrhythmia, right bundle branch block, premature ventricular contraction, complete right bundle branch block, left ventricular high voltage, ST-T change, ST segment change, first degree atrioventricular block, incomplete right bundle branch block, atrial fibrillation. 80% of the number of samples, namely 16057 samples, is divided into a training set, 20% of the number of samples, namely 4015 samples, is divided into a verification set, and the proportion of each class in the training set to the proportion of each class in the verification set are kept uniform and consistent.
Step two, selecting any one electrocardiosignal from each piece of electrocardio data in the training set to obtain an electrocardiosignal group of the training set; and selecting any one electrocardiosignal from each piece of electrocardio data in the test set to obtain an electrocardiosignal group of the test set, where one electrocardiosignal is shown in Fig. 4;
the third step is as follows:
step 3.1, setting a constant H as a threshold for the training set and the test set, wherein H is specifically 0.6 in this embodiment; taking the threshold H as a starting point, when the electrocardiosignal first rises above the threshold to a highest point and then falls back below the threshold H, the value of the highest point is defined as the R wave peak of the QRS complex;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into electrocardiosignal curve units, each spanning the width of three adjacent R wave peaks, to correspondingly obtain an electrocardiosignal curve unit group of the training set; and dividing each electrocardiosignal in the electrocardiosignal group of the test set in the same way to correspondingly obtain an electrocardiosignal curve unit group of the test set; among three adjacent R wave peaks, the abscissa of the middle R wave peak is H_{i+1}, the abscissa of the R wave peak at the left boundary is H_i, and the abscissa of the R wave peak at the right boundary is H_{i+2}, as shown in Fig. 5;
and step 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set.
The fourth step is:
step 4.1, counting the average number of sampling points between R wave peaks in the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, and taking n times this average number as the total sampling length, wherein n is greater than 1; when the sampling length exceeds the length of the electrocardiosignal curve unit, the y value of the excess sampling points is set to 0;
step 4.2, performing linear normalization on each sampling point of each electrocardiosignal curve unit from step 4.1, so that the y values of the sampling points are distributed over the range [-1, 1];
step 4.3, discretizing the sampling points distributed over the range [-1, 1] in step 4.2 to correspondingly obtain a discretized electrocardiosignal curve unit group of the training set and a discretized electrocardiosignal curve unit group of the test set, where one discretized electrocardiosignal curve unit is shown in Fig. 6;
and step 4.4, performing N-fold sparse sampling on the discretized electrocardiosignal curve units obtained in step 4.3 to correspondingly obtain an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set, wherein N is greater than 1.
Specifically, in the training set stage, the average number of sampling points between R wave peaks in an electrocardiosignal curve unit is counted and rounded to 415, and 2.5 times 415 is taken as the total sampling length of 1037. The y values corresponding to the sampling points of each electrocardiosignal curve unit are then distributed over the interval [-1, 1], and [-1, 1] is divided into 1024 points in total for discretization; during this processing, if the sampling length exceeds the length of the electrocardiosignal curve unit, the sampled value defaults to 0. Four-fold sparse sampling is then applied to the discretized image, finally forming electrocardiosignal curve graphic units with a resolution of 258 × 256. In the test set stage, as in the training set stage, the same sparse sampling factor is applied, again using the total sampling length of 1037 from the training stage as the sampling length.
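As a rough arithmetic check of the figures in this paragraph, under the assumption that the 4-fold sparse sampling acts on both the 1037 time samples and the 1024 amplitude levels (the patent does not spell out exactly how the final 258 × 256 resolution is reached), the result is consistent with the reported image size to within a couple of pixels:

```python
mean_points = 415
total_len = int(2.5 * mean_points)        # 1037 samples per curve unit
levels, sparse_n = 1024, 4
print(total_len, total_len // sparse_n, levels // sparse_n)   # 1037 259 256
```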
Step five, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the test set, the pixel point of the electrocardio curve is found in each column of the electrocardiosignal graphic unit and all points below that pixel point are set to black, thereby correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set, where one electrocardiosignal curve graphic unit is shown in Fig. 7.
Step six, taking the classification type group of the training set as label data, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the 258 × 256 electrocardiosignal curve graphic unit group of the training set, and the 258 × 256 electrocardiosignal curve graphic unit group of the test set into a Mask-RCNN network for training to obtain a trained Mask-RCNN network;
Step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the 258 × 256 electrocardiosignal curve graphic unit group of the training set, and the 258 × 256 electrocardiosignal curve graphic unit group of the test set into the trained Mask-RCNN network obtained in step six, to obtain a total prediction loss function and a mask image.
According to the parameter acquisition method based on the Mask R-CNN network and the electrocardiogram, the electrocardiosignals are firstly converted into the two-dimensional black-and-white image, and then the Mask image and the total prediction loss function are obtained by utilizing the Mask R-CNN network.
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the protection scope of the present invention, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions can be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A parameter acquisition method based on a Mask R-CNN network and an electrocardiogram, characterized by comprising the following steps:
Step one, acquiring a data set of electrocardiogram data, wherein each piece of electrocardiogram data contains a plurality of electrocardiosignals, and dividing the data set into a training set and a test set;
Step two, for all the electrocardio data in the training set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the training set; and for all the electrocardio data in the test set, selecting any one electrocardiosignal from each piece of electrocardio data to obtain an electrocardiosignal group of the test set;
Step three, correspondingly obtaining an electrocardiosignal curve unit group of the training set and an electrocardiosignal curve unit group of the test set according to the electrocardiosignal group of the training set and the electrocardiosignal group of the test set, respectively; and then classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set;
Step four, correspondingly obtaining an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set according to the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, respectively;
Step five, correspondingly obtaining an electrocardiosignal curve graphic unit group of the training set and an electrocardiosignal curve graphic unit group of the test set according to the electrocardiosignal graphic unit group of the training set and the electrocardiosignal graphic unit group of the test set, respectively, wherein the electrocardiosignal curve graphic units in the electrocardiosignal curve graphic unit group of the training set and in the electrocardiosignal curve graphic unit group of the test set are both two-dimensional black-and-white images;
Step six, taking the classification type group of the training set as label data, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into a Mask-RCNN network for training to obtain a trained Mask-RCNN network;
Step seven, inputting the electrocardiosignal curve unit group of the test set, the electrocardiosignal curve unit group of the training set, the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set into the trained Mask-RCNN network obtained in step six to obtain a total prediction loss function and a mask image.
2. The Mask R-CNN network and electrocardiogram-based parameter acquisition method according to claim 1, wherein the third step comprises:
step 3.1, setting a constant H as a threshold for the training set and the test set; taking the threshold H as a starting point, when the electrocardiosignal first rises above the threshold to a highest point and then falls back below the threshold H, defining the value of the highest point as the R wave peak of the QRS complex, wherein H ranges from 0.7 to 0.9;
step 3.2, dividing each electrocardiosignal in the electrocardiosignal group of the training set into electrocardiosignal curve units, each spanning the width of three adjacent R wave peaks, to correspondingly obtain an electrocardiosignal curve unit group of the training set; and dividing each electrocardiosignal in the electrocardiosignal group of the test set in the same way to correspondingly obtain an electrocardiosignal curve unit group of the test set;
and step 3.3, classifying each electrocardiosignal curve unit in the electrocardiosignal curve unit group of the training set to correspondingly obtain a classification type group of the training set.
3. The Mask R-CNN network and electrocardiogram-based parameter acquisition method according to claim 2, wherein the fourth step specifically is:
step 4.1, counting the average number of sampling points between R wave peaks in the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set obtained in step three, and taking n times this average number as the total sampling length, wherein n is greater than 1; when the sampling length exceeds the length of the electrocardiosignal curve unit, the y value of the excess sampling points is set to 0;
step 4.2, performing linear normalization on each sampling point of each electrocardiosignal curve unit from step 4.1, so that the y values of the sampling points are distributed over the range [-1, 1];
step 4.3, discretizing the sampling points distributed over the range [-1, 1] in step 4.2 to correspondingly obtain a discretized electrocardiosignal curve unit group of the training set and a discretized electrocardiosignal curve unit group of the test set;
and step 4.4, performing N-fold sparse sampling on the discretized electrocardiosignal curve units obtained in step 4.3 to correspondingly obtain an electrocardiosignal graphic unit group of the training set and an electrocardiosignal graphic unit group of the test set, wherein N ranges from 2 to 4.
4. The Mask R-CNN network and electrocardiogram-based parameter acquisition method according to claim 3, wherein: in the fifth step, for each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the training set and each electrocardiosignal graphic unit in the electrocardiosignal graphic unit group of the test set, the pixel point of the electrocardio curve is found in each column of the electrocardiosignal graphic unit and all points below that pixel point are set to black, thereby correspondingly obtaining the electrocardiosignal curve graphic unit group of the training set and the electrocardiosignal curve graphic unit group of the test set.
5. The Mask R-CNN network and electrocardiogram-based parameter acquisition method according to claim 4, wherein: the categories include normal and sinus rhythm, sinus bradycardia, sinus tachycardia, electrical axis left deviation, electrical axis right deviation, sinus arrhythmia, right bundle branch block, premature ventricular contraction, complete right bundle branch block, left ventricular high voltage, ST-T change, ST segment change, first degree atrioventricular block, incomplete right bundle branch block, and atrial fibrillation.
6. The Mask R-CNN network and electrocardiogram based parameter acquisition method according to claim 5, wherein: the total prediction loss function L is represented by formula (I),
L = L_cls + L_box + L_mask      formula (I),
wherein L_cls is the classification prediction loss function, L_box is the regression prediction loss function, and L_mask is the mask output prediction loss function.
7. The Mask R-CNN network and electrocardiogram based parameter acquisition method according to claim 6, wherein: the mask image is an electrocardiosignal curve graphic unit with different colors.
8. The Mask R-CNN network and electrocardiogram based parameter acquisition method according to claim 7, wherein: the average number of sampling points in step 4.1 is obtained by counting all the sampling points in all the electrocardiosignal curve units of the electrocardiosignal curve unit group of the training set and the electrocardiosignal curve unit group of the test set, and then dividing this total number of sampling points by the total number of electrocardiosignal curve units.
9. The Mask R-CNN network and electrocardiogram based parameter acquisition method according to claim 8, wherein: the Mask-RCNN network is a Mask-RCNN network with a residual error network and a characteristic pyramid network introduced.
10. The Mask R-CNN network and electrocardiogram based parameter acquisition method according to claim 9, wherein: the residual network is a ResNet50 or ResNet101 network.
CN202210106655.1A 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram Active CN114366116B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210106655.1A CN114366116B (en) 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210106655.1A CN114366116B (en) 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram

Publications (2)

Publication Number Publication Date
CN114366116A true CN114366116A (en) 2022-04-19
CN114366116B CN114366116B (en) 2023-08-25

Family

ID=81146540

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210106655.1A Active CN114366116B (en) 2022-01-28 2022-01-28 Parameter acquisition method based on Mask R-CNN network and electrocardiogram

Country Status (1)

Country Link
CN (1) CN114366116B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862843A (en) * 2022-12-12 2023-03-28 哈尔滨医科大学 Auxiliary identification system and equipment for myocardial troponin elevation type and cardiovascular diseases

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256453A (en) * 2018-01-06 2018-07-06 天津大学 A kind of method based on one-dimensional ECG signal extraction two dimension CNN features
CN110522444A (en) * 2019-09-03 2019-12-03 西安邮电大学 A kind of electrocardiosignal method for identifying and classifying based on Kernel-CNN
KR20190141326A (en) * 2018-06-14 2019-12-24 한국과학기술원 Method and Apparatus for ECG Arrhythmia Classification using a Deep Convolutional Neural Network
WO2020006939A1 (en) * 2018-07-06 2020-01-09 苏州大学张家港工业技术研究院 Electrocardiogram generation and classification method based on generative adversarial network
CN111882559A (en) * 2020-01-20 2020-11-03 深圳数字生命研究院 ECG signal acquisition method and device, storage medium and electronic device
CN112686217A (en) * 2020-11-02 2021-04-20 坝道工程医院(平舆) Mask R-CNN-based detection method for disease pixel level of underground drainage pipeline
CN112826513A (en) * 2021-01-05 2021-05-25 华中科技大学 Fetal heart rate detection system based on deep learning and specificity correction on FECG
CN113057648A (en) * 2021-03-22 2021-07-02 山西三友和智慧信息技术股份有限公司 ECG signal classification method based on composite LSTM structure
CN113274031A (en) * 2021-04-30 2021-08-20 西安理工大学 Arrhythmia classification method based on deep convolution residual error network

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256453A (en) * 2018-01-06 2018-07-06 天津大学 A kind of method based on one-dimensional ECG signal extraction two dimension CNN features
KR20190141326A (en) * 2018-06-14 2019-12-24 한국과학기술원 Method and Apparatus for ECG Arrhythmia Classification using a Deep Convolutional Neural Network
WO2020006939A1 (en) * 2018-07-06 2020-01-09 苏州大学张家港工业技术研究院 Electrocardiogram generation and classification method based on generative adversarial network
CN110522444A (en) * 2019-09-03 2019-12-03 西安邮电大学 A kind of electrocardiosignal method for identifying and classifying based on Kernel-CNN
CN111882559A (en) * 2020-01-20 2020-11-03 深圳数字生命研究院 ECG signal acquisition method and device, storage medium and electronic device
CN112686217A (en) * 2020-11-02 2021-04-20 坝道工程医院(平舆) Mask R-CNN-based detection method for disease pixel level of underground drainage pipeline
CN112826513A (en) * 2021-01-05 2021-05-25 华中科技大学 Fetal heart rate detection system based on deep learning and specificity correction on FECG
CN113057648A (en) * 2021-03-22 2021-07-02 山西三友和智慧信息技术股份有限公司 ECG signal classification method based on composite LSTM structure
CN113274031A (en) * 2021-04-30 2021-08-20 西安理工大学 Arrhythmia classification method based on deep convolution residual error network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862843A (en) * 2022-12-12 2023-03-28 哈尔滨医科大学 Auxiliary identification system and equipment for myocardial troponin elevation type and cardiovascular diseases
CN115862843B (en) * 2022-12-12 2024-02-02 哈尔滨医科大学 Auxiliary identification system and equipment for myocardial troponin elevation type and cardiovascular diseases

Also Published As

Publication number Publication date
CN114366116B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
Wang et al. A high-precision arrhythmia classification method based on dual fully connected neural network
Shi et al. Automated heartbeat classification based on deep neural network with multiple input layers
Yang et al. Automatic recognition of arrhythmia based on principal component analysis network and linear support vector machine
Shi et al. A hierarchical method based on weighted extreme gradient boosting in ECG heartbeat classification
Tuncer et al. Automated arrhythmia detection using novel hexadecimal local pattern and multilevel wavelet transform with ECG signals
CN111160139B (en) Electrocardiosignal processing method and device and terminal equipment
Li et al. Inter-patient arrhythmia classification with improved deep residual convolutional neural network
Wang et al. Arrhythmia classification algorithm based on multi-head self-attention mechanism
Mousavi et al. ECGNET: Learning where to attend for detection of atrial fibrillation with deep visual attention
Alquran et al. ECG classification using higher order spectral estimation and deep learning techniques
CN111990989A (en) Electrocardiosignal identification method based on generation countermeasure and convolution cyclic network
CN110974214A (en) Automatic electrocardiogram classification method, system and equipment based on deep learning
Jin et al. A novel interpretable method based on dual-level attentional deep neural network for actual multilabel arrhythmia detection
Wen et al. Classification of ECG complexes using self-organizing CMAC
CN111631705A (en) Electrocardio abnormality detection method, model training method, device, equipment and medium
Wang et al. Multi-class arrhythmia detection based on neural network with multi-stage features fusion
Feyisa et al. Lightweight multireceptive field CNN for 12-lead ECG signal classification
CN115530788A (en) Arrhythmia classification method based on self-attention mechanism
Al-Huseiny et al. Diagnosis of arrhythmia based on ECG analysis using CNN
CN112957052B (en) Multi-lead electrocardiosignal classification method based on NLF-CNN lead fusion depth network
CN112450944B (en) Label correlation guide feature fusion electrocardiogram multi-classification prediction system and method
CN114366116A (en) Parameter acquisition method based on Mask R-CNN network and electrocardiogram
CN116864140A (en) Intracardiac branch of academic or vocational study postoperative care monitoring data processing method and system thereof
CN116649899A (en) Electrocardiogram signal classification method based on attention mechanism feature fusion
Khazaee Automated cardiac beat classification using RBF neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant