CN117322887A - Electrocardiogram and heart sound diagnosis method, device and equipment based on artificial neural network - Google Patents


Info

Publication number
CN117322887A
CN117322887A (application CN202311295179.3A)
Authority
CN
China
Prior art keywords
data
training
neural network
training data
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311295179.3A
Other languages
Chinese (zh)
Inventor
边俊杰
阿里
计玮
孟亮
曹迪
季晓龙
张景宾
齐保垒
朱林
王振浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Shanren Medical Technology Co ltd
Original Assignee
Henan Shanren Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Shanren Medical Technology Co ltd filed Critical Henan Shanren Medical Technology Co ltd
Priority to CN202311295179.3A priority Critical patent/CN117322887A/en
Publication of CN117322887A publication Critical patent/CN117322887A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/318 Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B5/346 Analysis of electrocardiograms
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B7/00 Instruments for auscultation
    • A61B7/02 Stethoscopes
    • A61B7/04 Electric stethoscopes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Mathematical Physics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Cardiology (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Pathology (AREA)
  • Signal Processing (AREA)
  • Fuzzy Systems (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

The application relates to the technical field of medical data processing, and in particular to an electrocardiographic heart sound diagnosis method, device, electronic equipment and storage medium based on an artificial neural network, which obtain a predicted data label through a trained heart recognition model and improve the accuracy and efficiency of data label prediction. The main scheme is as follows: train a first neural network model based on first training data and the actual data labels corresponding to the sample data, and train a second neural network model based on second training data and the actual data labels corresponding to the sample data; obtain first training data features corresponding to the first training data through the trained first neural network model, and obtain second training data features corresponding to the second training data through the trained second neural network model; train the heart recognition model on the first training data features and the second training data features; and obtain the predicted data label corresponding to the user data of the user to be recognized through the trained heart recognition model.

Description

Electrocardiogram and heart sound diagnosis method, device and equipment based on artificial neural network
Technical Field
The application relates to the technical field of medical data processing, in particular to an electrocardiographic heart sound diagnosis method, device, electronic equipment and storage medium based on an artificial neural network.
Background
Electrocardiographic and phonocardiographic examinations are fundamental means of diagnosing heart disease. An electrocardiogram provides information on heart rate, heart rhythm and, to some extent, heart structure, but its ability to differentiate between diseases is weak: heart diseases with different causes can show similar electrocardiographic presentations. For example, an electrocardiogram suggesting left ventricular hypertrophy may be caused by hypertension, mitral regurgitation or stenosis, aortic valve stenosis, and so on. If the changes in the patient's heart sound signal can be considered as well, more parameters are available for differentiating between these diseases.
At present, diagnosis from an electrocardiogram and a phonocardiogram relies mainly on the physician; that is, the prior art depends on the physician's clinical experience, and because medical capability is uneven, the efficiency and accuracy of patient diagnosis are low.
Disclosure of Invention
In view of this, the present application provides an electrocardiographic heart sound diagnosis method, device, electronic equipment and storage medium based on an artificial neural network, which are used for obtaining a predicted data tag through a trained heart recognition model, and improving the accuracy and the prediction efficiency of data tag prediction.
In a first aspect, an embodiment of the present application provides an electrocardiographic heart sound diagnosis method based on an artificial neural network, where the method includes:
training a first neural network model based on first training data and actual data labels corresponding to the sample data, and training a second neural network model based on second training data and actual data labels corresponding to the sample data, until both the first neural network model and the second neural network model converge; the sample data comprises user basic data together with an electrocardiogram and a phonocardiogram acquired synchronously on the same time base;
acquiring first training data features corresponding to the first training data through the trained first neural network model, and acquiring second training data features corresponding to the second training data based on the trained second neural network model;
training the heart recognition model according to the first training data characteristic and the second training data characteristic until the loss value of the heart recognition model meets a preset condition;
and obtaining a predicted data tag corresponding to the user data of the user to be identified according to the trained heart identification model.
Optionally, the acquiring, by using the trained first neural network model, the first training data feature corresponding to the first training data, and acquiring, based on the trained second neural network model, the second training data feature corresponding to the second training data, includes:
Inputting the first training data into the trained first neural network model to obtain first training data characteristics to be selected and a first prediction data label;
inputting the second training data into the trained second neural network model to obtain second training data characteristics to be selected and second prediction data labels;
and determining the first training data characteristic and the second training data characteristic based on the actual data label, the first prediction data label and the second prediction data label which belong to the same sample data.
Optionally, the determining the first training data feature and the second training data feature based on the actual data tag, the first predicted data tag, and the second predicted data tag corresponding to the same sample data includes:
acquiring the actual data tag, the first predicted data tag and the second predicted data tag that correspond to the same sample data;
and if the actual data labels, the first predicted data labels and the second predicted data labels which belong to the same sample data are the same, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
Optionally, the determining the first training data feature and the second training data feature based on the actual data tag, the first predicted data tag, and the second predicted data tag corresponding to the same sample data includes:
acquiring the actual data tag, the first predicted data tag and the second predicted data tag that correspond to the same sample data;
determining a first tag probability value and a second tag probability value which are the same as the actual data tag in the first predicted data tag and the second predicted data tag;
weighting the first tag probability value and the second tag probability value to obtain a weighted value;
and if the weighted value is larger than a preset value, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
Optionally, the training the heart recognition model according to the first training data feature and the second training data feature until the loss value of the heart recognition model meets a preset condition includes:
inputting the first training data characteristic and the second training data characteristic into the heart recognition model to obtain a prediction data label;
Calculating a loss value of the heart recognition model according to the prediction data tag and the corresponding actual data tag;
determining whether the loss value of the heart recognition model meets a preset condition;
if the loss value meets a preset condition, stopping training the heart recognition model; and if the loss value does not meet the preset condition, continuing to train the heart recognition model.
Optionally, the method further comprises:
determining an electrocardiographic feature map, a heart sound feature map, an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the sample data as third training data;
training a third neural network model based on third training data corresponding to the sample data and an actual data label until the third neural network model is converged;
and determining a predicted data label corresponding to the user data of the user to be identified through the trained first neural network model, the trained second neural network model and the trained third neural network model.
Optionally, the determining, by using the trained first neural network model, the second neural network model and the third neural network model, a predicted data tag corresponding to user data of the user to be identified includes:
determining the electrocardiographic feature map, heart sound feature map, electrocardiographic parameter values, heart sound parameter values and user basic data of the user to be identified from the user data of the user to be identified;
inputting the electrocardiographic feature map, the heart sound feature map and the user basic data of the user to be identified into a trained first neural network model to obtain a first prediction data label;
inputting the electrocardiographic parameter values, the heart sound parameter values and the user basic data of the user to be identified into the trained second neural network model to obtain a second prediction data label;
inputting an electrocardiographic feature map, a heart sound feature map, electrocardiographic parameter values, heart sound parameter values and user basic data of a user to be identified into a trained third neural network model to obtain a third prediction data label;
and determining a predicted data tag corresponding to the user data of the user to be identified according to the first predicted data tag, the second predicted data tag and the third predicted data tag.
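The text above does not fix how the three models' predicted labels are combined into the final label. A minimal sketch, assuming a simple majority vote (one plausible rule, not one prescribed by the patent):

```python
from collections import Counter

def fuse_labels(label1, label2, label3):
    """Combine the three predicted data labels into one final label.

    Majority voting is an assumption of this sketch: the most frequent
    of the three labels wins; on a three-way tie the first model's
    label is returned (Counter preserves insertion order).
    """
    return Counter([label1, label2, label3]).most_common(1)[0][0]
```

A weighted vote, using per-model confidence as in the weighted-value check described earlier, would be an equally valid reading.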
In a second aspect, an embodiment of the present application further provides an electrocardiographic heart sound diagnostic device based on an artificial neural network, where the device includes:
the training module is used for training the first neural network model based on the first training data and the actual data labels corresponding to the sample data, and training the second neural network model based on the second training data and the actual data labels corresponding to the sample data, until both the first neural network model and the second neural network model converge; the sample data comprises user basic data together with an electrocardiogram and a phonocardiogram acquired synchronously on the same time base;
The acquisition module is further used for acquiring first training data features corresponding to the first training data through the trained first neural network model, and acquiring second training data features corresponding to the second training data based on the trained second neural network model;
the training module is further configured to train the heart recognition model according to the first training data feature and the second training data feature until a loss value of the heart recognition model meets a preset condition;
and the prediction module is used for obtaining a prediction data label corresponding to the user data of the user to be identified according to the trained heart identification model.
In a third aspect, embodiments of the present application further provide an electronic device, comprising a processor, a memory and a bus. The memory stores machine-readable instructions executable by the processor; when the electronic device is running, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the electrocardiographic heart sound diagnosis method based on the artificial neural network of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to perform the steps of the electrocardiographic heart sound diagnosis method based on the artificial neural network in the first aspect.
According to the electrocardiographic heart sound diagnosis method, device, electronic equipment and storage medium based on the artificial neural network, sample data for training the heart recognition model and the actual data labels corresponding to the sample data are obtained; the electrocardiogram and the phonocardiogram are converted into an electrocardiographic feature map and a heart sound feature map, and electrocardiographic parameter values and heart sound parameter values are extracted from the electrocardiogram and the phonocardiogram respectively; the electrocardiographic feature map, the heart sound feature map and the user basic data corresponding to the sample data are determined as first training data, and the electrocardiographic parameter values, the heart sound parameter values and the user basic data corresponding to the sample data are determined as second training data; the first neural network model is trained based on the first training data and actual data labels corresponding to the sample data, and the second neural network model is trained based on the second training data and actual data labels, until both neural network models converge; first training data features corresponding to the first training data are obtained through the trained first neural network model, and second training data features corresponding to the second training data are obtained through the trained second neural network model; the heart recognition model is trained on the first and second training data features until its loss value meets a preset condition; and the predicted data label corresponding to the user data of the user to be identified is obtained through the trained heart recognition model.
In this embodiment, the data required for training the heart recognition model is obtained through the first neural network model and the second neural network model, and the predicted data label is then obtained through the trained heart recognition model; that is, the predicted data label can be produced without the intervention of medical staff, so this embodiment improves both the accuracy and the efficiency of data label prediction.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered as limiting the scope; other related drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
Fig. 1 shows a flowchart of an electrocardiographic heart sound diagnosis method based on an artificial neural network according to an embodiment of the present application;
FIG. 2 is a flowchart of another method for electrocardiographic heart sound diagnosis based on an artificial neural network according to an embodiment of the present application;
fig. 3 shows a block diagram of an electrocardiographic heart sound diagnostic device based on an artificial neural network according to an embodiment of the present application.
Detailed Description
The terms first, second, third and the like in the description and in the claims and in the above drawings are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order.
In the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as examples, illustrations, or descriptions. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion that may be readily understood.
In the description of the present application, unless otherwise indicated, "/" means that the associated object is an "or" relationship, e.g., a/B may represent a or B; the term "and/or" in this application is merely an association relation describing an association object, and means that three kinds of relations may exist, for example, a and/or B may mean: there are three cases, a alone, a and B together, and B alone, wherein a, B may be singular or plural. Also, in the description of the present application, unless otherwise indicated, "a plurality" means two or more than two. "at least one of" or the like means any combination of these items, including any combination of single item(s) or plural items(s). For example, at least one (one) of a, b, or c may represent: a, b, c, a-b, a-c, b-c, or a-b-c, wherein a, b, c may be single or plural.
In the embodiments of the present application, at least one may also be described as one or more, and a plurality may be two, three, four or more, which is not limited in this application.
As shown in fig. 1, an embodiment of the present application provides an electrocardiographic heart sound diagnosis method based on an artificial neural network, which may include:
s101, acquiring sample data of a training heart recognition model and an actual data tag corresponding to the sample data.
Wherein the sample data includes user basic data together with an electrocardiogram and a phonocardiogram acquired synchronously on the same time base. The user basic data comprises information such as the user's gender, age, weight and health condition. The actual data label in this embodiment is the case label corresponding to the sample data, and its content may be any of various heart conditions; for example, the actual data label may be sinus tachycardia, sinus bradycardia, arrhythmia, myocardial infarction, etc., which is not specifically limited in this embodiment.
S102, converting the electrocardiogram and the phonocardiogram into an electrocardiographic feature map and a heart sound feature map, and extracting electrocardiographic parameter values and heart sound parameter values from the electrocardiogram and the phonocardiogram respectively.
After the electrocardiogram and the phonocardiogram are obtained, the cardiac cycles of the two recordings can be aligned on the time axis, so that the portions belonging to the same cardiac cycle can be identified synchronously. Specifically, the segments corresponding to the same cardiac cycle can be cut out of the electrocardiogram and the phonocardiogram, and the cut-out segments are then taken as the electrocardiographic feature map and the heart sound feature map.
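The cycle-wise pairing of the two signals can be sketched as follows. This is an illustrative sketch only: it assumes both signals share one time base and that the cycle boundary indices are already known (in practice they would come from a step such as R-peak detection, which the patent does not detail).

```python
import numpy as np

def slice_same_cycle(ecg, pcg, cycle_starts):
    """Pair ECG and PCG segments belonging to the same cardiac cycle.

    ecg and pcg are 1-D arrays sampled on the same time base (assumed);
    cycle_starts holds the sample index where each cardiac cycle begins.
    Returns a list of (ecg_segment, pcg_segment) pairs, one per cycle.
    """
    pairs = []
    for start, end in zip(cycle_starts[:-1], cycle_starts[1:]):
        pairs.append((ecg[start:end], pcg[start:end]))
    return pairs
```

Because both segments are cut with the same indices, each pair is guaranteed to cover the same cardiac cycle.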
In this embodiment, the electrocardiographic parameter values extracted from the electrocardiogram are the values of the respective electrocardiographic parameters. Table 1 below lists the electrocardiographic parameters obtained from the electrocardiogram, the meaning of each parameter, and the normal reference range of each parameter value.
TABLE 1
In this embodiment, the heart sound parameter values extracted from the phonocardiogram are the values of the respective heart sound parameters. Table 2 below lists the heart sound parameters obtained from the phonocardiogram, the meaning of each parameter, and the normal reference range of each parameter value.
TABLE 2
S103, determining an electrocardiographic feature map, a heart sound feature map and user basic data corresponding to the sample data as first training data, and determining an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the sample data as second training data.
S104, training a first neural network model based on the first training data and actual data labels corresponding to the sample data, and training a second neural network model based on the second training data and actual data labels corresponding to the sample data, until both the first neural network model and the second neural network model converge.
It should be noted that, the first neural network model and the second neural network model in this embodiment may be trained by using the same network structure, or may be trained by using different network structures. Optionally, the first neural network model and the second neural network model are trained using different network structures.
In this embodiment, the neural network model convergence may be determined by a loss function, that is, a loss value is calculated by the loss function, and then whether the loss value is smaller than a certain value is determined to determine whether the neural network model converges.
Specifically, in this embodiment, first training data corresponding to sample data is input to a first neural network model to obtain a predicted data tag, and then a first loss value is calculated according to the predicted data tag and an actual data tag; and inputting second training data corresponding to the sample data into a second neural network model to obtain a predicted data label, calculating a second loss value according to the predicted data label and the actual data label, and determining that the first neural network model and the second neural network model are converged when the first loss value and the second loss value are smaller than a certain value.
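The train-until-loss-below-threshold loop described above can be sketched as follows, with a toy logistic regressor standing in for each branch network (the patent does not specify the architecture, and the threshold value here is arbitrary):

```python
import numpy as np

def train_branch(X, y, lr=0.5, eps=0.2, max_steps=2000):
    """Train one branch model until its loss falls below eps.

    A logistic regressor is used as a minimal stand-in for a neural
    network; X is the training data, y the binary actual labels.
    Returns the learned weights and the final loss value.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    loss = float("inf")
    for _ in range(max_steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))                      # predicted label probability
        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        if loss < eps:                                        # convergence condition met
            break
        w -= lr * X.T @ (p - y) / len(y)                      # gradient step
    return w, loss
```

Both branches are trained this way; the pair is considered converged once both final losses are below the threshold, i.e. `loss1 < eps and loss2 < eps`.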
S105, acquiring first training data features corresponding to the first training data through the trained first neural network model, and acquiring second training data features corresponding to the second training data based on the trained second neural network model.
It should be noted that the training data features in this embodiment are taken from the layer immediately before the output layer of the neural network model: the first training data is input into the trained first neural network model to obtain the first training data features corresponding to the first training data, and the second training data is input into the trained second neural network model to obtain the second training data features corresponding to the second training data.
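Taking the feature from the layer before the output layer can be sketched with a toy two-layer network (the layer sizes and activation are assumptions of this sketch, not the patent's architecture):

```python
import numpy as np

def forward_with_features(x, W1, b1, W2, b2):
    """Forward pass of a toy two-layer network.

    The hidden activation h, computed just before the output layer,
    plays the role of the 'training data feature'; the logits are the
    output layer's label prediction.
    """
    h = np.tanh(x @ W1 + b1)   # penultimate-layer activation = feature
    logits = h @ W2 + b2       # output layer = predicted data label scores
    return h, logits
```

In a framework like PyTorch the same effect is usually achieved by registering a forward hook on the penultimate layer or by returning the hidden activation alongside the output.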
In an optional embodiment provided in the present application, the obtaining, by the trained first neural network model, the first training data feature corresponding to the first training data, and obtaining, based on the trained second neural network model, the second training data feature corresponding to the second training data, includes:
s1051, inputting the first training data into the trained first neural network model to obtain first training data features to be selected and first prediction data labels.
S1052, inputting the second training data into the trained second neural network model to obtain second training data features to be selected and second prediction data labels.
S1053, determining the first training data characteristic and the second training data characteristic based on the actual data label, the first prediction data label and the second prediction data label corresponding to the same sample data.
Optionally, the determining the first training data feature and the second training data feature based on the actual data tag, the first predicted data tag, and the second predicted data tag corresponding to the same sample data includes: acquiring an actual data tag, the first predicted data tag and the second predicted data tag which belong to the same sample data and correspond to the same sample data; and if the actual data labels, the first predicted data labels and the second predicted data labels which belong to the same sample data are the same, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
For example, suppose the training data set includes sample data 1 and sample data 2. The first training data corresponding to sample data 1 is input into the first neural network model to obtain a first predicted data label, and the second training data corresponding to sample data 1 is input into the second neural network model to obtain a second predicted data label. If the first predicted data label, the second predicted data label and the actual data label of sample data 1 are all label A, the first training data feature to be selected corresponding to sample data 1 is determined as a first training data feature, and the second training data feature to be selected is determined as a second training data feature. If the first predicted data label, the second predicted data label and the actual data label of sample data 2 are not all the same, sample data 2 is discarded.
For this embodiment, when the actual data label, the first predicted data label and the second predicted data label corresponding to the same sample data are identical, the corresponding category can be predicted accurately from that sample data; that is, the sample data contains features suitable for data category prediction. This embodiment therefore determines the first training data feature to be selected of that sample data as a first training data feature and the second training data feature to be selected as a second training data feature, ensuring that the heart recognition model is trained effectively on these features in the subsequent step.
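The agreement-based filtering described above amounts to keeping a sample's candidate features only when both models predict its actual label. A minimal sketch:

```python
def keep_agreeing_samples(actual, pred1, pred2, feats1, feats2):
    """Filter candidate features by label agreement.

    A sample's candidate features are kept only when its actual label
    and both models' predicted labels coincide; all other samples are
    discarded, as in the sample data 2 example above.
    """
    kept1, kept2 = [], []
    for a, p1, p2, f1, f2 in zip(actual, pred1, pred2, feats1, feats2):
        if a == p1 == p2:          # all three labels identical
            kept1.append(f1)
            kept2.append(f2)
    return kept1, kept2
```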
Optionally, the determining the first training data feature and the second training data feature based on the actual data tag, the first predicted data tag, and the second predicted data tag corresponding to the same sample data includes: acquiring an actual data tag, the first predicted data tag and the second predicted data tag which belong to the same sample data and correspond to the same sample data; determining a first tag probability value and a second tag probability value which are the same as the actual data tag in the first predicted data tag and the second predicted data tag; weighting the first tag probability value and the second tag probability value to obtain a weighted value; and if the weighted value is larger than a preset value, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
For example, the actual data tag corresponding to the sample data 3 is the tag B; after the first training data corresponding to the sample data 3 is input into the first neural network model, the probability value of the tag B is 80%; after the second training data corresponding to the sample data 3 is input into the second neural network model, the probability value of the tag B is 90%. If the weight of the first neural network model is 0.6 and the weight of the second neural network model is 0.4, the weighted value of the first tag probability value and the second tag probability value is 0.84 (80%×0.6+90%×0.4=0.84). If the preset value is 0.8, the first training data feature to be selected corresponding to the sample data 3 can be determined as the first training data feature, and the second training data feature to be selected can be determined as the second training data feature.
In this embodiment, after determining the first tag probability value and the second tag probability value that are the same as the actual data tag in the first predicted data tag and the second predicted data tag, if the weighted value of the first tag probability value and the second tag probability value is greater than the preset value, determining the first training data feature to be selected as the first training data feature, and determining the second training data feature to be selected as the second training data feature. In this way, it is ensured that the heart recognition model is effectively trained in a subsequent step on the basis of the first training data characteristic and the second training data characteristic.
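The weighted-threshold rule can be sketched as a small Python function. The weights 0.6/0.4 and the preset value 0.8 are taken from the sample-data-3 example; the function and parameter names are illustrative, not from the application.

```python
def weighted_keep(p1, p2, w1=0.6, w2=0.4, preset=0.8):
    """Weight the two models' probability values for the actual data tag and
    keep the candidate features when the weighted value exceeds the preset
    value. Defaults follow the sample-data-3 example; names are illustrative."""
    weighted = p1 * w1 + p2 * w2
    return weighted > preset

# Sample data 3: first model gives 80% for tag B, second model gives 90%.
print(weighted_keep(0.80, 0.90))  # 0.80*0.6 + 0.90*0.4 = 0.84 > 0.8 → True
```

With a lower pair of probabilities, say 50%/50%, the weighted value 0.5 falls below the preset value and the sample's candidate features would not be kept.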
And S106, training the heart recognition model according to the first training data characteristic and the second training data characteristic until the loss value of the heart recognition model meets a preset condition.
In an optional embodiment provided in the present application, the training the cardiac recognition model according to the first training data feature and the second training data feature until the loss value of the cardiac recognition model meets a preset condition includes: inputting the first training data characteristic and the second training data characteristic into a heart recognition model to obtain a prediction data label; calculating a loss value of the heart recognition model according to the predicted data label and the corresponding actual data label; determining whether the loss value of the heart recognition model meets a preset condition; if the loss value meets a preset condition, stopping training the heart recognition model; and if the loss value does not meet the preset condition, continuing to train the heart recognition model.
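The train-until-the-loss-meets-a-preset-condition loop in S106 can be sketched as follows. This is a hedged sketch: the application does not specify the loss function or the concrete preset condition, so a simple below-threshold condition and a toy loss sequence are assumed; all names (`train_heart_model`, `step`, `loss_threshold`) are illustrative.

```python
def train_heart_model(step, max_epochs=100, loss_threshold=0.01):
    """Run training passes until the loss value meets the preset condition
    (assumed here: loss below a threshold). `step` performs one training pass
    over the fused first/second training data features and returns the loss."""
    loss = float("inf")
    for epoch in range(max_epochs):
        loss = step()
        if loss < loss_threshold:  # preset condition met → stop training
            return epoch, loss
    return max_epochs, loss        # condition not met within the budget

# Toy step whose loss shrinks each call, for demonstration only.
losses = iter([0.5, 0.2, 0.05, 0.005])
print(train_heart_model(lambda: next(losses)))  # → (3, 0.005)
```

In practice `step` would forward the first and second training data features through the heart recognition model, compute the loss against the actual data tags, and back-propagate.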
And S107, obtaining a predicted data label corresponding to the user data of the user to be identified according to the trained heart identification model.
The user data of the user to be identified comprises user basic data and an electrocardiogram and a phonocardiogram which are acquired based on the same time frequency. After user data of a user to be identified is acquired, according to the step content of S101-S103, first data (an electrocardiographic feature map, a heart sound feature map and user basic data corresponding to the user data) and second data (an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the user data) are determined according to the user data, then first data features corresponding to the first data are acquired through a trained first neural network model, second data features corresponding to the second data are acquired based on a trained second neural network model, and then the first data features and the second data features are input into a trained heart identification model to obtain a predicted data tag.
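The inference pipeline described above can be sketched as a single function. The stub models and all field names are placeholders standing in for the trained first and second neural network models and the trained heart recognition model; none of these names come from the application.

```python
def predict_label(user_data, model1, model2, heart_model):
    """Sketch of S107: build the first and second data from the user data,
    extract features with the trained models, and fuse them in the heart
    recognition model to obtain the predicted data tag."""
    # First data: electrocardiographic feature map, heart sound feature map, basic data.
    first_data = (user_data["ecg_map"], user_data["pcg_map"], user_data["basic"])
    # Second data: electrocardiographic parameter values, heart sound parameter values, basic data.
    second_data = (user_data["ecg_params"], user_data["pcg_params"], user_data["basic"])
    f1 = model1(first_data)      # first data features from the trained first model
    f2 = model2(second_data)     # second data features from the trained second model
    return heart_model(f1, f2)   # predicted data tag from the heart recognition model

# Stub models for demonstration only; real models would be trained networks.
tag = predict_label(
    {"ecg_map": "m1", "pcg_map": "m2", "basic": {"age": 45},
     "ecg_params": [0.8], "pcg_params": [0.3]},
    model1=lambda d: ("f1", d),
    model2=lambda d: ("f2", d),
    heart_model=lambda a, b: "label-A",
)
print(tag)  # → label-A
```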
According to the electrocardiographic heart sound diagnosis method based on the artificial neural network, sample data for training the heart recognition model and an actual data tag corresponding to the sample data are obtained; the electrocardiogram and the phonocardiogram are converted into an electrocardiographic feature map and a heart sound feature map, and an electrocardiographic parameter value and a heart sound parameter value are respectively extracted from the electrocardiogram and the phonocardiogram; the electrocardiographic feature map, heart sound feature map and user basic data corresponding to the sample data are determined as first training data, and the electrocardiographic parameter value, heart sound parameter value and user basic data corresponding to the sample data are determined as second training data; a first neural network model is trained based on the first training data and actual data tag corresponding to the sample data, and a second neural network model is trained based on the second training data and actual data tag corresponding to the sample data, until both the first neural network model and the second neural network model converge; first training data features corresponding to the first training data are acquired through the trained first neural network model, and second training data features corresponding to the second training data are acquired based on the trained second neural network model; the heart recognition model is trained according to the first training data features and the second training data features until the loss value of the heart recognition model meets a preset condition; and a predicted data tag corresponding to the user data of the user to be identified is obtained according to the trained heart recognition model.
According to the embodiment, the training data features required for training the heart recognition model are obtained through the first neural network model and the second neural network model, and the predicted data tag is then obtained through the trained heart recognition model. That is, the predicted data tag can be obtained without the intervention of medical staff, so that both the accuracy and the efficiency of data tag prediction can be improved.
As shown in fig. 2, another method for diagnosing electrocardiographic heart sounds based on an artificial neural network is provided in an embodiment of the present application, where the method for diagnosing electrocardiographic heart sounds based on an artificial neural network may include:
S201, acquiring sample data of a training heart recognition model and an actual data tag corresponding to the sample data; the sample data includes user basic data and an electrocardiogram and a phonocardiogram which are acquired based on the same time frequency.
S202, converting the electrocardiogram and the phonocardiogram into an electrocardiographic feature map and a heart sound feature map, and respectively extracting an electrocardiographic parameter value and a heart sound parameter value from the electrocardiogram and the phonocardiogram.
It should be noted that steps S201 and S202 are the same as the corresponding steps in fig. 1, and their description is omitted in this embodiment.
S203, determining an electrocardiographic feature map, a heart sound feature map and user basic data corresponding to the sample data as first training data, and determining an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the sample data as second training data; and determining an electrocardiographic feature map, a heart sound feature map, an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the sample data as third training data.
S204, training a first neural network model based on first training data and actual data labels corresponding to the sample data, and training a second neural network model based on second training data and actual data labels corresponding to the sample data; and training a third neural network model based on the third training data corresponding to the sample data and the actual data label.
S205, determining a predicted data label corresponding to the user data of the user to be identified through the trained first neural network model, the trained second neural network model and the trained third neural network model.
In an alternative embodiment, determining a predicted data tag corresponding to user data of the user to be identified through the trained first neural network model, second neural network model and third neural network model includes:
S2051, determining an electrocardiographic feature map, a heart sound feature map, an electrocardiographic parameter value, a heart sound parameter value and user basic data of the user to be identified from the user data of the user to be identified.
S2052, inputting the electrocardiographic feature map, the heart sound feature map and the user basic data of the user to be identified into a trained first neural network model to obtain a first prediction data label.
S2053, inputting the electrocardio parameter value, the heart sound parameter value and the user basic data of the user to be identified into a trained second neural network model to obtain a second predicted data label.
S2054, inputting the electrocardiographic feature map, the heart sound feature map, the electrocardiographic parameter value, the heart sound parameter value and the user basic data of the user to be identified into a trained third neural network model to obtain a third prediction data label.
S2055, determining a predicted data tag corresponding to the user data of the user to be identified according to the first predicted data tag, the second predicted data tag and the third predicted data tag.
Specifically, the predicted data tag corresponding to the user data of the user to be identified can be obtained by weighting the tag probability values corresponding to the first predicted data tag, the second predicted data tag and the third predicted data tag.
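The three-model weighted fusion in S2055 can be sketched as follows. The application does not fix the three weights, so the values below (0.5/0.3/0.2) are illustrative assumptions, as are the function and variable names.

```python
def fuse_three(preds, weights=(0.5, 0.3, 0.2)):
    """Weighted fusion of the three models' tag probability distributions:
    each tag's score is the weight-sum of its probabilities across the first,
    second and third predictions; the highest-scoring tag wins.
    Weights are illustrative; the application does not specify their values."""
    labels = preds[0].keys()
    scores = {lab: sum(w * p[lab] for w, p in zip(weights, preds)) for lab in labels}
    return max(scores, key=scores.get)

preds = [
    {"A": 0.7, "B": 0.3},  # first predicted data tag distribution
    {"A": 0.4, "B": 0.6},  # second predicted data tag distribution
    {"A": 0.6, "B": 0.4},  # third predicted data tag distribution
]
print(fuse_three(preds))  # A: 0.35+0.12+0.12=0.59 vs B: 0.15+0.18+0.08=0.41 → A
```

Here tag A wins despite the second model preferring tag B, because the first model carries the largest assumed weight.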
The embodiment of the application also provides an electrocardiograph heart sound diagnosis device based on the artificial neural network, as shown in fig. 3, the electrocardiograph heart sound diagnosis device based on the artificial neural network may include:
the acquiring module 31 is configured to acquire sample data of the training heart recognition model and an actual data tag corresponding to the sample data; the sample data comprises user basic data and an electrocardiogram and a phonocardiogram which are acquired based on the same time frequency;
a conversion module 32, configured to convert the electrocardiogram and the phonocardiogram into an electrocardiographic feature map and a heart sound feature map, and extract an electrocardiographic parameter value and a heart sound parameter value from the electrocardiogram and the phonocardiogram, respectively;
A determining module 33, configured to determine an electrocardiographic feature map, a heart sound feature map, and user basic data corresponding to the sample data as first training data, and determine an electrocardiographic parameter value, a heart sound parameter value, and user basic data corresponding to the sample data as second training data;
a training module 34, configured to train a first neural network model based on the first training data and the actual data label corresponding to the sample data, and train a second neural network model based on the second training data and the actual data label corresponding to the sample data; until both the first neural network model and the second neural network model converge;
the obtaining module 31 is further configured to obtain a first training data feature corresponding to the first training data through the trained first neural network model, and obtain a second training data feature corresponding to the second training data based on the trained second neural network model;
the training module 34 is further configured to train the cardiac recognition model according to the first training data feature and the second training data feature until a loss value of the cardiac recognition model meets a preset condition;
And the prediction module 35 is configured to obtain a predicted data tag corresponding to user data of the user to be identified according to the trained heart identification model.
Optionally, the obtaining module 31 is specifically configured to:
inputting the first training data into the trained first neural network model to obtain first training data characteristics to be selected and a first prediction data label;
inputting the second training data into the trained second neural network model to obtain second training data characteristics to be selected and second prediction data labels;
and determining the first training data characteristic and the second training data characteristic based on the actual data label, the first prediction data label and the second prediction data label which belong to the same sample data.
Optionally, the determining module 33 is further configured to:
acquiring an actual data tag, the first predicted data tag and the second predicted data tag which belong to the same sample data and correspond to the same sample data;
and if the actual data labels, the first predicted data labels and the second predicted data labels which belong to the same sample data are the same, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
Optionally, the determining module 33 is further configured to:
acquiring an actual data tag, the first predicted data tag and the second predicted data tag which belong to the same sample data and correspond to the same sample data;
determining a first tag probability value and a second tag probability value which are the same as the actual data tag in the first predicted data tag and the second predicted data tag;
weighting the first tag probability value and the second tag probability value to obtain a weighted value;
and if the weighted value is larger than a preset value, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
Optionally, the training module 34 is specifically configured to:
inputting the first training data characteristic and the second training data characteristic into the heart recognition model to obtain a prediction data label;
calculating a loss value of the heart recognition model according to the prediction data tag and the corresponding actual data tag;
determining whether the loss value of the heart recognition model meets a preset condition;
if the loss value meets a preset condition, stopping training the heart recognition model; and if the loss value does not meet the preset condition, continuing to train the heart recognition model.
Optionally, the determining module 33 is configured to:
determining an electrocardiographic feature map, a heart sound feature map, an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the sample data as third training data;
training a third neural network model based on third training data corresponding to the sample data and an actual data label until the third neural network model is converged;
and determining a predicted data label corresponding to the user data of the user to be identified through the trained first neural network model, the trained second neural network model and the trained third neural network model.
Optionally, the determining module 33 is configured to:
determining an electrocardio feature map, a heart sound feature map, an electrocardio parameter value, a heart sound parameter value and user basic data of the user to be identified from the user data of the user to be identified;
inputting the electrocardiographic feature map, the heart sound feature map and the user basic data of the user to be identified into a trained first neural network model to obtain a first prediction data label;
inputting the electrocardio parameter value, the heart sound parameter value and the user basic data of the user to be identified into a trained second neural network model to obtain a second prediction data label;
Inputting an electrocardiographic feature map, a heart sound feature map, electrocardiographic parameter values, heart sound parameter values and user basic data of a user to be identified into a trained third neural network model to obtain a third prediction data label;
and determining a predicted data tag corresponding to the user data of the user to be identified according to the first predicted data tag, the second predicted data tag and the third predicted data tag.
Based on the same application concept, the embodiment of the application also provides a computer readable storage medium, and the computer readable storage medium stores a computer program, and the computer program is executed by a processor to execute the steps of the electrocardiograph heart sound diagnosis method based on the artificial neural network.
Specifically, the storage medium may be a general storage medium, such as a removable disk or a hard disk, and when a computer program on the storage medium is executed, the electrocardiographic heart sound diagnosis method based on the artificial neural network may be executed, so as to obtain sample data for training a heart recognition model and an actual data tag corresponding to the sample data; convert the electrocardiogram and the phonocardiogram into an electrocardiographic feature map and a heart sound feature map, and respectively extract an electrocardiographic parameter value and a heart sound parameter value from the electrocardiogram and the phonocardiogram; determine the electrocardiographic feature map, heart sound feature map and user basic data corresponding to the sample data as first training data, and determine the electrocardiographic parameter value, heart sound parameter value and user basic data corresponding to the sample data as second training data; train a first neural network model based on the first training data and actual data tag corresponding to the sample data, and train a second neural network model based on the second training data and actual data tag corresponding to the sample data, until both the first neural network model and the second neural network model converge; acquire first training data features corresponding to the first training data through the trained first neural network model, and acquire second training data features corresponding to the second training data based on the trained second neural network model; train the heart recognition model according to the first training data features and the second training data features until the loss value of the heart recognition model meets a preset condition; and obtain a predicted data tag corresponding to the user data of the user to be identified according to the trained heart recognition model.
According to the embodiment, sample data required by training the heart recognition model is obtained through the first neural network model and the second neural network model, and then the prediction data label is obtained through the trained heart recognition model, namely, the prediction data label can be obtained through the trained heart recognition model without intervention of medical staff, so that the accuracy and the prediction efficiency of data label prediction can be improved through the embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed methods and apparatuses may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that any person familiar with the technical field may, within the technical scope disclosed in the present application, modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments, and are intended to be encompassed within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An electrocardiographic heart sound diagnosis method based on an artificial neural network, which is characterized by comprising the following steps:
training a first neural network model based on first training data and actual data labels corresponding to the sample data, and training a second neural network model based on second training data and actual data labels corresponding to the sample data; until both the first neural network model and the second neural network model converge; the sample data comprises user basic data and an electrocardiogram and a phonocardiogram which are acquired based on the same time frequency;
Acquiring first training data features corresponding to the first training data through the trained first neural network model, and acquiring second training data features corresponding to the second training data based on the trained second neural network model;
training the heart recognition model according to the first training data characteristic and the second training data characteristic until the loss value of the heart recognition model meets a preset condition;
and obtaining a predicted data tag corresponding to the user data of the user to be identified according to the trained heart identification model.
2. The method of claim 1, wherein the obtaining, by the trained first neural network model, the first training data feature corresponding to the first training data, and obtaining, based on the trained second neural network model, the second training data feature corresponding to the second training data, comprises:
inputting the first training data into the trained first neural network model to obtain first training data characteristics to be selected and a first prediction data label;
inputting the second training data into the trained second neural network model to obtain second training data characteristics to be selected and second prediction data labels;
And determining the first training data characteristic and the second training data characteristic based on the actual data label, the first prediction data label and the second prediction data label which belong to the same sample data.
3. The method of claim 2, wherein the determining the first training data feature and the second training data feature based on the actual data tag, the first predicted data tag, and the second predicted data tag corresponding to the same sample data comprises:
acquiring an actual data tag, the first predicted data tag and the second predicted data tag which belong to the same sample data and correspond to the same sample data;
and if the actual data labels, the first predicted data labels and the second predicted data labels which belong to the same sample data are the same, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
4. The method of claim 2, wherein the determining the first training data feature and the second training data feature based on the actual data tag, the first predicted data tag, and the second predicted data tag corresponding to the same sample data comprises:
Acquiring an actual data tag, the first predicted data tag and the second predicted data tag which belong to the same sample data and correspond to the same sample data;
determining a first tag probability value and a second tag probability value which are the same as the actual data tag in the first predicted data tag and the second predicted data tag;
weighting the first tag probability value and the second tag probability value to obtain a weighted value;
and if the weighted value is larger than a preset value, determining the first training data characteristic to be selected as a first training data characteristic, and determining the second training data characteristic to be selected as a second training data characteristic.
5. The method of claim 1, wherein training the cardiac recognition model based on the first training data feature and the second training data feature until a loss value of the cardiac recognition model meets a preset condition comprises:
inputting the first training data characteristic and the second training data characteristic into the heart recognition model to obtain a prediction data label;
calculating a loss value of the heart recognition model according to the prediction data tag and the corresponding actual data tag;
Determining whether the loss value of the heart recognition model meets a preset condition;
if the loss value meets a preset condition, stopping training the heart recognition model; and if the loss value does not meet the preset condition, continuing to train the heart recognition model.
6. The method according to claim 1, wherein the method further comprises:
determining an electrocardiographic feature map, a heart sound feature map, an electrocardiographic parameter value, a heart sound parameter value and user basic data corresponding to the sample data as third training data;
training a third neural network model based on third training data corresponding to the sample data and an actual data label until the third neural network model is converged;
and determining a predicted data label corresponding to the user data of the user to be identified through the trained first neural network model, the trained second neural network model and the trained third neural network model.
7. The method of claim 6, wherein determining the predicted data tag corresponding to the user data of the user to be identified by the trained first, second, and third neural network models comprises:
Determining an electrocardio feature map, a heart sound feature map, an electrocardio parameter value, a heart sound parameter value and user basic data of the user to be identified from the user data of the user to be identified;
inputting the electrocardiographic feature map, the heart sound feature map and the user basic data of the user to be identified into a trained first neural network model to obtain a first prediction data label;
inputting the electrocardio parameter value, the heart sound parameter value and the user basic data of the user to be identified into a trained second neural network model to obtain a second prediction data label;
inputting an electrocardiographic feature map, a heart sound feature map, electrocardiographic parameter values, heart sound parameter values and user basic data of a user to be identified into a trained third neural network model to obtain a third prediction data label;
and determining a predicted data tag corresponding to the user data of the user to be identified according to the first predicted data tag, the second predicted data tag and the third predicted data tag.
8. An electrocardiographic heart sound diagnostic device based on an artificial neural network, the device comprising:
the training module is used for training a first neural network model based on the first training data and the actual data label corresponding to the sample data, and training a second neural network model based on the second training data and the actual data label corresponding to the sample data; until both the first neural network model and the second neural network model converge; the sample data comprises user basic data and an electrocardiogram and a phonocardiogram which are acquired based on the same time frequency;
The acquisition module is further used for acquiring first training data features corresponding to the first training data through the trained first neural network model, and acquiring second training data features corresponding to the second training data based on the trained second neural network model;
the training module is further configured to train the heart recognition model according to the first training data feature and the second training data feature until a loss value of the heart recognition model meets a preset condition;
and the prediction module is used for obtaining a prediction data label corresponding to the user data of the user to be identified according to the trained heart identification model.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the artificial neural network-based electrocardiograph sound diagnostic method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the steps of the electrocardiogram and heart sound diagnosis method based on an artificial neural network according to any one of claims 1 to 7.
CN202311295179.3A 2023-10-08 2023-10-08 Electrocardiogram and heart sound diagnosis method, device and equipment based on artificial neural network Pending CN117322887A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311295179.3A CN117322887A (en) 2023-10-08 2023-10-08 Electrocardiogram and heart sound diagnosis method, device and equipment based on artificial neural network

Publications (1)

Publication Number Publication Date
CN117322887A (en) 2024-01-02

Family

ID=89276798

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311295179.3A Pending CN117322887A (en) 2023-10-08 2023-10-08 Electrocardiogram and heart sound diagnosis method, device and equipment based on artificial neural network

Country Status (1)

Country Link
CN (1) CN117322887A (en)

Similar Documents

Publication Publication Date Title
US20230131876A1 (en) Systems and methods of identity analysis of electrocardiograms
US20170156592A1 (en) Healthcare systems and monitoring method for physiological signals
US20230143594A1 (en) Systems and methods for reduced lead electrocardiogram diagnosis using deep neural networks and rule-based systems
Pecchia et al. Remote health monitoring of heart failure with data mining via CART method on HRV features
CN111990989A Electrocardiosignal identification method based on generative adversarial and convolutional recurrent networks
CN207084814U Equipment and system for detecting abnormal heartbeats
Rabkin et al. A new QT interval correction formulae to adjust for increases in heart rate
JP2013524865A5 (en)
RU2657384C2 (en) Method and system for noninvasive screening physiological parameters and pathology
Lee et al. Deep belief networks ensemble for blood pressure estimation
CN110801218B (en) Electrocardiogram data processing method and device, electronic equipment and computer readable medium
Poddar et al. Automated diagnosis of coronary artery diseased patients by heart rate variability analysis using linear and non-linear methods
US20160135704A1 (en) Matrix-Based Patient Signal Analysis
WO2021071646A1 (en) Systems and methods for electrocardiogram diagnosis using deep neural networks and rule-based systems
Ukil et al. Resource constrained CVD classification using single lead ECG on wearable and implantable devices
CN114190950B Intelligent electrocardiogram analysis method for noisy labels, and electrocardiograph
Pimentel et al. Human mental state monitoring in the wild: Are we better off with deeper neural networks or improved input features?
Lee Development of ventricular fibrillation diagnosis method based on neuro-fuzzy systems for automated external defibrillators
CN116269426A Twelve-lead-ECG-assisted multi-modal fusion screening method for heart disease
CN116269416A (en) Method and device for determining cardiac risk parameters
CN117322887A (en) Electrocardiogram and heart sound diagnosis method, device and equipment based on artificial neural network
CN110786847A (en) Electrocardiogram signal library building method and analysis method
CN115337018A (en) Electrocardiosignal classification method and system based on overall dynamic characteristics
CN112022140B Automatic diagnosis method and system for electrocardiogram diagnosis conclusions
Saxena et al. Extraction of various features of ECG signal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination