CN110558944A - Heart sound processing method and device, electronic equipment and computer readable storage medium - Google Patents

Heart sound processing method and device, electronic equipment and computer readable storage medium Download PDF

Info

Publication number
CN110558944A
CN110558944A (application CN201910847225.3A)
Authority
CN
China
Prior art keywords
heart sound
neural network
sample data
network model
segmented
Prior art date
Legal status
Pending
Application number
CN201910847225.3A
Other languages
Chinese (zh)
Inventor
章毅
张蕾
王建勇
王璟玲
胡俊杰
Current Assignee
Chengdu Intelligent Diega Technology Partnership (limited Partnership)
Original Assignee
Chengdu Intelligent Diega Technology Partnership (limited Partnership)
Priority date
Filing date
Publication date
Application filed by Chengdu Intelligent Diega Technology Partnership (limited Partnership)
Priority to CN201910847225.3A
Publication of CN110558944A


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/02Stethoscopes
    • A61B7/04Electric stethoscopes

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Public Health (AREA)
  • Acoustics & Sound (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physiology (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

The invention relates to a heart sound processing method and apparatus, an electronic device, and a computer-readable storage medium. When a doctor subsequently diagnoses a patient, the patient's abnormal segmented heart sound data can be selected directly, thereby improving diagnosis efficiency.

Description

heart sound processing method and device, electronic equipment and computer readable storage medium
Technical Field
The application belongs to the field of data processing, and particularly relates to a heart sound processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Auscultation of heart sounds is a simple and effective means of examining heart diseases. However, auscultation-based diagnosis cannot be completed without the help of professional doctors, and the clinical skill of different doctors greatly influences the diagnosis result. In addition, although most hospitals have auscultation equipment and doctors, the difficulty of seeing a doctor means that a patient may not be able to consult a specialist in time after registering, so the patient's condition may be delayed or even aggravated.
Because of these problems, more and more patients acquire their own heart sound audio through off-line equipment and upload it to the internet, so that professional doctors can diagnose them through online inquiry. However, since the doctor has no physical contact with the patient and the obtained heart sound audio has no emphasis, online diagnosis based on heart sounds is inefficient.
Disclosure of Invention
In view of this, an object of the present application is to provide a heart sound processing method and apparatus, an electronic device, and a computer-readable storage medium, so as to process a patient's heart sound audio data, facilitate diagnosis, and improve doctors' diagnosis efficiency.
The embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a heart sound processing method, where the method includes: inputting the acquired original heart sound data into a first neural network model created in advance; obtaining segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model; inputting the segmented heart sound data into a pre-created second neural network model; and obtaining abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model. When a doctor subsequently diagnoses a patient, the patient's abnormal segmented heart sound data can be selected directly, thereby improving diagnosis efficiency.
With reference to the embodiment of the first aspect, in a possible implementation manner, after the inputting the acquired original heart sound data into the first neural network model and before the obtaining the segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model, the method further includes: confirming, through the first neural network model, that the original heart sound data is heart sound audio with heart sound characteristics.
With reference to the embodiment of the first aspect, in a possible implementation manner, before the inputting the acquired original heart sound data into the first neural network model created in advance, the method further includes: obtaining first sample data, wherein the first sample data comprises a segmentation label; inputting the first sample data into a neural network for training, so that the neural network learns the common characteristic information of the segmented heart sounds with the same segmented labels, and obtaining the first neural network model.
With reference to the embodiment of the first aspect, in a possible implementation manner, the inputting the first sample data to a neural network for training includes: extracting envelope information features of the first sample data; inputting the envelope information features into the neural network for training.
With reference to the embodiment of the first aspect, in a possible implementation manner, before the inputting the segmented heart sound data into a pre-created second neural network model, the method further includes: acquiring second sample data, wherein the second sample data comprises a segmentation label and an abnormal segment label; inputting the second sample data into a neural network for training, so that the neural network learns the common characteristic information of the segmented heart sounds simultaneously provided with the same segmented label and the same abnormal segment label, and obtaining the second neural network model.
With reference to the embodiment of the first aspect, in a possible implementation manner, the inputting the second sample data to a neural network for training includes: extracting Mel-frequency cepstrum coefficients of the second sample data; and inputting the Mel-frequency cepstrum coefficients into the neural network for training.
With reference to the embodiment of the first aspect, in a possible implementation manner, before the acquiring the first sample data or before the acquiring the second sample data, the method further includes: and removing noise in the sample data according to each piece of acquired sample data, and adjusting the sample data to be equal in length to obtain the first sample data or the second sample data.
In a second aspect, an embodiment of the present application provides a heart sound processing apparatus, including: the first input module is used for inputting the acquired original heart sound data into a first neural network model which is created in advance; an obtaining module, configured to obtain segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model; the second input module is used for inputting the segmented heart sound data into a pre-created second neural network model; the obtaining module is further configured to obtain abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes a confirming module, configured to confirm, by the first neural network model, that the original heart sound data is a heart sound audio with heart sound characteristics.
With reference to the embodiment of the second aspect, in a possible implementation manner, the apparatus further includes a training module, and the obtaining module is further configured to obtain first sample data, where the first sample data includes a segment label; the training module is used for inputting the first sample data into a neural network for training, so that the neural network learns the common characteristic information of the segmented heart sounds with the same segmented labels, and the first neural network model is obtained.
With reference to the embodiment of the second aspect, in a possible implementation manner, the training module is configured to extract an envelope information feature of the first sample data; inputting the envelope information features into the neural network for training.
With reference to the second aspect, in a possible implementation manner, the obtaining module is further configured to obtain second sample data, where the second sample data includes a segment tag and an abnormal segment tag; the training module is further configured to input the second sample data to a neural network for training, so that the neural network learns common feature information of segmented heart sounds having the same segmentation label and the same abnormal segment label at the same time, and the second neural network model is obtained.
With reference to the second aspect, in a possible implementation manner, the training module is further configured to extract mel-frequency cepstrum coefficients of the second sample data; and inputting the Mel frequency cepstrum coefficient into the neural network for training.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes a preprocessing module, configured to remove noise in each piece of acquired sample data, and adjust the sample data to be equal in length to obtain the first sample data or the second sample data.
In a third aspect, an embodiment of the present application further provides an electronic device, including: a memory and a processor, the memory and the processor connected; the memory is used for storing programs; the processor calls a program stored in the memory to perform the method of the first aspect embodiment and/or any possible implementation manner of the first aspect embodiment.
In a fourth aspect, the present application further provides a non-transitory computer-readable storage medium (hereinafter, referred to as a computer-readable storage medium), on which a computer program is stored, where the computer program is executed by a computer to perform the method in the foregoing first aspect and/or any possible implementation manner of the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to more clearly illustrate the embodiments of the present application and the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort. The foregoing and other objects, features and advantages of the application will be apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not necessarily drawn to scale; emphasis is placed instead upon illustrating the subject matter of the present application.
Fig. 1 shows one of flowcharts of a heart sound processing method according to an embodiment of the present application.
Fig. 2 shows a second flowchart of a heart sound processing method according to an embodiment of the present application.
Fig. 3 is a block diagram illustrating a structure of a heart sound processing apparatus according to an embodiment of the present application.
Fig. 4 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, relational terms such as "first" and "second" may be used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may include other elements not expressly listed or inherent to it. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
Further, the term "and/or" in the present application is only one kind of association relationship describing the associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone.
In addition, the problem that heart sound audio obtained through online inquiry in the prior art has no emphasis, making doctors' diagnosis inefficient, was discovered by the applicant only after careful practice and study. Therefore, both the discovery of this defect and the solution proposed below for it should be regarded as the applicant's contribution to the present application.
In order to solve the above problems, embodiments of the present application provide a heart sound processing method and apparatus, an electronic device, and a computer-readable storage medium, which facilitate diagnosis and improve doctors' diagnosis efficiency. The technology can be realized with software, hardware, or a combination of the two. The following describes embodiments of the present application in detail.
The following will describe a heart sound processing method provided by the present application.
Referring to fig. 1, an embodiment of the present application provides a heart sound processing method. The method may be applied to an electronic device, to an application (APP) installed on the electronic device, or to an applet embedded in a platform installed on the electronic device, such as a WeChat official account.
The steps involved will be described below with reference to fig. 1.
Step S110: and inputting the acquired original heart sound data into a first neural network model which is created in advance.
Optionally, an electronic stethoscope may be disposed at each data collection point of a hospital or health care facility, or at a patient's home. The specific model of the electronic stethoscope is not limited, as long as it can collect (receive) audio and export it; for example, in one embodiment, the electronic stethoscope may be a radish stethoscope.
When a patient needs online diagnosis, the electronic stethoscope is first placed on the patient's precordial region (the region of the chest where the heart is located, roughly the size of the patient's palm, generally at the left edge of the sternum between the second and fifth ribs). After the electronic stethoscope starts working, it acquires the patient's raw heart sound data, which the patient can export and then transmit as input to the first neural network model.
Of course, in an alternative embodiment, the electronic stethoscope may also have a data transmission function, which directly transmits the raw heart sound data as an input to the first neural network model in response to a transmission instruction of the patient after acquiring the raw heart sound data of the patient.
The first neural network model is trained in advance and stored in the electronic equipment.
The first neural network model will be described below.
Aiming at the first neural network model, the method mainly comprises four parts of sample data preparation, model design, model training and model testing.
For the sample data preparation stage, the electronic device first acquires first sample data. Each piece of first sample data is actual human heart sound data, with segmentation labels set after heart sound segmentation by a professional physician. It is noted that a complete cardiac cycle is divided chronologically into four segmented heart sounds: the first heart sound s1, systole, the second heart sound s2, and diastole. After segmenting the heart sound data acquired by the electronic stethoscope according to the heart sound segmentation principle, the physician sets corresponding segmentation labels for the segmented heart sounds. The labels serve as identifiers characterizing the category to which each segmented heart sound belongs; for example, with segmentation labels A, B, C and D, A characterizes the first heart sound s1, B the systolic phase, C the second heart sound s2, and D the diastolic phase. In this embodiment, the physician may place label A at the beginning of the first heart sound s1, label B at the beginning of systole, label C at the beginning of the second heart sound s2, and label D at the beginning of diastole. Of course, these segmentation labels are merely examples and do not limit their concrete representation.
Of course, as an optional implementation, for each piece of acquired first sample data, the electronic device may further perform preprocessing: removing noise (such as environmental noise and the friction noise of clothing) from the first sample data and adjusting it so that the effective length of every piece of first sample data is equal. The first sample data can be expanded with a Fourier transform, converting the corresponding heart sound data into the frequency domain, so that the frequencies corresponding to noise, as well as redundant frequencies, are removed there, thereby achieving both denoising and length equalization. For example, balancing computation and accuracy, the duration of each piece of first sample data may be adjusted to nine seconds.
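The frequency-domain preprocessing described above can be sketched in Python. The 1 kHz sampling rate and the 200 Hz noise cutoff below are illustrative assumptions (the application fixes neither); only the frequency-domain filtering and the nine-second length equalization follow the text.

```python
import numpy as np

def preprocess(heart_sound, fs=1000, target_seconds=9, cutoff_hz=200):
    """Pad or truncate to a fixed nine-second length, then zero out
    frequency components above an assumed noise cutoff."""
    target_len = fs * target_seconds
    # Equalize length: pad with zeros or truncate.
    x = np.zeros(target_len)
    n = min(len(heart_sound), target_len)
    x[:n] = heart_sound[:n]
    # Remove assumed-noise frequencies in the frequency domain.
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(target_len, d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=target_len)

clean = preprocess(np.random.randn(5000))
print(len(clean))  # 9000
```

Applied to every sample, this yields equal-length, band-limited inputs for the training set described next.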
After the first sample data is obtained, it may be divided into a training set and a test set according to a preset proportion, for use in subsequent model training and testing. The ratio of samples in the training set to the test set may be 4:1.
For the model design stage, the basic model adopted by the first neural network model is a deep neural network comprising a feature extraction module, a feature binding module, and a classification module. The feature extraction module comprises an input layer and a plurality of convolution structures, each including multiple convolution layers and pooling layers of different scales. The feature binding module comprises a multiscale pooling layer that binds the features of multiple audio segments in the first sample data along the audio dimension. The classification module comprises a multi-class classifier (softmax).
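As a toy illustration only (the application gives no concrete layer sizes or weights), the three modules can be seen in a minimal NumPy forward pass: convolution plus pooling for feature extraction, a further pooling as a crude stand-in for multiscale feature binding, and a softmax over the four segment classes. Every size and weight below is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """'Valid' 1-D cross-correlation: one convolution layer, no padding."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

def max_pool(x, size):
    """Non-overlapping max pooling."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy forward pass with random weights over one audio fragment.
x = rng.standard_normal(64)
features = max_pool(np.maximum(conv1d(x, rng.standard_normal(5)), 0), 2)
bound = max_pool(features, len(features) // 4)  # crude binding to a few values
logits = rng.standard_normal((4, len(bound))) @ bound
probs = softmax(logits)                         # one probability per class A/B/C/D
print(probs.shape, round(float(probs.sum()), 6))  # (4,) 1.0
```

A real implementation would stack several such convolution structures and learn the weights by backpropagation, as the training stage below describes.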
The model training stage is the process in which the deep neural network learns from the input data. In the present application, after the first sample data is input, the goal of training is to make the first neural network model learn the common feature information of segmented heart sounds having the same segmentation labels while minimizing the loss function.
Optionally, when the first neural network model is trained, smooth envelope information features may be extracted from the first sample data through wavelet denoising and the Hilbert transform, and the envelope features are then input into the deep neural network for training, so that the deep neural network completes heart sound segmentation by learning the segmentation labels.
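The envelope step can be illustrated with the standard FFT-based Hilbert transform (equivalent to what scipy.signal.hilbert computes); the wavelet-denoising step that the application applies beforehand is omitted here for brevity.

```python
import numpy as np

def hilbert_envelope(x):
    """Amplitude envelope via the analytic signal, computed with the
    usual FFT formulation of the Hilbert transform."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(X * h))

# Sanity check: a pure tone has a flat envelope of 1.
n = np.arange(1024)
env = hilbert_envelope(np.cos(2 * np.pi * 32 * n / 1024))
print(round(float(env.min()), 3), round(float(env.max()), 3))  # 1.0 1.0
```

For heart sounds, the envelope rises at s1 and s2 and falls during systole and diastole, which is what makes it a useful segmentation feature.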
Optionally, when the first sample data is input into the first neural network model for training, the first sample data may be augmented to increase the number of the first sample data.
It should be noted that the process of obtaining the envelope information characteristic of the first sample data and the amplification of the sample are all the prior art, and are not described herein again.
For the model testing stage, the segmentation results predicted by the deep neural network are compared with the actual segmentation labels of the samples in the test set, and the number of correctly predicted samples is counted. When the test results achieve the expected effect, training of the first neural network model is complete.
Step S120: and obtaining segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model.
After the first neural network model is obtained through training, the original heart sound data is input into it and automatically segmented into segmented heart sound data.
In an alternative embodiment, referring to fig. 2, before performing step S120, the method further includes:
step S101: and judging whether the original heart sound data is a heart sound audio with heart sound characteristics or not through the first neural network model.
Step S102: if yes, step S120 is executed.
Step S103: if not, the patient is prompted to re-upload the original heart sound data.
In this embodiment, correspondingly, the first neural network model also learns characteristics corresponding to human heart sound audio, so that whether the acquired original heart sound data is the heart sound audio with the heart sound characteristics can be determined, and the patient is prevented from uploading wrong original heart sound data.
Step S130: inputting the segmented heart sound data into a pre-created second neural network model.
Of course, similar to the first neural network model, the second neural network model is also pre-trained and stored in the electronic device.
The second neural network model is also similar to the first neural network model and mainly comprises four parts: sample data preparation, model design, model training and model testing.
For the sample data preparation stage, the electronic device first acquires second sample data. Like the first sample data, each piece of second sample data is actual human heart sound data segmented and labeled by a professional physician; in addition, if a segmented heart sound is abnormal, the physician also sets an abnormal-segment label on it. It is noted that a complete cardiac cycle is divided chronologically into four parts: the first heart sound s1, systole, the second heart sound s2, and diastole. After segmenting the heart sound data acquired by the electronic stethoscope according to the heart sound segmentation principle, the physician sets a corresponding segmentation label and, where applicable, an abnormal-segment label for each segmented heart sound, for example: label A at the start of the first heart sound s1; label B at the start of systole; label C at the start of the second heart sound s2, marked abnormal; label D at the start of diastole, marked abnormal. Of course, the abnormal-segment label may also take other forms, such as characterizing an abnormality with the tag "S".
Of course, as an optional implementation, for each piece of acquired second sample data, the electronic device may further perform preprocessing: removing noise (such as environmental noise and the friction noise of clothing) from the second sample data and adjusting it so that the effective length of every piece of second sample data is equal. The second sample data may be expanded with a Fourier transform, converting the corresponding heart sound data into the frequency domain, so that the frequencies corresponding to noise, as well as unnecessary frequencies, are removed there, thereby achieving both denoising and length equalization. For example, balancing computation and accuracy, the duration of each piece of second sample data may be adjusted to nine seconds.
After the second sample data is obtained, it may be divided into a training set and a test set according to a preset proportion, for use in subsequent model training and testing. The ratio of samples in the training set to the test set may be 4:1.
For the model design stage, the structure of the second neural network model may be consistent with the structure of the first neural network model, and details are not repeated here.
In the model training phase, after the second sample data is input, the training aims to enable the second neural network model to learn the common characteristic information of the segmented heart sounds with the same segmented label and the same abnormal segment label, and simultaneously minimize the loss function.
Optionally, when the second neural network model is trained, a mel-frequency cepstrum coefficient of the second sample data may be obtained by performing a series of operations such as framing, windowing, fast fourier transform, mel-filter processing, and logarithm operation on the second sample data, and then the mel-frequency cepstrum coefficient is input to the neural network for training, so that the deep neural network completes detection of abnormal segmented heart sounds through learning the segmented labels and the abnormal segmented labels.
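The feature chain named above (framing, windowing, fast Fourier transform, Mel filtering, logarithm, plus the DCT that standard MFCC pipelines apply after the logarithm) can be sketched with NumPy. The frame size, hop, filter count, and coefficient count below are illustrative assumptions, not values from the application.

```python
import numpy as np

def mfcc(signal, fs=1000, frame_len=256, hop=128, n_mels=20, n_coeffs=12):
    """Minimal MFCC sketch: framing, Hamming windowing, FFT power
    spectrum, triangular Mel filterbank, logarithm, DCT-II."""
    # Framing and windowing
    window = np.hamming(frame_len)
    frames = np.array([signal[s:s + frame_len] * window
                       for s in range(0, len(signal) - frame_len + 1, hop)])
    # Power spectrum via the fast Fourier transform
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    # Triangular Mel filterbank between 0 Hz and fs/2
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    n_bins = frame_len // 2 + 1
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_mels + 2)
    bins = np.floor(n_bins * mel_to_hz(mel_pts) * 2.0 / fs).astype(int)
    fbank = np.zeros((n_mels, n_bins))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log Mel energies, then DCT-II to get cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    i = np.arange(n_coeffs)[:, None]
    j = np.arange(n_mels)[None, :]
    dct = np.cos(np.pi * i * (2 * j + 1) / (2 * n_mels))
    return log_mel @ dct.T

coeffs = mfcc(np.random.randn(2048))
print(coeffs.shape)  # (15, 12): one 12-coefficient vector per frame
```

Each frame's coefficient vector would then be fed to the second neural network in place of the raw waveform.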
Optionally, when the second sample data is input into the second neural network model for training, the second sample data may be augmented to expand the number of the second sample data.
It should be noted that the process of obtaining the mel-frequency cepstrum coefficient of the second sample data and the amplification of the sample are prior art, and are not described herein again.
For the model testing stage, the abnormal segmented heart sounds predicted by the deep neural network are compared with the actual abnormal segmented heart sounds of the samples in the test set, and the number of correctly predicted samples is counted. When the test results achieve the expected effect, training of the second neural network model is complete.
Because the different segmented heart sound data obtained from the complete original heart sound data emphasize different content (for example, the first heart sound marks the ventricles entering systole and reflects the strength of ventricular contraction: it is typically weak in patients with clinical heart failure and dull in patients with rheumatic myocarditis; the second heart sound marks the ventricles entering diastole and reflects aortic and pulmonary artery blood pressure), the feature expression of each segment differs. Therefore, after effective heart sound extraction, audio normalization, and heart sound segmentation are completed by the first neural network model, the resulting segmented heart sound data is input into the second model, facilitating the subsequent detection of abnormal segmented heart sounds.
Step S140: Obtaining abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model.
After the second neural network model is obtained through training, the segmented heart sound data is input into it and classified automatically, and the segments that may be abnormal are output as abnormal segmented heart sound data. When a doctor diagnoses a patient, the patient's abnormal segmented heart sound data can be selected directly, thereby improving diagnosis efficiency.
In the heart sound processing method provided in the first embodiment of the present application, the acquired original heart sound data is input into the first neural network model, which has a heart sound segmentation function, to obtain segmented heart sound data; the segmented heart sound data is then input into the second neural network model, which outputs abnormal segmented heart sounds, to obtain abnormal segmented heart sound data. When a doctor subsequently diagnoses a patient, the patient's abnormal segmented heart sound data can be reviewed directly, improving diagnostic efficiency.
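The two-stage data flow of the method (segment first, then flag abnormal segments) can be sketched as below. Both model arguments are hypothetical callables standing in for the trained first and second neural networks; the stub implementations exist only to illustrate the data flow, not the models described in the patent.

```python
def detect_abnormal_segments(raw_heart_sound, segment_model, classify_model):
    """Two-stage pipeline sketch: a first model segments the raw recording,
    a second model flags abnormal segments (1 = abnormal, 0 = normal)."""
    segments = segment_model(raw_heart_sound)           # e.g. S1/systole/S2/diastole pieces
    labels = [classify_model(seg) for seg in segments]  # per-segment classification
    return [seg for seg, lab in zip(segments, labels) if lab == 1]

# Stub models for illustration only: fixed-length slicing and a
# toy amplitude-threshold "classifier".
segment_model = lambda audio: [audio[i:i + 4] for i in range(0, len(audio), 4)]
classify_model = lambda seg: int(max(seg) > 0.9)

abnormal = detect_abnormal_segments(
    [0.1, 0.2, 0.95, 0.3, 0.1, 0.2, 0.1, 0.4],
    segment_model, classify_model)
```

Only the segments flagged abnormal are returned, which is what lets a doctor jump directly to them during diagnosis.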
As shown in fig. 3, an embodiment of the present application further provides a heart sound processing apparatus 400, where the heart sound processing apparatus 400 may include: a first input module 410, an acquisition module 420, and a second input module 430.
A first input module 410, configured to input the acquired original heart sound data into a first neural network model created in advance;
An obtaining module 420, configured to obtain segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model;
A second input module 430, configured to input the segmented heart sound data into a pre-created second neural network model;
The obtaining module 420 is further configured to obtain abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model.
In a possible implementation manner, the apparatus further includes a determining module, configured to confirm, by the first neural network model, that the original heart sound data is a heart sound audio with heart sound characteristics.
In a possible implementation, the apparatus further includes a training module, and the obtaining module 420 is further configured to obtain first sample data, where the first sample data includes a segmentation label; the training module is configured to input the first sample data into a neural network for training, so that the neural network learns the common feature information of segmented heart sounds having the same segmentation label, thereby obtaining the first neural network model.
In a possible implementation manner, the training module is further configured to extract envelope information features of the first sample data; inputting the envelope information features into the neural network for training.
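One common way to realize the "envelope information features" named above is the amplitude envelope of the analytic signal obtained via the Hilbert transform; the patent does not fix the exact method, so this is a sketch under that assumption.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_features(signal):
    """Amplitude envelope of a heart sound signal via the analytic
    signal (Hilbert transform); one possible realization of the
    envelope information features fed to the first network."""
    analytic = hilbert(signal)
    return np.abs(analytic)

# A pure tone with an integer number of cycles has an
# (approximately) constant envelope of 1.0.
t = np.linspace(0, 1, 1000, endpoint=False)
env = envelope_features(np.sin(2 * np.pi * 50 * t))
```

The envelope discards fine waveform detail while preserving the timing and relative intensity of S1/S2, which is why envelope-type features are popular inputs for heart sound segmentation.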
In a possible implementation, the obtaining module 420 is further configured to obtain second sample data, where the second sample data includes a segmentation label and an abnormal segment label; the training module is further configured to input the second sample data into a neural network for training, so that the neural network learns the common feature information of segmented heart sounds having both the same segmentation label and the same abnormal segment label, thereby obtaining the second neural network model.
In a possible implementation, the training module is further configured to extract Mel-frequency cepstral coefficients (MFCCs) of the second sample data and input the MFCCs into the neural network for training.
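A minimal single-frame MFCC computation is sketched below so the feature extraction step is concrete. This is an illustrative stand-in, not the patent's implementation; real systems typically use a library such as librosa, and the frame size, filter count, and FFT length here are assumptions.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(frame, sample_rate, n_filters=26, n_coeffs=13):
    """Minimal MFCC sketch for one audio frame: power spectrum ->
    Mel filterbank -> log energies -> DCT decorrelation."""
    n_fft = 512
    spectrum = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
    # Triangular Mel filterbank between 0 Hz and the Nyquist frequency.
    mel_max = 2595 * np.log10(1 + (sample_rate / 2) / 700)
    mel_points = np.linspace(0, mel_max, n_filters + 2)
    hz_points = 700 * (10 ** (mel_points / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_points / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    filter_energies = np.maximum(fbank @ spectrum, 1e-10)  # avoid log(0)
    return dct(np.log(filter_energies), norm='ortho')[:n_coeffs]

# 25 ms frame of a synthetic 100 Hz tone at a 2 kHz sampling rate.
frame = np.sin(2 * np.pi * 100 * np.arange(50) / 2000)
coeffs = mfcc(frame, 2000)
```

MFCCs summarize the spectral envelope on a perceptual frequency scale, which suits the murmur-like spectral differences the second network must learn.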
In a possible implementation manner, the apparatus further includes a preprocessing module, configured to remove noise in each piece of acquired sample data, and adjust the sample data to be equal in length to obtain the first sample data or the second sample data.
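The preprocessing module's two operations (noise removal and length equalization) can be sketched as follows. The band-pass filter is only one possible noise-removal stand-in (the patent does not specify the method), and the 25–400 Hz band is an assumption based on where heart sound energy is typically concentrated.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(samples, sample_rate, target_len):
    """Sketch of the preprocessing step: band-pass filtering as a simple
    denoising stand-in, then padding/truncating every recording so all
    sample data are of equal length."""
    # Assumed band: heart sound energy is concentrated roughly in 25-400 Hz.
    b, a = butter(4, [25, 400], btype='band', fs=sample_rate)
    out = []
    for x in samples:
        x = filtfilt(b, a, np.asarray(x, dtype=float))   # zero-phase filtering
        if len(x) >= target_len:
            x = x[:target_len]                           # truncate long recordings
        else:
            x = np.pad(x, (0, target_len - len(x)))      # zero-pad short ones
        out.append(x)
    return np.stack(out)

batch = preprocess([np.random.randn(1800), np.random.randn(2500)],
                   sample_rate=2000, target_len=2000)
```

Equal-length output is what allows the recordings to be stacked into a single batch for neural network training.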
The heart sound processing apparatus 400 provided in the embodiment of the present application has the same implementation principle and the same technical effect as those of the foregoing method embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the apparatus embodiments that are not mentioned.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a computer, the steps included in the above-mentioned heart sound processing method are executed.
In addition, referring to fig. 4, an embodiment of the present application provides an electronic device 100 for implementing the heart sound processing method and apparatus of the embodiments of the present application.
Alternatively, the electronic device 100 may be, but is not limited to, a personal computer (PC), a smartphone, a tablet computer, a mobile Internet device (MID), a personal digital assistant, a server, and the like. The server may be, but is not limited to, a web server, a database server, a cloud server, and the like.
Among them, the electronic device 100 may include: a processor 110, a memory 120.
It should be noted that the components and structure of the electronic device 100 shown in fig. 4 are exemplary only, not limiting, and the electronic device 100 may have other components and structures as desired. For example, in some cases, the electronic device 100 may also include a speaker, which may be connected to the processor 110 and used to play the abnormal segmented heart sound data.
The processor 110, memory 120, and other components that may be present in the electronic device 100 are electrically connected to each other, directly or indirectly, to enable the transfer or interaction of data. For example, the processor 110, the memory 120, and other components that may be present may be electrically coupled to each other via one or more communication buses or signal lines.
The memory 120 is configured to store a program, such as a program corresponding to the heart sound processing method or the heart sound processing apparatus 400 described above. Optionally, when the heart sound processing apparatus 400 is stored in the memory 120, it includes at least one software functional module that can be stored in the memory 120 in the form of software or firmware.
Alternatively, the software function module included in the heart sound processing apparatus 400 may also be solidified in an Operating System (OS) of the electronic device 100.
The processor 110 is adapted to execute executable modules stored in the memory 120, such as software functional modules or computer programs comprised by the heart sound processing apparatus 400. When the processor 110 receives the execution instruction, it may execute the computer program, for example, to perform: inputting the acquired original heart sound data into a first neural network model which is created in advance; obtaining segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model; inputting the segmented heart sound data into a pre-created second neural network model; and obtaining abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model.
Of course, the method disclosed in any embodiment of the present application may be applied to the processor 110 or implemented by the processor 110.
In summary, in the heart sound processing method, apparatus, electronic device, and computer-readable storage medium provided in the embodiments of the present application, segmented heart sound data is obtained by inputting the acquired original heart sound data into the first neural network model, which has a heart sound segmentation function; the segmented heart sound data is then input into the second neural network model, which has an abnormal segmented heart sound output function, to obtain abnormal segmented heart sound data. When a doctor subsequently diagnoses a patient, the patient's abnormal segmented heart sound data can be reviewed directly, improving diagnostic efficiency.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the portions thereof that substantially contribute over the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A method of processing heart sounds, the method comprising:
Inputting the acquired original heart sound data into a first neural network model which is created in advance;
Obtaining segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model;
Inputting the segmented heart sound data into a pre-created second neural network model;
Obtaining abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model.
2. The method of claim 1, wherein after the inputting the acquired original heart sound data into the pre-created first neural network model and before the obtaining segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model, the method further comprises:
Confirming, through the first neural network model, that the original heart sound data is heart sound audio with heart sound characteristics.
3. The method of claim 1, wherein prior to said inputting the acquired raw heart sound data into the pre-created first neural network model, the method further comprises:
Obtaining first sample data, wherein the first sample data comprises a segmentation label;
Inputting the first sample data into a neural network for training, so that the neural network learns the common characteristic information of the segmented heart sounds with the same segmentation labels, to obtain the first neural network model.
4. The method of claim 3, wherein the inputting the first sample data into a neural network for training comprises:
Extracting envelope information characteristics of the first sample data;
Inputting the envelope information features into the neural network for training.
5. The method of claim 1, wherein prior to said inputting the segmented heart sound data into a pre-created second neural network model, the method further comprises:
Acquiring second sample data, wherein the second sample data comprises a segmentation label and an abnormal segment label;
Inputting the second sample data into a neural network for training, so that the neural network learns the common characteristic information of the segmented heart sounds having both the same segmentation label and the same abnormal segment label, to obtain the second neural network model.
6. The method of claim 5, wherein said inputting said second sample data into a neural network for training comprises:
Extracting Mel-frequency cepstral coefficients of the second sample data;
Inputting the Mel-frequency cepstral coefficients into the neural network for training.
7. the method according to claim 3 or 5, wherein prior to said obtaining first sample data or prior to said obtaining second sample data, the method further comprises:
For each piece of acquired sample data, removing noise from the sample data and adjusting the sample data to be of equal length, to obtain the first sample data or the second sample data.
8. A heart sound processing apparatus, characterized in that the apparatus comprises:
The first input module is used for inputting the acquired original heart sound data into a first neural network model which is created in advance;
An obtaining module, configured to obtain segmented heart sound data obtained by segmenting the original heart sound data by the first neural network model;
The second input module is used for inputting the segmented heart sound data into a pre-created second neural network model;
The obtaining module is further configured to obtain abnormal segmented heart sound data obtained by classifying the segmented heart sound data by the second neural network model.
9. An electronic device, comprising: a memory and a processor, the memory and the processor connected;
The memory is used for storing programs;
The processor calls a program stored in the memory to perform the method of any of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by a computer, performs the method of any one of claims 1-7.
CN201910847225.3A 2019-09-09 2019-09-09 Heart sound processing method and device, electronic equipment and computer readable storage medium Pending CN110558944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910847225.3A CN110558944A (en) 2019-09-09 2019-09-09 Heart sound processing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910847225.3A CN110558944A (en) 2019-09-09 2019-09-09 Heart sound processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN110558944A true CN110558944A (en) 2019-12-13

Family

ID=68778445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910847225.3A Pending CN110558944A (en) 2019-09-09 2019-09-09 Heart sound processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110558944A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111759345A (en) * 2020-08-10 2020-10-13 北京中科信利技术有限公司 Heart valve abnormality analysis method, system and device based on convolutional neural network
WO2022161023A1 (en) * 2021-01-26 2022-08-04 上海微创数微医疗科技有限公司 Heart sound signal denoising method and apparatus, and storage medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1850007A (en) * 2006-05-16 2006-10-25 清华大学深圳研究生院 Heart disease automatic classification system based on heart sound analysis and heart sound segmentation method
CN101930734A (en) * 2010-07-29 2010-12-29 重庆大学 Classification and identification method and device for cardiechema signals
CN102934989A (en) * 2012-12-05 2013-02-20 隋聪 Heart sound recognition device and method based on neural network
CN107529645A (en) * 2017-06-29 2018-01-02 重庆邮电大学 A kind of heart sound intelligent diagnosis system and method based on deep learning
CN107811649A (en) * 2017-12-13 2018-03-20 四川大学 A kind of more sorting techniques of heart sound based on depth convolutional neural networks
CN108143407A (en) * 2017-12-25 2018-06-12 四川大学 A kind of heart sound segmentation method for automatically extracting heart sound envelope characteristic
CN108323158A (en) * 2018-01-18 2018-07-24 深圳前海达闼云端智能科技有限公司 Heart sound identification method and cloud system
CN109074822A (en) * 2017-10-24 2018-12-21 深圳和而泰智能控制股份有限公司 Specific sound recognition methods, equipment and storage medium
CN109919210A (en) * 2019-02-26 2019-06-21 华南理工大学 A kind of heart sound semisupervised classification method based on depth convolutional network
US20190192110A1 (en) * 2016-09-07 2019-06-27 Koninklijke Philips N.V. Classifier ensemble for detection of abnormal heart sounds
CN109961017A (en) * 2019-02-26 2019-07-02 杭州电子科技大学 A kind of cardiechema signals classification method based on convolution loop neural network
CN110123367A (en) * 2019-04-04 2019-08-16 平安科技(深圳)有限公司 Computer equipment, recognition of heart sound device, method, model training apparatus and storage medium

Similar Documents

Publication Publication Date Title
Thompson et al. Artificial intelligence-assisted auscultation of heart murmurs: validation by virtual clinical trial
Sengupta et al. Prediction of abnormal myocardial relaxation from signal processed surface ECG
Alaskar et al. The implementation of pretrained AlexNet on PCG classification
Patidar et al. Automatic diagnosis of septal defects based on tunable-Q wavelet transform of cardiac sound signals
Wang et al. Phonocardiographic signal analysis method using a modified hidden Markov model
US20180260706A1 (en) Systems and methods of identity analysis of electrocardiograms
Alqudah et al. Classification of heart sound short records using bispectrum analysis approach images and deep learning
Meziani et al. Analysis of phonocardiogram signals using wavelet transform
Mei et al. Classification of heart sounds based on quality assessment and wavelet scattering transform
Emmanuel A review of signal processing techniques for heart sound analysis in clinical diagnosis
Javed et al. A signal processing module for the analysis of heart sounds and heart murmurs
Gavrovska et al. Automatic heart sound detection in pediatric patients without electrocardiogram reference via pseudo-affine Wigner–Ville distribution and Haar wavelet lifting
Andrisevic et al. Detection of heart murmurs using wavelet analysis and artificial neural networks
CN110558944A (en) Heart sound processing method and device, electronic equipment and computer readable storage medium
CN111370120B (en) Heart diastole dysfunction detection method based on heart sound signals
Ari et al. A robust heart sound segmentation algorithm for commonly occurring heart valve diseases
Sujadevi et al. A hybrid method for fundamental heart sound segmentation using group-sparsity denoising and variational mode decomposition
Lee et al. Combining bootstrap aggregation with support vector regression for small blood pressure measurement
Desai et al. Application of ensemble classifiers in accurate diagnosis of myocardial ischemia conditions
Atbi et al. Separation of heart sounds and heart murmurs by Hilbert transform envelogram
Hazeri et al. Classification of normal/abnormal PCG recordings using a time–frequency approach
Ajitkumar Singh et al. An improved unsegmented phonocardiogram classification using nonlinear time scattering features
Bourouhou et al. Heart sounds classification for a medical diagnostic assistance
Wang et al. PCTMF-Net: heart sound classification with parallel CNNs-transformer and second-order spectral analysis
Mubarak et al. Quality assessment and classification of heart sounds using PCG signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191213
