WO2019140600A1 - 心音的识别方法及云系统 - Google Patents

心音的识别方法及云系统 Download PDF

Info

Publication number
WO2019140600A1
Authority
WO
WIPO (PCT)
Prior art keywords
heart sound
heart
data
sound data
type
Prior art date
Application number
PCT/CN2018/073237
Other languages
English (en)
French (fr)
Inventor
南一冰
廉士国
Original Assignee
深圳前海达闼云端智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳前海达闼云端智能科技有限公司 filed Critical 深圳前海达闼云端智能科技有限公司
Priority to CN201880000118.0A priority Critical patent/CN108323158A/zh
Priority to PCT/CN2018/073237 priority patent/WO2019140600A1/zh
Publication of WO2019140600A1 publication Critical patent/WO2019140600A1/zh

Links

Images

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/02Stethoscopes
    • A61B7/04Electric stethoscopes

Definitions

  • The present application relates to the field of smart medical technology, and in particular to a heart sound recognition method and a cloud system.
  • The stethoscope is one of the clinical diagnostic tools doctors use most often, which makes it a natural first candidate for development into a smart medical device.
  • Existing intelligent stethoscopes, however, offer little intelligence of their own: they can only record heart and lung sounds, after which a doctor diagnoses or monitors the user's heart and lung sounds through a terminal device, or the sounds are uploaded to the cloud for the doctor to diagnose the condition and monitor health.
  • That is, they provide only basic data collection; the parts involving diagnosis and health monitoring still require the doctor to further analyze the collected data.
  • The embodiments of the present application provide a heart sound recognition method and cloud system, to solve the technical problem that existing intelligent stethoscopes offer little intelligence and only basic data collection.
  • In one aspect, an embodiment of the present application provides a heart sound recognition method, including:
  • identifying collected sound data to obtain a recognition result; and
  • if the recognition result is heart sound data, identifying the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data.
  • In another aspect, an embodiment of the present application provides a heart sound recognition cloud system, including:
  • a first identification network configured to identify collected sound data to obtain a recognition result; and
  • a second identification network configured to, if the recognition result is heart sound data, identify the heart sound data based on heart sound frequency and obtain the heart sound type of the heart sound data.
  • In another aspect, an embodiment of the present application provides an electronic device, including:
  • a transceiver, a memory, and one or more processors; and
  • one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing each step of the above method.
  • In another aspect, an embodiment of the present application provides a computer program product for use with an electronic device, the computer program product including a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing each step of the above method.
  • In these embodiments, the collected sound data is identified to determine whether it is heart sound data. If it is heart sound data, the heart sound data is identified based on heart sound frequency, and the heart sound type obtained is fed back to the terminal device. This informs the user of his or her current state of health and makes it easy for the user to decide, according to that state, whether a doctor's remote help is needed, thereby making diagnosis of the user's condition and health monitoring intelligent while achieving good recognition accuracy.
  • FIG. 1 is a schematic diagram of the heart sound recognition method according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic flowchart of the heart sound recognition method according to Embodiment 1 of the present application.
  • FIG. 3 is a structural diagram of the heart sound recognition cloud system according to Embodiment 2 of the present application.
  • FIG. 4 is a schematic structural diagram of the electronic device according to Embodiment 3 of the present application.
  • The existing intelligent stethoscope can only record heart and lung sounds; diagnosis or health monitoring of the user's cardiopulmonary sounds must still be performed by a doctor through a terminal device, or the sounds must be uploaded to the cloud for the doctor to perform remote diagnosis or health monitoring, so its level of intelligence is low.
  • The embodiments of the present application therefore propose connecting an intelligent stethoscope to a terminal device, uploading the sound data collected from the user to a cloud platform, and using cloud algorithms to identify whether the sound data is heart sound data and whether the heart sound data is a normal heart sound. If it is a normal heart sound, a recognition result indicating a normal heart sound is sent to the terminal device to inform the user that he or she is currently healthy; if it is an abnormal heart sound, the specific heart sound abnormality type is identified so that the user can decide, according to his or her health status, whether a doctor's remote help is needed, enabling wide application in the field of remote smart medical care.
  • FIG. 1 shows a schematic diagram of the heart sound recognition method of Embodiment 1 of the present application, and FIG. 2 shows a schematic flowchart of the method. As shown in FIG. 1 and FIG. 2, the method includes:
  • Step 101: Identify the collected sound data to obtain a recognition result.
  • Step 102: If the recognition result is heart sound data, identify the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data.
  • In step 101, the intelligent stethoscope collects sound data from the user and uploads it via the terminal device to a cloud identification network, which identifies the collected sound data to obtain a recognition result.
  • In step 102, the cloud identification network processes the data further according to the recognition result. If the recognition result is heart sound data, it further identifies whether the heart sound data is a normal heart sound. If it is normal, a recognition result indicating a normal heart sound is sent to the terminal device to inform the user that he or she is currently healthy; if it is abnormal, the specific heart sound abnormality type is identified so that the user can decide, according to his or her health status, whether a doctor's remote help is needed.
  • In this embodiment, identifying the collected sound data to obtain a recognition result includes:
  • identifying the sound data using a preset first neural network to determine whether the sound data is heart sound data.
  • In implementation, the collected sound data is identified using a preset first convolutional neural network (CNN). If the recognition result is that the sound data is not heart sound data, that result is sent to the terminal device to prompt the user to adjust the position at which sound is collected; if the recognition result is heart sound data, the cloud system processes the heart sound data further.
  • In this embodiment, identifying the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data includes:
  • identifying the heart sound data using a preset second neural network to obtain the heart sound type of the heart sound data;
  • wherein the preset second neural network includes a feature extraction network and a classification network, the feature extraction network being trained on heart sound frequency features.
  • In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
  • classifying the heart sound data using the preset second neural network to determine whether the heart sound data is a normal heart sound;
  • if the heart sound data is a normal heart sound, sending a recognition result indicating a normal heart sound;
  • if the heart sound data is an abnormal heart sound, classifying the abnormal heart sound using a preset third neural network to obtain the heart sound abnormality type.
  • In implementation, the preset second convolutional neural network classifies the heart sound data, that is, determines whether the user's heart sound (for example, the heartbeat frequency) is normal. If the heart sound data is a normal heart sound, the recognition result indicating a normal heart sound is sent to the terminal device, and the normal heart sound is saved for statistical analysis of the user's health. If the heart sound data is an abnormal heart sound, the preset third convolutional neural network classifies it to determine the specific heart disease type, and that recognition result is sent to the terminal device.
  • In this embodiment, training the feature extraction networks on heart sound frequency features includes:
  • dividing the initialized heart sound data into a plurality of heart sound data samples according to heart sound frequency;
  • training a plurality of feature extraction networks on the plurality of heart sound data samples, respectively.
  • In implementation, the initialized heart sound data is first preprocessed: a) resampling the initialized heart sound data at a sampling frequency of 1000 Hz, and band-pass filtering the resampled heart sound data to obtain heart sound data whose heart sound frequency lies in the range of 25 Hz-400 Hz;
  • b) denoising the spike noise in the band-pass-filtered heart sound data; c) computing the mean and standard deviation of the denoised heart sound data, and normalizing it by subtracting the mean and dividing by the standard deviation;
  • d) dividing the normalized heart sound data by heart sound period (for example, the heartbeat period), that is, uniformly extending the duration of every heart sound period to the longest heart sound period among all the heart sound data, for example setting the heartbeat period to 2.5 seconds;
  • e) splitting the heart sound data of each extended heart sound period into four parts by heart sound frequency, for example into 4 heart sound sub-data whose frequencies lie in the ranges 25 Hz-45 Hz, 45 Hz-80 Hz, 80 Hz-200 Hz and 200 Hz-400 Hz; or into 5 heart sound sub-data in the ranges 25 Hz-45 Hz, 45 Hz-80 Hz, 80 Hz-200 Hz, 200 Hz-400 Hz and 400 Hz-500 Hz.
  • The number of splits can be set according to the actual situation; this embodiment does not limit the number of splits.
  • The training process of the feature extraction networks specifically includes:
  • inputting the 4 heart sound sub-data obtained by preprocessing into 4 feature extraction networks respectively, so as to train the 4 networks and obtain 4 trained feature extraction networks; or inputting the 5 heart sound sub-data obtained by preprocessing
  • into 5 feature extraction networks respectively, so as to train the 5 networks and obtain 5 trained feature extraction networks.
  • The number of feature extraction networks trained here can be set according to the actual situation; this embodiment does not limit the number of feature extraction networks.
  • The training process of the classification network specifically includes:
  • extracting the heart sound frequency features of the 4 or 5 heart sound sub-data using the 4 or 5 trained feature extraction networks, and feeding those features into a classification network based on the Sigmoid function,
  • so as to train the classification network. The trained classification network can then output, from the heart sound frequency features of the 4 or 5 sub-data, a prediction for each heart sound type of the heart sound data; the prediction consists of two decimals that sum to 1, namely the confidences of a normal heart sound and of an abnormal heart sound, respectively.
  • The classification network may also be trained on the heart sound frequency features of the plurality of heart sound sub-data together with auxiliary features. The auxiliary features may be one or more of: the first heart sound, the second heart sound, the Mel-frequency cepstral coefficients (MFCC), the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval, thereby improving the recognition accuracy of the classification network.
  • In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
  • dividing the heart sound data into a plurality of heart sound sub-data according to heart sound frequency; obtaining the heart sound frequency features of the plurality of sub-data using the plurality of feature extraction networks; and obtaining the heart sound type of the heart sound data from those features using the classification network.
  • In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data includes:
  • determining, from those features, the confidence of each heart sound type in the heart sound data; and determining the heart sound type of the heart sound data from the confidences of the heart sound types.
  • In implementation, the specific process of classifying the heart sound data using the preset second neural network includes: preprocessing the collected heart sound data, splitting the heart sound data of N heart sound periods into 4*N or 5*N heart sound sub-data by heart sound frequency, feeding the sub-data of each period into the trained feature extraction networks and classification network, and obtaining for each period the confidences of normal and abnormal heart sound.
  • For the per-type predictions over the N heart sound periods, the confidence sums of normal and abnormal heart sound are accumulated respectively, and the heart sound type whose confidence sum exceeds 0.5*N is determined as the final recognition result; or
  • the heart sound type with the higher total confidence among the predictions over the N heart sound periods is taken as the final recognition result. The decision condition for the final recognition result can be set according to the actual situation; this embodiment does not limit the specific decision condition.
  • In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data may also include determining the heart sound type from those features together with auxiliary features;
  • the auxiliary features being one or more of: the first heart sound, the second heart sound, the Mel-frequency cepstral coefficients, the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval.
  • In implementation, besides the heart sound frequency features of the plurality of sub-data, auxiliary features can be added at the input of the trained classification network to improve the recognition accuracy of the heart sound type, for example by extracting one or more of the above features in each heart sound period and adding them to the input of the trained classification network.
  • The first heart sound and the second heart sound may correspond to the systolic and diastolic phases of the heartbeat, or to the heart sound and lung sound of the cardiopulmonary sound; they can be set according to the actual situation,
  • and this embodiment does not specifically limit the first heart sound and the second heart sound.
  • Step 201: Input the collected sound data into the first convolutional neural network, CNN1, to identify whether it is heart sound data. If it is not heart sound data, send the recognition result that the sound data is not heart sound data to the terminal device.
  • Step 202: If it is heart sound data, input the heart sound data into the second convolutional neural network, CNN2, to identify whether the heart sound data (for example, the heartbeat frequency) is normal. If it is normal, send the recognition result that the heart sound data is normal to the terminal device, and save the heart sound data.
  • Step 203: If the heart sound data is abnormal, input the heart sound data into the third convolutional neural network, CNN3, to identify the specific heart disease type, such as arrhythmia or heart valve disease, and send to the terminal device the result of whether the user needs a doctor's remote help.
  • Based on the same inventive concept, the embodiments of the present application also provide a cloud system for heart sound recognition. Since the principle by which these devices solve the problem is similar to that of the heart sound recognition method, their implementation can refer to the implementation of the method, and repeated details are omitted here.
  • FIG. 3 is a structural diagram of the heart sound recognition cloud system according to Embodiment 2 of the present application.
  • As shown in FIG. 3, the heart sound recognition cloud system 300 may include: a terminal device 301, a first identification network 302, and a second identification network 303.
  • The terminal device 301 is configured to collect sound data.
  • The first identification network 302 is configured to identify the collected sound data to obtain a recognition result.
  • The second identification network 303 is configured to, if the recognition result is heart sound data, identify the heart sound data based on heart sound frequency and obtain the heart sound type of the heart sound data.
  • In this embodiment, the first identification network:
  • identifies the sound data using a preset first neural network to determine whether the sound data is heart sound data.
  • In this embodiment, the second identification network:
  • identifies the heart sound data using a preset second neural network to obtain the heart sound type of the heart sound data; the preset second neural network includes a feature extraction network and a classification network, the feature extraction network being trained on heart sound frequency features.
  • In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
  • classifying the heart sound data using the preset second neural network to determine whether the heart sound data is a normal heart sound;
  • if the heart sound data is a normal heart sound, sending a recognition result indicating a normal heart sound;
  • if the heart sound data is an abnormal heart sound, classifying the abnormal heart sound using a preset third neural network to obtain the heart sound abnormality type.
  • In this embodiment, training the feature extraction networks on heart sound frequency features includes:
  • dividing the initialized heart sound data into a plurality of heart sound data samples according to heart sound frequency;
  • training a plurality of feature extraction networks on the plurality of heart sound data samples, respectively.
  • In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
  • dividing the heart sound data into a plurality of heart sound sub-data according to heart sound frequency; obtaining the heart sound frequency features of the plurality of sub-data using the plurality of feature extraction networks; and obtaining the heart sound type of the heart sound data from those features using the classification network.
  • In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data includes:
  • determining, from those features, the confidence of each heart sound type in the heart sound data, and determining the heart sound type of the heart sound data from the confidences of the heart sound types.
  • In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data may also include determining the heart sound type from those features and auxiliary features;
  • the auxiliary features being one or more of: the first heart sound, the second heart sound, the Mel-frequency cepstral coefficients, the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval.
  • Based on the same inventive concept, an electronic device is also provided in the embodiments of the present application. Since its principle is similar to that of the heart sound recognition method, its implementation can refer to the implementation of the method, and repeated details are omitted here.
  • As shown in FIG. 4, the electronic device includes: a transceiver 401, a memory 402, one or more processors 403, and one or more modules.
  • The one or more modules are stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of any of the above methods.
  • The embodiments of the present application also provide a computer program product for use with an electronic device. Since its principle is similar to that of the heart sound recognition method, its implementation can refer to the implementation of the method, and repeated details are omitted here.
  • The computer program product includes a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the steps of any of the above methods.
  • The embodiments of the present application can be provided as a method, a system, or a computer program product.
  • Accordingly, the present application can take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
  • Moreover, the application can take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus
  • that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing,
  • whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Landscapes

  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)

Abstract

A heart sound recognition method and cloud system, the method including: identifying collected sound data to obtain a recognition result (101); and, if the recognition result is heart sound data, identifying the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data (102). The method and system inform the user of his or her current state of health and make it easy for the user to decide, according to that state, whether a doctor's remote help is needed, thereby making diagnosis of the user's condition and health monitoring intelligent while achieving good recognition accuracy.

Description

Heart Sound Recognition Method and Cloud System

TECHNICAL FIELD
The present application relates to the field of smart medical technology, and in particular to a heart sound recognition method and cloud system.
BACKGROUND
With the development of Internet Plus and mobile medical technology, smart medical devices are becoming more and more common. The stethoscope is one of the clinical diagnostic tools doctors use most often, and making it intelligent is a first choice in the development of smart medical devices. In the prior art, however, intelligent stethoscopes offer little intelligence: they can only record heart and lung sounds, after which a doctor diagnoses or monitors the user's heart and lung sounds through a terminal device, or the sounds are uploaded to the cloud for the doctor to diagnose the condition and monitor health. That is, they provide only basic data collection; the parts involving diagnosis and health monitoring still require the doctor to further analyze the collected data.
SUMMARY
The embodiments of the present application provide a heart sound recognition method and cloud system, to solve the technical problem that existing intelligent stethoscopes offer little intelligence and only basic data collection.
In one aspect, an embodiment of the present application provides a heart sound recognition method, including:
identifying collected sound data to obtain a recognition result;
if the recognition result is heart sound data, identifying the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data.
In another aspect, an embodiment of the present application provides a heart sound recognition cloud system, including:
a first identification network configured to identify collected sound data to obtain a recognition result;
a second identification network configured to, if the recognition result is heart sound data, identify the heart sound data based on heart sound frequency and obtain the heart sound type of the heart sound data.
In another aspect, an embodiment of the present application provides an electronic device, including:
a transceiver, a memory, and one or more processors; and
one or more modules stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing each step of the above method.
In another aspect, an embodiment of the present application provides a computer program product for use with an electronic device, the computer program product including a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing each step of the above method.
The beneficial effects are as follows:
In this embodiment, the collected sound data is identified to determine whether it is heart sound data. If it is identified as heart sound data, the heart sound data is identified based on heart sound frequency to obtain its heart sound type, which is fed back to the terminal device. This informs the user of his or her current state of health and makes it easy for the user to decide, according to that state, whether a doctor's remote help is needed, thereby making diagnosis of the user's condition and health monitoring intelligent while achieving good recognition accuracy.
BRIEF DESCRIPTION OF THE DRAWINGS
Specific embodiments of the present application are described below with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of the heart sound recognition method of Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of the heart sound recognition method of Embodiment 1 of the present application;
FIG. 3 is a structural diagram of the heart sound recognition cloud system of Embodiment 2 of the present application;
FIG. 4 is a schematic structural diagram of the electronic device of Embodiment 3 of the present application.
DETAILED DESCRIPTION
The essence of the technical solutions of the embodiments of the present invention is further clarified below through specific examples.
To make the technical solutions and advantages of the present application clearer, exemplary embodiments of the present application are described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not an exhaustive list of all of them. Where there is no conflict, the embodiments in this description and the features in the embodiments may be combined with one another.
In the course of the invention, the inventors noticed that:
Existing intelligent stethoscopes can only record heart and lung sounds; a doctor must still diagnose or monitor the user's heart and lung sounds through a terminal device, or the sounds must be uploaded to the cloud for the doctor to perform remote diagnosis or health monitoring, so their level of intelligence is low.
In view of the above shortcomings, the embodiments of the present application propose connecting an intelligent stethoscope to a terminal device, uploading the sound data collected from the user to a cloud platform, and using cloud algorithms to identify whether the sound data is heart sound data and whether the heart sound data is a normal heart sound. If it is a normal heart sound, a recognition result indicating a normal heart sound is sent to the terminal device to inform the user that he or she is currently healthy; if it is an abnormal heart sound, the specific heart sound abnormality type is identified so that the user can decide, according to his or her health status, whether a doctor's remote help is needed, thereby enabling wide application in the field of remote smart medical care.
To facilitate the implementation of the present application, the following examples are described.
Embodiment 1
FIG. 1 shows a schematic diagram of the heart sound recognition method of Embodiment 1 of the present application, and FIG. 2 shows a schematic flowchart of the method. As shown in FIG. 1 and FIG. 2, the method includes:
Step 101: Identify the collected sound data to obtain a recognition result.
Step 102: If the recognition result is heart sound data, identify the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data.
In step 101, the intelligent stethoscope collects sound data from the user and uploads it via the terminal device to a cloud identification network, which identifies the collected sound data to obtain a recognition result.
In step 102, the cloud identification network processes the data further according to the recognition result. If the recognition result is heart sound data, it further identifies whether the heart sound data is a normal heart sound. If it is normal, a recognition result indicating a normal heart sound is sent to the terminal device to inform the user that he or she is currently healthy; if it is abnormal, the specific heart sound abnormality type is identified so that the user can decide, according to his or her health status, whether a doctor's remote help is needed.
In this embodiment, identifying the collected sound data to obtain a recognition result includes:
identifying the sound data using a preset first neural network to determine whether the sound data is heart sound data.
In implementation, the collected sound data is identified using a preset first convolutional neural network (CNN). If the recognition result is that the sound data is not heart sound data, that result is sent to the terminal device to prompt the user to adjust the position at which sound is collected; if the recognition result is heart sound data, the cloud system processes the heart sound data further.
In this embodiment, identifying the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data includes:
identifying the heart sound data using a preset second neural network to obtain the heart sound type of the heart sound data;
the preset second neural network including a feature extraction network and a classification network, the feature extraction network being trained on heart sound frequency features.
In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
classifying the heart sound data using the preset second neural network to determine whether the heart sound data is a normal heart sound;
if the heart sound data is a normal heart sound, sending a recognition result indicating a normal heart sound;
if the heart sound data is an abnormal heart sound, classifying the abnormal heart sound using a preset third neural network to obtain the heart sound abnormality type.
In implementation, the preset second convolutional neural network classifies the heart sound data, that is, determines whether the user's heart sound (for example, the heartbeat frequency) is normal. If the heart sound data is a normal heart sound, the recognition result indicating a normal heart sound is sent to the terminal device to inform the user, and the normal heart sound is saved for statistical analysis of the user's health. If the heart sound data is an abnormal heart sound, the preset third convolutional neural network classifies the abnormal heart sound to determine the specific heart disease type, such as arrhythmia or heart valve disease, and the recognition result of the heart disease type is sent to the terminal device so that the user can decide, according to his or her health status, whether to request a doctor's remote help.
In this embodiment, training the feature extraction networks on heart sound frequency features includes:
dividing the initialized heart sound data into a plurality of heart sound data samples according to heart sound frequency;
training a plurality of feature extraction networks on the plurality of heart sound data samples, respectively.
In implementation, this specifically includes:
1) Preprocessing the initialized heart sound data:
a) resampling the initialized heart sound data at a sampling frequency of 1000 Hz, and band-pass filtering the resampled heart sound data to obtain heart sound data whose heart sound frequency lies in the range of 25 Hz-400 Hz;
b) denoising the spike noise in the band-pass-filtered heart sound data;
c) computing the mean and standard deviation of the denoised heart sound data, and normalizing it by subtracting the mean and dividing by the standard deviation;
d) dividing the normalized heart sound data by heart sound period (for example, the heartbeat period), that is, uniformly extending the duration of every heart sound period to the longest heart sound period among all the heart sound data, for example setting the heartbeat period to 2.5 seconds;
e) splitting the heart sound data of each extended heart sound period into four parts by heart sound frequency, for example into 4 heart sound sub-data whose frequencies lie in the ranges 25 Hz-45 Hz, 45 Hz-80 Hz, 80 Hz-200 Hz and 200 Hz-400 Hz; or into 5 heart sound sub-data in the ranges 25 Hz-45 Hz, 45 Hz-80 Hz, 80 Hz-200 Hz, 200 Hz-400 Hz and 400 Hz-500 Hz. The number of splits can be set according to the actual situation and is not limited by this embodiment.
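The preprocessing steps above can be sketched numerically. This is a minimal, hypothetical illustration, not the patent's implementation: the resampler uses linear interpolation, the band-pass filter is a crude FFT mask (a real system would likely use a designed IIR/FIR filter), and the spike-denoising step b) is omitted.

```python
import numpy as np

BANDS = [(25, 45), (45, 80), (80, 200), (200, 400)]  # step e), 4-way split

def resample(x, fs_in, fs_out=1000):
    """Step a): resample to 1000 Hz via linear interpolation."""
    t_out = np.arange(0, len(x) / fs_in, 1 / fs_out)
    t_in = np.arange(len(x)) / fs_in
    return np.interp(t_out, t_in, x)

def bandpass(x, fs, lo, hi):
    """Crude FFT-mask band-pass filter: zero bins outside [lo, hi] Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def normalize(x):
    """Step c): subtract the mean and divide by the standard deviation."""
    return (x - x.mean()) / x.std()

def preprocess_period(x, fs_in):
    """Apply steps a), c), e) to one 2.5 s heart sound period (step d))."""
    y = normalize(bandpass(resample(x, fs_in), 1000, 25, 400))
    return [bandpass(y, 1000, lo, hi) for lo, hi in BANDS]

fs = 4000
t = np.arange(0, 2.5, 1 / fs)            # one extended 2.5 s period
x = np.sin(2 * np.pi * 60 * t)           # toy 60 Hz stand-in for a heart sound
subs = preprocess_period(x, fs)
print(len(subs), len(subs[0]))           # 4 sub-signals, 2500 samples each
```

A 60 Hz tone falls inside the 45 Hz-80 Hz band, so nearly all the energy ends up in the second sub-signal, which is the property the per-band feature extraction networks rely on.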
2) The training process of the feature extraction networks specifically includes:
inputting the 4 heart sound sub-data obtained by preprocessing into 4 feature extraction networks respectively, so as to train the 4 networks and obtain 4 trained feature extraction networks; or inputting the 5 heart sound sub-data into 5 feature extraction networks respectively, so as to train the 5 networks and obtain 5 trained feature extraction networks. The number of feature extraction networks trained here can be set according to the actual situation and is not limited by this embodiment.
3) The training process of the classification network specifically includes:
extracting the heart sound frequency features of the 4 or 5 heart sound sub-data using the 4 or 5 trained feature extraction networks, and feeding those features into a classification network based on the Sigmoid function to train it, so that the trained classification network can output, from the heart sound frequency features of the 4 or 5 sub-data, a prediction for each heart sound type of the heart sound data; the prediction consists of two decimals that sum to 1, namely the confidences of a normal heart sound and of an abnormal heart sound, respectively.
The classification network may also be trained on the heart sound frequency features of the plurality of heart sound sub-data together with auxiliary features. The auxiliary features may be one or more of: the first heart sound, the second heart sound, the Mel-frequency cepstral coefficients (MFCC), the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval, thereby improving the recognition accuracy of the classification network.
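The "two decimals that sum to 1" output can be sketched with a single Sigmoid unit over the concatenated per-band features: the unit outputs the normal-heart-sound confidence p, and the abnormal confidence is 1 - p. The weights and feature sizes below are toy values for illustration only; the patent does not disclose the network architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classify(band_features, w, b):
    """band_features: list of per-band feature vectors (4 or 5 of them).

    Returns (p_normal, p_abnormal), two decimals that sum to 1."""
    x = np.concatenate(band_features)
    p_normal = sigmoid(w @ x + b)   # one Sigmoid output unit
    return float(p_normal), float(1.0 - p_normal)

rng = np.random.default_rng(0)
feats = [rng.standard_normal(8) for _ in range(4)]  # 4 bands, 8 features each
w = rng.standard_normal(32) * 0.1                   # untrained toy weights
p_norm, p_abn = classify(feats, w, b=0.0)
print(round(p_norm + p_abn, 6))                     # 1.0
```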
In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
dividing the heart sound data into a plurality of heart sound sub-data according to heart sound frequency;
obtaining the heart sound frequency features of the plurality of heart sound sub-data using the plurality of feature extraction networks;
obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data using the classification network.
In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data includes:
determining, from the heart sound frequency features of the plurality of sub-data, the confidence of each heart sound type in the heart sound data;
determining the heart sound type of the heart sound data from the confidences of the heart sound types.
In implementation, the specific process of classifying the heart sound data using the preset second neural network includes:
1) preprocessing the collected heart sound data, splitting the heart sound data of N heart sound periods into 4*N or 5*N heart sound sub-data by heart sound frequency;
2) feeding the 4 or 5 sub-data of each heart sound period into the 4 or 5 trained feature extraction networks to obtain their heart sound frequency features, and feeding those features into the trained classification network to obtain, for the heart sound data of each period, a prediction (for example, a confidence) for each heart sound type; the types include normal and abnormal heart sound, and the two predicted decimals sum to 1, yielding normal and abnormal heart sound predictions for the heart sound data of all N periods;
3) for the per-type predictions over the N heart sound periods, accumulating the confidence sums of normal and abnormal heart sound respectively, and determining the type whose confidence sum exceeds 0.5*N as the final recognition result; or taking the type with the higher confidence sum among the predictions over the N periods as the final recognition result. The decision condition for the final result can be set according to the actual situation and is not limited by this embodiment.
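Step 3) can be sketched as a small aggregation function over the per-period confidence pairs; both decision rules named in the text are shown. Function and variable names are illustrative.

```python
def aggregate(period_preds):
    """period_preds: list of (p_normal, p_abnormal) pairs, one per heart sound
    period. Returns the final result under each of the two rules in the text."""
    n = len(period_preds)
    sum_normal = sum(p for p, _ in period_preds)
    sum_abnormal = sum(q for _, q in period_preds)
    # Rule 1: the type whose confidence sum exceeds 0.5 * N
    rule1 = "normal" if sum_normal > 0.5 * n else "abnormal"
    # Rule 2: the type with the higher confidence sum
    rule2 = "normal" if sum_normal > sum_abnormal else "abnormal"
    return rule1, rule2

preds = [(0.9, 0.1), (0.8, 0.2), (0.4, 0.6)]  # N = 3 periods
print(aggregate(preds))  # ('normal', 'normal'): 2.1 > 1.5 and 2.1 > 0.9
```

Because each pair sums to 1, the two rules coincide here; they are listed as alternative decision conditions in the text.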
In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data includes:
determining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data and auxiliary features;
the auxiliary features being one or more of: the first heart sound, the second heart sound, the Mel-frequency cepstral coefficients, the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval.
In implementation, besides the heart sound frequency features of the plurality of sub-data, auxiliary features can be added at the input of the trained classification network to improve the recognition accuracy of the heart sound type, for example by extracting, in each heart sound period, one or more of the first heart sound, the second heart sound, the diastolic Mel-frequency cepstral coefficients, the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval, and adding them to the input of the trained classification network. The first heart sound and the second heart sound may correspond to the systolic and diastolic phases of the heartbeat, or to the heart sound and lung sound of the cardiopulmonary sound; they can be set according to the actual situation and are not specifically limited in this embodiment.
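A few of the auxiliary features named above can be computed as follows. This is a hypothetical sketch: it assumes the S1/S2 segments and the S1-S2 intervals have already been located by a segmentation step that is not shown, the "amplitude" of a segment is taken as its peak absolute value, and the MFCC feature is omitted.

```python
import numpy as np

def kurtosis(x):
    """Excess kurtosis of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 4).mean() - 3.0)

def auxiliary_features(s1_segments, s2_segments, s1_s2_intervals):
    """Means/standard deviations of S1 and S2 amplitudes, the std of the S1
    amplitude kurtosis, and the mean/std of the S1-S2 interval."""
    s1_amps = [np.abs(s).max() for s in s1_segments]
    s2_amps = [np.abs(s).max() for s in s2_segments]
    return {
        "s1_mean": float(np.mean(s1_amps)),
        "s1_std": float(np.std(s1_amps)),
        "s2_mean": float(np.mean(s2_amps)),
        "s2_std": float(np.std(s2_amps)),
        "s1_kurtosis_std": float(np.std([kurtosis(s) for s in s1_segments])),
        "interval_mean": float(np.mean(s1_s2_intervals)),
        "interval_std": float(np.std(s1_s2_intervals)),
    }

rng = np.random.default_rng(1)
s1 = [rng.standard_normal(100) for _ in range(5)]   # toy S1 segments
s2 = [rng.standard_normal(100) for _ in range(5)]   # toy S2 segments
ivals = [0.30, 0.31, 0.29, 0.32, 0.30]              # toy S1-S2 intervals (s)
feats = auxiliary_features(s1, s2, ivals)
print(sorted(feats))
```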
Taking a specific scenario as an example, Embodiment 1 of the present application is described in detail; the specific flow is as follows:
Step 201: Input the collected sound data into the first convolutional neural network, CNN1, and identify whether it is heart sound data. If it is not heart sound data, send the recognition result that the sound data is not heart sound data to the terminal device.
Step 202: If it is heart sound data, input the heart sound data into the second convolutional neural network, CNN2, to identify whether the heart sound data (for example, the heartbeat frequency) is normal. If it is normal, send the recognition result that the heart sound data is normal to the terminal device, and save the heart sound data.
Step 203: If the heart sound data is abnormal, input the heart sound data into the third convolutional neural network, CNN3, to identify the specific heart disease type, such as arrhythmia or heart valve disease, and send to the terminal device the result of whether the user needs a doctor's remote help.
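The CNN1 → CNN2 → CNN3 dispatch of steps 201-203 reduces to a small control-flow sketch. The three classifiers are stubbed with simple callables here; in the patent they are trained convolutional neural networks running in the cloud, and the stub logic (sound kind, bpm range) is purely illustrative.

```python
def recognize(sound, cnn1, cnn2, cnn3):
    """Return the message the cloud would send back to the terminal device."""
    if not cnn1(sound):                  # step 201: is it heart sound data?
        return "not heart sound data; adjust the collection position"
    if cnn2(sound):                      # step 202: is the heart sound normal?
        return "heart sound normal"      # (the normal recording is also saved)
    disease = cnn3(sound)                # step 203: which abnormality type?
    return f"abnormal heart sound: {disease}; remote help may be needed"

# Toy stubs standing in for the trained networks.
is_heart = lambda s: s.get("kind") == "heart"
is_normal = lambda s: s.get("bpm", 0) in range(60, 101)
which = lambda s: "arrhythmia"

print(recognize({"kind": "lung"}, is_heart, is_normal, which))
print(recognize({"kind": "heart", "bpm": 72}, is_heart, is_normal, which))
print(recognize({"kind": "heart", "bpm": 180}, is_heart, is_normal, which))
```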
Embodiment 2
Based on the same inventive concept, the embodiments of the present application also provide a cloud system for heart sound recognition. Since the principle by which these devices solve the problem is similar to that of the heart sound recognition method, their implementation can refer to the implementation of the method, and repeated details are omitted here.
FIG. 3 shows a structural diagram of the heart sound recognition cloud system of Embodiment 2 of the present application. As shown in FIG. 3, the heart sound recognition cloud system 300 may include: a terminal device 301, a first identification network 302, and a second identification network 303.
The terminal device 301 is configured to collect sound data.
The first identification network 302 is configured to identify the collected sound data to obtain a recognition result.
The second identification network 303 is configured to, if the recognition result is heart sound data, identify the heart sound data based on heart sound frequency and obtain the heart sound type of the heart sound data.
In this embodiment, the first identification network:
identifies the sound data using a preset first neural network to determine whether the sound data is heart sound data.
In this embodiment, the second identification network:
identifies the heart sound data using a preset second neural network to obtain the heart sound type of the heart sound data;
the preset second neural network including a feature extraction network and a classification network, the feature extraction network being trained on heart sound frequency features.
In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
classifying the heart sound data using the preset second neural network to determine whether the heart sound data is a normal heart sound;
if the heart sound data is a normal heart sound, sending a recognition result indicating a normal heart sound;
if the heart sound data is an abnormal heart sound, classifying the abnormal heart sound using a preset third neural network to obtain the heart sound abnormality type.
In this embodiment, training the feature extraction networks on heart sound frequency features includes:
dividing the initialized heart sound data into a plurality of heart sound data samples according to heart sound frequency;
training a plurality of feature extraction networks on the plurality of heart sound data samples, respectively.
In this embodiment, classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data includes:
dividing the heart sound data into a plurality of heart sound sub-data according to heart sound frequency;
obtaining the heart sound frequency features of the plurality of heart sound sub-data using the plurality of feature extraction networks;
obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data using the classification network.
In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data includes:
determining, from the heart sound frequency features of the plurality of sub-data, the confidence of each heart sound type in the heart sound data;
determining the heart sound type of the heart sound data from the confidences of the heart sound types.
In this embodiment, obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data includes:
determining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of sub-data and auxiliary features;
the auxiliary features being one or more of: the first heart sound, the second heart sound, the Mel-frequency cepstral coefficients, the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of that interval.
Embodiment 3
Based on the same inventive concept, the embodiments of the present application also provide an electronic device. Since its principle is similar to that of the heart sound recognition method, its implementation can refer to the implementation of the method, and repeated details are omitted here.
FIG. 4 shows a schematic structural diagram of the electronic device of Embodiment 3 of the present application. As shown in FIG. 4, the electronic device includes: a transceiver 401, a memory 402, one or more processors 403, and one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, and including instructions for performing each step of any of the above methods.
Embodiment 4
Based on the same inventive concept, the embodiments of the present application also provide a computer program product for use with an electronic device. Since its principle is similar to that of the heart sound recognition method, its implementation can refer to the implementation of the method, and repeated details are omitted here. The computer program product includes a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing each step of any of the above methods.
For convenience of description, the parts of the above devices are described separately as modules divided by function. Of course, when implementing the present application, the functions of the modules or units may be implemented in one or more pieces of software or hardware.
Those skilled in the art should understand that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, and the like) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although preferred embodiments of the present application have been described, those skilled in the art can make additional changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications that fall within the scope of the present application.

Claims (18)

  1. A heart sound recognition method, comprising:
    identifying collected sound data to obtain a recognition result;
    if the recognition result is heart sound data, identifying the heart sound data based on heart sound frequency to obtain a heart sound type of the heart sound data.
  2. The method according to claim 1, wherein identifying the collected sound data to obtain a recognition result comprises:
    identifying the sound data using a preset first neural network to determine whether the sound data is heart sound data.
  3. The method according to claim 1, wherein identifying the heart sound data based on heart sound frequency to obtain the heart sound type of the heart sound data comprises:
    identifying the heart sound data using a preset second neural network to obtain the heart sound type of the heart sound data;
    wherein the preset second neural network comprises a feature extraction network and a classification network, the feature extraction network being trained on heart sound frequency features.
  4. The method according to claim 3, wherein classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data comprises:
    classifying the heart sound data using the preset second neural network to determine whether the heart sound data is a normal heart sound;
    if the heart sound data is a normal heart sound, sending a recognition result indicating a normal heart sound;
    if the heart sound data is an abnormal heart sound, classifying the abnormal heart sound using a preset third neural network to obtain a heart sound abnormality type.
  5. The method according to claim 3, wherein the feature extraction network being trained on heart sound frequency features comprises:
    dividing initialized heart sound data into a plurality of heart sound data samples according to heart sound frequency;
    training a plurality of feature extraction networks on the plurality of heart sound data samples, respectively.
  6. The method according to claim 3 or 5, wherein classifying the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data comprises:
    dividing the heart sound data into a plurality of heart sound sub-data according to heart sound frequency;
    obtaining heart sound frequency features of the plurality of heart sound sub-data using the plurality of feature extraction networks;
    obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data using the classification network.
  7. The method according to claim 6, wherein obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data comprises:
    determining, from the heart sound frequency features of the plurality of heart sound sub-data, a confidence of each heart sound type in the heart sound data;
    determining the heart sound type of the heart sound data from the confidences of the heart sound types.
  8. The method according to claim 6, wherein obtaining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data comprises:
    determining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data and auxiliary features;
    the auxiliary features being one or more of: a first heart sound, a second heart sound, Mel-frequency cepstral coefficients, a standard deviation of the amplitude kurtosis of the first heart sound, a mean of the first heart sound, a standard deviation of the first heart sound, a mean of the second heart sound, a standard deviation of the second heart sound, a mean of the interval between the first heart sound and the second heart sound, and a standard deviation of that interval.
  9. A heart sound recognition cloud system, comprising:
    a terminal device configured to collect sound data;
    a first recognition network configured to recognize the collected sound data to obtain a recognition result; and
    a second recognition network configured to, if the recognition result indicates heart sound data, recognize the heart sound data based on heart sound frequency to obtain a heart sound type of the heart sound data.
  10. The cloud system according to claim 9, wherein the first recognition network is configured to:
    recognize the sound data using a preset first neural network to determine whether the sound data is heart sound data.
  11. The cloud system according to claim 9, wherein the second recognition network is configured to:
    recognize the heart sound data using a preset second neural network to obtain the heart sound type of the heart sound data;
    wherein the preset second neural network comprises a feature extraction network and a classification network, the feature extraction network being trained on heart sound frequency features.
  12. The cloud system according to claim 11, wherein performing classification recognition on the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data comprises:
    performing classification recognition on the heart sound data using the preset second neural network to determine whether the heart sound data is a normal heart sound;
    if the heart sound data is a normal heart sound, sending a recognition result indicating a normal heart sound; and
    if the heart sound data is an abnormal heart sound, performing classification recognition on the abnormal heart sound using a preset third neural network to obtain a heart sound abnormality type.
  13. The cloud system according to claim 11, wherein the feature extraction network being trained on heart sound frequency features comprises:
    dividing initialized heart sound data into a plurality of heart sound data samples according to heart sound frequency; and
    training a plurality of feature extraction networks respectively from the plurality of heart sound data samples.
  14. The cloud system according to claim 11 or 13, wherein performing classification recognition on the heart sound data using the preset second neural network to obtain the heart sound type of the heart sound data comprises:
    dividing the heart sound data into a plurality of heart sound sub-data according to heart sound frequency;
    obtaining heart sound frequency features of the plurality of heart sound sub-data using the plurality of feature extraction networks; and
    identifying the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data using the classification network.
  15. The cloud system according to claim 14, wherein identifying the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data comprises:
    determining, from the heart sound frequency features of the plurality of heart sound sub-data, a confidence for each heart sound type in the heart sound data; and
    determining the heart sound type of the heart sound data from the confidence of each heart sound type.
  16. The cloud system according to claim 14, wherein identifying the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data comprises:
    determining the heart sound type of the heart sound data from the heart sound frequency features of the plurality of heart sound sub-data and auxiliary features;
    wherein the auxiliary features are one or more of: the first heart sound, the second heart sound, Mel-frequency cepstral coefficients, the standard deviation of the amplitude kurtosis of the first heart sound, the mean of the first heart sound, the standard deviation of the first heart sound, the mean of the second heart sound, the standard deviation of the second heart sound, the mean of the interval between the first and second heart sounds, and the standard deviation of the interval between the first and second heart sounds.
  17. An electronic device, comprising:
    a transceiver device, a memory, and one or more processors; and
    one or more modules, the one or more modules being stored in the memory and configured to be executed by the one or more processors, the one or more modules including instructions for performing the steps of the method according to any one of claims 1-8.
  18. A computer program product for use in combination with an electronic device, the computer program product comprising a computer-readable storage medium and a computer program mechanism embedded therein, the computer program mechanism including instructions for performing the steps of the method according to any one of claims 1-8.
PCT/CN2018/073237 2018-01-18 2018-01-18 Heart sound recognition method and cloud system WO2019140600A1 (zh)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201880000118.0A | 2018-01-18 | 2018-01-18 | Heart sound recognition method and cloud system (zh)
PCT/CN2018/073237 | 2018-01-18 | 2018-01-18 | Heart sound recognition method and cloud system (zh)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
PCT/CN2018/073237 | 2018-01-18 | 2018-01-18 | Heart sound recognition method and cloud system (zh)

Publications (1)

Publication Number | Publication Date
WO2019140600A1 (zh) | 2019-07-25

Family ID=62895870

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/CN2018/073237 | Heart sound recognition method and cloud system | 2018-01-18 | 2018-01-18

Country Status (2)

CN (1) CN108323158A (zh)
WO (1) WO2019140600A1 (zh)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109107016B (zh) * 2018-08-17 2021-05-25 贵州优品睡眠健康产业有限公司 Somatosensory vibration music sleep-aiding system
CN109330622A (zh) * 2018-11-21 2019-02-15 英华达(上海)科技有限公司 Intelligent human body monitoring system and abdominal sound monitoring device thereof
CN110123367B (zh) * 2019-04-04 2022-11-15 平安科技(深圳)有限公司 Computer device, heart sound recognition apparatus and method, model training apparatus, and storage medium
CN110558944A (zh) * 2019-09-09 2019-12-13 成都智能迭迦科技合伙企业(有限合伙) Heart sound processing method and apparatus, electronic device, and computer-readable storage medium
CN111904459A (zh) * 2020-08-27 2020-11-10 广东汉泓医疗科技有限公司 Cardiopulmonary sound auscultation detector, auscultation system, and auscultation method for guiding rapid auscultation
CN112598086A (zh) * 2021-03-04 2021-04-02 四川大学 Deep-neural-network-based classification method and auxiliary system for common colon diseases

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001022883A1 * 1999-09-29 2001-04-05 Siemens Corporate Research, Inc. Multi-modal cardiac diagnostic decision support system and method
US20050043643A1 * 2003-07-18 2005-02-24 Roland Priemer Extraction of one or more discrete heart sounds from heart sound information
CN1850007A * 2006-05-16 2006-10-25 清华大学深圳研究生院 Automatic heart disease classification system based on heart sound analysis and heart sound segmentation method thereof
CN102697520A * 2012-05-08 2012-10-03 天津沃康科技有限公司 Electronic stethoscope with intelligent recognition function
CN105662457A * 2016-03-22 2016-06-15 宁波元鼎电子科技有限公司 Intelligent stethoscope
CN107529645A * 2017-06-29 2018-01-02 重庆邮电大学 Deep-learning-based intelligent heart sound diagnosis system and method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008000254A1 * 2006-06-26 2008-01-03 Coloplast A/S Multi parametric classification of cardiovascular sounds
CN102334985A * 2010-07-16 2012-02-01 香港理工大学 Method for detecting pulmonary artery blood pressure by heart sound analysis using a multilayer feed-forward network
CN103479383B * 2013-09-25 2015-05-20 清华大学 Heart sound signal analysis apparatus and intelligent cardiac stethoscope having the same
CN105982692A * 2015-04-29 2016-10-05 广东医学院附属医院 Multifunctional stethoscope with broadband parameter monitoring and implementation method thereof
CN106308845A * 2015-06-23 2017-01-11 黄楚 Intelligent remote stethoscope and method of use thereof
CN105212960B * 2015-08-19 2018-03-30 四川长虹电器股份有限公司 Heart sound signal quality assessment method
CN106983519A * 2017-05-11 2017-07-28 戊星务(大连)医疗器械有限公司 Wireless electronic auscultation display and audio and ECG oscillographic diagnostic analysis method


Also Published As

Publication number Publication date
CN108323158A (zh) 2018-07-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18900914; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established (Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 19/11/2020))
122 Ep: pct application non-entry in european phase (Ref document number: 18900914; Country of ref document: EP; Kind code of ref document: A1)