CN114376599A - Organ auscultation device and method thereof - Google Patents

Organ auscultation device and method thereof

Info

Publication number
CN114376599A
CN114376599A
Authority
CN
China
Prior art keywords
organ
sound
module
processing module
auscultation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110288344.7A
Other languages
Chinese (zh)
Inventor
刘义昌
孙立民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Publication of CN114376599A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B7/00Instruments for auscultation
    • A61B7/02Stethoscopes
    • A61B7/04Electric stethoscopes

Abstract

The invention relates to an organ auscultation device mainly comprising a sound receiving module, a processing module and a judging module. The sound receiving module receives a sound signal from any part of a human body. The processing module processes the sound signal, uses a deep learning algorithm to capture at least one organ sound source from a plurality of sound characteristic points, and generates auscultation data from the organ sound source. The judging module checks whether the organ sound source in the auscultation data fails to meet an organ judgment threshold corresponding to that sound source; if so, it generates a warning signal and sends it to a warning device or a receiving device, and if not, it proceeds to judge the next auscultation data. The invention can thus effectively listen to an organ sound source and automatically judge whether it is abnormal.

Description

Organ auscultation device and method thereof
Technical Field
The present invention relates to organ auscultation technology, and in particular to an organ auscultation device and method for picking up the sound signal of a specific organ.
Background
When a patient is ill, a physician usually uses a stethoscope to obtain the sound of the heart or other relevant organs of the patient, and then uses the frequency or intensity of the sound to diagnose whether the patient's organs are abnormal.
However, when a physician listens to the sound of the stomach, for example, the stethoscope picks up not only the sound emitted from the stomach but also sounds emitted from other parts of the body, so it is difficult for the physician to accurately isolate the stomach's sound signal in real time. Furthermore, since auscultation sounds are interpreted by the physician, the same signs may lead to different diagnoses among different patients because of external factors or factors specific to the physician.
Therefore, there is a need in the art for a technique that can accurately receive organ sounds and perform a correlation determination through electronic data, so as to solve the problems of the prior art.
Disclosure of Invention
The invention aims to provide an organ auscultation device that uses a sound receiving module to receive sound signals from any part of a human body, processes them with a processing module to generate an organ sound source, and then generates auscultation data from that source. A judging module determines from the auscultation data whether the organ sound source is abnormal, so the device can effectively listen to the organ sound source and automatically judge whether it is abnormal, thereby improving on the prior art.
In order to achieve the above object, the present invention provides an organ auscultation device comprising: a sound receiving module arranged to receive a sound signal from any part of a human body; a processing module connected with the sound receiving module to receive the sound signal, separate a plurality of sound sources in the sound signal, and compare a plurality of sound characteristic points in the sound sources against a plurality of organ audio data, wherein the processing module captures at least one organ sound source from the sound characteristic points by using a deep learning algorithm and generates auscultation data from the organ sound source; and a judging module connected with the processing module to receive the auscultation data, which judges whether the organ sound source in the auscultation data fails to meet an organ judgment threshold corresponding to the organ sound source; if so, the judging module generates a warning signal to send to a warning device or a receiving device, and if not, the judging module proceeds to judge another auscultation data.
Preferably, the processing module amplifies a sound characteristic point audio signal in the organ sound source to generate a retrieved organ sound source, and reduces or masks the sound characteristic point audio signals in the other sound sources to generate adjusted sound sources; the processing module then performs a synthesis procedure on the retrieved organ sound source and the adjusted sound sources, combining them into an output sound data.
Preferably, the organ auscultation apparatus further comprises: a sound source output element connected with the processing module for receiving and outputting the output sound data.
Preferably, the organ auscultation apparatus further comprises: a temporary storage connected with the processing module and the judging module to store the organ audio data, the sound signal, the sound characteristic points, the organ sound source, the organ judgment threshold, the auscultation data, or a combination of two or more of the above.
Preferably, the organ auscultation apparatus further comprises: the selection module is connected with the processing module to receive the sound characteristic points, selects at least one sound characteristic point as the sound characteristic point of the organ sound source, and sends the organ sound source corresponding to the selected sound characteristic point to the processing module.
Preferably, the processing module is connected to an external database to retrieve the organ audio data and the organ determination threshold from the database, and the processing module sends the organ determination threshold to the determining module.
Preferably, the organ auscultation apparatus further comprises: a part judging module connected with the processing module, which captures a plurality of judgment images of the part of the human body to which the sound receiving module is attached. The part judging module judges, from the image contour of each judgment image, the position on the human body where the sound receiving module receives the sound signal, so as to generate a position judgment signal and send it to the processing module, and the processing module selects the part of the organ audio data to compare against the sound characteristic points according to the position judgment signal.
Preferably, the organ auscultation apparatus further comprises: a part judging module connected with the processing module, the part judging module compares a plurality of sound characteristic points in the sound sources according to all the organ audio data, generates a position judging signal according to the comparison result and sends the position judging signal to the processing module, and the processing module selects part of the organ audio data compared with the sound characteristic points according to the position judging signal.
Preferably, the organ auscultation apparatus further comprises: the setting module is connected with the processing module, receives a setting signal to send the setting signal to the processing module, and the processing module selects part of the organ audio data compared with the sound characteristic points according to the setting signal.
Another objective of the present invention is to provide an organ auscultation method in which the sound receiving module receives sound signals from any part of a human body, the processing module processes the sound signals to generate an organ sound source and generates auscultation data from it, and the determining module then determines from the auscultation data whether the data is abnormal.
In order to achieve this other object, the present invention provides an organ auscultation method applied to the organ auscultation device described above, comprising the steps of: the sound receiving module receives a sound signal from any part of a human body; the processing module separates a plurality of sound sources in the sound signal; the processing module compares sound characteristic points in the sound sources according to organ audio data; the processing module captures an organ sound source from the sound characteristic points by using a deep learning algorithm; and the judging module judges whether the organ sound source in the auscultation data does not conform to the organ judgment threshold according to the organ judgment threshold corresponding to the organ sound source; if so, the judging module generates an alarm signal to be sent to an alarm device or a receiving device, and if not, the judging module judges another auscultation data.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
FIG. 1 is a schematic diagram of the arrangement of elements according to the present invention;
FIG. 2 is a schematic diagram of an arrangement of sound source output devices according to the present invention;
FIG. 3 is a schematic diagram of the configuration of the components of the selection module according to the present invention;
FIG. 4 is a schematic diagram of an element configuration of a location determination module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a component configuration relationship of a location determination module according to another embodiment of the present invention;
FIG. 6 is a schematic diagram of the configuration of the components of the setting module according to the present invention;
FIG. 7 is a flow chart of the steps of the present invention.
In the figure:
10, a sound receiving module; 11, a sound signal;
20, a processing module; 21, a sound source; 211, a sound characteristic point; 212, a sound characteristic point audio signal; 22, organ audio data; 23, an organ sound source; 24, auscultation data; 25, an organ judgment threshold; 26, a retrieved organ sound source; 27, an adjusted sound source; 28, output sound data;
30, a judging module; 31, an alarm signal;
40, sound source output element;
50, a temporary storage;
60, a database;
70, a selection module;
80, a part judging module; 801, an image capturing module; 802, a part processing module; 81, a judgment image; 82, a position judgment signal;
90, a setting module; 91, setting a signal;
steps S01 to S05.
Detailed Description
The advantages, features and technical solutions of the present invention will be more readily understood by reference to the exemplary embodiments described in detail below together with the accompanying drawings. The present invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will fully and completely convey the scope of the invention to those skilled in the art, the invention being defined only by the appended claims.
Furthermore, the terms "comprises" and/or "comprising" refer to the presence of stated features, regions, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, regions, integers, steps, operations, elements, components, and/or groups thereof.
For the purpose of facilitating an understanding of the contents of the present invention and the efficacy achieved, reference will now be made in detail to the following examples of embodiments, which are illustrated in the accompanying drawings:
Fig. 1 to fig. 3 are schematic diagrams of the element arrangement of the invention, of the sound source output element, and of the selection module. As shown in the figures, the present invention is mainly composed of a sound receiving module 10, a processing module 20 and a determining module 30. The sound receiving module 10 is embodied as a stethoscope head or another device capable of receiving organ sounds from a human body, and is configured to receive a sound signal 11 from any part of a human body.
The processing module 20 can be a central processing unit or another device capable of processing data. When the processing module 20 is connected to the sound receiving module 10, it receives the sound signal 11 from the sound receiving module 10 and first separates a plurality of sound sources 21 in the sound signal 11 (such as the sound of a specific organ and other noises). After the sound sources 21 are separated, in order to distinguish them effectively and select the sound of the specific organ through their differences, the processing module 20 compares a plurality of sound characteristic points 211 in the sound sources 21 against a plurality of organ audio data 22, where the organ audio data 22 can be obtained through a period of sound learning or from data stored in the temporary storage. After the sound characteristic points 211 are compared, the processing module 20 uses a deep learning algorithm to capture, from the sound characteristic points 211, any sound source 21 corresponding to the organ audio data 22, thereby generating an organ sound source 23; the processing module 20 can then generate interpretable auscultation data 24 from the organ sound source 23.
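The patent does not disclose the specific deep learning algorithm used to capture the organ sound source from the compared sound characteristic points. As a rough illustration of the comparison-and-capture idea only, the following Python sketch substitutes a simple template-matching stand-in: normalized band energies act as stand-ins for the sound characteristic points 211, and cosine similarity against stored organ audio templates stands in for the learned capture step. All function and variable names are illustrative, not from the patent.

```python
import numpy as np

def band_energies(source, n_bands=8):
    """Normalized energy per frequency band: a crude stand-in for the
    sound characteristic points 211 of one separated source."""
    spectrum = np.abs(np.fft.rfft(source)) ** 2
    bands = np.array_split(spectrum, n_bands)
    feats = np.array([band.sum() for band in bands])
    total = feats.sum()
    return feats / total if total > 0 else feats

def match_organ_source(sources, organ_templates):
    """Compare each separated source against stored organ audio
    templates (dict: organ name -> feature vector) and return the
    (organ_name, source_index) of the best cosine-similarity match,
    as a template-matching stand-in for the deep-learning capture."""
    best_organ, best_index, best_sim = None, None, -1.0
    for i, src in enumerate(sources):
        feats = band_energies(src)
        for organ, template in organ_templates.items():
            denom = np.linalg.norm(feats) * np.linalg.norm(template) + 1e-12
            sim = float(np.dot(feats, template) / denom)
            if sim > best_sim:
                best_organ, best_index, best_sim = organ, i, sim
    return best_organ, best_index
```

In this sketch, the stored templates play the role of the organ audio data 22 obtained through prior sound learning.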
The determination module 30 is a component that evaluates data and performs a corresponding action according to the result. When the determination module 30 is connected to the processing module 20, it receives the auscultation data 24 from the processing module 20 and determines whether the auscultation data 24 fails to meet an organ determination threshold 25 that is stored locally or obtained externally (for example, when the organ sound source 23 is the sound source 21 of the human stomach, the organ determination threshold 25 is the threshold value for judging sounds of the human stomach). When the determination result is yes, the determination module 30 generates an alarm signal 31 and sends it to an alarm device or a receiving device, so that the relevant personnel learn of the abnormal condition through that device; when the determination result is no, the determination module 30 continues to determine another auscultation data 24.
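The determination step can be illustrated with a minimal sketch. The record and threshold formats below are assumptions, since the patent does not specify how the auscultation data 24 or the organ determination threshold 25 are encoded; here each threshold is modeled as an allowed (low, high) range for a scalar measurement.

```python
def judge_auscultation(auscultation_data, thresholds):
    """Return an alarm record for every organ measurement that falls
    outside its organ determination threshold; an empty list means
    all auscultation data conform. Field names are illustrative."""
    alarms = []
    for record in auscultation_data:
        low, high = thresholds[record["organ"]]
        if not (low <= record["value"] <= high):
            alarms.append({"organ": record["organ"], "value": record["value"]})
    return alarms
```

A returned alarm record corresponds to the alarm signal 31 that the determination module forwards to an alarm or receiving device.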
Thus, by means of the generated digital auscultation data 24, medical staff or other personnel can learn the condition of the organ in the part currently being auscultated.
Besides converting the auscultated sounds into digital signals for the judgment of the relevant personnel, the organ auscultation device of the present invention can use the processing module 20 to further amplify a sound characteristic point audio signal 212 in the organ sound source 23 to generate a retrieved organ sound source 26. In addition to highlighting the retrieved organ sound source 26 in this way, the processing module 20 can further reduce or mask the sound characteristic point audio signals 212 in the other sound sources 21 to generate adjusted sound sources 27, and then execute a synthesis procedure on the retrieved organ sound source 26 and the adjusted sound sources 27, combining them into an output sound data 28.
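A minimal sketch of this amplify/attenuate-and-synthesize procedure, under the assumption that the separated sources are time-aligned sample arrays; the gain and attenuation factors are illustrative, and setting the attenuation to zero corresponds to masking the other sources entirely.

```python
import numpy as np

def synthesize_output(organ_source, other_sources, gain=4.0, attenuation=0.1):
    """Boost the retrieved organ sound source, attenuate (or, with
    attenuation=0.0, mask) the other separated sources, and sum them
    into one output sound data stream; peak-normalize for playback."""
    boosted = gain * np.asarray(organ_source, dtype=float)
    adjusted = sum(attenuation * np.asarray(s, dtype=float) for s in other_sources)
    out = boosted + adjusted
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

The peak normalization is an added playback safeguard, not a step named in the patent.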
As shown in fig. 2, when a sound source output element 40 (e.g. the earpiece of a stethoscope) is connected to the processing module 20, the sound source output element 40 can receive the output sound data 28 from the processing module 20 and output it for the medical personnel to listen to.
Also, to store the related data of the present invention, a temporary storage 50 can be connected to the processing module 20 and the determining module 30 to store the organ audio data 22, the sound signal 11, the sound feature points 211, the organ sound source 23, the organ determination threshold 25, the auscultation data 24, or a combination thereof.
The organ audio data 22 and the organ determination threshold 25 can be obtained by connecting the processing module 20 to an external database 60 and retrieving the organ audio data 22 and/or the organ determination threshold 25 from the database 60, after which the processing module 20 sends the organ determination threshold 25 to the determining module 30.
As shown in fig. 3, in order to allow the user to manually select one of the determined sound feature points 211 as the organ sound source 23, the present invention further provides a selection module 70 connected to the processing module 20 for receiving the sound feature points 211. The user can select at least one of the sound feature points 211 as the sound feature point 211 of the organ sound source 23 through the selection module 70, or the selection module 70 can automatically select at least one sound feature point 211 according to the related data. The selection module 70 then sends the organ sound source 23 corresponding to the selected sound feature point 211 to the processing module 20, so that the processing module 20 can perform the subsequent processing and determination on that organ sound source 23.
In addition, since the conversion between digital signals or analog signals, or the processing thereof, is conventional, the operations known in the prior art are not repeated in the above-mentioned operations of signal receiving or outputting.
Please refer to fig. 4 and 5, which are schematic diagrams of the element configuration of the part determining module according to one embodiment of the present invention and according to another embodiment. As shown in the figures, the goal is to determine from which part of the human body the sound signal 11 received by the sound receiving module 10 is obtained, so that the processing module 20 can compare against the organ audio data 22 corresponding to that part, or so that the determining module 30 can use the organ determination thresholds 25 corresponding to that part. As shown in fig. 4, in one embodiment a part determining module 80 is connected to the processing module 20 to capture a plurality of determination images 81 of the part of the human body to which the sound receiving module 10 is attached. The part determining module 80 determines, from the image contour of each determination image 81, the position on the human body where the sound receiving module 10 receives the sound signal 11, generates a position determination signal 82, and sends it to the processing module 20. Specifically, the part determining module 80 may include at least one image capturing module 801 and a part processing module 802. The image capturing module 801 is disposed at a position from which the relevant part of the human body can be clearly captured, so that when a user uses the sound receiving module 10 to receive the sound signal 11, the image capturing module 801 acquires the determination images 81 at the same time or nearly the same time. The image capturing module 801 is connected with the part processing module 802 to transmit the determination images 81; after receiving them, the part processing module 802 determines the position of the sound receiving module 10 on the human body from the image contour of each determination image 81, generates the position determination signal 82, and sends it to the processing module 20.
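The patent leaves open how an image contour is mapped to a body position. As a toy stand-in, the sketch below assumes the part processing module 802 has already reduced a determination image 81 to the centroid of the stethoscope head's contour, and maps that centroid to a labeled body region via a hand-made region map; all names and coordinates are illustrative assumptions.

```python
def locate_body_part(contour_centroid, region_map):
    """Map the centroid of the stethoscope head's image contour to a
    labeled body region. region_map maps region name -> (x0, y0, x1, y1)
    bounding box in the determination image; returns None when the
    centroid falls in no known region."""
    x, y = contour_centroid
    for region, (x0, y0, x1, y1) in region_map.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return region
    return None
```

The returned region name plays the role of the position determination signal 82 handed to the processing module 20.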
As shown in fig. 5, in another embodiment of the present invention, the part determining module 80 can be connected to the processing module 20. The part determining module 80 compares the sound feature points 211 in the sound sources 21 against all of the organ audio data 22, generates the position determination signal 82 according to the comparison result, and sends it to the processing module 20. In this way, after comparing the sound feature points 211, the part determining module 80 generates the position determination signal 82 from the determination data of the relevant organ sound feature points, so that the processing module 20 can select the part of the organ audio data 22 to compare against the sound feature points 211 according to the position determination signal 82.
Please refer to fig. 6, which is a schematic diagram of the configuration of the components of the setting module according to the present invention. As shown in the figure, the present invention further includes a setting module 90 connected to the processing module 20. The setting module 90 receives a setting signal 91 from the outside or from a setting interface and transmits it to the processing module 20, so that the processing module 20 selects the part of the organ audio data 22 to compare against the sound feature points 211 according to the setting signal 91. In this way, the user can directly select the part to be auscultated through the setting module 90, without needing the part determining module 80.
Please refer to fig. 7, which is a flowchart illustrating the steps of the present invention. As shown in the drawing, the present invention achieves the above-mentioned function of effectively listening to and capturing an organ sound source through the following steps:
S01: the sound receiving module receives a sound signal from any part of a human body;
S02: the processing module separates a plurality of sound sources in the sound signal;
S03: the processing module compares the sound characteristic points in the sound sources according to the organ audio data;
S04: the processing module captures an organ sound source from the sound characteristic points by using a deep learning algorithm;
S05: the judgment module judges whether the organ sound source in the auscultation data does not accord with the organ judgment threshold according to the organ judgment threshold corresponding to the organ sound source; if so, the judgment module generates an alarm signal to be sent to an alarm device or a receiving device, and if not, the judgment module judges another auscultation data.
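Steps S01 to S05 can be sketched as one pass through injected stage functions; since the patent fixes none of the stage implementations, they are passed in as parameters here, and the trivial lambdas in the usage test below are placeholders only.

```python
def auscultation_pipeline(sound_signal, separate, compare, capture, judge):
    """One pass through steps S01-S05: S01 is the caller handing in
    sound_signal; the remaining stages are injected callables because
    the patent does not fix their implementations."""
    sources = separate(sound_signal)        # S02: split into sound sources
    feature_points = compare(sources)       # S03: compare characteristic points
    organ_source = capture(feature_points)  # S04: capture the organ sound source
    return judge(organ_source)              # S05: threshold judgment, alarm or None
```

Returning None models the "if not" branch in which the device simply moves on to the next auscultation data.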
Therefore, the invention can effectively listen to the organ sound source and automatically judge whether the organ sound source is abnormal or not.
The preferred embodiments disclosed herein are illustrative, and it will be readily apparent to those skilled in the art that various changes and modifications can be made without departing from the scope of the invention.
In summary, the present invention exhibits, in its purpose, means and efficacy, technical features that differ from conventional techniques, and is highly practical while meeting the requirements for an invention patent. The applicant therefore respectfully requests that the examination board grant the patent at an early date, to benefit society.

Claims (10)

1. An organ auscultation device, comprising:
the sound receiving module is arranged to receive sound signals from any part of a human body;
the processing module is connected with the sound receiving module to receive the sound signal, separates a plurality of sound sources in the sound signal, compares a plurality of sound characteristic points in the plurality of sound sources according to a plurality of organ audio data, captures at least one organ sound source from the sound characteristic points by using a deep learning algorithm, and generates auscultation data according to the organ sound source; and
the judgment module is connected with the processing module to receive the auscultation data, and judges whether the organ sound source in the auscultation data does not conform to the organ judgment threshold according to the organ judgment threshold corresponding to the organ sound source; the judgment module generates a warning signal to be sent to a warning device or a receiving device if the organ sound source in the auscultation data does not conform to the organ judgment threshold, and judges another auscultation data if the organ sound source in the auscultation data conforms to the organ judgment threshold.
2. The organ auscultation apparatus of claim 1, wherein the processing module amplifies the sound feature point audio signals within the organ audio source to generate a retrieved organ audio source and reduces or masks the sound feature point audio signals within other audio sources to generate an adjusted audio source, and the processing module performs a synthesis procedure for the retrieved organ audio source and the adjusted audio source such that the retrieved organ audio source and the adjusted audio source are combined to generate output sound data.
3. The organ auscultation apparatus of claim 2, further comprising:
and the sound source output element is connected with the processing module to receive and output the output sound data.
4. The organ auscultation apparatus of claim 1, further comprising:
a temporary storage connected to the processing module and the judging module for storing the organ audio data, the sound signal, the sound feature point, the organ sound source, the organ judgment threshold, the auscultation data, or a combination of two or more thereof.
5. The organ auscultation apparatus of claim 1, further comprising:
a selection module connected to the processing module for receiving the sound feature points, wherein the selection module selects at least one of the sound feature points as the sound feature point of the organ sound source, and transmits the organ sound source corresponding to the selected sound feature point to the processing module.
6. The organ auscultation apparatus of claim 1, wherein the processing module is coupled to an external database to retrieve the organ audio data and the organ determination threshold from the database, and wherein the processing module sends the organ determination threshold to the determination module.
7. The organ auscultation apparatus of claim 1, further comprising:
the part judging module is connected with the processing module and captures a plurality of judging images from the part of the sound receiving module attached to the human body, the part judging module judges the position of the part of the sound receiving module, which receives the sound signal, on the human body according to the image contour of each judging image so as to generate a position judging signal and send the position judging signal to the processing module, and the processing module selects part of the organ audio data compared with the sound characteristic points according to the position judging signal.
8. The organ auscultation apparatus of claim 1, further comprising:
the part judging module is connected with the processing module, compares the plurality of sound characteristic points in the sound source according to all the organ audio data, generates a position judging signal according to a comparison result and sends the position judging signal to the processing module, and the processing module selects part of the organ audio data compared with the plurality of sound characteristic points according to the position judging signal.
9. The organ auscultation apparatus of claim 1, further comprising:
the setting module is connected with the processing module, receives a setting signal to send the setting signal to the processing module, and the processing module selects part of the organ audio data compared with the sound characteristic points according to the setting signal.
10. An organ auscultation method applied to the organ auscultation apparatus according to any one of claims 1 to 9, comprising the steps of:
the sound receiving module receives a sound signal from any part of a human body;
the processing module separates a plurality of sound sources in the sound signal;
the processing module compares sound characteristic points in the sound sources according to organ audio data;
the processing module captures an organ sound source in the sound characteristic points by using a deep learning algorithm; and
the judgment module judges whether the organ sound source in the auscultation data does not conform to the organ judgment threshold according to the organ judgment threshold corresponding to the organ sound source, if so, the judgment module generates an alarm signal to be sent to an alarm device or a receiving device, and if not, the judgment module judges another auscultation data.
CN202110288344.7A 2020-10-05 2021-03-18 Organ auscultation device and method thereof Pending CN114376599A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW109134472A TW202214187A (en) 2020-10-05 2020-10-05 Organs auscultation device and method thereof
TW109134472 2020-10-05

Publications (1)

Publication Number Publication Date
CN114376599A true CN114376599A (en) 2022-04-22

Family

ID=81195027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110288344.7A Pending CN114376599A (en) 2020-10-05 2021-03-18 Organ auscultation device and method thereof

Country Status (2)

Country Link
CN (1) CN114376599A (en)
TW (1) TW202214187A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102589770A (en) * 2011-01-11 2012-07-18 成功大学 Motor torque and rotational speed estimation system and estimation method
CN105534544A (en) * 2015-11-26 2016-05-04 北京航空航天大学 Intelligent heart and lung auscultation apparatus
CN105631224A (en) * 2016-01-05 2016-06-01 惠州Tcl移动通信有限公司 Health monitoring method, mobile terminal and health monitoring system
CN107450883A (en) * 2017-07-19 2017-12-08 维沃移动通信有限公司 A kind of audio data processing method, device and mobile terminal
CN109077751A (en) * 2018-06-28 2018-12-25 上海掌门科技有限公司 For auscultating the method and apparatus of target auscultation


Also Published As

Publication number Publication date
TW202214187A (en) 2022-04-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination