WO2009053913A1 - Device and method for identifying auscultation location - Google Patents
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B7/00—Instruments for auscultation
- A61B7/02—Stethoscopes
- A61B7/04—Electric stethoscopes
Abstract
A method of automatically identifying at least one location for auscultation of an object by use of an auscultation device (200) comprising a sensor (201) for receiving auscultatory signals from the object. The method comprises the following steps performed by the auscultation device (200): (a) receiving (407) an auscultatory signal sensed by the sensor (201); (b) comparing (413) the auscultatory signal and at least one template signal to perform pattern matching, each template signal being related to a predetermined location on the object; and (c) based on the result of the pattern matching, informing (419; 421) a user of the auscultation device (200) about the location of the sensor (201).
Description
DEVICE AND METHOD FOR IDENTIFYING AUSCULTATION LOCATION
TECHNICAL FIELD
The present invention relates to a method of auscultating an object, such as a human body. The invention also relates to a corresponding auscultation device for performing the auscultation and to a computer program product for carrying out the steps of the method.
BACKGROUND OF THE INVENTION
Auscultation is the medical term for listening to the internal sounds of parts of the body, especially the heart, lungs, and abdominal organs. Usually, auscultation is done using a stethoscope. A physician learns this skill during medical training, but mastering it also requires substantial clinical experience.
It is important to identify the correct auscultation locations, i.e. where to place the stethoscope to listen to the respective sounds. For example, heart sounds are commonly heard from four locations: (1) aortic area, (2) pulmonic area, (3) left sternal edge (tricuspid area) and (4) apex (mitral area), as shown in Figure 1. The sounds are heard well at these locations because the sound intensity there is high.
US patent application publication 2004/0092846, published on 13 May 2004 and entitled "Graphical user interface and voice-guided protocol for an auscultatory diagnostic decision support system", relates to an apparatus and method for determining an auscultatory diagnostic decision. Auscultation is a difficult procedure, particularly because a stethoscope transfers only a small fraction of the acoustic signal at the chest surface to the listener's ears and filters the cardiac acoustic signal in the process. The system assists listeners by implementing a graphical user interface and voice-guided protocol to record data and analyse the results for the presence of heart sounds and murmurs. The results are identified in terms of standard clinical auscultatory findings, which may then be used to make diagnostic and referral decisions.
However, the above-identified system is of rather limited use.
The method involves acquiring signals from different locations on the chest. A visual view of a human chest is shown on a display, with positional markers pointing out the different locations from which the signals need to be acquired.
In the above-mentioned publication this kind of arrangement is referred to as a predefined protocol for acquiring signals from a plurality of auscultatory locations. That method has the problem that it is very difficult to identify the exact auscultatory locations from a template/picture on a display, and it is equally difficult to place positional markers on the acquired image.
Thus, there is a need for a method of automatically identifying the location for auscultation.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method of automatically identifying at least one location for auscultation of an object by use of an auscultation device comprising a sensor for receiving auscultatory signals from the object, the method comprising the following steps performed by the auscultation device:
- receiving an auscultatory signal sensed by the sensor;
- comparing the auscultatory signal and at least one template signal to perform pattern matching, each template signal being related to a given location on the object; and
- based on the result of the pattern matching, informing a user of the auscultation device about the location of the sensor.
Thus, the present invention provides a method for non-physicians to perform auscultation by providing automatic means to identify the auscultation location. The method can automatically guide the user to the desired auscultation location. The method is based on the signals coming from the object, such as a human chest, and uses pattern matching techniques to identify whether the present location is the location the physician is looking for; otherwise it gives navigation support to assist the user in moving the sensor to the proper location. This method has the further advantage of working on any person's body and of being robust in identifying the proper auscultation location. The user need not know the exact locations of the auscultation regions.
According to a second aspect of the invention there is provided a computer program product comprising instructions for implementing the method according to the first aspect of the invention when loaded and run on computer means of the auscultation device.
According to a third aspect of the invention there is provided an auscultation device capable of performing auscultation of an object, the auscultation device comprising a sensor for receiving auscultatory signals from the object, and further comprising:
- a processing unit for processing at least one of the following: the auscultatory signals and input from a user of the auscultation device;
- a database for storing at least one template signal, each template signal being related to a given location on the object;
- a template matching unit for comparing the auscultatory signal and at least one template signal to perform pattern matching; and
- an output unit for informing the user of the auscultation device about the location of the sensor.
BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the invention will become apparent from the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which:
- Figure 1 shows an upper part of a human body illustrating the four primary locations for auscultation;
- Figure 2 is a schematic view of a simplified stethoscope;
- Figure 3 is a block diagram of the electronics part of the stethoscope of Figure 2; and
- Figure 4 is a flow chart depicting the method in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Figure 2 shows a schematic view of a simplified auscultation device, which in this example is a stethoscope 200. The stethoscope includes a cardiac acoustic sensor 201, a display 203, which in this example is a liquid crystal display (LCD), earpieces 205 and an electronics part 207. During operation the stethoscope 200 is used to detect heart sounds so that the user can hear these sounds via the provided earpieces 205.
Figure 3 shows the structure of the electronics part 207 in more detail. The electronics part comprises a user input unit 301 that is arranged to register the user input. The user input unit 301 can be a mode selection button or a voice input system for selecting the desired location. The user input unit 301 is connected to an input receiver 303, also known as a data processing unit, that is arranged to analyse and process the user input and/or signals from the cardiac sensor 201. Based on the analysis, the result is then fed either to a computation unit 305 or to a rules unit 310. The computation unit 305 further comprises a database 307 and a template matching unit 309. The rules unit 310 is also arranged to receive data from the computation unit 305. The rules unit 310 is further connected to a user output unit 312 for outputting information to the user. The electronics part 207 further contains a buffer 311 for saving measurement signals from the cardiac sensor 201; the buffer is thus connected to the cardiac sensor 201. The buffer 311 is further connected to a segmentation unit 313, which is in turn connected to the computation unit 305. The purpose of the segmentation unit 313 will be explained later.
The user of the stethoscope 200 may provide the input on the auscultation location in one of the two possible ways:
1. Selecting a choice from 1 to 4 for the different locations; this can be realised using a mode button or by taking advantage of a voice recognition system if voice input is provided; or
2. Placing the acoustic sensor 201 of the stethoscope 200 at a certain location on the chest, thereby providing a sound signal to the stethoscope 200. It is to be noted that the four different location alternatives are only given as an example; depending on the accuracy and/or application environment, fewer or more locations can be provided. In this example the four locations correspond to 1) aortic area, 2) pulmonic area, 3) left sternal edge and 4) apex.
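The routing between the two input paths can be illustrated with a small sketch. Everything here is hypothetical (the function and dictionary names are not from the patent, and only the aortic placement hint is actually stated in the description; the other hints are placeholders):

```python
# Illustrative mapping of the choices 1-4 to the auscultation areas.
AREA_CHOICES = {1: "aortic area", 2: "pulmonic area",
                3: "left sternal edge (tricuspid area)", 4: "apex (mitral area)"}

# Only the aortic instruction is given in the description; the rest
# would be filled in with similar non-medical placement language.
PLACEMENT_HINTS = {1: "place the sensor just below the neckline on the right side"}

def route_input(user_input):
    """Mimic the input receiver 303: a numeric choice is routed to the
    rules unit, anything else is treated as a sound signal and routed to
    the computation unit."""
    if isinstance(user_input, int) and user_input in AREA_CHOICES:
        hint = PLACEMENT_HINTS.get(user_input, "no hint recorded for this area")
        return ("rules_unit", hint)
    return ("computation_unit", user_input)

assert route_input(1)[0] == "rules_unit"
assert route_input([0.1, 0.2])[0] == "computation_unit"
```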
The input receiver unit 303 processes the user input; if it is a choice, it sends it to the rules unit 310, which generates appropriate instructions (in non-medical language) for the user about the auscultation location using audio. For example, for choice 1 (aortic area) it tells the user to place the cardiac sensor 201 just below the neckline on the right side. If the input receiver unit 303 detects a sound signal, it sends it to the computation unit 305. The computation unit 305 comprises two blocks: the database 307 and the template matching unit 309. The database 307 contains template heart sounds, both normal and diseased sounds, of the four auscultation areas.
These will be recorded a priori or generated synthetically using heart models.
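The description does not specify how the database 307 is organised internally; a minimal sketch, assuming a simple mapping from auscultation area to single-cycle reference recordings (all class and method names are illustrative, not from the patent):

```python
from dataclasses import dataclass, field

# The four auscultation areas named in the description.
LOCATIONS = ("aortic", "pulmonic", "tricuspid", "mitral")

@dataclass
class Template:
    location: str   # one of LOCATIONS
    condition: str  # "normal" or a disease label
    signal: list    # one heart cycle of samples, pre-recorded or synthetic

@dataclass
class TemplateDB:
    templates: dict = field(default_factory=lambda: {loc: [] for loc in LOCATIONS})

    def add(self, t: Template) -> None:
        self.templates[t.location].append(t)

    def for_location(self, location: str) -> list:
        # These are the templates loaded into the matching unit in step 403.
        return self.templates[location]

db = TemplateDB()
db.add(Template("aortic", "normal", [0.0, 0.1, 0.3, 0.1]))
assert len(db.for_location("aortic")) == 1
```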
In this example the heart sounds that are stored in the database 307 are single heart cycle sounds. An embodiment of the invention will now be described in more detail with reference to the block diagram of Figure 3 and to the flow chart of Figure 4. The user initially communicates in step 401 to the stethoscope 200 the auscultation location he is looking for. Then the stethoscope 200 loads in step 403 the template of the heart sound of that location from the database 307 into the template matching unit 309. Once the user places in step 405 the cardiac sensor 201 on the body, the sensor 201 records in step 407 the signal and stores it in the buffer 311.
The recorded signal is then fed to the segmentation unit 313 for extraction of one heart cycle in step 409. The segmentation unit 313 first extracts wavelet and energy based features from the signal and then uses peak detection algorithms to extract one heart cycle. This single heart cycle is next aligned in step 411 with the template heart signal located in the template matching unit 309 using a dynamic time warping (DTW) algorithm. This is required as the recorded signal and the template signal differ in length.
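The patent names wavelet/energy features with peak detection for step 409 and DTW for step 411, but gives no implementation details. The sketch below substitutes a plain smoothed-energy envelope with SciPy's peak detector for the segmentation, and a textbook O(NM) dynamic-programming DTW cost for the alignment; it is an assumption-laden illustration, not the patented implementation:

```python
import numpy as np
from scipy.signal import find_peaks

def extract_one_cycle(x, fs):
    """Roughly isolate one heart cycle: smooth an energy envelope and cut
    between two consecutive dominant (S1-like) peaks. This is a simplified
    stand-in for the wavelet/energy features named in the description."""
    win = int(0.05 * fs)  # 50 ms smoothing window
    env = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    peaks, _ = find_peaks(env, distance=int(0.4 * fs), height=0.3 * env.max())
    if len(peaks) < 2:
        return x  # not enough structure found; fall back to the whole recording
    return x[peaks[0]:peaks[1]]

def dtw_align(a, b):
    """Classic dynamic-programming DTW cost between two 1-D signals of
    possibly different lengths; lower cost means better alignment."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A signal aligned against itself has zero DTW cost; a shifted copy does not.
t = np.sin(np.linspace(0, 2 * np.pi, 50))
assert dtw_align(t, t) == 0.0
assert dtw_align(t, t + 1.0) > 0.0
```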
Then the coherence function between the recorded signal and the template signal in the template matching unit 309 is computed in step 413. This gives the correlation between the two signals at various frequencies. Since we are interested in heart sounds, which have a frequency range of 0-500 Hz, all the correlation values in that range are summed. A threshold value corresponding to the coherence function for the chosen location is next determined in step 415. The obtained value of the coherence function is then compared in step 417 with the threshold value of the chosen auscultation location. If it is greater than the threshold, the two signals match and feedback on the auscultation location is given to the user in step 419. On the other hand, if the obtained value of the coherence function is below the threshold, then feedback is given to the user in step 421. This feedback includes instructions that help the user navigate to the desired location on the body; this is possible since the current location of the stethoscope is now known. The user may then reposition the cardiac sensor 201 on the body, and the procedure continues from step 405. This iterative procedure continues until the value of the coherence function exceeds the threshold. Once the threshold is exceeded, the user can record the signal at that location and send it to the physician/specialist for diagnosis.
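Steps 413 through 417 can be sketched with SciPy's magnitude-squared coherence estimate. The sampling rate, `nperseg`, and threshold values below are assumptions, and the two signals are taken to be already brought to equal length by the alignment step:

```python
import numpy as np
from scipy.signal import coherence

FS = 2000           # assumed sampling rate; heart sounds occupy roughly 0-500 Hz
BAND = (0.0, 500.0)

def band_coherence(recorded, template, fs=FS):
    """Step 413: magnitude-squared coherence between the recorded cycle
    and the template, summed over the 0-500 Hz band of interest."""
    f, cxy = coherence(recorded, template, fs=fs, nperseg=256)
    mask = (f >= BAND[0]) & (f <= BAND[1])
    return cxy[mask].sum()

def matches(recorded, template, threshold, fs=FS):
    """Steps 415-417: compare the band-summed coherence against the
    location-specific threshold."""
    return band_coherence(recorded, template, fs) > threshold

# A signal is perfectly coherent with itself (every frequency bin is 1),
# while independent noise yields a much smaller band sum.
sig = np.random.default_rng(0).standard_normal(4096)
noise = np.random.default_rng(1).standard_normal(4096)
assert band_coherence(sig, sig) > band_coherence(sig, noise)
```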
The feedback information can be provided to the user on the display 203, which in this example is located on the cardiac sensor 201. This is especially advantageous, since the user can see the sensor 201 and can thus reposition it according to the provided instructions. For instance, the display can show a steady green light once the correct auscultation location is reached; otherwise a blinking red light is displayed. The effect can be further improved by providing audio instructions on how to reposition the sensor 201 to the desired position. These audio instructions can be provided, for instance, by playing words such as "right", "left", "up" and/or "down" through the earpieces 205.
In the above example only one signal, corresponding to the desired location, i.e. in this example the aortic area, was loaded from the database 307 to the template matching unit 309. But to improve the precision of the feedback information, multiple different template signals corresponding to the desired search area could equally be loaded into the template matching unit 309. In this case the coherence function can be calculated between the signal from the sensor 201 and several template signals. This would allow more precise feedback information.
In another embodiment, the user does not provide any indication to the stethoscope 200 of the desired location. In this case step 401 would not be performed. Furthermore, in step 403, template signals corresponding to all four locations are advantageously loaded into the template matching unit 309. In this example there would be four threshold values, one corresponding to each of the four locations. Advantageously, the user would be instructed to position the sensor on all four locations. Once the stethoscope 200 identifies a particular location by comparing against the templates, it tags the recorded signal with that location information. After this, it gives feedback to the user to move to the next location. The process then continues by comparing the acquired heart signal against the remaining three location templates and recording and tagging each signal with the corresponding location information. Of course it is equally possible to instruct the user to position the sensor 201 on fewer than four locations.
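This untagged-location mode amounts to matching one recording against every remaining location template and tagging it with the best above-threshold match. A hypothetical sketch (in practice the similarity function would be the band-summed coherence of step 413; here a toy function stands in so the example is self-contained):

```python
def identify_and_tag(recording, templates, thresholds, score):
    """Compare one recording against every remaining location template
    and return (location, tagged_record) for the best match whose score
    exceeds that location's threshold, or (None, None) if none matches."""
    best_loc, best_score = None, float("-inf")
    for loc, tpl in templates.items():
        s = score(recording, tpl)
        if s > thresholds[loc] and s > best_score:
            best_loc, best_score = loc, s
    if best_loc is None:
        return None, None
    # Tag the recorded signal with the identified location (the step the
    # description calls "tagging with that location information").
    return best_loc, {"location": best_loc, "signal": recording}

# Toy similarity: negative absolute difference of the signal means.
score = lambda a, b: -abs(sum(a) / len(a) - sum(b) / len(b))
templates = {"aortic": [1.0, 1.0], "mitral": [0.0, 0.0]}
thresholds = {"aortic": -0.5, "mitral": -0.5}
loc, rec = identify_and_tag([0.9, 1.1], templates, thresholds, score)
assert loc == "aortic" and rec["location"] == "aortic"
```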
The stethoscope 200 in accordance with the present invention is especially useful for non-physicians to perform auscultation themselves. This invention is applicable to a home-use scenario of an intelligent stethoscope. The present invention is also useful for remote monitoring and telemedicine applications, where the auscultation can be carried out by the patients themselves, who can then send the heart sounds to a remote specialist. The invention equally relates to a computer program product that is able to implement any of the method steps of the embodiments of the invention when loaded and run on computer means of the stethoscope 200.
The computer program may be stored/distributed on a suitable medium supplied together with or as a part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. The invention equally relates to an integrated circuit that is arranged to perform any of the method steps in accordance with the embodiments of the invention.
Above some embodiments of the invention were described. For the task of automatically analysing the heart signals it is important to capture the sound signals from appropriate locations. The present invention relates to a method for appropriately identifying the plurality of locations on any person using signal processing and pattern matching techniques and guiding the physicians/users to the locations using voice/visual display of navigation commands. While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not restricted to the disclosed embodiments.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims. For instance the display 203 does not have to be located in the cardiac sensor 201 , but it can equally be located elsewhere in the stethoscope or it can even be a physically separate unit. In this case there could be a wireless or wired communication link between the display 203 and the stethoscope 200.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.
Claims
1. A method of automatically identifying at least one location for auscultation of an object by use of an auscultation device (200) comprising a sensor (201) for receiving auscultatory signals from the object, the method comprising the following steps performed by the auscultation device (200):
- receiving (407) an auscultatory signal sensed by the sensor (201);
- comparing (413) the auscultatory signal and at least one template signal to perform pattern matching, each template signal being related to a given location on the object; and
- based on the result of the pattern matching, informing (419; 421) a user of the auscultation device (200) about the location of the sensor (201).
2. The method according to claim 1, wherein the method further comprises the auscultation device (200) receiving (401) an indication from the user about the desired auscultation location.
3. The method according to any of the preceding claims, wherein the informing comprises providing audio and/or video signals to the user.
4. The method according to any of the preceding claims, wherein the object is a heart and the method further comprises extracting (409) a portion of the signal corresponding to one heart cycle from the auscultatory signal and aligning (411) that portion with the at least one template signal.
5. The method according to claim 4, wherein the alignment is done by using a dynamic time warping algorithm.
6. The method according to any of the preceding claims, wherein the method further comprises computing (413) a coherence function between the auscultatory signal and the at least one template signal.
7. The method according to claim 6, wherein the method further comprises defining (415) a threshold value for the coherence function for a given location.
8. The method according to claim 7, wherein the method further comprises determining (417) whether a value of the computed coherence function exceeds the threshold value and, if so, determining that a correct auscultation location has been found.
9. A computer program product comprising instructions for implementing the steps of a method according to any one of claims 1 through 8 when loaded and run on computer means of the auscultation device (200).
10. An auscultation device (200) capable of performing auscultation of an object, the auscultation device (200) comprising a sensor (201) for receiving auscultatory signals from the object, and further comprising:
- a processing unit (303) for processing at least one of the following: the auscultatory signals and input from a user of the auscultation device (200);
- a database (307) for storing at least one template signal, each template signal being related to a given location on the object;
- a template matching unit (309) for comparing the auscultatory signal and the at least one template signal to perform pattern matching; and
- an output unit (312) for informing the user of the auscultation device (200) about the location of the sensor (201).
11. The auscultation device (200) of claim 10, further comprising a rules unit (310) for providing feedback information to be forwarded to the output unit (312), the feedback information being based on the comparison results of the template matching unit (309).
12. The auscultation device (200) of any one of claims 10-11, further comprising a buffer memory (311) connected to the sensor (201) for storing the auscultatory signal.
13. The auscultation device (200) of any one of claims 10-12, further comprising a segmentation unit (313) for extracting a portion of the signal corresponding to one heart cycle of the auscultatory signal, when the object is a heart.
14. The auscultation device (200) of any of the claims 10-13, wherein the database (307) is arranged to load the at least one template signal to the template matching unit (309).
15. The auscultation device (200) of any of the claims 10-14, further comprising a user input unit (301) for receiving user input, the user input unit comprising at least one of the following: a mode selection button and a voice input system, wherein the user input unit (301) is arranged to transfer the user input to the processing unit (303).
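Claim 5 specifies that the heart-cycle segment is aligned with the template using a dynamic time warping algorithm. The following is a textbook DTW sketch for one-dimensional sequences, included for illustration only; the function name, the absolute-difference cost, and the lack of any warping-window constraint are assumptions, not details taken from the patent.

```python
import numpy as np

def dtw_align(a, b):
    """Classic dynamic time warping between two 1-D sequences.
    Returns the minimal cumulative absolute-difference cost and the
    warping path as a list of (index_in_a, index_in_b) pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)  # cumulative-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin((D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return float(D[n, m]), path[::-1]
```

The warping path pairs each sample of the sensed heart cycle with a sample of the template, so that signals of slightly different durations (e.g. different heart rates) can still be compared point by point.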
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP07301482.1 | 2007-10-22 | | |
EP07301482 | 2007-10-22 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009053913A1 (en) | 2009-04-30 |
Family
ID=40351785
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB2008/054356 WO2009053913A1 (en) | 2007-10-22 | 2008-10-22 | Device and method for identifying auscultation location |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2009053913A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010029467A1 (en) * | 2008-09-10 | 2010-03-18 | Koninklijke Philips Electronics N.V. | Method and system for locating a sound source |
US11284827B2 (en) | 2017-10-21 | 2022-03-29 | Ausculsciences, Inc. | Medical decision support system |
WO2022068650A1 (en) * | 2020-09-29 | 2022-04-07 | 华为技术有限公司 | Auscultation position indication method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020183642A1 (en) * | 1998-10-14 | 2002-12-05 | Murphy Raymond L.H. | Method and apparatus for displaying body sounds and performing diagnosis based on body sound analysis |
WO2003011132A2 (en) * | 2001-07-31 | 2003-02-13 | Bluescope Medical Technologies Ltd | Cardio-pulmonary monitoring device |
EP1495721A2 (en) * | 2003-07-08 | 2005-01-12 | Konica Minolta Medical & Graphic, Inc. | Biological-sound data processing system, program, and recording medium |
US20050222515A1 (en) * | 2004-02-23 | 2005-10-06 | Biosignetics Corporation | Cardiovascular sound signature: method, process and format |
US20070055151A1 (en) * | 2005-01-20 | 2007-03-08 | Shertukde Hemchandra M | Apparatus and methods for acoustic diagnosis |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 08841739 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | EP: PCT application non-entry in European phase |
Ref document number: 08841739 Country of ref document: EP Kind code of ref document: A1 |