WO2023074823A1 - Heart sound acquisition device, heart sound acquisition system, heart sound acquisition method, and program - Google Patents


Info

Publication number
WO2023074823A1
WO2023074823A1 (PCT/JP2022/040255, JP2022040255W)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
auscultation
user
heart sound
sound acquisition
Prior art date
Application number
PCT/JP2022/040255
Other languages
French (fr)
Japanese (ja)
Inventor
貴之 内田
知紀 八田
亮 市川
Original Assignee
テルモ株式会社 (Terumo Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by テルモ株式会社 (Terumo Corporation)
Publication of WO2023074823A1 publication Critical patent/WO2023074823A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/02 Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B7/00 Instruments for auscultation
    • A61B7/02 Stethoscopes
    • A61B7/04 Electric stethoscopes

Definitions

  • the present disclosure relates to a heart sound acquisition device, a heart sound acquisition system, a heart sound acquisition method, and a program.
  • it is desirable that a heart sound sensor that measures heart sounds be worn at the same position every time, because the output waveform differs depending on the position where the sensor is worn.
  • in addition, since heart sounds are to be measured at home, it is desirable that the heart sound sensor be of a form that is easy for the patient to wear.
  • an electronic listening system includes a position acquisition unit that acquires position information indicating the position of a heart sound sensor abutted against the body to auscultate a patient's heart sounds (see, for example, Patent Reference 1).
  • in this system, the position on the body contacted by the heart sound sensor is recorded at the time of examination by a doctor, and when the user acquires heart sounds from the next time onward, guidance information for guiding the heart sound sensor can be provided to the user based on the recorded position information.
  • Patent Document 1 proposes a method of using a laser, a projector, a head-mounted display, or the like to guide the heart sound sensor to the correct listening position.
  • a third party such as a doctor visually determines the position of the heart sound sensor.
  • an object of the present disclosure, which focuses on these points, is to provide a heart sound acquisition device, a heart sound acquisition system, a heart sound acquisition method, and a program that enable a user to easily position an auscultation unit at a predetermined position to acquire heart sounds.
  • a heart sound acquisition device as one aspect of the present disclosure includes: an auscultation unit configured to acquire heart sounds of a user; an imaging unit that captures an image of the user's body including feature points; a storage unit that stores the relative position, with respect to the feature points, of the position where the auscultation unit should be arranged; a display unit that displays the image captured by the imaging unit; and a control unit that recognizes the positions of the feature points of the user from the image captured by the imaging unit, calculates the position where the auscultation unit should be arranged based on the positions of the feature points and the relative position stored in the storage unit, and guides the auscultation unit, on the image displayed on the display unit, to the position where the auscultation unit should be arranged.
  • the auscultation unit includes a marker, and the control unit recognizes the position of the auscultation unit on the image by detecting the marker from the image captured by the imaging unit.
  • the control unit is configured to be able to recognize at least one of the distance between the imaging unit and the body of the user and the orientation of the body of the user, based on the image captured by the imaging unit.
  • the heart sound acquisition device further includes a communication unit configured to be able to communicate with a server, and the control unit is configured to acquire, from the server, the relative position to be stored in the storage unit.
  • the heart sound acquisition device further includes a speaker, and the control unit uses the sound from the speaker when guiding the auscultation unit to a position where the auscultation unit should be placed.
  • when guiding the auscultation unit to the position where the auscultation unit should be placed, the control unit superimposes at least one of characters and graphics on the image captured by the imaging unit and displays the result on the display unit.
  • the auscultation unit includes a pressure sensor that detects the pressure between the auscultation unit and the body of the user, and the control unit determines, based on the pressure detected by the pressure sensor, a contact state of the auscultation unit with respect to the body of the user.
  • the heart sound acquisition device includes a notification unit that notifies that the auscultation unit is positioned at a position where the auscultation unit should be placed.
  • the heart sound acquisition device includes a conversion unit that converts the heart sound acquired by the auscultation unit into an electrical signal.
  • the control unit determines a contact state of the auscultation unit with respect to the body of the user based on the waveform of the electrical signal into which the heart sounds are converted by the conversion unit.
  • the control unit is configured to analyze the waveform of the electrical signal into which the heart sounds are converted by the conversion unit.
  • the heart sound acquisition device further includes a physical information acquisition unit configured to be capable of acquiring physical information other than the heart sounds of the user, and the control unit temporally synchronizes the waveform of the electrical signal into which the heart sounds are converted by the conversion unit with the waveform of the physical information acquired by the physical information acquisition unit.
  • the physical information includes at least one of an electrocardiogram and a pulse wave
  • the control unit calculates a hemodynamic parameter based on the heart sounds of the user acquired from the auscultation unit and the physical information of the user acquired from the physical information acquisition unit.
  • a heart sound acquisition system as one aspect of the present disclosure includes: a server that stores the relative position, with respect to a feature point of a user, of the position where an auscultation unit configured to acquire the user's heart sounds should be arranged; and a heart sound acquisition device including the auscultation unit, an imaging unit that captures an image of the user's body including the feature point, a display unit that displays the image captured by the imaging unit, a communication unit that acquires the relative position from the server, and a control unit that recognizes the position of the feature point of the user from the image captured by the imaging unit, calculates the position where the auscultation unit should be arranged based on the position of the feature point and the relative position acquired from the server, and guides the auscultation unit, on the image displayed on the display unit, to the position where the auscultation unit should be arranged.
  • a heart sound acquisition method as one aspect of the present disclosure is a method, executed by a control unit, for acquiring heart sounds using an auscultation unit configured to be capable of acquiring heart sounds of a user. In the method, the positions of the feature points of the user are recognized from an image of the user's body captured by an imaging unit; the position where the auscultation unit should be arranged is calculated based on the positions of the feature points and the relative position, with respect to the feature points, of the position where the auscultation unit should be arranged; and the auscultation unit is guided, on the image displayed on a display unit, to the position where it should be arranged.
  • a program as one aspect of the present disclosure is a program for acquiring heart sounds using an auscultation unit configured to be capable of acquiring heart sounds of a user. The program causes a processor provided in a heart sound acquisition device to execute: a process of causing an imaging unit to capture an image of the user's body including feature points; a process of recognizing the positions of the feature points of the user from the image captured by the imaging unit; a process of calculating the position where the auscultation unit should be arranged based on the positions of the feature points and the relative position, stored in a storage unit, of the position where the auscultation unit should be arranged with respect to the feature points; and a process of guiding the auscultation unit, on the image displayed on a display unit, to the position where it should be arranged.
  • the user can easily position the auscultation unit at a predetermined position to acquire heart sounds.
  • FIG. 1 is a schematic configuration diagram showing an example of a hemodynamic monitoring system including a heart sound acquisition device according to one embodiment.
  • FIG. 2 is a diagram for explaining an example of usage of the hemodynamic monitoring system of FIG. 1.
  • FIG. 3 is a schematic configuration diagram showing an example of the heart sound acquisition device of FIG. 1.
  • FIG. 4 is a perspective view showing an example of the appearance of the main body shown in FIG. 3.
  • FIG. 5 is a functional block diagram showing an example of the control unit in FIG. 3.
  • FIG. 6 is a perspective view showing an example of the appearance of the auscultation unit of FIG. 3.
  • FIG. 7 is a flow chart for determining the position of the auscultation unit in a medical institution.
  • FIG. 8 is a diagram illustrating a method of determining the position of the auscultation unit in a medical institution.
  • FIG. 9 is a flow chart for explaining the procedure for calculating hemodynamic parameters.
  • FIG. 10 is a flow chart for explaining the process of placing the auscultation unit in FIG. 9.
  • FIG. 11 is a diagram illustrating an example of a method of arranging the auscultation unit when the user is at home.
  • the hemodynamic monitoring system 10 is a system for remotely monitoring the condition of a user who is a heart failure patient discharged from a medical institution.
  • the hemodynamic monitoring system 10 acquires the user's electrocardiogram, pulse wave, and heart sound data, and analyzes the patient's hemodynamics.
  • Hemodynamic parameters indicating hemodynamics include left ventricular pressure and pulmonary artery pressure.
  • Hemodynamic monitoring is performed in cooperation with the medical institution and the user, for example, according to the procedure shown in Figure 2.
  • a physician treating a heart failure patient decides to perform remote monitoring of a stable patient (user).
  • a doctor at a medical institution specifies the position where the auscultation unit should be placed when acquiring the patient's heart sound, and stores it in the storage unit of the monitoring device or the server.
  • the user borrows home equipment for hemodynamic monitoring from a medical institution or rental company, and measures heart sounds, electrocardiograms, and pulse waves at home.
  • Heart sounds, electrocardiograms and pulse waves are included in physical information. For example, the measurement is performed periodically at a time determined by a doctor every day.
  • Heart sounds are measured by the user placing the auscultation unit at a position stored in the device or server by the doctor. The user sends the measurement results to the medical institution with the home device.
  • the user receives the prescription from the doctor and takes medicine based on the changed prescription.
  • the hemodynamic monitoring system 10 monitors changes in the state of the heart failure patient and, if necessary, quickly changes the prescription or the like, thereby reducing the possibility that the user's condition will worsen and that the user will be re-hospitalized.
  • the hemodynamic monitoring system 10 includes a main unit 11 arranged on the user side; an auscultation unit 12, electrodes 13, and an arm band device 14 connected to the main unit 11; and a medical institution system 16 that can communicate with the main unit 11 via a communication network 17.
  • the medical institution system 16 is an information system within the medical institution and includes computers such as servers within the medical institution.
  • the electrodes 13 and the arm band device 14, which acquire physical information other than heart sounds, i.e., electrocardiogram and pulse wave information, are included in the physical information acquisition unit.
  • the main unit 11 is a computer that acquires and analyzes physical information.
  • the auscultation unit 12 is a sensor that acquires the user's heart sounds.
  • the auscultation unit 12 can be rephrased as a heart sound sensor.
  • the electrodes 13 are attached to sites such as the wrists and ankles in order to obtain an electrocardiogram of the user.
  • the electrodes 13 can detect the minute electrical signals generated by the heart.
  • the arm band device 14 is wrapped around the user's upper arm or the like and sends air into the arm band to compress the blood vessels and measure the pulse waves transmitted through the blood vessels by the heartbeat.
  • Main unit 11, auscultation unit 12, electrodes 13, and arm band device 14 are connected by wire or wirelessly so that information can be transmitted and received.
  • the main unit 11 can analyze each waveform of heart sounds, electrocardiograms, and pulse waves.
  • the main unit 11 may store a machine-learned model having heart sounds, an electrocardiogram, and a pulse wave as input data and a patient's hemodynamic parameters as output data. Based on this learned model, the main unit 11 may estimate hemodynamic parameters and changes in the patient's condition from the input heart sounds, electrocardiograms, and pulse waves.
  • the hemodynamic monitoring system 10 may be further connected with the server 18.
  • the server 18 may store relative position information regarding the position where the auscultation unit 12 should be placed for each user determined by a doctor at the medical institution.
  • the relative position information is information indicating the relative position of the position where the auscultation unit 12 should be placed with respect to the feature points of the user's body.
  • the main unit 11 may read the relative position information of the auscultation unit 12 from the server 18 when acquiring the user's heart sounds.
  • Server 18 may be located at a location separate from the medical institution or may be internal to the medical institution. Further, when the doctor determines, in the medical institution system 16, the position where the user's auscultation unit 12 should be placed, the relative position information can be registered directly in the storage unit 22 (see FIG. 3) of the main unit 11 and passed to the user without going through the server 18.
  • a heart sound acquisition device 15 of this embodiment includes a main unit 11 and an auscultation unit 12, which are part of the hemodynamic monitoring system 10 shown in FIG.
  • the heart sound acquisition device 15 is not limited to the hemodynamic monitoring system 10, and can be used for other heart sound acquisition applications.
  • a system including the heart sound acquisition device 15 and the server 18 is called a heart sound acquisition system.
  • the hemodynamic monitoring system 10 of FIG. 1 is a heart sound acquisition system.
  • a more detailed configuration of the main unit 11 and the auscultation unit 12 of the heart sound acquisition device 15 will be described below.
  • the main unit 11 includes an imaging unit 21, a storage unit 22, a display unit 23, an imaging adjustment unit 24, a control unit 25, a communication unit 26, and a speaker 27, as shown in FIG. 3. Further, as shown in FIG. 4, the main unit 11 includes a stand 28 capable of adjusting the orientation of the display unit 23.
  • the imaging unit 21 is a camera capable of imaging the user's face and upper body. Therefore, the imaging unit 21 may be arranged above or below the display unit 23 .
  • the imaging unit 21 may include a lens and an imaging element.
  • the imaging element is, for example, a CCD image sensor (Charge-Coupled Device Image Sensor) or a CMOS image sensor (Complementary MOS Image Sensor).
  • the storage unit 22 is a memory that stores data required for processing performed in the main unit 11 and data generated in the main unit 11 .
  • the storage unit 22 may store programs executed by the control unit 25, which will be described later.
  • the storage unit 22 may include, for example, one or more of a semiconductor memory, a magnetic memory, an optical memory, and the like.
  • Semiconductor memory may include volatile memory and non-volatile memory.
  • the storage unit 22 may store information on the position where the auscultation unit 12 should be placed.
  • the position where the auscultation unit 12 should be placed is stored as relative position information indicating the relative position with respect to the feature points of the user's body, as will be described later.
  • when the control unit 25 estimates the user's hemodynamics by machine learning, the storage unit 22 may store the learned model.
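  One way to picture the relative position information the storage unit holds is as a small record expressing the target as an offset from body feature points rather than as absolute pixel coordinates, so it remains valid when the user sits at a different distance from the camera. The field names and normalization scheme below are illustrative assumptions, not taken from the patent.

```python
import json

# Hypothetical layout of the relative position information: the target
# is an offset from the midpoint of the shoulders, normalised by the
# shoulder-to-shoulder distance so it is scale-invariant.
record = {
    "user_id": "patient-0001",                      # illustrative ID
    "reference_features": ["left_shoulder", "right_shoulder"],
    "offset_normalized": {"dx": 0.10, "dy": 0.55},  # assumed values
}

def save_relative_position(record: dict) -> str:
    """Serialise the record for the storage unit 22 or the server 18."""
    return json.dumps(record)

def load_relative_position(blob: str) -> dict:
    """Restore a record previously serialised with save_relative_position."""
    return json.loads(blob)

restored = load_relative_position(save_relative_position(record))
assert restored == record
```

  Storing the offset in feature-point-relative, normalized form is what lets the same record guide the user at home even though the camera geometry differs from the examination at the medical institution.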
  • the display unit 23 displays images under the control of the control unit 25 .
  • the display unit 23 can display an image that at least partially includes the user's face and upper body imaged by the imaging unit 21, as shown in FIG.
  • a commonly known display can be used for the display unit 23 .
  • the display unit 23 can employ, for example, a liquid crystal display (LCD), an organic EL (Electro-Luminescence) display, an inorganic EL display, a plasma display (PDP: Plasma Display Panel), or the like.
  • the imaging adjustment unit 24 adjusts the imaging direction of the imaging unit 21 under the control of the control unit 25 .
  • the imaging adjustment unit 24 may include, for example, a drive unit that is incorporated inside the main unit 11 and changes the orientation of the imaging unit 21. Alternatively, the imaging adjustment unit 24 may be incorporated in the stand 28 and adjust the orientation of the portion of the main unit 11 that includes the imaging unit 21.
  • the control unit 25 controls each unit of the main unit 11 and performs various arithmetic processing for positioning the auscultation unit 12 and estimating the user's hemodynamic parameters.
  • the control unit 25 includes one or more processors.
  • the processor includes a general-purpose processor that loads a specific program and executes a specific function, and a dedicated processor that specializes in specific processing. Special purpose processors include Application Specific Integrated Circuits (ASICs) and Programmable Logic Devices (PLDs). PLDs include FPGAs (Field-Programmable Gate Arrays).
  • the control unit 25 may be either SoC (System-on-a-Chip) or SiP (System In a Package) in which one or more processors cooperate.
  • the control unit 25 may include memory built into the processor or memory independent of the processor.
  • the control unit 25 can execute a program that defines control procedures.
  • the control unit 25 may be configured to load a program recorded on a non-transitory computer-readable medium into memory and execute it. Processing performed by the control unit 25 will be further described below with reference to FIG. 5 and the like.
  • the communication unit 26 includes hardware and software for communicating with the medical institution system 16 and the server 18 via the communication network 17.
  • the communication unit 26 corresponds to wired and/or wireless communication means.
  • the communication unit 26 performs processing such as protocol processing related to transmission and reception of information, modulation of transmission signals and demodulation of reception signals.
  • the speaker 27 emits sounds for guiding the user.
  • a known speaker can be used as the speaker 27 .
  • the auscultation unit 12 is a sensor that is brought into contact with the user's own chest to acquire heart sounds.
  • the auscultation unit 12 includes a heart sound acquisition unit 31, a conversion unit 32, a pressure sensor 33, and a vibration unit 34, as shown in FIG.
  • Each component of the auscultation unit 12 may be controlled by the control unit 25 of the main unit 11 .
  • the auscultation unit 12 may have a control unit (processor) that controls each component of the auscultation unit 12 in cooperation with the control unit 25 of the main unit 11.
  • the main unit 11 and the auscultation unit 12 may be connected by wire or wirelessly.
  • the auscultation unit 12 also has a marker 36 on the surface opposite to the portion that contacts the user's chest, as shown in FIG. 6. Further, the auscultation unit 12 has a handle 37 that the user uses when holding the auscultation unit 12.
  • the heart sound acquisition unit 31 acquires the user's heart sound.
  • the heart sound acquisition unit 31 can be provided on the side of the auscultation unit 12 that contacts the user's chest.
  • the conversion unit 32 converts the heart sounds acquired by the heart sound acquisition unit 31 into electrical signals.
  • the heart sound acquisition unit 31 and the conversion unit 32 may constitute a heart sound sensor that converts heart sounds into electrical signals.
  • Heart sound sensors include, for example, MEMS (Micro Electro Mechanical Systems) heart sound sensors, heart sound sensors using piezoelectric elements, and heart sound sensors using accelerometers. Note that the heart sound acquisition device of the present disclosure is not limited to this embodiment.
  • the heart sound acquisition device of the present disclosure also includes a form of auscultation unit 12 that does not convert heart sounds into electrical signals. The heart sounds are then transmitted to the physician or user as vibrations of air or objects.
  • the pressure sensor 33 is a sensor that detects the pressure with which the auscultation unit 12 is pressed against the user's chest and outputs it as an electrical signal.
  • a pressure sensor 33 using a piezoelectric effect can be used.
  • the pressure sensor 33 may have, for example, a ring shape along the outer circumference of the surface of the auscultation unit 12 that contacts the chest of the user.
  • the pressure sensors 33 may be provided at a plurality of locations on the outer circumference of the surface of the auscultation unit 12 that contacts the user's chest.
  • the vibrating section 34 includes a vibrator that generates vibrations perceptible to humans.
  • Vibrators include those using eccentric motors, those using linear vibrators, and those using piezoelectric elements.
  • the marker 36 is a mark used to specify the position of the auscultation unit 12 from the image captured by the imaging unit 21. It is preferable that the marker 36 allows the control unit 25 to specify the position of the auscultation unit 12 as a point from the captured image of the auscultation unit 12.
  • the marker 36 is composed of, for example, a pattern including circles. In this case, the control unit 25 can determine that the center of the circle is the position of the auscultation unit 12 .
  • the marker 36 is not limited to this.
  • the marker 36 can be a pattern in which two line segments intersect perpendicularly. In this case, it can be determined that the position of the auscultation unit 12 is the point where the two straight lines intersect.
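  The circle-pattern case above can be sketched in a few lines: the patent does not specify a detection algorithm, so as a stand-in this treats the frame as a binary grid (1 = marker-colored pixel) and takes the centroid of those pixels, which for a filled, fully visible circle coincides with its center. A real device would use a proper image-processing library instead.

```python
def find_marker_center(frame):
    """Centroid of marker pixels in a binary grid, or None if absent."""
    xs, ys, n = 0, 0, 0
    for y, row in enumerate(frame):
        for x, px in enumerate(row):
            if px:
                xs += x
                ys += y
                n += 1
    if n == 0:
        return None  # marker not visible, e.g. hidden by the user's hand
    return (xs / n, ys / n)

# A 5x5 frame with a symmetric "circle" of marker pixels around (2, 2).
frame = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
assert find_marker_center(frame) == (2.0, 2.0)
```

  The None branch is the reason the handle 37 matters: if the hand covers the marker, the control unit has no point to track.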
  • the handle 37 is provided so that, when the user takes an image while holding the auscultation unit 12, the user's hand does not come between the marker 36 and the imaging unit 21 and obstruct the imaging of the marker 36. It is also provided so that the user can easily adjust the position by holding it with both hands. Various shapes can be adopted for the handle 37.
  • the control unit 25 of the main unit 11 includes the functional blocks of an image recognition unit 25a, a camera adjustment unit 25b, a direction guide unit 25c, a contact determination unit 25d, a waveform processing unit 25e, a waveform analysis unit 25f, an estimation unit 25g, and a determination unit 25h.
  • the processing of each functional block may be executed by the same processor or by different processors.
  • the processing of each functional block may be performed by a single software module or may be performed by multiple software modules.
  • the processing of each functional block can be rearranged, separated, or combined. All the functions of each functional block can be regarded as functions of the control unit 25 .
  • the image recognition unit 25a recognizes the user's face, skeleton, etc. from the image captured by the imaging unit 21.
  • the image recognition unit 25a can identify feature points from the recognized user's face, skeleton, and the like. Feature points include, for example, the user's left and right shoulder joints or acromion portions, as well as the eyes, ears, nose, mouth and chin of the face.
  • the image recognition unit 25a can further recognize the marker 36 of the auscultation unit 12 from the image captured by the imaging unit 21 and specify the position of the auscultation unit 12 .
  • the image recognition unit 25a can further recognize at least one of the distance between the imaging unit 21 and the user's body and the direction of the user's body from the image captured by the imaging unit 21.
  • the image recognition unit 25a can recognize when the orientation of the user's body deviates from the correct orientation, for example, the front. Further, when a predetermined range of the user's body is out of the field of view of the imaging section 21, the image recognition section 25a can recognize this.
  • the camera adjustment unit 25b can adjust the imaging orientation of the imaging unit 21 by controlling the imaging adjustment unit 24.
  • the direction guide unit 25c identifies the position where the auscultation unit 12 should be placed based on the positions of the feature points recognized by the image recognition unit 25a and the relative position information stored in the storage unit 22.
  • the relative position information is information indicating the relative position of the position where the auscultation unit 12 should be placed with respect to the feature point.
  • the direction guide unit 25c displays the position where the auscultation unit 12 should be placed on the image displayed on the display unit 23.
  • the position where the auscultation unit 12 should be placed may be displayed by displaying a circular or square mark in a specific color such as red, or by blinking the mark.
  • the direction guide unit 25c may further superimpose, on the image displayed on the display unit 23, graphics such as straight lines and arrows, characters, and the like to indicate the direction in which the auscultation unit 12 should be moved.
  • the direction in which the auscultation unit 12 should move is the direction toward the position where the auscultation unit 12 should be arranged.
  • the direction guide section 25c may guide the user in the direction in which the auscultation section 12 should be moved by voice from the speaker 27 .
  • the direction guide unit 25c may cause the vibration unit 34 to vibrate to notify the user when the auscultation unit 12 is moving in the wrong direction. Conversely, the direction guide unit 25c may cause the vibration unit 34 to vibrate when the user moves the auscultation unit 12 in the direction in which it should be moved.
  • when the auscultation unit 12 is aligned with the correct measurement position, the direction guide unit 25c may notify the user by superimposing at least one of characters and graphics on the image displayed on the display unit 23, by emitting sound using the speaker 27, and/or by vibrating the vibration unit 34.
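  The core computation behind the direction guide unit can be sketched as two steps: reconstruct the target position from the feature points and the stored relative offset, then turn the marker-to-target vector into an instruction. The shoulder-midpoint reference, the normalization by shoulder width, and the tolerance value are all illustrative assumptions, not details from the patent.

```python
import math

def target_position(left_shoulder, right_shoulder, offset):
    """Target point = shoulder midpoint + offset scaled by shoulder width."""
    lx, ly = left_shoulder
    rx, ry = right_shoulder
    mid_x, mid_y = (lx + rx) / 2, (ly + ry) / 2
    width = math.hypot(rx - lx, ry - ly)
    return (mid_x + offset["dx"] * width, mid_y + offset["dy"] * width)

def guidance(marker_pos, target, tol=5.0):
    """Translate the marker-to-target vector into a user instruction."""
    dx = target[0] - marker_pos[0]
    dy = target[1] - marker_pos[1]
    if math.hypot(dx, dy) <= tol:
        return "in position"
    horizontal = "right" if dx > 0 else "left"
    vertical = "down" if dy > 0 else "up"  # image y grows downward
    return f"move {horizontal} and {vertical}"

target = target_position((100, 100), (200, 100), {"dx": 0.1, "dy": 0.55})
assert target == (160.0, 155.0)
assert guidance((160.0, 153.0), target) == "in position"
assert guidance((100.0, 100.0), target) == "move right and down"
```

  The returned string would then be rendered as superimposed text or arrows, spoken through the speaker 27, or mapped to vibration of the vibration unit 34.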
  • the contact determination unit 25d can determine the contact state of the auscultation unit 12 with the user's body based on the pressure detected by the pressure sensor 33.
  • the contact determination unit 25d may consider the waveform of the heart sound when determining the contact state. For example, the contact determination unit 25d may determine that the pressing of the auscultation unit 12 against the user's body is insufficient when the pressure detected by the pressure sensor 33 is equal to or less than a predetermined value and/or when the waveform analysis unit 25f, which will be described later, does not detect at least one of the first heart sound and the second heart sound from the waveform of the heart sound.
  • the contact determination unit 25d can guide the user to increase the pressing pressure of the auscultation unit 12 using at least one of the image displayed on the display unit 23 and sound from the speaker 27.
  • the auscultation unit 12 may be provided with a lamp that indicates that the pressing force of the auscultation unit 12 is insufficient.
  • the contact determination section 25 d may control lighting of this lamp according to the output of the pressure sensor 33 .
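  Put together, the contact determination described above reduces to a small decision function. The patent only says that the pressure threshold and the presence of the first and second heart sounds may both be considered; the threshold value and the way the two criteria are combined below are illustrative choices.

```python
PRESSURE_THRESHOLD = 1.0  # arbitrary units; a hypothetical value

def contact_ok(pressure: float, s1_detected: bool, s2_detected: bool) -> bool:
    """True when the auscultation unit appears properly pressed to the chest."""
    if pressure <= PRESSURE_THRESHOLD:
        return False           # not pressed firmly enough
    if not (s1_detected or s2_detected):
        return False           # pressed, but no heart sounds reach the sensor
    return True

assert not contact_ok(0.5, True, True)     # too little pressure
assert not contact_ok(2.0, False, False)   # pressed, but no S1/S2
assert contact_ok(2.0, True, False)
```

  A False result is what would trigger the on-screen or spoken prompt to press harder, or light the lamp on the auscultation unit.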
  • the waveform processing unit 25e removes noise from the waveform of the heart sound acquired by the auscultation unit 12 and converted into an electrical signal by the conversion unit 32. For example, the waveform processing unit 25e performs filtering processing on the electrical signal of the heart sounds to remove noise not derived from the heartbeat, such as environmental sounds and breathing sounds. When the main unit 11 acquires electrocardiogram and pulse wave signals from the electrodes 13 and the arm band device 14, the waveform processing unit 25e may also remove noise from the waveforms of these signals.
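  As a crude stand-in for that filtering step, the sketch below applies a moving-average low-pass filter to a digitized signal. A practical device would use a proper band-pass filter tuned to the heart-sound band; this only illustrates the idea of conditioning the electrical signal before analysis.

```python
def moving_average(signal, window=3):
    """Smooth a sampled signal with a simple moving-average window."""
    if window < 1 or window > len(signal):
        raise ValueError("window must be between 1 and len(signal)")
    out = []
    for i in range(len(signal) - window + 1):
        out.append(sum(signal[i:i + window]) / window)
    return out

# Alternating spikes are attenuated toward their local mean.
noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
smoothed = moving_average(noisy, window=3)
assert len(smoothed) == 4
assert all(abs(v - 1/3) < 1e-9 or abs(v - 2/3) < 1e-9 for v in smoothed)
```

  The same routine could be reused on the electrocardiogram and pulse wave channels before feature extraction.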
  • the waveform analysis unit 25f analyzes the cardiac waveform.
  • the waveform analysis unit 25f extracts feature amounts from the waveform of the heart sound from which noise has been removed by the waveform processing unit 25e.
  • the waveform analysis unit 25f for example, extracts the timings of the first and second heart sounds.
  • the waveform analysis unit 25f may also analyze accentuation and splitting of the I sound and the II sound.
  • the waveform analysis unit 25f further analyzes the waveforms of the electrocardiogram and pulse wave.
  • the waveform analysis unit 25f extracts, for example, the timing, width, and magnitude of the Q wave, R wave, and S wave from the electrocardiogram waveform.
  • the waveform analysis unit 25f can extract, for example, the timing and duration of diastole and systole from the waveform of the pulse wave.
  • the waveform analysis unit 25f can time-synchronize and analyze the waveform of the electrical signal obtained by converting the heart sounds and the waveforms of the electrocardiogram and the pulse wave obtained by the electrodes 13 and the arm band device 14.
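One simple way to extract I-sound and II-sound timings from a denoised heart sound envelope is threshold-based peak picking. The sketch below is an illustrative assumption, not the analysis actually used by the waveform analysis unit 25f:

```python
def find_sound_peaks(envelope, threshold, min_gap):
    """Indices of local maxima above threshold, at least min_gap samples apart.

    In a heart sound envelope, successive peaks would correspond alternately
    to the I sound (S1) and the II sound (S2); their indices give the timings.
    """
    peaks = []
    for i in range(1, len(envelope) - 1):
        is_peak = envelope[i] > envelope[i - 1] and envelope[i] >= envelope[i + 1]
        if is_peak and envelope[i] >= threshold:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks
```

With a time-synchronized ECG, the peak following each R wave could then be labeled as S1 and the next as S2.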
  • the estimation unit 25g calculates hemodynamic parameters based on the feature amount extracted by the waveform analysis unit 25f.
  • the hemodynamic parameters include at least one of left ventricular pressure and pulmonary artery pressure.
  • the estimation unit 25g may estimate the hemodynamic parameters by machine learning as described above.
  • the means by which the estimation unit 25g calculates the hemodynamic parameters is not limited to one using machine learning.
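As a concrete illustration of the estimation step, the sketch below uses a plain linear model as a stand-in for the learned estimator of the estimation unit 25g; the feature values, weights, and bias are invented for the example and do not come from this disclosure:

```python
# Hypothetical stand-in for the learned estimator (estimation unit 25g):
# a linear model mapping waveform feature amounts (e.g. an S1-S2 interval,
# an R-wave amplitude) to one hemodynamic parameter.

def estimate_parameter(features, weights, bias):
    """Weighted sum of feature amounts plus a bias term."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Example with two features and assumed weights.
estimated = estimate_parameter([1.0, 2.0], [0.5, 0.25], 10.0)  # 11.0
```

In practice the estimator could be any trained regression model, as the bullet above notes; the interface (features in, parameter out) is what matters.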
  • the determination unit 25h acquires hemodynamic parameters from the estimation unit 25g.
  • the determination unit 25h continuously monitors hemodynamic parameters and determines changes and/or abnormalities in the heart condition.
  • the control unit 25 need not execute at least part of the processing of the waveform processing unit 25e, the waveform analysis unit 25f, the estimation unit 25g, and the determination unit 25h. These processes may instead be performed by the medical institution system 16, or by the server 18 provided at a location different from the medical institution.
  • a doctor at a medical institution uses the heart sound acquisition device 15 to bring the auscultation unit 12 into contact with the user's chest and searches for a position where heart sounds can be acquired satisfactorily (step S101).
  • the heart sound acquisition device 15 used at this time may be the same as or different from the device used by the user at home.
  • the control unit 25 of the main unit 11 determines whether heart sounds can be acquired from the auscultation unit 12 (step S102).
  • the control unit 25 can determine that the heart sounds have been acquired when the electrical signals of the heart sounds converted into electrical signals by the conversion unit 32 include predetermined sounds, such as the I sound and/or the II sound.
  • the control unit 25 can determine that the heart sound could not be acquired when the predetermined sound signal in the electrical signal of the heart sound is buried in noise and cannot be identified. If the heart sound could be acquired (step S102: Yes), the control unit 25 proceeds to the next step S104. If the heart sound could not be acquired (step S102: No), the control unit 25 proceeds to step S103.
  • in step S103, the control unit 25 uses the display unit 23 and/or the speaker 27 to guide the user to correct the position of the auscultation unit 12.
  • the doctor changes the position of the auscultation unit 12 to acquire the heart sound according to the guidance.
  • after step S103, the process returns to step S101, and the doctor again brings the auscultation unit 12 into contact with the user's chest.
  • in this example, the doctor determines the position of the auscultation unit 12 according to guidance from the control unit 25; however, the present invention is not limited to this, and the doctor may determine the position for acquiring heart sounds by auscultation.
  • in step S102, when the auscultation unit 12 can acquire heart sounds, the doctor holds the auscultation unit 12 against the user's chest, and the control unit 25 of the main unit 11 controls the imaging unit 21 to image the user's upper body together with the auscultation unit 12 including the marker 36 (step S104).
  • in step S105, the control unit 25 attempts to recognize the user's face and skeleton in the captured image.
  • if the user's face and skeleton cannot be recognized (step S105: No), the control unit 25 proceeds to step S106.
  • in step S106, the control unit 25 adjusts the orientation of the imaging unit 21 so that the user's image is included in the captured image.
  • the control unit 25 may control the imaging adjustment unit 24 to automatically adjust the orientation of the imaging unit 21. If the orientation of the imaging unit 21 cannot be adjusted within the adjustable range of the imaging adjustment unit 24, the control unit 25 uses the display unit 23 and the speaker 27 to advise the doctor to move the main unit 11 and change the orientation of the imaging unit 21. The doctor adjusts the orientation of the imaging unit 21 accordingly. After the orientation of the imaging unit 21 is adjusted, the control unit 25 returns to the process of step S104.
  • in step S105, if the user's face, skeleton, etc. can be recognized (step S105: Yes), the control unit 25 extracts the user's feature points from the image captured by the imaging unit 21 (step S107).
  • the feature points include, for example, the left shoulder 41 and right shoulder 42, and the eyes, nose, mouth, ears, and chin included in the face 43, as shown in FIG.
  • Left shoulder 41 and right shoulder 42 may be, for example, near the shoulder joint or acromion. These feature points can be distinguished to some extent even if the user is wearing clothes.
  • Feature points may also include the user's clavicle and/or nipple. To recognize these feature points, however, the user must be imaged with the upper body bare.
  • in the present embodiment, feature points of the left shoulder 41, the right shoulder 42, and the face 43, which are recognizable to some extent even when the user is wearing clothes, are used.
  • the control unit 25 determines two axes based on the feature points (step S108).
  • the control unit 25 determines, for example, the median line and the horizontal line as two axes.
  • the median line is, for example, a line connecting the midpoint of a pair of symmetrically positioned feature points, such as the eyes, with the mouth or nose.
  • the midline can be the y-axis.
  • the horizontal line is, for example, a line connecting symmetrically positioned feature points such as the left shoulder 41 and the right shoulder 42 .
  • the horizontal line can be the x-axis perpendicular to the y-axis.
  • the control unit 25 determines the position of the auscultation unit 12 on the image captured by the imaging unit 21 (step S109).
  • the position of the auscultation unit 12 is identified from the position of the marker 36 on the image.
  • the position of the auscultation unit 12 can be expressed as coordinates on the x-axis and y-axis determined in step S108. That is, the position of the auscultation unit 12 is expressed as relative position information indicating the relative position with respect to the feature point on the image.
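The axis construction of step S108 and the relative coordinates of step S109 can be expressed as a small geometry sketch. This is a hypothetical Python illustration under the assumption of 2D pixel coordinates; the function names are invented:

```python
import math

def body_axes(left_shoulder, right_shoulder, eye_mid, mouth):
    """Return (origin, x_unit, y_unit) built from the feature points.

    The horizontal line (x-axis) connects the shoulders; the y-axis is taken
    perpendicular to it, oriented from the eye midpoint toward the mouth,
    i.e. along the median line of the body.
    """
    ox = (left_shoulder[0] + right_shoulder[0]) / 2
    oy = (left_shoulder[1] + right_shoulder[1]) / 2
    dx = right_shoulder[0] - left_shoulder[0]
    dy = right_shoulder[1] - left_shoulder[1]
    n = math.hypot(dx, dy)
    x_unit = (dx / n, dy / n)
    y_unit = (-x_unit[1], x_unit[0])  # perpendicular to the shoulder line
    toward = (mouth[0] - eye_mid[0], mouth[1] - eye_mid[1])
    if y_unit[0] * toward[0] + y_unit[1] * toward[1] < 0:
        y_unit = (-y_unit[0], -y_unit[1])  # flip to point down the face
    return (ox, oy), x_unit, y_unit

def relative_position(marker, origin, x_unit, y_unit):
    """Marker coordinates in the body axes: the stored relative position."""
    vx = marker[0] - origin[0]
    vy = marker[1] - origin[1]
    return (vx * x_unit[0] + vy * x_unit[1], vx * y_unit[0] + vy * y_unit[1])
```

Because the coordinates are expressed relative to the body axes, they remain meaningful even if the camera framing changes between sessions.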
  • the control unit 25 stores the relative position information of the auscultation unit 12 in the storage unit 22 and/or transmits it to the server 18 (step S110).
  • when the same heart sound acquisition device 15 is used at home, the control unit 25 may store the relative position information of the auscultation unit 12 in the storage unit 22.
  • alternatively, so that it can be read from the user's home, the relative position information of the position where the auscultation unit 12 should be placed is stored in the server 18.
  • the relative position information stored in the server 18 is acquired from the server 18 by the control unit 25 via the communication unit 26 before the user uses the heart sound acquisition device 15 at home, and is stored in the storage unit 22.
  • in this example, a doctor at a medical institution uses a device similar to the heart sound acquisition device 15 used by the patient at home.
  • a dedicated device different from the device used by the patient at home may be used to specify the position where the auscultation unit 12 should be placed.
  • the user wears the electrodes 13 for acquiring an electrocardiogram and the arm band device 14 for acquiring a pulse wave (step S201).
  • the user positions the auscultation unit 12 to acquire heart sounds (step S202). Details of the positioning of the auscultation unit 12 are described below with reference to FIG. 10.
  • the procedure shown in the flowchart of FIG. 10 corresponds to the heart sound acquisition method of the present disclosure.
  • the user places the main unit 11 on a desk or the like so that the imaging unit 21 of the heart sound acquisition device 15 is positioned and oriented appropriately for imaging the user's upper body.
  • if the imaging unit 21 is independent of the main unit 11, the user places the imaging unit 21 at an appropriate position.
  • the user takes an image of his or her upper body using the imaging unit 21 (step S301).
  • the control unit 25 of the main unit 11 displays the image captured by the imaging unit 21 on the display unit 23.
  • the image displayed on the display unit 23 can be a left-right reversed image so that it looks like a mirror image to the user.
  • the imaging section 21 of the main body section 11 may continuously capture images of the user. The following processing may be performed on successively captured images.
  • the control unit 25 of the main unit 11 attempts to recognize the user's face and skeleton in the image captured by the imaging unit 21 (step S302).
  • if the user's face and skeleton are recognized (step S302: Yes), the control unit 25 proceeds to the next step S304.
  • if they cannot be recognized (step S302: No), the control unit 25 proceeds to step S303.
  • in step S303, the control unit 25 adjusts the position and orientation of the imaging unit 21.
  • the control unit 25 may control the imaging adjustment unit 24 to automatically adjust the orientation of the imaging unit 21.
  • if automatic adjustment is not possible, the control unit 25 uses the display unit 23 and/or the speaker 27 to instruct the user to move the main unit 11 and change the orientation of the imaging unit 21. The user adjusts the position and orientation of the imaging unit 21 accordingly.
  • in step S304, the control unit 25 attempts to display the position where the auscultation unit 12 should be placed, superimposed on the image of the user's upper body displayed on the display unit 23.
  • the control unit 25 extracts feature points included in the image captured by the imaging unit 21 from the user's face and skeleton recognized in step S302. Based on the feature points, the control unit 25 identifies the axes corresponding to the two axes identified in step S107. Based on the relative position information stored in the storage unit 22, the control unit 25 can specify the position where the auscultation unit 12 should be arranged as coordinates on these two axes.
  • the position where the auscultation unit 12 should be placed is displayed as the target position P on the image.
  • the target position P may be displayed on the image with any mark. Since the image of the user's upper body displayed on the display unit 23 is left-right reversed, the target position P corresponding to the position where the auscultation unit 12 should be placed, set in advance at the medical institution, is also displayed at the left-right reversed position.
  • if the target position P at which the auscultation unit 12 should be placed can be superimposed on the image displayed on the display unit 23 (step S304: Yes), the control unit 25 proceeds to the next step S305. If the target position P cannot be superimposed on the image (step S304: No), the range captured by the imaging unit 21 is considered inappropriate. In this case, the control unit 25 proceeds to step S303 described above and adjusts the position and orientation of the imaging unit 21.
  • in step S304, the distance from the user's body to the imaging unit 21 and the orientation (inclination) of the user's body with respect to the imaging unit 21 may differ from the conditions when the image was captured at the medical institution.
  • based on the arrangement of the feature points included in the image captured by the imaging unit 21, the control unit 25 can transform the coordinates of the auscultation unit 12 in the image of the user captured at the medical institution into coordinates in the image captured at home.
  • alternatively, the control unit 25 may use the display unit 23 and/or the speaker 27 to guide the user to change position and orientation so that the position, orientation, and size of the user's body substantially match those of the image taken at the medical institution.
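The coordinate conversion mentioned above can be sketched as a similarity transform anchored on the feature points: using the shoulder midpoint as origin and the shoulder distance as scale. This is an assumed simplification (it ignores rotation between the two images), not the exact transform used by the control unit 25:

```python
import math

def to_home_image(target_clinic, shoulders_clinic, shoulders_home):
    """Map the target position recorded at the clinic into the home image.

    shoulders_* are ((left_x, left_y), (right_x, right_y)) pairs detected
    in each image; the scale factor compensates for a different distance
    between the user and the camera.
    """
    (l1, r1), (l2, r2) = shoulders_clinic, shoulders_home
    scale = math.dist(l2, r2) / math.dist(l1, r1)
    origin1 = ((l1[0] + r1[0]) / 2, (l1[1] + r1[1]) / 2)
    origin2 = ((l2[0] + r2[0]) / 2, (l2[1] + r2[1]) / 2)
    return (origin2[0] + (target_clinic[0] - origin1[0]) * scale,
            origin2[1] + (target_clinic[1] - origin1[1]) * scale)
```

A fuller implementation would also estimate rotation from the two body axes, but the scale-and-translate core is the same.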
  • while viewing the image displayed on the display unit 23, the user brings the auscultation unit 12 into contact with the position on his or her own chest corresponding to the position where the target position P is displayed on the display unit 23 (step S305).
  • the control unit 25 can use the display on the display unit 23 and/or sound from the speaker 27 to prompt the user to bring the auscultation unit 12 into contact with the target position P displayed on the display unit 23.
  • the control unit 25 detects the image of the marker 36 included in the image captured by the imaging unit 21 and attempts to recognize the position of the auscultation unit 12 (step S306). If the auscultation unit 12 is recognized (step S306: Yes), the control unit 25 proceeds to step S308. If the auscultation unit 12 cannot be recognized (step S306: No), the control unit 25 proceeds to step S307. In this case, the auscultation unit 12 is considered unrecognizable because it is not within the recognizable range on the image.
  • in step S307, the control unit 25 causes the display unit 23 to display characters and/or the speaker 27 to generate sound, guiding the user to change the position of the auscultation unit 12 because it is not in the correct position. Following this guidance, the user changes the position of the auscultation unit 12 placed on the chest.
  • after step S307, the process of the flowchart returns to step S305.
  • in step S308, the control unit 25 guides the movement direction and movement distance of the auscultation unit 12 based on the coordinates of the target position P and the coordinates of the current position of the auscultation unit 12 included in the image captured by the imaging unit 21.
  • the control unit 25 can guide the movement direction and distance by superimposing graphics and/or characters on the image displayed on the display unit 23, as well as by sound from the speaker 27 and vibration from the vibration unit 34.
  • for example, the control unit 25 may display "3 cm downward" on the image displayed on the display unit 23.
  • the control unit 25 may also utter the same content by voice.
  • the control unit 25 can also guide the direction in which the user should move the auscultation unit 12 by, for example, blinking or changing the color of the mark indicating the target position P, or shortening the interval between sounds and vibrations.
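The guidance of step S308 reduces to computing a direction and distance from the two coordinate pairs. The sketch below is a hypothetical illustration; the pixels-per-centimeter calibration, tolerance, and the image convention that y grows downward are assumptions:

```python
import math

def guidance(target, current, px_per_cm=10.0, tolerance_cm=0.5):
    """Turn target/current pixel coordinates into a movement instruction."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    dist_cm = math.hypot(dx, dy) / px_per_cm
    if dist_cm <= tolerance_cm:
        return "at target position"
    horiz = "right" if dx > 0 else "left"
    vert = "down" if dy > 0 else "up"  # image y-axis assumed to grow downward
    direction = horiz if abs(dx) >= abs(dy) else vert
    return f"move {dist_cm:.1f} cm {direction}"
```

The returned string corresponds to a message such as "3 cm downward" shown on the display unit 23 or spoken through the speaker 27.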
  • the control unit 25 determines whether the current position of the auscultation unit 12 matches the target position P, that is, whether the auscultation unit 12 is positioned (step S309).
  • if the control unit 25 determines that the current position of the auscultation unit 12 matches the target position P (step S309: Yes), the process proceeds to step S310.
  • if the control unit 25 determines that the current position of the auscultation unit 12 does not match the target position P (step S309: No), it returns to step S308 and the position of the auscultation unit 12 is adjusted again.
  • the control unit 25 uses the display unit 23, the speaker 27 and/or the vibration unit 34 to notify the user that the auscultation unit 12 is at the target position P (step S310).
  • for example, the control unit 25 changes the color of the mark indicating the target position P displayed on the display unit 23, and/or generates a voice such as "correct position set" through the speaker 27, and/or vibrates the vibration unit 34.
  • the display unit 23, the speaker 27 and/or the vibration unit 34 are included in the notification unit that notifies that the auscultation unit 12 is positioned at the position where it should be placed.
  • when it is notified in step S310 that the auscultation unit 12 is at the target position P, the user fixes the auscultation unit 12 at that position (step S311). That is, the user keeps the hand holding the handle 37 of the auscultation unit 12 still at this position.
  • the control unit 25 acquires the electrical signal of the heart sound from the auscultation unit 12.
  • the control unit 25 determines whether the heart sounds are correctly acquired from the waveform of the electrical signal of the heart sounds (step S312).
  • the control unit 25 can determine whether the contact state of the auscultation unit 12 with the user's body is good or bad based on the waveform of the heart sound converted into the electrical signal.
  • the control unit 25 may acquire the pressure detected by the pressure sensor 33 in addition to the waveform of the heart sound converted into the electrical signal, and determine the contact state of the auscultation unit 12 with the user's body.
  • if the heart sounds are not correctly acquired, the control unit 25 uses the display unit 23 and the speaker 27 to guide the user to finely adjust the pressing force of the auscultation unit 12 and/or the position of the auscultation unit 12 (step S313). If the pressure of the auscultation unit 12 against the user's chest is insufficient, the control unit 25 may notify the user to increase the pressure. If the pressure against the user's chest is sufficient, the control unit 25 may guide the user to move the auscultation unit 12 a minute distance from the current position. After step S313, the user fixes the auscultation unit 12 again (step S311), and the control unit 25 performs the process of step S312 again.
  • if the heart sounds are correctly acquired (step S312: Yes), the control unit 25 returns to the processing of the flowchart in FIG.
  • in step S203, the control unit 25 starts automatic measurement of heart sounds, electrocardiograms, and pulse waves.
  • the heart sound to be measured is the waveform converted into an electrical signal by the conversion unit 32.
  • the control unit 25 analyzes the heart sound, electrocardiogram, and pulse wave waveforms measured in step S203 (step S204).
  • the control unit 25 can time-synchronize each waveform of the heart sound, the electrocardiogram, and the pulse wave.
  • the control unit 25 can extract feature amounts from each waveform.
  • the control unit 25 calculates hemodynamic parameters based on the analysis results of each waveform in step S204 (step S205).
  • the control unit 25 may calculate the hemodynamic parameters by machine learning in which the feature values of each waveform are used as input parameters and the hemodynamic parameters are output as described above.
  • Hemodynamic parameters include left ventricular pressure and pulmonary artery pressure.
  • the control unit 25 determines whether the hemodynamic parameters calculated in step S205 are normal (step S206). For example, a hemodynamic parameter is determined to be abnormal if it contains values that could not be measured from a normal human body. Alternatively, when the hemodynamic parameters cannot be calculated, such as when a measured waveform is distorted and the feature amount cannot be calculated, the result is determined to be abnormal. If it is determined that the hemodynamic parameters are not normal (step S206: No), the control unit 25 returns to step S203 and repeats the measurement. If the hemodynamic parameters are determined to be normal (step S206: Yes), the control unit 25 proceeds to step S207.
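The validity check of step S206 can be sketched as a range test that also treats an uncomputable parameter as abnormal. The parameter names and measurable ranges below are illustrative assumptions, not values from this disclosure:

```python
# Hypothetical sketch of step S206. A parameter is treated as abnormal when
# it could not be computed at all (None, e.g. from a distorted waveform) or
# when it falls outside an assumed physiologically measurable range (mmHg).
MEASURABLE = {
    "left_ventricular_pressure": (0.0, 300.0),
    "pulmonary_artery_pressure": (0.0, 120.0),
}

def parameters_normal(params):
    for name, value in params.items():
        if value is None:
            return False  # could not be calculated
        lo, hi = MEASURABLE[name]
        if not (lo <= value <= hi):
            return False  # outside the measurable range
    return True
```

A False result here would correspond to the "No" branch of step S206, triggering a repeated measurement.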
  • in step S207, the control unit 25 displays the measurement results including the hemodynamic parameters on the display unit 23, and/or stores them in the storage unit 22, and/or transmits them to the medical institution system 16.
  • the control unit 25 may cause the display unit 23 to display the waveform of the physical information and/or the time-series transition of the hemodynamic parameters as a graph.
  • the control unit 25 may further cause the display unit 23 to display the determination result as to whether the hemodynamic parameters are normal. From the display on the display unit 23, the user can know information such as the change in the measurement results from the previous day, the trend, and whether or not the measurement results are within reference values.
  • the control unit 25 can further present a response such as "continue measurement” or "contact a doctor urgently" to the user.
  • doctors and/or nurses can continuously monitor the received hemodynamic parameters to check the status of the user, change prescriptions, and the like. After confirming the hemodynamic parameters, the doctor or nurse at the medical institution can send comments or instructions from the medical institution system 16 to the main unit 11 .
  • the control unit 25 of the main unit 11 may display the comment or instruction from the doctor or nurse on the display unit 23 to notify the user.
  • the processing from steps S204 to S207 is assumed to be executed by the control unit 25 of the main unit 11.
  • the hemodynamic monitoring system 10 may be configured such that the main unit 11 transmits the results measured in step S203 to the medical institution system 16, and the medical institution system 16 analyzes the measurement results and calculates the hemodynamic parameters.
  • as described above, the position where the auscultation unit 12 should be placed is determined in advance by the doctor, and this position is stored in the storage unit 22 as a relative position with respect to the feature points of the user's body. The control unit 25 then guides the user to the position where the auscultation unit 12 should be placed on the image displayed on the display unit 23, based on the user's feature points included in the captured image and the stored relative position. This allows the user to easily position the auscultation unit 12 and acquire heart sounds.
  • the patient himself or herself can wear the auscultation unit 12 at the same position every time and acquire a stable heart sound waveform. This eliminates the need for support from doctors, family members, caregivers, and the like, and makes it easier to continue measurement even when family members or caregivers are absent. Furthermore, since the heart sound waveform can be stably acquired, the accuracy of the algorithm for analyzing the heart sound waveform can be improved.
  • the heart sound acquisition device 15 of the present embodiment displays the image captured by the imaging unit 21 on the display unit 23 and guides the user to the position where the auscultation unit 12 should be placed on the image, which is intuitive and easy to understand. Therefore, the heart sound acquisition device 15 is easy for the patient to use and makes it easy to continue measurement.
  • the left shoulder, right shoulder, and face are used as feature points of the user's body for positioning the auscultation unit 12. This makes it possible to extract feature points and acquire heart sounds even when the user is wearing clothes.
  • since the electrocardiogram, the pulse wave, and the heart sound waveform can be synchronously and stably acquired, it becomes possible to monitor hemodynamic parameters non-invasively. This makes it possible to grasp the state of a heart failure patient at home and to respond when the state changes.
  • the heart sound acquisition device 15 is used to monitor the hemodynamic parameters of the user, who is a patient, at home.
  • the heart sound acquisition device can be installed in a medical facility to acquire heart sounds under the same conditions each time for each patient.
  • the storage unit 22 stores relative position information regarding the position where the auscultation unit 12 should be placed for each patient.
  • the control unit 25 reads the relative position information according to the patient from the storage unit 22 and positions the auscultation unit 12 with respect to each patient.
  • in that case, the display unit 23 may be configured to display an image that is not left-right reversed, facing the doctor.
  • an external camera can be used as the imaging unit 21 instead of the camera built into the main unit 11 .
  • for example, it is possible to connect a smartphone to the main unit 11 and use the camera of the smartphone as the imaging unit 21.
  • in this case, there are advantages such as no need for a camera in the main unit 11, easy adjustment of the camera orientation since it is separate from the main unit 11, and the ability to use the two screens of the main unit 11 and the smartphone.
  • a smartphone can be used as the main unit 11.
  • a camera of the smartphone is used as the imaging unit 21 in FIG.
  • a built-in memory of the smartphone is used as the storage unit 22 .
  • a smartphone display is used as the display unit 23 .
  • a processor of a smartphone is used as the control unit 25 .
  • a wireless communication function of a smartphone is used as the communication unit 26 .
  • a speaker built into the smartphone is used as the speaker 27 .
  • a dedicated stand capable of adjusting the orientation of the display of the smartphone may be prepared in cooperation with the smartphone.
  • An external connection terminal of the smartphone can be used to connect the auscultation unit 12 and other sensors that acquire body information.
  • the smartphone stores in its memory an application for causing the processor of the smartphone to function as the control unit 25 of the heart sound acquisition device 15 of the present disclosure.
  • the user activates this application.
  • the subsequent operations can then be performed in the same manner as with the heart sound acquisition device 15 of the above embodiment. This makes it possible to acquire heart sounds without preparing special hardware.
  • the imaging unit 21 may include an infrared camera capable of imaging light with wavelengths in the infrared region. Infrared radiation can partially penetrate clothing. Since the imaging unit 21 includes an infrared camera, it is possible to capture an image of the user's body even if the user is wearing clothes, and extract feature points more accurately. As a result, even if the feature points used for positioning the auscultation unit 12 include a part that is hidden under the clothes, the user can position the auscultation unit 12 while wearing the clothes. become.
  • Reference signs: 10 hemodynamic monitoring system, 11 main unit, 12 auscultation unit, 13 electrode (physical information acquisition unit), 14 arm band device (physical information acquisition unit), 15 heart sound acquisition device, 16 medical institution system, 17 communication network, 18 server, 21 imaging unit, 22 storage unit, 23 display unit, 24 imaging adjustment unit, 25 control unit, 26 communication unit, 27 speaker, 28 stand, 31 heart sound acquisition unit, 32 conversion unit, 33 pressure sensor, 34 vibration unit, 36 marker, 37 handle, 41 left shoulder (feature point), 42 right shoulder (feature point), 43 face, P target position


Abstract

This heart sound acquisition device comprises an auscultation unit, an imaging unit, a storage unit, a display unit, and a control unit. The auscultation unit is configured to be able to acquire heart sounds of a user. The imaging unit captures an image of the user's body including feature points. The storage unit stores the relative position of where the auscultation unit is to be placed with respect to the feature points of the user. The display unit displays the image captured by the imaging unit. The control unit recognizes the positions of the feature points of the user from the image captured by the imaging unit, calculates the position where the auscultation unit is to be placed on the basis of the positions of the feature points and the relative position stored in the storage unit, and guides the auscultation unit to the position where the auscultation unit is to be placed on the image displayed on the display unit.

Description

Heart sound acquisition device, heart sound acquisition system, heart sound acquisition method, and program
The present disclosure relates to a heart sound acquisition device, a heart sound acquisition system, a heart sound acquisition method, and a program.
In order to monitor a patient's health, it may be necessary to acquire heart sounds at home. For example, a method using an electrocardiogram, pulse wave, and heart sounds has been developed as a means of remotely monitoring heart failure patients; this method requires the patient to measure heart sounds every day at home. It is important that the heart sound sensor be worn at the same position every time, because the output waveform differs depending on the position where the sensor is worn. In addition, since measurement is performed at home, it is desirable that the heart sound sensor be easy for the patient to wear unassisted.
For telemedicine and home care, an electronic auscultation system has been proposed that includes a position acquisition unit for acquiring position information indicating the position where a heart sound sensor is brought into contact with the body in order to auscultate the patient's heart sounds (see, for example, Patent Document 1). According to this system, the position of the body where the heart sound sensor is applied is recorded during an examination by a doctor, and when the user subsequently acquires heart sounds, guidance information for guiding the heart sound sensor can be provided to the user based on the recorded position information.
Patent Document 1: JP 2017-198
As a method of wearing the heart sound sensor in the same position every time, there is a method of fixing the heart sound sensor to clothing or a band. However, this method has problems: the clothing is difficult to wash, the clothing or band itself shifts depending on how it is worn, and a band is difficult to put on by oneself. Patent Document 1 also proposes using a laser, a projector, a head-mounted display, or the like to guide the heart sound sensor to the correct auscultation position. However, installing a large device in a patient's home is not always easy. Moreover, these methods assume that a third party such as a doctor visually determines the position of the heart sound sensor. When the patient attempts the procedure alone, it is difficult for the patient to visually check the chest where the heart sound sensor is to be attached, and with a laser or projector the heart sound sensor itself casts a shadow, so correct positioning is difficult. Furthermore, guidance and assistance in the measurement method are necessary so that a patient without the specialized knowledge of a doctor can measure appropriately.
 Accordingly, an object of the present disclosure, made in view of these points, is to provide a heart sound acquisition device, a heart sound acquisition system, a heart sound acquisition method, and a program that allow a user to easily position an auscultation unit at a predetermined position and acquire heart sounds.
 A heart sound acquisition device according to one aspect of the present disclosure comprises: an auscultation unit configured to be capable of acquiring a user's heart sounds; an imaging unit that captures an image of the user's body including feature points; a storage unit that stores the relative position, with respect to the user's feature points, of the position where the auscultation unit should be placed; a display unit that displays the image captured by the imaging unit; and a control unit that recognizes the positions of the user's feature points from the image captured by the imaging unit, calculates the position where the auscultation unit should be placed based on the positions of the feature points and the relative position stored in the storage unit, and guides the auscultation unit to that position on the image displayed on the display unit.
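 The calculation just described, deriving the placement position from recognized feature points and a stored relative position, can be sketched as follows. This is a minimal illustration only: it assumes 2D image coordinates and, hypothetically, that the relative position is stored as an offset from the shoulder midpoint normalized by the inter-shoulder distance so that the result scales with the user's distance from the camera. The disclosure does not specify a particular coordinate convention.

```python
def placement_position(left_shoulder, right_shoulder, relative_offset):
    """Compute where the auscultation unit should appear in the image.

    left_shoulder, right_shoulder: (x, y) pixel coordinates of the
    detected feature points.  relative_offset: (dx, dy) offset of the
    stored placement position, expressed as fractions of the shoulder
    distance (a hypothetical storage convention, not one specified in
    the disclosure).
    """
    lx, ly = left_shoulder
    rx, ry = right_shoulder
    # Origin: midpoint between the shoulders.
    mx, my = (lx + rx) / 2.0, (ly + ry) / 2.0
    # Scale: inter-shoulder distance in pixels.
    scale = ((rx - lx) ** 2 + (ry - ly) ** 2) ** 0.5
    dx, dy = relative_offset
    return (mx + dx * scale, my + dy * scale)
```

Normalizing by a body-derived length is one simple way to make a position recorded at the clinic reusable at home, where the camera distance will differ.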
 In one embodiment, the auscultation unit includes a marker, and the control unit recognizes the position of the auscultation unit in the image by detecting the marker in the image captured by the imaging unit.
 In one embodiment, the control unit is configured to be capable of recognizing, based on the image captured by the imaging unit, at least one of the distance between the imaging unit and the user's body and the orientation of the user's body.
 In one embodiment, the heart sound acquisition device further comprises a communication unit configured to be capable of communicating with a server, and the control unit is configured to acquire the relative position to be stored in the storage unit from the server via the communication unit.
 In one embodiment, the heart sound acquisition device further comprises a speaker, and the control unit uses audio from the speaker when guiding the auscultation unit to the position where it should be placed.
 In one embodiment, when guiding the auscultation unit to the position where it should be placed, the control unit superimposes at least one of text and graphics on the image captured by the imaging unit and displays the result on the display unit.
 In one embodiment, the auscultation unit includes a pressure sensor that detects the pressure between the auscultation unit and the user's body, and the control unit determines the contact state of the auscultation unit with the user's body based on the pressure detected by the pressure sensor.
 In one embodiment, the heart sound acquisition device comprises a notification unit that notifies the user that the auscultation unit is located at the position where it should be placed.
 In one embodiment, the heart sound acquisition device comprises a conversion unit that converts the heart sounds acquired by the auscultation unit into an electrical signal.
 In one embodiment, the control unit determines the contact state of the auscultation unit with the user's body based on the waveform of the electrical signal into which the heart sounds have been converted by the conversion unit.
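 The disclosure does not specify how the waveform indicates the contact state. One plausible heuristic, shown here purely as a sketch with illustrative placeholder thresholds, is to inspect the RMS level of a digitized segment (a very low level suggesting poor contact) and its peak amplitude (clipping suggesting excessive pressure or rubbing):

```python
def contact_state_from_signal(samples, min_rms=0.01, max_abs=0.98):
    """Rough contact check on a digitized heart-sound segment.

    samples: sequence of floats normalized to [-1, 1].  The threshold
    values are illustrative placeholders, not values from the disclosure.
    """
    if not samples:
        return "no signal"
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    if rms < min_rms:
        return "poor contact"        # barely any sound picked up
    if max(abs(s) for s in samples) > max_abs:
        return "excessive pressure"  # clipped or rubbing artifacts
    return "good contact"
```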
 In one embodiment, the control unit is configured to analyze the waveform of the electrical signal into which the heart sounds have been converted by the conversion unit.
 In one embodiment, the heart sound acquisition device further comprises a physical information acquisition unit configured to be capable of acquiring physical information of the user other than heart sounds, and the control unit temporally synchronizes the waveform of the electrical signal into which the heart sounds have been converted by the conversion unit with the waveform of the physical information acquired by the physical information acquisition unit.
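 When the heart-sound channel and the other physical-information channels are sampled on different clocks or at different rates, one common way to align them temporally is to resample each onto a shared time base by linear interpolation. The following is a sketch under that assumption (the disclosure does not prescribe a synchronization method); both time sequences are assumed to be ascending:

```python
def resample(times, values, new_times):
    """Linearly interpolate the samples (times, values) onto new_times.

    Both times and new_times must be ascending.  Values outside the
    original range are clamped to the first/last sample.
    """
    out = []
    j = 0
    for t in new_times:
        while j + 1 < len(times) and times[j + 1] < t:
            j += 1
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            t0, t1 = times[j], times[j + 1]
            v0, v1 = values[j], values[j + 1]
            out.append(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return out
```

Resampling, for example, the heart-sound envelope and the ECG onto the ECG's time stamps gives sample-aligned waveforms for joint analysis.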
 In one embodiment, the physical information includes at least one of an electrocardiogram and a pulse wave, and the control unit calculates a hemodynamic parameter based on the user's heart sounds acquired from the auscultation unit and the user's physical information acquired from the physical information acquisition unit.
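 The disclosure leaves the specific parameter open. As one illustrative example of a quantity derivable from synchronized ECG and heart sounds, electromechanical activation time (EMAT), the interval from ECG Q-wave onset to the first heart sound S1, can be computed once both event streams share a time base. The event-detection step itself is assumed to have been done elsewhere:

```python
def emat_ms(q_onset_times, s1_times):
    """For each Q-wave onset, find the first S1 at or after it and
    return the Q-to-S1 intervals in milliseconds.

    Inputs are event times in seconds on a shared clock.  Q onsets with
    no subsequent S1 are skipped.
    """
    intervals = []
    for q in q_onset_times:
        later = [s for s in s1_times if s >= q]
        if later:
            intervals.append((min(later) - q) * 1000.0)
    return intervals
```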
 A heart sound acquisition system according to one aspect of the present disclosure comprises: a server that stores the relative position, with respect to a user's feature points, of the position where an auscultation unit configured to be capable of acquiring the user's heart sounds should be placed; and a heart sound acquisition device including the auscultation unit, an imaging unit that captures an image of the user's body including the feature points, a display unit that displays the image captured by the imaging unit, a communication unit that acquires the relative position from the server, and a control unit that recognizes the positions of the user's feature points from the image captured by the imaging unit, calculates the position where the auscultation unit should be placed based on the positions of the feature points and the relative position acquired from the server, and guides the auscultation unit to that position on the image displayed on the display unit.
 A heart sound acquisition method according to one aspect of the present disclosure is a method executed by a control unit to acquire heart sounds using an auscultation unit configured to be capable of acquiring a user's heart sounds, the method comprising: causing an imaging unit to capture an image of the user's body including feature points; recognizing the positions of the user's feature points from the image captured by the imaging unit; calculating the position where the auscultation unit should be placed based on the positions of the feature points and the relative position, stored in a storage unit, of that position with respect to the user's feature points; and guiding the auscultation unit to that position on the image displayed on a display unit.
 A program according to one aspect of the present disclosure is a program for acquiring heart sounds using an auscultation unit configured to be capable of acquiring a user's heart sounds, the program causing a processor of a heart sound acquisition device to execute: a process of causing an imaging unit to capture an image of the user's body including feature points; a process of recognizing the positions of the user's feature points from the image captured by the imaging unit; a process of calculating the position where the auscultation unit should be placed based on the positions of the feature points and the relative position, stored in a storage unit, of that position with respect to the user's feature points; and a process of guiding the auscultation unit to that position on the image displayed on a display unit.
 According to the present disclosure, a user can easily position the auscultation unit at a predetermined position and acquire heart sounds.
FIG. 1 is a schematic configuration diagram showing an example of a hemodynamic monitoring system including a heart sound acquisition device according to one embodiment.
FIG. 2 is a diagram explaining an example of how the hemodynamic monitoring system of FIG. 1 is used.
FIG. 3 is a schematic configuration diagram showing an example of the heart sound acquisition device of FIG. 1.
FIG. 4 is a perspective view showing an example of the appearance of the main unit of FIG. 3.
FIG. 5 is a functional block diagram showing an example of the control unit of FIG. 3.
FIG. 6 is a perspective view showing an example of the appearance of the auscultation unit of FIG. 3.
FIG. 7 is a flowchart for determining the position of the auscultation unit at a medical institution.
FIG. 8 is a diagram explaining a method of determining the position of the auscultation unit at a medical institution.
FIG. 9 is a flowchart explaining the procedure for calculating hemodynamic parameters.
FIG. 10 is a flowchart explaining the process of placing the auscultation unit in FIG. 9.
FIG. 11 is a diagram explaining an example of how a user places the auscultation unit at home.
 Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
(Hemodynamic monitoring system)
 In describing the heart sound acquisition device 15 of the present disclosure, the hemodynamic monitoring system 10 shown in FIG. 1 will be described as an example of a system including the heart sound acquisition device 15. The hemodynamic monitoring system 10 is a system for remotely monitoring the condition of a user, a heart failure patient who has been discharged from a medical institution. The hemodynamic monitoring system 10 acquires the user's electrocardiogram, pulse wave, and heart sound data and analyzes the patient's hemodynamics. Hemodynamic parameters indicating hemodynamics include left ventricular pressure, pulmonary artery pressure, and the like. By monitoring changes in hemodynamics, changes in the condition of the heart, recurrence of heart failure, and the like can be predicted and prevented.
 Hemodynamic monitoring is carried out through cooperation between a medical institution and the user, for example according to the procedure shown in FIG. 2. First, a doctor treating a heart failure patient decides to perform remote monitoring of a patient (user) whose condition is stable. The doctor at the medical institution identifies the position where the auscultation unit should be placed when acquiring the patient's heart sounds, and stores it in the storage unit of the monitoring device or on a server.
 The user borrows home equipment for hemodynamic monitoring from the medical institution or a rental company and measures heart sounds, an electrocardiogram, and a pulse wave at home. Heart sounds, electrocardiograms, and pulse waves are included in physical information. The measurements are taken periodically, for example every day at a time determined by the doctor. Heart sounds are measured with the user placing the auscultation unit at the position the doctor stored in the device or on the server. The user then transmits the measurement results to the medical institution via the home equipment.
 On the medical institution side, medical personnel such as doctors remotely monitor the data measured by the user. Based on the user's hemodynamic data, if there are signs that the user's symptoms are worsening, the doctor notifies the user of a change in prescription or the like.
 The user receives the doctor's prescription and, for example, takes medication in accordance with the changed prescription. In this way, the hemodynamic monitoring system 10 monitors changes in the condition of the heart failure patient and, by making prompt prescription changes and the like when necessary, can reduce the likelihood that the user's condition worsens to the point of re-hospitalization.
 To perform the processing described above, the hemodynamic monitoring system 10 includes a main unit 11 located on the user side; an auscultation unit 12, electrodes 13, and an arm band device 14 connected to the main unit 11; and a medical institution system 16 capable of communicating with the main unit 11 via a communication network 17. The medical institution system 16 is an information system within the medical institution and includes computers such as servers within the medical institution. The electrodes 13 and the arm band device 14, which acquire physical information other than heart sounds, namely electrocardiogram and pulse wave information, are included in the physical information acquisition unit.
 The main unit 11 is a computer that acquires and analyzes physical information. The auscultation unit 12 is a sensor that acquires the sounds of the user's heart, that is, heart sounds; it can also be called a heart sound sensor. The electrodes 13 are attached to sites such as the wrists and ankles to obtain the user's electrocardiogram, and can detect the minute electrical signals generated by the heart. The arm band device 14 wraps a cuff around the user's upper arm or the like and pumps air into the cuff to compress the blood vessels, measuring the pulse wave transmitted through the vessels by the heartbeat. The main unit 11 is connected to each of the auscultation unit 12, the electrodes 13, and the arm band device 14 by wire or wirelessly so that information can be transmitted and received.
 The main unit 11 can analyze the waveforms of the heart sounds, electrocardiogram, and pulse wave. For example, the main unit 11 may store a model trained by machine learning that takes heart sounds, an electrocardiogram, and a pulse wave as input data and outputs the patient's hemodynamic parameters. Based on this trained model, the main unit 11 may estimate hemodynamic parameters and changes in the patient's condition from the input heart sounds, electrocardiogram, and pulse wave.
 The hemodynamic monitoring system 10 may further be connected to a server 18. The server 18 may store, for each user, relative position information regarding the position where the auscultation unit 12 should be placed, as determined by a doctor at the medical institution. The relative position information indicates the position where the auscultation unit 12 should be placed relative to feature points on the user's body. When acquiring the user's heart sounds, the main unit 11 may read the relative position information of the auscultation unit 12 from the server 18. The server 18 may be located somewhere other than the medical institution, or inside the medical institution. Alternatively, when the doctor determines the position where the user's auscultation unit 12 should be placed using the medical institution system 16, the relative position information may be registered directly, without using the server 18, in the storage unit 22 (see FIG. 3) of the main unit 11 that is handed to the user.
(Heart sound acquisition device)
 The heart sound acquisition device 15 of this embodiment includes the main unit 11 and the auscultation unit 12, which are part of the hemodynamic monitoring system 10 shown in FIG. 1. The heart sound acquisition device 15 is not limited to use in the hemodynamic monitoring system 10 and can also be used for other heart sound acquisition applications. A system including the heart sound acquisition device 15 and the server 18 is called a heart sound acquisition system; the hemodynamic monitoring system 10 of FIG. 1 is one such system. The configurations of the main unit 11 and the auscultation unit 12 of the heart sound acquisition device 15 are described in more detail below.
(Configuration of the main unit)
 As shown in FIG. 3, the main unit 11 includes an imaging unit 21, a storage unit 22, a display unit 23, an imaging adjustment unit 24, a control unit 25, a communication unit 26, and a speaker 27. As shown in FIG. 4, the main unit 11 also includes a stand 28 that allows the orientation of the display unit 23 to be adjusted.
 The imaging unit 21 is a camera capable of imaging the user's face and upper body, and may therefore be arranged above or below the display unit 23. The imaging unit 21 may include a lens and an image sensor. The image sensor is, for example, a CCD image sensor (Charge-Coupled Device Image Sensor) or a CMOS image sensor (Complementary MOS Image Sensor).
 The storage unit 22 is a memory that stores data required for the processing performed in the main unit 11 and data generated in the main unit 11. The storage unit 22 may store programs executed by the control unit 25, described later. The storage unit 22 may include, for example, any one or more of a semiconductor memory, a magnetic memory, an optical memory, and the like; semiconductor memory may include volatile and non-volatile memory. The storage unit 22 may store information on the position where the auscultation unit 12 should be placed. As described later, this position is stored as relative position information indicating its position relative to feature points on the user's body. When the control unit 25 estimates the user's hemodynamics by machine learning, the storage unit 22 may store the trained model.
 The display unit 23 displays images under the control of the control unit 25. As shown in FIG. 4, the display unit 23 can display an image that at least partially includes the user's face and upper body captured by the imaging unit 21. A commonly known display can be used as the display unit 23, for example a liquid crystal display (LCD), an organic EL (Electro-Luminescence) display, an inorganic EL display, or a plasma display panel (PDP).
 The imaging adjustment unit 24 adjusts the imaging direction of the imaging unit 21 under the control of the control unit 25. The imaging adjustment unit 24 may, for example, include a drive unit incorporated inside the main unit 11 that changes the orientation of the imaging unit 21. Alternatively, the imaging adjustment unit 24 may be incorporated in the stand 28 and adjust the orientation of the portion of the main unit 11 that includes the imaging unit 21.
 The control unit 25 controls each part of the main unit 11 and executes various arithmetic processing for positioning the auscultation unit 12 and estimating the user's hemodynamic parameters. The control unit 25 includes one or more processors. The processors include general-purpose processors that load a specific program to execute a specific function, and dedicated processors specialized for particular processing. Dedicated processors include application-specific integrated circuits (ASICs) and programmable logic devices (PLDs); PLDs include FPGAs (Field-Programmable Gate Arrays). The control unit 25 may be an SoC (System-on-a-Chip) or a SiP (System In a Package) in which one or more processors cooperate. The control unit 25 may include memory built into the processor or memory independent of the processor. The control unit 25 can execute a program defining control procedures, and may be configured to load a program recorded on a non-transitory computer-readable medium into memory and execute it. The processing performed by the control unit 25 is described further below with reference to FIG. 5 and subsequent figures.
 The communication unit 26 includes the hardware and software for communicating with the medical institution system 16 and the server 18 via the communication network 17. The communication unit 26 supports wired and/or wireless communication means, and performs processing such as protocol processing for transmitting and receiving information, modulation of transmission signals, and demodulation of reception signals.
 Under the control of the control unit 25, the speaker 27 emits audio for guiding the user. A known speaker can be used as the speaker 27.
(Configuration of the auscultation unit)
 The auscultation unit 12 is a sensor that the user places in contact with his or her own chest to acquire heart sounds. As shown in FIG. 3, the auscultation unit 12 includes a heart sound acquisition unit 31, a conversion unit 32, a pressure sensor 33, and a vibration unit 34. Each component of the auscultation unit 12 may be controlled by the control unit 25 of the main unit 11. The auscultation unit 12 may also have its own control unit (processor) that controls its components in cooperation with the control unit 25 of the main unit 11. The main unit 11 and the auscultation unit 12 may be connected by wire or wirelessly. As shown in FIG. 6, the auscultation unit 12 has a marker 36 on the surface opposite the portion that contacts the user's chest. The auscultation unit 12 further has a handle 37 that the user uses when holding the auscultation unit 12.
 The heart sound acquisition unit 31 acquires the sounds of the user's heart and can be provided on the side of the auscultation unit 12 that contacts the user's chest. The conversion unit 32 converts the heart sounds acquired by the heart sound acquisition unit 31 into an electrical signal. Together, the heart sound acquisition unit 31 and the conversion unit 32 may constitute a heart sound sensor that converts heart sounds into electrical signals. Heart sound sensors include, for example, MEMS (Micro Electro Mechanical Systems) heart sound sensors, heart sound sensors using piezoelectric elements, and heart sound sensors using accelerometers. Note that the heart sound acquisition device of the present disclosure is not limited to this embodiment; it also encompasses forms of the auscultation unit 12 that do not convert heart sounds into electrical signals. In that case, the heart sounds are transmitted to the doctor or user as vibrations of air or of an object.
 The pressure sensor 33 is a sensor that detects the pressure with which the auscultation unit 12 is pressed against the user's chest and outputs it as an electrical signal. For example, a pressure sensor using the piezoelectric effect can be used. The pressure sensor 33 may, for example, have a ring shape following the outer circumference of the surface of the auscultation unit 12 that contacts the user's chest. Alternatively, pressure sensors 33 may be provided at multiple locations on the outer circumference of that surface.
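 With readings from several points around the rim, the contact determination described earlier could, for example, check not only that the pressure is within a usable band but also that it is even (a large spread suggesting the unit is tilted). This is a hedged sketch only; the threshold values are illustrative placeholders, not values from the disclosure:

```python
def contact_from_pressure(readings_kpa, low=2.0, high=8.0, max_spread=2.0):
    """Judge contact from pressure readings (kPa) around the rim.

    low/high bound the acceptable contact pressure; max_spread bounds
    the allowed difference between rim sensors.  All thresholds are
    hypothetical placeholders.
    """
    if min(readings_kpa) < low:
        return "press more firmly"
    if max(readings_kpa) > high:
        return "press more gently"
    if max(readings_kpa) - min(readings_kpa) > max_spread:
        return "tilted - press evenly"
    return "good contact"
```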
 The vibration unit 34 includes a vibrator that generates vibrations perceptible to humans. Vibrators include those using eccentric motors, those using linear vibrators, and those using piezoelectric elements.
 The marker 36 is a mark used to identify the position of the auscultation unit 12 in the image captured by the imaging unit 21. The marker 36 is preferably one that allows the control unit 25 to identify the position of the auscultation unit 12 as a single point in the captured image. The marker 36 may, for example, be a pattern including a circle, in which case the control unit 25 can determine that the center of the circle is the position of the auscultation unit 12. The marker 36 is not limited to this; for example, it may be a pattern of two line segments crossing at right angles, in which case the point where the two lines intersect can be determined to be the position of the auscultation unit 12.
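 For the crossed-lines marker variant, once the two line segments have been detected in the image (the detection step itself is assumed to be done elsewhere, e.g. by an edge or line detector), the marker point is the intersection of the two lines. A standard determinant-based formula, shown here as a sketch:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through
    p3, p4, each point an (x, y) pair.

    Returns (x, y), or None if the lines are parallel.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel or coincident lines
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / d
    y = (a * (y3 - y4) - (y1 - y2) * b) / d
    return (x, y)
```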
 The handle 37 is provided to prevent the user's hand from coming between the marker 36 and the imaging unit 21 and obstructing imaging of the marker when the user holds the auscultation unit 12 while an image is being captured. It is also provided so that the patient can easily adjust the position by gripping it with both hands. Various shapes can be adopted for the handle 37.
(Processing executed by the control unit)
 As an example, as shown in FIG. 5, the control unit 25 of the main unit 11 includes the following functional blocks: an image recognition unit 25a, a camera adjustment unit 25b, a direction guide unit 25c, a contact determination unit 25d, a waveform processing unit 25e, a waveform analysis unit 25f, an estimation unit 25g, and a determination unit 25h. The processing of each functional block may be executed by the same processor or by different processors, and may be performed by a single software module or by multiple software modules. The processing of the functional blocks can be rearranged, separated, or combined, and the functions of all the functional blocks can be regarded as functions of the control unit 25.
The image recognition unit 25a recognizes the user's face, skeleton, and the like from the image captured by the imaging unit 21. The image recognition unit 25a can identify feature points from the recognized face, skeleton, and the like. The feature points include, for example, the user's left and right shoulder joints or acromion regions, as well as the eyes, ears, nose, mouth, and chin of the face. The image recognition unit 25a can further recognize the marker 36 of the auscultation unit 12 in the image captured by the imaging unit 21 and identify the position of the auscultation unit 12.
The image recognition unit 25a can further recognize, from the image captured by the imaging unit 21, at least one of the distance between the imaging unit 21 and the user's body and the orientation of the user's body. The image recognition unit 25a can recognize when the orientation of the user's body deviates from the correct orientation, for example, facing the front. The image recognition unit 25a can also recognize when a predetermined region of the user's body is outside the field of view of the imaging unit 21.
When the image of the user's body recognized by the image recognition unit 25a deviates from the correct orientation and position for heart sound measurement, the camera adjustment unit 25b can control the imaging adjustment unit 24 to adjust the imaging direction of the imaging unit 21.
The direction guide unit 25c identifies the position where the auscultation unit 12 should be placed from the positions of the feature points recognized by the image recognition unit 25a and the relative position information stored in the storage unit 22. The relative position information is information indicating the position where the auscultation unit 12 should be placed relative to the feature points. When the position of the auscultation unit 12 identified from the image captured by the imaging unit 21 deviates from the position where the auscultation unit 12 should be placed, the direction guide unit 25c guides the user, on the image displayed on the display unit 23, to move the auscultation unit 12 to the position where it should be placed.
For example, the direction guide unit 25c displays, on the image shown on the display unit 23, the position where the auscultation unit 12 should be placed. This position may be indicated by a circular or square mark in a specific color such as red, or by a blinking mark. The direction guide unit 25c may further superimpose on the image displayed on the display unit 23 the direction in which the auscultation unit 12 should be moved, using figures such as straight lines and arrows, characters, and the like. The direction in which the auscultation unit 12 should be moved is the direction toward the position where the auscultation unit 12 should be placed. The direction guide unit 25c may guide the user in the direction in which the auscultation unit 12 should be moved by voice through the speaker 27. When the user is moving the auscultation unit 12 in a direction different from the direction in which it should be moved, the direction guide unit 25c may vibrate the vibration unit 34 to notify the user that the auscultation unit 12 is being moved in the wrong direction. Conversely, the direction guide unit 25c may vibrate the vibration unit 34 when the user is moving the auscultation unit 12 in the direction in which it should be moved.
When the position of the auscultation unit 12 coincides with the position where it should be placed, the direction guide unit 25c may notify the user that the auscultation unit 12 is at the correct measurement position by superimposing at least one of characters and figures on the image displayed on the display unit 23, and/or by emitting sound through the speaker 27, and/or by vibrating the vibration unit 34.
The contact determination unit 25d can determine the contact state of the auscultation unit 12 with the user's body based on the pressure detected by the pressure sensor 33. The contact determination unit 25d may take the heart sound waveform into account when determining the contact state. For example, the contact determination unit 25d may determine that the pressing force of the auscultation unit 12 against the user's body is insufficient when the pressure detected by the pressure sensor 33 is equal to or less than a predetermined value and/or when at least one of the I sound and the II sound is not detected in the heart sound waveform by the waveform analysis unit 25f described later. When the contact determination unit 25d determines that the pressing force is insufficient, it can guide the user to press the auscultation unit 12 more firmly, using at least one of an image displayed on the display unit 23 and sound from the speaker 27. The auscultation unit 12 may also include a lamp indicating that the pressing force of the auscultation unit 12 is insufficient. The contact determination unit 25d may control the lighting of this lamp according to the output of the pressure sensor 33.
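The contact judgment described above combines a pressure threshold with the presence of heart sounds. A minimal sketch, in which the function name, the threshold value, and the use of boolean detection flags are all illustrative assumptions:

```python
def contact_ok(pressure_kpa, s1_detected, s2_detected, threshold_kpa=2.0):
    """Judge whether the auscultation unit is pressed firmly enough against
    the body. threshold_kpa is an illustrative value, not from the disclosure."""
    if pressure_kpa <= threshold_kpa:
        return False  # detected pressure at or below the predetermined value
    if not (s1_detected or s2_detected):
        return False  # neither I sound nor II sound found in the waveform
    return True

print(contact_ok(1.5, True, True))   # → False (insufficient pressing force)
print(contact_ok(3.0, True, False))  # → True
```

When this returns False, the device would prompt the user (screen, voice, or lamp) to increase the pressing force, as described above.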
The waveform processing unit 25e removes noise from the heart sound waveform acquired by the auscultation unit 12 and converted into an electrical signal by the conversion unit 32. For example, the waveform processing unit 25e performs filtering on the heart sound electrical signal to remove noise not derived from the heartbeat, such as environmental sounds and breathing sounds. When the main unit 11 acquires electrocardiogram and pulse wave signals from the electrodes 13 and the arm band device 14, the waveform processing unit 25e may also remove noise from the waveforms of these signals.
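As a crude stand-in for such filtering, slow drift (respiration-like components) can be removed by subtracting a long moving average, and high-frequency hiss reduced with a short one. The window lengths and function names are illustrative; an actual device would use properly designed band-pass filters:

```python
def moving_average(signal, window):
    """Causal moving average; the window grows from 1 at the start."""
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - window + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out

def remove_baseline_and_hiss(signal, slow_win=50, fast_win=4):
    """Subtract a slow moving average (baseline drift), then smooth
    with a short moving average (high-frequency noise)."""
    slow = moving_average(signal, slow_win)
    detrended = [s - m for s, m in zip(signal, slow)]
    return moving_average(detrended, fast_win)
```

A constant (pure baseline) input comes out as all zeros, while heart-sound bursts, being faster than the slow window, pass through largely intact.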
The waveform analysis unit 25f analyzes the heart sound waveform. The waveform analysis unit 25f extracts feature quantities from the heart sound waveform from which noise has been removed by the waveform processing unit 25e. For example, the waveform analysis unit 25f extracts the timings of the I sound and II sound of the heart sounds. The waveform analysis unit 25f may analyze accentuation, splitting, and the like of the I sound and II sound. The waveform analysis unit 25f further analyzes the electrocardiogram and pulse wave waveforms. For example, the waveform analysis unit 25f extracts the timing, width, magnitude, and the like of the Q wave, R wave, and S wave from the electrocardiogram waveform. The waveform analysis unit 25f can extract, for example, the timing and duration of diastole and systole from the pulse wave waveform. The waveform analysis unit 25f can analyze the electrical signal waveform converted from the heart sounds in temporal synchronization with the electrocardiogram and pulse wave waveforms acquired by the electrodes 13 and the arm band device 14.
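One simple way to obtain candidate I-sound and II-sound timings is to find where an amplitude envelope crosses above a threshold; alternating crossings can then be labeled as the two sounds. This is a sketch under those assumptions, not the disclosed analysis method:

```python
def detect_heart_sound_onsets(envelope, threshold):
    """Return sample indices where the amplitude envelope rises above
    threshold. Successive onsets alternate between I sound and II sound
    candidates under this simplified model."""
    onsets = []
    above = False
    for i, v in enumerate(envelope):
        if v >= threshold and not above:
            onsets.append(i)  # rising crossing: a heart sound begins here
            above = True
        elif v < threshold:
            above = False
    return onsets

env = [0, 0, 5, 6, 0, 0, 0, 4, 5, 0, 0]
print(detect_heart_sound_onsets(env, 3))  # → [2, 7]
```

The interval between paired onsets would correspond to the systolic period, which can then be cross-checked against the ECG R wave and the pulse wave timing described above.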
The estimation unit 25g calculates hemodynamic parameters based on the feature quantities extracted by the waveform analysis unit 25f. The hemodynamic parameters include at least one of left ventricular pressure and pulmonary artery pressure. The estimation unit 25g may estimate the hemodynamic parameters by machine learning, as described above. The means by which the estimation unit 25g calculates the hemodynamic parameters is not limited to one using machine learning.
The determination unit 25h acquires the hemodynamic parameters from the estimation unit 25g. The determination unit 25h continuously monitors the hemodynamic parameters and determines changes and/or abnormalities in the condition of the heart.
Note that the control unit 25 need not execute at least part of the processing of the waveform processing unit 25e, the waveform analysis unit 25f, the estimation unit 25g, and the determination unit 25h. These processes may be performed on the medical institution system 16 side instead of by the control unit 25. These processes may also be performed by the server 18 provided at a location different from the medical institution.
Next, the processing executed by the hemodynamic monitoring system 10 will be described with reference to flowcharts. In the following description, the processing executed by the image recognition unit 25a, the camera adjustment unit 25b, the direction guide unit 25c, the contact determination unit 25d, the waveform processing unit 25e, the waveform analysis unit 25f, the estimation unit 25g, and the determination unit 25h is described as processing executed by the control unit 25.
(Processing for setting the position where the auscultation unit should be placed)
Before the user uses the hemodynamic monitoring system 10 at home, the position of the auscultation unit 12 when acquiring heart sounds, relative to the feature points, must be identified and stored at a medical institution. The procedure for setting the position at which the auscultation unit 12 acquires heart sounds will be described with reference to the flowchart of FIG. 7.
First, a doctor at a medical institution uses the heart sound acquisition device 15 to bring the auscultation unit 12 into contact with the user's chest and searches for a position where heart sounds can be acquired satisfactorily (step S101). The heart sound acquisition device 15 used at this time may be the same as or different from the device the user uses at home.
The control unit 25 of the main unit 11 determines whether heart sounds have been acquired from the auscultation unit 12 (step S102). The control unit 25 can determine that heart sounds have been acquired when the heart sound signal converted into an electrical signal by the conversion unit 32 includes predetermined sounds, for example, the I sound and/or the II sound. The control unit 25 can determine that heart sounds could not be acquired when the predetermined sound signals in the heart sound electrical signal are buried in noise and cannot be identified. When heart sounds have been acquired (step S102: Yes), the control unit 25 proceeds to the next step S104. When heart sounds could not be acquired (step S102: No), the control unit 25 proceeds to step S103.
In step S103, the control unit 25 uses the display unit 23 and/or the speaker 27 to guide the doctor to correct the position of the auscultation unit 12. The doctor changes the position at which the auscultation unit 12 acquires heart sounds according to the guidance. After step S103, the process returns to step S101, and the doctor again brings the auscultation unit 12 into contact with the user's chest. In this procedure, the doctor determines the position of the auscultation unit 12 according to guidance from the control unit 25, but this is not limiting; the doctor may instead determine the position at which heart sounds are acquired by auscultation.
When the auscultation unit 12 has acquired heart sounds in step S102, the control unit 25 of the main unit 11 controls the imaging unit 21 to capture an image of the user's upper body and the auscultation unit 12 including the marker 36, while the doctor holds the auscultation unit 12 against the user's chest (step S104).
When the imaging unit 21 captures an image of the user's upper body and the auscultation unit 12, the control unit 25 attempts to recognize the user's face and skeleton in the captured image (step S105). When the user's face and skeleton cannot be recognized in the captured image (step S105: No), the control unit 25 proceeds to step S106.
In step S106, the control unit 25 adjusts the orientation of the imaging unit 21 so that the captured image includes the image of the user. The control unit 25 may control the imaging adjustment unit 24 to automatically adjust the orientation of the imaging unit 21. When the orientation of the imaging unit 21 cannot be fully corrected within the range adjustable by the imaging adjustment unit 24, the control unit 25 controls the display unit 23 and the speaker 27 to guide the doctor to move the main unit 11 and change the orientation of the imaging unit 21. The doctor adjusts the orientation of the imaging unit 21 accordingly. After the orientation of the imaging unit 21 has been adjusted, the control unit 25 returns to the processing of step S104.
When the user's face, skeleton, and the like can be recognized in step S105 (step S105: Yes), the control unit 25 extracts the user's feature points from the image captured by the imaging unit 21 (step S107). As shown in FIG. 8, the feature points include, for example, the left shoulder 41 and the right shoulder 42, as well as parts of the face 43 such as the eyes, nose, mouth, ears, and chin. The left shoulder 41 and the right shoulder 42 may be, for example, near the shoulder joint or acromion. These feature points can be distinguished to some extent even when the user is clothed. The feature points may further include the user's clavicle and/or nipple. To recognize those feature points, the user must be imaged with the upper body bare. In the following, the feature points of the left shoulder 41, the right shoulder 42, and the face 43, which are recognizable to some extent even when the user is clothed, are used.
After extracting the feature points in step S107, the control unit 25 determines two axes based on the feature points (step S108). The control unit 25 determines, for example, the midline and a horizontal line as the two axes. The midline is a line connecting the center points of pairs of symmetrically located feature points, such as the eyes, with features such as the mouth or nose. The midline may serve as the y-axis. The horizontal line is, for example, a line connecting symmetrically located feature points such as the left shoulder 41 and the right shoulder 42. The horizontal line may serve as the x-axis, perpendicular to the y-axis.
When the two axes have been determined, the control unit 25 determines the position of the auscultation unit 12 on the image captured by the imaging unit 21 (step S109). The position of the auscultation unit 12 is determined from the position of the marker 36 on the image. The position of the auscultation unit 12 can be expressed as coordinates on the x-axis and y-axis determined in step S108. That is, the position of the auscultation unit 12 is expressed as relative position information indicating its position relative to the feature points on the image.
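Expressing the marker position in such a feature-point frame can be sketched as follows. Here the origin, normalization by shoulder width, and the simplification of deriving the y-axis from the shoulder line alone are assumptions made for illustration, not the disclosed implementation:

```python
import math

def relative_position(marker, left_shoulder, right_shoulder):
    """Express the marker position in a body frame: origin at the shoulder
    midpoint, x along the shoulder line, y perpendicular to it (midline
    direction). Coordinates are normalized by shoulder width so they
    transfer between images taken at different distances."""
    ox = (left_shoulder[0] + right_shoulder[0]) / 2
    oy = (left_shoulder[1] + right_shoulder[1]) / 2
    ax = right_shoulder[0] - left_shoulder[0]
    ay = right_shoulder[1] - left_shoulder[1]
    width = math.hypot(ax, ay)
    ux, uy = ax / width, ay / width  # unit vector along the shoulder line
    vx, vy = -uy, ux                 # perpendicular unit vector (midline)
    dx, dy = marker[0] - ox, marker[1] - oy
    return ((dx * ux + dy * uy) / width, (dx * vx + dy * vy) / width)

# Marker 40 px below the midpoint of a 200 px-wide shoulder line
print(relative_position((100, 140), (0, 100), (200, 100)))  # → (0.0, 0.2)
```

The returned pair is the relative position information stored in step S110: a point on the midline, one fifth of the shoulder width below the shoulder line, regardless of camera distance.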
When the position of the auscultation unit 12 has been determined in step S109, the control unit 25 stores the relative position information of the auscultation unit 12 in the storage unit 22 and/or transmits it to the server 18 (step S110). When the user will use at home the same heart sound acquisition device 15 that was used at the medical institution to set the position where the auscultation unit 12 should be placed, the control unit 25 may store the relative position information of the auscultation unit 12 in the storage unit 22. When the user will use at home a heart sound acquisition device 15 different from the one used at the medical institution, the control unit 25 stores the relative position information of the position where the auscultation unit 12 should be placed on the server 18 so that it can be read from the user's home. Before the user uses the heart sound acquisition device 15 at home, the control unit 25 acquires the relative position information stored on the server 18 via the communication unit 26 and stores it in the storage unit 22.
In the above description, a doctor at a medical institution uses a device similar to the heart sound acquisition device 15 that the patient uses at home. However, at the medical institution, a dedicated device different from the device the patient uses at home may be used to identify the position where the auscultation unit 12 should be placed.
(Processing for calculating hemodynamic parameters at home)
Next, the processing for calculating hemodynamic parameters, performed at home by the user, who is a patient, will be described with reference to the flowcharts of FIGS. 9 and 10.
First, the user attaches the electrodes 13 for acquiring an electrocardiogram and the arm band device 14 for acquiring a pulse wave (step S201).
Next, the user positions the auscultation unit 12 to acquire heart sounds (step S202). The details of the positioning of the auscultation unit 12 are described below with reference to FIG. 10. The procedure shown in the flowchart of FIG. 10 corresponds to the heart sound acquisition method of the present disclosure.
The user places the main unit 11 on a desk or the like so that the imaging unit 21 of the heart sound acquisition device 15 is at an appropriate position and orientation for imaging the user's upper body. When the imaging unit 21 is independent of the main unit 11, the user places the imaging unit 21 at an appropriate position. In this state, the user captures an image of his or her own upper body using the imaging unit 21 (step S301). The control unit 25 of the main unit 11 displays the image captured by the imaging unit 21 on the display unit 23. The image displayed on the display unit 23 may be horizontally flipped so that it appears to the user as a mirror image. The imaging unit 21 of the main unit 11 may capture images of the user continuously. The following processing may be executed on the continuously captured images.
The control unit 25 of the main unit 11 attempts to recognize the user's face and skeleton in the image captured by the imaging unit 21 (step S302). When the user's face and skeleton can be recognized in the captured image (step S302: Yes), the control unit 25 proceeds to the next step S304. When the user's face and skeleton cannot be recognized in the captured image (step S302: No), the range imaged by the imaging unit 21 may not be appropriate. In this case, the control unit 25 proceeds to step S303.
In step S303, the control unit 25 adjusts the position and orientation of the imaging unit 21. The control unit 25 may control the imaging adjustment unit 24 to automatically adjust the orientation of the imaging unit 21. When the range imaged by the imaging unit 21 cannot be fully adjusted to an appropriate range, the control unit 25 controls the display unit 23 and/or the speaker 27 to guide the user to move the main unit 11 and change the orientation of the imaging unit 21. The user adjusts the position and orientation of the imaging unit 21 accordingly. After step S303, the processing of the flowchart returns to step S301.
In step S304, the control unit 25 attempts to display the position where the auscultation unit 12 should be placed, superimposed on the image of the user's upper body displayed on the display unit 23. Specifically, the control unit 25 extracts the feature points included in the image captured by the imaging unit 21 from the user's face and skeleton recognized in step S302. Based on the feature points, the control unit 25 identifies axes corresponding to the two axes identified in step S107. Based on the relative position information stored in the storage unit 22, the control unit 25 can identify the position where the auscultation unit 12 should be placed as coordinates on these two axes.
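This step is the inverse of the clinic-side recording: the stored relative coordinates are mapped back onto today's image using today's feature points. A minimal sketch under the same illustrative body frame as before (origin at the shoulder midpoint, shoulder-width-normalized coordinates; names are assumptions):

```python
import math

def target_pixel(rel, left_shoulder, right_shoulder):
    """Map stored relative coordinates (shoulder-width units) back to a
    pixel position in the current image, given today's shoulder points."""
    ox = (left_shoulder[0] + right_shoulder[0]) / 2
    oy = (left_shoulder[1] + right_shoulder[1]) / 2
    ax = right_shoulder[0] - left_shoulder[0]
    ay = right_shoulder[1] - left_shoulder[1]
    width = math.hypot(ax, ay)
    ux, uy = ax / width, ay / width  # unit vector along the shoulder line
    vx, vy = -uy, ux                 # perpendicular unit vector (midline)
    rx, ry = rel
    return (ox + (rx * ux + ry * vx) * width,
            oy + (rx * uy + ry * vy) * width)

# Relative position (0.0, 0.2) mapped onto a smaller image taken at home
print(target_pixel((0.0, 0.2), (20, 50), (120, 50)))  # → (70.0, 70.0)
```

Because the coordinates are normalized, the same stored pair lands at the correct chest location even when the home camera is closer or farther than the clinic camera was.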
For example, as shown in FIG. 11, the position where the auscultation unit 12 should be placed is displayed on the image as the target position P. The target position P may be displayed on the image with any mark. Since the image of the user's upper body displayed on the display unit 23 is horizontally flipped, the target position P is also displayed at a horizontally flipped position corresponding to the position where the auscultation unit 12 should be placed, which was set in advance at the medical institution.
When the target position P where the auscultation unit 12 should be placed can be superimposed on the image displayed on the display unit 23 (step S304: Yes), the control unit 25 proceeds to the next step S305. When the target position P cannot be superimposed on the image displayed on the display unit 23 (step S304: No), the range imaged by the imaging unit 21 may not be appropriate. In this case, the control unit 25 proceeds to step S303 described above and adjusts the position and orientation of the imaging unit 21.
In step S304, the distance from the user's body to the imaging unit 21 and the orientation (inclination) of the user's body with respect to the imaging unit 21 may differ from the conditions under which the image was captured at the medical institution. Based on the arrangement of the feature points included in the image captured by the imaging unit 21, the control unit 25 can transform the coordinates of the auscultation unit 12 in the image captured at the medical institution into coordinates in the image being captured at home. Alternatively, the control unit 25 may use the display unit 23 and/or the speaker 27 to guide the user to change position and orientation so that the position, orientation, and size of the user's body substantially match those in the image captured at the medical institution.
In step S305, while viewing the image displayed on the display unit 23, the user brings the auscultation unit 12 into contact with the region of his or her own chest corresponding to the position at which the target position P is displayed on the display unit 23 (step S305). At this time, the control unit 25 may prompt the user, using the display on the display unit 23 and/or sound from the speaker 27, to bring the auscultation unit 12 into contact near the position where the target position P is displayed.
When the user brings the auscultation unit 12 into contact with the chest in step S305, the control unit 25 detects the image of the marker 36 included in the image captured by the imaging unit 21 and attempts to recognize the position of the auscultation unit 12 (step S306). When the auscultation unit 12 is recognized (step S306: Yes), the control unit 25 proceeds to the processing of step S308. When the auscultation unit 12 cannot be recognized (step S306: No), the control unit 25 proceeds to step S307. In this case, the likely reason for the failure is that the auscultation unit 12 is not within the recognizable range of the image.
In step S307, the control unit 25 guides the user to change the position of the auscultation unit 12, because it is not at the correct position, by displaying characters on the display unit 23 and/or generating sound from the speaker 27. Following this guidance, the user changes the position of the auscultation unit 12 placed on the chest. After step S307, the processing of the flowchart returns to step S305.
In step S308, the control unit 25 guides the direction in which the auscultation unit 12 should be moved and the movement distance, based on the coordinates of the target position P and the coordinates of the current position of the auscultation unit 12 included in the image captured by the imaging unit 21. The control unit 25 can guide the movement direction and distance by superimposing figures and/or characters on the image displayed on the display unit 23, as well as by sound from the speaker 27 and vibration from the vibration unit 34. For example, the control unit 25 may display a message such as "3 cm downward" on the image displayed on the display unit 23. The control unit 25 may also utter the same content by voice. Further, for example, when the auscultation unit 12 approaches the target position P, the control unit 25 can guide the user in the direction in which to move the auscultation unit 12 by blinking the mark indicating the target position P, shortening the interval between sounds and vibrations, and the like.
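Producing such a movement hint from the two coordinate pairs can be sketched as follows; the message wording, the pixel units, and the tolerance value are illustrative assumptions (the disclosure's example uses centimeters, which would require a pixel-to-body-scale conversion):

```python
import math

def guidance_message(target, current, tolerance_px=10):
    """Build a movement hint from target and current marker coordinates
    (image pixels, y growing downward)."""
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    dist = math.hypot(dx, dy)
    if dist <= tolerance_px:
        return "in position"  # within threshold: treated as matched (S309)
    horiz = "right" if dx > 0 else "left"
    vert = "down" if dy > 0 else "up"
    direction = horiz if abs(dx) >= abs(dy) else vert
    return f"move {direction} about {dist:.0f} px"

print(guidance_message((100, 200), (100, 120)))  # → "move down about 80 px"
```

The same distance value could drive the audio and vibration cues, for example by shortening the beep or vibration interval as `dist` decreases.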
 When the difference between the coordinates of the target position P and the coordinates of the current position of the auscultation unit 12 becomes equal to or less than a predetermined value, the control unit 25 can determine that the current position of the auscultation unit 12 matches the target position P (the position where the auscultation unit 12 should be placed). If the control unit 25 determines that the current position of the auscultation unit 12 matches the target position P (step S309: Yes), the process proceeds to step S310. If the control unit 25 determines that the current position does not match the target position P (step S309: No), the process returns to step S308 and the position of the auscultation unit 12 is adjusted again.
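 As an illustrative sketch only (not part of the embodiment described above), the match test of step S309 and the directional cue of step S308 could be implemented as follows; the function name, pixel threshold, and coordinate convention are assumptions introduced here for illustration:

```python
import math

def guide_move(target, current, threshold=10.0):
    """Return (matched, direction, distance) toward the target position P.

    target, current: (x, y) image coordinates of the target position P and of
    the auscultation unit; threshold is the predetermined value below which
    the two positions are treated as matching (step S309: Yes).
    """
    dx = target[0] - current[0]
    dy = target[1] - current[1]
    distance = math.hypot(dx, dy)
    if distance <= threshold:
        return True, None, distance
    # Pick the dominant axis for a simple verbal cue such as "3 cm downward".
    # In image coordinates, y grows downward, so dy > 0 means "down".
    if abs(dx) >= abs(dy):
        direction = "right" if dx > 0 else "left"
    else:
        direction = "down" if dy > 0 else "up"
    return False, direction, distance
```

The returned direction and distance would feed the superimposed text, voice, and vibration guidance of step S308.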
 In step S310, the control unit 25 uses the display unit 23, the speaker 27 and/or the vibration unit 34 to notify the user that the auscultation unit 12 is at the target position P. For example, the control unit 25 changes the color of the mark indicating the target position P on the display unit 23, and/or causes the speaker 27 to announce "set to the correct position", and/or causes the vibration unit 34 to vibrate. In this embodiment, the display unit 23, the speaker 27 and/or the vibration unit 34 constitute a notification unit that reports that the auscultation unit 12 is located at the position where it should be placed.
 When it is notified in step S310 that the auscultation unit 12 is at the target position P, the user fixes the auscultation unit 12 at that position (step S311). That is, the user keeps the hand holding the handle 37 of the auscultation unit 12 still at that position.
 When the position of the auscultation unit 12 is fixed, the control unit 25 acquires the electrical signal of the heart sound from the auscultation unit 12 and determines from its waveform whether the heart sound is being acquired correctly (step S312). Based on the waveform of the heart sound converted into an electrical signal, the control unit 25 can judge whether the contact state of the auscultation unit 12 with the user's body is good. In addition to the waveform, the control unit 25 may acquire the pressure detected by the pressure sensor 33 and use it to judge the contact state of the auscultation unit 12 with the user's body.
 If the heart-sound waveform is not acquired correctly (step S312: No), the control unit 25 uses the display unit 23 and the speaker 27 to guide the user to finely adjust the pressing force and/or the position of the auscultation unit 12 (step S313). If the pressure of the auscultation unit 12 against the user's chest is insufficient, the control unit 25 may notify the user to press harder. If the pressure is sufficient, the control unit 25 may guide the user to move the auscultation unit 12 a small distance from its current position. After step S313, the user fixes the auscultation unit 12 again (step S311), and the control unit 25 performs the process of step S312 again.
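 The branching of steps S312 and S313 can be sketched as follows. This is an illustration only: the amplitude and pressure thresholds, units, and function name are assumptions, not values taken from the embodiment.

```python
def adjustment_advice(waveform, pressure, min_amplitude=0.05, min_pressure=1.0):
    """Judge contact quality and choose a guidance message (steps S312/S313).

    waveform: samples of the heart sound converted to an electrical signal;
    pressure: value from the pressure sensor 33 (illustrative units).
    """
    amplitude = max(waveform) - min(waveform)
    if amplitude >= min_amplitude:
        return "ok"               # S312: Yes - waveform acquired correctly
    if pressure < min_pressure:
        return "press harder"     # S313: pressing force insufficient
    return "move slightly"        # S313: pressure sufficient, adjust position
```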
 If the heart-sound waveform is acquired correctly in step S312 (step S312: Yes), the control unit 25 returns to the processing of the flowchart in FIG. 9.
 In the flowchart of FIG. 9, once the auscultation unit 12, the electrodes 13, and the arm band device 14 have been correctly attached or fixed in steps S201 and S202, the control unit 25 starts automatic measurement of the heart sound, electrocardiogram, and pulse wave (step S203). The measured heart sound is the waveform converted into an electrical signal by the conversion unit 32.
 The control unit 25 analyzes the waveforms of the heart sound, electrocardiogram, and pulse wave measured in step S203 (step S204). The control unit 25 can temporally synchronize the heart sound, electrocardiogram, and pulse wave waveforms, and can extract feature quantities from each waveform.
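 One way to realize the temporal synchronization of step S204 is to resample every measured waveform onto a common time base. The sketch below is an assumption for illustration (the embodiment does not specify the alignment method); linear interpolation is used here simply because it is the most common choice:

```python
import numpy as np

def synchronize(signals, timestamps, t_common):
    """Resample each waveform onto the shared time base t_common.

    signals, timestamps: dicts keyed by waveform name (e.g. "heart_sound",
    "ecg", "pulse_wave"); each value is a sequence of samples / sample times.
    """
    return {name: np.interp(t_common, timestamps[name], signals[name])
            for name in signals}
```

After this step, feature quantities (such as intervals between corresponding events) can be computed on mutually aligned samples.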
 The control unit 25 calculates hemodynamic parameters based on the analysis results of each waveform in step S204. As described above, the control unit 25 may calculate the hemodynamic parameters by machine learning that takes the feature quantities of each waveform as input parameters and outputs the hemodynamic parameters. The hemodynamic parameters include left ventricular pressure and pulmonary artery pressure.
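 The feature-to-parameter mapping can be sketched with a pre-trained linear model standing in for the machine-learning model mentioned above; the actual model, its weights, and the feature set are not specified in the text, so everything below is an illustrative assumption:

```python
import numpy as np

def estimate_hemodynamics(features, weights, bias):
    """Map waveform feature quantities to hemodynamic parameters.

    features: feature vector extracted in step S204; weights and bias would
    come from offline training. The two outputs stand for left ventricular
    pressure and pulmonary artery pressure (illustrative, in mmHg).
    """
    return np.asarray(features) @ np.asarray(weights) + np.asarray(bias)
```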
 The control unit 25 determines whether the hemodynamic parameters calculated in step S205 are normal (step S206). For example, if the hemodynamic parameters include abnormal values that could not be obtained from measurement of a normal human body, they are judged not to be normal. The parameters are also judged not to be normal when they cannot be calculated at all, for example when the measured waveforms are so disturbed that the feature quantities cannot be extracted. If the hemodynamic parameters are judged not to be normal (step S206: No), the control unit 25 returns to step S203 and repeats the measurement. If they are judged to be normal (step S206: Yes), the control unit 25 proceeds to step S207.
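 The sanity check of step S206 amounts to a range test plus a calculability test. As a minimal sketch, with the range values being illustrative assumptions rather than clinical limits:

```python
def parameters_plausible(params, ranges):
    """Return True only if every parameter was calculated and lies within a
    physiologically possible range (step S206). None marks a value that
    could not be calculated from the disturbed waveforms.
    """
    for name, value in params.items():
        low, high = ranges[name]
        if value is None or not (low <= value <= high):
            return False
    return True
```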
 In step S207, the control unit 25 displays the measurement results including the hemodynamic parameters on the display unit 23, and/or stores them in the storage unit 22, and/or transmits them to the medical institution system 16. The control unit 25 may cause the display unit 23 to display the waveforms of the body information and/or the time-series transition of the hemodynamic parameters as a graph, and may further display the determination result of whether the hemodynamic parameters are normal. From this display, the user can learn, for example, the day-over-day change and trend of the measurement results and whether they are within reference values. The control unit 25 can further present the user with a recommended response such as "continue measurement as is" or "contact a doctor immediately". At the medical institution, doctors and/or nurses can continuously monitor the received hemodynamic parameters, check the user's condition, and change prescriptions, for example. After confirming the hemodynamic parameters, a doctor or nurse at the medical institution can send comments or instructions from the medical institution system 16 to the main unit 11. The control unit 25 of the main unit 11 may display these comments or instructions on the display unit 23 to notify the user.
 In the flowchart of FIG. 9, the processing of steps S204 to S207 is described as being executed by the control unit 25 of the main unit 11. However, the hemodynamic monitoring system 10 may instead be configured so that the main unit 11 transmits the results measured in step S203 to the medical institution system 16, and the medical institution system 16 analyzes the measurement results and calculates the hemodynamic parameters.
 As described above, according to the heart sound acquisition device 15 of the present embodiment, a doctor determines in advance the position where the auscultation unit 12 should be placed, and this position is stored in the storage unit 22 as a position relative to feature points of the user's body. The control unit 25 then guides the user, on the image displayed on the display unit 23, so that the auscultation unit 12 comes to the position where it should be placed, based on the user's feature points included in the captured image and the stored relative position. This allows the user to easily position the auscultation unit 12 and acquire heart sounds.
 Further, according to the heart sound acquisition device 15 of the present embodiment, the user, who is a patient, can attach the auscultation unit 12 at the same position every time and acquire a stable heart-sound waveform without assistance. This eliminates the need for support from doctors, family members, caregivers, and the like, and makes it easier to continue measurement even in their absence. Moreover, because the heart-sound waveform can be acquired stably, the accuracy of algorithms that analyze the waveform can be improved.
 In addition, since the heart sound acquisition device 15 of the present embodiment displays the image captured by the imaging unit 21 on the display unit 23 and guides the user on that image to the position where the auscultation unit 12 should be placed, the guidance is intuitive and easy to understand. The heart sound acquisition device 15 is therefore easy for a patient to use and makes it easy to continue measurement.
 Furthermore, in the present embodiment, the left shoulder, right shoulder, and face are used, as an example, as the feature points of the user's body for positioning the auscultation unit 12. This makes it possible to extract the feature points and acquire heart sounds even when the user is wearing clothes.
 In the hemodynamic monitoring system 10 of the present embodiment, the heart-sound waveform can be acquired stably and in synchronization with the electrocardiogram and pulse wave, so hemodynamic parameters that are highly relevant to the cardiac condition of a heart failure patient can be monitored non-invasively. This makes it possible to grasp the state of a heart failure patient at home and to respond when that state changes.
 The above embodiment describes a case in which the heart sound acquisition device 15 is used to monitor the hemodynamic parameters of the user, who is a patient, at home. In another embodiment, the heart sound acquisition device may be installed at a medical institution so that heart sounds are acquired under the same conditions for each patient every time. In this case, the storage unit 22 stores, for each patient, relative position information on the position where the auscultation unit 12 should be placed, and the control unit 25 reads the relative position information for the patient from the storage unit 22 and positions the auscultation unit 12 for that patient. Also in this case, the display unit 23 may be configured to display, toward the doctor, an image that is not horizontally mirrored.
 In one embodiment, an external camera may be used as the imaging unit 21 instead of the camera built into the main unit 11. For example, a smartphone can be connected to the main unit 11 and its camera used as the imaging unit 21. This arrangement has several advantages: the main unit 11 does not need its own camera, the camera's orientation is easy to adjust because it is separate from the main unit 11, and the two screens of the main unit 11 and the smartphone can both be used.
 In one embodiment, a smartphone may be used as the main unit 11. In this case, the smartphone's camera serves as the imaging unit 21 in FIG. 3, its built-in memory as the storage unit 22, its display as the display unit 23, its processor as the control unit 25, its wireless communication function as the communication unit 26, and its built-in speaker as the speaker 27. As the imaging adjustment unit 24, a dedicated stand that works with the smartphone to adjust the orientation of its display may be provided. The smartphone's external connection terminal can be used to connect the auscultation unit 12 and other sensors that acquire body information.
 The smartphone stores in its memory an application for causing its processor to function as the control unit 25 of the heart sound acquisition device 15 of the present disclosure. When using the smartphone as the heart sound acquisition device 15, the user launches this application; subsequent operation is the same as for the heart sound acquisition device 15 of the above embodiment. This makes it possible to acquire heart sounds without preparing special hardware.
 In one embodiment, the imaging unit 21 may include an infrared camera capable of imaging light at wavelengths in the infrared region. Since infrared light can partially penetrate clothing, including an infrared camera in the imaging unit 21 allows an image of the body to be captured, and the feature points to be extracted more accurately, even when the user is dressed. As a result, even if some of the feature points used for positioning the auscultation unit 12 are hidden under clothing, the user can determine the position where the auscultation unit 12 should be placed while remaining dressed.
 Although the embodiments according to the present disclosure have been described with reference to the drawings and examples, it should be noted that those skilled in the art can easily make various variations or modifications based on the present disclosure, and that such variations or modifications are therefore included within the scope of the present disclosure. For example, the functions included in each component or each step can be rearranged so as not to be logically inconsistent, and multiple components or steps can be combined into one or divided. Although the embodiments of the present disclosure have been described mainly in terms of an apparatus, they can also be realized as a method including the steps executed by the components of the apparatus, as a method or program executed by a processor provided in the apparatus, or as a non-transitory computer-readable medium on which such a program is recorded. It should be understood that these are also included within the scope of the present disclosure.
 10 hemodynamic monitoring system
 11 main unit
 12 auscultation unit
 13 electrode (body information acquisition unit)
 14 arm band device (body information acquisition unit)
 15 heart sound acquisition device
 16 medical institution system
 17 communication network
 18 server
 21 imaging unit
 22 storage unit
 23 display unit
 24 imaging adjustment unit
 25 control unit
 26 communication unit
 27 speaker
 28 stand
 31 heart sound acquisition unit
 32 conversion unit
 33 pressure sensor
 34 vibration unit
 36 marker
 37 handle
 41 left shoulder (feature point)
 42 right shoulder (feature point)
 43 face
 P target position

Claims (16)

  1.  A heart sound acquisition device comprising:
     an auscultation unit configured to be able to acquire a heart sound of a user;
     an imaging unit that captures an image of the user's body including a feature point;
     a storage unit that stores a relative position, with respect to the feature point of the user, of a position where the auscultation unit should be placed;
     a display unit that displays the image captured by the imaging unit; and
     a control unit that recognizes the position of the feature point of the user from the image captured by the imaging unit, calculates the position where the auscultation unit should be placed based on the position of the feature point and the relative position stored in the storage unit, and guides the auscultation unit to the position where the auscultation unit should be placed on the image displayed on the display unit.
  2.  The heart sound acquisition device according to claim 1, wherein the auscultation unit includes a marker, and the control unit recognizes the position of the auscultation unit on the image by detecting the marker in the image captured by the imaging unit.
  3.  The heart sound acquisition device according to claim 1 or 2, wherein the control unit is configured to be able to recognize, based on the image captured by the imaging unit, at least one of a distance between the imaging unit and the body of the user and an orientation of the body of the user.
  4.  The heart sound acquisition device according to any one of claims 1 to 3, further comprising a communication unit configured to be able to communicate with a server,
     wherein the control unit is configured to acquire the relative position stored in the storage unit from the server via the communication unit.
  5.  The heart sound acquisition device according to any one of claims 1 to 4, further comprising a speaker,
     wherein the control unit uses sound from the speaker when guiding the auscultation unit to the position where the auscultation unit should be placed.
  6.  The heart sound acquisition device according to any one of claims 1 to 5, wherein, when guiding the auscultation unit to the position where the auscultation unit should be placed, the control unit superimposes at least one of characters and graphics on the image captured by the imaging unit and causes the display unit to display the result.
  7.  The heart sound acquisition device according to any one of claims 1 to 6, wherein the auscultation unit includes a pressure sensor that detects pressure between the auscultation unit and the body of the user, and the control unit determines a contact state of the auscultation unit with the body of the user based on the pressure detected by the pressure sensor.
  8.  The heart sound acquisition device according to any one of claims 1 to 7, further comprising a notification unit that reports that the auscultation unit is located at the position where the auscultation unit should be placed.
  9.  The heart sound acquisition device according to any one of claims 1 to 8, further comprising a conversion unit that converts the heart sound acquired by the auscultation unit into an electrical signal.
  10.  The heart sound acquisition device according to claim 9, wherein the control unit determines a contact state of the auscultation unit with the body of the user based on the waveform of the electrical signal into which the heart sound has been converted by the conversion unit.
  11.  The heart sound acquisition device according to claim 9 or 10, wherein the control unit is configured to analyze the waveform of the electrical signal into which the heart sound has been converted by the conversion unit.
  12.  The heart sound acquisition device according to any one of claims 9 to 11, further comprising a body information acquisition unit configured to be able to acquire body information of the user other than the heart sound,
     wherein the control unit temporally synchronizes the waveform of the electrical signal into which the heart sound has been converted by the conversion unit with a waveform of the body information acquired by the body information acquisition unit.
  13.  The heart sound acquisition device according to claim 12, wherein the body information includes at least one of an electrocardiogram and a pulse wave, and the control unit calculates a hemodynamic parameter based on the heart sound of the user acquired from the auscultation unit and the body information of the user acquired from the body information acquisition unit.
  14.  A heart sound acquisition system comprising:
     a server that stores a relative position, with respect to a feature point of a user, of a position where an auscultation unit configured to be able to acquire a heart sound of the user should be placed; and
     a heart sound acquisition device including the auscultation unit, an imaging unit that captures an image of the user's body including the feature point, a display unit that displays the image captured by the imaging unit, a communication unit that acquires the relative position from the server, and a control unit that recognizes the position of the feature point of the user from the image captured by the imaging unit, calculates the position where the auscultation unit should be placed based on the position of the feature point and the relative position acquired from the server, and guides the auscultation unit to the position where the auscultation unit should be placed on the image displayed on the display unit.
  15.  A heart sound acquisition method executed by a control unit to acquire heart sounds using an auscultation unit configured to be able to acquire a heart sound of a user, the method comprising:
     causing an imaging unit to capture an image of the user's body including a feature point;
     recognizing the position of the feature point of the user from the image captured by the imaging unit;
     calculating a position where the auscultation unit should be placed based on the position of the feature point and a relative position, stored in a storage unit, of the position where the auscultation unit should be placed with respect to the feature point of the user; and
     guiding the auscultation unit to the position where the auscultation unit should be placed on the image displayed on a display unit.
  16.  A program for acquiring heart sounds using an auscultation unit configured to be able to acquire a heart sound of a user, the program causing a processor provided in a heart sound acquisition device to execute:
     a process of causing an imaging unit to capture an image of the user's body including a feature point;
     a process of recognizing the position of the feature point of the user from the image captured by the imaging unit;
     a process of calculating a position where the auscultation unit should be placed based on the position of the feature point and a relative position, stored in a storage unit, of the position where the auscultation unit should be placed with respect to the feature point of the user; and
     a process of guiding the auscultation unit to the position where the auscultation unit should be placed on the image displayed on a display unit.
PCT/JP2022/040255 2021-10-28 2022-10-27 Heart sound acquisition device, heart sound acquisition system, heart sound acquisition method, and program WO2023074823A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021176898 2021-10-28
JP2021-176898 2021-10-28

Publications (1)

Publication Number Publication Date
WO2023074823A1 true WO2023074823A1 (en) 2023-05-04

Family

ID=86159948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/040255 WO2023074823A1 (en) 2021-10-28 2022-10-27 Heart sound acquisition device, heart sound acquisition system, heart sound acquisition method, and program

Country Status (1)

Country Link
WO (1) WO2023074823A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010525363A (en) * 2007-04-23 2010-07-22 サムスン エレクトロニクス カンパニー リミテッド Telemedicine diagnostic system and method
WO2013089072A1 (en) * 2011-12-13 2013-06-20 シャープ株式会社 Information management device, information management method, information management system, stethoscope, information management program, measurement system, control program and recording medium
JP2013123493A (en) * 2011-12-13 2013-06-24 Sharp Corp Information processing apparatus, stethoscope, control method for information processing apparatus, control program, and recording medium
JP2017000198A (en) * 2015-06-04 2017-01-05 日本光電工業株式会社 Electronic stethoscope system
US20170185737A1 (en) * 2014-09-12 2017-06-29 Gregory T. Kovacs Physical examination method and apparatus
JP2017136248A (en) * 2016-02-04 2017-08-10 公立大学法人岩手県立大学 Auscultation system
JP2021010576A (en) * 2019-07-05 2021-02-04 株式会社村田製作所 Heart monitor system and synchronizer


Similar Documents

Publication Publication Date Title
US20230029639A1 (en) Medical device system for remote monitoring and inspection
US10635782B2 (en) Physical examination method and apparatus
JP7132853B2 (en) Method and apparatus for determining the position and/or orientation of a wearable device on an object
KR102265934B1 (en) Method and apparatus for estimating ppg signal and stress index using a mobile terminal
Sun et al. Photoplethysmography revisited: from contact to noncontact, from point to imaging
JP6381918B2 (en) Motion information processing device
JP6072893B2 (en) Pulse wave velocity measurement method, measurement system operating method using the measurement method, pulse wave velocity measurement system, and imaging apparatus
JP7247319B2 (en) Systems and methods for monitoring head, spine and body health
WO2014112631A1 (en) Movement information processing device and program
CN104486989A (en) CPR team performance
JP2017512510A (en) Body position optimization and biological signal feedback of smart wearable device
JP6692420B2 (en) Auxiliary device for blood pressure measurement and blood pressure measurement device
CA2906856A1 (en) Ear-related devices implementing sensors to acquire physiological characteristics
KR20130010207A (en) System for analyze the user's health and stress
JP2001198110A (en) Body action sensing device
KR20160108967A (en) Device and method for bio-signal measurement
TWM485701U (en) Wearable pieces with healthcare functions
JP3569247B2 (en) Biological information measuring device and health management system
Horta et al. A mobile health application for falls detection and biofeedback monitoring
US20210321886A1 (en) Portable monitoring apparatus, monitoring device, monitoring system and patient status monitoring method
US11232866B1 (en) Vein thromboembolism (VTE) risk assessment system
EP3437550A1 (en) Health surveillance method and health surveillance garment
WO2023074823A1 (en) Heart sound acquisition device, heart sound acquisition system, heart sound acquisition method, and program
JP7209954B2 (en) Nystagmus analysis system
WO2023112057A1 (en) "Wearable device for collecting physiological health data"

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22887148

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE