WO2022040456A1 - Wearable auscultation device - Google Patents

Wearable auscultation device

Info

Publication number
WO2022040456A1
WO2022040456A1 (PCT/US2021/046754)
Authority
WO
WIPO (PCT)
Prior art keywords
subject
auditory signals
elements
khz
combination
Prior art date
Application number
PCT/US2021/046754
Other languages
French (fr)
Inventor
Mark A. Moehring
Anthony J. ALLEMAN
Original Assignee
Otonexus Medical Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Otonexus Medical Technologies, Inc. filed Critical Otonexus Medical Technologies, Inc.
Priority to CA3190577A (CA3190577A1)
Priority to CN202180071076.1A (CN116322513A)
Priority to JP2023512114A (JP2023539116A)
Priority to KR1020237006931A (KR20230051516A)
Priority to EP21859153.5A (EP4199817A1)
Priority to AU2021328481A (AU2021328481A1)
Publication of WO2022040456A1
Priority to US18/171,215 (US20230190222A1)

Classifications

    • A: HUMAN NECESSITIES
      • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B 7/00: Instruments for auscultation
            • A61B 7/02: Stethoscopes
              • A61B 7/04: Electric stethoscopes
            • A61B 7/003: Detecting lung or respiration noise
          • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
            • A61B 5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
              • A61B 5/0205: Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
            • A61B 5/08: Detecting, measuring or recording devices for evaluating the respiratory organs
            • A61B 5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
              • A61B 5/6801: specially adapted to be attached to or worn on the body surface
                • A61B 5/6802: Sensor mounted on worn items
                  • A61B 5/6804: Garments; Clothes
            • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
              • A61B 5/7235: Details of waveform analysis
                • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G: PHYSICS
      • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H: HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
            • G16H 50/20: for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the auscultation module 104 may comprise: (a) one or more transducer elements 114 configured to detect acoustic and/or pressure waves of the auditory signals generated by the subject; (b) one or more pressure sources 112; (c) a processor 108 in electrical communication with the one or more transducer elements 114 and/or the one or more pressure sources 112.
  • the one or more transducer elements may be a micro-machined ultrasonic transducer, such as a capacitive micro-machined ultrasonic transducer (cMUT) or a piezoelectric micro-machined ultrasonic transducer (pMUT). Examples of cMUTs are provided in U.S. Patent Application No.
  • the processor may be in electrical communication with one or more circuit elements.
  • the one or more circuit elements may comprise: a wireless (e.g., Bluetooth) transmitter and/or receiver, ultrasound digital signal processing (DSP) application specific integrated circuit, power regulator, a wireless (e.g., Bluetooth) transmitter and receiver antenna, or any combination thereof.
  • the auscultation module may comprise a heat dissipation structure, e.g., a heat sink.
  • the one or more transducer elements 114 may comprise about 1 element to about 20 elements. In some cases, the one or more transducer elements 114 may comprise about 1 element to about 2 elements, about 1 element to about 4 elements, about 1 element to about 6 elements, about 1 element to about 8 elements, about 1 element to about 10 elements, about 1 element to about 12 elements, about 1 element to about 14 elements, about 1 element to about 16 elements, about 1 element to about 18 elements, about 1 element to about 20 elements, about 2 elements to about 4 elements, about 2 elements to about 6 elements, about 2 elements to about 8 elements, about 2 elements to about 10 elements, about 2 elements to about 12 elements, about 2 elements to about 14 elements, about 2 elements to about 16 elements, about 2 elements to about 18 elements, about 2 elements to about 20 elements, about 4 elements to about 6 elements, about 4 elements to about 8 elements, about 4 elements to about 10 elements, about 4 elements to about 12 elements, about 4 elements to about 14 elements, about 4 elements to about 16 elements, about 4 elements to about 18 elements, about 4 elements to about 20 elements, or a range defined between any two of the foregoing values.
  • the one or more transducer elements 114 may comprise about 1 element, about 2 elements, about 4 elements, about 6 elements, about 8 elements, about 10 elements, about 12 elements, about 14 elements, about 16 elements, about 18 elements, or about 20 elements. In some cases, the one or more transducer elements 114 may comprise at least about 1 element, about 2 elements, about 4 elements, about 6 elements, about 8 elements, about 10 elements, about 12 elements, about 14 elements, about 16 elements, or about 18 elements. In some cases, the one or more transducer elements 114 may comprise at most about 2 elements, about 4 elements, about 6 elements, about 8 elements, about 10 elements, about 12 elements, about 14 elements, about 16 elements, about 18 elements, or about 20 elements.
  • the processor may be configured to process detected auditory signals by the one or more transducer elements 114.
  • the auscultation module 104 may comprise a circuitry 110 that may be a printed circuit board.
  • the processor 108, one or more circuit element, one or more transducer elements 114 and the one or more pressure sources 112 may be in electrical communication through the printed circuit board circuitry.
  • the printed circuit board may comprise at least 1 conductive layer, at least 2 conductive layers, at least 3 conductive layers, or at least 4 conductive layers.
  • the printed circuit board may comprise up to 1 conductive layer, up to 2 conductive layers, up to 3 conductive layers, or up to 4 conductive layers.
  • the one or more transducer elements 114 may be arranged in an array on the circuitry 110. In some cases, the one or more transducer elements 114 may be arranged in a circular array, linear array, polygonal array, or any combination thereof array.
  • the auscultation module 104 may comprise one or more pressure sources 112 configured to generate pressure directed towards the subject.
  • the one or more pressure sources 112 may comprise a mechanical percussor, e.g., a spring-loaded cam configured to transmit a mechanical vibration into the subject.
  • the one or more pressure sources 112 may comprise an acoustic percussor, e.g., a magnetic voice coil and/or speaker configured to transmit a low frequency pressure wave into the subject.
  • the auscultation module 104 may be sealed wholly or partially within an enclosure.
  • the enclosure may comprise a plastic enclosure.
  • the auscultation module may comprise a circular, rectangular, square, triangular, or trapezoidal shape, or any combination of shapes thereof.
  • the enclosure may provide one or more openings such that the one or more transducer elements 114 may receive and/or transmit auditory signals from the subject.
  • the enclosure may wholly or partially encase the one or more pressure sources such that the one or more pressure sources may be positioned in contact with the subject, yet the one or more transducer elements 114 may maintain a distance from the subject.
  • the diameter of the enclosed auscultation module 104 may be about 5 mm to about 50 mm. In some cases, the diameter of the enclosed auscultation module 104 may be about 5 mm to about 10 mm, about 5 mm to about 15 mm, about 5 mm to about 20 mm, about 5 mm to about 25 mm, about 5 mm to about 30 mm, about 5 mm to about 35 mm, about 5 mm to about 40 mm, about 5 mm to about 45 mm, about 5 mm to about 50 mm, about 10 mm to about 15 mm, about 10 mm to about 20 mm, about 10 mm to about 25 mm, about 10 mm to about 30 mm, about 10 mm to about 35 mm, about 10 mm to about 40 mm, about 10 mm to about 45 mm, about 10 mm to about 50 mm, about 15 mm to about 20 mm, about 15 mm to about 25 mm, about 15 mm to about 30 mm, about 15 mm to about 35 mm, or a range defined between any two of the foregoing values.
  • the diameter of the enclosed auscultation module 104 may be about 5 mm, about 10 mm, about 15 mm, about 20 mm, about 25 mm, about 30 mm, about 35 mm, about 40 mm, about 45 mm, or about 50 mm. In some cases, the diameter of the enclosed auscultation module 104 may be at least about 5 mm, about 10 mm, about 15 mm, about 20 mm, about 25 mm, about 30 mm, about 35 mm, about 40 mm, or about 45 mm.
  • the diameter of the enclosed auscultation module 104 may be at most about 10 mm, about 15 mm, about 20 mm, about 25 mm, about 30 mm, about 35 mm, about 40 mm, about 45 mm, or about 50 mm.
  • the one or more transducer elements 114 may be configured to detect auditory signals from about 1 kHz to about 20 kHz. In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from about 1 kHz to about 2 kHz, about 1 kHz to about 4 kHz, about 1 kHz to about 6 kHz, about 1 kHz to about 8 kHz, about 1 kHz to about 10 kHz, about 1 kHz to about 12 kHz, about 1 kHz to about 14 kHz, about 1 kHz to about 16 kHz, about 1 kHz to about 18 kHz, about 1 kHz to about 20 kHz, about 2 kHz to about 4 kHz, about 2 kHz to about 6 kHz, about 2 kHz to about 8 kHz, about 2 kHz to about 10 kHz, about 2 kHz to about 12 kHz, about 2 kHz to about 14 kHz, or a range defined between any two of the foregoing frequencies.
  • the one or more transducer elements 114 may be configured to detect auditory signals from about 1 kHz, about 2 kHz, about 4 kHz, about 6 kHz, about 8 kHz, about 10 kHz, about 12 kHz, about 14 kHz, about 16 kHz, about 18 kHz, or about 20 kHz. In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from at least about 1 kHz, about 2 kHz, about 4 kHz, about 6 kHz, about 8 kHz, about 10 kHz, about 12 kHz, about 14 kHz, about 16 kHz, or about 18 kHz.
  • the one or more transducer elements 114 may be configured to detect auditory signals from at most about 2 kHz, about 4 kHz, about 6 kHz, about 8 kHz, about 10 kHz, about 12 kHz, about 14 kHz, about 16 kHz, about 18 kHz, or about 20 kHz.
  • the disclosure provided herein describes an auscultation system 201, shown in FIG. 2, configured to detect auditory signals 218 and/or transmit auditory data of a subject to a control module 208 and/or a user interface 210.
  • the transmission of auditory data may be accomplished through a Bluetooth, WIFI, or any combination thereof transmission 205.
  • the system may comprise an auscultation module 200, described elsewhere herein.
  • the auscultation module may be configured to detect auditory signals 218 from a surface 216 of the subject.
  • the one or more auscultation modules may comprise a processing back end 202 that may comprise Bluetooth and/or WIFI data transmission and receiving 244 and/or ultrasound digital signal processing 240 integrated circuitry.
  • the auscultation module 200 may comprise one or more ultrasound transducer elements 226, positioned at a distance 222 from a surface of the subject 216 configured to detect auditory signals 218 from the surface 216 of the subject.
  • the auditory signals 218 may be generated by the subject.
  • the auditory signals 218 may be generated by the interaction of one or more pressure sources (224,220) and the subject, described elsewhere herein.
  • the processing back end 202 may comprise circuitry, e.g., a clock 241, a central processing unit (CPU) 238, analog to digital converter 235, digital to analog converter 232, filter 234, transmit pulser 236, percussion controller 230, Doppler detector 240, wireless data transmitter and receiver 244, accelerometer gyroscope integrated circuit 246, or any combination thereof, configured to control system elements (e.g., one or more ultrasound transducer elements 226 and/or one or more pressure sources 224), transmit data, receive data, or any combination thereof.
  • auditory signals 218 produced by the subject 216 may be detected by the one or more ultrasound transducer elements 226 in electrical communication with an ultrasound transmit/receive controller 228.
  • the transmit pulser 236 in electrical communication with the CPU 238 may generate one or more pulse signals that may be in electrical communication with the digital to analog converter 232.
  • the one or more pulse signals transmitted to the digital to analog converter 232 may then be transmitted electrically to the ultrasound transducer element 226 to generate an ultrasound signal directed to one or more regions of the subject.
  • the ultrasound signal directed to the one or more regions of the subject 216 may then be used to detect motion of the one or more regions of the subject as a result of the auditory signals 218 generated by the subject.
  • the CPU 238 may provide a driving signal to a percussion controller 230 configured to provide a driving signal for the one or more pressure sources 224, which may then produce auditory signals within the subject 216 that may be detected by the one or more ultrasound transducer elements 226.
  • the clock 241 of an auscultation module 200 may provide a common temporal signal to compare the detected auditory signals by the one or more ultrasound transducer elements 226 thereby determining a directionality or directional vector of an auditory signal wave front.
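As an illustration of how a shared clock could support such a directionality estimate, the sketch below cross-correlates the same auditory event as seen by two transducer elements and converts the inter-element delay into an arrival angle. This is a minimal sketch, not the disclosure's algorithm; the far-field model, element spacing, and propagation speed are assumptions.

```python
import numpy as np

def arrival_angle(sig_a, sig_b, fs, spacing_m, c=343.0):
    """Estimate a wave-front arrival angle from the inter-element delay.

    sig_a, sig_b : the same auditory event recorded on two transducer
                   elements sampled against the shared clock
    fs           : sampling rate in Hz, set by the common clock
    spacing_m    : known distance between the two elements, in meters
    c            : assumed propagation speed in m/s (343 m/s in air)
    """
    a = sig_a - np.mean(sig_a)
    b = sig_b - np.mean(sig_b)
    # Lag of best alignment, in samples (zero lag sits at index len(b) - 1).
    corr = np.correlate(a, b, mode="full")
    lag = int(np.argmax(corr)) - (len(b) - 1)
    delay_s = lag / fs
    # Far-field model: delay = spacing * sin(theta) / c.
    sin_theta = np.clip(delay_s * c / spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```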
  • the clock 241 may provide a temporal clock signal to the transmit/receive controller 228 to sample the detected auditory signals with a known time interval. The detected auditory signal may then be filtered by the filter 234.
  • the filter 234 may comprise a bandpass, notch, low-pass, high-pass, or any combination thereof filter.
  • the signal may then be digitized by an analog to digital converter 235 and passed to a Doppler detection circuit 240.
  • the Doppler detection circuit 240 may convert the digitized data (i.e., the Doppler ultrasound data of surface displacement of the subject in units of distance) into a relative displacement. The relative displacement may then be converted into audio data.
  • the clock 241 may provide a temporal clock signal to the Doppler detection circuit to sample the digitized auditory signal with a known time interval.
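A minimal sketch of the displacement-to-audio conversion described above, assuming the Doppler detection stage has already produced surface-displacement samples at a known rate; the pass band and normalization below are illustrative choices, not taken from the disclosure:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def displacement_to_audio(displacement_mm, fs, band=(20.0, 2000.0)):
    """Convert Doppler-derived surface displacement into audio data.

    displacement_mm : surface displacement of the subject over time (mm)
    fs              : sample rate of the displacement samples (Hz)
    band            : illustrative pass band (Hz) for body sounds
    """
    # Relative displacement: remove the static offset of the surface.
    relative = displacement_mm - np.mean(displacement_mm)
    # Band-pass stage, analogous to the filter 234 described above.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    audio = sosfilt(sos, relative)
    # Normalize to [-1, 1] for downstream packetization or playback.
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```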
  • the data may then be prepared into a data packet buffer 242 with discrete channels for each auscultation module 200 to determine the origin of the detected auditory signals.
  • simultaneous accelerometer and/or gyroscope data may be generated by the accelerometer gyroscope integrated circuit 246 and bundled by the CPU 238 with the digitized auditory signal data in the data packet buffer 242.
  • the accelerometer gyroscope integrated circuit 246 may measure spatial orientation (e.g., roll, pitch, yaw), angular orientation, acceleration, velocity, or any combination thereof data.
  • the data measured by the accelerometer gyroscope integrated circuit 246 may provide one or more spatial vectors to localize where within the subject the auditory signal originated.
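One plausible shape for such a per-channel data packet, bundling digitized audio with the accelerometer/gyroscope readings; the field layout below is hypothetical and only illustrates the bundling idea:

```python
import struct

# Hypothetical layout: u16 channel id, u32 timestamp (ms), six floats for
# roll, pitch, yaw and 3-axis acceleration, then 16-bit PCM audio samples.
HEADER_FMT = "<HIffffff"

def pack_channel(channel_id, timestamp_ms, rpy, accel, audio):
    """Bundle one module's digitized audio (floats in [-1, 1]) with its
    IMU data into a single per-channel packet."""
    header = struct.pack(HEADER_FMT, channel_id, timestamp_ms, *rpy, *accel)
    pcm = struct.pack(f"<{len(audio)}h", *(int(s * 32767) for s in audio))
    return header + pcm

def unpack_header(packet):
    """Recover the channel id and IMU fields on the control-module side."""
    fields = struct.unpack_from(HEADER_FMT, packet)
    return {"channel": fields[0], "timestamp_ms": fields[1],
            "rpy": fields[2:5], "accel": fields[5:8]}
```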
  • the system may then transmit data wirelessly to a control module 208 for further processing via the wireless data transmitter and receiver 244 in electrical communication with an antenna 204.
  • the wireless transmission may be Bluetooth transmission, WIFI transmission, or any combination thereof.
  • the signal may then be detected by the corresponding antenna 206 and wireless data transmitter and receiver 245 of the control module 208.
  • the control module CPU 238 may then generate a clock signal 252 driving an analyzing circuit 250 to process all or a portion of the channels of auditory signals stored in the data packet buffer 243.
  • the channels of auditory signals may transmit via a wireless transmission system 244, 204 to be processed in a cloud-based processing architecture.
  • the analyzing circuit 250 and/or cloud-based processing architecture may perform one or more processing operations to classify an auditory signal of the auditory signals.
  • the processing operation may comprise a cross-correlation, eigenvector correlation, Ahn-park correlation, or any combination thereof.
  • the processing operation may be a classification by a machine learning algorithm trained previously on a library of labeled auditory signals.
  • the machine learning algorithm may comprise a deep neural network (DNN).
  • the deep neural network may comprise a convolutional neural network (CNN).
  • the CNN may be, for example, U-Net, ImageNet, LeNet-5, AlexNet, ZFNet, GoogLeNet, VGGNet, ResNet18 or ResNet, etc.
  • Other neural networks may be, for example, deep feed-forward neural network, recurrent neural network, LSTM (long short-term memory), GRU (gated recurrent unit), autoencoder, variational autoencoder, adversarial autoencoder, denoising autoencoder, sparse autoencoder, Boltzmann machine, RBM (restricted Boltzmann machine), deep belief network, generative adversarial network (GAN), deep residual network, capsule network, or attention/transformer networks, etc.
  • the machine learning model may comprise clustering, support vector machines, kernel SVM, linear discriminant analysis, quadratic discriminant analysis, neighborhood component analysis, manifold learning, convolutional neural networks, reinforcement learning, random forest, Naive Bayes, Gaussian mixtures, hidden Markov model, Monte Carlo, restricted Boltzmann machine, linear regression, or any combination thereof.
  • the machine learning algorithm may include ensemble learning algorithms such as bagging, boosting and stacking.
  • the machine learning algorithm may be applied individually to the plurality of features extracted for each channel, such that each channel has a separate instance of the machine learning algorithm, or applied to the plurality of features extracted from all channels or a subset of channels at once.
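As a sketch of this classification step, the snippet below trains one of the algorithms named above (a random forest) on a hypothetical expert-labeled library, using mean log band energies of a spectrogram as the per-channel feature vector; the feature choice and label names are assumptions for illustration:

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

def band_features(signal, fs, n_bands=32):
    """Summarize a signal as mean log energy in n_bands frequency bands."""
    _, _, sxx = spectrogram(signal, fs=fs)
    bands = np.array_split(np.log(sxx + 1e-12), n_bands, axis=0)
    return np.array([band.mean() for band in bands])

def train_classifier(library_signals, library_labels, fs):
    """Fit a random forest on an expert-labeled library of signals
    (labels might be, e.g., 'wheeze', 'crackle', 'normal')."""
    X = np.stack([band_features(s, fs) for s in library_signals])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, library_labels)
    return clf

def classify_channel(clf, channel_signal, fs):
    """Classify one channel's auditory signal with the trained model."""
    features = band_features(channel_signal, fs).reshape(1, -1)
    return clf.predict(features)[0]
```

The per-channel variant contemplated above would simply repeat train_classifier for each channel's feature stream.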
  • the classified channels of auditory signals and the spatial information for each channel determined by the accelerometer gyroscope integrated circuit 246 may be utilized to determine a 3-D spatial position of an auditory signal of a channel within a subject.
  • the system may comprise a user interface 210 where a user may interact with, explore, or visualize raw auditory signal for each channel, the classified auditory signal, reconstructed spatial image of auditory signal classification, or any combination thereof signals.
  • the user interface 210 may display a 3-D spatial map and/or image of auditory signal classification overlaid over a model of a human torso for aid of visualization.
  • the CPU 238 may transmit the auditory signals to a user interface that may comprise a personal computer 212, laptop computer, smartphone, tablet, or any combination thereof.
  • the cloud-based processing architecture may wirelessly transmit the channels of auditory signals to the user interface 210.
  • the user may interact with the auditory signals via a keyboard 214 and mouse 215.
  • a user, through the use of the user interface, may adjust or tune parameters of the auscultation system 201 (e.g., sensitivity and/or gain of the one or more ultrasound transducer elements 226, pressure force generated by the one or more pressure sources 224, the frequency of the pressure applied by the one or more pressure sources 224, etc., or any combination thereof) to improve the signal-to-noise ratio of the channels of detected auditory signals.
  • aspects of the disclosure provided herein may comprise a method 300 of determining a physiologic state of a subject, as seen in FIG. 3.
  • the method 300 may comprise the steps of: (a) detecting one or more auditory signals from a subject using one or more air coupled auscultation modules 302; (b) processing the one or more auditory signals to determine a correlative relationship between the one or more auditory signals from the subject and a library of one or more auditory signals 304; and (c) determining the physiologic state of the subject based on the correlative relationship between the one or more auditory signals 306.
  • the air coupled auscultation modules described elsewhere herein, may comprise one or more transducers, one or more pressure sources, one or more processors, or any combination thereof.
  • the physiologic state may comprise a vital sign.
  • the vital sign may comprise blood pressure, pulse, blood flow, hematocrit, or any combination thereof.
  • the physiologic state may comprise a diseased state.
  • the diseased state may comprise cancer, chronic obstructive pulmonary disease, emphysema, asthma, acute respiratory distress syndrome, congestive heart failure, heart murmur, atrial fibrillation, blood clot, heart attack, vascular aneurysm, ventricular hypertrophy, pneumonia or any combination thereof.
  • the library may comprise a correlative dataset correlating a subject’s physiological state and a corresponding one or more classified auditory signals.
  • the one or more classified auditory signals may be classified by an expert interpreter (e.g., medical personnel, resident physician, attending physician, respiratory therapist, nurse, etc.).
  • determining of step 306 may be accomplished by one or more machine learning algorithms, described elsewhere herein.
  • processing of step 304 may be completed in a cloud-based architecture, on-board within the one or more air coupled auscultation modules, on a remote computer server, or any combination thereof.
  • determining of step 306 may be completed in a cloud-based architecture, on-board within the one or more air coupled auscultation modules, on a remote computer server, or any combination thereof.
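A minimal sketch of steps 304 and 306, under the assumption that the library stores reference recordings keyed by physiologic state and that peak normalized cross-correlation (one of the operations named above) serves as the correlative relationship:

```python
import numpy as np

def max_norm_xcorr(sig, ref):
    """Peak normalized cross-correlation between a subject signal and a
    library reference; values near 1.0 indicate a strong match."""
    a = sig - np.mean(sig)
    a = a / (np.linalg.norm(a) + 1e-12)
    b = ref - np.mean(ref)
    b = b / (np.linalg.norm(b) + 1e-12)
    return float(np.max(np.correlate(a, b, mode="full")))

def determine_state(subject_signal, library):
    """library: dict mapping a physiologic state (e.g. 'pneumonia') to a
    list of reference recordings. Returns the best-correlated state."""
    scores = {state: max(max_norm_xcorr(subject_signal, ref) for ref in refs)
              for state, refs in library.items()}
    return max(scores, key=scores.get), scores
```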
  • the disclosure provided herein may comprise a method of determining the spatial origin of auditory signals.
  • the method may comprise the steps of: (a) detecting one or more auditory signals from a subject using one or more air coupled auscultation modules; (b) determining a wave front orientation of the auditory signals from one or more ultrasound transducers within the one or more air coupled auscultation modules; and (c) comparing the spatial overlap of the wave front orientation of similar auditory signals thereby determining the spatial origin of the auditory signal.
  • the one or more auscultation modules may comprise Bluetooth transmission circuitry.
  • the Bluetooth transmission circuitry may be configured to enable communication between the one or more auscultation modules to determine the relative angles and distances between the one or more auscultation modules.
  • the relative angle of a given auscultation module of the one or more auscultation modules may be determined by an accelerometer or gyroscopic circuit of the auscultation module.
  • the relative angle and distance between the one or more auscultation modules may be transmitted between the one or more auscultation modules via a Bluetooth antenna.
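The spatial-origin step can be framed as a least-squares intersection of per-module rays: each module contributes a position (from the known spacing and inter-module distances) and a wave-front direction (from its accelerometer/gyroscope data and an arrival-angle estimate). The solver below is a standard nearest-point-to-rays formulation, offered as a sketch; it assumes at least two non-parallel rays expressed in a shared coordinate frame.

```python
import numpy as np

def localize_source(positions, directions):
    """Least-squares point nearest to a set of wave-front rays.

    positions  : (N, 3) module positions in a shared frame
    directions : (N, 3) wave-front direction vectors per module
    Minimizes the summed squared distance from the point to each ray by
    solving the normal equations sum(P_i) x = sum(P_i p_i), where
    P_i = I - d_i d_i^T projects orthogonally to ray i.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(np.asarray(positions, float), np.asarray(directions, float)):
        d = d / np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += proj
        b += proj @ p
    return np.linalg.solve(A, b)
```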
  • One or more of the steps of each method or set of operations may be performed with circuitry as described herein, for example, one or more processors or logic circuitry such as programmable array logic for a field programmable gate array.
  • the circuitry may be programmed to provide one or more of the steps of each of the methods or sets of operations and the program may comprise program instructions stored on a non-transitory computer readable memory or programmed steps of the logic circuitry such as the programmable array logic or the field programmable gate array, for example.
  • ranges include the range endpoints. Additionally, every sub-range and value within the range is present as if explicitly written out.
  • the term “about” or “approximately” may mean within an acceptable error range for the particular value, which will depend in part on how the value is measured or determined, e.g., the limitations of the measurement system. For example, “about” may mean within 1 or more than 1 standard deviation, per the practice in the art. Alternatively, “about” may mean a range of up to 20%, up to 10%, up to 5%, or up to 1% of a given value. Where particular values are described in the application and claims, unless otherwise stated the term “about” meaning within an acceptable error range for the particular value may be assumed.

Abstract

Provided herein are systems, devices, and methods to measure auditory signals from a subject to determine a state of the subject. The auditory signals measured may provide a tool to monitor the development of disease state(s) or abnormal physiologic conditions (e.g., wheezing, fluid accumulation, abnormal heart murmur or rhythm, etc.).

Description

WEARABLE AUSCULTATION DEVICE
CROSS-REFERENCE
[0001] This application claims the benefit of U.S. Provisional Application No. 63/067,502, filed August 19, 2020, which application is incorporated herein by reference.
BACKGROUND
[0002] Traditional auscultation utilizes a stethoscope to observe auditory signals emitted from biological systems, e.g., the human lungs, gastro-intestinal tract, heart, etc., indicative of a state of health for a subject. Unfortunately, due to the typically single-point nature of auscultation measurements, spatial information about where such a signal originates can be lost. Additionally, the cumbersome nature of stethoscope auscultation limits the technique to specialized physician interpretation and provides only a single measurement in time. Therefore, there exist unmet needs for a platform capable of continual auscultation to determine a change in the state of health of a subject.
SUMMARY
[0003] The present disclosure provides devices, systems, and methods to measure auditory signals emitted by a subject. In some cases, the disclosure describes one or more auscultation modules positioned around a subject, each capable of independently measuring auditory signals at a discrete surface on the subject. The one or more auscultation modules positioned around the subject may comprise a unique spatial address that, in combination with gyroscopic and accelerometer information, may provide 3-D spatial localization of detected auditory signals.
[0004] The present disclosure addresses the aforementioned unmet needs by automating and multiplexing the measurement of auditory signals of a subject. The present disclosure, in some cases, provides an array of auscultation modules positioned around a subject with a known spacing and angular displacement such that the spatial position of an auditory signal may be calculated from the auditory signals measured by each of the one or more auscultation modules. Additionally, the present disclosure provides a processor and/or computational system configured to interpret and classify the auditory signals, removing subjective interpretation by a physician and enabling the wider use of the platform in circumstances where an expert interpreter (e.g., physician, respiratory therapist, etc.) is unavailable. Lastly, the devices and systems of the disclosure described herein may be fastened or otherwise worn by the subject in a non-obtrusive manner enabling, in some embodiments, continual monitoring of auditory signals. Such continuous monitoring of auditory signals and the non-obtrusive nature of the device provide the unexpected result of determining early changes in a subject's anatomy or physiology that may be correlated and/or associated with the development of or changes in a disease state.
[0005] In some aspects, the disclosure provided herein, in some embodiments, describes a device to measure biological auditory signals, the device comprises: a wearable housing; one or more transducers coupled to the wearable housing configured to receive one or more auditory signals from a subject when the wearable housing is worn by the subject, wherein the one or more transducers are coupled to the wearable housing such that the one or more transducers are spaced away from the skin of the subject by a distance of at least about 1 millimeter. In some embodiments, the device further comprises one or more pressure sources configured to induce a pressure force onto one or more regions of the subject to generate the one or more auditory signals from said subject. In some embodiments, the one or more pressure sources comprise an air puff. In some embodiments, the one or more pressure sources comprise a mechanical actuator. In some embodiments, the one or more pressure sources comprise a voice coil, speaker, or any combination thereof. In some embodiments, the housing is a garment. In some embodiments, the housing is a rigid mechanical structure. In some embodiments, the one or more auditory signals comprise data capable of differentiating a healthy or an unhealthy state of the subject. In some embodiments, the one or more transducers are circular. In some embodiments, the device further comprises a processor in electrical communication with the one or more pressure sources, the one or more transducers, a control module, or any combination thereof. In some embodiments, the control module comprises a personal computer, cloud processing architecture, a personal mobile computing device, or any combination thereof.
[0006] In some aspects, the disclosure provided herein, in some embodiments, describes a system to determine a physiologic state of a subject, the system, in some embodiments, comprises: a wearable housing; one or more transducers coupled to the wearable housing configured to receive one or more auditory signals from the subject when the wearable housing is worn by the subject, wherein the one or more transducers are coupled to the wearable housing such that the one or more transducers are spaced away from skin of the subject by a distance; and one or more processors configured to process the one or more auditory signals thereby determining the physiologic state of the subject. In some embodiments, the system further comprises one or more pressure sources configured to induce a pressure force onto one or more regions of the subject to generate the one or more auditory signals from the subject. In some embodiments, the one or more pressure sources comprise an air puff. In some embodiments, the one or more pressure sources comprise a mechanical actuator. In some embodiments, the one or more pressure sources comprise a voice coil, speaker, or any combination thereof. In some embodiments, the housing is a garment. In some embodiments, the housing is a rigid mechanical structure. In some embodiments, the one or more auditory signals comprise data capable of differentiating a healthy or an unhealthy state of the subject. In some embodiments, the one or more transducers are circular. In some embodiments, the system further comprises a control module in electrical communication with the one or more processors, the one or more pressure sources, the one or more transducers, or any combination thereof. In some embodiments, the control module comprises a personal computer, cloud processing architecture, a personal mobile computing device, or any combination thereof. In some embodiments, the state is: healthy, chronic obstructive pulmonary disease, asthma, emphysema, pneumonia, congestive heart failure, any combination thereof states, or an indeterminate state.
[0007] In some aspects, the disclosure provided herein, in some embodiments, describes a method of determining a physiologic state of a subject, the method comprises: detecting one or more auditory signals from the subject using one or more air coupled auscultation modules; processing the one or more auditory signals to determine a correlative relationship between the one or more auditory signals from the subject and a library of one or more auditory signals; and determining the physiological state of the subject based on the correlative relationship between the one or more auditory signals. In some embodiments, the one or more air coupled auscultation modules comprise one or more transducers, one or more percussive elements, one or more processors, or any combination thereof. In some embodiments, the physiological state comprises a diseased state, wherein the diseased state comprises cancer, chronic obstructive pulmonary disease, emphysema, or any combination thereof. In some embodiments, the library comprises a correlative dataset correlating the subject’s physiological state and a corresponding one or more auditory signals. In some embodiments, determining is accomplished by one or more machine learning algorithms. In some embodiments, the one or more machine learning algorithms comprise k-means clustering, neural network, random forest, Naive Bayes, support vector machine, decision tree, logistic regression, linear regression, or any combination thereof. In some embodiments, processing is completed in a cloud-based architecture, on-board within the one or more air coupled auscultation modules, on a remote computer server, or any combination thereof. In some embodiments, determining is completed in a cloud-based architecture, on-board within the one or more air coupled auscultation modules, on a remote computer server, or any combination thereof.
[0008] In some aspects, the disclosure provided herein, in some embodiments, describes a device to measure biological auditory signals, the device comprises: one or more transducers configured to receive one or more auditory signals from a subject, wherein the one or more transducers are not in contact with the subject.
[0009] Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto. The computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.
[0010] Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
INCORPORATION BY REFERENCE
[0011] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The novel features of the present disclosure are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the present disclosure are utilized, and the accompanying drawings (also “Figure” and “FIG.” herein), of which:
[0013] FIGS. 1A-1B illustrate a garment configured to house one or more auscultation modules (FIG. 1A) and show a detailed view of the auscultation module and internal components (FIG. 1B), as described in some embodiments herein.
[0014] FIG. 2 illustrates a schematic diagram of the auscultation system disclosed herein, as described in some embodiments herein.
[0015] FIG. 3 illustrates a flow chart for a method of determining the physiologic state of a subject, as described in some embodiments herein.
DETAILED DESCRIPTION
[0016] The present disclosure provides devices, systems, and methods configured to detect, analyze, or interpret one or more auditory signals generated by a subject. In some cases, the subject is a mammalian subject. In some instances, the mammalian subject is a human. In some cases, the one or more auditory signals may provide data or information to determine a physiological state of the subject. In some cases, the physiologic state of the subject may comprise a presence or absence of physiologic or anatomical changes of the subject that may be indicative of the development of disease. In some cases, the disease may be cancer, chronic obstructive pulmonary disease, emphysema, asthma, acute respiratory distress syndrome, congestive heart failure, heart murmur, atrial fibrillation, blood clot, heart attack, vascular aneurysm, ventricular and/or atrial hypertrophy, or any combination thereof. In some instances, the detection of auditory signals may comprise the passive detection of auditory signals. In some cases, the auditory signals may be classified by anatomical or physiologic characteristics. For example, one or more auditory signals may be classified as lung wheezing, crackling, or other sounds indicative of a subject’s lung function or the presence or absence of fluid in a subject’s lungs.
[0017] In some cases, the devices, systems, and methods described herein may provide external physical force to determine mechanical properties of the subject. In some cases, the mechanical properties may comprise a presence or absence of fluid in the body of the subject, a change in tissue mechanical properties, or any combination thereof. In some cases, the change in tissue mechanical properties may be indicative of a change in physiologic or anatomical state of the subject.
[0018] In some cases, the systems may comprise one or more elements in electrical communication configured to detect auditory signals, process auditory signals, display information to a user of the system, receive input from a user of the system, or any combination thereof actions. In some instances, the user may be a medical doctor, nurse, nurse practitioner, or the subject themselves. In some cases, the information may comprise data and analytics regarding a physiologic state of the subject. The system may comprise one or more auscultation modules in electrical communication with elements of a control system configured to detect auditory signals from the subject. The system may comprise one or more pressure sources configured to apply pressure to the subject. In some instances, the system may comprise a control module in electrical communication with the one or more pressure sources and the one or more auscultation modules to detect auditory signals generated by the interaction of the subject and the pressure applied by the one or more pressure sources. Alternatively, or in combination, the control module may be in electrical communication with the one or more auscultation modules to detect one or more auditory signals of the subject without the generation of a pressure by the one or more pressure sources.
Auscultation Module
[0019] In some embodiments, the disclosure provided herein describes an auscultation module 104, as shown in FIGS. 1A-1B. One or more auscultation modules may be configured to detect auditory signals from a subject, described elsewhere herein. In some instances, the one or more auscultation modules may be positioned at a distance from the subject. In some cases, the one or more auscultation modules may not be in contact with the subject.
[0020] In some cases, one or more auscultation modules 104 may be mechanically coupled within a housing 102 configured to position the one or more auscultation modules with respect to a subject to measure auditory signals of the subject. In some cases, the housing may comprise a garment, as shown in FIG. 1A. In some instances, the garment may be worn underneath clothing of the subject. In some cases, the garment may cover the thorax of the subject. In some cases, the garment may be loose fitting on the subject. In some instances, the garment may provide access to areas of the subject's center thorax for cardio-thoracic procedures. In some cases, the cardio-thoracic procedures may comprise repairing a pneumothorax, laparoscopic surgery, cardiac catheterization, percutaneous coronary intervention, or any combination thereof procedure. In some cases, the garment may comprise antimicrobial properties. In some cases, the housing may comprise a wrist band or wrist strap. In some cases, the wrist band or wrist strap may wholly or partially encase or surround an arm or wrist of the subject. In some instances, the housing may comprise a rigid mechanical structure.
[0021] In some cases, the one or more auscultation modules may be in electrical communication with one or more power supplies 106. In some cases, the one or more power supplies may comprise one or more batteries. In some cases, the one or more batteries may be rechargeable. In some instances, the one or more power supplies may comprise an alternating-current (AC) to direct-current (DC) converter that may convert the output of an electrical socket to power the one or more auscultation modules.
[0022] In some cases, the distance between the one or more auscultation modules and the subject may be about 1 mm to about 25 mm. In some cases, the distance between the one or more auscultation modules and the subject may be about 1 mm to about 2 mm, about 1 mm to about 3 mm, about 1 mm to about 4 mm, about 1 mm to about 5 mm, about 1 mm to about 8 mm, about 1 mm to about 10 mm, about 1 mm to about 12 mm, about 1 mm to about 14 mm, about 1 mm to about 16 mm, about 1 mm to about 18 mm, about 1 mm to about 25 mm, about 2 mm to about 3 mm, about 2 mm to about 4 mm, about 2 mm to about 5 mm, about 2 mm to about 8 mm, about 2 mm to about 10 mm, about 2 mm to about 12 mm, about 2 mm to about 14 mm, about 2 mm to about 16 mm, about 2 mm to about 18 mm, about 2 mm to about 25 mm, about 3 mm to about 4 mm, about 3 mm to about 5 mm, about 3 mm to about 8 mm, about 3 mm to about 10 mm, about 3 mm to about 12 mm, about 3 mm to about 14 mm, about 3 mm to about 16 mm, about 3 mm to about 18 mm, about 3 mm to about 25 mm, about 4 mm to about 5 mm, about 4 mm to about 8 mm, about 4 mm to about 10 mm, about 4 mm to about 12 mm, about 4 mm to about 14 mm, about 4 mm to about 16 mm, about 4 mm to about 18 mm, about 4 mm to about 25 mm, about 5 mm to about 8 mm, about 5 mm to about 10 mm, about 5 mm to about 12 mm, about 5 mm to about 14 mm, about 5 mm to about 16 mm, about 5 mm to about 18 mm, about 5 mm to about 25 mm, about 8 mm to about 10 mm, about 8 mm to about 12 mm, about 8 mm to about 14 mm, about 8 mm to about 16 mm, about 8 mm to about 18 mm, about 8 mm to about 25 mm, about 10 mm to about 12 mm, about 10 mm to about 14 mm, about 10 mm to about 16 mm, about 10 mm to about 18 mm, about 10 mm to about 25 mm, about 12 mm to about 14 mm, about 12 mm to about 16 mm, about 12 mm to about 18 mm, about 12 mm to about 25 mm, about 14 mm to about 16 mm, about 14 mm to about 18 mm, about 14 mm to about 25 mm, about 16 mm to about 18 mm, about 16 mm to about 25 mm, or about 18 mm to about 25 mm. In some cases, the distance between the one or more auscultation modules and the subject may be about 1 mm, about 2 mm, about 3 mm, about 4 mm, about 5 mm, about 8 mm, about 10 mm, about 12 mm, about 14 mm, about 16 mm, about 18 mm, or about 25 mm. In some cases, the distance between the one or more auscultation modules and the subject may be at least about 1 mm, about 2 mm, about 3 mm, about 4 mm, about 5 mm, about 8 mm, about 10 mm, about 12 mm, about 14 mm, about 16 mm, or about 18 mm. In some cases, the distance between the one or more auscultation modules and the subject may be at most about 2 mm, about 3 mm, about 4 mm, about 5 mm, about 8 mm, about 10 mm, about 12 mm, about 14 mm, about 16 mm, about 18 mm, or about 25 mm.
[0023] The auscultation module 104 may comprise: (a) one or more transducer elements 114 configured to detect acoustic and/or pressure waves of the auditory signals generated by the subject; (b) one or more pressure sources 112; and (c) a processor 108 in electrical communication with the one or more transducer elements 114 and/or the one or more pressure sources 112. In some instances, the one or more transducer elements may be a micro-machined ultrasonic transducer, such as a capacitive micro-machined ultrasonic transducer (cMUT) or a piezoelectric micro-machined ultrasonic transducer (pMUT). Examples of cMUTs are provided in U.S. Patent Application No. 17/004,568, which is incorporated herein by reference. In some cases, the processor may be in electrical communication with one or more circuit elements. In some cases, the one or more circuit elements may comprise: a wireless (e.g., Bluetooth) transmitter and/or receiver, ultrasound digital signal processing (DSP) application specific integrated circuit, power regulator, a wireless (e.g., Bluetooth) transmitter and receiver antenna, or any combination thereof. In some cases, the auscultation module may comprise a heat dissipation structure, e.g., a heat sink.
[0024] In some cases, the one or more transducer elements 114 may comprise about 1 element to about 20 elements. In some cases, the one or more transducer elements 114 may comprise about 1 element to about 2 elements, about 1 element to about 4 elements, about 1 element to about 6 elements, about 1 element to about 8 elements, about 1 element to about 10 elements, about 1 element to about 12 elements, about 1 element to about 14 elements, about 1 element to about 16 elements, about 1 element to about 18 elements, about 1 element to about 20 elements, about 2 elements to about 4 elements, about 2 elements to about 6 elements, about 2 elements to about 8 elements, about 2 elements to about 10 elements, about 2 elements to about 12 elements, about 2 elements to about 14 elements, about 2 elements to about 16 elements, about 2 elements to about 18 elements, about 2 elements to about 20 elements, about 4 elements to about 6 elements, about 4 elements to about 8 elements, about 4 elements to about 10 elements, about 4 elements to about 12 elements, about 4 elements to about 14 elements, about 4 elements to about 16 elements, about 4 elements to about 18 elements, about 4 elements to about 20 elements, about 6 elements to about 8 elements, about 6 elements to about 10 elements, about 6 elements to about 12 elements, about 6 elements to about 14 elements, about 6 elements to about 16 elements, about 6 elements to about 18 elements, about 6 elements to about 20 elements, about 8 elements to about 10 elements, about 8 elements to about 12 elements, about 8 elements to about 14 elements, about 8 elements to about 16 elements, about 8 elements to about 18 elements, about 8 elements to about 20 elements, about 10 elements to about 12 elements, about 10 elements to about 14 elements, about 10 elements to about 16 elements, about 10 elements to about 18 elements, about 10 elements to about 20 elements, about 12 elements to about 14 elements, about 12 elements to about 16 elements, about 12 elements to about 18 elements, about 12 elements to about 20 elements, about 14 elements to about 16 elements, about 14 elements to about 18 elements, about 14 elements to about 20 elements, about 16 elements to about 18 elements, about 16 elements to about 20 elements, or about 18 elements to about 20 elements. In some cases, the one or more transducer elements 114 may comprise about 1 element, about 2 elements, about 4 elements, about 6 elements, about 8 elements, about 10 elements, about 12 elements, about 14 elements, about 16 elements, about 18 elements, or about 20 elements. In some cases, the one or more transducer elements 114 may comprise at least about 1 element, about 2 elements, about 4 elements, about 6 elements, about 8 elements, about 10 elements, about 12 elements, about 14 elements, about 16 elements, or about 18 elements. In some cases, the one or more transducer elements 114 may comprise at most about 2 elements, about 4 elements, about 6 elements, about 8 elements, about 10 elements, about 12 elements, about 14 elements, about 16 elements, about 18 elements, or about 20 elements.
[0025] In some cases, the processor may be configured to process detected auditory signals by the one or more transducer elements 114.
[0026] In some cases, the auscultation module 104 may comprise circuitry 110 that may be a printed circuit board. In some cases, the processor 108, one or more circuit elements, one or more transducer elements 114, and the one or more pressure sources 112 may be in electrical communication through the printed circuit board circuitry. In some cases, the printed circuit board may comprise at least 1 conductive layer, at least 2 conductive layers, at least 3 conductive layers, or at least 4 conductive layers. In some instances, the printed circuit board may comprise up to 1 conductive layer, up to 2 conductive layers, up to 3 conductive layers, or up to 4 conductive layers. In some cases, the one or more transducer elements 114 may be arranged in an array on the circuitry 110. In some cases, the one or more transducer elements 114 may be arranged in a circular array, linear array, polygonal array, or any combination thereof array.
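By way of illustration only, a minimal sketch of how element coordinates for such a circular arrangement might be computed; the element count, radius, units, and function name are assumptions for illustration and are not dimensions taken from this disclosure (Python is used here purely as a convenient notation).

```python
import numpy as np

# Illustrative geometry helper only: positions of transducer elements laid
# out in a circular array on the module circuitry. The radius and element
# count are placeholders, not dimensions from the disclosure.
def circular_array_positions(n_elements=8, radius_mm=10.0):
    angles = 2 * np.pi * np.arange(n_elements) / n_elements
    return np.column_stack((radius_mm * np.cos(angles),
                            radius_mm * np.sin(angles)))
```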
[0027] In some cases, the auscultation module 104 may comprise one or more pressure sources 112 configured to generate pressure directed towards the subject. In some cases, the one or more pressure sources 112 may comprise a mechanical percussor, e.g., a spring-loaded cam configured to transmit a mechanical vibration into the subject. In some cases, the one or more pressure sources 112 may comprise an acoustic percussor, e.g., a magnetic voice coil and/or speaker configured to transmit a low frequency pressure wave into the subject.
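A minimal sketch of the kind of drive waveform a percussion controller might supply to such an acoustic percussor; the frequency, duration, envelope, and names are illustrative assumptions, not parameters specified herein.

```python
import numpy as np

# Illustrative sketch only: a short low-frequency burst that a percussion
# controller might use to drive an acoustic percussor such as a voice coil.
def percussion_burst(freq_hz=50.0, duration_s=0.1, sample_rate_hz=8000):
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)
    # A Hann envelope avoids sharp onsets that would excite the coil harshly.
    envelope = np.hanning(t.size)
    return envelope * np.sin(2 * np.pi * freq_hz * t)

drive = percussion_burst()  # waveform handed to a DAC / amplifier stage
```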
[0028] In some cases, the auscultation module 104 may be sealed wholly or partially within an enclosure. In some cases, the enclosure may comprise a plastic enclosure. In some cases, the auscultation module may have a circular, rectangular, square, triangular, trapezoidal, or any combination thereof shape. In some cases, the enclosure may provide one or more openings such that the one or more transducer elements 114 may receive and/or transmit auditory signals from the subject. In some instances, the enclosure may wholly or partially encase the one or more pressure sources such that the one or more pressure sources may be positioned in contact with the subject, yet the one or more transducer elements 114 may maintain a distance from the subject.
[0029] In some cases, the diameter of the enclosed auscultation module 104 may be about 5 mm to about 50 mm. In some cases, the diameter of the enclosed auscultation module 104 may be about 5 mm to about 10 mm, about 5 mm to about 15 mm, about 5 mm to about 20 mm, about 5 mm to about 25 mm, about 5 mm to about 30 mm, about 5 mm to about 35 mm, about 5 mm to about 40 mm, about 5 mm to about 45 mm, about 5 mm to about 50 mm, about 10 mm to about 15 mm, about 10 mm to about 20 mm, about 10 mm to about 25 mm, about 10 mm to about 30 mm, about 10 mm to about 35 mm, about 10 mm to about 40 mm, about 10 mm to about 45 mm, about 10 mm to about 50 mm, about 15 mm to about 20 mm, about 15 mm to about 25 mm, about 15 mm to about 30 mm, about 15 mm to about 35 mm, about 15 mm to about 40 mm, about 15 mm to about 45 mm, about 15 mm to about 50 mm, about 20 mm to about 25 mm, about 20 mm to about 30 mm, about 20 mm to about 35 mm, about 20 mm to about 40 mm, about 20 mm to about 45 mm, about 20 mm to about 50 mm, about 25 mm to about 30 mm, about 25 mm to about 35 mm, about 25 mm to about 40 mm, about 25 mm to about 45 mm, about 25 mm to about 50 mm, about 30 mm to about 35 mm, about 30 mm to about 40 mm, about 30 mm to about 45 mm, about 30 mm to about 50 mm, about 35 mm to about 40 mm, about 35 mm to about 45 mm, about 35 mm to about 50 mm, about 40 mm to about 45 mm, about 40 mm to about 50 mm, or about 45 mm to about 50 mm. In some cases, the diameter of the enclosed auscultation module 104 may be about 5 mm, about 10 mm, about 15 mm, about 20 mm, about 25 mm, about 30 mm, about 35 mm, about 40 mm, about 45 mm, or about 50 mm. In some cases, the diameter of the enclosed auscultation module 104 may be at least about 5 mm, about 10 mm, about 15 mm, about 20 mm, about 25 mm, about 30 mm, about 35 mm, about 40 mm, or about 45 mm. In some cases, the diameter of the enclosed auscultation module 104 may be at most about 10 mm, about 15 mm, about 20 mm, about 25 mm, about 30 mm, about 35 mm, about 40 mm, about 45 mm, or about 50 mm.
[0030] In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from about 1 kHz to about 20 kHz. In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from about 1 kHz to about 2 kHz, about 1 kHz to about 4 kHz, about 1 kHz to about 6 kHz, about 1 kHz to about 8 kHz, about 1 kHz to about 10 kHz, about 1 kHz to about 12 kHz, about 1 kHz to about 14 kHz, about 1 kHz to about 16 kHz, about 1 kHz to about 18 kHz, about 1 kHz to about 20 kHz, about 2 kHz to about 4 kHz, about 2 kHz to about 6 kHz, about 2 kHz to about 8 kHz, about 2 kHz to about 10 kHz, about 2 kHz to about 12 kHz, about 2 kHz to about 14 kHz, about 2 kHz to about 16 kHz, about 2 kHz to about 18 kHz, about 2 kHz to about 20 kHz, about 4 kHz to about 6 kHz, about 4 kHz to about 8 kHz, about 4 kHz to about 10 kHz, about 4 kHz to about 12 kHz, about 4 kHz to about 14 kHz, about 4 kHz to about 16 kHz, about 4 kHz to about 18 kHz, about 4 kHz to about 20 kHz, about 6 kHz to about 8 kHz, about 6 kHz to about 10 kHz, about 6 kHz to about 12 kHz, about 6 kHz to about 14 kHz, about 6 kHz to about 16 kHz, about 6 kHz to about 18 kHz, about 6 kHz to about 20 kHz, about 8 kHz to about 10 kHz, about 8 kHz to about 12 kHz, about 8 kHz to about 14 kHz, about 8 kHz to about 16 kHz, about 8 kHz to about 18 kHz, about 8 kHz to about 20 kHz, about 10 kHz to about 12 kHz, about 10 kHz to about 14 kHz, about 10 kHz to about 16 kHz, about 10 kHz to about 18 kHz, about 10 kHz to about 20 kHz, about 12 kHz to about 14 kHz, about 12 kHz to about 16 kHz, about 12 kHz to about 18 kHz, about 12 kHz to about 20 kHz, about 14 kHz to about 16 kHz, about 14 kHz to about 18 kHz, about 14 kHz to about 20 kHz, about 16 kHz to about 18 kHz, about 16 kHz to about 20 kHz, or about 18 kHz to about 20 kHz. In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from about 1 kHz, about 2 kHz, about 4 kHz, about 6 kHz, about 8 kHz, about 10 kHz, about 12 kHz, about 14 kHz, about 16 kHz, about 18 kHz, or about 20 kHz. In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from at least about 1 kHz, about 2 kHz, about 4 kHz, about 6 kHz, about 8 kHz, about 10 kHz, about 12 kHz, about 14 kHz, about 16 kHz, or about 18 kHz. In some instances, the one or more transducer elements 114 may be configured to detect auditory signals from at most about 2 kHz, about 4 kHz, about 6 kHz, about 8 kHz, about 10 kHz, about 12 kHz, about 14 kHz, about 16 kHz, about 18 kHz, or about 20 kHz.
Auscultation Systems
[0031] Aspects of the disclosure provided herein may comprise an auscultation system 201, as shown in FIG. 2, configured to detect auditory signals 218 and/or transmit auditory data of a subject to a control module 208 and/or a user interface 210. In some cases, the transmission of auditory data may be accomplished through a Bluetooth, WIFI, or any combination thereof transmission 205. In some cases, the system may comprise an auscultation module 200, described elsewhere herein. The auscultation module may be configured to detect auditory signals 218 from a surface 216 of the subject. The one or more auscultation modules may comprise a processing back end 202 that may comprise Bluetooth and/or WIFI data transmission and receiving 244 and/or ultrasound digital signal processing 240 integrated circuitry.
[0032] In some cases, the auscultation module 200 may comprise one or more ultrasound transducer elements 226, positioned at a distance 222 from a surface of the subject 216 configured to detect auditory signals 218 from the surface 216 of the subject. In some cases, the auditory signals 218 may be generated by the subject. In some cases, the auditory signals 218 may be generated by the interaction of one or more pressure sources (224,220) and the subject, described elsewhere herein.
[0033] In some cases, the processing back end 202 may comprise circuitry, e.g., a clock 241, a central processing unit (CPU) 238, analog to digital converter 235, digital to analog converter 232, filter 234, transmit pulser 236, percussion controller 230, doppler detector 240, wireless data transmitter and receiver 244, accelerometer gyroscope integrated circuit 246, or any combination thereof, configured to control system elements (e.g., one or more ultrasound transducer elements 226 and/or one or more pressure sources 224), transmit data, receive data, or any combination thereof.

[0034] In some cases, auditory signals 218 produced by the subject 216 may be detected by the one or more ultrasound transducer elements 226 in electrical communication with an ultrasound transmit/receive controller 228. In some instances, the transmit pulser 236 in electrical communication with the CPU 238 may generate one or more pulse signals that may be transmitted to the digital to analog converter 232. The one or more pulse signals transmitted to the digital to analog converter 232 may then be transmitted electrically to the ultrasound transducer element 226 to generate an ultrasound signal directed to one or more regions of the subject. The ultrasound signal directed to the one or more regions of the subject 216 may then be used to detect motion of the one or more regions of the subject as a result of auditory signals 218 generated by the subject.
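A hedged sketch of one way surface motion could be recovered from repeated ultrasound echoes, assuming a pulse-echo scheme in which the skin echo is demodulated to a complex I/Q sample per transmit pulse; the carrier frequency, speed of sound in air, and echo model are assumptions, not values from the disclosure.

```python
import numpy as np

# Hypothetical sketch: estimate surface displacement from the phase of a
# demodulated echo across repeated ultrasound pulses (pulse-echo Doppler).
def displacement_from_echo_phase(iq_echoes, carrier_hz=40e3, c=343.0):
    """iq_echoes: complex I/Q sample of the skin echo, one per transmit pulse."""
    wavelength = c / carrier_hz
    phase = np.unwrap(np.angle(iq_echoes))
    # A phase shift of 2*pi corresponds to a round-trip path change of one
    # wavelength, i.e., a one-way surface displacement of wavelength / 2.
    return phase * wavelength / (4 * np.pi)
```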
[0035] Alternatively, or in combination, the CPU 238 may provide a driving signal to a percussion controller 230 configured to provide a driving signal for the one or more pressure sources 224, which may then produce auditory signals within the subject 216 that may be detected by the one or more ultrasound transducer elements 226. In some cases, the clock 241 of an auscultation module 200 may provide a common temporal signal to compare the auditory signals detected by the one or more ultrasound transducer elements 226, thereby determining a directionality or directional vector of an auditory signal wave front. In some cases, the clock 241 may provide a temporal clock signal to the transmit/receive controller 228 to sample the detected auditory signals with a known time interval. The detected auditory signal may then be filtered by the filter 234. In some cases, the filter 234 may comprise a bandpass, notch, low-pass, high-pass, or any combination thereof filter. After filtering, the auditory signal may then be digitized by an analog to digital converter 235 and passed to a doppler detection circuit 240. In some cases, the doppler detection circuit 240 may convert the digitized data (i.e., the Doppler ultrasound data of surface displacement of the subject in units of distance) into a relative displacement. The relative displacement may then be converted into audio data. In some cases, the clock 241 may provide a temporal clock signal to the doppler detection circuit to sample the digitized auditory signal with a known time interval. The data may then be prepared into a data packet buffer 242 with discrete channels for each auscultation module 200 to determine the origin of the detected auditory signals. In some cases, simultaneous accelerometer and/or gyroscope data may be generated by the accelerometer gyroscope integrated circuit 246 and bundled by the CPU 238 with the digitized auditory signal data in the data packet buffer 242. In some cases, the accelerometer gyroscope integrated circuit 246 may measure spatial orientation (e.g., roll, pitch, yaw), angular orientation, acceleration, velocity, or any combination thereof data. In some instances, the data measured by the accelerometer gyroscope integrated circuit 246 may provide one or more spatial vectors to localize where within the subject the auditory signal originated.
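The filter-then-convert chain described above might be sketched as follows, assuming a displacement signal sampled at a fixed rate; the filter order, passband, and sample rate are illustrative placeholders rather than parameters fixed by the disclosure.

```python
import numpy as np
from scipy.signal import butter, sosfilt

# Sketch under assumptions: band-limit the surface-displacement signal to an
# audible band of interest and rescale it into a normalized audio waveform.
def displacement_to_audio(displacement_m, fs_hz=20000, band=(20.0, 2000.0)):
    sos = butter(4, band, btype="bandpass", fs=fs_hz, output="sos")
    filtered = sosfilt(sos, displacement_m)
    peak = np.max(np.abs(filtered))
    # Normalize into [-1, 1] so the result can be played back as audio.
    return filtered / peak if peak > 0 else filtered
```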
[0036] After or during (e.g., asynchronously) acquisition and bundling of the channels of auditory signals into the data packet buffer 242, the system may then transmit data wirelessly to a control module 208 for further processing via the wireless data transmitter and receiver 244 in electrical communication with an antenna 204. In some cases, the wireless transmission may be Bluetooth transmission, WIFI transmission, or any combination thereof. The signal may then be detected by the corresponding antenna 206 and wireless data transmitter and receiver 245 of the control module 208. The control module CPU 238 may then generate a clock signal 252 driving an analyzing circuit 250 to process all or a portion of the channels of auditory signals stored in the data packet buffer 243.
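A minimal sketch of a channelized packet layout of the kind described, with a module identifier and sequence counter so a receiver can attribute samples to their originating auscultation module; the byte layout, field names, and sample format are assumptions, as the disclosure does not specify a buffer format.

```python
import struct
import numpy as np

HEADER_FMT = "<BHI"  # module_id (1 byte), sample count (2), sequence (4)

# Hypothetical packet layout only: tag each channel's samples with the
# originating module ID and a sequence counter before wireless transmission.
def pack_channel(module_id: int, seq: int, samples: np.ndarray) -> bytes:
    header = struct.pack(HEADER_FMT, module_id, len(samples), seq)
    body = samples.astype("<i2").tobytes()  # 16-bit little-endian PCM
    return header + body

def unpack_channel(packet: bytes):
    module_id, n, seq = struct.unpack_from(HEADER_FMT, packet)
    samples = np.frombuffer(packet, dtype="<i2",
                            offset=struct.calcsize(HEADER_FMT), count=n)
    return module_id, seq, samples
```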
[0037] Alternatively, the channels of auditory signals may be transmitted via a wireless transmission system 244, 204 to be processed in a cloud-based processing architecture.

[0038] In some cases, the analyzing circuit 250 and/or cloud-based processing architecture may perform one or more processing operations to classify an auditory signal of the auditory signals. In some cases, the processing operation may comprise a cross-correlation, eigenvector correlation, Ahn-park correlation, or any combination thereof. Alternatively, or in combination, the processing operation may be a classification by a machine learning algorithm trained previously on a library of labeled auditory signals. In some embodiments, the machine learning algorithm may comprise a deep neural network (DNN). The deep neural network may comprise a convolutional neural network (CNN). The CNN may be, for example, U-Net, ImageNet, LeNet-5, AlexNet, ZFNet, GoogLeNet, VGGNet, ResNet18 or ResNet, etc. Other neural networks may be, for example, deep feed-forward neural network, recurrent neural network, LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), autoencoder, variational autoencoder, adversarial autoencoder, denoising autoencoder, sparse autoencoder, Boltzmann machine, RBM (Restricted Boltzmann Machine), deep belief network, generative adversarial network (GAN), deep residual network, capsule network, or attention/transformer networks, etc.
[0039] In some instances, the machine learning model may comprise clustering, support vector machines, kernel SVM, linear discriminant analysis, quadratic discriminant analysis, neighborhood component analysis, manifold learning, convolutional neural networks, reinforcement learning, random forest, Naive Bayes, Gaussian mixtures, hidden Markov model, Monte Carlo, restricted Boltzmann machine, linear regression, or any combination thereof.
[0040] In some cases, the machine learning algorithm may include ensemble learning algorithms such as bagging, boosting, and stacking. The machine learning algorithm may be applied individually to the plurality of features extracted for each channel, such that each channel has a separate iteration of the machine learning algorithm, or applied to the plurality of features extracted from all channels or a subset of channels at once.
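As one hedged example of the ensemble options named above, the sketch below trains a random forest on log-spectrogram band energies extracted from a hypothetical labeled library of auditory signals; the feature set, band count, and sample rate are illustrative choices, not part of the disclosure.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch, assuming a pre-labeled library of auditory signals exists;
# log-spectrogram band energies serve purely as an illustrative feature set.
def band_features(audio, fs_hz=4000, n_bands=16):
    f, t, sxx = spectrogram(audio, fs=fs_hz, nperseg=256)
    bands = np.array_split(sxx, n_bands, axis=0)
    return np.array([np.log1p(b.mean()) for b in bands])

def train_classifier(library_signals, labels, fs_hz=4000):
    X = np.stack([band_features(s, fs_hz) for s in library_signals])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```

A per-channel variant would simply fit one such classifier per channel, mirroring the separate-iteration option described in the paragraph above.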
[0041] In some cases, the classified channels of auditory signals and the spatial information for each channel determined by the accelerometer gyroscope integrated circuit 246 may be utilized to determine a 3-D spatial position of an auditory signal of a channel within a subject.

[0042] In some cases, the system may comprise a user interface 210 where a user may interact with, explore, or visualize the raw auditory signal for each channel, the classified auditory signal, a reconstructed spatial image of auditory signal classification, or any combination thereof signals. In some instances, the user interface 210 may display a 3-D spatial map and/or image of auditory signal classification overlaid over a model of a human torso for aid of visualization. In some cases, the CPU 238 may transmit the auditory signals to a user interface that may comprise a personal computer 212, laptop computer, smartphone, tablet, or any combination thereof.
Alternatively, or in combination, the cloud-based processing architecture may wirelessly transmit the channels of auditory signals to the user interface 210. In some cases, the user may interact with the auditory signals via a keyboard 214 and mouse 215. In some cases, a user, through the use of the user interface, may adjust or tune parameters of the auscultation system 201 (e.g., sensitivity and/or gain of the one or more ultrasound transducer elements 226, pressure force generated by the one or more pressure sources 224, the frequency of the pressure applied by the one or more pressure sources 224, etc., or any combination thereof) to improve the signal-to-noise ratio of the channels of detected auditory signals.
Methods
[0043] Aspects of the disclosure provided herein may comprise a method 300 of determining a physiologic state of a subject, as seen in FIG. 3. In some cases, the method 300 may comprise the steps of: (a) detecting one or more auditory signals from a subject using one or more air coupled auscultation modules 302; (b) processing the one or more auditory signals to determine a correlative relationship between the one or more auditory signals from the subject and a library of one or more auditory signals 304; and (c) determining the physiologic state of the subject based on the correlative relationship between the one or more auditory signals 306. In some cases, the air coupled auscultation modules, described elsewhere herein, may comprise one or more transducers, one or more pressure sources, one or more processors, or any combination thereof.
[0044] In some cases, the physiologic state may comprise a vital sign. In some cases, the vital sign may comprise blood pressure, pulse, blood flow, hematocrit, or any combination thereof. In some cases, the physiologic state may comprise a diseased state. The diseased state may comprise cancer, chronic obstructive pulmonary disease, emphysema, asthma, acute respiratory distress syndrome, congestive heart failure, heart murmur, atrial fibrillation, blood clot, heart attack, vascular aneurysm, ventricular hypertrophy, pneumonia, or any combination thereof. In some cases, the library may comprise a correlative dataset correlating a subject's physiological state and a corresponding one or more classified auditory signals. In some instances, the one or more classified auditory signals may be classified by an expert interpreter (e.g., medical personnel, resident physician, attending physician, respiratory therapist, nurse, etc.). In some cases, the determining of step 306 may be accomplished by one or more machine learning algorithms, described elsewhere herein. In some instances, the processing of step 304 may be completed in a cloud-based architecture, on-board within the one or more air coupled auscultation modules, on a remote computer server, or any combination thereof. In some instances, the determining of step 306 may be completed in a cloud-based architecture, on-board within the one or more air coupled auscultation modules, on a remote computer server, or any combination thereof.
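A minimal sketch of the correlative matching of steps 304 and 306, assuming the library maps physiologic-state labels to reference signals and using a normalized cross-correlation as the correlation measure; the disclosure does not fix the measure, so this choice and the names below are illustrative.

```python
import numpy as np

# Sketch of step 304 under assumptions: score a detected signal against each
# library entry with a normalized cross-correlation and keep the best match.
def best_library_match(signal, library):
    """library: dict mapping a physiologic-state label to a reference signal."""
    def ncc(a, b):
        n = min(len(a), len(b))
        a, b = a[:n] - np.mean(a[:n]), b[:n] - np.mean(b[:n])
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(np.dot(a, b) / denom) if denom > 0 else 0.0
    scores = {label: ncc(signal, ref) for label, ref in library.items()}
    return max(scores, key=scores.get), scores
```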
[0045] In some cases, the disclosure provided herein may comprise a method of determining the spatial origin of auditory signals. In some cases, the method may comprise the steps of: (a) detecting one or more auditory signals from a subject using one or more air coupled auscultation modules; (b) determining a wave front orientation of the auditory signals from one or more ultrasound transducers within the one or more air coupled auscultation modules; and (c) comparing the spatial overlap of the wave front orientations of similar auditory signals, thereby determining the spatial origin of the auditory signal. In some cases, the one or more auscultation modules may comprise Bluetooth transmission circuitry. In some cases, the Bluetooth transmission circuitry may be configured to enable communication between the one or more auscultation modules to determine the relative angles and distances between the one or more auscultation modules. In some cases, the relative angle of a given auscultation module of the one or more auscultation modules may be determined by an accelerometer or gyroscopic circuit of the auscultation module. In some instances, the relative angle and distance between the one or more auscultation modules may be transmitted between the one or more auscultation modules via a Bluetooth antenna.
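A hedged sketch of one ingredient of step (c): estimating the relative arrival delay of the same auditory event at two modules by cross-correlation. Combined with the inter-module angles and distances described above, such delays constrain the wave front orientation and hence the spatial origin; the sample rate and pre-aligned signals are assumptions.

```python
import numpy as np

# Hypothetical sketch: relative arrival delay of one auditory event at two
# auscultation modules, from the peak of their cross-correlation.
def arrival_delay_s(sig_a, sig_b, fs_hz):
    corr = np.correlate(sig_a - sig_a.mean(), sig_b - sig_b.mean(), mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)
    return lag / fs_hz  # positive: the event reached module B first
```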
[0046] Although the above steps show each of the methods or sets of operations in accordance with embodiments, a person of ordinary skill in the art will recognize many variations based on the teaching described herein. The steps may be completed in a different order. Steps may be added or omitted. Some of the steps may comprise sub-steps. Many of the steps may be repeated as often as beneficial.
[0047] One or more of the steps of each method or sets of operations may be performed with circuitry as described herein, for example, one or more of the processor or logic circuitry such as programmable array logic for a field programmable gate array. The circuitry may be programmed to provide one or more of the steps of each of the methods or sets of operations and the program may comprise program instructions stored on a non-transitory computer readable memory or programmed steps of the logic circuitry such as the programmable array logic or the field programmable gate array, for example.
[0048] Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.
[0049] Whenever the term “no more than,” “less than,” or “less than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than,” or “less than or equal to” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.
[0050] Certain inventive embodiments herein contemplate numerical ranges. When ranges are present, the ranges include the range endpoints. Additionally, every sub range and value within the range is present as if explicitly written out. The term “about” or “approximately” may mean within an acceptable error range for the particular value, which will depend in part on how the value is measured or determined, e.g., the limitations of the measurement system. For example, “about” may mean within 1 or more than 1 standard deviation, per the practice in the art. Alternatively, “about” may mean a range of up to 20%, up to 10%, up to 5%, or up to 1% of a given value. Where particular values are described in the application and claims, unless otherwise stated the term “about” meaning within an acceptable error range for the particular value may be assumed.
[0051] While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims

WHAT IS CLAIMED IS:
1. A device to measure biological auditory signals, the device comprising:
(a) a wearable housing;
(b) one or more transducers coupled to said wearable housing and configured to receive one or more auditory signals from a subject when said wearable housing is worn by said subject, wherein said one or more transducers are coupled to said wearable housing such that said one or more transducers are spaced away from skin of said subject by a distance of at least about 1 millimeter.
2. The device of claim 1, further comprising one or more pressure sources configured to induce a pressure force onto one or more regions of said subject to generate said one or more auditory signals from said subject.
3. The device of claim 2, wherein said one or more pressure sources comprise an air puff.
4. The device of claim 2, wherein said one or more pressure sources comprise a mechanical actuator.
5. The device of claim 2, wherein said one or more pressure sources comprise a voice coil, speaker, or any combination thereof.
6. The device of claim 1, wherein said housing is a garment.
7. The device of claim 1, wherein said housing is a rigid mechanical structure.
8. The device of claim 1, wherein said one or more auditory signals comprise data capable of differentiating a healthy or an unhealthy state of said subject.
9. The device of claim 1, wherein said one or more transducers are circular.
10. The device of claim 1, further comprising a processor in electrical communication with said one or more pressure sources, said one or more transducers, a control module, or any combination thereof.
11. The device of claim 10, wherein said control module comprises a personal computer, cloud processing architecture, a personal mobile computing device, or any combination thereof.
12. A system to determine a physiologic state of a subject, the system comprising:
(a) a wearable housing;
(b) one or more transducers coupled to said wearable housing configured to receive one or more auditory signals from said subject when said wearable housing is worn by said subject, wherein said one or more transducers are coupled to said wearable housing such that said one or more transducers are spaced away from skin of said subject by a distance; and
(c) one or more processors configured to process said one or more auditory signals thereby determining said physiologic state of said subject.
13. The system of claim 12, further comprising one or more pressure sources configured to induce a pressure force onto one or more regions of said subject to generate said one or more auditory signals from said subject.
14. The system of claim 13, wherein said one or more pressure sources comprise an air puff.
15. The system of claim 13, wherein said one or more pressure sources comprise a mechanical actuator.
16. The system of claim 13, wherein said one or more pressure sources comprise a voice coil, speaker, or any combination thereof.
17. The system of claim 12, wherein said housing is a garment.
18. The system of claim 12, wherein said housing is a rigid mechanical structure.
19. The system of claim 12, wherein said one or more auditory signals comprise data capable of differentiating a healthy or an unhealthy state of said subject.
20. The system of claim 12, wherein said one or more transducers are circular.
21. The system of claim 12, further comprising a control module in electrical communication with said one or more processors, said one or more pressure sources, said one or more transducers, or any combination thereof.
22. The system of claim 21, wherein said control module comprises a personal computer, cloud processing architecture, a personal mobile computing device, or any combination thereof.
23. The system of claim 12, wherein said state is: healthy, chronic obstructive pulmonary disease, asthma, emphysema, pneumonia, congestive heart failure, any combination thereof states, or an indeterminant state.
24. A method of determining a physiologic state of a subject, the method comprising:
(a) detecting one or more auditory signals from said subject using one or more air coupled auscultation modules;
(b) processing said one or more auditory signals to determine a correlative relationship between said one or more auditory signals from said subject and a library of one or more auditory signals; and
(c) determining said physiological state of said subject based on said correlative relationship between said one or more auditory signals.
25. The method of claim 24, wherein said one or more air coupled auscultation modules comprise one or more transducers, one or more pressure sources, one or more processors, or any combination thereof.
26. The method of claim 24, wherein said physiological state comprises a diseased state, wherein said diseased state comprises cancer, chronic obstructive pulmonary disease, emphysema, or any combination thereof.
27. The method of claim 24, wherein said library comprises a correlative dataset correlating said subject's physiological state and a corresponding one or more auditory signals.
28. The method of claim 24, wherein said determining is accomplished by one or more machine learning algorithms.
29. The method of claim 28, wherein said one or more machine learning algorithms comprise k-means clustering, neural network, random forest, Naive Bayes, support vector machine, decision tree, logistic regression, or any combination thereof.
30. The method of claim 24, wherein said processing is completed in a cloud-based architecture, on-board within said one or more air coupled auscultation modules, on a remote computer server, or any combination thereof.
31. The method of claim 24, wherein said determining is completed in a cloud-based architecture, on-board within said one or more air coupled auscultation modules, on a remote computer server, or any combination thereof.
32. A device to measure biological auditory signals, the device comprising: one or more transducers configured to receive one or more auditory signals from a subject, wherein said one or more transducers are not in contact with the subject.
PCT/US2021/046754 2020-08-19 2021-08-19 Wearable auscultation device WO2022040456A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CA3190577A CA3190577A1 (en) 2020-08-19 2021-08-19 Wearable auscultation device
CN202180071076.1A CN116322513A (en) 2020-08-19 2021-08-19 Wearable auscultation device
JP2023512114A JP2023539116A (en) 2020-08-19 2021-08-19 wearable auscultation device
KR1020237006931A KR20230051516A (en) 2020-08-19 2021-08-19 wearable stethoscope
EP21859153.5A EP4199817A1 (en) 2020-08-19 2021-08-19 Wearable auscultation device
AU2021328481A AU2021328481A1 (en) 2020-08-19 2021-08-19 Wearable auscultation device
US18/171,215 US20230190222A1 (en) 2020-08-19 2023-02-17 Wearable auscultation device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063067502P 2020-08-19 2020-08-19
US63/067,502 2020-08-19

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/171,215 Continuation US20230190222A1 (en) 2020-08-19 2023-02-17 Wearable auscultation device

Publications (1)

Publication Number Publication Date
WO2022040456A1 true WO2022040456A1 (en) 2022-02-24

Family

ID=80323190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/046754 WO2022040456A1 (en) 2020-08-19 2021-08-19 Wearable auscultation device

Country Status (8)

Country Link
US (1) US20230190222A1 (en)
EP (1) EP4199817A1 (en)
JP (1) JP2023539116A (en)
KR (1) KR20230051516A (en)
CN (1) CN116322513A (en)
AU (1) AU2021328481A1 (en)
CA (1) CA3190577A1 (en)
WO (1) WO2022040456A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7976480B2 (en) * 2004-12-09 2011-07-12 Motorola Solutions, Inc. Wearable auscultation system and method
US20130226019A1 (en) * 2010-08-25 2013-08-29 Diacoustic Medical Devices (Pty) Ltd System and method for classifying a heart sound
US8827920B2 (en) * 2011-03-30 2014-09-09 Byung Hoon Lee Telemedical stethoscope
US20190099152A1 (en) * 2017-10-04 2019-04-04 Ausculsciences, Inc. Auscultatory sound-or-vibration sensor
US20190357777A1 (en) * 2006-12-19 2019-11-28 Valencell, Inc. Apparatus, systems and methods for obtaining cleaner physiological information signals
US20200178923A1 (en) * 2009-10-15 2020-06-11 Masimo Corporation Acoustic respiratory monitoring systems and methods

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2249706B1 (en) * 2008-03-04 2013-05-29 Koninklijke Philips Electronics N.V. Non invasive analysis of body sounds
JP5328990B2 (en) * 2010-12-10 2013-10-30 三菱電機株式会社 Aerial ultrasonic sensor
US11190868B2 (en) * 2017-04-18 2021-11-30 Massachusetts Institute Of Technology Electrostatic acoustic transducer utilized in a headphone device or an earbud

Also Published As

Publication number Publication date
CA3190577A1 (en) 2022-02-24
CN116322513A (en) 2023-06-23
AU2021328481A1 (en) 2023-04-13
KR20230051516A (en) 2023-04-18
JP2023539116A (en) 2023-09-13
US20230190222A1 (en) 2023-06-22
EP4199817A1 (en) 2023-06-28

Similar Documents

Publication Publication Date Title
US20210259560A1 (en) Methods and systems for determining a physiological or biological state or condition of a subject
US10117635B2 (en) Electronic acoustic stethoscope with ECG
US10092268B2 (en) Method and apparatus to monitor physiologic and biometric parameters using a non-invasive set of transducers
EP2440139B1 (en) Method and apparatus for recognizing moving anatomical structures using ultrasound
US11647992B2 (en) System and method for fusing ultrasound with additional signals
TW201806370A (en) System and method for providing a real-time signal segmentation and fiducial points alignment framework
US20170188978A1 (en) System and method of measuring hemodynamic parameters from the heart valve signal
TW200526174A (en) Analysis of auscultatory sounds using single value decomposition
CN104254283A (en) Diagnosing lung disease using transthoracic pulmonary doppler ultrasound during lung vibration
Malek et al. Design and development of wireless stethoscope with data logging function
US20230190222A1 (en) Wearable auscultation device
Saeidi et al. 3D heart sound source localization via combinational subspace methods for long-term heart monitoring
CN106264598A (en) The auscultation system that a kind of multiple instruments combines
US20190175141A1 (en) Detection and Quantification of Brain Motion and Pulsatility
Saeidi et al. Automatic cardiac phase detection of mitral and aortic valves stenosis and regurgitation via localization of active valves
De Panfilis et al. Multi-point accelerometric detection and principal component analysis of heart sounds
Areiza-Laverde et al. Analysis of cardiac vibration signals acquired from a novel implant placed on the gastric fundus
TWI840265B (en) Inspiration system related symptom sensing system and apparatus
US20230389869A1 (en) Multimodal physiological sensing systems and methods
CN110584599A (en) Wavelet transformation data processing system and method based on cardiac function dynamic monitoring
TWM647887U (en) Inspiration system related symptom sensing system and apparatus
Dosko et al. Human Precardiac Zone Vibrations Analysis Using Parametric Spectral Methods
WO2021250048A1 (en) Method and device for multidimensional analysis of the dynamics of cardiac activity
EP4161377A1 (en) Method and device for multidimensional analysis of the dynamics of cardiac activity

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21859153; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 3190577; Country of ref document: CA)
ENP Entry into the national phase (Ref document number: 2023512114; Country of ref document: JP; Kind code of ref document: A)
ENP Entry into the national phase (Ref document number: 20237006931; Country of ref document: KR; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2021859153; Country of ref document: EP; Effective date: 20230320)
ENP Entry into the national phase (Ref document number: 2021328481; Country of ref document: AU; Date of ref document: 20210819; Kind code of ref document: A)