WO2022201152A1 - Detection of oral versus nasal breathing - Google Patents

Detection of oral versus nasal breathing

Info

Publication number
WO2022201152A1
WO2022201152A1 (PCT/IL2022/050317)
Authority
WO
WIPO (PCT)
Prior art keywords
patient
mouth
computing device
indicator
therapist
Prior art date
Application number
PCT/IL2022/050317
Other languages
English (en)
Inventor
Michael Finkelstein
Original Assignee
Michael Finkelstein
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Michael Finkelstein
Publication of WO2022201152A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/45 For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B5/4538 Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B5/4542 Evaluating the mouth, e.g. the jaw
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/08 Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/087 Measuring breath flow
    • A61B5/0873 Measuring breath flow using optical means
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/48 Other medical applications
    • A61B5/486 Bio-feedback
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment

Definitions

  • The present invention relates generally to the field of automated physiological training.
  • Mouth breathing may lead to postural changes of the mouth and jaw. These changes may include lip incompetence, a low position of the tongue on the floor of the mouth, or a forward position of the tongue, pressing against the teeth, and an increase in the vertical facial height.
  • A class II malocclusion and a skeletal class II profile with increased overjet may develop when a child who mouth breathes also rotates the mandible in a posterior and inferior direction. The problem may be that muscles that depress the jaw when the mouth is opened exert a backward pressure on the jaw. This displaces the mandible distally and retards its growth.
  • The buccinator muscles become tense when the mouth is opened and may exert lingual pressure on the maxillary bicuspids and molars. These, in turn, do not receive support from the tongue, which then causes a narrowing of the palate and the upper dental arch. Finally, lip function becomes abnormal, as the lower lip may become enlarged compared with the upper lip, which can force the lower lip under the upper incisors. The upper incisors may then become further protruded, with increased overbite.
  • the speech therapy process may be improved by training patients to breathe nasally rather than orally.
  • Embodiments of the present invention include systems and methods for enabling both a patient and a patient's speech therapist to monitor the patient's open-mouth breathing, in order to train the patient to increase nasal breathing.
  • An embodiment of the present invention provides a computer-based system for detecting nasal and oral breathing of a patient, including a patient computing device having a camera, a microphone, a processor and a memory.
  • The memory has instructions that, when executed by the processor, cause the processor to implement steps of: capturing from the camera a sequence of images of the patient's face and capturing from the microphone an audio stream of the patient's speech; determining, from the audio stream, non-speaking intervals; and calculating, from the images acquired during the non-speaking intervals, a metric of open-mouth vs. closed-mouth periods.
  • The system then notifies the patient of the calculated metric, thereby enabling the patient to overcome improper oral breathing habits.
  • The calculated metric of open-mouth vs. closed-mouth periods may be calculated in several ways, such as the sum of open-mouth periods divided by the sum of open-mouth and closed-mouth periods, or as the ratio of open-mouth to closed-mouth periods, or similar functions. Presentation of the calculated metric enables the patient to understand the extent of his open-mouth breathing and thereby correct the habit.
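The metric variants described above are straightforward to express in code. The following Python sketch is illustrative only (the function names are not from the specification); it computes both the open-mouth fraction and the open-to-closed ratio from lists of (start, stop) periods in seconds:

```python
def open_mouth_fraction(open_periods, closed_periods):
    """Sum of open-mouth periods divided by total (open + closed) time.

    Each period is a (start, stop) pair in seconds; speaking intervals
    are assumed to have been excluded already.
    """
    open_time = sum(stop - start for start, stop in open_periods)
    closed_time = sum(stop - start for start, stop in closed_periods)
    total = open_time + closed_time
    return open_time / total if total > 0 else 0.0


def open_to_closed_ratio(open_periods, closed_periods):
    """Alternative metric: ratio of open-mouth to closed-mouth time."""
    open_time = sum(stop - start for start, stop in open_periods)
    closed_time = sum(stop - start for start, stop in closed_periods)
    return open_time / closed_time if closed_time > 0 else float("inf")
```

Either function could feed the notification step; the fraction form is bounded between 0 and 1, which makes it convenient to present to the patient as a percentage.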
  • The system may also alert the patient when the patient's mouth is open.
  • The system may alert the patient according to a preset alert frequency.
  • Notifying the patient of the calculated metric may include presenting the calculated metric to the patient together with an indicator of a progress goal.
  • Initiating the facial monitoring may include providing an "off" indicator when the patient's mouth is not identified in the sequence of images and providing an "on" indicator when the patient's mouth is identified in the sequence of images.
  • Alerting the patient comprises notifying the patient according to a preset alert frequency.
  • The patient may be notified by one or more of a light indicator, a vibration indicator, and an audio indicator of the patient computing device, which in some embodiments is a mobile device.
  • The steps implemented by the patient computing device may further include sending the calculated metric to a therapist computing device, which may be configured to implement steps of: registering the patient for subsequent tracking by the patient monitor; receiving the calculated metric from the patient computing device and responsively determining an indicator of patient progress with nasal breathing during facial monitoring over multiple training sessions; and presenting the indicator to a therapist.
  • The patient computing device includes a processor having a memory with instructions that implement steps of: capturing from the camera a sequence of images of the patient's face and capturing from the microphone an audio stream of the patient's speech; determining, from the audio stream, non-speaking intervals; calculating, from the images acquired during the non-speaking intervals, a metric of open-mouth vs. closed-mouth periods; and notifying the patient of the calculated metric.
  • FIG. 1 is a schematic diagram illustrating a system of monitoring a patient's breathing, in accordance with an embodiment of the present invention.
  • FIG. 2 is a flow diagram depicting a process for monitoring a patient's breathing, in accordance with an embodiment of the present invention.
  • FIGs. 3-7 are a set of screenshots of mobile or other computing devices configured to monitor a patient's breathing, in accordance with an embodiment of the present invention.
  • FIG. 1 is a schematic diagram illustrating a system 100 for monitoring breathing by a patient 110, in accordance with an embodiment of the present invention.
  • A patient computing device 120 is configured to monitor the patient's breathing.
  • Software loaded onto the patient computing device 120 includes a "patient monitor" application (or "app").
  • The patient app includes visual recognition functions, typically trained by machine learning, to identify when the patient's mouth is open or closed, in photos or in frames of real-time video captured by a camera 125.
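The specification does not disclose a particular recognition algorithm. One common approach, sketched below purely as an assumption, is to threshold a "mouth aspect ratio" computed from facial landmark points (e.g., points returned by a pre-trained landmark model); the threshold value here is hypothetical and would be calibrated, or personalized, per patient:

```python
import math

MOUTH_OPEN_THRESHOLD = 0.35  # assumed starting value; calibrated per patient


def mouth_aspect_ratio(upper_lip, lower_lip, left_corner, right_corner):
    """Vertical lip gap divided by mouth width, from 2-D landmark points.

    The four points are (x, y) pixel coordinates of the inner upper lip,
    inner lower lip, and the two mouth corners, as a facial-landmark
    model might provide them.
    """
    gap = math.dist(upper_lip, lower_lip)
    width = math.dist(left_corner, right_corner)
    return gap / width if width > 0 else 0.0


def is_mouth_open(upper_lip, lower_lip, left_corner, right_corner,
                  threshold=MOUTH_OPEN_THRESHOLD):
    """Classify one frame as open-mouth when the ratio exceeds the threshold."""
    return mouth_aspect_ratio(upper_lip, lower_lip,
                              left_corner, right_corner) > threshold
```

Because the ratio is normalized by mouth width, the classification is largely insensitive to the patient's distance from the camera.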
  • The patient app also includes audio recognition functions, trained to recognize the patient's speech sensed by a microphone 135.
  • The patient app may also be configured to present feedback to the patient, which may be provided on a device screen 130 associated with the computing device, or by lights, audio speakers, or a vibrator of the patient computing device.
  • The patient app may also transmit breathing statistics, during or after a training session, to a "therapist monitor" app used by a remote operator, such as a speech therapist, operating a therapist computing device 140.
  • Statistics may include metrics of oral and nasal breathing, which may be determined as time periods of open-mouth and closed-mouth breathing in audio or video streams, or as the number of open-mouth and closed-mouth images recorded at intervals during the training session.
  • The patient computing device 120 may be a personal mobile phone of the patient.
  • Alternatively, the patient computing device 120 may be any type of computer controller or personal computer (PC) workstation that includes a processor, memory storage storing executable code, and a camera (indicated as the camera 125).
  • The therapist computing device 140 may be a personal mobile phone of a therapist.
  • Alternatively, the therapist computing device 140 may be any type of computer controller, such as a personal computer (PC) workstation, that includes a processor and memory storage on which executable code is stored. Executable code may also be stored on a web server and downloaded at runtime to a browser of one or both of the patient and therapist computing devices.
  • Audio and video stream processing described below is typically performed in real time by the respective patient or therapist computing devices. That is, the processes of identifying speech intervals from the audio stream and of detecting non-speaking open-mouth and closed-mouth time periods are performed in real time, where the intervals are defined by start and stop times, and the periods may be similarly defined by start and stop times or by the number of image frames recorded in a given open- or closed-mouth state.
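As an illustration of the speech-interval detection, a minimal energy-gate sketch is shown below. It stands in for the trained audio recognition functions described in the text and is not the patented method; `frame_ms` and `threshold` are assumed tuning parameters:

```python
import numpy as np


def non_speaking_intervals(samples, rate, frame_ms=30, threshold=0.02):
    """Return (start, stop) times, in seconds, of low-energy stretches
    of an audio stream, treated here as non-speaking intervals.

    `samples` is a 1-D NumPy array of audio samples in [-1, 1] and
    `rate` is the sampling rate in Hz. A crude RMS-energy gate stands
    in for a trained speech recognizer.
    """
    frame_len = int(rate * frame_ms / 1000)
    intervals, start = [], None
    n_frames = len(samples) // frame_len
    for i in range(n_frames):
        frame = samples[i * frame_len:(i + 1) * frame_len]
        rms = float(np.sqrt(np.mean(frame ** 2)))  # frame energy
        quiet = rms < threshold
        t = i * frame_ms / 1000
        if quiet and start is None:
            start = t                    # a quiet interval begins
        elif not quiet and start is not None:
            intervals.append((start, t))  # speech resumed; close it
            start = None
    if start is not None:
        intervals.append((start, n_frames * frame_ms / 1000))
    return intervals
```

Only the image frames falling inside the returned intervals would then be passed to the open-mouth vs. closed-mouth classification.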
  • The patient and therapist computing devices may also be configured to upload audio and video streams to a remote server, such as a cloud-based server, for subsequent review by the therapist and/or patient.
  • Input from the patient to the patient computing device may be entered on a keyboard and mouse or on a touch screen, such as the device screen 130. Similar input devices may also operate with the therapist computing device 140.
  • The patient computing device 120 may have a means of remote communications with the therapist computing device 140, such as a Wi-Fi or Bluetooth connection, and vice versa.
  • The therapist app and the patient app may communicate directly with each other over an internet protocol, such as TCP/IP.
  • Alternatively, the two apps may communicate through a communications server. Communication primarily consists of the transfer of monitoring data from the patient computing device to the therapist computing device. In some embodiments, this transfer may be implemented by messages or email sent from the patient computing device to the therapist computing device.
  • Therapy conducted with system 100 typically includes multiple training sessions.
  • A patient sets the patient computing device 120 in a position such that his face is in the field of view of the camera 125, so that the camera can capture a video stream or sequence of photographs of the patient's face.
  • A microphone may be positioned near the patient to record sounds of breathing and/or speech. From the video stream and/or photographs and/or breathing sounds, the patient app determines when the patient is breathing through his mouth and when he is breathing through his nose. The patient app then provides feedback about the breathing metrics, either in real time or at the end of each session, to the patient and to the therapist.
  • FIG. 2 is a flow diagram depicting a process 200 for monitoring a patient's breathing, in accordance with an embodiment of the present invention.
  • Process 200 includes steps typically performed by the therapist app (indicated as therapist application 210), which is configured to run on the therapist computing device 140.
  • The process also includes steps typically performed by the patient app 220, which is configured to run on the patient computing device 120.
  • The therapist app 210 may be configured to initially register a new patient, at a step 212. Registration typically includes acquiring identifying information about a patient, such as age, gender, and contact information. As described below, this process may also be performed by the patient app 220.
  • The registration process may also include entry of initial test data regarding the patient's breathing patterns, based on previous therapy or diagnosis.
  • Initial test data may be stored in memory storage of the therapist computing device 140, for comparison with subsequently acquired data during system operation.
  • Goals for breathing training may also be set, determining how the patient's progress is subsequently presented with respect to target goals. For example, a goal may be to raise a patient's nose-breathing time during training sessions from 20% to 50% over the course of a week.
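The example goal above (raising nose-breathing time from 20% to 50% over a week) could be turned into per-day targets for the progress display. The linear ramp below is an assumption for illustration, not a method disclosed in the specification:

```python
def target_for_day(day, start_pct=20.0, goal_pct=50.0, days=7):
    """Interpolated nasal-breathing target (percent) for a training day.

    Sketches the example goal in the text: from `start_pct` on day 0 to
    `goal_pct` on day `days`, along a (hypothetical) linear ramp.
    Days outside the range are clamped to the endpoints.
    """
    day = min(max(day, 0), days)
    return start_pct + (goal_pct - start_pct) * day / days
```

Each session's measured metric could then be compared against `target_for_day` to show whether the patient is ahead of or behind the goal.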
  • The registration process prepares the therapist app to begin tracking a patient's progress.
  • The therapist app and/or the patient app may also provide a form for entering a patient notification interval, defining the interval between feedback notifications or "alerts" provided to the patient while the patient is monitored by the patient app.
  • The alert may indicate that the patient is mouth breathing, or may simply indicate a cumulative ratio of mouth to nose breathing.
  • Other parameters that may be set include a sampling frequency, that is, the rate of image capture and analysis, which may be reduced from the camera's video streaming rate to reduce the computing load on the patient computing device.
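Reducing the analysis rate below the camera's streaming rate amounts to frame decimation, which can be sketched as follows (a hypothetical helper, not part of the specification):

```python
def sampled_frame_indices(stream_fps, sample_hz, duration_s):
    """Indices of video frames to analyze when the analysis rate is
    reduced below the camera streaming rate.

    For example, a 30 fps stream analyzed at 2 Hz keeps every 15th
    frame. All other frames are skipped, reducing computing load.
    """
    step = max(1, round(stream_fps / sample_hz))
    total = int(stream_fps * duration_s)
    return list(range(0, total, step))
```

Only the returned indices would be passed through the (more expensive) open-mouth classification.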
  • Steps described above as associated with step 212 of the therapist app 210 may be implemented by the patient app 220, enabling the patient to enter initial registration data and system settings, such as feedback intervals, as the patient may operate the system on a standalone basis without guidance from a therapist.
  • A patient may also initialize the system, and may even begin working with the system before subsequently initiating interaction with a therapist who would use the therapist app.
  • The patient app 220 is configured for operation at a step 222.
  • Configuration may include testing that the patient app can recognize when the patient's mouth is open or closed, indicating mouth vs. nose breathing. This may include customizing the recognition for the specific patient, based on images acquired from the user, as described further hereinbelow.
  • Configuration may include testing and/or customizing an audio recognition system for the specific patient, for recognizing the patient's speech. In some embodiments, the system may also track the extent of the mouth opening.
  • The patient app includes visual recognition functions, typically implemented with a facial recognition model, as well as audio recognition functions for detecting speech. These functions are typically pre-trained to recognize the difference between open-mouth and closed-mouth (i.e., nasal) breathing of individuals. The pre-training of these functions is typically implemented by machine learning training steps.
  • Step 222 of the patient app may also include establishing a communications connection with the therapist app, which is primarily used to facilitate transmission of patient monitor data from the patient app to the therapist app. After the communications connection is established, set-up parameters set at the therapist app may also be transferred to the patient app. These set-up parameters may include patient goals and alert intervals, as described above.
  • The apps can also be used as a means of real-time messaging between the therapist and the patient; that is, messaging can be carried out while monitoring is also being performed.
  • The configuration also typically includes determining that the patient's face is in the field of view of the camera for facial recognition, as indicated below with respect to Fig. 7.
  • The microphone may also be set in a position to detect the patient's speech. After positioning the camera and/or the microphone, the patient initiates the patient app monitoring.
  • The patient app may include a feature that provides an "off" indicator if the patient's mouth is not in the field of view of the camera (i.e., the mouth is not identified by the face recognition engine), and provides an "on" indicator if the mouth is identified.
  • The on/off indicator may be a light of the device, or a light on the device screen, or it may be a vibration or sound.
  • A specific type of beep sound or pattern may be defined to ring to indicate that the mouth is being tracked, and a different sound or pattern may be defined to ring if tracking stops because the mouth is no longer identified.
  • Indicators may also be presented to the patient if there is no audio recognition of oral and/or nasal breathing.
  • Feedback may also be provided as an image or "still" of the patient's current pose, which may be contrasted to a correct pose (i.e., with the mouth closed) on a screen of the patient computing device, as indicated below with respect to Fig. 3.
  • Feedback may also be provided as a live or recorded video of the patient’s face.
  • Monitoring begins at a step 226.
  • The monitoring device(s) capture data sets of images and video streams of the patient's face, and audio streams of the patient's speech.
  • Monitoring continues, and the patient's open-mouth and closed-mouth times may be measured.
  • The data captured over time may be stored, so that a graph over time may be subsequently presented to the patient and/or therapist.
  • The actual video and/or audio streams may be stored, or, in order to reduce memory storage requirements, video frames at a reduced rate, such as one per second, may be stored.
  • The patient app provides indications of proper or improper breathing at intervals, where the interval time may be set as described above.
  • The patient app may be configured to check the patient every five minutes, or another set time interval, and to issue a warning beep or other indicator if the patient is breathing through the mouth when the check is performed. That is, the patient app may be configured to provide real-time (or approximately real-time) feedback to the patient (for example, with a sound or light indicator) when the breathing needs to be corrected.
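The periodic check-and-alert loop might look like the following sketch, with `check_mouth_open` and `alert` as hypothetical callbacks standing in for the app's recognition and notification layers; the clock and sleep functions are injectable, which also makes the loop testable:

```python
import time


def monitor_with_alerts(check_mouth_open, alert, interval_s=300.0,
                        session_s=1800.0, sleep=time.sleep,
                        clock=time.monotonic):
    """Periodically check the patient and alert on open-mouth breathing.

    Every `interval_s` seconds (five minutes by default, as in the text),
    `check_mouth_open()` is polled; `alert()` is called when it returns
    True. The session ends after `session_s` seconds. Both callbacks are
    assumed names, not part of the specification.
    """
    end = clock() + session_s
    while clock() < end:
        sleep(interval_s)
        if check_mouth_open():
            alert()  # e.g., warning beep, light, or vibration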
  • Alternatively, the patient app may be set to provide no real-time indicator to the patient, but rather to provide feedback only after a session of monitoring ends.
  • A session of monitoring of the patient's breathing may proceed, with the steps 226 to 230 being repeated, as indicated by the arrow in the figure from step 230 back to step 226.
  • The patient app completes a monitoring session, which may end either after a preset time or when stopped by the patient. Either at the end of the monitoring, or in approximately real time during the course of the monitoring, the patient app may send data to the therapist app, such as the length of time of the monitoring and the metric of mouth to nose breathing during the periods that the patient was not speaking (speaking intervals being irrelevant to the measure of improper breathing).
  • The patient app may send further data, such as a data point at regular intervals, for example every second, each data point indicating whether at that point in time the mouth was open or closed (or the patient was speaking).
  • The patient app may also send to the therapist app a video stream, or images at regular time intervals, and/or an audio stream, as described above.
  • Data sent to the therapist may also indicate the number of data measurements made, such as the number of images recorded, and may indicate separate metrics of mouth and nasal breathing, such as the number of images indicating each type of breathing, or total lengths of time that each type of breathing was indicated in non-speech video streams during a training session.
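If a session is recorded as one label per sampling interval, as suggested above, the per-session counts and metric sent to the therapist could be aggregated as follows (the three-way label format is an assumption for illustration):

```python
def session_summary(labels):
    """Summarize a session recorded as one label per sampling interval.

    Each label is assumed to be 'open', 'closed', or 'speaking';
    speaking samples are excluded from the breathing metric, since
    speaking intervals are irrelevant to the measure of improper
    breathing.
    """
    open_n = labels.count("open")
    closed_n = labels.count("closed")
    breathing = open_n + closed_n  # non-speaking samples only
    return {
        "samples": len(labels),
        "open": open_n,
        "closed": closed_n,
        "open_fraction": open_n / breathing if breathing else 0.0,
    }
```

The returned counts correspond to the "number of images indicating each type of breathing," and `open_fraction` to the mouth-to-nose breathing metric.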
  • The therapist app may receive the data from the patient app and, at the prompting of the therapist, present indicators of progress of the patient's training, such as a comparison of the oral vs. nasal breathing metric over the course of multiple training sessions.
  • The therapist app may include multiple options for presenting the patient's progress, including indicators of comparative cases, such as graphs of typical progress. Additionally, such indicators of progress may also be provided to the patient through the patient app.
  • Figs. 3-6 are a set of exemplary screenshots of mobile or other computing devices, configured with respective patient and therapist apps, to track a patient's breathing progress, in accordance with an embodiment of the present invention.
  • FIG. 3 is a screenshot of the patient computing device, showing exemplary feedback, indicating a correct pose of the patient, juxtaposed with an image (or alternatively, a video) of the patient's current pose.
  • Fig. 4 is a screenshot of a typical settings configuration screen for a patient app.
  • The screenshot shows some of the typical settings that the patient can configure, such as the sampling frequency, which is the rate at which images are captured and analyzed.
  • The screenshot also shows an option for the patient to set how alerts are to be provided, that is, whether by light or by ringing; if by ringing, a volume option is provided. Additional options shown allow the patient to get a cumulative rating (such as a nasal breathing percentage) and to bring up educational material on nose vs. mouth breathing.
  • Fig. 5 is a screenshot of a typical settings screen for a therapist app.
  • Fig. 6 is a screenshot of a training progress screen, which can be presented by both the therapist app and the patient app.
  • The training progress screen indicates a cumulative "rate of success" over several training sessions, where the rate of success is indicated as the ratio of nose breathing to total breathing time during monitoring.
  • Fig. 7 is a screenshot of a configuration interface for determining that the patient's face is in the field of view of the camera for facial recognition, as described above. Acquisition of the patient's face may also be applied to a facial recognition engine of the patient app, thereby personalizing the app to better recognize (i.e., distinguish) open- and closed-mouth periods.
  • Executable code of the patient and therapist computing devices may include an application, or "app," a program, a process, a task, or a script, and may be executed, possibly under control of an operating system.
  • The system may be an add-on, upgrade, or retrofit to a commercial product, such as an image recognition program.
  • Processing elements of the system described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof.
  • Such elements can be implemented as a computer program product, tangibly embodied in an information carrier, such as a non-transient, machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, such as a programmable processor or computer, or may be deployed to be executed on multiple computers at one site or distributed across multiple sites.
  • Memory storage for software and data may include one or more memory units, including one or more types of storage media. Examples of storage media include, but are not limited to, magnetic media, optical media, and integrated circuits such as read-only memory devices (ROM) and random access memory (RAM).
  • Network interface modules may control the sending and receiving of data packets over networks. Method steps associated with the system and process can be rearranged and/or one or more such steps can be omitted to achieve the same, or similar, results to those described herein.
  • Terms for computer processing used herein, such as “calculating,” “determining,” “establishing,” and “analyzing,” refer to operations and processes of a computer, a computing platform, a computing system, or other electronic computing devices that manipulate or otherwise transform data represented as electronic quantities within the computing device's registers and/or memories into other data similarly represented as physical quantities within the computer's registers and/or memories, or within another non-transitory storage medium that may store instructions to perform operations and/or processes.
  • The terms “plurality” and “a plurality” as used herein include, for example, “multiple” or “two or more”.
  • The terms “plurality” or “a plurality” may be used throughout the specification to describe two or more components, devices, elements, units, parameters, or the like.
  • The phrase “real-time” means concurrently, within the computational limits of the employed processors.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Rheumatology (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Physiology (AREA)
  • Pulmonology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A system and methods are provided for detecting nasal and oral breathing of a patient. A patient computing device includes a camera and a microphone and is configured to implement: capturing from the camera a sequence of images of the patient's face and capturing from the microphone an audio stream of the patient's speech; determining, from the audio stream, non-speaking intervals; calculating, from the images acquired during the non-speaking intervals, a metric of open-mouth versus closed-mouth periods; and notifying the patient of the calculated metric.
PCT/IL2022/050317 2021-03-21 2022-03-21 Detection of oral versus nasal breathing WO2022201152A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163163878P 2021-03-21 2021-03-21
US63/163,878 2021-03-21

Publications (1)

Publication Number Publication Date
WO2022201152A1 true WO2022201152A1 (fr) 2022-09-29

Family

ID=83395246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2022/050317 WO2022201152A1 (fr) Detection of oral versus nasal breathing

Country Status (1)

Country Link
WO (1) WO2022201152A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012058727A2 (fr) * 2010-11-05 2012-05-10 Resmed Limited Acoustic detection mask systems and/or methods
US20170333754A1 (en) * 2016-05-17 2017-11-23 Kuaiwear Limited Multi-sport biometric feedback device, system, and method for adaptive coaching
US9892655B2 (en) * 2012-11-28 2018-02-13 Judy Sibille SNOW Method to provide feedback to a physical therapy patient or athlete

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012058727A2 (fr) * 2010-11-05 2012-05-10 Resmed Limited Acoustic detection mask systems and/or methods
US9892655B2 (en) * 2012-11-28 2018-02-13 Judy Sibille SNOW Method to provide feedback to a physical therapy patient or athlete
US20170333754A1 (en) * 2016-05-17 2017-11-23 Kuaiwear Limited Multi-sport biometric feedback device, system, and method for adaptive coaching

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LESTER ROSEMARY A., HOIT JEANNETTE D.: "Nasal and Oral Inspiration During Natural Speech Breathing", JOURNAL OF SPEECH, LANGUAGE AND HEARING RESEARCH, AMERICAN SPEECH-LANGUAGE-HEARING ASSOCIATION, ROCKVILLE, MD, US, vol. 57, no. 3, 1 June 2014 (2014-06-01), US , pages 734 - 742, XP055974970, ISSN: 1092-4388, DOI: 10.1044/1092-4388(2013/13-0096) *

Similar Documents

Publication Publication Date Title
  • JP7453195B2 (ja) Medication administration training
AU2018335288B2 (en) Apparatus and method for recognition of suspicious activities
US20100305466A1 (en) Incentive spirometry and non-contact pain reduction system
US20110117528A1 (en) Remote physical therapy apparatus
  • KR102388337B1 (ko) Method for providing a service of an application for improving temporomandibular joint disorders
  • CN109044303B (zh) Blood pressure measurement method, apparatus and device based on a wearable device
  • JP2019509094A (ja) Device, system and method for detection and monitoring of dysphagia of a subject
US20190192033A1 (en) Neurofeedback systems and methods
US20140350355A1 (en) Monitoring and managing sleep breathing disorders
US20110263997A1 (en) System and method for remotely diagnosing and managing treatment of restrictive and obstructive lung disease and cardiopulmonary disorders
  • JP2021508577A (ja) Inhaler training system and method
  • EP1997428A1 (fr) Diagnostic system and program for use in the diagnostic system
  • JP2023553957A (ja) System and method for determining sleep analysis based on body images
  • WO2022201152A1 (fr) Detection of oral versus nasal breathing
  • WO2016108225A1 (fr) Systems and methods for monitoring and encouraging exercise in infants
US20220215926A1 (en) System for measuring breath and for adapting breath exercices
  • CN115551579A (zh) Systems and methods for assessing the condition of a ventilated patient
  • JP2023501176A (ja) Voice-based breathing prediction
  • JP2020035122A (ja) Facial movement measuring device
  • WO2023000599A1 (fr) Bone conduction-based eating monitoring method and apparatus, terminal device, and storage medium
  • JP2022126288A (ja) Mastication improvement suggestion system, mastication improvement suggestion program, and mastication improvement suggestion method
US20200138547A1 (en) A myofunctional orthodontics system and method
  • JP2021105878A (ja) Mastication support system
US20230181116A1 (en) Devices and methods for sensing physiological characteristics
US20240055117A1 (en) System and method for vestibular and/or balance rehabilitation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22774499

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22774499

Country of ref document: EP

Kind code of ref document: A1