CN116888682A - System and method for continuously adjusting personalized mask shape - Google Patents

Publication number
CN116888682A
Authority
CN
China
Prior art keywords
user
data
sensor
interface
face
Prior art date
Legal status
Pending
Application number
CN202280016909.9A
Other languages
Chinese (zh)
Inventor
格雷戈里·罗伯特·皮克
内森·泽雷什·刘
萨基娜·德·索萨
迈克尔·克里斯托弗·豪格
雷德蒙德·舒尔德迪斯
Current Assignee
Resmed Pty Ltd
Resmed Paris SAS
Resmed Sensor Technologies Ltd
Original Assignee
Resmed Pty Ltd
Resmed Paris SAS
Resmed Sensor Technologies Ltd
Priority date
Filing date
Publication date
Application filed by Resmed Pty Ltd, Resmed Paris SAS, Resmed Sensor Technologies Ltd
Priority claimed from PCT/US2022/018178 (published as WO 2022/183116 A1)
Publication of CN116888682A


Abstract

A system and method for matching an interface to a user's face for respiratory therapy. A facial image of the user is stored, and facial features are determined based on the facial image. A database stores facial contours based on facial features and a corresponding plurality of interfaces. The database also stores operational data of respiratory therapy devices used with the plurality of corresponding interfaces. A selection engine coupled to the database is operable to select an interface for the user from the plurality of corresponding interfaces according to the stored operational data and the facial features, based on a desired effect. The collected data may also be used to determine whether the selected interface is properly fitted to the user's face.

Description

System and method for continuously adjusting personalized mask shape
Priority statement
The present disclosure claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 63/154,223, filed in February 2021, and U.S. Provisional Patent Application Ser. No. 63/168,635, filed in March 2021. The contents of these applications are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to interfaces for respiratory therapy devices, and more particularly to a system for better selection of a mask based on individual user data.
Background
There are a range of respiratory disorders. Certain disorders may be characterized by particular events such as apneas, hypopneas, and hyperpneas. Obstructive Sleep Apnea (OSA) is a form of Sleep Disordered Breathing (SDB) characterized by events that include occlusion or obstruction of the upper airway during sleep. It results from a combination of an abnormally small upper airway and the normal loss of muscle tone in the region of the tongue, soft palate, and posterior oropharyngeal wall during sleep. The condition causes the affected patient to stop breathing, typically for periods of 30 to 120 seconds, sometimes 200 to 300 times per night. This often results in excessive daytime sleepiness and may lead to cardiovascular disease and brain damage. The syndrome is a common disorder, particularly in middle-aged overweight men, although the affected person may not be aware of the problem.
Other sleep-related disorders include Cheyne-Stokes Respiration (CSR), Obesity Hypoventilation Syndrome (OHS), and Chronic Obstructive Pulmonary Disease (COPD). COPD encompasses any of a group of lower airway diseases that share certain common characteristics, including increased resistance to airflow, an extended expiratory phase of respiration, and loss of the normal elasticity of the lungs. Examples of COPD are emphysema and chronic bronchitis. COPD is caused by chronic tobacco smoking (the major risk factor), occupational exposure, air pollution, and genetic factors.
Continuous Positive Airway Pressure (CPAP) therapy has been used to treat Obstructive Sleep Apnea (OSA). The application of continuous positive airway pressure acts as a pneumatic splint and may prevent upper airway occlusion by pushing the soft palate and tongue forward and away from the posterior oropharyngeal wall.
Non-invasive ventilation (NIV) provides ventilatory support to the patient through the upper airway to assist the patient in taking a full breath and/or to maintain adequate oxygen levels in the body by performing some or all of the work of breathing. The ventilatory support is provided via a user interface. NIV has been used to treat CSR, OHS, COPD, and chest wall disorders. In some forms, the comfort and effectiveness of these therapies may be improved. Invasive ventilation (IV) provides ventilatory support for patients who are no longer able to breathe effectively on their own, and may be provided using a tracheostomy tube.
The treatment system (also identified herein as a respiratory therapy system) may include a respiratory pressure therapy device (RPT device), an air circuit, a humidifier, a user interface, and data management. The patient or user interface may be used to couple the breathing equipment to its wearer, for example by providing a flow of air to an inlet of the airway. The flow of air may be provided to the patient's nose and/or mouth via a mask, to the patient's mouth via a tube, or to the patient's trachea via a tracheostomy tube. Depending on the therapy to be applied, the user interface may form a seal with, for example, a region of the patient's face, in order to deliver gas at a pressure that is sufficiently different from ambient pressure (e.g., at a positive pressure of about 10 cmH2O relative to ambient pressure) to effect the therapy. For other forms of therapy, such as the delivery of oxygen, the user interface may not include a seal sufficient to facilitate delivery of the gas supply to the airway at a positive pressure of about 10 cmH2O. Treatment of respiratory diseases by such therapy may be voluntary; thus, if the patient finds the means for providing such treatment uncomfortable, difficult to use, expensive, or unsightly, the patient may choose not to follow the treatment.
The design of the user interface presents several challenges. The face has a complex three-dimensional shape. The size and shape of the nose vary considerably between individuals. Because the head includes bone, cartilage, and soft tissue, different regions of the face respond differently to mechanical forces. The jaw, or mandible, may move relative to the other bones of the skull. The entire head may move during the course of a respiratory therapy session.
Because of these challenges, some masks suffer from one or more of the following problems: they are obtrusive, unsightly, expensive, poorly fitting, difficult to use, and uncomfortable, especially when worn for long periods or when the user is unfamiliar with the system. For example, masks designed solely for aviators, masks designed as part of personal protective equipment (e.g., filtering masks), SCUBA masks, or masks designed for administering anesthetic agents may be acceptable for their original application, but are nevertheless not as comfortable as desired when worn for extended periods (e.g., several hours). Such discomfort may lead to reduced user compliance with the therapy. This is especially true if the mask is to be worn during sleep.
CPAP therapy is highly effective in treating certain respiratory disorders, provided that the user complies with the therapy. Obtaining a user interface allows the user to engage in positive pressure therapy. Users seeking their first user interface, or a new user interface to replace an old one, typically consult a durable medical equipment provider, who determines a recommended user interface size based on measurements of the user's facial anatomy. If the mask is uncomfortable or difficult to use, the user may not follow the treatment. Because users are often advised to clean their masks regularly, a mask that is difficult to clean (e.g., difficult to assemble or disassemble) may go uncleaned, which can also affect user compliance. For air pressure therapy to be effective, the mask must not only be comfortable to wear but must also form a good seal with the face to minimize air leakage.
As described above, the user interface may be provided to the user in various forms, such as a nasal mask or full face mask/oronasal mask (FFM) or nasal pillow mask. Such user interfaces are manufactured using various dimensions to accommodate the anatomical features of a particular user in order to provide a comfortable interface, for example, for providing positive pressure therapy. Such user interface dimensions may be customized to correspond to a particular facial anatomy of a particular user, or may be designed to accommodate a population of individuals having anatomies that fall within a predefined spatial boundary or range. However, in some cases, the mask may be provided in a variety of standard sizes, from which the appropriate size must be selected.
In this regard, sizing a user interface for a user is typically performed by a trained individual, such as a Durable Medical Equipment (DME) provider or physician. Typically, a user who needs a user interface to initiate or continue positive pressure therapy visits a trained individual at a fitting facility, where a series of measurements are made in an effort to determine the appropriate user interface size from among the standard sizes. An appropriate size is intended to mean a particular combination of dimensions of certain features of the user interface, such as the seal-forming structure, that provides sufficient comfort and sealing to achieve positive pressure therapy. Sizing in this manner is not only labor intensive but also inconvenient. The inconvenience of finding time in a busy schedule, or in some cases having to travel a great distance, is a barrier that prevents many users from receiving a new or replacement user interface and ultimately from engaging in positive pressure therapy. Yet the choice of the most suitable size is important for treatment quality and compliance.
There is a need for a system that allows for accurate personalized adaptation of a user interface based on selected facial dimension data. There is a need for a system that incorporates data relating to other similar users using respiratory therapy devices to further select an interface that provides comfort of respiratory therapy in use. There is also a need to select an interface that minimizes leakage between the interface and the user's face.
Disclosure of Invention
The disclosed system provides an adaptable approach to sizing masks used with respiratory therapy devices so as to improve individual users' adherence to therapy. The system collects facial data from the primary user, as well as respiratory therapy device usage data and other data from a large number of users, to assist in selecting the best mask for the primary user.
One disclosed example is a system that selects an interface for respiratory therapy that is appropriate for the face of a user. The system includes a storage device that stores a facial image of the user. A facial profile engine is operable to determine facial features based on the facial image. One or more databases store a plurality of facial features from a community of users and a corresponding plurality of interfaces. The one or more databases also store operational data of respiratory therapy devices used by the community of users with the plurality of corresponding interfaces. A selection engine is coupled to the one or more databases. The selection engine is operable to select an interface for the user from the plurality of corresponding interfaces from the stored operational data and the determined facial features, based on a desired effect.
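The disclosure does not prescribe a particular data model or ranking algorithm for the selection engine, so the following Python sketch is only one assumed arrangement: community records pair facial dimensions and device operational data with the interface used, and candidate interfaces are ranked for the facially most similar users according to a desired effect such as low leak or high compliance. All class names, field names, and the nearest-neighbour ranking itself are illustrative assumptions, not details taken from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CommunityRecord:
    """One community user's stored data (hypothetical schema)."""
    face_height_mm: float     # nose bridge point to chin superior point
    nose_width_mm: float      # between the left and right nose wing ends
    nose_depth_mm: float      # nose tip point to alar ridge point
    interface_model: str      # e.g. "nasal_mask_M" (illustrative label)
    mean_leak_lpm: float      # operational data reported by the RPT device
    compliance_hours: float   # average nightly usage

def select_interface(user_dims: Dict[str, float], records: List[CommunityRecord],
                     desired_effect: str = "low_leak", k: int = 20) -> str:
    """Pick the interface whose k most facially similar community users
    best achieved the desired effect (minimal leak or maximal compliance)."""
    def distance(r: CommunityRecord) -> float:
        return ((r.face_height_mm - user_dims["face_height_mm"]) ** 2 +
                (r.nose_width_mm - user_dims["nose_width_mm"]) ** 2 +
                (r.nose_depth_mm - user_dims["nose_depth_mm"]) ** 2)

    nearest = sorted(records, key=distance)[:k]
    by_model: Dict[str, List[CommunityRecord]] = {}
    for r in nearest:
        by_model.setdefault(r.interface_model, []).append(r)

    def score(rs: List[CommunityRecord]) -> float:
        if desired_effect == "low_leak":
            return -sum(r.mean_leak_lpm for r in rs) / len(rs)   # lower leak is better
        return sum(r.compliance_hours for r in rs) / len(rs)     # higher compliance is better

    return max(by_model.items(), key=lambda kv: score(kv[1]))[0]
```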
Another disclosed example is a method of selecting an interface for respiratory therapy that is appropriate for a user's face. A facial image of the user is stored in a storage device. Facial landmarks are determined from the facial image, and facial dimensions are determined based on these landmarks. A plurality of facial features from a user population and a corresponding plurality of interfaces used by the user population are stored in one or more databases. Operational data of respiratory therapy devices with the plurality of corresponding interfaces used by the user population is stored in the one or more databases. An interface is selected for the user from the plurality of corresponding interfaces from the stored operational data and the determined facial features, based on a desired effect.
According to some implementations, an example method includes receiving sensor data associated with a current suitability of an interface on a face of a user. The interface may be coupled to a respiratory device. The sensor data is collected by one or more sensors of a mobile device that is separate from the respiratory device. The method also includes generating a facial map using the sensor data. The facial map indicates one or more features of the user's face. The method also includes identifying a characteristic associated with the current suitability using the sensor data and the facial map. The characteristic indicates the quality of the current suitability and is associated with a characteristic location on the facial map. The method also includes generating output feedback based on the identified characteristic and the characteristic location. The output feedback may be generated to assess or improve the current suitability.
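As a hedged illustration of this method flow (sensor data, facial map, characteristic with a location, output feedback), the sketch below represents a characteristic as a severity value tied to a location on the facial map and converts it into feedback text naming the nearest facial feature. The data structures, threshold, and message wording are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class FitCharacteristic:
    name: str                              # e.g. "local_temperature_change"
    location: Tuple[float, float, float]   # position on the facial map (assumed 3D coordinates)
    severity: float                        # 0.0 (good fit) .. 1.0 (poor fit)

def generate_feedback(facial_map: Dict[str, Tuple[float, float, float]],
                      characteristics: List[FitCharacteristic],
                      severity_threshold: float = 0.5) -> List[str]:
    """Turn identified fit characteristics into output feedback by naming the
    facial-map feature nearest to each problem location."""
    feedback = []
    for c in characteristics:
        if c.severity < severity_threshold:
            continue
        nearest_feature, _ = min(
            facial_map.items(),
            key=lambda kv: sum((a - b) ** 2 for a, b in zip(kv[1], c.location)))
        feedback.append(f"Possible poor seal near the {nearest_feature} ({c.name}); "
                        f"try re-seating or adjusting the cushion in that area.")
    return feedback or ["No fit issues detected for the current fit."]
```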
According to some implementations, an example system includes an electronic interface, a memory, and a control system. The electronic interface is configured to receive data associated with a current suitability of the interface. The memory stores machine readable instructions. The control system includes one or more processors configured to execute machine-readable instructions to generate a facial map using the received data and identify characteristics associated with the current suitability based on the received data and the facial map. The control system is also configured to generate output feedback based on the identified characteristic. Output feedback may be generated to assess or improve current suitability.
The above summary is not intended to represent each embodiment, or every aspect, of the present disclosure. Rather, the foregoing summary merely provides an example of some of the novel aspects and features set forth herein. The above features and advantages, and other features and advantages of the present disclosure will be readily apparent from the following detailed description of the representative embodiments and modes for carrying out the invention when taken in connection with the accompanying drawings and appended claims.
Drawings
The disclosure will be better understood from the following description of exemplary embodiments taken in conjunction with the accompanying drawings in which:
Fig. 1 shows a system comprising a user wearing a user interface in the form of a full face mask to receive PAP therapy from an example respiratory pressure therapy device;
FIG. 2 illustrates a user interface in the form of a nasal mask with a headgear in accordance with one form of the present technique;
FIG. 3A is a front view of a face with several features of surface anatomy identified;
FIG. 3B is a side view of a head having several features of an identified surface anatomy;
FIG. 3C is a bottom view of a nose with several features identified;
FIG. 4A illustrates a respiratory pressure treatment apparatus in accordance with one form of the present technique;
FIG. 4B is a schematic illustration of the pneumatic path of a respiratory pressure treatment apparatus in accordance with one form of the present technique;
FIG. 4C is a schematic diagram of electrical components of a respiratory pressure treatment apparatus in accordance with one form of the present technique;
FIG. 4D is a schematic diagram of the primary data processing components of a respiratory pressure treatment system in accordance with one form of the present technique;
FIG. 5 is a block diagram of the arrangement of acoustic sensors in a hose supplying air to the user interface in FIG. 2;
FIG. 6 is a diagram of components of a computing device for capturing facial data;
FIG. 7 is a diagram of an example system for automatically selecting a patient interface, the system including a computing device;
FIG. 8A is an example face scan showing different landmark points to identify face dimensions for mask sizing;
FIG. 8B is a view of the facial scan of FIG. 8A showing different landmark points to identify a first facial measurement;
FIG. 8C is a view of the facial scan of FIG. 8A showing different landmark points to identify a second facial measurement;
FIG. 8D is a view of the facial scan of FIG. 8A showing different landmark points to identify a third facial measurement;
FIG. 9 is a flow chart of a process for selecting a mask for a user based on a scan and analysis of user inputs in view of big data collected from a user database;
FIG. 10 is a flow chart of a subsequent evaluation process based on an initial selection of a mask to adjust relevant parameters based on data related to a primary user;
FIG. 11 is a perspective view of a user controlling a user device to collect sensor data associated with a current suitability of a user interface;
FIG. 12 is a user view of a user device for identifying thermal characteristics associated with a current suitability of a user interface;
FIG. 13 is a user view of a user device for identifying profile-based characteristics associated with a current suitability of a user interface;
FIG. 14 is a flow chart depicting a process for evaluating the suitability of a user interface across transition events of the user interface; and
FIG. 15 is a flow chart depicting a process for evaluating the suitability of a user interface.
The present disclosure is susceptible to various modifications and alternative forms. Some representative embodiments are shown by way of example in the drawings and will be described in detail herein. It should be understood, however, that the invention is not intended to be limited to the particular forms disclosed. On the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the invention as defined by the appended claims.
Detailed Description
The invention may be embodied in many different forms. Representative embodiments are shown in the drawings and will be described in detail herein. The present disclosure is an exemplification of the principles of the disclosure and is not intended to limit its broad aspects to the illustrated embodiments. Elements and limitations that are disclosed, for example, in the Abstract, Summary of the Invention, and Detailed Description sections, but that are not explicitly set forth in the claims, should not be incorporated into the claims, singly or collectively, by implication, inference, or otherwise. For purposes of this detailed description, the singular includes the plural and vice versa unless specifically disclaimed, and the word "comprising" means "including but not limited to." Moreover, words of approximation such as "about," "nearly," "substantially," "approximately," and the like may be used herein to mean, for example, "at, near, or nearly at," or "within 3 to 5 percent of," or "within acceptable manufacturing tolerances," or any logical combination thereof.
The present disclosure relates to a system and method for selecting an optimal interface for a user of a respiratory therapy device. The system collects individual facial feature data from the user with a facial scanning application on a mobile device and generates a 3D model of the face from the scanned image. The system accurately determines key landmarks on the scanned image to determine different facial dimensions. The system also collects data from multiple users, learning correlations between different masks and successful respiratory therapy from the user data and from operational data of respiratory therapy devices used with the different masks. Using the facial dimension data and the data collected from the multiple users of the interfaces and respiratory therapy devices, the system selects an appropriate mask size and type for the user by comparison with successful matches made in the past. The data may also be used to derive new insights that may be provided to the user to increase compliance with, and adherence to, the treatment. The method thus combines appropriate facial scanning with landmark recognition, mask selection, therapy monitoring, and feedback to provide efficient user interface selection.
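One way to read this paragraph is as a closed loop: scan, measure, select, monitor therapy, and re-select when the monitored outcome is undesired. The sketch below expresses that loop with placeholder callables standing in for the components described elsewhere in the disclosure; the leak and compliance thresholds are illustrative assumptions rather than values taken from the patent.

```python
def mask_selection_loop(scan_face, extract_landmarks, measure_dimensions,
                        choose_interface, monitor_therapy, max_iterations: int = 3):
    """Illustrative closed loop: scan the face, select a mask, monitor therapy,
    and re-select when operational data shows an undesired outcome.
    The callables are placeholders for components described in the disclosure;
    the leak (24 L/min) and compliance (4 h) thresholds are assumed examples."""
    landmarks = extract_landmarks(scan_face())
    dims = measure_dimensions(landmarks)
    tried = []
    interface = None
    for _ in range(max_iterations):
        interface = choose_interface(dims, exclude=tried)
        outcome = monitor_therapy(interface)   # e.g. {"leak_lpm": 30.0, "compliance_hours": 2.5}
        if outcome["leak_lpm"] < 24.0 and outcome["compliance_hours"] >= 4.0:
            return interface                   # desired effect achieved
        tried.append(interface)                # undesired outcome: try another interface
    return interface
```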
Certain aspects and features of the present disclosure relate to assessing and improving the fit of a user interface (e.g., a filtering face mask or a user interface of a respiratory therapy device). Sensor data from one or more sensors of a user device (e.g., a portable user device such as a smartphone, also known as a mobile computing device or mobile device) may be used to help a user ensure that a wearable user interface (e.g., a user interface to be used with a respiratory device) is properly fitted to the user's face. The sensors may collect data about the user's face i) before the user interface is donned; ii) while the user interface is worn; and/or iii) after the user interface is removed. A facial map may be generated to identify one or more features of the user's face. The sensor data and the facial map may then be used to identify characteristics associated with the suitability of the user interface, which may be used to generate output feedback to evaluate and/or improve the suitability.
In other embodiments of the disclosed example system, the interface is a mask. In another embodiment, the respiratory therapy device is configured to provide one or more of Positive Airway Pressure (PAP) or non-invasive ventilation (NIV). In another embodiment, at least one of the respiratory therapy devices includes an audio sensor to collect audio data during operation of the at least one of the respiratory therapy devices. In another embodiment, the selection engine is operable to analyze the audio data to determine a type of corresponding interface of at least one of the respiratory therapy devices based on matching the audio data with acoustic signatures of known interfaces. In another embodiment, the selection engine selects the interface based on a comparison of demographic data of the user with demographic data of the user population stored in the one or more databases. In another embodiment, the operational data includes one of a flow rate, a motor speed, and a treatment pressure. In another embodiment, the facial image is determined from a scan from a mobile device that includes a camera. In another embodiment, the mobile device includes a depth sensor, the camera is a 3D camera, and the facial features are three-dimensional features derived from a gridded surface derived from the facial image. In another embodiment, the facial image is a two-dimensional image comprising landmarks, and the facial features are three-dimensional features derived from the landmarks. In another embodiment, the facial image is one of a plurality of two-dimensional facial images, and the facial features are three-dimensional features derived from a 3D deformable model fitted to match the facial images. In another embodiment, the facial image includes landmarks associated with at least one facial dimension. In another embodiment, the facial dimensions include face height, nose width, and nose depth. In another embodiment, the desired effect is a seal between the interface and the surface of the face to prevent leakage. In another embodiment, the desired effect is user compliance with the therapy. In another embodiment, the system includes a mobile computing device operable to collect subjective data input from the user, and the selection of the interface is based at least in part on the subjective data. In another embodiment, the system includes a machine learning module operable to determine a type of operational data associated with an interface that achieves the desired effect. In another embodiment, the selected interface includes one of a plurality of types of interfaces and one of a plurality of sizes of interfaces. In another embodiment, the selection engine receives feedback from the user based on operation of the selected interface and, in response to an undesired result, selects another interface based on the desired effect. In another embodiment, the undesired result is one of low compliance of the user with the therapy, high leakage, or unsatisfactory subjective outcome data.
In other embodiments of the above-disclosed method of providing a suitably fitting interface, the interface is a mask. In another embodiment, the respiratory therapy device is configured to provide one or more of Positive Airway Pressure (PAP) or non-invasive ventilation (NIV). In another embodiment, at least one of the respiratory therapy devices includes an audio sensor to collect audio data during operation of the at least one of the respiratory therapy devices. In another embodiment, the selecting includes analyzing the audio data to determine a type of corresponding interface of at least one of the respiratory therapy devices based on matching the audio data with acoustic signatures of known interfaces. In another embodiment, the selecting includes comparing demographic data of the user with demographic data of the user population stored in the one or more databases. In another embodiment, the operational data includes one of a flow rate, a motor speed, and a treatment pressure. In another embodiment, the facial image is determined from a scan from a mobile device that includes a camera. In another embodiment, the mobile device includes a depth sensor, the camera is a 3D camera, and the facial features are three-dimensional features derived from a gridded surface derived from the facial image. In another embodiment, the facial image is a two-dimensional image comprising landmarks, and the facial features are three-dimensional features derived from the landmarks. In another embodiment, the facial image is one of a plurality of two-dimensional facial images, and the facial features are three-dimensional features derived from a 3D deformable model fitted to match the facial images. In another embodiment, the facial image includes landmarks related to at least one facial dimension. In another embodiment, the facial dimensions include face height, nose width, and nose depth. In another embodiment, the desired effect is a seal between the interface and the surface of the face to prevent leakage. In another embodiment, the desired effect is user compliance with the therapy. In another embodiment, the method includes collecting, by at least one mobile computing device, subjective data input from the user, and the selection of the interface is based at least in part on the subjective data. In another embodiment, the method includes determining, via a machine learning module, a type of operational data associated with an interface that achieves the desired effect. In another embodiment, the selected interface includes one of a plurality of types of interfaces and one of a plurality of sizes of interfaces. In another embodiment, the method includes receiving feedback from the user based on operation of the selected interface, and selecting another interface based on the desired effect in response to an undesired result. In another embodiment, the undesired result is one of low compliance of the user with the therapy, high leakage, or unsatisfactory subjective outcome data.
In other embodiments of the above-disclosed method of providing a customized fit to a user of an interface that is fluidly coupleable to a respiratory device, generating the output feedback includes determining a suggested action that, if implemented, would affect the characteristic so as to improve the current suitability; and presenting the suggested action using an electronic interface of the mobile device. In another embodiment, the sensor data comprises infrared data from: i) a passive thermal sensor; ii) an active thermal sensor; or iii) both i and ii. In another embodiment, the method further includes receiving sensor data associated with a current suitability of the selected interface on the face of the user. The sensor data is collected by one or more sensors of the mobile device. The method also includes generating a facial map using the sensor data. The facial map indicates one or more features of the user's face. The method also includes identifying a characteristic associated with the current suitability using the sensor data and the facial map. The characteristic indicates a quality of the current suitability, and the characteristic is associated with a characteristic location on the facial map. The method also includes generating output feedback based on the identified characteristic and the characteristic location. In another embodiment, the sensor data includes distance data indicating one or more distances between one or more sensors of the mobile device and the face of the user. In another embodiment, the one or more sensors include: i) a proximity sensor; ii) an infrared-based lattice sensor; iii) a LiDAR sensor; iv) a MEMS micromirror projector based sensor; or v) any combination of i to iv. In another embodiment, the collection of the sensor data is performed: i) before the user dons the interface; ii) while the user wears the interface with the current suitability; iii) after the user removes the interface; or iv) any combination of i, ii, and iii. In another embodiment, the method further comprises receiving initial sensor data. The initial sensor data is associated with the face of the user before the interface is donned. Identifying the characteristic includes comparing the initial sensor data with the sensor data. In another embodiment, the characteristic comprises: i) a local temperature on the face of the user; ii) a local temperature change on the face of the user; iii) a local color on the face of the user; iv) a change in local color on the face of the user; v) a local contour on the face of the user; vi) a change in local contour on the face of the user; vii) a local contour on the interface; viii) a local change on the interface; ix) a local temperature at the interface; or x) any combination of i through ix. In another embodiment, the characteristic comprises: i) a vertical position of the interface relative to one or more features of the user's face; ii) a horizontal position of the interface relative to one or more features of the user's face; iii) a rotational orientation of the interface relative to one or more features of the user's face; iv) a distance between an identified feature of the interface and one or more features of the user's face; or v) any combination of i to iv. In another embodiment, the one or more sensors include one or more orientation sensors, and the sensor data includes orientation sensor data from the one or more orientation sensors.
Receiving the sensor data includes: scanning the face of the user using the mobile device while the mobile device is oriented such that the one or more orientation sensors are oriented toward the face of the user; and tracking progress of the scan of the face. A non-visible stimulus is generated that indicates the progress of the scan of the face. In another embodiment, the method further comprises receiving motion data associated with movement of the mobile device; and applying the motion data to the sensor data to account for movement of the mobile device relative to the user's face. In another embodiment, the method further comprises generating an interface map using the sensor data. The interface map indicates a relative position of one or more features of the interface with respect to the facial map. Identifying the characteristic includes using the interface map. In another embodiment, the one or more sensors include a camera and the sensor data includes camera data. The method also includes receiving sensor calibration data collected by the camera while the camera is oriented toward a calibration surface; and calibrating the camera data of the sensor data based on the sensor calibration data. In another embodiment, the method further includes identifying an additional characteristic associated with a possible future failure of the interface using the sensor data and the facial map. The method further includes generating output feedback based on the identified additional characteristic. The output feedback may be used to reduce the likelihood that the possible future failure will occur, or to delay its occurrence. In another embodiment, the method further includes accessing historical sensor data associated with one or more historical fittings of the interface on the user's face prior to receiving the sensor data. Identifying the characteristic also uses the historical sensor data. In another embodiment, the method further includes generating a current suitability score using the sensor data and the facial map. Output feedback is generated to improve a subsequent suitability score. In another embodiment, receiving the sensor data includes receiving audio data from one or more audio sensors. Identifying the characteristic includes identifying unintentional leakage using the audio data. In another embodiment, the one or more sensors include a camera, an audio sensor, and a thermal sensor. The sensor data includes camera data, audio data, and thermal data. Identifying the characteristic includes: identifying a potential characteristic using at least one of the camera data, the audio data, and the thermal data; and confirming the potential characteristic using at least one of the camera data, the audio data, and the thermal data. In another embodiment, the method further comprises presenting user instructions indicating an action to be performed by the user. The method also includes receiving a completion signal indicating that the user has performed the action. A first portion of the sensor data is collected before the completion signal is received and a second portion of the sensor data is collected after the completion signal is received. Identifying the characteristic includes comparing the first portion of the sensor data with the second portion of the sensor data.
In another embodiment, generating the facial map includes: identifying a first individual and a second individual using the received sensor data; identifying a first individual as being associated with an interface; and generating a facial map for the first individual. In another embodiment, receiving sensor data includes determining adjustment data based on the received sensor data. The adjustment data is associated with: i) Movement of the mobile device, ii) inherent noise of the mobile device, iii) breathing noise of the user, iv) speaking noise of the user, v) changes in ambient lighting, vi) detected transient shadows cast on the user's face, vii) detected transient colored light cast on the user's face, or viii) any combination of i through vii. Receiving sensor data further includes applying an adjustment to at least some of the received sensor data based on the adjustment data. In another embodiment, receiving the sensor data includes: receiving image data associated with a camera of the one or more sensors, the camera operating in the visible spectrum; receiving unstable data associated with an additional sensor of the one or more sensors, the additional sensor being a ranging sensor or an image sensor operating outside the visible spectrum; determining image stability information associated with stability of the image data; and stabilizing the unstable data using image stability information associated with stability of the image data. In another embodiment, output feedback may be used to improve the current suitability. In another embodiment, the method further includes generating an initial score based on the current suitability using the sensor data. The method also includes receiving subsequent sensor data associated with a subsequent fit of the interface on the face of the user. The subsequent suitability is based on the current suitability after the output feedback is implemented. The method further includes generating a subsequent score based on the subsequent suitability using the subsequent sensor data; and evaluating a subsequent score, the subsequent score indicating a quality improvement relative to the initial score. In another embodiment, identifying the characteristic associated with the current suitability includes: determining a breathing pattern of the user based on the received sensor data; determining a thermal model associated with the user's face based on the received sensor data and the facial map; and determining a leakage characteristic using the breathing pattern and the thermal pattern. The leakage characteristic indicates a balance between intentional vent leakage and unintentional seal leakage. In another embodiment, the one or more sensors comprise at least two sensors selected from the group consisting of: i) A passive thermal sensor; ii) an active thermal sensor; iii) A camera; iv) an accelerometer; v) a gyroscope; vi) an electronic compass; vii) a magnetometer; viii) a pressure sensor; ix) a microphone; x) a temperature sensor; xi) a proximity sensor; xii) an infrared-based lattice sensor; xiii) LiDAR sensor; xiv) MEMS micromirror projector based sensor; xv) a radio frequency based ranging sensor; and xvi) a wireless network interface. In another embodiment, the method includes receiving additional sensor data from one or more additional sensors of the respiratory device. Identifying the characteristic includes using the additional sensor data. 
In another embodiment, the method further comprises transmitting a control signal that, when received by the respiratory device, causes the respiratory device to operate using the set of defined parameters. A first portion of the sensor data is collected when the respiratory device is operating using the set of defined parameters, and a second portion of the sensor data is collected when the respiratory device is not operating using the set of defined parameters. Identifying the characteristic includes comparing the first portion of the sensor data with the second portion of the sensor data.
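Among the embodiments above is one that derives a leakage characteristic from a breathing pattern and a thermal model of the face, distinguishing intentional vent flow from unintentional seal leak. The sketch below shows one assumed way to express that fusion: thermal pixels that vary in phase with exhalation but lie outside the known vent region are flagged as suspected seal leak. The correlation threshold, array layout, and field names are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def leak_characteristic(thermal_frames: np.ndarray, breath_signal: np.ndarray,
                        vent_mask: np.ndarray, corr_threshold: float = 0.6) -> dict:
    """Flag thermal pixels that vary in phase with exhalation but lie outside
    the known vent region -- a possible unintentional seal leak.

    thermal_frames: (T, H, W) thermal images registered to the facial map
    breath_signal:  (T,) airflow or chest-movement signal (positive = exhalation)
    vent_mask:      (H, W) boolean mask marking the intentional vent location
    """
    t, h, w = thermal_frames.shape
    pixels = thermal_frames.reshape(t, -1)
    pixels = pixels - pixels.mean(axis=0)                      # remove static temperature offsets
    breath = (breath_signal - breath_signal.mean()) / (breath_signal.std() + 1e-9)
    corr = (pixels * breath[:, None]).mean(axis=0) / (pixels.std(axis=0) + 1e-9)
    in_phase = corr.reshape(h, w) > corr_threshold             # pixels pulsing with exhalation
    leak_map = in_phase & ~vent_mask                           # exclude the intentional vent
    return {
        "unintentional_leak_fraction": float(leak_map.mean()),
        "intentional_vent_active": bool(in_phase[vent_mask].any()),
        "leak_map": leak_map,
    }
```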
Fig. 1 shows a system comprising a user 10 wearing a user interface 100 in the form of a Full Face Mask (FFM), the user 10 receiving a supply of positive pressure air from a respiratory therapy device, such as a Positive Airway Pressure (PAP) device, and in particular a Respiratory Pressure Therapy (RPT) device 40. Air from the RPT device 40 is humidified in a humidifier 60 and flows along the air circuit 50 to the user 10.
In this example, the respiratory therapy devices described herein may include any respiratory therapy device configured to provide one or more of Positive Airway Pressure (PAP), non-invasive ventilation (NIV), or invasive ventilation. In this example, the PAP device may be a Continuous Positive Airway Pressure (CPAP) device, an automatic positive airway pressure device (APAP), a bi-level or variable positive airway pressure device (BPAP or VPAP), or any combination thereof. The CPAP device delivers a predetermined air pressure to the user (e.g., as determined by a sleeping physician). The APAP device automatically changes the air pressure delivered to the user based on, for example, respiratory data associated with the user. The BPAP or VPAP device is configured to deliver a first predetermined pressure (e.g., inspiratory positive airway pressure or IPAP) and a second predetermined pressure (e.g., expiratory positive airway pressure or EPAP) that is lower than the first predetermined pressure.
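The three device families described above differ mainly in how the commanded pressure is chosen. The sketch below condenses that distinction; the numeric pressures and the simplistic event-count adjustment for APAP are illustrative assumptions only, not the control algorithm of any actual device.

```python
def commanded_pressure(mode: str, phase: str, prescribed_cmh2o: float = 10.0,
                       ipap_cmh2o: float = 14.0, epap_cmh2o: float = 8.0,
                       recent_obstructive_events: int = 0) -> float:
    """Return a target pressure (cmH2O) for one breath phase.

    CPAP - a single fixed, prescribed pressure.
    APAP - pressure adjusted automatically from respiratory data.
    BPAP - a higher pressure on inhalation (IPAP), a lower one on exhalation (EPAP).
    All numeric values here are illustrative, not clinical settings.
    """
    if mode == "CPAP":
        return prescribed_cmh2o
    if mode == "APAP":
        # toy auto-adjustment: raise pressure slightly when obstructive events are detected
        return min(20.0, prescribed_cmh2o + 0.5 * recent_obstructive_events)
    if mode == "BPAP":
        return ipap_cmh2o if phase == "inhale" else epap_cmh2o
    raise ValueError(f"unknown mode: {mode}")
```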
Fig. 2 depicts a user interface 100 in accordance with an aspect of the present technique, the user interface 100 including the following functional aspects: a seal-forming structure 160, a plenum 120, a positioning and stabilizing structure 130, a vent 140, a forehead support 150, and one form of connection port 170 for connection to the air circuit 50 in Fig. 1. In some forms, a functional aspect may be provided by one or more physical components. In some forms, one physical component may provide one or more functional aspects. In use, the seal-forming structure 160 is arranged to surround an entrance to the airway of the user in order to facilitate the supply of positive pressure air to the airway.
In one form of the present technique, the seal-forming structure 160 provides a seal-forming surface and may additionally provide a cushioning function. The seal-forming structure 160 according to the present technology may be constructed of a soft, flexible, and resilient material such as silicone. In one form, the seal-forming portion of the non-invasive user interface 100 includes a pair of nasal puffs or pillows, each of which is constructed and arranged to form a seal with a corresponding nostril of the user's nose.
A nasal pillow in accordance with the present technology includes: a frustoconical body, at least a portion of which forms a seal on the underside of the user's nose; a stem; and a flexible region on the underside of the frustoconical body that connects the frustoconical body to the stem. In addition, the structure to which the nasal pillow of the present technology is attached includes a flexible region adjacent the base of the stem. The flexible regions can act in concert to facilitate a universal-joint structure that accommodates relative movement, both in displacement and in angle, between the frustoconical body and the structure to which the nasal pillow is connected. For example, the frustoconical body may be moved axially towards the structure to which the stem is connected.
In one form, the non-invasive user interface 100 includes a seal-forming portion that, in use, forms a seal on an upper lip region (that is, the lip superior) of the user's face. In one form, the non-invasive user interface 100 includes a seal-forming portion that, in use, forms a seal on a chin region of the user's face.
Preferably, the plenum 120 has a perimeter that is shaped to complement the surface contour of an average human face in the region where the seal will be formed in use. In use, the boundary edge of the plenum 120 is positioned in close proximity to the adjacent surface of the face. Actual contact with the face is provided by the seal-forming structure 160. The seal-forming structure 160 may extend around the entire perimeter of the plenum 120 in use.
Preferably the seal-forming structure 160 of the user interface 100 of the present technology is maintained in a sealed position by the positioning and stabilizing structure 130 when in use.
In one form, the user interface 100 includes a vent 140 constructed and arranged to allow flushing of exhaled carbon dioxide. One form of vent 140 in accordance with the present technique includes a plurality of holes, for example, about 20 to about 80 holes, or about 40 to about 60 holes, or about 45 to about 55 holes.
Fig. 3A shows a front view of a person's face, including the inner canthus, the nasal alae, the nasolabial folds, the upper and lower lips, and the corners of the mouth. Also shown are the mouth width, the sagittal plane dividing the head into left and right halves, and an orientation indicator. The orientation indicator indicates the inward/outward and up/down directions. Fig. 3B shows a side view of a person's head, including the point between the eyebrows, the nose bridge point, the nasal ridge, the nose tip point, the subnasal point, the upper and lower lips, the chin superior point, the nose alar ridge point, and the supra-aural and sub-aural points. An orientation indicator indicating the up/down and front/back directions is also shown. Fig. 3C shows a bottom view of a nose with several features identified, including the nasolabial folds, the lower lip, the upper lip, the nostrils, the subnasal point, the columella, the nose tip point, the long axis of a nostril, and the sagittal plane.
Features of the human face shown in fig. 3A to 3C are described in more detail below.
Nose wing (ala): The outer wall or "wing" of each nostril (plural: alae).
Nose wing end: The outermost point on each nose wing.
Nose alar curve (or nose alar ridge) point: The rearmost point in the curved baseline of each nose wing, found in the crease formed where the nose wing joins the cheek.
Auricle: the entire outer visible portion of the ear.
Columella (nose post): The strip of skin that separates the nostrils and extends from the nose tip point to the upper lip.
Nose columella angle: the angle between a line drawn through the midpoint of the nostril and a line drawn perpendicular to the Frankfort (Frankfort) plane (with both lines intersecting at the point under the nose).
Point between eyebrows (glabella): Located on the soft tissue, the most prominent point in the midsagittal plane of the forehead.
Nostrils (nares): Approximately ellipsoidal apertures forming the entrance to the nasal cavity. The singular form of nares is naris. The nares are separated by the nasal septum.
Nasolabial folds or folds: extending from each side of the nose to the corners of the mouth, skin folds or furrows separating the cheeks from the upper lip.
Nose lip angle: the angle between the columella and the upper lip (while intersecting at the subnasal point).
Sub-aural point: The lowest point of attachment of the auricle to the skin of the face.
Supra-aural point: The highest point of attachment of the auricle to the skin of the face.
Nose tip point (pronasale): The most protruded point or tip of the nose, which can be identified in a side view of the rest of the head.
Philtrum: The midline groove extending from the lower border of the nasal septum to the top of the lip in the upper lip region.
Anterior chin point: is located on the soft tissue, the foremost midpoint of the chin.
Ridge (nasal): The nasal ridge is the midline prominence of the nose, extending from the nose bridge point to the nose tip point.
Sagittal plane: a vertical plane passing from front (anterior) to back (posterior) dividing the body into right and left halves.
Nose bridge point (sellion): Located on the soft tissue, the most concave point overlying the area of the frontonasal suture.
Septal cartilage (nose): the septum cartilage forms part of the septum and separates the anterior portion of the nasal cavity.
Alar base point (subalare): The point at the lower margin of the alar base, where the alar base joins the skin of the upper lip.
Subnasal point: is positioned on the soft tissue, and is positioned at the junction of the nose columella and the upper lip in the median sagittal plane.
Chin superior point (supramenton): The point of greatest concavity in the midline of the lower lip, between the midpoint of the lower lip and the soft-tissue anterior chin point.
As will be explained below, there are several key dimensions of the face that may be used to select the sizing of a user interface such as the mask 100 in Fig. 1. In this example, there are three dimensions: face height, nose width, and nose depth. Figs. 3A and 3B show a line 3010 indicating the face height. As can be seen in Fig. 3A, the face height is the distance from the nose bridge point to the chin superior point. Line 3020 in Fig. 3A represents the nose width, which may be the distance between the nose wing ends (e.g., the leftmost and rightmost points on the nose wings). Line 3030 in Fig. 3B represents the nose depth, which may be the distance between the nose tip point and the nose alar ridge point in a direction parallel to the sagittal plane.
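Given 3D landmark coordinates from a facial scan, the three sizing dimensions reduce to simple vector arithmetic: face height is the nose-bridge-point-to-chin-superior-point distance, nose width the distance between the left and right nose wing ends, and nose depth the nose-tip-to-alar-ridge distance measured parallel to the sagittal plane. The sketch below assumes landmarks are supplied in millimetres in a head-aligned frame whose x axis is normal to the sagittal plane; that frame convention and the landmark key names are assumptions for illustration.

```python
import numpy as np

def facial_dimensions(landmarks: dict) -> dict:
    """Compute the three mask-sizing dimensions (mm) from 3D facial landmarks.

    Expected keys (assumed names): 'nose_bridge_point', 'chin_superior_point',
    'nose_wing_end_left', 'nose_wing_end_right', 'nose_tip_point',
    'alar_ridge_point'. Coordinates are (x, y, z) with x normal to the
    sagittal plane (an assumed convention).
    """
    p = {k: np.asarray(v, dtype=float) for k, v in landmarks.items()}
    face_height = np.linalg.norm(p["chin_superior_point"] - p["nose_bridge_point"])
    nose_width = np.linalg.norm(p["nose_wing_end_right"] - p["nose_wing_end_left"])
    # Nose depth: the nose-tip-to-alar-ridge vector measured in a direction
    # parallel to the sagittal plane, i.e. with the lateral (x) component dropped.
    v = p["nose_tip_point"] - p["alar_ridge_point"]
    nose_depth = np.linalg.norm(v[1:])
    return {"face_height_mm": float(face_height),
            "nose_width_mm": float(nose_width),
            "nose_depth_mm": float(nose_depth)}
```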
Fig. 4A illustrates an enlarged view of components of an example RPT device 40 in accordance with an aspect of the present technique, the example RPT device 40 including mechanical, pneumatic, and/or electrical components and configured to execute one or more algorithms, such as any of the methods described in whole or in part herein. Fig. 4B shows a block diagram of an example RPT device 40. Fig. 4C shows a block diagram of the electrical control components of an example RPT device 40. The upstream and downstream directions are indicated with reference to the blower and user interface. The blower is defined upstream of the user interface and the user interface is defined downstream of the blower, regardless of the actual flow direction at any particular moment. An object located in the pneumatic path between the blower and the user interface is located downstream of the blower and upstream of the user interface. RPT device 40 may be configured to generate an airflow for delivery to the airway of a user, such as to treat one or more respiratory disorders.
The RPT device 40 may have an external housing 4010, the external housing 4010 being formed in two parts: an upper portion 4012 and a lower portion 4014. Further, the outer housing 4010 can include one or more panels 4015. The RPT device 40 includes a chassis 4016, the chassis 4016 supporting one or more internal components of the RPT device 40. The RPT device 40 may include a handle 4018.
The pneumatic path of RPT device 40 may include one or more air path items such as inlet air filter 4112, inlet muffler 4122, pressure generator 4140 (e.g., blower 4142) capable of supplying positive pressure air, outlet muffler 4124, and one or more transducers 4270, such as pressure sensor 4272, flow sensor 4274, and motor speed sensor 4276.
One or more of the air path items may be located within a removable unitary structure referred to as a pneumatic block 4020. The pneumatic block 4020 may be located within the outer housing 4010. In one form, the pneumatic block 4020 is supported by the chassis 4016 or formed as part of the chassis 4016.
RPT device 40 may have a power supply 4210, one or more input devices 4220, a central controller 4230, a pressure generator 4140, a data communication interface 4280, and one or more output devices 4290. A separate controller may be provided for the treatment apparatus. The electrical component 4200 may be mounted on a single Printed Circuit Board Assembly (PCBA) 4202. In an alternative form, the RPT device 40 may include more than one PCBA 4202. Other components, such as one or more protection circuits 4250, a transducer 4270, a data communication interface 4280, and a storage device, may also be mounted on PCBA 4202.
The RPT device may include one or more of the following components in the overall unit. In one alternative, one or more of the following components may be located as respective individual units.
An RPT device in accordance with one form of the present technique may include one air filter 4110, or a plurality of air filters 4110. In one form, an inlet air filter 4112 is located at the beginning of the pneumatic path, upstream of the pressure generator 4140. In one form, an outlet air filter 4114, such as an antimicrobial filter, is located between the outlet of the pneumatic block 4020 and the user interface 100.
An RPT device in accordance with one form of the present technique may include one muffler 4120, or a plurality of mufflers 4120. In one form of the present technique, the inlet muffler 4122 is located in the pneumatic path upstream of the pressure generator 4140. In one form of the present technique, the outlet muffler 4124 is located in the pneumatic path between the pressure generator 4140 and the user interface 100 in fig. 1.
In one form of the present technique, the pressure generator 4140 for generating a flow or supply of air at positive pressure is a controllable blower 4142. For example, the blower 4142 may include a brushless DC motor 4144 having one or more impellers. The impellers may be located in a volute. The blower may, for example, deliver a supply of air at a rate of up to about 120 liters/minute and at a positive pressure ranging from about 4 cmH2O to about 20 cmH2O, or in other forms up to about 30 cmH2O. The blower may be as described in, for example: U.S. Patent No. 7,866,944; U.S. Patent No. 8,638,014; U.S. Patent No. 8,636,479; and PCT Patent Application Publication No. WO 2013/020167, the contents of which are incorporated herein by reference in their entirety.
The pressure generator 4140 is under the control of the therapy device controller 4240. In other forms, the pressure generator 4140 may be a piston-driven pump, a pressure regulator connected to a high-pressure source (e.g., a compressed air reservoir), or a bellows.
The air circuit 4170 in accordance with one aspect of the present technique is a conduit or tube that is constructed and arranged to allow pressurized air flow to travel between two components, such as the humidifier 60 and the user interface 100, when in use. Specifically, the air circuit 4170 may be in fluid communication with an outlet of the humidifier 60 and the plenum 120 of the user interface 100.
In one form of the present technique, an anti-spillback valve 4160 is located between the humidifier 60 and the pneumatic block 4020. The anti-spillback valve is constructed and arranged to reduce the risk that water will flow upstream from the humidifier 60 to, for example, the motor 4144.
The power supply 4210 may be located inside or outside the external housing 4010 of the RPT device 40. In one form of the present technique, the power supply 4210 provides power only to the RPT device 40. In another form of the present technique, the power supply 4210 provides power to both the RPT device 40 and the humidifier 60.
The RT system may include one or more transducers (sensors) 4270 configured to measure one or more of any number of parameters related to the RT system, its user, and/or its environment. The transducer may be configured to generate an output signal representative of one or more parameters that the transducer is configured to measure.
The output signal may be one or more of an electrical signal, a magnetic signal, a mechanical signal, a visual signal, an optical signal, an acoustic signal, or any number of other signals known in the art.
The transducer may be integrated with another component of the RT system, with one exemplary arrangement being that the transducer is internal to the RPT device. The transducer may be essentially a 'stand-alone' component of the RT system, with an exemplary arrangement of the transducer being that the transducer is external to the RPT device.
The transducer may be configured to transmit its output signal to one or more components of the RT system, such as the RPT device, a local external device, or a remote external device. An external transducer may be located, for example, on the user interface or in an external computing device such as a smartphone. An external transducer may also be located on, or form part of, the air circuit, for example.
Fig. 4D illustrates a system 4300 in accordance with some embodiments of the present disclosure. The system 4300 includes a control system 4310, the control system 4310 comprising a processor 4312, a memory device 4314, an electronic interface 4316, and one or more sensors 4270. In some embodiments, the system 4300 also optionally includes a respiratory therapy system 4320, which may be the system of Fig. 1, including an RPT device 40, a user device such as a mobile device 234, and an activity tracker 4322. The device 234 may include a display 4344. In some cases, some or most of the system 4300 may be implemented in the mobile device 234, in the RPT device 40, or in other external devices, such as an external computing device. As explained above, the respiratory therapy system 4320 includes the RPT device 40, the user interface 100, a conduit 4326, a display such as the output device 4290, and the humidifier 60.
The one or more sensors or transducers 4270 may be constructed and arranged to generate signals representative of air characteristics such as flow rate, pressure, or temperature. The air may be an air flow from the RPT device to the user, an air flow from the user to the atmosphere, ambient air or any other air. These signals may be representative of characteristics of the air flow at a particular point, such as the air flow in the pneumatic path between the RPT device and the user. In one form of the present technique, one or more transducers 4270 are located in the pneumatic path of the RPT device, such as downstream of the humidifier 60.
In accordance with one aspect of the present technique, the one or more transducers 4270 comprise pressure sensors positioned in fluid communication with the pneumatic path. An example of a suitable pressure sensor is a transducer from the HONEYWELL (r) ASDX series. An alternative suitable pressure sensor is the NPA series transducer from general electric company (GENERAL ELECTRIC). In one embodiment, the pressure sensor is located in the air circuit 4170 adjacent to the outlet of the humidifier 60.
The acoustic/audio sensor 4278 (such as a microphone pressure sensor 4278) is configured to generate an acoustic signal representative of a pressure change within the air circuit 4170. "Acoustic sensor" and "audio sensor" are interchangeable terms and refer to a sensor that can detect sounds that are audible and/or inaudible to humans. The sound signals from the audio sensor 4278 may be received by the central controller 4230 for acoustic processing and analysis as configured by one or more algorithms described below. The audio sensor 4278 may be directly exposed to the air path to be more sensitive to sound, or may be encapsulated behind a thin layer of flexible film material. The film may be used to protect the audio sensor 4278 from heat and/or humidity. Alternatively, the audio sensor 4278 may be coupled to or integrated in the RPT device 40, the user interface 100, the conduit, or an external user device.
The audio data generated by the audio sensor 4278 may be reproduced as one or more sounds (e.g., sound from the user 10). The audio data from the audio sensor 4278 may also be used to identify (e.g., using the central controller 4230) or confirm characteristics associated with the user interface, such as the sound of air escaping from the valve.
The speaker may output sound waves audible to a user, such as user 10. The speaker may be used, for example, to provide audio feedback, such as to indicate how to maneuver the RPT device 40 or another device to obtain desired sensor data, or to indicate when collection of sensor data is sufficiently complete. In some implementations, the speaker may be used to communicate audio data generated by the audio sensor 4278 to a user. The speaker may be coupled to or integrated in the RPT device 40, the user interface 100, the catheter or an external device.
The audio sensor 4278 and the speaker may be used as separate devices. In some embodiments, the audio sensor 4278 and speaker may be combined into an acoustic sensor (e.g., a SONAR sensor), as described, for example, in WO 2018/050913 and WO 2020/104465, each of which is incorporated herein by reference in its entirety. In such an embodiment, the speaker generates or emits sound waves at predetermined intervals, and the audio sensor 4278 detects reflection of the emitted sound waves from the speaker. The sound waves generated or emitted by the speaker have frequencies that are not audible to the human ear (e.g., below 20Hz or above about 18 kHz) so as not to interfere with the user 10. Based at least in part on data from the audio sensor 4278 and/or speaker, the control system may determine location information about the user and/or the user interface 100 (e.g., a location of the user's face, a location of a feature on the user's face, a location of the user interface 100, a location of a feature on the user interface 100), physiological parameters (e.g., respiratory rate), etc. In such a context, sonar sensors may be understood as referring to active acoustic sensing, such as by generating and/or transmitting ultrasound and/or low frequency ultrasound sensing signals (e.g., in a frequency range of, for example, about 17-23kHz, 18-22kHz, or 17-18 kHz) through air. Such a system may be considered with respect to WO 2018/050913 and WO 2020/104465 mentioned above, each of which is incorporated herein by reference in its entirety. In some implementations, additional microphone pressure sensors may be used.
Data from transducers 4270, such as pressure sensor 4272, flow sensor 4274, motor speed sensor 4276, and audio sensor 4278, may be periodically collected by central controller 4230. Such data typically relates to the operational state of RPT device 40. In this example, the central controller 4230 encodes such data from the sensors in a proprietary data format. The data may also be encoded in a standardized data format.
The one or more sensors or transducers 4270 may also include temperature sensors 4330, motion sensors 4332, speakers 4334, radio Frequency (RF) receivers 4336, RF transmitters 4338, cameras 4340, infrared sensors 4342 (e.g., passive infrared sensors or active infrared sensors), photoplethysmogram (PPG) sensors 4344, electrocardiogram (ECG) sensors 4346, electroencephalogram (EEG) sensors 4348, capacitive sensors 4350, force sensors 4352, strain gauge sensors 4354, electromyogram (EMG) sensors 4356, oxygen sensors 4358, analyte sensors 4360, moisture sensors 4362, liDAR sensors 4364, or any combination thereof. Typically, each of the one or more sensors or transducers 4270 is configured to output sensor data that is received and stored in a memory device.
Although the one or more transducers 4270 are shown and described as including the pressure sensor 4272, the flow sensor 4274, the motor speed sensor 4276, and the audio sensor 4278, the one or more transducers 4270 may include any combination and any number of the sensors described and/or shown herein.
The one or more sensors 4270 may be used to generate sensor data, such as image data, audio data, ranging data, profile map data, thermal data, physiological data, environmental data, and the like. The sensor data may be used by the control system to determine the user interface and identify characteristics associated with the current suitability of the user interface 100.
Example pressure sensor 4272 is an air pressure sensor (e.g., an atmospheric pressure sensor) that generates sensor data indicative of the respiration (e.g., inhalation and/or exhalation) and/or ambient pressure of a user of the respiratory therapy system of fig. 1. In such embodiments, the pressure sensor 4272 may be coupled to the RPT device 40 or integrated into the RPT device 40. The pressure sensor 4272 may be, for example, a capacitive sensor, an electromagnetic sensor, a piezoelectric sensor, a strain gauge sensor, an optical sensor, a potentiometric sensor, or any combination thereof.
Examples of flow sensors (e.g., flow sensor 4274) are described in WO 2012/012835, which is herein incorporated by reference in its entirety. In some embodiments, the flow sensor 4274 is used to determine the air flow rate from the RPT device 40, the air flow rate through the conduit of the air circuit 4170, the air flow rate through the user interface 100, or any combination thereof. In such embodiments, the flow sensor 4274 may be coupled to the RPT device 40, the user interface 100, or a conduit attaching the user interface 100 to the RPT device 40, or integrated in the RPT device 40, the user interface 100, or a conduit attaching the user interface 100 to the RPT device 40. The flow sensor 4274 may be a mass flow sensor such as a rotary flow meter (e.g., hall effect flow meter), a turbine flow meter, an orifice plate flow meter, an ultrasonic flow meter, a hot wire sensor, a vortex sensor, a membrane sensor, or any combination thereof. In some implementations, the flow sensor 4274 is configured to measure vent flow (e.g., intentional "leakage"), unintentional leakage (e.g., mouth leakage and/or mask leakage), user flow (e.g., air into and/or out of the lungs), or any combination thereof. In some embodiments, the flow rate data may be analyzed to determine the cardiogenic oscillations of the user.
In some implementations, the temperature sensor 4330 generates temperature data indicative of: the core body temperature of the user 10, the local or average skin temperature of the user, the local or average temperature of the air flowing from the RPT device 40 and/or through the catheter, the local or average temperature in the user interface 100, the ambient temperature, or any combination thereof. The temperature sensor 4330 may be, for example, a thermocouple sensor, a thermistor sensor, a silicon bandgap temperature sensor, or a semiconductor-based sensor, a resistive temperature detector, or any combination thereof. In some cases, the temperature sensor is a non-contact temperature sensor, such as an infrared pyrometer.
The RF transmitter 4338 generates and/or transmits radio waves having a predetermined frequency and/or a predetermined amplitude (e.g., in a high frequency band, in a low frequency band, a long wave signal, a short wave signal, etc.). The RF receiver 4336 detects reflections of the radio waves transmitted from the RF transmitter 4338, and this data may be analyzed by the control system to determine positional information about the user 10 and/or the user interface 100, and/or one or more of the physiological parameters described herein. An RF receiver and RF transmitter (or another RF pair) may also be used for wireless communication between the external components and the RPT device 40, or any combination thereof. The RF receiver 4336 and RF transmitter 4338 may be combined as part of an RF sensor (e.g., a RADAR sensor). In some such embodiments, the RF sensor includes control circuitry. The particular format of the RF communication may be WiFi, Bluetooth, etc.
In some embodiments, the RF sensor is part of a mesh system. One example of a mesh system is a WiFi mesh system, which may include mesh nodes, mesh routers, and mesh gateways, each of which may be mobile/movable or fixed. In such embodiments, the WiFi mesh system includes a WiFi router and/or WiFi controller and one or more satellites (e.g., access points), each of which includes the same or similar RF sensors as the RF sensor described above. The WiFi router and satellites continuously communicate with each other using WiFi signals. The WiFi mesh system may be used to generate motion data based on changes in the WiFi signals (e.g., differences in received signal strength) between the router and the satellites caused by a moving object or person partially blocking the signals. The motion data may be indicative of motion, respiration, heart rate, gait, falls, behavior, or the like, or any combination thereof.
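By way of illustration only, the following Python sketch flags movement from the variability of received signal strength (RSSI) samples taken between a mesh router and one satellite. The window length, threshold, and sampling scheme are assumptions for the sketch, not values taken from the present disclosure.

```python
import numpy as np

def detect_motion(rssi_dbm: np.ndarray, window: int = 20, threshold_db: float = 2.0) -> np.ndarray:
    """Return a boolean flag per window indicating likely motion.

    rssi_dbm: 1-D array of received signal strength samples (dBm) between a
    mesh router and one satellite. A still room yields nearly constant RSSI;
    a person moving through the propagation path modulates it.
    """
    n_windows = len(rssi_dbm) // window
    flags = np.empty(n_windows, dtype=bool)
    for i in range(n_windows):
        chunk = rssi_dbm[i * window:(i + 1) * window]
        flags[i] = np.std(chunk) > threshold_db  # high variability suggests motion
    return flags
```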
The image data from the camera 4340 may be used by the control system to determine information associated with the user's face, the user interface 100, and/or one or more of the physiological parameters described herein. For example, image data from the camera 4340 may be used to identify the position of the user, the local color of a portion of the user's face, the relative position of features on the user interface to features on the user's face, and so on. In some embodiments, the camera includes a wide angle lens or a fisheye lens. The camera may be a camera operating in the visible spectrum, such as a wavelength between 380nm or about 380nm and 740nm or about 740 nm.
The IR sensor 4342 may be a passive sensor or an active sensor. A passive IR sensor may measure natural infrared emissions or reflections from a remote surface, such as measuring the IR energy radiated from the surface to determine the temperature of the surface. An active IR sensor may include an IR transmitter that generates an IR signal that is then received by an IR receiver. Such active IR sensors may be used to measure IR reflections off of an object and/or IR transmission through an object. For example, an IR emitter acting as a dot projector may use IR light to project an identifiable array of dots onto the user's face, and the reflections may then be detected by an IR receiver to determine ranging data (e.g., data associated with the distance between the IR sensor 4342 and a remote surface, such as a portion of the user's face) or profile data associated with the user's face (e.g., data associated with the height of points on the surface relative to a nominal surface height).
In general, infrared data from the IR sensor 4342 may be used to determine information about the user 10 and/or the interface 100, and/or one or more of the physiological parameters described herein. In one example, infrared data from an IR sensor may be used to detect a local temperature on a portion of the user's face or a portion of the interface 100. The IR sensor 4342 may also be used in conjunction with a camera, such as associating IR data (e.g., temperature data or ranging data) with camera data (e.g., local color). The IR sensor 4342 may detect infrared light having a wavelength between 700nm or about 700nm and 1mm or about 1 mm.
The PPG sensor 4344 outputs physiological data associated with the user 10, which may be used to determine one or more sleep related parameters, such as heart rate, heart rate variability, cardiac cycle, respiratory rate, inhalation amplitude, exhalation amplitude, inhalation-to-exhalation ratio, estimated blood pressure parameters, or any combination thereof. The PPG sensor 4344 may be worn by the user, embedded in clothing and/or fabric worn by the user, embedded in and/or coupled to the interface 100 and/or its associated head-mounted device (e.g., strap, etc.), etc.
ECG sensor 4346 outputs physiological data associated with the electrical activity of the heart of user 10. In some implementations, the ECG sensor 4346 includes one or more electrodes positioned on or around a portion of the user 10 during the sleep period. The physiological data from the ECG sensor 4346 may be used, for example, to determine one or more of the sleep related parameters described herein.
The EEG sensor 4348 outputs physiological data associated with the electrical activity of the brain of the user 10. In some implementations, the EEG sensor 4348 includes one or more electrodes positioned on or around the scalp of the user 10 during the sleep period. The physiological data from the EEG sensors 4348 can be used to determine the sleep state and/or sleep stage of the user 10 at any given time during a sleep session, for example. In some implementations, the EEG sensor 4348 can be integrated in the interface 100 and/or associated head-mounted device (e.g., strap, etc.).
The EMG sensors 4356 output physiological data associated with electrical activity produced by one or more muscles. The oxygen sensor 4358 outputs oxygen data indicative of the oxygen concentration of the gas (e.g., in a conduit or at the interface 100). The oxygen sensor 4358 may be, for example, an ultrasonic oxygen sensor, an electrical oxygen sensor, a chemical oxygen sensor, a photo oxygen sensor, a pulse oximeter (e.g., spO2 sensor), or any combination thereof. In some embodiments, the one or more sensors 4270 further comprise a Galvanic Skin Response (GSR) sensor, a blood flow sensor, a respiration sensor, a pulse sensor, a blood pressure meter sensor, an oximetry sensor, or any combination thereof.
Analyte sensor 4360 may be used to detect the presence of an analyte in an exhalation, such as user 10. The data output by analyte sensor 4360 may be used by the control system to determine the identity and concentration of any analyte, such as in the breath of user 10. In some embodiments, an analyte sensor is positioned near the mouth of user 10 to detect an analyte in breath exhaled from the mouth of user 10. For example, when interface 100 is a mask covering the nose and mouth of user 10, analyte sensor 4360 may be positioned within the mask to monitor the mouth breathing of user 10. In other embodiments, such as when the interface 100 is a nasal mask or a nasal pillow mask, an analyte sensor may be positioned near the nose of the user 10 to detect an analyte in breath exhaled through the nose. In other embodiments, when the interface 100 is a nasal mask or nasal pillow mask, the analyte sensor may be positioned near the mouth. In this embodiment, analyte sensor 4360 may be used to detect whether any air has been inadvertently leaked from the mouth of user 10. In some embodiments, analyte sensor 4360 is a Volatile Organic Compound (VOC) sensor that can be used to detect carbon-based chemicals or compounds. In some embodiments, analyte sensor 4360 may also be used to detect whether user 10 breathes through his nose or mouth. For example, if the presence of an analyte is detected by data output by an analyte sensor positioned near the mouth or within the mask (in embodiments where the interface 100 is a mask), the control system may use that data as an indication that the user 10 is breathing through his mouth.
The moisture sensors 4362 may be used to detect moisture in various areas around the user 10 (e.g., inside the catheter or interface 100, near the user's 10 face, near the connection between the catheter and interface 100, near the connection between the catheter and RPT device 40, etc.). Thus, in some embodiments, a moisture sensor 4362 may be coupled to the interface 100 or integrated in the interface 100 or conduit to monitor the humidity of the pressurized air from the RPT device 40. In other embodiments, moisture sensor 4362 is placed near any area where monitoring moisture content is desired. The moisture sensor 4362 may also be used to monitor the humidity of the surrounding environment around the user 10, e.g., the air inside a bedroom.
A light detection and ranging (LiDAR) sensor 4364 may be used for depth sensing. This type of optical sensor (e.g., a laser sensor) may be used to detect objects and construct a three-dimensional (3D) map (e.g., a contour map) of an object, such as a user's face, the interface 100, or the surrounding environment (e.g., a living space). LiDAR typically utilizes pulsed lasers for time-of-flight measurements. LiDAR is also known as 3D laser scanning. In examples using such sensors, a fixed or mobile device (such as a smart phone) with a LiDAR sensor may measure and map an area that extends 5 meters or more away from the sensor. The LiDAR data may, for example, be fused with point cloud data estimated by an electromagnetic RADAR sensor. A LiDAR sensor may also use artificial intelligence (AI) to automatically establish geofences for a RADAR system by detecting and classifying features in a space that may cause problems for the RADAR system, such as glazing (which may be highly reflective to RADAR). LiDAR may also be used, for example, to provide an estimate of a person's height, as well as changes in height when the person sits down or falls. LiDAR may be used to form a 3D mesh representation of the user's face, the user interface 100 (e.g., when worn on the user's face), and/or the environment. Furthermore, for solid surfaces through which radio waves pass (e.g., radio-transparent materials), LiDAR may reflect off such surfaces, allowing different types of obstructions to be classified. Although LiDAR sensors are described herein, in some cases one or more other ranging sensors may be used in place of or in addition to a LiDAR sensor, such as ultrasonic ranging sensors, electromagnetic RADAR sensors, and the like.
Any or all of the above sensors may be located on external devices, such as mobile user devices or activity trackers, rather than on the RPT device 40, the conduit, or the interface 100. For example, the audio sensor 4278 and the speaker 4334 may be integrated in and/or coupled to the mobile device, while the pressure sensor 4272 and/or flow sensor 4274 are integrated in and/or coupled to the RPT device 40. In some embodiments, at least one of the one or more sensors may be positioned substantially adjacent to the user 10 during the sleep period (e.g., positioned on or in contact with a portion of the user 10, worn by the user 10, coupled to or resting on a bedside table, coupled to a mattress, coupled to a ceiling, etc.).
In one form of the present technology, RPT device 40 includes one or more input devices 4220 in the form of buttons, switches, or dials to allow a person to interact with the device. The buttons, switches, or dials may be physical devices, or software devices accessible via a touch screen. In one form, the buttons, switches, or dials may be physically connected to the external housing 4010, or in another form, may be in wireless communication with a receiver electrically connected to the central controller 4230. In one form, the input device 4220 may be constructed or arranged to allow a person to select values and/or menu options.
In one form of the present technology, the central controller 4230 is one or more processors adapted to control the RPT device 40. Suitable processors may include an x86 INTEL processor, or a processor based on an ARM Cortex-M processor from ARM Holdings, such as an STM32 series microcontroller from ST MICROELECTRONICS. In certain alternative forms of the present technology, a 32-bit RISC CPU, such as an STR9 series microcontroller from ST MICROELECTRONICS, or a 16-bit RISC CPU, such as a processor from the MSP430 family of microcontrollers manufactured by TEXAS INSTRUMENTS, may be equally suitable. In one form of the present technique, the central controller 4230 is a dedicated electronic circuit. In one form, the central controller 4230 is an application-specific integrated circuit. In another form, the central controller 4230 comprises discrete electronic components. The central controller 4230 may be configured to receive input signals from the one or more transducers 4270, the one or more input devices 4220, and the humidifier 60.
The central controller 4230 may be configured to provide output signals to one or more of the output device 4290, the treatment device controller 4240, the data communication interface 4280, and the humidifier 60.
In some forms of the present technology, the central controller 4230 is configured to implement one or more methods described herein, such as one or more algorithms represented as computer programs in a non-transitory computer readable storage medium stored on an internal memory. In some forms of the present technology, the central controller 4230 may be integrated with the RPT device 40. However, in some forms of the present technology, some methods may be performed by a remotely located device, such as a mobile computing device. For example, the remotely located device may determine control settings of the ventilator or detect respiratory-related events by analyzing stored data, such as from any of the sensors described herein. As explained above, ownership of all data and operations of the external source or central controller 4230 is typically attributed to the manufacturer of the RPT device 40. Thus, data from the sensor and any other additional operational data is generally not accessible by any other device.
In one form of the present technology, a data communication interface is provided and is connected to the central controller 4230. The data communication interface may be connected to a remote external communication network and/or a local external communication network. The remote external communication network may be connected to a remote external device such as a server or database. The local external communication network may be connected to a local external device, such as a mobile device or a health monitoring device. Thus, the local external communication network may be used by the RPT device 40 or the mobile device to collect data from other devices.
In one form, the data communication interface is part of the central controller 4230. In another form, the data communication interface 4280 is separate from the central controller 4230 and may comprise an integrated circuit or processor. In one form, the remote external communication network is the internet. The data communication interface may connect to the internet using wired communication (e.g., via Ethernet or optical fiber) or a wireless protocol (e.g., CDMA, GSM, 2G, 3G, 4G/LTE, LTE Cat-M, NB-IoT, 5G New Radio, satellite, or beyond 5G). In one form, the local external communication network 4284 utilizes one or more communication standards, such as Bluetooth or a consumer infrared protocol.
The example RPT device 40 includes integrated sensors and communication electronics as shown in fig. 4C. Older RPT devices may be retrofitted with sensor modules that may include communication electronics for transmitting collected data. Such a sensor module may be attached to the RPT device and thereby send the operational data to a remote analysis engine in the server 210.
Fig. 5 shows a diagram of an audio sensor 4278 in an air circuit 4170 connecting the RPT device 40 to the interface 100 of fig. 1. In this example, the conduit 180 (of length L) effectively acts as an acoustic waveguide for the sound generated by the RPT device 40. The input signal is the sound emitted by the RPT device 40. The input signal (e.g., a pulse) enters the audio sensor 4278 positioned at one end of the conduit 180, propagates along the air path in the conduit 180 to the mask 100, and is reflected back along the conduit 180 by features of the air path (which includes the conduit 180 and the mask 100) to re-enter the audio sensor 4278. Thus, the system IRF (the output signal generated by the input pulse) contains an input signal component and a reflected component. One key feature is the time it takes for sound to travel from one end of the air path to the opposite end. This interval appears in the system IRF because the audio sensor 4278 first receives the input signal from the RPT device 40 and then, after a period of time, receives the same signal filtered by the conduit 180 and reflected and filtered by the mask 100 (and possibly any other system 190 attached to the mask, such as the human respiratory system when the mask 100 is secured to the user). This means that the component of the system IRF associated with the reflection from the mask end of the conduit 180 (the reflected component) is delayed relative to the component associated with the input signal (the input signal component), which reaches the audio sensor 4278 after a relatively short delay. (For practical purposes this short delay may be ignored, and the time at which the microphone first responds to the input signal may be treated as approximately zero.) The delay of the reflected component is equal to 2L/c, where L is the length of the conduit and c is the speed of sound in the conduit.
Another feature is that, because the air path is prone to loss, if the conduit is long enough, the input signal component decays to a negligible amount when the reflected component of the system IRF has begun. If this is the case, the input signal component may be separated from the reflected component of the system IRF. Alternatively, the input signal may originate from a speaker at the device end of the air path.
Some embodiments of the disclosed acoustic analysis techniques may implement cepstrum analysis. A cepstrum may be considered the inverse Fourier transform of the logarithm of the spectrum obtained from a forward Fourier transform of the signal (or, equivalently, of its decibel spectrum). This operation essentially converts the convolution of the impulse response function (IRF) and the sound source into an addition, so that the contribution of the sound source can more easily be accounted for or removed in order to isolate the IRF data for analysis. The technique of cepstrum analysis is described in detail in "The Cepstrum: A Guide to Processing" (Childers et al., Proceedings of the IEEE, Vol. 65, No. 10, October 1977) and in Randall RB, Frequency Analysis, Copenhagen: Bruel & Kjaer, p. 344 (1977, revised 1987). The application of cepstrum analysis to the identification of respiratory therapy system components is described in detail in PCT Publication No. WO 2010/091462, entitled "Acoustic Detection for Respiratory Treatment Apparatus", the entire contents of which are incorporated herein by reference.
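As a concrete illustration of the transform described above, the short Python sketch below computes a real cepstrum with NumPy; it is a minimal sketch of standard cepstrum analysis, not the specific implementation used by the central controller 4230.

```python
import numpy as np

def real_cepstrum(y: np.ndarray) -> np.ndarray:
    """Real cepstrum of a sampled sound signal y.

    Forward FFT -> log magnitude (turns the convolution of sound source and
    impulse response into an addition) -> inverse FFT into the 'quefrency'
    domain, where a reflection shows up as a peak at its round-trip delay.
    """
    spectrum = np.fft.rfft(y)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)  # small offset avoids log(0)
    return np.fft.irfft(log_magnitude, n=len(y))
```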
As mentioned previously, a respiratory therapy system generally includes a respiratory therapy device (e.g., an RPT device), a humidifier, an air delivery conduit, and a patient interface, such as the components shown in fig. 1. Many different forms of patient interface may be used with a given RPT device, such as nasal pillows, nasal prongs, nasal masks that seal on the nasal ridge, nasal-only masks that seal at the lower perimeter of the nose instead of on the nasal ridge, nose-and-mouth (oronasal) masks that seal on the nasal bridge or at the lower perimeter of the nose rather than on the nasal ridge in the manner of conventional full face masks, tube-down masks (where the tube is connected to a forward-facing portion of the mask), masks with tube headgear, masks with an integrated short tube connected to the main tube, and masks with a decoupling structure such as an elbow, where the main tube is directly connected to the elbow, as well as other types of masks and variations of the foregoing. Furthermore, different forms of air delivery conduit may be used. In order to provide improved control of the therapy delivered to the user interface, measured or estimated therapy parameters, such as pressure and vent flow at the user interface, may be analyzed. In older systems, the type of component being used is identified by the user; as will be explained below, knowledge of the component type may be used to determine the user's best interface. Some RPT devices include a menu system that allows a user to select the type of system component (including the user interface) to use, such as brand, form, and model. Once the user has entered the component type, the RPT device may select the operating parameters of the flow generator that best suit the selected component. The data collected by the RPT device may be used to evaluate the effectiveness of a particular selected component, such as a user interface, in supplying pressurized air to the user.
The acoustic analysis may be used to identify components of the respiratory pressure treatment system, as explained above with reference to fig. 5. In this specification, "identification" of a component refers to identification of the type of the component. Hereinafter, for the sake of brevity, "mask" is used synonymously with "user interface" even though there are user interfaces that are not typically described as "mask".
The system may identify the length of the conduit in use and the mask connected to the conduit via analysis of the sound signals acquired by the audio sensor 4278. The technique may identify the mask and conduit regardless of whether the user is wearing the mask at the time of identification.
The technique includes an analysis method that can separate acoustic mask reflections from other system noise and responses, including but not limited to blower sound. This allows differences between the acoustic reflections from different masks (typically dictated by mask shape, configuration, and materials) to be identified, and may allow different masks to be identified without user intervention.
An example method of identifying a mask is to sample the output sound signal y(t) acquired by the audio sensor 4278 at at least the Nyquist rate (e.g., 20 kHz), calculate a cepstrum from the sampled output signal, and then separate the reflected component of the cepstrum from the input signal component of the cepstrum. The reflected component of the cepstrum includes the acoustic reflections of the input sound signal from the mask and is therefore referred to as the "acoustic signature" or "mask signature" of the mask. The acoustic signature is then compared to a predefined or predetermined database of previously measured acoustic signatures obtained from systems containing known masks. Criteria may be set to determine whether the similarity is sufficient. In one example embodiment, these comparisons may be done based on a single maximum data peak in the cross-correlation between the measured and stored acoustic signatures. However, the method may be improved by comparing several data peaks, or alternatively by comparing an extracted set of unique cepstral features.
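One possible way to separate the reflected component is to window the cepstrum around the expected round-trip delay 2L/c. The sketch below assumes a known approximate conduit length and sampling rate; the 10 ms window length is an illustrative choice, not a value from the disclosure.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def mask_signature(y: np.ndarray, fs: float, conduit_length_m: float,
                   window_ms: float = 10.0) -> np.ndarray:
    """Extract the reflected component ('mask signature') from the cepstrum of y.

    The reflection from the mask end of the conduit appears at a quefrency of
    roughly 2L/c seconds, so a short window starting there isolates it from
    the input-signal component near zero quefrency.
    """
    # Real cepstrum (same computation as in the earlier sketch).
    cep = np.fft.irfft(np.log(np.abs(np.fft.rfft(y)) + 1e-12), n=len(y))
    delay_s = 2.0 * conduit_length_m / SPEED_OF_SOUND
    start = int(delay_s * fs)
    stop = start + int(window_ms * 1e-3 * fs)
    return cep[start:stop]
```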
Alternatively, the same method may be used to determine the conduit length by finding the delay between the sound received directly from the RPT device 40 and its reflection from the mask 100; the delay is proportional to the length of the conduit 180. In addition, a variation in conduit diameter may increase or decrease the amplitude of the reflected signal and thus may also be identifiable. Such an evaluation may be performed by comparing the current reflection data with previous reflection data. The diameter change may be estimated from the proportional change in amplitude of the reflected signal (i.e., the reflection data).
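For the conduit-length estimate, a sketch under the same assumptions is to locate the strongest reflection peak in the cepstrum and convert its delay via L = c·delay/2; the 2 ms exclusion window used to skip the input-signal component is illustrative only.

```python
import numpy as np

def estimate_conduit_length(cep: np.ndarray, fs: float, min_delay_s: float = 2e-3) -> float:
    """Estimate conduit length (m) from the delay of the strongest reflection.

    cep: real cepstrum of the output sound signal; fs: sampling rate (Hz).
    The first couple of milliseconds are skipped so the input-signal component
    near zero quefrency is not mistaken for the mask reflection.
    """
    c = 343.0  # speed of sound in air (m/s), approximate
    start = int(min_delay_s * fs)
    peak_idx = start + int(np.argmax(np.abs(cep[start:])))
    delay_s = peak_idx / fs
    return c * delay_s / 2.0  # delay = 2L/c, so L = c * delay / 2
```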
In accordance with the present technique, the data associated with the reflected component may then be compared to similar data from previously identified mask reflected components, such as mask reflected components contained in a memory or database of mask reflected components.
For example, the reflected component of the tested mask ("mask signature") may be separated from the cepstrum of the output signal generated by the microphone. The mask signature may be compared to previous or predetermined mask signatures of known masks stored as data templates for the device. One way to do this is to calculate the cross-correlation between the mask signature of the tested mask and the previously stored mask signatures for all known masks or data templates. The cross-correlation with the highest peak corresponds to the most probable identity of the mask being tested, and the location of the peak should be proportional to the length of the conduit.
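A minimal sketch of the single-peak comparison described above follows; the template library, any normalization, and the acceptance threshold would be application-specific details that are not specified here.

```python
import numpy as np

def identify_mask(signature: np.ndarray, templates: dict):
    """Return the name of the stored mask signature whose cross-correlation
    with the measured signature has the highest peak, plus that peak value.

    templates: mapping from mask model name to a previously stored signature.
    Comparing several peaks, or a set of extracted cepstral features, would be
    a refinement of this single-peak approach.
    """
    best_name, best_peak = None, -np.inf
    for name, template in templates.items():
        xcorr = np.correlate(signature, template, mode="full")
        peak = float(np.max(xcorr))
        if peak > best_peak:
            best_name, best_peak = name, peak
    return best_name, best_peak
```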
As explained above, the RPT device 40 may provide data of the user interface type as well as operational data. The operational data may be associated with the mask type and data related to the user to determine whether a particular mask type is valid. For example, the operational data reflects the time of use of the RPT device 40 and whether the use provides effective therapy. The type of user interface may be associated with a degree of user compliance or treatment effectiveness as determined from the operational data collected by RPT device 40. The relevant data may be used to better determine an effective interface for new users who need respiratory therapy from similar RPT devices. The selection may be combined with facial dimensions obtained from a facial scan of the new user to aid in selecting the interface.
Thus, examples of the present technology may allow a user to more quickly and conveniently obtain a user interface such as a mask by integrating data collected from the use of an RPT device relating to different masks of a population of users with facial features of individual users determined by a scanning process. This scanning process allows users to comfortably and quickly measure their facial anatomy in their own home using a computing device, such as a desktop computer, tablet, smart phone, or other mobile device. Then, after analyzing the facial dimensions of the user and data from the general population of users using the various different interfaces, the computing device may receive or generate recommendations of the appropriate user interface size and type.
In an advantageous embodiment, the present technology may employ an application that is downloadable from a manufacturer or third party server to a smart phone or tablet with an integrated camera. When launched, the application may provide visual and/or audio instructions. Following the guidance, the user may stand in front of a mirror and press a camera button on the user interface. The activated process may then take a series of photographs of the user's face and, based on the processor analyzing the photographs, obtain the facial dimensions within seconds in order to select an interface.
The user may capture an image or series of images of their facial anatomy. For example, instructions of an application program stored on a computer-readable medium may, when executed by a processor, detect various facial landmarks within an image, measure and scale the distances between such landmarks, compare these distances to a data record, and recommend an appropriate user interface size. Thus, a consumer's automated device may allow accurate user interface selection, such as at home, so that the customer can determine sizing without the assistance of trained personnel.
Other examples may include identifying three-dimensional facial features from an image. Facial feature recognition for sizing is based on the "shape" of the different features, where the shape is described as a nearly continuous surface of the user's face. In practice a truly continuous surface is not possible, but collecting roughly 10k to 100k points on the face provides a good approximation of the continuous surface of the face. There are several example techniques for collecting facial image data to identify three-dimensional facial features.
One method may be to determine facial features from a 2D image. In this approach, computer vision (CV) and trained machine learning (ML) models are employed to extract key facial landmarks. For example, the OpenCV and DLib libraries, trained on a standard set of facial landmarks, may be used for landmark detection and comparison. Once the preliminary facial landmarks are extracted, the derived three-dimensional features must be scaled appropriately. Scaling involves detecting an object of known size, such as a coin, a credit card, or the user's iris, to provide a known reference. For example, Google's MediaPipe Face Mesh and Iris models may track the user's iris and scale the facial landmarks for mask sizing purposes. These models provide 468 facial landmarks and 10 iris landmarks. The iris data is then used to scale the other identified facial features.
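For illustration, the Python sketch below uses the MediaPipe Face Mesh solution with iris refinement to estimate one facial dimension from a single 2D photograph. The assumed average iris diameter of about 11.7 mm, the choice of landmark indices, and the sellion-to-chin measurement are assumptions for the sketch rather than details taken from the present disclosure.

```python
import cv2
import mediapipe as mp
import numpy as np

IRIS_DIAMETER_MM = 11.7  # assumed average human iris diameter

def sellion_to_chin_mm(image_bgr: np.ndarray) -> float:
    """Estimate the sellion-to-chin distance (mm) from a 2D image, using the
    iris as the scaling reference."""
    h, w = image_bgr.shape[:2]
    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                         refine_landmarks=True,
                                         max_num_faces=1) as face_mesh:
        result = face_mesh.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_face_landmarks:
        raise ValueError("no face detected")
    lm = result.multi_face_landmarks[0].landmark
    pts = np.array([[p.x * w, p.y * h] for p in lm])  # pixel coordinates

    # With refine_landmarks=True, indices 468-472 outline one iris;
    # its widest pixel extent approximates the iris diameter.
    iris = pts[468:473]
    iris_px = max(np.linalg.norm(a - b) for a in iris for b in iris)
    mm_per_px = IRIS_DIAMETER_MM / iris_px

    # Indices 168 (between the eyes) and 152 (chin) are commonly used points.
    return float(np.linalg.norm(pts[168] - pts[152]) * mm_per_px)
```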
Another method of determining three-dimensional features is to use facial data obtained from a 3D camera with a depth sensor. A 3D camera (such as the cameras on the iPhone X and later) may perform a 3D scan of the face and return a meshed (triangulated) surface. The number of surface points is typically about 50k. In this example, there are two types of output from a 3D camera such as an iPhone: (a) raw scan data, and (b) a lower-resolution blend-shape model used for face detection and tracking. The latter includes automatic landmarking, while the former does not. The mesh surface data does not need to be scaled.
Another approach is to generate a 3D model directly from 2D images. This involves using a 3D morphable model (or 3DMM) and machine learning to adjust the shape of the 3DMM to match the face in the image. The single or multiple image views may come from multiple angles and may be derived from video captured on a digital camera. The 3DMM may be fitted to the data obtained from multiple 2D images via a machine learning matching routine. The 3DMM may be adapted to account for the shape, pose, and expression shown in the facial image in order to modify the facial features. Scaling may still be required, and so the detection and scaling of known objects such as eye features (e.g., the iris) may be used as a reference to account for scaling errors due to factors such as age.
Three-dimensional feature or shape data may be used for mask sizing. One way to match a mask is to align the recognized facial surface with the known surface of a proposed mask. These surfaces are then aligned, for example by a non-rigid iterative closest point (NICP) technique. A suitability (fit) score may then be calculated by determining an average distance, i.e., the average of the distances between the nearest or corresponding points of the facial features and the mask contact surface. A low score corresponds to a good fit.
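A minimal sketch of the fit-score computation, assuming the face scan and the mask contact surface are already expressed in the same units and registered (e.g., by an ICP/NICP step not shown), might look like this:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_score(face_points: np.ndarray, mask_contact_points: np.ndarray) -> float:
    """Average distance from each mask contact point to its nearest point on
    the scanned face surface; a lower score corresponds to a better fit.

    Both inputs are (N, 3) arrays of 3D points.
    """
    tree = cKDTree(face_points)
    distances, _ = tree.query(mask_contact_points)
    return float(np.mean(distances))
```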
Another approach to mask sizing may be to use 3D facial scans collected from different users. In this example, 3D data may be collected from more than 1,000 users. These users are grouped according to their ideal mask size. In this example, the number of ideal mask sizes available is determined by the mask designer to cover different user types. Such grouping may also be based on other types of data, such as grouping according to traditional 2D landmarks or grouping according to the principal components of the face shape. Principal component analysis can be used to determine a reduced set of characteristics of the facial features. An average set of 3D facial features representing each mask size is then calculated based on the mask size groupings.
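As a rough sketch of this grouping step (assuming the scans have already been brought into point-to-point correspondence and flattened into equal-length vectors), principal component analysis and per-size averaging could be done as follows; the number of components is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

def summarize_population(face_vectors: np.ndarray, mask_sizes: list, n_components: int = 20):
    """Return (reduced coordinates, average face per ideal mask size).

    face_vectors: (n_users, n_features) array of flattened, corresponded 3D
    face coordinates; mask_sizes: the ideal size label recorded for each user.
    The reduced PCA coordinates give a compact description of face shape that
    could alternatively drive the grouping itself.
    """
    reduced = PCA(n_components=n_components).fit_transform(face_vectors)
    averages = {}
    for size in set(mask_sizes):
        idx = [i for i, s in enumerate(mask_sizes) if s == size]
        averages[size] = face_vectors[idx].mean(axis=0)  # average face for this size
    return reduced, averages
```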
To size the new user, a 3D face scan is performed or 3D data is derived from the 2D image, and a suitability score for the new user is calculated for each of the average faces. The mask size and mask type selected are those that correspond to the average face with the lowest fit score. Additional personal preferences may be incorporated. The specific facial features may also be used to create custom sizing based on modifying one of the available mask types.
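Continuing the sketch above, sizing a new user then reduces to scoring the new (aligned) scan against each average face and taking the minimum; the mean nearest-point distance stands in for the fit score described earlier.

```python
import numpy as np
from scipy.spatial import cKDTree

def recommend_size(new_face_points: np.ndarray, average_faces: dict) -> str:
    """Return the mask size whose average face gives the lowest fit score.

    new_face_points: (N, 3) aligned scan of the new user; average_faces maps a
    size label to a flattened average face (reshaped here to (M, 3)).
    """
    scores = {}
    for size, avg in average_faces.items():
        avg_points = np.asarray(avg).reshape(-1, 3)
        dists, _ = cKDTree(avg_points).query(new_face_points)
        scores[size] = float(np.mean(dists))
    return min(scores, key=scores.get)
```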
FIG. 7 depicts an example system 200 that may be implemented for automatic facial feature measurement and user interface selection. The system 200 may generally include one or more of a server 210, a communication network 220, and a computing device 230. The server 210 and the computing device 230 may communicate via a communication network 220, the communication network 220 may be a wired network 222, a wireless network 224, or a wired network with a wireless link 226. In some versions, server 210 may communicate unidirectionally with computing device 230 by providing information to computing device 230, and vice versa. In other embodiments, server 210 and computing device 230 may share information and/or processing tasks. For example, the system may be implemented to allow for automatic purchase of a user interface such as mask 100 in fig. 1, where the process may include an automatic sizing process described in more detail herein. For example, a customer may order masks online after running a mask selection process that automatically identifies the appropriate mask size, type, and/or model by image analysis of the customer's facial features in combination with operational data from other masks and RPT device operational data from a patient population using different types and sizes of masks. The system 200 may include one or more databases. The one or more databases may include a patient database 260, a patient interface database 270, and any other database described herein. It should be appreciated that in some instances of the present technology, all data that needs to be accessed during execution of a system or method may be stored in a single database. In other examples, the data may be stored in two or more separate databases. Thus, where there is a reference to a particular database, it should be understood that in some instances the particular database may be a different database, while in other instances it may be part of a larger database.
Server 210 and/or computing device 230 may also communicate with respiratory therapy devices, such as RPT device 250, which is similar to RPT device 40 shown in fig. 1. In this example, RPT device 250 collects operational data and other relevant data related to user usage and mask leakage, to provide feedback related to mask use. Data from the RPT device 250 is collected and associated in a user database 260 with the individual user data of the user using the RPT device 250. The user interface database 270 includes data on the different types and sizes of interfaces, such as masks, that are available to new users. The user interface database 270 may also include acoustic signature data for each type of mask, which may enable the mask type to be determined from audio data collected from the respiratory therapy device. A mask analysis engine executed by the server 210 correlates individual facial dimension data with the effectiveness determined from operational data collected by RPT devices 250 across the user population, in order to determine an effective mask size and type. For example, effective compliance may be demonstrated by minimal detected leakage, maximum compliance with a treatment plan (e.g., mask on/off times, frequency of mask on/off events, treatment pressure used), the number of overnight apneas, the AHI level, the pressure settings used on the device, and the prescribed pressure settings. This data may be associated with facial dimension data or other data based on the facial image of a new user.
For example, the facial shape derived from imaging the user's face (such as 3D scan data) may be compared to the geometry of the features of the proposed mask (cushion, catheter, headgear). The differences between shape and geometry can be analyzed to determine if there are problems with the suitability of leaks or high contact pressure zones that can cause redness/pain in the contact area. As explained herein, the data collected for a population of users may be combined with other forms of data, such as detected leaks, to identify an optimal mask system for a particular facial shape (i.e., the shape of the mouth, nose, cheek, head, etc.).
As will be explained, the server 210 combines data from multiple users stored in the database 260 with the corresponding mask size and type data stored in the database 270 to select an appropriate mask: the mask that best fits the scanned facial dimension data collected from the new user and that achieved the best operational data for users whose facial dimensions and other features, sleep behavior data, and demographic data are similar to those of the new user. In some examples, system 200 includes one or more databases for storing a plurality of facial features from a user population, a corresponding plurality of user interfaces used by the user population, and operational data of respiratory pressure treatment devices used by the user population with the plurality of corresponding user interfaces. A selection engine may be coupled to the one or more databases, the selection engine being operable to select a user interface for the user based on the desired effect and based on the stored operational data and facial features of the user. The system 200 may be configured to perform a corresponding method of selecting a user interface. Thus, the system 200 may provide mask recommendations to a new user by determining which masks have been shown to be optimal for existing users who are similar in various ways to the new user. For example, the optimal mask may be the mask type, model, and/or size that has been shown to be associated with maximum compliance with treatment, minimum leakage, minimum apneas, minimum AHI, and the most positive subjective user feedback. In various examples of the present technology, each of these outcomes may be given a different weight when determining the optimal mask.
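As a hedged illustration of how such weighting might be combined into a single ranking, the sketch below scores each candidate mask from population-level outcome statistics; the field names and weights are invented for the example and are not taken from the disclosure.

```python
def mask_outcome_score(stats: dict, weights: dict = None) -> float:
    """Combine operational outcomes for one mask model into a single score.

    stats holds population averages for users similar to the new user, e.g.
    {'compliance_hours': 6.2, 'leak_lpm': 12.0, 'ahi': 3.1, 'feedback': 0.8}.
    Higher is better; leakage and AHI therefore carry negative weights.
    """
    if weights is None:
        weights = {'compliance_hours': 1.0, 'leak_lpm': -0.5, 'ahi': -1.0, 'feedback': 2.0}
    return sum(weights[key] * stats.get(key, 0.0) for key in weights)

# The recommendation is then the candidate with the highest score, e.g.:
# best_mask = max(candidates, key=lambda m: mask_outcome_score(population_stats[m]))
```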
The computing device 230 may be a desktop or laptop computer 232 or a mobile device, such as a smart phone 234 or tablet 236. Fig. 7 depicts a general architecture 300 of a computing device 230. The apparatus 230 may include one or more processors 310. The device 230 may also include a display interface 320, a user control/input interface 331, sensors 340 and/or sensor interfaces for one or more sensors, an Inertial Measurement Unit (IMU) 342, and a non-volatile memory/data storage device 350.
The sensor 340 may be one or more cameras (e.g., CCD charge coupled devices or active pixel sensors) integrated into the computing device 230, such as those provided in a smart phone or laptop computer. Alternatively, where computing device 230 is a desktop computer, device 230 may include a sensor interface for coupling with an external camera, such as webcam 233 depicted in fig. 6. Other exemplary sensors that may be integrated with or external to the computing device that may be used to facilitate the methods described herein include stereo cameras for capturing three-dimensional images, or photodetectors capable of detecting reflected light from lasers or stroboscopic/structured light sources.
The user control/input interface 331 allows a user to provide commands or responses to prompts or instructions provided to the user. This may be, for example, a touch panel, keyboard, mouse, microphone and/or speaker.
The display interface 320 may include a monitor, an LCD panel, or the like, to display prompts, output information (such as facial measurements or interface size recommendations), and other information such as the capture display, as described in further detail below.
Memory/data storage 350 may be internal memory of a computing device such as RAM, flash memory, or ROM. In some embodiments, memory/data storage 350 may also be an external memory linked to computing device 230, such as an SD card, a server, a USB flash drive, or an optical disk. In other embodiments, memory/data storage 350 may be a combination of external memory and internal memory. Memory/data storage 350 includes stored data 354 and processor control instructions 352 that direct processor 310 to perform certain tasks. The stored data 354 may include data received by the sensor 340, such as captured images, as well as other data provided as part of the application. Processor control instructions 352 may also be provided as part of an application program.
As explained above, the facial image may be captured by a mobile computing device such as a smart phone 234. An appropriate application executing on computing device 230 or server 210 may provide three-dimensional relevant facial data to assist in selecting an appropriate mask. The application may use any suitable face scanning method. Such applications may include Capture from Standard Cyborg (https://www.standardcyborg.com/), applications from Scandy Pro (https://www.scandy.co/products/scandy-pro), the Beauty3D application from Qianxun3D (http://www.qianxun3d.com/scanpage), Un3D FaceApp (http://www.unre.ai/index.php?program=ios/detail), and applications from Bellus3D (https://www.bellus3d.com/). The detailed procedure of facial scanning includes the technique disclosed in WO 2017/000031, which is incorporated herein by reference in its entirety.
One such application is an application 360 for facial feature measurement and/or user interface sizing, which may be an application that is downloadable to a mobile device such as the smart phone 234 and/or tablet 236. The application 360, which may be stored on a computer-readable medium such as the memory/data storage 350, includes programming instructions for the processor 310 to perform certain tasks related to facial feature measurement and/or user interface sizing. The application also includes data that can be processed by algorithms of the automated method. Such data may include data records, reference features, and correction factors, as explained in more detail below.
Application 360 is executed by processor 310 to measure facial features of a user using a two-dimensional or three-dimensional image and to select an appropriate user interface size and type based on the resulting measurements, such as from a set of standard sizes. The method can generally be characterized as comprising three or four distinct stages: a pre-capture stage, a capture stage, a post-capture image processing stage, and a comparison and output stage.
In some cases, an application for facial feature measurement and user interface sizing may control the processor 310 to output a visual display including a reference feature on the display interface 320. The user may position the reference feature adjacent to his or her facial features, such as by moving the camera. The processor may then capture and store one or more images of the facial features in association with the reference feature when certain conditions, such as alignment conditions, are met. This can be done with the aid of a mirror, which reflects the displayed reference feature and the user's face back to the camera. The application then controls the processor 310 to identify certain facial features within the images and measure the distances between them. Then, through image analysis processing, a scaling factor based on the reference feature may be used to convert the facial feature measurements (which may be pixel counts) into standard measurements suitable for mask sizing, expressed in standardized units of measurement such as meters or inches.
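A minimal sketch of that conversion, assuming the reference feature's displayed width is known in millimeters and its width in the captured image has been measured in pixels, is shown below; landmark detection itself is outside the sketch.

```python
import numpy as np

def mm_per_pixel(reference_width_px: float, reference_width_mm: float) -> float:
    """Scaling factor derived from the reference feature (e.g., the displayed
    QR code of known width) as it appears in the captured image."""
    return reference_width_mm / reference_width_px

def facial_distance_mm(point_a_px, point_b_px, scale_mm_per_px: float) -> float:
    """Convert the pixel distance between two detected facial landmarks into
    millimeters, assuming they lie roughly in the plane of the reference feature."""
    a, b = np.asarray(point_a_px, dtype=float), np.asarray(point_b_px, dtype=float)
    return float(np.linalg.norm(a - b) * scale_mm_per_px)
```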
Additional correction factors may be applied to these measurements. The facial feature measurements may be compared to data records that include measurement ranges corresponding to different user interface sizes for a particular user interface form (such as a nasal mask or FFM). A recommended size may then be selected based on the comparison and output to the user/patient as a recommendation. This process can conveniently be carried out in the comfort of any location the user prefers. The application may perform the method within seconds. In one example, the application executes the method in real time.
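The comparison against a data record of measurement ranges can be sketched as a simple lookup; the size labels and millimeter ranges below are placeholders, not values from the disclosure.

```python
def recommend_interface_size(measurement_mm: float, size_table: dict) -> str:
    """Return the size whose (low, high) range, in mm, contains the measurement
    for a given interface form (e.g., nasal mask or full face mask)."""
    for size, (low, high) in size_table.items():
        if low <= measurement_mm <= high:
            return size
    return "no standard size matched"

# Example with hypothetical ranges:
# recommend_interface_size(103.0, {"S": (80, 95), "M": (95, 110), "L": (110, 130)})
```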
In the pre-capture stage, the processor 310 assists the user in, among other things, establishing appropriate conditions for capturing one or more images for sizing. Such conditions include, for example, proper illumination, proper camera orientation, and the absence of motion blur caused by an unsteady hand holding the computing device 230.
The user may conveniently download an application for performing automatic measurement and sizing at a user device, such as computing device 230, from a server, such as a third party application store server, onto their computing device 230. When the download is complete, such application may be stored on an internal non-volatile memory of the computing device, such as RAM or flash memory. Computing device 230 is preferably a mobile device such as a smart phone 234 or tablet 236.
When the user launches the application, the processor 310 may prompt the user via the display interface 320 to provide user-specific information, such as age, gender, weight, and height. However, the processor 310 may prompt the user to enter this information at any time, such as after the facial features of the user are measured. The processor 310 may also present a tutorial that may be presented audibly and/or visually (as provided by an application) to help users understand their roles during the process. The prompt may also require information about the type of user interface (e.g., nasal interface or full face interface, etc.) and the type of device for which the user interface is to be used. Moreover, at a pre-capture stage, the application may extrapolate the user-specific information based on information already collected by the user (such as after receiving a captured image of the user's face) and based on machine learning techniques or through artificial intelligence.
When the user is ready to continue (which may be indicated by user input or in response to prompts via the user control/input interface 331), the processor 310 activates the sensor 340 as directed by the processor control instructions 352 of the application. The sensor 340 is preferably a front facing camera of the mobile device, the sensor 340 being located on the same side of the mobile device as the display interface 320. The camera is typically configured to capture two-dimensional images. Mobile device cameras that capture two-dimensional images are ubiquitous. The present technology exploits this ubiquity to avoid the need to burden the user with obtaining specialized equipment.
At about the same time as the sensor/camera 340 is activated, the processor 310 presents a captured display on the display interface 320 as directed by the application. The capture display may include a camera live action preview, a reference feature, a target box, and one or more status indicators, or any combination thereof. In this example, the reference feature is displayed centered on the display interface and has a width corresponding to the width of the display interface 320. The vertical position of the reference feature may be such that a top edge of the reference feature abuts an uppermost edge of the display interface 320 or a bottom edge of the reference feature abuts a lowermost edge of the display interface 320. A portion of the display interface 320 will display a camera live action preview 324, typically displaying in real-time the user facial features captured by the sensor/camera 340 while the user is in the correct position and orientation.
The reference feature is a known (predetermined) feature of the computing device 230 and provides the processor 310 with a frame of reference that allows the processor 310 to scale the captured image. The reference feature is preferably a feature other than a facial or anatomical feature of the user. Thus, during the image processing stage, the reference feature helps the processor 310 determine when certain alignment conditions are met, such as during the pre-capture stage. The reference feature may be a Quick Response (QR) code or another known pattern or marker that can provide certain information to the processor 310, such as scaling information, orientation, and/or any other desired information that may optionally be determined from the structure of the QR code. The QR code may have a square or rectangular shape. When displayed on the display interface 320, the reference feature has predetermined dimensions, such as in millimeters or centimeters, and the values of these dimensions may be encoded into the application and made available to the processor 310 at an appropriate time. The actual dimensions of the reference feature 326 may vary between computing devices. In some versions, the application may be configured to be specific to a computing device model, in which case the dimensions of the reference feature 326 are known when displayed on that specific model. However, in other embodiments, the application may direct the processor 310 to obtain certain information from the device 230, such as display size and/or zoom characteristics, that allows the processor 310 to calculate via scaling the real-world/actual dimensions of the reference feature as displayed on the display interface 320. Regardless, the actual dimensions of the reference feature as displayed on the display interface 320 of such computing devices are generally known prior to post-capture image processing.
Along with the reference feature, a target box may be displayed on the display interface 320. The target box allows the user to align certain components shown in the capture display 322 within the target box, which is desirable for successful image capture.
The status indicator provides information to the user regarding the status of the process. This helps to ensure that the user does not make a large adjustment to the sensor/camera positioning before image capture is completed.
Thus, when the user holds the display interface 320 parallel to the facial features to be measured and presents the display interface 320 to a mirror or other reflective surface, the reference feature is prominently displayed and overlays the real-time image seen by the camera/sensor 340 as reflected by the mirror. The reference feature may be fixed near the top of the display interface 320. The reference feature is displayed prominently enough, at least in part, that it is clearly visible to the sensor 340 and can be readily identified by the processor 310. Furthermore, the reference feature may overlay the real-time view of the user's face, which helps avoid confusing the user.
The user may also be instructed by the processor 310 via the display interface 320, by audible instructions via a speaker of the computing device 230, or by a tutorial in advance, to position the display interface 320 in the plane of the facial features to be measured. For example, the user may be instructed to position the display interface 320 so that it faces forward and is placed under, against, or adjacent to the chin of the user, in a plane aligned with certain facial features to be measured. For example, the display interface 320 may be placed in planar alignment with the nose bridge point and the chin point. Since the final captured image is two-dimensional, planar alignment helps ensure that the scaling derived from the reference feature 326 applies equally to the facial feature measurements. In this regard, the distance between the mirror and the user's facial features and the distance between the mirror and the display will be approximately the same.
When the user is positioned in front of the mirror and the display interface 320 including the reference feature is placed roughly in alignment with the facial feature plane to be measured, the processor 310 checks certain conditions to help ensure adequate alignment. As previously described, one exemplary condition that may be established by an application is that the entire reference feature must be detected within the target box 328 in order to continue. If the processor 310 detects that the reference feature is not completely located within the target frame, the processor 310 may disable or delay image capture. The user may then move their face along with the display interface 320 to maintain planarity until the reference feature as displayed in the live action preview is within the target box. This facilitates optimal alignment of the facial features and display interface 320 relative to the mirror used for image capture.
When the processor 310 detects the entire reference feature within the target frame, the processor 310 may read the IMU 342 of the computing device to detect the device tilt angle. For example, IMU 342 may include an accelerometer or gyroscope. Thus, the processor 310 may evaluate the device tilt, such as by comparing it to one or more thresholds, to ensure that the tilt is within a suitable range. For example, if it is determined that computing device 230 (and thus display interface 320 and the facial features of the user) is tilted within approximately ±5 degrees in any direction, the process may continue to the capture phase. In other embodiments, the tilt angle for continuation may be within about ±10 degrees, ±7 degrees, ±3 degrees, or ±1 degree. If excessive tilt is detected, a warning message may be displayed or issued to correct the undesired tilt. This is particularly useful for helping the user inhibit or reduce excessive tilt (particularly in the anterior-posterior direction), which can be a source of measurement error if not corrected, because the captured reference image will not have the appropriate aspect ratio.
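The tilt check can be pictured as a simple comparison of IMU-derived pitch and roll against a threshold. The sketch below is a minimal illustration assuming accelerometer gravity components are available; the threshold value, function names, and API shape are hypothetical and not taken from the application itself.

```python
import math

TILT_LIMIT_DEG = 5.0  # example threshold (±5 degrees), per the description above

def tilt_angles_deg(ax, ay, az):
    """Approximate pitch and roll in degrees from accelerometer gravity components."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    return pitch, roll

def capture_allowed(ax, ay, az, limit=TILT_LIMIT_DEG):
    """Allow image capture only when both tilt angles are within the limit."""
    pitch, roll = tilt_angles_deg(ax, ay, az)
    return abs(pitch) <= limit and abs(roll) <= limit
```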
When alignment has been determined by the processor 310 as controlled by the application, the processor 310 proceeds to the capture phase. The capturing phase preferably occurs automatically once the alignment parameters and any other preconditions are met. However, in some embodiments, the user may initiate capture in response to a prompt to do so.
When image capture is initiated, the processor 310 captures n images, preferably more than one image, via the sensor 340. For example, the processor 310 may capture about 5 to 20 images, 10 to 15 images, or the like via the sensor 340. The number of images captured may be time-based. In other words, the number of images captured may be based on the number of images of a predetermined resolution that the sensor 340 can capture during a predetermined time interval. For example, if the sensor 340 can capture 40 images at the predetermined resolution within 1 second and the predetermined time interval for capture is 1 second, the sensor 340 will capture 40 images for processing by the processor 310. The number of images may be user defined, determined by artificial intelligence or machine learning on the server 210 based on detected environmental conditions, or determined based on a target level of expected accuracy. For example, if high accuracy is desired, more captured images may be required. Although it is preferred to capture multiple images for processing, one image is also contemplated and can be successfully used to obtain accurate measurements. However, more than one image allows an average measurement to be obtained, which may reduce errors/inconsistencies and improve accuracy. The processor 310 may place these images in the stored data 354 of the memory/data storage 350 for post-capture processing.
Once the image is captured, the processor 310 processes the image to detect or identify facial features/landmarks and measure distances between the landmarks. The resulting measurements may be used to recommend appropriate user interface dimensions. Alternatively, the processing may be performed by the server 210 receiving the transmitted captured image and/or on a computing device (e.g., a smart phone) of the user. Processing may also be performed by a combination of processor 310 and server 210. In one example, the recommended user interface size may be based primarily on the nose width of the user. In other examples, the recommended user interface dimensions may be based on the mouth and/or nose dimensions of the user.
The processor 310, as controlled by the application, retrieves one or more captured images from the stored data 354. The processor 310 then parses each image to identify the pixels making up the two-dimensional captured image, and detects certain pre-specified facial features within the pixel formation.
The detection may be performed by the processor 310 using edge detection, such as Canny, Prewitt, Sobel, or Roberts edge detection. These edge detection techniques/algorithms help identify the locations of certain facial features within the pixel formation, which correspond to the actual facial features of the user presented for image capture. For example, edge detection techniques may first identify the user's face within the image, and then identify the pixel locations within the image that correspond to specific facial features such as each eye and its boundary, the mouth and its corners, the left and right nose wings, the nose bridge point, the suprachin point, the inter-eyebrow point, the left and right nasolabial folds, and so forth. The processor 310 may then mark, annotate, or store the particular pixel location of each of these features. Alternatively, or if such detection by the processor 310/server 210 is unsuccessful, the pre-specified facial features may be manually detected and marked, annotated, or stored by a human operator with access to the captured image through the user interface of the processor 310/server 210.
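As a concrete illustration, the edge-detection step could look like the following sketch, assuming OpenCV is available; the threshold values are arbitrary examples, and the mapping from the edge map to named facial landmarks is application-specific and therefore only noted in a comment.

```python
import cv2

def detect_edges(image_path):
    """Load a captured image and return its edge map using Canny edge detection."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Thresholds are illustrative; real values would be tuned for lighting conditions.
    edges = cv2.Canny(gray, threshold1=50, threshold2=150)
    # Locating named landmarks (eyes, nose wings, nose bridge point, etc.) within
    # the edge map is application-specific and not reproduced here.
    return edges
```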
Once the pixel coordinates of these facial features are identified, the processor 310, as controlled by the application, measures the pixel distances between certain identified features. For example, a distance may generally be determined as a number of pixels between features, and may later be scaled. For example, measurements may be made between the left and right nose wings to determine the pixel width of the nose, and/or between the nose bridge point and the on-chin point to determine the pixel height of the face. Other examples include the pixel distances between the eyes, between the corners of the mouth, and between the left and right nasolabial folds, to obtain additional measurement data relevant to particular interface configurations, such as those that engage the mouth. Further distances between facial features may be measured. In this example, certain facial dimensions are used in the user interface selection process.
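A minimal sketch of this measurement step is shown below; the landmark names and pixel coordinates are hypothetical examples, not values produced by the application.

```python
import math

def pixel_distance(p1, p2):
    """Euclidean distance, in pixels, between two (x, y) landmark coordinates."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

# Hypothetical pixel coordinates identified for one captured image.
landmarks_px = {
    "left_nose_wing": (412, 633),
    "right_nose_wing": (508, 630),
    "nose_bridge_point": (460, 420),
    "on_chin_point": (462, 840),
}

nose_width_px = pixel_distance(landmarks_px["left_nose_wing"],
                               landmarks_px["right_nose_wing"])
face_height_px = pixel_distance(landmarks_px["nose_bridge_point"],
                                landmarks_px["on_chin_point"])
```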
Once the pixel measurements of the pre-specified facial features are obtained, an anthropometric correction factor may be applied to these measurements. It should be appreciated that the correction factor may be applied before or after the scaling factor described below is applied. The anthropometric correction factor corrects errors that may occur during the automated process and that may be observed to occur consistently from user to user. In other words, without the correction factor, the automated process alone may produce results that are consistent from user to user, but those results may lead to a certain proportion of mis-sized user interfaces. Correction factors, which can be empirically derived from population testing, move the results closer to the true measurements, which helps reduce or eliminate sizing inconsistencies. The accuracy of the correction factor may be refined or improved over time as the measurement and sizing data for each user is transferred from the respective computing devices to the server 210 (such data may be further processed in the server 210 to improve the correction factor). The anthropometric correction factor may also vary between forms of the user interface. For example, the correction factor for a particular user seeking an FFM may be different from the correction factor when seeking a nasal mask. Such correction factors may be derived from tracking of mask purchases, such as by monitoring mask returns and determining a size difference between replacement masks and returned masks.
To apply facial feature measurements to user interface sizing, whether corrected by anthropometric correction factors or uncorrected, these measurements may be scaled from pixel units to other values that accurately reflect, for example, the distance between facial features of a user presented for image capture. The reference feature may be used to obtain one or more scaling values. Thus, the processor 310 similarly determines dimensions of reference features, which may include pixel width and/or pixel height (x and y) measurements (e.g., pixel counts) of the entire reference feature. It is also possible to determine the pixel dimensions of a number of squares/points comprising the QR code reference feature, and/or a more detailed measurement of the pixel area occupied by the reference feature and its components. Thus, each square or dot of the QR code reference feature may be measured in units of pixels to determine a scaling factor based on the pixel measurement of each dot, and then averaged across all squares or dots measured, which may improve the accuracy of the scaling factor as compared to a single measurement of the full size of the QR code reference feature. However, it should be appreciated that regardless of what measurements of the reference feature are taken, these measurements may be utilized to scale the pixel measurements of the reference feature to the corresponding known dimensions of the reference feature.
Once the measurement of the reference feature is made by the processor 310, a scaling factor is calculated by the processor 310 as controlled by the application. The pixel measurements of the reference feature are related to the known corresponding dimensions of the reference feature (e.g., reference feature 326 as displayed by display interface 320 for image capture) to obtain a conversion or scaling factor. Such a scaling factor may be in the form of length per pixel or area per squared pixel. In other words, the known dimension may be divided by the corresponding pixel measurement (e.g., count).
The processor 310 then applies a scaling factor to the facial feature measurements (pixel counts) to convert the measurements from pixel units to other units to reflect the distance between the actual facial features of the user appropriate for mask sizing. This may typically include multiplying the scaling factor by the pixel count of the distance of the facial feature associated with mask sizing.
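The scaling step amounts to a unit conversion using the reference feature. The sketch below assumes the QR code's displayed width is known in millimetres and that its width has been measured in pixels from the captured image; all numeric values are hypothetical.

```python
def to_millimetres(pixel_measurement, reference_known_mm, reference_measured_px):
    """Convert a facial pixel distance to millimetres using the reference feature."""
    scale_mm_per_px = reference_known_mm / reference_measured_px
    return pixel_measurement * scale_mm_per_px

# Example: a 40 mm wide QR code measured as 320 px in the captured image,
# a nose width of 280 px, and a face height of 840 px.
nose_width_mm = to_millimetres(280.0, 40.0, 320.0)   # -> 35.0 mm
face_height_mm = to_millimetres(840.0, 40.0, 320.0)  # -> 105.0 mm
```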
These measurement steps and calculation steps of facial features and reference features are repeated for each captured image until each image in the set has scaled and/or corrected facial feature measurements.
The processor 310 may then optionally average the corrected and scaled measurements of the image set to obtain a final measurement of the facial anatomy of the user. Such measurements may reflect the distance between facial features of the user.
In the compare and output stage, the results from the post-capture image processing stage may be output (displayed) directly to the person of interest or compared to a data record to obtain an automatic recommendation of the user interface size.
Once all measurements are determined, the processor 310 may display the results (e.g., average) to the user via the display interface 320. In one embodiment, this may end the automated process. The user/patient may record these measurements for further use by the user.
Alternatively, the final measurement may be forwarded from the computing device 230 to the server 210 via the communication network 220, either automatically or at the command of the user. The server 210 or server-side individual may perform further processing and analysis to determine the appropriate user interface and user interface dimensions.
In further embodiments, the final facial feature measurement reflecting the distance between the actual facial features of the user is compared by the processor 310 to user interface size data, such as in a data record. The data record may be part of an application for automatic facial feature measurement and user interface sizing. The data record may include, for example, a look-up table that may be accessed by the processor 310, which may include user interface dimensions corresponding to a range of facial feature distances/values. The data record may include a plurality of tables, many of which may correspond to a particular form of user interface and/or a particular model of user interface provided by the manufacturer.
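A data record of this kind can be as simple as a look-up table mapping a measured facial dimension to a size label. The sketch below is illustrative only; the size labels and millimetre ranges are assumptions rather than actual product sizing data.

```python
# Hypothetical sizing table: (size label, lower bound mm, upper bound mm).
NASAL_MASK_SIZES = [
    ("small", 0.0, 36.0),
    ("medium", 36.0, 42.0),
    ("large", 42.0, float("inf")),
]

def recommend_size(nose_width_mm, table=NASAL_MASK_SIZES):
    """Return the size label whose range contains the measured nose width."""
    for label, lower, upper in table:
        if lower <= nose_width_mm < upper:
            return label
    return "unknown"

print(recommend_size(38.5))  # -> "medium"
```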
An example process for selecting a user interface identifies key landmarks from facial images captured by the above-described method. In this example, the initial correlation with potential interfaces involves facial landmarks including facial height, nose width, and nose depth, as represented by lines 3010, 3020, and 3030 in figs. 3A-3B. These three facial landmark measurements are collected by the application to assist in selecting the size of a compatible mask, such as through one or more of the look-up tables described above. Alternatively, other data relating to the 3D shape of the face may be used to match the derived shape data to the surface of an available mask as described above. For example, landmarks and any region of the face (i.e., mouth, nose, etc.) may be obtained by fitting a 3D morphable model (3DMM) to a 3D facial scan of the user. This fitting process is also called non-rigid registration or (shrink) wrapping. Once the 3DMM is registered to the 3D scan, any number of methods may be used to determine mask size, as both the points and the surface of the user's face are known.
Fig. 8A is a facial image 800, such as captured by the application described above, which may be used to determine a face height dimension, a nose width dimension, and a nose depth dimension. The image 800 includes a series of landmark points 810, which may be determined from the image 800 via any standard known method. In this example, 100 Standard Cyborg landmark points are identified and displayed on the facial image 800. In this example, the method requires seven landmarks on the facial image to determine the facial height, nose width, and nose depth used for mask sizing for the user. As will be explained, two existing landmarks (e.g., based on the Standard Cyborg landmarks) may be used, and five additional landmarks, whose positions must be identified on the image via a processing method, are needed. New landmarks may be determined based on imaging data and/or existing landmarks. The two existing landmarks to be used are a point on the nose bridge (nasal bridge) and a point on the nose tip. The five new landmarks needed are the suprachin point (top of chin), the left and right alar points, and the left and right alar-facial sulcus points.
Fig. 8B shows the facial image 800 in which the facial height dimension (nose bridge point to chin point) is defined via landmark points 812 and 814. Landmark point 812 is an existing landmark point at the nose bridge point. Landmark point 814 is the on-chin point. The face height dimension is determined from the distance between landmark points 812 and 814.
Fig. 8C shows the facial image 800 with new landmark points 820 and 822 used to determine the nose width dimension. This requires two new landmarks, one on each side of the nose. These are referred to as the right and left alar points and may correspond to the outer ends of the right and left nose wings. The distance between these points provides the nose width dimension. The alar points are distinct from, but close to, the alar-facial sulcus points.
Fig. 8D shows the facial image 800 with landmark points 830, 832, and 834 used to determine the nose depth dimension. A suitable landmark for landmark point 830 may be obtained at the tip of the nose. Landmark points 832 and 834 are determined on the left and right sides of the nose, at the left and right alar-facial sulci. These are near the alar points but located further back on the nose.
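Given coordinates for these seven landmarks, the three key dimensions reduce to simple distance calculations. The sketch below assumes 3D landmark coordinates in millimetres; the coordinate values, the landmark naming, and the midpoint-based definition of nose depth are illustrative assumptions rather than the application's actual computation.

```python
import numpy as np

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

# Hypothetical landmark coordinates (millimetres) from a registered face scan.
lm = {
    "nose_bridge_point": (0.0, 45.0, 10.0),
    "suprachin_point": (0.0, -70.0, 5.0),
    "left_alar_point": (-17.0, 10.0, 12.0),
    "right_alar_point": (18.0, 10.0, 12.0),
    "nose_tip": (0.0, 20.0, 32.0),
    "left_alar_facial_sulcus": (-20.0, 8.0, 2.0),
    "right_alar_facial_sulcus": (20.0, 8.0, 2.0),
}

face_height = dist(lm["nose_bridge_point"], lm["suprachin_point"])
nose_width = dist(lm["left_alar_point"], lm["right_alar_point"])
# One plausible definition of nose depth: distance from the nose tip to the
# midpoint between the two alar-facial sulcus points (an assumption here).
sulcus_mid = np.mean([lm["left_alar_facial_sulcus"], lm["right_alar_facial_sulcus"]], axis=0)
nose_depth = dist(lm["nose_tip"], sulcus_mid)
```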
As explained above, operational data for each RPT device may be collected for a large population of users. This may include usage data based on when each user operates the RPT device and the duration of operation. Thus, compliance data, such as how long and how often the user uses the RPT device within a predetermined period of time, the treatment pressure used, and/or the amount and manner of use of the RPT device, may be determined from the collected operational data to establish whether usage is consistent with the user's respiratory therapy prescription. For example, one compliance criterion may be that the user acceptably uses the RPT device over a period of 90 days. Leak data may be determined by analyzing operational data such as flow rate data or pressure data. Mask switching data may be derived by analyzing acoustic signals to determine whether a user has switched masks. During normal operation, the RPT device may determine the mask type from an internal or external audio sensor (such as audio sensor 4278 in fig. 4B) using cepstrum analysis as explained above. Alternatively, for older masks, the operational data may be used to determine the type of mask by correlating the collected acoustic data with acoustic signatures of known masks.
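As a rough illustration of acoustic mask identification, the sketch below computes a real cepstrum from an audio segment and picks the nearest stored signature by Euclidean distance. This is a generic cepstrum computation and a simplistic matching rule, offered only as an assumption of how such a comparison might look, not as the device's actual algorithm.

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.rfft(signal)
    log_magnitude = np.log(np.abs(spectrum) + 1e-12)  # small offset avoids log(0)
    return np.fft.irfft(log_magnitude)

def closest_mask(audio_segment, signatures):
    """Return the name of the stored cepstral signature nearest to the segment's cepstrum."""
    ceps = real_cepstrum(np.asarray(audio_segment, dtype=float))
    n = min(len(ceps), *(len(s) for s in signatures.values()))
    return min(signatures,
               key=lambda name: np.linalg.norm(ceps[:n] - np.asarray(signatures[name])[:n]))
```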
In this example, user input of other data may be collected via a user application executing on computing device 230. The user application may be part of the user application 360 or a separate application that directs the user to obtain facial landmark features. This may also include subjective data obtained via questionnaires designed to gather data about comfort preferences and whether the user is a mouth or nasal breather. For example, user input may be collected by a user responding, via the user application, to subjective questions related to the comfort of the user interface. Other questions may relate to relevant user behavior, such as sleep characteristics. For example, subjective questions may include: Do you wake up with a dry mouth? Are you a mouth breather? What are your comfort preferences? Such sleep information may include sleep duration, how the user sleeps, and external influences such as temperature, stress factors, and the like. Subjective data may be as simple as a numerical comfort rating, or may be more detailed. Such subjective data may also be collected from a Graphical User Interface (GUI). For example, input data regarding leakage from the user interface experienced by the user during treatment may be collected by the user selecting a portion of the user interface that is displayed graphically on the GUI. The collected user input data may be added to the user database 260 in fig. 7. Subjective input data from the user may be used as input to select example mask types and sizes. Other subjective data relating to the user's psychological comfort may also be collected. For example, questions may be asked, and inputs collected, regarding whether the user feels claustrophobic using a particular mask or how psychologically comfortable the user feels wearing the mask beside their bed partner. If the answers to these questions tend toward the negative end, the system may recommend a less obtrusive interface from interface database 270, such as a mask smaller than the user's existing mask, which may be a nose-pad mask (a mask that seals to the user's face at the lower perimeter of the user's nose and leaves the user's mouth and nose bridge uncovered) or a nose and mouth mask that seals around the user's mouth and also at the lower perimeter of the user's nose but does not engage the nose bridge (which may be referred to as an ultra-compact full face mask). Other questions may address the preferred sleep position and whether the user tends to move around a lot during the night, in which case the user may prefer the 'freedom' of a tube-up mask (e.g., a mask with conduit headgear), which may allow more movement. Alternatively, if the user tends to remain supine or lying on their side, an under-tube mask (e.g., a mask of the conventional type having a tube extending forward and/or downward from the mask at a location near the user's nose or mouth) would be acceptable.
Other data sources may collect data that may be relevant to mask selection beyond the use of the RPT device. This may include user demographics such as age, gender, or location, and AHI severity, which indicates the level of sleep apnea experienced by the user. Another example may be determining soft tissue thickness based on a Computed Tomography (CT) scan of the face. The other data may be a prescribed pressure setting for a new user of the RPT device. If the user is prescribed a lower pressure, such as 10 cmH2O, this may enable the user to wear a lighter and/or smaller mask suitable for lower pressures, resulting in greater comfort, rather than a full face mask with a very firm seal that is better suited to 20 cmH2O but may be less comfortable. However, if the user has a high pressure requirement, such as 20 cmH2O, a full face mask with a very firm seal may be recommended to the user.
After selecting the mask, the system continues to collect operational data from the RPT device 250. The collected data is added to database 260 and database 270. Feedback from new and existing users may be used to improve the recommendations, providing better mask options for subsequent users. For example, if the operational data indicates that the recommended mask has a high level of leakage, another mask type may be recommended to the user. By means of the feedback loop, the selection algorithm can be modified to learn which specific aspects of facial geometry are best suited to a specific mask. Such correlations may then be used, together with facial geometry, to improve mask recommendations for new users. The collected data and associated mask type data may thus provide additional updates to the mask selection criteria. Thus, the system may provide additional insight for improving mask selection for the user.
In addition to mask selection, the system may allow analysis of mask selection in relation to respiratory therapy effectiveness and user compliance. The additional data allows the respiratory therapy to be optimized through the feedback loop.
Machine learning may be applied to optimize the mask selection process and provide correlations between mask type and increased user compliance with respiratory therapy. Such machine learning may be performed by server 210. The mask selection algorithm may be trained with a training data set whose outputs are favorable operational results and whose inputs include user demographics, mask size and type, and subjective data collected from users. Machine learning may be used to find correlations between desired mask sizes and predictive inputs such as face dimensions, user demographics, operational data from RPT devices, and environmental conditions. Machine learning may employ techniques such as neural networks, clustering, or conventional regression techniques. Test data may be used to evaluate different types of machine learning algorithms and determine which has the best accuracy with respect to the predicted correlations.
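A hedged sketch of what such training could look like follows, assuming a scikit-learn style workflow over a tabular feature set; the feature columns, label encoding, model choice, and placeholder data are illustrative assumptions rather than the actual selection algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data. Each row: [face_height_mm, nose_width_mm, nose_depth_mm,
# age, AHI, prescribed_pressure_cmH2O]; each label encodes a mask size/type
# that produced favorable operational results for that user.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = rng.integers(0, 3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```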
The model for selecting the best interface may be continuously updated with new input data from the system in fig. 7. Thus, as the analysis platform is used more, the model may become more accurate.
As explained above, a portion of the system in fig. 6 involves recommending an interface to a user using an RPT device. A second function of the system is an optimized verification process for the interface selection. Once the user is provided with the recommended mask and has used it for a period of time, such as two days, two weeks, or another period, the system may monitor the RPT device usage and collect other data. Based on this collected data, if it is determined from adverse data indicating leakage, poor or declining compliance, or unsatisfactory feedback that the mask has not met the desired criteria, the system may re-evaluate the mask selection and update the database 260 and the machine learning algorithm for the user using the results. The system may then recommend a new mask that fits the newly collected data. For example, if a relatively high leak rate is determined from data based on acoustic signatures or other sensors, the user's jaw may be dropping during REM sleep, which may indicate that a different type of interface is required, such as a full face mask rather than the initially selected nasal-only or smaller full face mask.
The system may also adjust the recommendation in response to satisfactory follow-up data. For example, if the operational data indicates that the selected full face mask is not leaking, the routine may recommend trying a smaller mask for a better experience. Follow-up recommendations may be provided using trade-offs among style, materials, and variations, and their correlation with user preferences, to maximize user compliance with the treatment. An individual user's trade-offs may be determined via an input tree displayed to the user by the application. For example, if the user selects skin irritation from a menu of potential problems, a graphic showing possible irritation locations on a facial image may be displayed to gather data from the user, such as the specific location of the irritation. This specific data may provide a better correlation with the optimal mask for a particular user.
Fig. 9 is a flowchart of a process of user interface selection that may be performed by the interface selection engine executed by the server 210 in fig. 7. The process collects an image of the user's face (facial image), which can then be stored on a storage device. The system 200 may include a facial contour engine operable to determine facial features of a patient based on the facial image. In some examples, the facial contour engine may perform any step involving facial analysis. In this example, the user's face is scanned via a depth camera on a mobile device, such as the smart phone 234 or tablet 236 in fig. 7, to produce a 3D image of the face (900). Alternatively, the 3D face scan data may be obtained from a storage device holding a 2D or 3D facial image of the user that has already been scanned. Landmark points are determined in a facial mesh from the 3D facial scan, and key dimensions and collections of points related to user interface suitability, such as facial height, nose depth, and nose width, are measured from the image (902). The measured key dimensions are then associated with potential interfaces by size and type (904). For example, the association may include an adaptive or non-rigid registration to the 3DMM. The irregular triangular surface mesh from the 3D scan is fitted to a 3DMM surface that contains information about the face, including the locations and areas of key facial regions.
Operational data is collected from a database associated with respiratory therapy devices and their corresponding user interfaces (906). The operational data is analyzed in relation to desired effects, such as reduction of leakage and optimal compliance with the respiratory therapy involving use of the respiratory therapy devices. The determined facial dimensions and points, together with the operational data relating to the desired effects, are then correlated with available interfaces of different sizes and types (908). Based on the correlation between the operational data and the facial dimensions and points, a model is employed to select an appropriate interface to obtain a desired result, such as minimized leakage, optimal fit, and optimal compliance with use of the respiratory therapy device (910). Other factors may also be considered, including user input such as satisfaction with the selected interface. The selected interface is then stored and sent to the user (912). As explained above, once the user uses the RPT device with the selected interface, the scanned face dimension data, as well as the data collected from the RPT device, is added to database 260 and database 270 to further improve the analysis of the selection process.
Fig. 10 is a follow-up routine that may run for a particular period of time, or periods of time, following the initial selection of the interface detailed in fig. 9. For example, the follow-up routine may be run during the first two days of use of the RPT device with the selected interface. As will be explained below, the routine in fig. 10 may provide a recommendation to continue using the initially selected interface or to switch to another type of interface. The collection of additional objective and subjective data is recorded in the interface database 270. Thus, the routine in fig. 10 records in the interface database 270 the ongoing usage, switching, and feedback data, as well as a marker indicating whether the originally selected mask type was "successful" or "failed". This data continually updates the example machine-learning-driven recommendation engine executed by the server 210.
The routine first collects operational data over a set period of time, such as two days of use (1010). For example, the system in fig. 2 may collect consistent objective data from two days of use, such as time of use or leakage data from RPT device 250. Of course, other suitable periods of time greater or less than two days may be used as the period of time for collecting operational data and other related data from the RPT device.
Further, subjective feedback data such as seal quality/performance, comfort, general likes/dislikes, etc. may be collected from the interface of the user application executed by computing device 230 (1012). The subjective data may be collected by subjective questions asked via a two-way communication device or a client application executing on the computing device 230. Thus, the subjective data may include answers related to discomfort or leakage and mental safety issues (such as whether the user is psychologically comfortable with the mask). The subjective data may also include information provided by the patient regarding fatigue experienced during the awake time.
The routine then correlates the objective data and subjective data with the selected mask size/type and the facial scan data of the user (1014). In the case of good results, the routine determines that the operational data exhibits high compliance, low leakage, and good subjective result data from the user (1016). The routine then updates the database and learning algorithm with the correlated data as a successful match (1018). The routine also analyzes the correlated data and determines whether the results could be improved by a more suitable mask (1020). If so, the routine suggests trying a more suitable mask according to the routine of fig. 9.
In the event of an undesired result from the association (1014), the routine determines that the undesired result is from at least one of low compliance, high leakage, or unsatisfactory subjective result data (1022). The routine then updates the database 270 and learning algorithm as an unsuccessful match between the user data and the selected mask (1024). The routine then suggests attempting a more appropriate mask (1026) as per the routine of fig. 9.
The user's compliance with the therapy may be determined to be high if the user uses the user interface as prescribed, based on a number of factors that may be weighted equally or differently. Using the user interface as often as it is prescribed to be used counts toward high compliance with treatment. Using the user interface for a sufficiently long period, such as throughout the night rather than discontinuing treatment partway through the night, also counts toward good compliance. Consistent adherence to the prescription (e.g., rarely missing nights) likewise counts toward good compliance. Furthermore, if the user is using the prescribed therapeutic pressure rather than a lower pressure, this may support an assessment of good compliance.
In some instances, users may be required to replace their user interfaces, or components thereof, according to a predetermined replacement schedule. For example, users may be required to replace the cushion module of their user interface at predetermined intervals based on the prescribed pressure and intensity of use, as wear on seals and vents from use and cleaning may adversely affect the performance, and thus the effectiveness, of the treatment. In some examples of the present technology, timely replacement of components may be factored into the assessment of good compliance with treatment. If prescription data is not available, user compliance may be determined based on use of the user interface and corresponding respiratory therapy device for at least a predetermined number of nights per week (e.g., at least 4, 5, 6, or 7 nights), for at least a predetermined number of hours (e.g., 4, 5, 6, or 7 hours), at a consistently high treatment pressure (e.g., at least 4, 6, 8, or 10 cmH2O), and/or on timely replacement of the user interface or its components.
As mentioned above, there are a number of ways in which a user may comply with treatment. In some examples of the present technology, the degree of compliance is determined to be "non-compliant" or "compliant" based on whether one or more criteria are met. In some examples, compliance may be one of "low," "medium," or "high." In further examples, compliance may be expressed as a score out of 100%. The threshold for good compliance need not be 100%, and may alternatively be at least 75% or 80% in some examples.
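One way to picture such a percentage score is a weighted combination of the compliance factors discussed above. The sketch below assumes equal weights and example thresholds; the factor set, weights, and cut-offs are illustrative assumptions only.

```python
def compliance_score(nights_used_per_week, avg_hours_per_night,
                     used_prescribed_pressure, replaced_components_on_time):
    """Return a 0-100 compliance score from four equally weighted factors."""
    score = 0.0
    score += 25.0 * min(nights_used_per_week / 7.0, 1.0)   # usage frequency
    score += 25.0 * min(avg_hours_per_night / 7.0, 1.0)    # usage duration
    score += 25.0 if used_prescribed_pressure else 0.0     # prescribed pressure reached
    score += 25.0 if replaced_components_on_time else 0.0  # timely replacement
    return score

# Example: 6 nights/week, 6.5 h/night, prescribed pressure used, parts replaced on time.
print(compliance_score(6, 6.5, True, True))  # -> about 95, above a 75-80% "good" threshold
```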
The above-described systems and mobile devices may also be used to determine an appropriate fit once a physical user interface, such as a mask, is made available to a user.
Fig. 11 is a perspective view of a user, such as user 10, controlling a user device, such as smart phone 234 in fig. 7, to collect sensor data associated with a current fit of a user interface, such as mask 100, in accordance with some embodiments of the present disclosure. For example, user 10 may be a new user of a respiratory therapy system such as in fig. 1. The user 10 may have just donned the user interface 100 and now orient one or more sensors of the user device 234 toward the user's face. Mask 100 may be a mask selected based on the above-described operational data and facial data. Although depicted in fig. 11 as a smart phone 234, any suitable user device may be used.
To obtain a measurement, the user 10 may press an appropriate button or otherwise interact with the smart phone 234 to begin the measurement process. Once activated, the smart phone 234 may optionally provide the user 10 with stimuli (e.g., prompts and/or instructions) indicating different actions that the user 10 should take, or stop taking, in order to achieve the desired measurements. For example, the cues and/or instructions may include text instructions such as "hold the phone at face height and slowly move it in a figure-eight pattern"; an audible cue 382, such as a particular chime played when the measurement process is complete; and/or haptic feedback 380, such as an increasing vibration pattern that signals the user 10 to slow down movement of the smart phone 234. The use of non-visual cues (e.g., audible cues, tactile feedback, etc.) may be particularly useful when the smartphone 234 must be held in an orientation that prevents the user 10 from viewing the display of the smartphone 234. In some cases, the cues and/or instructions may be presented as an overlay on the image of the user 10, such as in the form of an augmented reality overlay. For example, icons, highlighting, text, and other indicia may be superimposed on the image of the user 10 (e.g., live or non-live) to provide instructions on how to perform the measurement process.
The smart phone 234 may include one or more sensors. In some cases, the user 10 may be instructed to hold the smartphone 234 in a particular orientation to ensure that the desired sensor or sensors are acquiring the desired data. For example, when an infrared sensor on the front of the smartphone (e.g., an infrared sensor used for unlocking the phone) is being used, the user 10 may be instructed to hold the phone with its front toward the user so that the infrared sensor faces the user's face and acquires data of the user's face. In another example, the smart phone 234 may include a LiDAR sensor on its back side, in which case the user 10 may be instructed to hold the phone with its front facing away from the user 10 so that the LiDAR sensor faces the user's face and acquires data of the user's face. In some cases, user 10 may be instructed to make multiple measurements with the smartphone 234 positioned in different orientations.
The smart phone 234 may provide feedback regarding the current suitability of the user interface 100, either during the measurement process (e.g., real-time feedback), after the measurement process ends (e.g., non-real-time feedback), or both. In one example, when the user 10 holds the smart phone 234 to obtain a measurement, the smart phone 234 may provide feedback indicating that the user 10 should make an adjustment, such as tightening a particular strap, to improve the current suitability of the user interface 100. In such an instance, the user 10 may make the adjustment while continuing to hold the smartphone 234 so that it can continue to acquire measurements of the user's face and/or the user interface 100. In such cases, the smart phone 234 may provide dynamic feedback that shows how the current suitability improves or otherwise changes.
Fig. 12 is a user view of a smart phone 234 for identifying thermal characteristics associated with the current suitability of the user interface 100, according to some embodiments of the present disclosure. The smart phone 234 and the user interface 100 may be any suitable user device and user interface. The view depicted in fig. 12 may be made during the measurement process (e.g., live view, such as real-time view) or may be made after the measurement is made (e.g., view after the measurement process is completed).
The user 10 may hold the user device (e.g., the smart phone 234) so that they can see the display device 472 (e.g., the display screen) of the smart phone 234. The display device 472 may depict a Graphical User Interface (GUI) that may provide feedback associated with the current suitability of the user interface 100. The GUI may include an image of the user 10 wearing the user interface 100. The image may be acquired by an infrared sensor and may be a heat map of the user 10 wearing the user interface 100. The heat map may depict the local temperature at different points on the user's face and user interface 100.
As shown in the enlarged view, the region 482 of the user's face near the seal between the user interface 100 and the user's face is shown as being substantially cooler than the surrounding regions of the user's face. The cooler region 482 may indicate an unintentional air leak, because air leaking from the seal of the user interface 100 may cool the user's skin by an amount perceivable by the IR sensor.
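A minimal sketch of how such cool regions might be flagged from a thermal image is shown below; the temperature threshold, the seal-region mask, and the comparison against the facial median are assumptions chosen for illustration, not the application's actual analysis.

```python
import numpy as np

def leak_candidates(thermal_image, seal_mask, delta_c=2.0):
    """Flag seal-adjacent pixels at least `delta_c` degrees cooler than the rest of the face.

    thermal_image: 2D array of temperatures (degrees C).
    seal_mask: boolean 2D array marking pixels near the user interface seal.
    """
    face_median = np.median(thermal_image[~seal_mask])
    return seal_mask & (thermal_image < face_median - delta_c)
```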
In some cases, the GUI may provide feedback in the form of a score 484 associated with the current suitability of the user interface 100. For example, the score 484 may be depicted as a filler bar that may be filled between 0% and 100%. As depicted in fig. 12, the score 484 for the current suitability of the user interface 100 is currently about 65%.
In some cases, the GUI may provide feedback in the form of text instructions 486 to improve the current suitability. The text instructions 486 may provide instructions for the user 10 to take action to improve the current suitability, such as tightening the upper left strap as depicted in fig. 12. The instruction to tighten the upper left strap is selected because doing so should reduce or eliminate the air leakage detected via the thermal difference at region 482.
Fig. 13 is a user view of a user device (such as smart phone 234) for identifying profile-based characteristics associated with a current suitability of a user interface, according to some embodiments of the present disclosure. The user device may be any suitable user device, such as those shown in fig. 7.
The display 572 (e.g., display screen) of the smart phone 234 may display a GUI including a live image of the user 10. To create a live image of the user 10, the camera 550 of the smartphone 234 may be pointed at the user 10.
In the view depicted in fig. 13, user 10 has just removed the user interface, leaving an indentation 590 around a portion of the user's face. The indentation 590 may be detected by visual sensing (e.g., sensing the color of the indentation 590 or a color difference between the indentation 590 and the adjacent surface of the user's face), ranging sensing (e.g., sensing a local depth difference between the indentation 590 and the adjacent surface of the user's face), and the like. As depicted, the section 592 of the indentation 590 is less distinct than the other sections. Section 592 can be identified as a region in which the user interface was not pressed firmly enough against the facial skin to establish an effective seal.
In some cases, the GUI may provide feedback in the form of a score 584 associated with the current suitability of the user interface. For example, the extent of section 592 may indicate that the current suitability of the user interface (e.g., the suitability of the user interface that was removed prior to collecting the sensor data that led to the identification of section 592) is not optimal. A score 584 may be generated and depicted, such as a numerical score of "65%". In some cases, an alarm may additionally be provided, such as alarm 585 indicating that a poor fit was detected. In some cases, the GUI may provide feedback in the form of text instructions 586 to improve the current suitability. The text instructions 586 may provide instructions for the user 10 to take action to improve the current fit, such as by using a different type of user interface (e.g., a nasal pillow mask rather than a full face user interface). The instruction to use a nasal pillow mask rather than the full face user interface may have been selected because of the detected poor fit and/or the nature of one or more previous attempts to improve the fit of the current user interface. For example, if a user has attempted to improve the suitability of the current user interface more than a threshold number of times, the system may determine that it may be prudent to attempt to establish a good fit using a different type of user interface. The alternative user interface may be selected using the routine described above that combines facial data and operational data from the RPT device.
While the indentation 590 in fig. 13 is detected as a contour on the user's face, in some cases a color change in the user's face (e.g., blanching, where some blood is pushed away from the tissue near the skin surface on which the seal of the user interface rests) may instead be detected to identify where and how the seal of the user interface engages the user's face.
Fig. 14 is a flow chart depicting a process 1400 for evaluating the suitability of a user interface across a user interface transition event, in accordance with some embodiments of the present disclosure. The user interface may be any suitable user interface, such as user interface 100 of fig. 1. The process 1400 may be performed using a control system, such as a processor of the smart phone 234 or a processor of a server such as the server 210 of fig. 7. Sensor data collected during process 1400 may be collected from one or more sensors (e.g., one or more sensors of RPT device 40 in figs. 4A-4C), one, more than one, or all of which may be incorporated into and/or coupled to a user device (e.g., smart phone 234 of fig. 7). Examples of user interface transition events include donning the user interface, removing the user interface, adjusting the user interface (e.g., adjusting an adjustable portion of the user interface, such as an adjustable strap, moving the user interface to a different position or orientation on the user's face, etc.), and adjusting a respiratory therapy system, such as RPT device 40 coupled to the user interface (e.g., turning the respiratory therapy system on or off, adjusting a parameter of the respiratory therapy system such as humidity or heat, etc.).
At block 1402, first sensor data may be collected before the user interface is donned. First sensor data of the user's face, and optionally of the user interface (prior to being donned by the user), may be collected.
In some cases, the first sensor data may be used at block 1412 to identify one or more characteristics associated with potential suitability of the user interface. For example, based on the characteristics identified based on the first sensor data alone, the system may determine that the user will be best suited for using the nasal pillow mask rather than the full face user interface (e.g., if a particularly large profile is detected around the location where the full face user interface seal is typically located, or if a particularly thick beard is detected around the location where the full face user interface seal is typically located).
However, in some cases, the first sensor data is compared to other sensor data at block 1412. For example, in some cases, the first sensor data may establish a baseline for future comparison, such as a baseline profile of the face, a baseline heat map of the face, a baseline detection of one or more features of the face (e.g., detection of eyes, mouth, nose, ears, etc.), and so forth. The first sensor data may still be considered sensor data associated with the current suitability of the user interface: for example, if the first sensor data is collected before the user interface is donned in order to establish a baseline, further sensor data can be compared to that baseline to assess the suitability of the user interface when worn. Such baseline data may, for example, be stored from the face scan performed as explained above with reference to figs. 8A to 8D.
At block 1404, a user interface may be donned. Wearing the user interface may involve the user placing the user interface on the user's face as if they were using the user interface (e.g., for respiratory therapy). In some cases, wearing the user interface at block 1404 may include adjusting the user interface prior to wearing the user interface, although this need not always be the case. In the case where the user interface is adjusted prior to wearing the user interface, further analysis of the sensor data may be compared to historical sensor data, historical characteristic data, historical suitability scores, and the like. For example, wearing the user interface at block 1404 may occur immediately following a prior evaluation of the suitability of the user interface that identifies an action to be taken to improve the suitability. In such an instance, the user may take this action as part of block 1404 and then compare the resulting suitability of the user interface with the previous suitability of the user interface to determine if the suitability has improved.
In some cases, wearing the user interface at block 1404 may include wearing the user interface for a threshold duration. For example, if an assessment of user interface suitability is made by comparing sensor data before and after wearing the user interface, process 1400 may proceed from block 1404 to block 1408. In such cases, to ensure that the user interface is worn for a sufficient duration to affect the user's face (e.g., for a sufficient duration to establish an indentation or color change in the user's face), wearing the user interface at block 1404 may include wearing the user interface for a threshold amount of time (e.g., at least 10 seconds, 20 seconds, 30 seconds, 40 seconds, 50 seconds, 1 minute, 1.5 minutes, 2 minutes, 5 minutes, 10 minutes, or 15 minutes).
At block 1406, the second sensor data may be collected while the user interface is worn (e.g., worn on the face of the user). Second sensor data of the user face and a user interface worn by the user may be collected. In some cases, the second sensor data may be collected while one or more adjustments are made to the user interface and/or respiratory therapy system coupled to the user interface. In such cases, at block 1412, the second sensor data taken over time may be used to detect a change in the identified characteristic, which may be used to evaluate the current suitability of the user interface. In one example, when the heater of the respiratory therapy system in fig. 1 is engaged, the thermal data collected at block 1406 may identify a change in temperature of a region of the user's face that is external to and adjacent to the user interface, indicating that there may be an inadvertent air leak at the seal of the user interface near the region of the user's face. In such an instance, additional sensor data, such as audio data collected by audio sensor 4278 in fig. 4C, may be used to confirm the presence of a leak (e.g., via detecting a characteristic acoustic signal associated with an unintentional leak, such as an audible or inaudible signal).
At block 1408, the user interface may be removed. The removal of the user interface may be performed while one or more sensors are still collecting sensor data, although this need not always be the case.
At block 1410, third sensor data may be collected after the user has removed the user interface. The third sensor data may be similar to the first sensor data from block 1402 except that the sensor data is affected by changes in the user's face due to the user interface having been worn.
Collecting sensor data at blocks 1402, 1406, and 1410 may include collecting the same type of sensor data (including sensor data from the same or different sensors) and/or different types of sensor data. For example, the first sensor data and the third sensor data, collected at block 1402 and block 1410 respectively, may each include ranging sensor data and image data in the visual and IR spectra, while the second sensor data collected at block 1406 may include audio data and image data in the visual and IR spectra.
At block 1412, one or more characteristics are identified from the sensor data (e.g., the first, second, and/or third sensor data). Identifying characteristics at block 1412 may include analyzing a given set of sensor data (e.g., the second sensor data analyzed on its own) and/or comparing sets of sensor data (e.g., the first sensor data compared to the third sensor data). Identifying characteristics at block 1412 may include identifying characteristics that indicate the quality of the fit of the user interface. For example, some characteristics may indicate a poor fit of the user interface, while other characteristics may indicate a good fit of the user interface. In one example, a sound characteristic indicative of unintentional air leakage may indicate a poor fit, while a heat map of the user's face while wearing the user interface that shows consistent and/or expected temperatures on the facial surface adjacent to the user interface may indicate a good fit.
The characteristics may be output in any suitable manner, including as a value (e.g., a numerical value or a boolean value), a set of values, and/or a signal (e.g., a set of values that vary over time). In one example, a characteristic associated with a thermal mapping of the user's face while wearing the user interface may be output as i) a boolean value indicating whether a local temperature change above a threshold (e.g., a change in temperature over time or over distance across the face surface) is detected; ii) a set of temperature values taken at different locations on the user's face around the circumference of the user interface seal; iii) thermal images or videos of the user's face; or iv) any combination of i) to iii), among others.
In some cases, at block 1414, a score associated with the suitability of the user interface may be generated. Generating the suitability score may include analyzing the characteristics identified at block 1412 and generating a score based on those characteristics. In some cases, the score may be based, at least in part, on a calculation using one or more characteristics of block 1412 as input and/or sensor data from block 1402, block 1406, and/or block 1410.
In some cases, the score may be calculated using a machine learning algorithm using one or more characteristics of block 1412 and/or sensor data from block 1402, block 1406, and/or block 1410 as input. Such machine learning algorithms may be trained using characteristics and/or sensor data associated with a training set of suitability assessments of a user wearing a user interface. The suitability assessment in the training set may be based on subjective assessment of the user, objective values collected using other equipment (e.g., laboratory sensors and equipment, such as user interfaces equipped with dedicated sensors and/or dedicated sensing equipment), and so forth.
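By way of illustration only, the following Python sketch shows one way such a score-predicting model might be trained. The feature names, the 0-100 label scale, and the choice of a random forest regressor are assumptions for the example and are not taken from the present disclosure.

```python
# Illustrative sketch: training a regression model that maps identified
# characteristics (block 1412) to a suitability score (block 1414).
# Feature names and the 0-100 label scale are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training set: each row is one fit assessment of a user
# wearing a user interface; columns are characteristic values.
X_train = np.array([
    # [leak_audio_level, max_local_temp_delta_C, seal_indentation_mm]
    [0.80, 3.5, 0.2],   # strong leak signature, large cold spot -> poor fit
    [0.05, 0.4, 1.1],   # quiet seal, uniform temperature        -> good fit
    [0.30, 1.8, 0.6],
])
y_train = np.array([22.0, 91.0, 63.0])  # subjective/objective fit scores

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At run time, characteristics identified at block 1412 become the model input.
current_characteristics = np.array([[0.12, 0.9, 0.9]])
print("predicted suitability score:", model.predict(current_characteristics)[0])
```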
At block 1416, an output may be generated. The output or output feedback may include any suitable output for relaying information about the current suitability of the user interface. The output may be based on one or more characteristics of block 1412; sensor data from block 1402, block 1406, and/or block 1410; and/or suitability scores from block 1414. In some cases, the output may be the suitability score generated at block 1414. In some cases, the output may be raw or processed sensor data (e.g., a portion of a thermal image or audio recording presented to the user) from block 1402, block 1406, and/or block 1410.
In some cases, the output may be instructions or suggestions for actions selected to improve the current suitability of the user interface. For example, if the characteristic identified at block 1412 indicates that there is an unintentional air leak at a location relative to the user interface, the output may include instructions to adjust the user interface (e.g., tighten the straps) to reduce the unintentional air leak, thereby improving the fit of the user interface. In some cases, the improvement in suitability of the user interface may be based on the current suitability score generated at block 1414 and the past or future suitability score generated at the past or future instance of block 1414.
In some cases, a machine learning algorithm may use one or more characteristics of block 1412; sensor data from block 1402, block 1406, and/or block 1410; and/or the suitability score from block 1414 as inputs to select an output intended to improve the current suitability. Such machine learning algorithms may be trained using characteristics, sensor data, and/or suitability scores associated with a training set of suitability assessments of users wearing a particular user interface. The training data may include information regarding adjustments made, such as adjustments made to the user interface, adjustments made to the user's face (e.g., shaving a beard), and/or selections of different user interfaces.
In the example of an assessment of user interface suitability by comparing sensor data prior to wearing the user interface with sensor data while wearing the user interface, process 1400 includes only blocks 1402, 1404, 1406, 1412, and 1414. In another example, where the assessment of the current suitability is made while the user is wearing the user interface, process 1400 includes only blocks 1406, 1412, and 1414. In another example, where the current suitability of the user interface is evaluated by comparing characteristics of the user's face before and after wearing the user interface, process 1400 includes only blocks 1402, 1404, 1408, 1410, 1412, and 1414. Other arrangements may be used. Additionally, in some cases, aspects presented as separate blocks may be combined into one or more other blocks. For example, in some cases, generating the suitability score at block 1414 occurs as part of generating the output at block 1416. In another example, in some cases, process 1400 may proceed from block 1412 to block 1416 without generating an suitability score at block 1414.
Fig. 15 is a flow chart depicting a process 1500 for evaluating suitability of a user interface according to some embodiments of the present disclosure. The user interface may be any suitable user interface, such as user interface 100 in fig. 1. The process 1500 may be performed using a control system, such as a processor of the smart phone 234 or a processor of a server (such as the server 210 in fig. 7).
At block 1502, sensor data associated with a current suitability of a user interface is received, such as from a user device. The sensor data received at block 1502 may have been collected from one or more sensors (e.g., one or more sensors/transducers 4270 of RPT device 40 in fig. 4A-4C). One, more than one, or all of the one or more sensors/transducers 4270 may be incorporated into or coupled to a user device (e.g., the smart phone 234 of fig. 7). The sensor data may be collected at any suitable resolution and at the same or different frequencies. In some cases, the frequency of detection may be based on the sensor used. For example, image data in the visible spectrum may be acquired at a frame rate of 25 to 60 frames per second (fps) or more, while thermal data at an IR frequency may be acquired at a frame rate of 10 fps or less.
In some cases, receiving sensor data at block 1502 may include calibrating, adjusting, or stabilizing the sensor data at block 1504. Calibrating, adjusting, or stabilizing the sensor data may include adjusting the sensor data to account for undesirable artifacts in the data. Calibrating, adjusting, or stabilizing the sensor data may include using a portion of the sensor data to make the adjustment. In one example, the image data may be stabilized at block 1502, such as by applying image stabilization software to the image data, or by applying inertial data acquired from an Inertial Measurement Unit (IMU) or sub-sensor of the smartphone. In some cases, stabilizing the sensor data may include receiving image stability information associated with a first portion of the sensor data (e.g., image stability information associated with image data from a visible spectrum camera) and applying the stability information to a second portion of the sensor data (e.g., sensor data from another sensor, such as thermal data from an IR sensor). In one example, image stability information associated with the stability of the visible spectrum camera data (e.g., as obtained via image stabilization software and/or inertial data) may be applied to the thermal data from the IR sensor, thus allowing the thermal data to be stabilized beyond what may occur when the thermal data is used alone.
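By way of illustration only, the following sketch shows how a per-frame shift estimated from the visible-spectrum stream might be re-applied to lower-resolution thermal frames. The use of OpenCV phase correlation and a fixed resolution scale factor are assumptions for the example, not requirements of the disclosure.

```python
# Illustrative sketch: stabilizing IR thermal frames by re-using the per-frame
# shift estimated from the visible-spectrum camera stream. Assumes the two
# streams are roughly aligned and time-synchronized.
import cv2
import numpy as np

def estimate_shift(prev_gray: np.ndarray, curr_gray: np.ndarray) -> tuple:
    """Estimate the (dx, dy) shift between consecutive visible-spectrum frames."""
    (dx, dy), _response = cv2.phaseCorrelate(np.float32(prev_gray),
                                             np.float32(curr_gray))
    return dx, dy

def stabilize_thermal(thermal_frame: np.ndarray, dx: float, dy: float,
                      scale: float) -> np.ndarray:
    """Apply the visible-camera shift, scaled to the IR resolution, to a thermal frame."""
    m = np.float32([[1, 0, -dx * scale], [0, 1, -dy * scale]])
    h, w = thermal_frame.shape[:2]
    return cv2.warpAffine(thermal_frame, m, (w, h))
```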
Calibration may occur by acquiring sensor data of a known or nominal object or event. For example, calibration of exposure and/or color temperature in image data may be accomplished by first collecting reference image data from a calibration surface (e.g., a surface of known color, such as a white balance card), and then using the reference image data to adjust settings of a sensor that acquired the image data and/or to adjust the image data itself.
The adjustment may occur by using a portion of the sensor data to inform adjustments made to other sensor data. For example, a thermal imager (e.g., an IR sensor) may collect thermal data to generate a heat map of the user's face. The heat map may identify local temperatures at various locations on the user's face, as well as on some surfaces in the surrounding environment, such as the surface behind the user. In such an example, the thermal data may indicate that the surface behind the user is 19 ℃. However, ambient temperature data collected from a separate temperature sensor may indicate that the surface behind the user and/or the ambient room temperature is closer to 21 ℃. Thus, the system may automatically adjust the thermal data acquired from the thermal imager such that the temperature sensed for the surface behind the user equals the temperature measured by the separate temperature sensor (e.g., 21 ℃). This adjustment may then also be applied to the local temperature values at various locations on the user's face.
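By way of illustration only, a minimal sketch of this adjustment, assuming the background region of the heat map has already been segmented; the uniform-offset approach and all temperature values are assumptions for the example.

```python
# Illustrative sketch: shifting a thermal imager's heat map so that a known
# background region matches a separate reference temperature sensor.
import numpy as np

def calibrate_heat_map(heat_map_c: np.ndarray,
                       background_mask: np.ndarray,
                       reference_temp_c: float) -> np.ndarray:
    """Apply a uniform offset so the background region matches the reference sensor."""
    measured_background = float(heat_map_c[background_mask].mean())
    offset = reference_temp_c - measured_background      # e.g. 21.0 - 19.0 = +2.0
    return heat_map_c + offset                           # also shifts facial temperatures

# Example with synthetic data:
heat_map = np.full((8, 8), 19.0)                         # everything reads 19 C
mask = np.zeros((8, 8), dtype=bool)
mask[:2, :] = True                                       # rows imaging the wall behind the user
adjusted = calibrate_heat_map(heat_map, mask, reference_temp_c=21.0)
print(adjusted[4, 4])                                    # -> 21.0 after the +2 C offset
```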
In another example, sensor data from one or more sensors may be used to correlate and coordinate sensor data from one or more other sensors. For example, image data from a visible spectrum camera may be associated with thermal mapping data from an IR sensor. In such instances, certain detected features in the image data (e.g., eyes, ears, nose, mouth, user interface vents, etc.) may be compared to similar detected features in the thermal map data. Thus, if the image data and the thermal map data are not collected in the same scale and field of view, the data may still be correlated together. For example, all pixels of the thermal map data may be adjusted (e.g., stretched in the X-direction and/or Y-direction) such that the locations of features detected in the image data match the locations of related features in the thermal map data.
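By way of illustration only, one possible way to stretch and shift the thermal map so that its detected features line up with the corresponding features in the image data; the matched-landmark inputs and the affine model are assumptions for the example.

```python
# Illustrative sketch: aligning thermal-map pixels with visible-image pixels
# by fitting a stretch/shift from matched landmarks (e.g. eyes, nose, vents).
# Landmark coordinates are assumed to come from earlier feature detection.
import cv2
import numpy as np

def align_thermal_to_image(thermal: np.ndarray,
                           thermal_landmarks: np.ndarray,   # Nx2 (x, y) points
                           image_landmarks: np.ndarray,     # Nx2 (x, y) points
                           image_size: tuple) -> np.ndarray:
    """Estimate a partial affine transform from matched landmarks and resample."""
    m, _inliers = cv2.estimateAffinePartial2D(thermal_landmarks.astype(np.float32),
                                              image_landmarks.astype(np.float32))
    w, h = image_size
    return cv2.warpAffine(thermal, m, (w, h))
```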
In some cases, adjusting the sensor data may include adjusting the sensor data with portions of the sensor data related to i) movement of the user device, ii) inherent noise of the user device, iii) breathing noise of the user, iv) speaking (or other sounding) noise of the user, v) changes in ambient lighting, vi) detected transient shadows cast on the user's face, vii) detected transient colored light cast on the user's face, or viii) any combination of i through vii.
Other techniques may be used to calibrate, adjust or stabilize or otherwise modify sensor data based on a portion of the sensor data and/or other data (e.g., historical sensor data, sensor data acquired from sensors not coupled to the system, etc.).
In some cases, at block 1506, sensing feedback may be provided as part of receiving sensor data at block 1502. Providing the sensing feedback may include presenting feedback to the user regarding the process of receiving the sensor data at block 1502. In a first example, the sensing feedback may include text or images on the display that indicate instructions regarding ongoing sensor data collection. For example, the sensory feedback may take the form of contours or other indicia that specify a partition of the screen superimposed on the live camera feed of the user-facing camera, in which case the user is instructed to keep their face within the contours or partition of the screen.
In another example, receiving sensor data at block 1502 may occur for a set duration or until some sensor data is successfully obtained. In this case, the user may need to hold the user device in a particular orientation. Once the duration has elapsed or some sensor data has been obtained, the user device may present sensing feedback 1506 to indicate that the user may cease to hold the user device in a particular orientation (e.g., with the display of the user device facing away from the user). The sensing feedback may be in the form of visual and/or non-visual cues (e.g., stimulus, such as cues, instructions, notifications, etc.). In some cases, the use of non-visual cues (e.g., audio cues or haptic feedback) may be particularly useful, such as if the display of the user device is not currently visible to the user.
In some cases, providing sensory feedback at block 1506 may include presenting instructions to a user to perform certain actions to cause an effect detectable by one or more sensors. For example, the user may be instructed to hold his breath for a duration, smile against the camera, chew, yawn, speak, wear and/or remove the user interface, and so forth. At block 1514, the detectable effect may be used to detect a characteristic. In one example, the detectable effect may be an amount of movement of the user interface relative to the user's face when the user speaks, chews, or yawns. In such instances, excessive movement of the user interface may indicate a poor fit. In some cases, receiving sensor data at block 1502 may include receiving a completion signal associated with a user completion action directed at block 1506. Examples of completion signal detection include sensing button presses (e.g., a user pressing a "completion" button) and automatically detecting completion of an action (e.g., via camera data).
In some cases, providing sensory feedback at block 1506 may include providing instructions for a user to move to a desired location (e.g., move indoors or to a well-lit room), adjust the environment (e.g., turn on or off lights in the room), and/or adjust their orientation or pose (e.g., sitting up).
In some cases, receiving sensor data at block 1502 may include controlling the respiratory therapy system at block 1508. Controlling the respiratory therapy system at block 1508 may include sending a control signal to a respiratory therapy device coupled to the user interface. When the respiratory therapy device receives the control signal, the control signal may cause the respiratory therapy device to adjust a parameter or take some action. For example, the control signal may cause the respiratory therapy device to turn on and/or off; to supply air at a given pressure or a preset pressure pattern; to activate and/or deactivate the heater and/or humidifier; and/or to take any other suitable action.
At block 1510, a facial map may be generated using the sensor data. The facial map may be generated using any suitable sensor data, such as ranging data (e.g., from a LiDAR sensor or an IR sensor), image data (e.g., from a camera), and/or thermal data (e.g., from an IR sensor). The facial map may identify one or more features of the user's face, such as eyes, nose, mouth, ears, iris, etc. The facial mapping may include measuring distances between any combination of the identified features. The resulting facial map may be a two-dimensional or three-dimensional facial map. Alternatively, the face map and the result data may be obtained from stored data collected from a face scan performed as explained above with reference to fig. 8A to 8C.
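By way of illustration only, a minimal sketch of a facial map built as pairwise distances between identified features; the landmark names and coordinates are hypothetical.

```python
# Illustrative sketch: a simple facial "map" as pairwise distances between
# identified facial features. In practice the features could come from
# camera, IR, or LiDAR data as described above.
from itertools import combinations
import math

landmarks_3d = {                       # (x, y, z) in millimetres (hypothetical)
    "left_eye":  (-32.0,  40.0, 10.0),
    "right_eye": ( 32.0,  40.0, 10.0),
    "nose_tip":  (  0.0,   0.0, 28.0),
    "chin":      (  0.0, -62.0,  8.0),
}

def facial_map(points: dict) -> dict:
    """Pairwise 3D distances between every combination of identified features."""
    return {(a, b): math.dist(points[a], points[b])
            for a, b in combinations(points, 2)}

for pair, d in facial_map(landmarks_3d).items():
    print(pair, round(d, 1), "mm")
```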
In some cases, the face map is a contour map indicating contours and heights associated with the user's face. In some cases, the facial map is a heat map, indicating local temperatures at different locations on the user's face.
In some cases, generating the facial map at block 1510 may include identifying a first individual and one or more additional individuals. In such cases, generating the facial map may include selecting a first individual (e.g., based on the first individual closest to the one or more sensors or user devices, based on a comparison with previously recorded images or characteristics of the first user, or based on other such analysis), and continuing the facial map generation using the one or more features detected on the face of the first individual.
In some cases, at optional block 1512, a user interface map may be generated using the sensor data. The user interface map may be generated using any suitable sensor data, such as ranging data (e.g., from a LiDAR sensor or an IR sensor), image data (e.g., from a camera), and/or thermal data (e.g., from an IR sensor). The user interface map may identify one or more features of the user interface, such as vents, contours, catheters, catheter connections, strips, and the like. The user interface mapping may include measuring distances between any combination of the identified features. The resulting user interface map may be a two-dimensional or three-dimensional user interface map.
In some cases, generating the facial map at block 1510 may include generating a user interface map at block 1512.
At block 1514, one or more characteristics associated with the current suitability are identified using the sensor data from block 1502 and the facial map from block 1510. In some cases, identifying characteristics at block 1514 may include using the interface map generated at block 1512.
Any suitable characteristic may be identified. These characteristics may include characteristics of the user's face, characteristics of interactions between the user's face and the user interface, characteristics of the user interface, and characteristics of the environment. In some cases, the characteristic may be a user face, a user interface, interactions between the user face and the user interface, or an identifiable aspect of the environment that may be detected from the sensor data and that has a verification value for determining a quality of suitability of the user interface.
In some cases, the characteristics may be associated with a location, although this need not always be the case. A characteristic associated with a location may be tied to a location on the face map and/or the user interface map. For example, an example characteristic may be an audio signal indicative of an unintentional air leak (e.g., an audio signal having a characteristic frequency profile associated with the unintentional air leak) or the unintentional air leak itself. In some cases, this characteristic may be used alone (e.g., the presence or absence of unintentional air leakage may be used, such as to generate a suitability score), or in combination with location information associated with the characteristic. In such an instance, an unintentional air leak may be detected at a location on the user's face (e.g., between the nose and left cheek as seen in fig. 12), in which case the location information may indicate the location on the face map or user interface map associated with where the unintentional air leak exists on the user's face. Pairing the characteristic with its location information may facilitate providing more useful information, such as a more accurate suitability score and/or more accurate instructions to improve suitability.
In some cases, the set of available characteristics may be predetermined. In such cases, identifying characteristics at block 1514 may include analyzing the sensor data and/or facial map to determine which, if any, of the set of available characteristics are detected. For example, the set of available characteristics may include a local temperature rebound on the user's face, a local color rebound on the user's face, and a local temperature on the user interface. In such an instance, if the sensor data does not include thermal data (e.g., if no IR sensor or temperature sensor is available), the only characteristic in the list that can be determined is the local color rebound, which can be detected from the image data collected by the camera. In another example, if sufficient thermal data is available, the local temperature rebound and the local temperature of the user interface may additionally be detected.
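By way of illustration only, a minimal sketch of this filtering step, assuming a predetermined mapping from each characteristic to the sensor data it requires; the mapping shown is illustrative, not exhaustive.

```python
# Illustrative sketch: filtering a predetermined characteristic set down to
# those supported by the currently available sensor data. The mapping below
# is a hypothetical example, not a list taken from the disclosure.
AVAILABLE_CHARACTERISTICS = {
    "local_temperature_rebound_on_face": {"thermal"},
    "local_color_rebound_on_face":       {"camera"},
    "local_temperature_on_interface":    {"thermal"},
    "unintentional_leak_audio":          {"audio"},
}

def detectable(available_sensors: set) -> list:
    """Return the characteristics whose required sensor data is present."""
    return [name for name, needed in AVAILABLE_CHARACTERISTICS.items()
            if needed <= available_sensors]

print(detectable({"camera"}))             # -> ['local_color_rebound_on_face']
print(detectable({"camera", "thermal"}))  # adds the temperature-based characteristics
```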
In some cases, a characteristic may be a beneficial characteristic or a detrimental characteristic. A beneficial characteristic may be a characteristic associated with a good fit of the user interface. For example, a characteristic that is a local contour on the user's face matching the shape of the user interface seal may indicate that the user interface remains well-fitted to the user's face. Thus, the presence of such a characteristic, or its presence to a greater degree, may be beneficial. Conversely, a detrimental characteristic may be associated with a poor fit of the user interface. For example, a characteristic that is a local temperature change over a region of the user's face may indicate that cool air is flowing over that region of the user's face, thereby indicating an unintentional air leak associated with a poor fit. Thus, the presence of such a characteristic, or its presence to a greater degree, may be detrimental.
One example characteristic is a local temperature on the face of the user. The characteristic may be a temperature detected on the face of the user, which may be compared to a previously detected temperature at the same location (e.g., a previous time the user was wearing the user interface) or a current temperature detected at a nearby location. For example, if a cold spot is detected near the seal of the user interface, but surrounded by a warmer temperature, the cold spot may indicate an unintentional air leak.
Another example characteristic is local temperature rebound on the face of the user. Local temperature rebound may include a change in temperature over a period of time at a location (e.g., a location on a facial map). The local temperature rebound may follow a transient event, such as a user interface transient event (e.g., removal of a user interface), a user transient event (e.g., movement from a warm room to a colder room), a respiratory therapy system transient event (e.g., a heater engaging a respiratory therapy system, such as a heater of a flow generator, a heater of a humidifier, and/or a heater of a conduit), or another transient event. For example, detecting a longer temperature rebound time at certain locations relative to the facial map after removing the user interface may indicate a poor fit. "rebound" of a measured characteristic (e.g., temperature, color, profile, etc.) may include a change in the measured characteristic after a transient event such as donning, doffing, or adjusting a user interface, starting or stopping the flow of pressurized air from the RPT device, or another event that produces a change in the measured characteristic over time, for example.
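By way of illustration only, the following sketch estimates a rebound time from a time series of a locally measured value after a transient event; the 90% settling criterion and the synthetic temperature curve are assumptions for the example.

```python
# Illustrative sketch: estimating a local rebound time (e.g. temperature at
# one facial location) after a transient event such as removing the mask.
import numpy as np

def rebound_time_s(timestamps_s: np.ndarray, values: np.ndarray,
                   settled_fraction: float = 0.9) -> float:
    """Time until the signal recovers a given fraction of its total change."""
    start, end = values[0], values[-1]
    if np.isclose(end, start):
        return 0.0
    recovery = (values - start) / (end - start)
    settled_idx = int(np.argmax(recovery >= settled_fraction))  # first index that settles
    return float(timestamps_s[settled_idx] - timestamps_s[0])

# Example: skin warming back up after the user interface is removed.
t = np.arange(0, 30, 1.0)
temp = 31.0 + 3.0 * (1 - np.exp(-t / 6.0))   # synthetic rebound curve
print("rebound time:", rebound_time_s(t, temp), "s")
```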
Another example characteristic is a local color on the face of the user. The characteristic may be a color detected on the user's face that may be compared to a previously detected color at the same location (e.g., a previous time the user was wearing the user interface) or a current color detected at a nearby location. For example, if a redder location is detected near the seal of the user interface, but surrounded by a whiter color, the redder location may indicate an inadvertent air leak (e.g., because the seal is not pressed too much against the user's face and/or because of irritation from flowing air).
Another example characteristic is local color rebound on the user's face. Local color rebound may include a change in color over a period of time at a location (e.g., a location on a facial map). The local color rebound may follow a transient event, such as a user interface transient event (e.g., removal of a user interface), a user transient event (e.g., movement from a warm room to a colder room), a respiratory therapy system transient event (e.g., turning on or off a respiratory therapy device), or another transient event. For example, detecting a longer color rebound time at certain locations relative to the facial map after removing the user interface may indicate a good fit.
Another example feature is a local contour on the face of the user. The characteristic may be a contour detected on the user's face that may be compared to a previously detected contour at the same location (e.g., a previous time the user was wearing the user interface) or a current contour detected at a nearby location. For example, if a certain amount of indentations are detected on the user's skin around the entire seal of the user interface, the indentations may indicate a strong seal and a good fit.
Another example feature is local contour rebound on the user's face. Local contour rebound may include a change in contour over a period of time at a certain location (e.g., a certain location on a facial map). The local contour rebound may follow a transient event, such as a user interface transient event (e.g., removal of a user interface), a user transient event (e.g., movement from a warm room to a colder room), a respiratory therapy system transient event (e.g., turning on or off a respiratory therapy device), or another transient event. For example, detecting a longer contour rebound time at certain locations relative to the facial map after removing the user interface may indicate a good fit.
Another example feature is a local outline on the user interface. The characteristic may be a profile detected on the user interface (e.g., on a portion of the user interface, such as a seal of the user interface), which may be compared to a previously detected profile at the same location (e.g., at a previous time the user interface was worn) or a current profile detected at other locations on the user interface. For example, if a certain amount of indentations are detected on the seal of the user interface, the indentations may indicate a strong seal and a good fit.
Another example feature is local contour rebound on the user interface. Local contour rebound may include a change in contour over a period of time at a certain location on the user interface (e.g., a certain location on the user interface map). The local contour rebound may follow a transient event, such as a user interface transient event (e.g., removal of a user interface), a user transient event (e.g., movement from a warm room to a colder room), a respiratory therapy system transient event (e.g., turning on or off a respiratory therapy device), or another transient event. For example, detecting a longer contour rebound time at a certain location on the seal of the user interface may indicate a poor fit and/or a failing seal (e.g., a failing seal may not rebound very quickly).
Another example characteristic is a local temperature on the user interface. The characteristic may be a temperature detected at a location on the user interface (e.g., on a portion of the user interface, such as a seal) that may be compared to a previously detected temperature at the same location (e.g., a previous time the user interface was worn) or a current temperature detected at other locations on the user interface. For example, if a cold spot is detected at a location on the seal of the user interface, but other portions of the user interface are detected as being warmer, the cold spot may indicate an unintended air leak. In another example, the length of time it takes for a portion of the user interface or catheter to heat to a given temperature may indicate a good or poor fit.
Another example feature is a vertical position of the user interface relative to one or more features of the user's face. The vertical position of the user interface relative to one or more features of the user's face may be based on the face map and the sensor data and/or the user interface map. In one example, if the user interface or a detected feature of the user interface detects as being too high or too low relative to one or more features of the user's face (e.g., chin, mouth, and/or nose), this may indicate a poor fit (e.g., the strap may pull the user interface too high or too low).
Another example feature is a horizontal position of the user interface relative to one or more features of the user's face. The horizontal position of the user interface relative to one or more features of the user's face may be based on the face map and the sensor data and/or the user interface map. In one example, if the user interface or a detected feature of the user interface detects too far from one side or the other of the user's face, this may indicate a poor fit (e.g., the strap may pull the user interface too far from the user's right or left).
Another example feature is a rotational orientation of the user interface relative to one or more features of the user's face. The rotational orientation may be a measure of how far the user interface rotates relative to the user's face about an axis of rotation extending outwardly from the user's face, such as an axis of rotation extending in an anterior or ventral direction from the face. The rotational orientation of the user interface relative to one or more features of the user's face may be based on the face map and the sensor data and/or the user interface map. In one example, if the user interface or a detected feature of the user interface detects that it is rotated too high relative to one or more features of the user's face, this may indicate a poor fit (e.g., the strap may be twisting the user interface on the user's face).
Another example characteristic is a distance between an identified feature of the user interface and one or more features of the user's face. The distance may be based on the face map and the sensor data and/or the user interface map. In one example, if a detected feature of the user interface (e.g., a vent) is detected to be offset too far from a detected feature of the user's face (e.g., the bridge of the nose), this may indicate a poor fit. The distance may be measured in one, two or three dimensions. In one example, a user interface that fits too loosely on a user's face may have a vent positioned a relatively large distance from the surface of the user's face, which may indicate a poor fit. In this case, measurement of the distance between the vent and the feature of the user's face may indicate a poor fit, which may be corrected by tightening the straps of the user interface.
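By way of illustration only, the following sketch derives the position-related characteristics described above from a face map and a user interface map; the feature names, coordinates, and tolerances are hypothetical.

```python
# Illustrative sketch: vertical offset, horizontal offset, and feature-to-feature
# distance between the user interface map and the face map.
import math

def position_characteristics(face_map: dict, ui_map: dict,
                             max_offset_mm: float = 8.0,
                             max_vent_gap_mm: float = 25.0) -> dict:
    nose = face_map["nose_bridge"]          # (x, y, z) in mm, hypothetical landmark
    vent = ui_map["vent"]
    vertical_offset   = vent[1] - nose[1]
    horizontal_offset = vent[0] - nose[0]
    vent_gap          = math.dist(vent, nose)
    return {
        "vertical_offset_mm": vertical_offset,
        "horizontal_offset_mm": horizontal_offset,
        "vent_to_bridge_mm": vent_gap,
        "possible_poor_fit": (abs(vertical_offset) > max_offset_mm
                              or abs(horizontal_offset) > max_offset_mm
                              or vent_gap > max_vent_gap_mm),
    }

face = {"nose_bridge": (0.0, 35.0, 20.0)}
mask = {"vent": (3.0, 30.0, 42.0)}
print(position_characteristics(face, mask))
```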
In another example, identifying leakage characteristics at block 1514 may include determining a breathing pattern of the user based on the received sensor data from block 1502, determining a thermal pattern associated with the face of the user based on the received sensor data from block 1502 and the facial map from block 1510, and then using the breathing pattern and the thermal pattern to generate a leakage characteristic signal indicative of a balance between intentional vent leakage and unintentional seal leakage. For example, using the breathing pattern and the thermal pattern, an intentional vent leak signal may be generated that is indicative of instances of intentional vent leak, and optionally the intensity of the intentional vent leak. A similar unintentional seal leak signal may be generated for unintentional seal leaks. The leakage characteristic signal may be a ratio between the unintentional seal leakage signal and the intentional vent leakage signal.
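By way of illustration only, a minimal sketch of the leakage characteristic as a ratio of unintentional to intentional leak, assuming per-breath leak estimates have already been derived from the breathing and thermal patterns; the input values are placeholders.

```python
# Illustrative sketch: leakage characteristic expressed as the ratio of total
# unintentional seal leak to total intentional vent leak.
import numpy as np

def leak_characteristic(intentional_leak: np.ndarray,
                        unintentional_leak: np.ndarray) -> float:
    """Ratio of total unintentional seal leak to total intentional vent leak."""
    intentional_total = float(np.sum(intentional_leak))
    if intentional_total <= 0.0:
        return float("inf")
    return float(np.sum(unintentional_leak)) / intentional_total

vent_leak = np.array([1.0, 1.1, 1.0, 0.9])   # expected by the vent design
seal_leak = np.array([0.1, 0.4, 0.5, 0.6])   # trending upward -> worsening seal
print("leak ratio:", round(leak_characteristic(vent_leak, seal_leak), 2))  # -> 0.4
```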
In some cases, identifying the characteristics at block 1514 may include identifying and validating the potential characteristics at block 1516. Identifying and validating the potential characteristic may include identifying the characteristic using a first set of sensor data and then validating the characteristic using a second set of sensor data. The first set of sensor data and the second set of sensor data may be collected from the same sensor but at different times, or may be collected from different sensors. For example, collecting from the same sensor at different times may include identifying potential unintentional air leaks from thermal data while the user is wearing the user interface (e.g., detecting local changes in surface temperature of the user's face over a zone around the perimeter of the user interface), and then using thermal data acquired after the user has removed the user interface to confirm the unintentional air leaks (e.g., detecting lack of change in surface temperature of the user's face at a particular zone over a duration of time, such as when blood is flushed back into previously compressed tissue at other locations, but not adjacent to the potential air leaks).
In another example, collecting sensor data from different sensors to identify and confirm a potential characteristic may include using thermal data collected while a user is wearing a user interface to identify a potential unintentional air leak, and then using simultaneously collected audio data to confirm an unintentional air leak (e.g., by detecting a characteristic audio or acoustic signal indicative of an unintentional air leak that occurs simultaneously with the simultaneous detection of the potential unintentional air leak in a thermal image).
In some cases, additional sensor data may be received from the respiratory device at block 1520 and used to identify the characteristic at block 1514. Additional sensor data from the respiratory therapy device may include data acquired from a respiratory therapy system, such as the respiratory therapy system in fig. 1. Such additional sensor data may include data such as flow rate, pressure, delivered air temperature, ambient temperature, delivered air humidity, ambient audio signals, audio signals communicated via a conduit or air within a conduit, and the like. In some cases, the additional sensor data may include data collected by a user interface and/or a sensor in the catheter. In some cases, additional sensor data from block 1520 is associated with air delivery of the respiratory therapy system. For example, at block 1514, additional sensor data associated with the delivered air temperature may be used to identify and/or confirm an unintentional air leak by comparing the delivered air temperature to the temperature of the sealed user skin surrounding the user interface.
However, in some cases, additional sensor data from block 1520 may be associated with the current suitability of the user interface. In one example, the respiratory therapy system may include one or more sensors capable of detecting information about the user's face or user interface. For example, an RF sensor in a respiratory therapy device can detect ranging information associated with a current fit of a user interface on a user. In another example, one or more optical sensors (e.g., a visible light camera, a passive infrared sensor, and/or an active infrared sensor) may be used to detect information about a user's face and/or user interface.
In some cases, historical data may be received at block 1522 and used to identify characteristics at block 1514. The historical data may include historical sensor data (e.g., sensor data collected or received at the previous example of block 1502), historical characteristic data (e.g., information associated with the characteristics identified at the previous example of block 1514), and/or other historical data (e.g., previous suitability scores). At block 1518, identifying characteristics at block 1514 may include comparing historical data received at block 1522 with sensor data received at block 1502. For example, receiving the historical data at block 1522 may include accessing a memory containing a thermal map of the user's face prior to wearing the user interface. In such an instance, at block 1518, the thermal map may be compared to a current thermal map of the user's face, which may be taken while the user is wearing the user interface and/or after the user has removed the user interface. The differences detected in the two thermal maps may be used to identify characteristics (e.g., unexpected local temperatures and/or unintended air leaks).
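By way of illustration only, the following sketch compares a stored heat map with a current heat map to flag locations of unexpected local temperature; the 2 ℃ threshold and the synthetic data are assumptions for the example.

```python
# Illustrative sketch: comparing a historical (pre-wear) heat map with the
# current heat map to flag candidate locations for unintentional air leaks.
import numpy as np

def heat_map_differences(before_c: np.ndarray, after_c: np.ndarray,
                         threshold_c: float = 2.0) -> np.ndarray:
    """Boolean mask of facial locations whose temperature changed noticeably."""
    return np.abs(after_c - before_c) > threshold_c

before = np.full((4, 4), 33.0)
after = before.copy()
after[1, 2] = 29.5            # cold spot near the seal -> candidate air leak
print(np.argwhere(heat_map_differences(before, after)))  # -> [[1 2]]
```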
In another example, receiving historical data at block 1522 may include accessing a memory containing previous contour rebound times associated with the seal of a particular user interface. In such an instance, a previous contour rebound time may be compared to a newly acquired contour rebound time for the same portion of the seal of the user interface. Such a comparison may indicate degradation of the seal over time, which may lead to a poor fit and eventually to seal failure. Thus, such degradation can be detected as a characteristic and used to prompt the user to replace the seal.
In some cases, machine learning algorithms may be used to facilitate or implement the identification of characteristics. Such machine learning algorithms may be trained using a training set of sensor data that has been collected from one or more users wearing a user interface or otherwise engaged in user interface transient events. The training data may include information about the determined presence of the characteristic, although this need not always be the case. In some cases, the training data may include information regarding the quality of the suitability of the user interface. Such suitability quality information may be based on subjective assessments by the user, objective values collected using other equipment (e.g., laboratory sensors and equipment, such as user interfaces equipped with dedicated sensors and/or dedicated sensing equipment), and so forth. In some cases, the characteristic identified at block 1514 is a feature used by the machine learning algorithm.
In some cases, identifying characteristics associated with the current suitability may include determining a quality of suitability of the prediction of the given user face (e.g., the face map from block 1510) and the given user interface (e.g., the user interface map or other user interface identification information from block 1512). In such cases, the facial map of the user's face may be applied to design parameters of a given user interface to determine the quality of the predicted suitability (e.g., the best possible suitability of the particular user interface on the user's face). If the quality of the predicted suitability is below a threshold, it may be determined that the given user interface is unsuitable for use by the user. The quality of the predicted suitability may also be used to determine whether a given assessment of the current suitability (e.g., an assessment as described with reference to block 1528) may be improved (e.g., whether improvement of the current suitability may be achieved or may occur). For example, if the quality of the predicted suitability of a given user interface on a particular user face is "good" (e.g., from among "bad", "good" and "very good", although other metrics for representing the quality of the suitability may be used), and the current suitability is evaluated as "good", then it may be determined that there may be no further improvement in the quality of the suitability without changing the user interface. In another example, if the quality of the predicted fit is "very good" and the current fit is assessed as "good," it may be determined that adjusting other factors (e.g., in addition to changing the user interface) may improve the current fit, such as trimming facial hair, removing cosmetics, changing bed position, or other changes.
At block 1524, output feedback may be generated based on the identified characteristics from block 1514 and optionally based on the characteristic location (e.g., location relative to the facial map and/or user interface map). Generating the output feedback at block 1524 may include presenting the output feedback, such as via a GUI, speaker, haptic feedback device, or the like. In some cases, the output feedback may be presented as a superposition on the user image, such as in the form of an augmented reality superposition. For example, icons, highlighting, text, and other indicia may be superimposed on the user's image (e.g., live or non-live) to provide feedback regarding the quality of the current suitability and/or instructions for how to improve the current suitability.
In some cases, the output feedback may include one or more characteristics that have been identified and optionally location information. For example, the output feedback may be an image or representation of the user's face (e.g., taken from a visible spectrum camera, a thermal imager, or graphically generated from a facial map) overlaid with graphics or text indicative of the presence of the detected characteristic. In one example, the detected unintentional air leak may be indicated on an image or representation of the user's face by an arrow, a highlighted circle, or other attentive visual element at the location of the detected unintentional air leak.
In some cases, generating output feedback at block 1524 may include determining and presenting suggested actions to improve fit at block 1526. Determining and presenting suggested actions to improve suitability may include using the identified characteristics and location information thereof to select an action designed to improve suitability, and then presenting the action (e.g., via text, graphics, audio feedback, haptic feedback, etc.). In some cases, the action to improve suitability may be identified based on a look-up table or algorithm that may select an action to take based on the identified characteristics (optionally including location information thereof). For example, detection of an unintentional air leak may be associated with a suggested action to adjust a band of the user interface, in which case the location of the unintentional air leak may be used to determine which band to adjust.
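By way of illustration only, a minimal sketch of the look-up-table approach described above; the table entries and the default suggestion are illustrative assumptions, not the actual recommendation logic of the disclosure.

```python
# Illustrative sketch: mapping an identified characteristic plus its location
# to a suggested corrective action.
SUGGESTED_ACTIONS = {
    ("unintentional_air_leak", "upper_left"):  "Tighten the upper-left headgear strap slightly.",
    ("unintentional_air_leak", "lower_right"): "Tighten the lower-right headgear strap slightly.",
    ("excessive_indentation",  "nose_bridge"): "Loosen the upper straps; the cushion may be over-tightened.",
}

def suggest_action(characteristic: str, location: str) -> str:
    return SUGGESTED_ACTIONS.get(
        (characteristic, location),
        "Re-seat the user interface and repeat the fit check.",
    )

print(suggest_action("unintentional_air_leak", "upper_left"))
```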
However, in some cases, determining the action to take may be based on a machine learning algorithm. Such machine learning algorithms may be trained using a training set of actions taken and suitability quality information from before and/or after the actions were taken. Such suitability quality information may be based on subjective assessments by the user, objective values collected using other equipment (e.g., laboratory sensors and equipment, such as user interfaces equipped with dedicated sensors and/or dedicated sensing equipment), and so forth. In some cases, the characteristic identified at block 1514 is a feature used by the machine learning algorithm.
In some cases, generating output feedback at block 1524 may include generating an assessment of the current suitability at block 1528. Generating an assessment of the current suitability at block 1528 may include using the sensor data, the identified characteristics, and/or the characteristic location information. The evaluation may be a numerical score (e.g., a fit score such as a numerical score between 0 and 100) or a classification score (e.g., a text score such as "good", "normal", and "bad"; a color score such as green, yellow, and red; or a graphical score such as a seal depicting no air leakage, a seal depicting a small air leakage, and a seal depicting a large air leakage).
In some cases, generating the assessment may include generating the assessment based on an equation-based algorithm using the sensor data, the identified characteristics, and/or the characteristic location information as inputs. For example, the suitability score may be a weighted calculation of values associated with different identified characteristics (e.g., an amount of temperature difference, a distance of the temperature difference from a detected user interface sealing edge, a duration of an unintentional leak in the detected audio signal, etc.).
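By way of illustration only, a sketch of an equation-based suitability score as a weighted combination of characteristic values; the weights, penalty terms, and 0-100 scale are assumptions for the example.

```python
# Illustrative sketch: a weighted, equation-based suitability score. All weights
# and value ranges are hypothetical.
def suitability_score(temp_delta_c: float,
                      leak_distance_from_seal_mm: float,
                      leak_audio_duration_s: float) -> float:
    penalties = (
        6.0 * temp_delta_c                                     # larger cold spot -> bigger penalty
        + 0.2 * max(0.0, 30.0 - leak_distance_from_seal_mm)    # leaks near the seal edge matter more
        + 1.5 * leak_audio_duration_s                          # longer audible leak -> bigger penalty
    )
    return max(0.0, min(100.0, 100.0 - penalties))

print(suitability_score(temp_delta_c=0.5,
                        leak_distance_from_seal_mm=28.0,
                        leak_audio_duration_s=1.0))            # -> 95.1
```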
In some cases, generating the evaluation may be based at least in part on a comparison to historical data (e.g., historical data received at block 1522). In such cases, the evaluation may be based at least in part on whether the value associated with the identified one or more characteristics is shown as improving or degrading, in which case improving may ensure that the suitability score increases from a previous suitability score and degrading may ensure that the suitability score decreases from a previous suitability score.
In some cases, generating the assessment may be based on a machine learning algorithm. Such machine learning algorithms may be trained using training sets of sensor data, identified characteristics, and/or characteristic location information. In some cases, the training set may include suitability quality information. Such suitability quality information may be based on subjective assessments by the user, objective values collected using other equipment (e.g., laboratory sensors and equipment, such as user interfaces equipped with dedicated sensors and/or dedicated sensing equipment), and so forth. In some cases, the characteristic identified at block 1514 is a feature used by the machine learning algorithm.
Although process 1500 is shown and described herein as occurring in a particular order, more generally, the various blocks of process 1500 may be performed in any suitable order, with fewer and/or additional blocks. For example, in some cases, blocks 1520, 1522, 1512, and 1518 are not used.
The flowcharts in fig. 9-10 and 14-15 represent example machine readable instructions for collecting and analyzing data to select an optimal interface for respiratory pressure therapy and for performing follow-up analysis (such as assessing the suitability of a selected user interface). In this example, the machine readable instructions comprise an algorithm executed by: (a) a processor; (b) a controller; and/or (c) one or more other suitable processing devices. The algorithm may be embodied in software stored on a tangible medium such as a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), or other storage device. However, those of ordinary skill in the art will readily appreciate that the entire algorithm and/or portions thereof could alternatively be executed by a device other than a processor and/or embodied in firmware or dedicated hardware in a well known manner (e.g., the entire algorithm and/or portions thereof could be embodied by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), a field programmable gate array (FPGA), discrete logic, etc.). For example, any or all of the components of the interface may be implemented by software, hardware, and/or firmware. Further, some or all of the machine readable instructions represented by the flowcharts may be implemented manually. Further, although the example algorithm is described with reference to the flowcharts depicted in fig. 9-10 and 14-15, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the example machine readable instructions may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.
As used herein, the terms "component," "module," "system" and the like are generally intended to refer to a computer-related entity, either hardware (e.g., a circuit), a combination of hardware and software, or an entity associated with an operating machine having one or more particular functions. For example, a component may be, but is not limited to: a process running on a processor (e.g., a digital signal processor), a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers. Furthermore, a component may take the form of: specially designed hardware; general-purpose hardware specialized by executing software thereon that enables the hardware to perform a particular function; software stored on a computer readable medium; or a combination thereof.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, to the extent that the terms "includes," "including," "has," or variants thereof are used in either the detailed description and/or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. Furthermore, terms such as those defined in commonly used dictionaries should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
One or more elements or aspects or steps from one or more of the following claims 1 to 100, or any portion thereof, may be combined with one or more elements or aspects or steps from one or more of the other claims 1 to 100, or any portion thereof, or a combination thereof, to form one or more additional embodiments and/or claims of the present disclosure.
While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example only, and not limitation. Although the invention has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of this specification and the annexed drawings. Furthermore, while a particular feature of the invention may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present invention should not be limited by any of the above-described embodiments. Rather, the scope of the invention should be defined in accordance with the following claims and their equivalents.

Claims (100)

1. A system for selecting an interface suitable for a user's face for respiratory therapy, the system comprising:
a storage means for storing a face image of the user;
a face profile engine operable to determine facial features of the user based on the facial image;
one or more databases for storing:
a plurality of facial features from a user population and a corresponding plurality of interfaces used by the user population; and
operational data of a respiratory therapy device having a plurality of corresponding interfaces for use by the user population; and
a selection engine coupled to the one or more databases, wherein the selection engine is operable to select an interface for the user from the plurality of corresponding interfaces according to stored operational data and determined facial features based on a desired effect.
2. The system of claim 1, wherein the interface is a mask.
3. The system of any one of claims 1-2, wherein the respiratory therapy device is configured to provide one or more of Positive Airway Pressure (PAP) or non-invasive ventilation (NIV).
4. The system of any of claims 1-3, wherein at least one of the respiratory therapy devices includes an audio sensor to collect audio data during operation of the at least one of the respiratory therapy devices.
5. The system of claim 4, wherein the selection engine is operable to analyze the audio data to determine a type of a corresponding interface of the at least one of the respiratory therapy devices based on matching the audio data with acoustic signatures of known interfaces.
6. The system of any of claims 1 to 5, wherein the selection engine selects the interface based on a comparison of demographic data of the user with demographic data of a population of users stored in the one or more databases.
7. The system of any one of claims 1 to 6, wherein the operational data includes one of flow, motor speed, and treatment pressure.
8. The system of any of claims 1 to 7, wherein the facial image is determined from a scan from a mobile device comprising a camera.
9. The system of claim 8, wherein the mobile device comprises a depth sensor, and wherein the camera is a 3D camera, and wherein the facial features are three-dimensional features derived from a gridded surface derived from the facial image.
10. The system of any of claims 1 to 9, wherein the facial image is a two-dimensional image comprising landmarks, wherein the facial features are three-dimensional features derived from the landmarks.
11. The system of any of claims 1 to 9, wherein the facial image is one of a plurality of two-dimensional facial images, and wherein the facial feature is a three-dimensional feature derived from a 3D deformation model adapted to match the facial image.
12. The system of any of claims 1 to 8, wherein the facial image comprises landmarks related to at least one facial dimension.
13. The system of claim 12, wherein the facial dimension comprises at least one of a facial height, a nasal width, and a nasal depth.
14. The system of any one of claims 1 to 13, wherein the desired effect is a seal between the interface and face surface to prevent leakage.
15. The system of any one of claims 1 to 14, wherein the desired effect is compliance with use of the respiratory therapy device.
16. The system of any of claims 1-15, further comprising a mobile device operable to collect subjective data input from a user, and wherein selection of the interface is based in part on the subjective data.
17. The system of any of claims 1 to 16, further comprising a machine learning module operable to determine a type of operational data associated with an interface that achieves the desired effect.
18. The system of any of claims 1 to 17, wherein the selected interface comprises one of a plurality of types of interfaces and one of a plurality of sizes of interfaces.
19. The system of any of claims 1 to 18, wherein the selection engine is further operable to receive feedback from the user based on operating the selected interface and to select another interface based on the desired effect based on an undesired result.
20. The system of claim 19, wherein the undesirable outcome is one of low user compliance with therapy, high leakage, or unsatisfactory subjective outcome data.
21. A method of selecting an interface for respiratory therapy that is suitable for a user's face, the method comprising:
storing a facial image of the user in a storage device;
determining facial features based on the facial image;
storing a plurality of facial features and a corresponding plurality of interfaces from a user population in one or more databases;
storing operational data of respiratory therapy devices used by a user population having a plurality of corresponding interfaces in one or more databases; and
an interface is selected for the user from the plurality of corresponding interfaces based on the desired effect according to the stored operational data and the determined facial features.
22. The method of claim 21, wherein the interface is a mask.
23. The method of any one of claims 21-22, wherein the respiratory therapy device is configured to provide one of Positive Airway Pressure (PAP) or non-invasive ventilation (NIV).
24. The method of any of claims 21-23, wherein at least one of the respiratory therapy devices includes an audio sensor to collect audio data during operation of the at least one of the respiratory therapy devices.
25. The method of claim 24, wherein the selecting comprises analyzing the audio data to determine a type of a corresponding interface of the at least one of the respiratory therapy devices based on matching the audio data with acoustic signatures of known interfaces.
26. The method of any of claims 21 to 25, wherein the selecting comprises comparing demographic data of the user with demographic data of a population of users stored in the one or more databases.
27. The method of any one of claims 21 to 26, wherein the operational data includes one of flow, motor speed, and treatment pressure.
28. The method of any of claims 21 to 27, wherein the facial image is determined from a scan from a mobile device comprising a camera.
29. The method of claim 28, wherein the mobile device comprises a depth sensor, and wherein the camera is a 3D camera, and wherein the facial features are three-dimensional features derived from a gridded surface derived from the facial image.
30. The method of any one of claims 21 to 29, wherein the facial image is a two-dimensional image comprising landmarks, wherein the facial features are three-dimensional features derived from the landmarks.
31. The method of any of claims 21 to 30, wherein the facial image is one of a plurality of two-dimensional facial images, and wherein the facial feature is a three-dimensional feature derived from a 3D deformation model adapted to match the facial image.
32. The method of any of claims 21 to 28, wherein the facial image comprises landmarks relating to at least one facial dimension.
33. The method of claim 32, wherein the facial dimension comprises at least one of facial height, nasal width, and nasal depth.
34. A method according to any one of claims 21 to 33, wherein the desired effect is a seal between the interface and face surface to prevent leakage.
35. The method of any one of claims 21 to 34, wherein the desired effect is user compliance with a treatment.
36. The method of any of claims 21-35, further comprising collecting, by at least one mobile device, subjective data input from a user, and wherein selection of the interface is based at least in part on the subjective data.
37. The method of any of claims 21 to 36, further comprising determining, via a machine learning module, a type of operational data associated with an interface that achieves the desired effect.
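The machine learning module of claim 37 is not further specified here; purely as an illustration, and assuming scikit-learn is available together with a labelled table of nightly operational data, a classifier relating operational features to whether the desired effect was achieved might be trained as follows (all feature values and labels are invented).

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical operational data per night: [median flow (L/min), motor speed (krpm), pressure (cmH2O)]
X = np.array([[28.0, 18.5, 9.0], [44.0, 21.0, 12.5], [30.0, 19.0, 10.0], [47.0, 22.0, 13.0]])
y = np.array([1, 0, 1, 0])   # 1 = desired effect achieved (good seal, compliant night)

model = LogisticRegression().fit(X, y)
tonight = np.array([[31.0, 19.2, 10.5]])
print(model.predict(tonight), model.predict_proba(tonight)[0, 1])  # predicted class and probability of success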
38. The method of any of claims 21 to 37, wherein the selected interface comprises one of a plurality of types of interfaces and one of a plurality of sizes of interfaces.
39. The method of any of claims 21 to 38, further comprising:
receiving feedback from the user based on operating the selected interface; and
selecting, in response to an undesired result, another interface based on the desired effect.
40. The method of claim 39, wherein the undesired result is one of low user compliance with therapy, high leakage, or unsatisfactory subjective outcome data.
41. The method of any one of claims 21 to 40, further comprising:
receiving sensor data associated with a current suitability of the selected interface on the user's face, the sensor data collected by one or more sensors of a mobile device;
generating a face map using the sensor data, wherein the face map is indicative of one or more features of the user's face;
identifying a first characteristic associated with the current suitability using the sensor data and the facial map, wherein the first characteristic is indicative of a quality of the current suitability, and wherein the first characteristic is associated with a characteristic location on the facial map; and
generating output feedback based on the identified first characteristic and the characteristic location.
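A structural sketch of the fit-checking loop of claims 41 and 42 (sensor data to face map, characteristic at a location, output feedback) is given below; the types, the thermal threshold, and the example grids are assumptions, not the claimed implementation.

from dataclasses import dataclass

@dataclass
class Characteristic:
    kind: str          # e.g. "local_temperature_change" or "none"
    location: tuple    # (row, col) cell on the face map grid
    severity: float    # 0.0 (negligible) .. 1.0 (severe)

def build_face_map(distance_grid):
    """Face map as a grid of distances (mm) from the mobile device to the face;
    zero marks cells that are not part of the face. A real system would fuse
    depth, thermal and camera data here."""
    return [[float(d) for d in row] for row in distance_grid]

def identify_characteristic(face_map, thermal_delta_map, threshold=1.5):
    """Flag the on-face cell with the largest temperature change as a possible leak site."""
    best_loc, best_delta = None, 0.0
    for r, row in enumerate(thermal_delta_map):
        for c, delta_t in enumerate(row):
            on_face = face_map[r][c] > 0.0
            if on_face and abs(delta_t) > threshold and abs(delta_t) > best_delta:
                best_loc, best_delta = (r, c), abs(delta_t)
    if best_loc is None:
        return Characteristic("none", (0, 0), 0.0)
    return Characteristic("local_temperature_change", best_loc, min(best_delta / 5.0, 1.0))

def output_feedback(ch):
    if ch.kind == "none":
        return "No fit problem detected."
    return f"Possible leak near face-map cell {ch.location}: re-seat the cushion on that side."

distances = [[412.0, 405.0, 410.0], [400.0, 395.0, 398.0], [0.0, 402.0, 404.0]]
thermal_delta = [[0.2, 0.1, 0.3], [0.1, 2.4, 0.2], [3.0, 0.1, 0.1]]   # deg C change vs. bare face
face_map = build_face_map(distances)
print(output_feedback(identify_characteristic(face_map, thermal_delta)))  # flags cell (1, 1)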
42. The method of claim 41, wherein the sensor data comprises distance data, wherein the distance data indicates one or more distances between the one or more sensors of the mobile device and the user's face.
43. The method of claim 42, wherein the one or more sensors comprise: i) A proximity sensor; ii) an infrared-based lattice sensor; iii) LiDAR sensor; iv) MEMS micromirror projector based sensors; or v) any combination of i to iv.
44. The method of any one of claims 41 to 43, wherein the sensor data is collected: i) before the user wears the interface; ii) while the user wears the interface with the current suitability; iii) after the user removes the interface; or iv) any combination of i, ii, and iii.
45. The method of claim 44, further comprising receiving initial sensor data, wherein the initial sensor data is associated with the user's face prior to donning the interface, and wherein identifying the first characteristic comprises comparing the initial sensor data with the sensor data.
46. The method of any one of claims 41 to 45, wherein the first characteristic comprises:
i) A local temperature on the face of the user;
ii) a local temperature change on the face of the user;
iii) A local color on the face of the user;
iv) a local color change on the face of the user;
v) a local contour on the face of the user;
vi) a local contour change on the user's face;
vii) a local profile on the interface;
viii) a local profile change on the interface;
ix) a local temperature at the interface; or alternatively
x) any combination of i to ix.
47. The method of any one of claims 41 to 46, wherein the first characteristic comprises:
i) A vertical position of the interface relative to the one or more features of the user's face;
ii) a horizontal position of the interface relative to the one or more features of the user's face;
iii) A rotational orientation of the interface relative to the one or more features of the user's face;
iv) a distance between an identified feature of the interface and the one or more features of the user's face; or alternatively
v) any combination of i to iv.
48. The method of any of claims 41-47, wherein the one or more sensors comprise one or more orientation sensors, wherein the sensor data comprises orientation sensor data from the one or more orientation sensors, and wherein receiving sensor data comprises:
scanning the face of the user using the mobile device when the mobile device is oriented such that the one or more orientation sensors are oriented toward the face of the user;
tracking progress of the scanning of the face; and
generating a non-visual stimulus that indicates the progress of the scanning of the face.
49. The method of any one of claims 41 to 48, further comprising:
receiving motion data associated with movement of the mobile device; and
applying the motion data to the sensor data to account for movement of the mobile device relative to the user's face.
50. The method of any of claims 41-49, further comprising generating an interface map using the sensor data, wherein the interface map indicates a relative position of one or more features of the interface with respect to the face map, and wherein identifying the first characteristic comprises using the interface map.
51. The method of any of claims 41-50, wherein the one or more sensors comprise a camera, and wherein the sensor data comprises camera data, the method further comprising:
receiving sensor calibration data collected by the camera when the camera is oriented toward a calibration surface; and
calibrating the camera data of the sensor data based on the sensor calibration data.
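One hedged reading of the calibration step of claim 51 is a per-channel colour correction derived from imaging a known neutral calibration surface, so that later skin-colour observations are comparable across lighting conditions; the grey-world-style gains below are an assumption, not the specified method.

import numpy as np

def colour_gains_from_calibration(calibration_frame):
    """calibration_frame: HxWx3 image of a neutral (grey or white) surface.
    Returns per-channel gains that make its average colour neutral."""
    channel_means = calibration_frame.reshape(-1, 3).mean(axis=0)
    return channel_means.mean() / (channel_means + 1e-6)

def calibrate(frame, gains):
    """Apply the gains to later camera frames so skin-colour changes are comparable."""
    return np.clip(frame * gains, 0.0, 255.0)

# Hypothetical usage: a slightly blue-tinted shot of a grey card.
grey_card = np.full((4, 4, 3), (110.0, 118.0, 140.0))
gains = colour_gains_from_calibration(grey_card)
print(np.round(calibrate(grey_card, gains)[0, 0]))  # ~ [123. 123. 123.]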
52. The method of any one of claims 41 to 51, further comprising:
identifying a second characteristic using the sensor data and the facial map, the second characteristic being associated with a possible future failure of the interface; and
generating output feedback based on the identified second characteristic, the output feedback being usable to reduce a likelihood that the possible future failure will occur or to delay the occurrence of the possible future failure.
53. The method of any of claims 41-52, further comprising accessing historical sensor data associated with one or more historical adaptations of the interface on the user's face prior to receiving the sensor data, wherein identifying the first characteristic further uses the historical sensor data.
54. The method of any of claims 41-53, further comprising generating a current suitability score using the sensor data and the facial map, wherein the output feedback is generated to improve a subsequent suitability score.
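Claims 54 and 62 describe scoring the current fit and comparing it with a later score once the output feedback has been acted on. A toy scoring rule, with assumed inputs (fraction of the seal perimeter showing thermal anomalies, measured unintentional leak) and assumed weights, might look like this:

def suitability_score(anomaly_fraction, unintentional_leak_lpm):
    """Score in [0, 100]; the weighting is an assumption, not from the specification."""
    score = 100.0 - 60.0 * min(max(anomaly_fraction, 0.0), 1.0) - 2.0 * max(unintentional_leak_lpm, 0.0)
    return max(score, 0.0)

initial = suitability_score(anomaly_fraction=0.30, unintentional_leak_lpm=14.0)        # 54.0
after_feedback = suitability_score(anomaly_fraction=0.05, unintentional_leak_lpm=4.0)  # 89.0
print(initial, after_feedback, after_feedback > initial)  # quality improvement, as in claim 62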
55. The method of any of claims 41-54, wherein receiving sensor data includes receiving audio data from one or more audio sensors, and wherein identifying the first characteristic includes using the audio data to identify unintentional leakage.
56. The method of any one of claims 41 to 55, wherein the one or more sensors comprise a camera, an audio sensor, and a thermal sensor; wherein the sensor data includes camera data, audio data, and thermal data; and wherein identifying the first characteristic comprises:
identifying potential characteristics using at least one of the camera data, the audio data, and the thermal data; and
confirming the potential characteristic using at least one other of the camera data, the audio data, and the thermal data.
57. The method of any one of claims 41 to 56, further comprising:
presenting user instructions, wherein the user instructions indicate actions to be performed by the user; and
receiving a completion signal, wherein the completion signal indicates that the user has performed the action, wherein a first portion of the sensor data is collected before receiving the completion signal and a second portion of the sensor data is collected after receiving the completion signal; and wherein identifying the first characteristic comprises comparing the first portion of the sensor data with the second portion of the sensor data.
58. The method of any of claims 41-57, wherein generating the facial map comprises:
identifying a first individual and a second individual using the received sensor data;
identifying the first individual as being associated with the interface; and
generating the facial map of the first individual.
59. The method of any one of claims 41 to 58, wherein receiving the sensor data comprises:
determining adjustment data from the received sensor data, the adjustment data being associated with: i) Movement of the mobile device, ii) inherent noise of the mobile device, iii) respiratory noise of the user, iv) speech noise of the user, v) changes in ambient lighting, vi) detected transient shadows cast on the user's face, vii) detected transient colored light cast on the user's face, or viii) any combination of i through vii; and
applying an adjustment to at least some of the received sensor data based on the adjustment data.
60. The method of any one of claims 41 to 59, wherein receiving the sensor data comprises:
receiving image data associated with a camera of the one or more sensors, the camera operating in the visible spectrum;
receiving unstabilized data associated with an additional sensor of the one or more sensors, the additional sensor being a ranging sensor or an image sensor operating outside the visible spectrum;
determining image stability information associated with stability of the image data; and
stabilizing the unstabilized data using the image stability information associated with the stability of the image data.
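Claim 60 pairs a visible-spectrum camera with a sensor that is harder to stabilise on its own (a ranging sensor, or an imager outside the visible spectrum). One possible interpretation, assuming the two sensors are already co-registered: estimate the frame-to-frame shift from the visible frames by phase correlation and reuse that shift for the other sensor's frames. The sketch below is standard image-registration code, not the patented method.

import numpy as np

def estimate_shift(prev_gray, curr_gray):
    """Integer (dy, dx) translation between two frames via phase correlation."""
    f = np.fft.fft2(prev_gray) * np.conj(np.fft.fft2(curr_gray))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = int(dy), int(dx)
    h, w = corr.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def stabilise(frame, shift):
    """Undo the estimated camera motion on a co-registered thermal or depth frame."""
    return np.roll(frame, shift, axis=(0, 1))

rng = np.random.default_rng(1)
visible_prev = rng.random((64, 64))
visible_curr = np.roll(visible_prev, (-3, 2), axis=(0, 1))       # camera moved between frames
thermal_curr = np.roll(rng.random((64, 64)), (-3, 2), axis=(0, 1))
shift = estimate_shift(visible_prev, visible_curr)
thermal_stable = stabilise(thermal_curr, shift)
print(shift)  # (3, -2): the visible-light motion estimate, reused to stabilise the thermal frame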
61. The method of any one of claims 41 to 60, wherein the output feedback can be used to improve the current suitability.
62. The method of claim 61, further comprising:
generating an initial score based on the current suitability using the sensor data;
receiving subsequent sensor data associated with a subsequent suitability of the interface on the user's face, wherein the subsequent suitability is based on the current suitability after the output feedback is implemented;
generating a subsequent score based on the subsequent suitability using the subsequent sensor data; and
evaluating the subsequent score, wherein the subsequent score indicates a quality improvement relative to the initial score.
63. The method of any of claims 41-62, wherein identifying the first characteristic associated with the current suitability comprises:
determining a breathing pattern of the user based on the received sensor data;
determining a thermal pattern associated with the user's face based on the received sensor data and the facial map; and
determining a leak characteristic using the breathing pattern and the thermal pattern, wherein the leak characteristic is indicative of a balance between intentional vent leakage and unintentional seal leakage.
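For claim 63, one hedged way to separate intentional vent leakage from unintentional seal leakage is to check whether the temperature at a seal location on the face map fluctuates in phase with exhalation, since warm exhaled air escaping at the seal heats the adjacent skin while the intentional vent flow does not. The correlation threshold and signal names below are assumptions.

import numpy as np

def leak_characteristic(breath_flow, seal_temperature, corr_threshold=0.6):
    """breath_flow: airflow samples (positive = exhalation); seal_temperature:
    temperature samples at one seal location on the face map, same sampling rate."""
    flow = breath_flow - breath_flow.mean()
    temp = seal_temperature - seal_temperature.mean()
    denom = flow.std() * temp.std()
    corr = float((flow * temp).mean() / denom) if denom > 0 else 0.0
    return ("unintentional seal leak suspected"
            if corr > corr_threshold else "leak consistent with intentional vent flow")

t = np.linspace(0, 30, 600)                          # 30 s at 20 Hz
breath = np.sin(2 * np.pi * t / 4.0)                 # about 15 breaths per minute
leaky_seal_temp = 34.0 + 0.4 * np.clip(breath, 0, None) \
    + 0.05 * np.random.default_rng(2).normal(size=600)
print(leak_characteristic(breath, leaky_seal_temp))  # -> "unintentional seal leak suspected"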
64. The method of any one of claims 41-63, wherein the one or more sensors comprise at least two sensors selected from the group consisting of: i) A passive thermal sensor; ii) an active thermal sensor; iii) A camera; iv) an accelerometer; v) a gyroscope; vi) an electronic compass; vii) a magnetometer; viii) a pressure sensor; ix) a microphone; x) a temperature sensor; xi) a proximity sensor; xii) an infrared-based lattice sensor; xiii) LiDAR sensor; xiv) MEMS micromirror projector based sensor; xv) a radio frequency based ranging sensor; and xvi) a wireless network interface.
65. The method of any of claims 41-64, further comprising receiving additional sensor data from one or more additional sensors of a respiratory device, wherein identifying the first characteristic comprises using the additional sensor data.
66. The method of any one of claims 41 to 65, further comprising sending a control signal that, when received by the respiratory device, causes the respiratory device to operate using a set of defined parameters; wherein a first portion of the sensor data is collected when the respiratory device is operating with the set of defined parameters and a second portion of the sensor data is collected when the respiratory device is not operating with the set of defined parameters; and wherein identifying the first characteristic comprises comparing the first portion of the sensor data with the second portion of the sensor data.
67. A method, comprising:
receiving sensor data associated with a current suitability of an interface on a face of a user, the sensor data collected by one or more sensors of a mobile device;
generating a face map using the sensor data, wherein the face map is indicative of one or more features of the user's face;
identifying a first characteristic associated with the current suitability using the sensor data and the facial map, wherein the first characteristic is indicative of a quality of the current suitability, and wherein the first characteristic is associated with a characteristic location on the facial map; and
generating output feedback based on the identified first characteristic and the characteristic location.
68. The method of claim 67, wherein said interface is fluidly coupleable to a respiratory device, and wherein said mobile device is separate from said respiratory device.
69. The method of claim 67 or 68, wherein generating the output feedback comprises:
determining a suggested action that, if implemented, would affect the first characteristic to improve the current suitability; and
presenting the suggested action using an electronic interface of the mobile device.
70. The method of any one of claims 67 to 69, wherein said sensor data comprises infrared data from: i) A passive thermal sensor; ii) an active thermal sensor; or iii) both i and ii.
71. The method of any of claims 67-70, wherein the sensor data comprises distance data, wherein the distance data indicates one or more distances between the one or more sensors of the mobile device and the user's face.
72. The method of claim 71, wherein the one or more sensors comprise: i) A proximity sensor; ii) an infrared-based lattice sensor; iii) LiDAR sensor; iv) MEMS micromirror projector based sensors; or v) any combination of i to iv.
73. The method of any one of claims 67 to 72, wherein the sensor data is collected: i) before the user wears the interface; ii) while the user wears the interface with the current suitability; iii) after the user removes the interface; or iv) any combination of i, ii, and iii.
74. The method of claim 73, further comprising receiving initial sensor data, wherein the initial sensor data is associated with the user's face prior to donning the interface, and wherein identifying the first characteristic comprises comparing the initial sensor data to the sensor data.
75. The method of any one of claims 67 to 74, wherein the first characteristic comprises:
i) A local temperature on the face of the user;
ii) a local temperature change on the face of the user;
iii) A local color on the face of the user;
iv) a local color change on the face of the user;
v) a local contour on the face of the user;
vi) a local contour change on the user's face;
vii) a local profile on the interface;
viii) a local profile change on the interface;
ix) a local temperature at the interface; or alternatively
x) any combination of i to ix.
76. The method of any one of claims 67 to 75, wherein the first characteristic comprises:
i) A vertical position of the interface relative to the one or more features of the user's face;
ii) a horizontal position of the interface relative to the one or more features of the user's face;
iii) A rotational orientation of the interface relative to the one or more features of the user's face;
iv) a distance between an identified feature of the interface and the one or more features of the user's face; or alternatively
v) any combination of i to iv.
77. The method of any one of claims 67 to 76, wherein the one or more sensors comprise one or more orientation sensors, wherein the sensor data comprises orientation sensor data from the one or more orientation sensors, and wherein receiving sensor data comprises:
scanning the face of the user using the mobile device when the mobile device is oriented such that the one or more orientation sensors are oriented toward the face of the user;
tracking progress of the scanning of the face; and
generating a non-visual stimulus that indicates the progress of the scanning of the face.
78. The method of any one of claims 67 to 77, further comprising:
receiving motion data associated with movement of the mobile device; and
applying the motion data to the sensor data to account for movement of the mobile device relative to the user's face.
79. The method of any of claims 67 to 78, further comprising generating an interface map using the sensor data, wherein the interface map indicates a relative position of one or more features of the interface with respect to the face map, and wherein identifying the first characteristic comprises using the interface map.
80. The method of any of claims 67-79, wherein the one or more sensors comprise a camera, and wherein the sensor data comprises camera data, the method further comprising:
receiving sensor calibration data collected by the camera when the camera is oriented toward a calibration surface; and
calibrating the camera data of the sensor data based on the sensor calibration data.
81. The method of any one of claims 67 to 80, further comprising:
identifying a second characteristic using the sensor data and the facial map, the second characteristic being associated with a possible future failure of the interface; and
generating output feedback based on the identified second characteristic, the output feedback being usable to reduce the likelihood that the possible future failure will occur or to delay the occurrence of the possible future failure.
82. The method of any of claims 67-81, further comprising accessing historical sensor data associated with one or more historical adaptations of the interface on the user's face prior to receiving the sensor data, wherein identifying the first characteristic further uses the historical sensor data.
83. The method of any of claims 67 to 82, further comprising generating a current suitability score using the sensor data and the facial map, wherein the output feedback is generated to improve a subsequent suitability score.
84. The method of any of claims 67-83, wherein receiving sensor data includes receiving audio data from one or more audio sensors, and wherein identifying the first characteristic includes using the audio data to identify unintentional leakage.
85. The method of any one of claims 67 to 84, wherein said one or more sensors comprise a camera, an audio sensor, and a thermal sensor; wherein the sensor data includes camera data, audio data, and thermal data; and wherein identifying the first characteristic comprises:
identifying potential characteristics using at least one of the camera data, the audio data, and the thermal data; and
confirming the potential characteristic using at least one other of the camera data, the audio data, and the thermal data.
86. The method of any one of claims 67 to 85, further comprising:
presenting user instructions, wherein the user instructions indicate actions to be performed by the user; and
receiving a completion signal, wherein the completion signal indicates that the user has performed the action, wherein a first portion of the sensor data is collected before receiving the completion signal and a second portion of the sensor data is collected after receiving the completion signal; and wherein identifying the first characteristic comprises comparing the first portion of the sensor data with the second portion of the sensor data.
87. The method of any one of claims 67 to 86, wherein generating the facial map comprises:
identifying a first individual and a second individual using the received sensor data;
identifying the first individual as being associated with the interface; and
generating the facial map of the first individual.
88. The method of any one of claims 67 to 87, wherein receiving the sensor data comprises:
determining adjustment data from the received sensor data, the adjustment data being associated with: i) Movement of the mobile device, ii) inherent noise of the mobile device, iii) respiratory noise of the user, iv) speech noise of the user, v) changes in ambient lighting, vi) detected transient shadows cast on the user's face, vii) detected transient colored light cast on the user's face, or viii) any combination of i through vii; and
applying an adjustment to at least some of the received sensor data based on the adjustment data.
89. The method of any one of claims 67 to 88, wherein receiving the sensor data comprises:
receiving image data associated with a camera of the one or more sensors, the camera operating in the visible spectrum;
receiving unstabilized data associated with an additional sensor of the one or more sensors, the additional sensor being a ranging sensor or an image sensor operating outside the visible spectrum;
determining image stability information associated with stability of the image data; and
stabilizing the unstabilized data using the image stability information associated with the stability of the image data.
90. The method of any one of claims 67 to 89, wherein the output feedback can be used to improve the current suitability.
91. The method of claim 90, further comprising:
generating an initial score based on the current suitability using the sensor data;
receiving subsequent sensor data associated with a subsequent suitability of the interface on the user's face, wherein the subsequent suitability is based on the current suitability after the output feedback is implemented;
generating a subsequent score based on the subsequent suitability using the subsequent sensor data; and
evaluating the subsequent score, wherein the subsequent score indicates a quality improvement relative to the initial score.
92. The method of any one of claims 67-91, wherein identifying the first characteristic associated with the current suitability comprises:
determining a breathing pattern of the user based on the received sensor data;
determining a thermal pattern associated with the user's face based on the received sensor data and the facial map; and
determining a leak characteristic using the breathing pattern and the thermal pattern, wherein the leak characteristic is indicative of a balance between intentional vent leakage and unintentional seal leakage.
93. The method of any one of claims 67 to 92, wherein said one or more sensors comprise at least two sensors selected from the group consisting of: i) A passive thermal sensor; ii) an active thermal sensor; iii) A camera; iv) an accelerometer; v) a gyroscope; vi) an electronic compass; vii) a magnetometer; viii) a pressure sensor; ix) a microphone; x) a temperature sensor; xi) a proximity sensor; xii) an infrared-based lattice sensor; xiii) LiDAR sensor; xiv) MEMS micromirror projector based sensor; xv) a radio frequency based ranging sensor; and xvi) a wireless network interface.
94. The method of any of claims 68-93, further comprising receiving additional sensor data from one or more additional sensors of the respiratory device, wherein identifying the first characteristic comprises using the additional sensor data.
95. The method of any one of claims 68 to 94, further comprising sending a control signal that, when received by the respiratory device, causes the respiratory device to operate using a set of defined parameters; wherein a first portion of the sensor data is collected when the respiratory device is operating with the set of defined parameters and a second portion of the sensor data is collected when the respiratory device is not operating with the set of defined parameters; and wherein identifying the first characteristic comprises comparing the first portion of the sensor data with the second portion of the sensor data.
96. A system, comprising:
a control system comprising one or more processors; and
a memory having machine-readable instructions stored thereon;
wherein the control system is coupled to the memory, and the method of any of claims 67 to 95 is implemented when the machine-readable instructions in the memory are executed by at least one of the one or more processors of the control system.
97. A system for evaluating interface suitability, the system comprising a control system configured to implement the method of any one of claims 67 to 95.
98. A computer program product comprising instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 67 to 94.
99. The computer program product of claim 98, wherein the computer program product is a non-transitory computer-readable medium.
100. A system for determining suitability of an interface to a face of a user for respiratory therapy, the system comprising:
a mobile device comprising a sensor operable to collect sensor data associated with a current suitability of the interface on a face of a user; and
a control system operable to:
generate a face map using the sensor data, wherein the face map is indicative of one or more features of the user's face;
identify a characteristic associated with the current suitability using the sensor data and the face map, wherein the characteristic is indicative of a quality of the current suitability, and wherein the characteristic is associated with a characteristic location on the face map; and
generate output feedback based on the identified characteristic and the characteristic location on the face map.
CN202280016909.9A 2021-02-26 2022-02-28 System and method for continuously adjusting personalized mask shape Pending CN116888682A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/154,223 2021-02-26
US202163168635P 2021-03-31 2021-03-31
US63/168,635 2021-03-31
PCT/US2022/018178 WO2022183116A1 (en) 2021-02-26 2022-02-28 System and method for continuous adjustment of personalized mask shape

Publications (1)

Publication Number Publication Date
CN116888682A (en) 2023-10-13

Family

ID=88262672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280016909.9A Pending CN116888682A (en) 2021-02-26 2022-02-28 System and method for continuously adjusting personalized mask shape

Country Status (1)

Country Link
CN (1) CN116888682A (en)

Similar Documents

Publication Publication Date Title
US11857726B2 (en) Mask sizing tool using a mobile application
US20220023567A1 (en) Intelligent setup and recommendation system for sleep apnea device
US11935252B2 (en) System and method for collection of fit data related to a selected mask
US11724051B2 (en) Systems and methods for detecting an intentional leak characteristic curve for a respiratory therapy system
CN116601721A (en) System and method for identifying user interface
US20230206486A1 (en) Systems and methods for locating user interface leak
US20240091476A1 (en) Systems and methods for estimating a subjective comfort level
US20230098579A1 (en) Mask sizing tool using a mobile application
US20240131287A1 (en) System and method for continuous adjustment of personalized mask shape
CN116888682A (en) System and method for continuously adjusting personalized mask shape
JP2023553957A (en) System and method for determining sleep analysis based on body images
US20230364365A1 (en) Systems and methods for user interface comfort evaluation
WO2022183116A1 (en) System and method for continuous adjustment of personalized mask shape
US20240009416A1 (en) Systems and methods for determining feedback to a user in real time on a real-time video
WO2024023743A1 (en) Systems for detecting a leak in a respiratory therapy system
CA3232840A1 (en) Method and system for selecting a mask
CN116783661A (en) System and method for determining mask advice
CN111511432A (en) Improved delivery of pressure support therapy

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination