WO2022132465A1 - Systems and methods for augmented health monitoring - Google Patents


Info

Publication number
WO2022132465A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensors
subject
data
health
hours
Prior art date
Application number
PCT/US2021/061853
Other languages
French (fr)
Inventor
Fan Li
Mohammadhadi Kiapour
Nan DU
Nan Liu
Jia Li
Original Assignee
DawnLight Technologies Inc.
Priority date
Filing date
Publication date
Application filed by DawnLight Technologies Inc. filed Critical DawnLight Technologies Inc.
Publication of WO2022132465A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0002: Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B 5/0015: Remote monitoring of patients using telemetry, characterised by features of the telemetry system
    • A61B 5/0022: Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B 5/01: Measuring temperature of body parts; diagnostic temperature sensing, e.g. for malignant or inflamed tissue
    • A61B 5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; combined pulse/heart-rate/blood pressure determination; evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; heart catheters for measuring blood pressure
    • A61B 5/0205: Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1116: Determining posture transitions
    • A61B 5/1117: Fall detection
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/021: Measuring pressure in heart or blood vessels
    • A61B 5/024: Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/05: Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; measuring using microwaves or radio waves
    • A61B 5/053: Measuring electrical impedance or conductance of a portion of the body
    • A61B 5/08: Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B 5/0816: Measuring devices for examining respiratory frequency
    • A61B 5/145: Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B 5/14532: Measuring characteristics of blood in vivo for measuring glucose, e.g. by tissue impedance measurement
    • A61B 5/1455: Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometrical oximeters
    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/316: Modalities, i.e. specific diagnostic methods
    • A61B 5/318: Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B 5/369: Electroencephalography [EEG]
    • A61B 5/389: Electromyography [EMG]
    • A61B 5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6801: Arrangements of detecting, measuring or recording means specially adapted to be attached to or worn on the body surface
    • A61B 5/6802: Sensor mounted on worn items

Definitions

  • Patient monitoring may require collection and analysis of health information over a period of time to detect clinical signs of a health status of the patient (e.g., occurrence of a disease or disorder or abnormal symptoms or vital signs).
  • Patient monitoring, whether performed inside or outside of a clinical setting (e.g., a hospital), may be performed using sensors (e.g., contactless sensors) that enable non-invasive collection of health data and assessment of health statuses of the patient.
  • Such systems and methods may allow patients with elevated risk of an adverse health condition to be accurately monitored for deterioration, occurrence of an adverse health condition, or recurrence of an adverse health condition, whether inside or outside of a clinical setting.
  • the systems and methods may process health data including collected sensor information or other clinical health data (e.g., obtained by blood testing, imaging, etc.).
  • the present disclosure provides a system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire health data of the subject over a period of time; and an electronic device, comprising: an electronic display; a transceiver; and one or more computer processors operatively coupled to the electronic display and the transceiver, which one or more computer processors are configured to (i) receive the health data from the plurality of sensors through the transceiver, (ii) process the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time, and (iii) provide the output for display on the electronic display.
  • the plurality of contactless sensors is configured to acquire at least one of audio data of the subject, image data of the subject, video data of the subject, activity data of the subject, environment data of the subject, posture data of the subject, vital sign measurements of the subject, and any combination thereof.
  • the plurality of contactless sensors comprises one or more audio sensors configured to acquire the audio data of the subject.
  • the one or more audio sensors are selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.
  • the plurality of contactless sensors comprises one or more image sensors configured to acquire the image data of the subject.
  • the one or more image sensors are selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
  • the plurality of contactless sensors comprises one or more video sensors configured to acquire the video data of the subject.
  • the one or more video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
  • the plurality of contactless sensors comprises one or more environment sensors configured to acquire the environment data of the subject.
  • the one or more environment sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
  • the plurality of contactless sensors comprises one or more posture sensors configured to acquire the posture data of the subject.
  • the one or more posture sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
  • the plurality of contactless sensors comprises one or more contactless vital sign sensors configured to acquire the vital sign data of the subject.
  • the vital sign data comprise one or more measurements selected from the group consisting of respiratory rate and body temperature.
  • the one or more contactless vital sign sensors are selected from the group consisting of a respiratory rate monitor and a thermometer.
  • the plurality of sensors further comprises one or more wearable sensors configured to acquire health data of the subject.
  • the one or more wearable sensors comprise one or more wearable vital sign sensors configured to acquire vital sign data of the subject.
  • the vital sign data comprise one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.
  • the one or more wearable vital sign sensors are selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, and an electromyography machine.
  • the transceiver comprises a wireless transceiver.
  • the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.
  • the one or more computer processors are further configured to store the acquired health data in a database.
  • the database comprises a cloud-based database.
  • the one or more computer processors are further configured to present an alert on the electronic display based at least in part on the generated output.
  • the one or more computer processors are further configured to transmit an alert over a network to a health care provider of the subject based at least in part on the output.
  • the alert comprises instructions to administer care or treatment to the subject.
  • administering the treatment comprises providing a medication to the subject.
  • administering the care comprises turning the subject around.
  • the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
  • the trained algorithm comprises a machine learning-based classifier configured to process the health data to generate the output indicative of the health status of the subject.
  • the machine learning-based classifier is selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
  • the health status of the subject is a presence, absence, progression, regression, diagnosis, or prognosis of an adverse health condition.
  • the adverse health condition is a disease or disorder.
  • the disease or disorder is selected from the group consisting of: mobility deterioration, delirium, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), sleep apnea, sleep disorder, stroke, dementia, asthma, psychiatric disorders, cognitive decline, musculoskeletal disorders, and aging effects.
  • the health status of the subject is a fall or a vital sign abnormality.
  • the vital sign abnormality is selected from the group consisting of: breathing difficulties, sudden infant death syndrome (SIDS), atrial fibrillation (AF), cardiac arrhythmia, respiratory arrhythmia, and fever.
  • the subject has received a clinical treatment or procedure.
  • the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof.
  • the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.
  • the one or more computer processors are configured to further analyze an interaction between the subject and a caregiver of the subject, and provide an assessment of quality of care of the subject based at least in part on the analysis of the interaction between the subject and the caregiver of the subject.
  • the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99.
  • the period of time begins at least about 24 hours, about 12 hours, about 10 hours, about 8 hours, about 6 hours, about 4 hours, about 3 hours, about 2 hours, about 1 hour, about 45 minutes, about 30 minutes, about 20 minutes, about 15 minutes, about 10 minutes, about 5 minutes, about 4 minutes, about 3 minutes, about 2 minutes, about 1 minute, about 30 seconds, about 20 seconds, about 15 seconds, about 10 seconds, or about 5 seconds prior to onset of the health condition.
  • the one or more computer processors are configured to perform (i), (ii), and (iii) in real time or substantially in real time.
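The three recited processor operations, (i) receive health data, (ii) process it with a trained algorithm, and (iii) display the output, can be sketched end to end. The rule-based classifier below is a stand-in, assumed purely for illustration, where the claims call for a trained machine-learning model; all names and reference ranges here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str      # e.g. "respiratory_rate", "body_temperature"
    value: float

def classify_health_status(readings):
    """Stand-in for the trained algorithm: flag any reading outside a
    simple reference range. A real system would apply a trained
    machine-learning classifier to the multi-sensor stream."""
    limits = {"respiratory_rate": (12, 20), "body_temperature": (36.1, 37.2)}
    for r in readings:
        lo, hi = limits.get(r.sensor, (float("-inf"), float("inf")))
        if not lo <= r.value <= hi:
            return f"abnormal: {r.sensor} = {r.value}"
    return "normal"

def monitor(readings, display=print):
    # (i) health data received via the transceiver, (ii) classified,
    # (iii) the output provided for display on the electronic display
    status = classify_health_status(readings)
    display(status)
    return status
```

In the claimed system the display callback would drive the electronic display, and an out-of-range status could additionally trigger the alert transmission described above.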
  • the present disclosure provides a method for monitoring a subject, comprising: (a) receiving, using a transceiver, health data of the subject acquired from a plurality of sensors over a period of time, which plurality of sensors comprises a plurality of contactless sensors; (b) computer processing the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time; and (c) providing the output for display on an electronic display.
  • the health data comprises at least one of audio data of the subject, image data of the subject, video data of the subject, activity data of the subject, environment data of the subject, posture data of the subject, vital sign measurements of the subject, and any combination thereof.
  • the plurality of contactless sensors comprises one or more audio sensors configured to acquire the audio data of the subject.
  • the one or more audio sensors are selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.
  • the plurality of contactless sensors comprises one or more image sensors configured to acquire the image data of the subject.
  • the one or more image sensors are selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
  • the plurality of contactless sensors comprises one or more video sensors configured to acquire the video data of the subject.
  • the one or more video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
  • the plurality of contactless sensors comprises one or more environment sensors configured to acquire the environment data of the subject.
  • the one or more environment sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
  • the plurality of contactless sensors comprises one or more posture sensors configured to acquire the posture data of the subject.
  • the one or more posture sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
  • the plurality of contactless sensors comprises one or more contactless vital sign sensors configured to acquire the vital sign data of the subject.
  • the vital sign data comprise one or more measurements selected from the group consisting of respiratory rate and body temperature.
  • the one or more contactless vital sign sensors are selected from the group consisting of a respiratory rate monitor and a thermometer.
  • the plurality of sensors further comprises one or more wearable sensors configured to acquire health data of the subject.
  • the one or more wearable sensors comprise one or more wearable vital sign sensors configured to acquire vital sign data of the subject.
  • the vital sign data comprise one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.
  • the one or more wearable vital sign sensors are selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, and an electromyography machine.
  • the transceiver comprises a wireless transceiver.
  • the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.
  • the method further comprises storing the acquired health data in a database.
  • the database comprises a cloud-based database.
  • the method further comprises presenting an alert on the electronic display based at least in part on the generated output.
  • the method further comprises transmitting an alert over a network to a health care provider of the subject based at least in part on the output.
  • the alert comprises instructions to administer care or treatment to the subject.
  • administering the treatment comprises providing a medication to the subject.
  • administering the care comprises turning the subject around.
  • the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
  • the trained algorithm comprises a machine learning-based classifier configured to process the health data to generate the output indicative of the health status of the subject.
  • the machine learning-based classifier is selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
  • the health status of the subject is a presence, absence, progression, regression, diagnosis, or prognosis of an adverse health condition.
  • the adverse health condition is a disease or disorder.
  • the disease or disorder is selected from the group consisting of: mobility deterioration, delirium, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), sleep apnea, sleep disorder, stroke, dementia, asthma, psychiatric disorders, cognitive decline, musculoskeletal disorders, and aging effects.
  • the health status of the subject is a fall or a vital sign abnormality.
  • the vital sign abnormality is selected from the group consisting of: breathing difficulties, sudden infant death syndrome (SIDS), atrial fibrillation (AF), cardiac arrhythmia, respiratory arrhythmia, and fever.
  • the subject has received a clinical treatment or procedure.
  • the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof.
  • the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.
  • the method further comprises analyzing an interaction between the subject and a caregiver of the subject, and providing an assessment of quality of care of the subject based at least in part on the analysis of the interaction between the subject and the caregiver of the subject.
  • the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
  • the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99.
  • the period of time begins at least about 24 hours, about 12 hours, about 10 hours, about 8 hours, about 6 hours, about 4 hours, about 3 hours, about 2 hours, about 1 hour, about 45 minutes, about 30 minutes, about 20 minutes, about 15 minutes, about 10 minutes, about 5 minutes, about 4 minutes, about 3 minutes, about 2 minutes, about 1 minute, about 30 seconds, about 20 seconds, about 15 seconds, about 10 seconds, or about 5 seconds prior to onset of the health condition.
  • the method further comprises performing (i), (ii), and (iii) in real time or substantially in real time.
  • Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.
  • Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto.
  • the computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.
  • FIG. 1 illustrates an overview of an example method for monitoring a subject.
  • FIG. 2 illustrates an overview of an example workflow for monitoring a subject.
  • FIG. 3 shows a computer system that is programmed or otherwise configured to implement methods provided herein.
  • the term “subject,” as used herein, generally refers to a human such as a patient.
  • the subject may be a person (e.g., a patient) with a disease or disorder, or a person that has been treated for a disease or disorder, or a person that is being monitored for recurrence of a disease or disorder, or a person that is suspected of having the disease or disorder, or a person that does not have or is not suspected of having the disease or disorder.
  • the disease or disorder may be an infectious disease, an immune disorder or disease, a cancer, a genetic disease, a degenerative disease, a lifestyle disease, an injury, a rare disease, or an age related disease.
  • the infectious disease may be caused by bacteria, viruses, fungi and/or parasites.
  • the disease or disorder may comprise sepsis, atrial fibrillation, stroke, heart attack, and other preventable outpatient illnesses.
  • the disease or disorder may comprise deterioration or recurrence of a disease or disorder for which the subject has previously been treated.
  • Patient monitoring may require collection and analysis of health information over a period of time to detect clinical signs of a health status of the patient (e.g., occurrence of a disease or disorder or abnormal symptoms or vital signs).
• Patient monitoring, whether performed inside or outside of a clinical setting (e.g., a hospital), may be performed using sensors (e.g., contactless sensors) that enable non-invasive collection of health data and assessment of health statuses of the patient.
  • a patient who has been treated for a disease or disorder at a hospital or other clinical setting may need to be monitored for occurrence or recurrence of the disease or disorder (or occurrence of a complication related to an administered treatment for the disease or disorder).
• Similarly, a patient who has received an operation (e.g., a surgery) may need to be monitored for post-operative complications.
  • Patient monitoring may include detecting adverse health conditions or events (e.g., a fall) or causes thereof (e.g., bacteria or virus).
  • Patient monitoring may detect clinical complications such as stroke, pneumonia, heart failure, myocardial infarction (heart attack), chronic obstructive pulmonary disease (COPD), general deterioration, influenza, atrial fibrillation, and panic or anxiety attack.
  • patient monitoring may be performed in a hospital or other clinical setting using specialized equipment such as medical monitors (e.g., cardiac monitoring, respiratory monitoring, neurological monitoring, blood glucose monitoring, hemodynamic monitoring, and body temperature monitoring) to measure and/or collect health data such as vital sign measurements (e.g., heart rate, blood pressure, respiratory rate, and pulse oximetry).
• patient monitoring outside of a clinical setting (e.g., at a subject’s home) may pose challenges for non-invasive and contactless collection of health data and accurate assessment of clinical health states of a subject.
  • the present disclosure provides systems and methods that may advantageously collect and analyze health data over a period of time to accurately and non-invasively assess the subject’s health status. Such systems and methods may allow patients with elevated risk of an adverse health condition to be accurately monitored for deterioration, occurrence of an adverse health condition, or recurrence of an adverse health condition, whether inside or outside of a clinical setting. In some embodiments, the systems and methods may process health data including collected sensor information or other clinical health data (e.g., obtained by blood testing, imaging, etc.).
• These systems and methods may improve the accuracy of assessment of health complications, reduce clinical health care costs, and improve patients’ quality of life.
  • such systems and methods may produce accurate detections or predictions of likelihood of occurrence or recurrence of a disease, disorder, or complication that are clinically actionable by physicians (or other health care workers) toward deciding whether to discharge patients from a hospital for monitoring in a home setting, thereby reducing clinical health care costs.
  • such systems and methods may enable in-home patient monitoring, thereby increasing patients’ quality of life compared to remaining hospitalized or making frequent visits to clinical care sites.
  • a goal of patient monitoring (e.g., in-home) may include preventing hospital readmissions for a discharged patient.
  • the present disclosure provides a system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire health data of the subject over a period of time; and an electronic device, comprising: an electronic display; a transceiver; and one or more computer processors operatively coupled to the electronic display and the transceiver, which one or more computer processors are configured to (i) receive the health data from the plurality of sensors through the transceiver, (ii) process the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time, and (iii) provide the output for display on the electronic display.
• the collected and transmitted vital sign information may be aggregated, for example, by batching and uploading to a computer server (e.g., a secure cloud database), where artificial intelligence algorithms may analyze the data in a continuous or real-time manner. If an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) is detected or predicted, the computer server may send a real-time alert to a health care provider (e.g., a general practitioner and/or treating physician).
  • the health care provider may subsequently perform follow-up care, such as contacting the patient and requesting that the patient return to the hospital for further treatment or clinical inspection (e.g., monitoring, diagnosis, or prognosis).
  • the health care provider may prescribe a treatment, a clinical procedure, or a care procedure to be administered to the subject based on the real-time alert.
  • the real-time alert may indicate instructions for administering a treatment, a clinical procedure, or a care procedure to be administered to the subject.
  • a monitoring system may be used to collect and analyze vital sign information from a subject over a period of time to predict a likelihood of the subject having a disease, disorder, or complication related to an administered treatment for a disease or disorder.
  • the monitoring system may comprise a contactless monitoring device.
  • the contactless monitoring device may be positioned in proximity to a subject’s body or a portion thereof (e.g., the subject’s head or chest) and collect and transmit vital sign information to the subject’s smartphone or other mobile device.
  • the monitoring system may be used in a hospital or other clinical setting or in a home setting of the subject.
  • the contactless monitoring device may be configured to measure, collect, and/or record health data, such as vital sign data of the subject.
  • the contactless monitoring device may be further configured to transmit such health data (e.g., wirelessly) to a device, such as a computer or mobile device (e.g., a smartphone, a tablet, a laptop, a smart watch, or smart glasses).
  • the health data may include vital sign data.
  • FIG. 1 illustrates an overview of an example method 100 for monitoring a subject.
  • the method 100 may comprise receiving (e.g., using a transceiver) health data of the subject acquired from a plurality of sensors, as in operation 102.
  • the health data may be acquired over a period of time.
  • the plurality of sensors comprises a plurality of contactless sensors.
  • the method 100 may comprise computer processing the health data using a trained algorithm to generate an output indicative of a health status of the subject (e.g., over the period of time), as in operation 104.
  • the method 100 may comprise providing the output for display on an electronic display, as in operation 106.
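The three operations of method 100 can be sketched as a simple pipeline. This is a minimal illustration only: the function names are hypothetical, and the mean-based check stands in for the trained algorithm, which the disclosure does not reduce to a single formula.

```python
from statistics import mean

def receive_health_data(sensor_readings):
    """Operation 102: aggregate readings received from a plurality of sensors."""
    return [r for readings in sensor_readings.values() for r in readings]

def process_health_data(health_data, threshold=100):
    """Operation 104: stand-in for the trained algorithm -- here a simple
    mean test produces an output indicative of health status (illustrative)."""
    avg = mean(health_data)
    return {"mean": avg, "status": "alert" if avg > threshold else "normal"}

def display_output(output):
    """Operation 106: format the output for an electronic display."""
    return f"Status: {output['status']} (mean={output['mean']:.1f})"

readings = {"heart_rate": [72, 75, 71], "respiratory_rate": [16, 15, 17]}
result = process_health_data(receive_health_data(readings))
```

In a deployed system, operation 104 would invoke the trained machine learning classifier described later in the disclosure rather than a fixed threshold.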
  • a system of the present disclosure may comprise a contactless monitoring device, one or more computing devices, a software application (e.g., mobile application), and a web database.
  • the system may comprise a contactless monitoring device to measure health data of a patient, a user interface (e.g., graphical user interface, or GUI) of the application (e.g., to enable a user to control collection, measurement, recording, storage, and/or analysis of health data for assessment of health states), and computer hardware and/or software for storage and/or analytics of the collected health data (e.g., health information).
  • FIG. 2 illustrates an overview of an example workflow for monitoring a subject.
  • the workflow may comprise using contactless sensors and/or wearable devices to acquire health data of a subject.
  • the sensors may be disposed (e.g., at fixed positions or variable positions) around one or more locations (e.g., a hospital room, a subject’s home or a portion thereof, a senior care facility or a portion thereof, etc.).
  • the sensors may collect a variety of different types of information from multiple sources to broadly encompass a health state of a subject.
  • the contactless sensors may comprise a combination of audio sensors configured to acquire audio data of the subject, video sensors configured to acquire video data of the subject, environmental sensors configured to acquire environmental data of the subject, and/or vital sign sensors configured to acquire vital sign data of the subject.
  • the contactless sensors may be used alone, or in conjunction or combination with wearable devices.
  • the wearable devices may comprise vital sign sensors configured to acquire vital sign data of the subject (e.g., blood pressure monitor, heart rate monitor, blood oxygenation monitor, blood glucose monitor, fetal monitor, etc.).
  • the wearable devices may be configured for specific disease care (e.g., hypertension, hypercholesterolemia, diabetes, organ failure or impairment, etc.) of the subject.
  • the wearable devices may be light enough to be worn by a subject comfortably (e.g., for extended periods of time).
  • the wearable devices may be reversibly attached to a subject’s body (e.g., using a clip, a strap, an adhesive patch, etc.).
  • the sensor data acquired from the contactless sensors and/or the wearable devices may be fused together to obtain a comprehensive set of acquired health data of the subject.
  • the sensor data may be used to augment analysis of existing health data of the subject (e.g., clinical health data, electronic medical records, etc.).
  • the workflow may comprise using a data platform to receive the health data.
  • the workflow may comprise applying an intelligent application to process the health data (e.g., using trained machine learning algorithms) to produce one or more outcomes.
  • the outcomes may comprise a fall detection of the subject, a vital sign anomaly of the subject, a risky event detection of the subject, etc.
  • the trained machine learning algorithms may be selected and optimized based on the type of data collected (e.g., input data) and the outcomes being generated (e.g., output data).
• Sensors may be connected to the system through a communications link.
• the communications link may be a wired link, such as a USB, serial, or Ethernet connection, or a wireless link, such as a Bluetooth link, a WiFi link, a radio frequency (RF) link, or a Zigbee link.
  • the communications link may be implemented using a wireless transceiver, such as a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.
  • the communications link may be part of a network, such as a local area network (LAN), a wide area network (WAN), or a cloud-based network.
  • the system may comprise computer processors configured to perform analysis of captured sensor data.
  • the computer processors may comprise an edge device used to detect vital signs and abnormal events.
  • the computer processors may comprise a server containing a database and components for data analysis. In some embodiments, these functions may be implemented on one or more computing devices.
  • the data may be measured, collected, and/or recorded in real-time (e.g., by using suitable biosensors, mechanical sensors, audio sensors, image sensors, video sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, etc.).
  • the collected data may be transmitted continuously to the device (e.g., through a wireless transceiver such as a Bluetooth transceiver).
  • the device may be used to monitor a subject (e.g., patient) over a period of time based on the acquired health data, for example, by assessing the subject’s health status over the period of time.
  • the software application of the monitoring system may receive data sent from the contactless monitoring device at regular intervals, decode the information, and then store the clinical health data (e.g., vital sign data) in a local database on the mobile device itself.
• the regular intervals may be about 1 second, about 5 seconds, about 10 seconds, about 15 seconds, about 20 seconds, about 30 seconds, about 1 minute, about 2 minutes, about 5 minutes, about 10 minutes, about 20 minutes, about 30 minutes, about 60 minutes, about 90 minutes, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, or about 24 hours, thereby providing real-time or near real-time updates of clinical health data.
  • the regular intervals may be sufficient to enable methods of the present disclosure to be performed in real-time or substantially real-time.
  • the regular intervals may be adjustable by the user or in response to battery consumption requirements. For example, intervals may be extended in order to decrease battery consumption.
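One way to realize a battery-responsive reporting interval is a simple adjustment rule. The function name, thresholds, and extension factor below are illustrative assumptions, not values stated in the disclosure:

```python
def adjust_interval(base_interval_s, battery_pct, low_battery_pct=20, factor=4):
    """Extend the reporting interval when the battery is low, in order to
    decrease battery consumption. Thresholds are illustrative assumptions."""
    if battery_pct <= low_battery_pct:
        return base_interval_s * factor
    return base_interval_s
```

A user-facing setting could override `base_interval_s` directly, covering the user-adjustable case described above.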
  • the data may be localized without leaving the user’s device.
  • the local database may be encrypted, to prevent the exposure of sensitive data (e.g., in the event that the user’s phone becomes lost).
  • the local database may require authentication (e.g., by password or biometry) by the user to grant access to the clinical health data and profiles.
  • the system or device may comprise a software application configured to allow a user to pair with, control, and view collected data.
  • the software application may be configured to allow a user to use a computer processor or a mobile device (e.g., a smartphone, a tablet, a laptop, a smart watch, or smart glasses) to pair with the contactless monitoring device (e.g., through a wireless transceiver such as a Bluetooth transceiver) for transmission of data and/or control signals.
  • the software application may comprise a graphical user interface (GUI) to allow the user to view trends, statistics, and/or alerts generated based on their measured, collected, or recorded data (e.g., currently measured data, previously collected or recorded data, or a combination thereof).
  • the GUI may allow the user to view historical or average trends of a set of data over a period of time (e.g., on an hourly basis, on a daily basis, on a weekly basis, or on a monthly basis).
  • the software application may further communicate with a web-based software application, which may be configured to store and analyze the recorded data.
  • the recorded data may be stored in a database (e.g., a computer server or on a cloud network) for real-time or future processing and analysis.
  • Health care providers such as physicians and treating teams of a patient (e.g., the user) may have access to patient alerts, health data (e.g., vital sign data), and/or predictions or assessments generated from such data.
  • Such access may be provided by a web-based dashboard (e.g., a GUI).
  • the web-based dashboard may be configured to display, for example, patient metrics, recent alerts, and/or prediction of health outcomes (e.g., rate or likelihood of deterioration and/or complication).
  • health care providers may determine clinical decisions or outcomes based at least in part on such displayed alerts, data, and/or predictions or assessments generated from such data.
  • a physician may instruct the patient to undergo one or more clinical tests at the hospital or other clinical site, based at least in part on patient metrics or on alerts detecting or predicting an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) in the subject over a period of time.
  • the monitoring system may generate and transmit such alerts to health care providers when a certain predetermined criterion is met (e.g., a minimum threshold for a likelihood of deterioration of the patient’s state, presence of a disease or disorder, or occurrence of a complication).
  • Such a minimum threshold may be, for example, at least about a 5% likelihood, at least about a 10% likelihood, at least about a 20% likelihood, at least about a 25% likelihood, at least about a 30% likelihood, at least about a 35% likelihood, at least about a 40% likelihood, at least about a 45% likelihood, at least about a 50% likelihood, at least about a 55% likelihood, at least about a 60% likelihood, at least about a 65% likelihood, at least about a 70% likelihood, at least about a 75% likelihood, at least about an 80% likelihood, at least about a 85% likelihood, at least about a 90% likelihood, at least about a 95% likelihood, at least about a 96% likelihood, at least about a 97% likelihood, at least about a 98% likelihood, or at least about a 99% likelihood.
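The predetermined-criterion check above can be sketched as a threshold comparison on the predicted likelihood. The function and message format are hypothetical; only the idea of a minimum likelihood threshold comes from the disclosure:

```python
def evaluate_alert(likelihood, threshold=0.90):
    """Generate an alert payload when the predicted likelihood of an adverse
    health condition meets or exceeds the predetermined minimum threshold."""
    if likelihood >= threshold:
        return {"alert": True, "likelihood": likelihood,
                "message": f"Adverse condition predicted ({likelihood:.0%} likelihood)"}
    return {"alert": False, "likelihood": likelihood}
```

The resulting payload could then be routed to a health care provider via any of the transmission channels (phone call, SMS/MMS, e-mail, dashboard) described later in the disclosure.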
  • a physician may prescribe a therapeutically effective dose of a treatment (e.g., drug), a clinical procedure, or further clinical testing to be administered to the patient based at least in part on patient metrics or on alerts detecting or predicting an adverse health condition (e.g., deterioration of the patient’s health state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) in the subject over a period of time.
  • the physician may prescribe an anti-inflammatory therapeutic in response to an indication of inflammation in the patient, or an analgesic therapeutic in response to an indication of pain in the patient.
  • Such a prescription of a therapeutically effective dose of a treatment (e.g., drug), a clinical procedure, or further clinical testing may be determined without requiring an in-person clinical appointment with the prescribing physician.
  • the software application (e.g., mobile application) of the contactless monitoring system may utilize or access external capabilities of artificial intelligence techniques to develop signatures for patient deterioration and disease states.
  • the web-based software may further use these signatures to accurately predict deterioration (e.g., hours to days earlier than with traditional clinical care).
  • the software application may analyze acquired health data from a subject (patient) to generate a likelihood of the subject having an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication).
  • the web-based software may further use these signatures to accurately predict adverse respiratory states (e.g., minutes, hours, or days earlier than with traditional clinical care).
  • the mobile device application may apply a trained (e.g., prediction) algorithm to the acquired health data to generate the likelihood of the subject having an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication).
  • the trained algorithm may comprise an artificial intelligence based classifier, such as a machine learning based classifier, configured to process the acquired health data to generate the likelihood of the subject having the disease or disorder.
  • the machine learning classifier may be trained using clinical datasets from one or more cohorts of patients, e.g., using clinical health data of the patients (e.g., vital sign data) as inputs and known clinical health outcomes (e.g., occurrence or recurrence of a disease or disorder) of the patients as outputs to the machine learning classifier.
  • the machine learning classifier may comprise one or more machine learning algorithms.
  • machine learning algorithms may include a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network (such as a deep neural network (DNN), a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) recurrent neural network (RNN), or a gated recurrent unit (GRU) recurrent neural network (RNN)), deep learning, or other supervised learning algorithm or unsupervised learning algorithm for classification and regression.
  • the machine learning classifier may be trained using one or more training datasets corresponding to patient data.
  • Training datasets may be generated from, for example, one or more cohorts of patients having common clinical characteristics (features) and clinical outcomes (labels). Training datasets may comprise a set of features and labels corresponding to the features. Features may correspond to algorithm inputs comprising patient demographic information derived from electronic medical records (EMR) and medical observations.
• Features may comprise clinical characteristics such as, for example, certain ranges or categories of vital sign measurements, such as heart rate, heart rate variability, blood pressure (e.g., systolic and diastolic), respiratory rate, blood oxygen concentration (SpO2), carbon dioxide concentration in respiratory gases, a hormone level, sweat analysis, blood glucose, body temperature, impedance (e.g., bioimpedance), conductivity, capacitance, resistivity, electromyography, galvanic skin response, neurological signals (e.g., electroencephalography), immunology markers, and other physiological measurements.
  • Features may comprise patient information such as patient age, patient medical history, other medical conditions, current or past medications, and time since the last observation. For example, a set of features collected from a given patient at a given time point may collectively serve as a vital sign signature, which may be indicative of a health state or status of the patient at the given time point.
  • ranges of vital sign measurements may be expressed as a plurality of disjoint continuous ranges of continuous measurement values
• categories of vital sign measurements may be expressed as a plurality of disjoint sets of measurement values (e.g., {“high”, “low”}, {“high”, “normal”}, {“low”, “normal”}, {“high”, “borderline high”, “normal”, “low”}, etc.).
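Mapping a continuous measurement onto one of a plurality of disjoint categories can be done with sorted cut points. The boundaries below are illustrative placeholders, not clinical guidance:

```python
def categorize(value, boundaries, labels):
    """Map a continuous vital sign value onto disjoint categories.
    `boundaries` is a sorted list of upper bounds partitioning the axis
    into len(labels) disjoint ranges (illustrative sketch)."""
    for bound, label in zip(boundaries, labels):
        if value < bound:
            return label
    return labels[-1]

# Hypothetical systolic blood pressure cut points -- for illustration only.
sbp_bounds = [90, 120, 140]
sbp_labels = ["low", "normal", "borderline high", "high"]
```

Each value falls into exactly one range, so the resulting categories are disjoint as the passage requires.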
  • Clinical characteristics may also include clinical labels indicating the patient’s health history, such as a diagnosis of a disease or disorder, a previous administration of a clinical treatment (e.g., a drug, a surgical treatment, chemotherapy, radiotherapy, immunotherapy, etc.), behavioral factors, or other health status (e.g., hypertension or high blood pressure, hyperglycemia or high blood glucose, hypercholesterolemia or high blood cholesterol, history of allergic reaction or other adverse reaction, etc.).
  • Labels may comprise clinical outcomes such as, for example, a presence, absence, diagnosis, or prognosis of an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) in the patient.
  • Clinical outcomes may include a temporal characteristic associated with the presence, absence, diagnosis, or prognosis of the adverse health condition in the patient. For example, temporal characteristics may be indicative of the patient having had an occurrence of the adverse health condition within a certain period of time after a previous clinical outcome (e.g., being discharged from the hospital, undergoing an organ transplantation or other surgical operation, undergoing a clinical procedure, etc.).
  • Such a period of time may be, for example, about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 2 days, about 3 days, about 4 days, about 5 days, about 6 days, about 7 days, about 10 days, about 2 weeks, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 6 months, about 8 months, about 10 months, about 1 year, or more than about 1 year.
• Input features may be structured by aggregating the data into bins or alternatively using a one-hot encoding with the time since the last observation included. Inputs may also include feature values or vectors derived from the previously mentioned inputs, such as cross-correlations calculated between separate vital sign measurements over a fixed period of time, and the discrete derivative or the finite difference between successive measurements.
  • Such a period of time may be, for example, about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 2 days, about 3 days, about 4 days, about 5 days, about 6 days, about 7 days, about 10 days, about 2 weeks, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 6 months, about 8 months, about 10 months, about 1 year, or more than about 1 year.
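The two derived features named above, the finite difference between successive measurements and the cross-correlation between separate vital sign series, can be sketched as follows (a normalized Pearson form is used for the correlation; the disclosure does not fix a normalization, so that choice is an assumption):

```python
from math import sqrt

def finite_difference(series):
    """Discrete derivative: difference between successive measurements."""
    return [b - a for a, b in zip(series, series[1:])]

def cross_correlation(x, y):
    """Zero-lag cross-correlation between two equal-length vital sign
    series over a fixed period, normalized to [-1, 1] (Pearson form)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den
```

Either output could be appended to the feature vector alongside the raw observations.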
  • Training records may be constructed from sequences of observations. Such sequences may comprise a fixed length for ease of data processing. For example, sequences may be zero-padded or selected as independent subsets of a single patient’s records.
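Fixing the sequence length by zero-padding, as described above, is a one-liner; truncation of over-long sequences is added here as an assumption for symmetry:

```python
def zero_pad(sequence, fixed_length):
    """Pad an observation sequence with zeros to a fixed length for ease
    of data processing; sequences longer than the fixed length are
    truncated (truncation is an illustrative assumption)."""
    return (sequence + [0] * fixed_length)[:fixed_length]
```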
  • the machine learning classifier algorithm may process the input features to generate output values comprising one or more classifications, one or more predictions, or a combination thereof.
  • classifications or predictions may include a binary classification of a disease or a non-disease state, a classification between a group of categorical labels (e.g., ‘no adverse condition’, ‘apparent adverse condition’, and ‘likely adverse condition’), a likelihood (e.g., relative likelihood or probability) of developing a particular disease, disorder, or adverse condition, a score indicative of a presence of disease, disorder, or adverse condition, a score indicative of a level of systemic inflammation or other symptoms experienced by the patient, a ‘risk factor’ for the likelihood of incidence or mortality of the patient, a prediction of the time at which the patient is expected to have developed the disease, disorder, or adverse condition, and a confidence interval for any numeric predictions.
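A classifier likelihood can be mapped onto the group of categorical labels mentioned above. The cut points, and the assignment of each tier to a label, are illustrative assumptions rather than values from the disclosure:

```python
def classify(likelihood, low=0.3, high=0.7):
    """Map a predicted likelihood onto one of the categorical labels
    ('no adverse condition', 'apparent adverse condition', 'likely
    adverse condition'); cut points are hypothetical."""
    if likelihood < low:
        return "no adverse condition"
    if likelihood < high:
        return "apparent adverse condition"
    return "likely adverse condition"
```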
• Various machine learning techniques may be cascaded such that the output of a machine learning technique may also be used as input features to subsequent layers or subsections.
  • datasets may be sufficiently large to generate statistically significant classifications or predictions.
  • datasets may comprise: health record databases of de-identified data including vital sign observations, databases of ambulatory vital sign observations collected via tele-health programs, databases of vital sign observations collected from fitness trackers, vital sign observations from a hospital or other clinical setting, vital sign measurements collected using an FDA-approved monitoring device, and vital sign measurements collected using contactless monitoring devices of the present disclosure.
  • datasets are annotated or labeled.
  • Datasets may be split into subsets (e.g., discrete or overlapping), such as a training dataset, a development dataset, and a test dataset.
  • a dataset may be split into a training dataset comprising 80% of the dataset, a development dataset comprising 10% of the dataset, and a test dataset comprising 10% of the dataset.
  • the training dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset.
  • the development dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset.
  • the test dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset.
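The 80/10/10 split described above can be sketched as follows. This is a minimal illustration; the function name, shuffling strategy, and seed are assumptions, not part of the disclosure:

```python
import random

def split_dataset(records, train_frac=0.8, dev_frac=0.1, seed=0):
    """Shuffle records and split into train/dev/test subsets; the test
    subset is whatever remains after the train and dev fractions."""
    rng = random.Random(seed)
    shuffled = list(records)
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_dev = int(len(shuffled) * dev_frac)
    train = shuffled[:n_train]
    dev = shuffled[n_train:n_train + n_dev]
    test = shuffled[n_train + n_dev:]
    return train, dev, test

train, dev, test = split_dataset(range(100))  # 80 / 10 / 10 records
```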
  • the datasets may be augmented to increase the number of samples within the training set.
  • data augmentation may comprise rearranging the order of observations in a training record.
  • methods to impute missing data may be used, such as forward-filling, back-filling, linear interpolation, and multi-task Gaussian processes.
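The imputation methods listed above (forward-filling, back-filling, and linear interpolation) might look like this with pandas; the heart-rate values are invented for illustration:

```python
import numpy as np
import pandas as pd

# Hypothetical heart-rate series with gaps (NaN marks a missed observation).
hr = pd.Series([72.0, np.nan, np.nan, 78.0, np.nan, 80.0])

forward_filled = hr.ffill()       # carry the last observation forward
back_filled = hr.bfill()          # carry the next observation backward
interpolated = hr.interpolate()   # linear interpolation between observations
```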
  • Datasets may be filtered to remove confounding factors. For example, within a database, a subset of patients may be excluded.
  • the machine learning classifier may comprise one or more neural networks, such as a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a transformer, or a deep RNN.
  • the recurrent neural network may comprise units which can be long short-term memory (LSTM) units or gated recurrent units (GRU).
  • the machine learning classifier may comprise an algorithm architecture comprising a neural network with a set of input features such as vital sign and other measurements, patient medical history, and/or patient demographics. Neural network techniques, such as dropout or regularization, may be used during training the machine learning classifier to prevent overfitting.
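As a rough sketch of the dropout technique mentioned above, here is a minimal NumPy illustration of inverted dropout (not the classifier architecture itself; all names are illustrative):

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: during training, zero a fraction `rate` of units
    and rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    keep = rng.random(activations.shape) >= rate
    return activations * keep / (1.0 - rate)

rng = np.random.default_rng(0)
hidden = np.ones((4, 8))
train_out = dropout(hidden, rate=0.5, rng=rng)   # survivors scaled to 2.0, dropped units 0.0
eval_out = dropout(hidden, rate=0.5, rng=rng, training=False)  # unchanged at evaluation
```

At evaluation time the activations pass through unchanged, which is what makes the train-time rescaling by 1/(1 - rate) necessary.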
  • an alert or alarm may be generated and transmitted to a health care provider, such as a physician, nurse, or other member of the patient’s treating team within a hospital.
  • Alerts may be transmitted via an automated phone call, a short message service (SMS) or multimedia message service (MMS) message, an e-mail, or an alert within a dashboard.
  • the alert may comprise output information such as a prediction of an adverse condition, a likelihood of the predicted adverse condition, a time until an expected onset of the adverse condition, a confidence interval of the likelihood or time, or a recommended course of treatment for the adverse condition.
  • the neural network may comprise a plurality of sub-networks, each of which is configured to generate a classification or prediction of a different type of output information (e.g., which may be combined to form an overall output of the neural network).
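One hypothetical way to package the alert output information described above (the field names and values are illustrative assumptions, not a schema required by the disclosure):

```python
from dataclasses import dataclass, asdict

@dataclass
class Alert:
    """Illustrative alert payload carrying the output information
    described above."""
    condition: str
    likelihood: float
    hours_to_onset: float
    confidence_interval: tuple
    recommendation: str = ""

alert = Alert(
    condition="sepsis",
    likelihood=0.82,
    hours_to_onset=6.0,
    confidence_interval=(0.74, 0.90),
    recommendation="escalate to treating team",
)
payload = asdict(alert)  # e.g., serialized for SMS, e-mail, or a dashboard
```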
  • cross-validation may be performed to assess the robustness of a machine learning classifier model across different training and testing datasets.
  • a machine learning classifier model may be trained using a dataset of records which are a subset of a single patient’s observations
  • the performance of the classifier model’s discrimination ability may then be calculated using the entire record for a patient.
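Keeping all of one patient's observations together, as described above, might be sketched as follows (a simplified round-robin grouping scheme, not the disclosed method):

```python
def patient_grouped_folds(records, n_folds=5):
    """Assign all of each patient's records to a single fold so that a
    patient never contributes to both training and testing data.

    `records` are (patient_id, observation) pairs.
    """
    patients = sorted({pid for pid, _ in records})
    folds = [[] for _ in range(n_folds)]
    for i, pid in enumerate(patients):
        folds[i % n_folds].extend(r for r in records if r[0] == pid)
    return folds

# 10 hypothetical patients with 3 observations each.
records = [(pid, obs) for pid in range(10) for obs in range(3)]
folds = patient_grouped_folds(records)
```

This grouping prevents leakage between training and testing splits, which would otherwise inflate the measured discrimination performance.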
  • performance metrics such as sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), AUPRC, AUROC, or similar
  • a “false positive” may refer to an outcome in which an alert or alarm has been incorrectly or prematurely activated (e.g., before the actual onset of, or without any onset of, a disease state or condition).
  • a “true positive” may refer to an outcome in which an alert or alarm has been activated at the correct time (within a predetermined buffer or tolerance), and the patient’s record indicates the disease or condition.
  • a “false negative” may refer to an outcome in which no alert or alarm has been activated, but the patient’s record indicates the disease or condition.
  • a “true negative” may refer to an outcome in which no alert or alarm has been activated, and the patient’s record does not indicate the disease or condition.
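From outcome counts defined this way, the performance measures mentioned above follow directly (a minimal sketch; the example counts are invented):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Diagnostic accuracy measures from counts of true/false positives
    and negatives, per the outcome definitions above."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # true positive rate
        "specificity": tn / (tn + fp) if tn + fp else 0.0,  # true negative rate
        "ppv": tp / (tp + fp) if tp + fp else 0.0,          # positive predictive value
        "npv": tn / (tn + fn) if tn + fn else 0.0,          # negative predictive value
        "accuracy": (tp + tn) / total if total else 0.0,
    }

m = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
```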
  • the machine learning classifier may be trained until certain predetermined conditions for accuracy or performance are satisfied, such as having minimum desired values corresponding to diagnostic accuracy measures.
  • the diagnostic accuracy measure may correspond to prediction of a likelihood of occurrence of an adverse health condition such as deterioration or a disease or disorder in the subject.
  • the diagnostic accuracy measure may correspond to prediction of a likelihood of deterioration or recurrence of an adverse health condition such as a disease or disorder for which the subject has previously been treated.
  • a diagnostic accuracy measure may correspond to prediction of likelihood of recurrence of an infection in a subject who has previously been treated for the infection.
  • diagnostic accuracy measures may include sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, area under the precision-recall curve (AUPRC), and area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) corresponding to the diagnostic accuracy of detecting or predicting an adverse health condition.
  • such a predetermined condition may be that the sensitivity of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • such a predetermined condition may be that the specificity of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • such a predetermined condition may be that the positive predictive value (PPV) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • such a predetermined condition may be that the negative predictive value (NPV) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • such a predetermined condition may be that the area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
  • such a predetermined condition may be that the area under the precision-recall curve (AUPRC) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of at least about 0.10, at least about 0.15, at least about 0.20, at least about 0.25, at least about 0.30, at least about 0.35, at least about 0.40, at least about 0.45, at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
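A check against such predetermined conditions might be sketched as follows (the metric names and minimum values are illustrative, not mandated by the disclosure):

```python
def meets_predetermined_conditions(metrics, thresholds):
    """True if every listed diagnostic accuracy measure meets or exceeds
    its predetermined minimum value."""
    return all(metrics.get(name, 0.0) >= minimum
               for name, minimum in thresholds.items())

ok = meets_predetermined_conditions(
    {"sensitivity": 0.92, "specificity": 0.88, "auroc": 0.95},
    {"sensitivity": 0.90, "auroc": 0.90},
)
```

Training could continue until such a check passes on a held-out development dataset.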
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a positive predictive value (PPV) of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a negative predictive value (NPV) of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with an area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with an area under the precision-recall curve (AUPRC) of at least about 0.10, at least about 0.15, at least about 0.20, at least about 0.25, at least about 0.30, at least about 0.35, at least about 0.40, at least about 0.45, at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
  • the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder over a period of time before the actual occurrence or recurrence of the adverse health condition (e.g., a period of time including a window beginning about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 7 hours, about 8 hours, about 9 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 36 hours, about 48 hours, about 72 hours, about 96 hours, about 120 hours, about 6 days, about 7 days, about 8 days, about 9 days, about 10 days, about 11 days, about 12 days, about 13 days, about 14 days, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 5 months, about 6 months, about 7 months, about 8 months, about 9 months, or about 10 months before the actual occurrence or recurrence).
  • the contactless monitoring device may be lightweight and discreet, and may comprise electronic sensors, a power source (e.g., battery), and a physical enclosure or housing.
  • the electronic sensors may be wearable or contactless sensors configured to acquire health data from a subject.
  • the physical enclosure of the contactless monitoring device may comprise a maximum dimension of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
  • the physical enclosure of the contactless monitoring device may comprise a length of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
  • the physical enclosure of the contactless monitoring device may comprise a width of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
  • the physical enclosure of the contactless monitoring device may comprise a height of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
  • the physical enclosure of the contactless monitoring device may have a maximum weight of no more than about 300 grams (g), no more than about 250 g, no more than about 200 g, no more than about 150 g, no more than about 100 g, no more than about 90 g, no more than about 80 g, no more than about 70 g, no more than about 60 g, no more than about 50 g, no more than about 40 g, no more than about 30 g, no more than about 20 g, no more than about 10 g, or no more than about 5 g.
  • the contactless monitoring device may be powered by a power source, such as an energy storage device.
  • the energy storage device may be or include a solid state battery or capacitor.
  • the energy storage device may comprise one or more batteries of type alkaline, nickel metal hydride (NiMH) such as nickel cadmium (Ni-Cd), lithium ion (Li-ion), or lithium polymer (LiPo).
  • the energy storage device may comprise one or more batteries of type AA, AAA, C, D, 9V, or a coin cell battery.
  • the battery may comprise one or more rechargeable batteries or non-rechargeable batteries.
  • the battery may comprise a wattage of no more than about 10 watts (W), no more than about 5 W, no more than about 4 W, no more than about 3 W, no more than about 2 W, no more than about 1 W, no more than about 500 milliwatts (mW), no more than about 100 mW, no more than about 50 mW, no more than about 10 mW, no more than about 5 mW, or no more than about 1 mW.
  • the battery may comprise a voltage of no more than about 9 volts (V), no more than about 6 V, no more than about 4.5 V, no more than about 3.7 V, no more than about 3 V, no more than about 1.5 V, no more than about 1.2 V, or no more than about 1 V.
  • the battery may comprise a capacity of no more than about 50 milliampere hours (mAh), no more than about 100 mAh, no more than about 150 mAh, no more than about 200 mAh, no more than about 250 mAh, no more than about 300 mAh, no more than about 400 mAh, no more than about 500 mAh, no more than about 1,000 mAh, no more than about 2,000 mAh, no more than about 3,000 mAh, no more than about 4,000 mAh, no more than about 5,000 mAh, no more than about 6,000 mAh, no more than about 7,000 mAh, no more than about 8,000 mAh, no more than about 9,000 mAh, or no more than about 10,000 mAh.
  • the battery may be configured to be rechargeable with a charging time of about 10 minutes, about 20 minutes, about 30 minutes, about 60 minutes, about 90 minutes, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, or about 24 hours.
  • the electronic device may be configured to allow the battery to be replaceable. Alternatively, the electronic device may be configured with a battery which is not replaceable by a user.
  • the software application of the monitoring system may provide functionality for a user to control the monitoring system, as well as a graphical user interface (GUI) through which the user can view their measured, collected, or recorded clinical health data (e.g., vital sign data).
  • the application may be configured to run on mobile platforms, such as iOS and Android.
  • the application may be run on a variety of mobile devices, such as mobile phones (e.g., Apple iPhone or Android phone), tablet computers (e.g., Apple iPad, Android tablet, or Windows 10 tablet), smart watches (e.g., Apple Watch or Android smart watch), and portable media players (e.g., Apple iPod Touch).
  • FIG. 3 shows a computer system 301 that is programmed or otherwise configured to implement methods provided herein.
  • the computer system 301 can regulate various aspects of the present disclosure, such as, for example, receiving health data of a subject acquired from sensors, processing the health data using a trained algorithm to generate an output indicative of a health status of the subject, providing the output for display on an electronic display, presenting an alert on the electronic display, and transmitting an alert over a network.
  • the computer system 301 can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device.
  • the electronic device can be a mobile electronic device.
  • the computer system 301 includes a central processing unit (CPU, also “processor” and “computer processor” herein) 305, which can be a single core or multi core processor, or a plurality of processors for parallel processing.
  • the computer system 301 also includes memory or memory location 310 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 315 (e.g., hard disk), communication interface 320 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 325, such as cache, other memory, data storage and/or electronic display adapters.
  • the memory 310, storage unit 315, interface 320 and peripheral devices 325 are in communication with the CPU 305 through a communication bus (solid lines), such as a motherboard.
  • the storage unit 315 can be a data storage unit (or data repository) for storing data.
  • the computer system 301 can be operatively coupled to a computer network (“network”) 330 with the aid of the communication interface 320.
  • the network 330 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet.
  • the network 330 in some cases is a telecommunication and/or data network.
  • the network 330 can include one or more computer servers, which can enable distributed computing, such as cloud computing.
  • one or more computer servers may enable cloud computing over the network 330 (“the cloud”) to perform various aspects of analysis, calculation, and generation of the present disclosure, such as, for example, receiving health data of a subject acquired from sensors, processing the health data using a trained algorithm to generate an output indicative of a health status of the subject, providing the output for display on an electronic display, presenting an alert on the electronic display, and transmitting an alert over a network.
  • cloud computing may be provided by cloud computing platforms such as, for example, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and IBM cloud.
  • the network 330 in some cases with the aid of the computer system 301, can implement a peer-to-peer network, which may enable devices coupled to the computer system 301 to behave as a client or a server.
  • the CPU 305 can execute a sequence of machine-readable instructions, which can be embodied in a program or software.
  • the instructions may be stored in a memory location, such as the memory 310.
  • the instructions can be directed to the CPU 305, which can subsequently program or otherwise configure the CPU 305 to implement methods of the present disclosure. Examples of operations performed by the CPU 305 can include fetch, decode, execute, and writeback.
  • the CPU 305 can be part of a circuit, such as an integrated circuit.
  • One or more other components of the system 301 can be included in the circuit.
  • the circuit is an application specific integrated circuit (ASIC).
  • the storage unit 315 can store files, such as drivers, libraries and saved programs.
  • the storage unit 315 can store user data, e.g., user preferences and user programs.
  • the computer system 301 in some cases can include one or more additional data storage units that are external to the computer system 301, such as located on a remote server that is in communication with the computer system 301 through an intranet or the Internet.
  • the computer system 301 can communicate with one or more remote computer systems through the network 330.
  • the computer system 301 can communicate with a remote computer system of a user.
  • remote computer systems include personal computers (e.g., portable PC), slate or tablet PC’s (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, Smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants.
  • the user can access the computer system 301 via the network 330.
  • Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 301, such as, for example, on the memory 310 or electronic storage unit 315.
  • the machine executable or machine readable code can be provided in the form of software.
  • the code can be executed by the processor 305.
  • the code can be retrieved from the storage unit 315 and stored on the memory 310 for ready access by the processor 305.
  • the electronic storage unit 315 can be precluded, and machine-executable instructions are stored on memory 310.
  • the code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime.
  • the code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.
  • aspects of the systems and methods provided herein can be embodied in programming.
  • Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium.
  • Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk.
  • “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server.
  • another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links.
  • a machine-readable medium, such as computer-executable code, may take many forms, including a tangible storage medium, a carrier-wave medium, or a physical transmission medium.
  • Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings.
  • Volatile storage media include dynamic memory, such as main memory of such a computer platform.
  • Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system.
  • Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications.
  • Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM, an EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data.
  • Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
  • the computer system 301 can include or be in communication with an electronic display 335 that comprises a user interface (UI) 340.
  • user interfaces include, without limitation, a graphical user interface (GUI) and web-based user interface.
  • the computer system can include a web-based dashboard (e.g., a GUI) configured to display, for example, patient metrics, recent alerts, and/or prediction of health outcomes, thereby allowing health care providers, such as physicians and treating teams of a patient, to access patient alerts, data (e.g., health data), and/or predictions or assessments generated from such data.
  • Methods and systems of the present disclosure can be implemented by way of one or more algorithms.
  • An algorithm can be implemented by way of software upon execution by the central processing unit 305.
  • the algorithm can, for example, receive health data of a subject acquired from sensors, process the health data using a trained algorithm to generate an output indicative of a health status of the subject, provide the output for display on an electronic display, present an alert on the electronic display, and transmit an alert over a network.
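The overall flow the algorithm performs could be sketched as follows. All component interfaces here (`model`, `display`, `network`, and the threshold) are stand-ins for the real system, not the disclosed implementation:

```python
def monitor_subject(sensor_readings, model, display, network, threshold=0.8):
    """Receive health data, run the trained algorithm, show the output,
    and transmit an alert when the predicted likelihood of an adverse
    condition crosses a threshold."""
    features = [reading["value"] for reading in sensor_readings]
    likelihood = model(features)               # trained algorithm
    display.append(f"risk={likelihood:.2f}")   # electronic display output
    if likelihood >= threshold:
        network.append({"alert": "adverse condition predicted",
                        "likelihood": likelihood})  # transmitted alert
    return likelihood

display, network = [], []
risk = monitor_subject(
    [{"value": 0.9}, {"value": 0.9}],
    model=lambda f: sum(f) / len(f),  # placeholder for a trained classifier
    display=display,
    network=network,
)
```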

Abstract

The present disclosure provides systems and methods for performing health monitoring of a subject. In an aspect, a system for monitoring a subject may comprise: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire health data of the subject over a period of time; and an electronic device, comprising: an electronic display; a transceiver; and one or more computer processors operatively coupled to the electronic display and the transceiver, which one or more computer processors are configured to (i) receive the health data from the plurality of sensors through the transceiver, (ii) process the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time, and (iii) provide the output for display on the electronic display.

Description

SYSTEMS AND METHODS FOR AUGMENTED HEALTH MONITORING
CROSS-REFERENCE
[0001] This application claims the benefit of U.S. Provisional Application No.
63/125,211, filed on December 14, 2020, the contents of which are incorporated by reference herein in their entirety.
BACKGROUND
[0002] Patient monitoring may require collection and analysis of health information over a period of time to detect clinical signs of a health status of the patient (e.g., occurrence of a disease or disorder or abnormal symptoms or vital signs). Patient monitoring, whether performed inside or outside of a clinical setting (e.g., a hospital), may be performed using sensors (e.g., contactless sensors) that enable non-invasive collection of health data and assessment of health statuses of the patient.
SUMMARY
[0003] The problem of effective monitoring of health statuses of patients in an inpatient and critical care setting may be challenging, often because symptoms are not present before the onset of adverse health statuses. Further, patient monitoring using wearable devices may be cumbersome, uncomfortable, invasive, and/or expensive to perform. Therefore, there exists a need for augmented health monitoring of a subject in an inpatient or outpatient setting, which takes advantage of non-invasive and contactless techniques of acquiring health data.
[0004] Recognized herein is the need for systems and methods for augmented health monitoring of subjects (e.g., patients) by continuous collection and analysis of health data. Such analysis of health data of a subject (patient) may be performed by contactless monitoring devices such as contactless sensors (e.g., at the subject’s home, or at a clinical setting such as a hospital) over a period of time to assess the health status of the subject (e.g., likelihood of the subject having an adverse health condition or occurrence of a complication).
[0005] The present disclosure provides systems and methods that may advantageously collect and analyze health data over a period of time to accurately and non-invasively assess the subject’s health status. Such systems and methods may allow patients with elevated risk of an adverse health condition to be accurately monitored for deterioration, occurrence of an adverse health condition, or recurrence of an adverse health condition, whether inside or outside of a clinical setting. In some embodiments, the systems and methods may process health data including collected sensor information or other clinical health data (e.g., obtained by blood testing, imaging, etc.).
[0006] In an aspect, the present disclosure provides a system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire health data of the subject over a period of time; and an electronic device, comprising: an electronic display; a transceiver; and one or more computer processors operatively coupled to the electronic display and the transceiver, which one or more computer processors are configured to (i) receive the health data from the plurality of sensors through the transceiver, (ii) process the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time, and (iii) provide the output for display on the electronic display.
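As a purely illustrative sketch (not part of the claimed subject matter), steps (i)–(iii) of the system aspect above can be expressed as a minimal receive–process–display loop. All names here (`SensorReading`, the stand-in model, the 0.5 risk threshold) are hypothetical assumptions introduced for illustration only:

```python
# Minimal sketch of steps (i)-(iii): receive health data from sensors,
# process it with a trained algorithm, and provide an output for display.
# The data structures and threshold are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SensorReading:
    sensor_id: str       # e.g., "resp_monitor" or "thermometer"
    kind: str            # e.g., "respiratory_rate", "body_temperature"
    value: float
    timestamp: float     # seconds since epoch


def assess_health(readings: List[SensorReading],
                  model: Callable[[List[float]], float]) -> str:
    """(ii) Process health data with a trained model; return a status label."""
    features = [r.value for r in readings]
    risk = model(features)  # probability of an adverse health condition
    return "alert" if risk >= 0.5 else "normal"


# (i) receive readings, (ii) score them, (iii) "display" the output.
readings = [
    SensorReading("resp_monitor", "respiratory_rate", 22.0, 0.0),
    SensorReading("thermometer", "body_temperature", 38.4, 0.0),
]
toy_model = lambda feats: 0.9 if feats[1] > 38.0 else 0.1  # stand-in classifier
status = assess_health(readings, toy_model)
print(status)  # -> alert
```

In practice the model would be a trained classifier of the kind described later in this disclosure, and the output would be rendered on the electronic display rather than printed.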
[0007] In some embodiments, the plurality of contactless sensors is configured to acquire at least one of audio data of the subject, image data of the subject, video data of the subject, activity data of the subject, environment data of the subject, posture data of the subject, vital sign measurements of the subject, and any combination thereof.
[0008] In some embodiments, the plurality of contactless sensors comprises one or more audio sensors configured to acquire the audio data of the subject. In some embodiments, the one or more audio sensors are selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.
[0009] In some embodiments, the plurality of contactless sensors comprises one or more image sensors configured to acquire the image data of the subject. In some embodiments, the one or more image sensors are selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
[0010] In some embodiments, the plurality of contactless sensors comprises one or more video sensors configured to acquire the video data of the subject. In some embodiments, the one or more video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
[0011] In some embodiments, the plurality of contactless sensors comprises one or more environment sensors configured to acquire the environment data of the subject. In some embodiments, the one or more environment sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
[0012] In some embodiments, the plurality of contactless sensors comprises one or more posture sensors configured to acquire the posture data of the subject. In some embodiments, the one or more posture sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
[0013] In some embodiments, the plurality of contactless sensors comprises one or more contactless vital sign sensors configured to acquire the vital sign data of the subject. In some embodiments, the vital sign data comprise one or more measurements selected from the group consisting of respiratory rate and body temperature. In some embodiments, the one or more contactless vital sign sensors are selected from the group consisting of a respiratory rate monitor and a thermometer.
[0014] In some embodiments, the plurality of sensors further comprises one or more wearable sensors configured to acquire health data of the subject. In some embodiments, the one or more wearable sensors comprise one or more wearable vital sign sensors configured to acquire vital sign data of the subject.
[0015] In some embodiments, the vital sign data comprise one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.
[0016] In some embodiments, the one or more wearable vital sign sensors are selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, and an electromyography machine.
[0017] In some embodiments, the transceiver comprises a wireless transceiver. In some embodiments, the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.

[0018] In some embodiments, the one or more computer processors are further configured to store the acquired health data in a database. In some embodiments, the database comprises a cloud-based database.
[0019] In some embodiments, the one or more computer processors are further configured to present an alert on the electronic display based at least in part on the generated output. In some embodiments, the one or more computer processors are further configured to transmit an alert over a network to a health care provider of the subject based at least in part on the output.
[0020] In some embodiments, the alert comprises instructions to administer care or treatment to the subject. In some embodiments, administering the treatment comprises providing a medication to the subject. In some embodiments, administering the care comprises turning the subject around.
[0021] In some embodiments, the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
[0022] In some embodiments, the trained algorithm comprises a machine learning-based classifier configured to process the health data to generate the output indicative of the health status of the subject. In some embodiments, the machine learning-based classifier is selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
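To make one entry from the list above concrete, the following is a toy Gaussian naive Bayes classifier (one of the listed options) trained on invented (respiratory rate, body temperature) feature pairs. The training data, feature choice, and class labels are illustrative assumptions only; a real embodiment would more likely use a machine learning library and clinically validated training data:

```python
# Toy Gaussian naive Bayes classifier over hypothetical
# [respiratory_rate, body_temperature] feature vectors.

import math


def fit(X, y):
    """Estimate per-class feature means/variances and class priors."""
    stats = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        stats[label] = (means, varis, len(rows) / len(X))
    return stats


def predict(stats, x):
    """Return the class with the highest log-posterior for feature vector x."""
    def log_post(label):
        means, varis, prior = stats[label]
        ll = math.log(prior)
        for v, m, var in zip(x, means, varis):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - m) ** 2 / (2 * var)
        return ll
    return max(stats, key=log_post)


# Invented training data: [respiratory_rate, body_temperature]
X = [[14, 36.8], [16, 37.0], [15, 36.9],    # "normal"
     [26, 38.6], [28, 39.1], [24, 38.2]]    # "adverse"
y = ["normal", "normal", "normal", "adverse", "adverse", "adverse"]

model = fit(X, y)
print(predict(model, [27, 38.8]))  # -> adverse
print(predict(model, [15, 36.7]))  # -> normal
```

The same fit/predict pattern applies to the other listed classifiers (SVM, random forest, neural networks); only the model family and training procedure change.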
[0023] In some embodiments, the health status of the subject is a presence, absence, progression, regression, diagnosis, or prognosis of an adverse health condition. In some embodiments, the adverse health condition is a disease or disorder. In some embodiments, the disease or disorder is selected from the group consisting of: mobility deterioration, delirium, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), sleep apnea, sleep disorder, stroke, dementia, asthma, psychiatric disorders, cognitive decline, musculoskeletal disorders, and aging effects.
[0024] In some embodiments, the health status of the subject is a fall or a vital sign abnormality. In some embodiments, the vital sign abnormality is selected from the group consisting of: breathing difficulties, sudden infant death syndrome (SIDS), atrial fibrillation (AF), cardiac arrhythmia, respiratory arrhythmia, and fever.
[0025] In some embodiments, the subject has received a clinical treatment or procedure. In some embodiments, the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof. In some embodiments, the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.
[0026] In some embodiments, the one or more computer processors are configured to further analyze an interaction between the subject and a caregiver of the subject, and provide an assessment of quality of care of the subject based at least in part on the analysis of the interaction between the subject and the caregiver of the subject.
[0027] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0028] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0029] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0030] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0031] In some embodiments, the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99.
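The performance measures recited in the preceding paragraphs (sensitivity, specificity, positive and negative predictive values, and AUC) follow from standard confusion-matrix and rank statistics. The sketch below computes them on invented labels and scores, purely for illustration:

```python
# Sensitivity, specificity, PPV, NPV from a confusion matrix, and AUC
# as a Mann-Whitney rank statistic. Labels/scores are invented.

def confusion(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn


def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }


def auc(y_true, scores):
    """AUC: probability a positive case outscores a negative case."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


y_true = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
y_pred = [int(s >= 0.5) for s in scores]
print(metrics(y_true, y_pred))  # all four metrics are 0.75 on this toy data
print(auc(y_true, scores))      # -> 0.9375
```

Reporting a classifier at, say, "at least about 90% sensitivity" corresponds to `metrics(...)["sensitivity"] >= 0.90` on a held-out evaluation set.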
[0032] In some embodiments, the period of time begins at least about 24 hours, about 12 hours, about 10 hours, about 8 hours, about 6 hours, about 4 hours, about 3 hours, about 2 hours, about 1 hour, about 45 minutes, about 30 minutes, about 20 minutes, about 15 minutes, about 10 minutes, about 5 minutes, about 4 minutes, about 3 minutes, about 2 minutes, about 1 minute, about 30 seconds, about 20 seconds, about 15 seconds, about 10 seconds, or about 5 seconds prior to onset of the health condition.
[0033] In some embodiments, the one or more computer processors are configured to perform (i), (ii), and (iii) in real time or substantially in real time.
[0034] In an aspect, the present disclosure provides a method for monitoring a subject, comprising: (a) receiving, using a transceiver, health data of the subject acquired from a plurality of sensors over a period of time, which plurality of sensors comprises a plurality of contactless sensors; (b) computer processing the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time; and (c) providing the output for display on an electronic display.
[0035] In some embodiments, the health data comprises at least one of audio data of the subject, image data of the subject, video data of the subject, activity data of the subject, environment data of the subject, posture data of the subject, vital sign measurements of the subject, and any combination thereof.
[0036] In some embodiments, the plurality of contactless sensors comprises one or more audio sensors configured to acquire the audio data of the subject. In some embodiments, the one or more audio sensors are selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.
[0037] In some embodiments, the plurality of contactless sensors comprises one or more image sensors configured to acquire the image data of the subject. In some embodiments, the one or more image sensors are selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
[0038] In some embodiments, the plurality of contactless sensors comprises one or more video sensors configured to acquire the video data of the subject. In some embodiments, the one or more video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
[0039] In some embodiments, the plurality of contactless sensors comprises one or more environment sensors configured to acquire the environment data of the subject. In some embodiments, the one or more environment sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
[0040] In some embodiments, the plurality of contactless sensors comprises one or more posture sensors configured to acquire the posture data of the subject. In some embodiments, the one or more posture sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
[0041] In some embodiments, the plurality of contactless sensors comprises one or more contactless vital sign sensors configured to acquire the vital sign data of the subject. In some embodiments, the vital sign data comprise one or more measurements selected from the group consisting of respiratory rate and body temperature. In some embodiments, the one or more contactless vital sign sensors are selected from the group consisting of a respiratory rate monitor and a thermometer.
[0042] In some embodiments, the plurality of sensors further comprises one or more wearable sensors configured to acquire health data of the subject. In some embodiments, the one or more wearable sensors comprise one or more wearable vital sign sensors configured to acquire vital sign data of the subject.
[0043] In some embodiments, the vital sign data comprise one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.
[0044] In some embodiments, the one or more wearable vital sign sensors are selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, and an electromyography machine.
[0045] In some embodiments, the transceiver comprises a wireless transceiver. In some embodiments, the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.
[0046] In some embodiments, the method further comprises storing the acquired health data in a database. In some embodiments, the database comprises a cloud-based database.

[0047] In some embodiments, the method further comprises presenting an alert on the electronic display based at least in part on the generated output. In some embodiments, the method further comprises transmitting an alert over a network to a health care provider of the subject based at least in part on the output.
[0048] In some embodiments, the alert comprises instructions to administer care or treatment to the subject. In some embodiments, administering the treatment comprises providing a medication to the subject. In some embodiments, administering the care comprises turning the subject around.
[0049] In some embodiments, the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
[0050] In some embodiments, the trained algorithm comprises a machine learning-based classifier configured to process the health data to generate the output indicative of the health status of the subject. In some embodiments, the machine learning-based classifier is selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
[0051] In some embodiments, the health status of the subject is a presence, absence, progression, regression, diagnosis, or prognosis of an adverse health condition. In some embodiments, the adverse health condition is a disease or disorder. In some embodiments, the disease or disorder is selected from the group consisting of: mobility deterioration, delirium, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), sleep apnea, sleep disorder, stroke, dementia, asthma, psychiatric disorders, cognitive decline, musculoskeletal disorders, and aging effects.
[0052] In some embodiments, the health status of the subject is a fall or a vital sign abnormality. In some embodiments, the vital sign abnormality is selected from the group consisting of: breathing difficulties, sudden infant death syndrome (SIDS), atrial fibrillation (AF), cardiac arrhythmia, respiratory arrhythmia, and fever.
[0053] In some embodiments, the subject has received a clinical treatment or procedure. In some embodiments, the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof. In some embodiments, the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.
[0054] In some embodiments, the method further comprises analyzing an interaction between the subject and a caregiver of the subject, and providing an assessment of quality of care of the subject based at least in part on the analysis of the interaction between the subject and the caregiver of the subject.
[0055] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0056] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0057] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a positive predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0058] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a negative predictive value of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, at least about 99%, or more than about 99%.
[0059] In some embodiments, the method further comprises processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with an Area-Under-the-Curve (AUC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, at least about 0.99, or more than about 0.99.
[0060] In some embodiments, the period of time begins at least about 24 hours, about 12 hours, about 10 hours, about 8 hours, about 6 hours, about 4 hours, about 3 hours, about 2 hours, about 1 hour, about 45 minutes, about 30 minutes, about 20 minutes, about 15 minutes, about 10 minutes, about 5 minutes, about 4 minutes, about 3 minutes, about 2 minutes, about 1 minute, about 30 seconds, about 20 seconds, about 15 seconds, about 10 seconds, or about 5 seconds prior to onset of the health condition.
[0061] In some embodiments, the method further comprises performing (a), (b), and (c) in real time or substantially in real time.
[0062] Another aspect of the present disclosure provides a non-transitory computer readable medium comprising machine executable code that, upon execution by one or more computer processors, implements any of the methods above or elsewhere herein.
[0063] Another aspect of the present disclosure provides a system comprising one or more computer processors and computer memory coupled thereto. The computer memory comprises machine executable code that, upon execution by the one or more computer processors, implements any of the methods above or elsewhere herein.
[0064] Additional aspects and advantages of the present disclosure will become readily apparent to those skilled in this art from the following detailed description, wherein only illustrative embodiments of the present disclosure are shown and described. As will be realized, the present disclosure is capable of other and different embodiments, and its several details are capable of modifications in various obvious respects, all without departing from the disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
INCORPORATION BY REFERENCE
[0065] All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference. To the extent publications and patents or patent applications incorporated by reference contradict the disclosure contained in the specification, the specification is intended to supersede and/or take precedence over any such contradictory material.
BRIEF DESCRIPTION OF THE DRAWINGS
[0066] The novel features of the disclosure are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present disclosure will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the disclosure are utilized, and the accompanying drawings (also “Figure” and “FIG.” herein), of which:
[0067] FIG. 1 illustrates an overview of an example method for monitoring a subject.
[0068] FIG. 2 illustrates an overview of an example workflow for monitoring a subject.
[0069] FIG. 3 shows a computer system that is programmed or otherwise configured to implement methods provided herein.
DETAILED DESCRIPTION
[0070] While various embodiments of the invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions may occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed.
[0071] Various terms used throughout the present description may be read and understood as follows, unless the context indicates otherwise: “or” as used throughout is inclusive, as though written “and/or”; singular articles and pronouns as used throughout include their plural forms, and vice versa; similarly, gendered pronouns include their counterpart pronouns so that pronouns should not be understood as limiting anything described herein to use, implementation, performance, etc. by a single gender; “exemplary” should be understood as “illustrative” or “exemplifying” and not necessarily as “preferred” over other embodiments. Further definitions for terms may be set out herein; these may apply to prior and subsequent instances of those terms, as will be understood from a reading of the present description. Whenever the term “at least,” “greater than,” or “greater than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “at least,” “greater than,” or “greater than or equal to” applies to each of the numerical values in that series of numerical values. For example, greater than or equal to 1, 2, or 3 is equivalent to greater than or equal to 1, greater than or equal to 2, or greater than or equal to 3.
[0072] Whenever the term “no more than,” “less than,” or “less than or equal to” precedes the first numerical value in a series of two or more numerical values, the term “no more than,” “less than,” or “less than or equal to” applies to each of the numerical values in that series of numerical values. For example, less than or equal to 3, 2, or 1 is equivalent to less than or equal to 3, less than or equal to 2, or less than or equal to 1.
[0073] The term “subject,” as used herein, generally refers to a human such as a patient. The subject may be a person (e.g., a patient) with a disease or disorder, or a person that has been treated for a disease or disorder, or a person that is being monitored for recurrence of a disease or disorder, or a person that is suspected of having the disease or disorder, or a person that does not have or is not suspected of having the disease or disorder. The disease or disorder may be an infectious disease, an immune disorder or disease, a cancer, a genetic disease, a degenerative disease, a lifestyle disease, an injury, a rare disease, or an age related disease. The infectious disease may be caused by bacteria, viruses, fungi and/or parasites. For example, the disease or disorder may comprise sepsis, atrial fibrillation, stroke, heart attack, and other preventable outpatient illnesses. For example, the disease or disorder may comprise deterioration or recurrence of a disease or disorder for which the subject has previously been treated.
[0074] Patient monitoring may require collection and analysis of health information over a period of time to detect clinical signs of a health status of the patient (e.g., occurrence of a disease or disorder or abnormal symptoms or vital signs). Patient monitoring, whether performed inside or outside of a clinical setting (e.g., a hospital), may be performed using sensors (e.g., contactless sensors) that enable non-invasive collection of health data and assessment of health statuses of the patient.
[0075] As an example, a patient who has been treated for a disease or disorder at a hospital or other clinical setting may need to be monitored for occurrence or recurrence of the disease or disorder (or occurrence of a complication related to an administered treatment for the disease or disorder). For example, a patient who has received an operation (e.g., a surgery) may need to be monitored for an occurrence of post-operative complications related to the operation (e.g., post-surgery complications such as infection). Patient monitoring may include detecting adverse health conditions or events (e.g., a fall) or causes thereof (e.g., bacteria or virus). Patient monitoring may detect clinical complications such as stroke, pneumonia, heart failure, myocardial infarction (heart attack), chronic obstructive pulmonary disease (COPD), general deterioration, influenza, atrial fibrillation, and panic or anxiety attack.
[0076] In some embodiments, patient monitoring may be performed in a hospital or other clinical setting using specialized equipment such as medical monitors (e.g., cardiac monitoring, respiratory monitoring, neurological monitoring, blood glucose monitoring, hemodynamic monitoring, and body temperature monitoring) to measure and/or collect health data such as vital sign measurements (e.g., heart rate, blood pressure, respiratory rate, and pulse oximetry). However, patient monitoring outside of a clinical setting (e.g., at a subject’s home) may pose challenges for non-invasive and contactless collection of health data and accurate assessment of clinical health states of a subject.
[0077] The present disclosure provides systems and methods that may advantageously collect and analyze health data over a period of time to accurately and non-invasively assess the subject’s health status. Such systems and methods may allow patients with elevated risk of an adverse health condition to be accurately monitored for deterioration, occurrence of an adverse health condition, or recurrence of an adverse health condition, whether inside or outside of a clinical setting. In some embodiments, the systems and methods may process health data including collected sensor information or other clinical health data (e.g., obtained by blood testing, imaging, etc.).
[0078] These systems and methods may improve the accuracy of assessing a health complication, reduce clinical health care costs, and improve patients’ quality of life. For example, such systems and methods may produce accurate detections or predictions of likelihood of occurrence or recurrence of a disease, disorder, or complication that are clinically actionable by physicians (or other health care workers) toward deciding whether to discharge patients from a hospital for monitoring in a home setting, thereby reducing clinical health care costs. As another example, such systems and methods may enable in-home patient monitoring, thereby increasing patients’ quality of life compared to remaining hospitalized or making frequent visits to clinical care sites. A goal of patient monitoring (e.g., in-home) may include preventing hospital readmissions for a discharged patient.
[0079] In an aspect, the present disclosure provides a system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire health data of the subject over a period of time; and an electronic device, comprising: an electronic display; a transceiver; and one or more computer processors operatively coupled to the electronic display and the transceiver, which one or more computer processors are configured to (i) receive the health data from the plurality of sensors through the transceiver, (ii) process the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time, and (iii) provide the output for display on the electronic display.
[0080] In some embodiments, the collected and transmitted vital sign information may be aggregated, for example, by batching and uploading to a computer server (e.g., a secure cloud database), where artificial intelligence algorithms may analyze the data in a continuous or real-time manner. If an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) is detected or predicted, the computer server may send a real-time alert to a health care provider (e.g., a general practitioner and/or treating physician). The health care provider may subsequently perform follow-up care, such as contacting the patient and requesting that the patient return to the hospital for further treatment or clinical inspection (e.g., monitoring, diagnosis, or prognosis). Alternatively or in combination, the health care provider may prescribe a treatment, a clinical procedure, or a care procedure to be administered to the subject based on the real-time alert. Alternatively or in combination, the real-time alert may indicate instructions for administering a treatment, a clinical procedure, or a care procedure to be administered to the subject.
[0081] A monitoring system may be used to collect and analyze vital sign information from a subject over a period of time to predict a likelihood of the subject having a disease, disorder, or complication related to an administered treatment for a disease or disorder. The monitoring system may comprise a contactless monitoring device. For example, the contactless monitoring device may be positioned in proximity to a subject’s body or a portion thereof (e.g., the subject’s head or chest) and collect and transmit vital sign information to the subject’s smartphone or other mobile device. The monitoring system may be used in a hospital or other clinical setting or in a home setting of the subject.
[0082] The contactless monitoring device may be configured to measure, collect, and/or record health data, such as vital sign data of the subject. The contactless monitoring device may be further configured to transmit such health data (e.g., wirelessly) to a device, such as a computer or mobile device (e.g., a smartphone, a tablet, a laptop, a smart watch, or smart glasses). For example, the health data may include vital sign data.
[0083] FIG. 1 illustrates an overview of an example method 100 for monitoring a subject. The method 100 may comprise receiving (e.g., using a transceiver) health data of the subject acquired from a plurality of sensors, as in operation 102. The health data may be acquired over a period of time. In some embodiments, the plurality of sensors comprises a plurality of contactless sensors. The method 100 may comprise computer processing the health data using a trained algorithm to generate an output indicative of a health status of the subject (e.g., over the period of time), as in operation 104. The method 100 may comprise providing the output for display on an electronic display, as in operation 106.
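As a non-limiting illustrative sketch, the three operations of method 100 may be expressed as follows; all function names and the stand-in "trained algorithm" are hypothetical and not part of the disclosure.

```python
# Illustrative sketch of method 100: operations 102 (receive), 104 (process
# with a trained algorithm), and 106 (provide output for display).

def receive_health_data(sensor_readings):
    """Operation 102: aggregate readings received from a plurality of sensors."""
    return [r for r in sensor_readings if r is not None]

def process_health_data(health_data, trained_algorithm):
    """Operation 104: apply a trained algorithm to generate a health-status output."""
    return trained_algorithm(health_data)

def display_output(output):
    """Operation 106: format the output for an electronic display."""
    return f"Health status: {output}"

# Example with a stand-in "trained algorithm" (here, a simple mean of readings):
readings = [72, 75, None, 71]   # e.g., heart-rate samples acquired over a period of time
data = receive_health_data(readings)
status = process_health_data(data, lambda d: sum(d) / len(d))
print(display_output(status))
```

This sketch only illustrates the data flow; in practice operation 104 would invoke a trained machine learning model as described in paragraphs [0099] et seq.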
[0084] A system of the present disclosure may comprise a contactless monitoring device, one or more computing devices, a software application (e.g., mobile application), and a web database. The system may comprise a contactless monitoring device to measure health data of a patient, a user interface (e.g., graphical user interface, or GUI) of the application (e.g., to enable a user to control collection, measurement, recording, storage, and/or analysis of health data for assessment of health states), and computer hardware and/or software for storage and/or analytics of the collected health data (e.g., health information).
[0085] FIG. 2 illustrates an overview of an example workflow for monitoring a subject. The workflow may comprise using contactless sensors and/or wearable devices to acquire health data of a subject. The sensors may be disposed (e.g., at fixed positions or variable positions) around one or more locations (e.g., a hospital room, a subject’s home or a portion thereof, a senior care facility or a portion thereof, etc.). The sensors may collect a variety of different types of information from multiple sources to broadly encompass a health state of a subject. For example, the contactless sensors may comprise a combination of audio sensors configured to acquire audio data of the subject, video sensors configured to acquire video data of the subject, environmental sensors configured to acquire environmental data of the subject, and/or vital sign sensors configured to acquire vital sign data of the subject.
[0086] The contactless sensors may be used alone, or in conjunction or combination with wearable devices. For example, the wearable devices may comprise vital sign sensors configured to acquire vital sign data of the subject (e.g., blood pressure monitor, heart rate monitor, blood oxygenation monitor, blood glucose monitor, fetal monitor, etc.). The wearable devices may be configured for specific disease care (e.g., hypertension, hypercholesterolemia, diabetes, organ failure or impairment, etc.) of the subject. The wearable devices may be light enough to be worn by a subject comfortably (e.g., for extended periods of time). The wearable devices may be reversibly attached to a subject’s body (e.g., using a clip, a strap, an adhesive patch, etc.). The sensor data acquired from the contactless sensors and/or the wearable devices may be fused together to obtain a comprehensive set of acquired health data of the subject. The sensor data may be used to augment analysis of existing health data of the subject (e.g., clinical health data, electronic medical records, etc.).
[0087] The workflow may comprise using a data platform to receive the health data. The workflow may comprise applying an intelligent application to process the health data (e.g., using trained machine learning algorithms) to produce one or more outcomes. For example, the outcomes may comprise a fall detection of the subject, a vital sign anomaly of the subject, a risky event detection of the subject, etc. The trained machine learning algorithms may be selected and optimized based on the type of data collected (e.g., input data) and the outcomes being generated (e.g., output data).
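As a non-limiting illustration, fusing contactless-sensor and wearable-device readings into one comprehensive record per timestamp may be sketched as follows; the field names and data shapes are assumptions for illustration only.

```python
# Hypothetical sketch of sensor-data fusion: readings from contactless sensors
# and wearable devices, keyed by timestamp, are merged into one record each.

def fuse_sensor_data(contactless, wearable):
    """Merge two timestamp-keyed reading streams into a comprehensive health record."""
    fused = {}
    for ts, reading in contactless.items():
        fused.setdefault(ts, {}).update(reading)
    for ts, reading in wearable.items():
        fused.setdefault(ts, {}).update(reading)
    return fused

contactless = {0: {"respiratory_rate": 16, "room_temp_c": 21.5}}
wearable = {0: {"heart_rate": 74, "spo2": 0.97}}
print(fuse_sensor_data(contactless, wearable))
```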
[0088] Sensors may be connected together to the system through a communications link. The communications link may be a wired link, such as a USB, serial, or Ethernet connection, or a wireless link, such as a Bluetooth link, a WiFi link, a radio frequency (RF) link, or a Zigbee link. The communications link may be implemented using a wireless transceiver, such as a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver. The communications link may be part of a network, such as a local area network (LAN), a wide area network (WAN), or a cloud-based network. The system may comprise computer processors configured to perform analysis of captured sensor data. The computer processors may comprise an edge device used to detect vital signs and abnormal events. The computer processors may comprise a server containing a database and components for data analysis. In some embodiments, these functions may be implemented on one or more computing devices.
[0089] The data may be measured, collected, and/or recorded in real-time (e.g., by using suitable biosensors, mechanical sensors, audio sensors, image sensors, video sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, etc.). The collected data may be transmitted continuously to the device (e.g., through a wireless transceiver such as a Bluetooth transceiver). The device may be used to monitor a subject (e.g., patient) over a period of time based on the acquired health data, for example, by assessing the subject’s health status over the period of time.
[0090] The software application of the monitoring system may receive data sent from the contactless monitoring device at regular intervals, decode the information, and then store the clinical health data (e.g., vital sign data) in a local database on the mobile device itself. For example, the regular intervals may be about 1 second, about 5 seconds, about 10 seconds, about 15 seconds, about 20 seconds, about 30 seconds, about 1 minute, about 2 minutes, about 5 minutes, about 10 minutes, about 20 minutes, about 30 minutes, about 60 minutes, about 90 minutes, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, or about 24 hours, thereby providing real-time or near real-time updates of clinical health data. The regular intervals may be sufficient to enable methods of the present disclosure to be performed in real-time or substantially real-time.
[0091] The regular intervals may be adjustable by the user or in response to battery consumption requirements. For example, intervals may be extended in order to decrease battery consumption. The data may be localized without leaving the user’s device. The local database may be encrypted to prevent the exposure of sensitive data (e.g., in the event that the user’s phone becomes lost). The local database may require authentication (e.g., by password or biometrics) by the user to grant access to the clinical health data and profiles.
[0092] The system or device may comprise a software application configured to allow a user to pair with, control, and view collected data.
For example, the software application may be configured to allow a user to use a computer processor or a mobile device (e.g., a smartphone, a tablet, a laptop, a smart watch, or smart glasses) to pair with the contactless monitoring device (e.g., through a wireless transceiver such as a Bluetooth transceiver) for transmission of data and/or control signals. The software application may comprise a graphical user interface (GUI) to allow the user to view trends, statistics, and/or alerts generated based on their measured, collected, or recorded data (e.g., currently measured data, previously collected or recorded data, or a combination thereof). For example, the GUI may allow the user to view historical or average trends of a set of data over a period of time (e.g., on an hourly basis, on a daily basis, on a weekly basis, or on a monthly basis). The software application may further communicate with a web-based software application, which may be configured to store and analyze the recorded data. For example, the recorded data may be stored in a database (e.g., a computer server or on a cloud network) for real-time or future processing and analysis.
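As a non-limiting illustration of the "average trends over a period of time" view described above, timestamped readings may be bucketed into hourly averages for display; the data layout is an assumption for illustration only.

```python
# Hypothetical sketch of an hourly-trend view: timestamped vital-sign readings
# are bucketed by hour and averaged for display in a GUI.
from collections import defaultdict

def hourly_averages(readings):
    """readings: list of (timestamp_seconds, value) pairs; returns {hour: mean}."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[ts // 3600].append(value)
    return {hour: sum(v) / len(v) for hour, v in sorted(buckets.items())}

readings = [(0, 70), (1800, 74), (3600, 80), (5400, 84)]
print(hourly_averages(readings))  # → {0: 72.0, 1: 82.0}
```

Daily, weekly, or monthly trends would follow the same pattern with a different bucket width.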
[0093] Health care providers, such as physicians and treating teams of a patient (e.g., the user) may have access to patient alerts, health data (e.g., vital sign data), and/or predictions or assessments generated from such data. Such access may be provided by a web-based dashboard (e.g., a GUI). The web-based dashboard may be configured to display, for example, patient metrics, recent alerts, and/or prediction of health outcomes (e.g., rate or likelihood of deterioration and/or complication). Using the web-based dashboard, health care providers may determine clinical decisions or outcomes based at least in part on such displayed alerts, data, and/or predictions or assessments generated from such data.
[0094] For example, a physician may instruct the patient to undergo one or more clinical tests at the hospital or other clinical site, based at least in part on patient metrics or on alerts detecting or predicting an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) in the subject over a period of time. The monitoring system may generate and transmit such alerts to health care providers when a certain predetermined criterion is met (e.g., a minimum threshold for a likelihood of deterioration of the patient’s state, presence of a disease or disorder, or occurrence of a complication).
[0095] Such a minimum threshold may be, for example, at least about a 5% likelihood, at least about a 10% likelihood, at least about a 20% likelihood, at least about a 25% likelihood, at least about a 30% likelihood, at least about a 35% likelihood, at least about a 40% likelihood, at least about a 45% likelihood, at least about a 50% likelihood, at least about a 55% likelihood, at least about a 60% likelihood, at least about a 65% likelihood, at least about a 70% likelihood, at least about a 75% likelihood, at least about an 80% likelihood, at least about a 85% likelihood, at least about a 90% likelihood, at least about a 95% likelihood, at least about a 96% likelihood, at least about a 97% likelihood, at least about a 98% likelihood, or at least about a 99% likelihood.
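A minimal sketch of this alerting criterion follows; the 80% threshold is one of the example values listed above, chosen arbitrarily for illustration.

```python
# Illustrative sketch: an alert is generated when the predicted likelihood of
# an adverse health condition meets or exceeds a predetermined minimum threshold.

def should_alert(likelihood, threshold=0.80):
    """Return True when the predicted likelihood meets the minimum threshold."""
    return likelihood >= threshold

print(should_alert(0.85))  # → True
print(should_alert(0.40))  # → False
```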
[0096] As another example, a physician may prescribe a therapeutically effective dose of a treatment (e.g., drug), a clinical procedure, or further clinical testing to be administered to the patient based at least in part on patient metrics or on alerts detecting or predicting an adverse health condition (e.g., deterioration of the patient’s health state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) in the subject over a period of time. For example, the physician may prescribe an anti-inflammatory therapeutic in response to an indication of inflammation in the patient, or an analgesic therapeutic in response to an indication of pain in the patient. Such a prescription of a therapeutically effective dose of a treatment (e.g., drug), a clinical procedure, or further clinical testing may be determined without requiring an in-person clinical appointment with the prescribing physician.
[0097] The software application (e.g., mobile application) of the contactless monitoring system may utilize or access external capabilities of artificial intelligence techniques to develop signatures for patient deterioration and disease states. The web-based software may further use these signatures to accurately predict deterioration (e.g., hours to days earlier than with traditional clinical care). Using such a predictive capability, health care providers (e.g., physicians) may be able to make informed, accurate risk-based decisions, thereby allowing more at-risk patients to be treated from home.
[0098] The software application (e.g., mobile application) may analyze acquired health data from a subject (patient) to generate a likelihood of the subject having an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication). The web-based software may further use these signatures to accurately predict adverse health states (e.g., minutes, hours, or days earlier than with traditional clinical care). Using such a predictive capability, health care providers (e.g., physicians) may be able to make informed, accurate risk-based decisions, thereby improving quality of care and monitoring provided to patients.
[0099] For example, the mobile device application may apply a trained (e.g., prediction) algorithm to the acquired health data to generate the likelihood of the subject having an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication). The trained algorithm may comprise an artificial intelligence based classifier, such as a machine learning based classifier, configured to process the acquired health data to generate the likelihood of the subject having the disease or disorder. The machine learning classifier may be trained using clinical datasets from one or more cohorts of patients, e.g., using clinical health data of the patients (e.g., vital sign data) as inputs and known clinical health outcomes (e.g., occurrence or recurrence of a disease or disorder) of the patients as outputs to the machine learning classifier.
[0100] The machine learning classifier may comprise one or more machine learning algorithms. Examples of machine learning algorithms may include a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network (such as a deep neural network (DNN), a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) recurrent neural network (RNN), or a gated recurrent unit (GRU) recurrent neural network (RNN)), deep learning, or other supervised learning algorithm or unsupervised learning algorithm for classification and regression. The machine learning classifier may be trained using one or more training datasets corresponding to patient data.
[0101] Training datasets may be generated from, for example, one or more cohorts of patients having common clinical characteristics (features) and clinical outcomes (labels). Training datasets may comprise a set of features and labels corresponding to the features. Features may correspond to algorithm inputs comprising patient demographic information derived from electronic medical records (EMR) and medical observations. Features may comprise clinical characteristics such as, for example, certain ranges or categories of vital sign measurements, such as heart rate, heart rate variability, blood pressure (e.g., systolic and diastolic), respiratory rate, blood oxygen concentration (SpO2), carbon dioxide concentration in respiratory gases, a hormone level, sweat analysis, blood glucose, body temperature, impedance (e.g., bioimpedance), conductivity, capacitance, resistivity, electromyography, galvanic skin response, neurological signals (e.g., electroencephalography), immunology markers, and other physiological measurements. Features may comprise patient information such as patient age, patient medical history, other medical conditions, current or past medications, and time since the last observation. For example, a set of features collected from a given patient at a given time point may collectively serve as a vital sign signature, which may be indicative of a health state or status of the patient at the given time point.
[0102] For example, ranges of vital sign measurements may be expressed as a plurality of disjoint continuous ranges of continuous measurement values, and categories of vital sign measurements may be expressed as a plurality of disjoint sets of measurement values (e.g., {“high”, “low”}, {“high”, “normal”}, {“low”, “normal”}, {“high”, “borderline high”, “normal”, “low”}, etc.). Clinical characteristics may also include clinical labels indicating the patient’s health history, such as a diagnosis of a disease or disorder, a previous administration of a clinical treatment (e.g., a drug, a surgical treatment, chemotherapy, radiotherapy, immunotherapy, etc.), behavioral factors, or other health status (e.g., hypertension or high blood pressure, hyperglycemia or high blood glucose, hypercholesterolemia or high blood cholesterol, history of allergic reaction or other adverse reaction, etc.).
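As a non-limiting illustration of expressing a continuous measurement as one of a plurality of disjoint categories, a heart-rate reading may be categorized as follows; the cut-off values are illustrative assumptions, not clinical guidance.

```python
# Hypothetical sketch: mapping a continuous vital-sign measurement onto the
# disjoint categories {"low", "normal", "high"}. Cut-offs are illustrative only.

def categorize_heart_rate(bpm):
    if bpm < 60:
        return "low"
    elif bpm <= 100:
        return "normal"
    return "high"

print([categorize_heart_rate(v) for v in (52, 74, 118)])  # → ['low', 'normal', 'high']
```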
[0103] Labels may comprise clinical outcomes such as, for example, a presence, absence, diagnosis, or prognosis of an adverse health condition (e.g., deterioration of the patient’s state, occurrence or recurrence of a disease or disorder, or occurrence of a complication) in the patient. Clinical outcomes may include a temporal characteristic associated with the presence, absence, diagnosis, or prognosis of the adverse health condition in the patient. For example, temporal characteristics may be indicative of the patient having had an occurrence of the adverse health condition within a certain period of time after a previous clinical outcome (e.g., being discharged from the hospital, undergoing an organ transplantation or other surgical operation, undergoing a clinical procedure, etc.). Such a period of time may be, for example, about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 2 days, about 3 days, about 4 days, about 5 days, about 6 days, about 7 days, about 10 days, about 2 weeks, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 6 months, about 8 months, about 10 months, about 1 year, or more than about 1 year.
[0104] Input features may be structured by aggregating the data into bins or alternatively using a one-hot encoding with the time since the last observation included. Inputs may also include feature values or vectors derived from the previously mentioned inputs, such as cross-correlations calculated between separate vital sign measurements over a fixed period of time, and the discrete derivative or the finite difference between successive measurements. Such a period of time may be, for example, about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 2 days, about 3 days, about 4 days, about 5 days, about 6 days, about 7 days, about 10 days, about 2 weeks, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 6 months, about 8 months, about 10 months, about 1 year, or more than about 1 year.
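Two of the derived features named above, the finite difference between successive measurements and a cross-correlation between separate vital-sign series, may be sketched as follows; the zero-lag, normalized (Pearson-style) form of the cross-correlation is an illustrative choice.

```python
# Illustrative derived input features: finite differences and a zero-lag,
# normalized cross-correlation between two equal-length vital-sign series.
import math

def finite_differences(series):
    """Finite difference between successive measurements."""
    return [b - a for a, b in zip(series, series[1:])]

def cross_correlation(x, y):
    """Pearson-style zero-lag cross-correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

heart_rate = [70, 72, 75, 74]
resp_rate = [14, 15, 16, 16]
print(finite_differences(heart_rate))  # → [2, 3, -1]
print(cross_correlation(heart_rate, resp_rate))
```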
[0105] Training records may be constructed from sequences of observations. Such sequences may comprise a fixed length for ease of data processing. For example, sequences may be zero-padded or selected as independent subsets of a single patient’s records.
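The fixed-length, zero-padded sequence construction described above may be sketched minimally as follows.

```python
# Illustrative sketch: variable-length observation sequences are zero-padded
# (or truncated) to a fixed length for ease of data processing.

def to_fixed_length(sequence, length, pad_value=0):
    return (sequence + [pad_value] * length)[:length]

print(to_fixed_length([72, 75, 71], 5))              # → [72, 75, 71, 0, 0]
print(to_fixed_length([72, 75, 71, 70, 69, 68], 5))  # → [72, 75, 71, 70, 69]
```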
[0106] The machine learning classifier algorithm may process the input features to generate output values comprising one or more classifications, one or more predictions, or a combination thereof. For example, such classifications or predictions may include a binary classification of a disease or a non-disease state, a classification between a group of categorical labels (e.g., ‘no adverse condition’, ‘apparent adverse condition’, and ‘likely adverse condition’), a likelihood (e.g., relative likelihood or probability) of developing a particular disease, disorder, or adverse condition, a score indicative of a presence of disease, disorder, or adverse condition, a score indicative of a level of systemic inflammation or other symptoms experienced by the patient, a ‘risk factor’ for the likelihood of incidence or mortality of the patient, a prediction of the time at which the patient is expected to have developed the disease, disorder, or adverse condition, and a confidence interval for any numeric predictions. Various machine learning techniques may be cascaded such that the output of a machine learning technique may also be used as input features to subsequent layers or subsections of the machine learning classifier.
[0107] In order to train the machine learning classifier model (e.g., by determining weights and correlations of the model) to generate real-time classifications or predictions, the model can be trained using datasets. Such datasets may be sufficiently large to generate statistically significant classifications or predictions. For example, datasets may comprise: health record databases of de-identified data including vital sign observations, databases of ambulatory vital sign observations collected via tele-health programs, databases of vital sign observations collected from fitness trackers, vital sign observations from a hospital or other clinical setting, vital sign measurements collected using an FDA-approved monitoring device, and vital sign measurements collected using contactless monitoring devices of the present disclosure. In some cases, datasets are annotated or labeled.
[0108] Datasets may be split into subsets (e.g., discrete or overlapping), such as a training dataset, a development dataset, and a test dataset. For example, a dataset may be split into a training dataset comprising 80% of the dataset, a development dataset comprising 10% of the dataset, and a test dataset comprising 10% of the dataset. The training dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset. The development dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset. The test dataset may comprise about 10%, about 20%, about 30%, about 40%, about 50%, about 60%, about 70%, about 80%, or about 90% of the dataset. Training sets (e.g., training datasets) may be selected by random sampling of a set of data corresponding to one or more patient cohorts to ensure independence of sampling. Alternatively, training sets (e.g., training datasets) may be selected by proportionate sampling of a set of data corresponding to one or more patient cohorts to ensure representative sampling across the cohorts.
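The 80/10/10 split by random sampling described above may be sketched as follows; the fixed seed is an illustrative convenience for reproducibility.

```python
# Illustrative sketch: an 80/10/10 train/development/test split selected by
# random sampling of the records.
import random

def split_dataset(records, train_frac=0.8, dev_frac=0.1, seed=0):
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # random sampling for independence
    n_train = int(len(shuffled) * train_frac)
    n_dev = int(len(shuffled) * dev_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_dev],
            shuffled[n_train + n_dev:])

train, dev, test = split_dataset(list(range(100)))
print(len(train), len(dev), len(test))  # → 80 10 10
```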
[0109] To improve the accuracy of model predictions and reduce overfitting of the model, the datasets may be augmented to increase the number of samples within the training set. For example, data augmentation may comprise rearranging the order of observations in a training record. To accommodate datasets having missing observations, methods to impute missing data may be used, such as forward-filling, back-filling, linear interpolation, and multi-task Gaussian processes. Datasets may be filtered to remove confounding factors. For example, within a database, a subset of patients may be excluded.
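Two of the imputation methods named above, forward-filling and linear interpolation, may be sketched as follows; the interpolation sketch assumes the series begins and ends with observed values.

```python
# Illustrative imputation of missing observations (None) in a vital-sign series.

def forward_fill(series):
    """Carry the last observed value forward over missing observations."""
    filled, last = [], None
    for v in series:
        last = v if v is not None else last
        filled.append(last)
    return filled

def linear_interpolate(series):
    """Fill interior gaps linearly; assumes first and last values are observed."""
    filled = series[:]
    for i, v in enumerate(filled):
        if v is None:
            lo = max(j for j in range(i) if filled[j] is not None)
            hi = min(j for j in range(i + 1, len(series)) if series[j] is not None)
            filled[i] = filled[lo] + (i - lo) / (hi - lo) * (series[hi] - filled[lo])
    return filled

print(forward_fill([70, None, None, 74]))        # → [70, 70, 70, 74]
print(linear_interpolate([70, None, None, 76]))  # → [70, 72.0, 74.0, 76]
```

Back-filling and multi-task Gaussian processes, also named above, would follow analogous but more involved constructions.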
[0110] The machine learning classifier may comprise one or more neural networks, such as a neural network, a convolutional neural network (CNN), a deep neural network (DNN), a recurrent neural network (RNN), a transformer, or a deep RNN. The recurrent neural network may comprise units which can be long short-term memory (LSTM) units or gated recurrent units (GRU). For example, the machine learning classifier may comprise an algorithm architecture comprising a neural network with a set of input features such as vital sign and other measurements, patient medical history, and/or patient demographics. Neural network techniques, such as dropout or regularization, may be used when training the machine learning classifier to prevent overfitting.
[0111] When the machine learning classifier generates a classification or a prediction of a disease, disorder, or complication, an alert or alarm may be generated and transmitted to a health care provider, such as a physician, nurse, or other member of the patient’s treating team within a hospital. Alerts may be transmitted via an automated phone call, a short message service (SMS) or multimedia message service (MMS) message, an e-mail, or an alert within a dashboard. The alert may comprise output information such as a prediction of an adverse condition, a likelihood of the predicted adverse condition, a time until an expected onset of the adverse condition, a confidence interval of the likelihood or time, or a recommended course of treatment for the adverse condition. The neural network may comprise a plurality of sub-networks, each of which is configured to generate a classification or prediction of a different type of output information (e.g., which may be combined to form an overall output of the neural network).
[0112] To validate the performance of the machine learning classifier model, different performance metrics may be generated. For example, an area under the receiver-operating curve (AUROC) may be used to determine the diagnostic capability of the machine learning classifier. For example, the machine learning classifier may use classification thresholds which are adjustable, such that specificity and sensitivity are tunable, and the receiver-operating curve (ROC) can be used to identify the different operating points corresponding to different values of specificity and sensitivity.
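The AUROC may be computed as the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case; a minimal pure-Python sketch with illustrative scores follows.

```python
# Illustrative AUROC computation via the pairwise-comparison (Mann-Whitney)
# formulation: ties between a positive and a negative score count as 0.5.

def auroc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
print(auroc(scores, labels))  # → 0.8888888888888888
```

Sweeping the classification threshold over the sorted scores would trace out the corresponding ROC operating points.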
[0113] In some cases, such as when datasets are not sufficiently large, cross-validation may be performed to assess the robustness of a machine learning classifier model across different training and testing datasets.
[0114] In some cases, while a machine learning classifier model may be trained using a dataset of records which are a subset of a single patient’s observations, the performance of the classifier model’s discrimination ability (e.g., as assessed using an AUROC) is calculated using the entire record for a patient. To calculate performance metrics such as sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), AUPRC, AUROC, or similar, the following definitions may be used. A “false positive” may refer to an outcome in which an alert or alarm has been incorrectly or prematurely activated (e.g., before the actual onset of, or without any onset of, a disease state or condition). A “true positive” may refer to an outcome in which an alert or alarm has been activated at the correct time (within a predetermined buffer or tolerance), and the patient’s record indicates the disease or condition. A “false negative” may refer to an outcome in which no alert or alarm has been activated, but the patient’s record indicates the disease or condition. A “true negative” may refer to an outcome in which no alert or alarm has been activated, and the patient’s record does not indicate the disease or condition.
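Given counts of these four outcomes, the performance metrics named above follow directly from their standard definitions, as in this illustrative sketch.

```python
# Illustrative computation of performance metrics from counts of true/false
# positive/negative alarm outcomes, per the definitions given above.

def performance_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true positives among actual conditions
        "specificity": tn / (tn + fp),  # true negatives among actual non-conditions
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

m = performance_metrics(tp=40, fp=10, fn=10, tn=40)
print(m["sensitivity"], m["specificity"], m["ppv"], m["npv"], m["accuracy"])
# → 0.8 0.8 0.8 0.8 0.8
```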
[0115] The machine learning classifier may be trained until certain predetermined conditions for accuracy or performance are satisfied, such as having minimum desired values corresponding to diagnostic accuracy measures. For example, the diagnostic accuracy measure may correspond to prediction of a likelihood of occurrence of an adverse health condition such as deterioration or a disease or disorder in the subject. As another example, the diagnostic accuracy measure may correspond to prediction of a likelihood of deterioration or recurrence of an adverse health condition such as a disease or disorder for which the subject has previously been treated. For example, a diagnostic accuracy measure may correspond to prediction of likelihood of recurrence of an infection in a subject who has previously been treated for the infection. Examples of diagnostic accuracy measures may include sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, area under the precision-recall curve (AUPRC), and area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) corresponding to the diagnostic accuracy of detecting or predicting an adverse health condition.
[0116] For example, such a predetermined condition may be that the sensitivity of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0117] As another example, such a predetermined condition may be that the specificity of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0118] As another example, such a predetermined condition may be that the positive predictive value (PPV) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0119] As another example, such a predetermined condition may be that the negative predictive value (NPV) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of, for example, at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0120] As another example, such a predetermined condition may be that the area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
[0121] As another example, such a predetermined condition may be that the area under the precision-recall curve (AUPRC) of predicting occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder comprises a value of at least about 0.10, at least about 0.15, at least about 0.20, at least about 0.25, at least about 0.30, at least about 0.35, at least about 0.40, at least about 0.45, at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
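The AUROC values referenced in these predetermined conditions can be computed without sweeping an explicit ROC curve, using the Mann-Whitney rank identity: AUROC equals the probability that a randomly chosen positive record scores higher than a randomly chosen negative one. A minimal pure-Python sketch (the function name is an assumption; a production system would typically use a library implementation):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic.

    scores: classifier outputs (higher = more likely positive).
    labels: 1 for records with the adverse condition, 0 otherwise.
    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

A perfectly separating classifier yields 1.0, a random one about 0.5; a predetermined training condition would then compare this value against a threshold such as 0.80.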
[0122] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a sensitivity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0123] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a specificity of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0124] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a positive predictive value (PPV) of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0125] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with a negative predictive value (NPV) of at least about 50%, at least about 55%, at least about 60%, at least about 65%, at least about 70%, at least about 75%, at least about 80%, at least about 85%, at least about 90%, at least about 95%, at least about 96%, at least about 97%, at least about 98%, or at least about 99%.
[0126] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with an area under the curve (AUC) of a Receiver Operating Characteristic (ROC) curve (AUROC) of at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
[0127] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder with an area under the precision-recall curve (AUPRC) of at least about 0.10, at least about 0.15, at least about 0.20, at least about 0.25, at least about 0.30, at least about 0.35, at least about 0.40, at least about 0.45, at least about 0.50, at least about 0.55, at least about 0.60, at least about 0.65, at least about 0.70, at least about 0.75, at least about 0.80, at least about 0.85, at least about 0.90, at least about 0.95, at least about 0.96, at least about 0.97, at least about 0.98, or at least about 0.99.
[0128] In some embodiments, the trained classifier may be trained or configured to predict occurrence or recurrence of the adverse health condition such as deterioration or a disease or disorder over a period of time before the actual occurrence or recurrence of the adverse health condition (e.g., a period of time including a window beginning about 1 hour, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 7 hours, about 8 hours, about 9 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, about 24 hours, about 36 hours, about 48 hours, about 72 hours, about 96 hours, about 120 hours, about 6 days, about 7 days, about 8 days, about 9 days, about 10 days, about 11 days, about 12 days, about 13 days, about 14 days, about 3 weeks, about 4 weeks, about 1 month, about 2 months, about 3 months, about 4 months, about 5 months, about 6 months, about 7 months, about 8 months, about 9 months, about 10 months, about 11 months, or about 12 months prior to the onset of the health condition).
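Training such an early-warning classifier requires labeling each observation by whether it falls inside the prediction window before onset. A minimal sketch of one labeling convention (the function and its half-open window convention are illustrative assumptions; the horizon parameter corresponds to the lead times enumerated above, e.g., 24 hours):

```python
def label_windows(num_hours, onset_hour, horizon=24):
    """Assign a binary training label to each hourly observation.

    Returns 1 for hours within `horizon` hours before onset (the
    early-warning window, half-open: [onset - horizon, onset)),
    and 0 elsewhere. onset_hour is None if no condition occurred.
    """
    labels = []
    for hour in range(num_hours):
        in_window = (onset_hour is not None
                     and onset_hour - horizon <= hour < onset_hour)
        labels.append(1 if in_window else 0)
    return labels
```

For example, with onset at hour 30 and a 24-hour horizon, hours 6 through 29 are labeled positive, so the classifier learns to fire up to a day ahead of the actual onset.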
[0129] The contactless monitoring device may be lightweight and discreet, and may comprise electronic sensors, a power source (e.g., a battery), and a physical enclosure or housing. The electronic sensors may be wearable or contactless sensors configured to acquire health data from a subject.
[0130] The physical enclosure of the contactless monitoring device may comprise a maximum dimension of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
[0131] For example, the physical enclosure of the contactless monitoring device may comprise a length of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
[0132] For example, the physical enclosure of the contactless monitoring device may comprise a width of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
[0133] For example, the physical enclosure of the contactless monitoring device may comprise a height of no more than about 5 mm, no more than about 1 cm, no more than about 2 cm, no more than about 3 cm, no more than about 4 cm, no more than about 5 cm, no more than about 6 cm, no more than about 7 cm, no more than about 8 cm, no more than about 9 cm, no more than about 10 cm, no more than about 15 cm, no more than about 20 cm, no more than about 25 cm, or no more than about 30 cm.
[0134] The physical enclosure of the contactless monitoring device may have a maximum weight of no more than about 300 grams (g), no more than about 250 g, no more than about 200 g, no more than about 150 g, no more than about 100 g, no more than about 90 g, no more than about 80 g, no more than about 70 g, no more than about 60 g, no more than about 50 g, no more than about 40 g, no more than about 30 g, no more than about 20 g, no more than about 10 g, or no more than about 5 g.
[0135] The contactless monitoring device may be powered by a power source, such as an energy storage device. The energy storage device may be or include a solid state battery or capacitor. The energy storage device may comprise one or more batteries of type alkaline, nickel metal hydride (NiMH), nickel cadmium (Ni-Cd), lithium ion (Li-ion), or lithium polymer (LiPo). For example, the energy storage device may comprise one or more batteries of type AA, AAA, C, D, 9V, or a coin cell battery. The battery may comprise one or more rechargeable batteries or non-rechargeable batteries.
[0136] The battery may comprise a wattage of no more than about 10 watts (W), no more than about 5 W, no more than about 4 W, no more than about 3 W, no more than about 2 W, no more than about 1 W, no more than about 500 milliwatts (mW), no more than about 100 mW, no more than about 50 mW, no more than about 10 mW, no more than about 5 mW, or no more than about 1 mW.
[0137] The battery may comprise a voltage of no more than about 9 volts (V), no more than about 6 V, no more than about 4.5 V, no more than about 3.7 V, no more than about 3 V, no more than about 1.5 V, no more than about 1.2 V, or no more than about 1 V.
[0138] The battery may comprise a capacity of no more than about 50 milliampere hours (mAh), no more than about 100 mAh, no more than about 150 mAh, no more than about 200 mAh, no more than about 250 mAh, no more than about 300 mAh, no more than about 400 mAh, no more than about 500 mAh, no more than about 1,000 mAh, no more than about 2,000 mAh, no more than about 3,000 mAh, no more than about 4,000 mAh, no more than about 5,000 mAh, no more than about 6,000 mAh, no more than about 7,000 mAh, no more than about 8,000 mAh, no more than about 9,000 mAh, or no more than about 10,000 mAh.
[0139] The battery may be configured to be rechargeable with a charging time of about 10 minutes, about 20 minutes, about 30 minutes, about 60 minutes, about 90 minutes, about 2 hours, about 3 hours, about 4 hours, about 5 hours, about 6 hours, about 8 hours, about 10 hours, about 12 hours, about 14 hours, about 16 hours, about 18 hours, about 20 hours, about 22 hours, or about 24 hours.
[0140] The electronic device may be configured to allow the battery to be replaceable. Alternatively, the electronic device may be configured with a battery which is not replaceable by a user.
[0141] The software application of the monitoring system may provide functionality for a user of the monitoring system to control the monitoring system and a graphical user interface (GUI) for the user to view their measured, collected, or recorded clinical health data (e.g., vital sign data). The application may be configured to run on mobile platforms, such as iOS and Android. The application may be run on a variety of mobile devices, such as mobile phones (e.g., Apple iPhone or Android phone), tablet computers (e.g., Apple iPad, Android tablet, or Windows 10 tablet), smart watches (e.g., Apple Watch or Android smart watch), and portable media players (e.g., Apple iPod Touch).
Computer systems
[0142] The present disclosure provides computer systems that are programmed to implement methods of the disclosure. FIG. 3 shows a computer system 301 that is programmed or otherwise configured to implement methods provided herein.
[0143] The computer system 301 can regulate various aspects of the present disclosure, such as, for example, receiving health data of a subject acquired from sensors, processing the health data using a trained algorithm to generate an output indicative of a health status of the subject, providing the output for display on an electronic display, presenting an alert on the electronic display, and transmitting an alert over a network. The computer system 301 can be an electronic device of a user or a computer system that is remotely located with respect to the electronic device. The electronic device can be a mobile electronic device.
[0144] The computer system 301 includes a central processing unit (CPU, also “processor” and “computer processor” herein) 305, which can be a single core or multi core processor, or a plurality of processors for parallel processing. The computer system 301 also includes memory or memory location 310 (e.g., random-access memory, read-only memory, flash memory), electronic storage unit 315 (e.g., hard disk), communication interface 320 (e.g., network adapter) for communicating with one or more other systems, and peripheral devices 325, such as cache, other memory, data storage and/or electronic display adapters. The memory 310, storage unit 315, interface 320 and peripheral devices 325 are in communication with the CPU 305 through a communication bus (solid lines), such as a motherboard. The storage unit 315 can be a data storage unit (or data repository) for storing data. The computer system 301 can be operatively coupled to a computer network (“network”) 330 with the aid of the communication interface 320. The network 330 can be the Internet, an internet and/or extranet, or an intranet and/or extranet that is in communication with the Internet.
[0145] The network 330 in some cases is a telecommunication and/or data network. The network 330 can include one or more computer servers, which can enable distributed computing, such as cloud computing. For example, one or more computer servers may enable cloud computing over the network 330 (“the cloud”) to perform various aspects of analysis, calculation, and generation of the present disclosure, such as, for example, receiving health data of a subject acquired from sensors, processing the health data using a trained algorithm to generate an output indicative of a health status of the subject, providing the output for display on an electronic display, presenting an alert on the electronic display, and transmitting an alert over a network. Such cloud computing may be provided by cloud computing platforms such as, for example, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and IBM cloud. The network 330, in some cases with the aid of the computer system 301, can implement a peer-to-peer network, which may enable devices coupled to the computer system 301 to behave as a client or a server.
[0146] The CPU 305 can execute a sequence of machine-readable instructions, which can be embodied in a program or software. The instructions may be stored in a memory location, such as the memory 310. The instructions can be directed to the CPU 305, which can subsequently program or otherwise configure the CPU 305 to implement methods of the present disclosure. Examples of operations performed by the CPU 305 can include fetch, decode, execute, and writeback.
[0147] The CPU 305 can be part of a circuit, such as an integrated circuit. One or more other components of the system 301 can be included in the circuit. In some cases, the circuit is an application specific integrated circuit (ASIC).
[0148] The storage unit 315 can store files, such as drivers, libraries and saved programs. The storage unit 315 can store user data, e.g., user preferences and user programs. The computer system 301 in some cases can include one or more additional data storage units that are external to the computer system 301, such as located on a remote server that is in communication with the computer system 301 through an intranet or the Internet.
[0149] The computer system 301 can communicate with one or more remote computer systems through the network 330. For instance, the computer system 301 can communicate with a remote computer system of a user. Examples of remote computer systems include personal computers (e.g., portable PC), slate or tablet PC’s (e.g., Apple® iPad, Samsung® Galaxy Tab), telephones, Smart phones (e.g., Apple® iPhone, Android-enabled device, Blackberry®), or personal digital assistants. The user can access the computer system 301 via the network 330.
[0150] Methods as described herein can be implemented by way of machine (e.g., computer processor) executable code stored on an electronic storage location of the computer system 301, such as, for example, on the memory 310 or electronic storage unit 315. The machine executable or machine readable code can be provided in the form of software. During use, the code can be executed by the processor 305. In some cases, the code can be retrieved from the storage unit 315 and stored on the memory 310 for ready access by the processor 305. In some situations, the electronic storage unit 315 can be precluded, and machine-executable instructions are stored on memory 310.
[0151] The code can be pre-compiled and configured for use with a machine having a processor adapted to execute the code, or can be compiled during runtime. The code can be supplied in a programming language that can be selected to enable the code to execute in a pre-compiled or as-compiled fashion.
[0152] Aspects of the systems and methods provided herein, such as the computer system 301, can be embodied in programming. Various aspects of the technology may be thought of as “products” or “articles of manufacture” typically in the form of machine (or processor) executable code and/or associated data that is carried on or embodied in a type of machine readable medium. Machine-executable code can be stored on an electronic storage unit, such as memory (e.g., read-only memory, random-access memory, flash memory) or a hard disk. “Storage” type media can include any or all of the tangible memory of the computers, processors or the like, or associated modules thereof, such as various semiconductor memories, tape drives, disk drives and the like, which may provide non-transitory storage at any time for the software programming. All or portions of the software may at times be communicated through the Internet or various other telecommunication networks. Such communications, for example, may enable loading of the software from one computer or processor into another, for example, from a management server or host computer into the computer platform of an application server. Thus, another type of media that may bear the software elements includes optical, electrical and electromagnetic waves, such as used across physical interfaces between local devices, through wired and optical landline networks and over various air-links. The physical elements that carry such waves, such as wired or wireless links, optical links or the like, also may be considered as media bearing the software. As used herein, unless restricted to non-transitory, tangible “storage” media, terms such as computer or machine “readable medium” refer to any medium that participates in providing instructions to a processor for execution.
[0153] Hence, a machine readable medium, such as computer-executable code, may take many forms, including but not limited to, a tangible storage medium, a carrier wave medium or physical transmission medium. Non-volatile storage media include, for example, optical or magnetic disks, such as any of the storage devices in any computer(s) or the like, such as may be used to implement the databases, etc. shown in the drawings. Volatile storage media include dynamic memory, such as main memory of such a computer platform. Tangible transmission media include coaxial cables; copper wire and fiber optics, including the wires that comprise a bus within a computer system. Carrier-wave transmission media may take the form of electric or electromagnetic signals, or acoustic or light waves such as those generated during radio frequency (RF) and infrared (IR) data communications. Common forms of computer-readable media therefore include, for example: a floppy disk, a flexible disk, hard disk, magnetic tape, any other magnetic medium, a CD-ROM, DVD or DVD-ROM, any other optical medium, punch cards, paper tape, any other physical storage medium with patterns of holes, a RAM, a ROM, a PROM and EPROM, a FLASH-EPROM, any other memory chip or cartridge, a carrier wave transporting data or instructions, cables or links transporting such a carrier wave, or any other medium from which a computer may read programming code and/or data. Many of these forms of computer readable media may be involved in carrying one or more sequences of one or more instructions to a processor for execution.
[0154] The computer system 301 can include or be in communication with an electronic display 335 that comprises a user interface (UI) 340. Examples of user interfaces (UIs) include, without limitation, a graphical user interface (GUI) and web-based user interface. For example, the computer system can include a web-based dashboard (e.g., a GUI) configured to display, for example, patient metrics, recent alerts, and/or prediction of health outcomes, thereby allowing health care providers, such as physicians and treating teams of a patient, to access patient alerts, data (e.g., health data), and/or predictions or assessments generated from such data.
[0155] Methods and systems of the present disclosure can be implemented by way of one or more algorithms. An algorithm can be implemented by way of software upon execution by the central processing unit 305. The algorithm can, for example, receive health data of a subject acquired from sensors, process the health data using a trained algorithm to generate an output indicative of a health status of the subject, provide the output for display on an electronic display, present an alert on the electronic display, and transmit an alert over a network.
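The monitoring loop just described (receive data, run the trained algorithm, display the output, alert when warranted) can be sketched as follows. This is a hypothetical illustration only: the function names, callback interfaces, and the 0.8 alert threshold are assumptions, not part of the disclosed system.

```python
def monitoring_step(read_sensors, model, display, send_alert, threshold=0.8):
    """One pass of the monitoring loop described in [0155].

    read_sensors: callable returning current health data from the sensors.
    model: trained algorithm mapping health data to a risk score in [0, 1].
    display: callable that shows a string on the electronic display.
    send_alert: callable that transmits an alert over a network.
    """
    health_data = read_sensors()          # receive health data from sensors
    risk = model(health_data)             # trained algorithm generates output
    display(f"risk={risk:.2f}")           # provide output on electronic display
    if risk >= threshold:                 # alert condition (assumed threshold)
        send_alert(f"risk {risk:.2f} exceeds {threshold}")
    return risk
```

In a real deployment the callbacks would be backed by the transceiver, the trained classifier, the display adapter, and the network interface described above; here simple stubs suffice to exercise the control flow.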
[0156] Although the description has been described with respect to particular embodiments thereof, these particular embodiments are merely illustrative, and not restrictive. Concepts illustrated in the examples may be applied to other examples and implementations.
[0157] While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. It is not intended that the invention be limited by the specific examples provided within the specification. While the invention has been described with reference to the aforementioned specification, the descriptions and illustrations of the embodiments herein are not meant to be construed in a limiting sense. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. Furthermore, it shall be understood that all aspects of the invention are not limited to the specific depictions, configurations or relative proportions set forth herein which depend upon a variety of conditions and variables. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is therefore contemplated that the invention shall also cover any such alternatives, modifications, variations or equivalents. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims

WHAT IS CLAIMED IS:
1. A system for monitoring a subject, comprising: a plurality of sensors comprising a plurality of contactless sensors, which plurality of sensors are configured to acquire health data of the subject over a period of time; and an electronic device, comprising: an electronic display; a transceiver; and one or more computer processors operatively coupled to the electronic display and the transceiver, which one or more computer processors are configured to (i) receive the health data from the plurality of sensors through the transceiver, (ii) process the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time, and (iii) provide the output for display on the electronic display.
2. The system of claim 1, wherein the plurality of contactless sensors is configured to acquire at least one of audio data of the subject, image data of the subject, video data of the subject, activity data of the subject, environment data of the subject, posture data of the subject, vital sign measurements of the subject, and any combination thereof.
3. The system of claim 1, wherein the plurality of contactless sensors comprises one or more audio sensors configured to acquire the audio data of the subject.
4. The system of claim 3, wherein the one or more audio sensors are selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.
5. The system of claim 1, wherein the plurality of contactless sensors comprises one or more image sensors configured to acquire the image data of the subject.
6. The system of claim 5, wherein the one or more image sensors are selected from the group consisting of a camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
7. The system of claim 1, wherein the plurality of contactless sensors comprises one or more video sensors configured to acquire the video data of the subject.
8. The system of claim 7, wherein the one or more video sensors are selected from the group consisting of a video camera, a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
9. The system of claim 1, wherein the plurality of contactless sensors comprises one or more environment sensors configured to acquire the environment data of the subject.
10. The system of claim 9, wherein the one or more environment sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
11. The system of claim 1, wherein the plurality of contactless sensors comprises one or more posture sensors configured to acquire the posture data of the subject.
12. The system of claim 11, wherein the one or more posture sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
13. The system of claim 1, wherein the plurality of contactless sensors comprises one or more contactless vital sign sensors configured to acquire the vital sign data of the subject.
14. The system of claim 13, wherein the vital sign data comprise one or more measurements selected from the group consisting of respiratory rate and body temperature.
15. The system of claim 13, wherein the one or more contactless vital sign sensors are selected from the group consisting of a respiratory rate monitor and a thermometer.
16. The system of claim 1, wherein the plurality of sensors further comprises one or more wearable sensors configured to acquire health data of the subject.
17. The system of claim 16, wherein the one or more wearable sensors comprise one or more wearable vital sign sensors configured to acquire vital sign data of the subject.
18. The system of claim 17, wherein the vital sign data comprise one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.
19. The system of claim 17, wherein the one or more wearable vital sign sensors are selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, and an electromyography machine.
20. The system of claim 1, wherein the transceiver comprises a wireless transceiver.
21. The system of claim 20, wherein the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.
22. The system of claim 1, wherein the one or more computer processors are further configured to store the acquired health data in a database.
23. The system of claim 22, wherein the database comprises a cloud-based database.
24. The system of claim 1, wherein the one or more computer processors are further configured to present an alert on the electronic display based at least in part on the generated output.
25. The system of claim 1, wherein the one or more computer processors are further configured to transmit an alert over a network to a health care provider of the subject based at least in part on the output.
26. The system of claim 24 or 25, wherein the alert comprises instructions to administer care or treatment to the subject.
27. The system of claim 26, wherein administering the treatment comprises providing a medication to the subject.
28. The system of claim 26, wherein administering the care comprises turning the subject around.
29. The system of claim 25, wherein the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
30. The system of claim 1, wherein the trained algorithm comprises a machine learning-based classifier configured to process the health data to generate the output indicative of the health status of the subject.
31. The system of claim 30, wherein the machine learning-based classifier is selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
32. The system of claim 1, wherein the health status of the subject is a presence, absence, progression, regression, diagnosis, or prognosis of an adverse health condition.
33. The system of claim 32, wherein the adverse health condition is a disease or disorder.
34. The system of claim 33, wherein the disease or disorder is selected from the group consisting of: mobility deterioration, delirium, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), sleep apnea, sleep disorder, stroke, dementia, asthma, psychiatric disorders, cognitive decline, musculoskeletal disorders, and aging effects.
35. The system of claim 1, wherein the health status of the subject is a fall or a vital sign abnormality.
36. The system of claim 35, wherein the vital sign abnormality is selected from the group consisting of: breathing difficulties, sudden infant death syndrome (SIDS), atrial fibrillation (AF), cardiac arrhythmia, respiratory arrhythmia, and fever.
37. The system of claim 1, wherein the subject has received a clinical treatment or procedure.
38. The system of claim 37, wherein the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof.
39. The system of claim 37, wherein the subject is being monitored for complications subsequent to receiving the clinical treatment or procedure.
40. The system of claim 1, wherein the one or more computer processors are configured to further analyze an interaction between the subject and a caregiver of the subject, and provide an assessment of quality of care of the subject based at least in part on the analysis of the interaction between the subject and the caregiver of the subject.
41. The system of any one of claims 1-40, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a sensitivity of at least about 70%.
42. The system of any one of claims 1-40, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a specificity of at least about 70%.
43. The system of any one of claims 1-40, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a positive predictive value of at least about 70%.
44. The system of any one of claims 1-40, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a negative predictive value of at least about 70%.
45. The system of any one of claims 1-40, wherein the one or more computer processors are configured to process the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with an Area-Under-the-Curve (AUC) of at least about 0.70.
46. The system of any one of claims 1-45, wherein the period of time begins at least about 24 hours, about 12 hours, about 10 hours, about 8 hours, about 6 hours, about 4 hours, about 3 hours, about 2 hours, about 1 hour, about 45 minutes, about 30 minutes, about 20 minutes, about 15 minutes, about 10 minutes, about 5 minutes, about 4 minutes, about 3 minutes, about 2 minutes, about 1 minute, about 30 seconds, about 20 seconds, about 15 seconds, about 10 seconds, or about 5 seconds prior to onset of the health condition.
47. The system of any one of claims 1-46, wherein the one or more computer processors are configured to perform (i), (ii), and (iii) in real time or substantially in real time.
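As an illustrative aside (not part of the claims themselves): the performance thresholds recited in claims 41-45 — sensitivity, specificity, positive predictive value, negative predictive value, and AUC — are standard quantities computable from a confusion matrix and a rank statistic. The sketch below shows one conventional way to compute them; the function names and the use of plain Python lists are assumptions for illustration only.

```python
def classification_metrics(y_true, y_pred):
    """Confusion-matrix metrics of claims 41-44 from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

def auc(y_true, scores):
    """AUC (claim 45) via the Mann-Whitney U statistic: the probability that
    a randomly chosen positive outranks a randomly chosen negative,
    counting score ties as one half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, with three positives of which two are detected and three negatives of which two are rejected, all four confusion-matrix metrics equal 2/3, just below the "at least about 70%" floor of claims 41-44.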
48. A method for monitoring a subject, comprising:
(a) receiving, using a transceiver, health data of the subject acquired from a plurality of sensors over a period of time, which plurality of sensors comprises a plurality of contactless sensors;
(b) computer processing the health data using a trained algorithm to generate an output indicative of a health status of the subject over the period of time; and
(c) providing the output for display on an electronic display.
49. The method of claim 48, wherein the health data comprises at least one of audio data of the subject, image data of the subject, video data of the subject, activity data of the subject, environment data of the subject, posture data of the subject, vital sign measurements of the subject, and any combination thereof.
50. The method of claim 48, wherein the plurality of contactless sensors comprises one or more audio sensors configured to acquire the audio data of the subject.
51. The method of claim 50, wherein the one or more audio sensors are selected from the group consisting of an acoustic sensor, a microphone, and any combination thereof.
52. The method of claim 48, wherein the plurality of contactless sensors comprises one or more image sensors configured to acquire the image data of the subject.
53. The method of claim 52, wherein the one or more image sensors are selected from the group consisting of a camera, a charged-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
54. The method of claim 48, wherein the plurality of contactless sensors comprises one or more video sensors configured to acquire the video data of the subject.
55. The method of claim 54, wherein the one or more video sensors are selected from the group consisting of a video camera, a charged-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, a metal oxide semiconductor (MOS) sensor, a dynamic random access memory (DRAM) sensor, and a Quanta Image Sensor (QIS).
56. The method of claim 48, wherein the plurality of contactless sensors comprises one or more environment sensors configured to acquire the environment data of the subject.
57. The method of claim 56, wherein the one or more environment sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
58. The method of claim 48, wherein the plurality of contactless sensors comprises one or more posture sensors configured to acquire the posture data of the subject.
59. The method of claim 58, wherein the one or more posture sensors are selected from the group consisting of audio sensors, image sensors, video sensors, motion sensors, radar sensors, accelerometer sensors, beacon sensors, temperature sensors, depth sensors, light detection and ranging (LIDAR) sensors, infrared sensors, biosensors, and mechanical sensors.
60. The method of claim 48, wherein the plurality of contactless sensors comprises one or more contactless vital sign sensors configured to acquire the vital sign data of the subject.
61. The method of claim 60, wherein the vital sign data comprise one or more measurements selected from the group consisting of respiratory rate and body temperature.
62. The method of claim 60, wherein the one or more contactless vital sign sensors are selected from the group consisting of a respiratory rate monitor and a thermometer.
63. The method of claim 48, wherein the plurality of sensors further comprises one or more wearable sensors configured to acquire health data of the subject.
64. The method of claim 63, wherein the one or more wearable sensors comprise one or more wearable vital sign sensors configured to acquire vital sign data of the subject.
65. The method of claim 64, wherein the vital sign data comprise one or more measurements selected from the group consisting of heart rate, systolic blood pressure, diastolic blood pressure, respiratory rate, blood oxygen concentration (SpO2), blood glucose, body temperature, hormone level, impedance, conductivity, capacitance, resistivity, electrocardiography, electroencephalography, electromyography, galvanic skin response, and neurological signals.
66. The method of claim 64, wherein the one or more wearable vital sign sensors are selected from the group consisting of a heart rate monitor, a blood pressure monitor, a respiratory rate monitor, a blood oxygen monitor, a blood glucose monitor, a thermometer, an electrocardiograph machine, an electroencephalograph machine, and an electromyography machine.
67. The method of claim 48, wherein the transceiver comprises a wireless transceiver.
68. The method of claim 67, wherein the wireless transceiver comprises a WiFi transceiver, a Bluetooth transceiver, a radio frequency (RF) transceiver, or a Zigbee transceiver.
69. The method of claim 48, further comprising storing the acquired health data in a database.
70. The method of claim 69, wherein the database comprises a cloud-based database.
71. The method of claim 48, further comprising presenting an alert on the electronic display based at least in part on the generated output.
72. The method of claim 48, further comprising transmitting an alert over a network to a health care provider of the subject based at least in part on the output.
73. The method of claim 71 or 72, wherein the alert comprises instructions to administer care or treatment to the subject.
74. The method of claim 73, wherein administering the treatment comprises providing a medication to the subject.
75. The method of claim 73, wherein administering the care comprises turning the subject around.
76. The method of claim 72, wherein the network comprises an internet, an intranet, a local area network, a wireless network, a cellular network, or a cloud-based network.
77. The method of claim 48, wherein the trained algorithm comprises a machine learning-based classifier configured to process the health data to generate the output indicative of the health status of the subject.
78. The method of claim 77, wherein the machine learning-based classifier is selected from the group consisting of a support vector machine (SVM), a naive Bayes classification, a random forest, a neural network, a deep neural network (DNN), a convolutional neural network (CNN), a deep CNN, a recurrent neural network (RNN), a deep RNN, a long short-term memory (LSTM) neural network, and any combination thereof.
79. The method of claim 48, wherein the health status of the subject is a presence, absence, progression, regression, diagnosis, or prognosis of an adverse health condition.
80. The method of claim 79, wherein the adverse health condition is a disease or disorder.
81. The method of claim 80, wherein the disease or disorder is selected from the group consisting of: mobility deterioration, delirium, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), sleep apnea, sleep disorder, stroke, dementia, asthma, psychiatric disorders, cognitive decline, musculoskeletal disorders, and aging effects.
82. The method of claim 48, wherein the health status of the subject is a fall or a vital sign abnormality.
83. The method of claim 82, wherein the vital sign abnormality is selected from the group consisting of: breathing difficulties, sudden infant death syndrome (SIDS), atrial fibrillation (AF), cardiac arrhythmia, respiratory arrhythmia, and fever.
84. The method of claim 48, wherein the subject has received a clinical treatment or procedure.
85. The method of claim 84, wherein the clinical treatment or procedure is selected from the group consisting of: a drug treatment, surgery, operation, chemotherapy, radiotherapy, immunotherapy, targeted therapy, childbirth, and a combination thereof.
86. The method of claim 84, further comprising monitoring the subject for complications subsequent to receiving the clinical treatment or procedure.
87. The method of claim 48, further comprising analyzing an interaction between the subject and a caregiver of the subject, and providing an assessment of quality of care of the subject based at least in part on the analysis of the interaction between the subject and the caregiver of the subject.
88. The method of any one of claims 48-87, further comprising processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a sensitivity of at least about 70%.
89. The method of any one of claims 48-87, further comprising processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a specificity of at least about 70%.
90. The method of any one of claims 48-87, further comprising processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a positive predictive value of at least about 70%.
91. The method of any one of claims 48-87, further comprising processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with a negative predictive value of at least about 70%.
92. The method of any one of claims 48-87, further comprising processing the health data using the trained algorithm to generate the output indicative of the health status of the subject over the period of time with an Area-Under-the-Curve (AUC) of at least about 0.70.
93. The method of any one of claims 48-92, wherein the period of time begins at least about 24 hours, about 12 hours, about 10 hours, about 8 hours, about 6 hours, about 4 hours, about 3 hours, about 2 hours, about 1 hour, about 45 minutes, about 30 minutes, about 20 minutes, about 15 minutes, about 10 minutes, about 5 minutes, about 4 minutes, about 3 minutes, about 2 minutes, about 1 minute, about 30 seconds, about 20 seconds, about 15 seconds, about 10 seconds, or about 5 seconds prior to onset of the health condition.
94. The method of any one of claims 48-93, further comprising performing (a), (b), and (c) in real time or substantially in real time.
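As a second illustrative aside: the receive-classify-display loop of steps (a)-(c) in method claim 48, with the alerting of claims 71-72 layered on top, can be sketched in a few lines. Everything below is a hypothetical stand-in — the `Reading` fields, the `classify` thresholds, and the callback names are assumptions for illustration, not the trained algorithm or clinical thresholds of the application.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """A hypothetical bundle of vital-sign measurements from the sensors."""
    heart_rate: float        # beats per minute
    respiratory_rate: float  # breaths per minute
    temperature_c: float     # body temperature, Celsius

def classify(reading):
    """Stand-in for the trained algorithm of step (b): flags a vital sign
    abnormality when any measurement leaves a typical adult resting range.
    Thresholds are illustrative only, not clinical guidance."""
    abnormal = (
        not 40 <= reading.heart_rate <= 120
        or not 8 <= reading.respiratory_rate <= 25
        or reading.temperature_c >= 38.0
    )
    return "vital sign abnormality" if abnormal else "normal"

def monitor(stream, display, notify_provider):
    """Steps (a)-(c): receive each reading, classify it, present the output
    on a display, and transmit an alert (claims 71-72) when abnormal."""
    for reading in stream:
        status = classify(reading)
        display(f"status: {status}")
        if status != "normal":
            notify_provider(f"ALERT: {status} detected")
```

In use, `display` and `notify_provider` would be bound to an electronic display and a network transport (Wi-Fi, Bluetooth, etc., per claims 67-68); here they are plain callbacks so the loop stays self-contained.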
PCT/US2021/061853 2020-12-14 2021-12-03 Systems and methods for augmented health monitoring WO2022132465A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063125211P 2020-12-14 2020-12-14
US63/125,211 2020-12-14

Publications (1)

Publication Number Publication Date
WO2022132465A1 true WO2022132465A1 (en) 2022-06-23

Family

ID=82058043

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/061853 WO2022132465A1 (en) 2020-12-14 2021-12-03 Systems and methods for augmented health monitoring

Country Status (1)

Country Link
WO (1) WO2022132465A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040078219A1 (en) * 2001-12-04 2004-04-22 Kimberly-Clark Worldwide, Inc. Healthcare networks with biosensors
US20050200486A1 (en) * 2004-03-11 2005-09-15 Greer Richard S. Patient visual monitoring system
US20100152543A1 (en) * 2008-09-24 2010-06-17 Biancamed Ltd. Contactless and minimal-contact monitoring of quality of life parameters for assessment and intervention
US20110112442A1 (en) * 2007-05-02 2011-05-12 Earlysense Ltd. Monitoring, Predicting and Treating Clinical Episodes
US20110245633A1 (en) * 2010-03-04 2011-10-06 Neumitra LLC Devices and methods for treating psychological disorders
US20120215556A1 (en) * 2008-09-30 2012-08-23 General Electric Company System and method to manage a quality of delivery of healthcare
US20130172770A1 (en) * 2010-09-22 2013-07-04 Koninklijke Philips Electronics N.V. Method and apparatus for monitoring the respiration activity of a subject
US20140249430A1 (en) * 2003-11-04 2014-09-04 Government of the United States, as Represented by the Secretary of the Army Life sign detection and health state assessment system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YU M. CHI ; GERT CAUWENBERGHS: "Wireless Non-contact EEG/ECG Electrodes for Body Sensor Networks", BODY SENSOR NETWORKS (BSN), 2010 INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 7 June 2010 (2010-06-07), Piscataway, NJ, USA , pages 297 - 301, XP031705072, ISBN: 978-1-4244-5817-2 *

Similar Documents

Publication Publication Date Title
US20210052218A1 (en) Systems and methods for sepsis detection and monitoring
US20210022660A1 (en) Systems and methods for subject monitoring
US11877830B2 (en) Machine learning health analysis with a mobile device
US10561321B2 (en) Continuous monitoring of a user's health with a mobile device
US20190076031A1 (en) Continuous monitoring of a user's health with a mobile device
Yin et al. A health decision support system for disease diagnosis based on wearable medical sensors and machine learning ensembles
Wang et al. Enabling smart personalized healthcare: A hybrid mobile-cloud approach for ECG telemonitoring
US20150359489A1 (en) Smart mobile health monitoring system and related methods
US10980490B2 (en) Method and apparatus for evaluating physiological aging level
Sow et al. Mining of sensor data in healthcare: A survey
WO2019071201A1 (en) Continuous monitoring of a user's health with a mobile device
US9724042B1 (en) Device, system, and method for adjusting biometric sensing rate based on available energy
US10667687B2 (en) Monitoring system for physiological parameter sensing device
Shafik Wearable Medical Electronics in Artificial Intelligence of Medical Things
Srivastava et al. Supervision of Worldwide Healthcare through an IoT-Based System
EP3861558A1 (en) Continuous monitoring of a user's health with a mobile device
Aghav et al. Health track
WO2022132465A1 (en) Systems and methods for augmented health monitoring
WO2021127566A1 (en) Devices and methods for measuring physiological parameters
Sheikdavood et al. Smart Health Monitoring System for Coma Patients using IoT
Sinha Role of artificial intelligence and Internet of Things based medical diagnostics smart health care system for a post-COVID-19 world
CN115697192A (en) Methods and systems for non-invasive prediction, detection and monitoring of viral infections
WO2022120017A1 (en) Systems and methods for contactless respiratory monitoring
WO2020073013A1 (en) Machine learning health analysis with a mobile device
Singh et al. Performance of IoT-Enabled Devices in Remote Health Monitoring Applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21907461

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 02.10.2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21907461

Country of ref document: EP

Kind code of ref document: A1