EP3860436A1 - Machine learning-based health analysis using a mobile device - Google Patents
Machine learning-based health analysis using a mobile device
- Publication number
- EP3860436A1 (application EP19791147.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- fidelity
- health
- user
- trained
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A61B5/02055—Simultaneously evaluating both cardiovascular condition and temperature
- A61B5/7282—Event detection, e.g. detecting unique waveforms indicative of a medical condition
- A61B5/0022—Monitoring a patient using a global network, e.g. telephone networks, internet
- A61B5/02416—Detecting, measuring or recording pulse rate or heart rate using photoplethysmograph signals, e.g. generated by infrared radiation
- A61B5/0245—Detecting, measuring or recording pulse rate or heart rate by using sensing means generating electric signals, i.e. ECG signals
- A61B5/349—Detecting specific parameters of the electrocardiograph cycle
- A61B5/361—Detecting fibrillation
- A61B5/681—Wristwatch-type devices
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data involving training the classification device
- A61B5/7275—Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
- A61B5/742—Details of notification to user or communication with user or patient using visual displays
- A61B5/746—Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
- G16H40/63—ICT specially adapted for the management or operation of medical equipment or devices for local operation
- G16H40/67—ICT specially adapted for the management or operation of medical equipment or devices for remote operation
- G16H50/20—ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/70—ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
- G16Z99/00—Subject matter not provided for in other main groups of this subclass
- A61B5/021—Measuring pressure in heart or blood vessels
- A61B5/02405—Determining heart rate variability
- A61B5/02438—Detecting, measuring or recording pulse rate or heart rate with portable devices, e.g. worn by the patient
- A61B5/1118—Determining activity level
- A61B5/6898—Portable consumer electronic devices, e.g. music players, telephones, tablet computers
- G16H10/60—ICT specially adapted for patient-specific data, e.g. for electronic patient records
- G16H15/00—ICT specially adapted for medical reports, e.g. generation or transmission thereof
- G16H50/30—ICT specially adapted for calculating health indices; for individual health risk assessment
Definitions
- Indicators of an individual include, for example and not by way of limitation: heart rate, heart rate variability, blood pressure, and ECG
- ECG: electrocardiogram
- The value of a health-indicator at a particular time, or its change over time, provides information regarding the state of an individual’s health.
- A low or high heart rate or blood pressure, or an ECG that clearly demonstrates myocardial ischemia, for example, may demonstrate the need for immediate intervention. But readings, a series of readings, or changes in these indicators over time may provide information not recognized by the user, or even a health professional, as needing attention.
- Arrhythmias may occur continuously or may occur intermittently.
- Continuously occurring arrhythmias may be diagnosed most definitively from an electrocardiogram (ECG) of an individual. Because a continuous arrhythmia is always present, ECG analysis may be applied at any time in order to diagnose the arrhythmia. An ECG may also be used to diagnose intermittent arrhythmias. However, because intermittent arrhythmias may be asymptomatic and are by definition intermittent, diagnosis presents the challenge of applying the diagnostic technique at the time when the individual is experiencing the arrhythmia. Thus, actual diagnosis of intermittent arrhythmias is notoriously difficult. This difficulty is compounded with asymptomatic arrhythmias, which account for nearly 40% of arrhythmias in the US.
- Boriani G., Atrial Fibrillation Burden and Atrial Fibrillation Type: Clinical Significance and Impact on the Risk of Stroke and Decision Making for Long-term Anticoagulation, Vascul. Pharmacol. 83:26-35 (Aug. 2016), p. 26.
- Heart rate is conventionally evaluated as a single scalar value, out of context from other data/information that may impact the health-indicator.
- a resting heart rate in the range of 60-100 beats per minute (BPM) may be considered normal.
- a user may generally measure their resting heart rate manually once or twice per day.
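To make the conventional approach concrete, here is a minimal sketch (not from the patent; the function name and threshold policy are illustrative) of evaluating a resting heart rate as a single, context-free scalar against the 60-100 BPM range described above:

```python
# Hypothetical illustration of the conventional, context-free evaluation
# of a resting heart rate as a single scalar value.
def is_resting_hr_normal(bpm: float) -> bool:
    """Return True if a resting heart rate falls within the 60-100 BPM
    range conventionally considered normal."""
    return 60 <= bpm <= 100

print(is_resting_hr_normal(72))   # True
print(is_resting_hr_normal(110))  # False
```

This is exactly the kind of scalar threshold check that ignores other-factors such as activity level, which motivates the contextual approach described below.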
- a mobile sensor platform, for example: a mobile blood pressure cuff, mobile heart rate monitor, or mobile ECG device
- the health-indicator e.g., heart rate
- other data about the user such as and without limitation: activity level, body position, and environmental parameters like air temperature, barometric pressure, location, etc.
- this may result in many thousands of independent health-indicator measurements.
- Unlike a measurement taken once or twice a day, there is relatively little data or medical consensus on what a “normal” sequence of thousands of measurements looks like.
- Devices presently used to continuously measure health-indicators of users/patients range from bulky, invasive, and inconvenient to simple wearable or handheld mobile devices. Presently, these devices do not provide the capability to effectively utilize the data to continuously monitor a person’s health. It is up to a user or health professional to assess the health-indicators in light of other factors that may impact these health-indicators to determine the health status of the user.
- FIGs. 1A-1B depict a convolutional neural network that may be used in accordance with some embodiments as described herein;
- FIGs. 2A-2B depict a recurrent neural network that may be used in accordance with some embodiments as described herein;
- FIG. 3 depicts an alternative recurrent neural network that may be used in accordance with some embodiments as described herein;
- FIGs. 4A-4C depict hypothetical data plots to demonstrate application of some embodiments as described herein;
- FIGs. 5A-5E depict alternative recurrent neural networks in accordance with some embodiments as described herein and hypothetical plots used to describe some of these embodiments;
- FIG. 6 depicts an unrolled recurrent neural network in accordance with some embodiments as described herein;
- FIGs. 7A-7B depict systems and devices in accordance with some embodiments as described herein;
- FIG. 8 depicts a method in accordance with some embodiments as described herein;
- FIGs. 9A-9B depict a method in accordance with some embodiments as described herein and a hypothetical plot of heart rate versus time to demonstrate one or more embodiments;
- FIG. 10 depicts a method in accordance with some embodiments as described herein.
- FIG. 11 depicts hypothetical data plots to demonstrate application of some embodiments as described herein.
- FIG. 12 depicts systems and devices in accordance with some embodiments as described herein.
- Embodiments described herein include devices, systems, methods, and platforms that can detect abnormalities in an unsupervised fashion from time sequences of health-indicator data alone or in combination with other-factor (as defined herein) data utilizing predictive machine learning models.
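The unsupervised detection idea above can be sketched as follows. This is a hypothetical, simplified stand-in (not the patent's implementation): a moving average plays the role of the trained predictive model, and a measurement is flagged as abnormal when it deviates from the prediction by much more than past prediction errors.

```python
from statistics import mean, stdev

def flag_abnormal(series, window=5, k=3.0):
    """Flag indices where a measurement deviates from a simple
    moving-average prediction by more than k residual standard deviations.
    The moving average stands in for a trained predictive model."""
    flags, residuals = [], []
    for i in range(window, len(series)):
        pred = mean(series[i - window:i])          # model's prediction
        resid = series[i] - pred                    # prediction error
        # A residual history is needed before a spread estimate is meaningful.
        if len(residuals) >= 2 and abs(resid) > k * stdev(residuals):
            flags.append(i)
        residuals.append(resid)
    return flags

hr = [70, 71, 69, 72, 70, 71, 70, 72, 71, 140, 70, 71]
print(flag_abnormal(hr))  # the spike at index 9 is flagged
```

No labeled training data is needed: the model learns what "predictable" looks like from the sequence itself, which is the essence of the unsupervised framing.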
- Atrial fibrillation (AF or AFib) is found in 1-2% of the general population, and the presence of AF increases risk of morbidity and adverse outcomes such as stroke and heart failure.
- AFib may be asymptomatic in many people (some estimate as high as 40% of AF patients), and these asymptomatic patients have similar risk profiles for stroke and heart failure as symptomatic patients. See id.
- CIEDs: implantable electrical devices
- SAF: silent AF
- An AF-burden of greater than 5-6 min, and particularly greater than 1 hour, is associated with significantly increased risk of stroke and other negative health outcomes.
- Detection of SAF is challenging, typically requiring some form of continuous monitoring. Presently, continuous monitoring for AF requires bulky, sometimes invasive, and expensive devices, and such monitoring requires a high level of medical-professional oversight and review.
- Many devices in the class of wearable and/or mobile devices continuously obtain data to provide a measurement or calculation of health-indicator data, for example and without limitation: FitBit®, Apple Watch®, and Polar® devices, smartphones, and tablets, among others.
- Other devices include permanent or semi-permanent devices on or in a user/patient (e.g., a Holter monitor), and others may include larger devices in hospitals that may be mobile by virtue of being on a cart. But little is done with this measured data other than periodically observing it on a display or establishing simple data-thresholds. The data, even when observed by trained medical professionals, may frequently appear normal, one primary exception being when a user has readily identifiable acute symptoms. It is tremendously difficult, and practically impossible, for medical professionals to continuously monitor health-indicators to observe anomalies and/or trends in data that may be indicative of something more serious.
- a platform comprises one or more customized software applications (or “applications”) configured to interact with one another either locally or through a distributed network including the cloud and the Internet.
- Applications of a platform as described herein are configured to collect and analyze user data and may include one or more software models.
- the platform includes one or more hardware components (e.g., one or more sensing devices, processing devices, or microprocessors). In some embodiments, a platform is configured to operate together with one or more devices and/or one or more systems. That is, a device as described herein, in some embodiments, is configured to run an application of a platform using a built-in processor, and in some embodiments, a platform is utilized by a system comprising one or more computing devices that interact with or run one or more applications of the platform.
- the present disclosure describes systems, methods, devices, software, and platforms for continuously monitoring a user's data related to one or more health-indicators (for example, not by way of limitation, PPG signals, heart rate, or blood pressure) from a user-device in combination with corresponding (in time) data related to factors that may impact the
- measured health-indicator (referred to herein as "other-factors") to determine whether a user has normal health as judged by or compared to, for example and not by way of limitation, either (i) a group of individuals impacted by similar other-factors, or (ii) the user him/herself impacted by similar other-factors.
- measured health-indicator data, alone or in combination with other-factor data, is input into a trained machine learning model that determines a probability that the user's measured health-indicator is within a healthy range and, if it is not, notifies the user of such.
- the user not being in a healthy range may increase the likelihood that the user is experiencing a health event warranting high-fidelity information to confirm a diagnosis, such as an arrhythmia, which may be symptomatic or asymptomatic.
- the notification may take the form of, for example, requesting the user to obtain an ECG.
- Other high-fidelity measurements may be requested, blood pressure and pulse oximetry to name two; ECG is but one example.
- the high-fidelity measurement, ECG in this embodiment, can be evaluated by algorithms and/or medical professionals to make a notification or diagnosis (collectively referred to herein as "diagnosis", recognizing that only a physician can make a diagnosis).
- the diagnosis may be AFib or any other number of well-known conditions diagnosed utilizing ECGs.
- a diagnosis is used to label a low-fidelity data sequence (e.g., heart rate or PPG), which may include the other-factor data sequence.
- This high-fidelity diagnosis-labeled low-fidelity data sequence is used to train a high-fidelity machine learning model.
- the high-fidelity machine learning model may be trained by unsupervised learning or may be updated from time to time with new training examples.
- a user’s measured low-fidelity health-indicator data sequence and optionally a corresponding (in time) data sequence of other-factors are input into the trained high-fidelity machine learning models to determine a probability and/or prediction the user is experiencing or experienced the diagnosed condition on which the high-fidelity machine learning model was trained.
- This probability may include a probability of when the event begins and when it ends.
- Some embodiments may calculate the atrial fibrillation (AF) burden of a user, or the amount of time a user experiences AF over time.
- AF burden could previously only be determined using cumbersome and expensive Holter or implantable continuous ECG monitoring apparatus.
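The AF-burden computation described above can be sketched as a simple aggregation over interval-level AF labels. This is an illustrative sketch, not the claimed method; the interval length, label source, and the `af_burden` name are assumptions.

```python
# Hypothetical sketch: computing AF burden from per-interval model output.
# Assumes each entry in `labels` covers a fixed-length interval (seconds) and
# is True when that interval was flagged as atrial fibrillation.

def af_burden(labels, interval_s=5.0):
    """Return (total AF seconds, AF fraction of monitored time)."""
    if not labels:
        return 0.0, 0.0
    af_seconds = sum(interval_s for flagged in labels if flagged)
    total_seconds = interval_s * len(labels)
    return af_seconds, af_seconds / total_seconds

# Example: 1 hour of 5-second intervals, 6 minutes flagged as AF.
labels = [True] * 72 + [False] * 648   # 72 * 5 s = 360 s = 6 min of AF
seconds, fraction = af_burden(labels)
print(seconds, fraction)  # 360.0 seconds of AF; burden 0.1 (10%)
```

A burden above the 5-6 minute threshold mentioned above would then be the trigger for a notification.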
- some embodiments described herein can continuously monitor a user's health status and notify the user of a health-status change by continuously monitoring health-indicator data (for example and not by way of limitation, PPG data, blood pressure data, and heart rate data) obtained from a user-worn device alone or in combination with other-factor data.
- “Other-factors”, as used herein, include anything that may impact the health-indicator, and/or may impact the data representing the health-indicator (e.g., PPG data). These other-factors may include a variety of factors such as by way of example not limitation: air temperature, altitude, exercise levels, weight, gender, diet, standing, sitting, falling, lying down, weather, and BMI to name a few.
- a mathematical or empirical model (not a machine learning model) may be used to determine when to notify a user to obtain a high-fidelity measurement, which can then be analyzed and used to train a high-fidelity machine learning model as described herein.
- Some embodiments described herein can detect abnormalities of a user in an unsupervised fashion by: receiving a primary time sequence of health-indicator data; optionally receiving one or more secondary time sequences of other-factor data, corresponding in time with the primary time sequence of health-indicator data, which secondary sequences may come from a sensor or from external data sources; providing the data to a pre-processor, which may perform operations on the data like filtering, caching, averaging, time alignment, buffering, upsampling, and downsampling; providing the time sequences of data to a machine learning model, trained and/or configured to utilize the values of the primary and secondary time sequence(s) to predict next value(s) of the primary sequence at a future time; comparing the predicted primary time sequence value(s) generated by the machine learning model at a specific time t to the measured values of the primary time sequence at time t; and alerting or prompting the user to take an action if the difference between the predicted and measured time sequences exceeds a threshold or criteria.
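The predict/compare/alert pipeline above can be sketched as follows. A trained neural network would supply the prediction in practice; here a moving-average predictor and a fixed threshold of 15 bpm stand in as hypothetical placeholders so the structure is runnable.

```python
# Minimal sketch of the monitoring loop: predict the next primary-sequence
# value, compare against the measurement, and alert when the difference
# exceeds a threshold. The moving-average predictor is a placeholder for a
# trained machine learning model.

def predict_next(history, window=5):
    """Placeholder predictor: mean of the last `window` samples."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def monitor(samples, threshold=15.0):
    """Return (time index, predicted, measured) for samples that deviate
    from the prediction by more than `threshold`."""
    alerts = []
    for t in range(5, len(samples)):
        predicted = predict_next(samples[:t])
        measured = samples[t]
        if abs(measured - predicted) > threshold:
            alerts.append((t, predicted, measured))
    return alerts

# Steady heart rate near 70 bpm, then an abrupt jump to 120 bpm at index 8.
hr = [70, 71, 69, 70, 72, 70, 71, 70, 120]
print(monitor(hr))  # flags only index 8
```

In a deployed system the alert would trigger the notification or the request for a high-fidelity measurement described above.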
- Some embodiments described herein thus detect when the observed behavior of the primary sequence of physiological data, with respect to the passage of time and/or in response to the observed secondary sequence of data, differs from what is expected given the training examples used to train the model.
- If the training examples are gathered from healthy individuals, the system can serve as an abnormality detector. If the data has simply been acquired from a specific user without any other categorization, then the system can serve as a change detector, detecting a change in the health-indicator data that the primary sequence is measuring relative to the time at which the training data was captured.
- Described herein are software platforms, systems, devices, and methods for generating and using trained machine learning models to predict or determine a probability when a user’s measured health-indicator data (primary sequence) under the influence of other-factor(s) (secondary sequence) is outside the bounds of normal for a healthy population (i.e., a global model) under the influence of similar other-factors, or outside the bounds of normal for that particular user (i.e., personalized model) under the influence of similar other-factors, where a notification of such is provided to the user.
- the user may be prompted to obtain additional measured high-fidelity data that can be used to label previously acquired low-fidelity user health-indicator data to generate a different trained high-fidelity machine learning model that has the ability to predict or diagnose abnormalities or events using only low-fidelity health-indicator data, where such abnormalities are typically only identified or diagnosed using high-fidelity data.
- Some embodiments described herein may include inputting a user’s health-indicator data, and optionally inputting corresponding (in time) data of other-factors into a trained machine learning model, where the trained machine learning model predicts the user’s health-indicator data or a probability distribution of the health-indicator data at a future time step.
- the prediction in some embodiments is compared with the user’s measured health-indicator data at the time step of the prediction, where, if the absolute value of the difference exceeds a threshold, the user is notified that his or her health-indicator data is outside a normal range.
- This notification may include a diagnosis or instructions to do something, for example and not by way of limitation obtain additional measurements or contact a health professional.
- health-indicator data and corresponding (in time) data of other-factors from a healthy population of people is used to train the machine learning model. It will be appreciated that the other-factors in training examples used to train the machine learning model may not be averages of the population, rather data for each of the other-factors corresponds in time with collection of the health-indicator data for individuals in the training examples.
- Some embodiments are described as receiving discrete data points in time, predicting discrete data points at a future time from the input and then determining if a loss between discrete measured input at the future time and the predicted value at the future time exceeds a threshold.
- the skilled artisan will readily appreciate that the input data and output predictions may take forms other than a discrete data point or a scalar.
- the skilled artisan will recognize the manner in which the data is segmented is a matter of design choice and may take many different forms.
- Some embodiments partition the health-indicator data sequence (also referred to herein as primary sequence) and the other-data sequence (also referred to herein as secondary sequence) into two segments: past, representing all data before a specific time t, and future, representing all data at or after time t. These embodiments input the health-indicator data sequence for a past time segment and all other-data sequence(s) for the past time segment into a machine learning model configured to predict the most probable future segment of the health-indicator data (or distribution of probable future segments).
- these embodiments input the health-indicator data sequence for a past time segment, all other-data sequences for the past time segment, and other-data sequences from the future segment into a machine learning model configured to predict the most probable future segment of the health-indicator data (or distribution of probable future segments).
- the predicted future segment of the health-indicator data is compared to the user’s measured health-indicator data at the future segment to determine a loss and whether the loss exceeds a threshold, in which case some action is taken.
- the action may include for example and not by way of limitation: notifying the user to obtain additional data (e.g., ECG or blood pressure); notifying the user to contact a healthcare professional; or automatically triggering acquisition of additional data.
- Automatic acquisition of additional data may include, for example and not by way of limitation, ECG acquisition via a sensor operably coupled (wired or wirelessly) to a user-worn computing device, or blood pressure via a mobile cuff around the user's wrist or other appropriate body part coupled to a user-worn computing device.
- the segments of data may include a single data point, many data points over a period of time, or an average of these data points over the time period, where the average may be a true average, median, or mode. In some embodiments the segments may overlap in time.
- Some embodiments detect when the primary sequence of data under a corresponding (in time) other-factor sequence of data differs from what is expected from the training examples, which training examples are collected under similar other-factors. If the training examples are gathered from healthy individuals under similar other-factors, or from data that has been previously categorized as healthy for a specific user under similar other-factors, then these embodiments serve as an abnormality detector from the healthy population or from the specific user, respectively. If the training examples have simply been acquired from a specific user without any other categorization, then these embodiments serve as a change detector, detecting a change in the health-indicators at the time of measurement relative to the time at which the training examples were collected for the specific user.
- Some embodiments described herein utilize machine learning to continuously monitor a person's health-indicators under the impact of one or more other-factors and assess whether the person is healthy in view of a population categorized as healthy under the impact of similar other-factors.
- machine learning algorithms or models including without limitation Bayes, Markov, Gaussian processes, clustering algorithms, generative models, kernel and neural network algorithms
- typical neural networks employ, by way of example not limitation, one or more layers of nonlinear activation functions to predict an output for a received input, and may include one or more hidden layers in addition to the input and output layers.
- a health monitoring system may monitor heart rate and activity data of an individual as low-fidelity data (e.g., heart rate or PPG data) and detect a condition (e.g., AFib) normally detected using high-fidelity data (e.g., ECG data).
- the heart rate of an individual may be provided by a sensor continuously or in discrete intervals (such as every five seconds). The heart rate may be determined based on PPG, pulse oximetry, or other sensors.
- the activity data may be generated as a number of steps taken, an amount of movement sensed, or other data points indicating an activity level.
- the low-fidelity (e.g., heartrate) data and activity data can then be input into a machine learning system to determine a prediction of a high-fidelity outcome.
- the machine learning system may use the low-fidelity data to predict an arrhythmia or other indication of a user’s cardiac health.
- the machine learning system may use a segment of data inputs to determine a prediction. For example, an hour of activity-level data and heart rate data may be input to the machine learning system. The system can then use the data to generate a prediction of a condition such as atrial fibrillation.
- a trained convolutional neural network (CNN) 100 takes input data 102 (e.g., a picture of a boat) into convolutional layers (aka hidden layers) 103 and applies a series of trained weights or filters 104 to the input data in each of the convolutional layers 103.
- the output of the first convolutional layer is an activation map (not shown), which is the input to the second convolutional layer, to which a trained weight or filter (not shown) is applied, where the outputs of the subsequent convolutional layers are activation maps that represent more and more complex features of the input data to the first layer.
- a non-linear layer (not shown) is applied to introduce non-linearity into the problem, which nonlinear layers may include tanh, sigmoid or ReLU.
- a pooling layer (not shown), also referred to as a downsampling layer, may be applied after the nonlinear layers; it takes a filter and a stride of the same length, applies it to the input, and outputs the maximum number in every sub-region the filter convolves around.
- Other options for pooling are average pooling and L2-norm pooling.
- the pooling layer reduces the spatial dimension of the input volume, reducing computational cost and controlling overfitting.
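The pooling operation described above can be illustrated on a small activation map; the 2x2 window, stride 2, and the `pool2x2` helper are illustrative, and average pooling is obtained by swapping the reduction function.

```python
# Illustrative 2x2 max pooling with stride 2 on a small 2-D activation map.
# Each non-overlapping 2x2 window is reduced to a single value, halving each
# spatial dimension, as described in the text.

def pool2x2(a, reduce_fn=max):
    """Apply a 2x2 pooling window with stride 2 to a 2-D list."""
    out = []
    for i in range(0, len(a), 2):
        row = []
        for j in range(0, len(a[0]), 2):
            window = [a[i][j], a[i][j + 1], a[i + 1][j], a[i + 1][j + 1]]
            row.append(reduce_fn(window))
        out.append(row)
    return out

amap = [[1, 3, 2, 1],
        [4, 2, 0, 1],
        [5, 1, 9, 2],
        [0, 2, 3, 4]]
avg = lambda w: sum(w) / len(w)
print(pool2x2(amap))       # max pooling: [[4, 2], [5, 9]]
print(pool2x2(amap, avg))  # average pooling of the same windows
```

L2-norm pooling, also mentioned above, would use `lambda w: sum(x * x for x in w) ** 0.5` as the reduction.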
- the final layer(s) of the network is a fully connected layer, which takes the output of the last convolutional layer and outputs an n-dimensional output vector representing the quantity to be predicted, e.g., probabilities of image classification: 20% automobile, 75% boat, 5% bus, and 0% bicycle, i.e., resulting in predictive output 106 (O*), e.g., this is likely a picture of a boat.
- the output could be a scalar value data point being predicted by the network, a stock price for example.
- Trained weights 104 may be different for each of the convolutional layers 103, as will be described more fully below.
- the neural network needs to be trained on known data inputs or training examples (e.g., many pictures of boats), resulting in trained CNN 100.
- a skilled artisan in neural networks will fully understand the description above provides a somewhat simplistic view of CNNs to provide some context for the present discussion and will fully appreciate the application of any CNN alone or in combination with other neural networks will be equally applicable and within the scope of some embodiments described herein.
- FIG. 1B demonstrates training CNN 108.
- convolutional layers 103 are shown as individual hidden convolutional layers 105, 105' up to convolutional layer 105^(n-1), and the final nth layer is a fully connected layer. It will be appreciated that the last layers may be more than one fully connected layer.
- Training example 111 is input into convolutional layers 103, a nonlinear activation function (not shown) and weights 110, 110' through 110^n are applied to training example 111 in series, where the output of any hidden layer is input to the next layer, and so on until the final nth fully connected layer 105^n produces output 114.
- Output or prediction 114 is compared against training example 111 (e.g., picture of a boat), resulting in difference 116 between output or prediction 114 and training example 111. If difference or loss 116 is less than some preset loss (e.g., output or prediction 114 predicts the object is a boat), the CNN is converged and considered trained. If the CNN has not converged, then, using the technique of backpropagation, weights 110 and 110' through 110^n are updated in accordance with how close the prediction is to the known input. The skilled artisan will appreciate that methods other than backpropagation may be used to adjust the weights.
- the second training example (e.g., a different picture of a boat) is input and the process repeated again with the updated weights, which are then updated again, and so on until the nth training example (e.g., the nth picture of the nth boat) has been input.
- This is repeated over and over with the same n-training examples until the convolutional neural network (CNN) is trained or converges on the correct outputs for the known inputs.
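The train-until-converged loop can be sketched in drastically simplified form: a single weight and squared-error loss stand in for the CNN's filters, and a plain gradient step stands in for backpropagation. All names and constants are illustrative.

```python
# Simplified sketch of the training loop described above: repeated passes
# over the same training examples, updating the weight after each example
# according to its error, until the loss converges below a preset threshold.

def train(examples, lr=0.01, tolerance=1e-6, max_epochs=10_000):
    """Fit y = w * x to (x, y) pairs; stop when the loss converges."""
    w = 0.0
    for _ in range(max_epochs):
        loss = 0.0
        for x, y in examples:          # one pass over the training examples
            pred = w * x
            error = pred - y
            loss += error * error
            w -= lr * 2 * error * x    # gradient of squared error w.r.t. w
        if loss < tolerance:           # converged on the known inputs
            break
    return w

examples = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
print(train(examples))  # approaches 3.0, the weight that fits all examples
```

A CNN repeats the same structure with millions of weights per layer and backpropagation computing the per-weight gradients.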
- Once CNN 108 is trained, weights 110, 110' through 110^n are fixed and used in trained CNN 100; these are weights 104 as depicted in FIG. 1A. As explained, there are different weights for each convolutional layer 103 and for each of the fully connected layers.
- the trained CNN 100 or model is then fed image data to determine or predict that which it is trained to predict/identify (e.g., a boat), as described above.
- Any trained model, CNN, RNN, etc. may be trained further, i.e., modification of the weights may be permitted, with additional training examples or with predicted data output by the model which is then used as a training example.
- the machine learning model can be trained "offline", e.g., trained once on a computational platform separate from the platform using/executing the trained model, and then transferred to that platform.
- embodiments described herein may periodically or continually update the machine learning model based on newly acquired training data. This updated training may occur on a separate computational platform, which delivers the updated trained models to the platform using/executing the re-trained model over a network connection, or the training/re-training may occur on the platform itself.
- A CNN is applicable to data in a fixed array (e.g., a picture, character, word, etc.) or a time sequence of data.
- sequenced health-indicator data and other- factor data can be modeled using a CNN.
- Some embodiments utilize a feed-forward CNN with skip connections and a Gaussian Mixture Model output to determine a probability distribution for the predicted health-indicator, e.g., heart rate, PPG, or arrhythmia.
- Some embodiments can utilize other types and configurations of neural network.
- the number of convolutional layers can be increased or decreased, as well as the number of fully-connected layers.
- the optimal number and proportion of convolutional vs. fully-connected layers can be set experimentally, by determining which configuration gives the best performance on a given dataset.
- the number of convolutional layers could be decreased to 0, leaving a fully-connected network.
- the number of convolutional filters and width of each filter can also be increased or decreased.
- the output of the neural network may be a single, scalar value, corresponding to an exact prediction for the primary time sequence.
- the output of the neural network could be a logistic regression, in which each category corresponds to a specific range or class of primary time sequence values, or any number of alternative outputs readily appreciated by the skilled artisan.
- The Gaussian Mixture Model output in some embodiments is intended to constrain the network to learning well-formed probability distributions and to improve generalization on limited training data.
- The use of multiple elements in the Gaussian Mixture Model in some embodiments is intended to allow the model to learn multi-modal probability distributions.
- a machine learning model combining or aggregating the results of different neural networks could also be used.
- Machine learning models that have an updatable memory or state from previous predictions to apply to subsequent predictions are another approach for modeling sequenced data.
- some embodiments described herein utilize a recurrent neural network.
- In FIG. 2A, a diagram of a trained recurrent neural network (RNN) 200 is shown.
- Trained RNN 200 has updatable state (S) 202 and trained weights (W) 204.
- Input data 206 is input into state 202 where weights (W) 204 are applied, and prediction 206 (P*) is output.
- state 202 is updated based on the input data, thereby serving as memory from the previous state for the next prediction with the next data in sequence.
- FIG. 2B shows trained RNN 200 unrolled, illustrating its applicability to sequenced data. Unrolled, the RNN appears analogous to a CNN, but in an unrolled RNN each apparently analogous layer is a single layer with an updated state, where the same weights are applied in each iteration of the loop.
- the skilled artisan will appreciate the single layer may itself have sub layers, though for clarity of explanation a single layer is depicted here.
- Input data (It) 208 at time t is input into state-at-time t (St) 210 and trained weights 204 are applied within cell-at-time t (Ct) 212.
- The output of Ct 212 is the prediction at time step t+1 (P*t+1) 214 and updated state St+1 216.
- In cell Ct+1 220, It+1 218 is input into St+1 216, the same trained weights 204 are applied, and the output of Ct+1 220 is P*t+2 222.
- St+1 is updated from St; therefore St+1 has memory from St from the previous time step. For example, and not by way of limitation, this memory may include previous health-indicator data or previous other-factor data from one or more previous time steps.
- This process continues for n-steps, where It+n 224 is input into St+n 226 and the same weights 204 are applied.
- the output of cell Ct+n is prediction P*t+n.
- the states are updated from previous time steps giving RNNs the benefit of memory from a previous state. This characteristic makes RNNs an alternative choice to make predictions on sequenced data for some embodiments. Though, and as described above, there are other suitable machine learning techniques for performing such predictions on sequenced data, including CNNs.
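The unrolled loop above can be sketched with a toy single-unit RNN: the same weights are applied at every step, and the updated state carries memory of earlier inputs forward. The scalar weights here are illustrative, not trained values.

```python
# Bare-bones sketch of the unrolled RNN loop: one scalar state updated at
# each time step with the SAME weights, so each prediction carries memory of
# earlier inputs. Real RNNs use vectors and matrices; only the structure is
# kept here.
import math

def rnn_run(inputs, w_in=0.5, w_state=0.5, w_out=1.0):
    """Return the per-step predictions of a toy single-unit RNN."""
    state = 0.0
    predictions = []
    for x in inputs:                       # same weights every iteration
        state = math.tanh(w_in * x + w_state * state)
        predictions.append(w_out * state)  # prediction for the next step
    return predictions

preds = rnn_run([1.0, 0.0, 0.0])
# The first input still influences the later predictions via the state,
# even though the later inputs are zero.
print(preds)
```

This is the memory property the text attributes to the updated state S.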
- RNNs like CNNs, can handle a string of data as input, and output a predicted string of data.
- a simple way to explain this aspect of using an RNN is with the example of natural language prediction. Take the phrase: "The sky is blue." The string of words (i.e., data) has context. So as the state is updated, the string of data is updated from one iteration to the next, which provides context to predict "blue."
- RNNs have a memory component to aid in making predictions on sequenced data. However, the memory in the updated state of an RNN may be limited in how far it can look back, akin to short-term memory.
- Long Short-Term Memory (LSTM) networks address this limitation.
- RNNs have a relatively simple repeating structure, for example they may have a single layer with a nonlinear activation function (e.g., tanh or sigmoid).
- LSTMs similarly have a chain-like structure, but (for example) have four neural network layers, not one. These additional neural network layers give LSTMs the ability to remove or add information to the state (S) by using structures called cell gates. Id.
- FIG. 3 shows a cell 300 for an LSTM RNN.
- Line 302 represents the cell state (S), and can be viewed as an information highway; it is relatively easy for information to flow along the cell state unchanged.
- Cell gates 304, 306, and 308 determine how much information to allow through the state, or along the information highway.
- Cell gate 304 first decides how much information to remove from the cell state St, the so-called forget-gate layer.
- cell gates 306 and 306' determine which information will be added to the cell state, and cell gates 308 and 308' determine what will be output from the cell state as prediction P*t+1.
- the information highway or cell state is now updated cell state St+1 for use in the next cell.
- LSTMs permit RNNs to have a more persistent or long(er)-term memory. LSTMs provide an additional advantage to RNN-based machine learning models in that output predictions take into account context separated from the input data by a longer space or time, depending on how the data is sequenced, than the simpler RNN structure.
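The gate structure described above can be sketched with a scalar LSTM cell; real implementations use separate trained weight matrices per gate, so the shared constant weight here is purely illustrative.

```python
# Schematic scalar LSTM cell following the gates described above: a forget
# gate decides what to drop from the cell state, an input gate what to add,
# and an output gate what to emit as the prediction. The weights are
# illustrative constants, not trained values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, state, hidden, w=1.0):
    """One LSTM step on scalars; returns (new_state, new_hidden)."""
    forget = sigmoid(w * x + w * hidden)       # how much old state to keep
    in_gate = sigmoid(w * x + w * hidden)      # how much new info to admit
    candidate = math.tanh(w * x + w * hidden)  # proposed new information
    out_gate = sigmoid(w * x + w * hidden)     # how much state to expose
    state = forget * state + in_gate * candidate
    hidden = out_gate * math.tanh(state)       # emitted as the prediction
    return state, hidden

state, hidden = 0.0, 0.0
for x in [1.0, 0.5, -0.5]:
    state, hidden = lstm_step(x, state, hidden)
print(state, hidden)
```

Because the forget gate multiplies (rather than overwrites) the cell state, information can persist along the "information highway" across many steps.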
- the primary and secondary time sequences may not be provided to the RNN as vectors at each time step. Instead, the RNN may be provided only the current value of the primary and secondary time sequence(s), along with the future values or aggregate functions of the secondary time sequence(s) within the prediction interval. In this manner, the RNN uses the persistent state vector to retain information about the previous values for use in making predictions.
- Machine learning is well suited for continuous monitoring of one or multiple criteria to identify anomalies or trends, big and small, in input data as compared to training examples used to train the model. Accordingly, some embodiments described herein input a user’s health-indicator data and optionally other-factor data into a trained machine learning model that predicts what a healthy person’s health-indicator data would look like at the next time step and compares the prediction with the user’s measured health-indicator data at the future time step. If the absolute value of the difference (e.g., loss as described below) exceeds a threshold, the user is notified his or her health-indicator data is not in a normal or healthy range.
- the threshold is a number set by the designer and, in some embodiments, may be changed by the user to allow a user to adjust the notification sensitivity.
- the machine learning model of these embodiments may be trained on health-indicator data alone or in combination with corresponding (in time) other-factor data from a population of healthy people, or trained on other training examples to suit the design needs for the model.
- Data from health-indicators, like heart rate data, are sequenced data, and more particularly time-sequenced data.
- Heart rate, for example and not by way of limitation, can be measured in a number of different ways, e.g., by measuring electric signals from a chest strap or derived from a PPG signal.
- Some embodiments take the derived heartrate from the device, where each data point (e.g., heart rate) is produced at approximately equal intervals (e.g., 5s). But, in some cases and in other embodiments the derived heart rate is not provided in roughly equal time steps, for example because the data needed for the derivation is not reliable (e.g., PPG signal is unreliable because the device moved or from light pollution). The same may be said of obtaining the secondary sequence of data from motion sensors or other sensors used to collect the other- factor data.
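One possible pre-processing step for such irregular sampling (a hypothetical sketch, not a step prescribed by the text) is linear interpolation of the timestamped samples onto a fixed 5-second grid:

```python
# Sketch: resample irregularly timestamped heart-rate samples onto a regular
# 5-second grid by linear interpolation. A real pipeline might also filter
# the signal or flag gaps too long to interpolate across.

def resample(times, values, step=5.0):
    """Resample (times, values) onto a regular grid of `step` seconds."""
    grid, out, j = [], [], 0
    t = times[0]
    while t <= times[-1]:
        while times[j + 1] < t:        # advance to the bracketing samples
            j += 1
        t0, t1 = times[j], times[j + 1]
        v0, v1 = values[j], values[j + 1]
        frac = (t - t0) / (t1 - t0)
        grid.append(t)
        out.append(v0 + frac * (v1 - v0))
        t += step
    return grid, out

# Samples arriving at irregular intervals (seconds, bpm):
times = [0.0, 4.0, 11.0, 15.0]
values = [70.0, 72.0, 79.0, 75.0]
print(resample(times, values))
```

The resampled sequence can then be fed to models that expect equally spaced time steps, or the raw irregular timestamps can be preserved via a time-delta vector as described later in the text.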
- the raw signal/data (electric signal from ECG, chest strap, or PPG signals) itself is a time sequence of data that can be used in accordance with some embodiments.
- this description uses PPG to refer to the data representing the health-indicator.
- the skilled artisan will readily appreciate that either form of the data for the health-indicator, raw data, waveform or number derived from raw data or waveform, may be used in accordance with some embodiments described herein.
- Machine learning models that may be used with embodiments described herein include, by way of example not limitation, Bayes, Markov, Gaussian processes, clustering algorithms, generative models, kernel and neural network algorithms. Some embodiments utilize a machine learning model based on a trained neural network, other embodiments utilize a recurrent neural network, and additional embodiments use LSTM RNNs. For the purpose of clarity, and not by way of limitation, recurrent neural networks will be used to describe some embodiments of the present description.
- FIGs. 4A-4C show hypothetical plots against time for PPG (FIG 4A), steps taken (FIG 4B) and air temperature (FIG. 4C).
- PPG is an example of health-indicator data, where steps, activity level, and air temperature are examples of other-factor data for other factors that may impact the health-indicator data.
- the other-data may be obtained from any of many known sources including without limitation accelerometer data, GPS data, a weight scale, user entry etc., and may include without limitation air temperature, activity (running, walking, sitting, cycling, falling, climbing stairs, steps etc.), BMI, weight, height, age etc.
- FIG. 4B is a hypothetical plot of the number of a user's steps at various times
- FIG. 4C is a hypothetical plot of air temperature at various times.
- FIGs 5A-5B depict a schematic for a trained recurrent neural network 500 to receive the input data depicted in FIGs 4A-4C, i.e., PPG (P), steps (R) and air temperature (T).
- these input data are merely examples of health-indicator data and other-factor data. It will also be appreciated that data for more than one health-indicator may be input and predicted, and more or fewer than two other-factor data may be used, where the choice depends on what the model is being designed for. It will be further appreciated by the skilled artisan that other-factor data is collected to correspond in time with the collection or measurement of the health-indicator data.
- FIG 5A depicts trained neural network 500 as a loop.
- P, T and R are input into state 502 of RNN 500, where weights W are applied, and RNN 500 outputs predicted PPG 504 (P * ).
- At step 506 the difference P - P* (ΔP*) is calculated, and at step 508 it is determined whether ΔP* exceeds a threshold. If it does, an alert/notification/detection is issued, which could be, for example and not by way of limitation, a suggestion to see/consult a doctor, a simple notification like a haptic feedback, a request to take an additional measurement like an ECG, or a simple note without recommendation, or any combination thereof.
- a primary sequence of heart rate data (e.g., derived from a PPG signal) and a secondary sequence of other-factor data are provided to the trained machine learning model, which may be an RNN, a CNN, another machine learning model, or a combination of models.
- the machine learning model is configured to receive as input at reference time t:
- a vector (VH) of length 300 of the last 300 health-indicator samples (e.g., heart rate in beats per minute) up to and including any health-indicator data at time t;
- a vector (VTD) of length 300 where the entry at index i, VTD(i), contains the time difference between the timestamps of health-indicator samples VH(i) and VH(i−1);
- a scalar prediction-interval other-factor rate Orate representing the mean other-factor rate (e.g., step rate) measured over the time period from t to t+τ, where τ may be, for example and not by way of limitation, 2.5 minutes and is the future prediction interval.
- the output of this embodiment may be, for example, a probability distribution characterizing the predicted heart rate measured over the time period from t to t+τ.
- the machine learning model is trained with training examples that include continuous time sequences of health-indicator data and other-factor data sequences.
- the notification system assigns a timestamp of t + τ/2 to each predicted health-indicator (e.g., heart rate) distribution, thus centering the predicted distribution within the prediction interval (τ).
- in some embodiments, an alert is generated when the measured health-indicator is more than γ standard deviations away from the mean of the predicted health-indicator values within a particular window W, where γ is a designer-chosen threshold.
- the window W can be applied in a sliding fashion across the sequences of measured and predicted health-indicator values, with each window overlapping the previous window in time by a designer-specified fraction, e.g., 0.5.
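- The sliding-window alerting described above can be sketched in NumPy as follows. The window length, stride (derived from the overlap fraction), and 3-sigma multiplier are hypothetical choices for illustration, not values fixed by this disclosure:

```python
import numpy as np

def window_alerts(measured, predicted, window=10, n_sigma=3.0, overlap=0.5):
    """Return start indices of windows in which any measured sample lies more
    than n_sigma standard deviations from the mean of the predicted samples."""
    step = max(1, int(window * (1.0 - overlap)))  # stride between window starts
    alerts = []
    for start in range(0, len(measured) - window + 1, step):
        m = measured[start:start + window]
        p = predicted[start:start + window]
        mu, sigma = p.mean(), p.std()
        # alert if any measured sample in this window is an outlier vs. prediction
        if sigma > 0 and np.any(np.abs(m - mu) > n_sigma * sigma):
            alerts.append(start)
    return alerts

# hypothetical data: predictions wobble around 70 bpm; one measured spike at 110
pred = 70.0 + np.sin(np.arange(40))
meas = pred.copy()
meas[25] = 110.0
print(window_alerts(meas, pred))
```

- Only the two windows that contain the spike produce alerts; windows where measured and predicted values agree stay quiet.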
- the notification may take any number of different forms. For example, and not by way of limitation, it may notify the user to obtain an ECG and/or a blood pressure measurement, it may direct the computing system (e.g., wearable) to automatically obtain an ECG or blood pressure (for example), it may notify the user to see a doctor, or it may simply inform the user the health-indicator data is not normal.
- the VTD vector addresses the problem of deriving health-indicator data from less than consistent raw data.
- heart rate samples are produced by the Apple Watch algorithm only when it has sufficiently reliable raw PPG data to output a reliable heart rate value, which results in irregular time gaps between heart rate samples.
- this embodiment utilizes the vector for other-factor data (Vo) with the same length as the other vectors to handle different and irregular sample rates between the primary sequence (health- indicator) and secondary sequence (other-factor).
- the secondary sequence, in this embodiment, is remapped or interpolated onto the same time points as the primary time sequence.
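- The remapping of the secondary sequence onto the primary sequence’s time points can be done with simple linear interpolation. The timestamps and step-rate values below are hypothetical:

```python
import numpy as np

# Hypothetical timestamps in seconds: the primary (heart-rate) sequence is
# sampled irregularly, while the secondary (step-rate) sequence has its own grid.
hr_times = np.array([0.0, 1.2, 2.9, 4.1, 6.0])   # primary sequence time points
step_times = np.array([0.0, 2.0, 4.0, 6.0])      # secondary sequence time points
step_rate = np.array([0.0, 10.0, 30.0, 30.0])    # steps/min at those times

# Remap the secondary sequence onto the primary time points by linear
# interpolation so both sequences share one time base as model input.
step_on_hr_grid = np.interp(hr_times, step_times, step_rate)
print(step_on_hr_grid)
```

- After remapping, each primary sample has an other-factor value at exactly the same time point, so the two sequences can be fed to the model as aligned channels.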
- the configuration of data from secondary time sequences, presented as input to a machine learning model for a future prediction time interval, may be modified.
- the single scalar value containing the average other-factor data rate over the prediction interval could be replaced with multiple scalar values, e.g., one for each secondary time sequence.
- a vector of values could be used over the prediction interval.
- the prediction interval may itself be adjusted. A shorter prediction interval, for example, may provide faster response to changes and improved detection of events whose fundamental timescale is short(er), but may also be more sensitive to interference from sources of noise, like motion artifacts.
- the output prediction of the machine learning model itself does not need to be a scalar.
- some embodiments may generate a time series of predictions for multiple times within the time interval between t and t + τ, and the alerting logic may compare each of these predictions with the measured value within the same time interval.
- the machine learning model itself may comprise, for example, a 7-layer feed-forward neural network.
- the first 3 layers may be convolutional layers, each containing 32 kernels with a kernel width of 24 and a stride of 2.
- the first layer may have as input the arrays VH, Vo, and VTD, in three channels.
- the final 4 layers may be fully-connected layers, all utilizing hyperbolic tangent activation functions except the last layer.
- the output of the third layer may be flattened into one array for input into the first fully connected layer.
- the final layer outputs 30 values parameterizing a Gaussian Mixture Model with 10 mixtures (mean, variance, and weight for each mixture).
- the network uses a skip connection between the first and third fully connected layers, such that the output of layer 6 is summed with the output of layer 4 to produce the input to layer 7.
- Standard batch normalization may be used on all layers but the last layer, with a decay of 0.97. The use of skip connections and batch normalization can improve the ability to propagate gradients through the network.
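- The final layer’s 30 outputs parameterizing a 10-mixture Gaussian Mixture Model can be sketched as follows. The use of exp to force positive variances and a softmax to normalize the weights is a common convention assumed here; the disclosure does not fix the exact parameterization:

```python
import numpy as np

def gmm_params(raw, n_mix=10):
    """Turn 30 raw network outputs into a valid 10-mixture GMM:
    unconstrained means, positive variances (via exp), and weights
    that sum to 1 (via a numerically stable softmax)."""
    raw = np.asarray(raw, dtype=float).reshape(3, n_mix)
    means = raw[0]
    variances = np.exp(raw[1])              # strictly positive
    w = np.exp(raw[2] - raw[2].max())       # stable softmax
    weights = w / w.sum()
    return means, variances, weights

def gmm_pdf(x, means, variances, weights):
    """Probability density of the mixture at x (the predicted distribution)."""
    comp = np.exp(-0.5 * (x - means) ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    return float(np.sum(weights * comp))

# with all-zero raw outputs, every mixture is N(0, 1) with weight 0.1
means, var, wts = gmm_params(np.zeros(30))
print(round(gmm_pdf(0.0, means, var, wts), 4))
```

- Because the weights are normalized and the variances are positive by construction, any 30-value output from the network yields a proper probability distribution over the predicted health-indicator.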
- the choice of machine learning model may affect the performance of the system.
- the machine learning model configuration may be separated into two types of considerations.
- First is the model’s internal architecture, meaning the choice of model type (convolutional neural network, recurrent neural network, random forests, generalized nonlinear regression, etc.), as well as the parameters that characterize the implementation of the model (generally, the number of parameters, number of layers, number of decision trees, etc.).
- Second is the model’s external architecture - the arrangement of data being fed into the model and the specific parameters of the problem the model is being asked to solve.
- the external architecture may be characterized in part by the dimensionality and type of data being provided as input to the model, the time range(s) spanned by that data, and the pre-or-post processing done on the data.
- the choice of external architecture is a balance between increasing the number of parameters and amount of information provided as input, which may increase the predictive power of the machine learning model, with the available storage and computational capacity to train and evaluate a larger model, and the availability of sufficient amounts of data to prevent overfitting.
- the number of input vectors, as well as the absolute length (number of elements) and time span covered, may be modified. It is not necessary that each input vector be the same length or cover the same span of time.
- the data does not need to be equally sampled in time - for example and not by way of limitation, one might provide a 6-hour history of heart rate data, in which data less than one hour before t is sampled at a rate of 1 Hz, data more than 1 hour before t but less than 2 hours before t is sampled at a rate of 0.5 Hz, and data older than 2 hours is sampled at a rate of 0.1 Hz, where t is the reference time.
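- The unequal-in-time sampling example above (1 Hz for the last hour, 0.5 Hz for the hour before that, 0.1 Hz for anything older within 6 hours) can be sketched by resampling a raw history onto three concatenated grids. The drifting heart-rate data is hypothetical:

```python
import numpy as np

def multirate_history(times, values, t):
    """Resample a 6-hour history at mixed rates, per the example in the text:
    1 Hz for the last hour before t, 0.5 Hz for 1-2 h before t, and 0.1 Hz
    for 2-6 h before t. Returns one concatenated input vector, oldest first."""
    old    = t - np.arange(21600, 7200, -10.0)  # 0.1 Hz: one sample per 10 s
    mid    = t - np.arange(7200, 3600, -2.0)    # 0.5 Hz: one sample per 2 s
    recent = t - np.arange(3600, 0, -1.0)       # 1 Hz:  one sample per 1 s
    grid = np.concatenate([old, mid, recent])
    return np.interp(grid, times, values)

# hypothetical raw samples: heart rate drifts from 60 to 80 bpm over 6 hours
times = np.linspace(0.0, 21600.0, 50)
values = np.linspace(60.0, 80.0, 50)
hist = multirate_history(times, values, t=21600.0)
print(len(hist))  # 6840 = 1440 + 1800 + 3600 samples
```

- The denser sampling near the reference time t keeps the input vector small while preserving fine detail where it matters most for the prediction.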
- FIG. 5B shows trained RNN 500 unrolled.
- Input data 513 (Pt, Rt, and Tt) is input into state-at-time-t (St) 514 and trained weights 516 are applied.
- the output of cell (Ct) 518 is prediction-at-time t+1 (P*t+1) 520 and updated state St+1 522.
- input data (Pt+1, Rt+1, and Tt+1) 513’ is input into St+1 522 and trained weights 516 are applied, and the output of Ct+1 524 is P*t+2 523.
- St+1 results from updating St.
- St+1 has memory from St from the operation in cell (Ct) 518 at the previous time step.
- This process continues for n steps, where input data (Pn, Rn, and Tn) 513’’ is input into Sn 530 and trained weights 516 are applied.
- the output of cell Cn is prediction 532 P*n+1.
- trained RNNs apply the same weights throughout but, importantly, the states are updated from previous time steps, giving RNNs the benefit of memory from previous time steps.
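- The weight sharing and state memory described above can be sketched in a minimal NumPy loop. The weight shapes, state size, and random values are hypothetical stand-ins for trained weights; this is an illustration of the recurrence, not the disclosed model:

```python
import numpy as np

def rnn_predict(inputs, Wx, Wh, Wo, s0):
    """Unrolled RNN: the SAME weights (Wx, Wh, Wo) are applied at every time
    step, while the state carries memory forward. Each step consumes one
    (PPG, step rate, air temp) triple and emits a prediction for the next step."""
    s = s0
    preds = []
    for x in inputs:                     # x = [P_t, R_t, T_t]
        s = np.tanh(Wx @ x + Wh @ s)     # state update: memory of past steps
        preds.append(float(Wo @ s))      # predicted PPG for the next step
    return preds

rng = np.random.default_rng(42)
Wx = rng.normal(size=(4, 3)) * 0.5       # hypothetical "trained" weights
Wh = rng.normal(size=(4, 4)) * 0.5
Wo = rng.normal(size=(1, 4))
seq = [np.array([0.7, 0.2, 0.5]), np.array([0.8, 0.3, 0.5])]
p = rnn_predict(seq, Wx, Wh, Wo, s0=np.zeros(4))
print(len(p))
```

- Feeding the identical input twice yields two different predictions, because the state at the second step already carries memory of the first — the property the text emphasizes.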
- the order-in-time of inputting the dependent health-indicator data may vary and would still produce the desired result.
- the measured health-indicator data from a previous time step, e.g., Pt−1,
- the other-factor data from the current time step, e.g., Rt and Tt, and
- St, the state at the current time step.
- the model predicts the health-indicator at the current time step, P*t, which is compared to the measured health-indicator data at the present time step to determine if the user’s health-indicator is normal or in a healthy range, as described above.
- FIG. 5C shows an alternative embodiment of a trained RNN to determine whether a user’s health-indicator sequenced data, PPG in our example, is in a band or threshold for a healthy person.
- the input data in this embodiment is a linear combination
- It = αt·P*t + (1 − αt)·Pt, where
- P*t is the predicted health-indicator value at time t, and
- Pt is the measured health-indicator at time t.
- α ranges from 0 to 1 nonlinearly as a function of loss (L), where the loss and α are discussed in more detail below.
- state St, in some embodiments, outputs a probability distribution (β) of the predicted health-indicator data (P*t+1) at time step t+1, where β(P*) is the probability distribution function of the predicted health-indicator (P*).
- the probability distribution function is sampled to select a predicted health-indicator value.
- FIG. 5D shows a hypothetical probability distribution for a range of hypothetical health-indicator data at time t+1.
- This function is sampled, for example at maximum probability 0.95, to determine a predicted health-indicator at time t+1 (P*t+1).
- the probability distribution (βt+1) is also evaluated using the measured or actual health-indicator data (Pt+1), and a probability is determined that the model would have predicted if the actual data had been input into the model.
- in this hypothetical, β(Pt+1) is 0.85.
- a loss may be defined to help determine whether to notify a user his or her health status is not in a normal range as predicted by the trained machine learning model.
- the loss is chosen to model how close the predicted data is to the actual or measured data.
- the skilled artisan will appreciate many ways to define loss.
- for example, the negative log of the probability distribution evaluated at the measured value, −log β(P), is a loss.
- in this embodiment, the loss (L) may be L = −log β(Pt+1).
- L is a measure of how close the predicted data is to the measured or actual data. β ranges from 0 to 1, where 1 means the predicted value and measured value are the same. Therefore, a low loss indicates the predicted value is probably the same as or close to the measured value; in this context it means the measured data looks like it comes from a healthy/normal person.
- thresholds for L are set, e.g., L > 5, where the user is notified the health-indicator data is outside the range considered healthy. Other embodiments may take an average of losses over a period of time and compare the average to a threshold.
- the threshold itself may be a function of a statistical calculation of the predicted values or an average of the predicted values. In some embodiments, a comparison of the measured health-indicator against a range (Prange) may be used to notify the user the health-indicator is not in a healthy range.
- Prange is determined by a method of averaging predicted health-indicator data over the same time range.
- the methods of averaging include, by way of example not limitation, average, arithmetic mean, median and mode.
- outliers are removed so as not to skew the calculated number.
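- The negative-log loss and the averaged-loss-versus-threshold variant described above can be sketched as follows, assuming the −log β form and the example threshold of 5 mentioned in the text; the probability values are hypothetical:

```python
import numpy as np

def loss(prob):
    """Negative log of the predicted distribution evaluated at the measured
    value: prob near 1 gives a loss near 0 (looks healthy/normal), while a
    small prob gives a large loss."""
    return float(-np.log(np.clip(prob, 1e-12, 1.0)))

def should_notify(probs, threshold=5.0):
    """Average losses over a period and compare the average to a threshold,
    one of the variants described in the text (L > 5 per the example)."""
    return float(np.mean([loss(p) for p in probs])) > threshold

print(round(loss(0.85), 3))                  # close prediction -> small loss
print(should_notify([0.9, 0.8, 0.85]))       # healthy-looking run
print(should_notify([0.001, 0.002, 0.001]))  # consistently improbable run
```

- Averaging over a period, rather than alerting on a single sample, reduces spurious notifications from isolated noisy measurements.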
- αt is defined as a function of L and ranges from 0 to 1.
- α(L) may be a linear function, or a non-linear function, or may be linear over some range of L and non-linear over a separate range of L.
- the function α(L) is linear for L between 0 and 3, quadratic for L between 3 and 13, and 1 for L greater than 13.
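- One concrete α(L) matching the piecewise description above can be sketched as follows. The disclosure does not fix the coefficients; the value at L = 3 (`a3`) and the exact quadratic are hypothetical, chosen only so the pieces join continuously:

```python
def alpha(L, a3=0.2):
    """Hypothetical α(L): linear on [0, 3], quadratic on (3, 13], and 1 above
    13. a3 is the (illustrative) value where the linear and quadratic pieces
    meet; the pieces are continuous and α stays in [0, 1]."""
    if L <= 0:
        return 0.0
    if L <= 3:
        return a3 * L / 3.0                 # linear ramp from 0 to a3
    if L <= 13:
        t = (L - 3.0) / 10.0                # 0..1 across the quadratic band
        return a3 + (1.0 - a3) * t * t      # quadratic rise from a3 to 1
    return 1.0                              # saturated for large losses

for L in (0, 3, 8, 13, 20):
    print(L, round(alpha(L), 3))
```

- Any monotone piecewise function with the stated shape would serve; the key property is that small losses keep α near 0 and large losses drive it to 1.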
- when the loss is low, the input data It+1 will be approximately the measured data Pt+1, as αt will be near zero.
- for intermediate losses, α(L) varies quadratically, and the relative contributions of predicted and measured health-indicator data to the input data will also vary.
- the linear combination of predicted health-indicator data and measured health-indicator data weighted by α(L) permits, in this embodiment, weighting the input data between predicted and measured data at any particular time step.
- the input data may also include the other-factor data (Ot). This is only one example of self-sampling, where some combination of predicted data and measured data are used as input to the trained network. The skilled artisan will appreciate many others may be used.
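- The self-sampling blend described above reduces to one line of arithmetic; the heart-rate values below are hypothetical:

```python
def self_sampled_input(p_pred, p_meas, a):
    """Blend predicted and measured health-indicator data into the next model
    input, weighted by α (taken from α(L)):
        I_t = α·P*_t + (1 − α)·P_t
    Low loss (α near 0) feeds mostly measured data; high loss leans on the
    model's own prediction."""
    return a * p_pred + (1.0 - a) * p_meas

print(self_sampled_input(72.0, 70.0, 0.0))   # α = 0: trust the measurement
print(self_sampled_input(72.0, 70.0, 1.0))   # α = 1: trust the prediction
print(self_sampled_input(72.0, 70.0, 0.5))   # even blend
```

- In a full system the blended value would replace the raw measured sample as the health-indicator input at the next time step.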
- Embodiments use a trained machine learning model.
- the machine learning models of some embodiments use a recurrent neural network, which requires a trained RNN.
- FIG. 6 depicts an unrolled RNN to demonstrate training a RNN in accordance with some embodiments.
- Cell 602 has initial state S0 604 and weight matrix W 606. Step-rate data R0, air temperature data T0 and initial PPG data P0 at time step zero are input into state S0, weight W is applied, a predicted PPG (P*1) at the first time step is output from cell 602, and ΔP*1 is calculated using the PPG obtained at time step 1 (P1).
- Cell 602 also outputs the updated state at time step 1 (S1) 608, which goes into cell 610.
- Step-rate data R1, air temperature data T1 and PPG data P1 at time step 1 are input into S1, weight 606 W is applied, a predicted PPG (P*2) at time step 2 is output from cell 610, and ΔP*2 is calculated using the PPG (P2) obtained at time step 2.
- Cell 610 also outputs the updated state at time step 2 (S2), which goes into cell 614.
- Step-rate data R2, air temperature data T2 and PPG data at time step 2 (P2) are input into S2, weight 606 W is applied, and a predicted PPG (P*3) at time step 3 is output from cell 614.
- ΔP*3 is calculated using the PPG obtained at time step 3 (P3). This is continued until the state at time-step-n 616 is output and ΔP*n+1 is calculated.
- the ΔP*’s are used in backpropagation to adjust the weight matrix, similar to the training of convolutional neural networks. However, unlike convolutional networks, the same weight matrix in a recurrent neural network is applied at each iteration; it is only modified in backpropagation during training. Many training examples with health-indicator data and corresponding other-factor data are input into RNN 600 repeatedly until it converges.
- LSTM RNNs may be used in some embodiments, where the states of such networks provide a longer-term contextual analysis of input data, which may provide better prediction when the network learns long(er)-term correlations.
- other machine learning models will fall within the scope of embodiments described herein, and may include, by way of example not limitation, CNNs or other feed-forward networks.
- FIG. 7A depicts a system 700 that predicts whether a user’s measured health-indicators are within or outside a threshold of normal for that of a healthy person under similar other-factors.
- System 700 has machine learning model 702 and health detector 704.
- Embodiments for machine learning model 702 include a trained machine learning model, a trained RNN, CNN or other feed forward network for example (and not by way of limitation).
- the trained RNN, other network or combination of networks may be trained on training examples from a population of healthy people from whom health-indicator data and corresponding (in time) other-factor data has been collected.
- the trained RNN, other network or combination of networks may be trained on training examples from a particular user, making it a personalized trained machine learning model.
- the health-indicator data in this and other embodiments may be one or more health-indicators.
- one or more of PPG data, heartrate data, blood pressure data, body temperature data, blood oxygen concentration data and the like could be used to train the models and to predict the health of a user.
- Health detector 704 uses prediction 708 from machine learning model 702 and input data 710 to determine whether a loss, or other metric determined by comparing the predicted output with the measured data, exceeds a threshold considered the limit of normal and thus indicates an unhealthy condition. System 700 then outputs a notification or the state of the user’s health.
- Input generator 706 continuously obtains data with a sensor (not shown) from a user wearing or in contact with the sensor, where the data represents one or more health-indicators of the user. Corresponding (in time) other-factor data may be collected by another sensor or acquired through other means as described herein or as readily apparent to the skilled artisan.
- Input generator 706 may also collect data to determine/calculate other-factor data.
- Input generator 706 may include a smart watch, wearable or mobile device (e.g., Apple Watch® or FitBit®, smart phone, tablet or laptop computer), a combination of smart watch and mobile device, a surgically implanted device with the ability to transmit data to a mobile device or other portable computing device, or a device on a cart in a medical care facility.
- user input generator 706 has a sensor (e.g., PPG sensor, electrode sensor) to measure data related to one or more health-indicators.
- the smart watch, tablet, mobile phone or laptop computer of some embodiments may carry the sensor, or the sensor may be remotely placed (surgically embedded, contacted to the body remote from the mobile device, or some separate device) where, in all these cases, the mobile device receives the sensor data.
- system 700 may be provided on the mobile devices alone, in combination with other mobile devices, or in combination with other computing systems via a network through which these devices may communicate.
- system 700 may be a smart watch or wearable with machine learning model 702 and health detector 704 located on the device, e.g., the memory of the watch or firmware on the watch.
- the watch may have user input generator 706 and communicate with other computing devices (e.g., a smart phone, tablet or laptop).
- Smart watch 712, in accordance with an embodiment, is depicted.
- Smart watch 712 includes watch 714, which contains all the circuitry, microprocessors and processing devices (not shown) known to the skilled artisan.
- Watch 714 also includes display 716, on which a user’s health-indicator data 718 may be displayed, in this example heart rate data. Also displayed on display 716 may be the predicted health-indicator band 720 for the normal or the healthy population. In FIG. 7B the user’s measured heart rate data does not exceed the predicted healthy band, so in this particular example no notification would be made.
- Watch 714 may also include watch band 722, and high-fidelity sensor 724, for example an ECG sensor.
- watch band 722 may be an expandable cuff to measure blood pressure.
- Low- fidelity sensors 726 are provided on the back of watch 714 to collect user health-indicator data, such as PPG data, which can be used to derive heart rate data or other data like blood pressure, for example.
- a fitness band may be used in some embodiments, such as FitBit or Polar, where the fitness bands have similar processing power and other-factor measurement devices (e.g., PPG and accelerometer sensors).
- FIG. 8 depicts an embodiment of a method 800 for continuously monitoring a user’s health status.
- Step 802 receives the user input data, which may include data for one or more health-indicators (aka primary sequence of data) and corresponding (in time) data for other- factors (aka secondary sequence of data).
- Step 804 inputs the user data into a trained machine learning model, which may include a trained RNN, CNN, other feed-forward network as described herein or other neural network known to the skilled artisan.
- the health-indicator input data may be one or a combination of predicted health-indicator data and measured health-indicator data, e.g., a linear combination, as described in some embodiments herein.
- Step 806 outputs data for one or more predicted health-indicators at a time step, which outputs may include, by way of example not limitation, a single predicted value or a probability distribution as a function of predicted values.
- Step 808 determines a loss based on the predicted health-indicator, where, for example and not by way of limitation, the loss may be a simple difference between predicted and measured health-indicators, or some other appropriately selected loss function (e.g. negative log of a probability distribution evaluated at the value for the measured health-indicator).
- Step 810 determines if the loss exceeds a threshold considered normal or unhealthy, where the threshold may be, for example and not by way of limitation, a simple number picked by the designer, or a more complex function of some parameter related to the prediction.
- step 812 notifies the user that his or her health indicator exceeds a threshold considered normal or healthy.
- the notification may take many forms.
- this information may be visualized to the user.
- the information can be displayed on a user interface, such as a graph that shows (i) measured health-indicator data (e.g., heart rate) and other-factor data (e.g., step count) as a function of time, and (ii) a distribution of predicted health-indicator data (e.g., predicted heart rate values) generated by the machine learning model.
- the user can visually compare the measured data points to the predicted data points and determine by visual inspection whether their heart rate, for example, falls into the range expected by the machine learning model.
- Some embodiments described herein have mentioned using a threshold to determine whether to notify a user or not.
- the user may change the threshold to adjust or tune the system or method to more closely match the user’s personal health knowledge. For example, if the physiological indicator used is blood pressure and the user has higher blood pressure, then embodiments may frequently alert/notify the user that his health-indicator is outside normal or healthy range from a model trained on a healthy population. Thus, certain embodiments permit the user to increase the threshold value so the user is not notified so frequently that his/her health-indicator data exceeds what is considered normal or healthy.
- Some embodiments preferably use the raw data for the health-indicators. If the raw data is processed to derive a specific measurement, e.g., heart rate, this derived data may be used in accordance with embodiments. In some situations, the provider of a health monitoring apparatus does not have control of the raw data, rather what is received is processed data in the form of a calculated health-indicator, e.g., heart rate or blood pressure. As will be appreciated by the skilled artisan, the form of the data used to train a machine learning model should match the form of the data collected from the user and input into the trained model, otherwise the predictions could prove erroneous. For example, the Apple Watch gives heart rate measurement data at unequal time steps, and does not provide raw PPG data.
- a user wears an Apple Watch that outputs heart rate data in accordance with Apple’s PPG processing algorithm with heart rate data at unequal time steps.
- the model is trained on this data.
- Apple deciding to change its algorithm for providing the heart rate data may render the model trained on data from the previous algorithm obsolete for use with data input from the new algorithm.
- some embodiments resample the irregularly spaced data (heart rate, blood pressure data, or ECG data, etc.) onto a regularly spaced grid and sample from the regularly spaced grid when collecting data to train the model. If Apple, or another supplier of data, changes its algorithm, the model needs only to be retrained on newly collected training examples, but the model does not need to be reconstructed to account for the algorithm change.
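- The resampling onto a regular grid can be sketched with linear interpolation; the irregular timestamps, heart-rate values, and 5-second grid step are hypothetical:

```python
import numpy as np

def to_regular_grid(times, hr, step=5.0):
    """Resample irregularly timestamped heart-rate samples (e.g., as emitted
    by a watch's PPG pipeline) onto a regular grid, so training and inference
    always see the same data layout regardless of the vendor's algorithm."""
    grid = np.arange(times[0], times[-1] + step / 2, step)
    return grid, np.interp(grid, times, hr)

# hypothetical irregular samples (seconds, bpm)
t = np.array([0.0, 4.0, 11.0, 15.0, 22.0, 25.0])
hr = np.array([62.0, 64.0, 70.0, 68.0, 66.0, 65.0])
grid, hr_reg = to_regular_grid(t, hr)
print(grid)
```

- Because the model only ever sees the regular grid, a change in the supplier’s sampling behavior requires retraining on resampled data, not redesigning the model’s input layout.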
- the trained machine learning model may be trained on the user’s data, resulting in a personalized trained machine learning model.
- This trained personalized machine learning model can be used in place of or in combination with the machine learning models trained on a healthy population of people described herein. If used by itself, a user’s data is input into the personalized trained machine learning model, which outputs a prediction of that individual’s normal health-indicator for the next time step. That prediction is then compared with the actual/measured data from the next time step, in a manner consistent with embodiments described herein, to determine whether the user’s health-indicator differs by some threshold from what is predicted normal for that user.
- this personalized machine learning model could be used in combination with the machine learning model trained on training examples from a population of healthy people to generate predictions and associated notifications as related to both what is predicted normal for that individual user and predicted normal for the healthy population of people.
- FIG. 9A depicts a method 900 in accordance with another embodiment.
- FIG. 9B shows a hypothetical plot 902 of heart rate (by way of example not limitation) as a function of time for the purpose of explanation.
- Step 904 receives user heart rate data (or other health-indicator data) and, optionally, corresponding (in time) other-factor data, and inputs this data into a personalized-trained machine learning model.
- personalized-trained model is trained on the user’s individual health-indicator data and, optionally, corresponding (in time) other-data as described herein.
- the personalized-trained machine learning model predicts normal heart rate data for that individual user under conditions of the other-factor(s), and step 908 identifies aberrations or anomalies in the user’s health-indicator data as compared to what is predicted as normal for that particular user.
- Some embodiments receive the user’s health-indicator data from a wearable device (e.g., Apple Watch, smart watch, FitBit®, etc.) on the user, or from another mobile device (e.g., tablet, computer, etc.) in communication with a sensor on the user (e.g., Polar® strap, PPG sensor etc.), which is discussed throughout this description.
- a loss may be defined to help determine whether to notify a user, in step 908, that the user’s measured data is anomalous to what is predicted as normal for that particular user.
- the loss is chosen to model how close the prediction is to the actual or measured data.
- the skilled artisan will appreciate many ways to define loss.
- the absolute value of the difference between the predicted value and the measured value, |ΔP*|, is a form of a loss.
- the loss (L), generally, is a measure of how close the predicted data is to the measured or actual data.
- thresholds for L are set, e.g., L > 5, where the user is notified an anomalous condition exists from that predicted for that particular user. This notification may take many forms, as described elsewhere herein. As also described elsewhere herein, other embodiments may take an average of losses over a period of time and compare the average to a threshold. In some embodiments, as described in more detail elsewhere herein, the threshold itself may be a function of a statistical calculation of the predicted data or an average of the predicted data.
- the input and predicted data may be scalar values, or segments of data over a time period.
- a system designer may be interested in 5-minute data segments, and would input all the data prior to time t and all other-data for t + 5 min, predict the health-indicator data for the t + 5 min segment, and determine a loss between the measured health-indicator data for the t + 5 min segment and the predicted health-indicator data for that segment.
- Step 908 determines if an anomaly is present or not. As discussed this may be determined if the loss exceeds a threshold. As previously described, the threshold is set by choice of the designer and based on the purpose of the system being designed. In some embodiments the threshold may be modified by the user, but preferably not so in this embodiment. If an anomaly is not present, the process is repeated at step 904. If an anomaly is present, step 910 notifies or alerts the user to obtain a high-fidelity measurement, an ECG or blood pressure measurement for example and not by way of limitation.
- the high-fidelity data is analyzed by an algorithm, a health professional or both and is described as normal or not normal, and if not normal some diagnosis may be assigned, e.g., AFib, tachycardia, bradycardia, atrial flutter, or high/low blood pressure depending on the high-fidelity measurement obtained.
- notification to record high-fidelity data is equally applicable and possible in other embodiments, and in particular embodiments using general models described above.
- the high-fidelity measurement, in some embodiments, may be obtained directly by the user using a mobile monitoring system, such as ECG or blood pressure systems, which may be associated with the wearable device in some embodiments.
- the notification step 910 causes automatic acquisition of the high-fidelity measurement.
- the wearable device may communicate with a sensor (hard-wired or via wireless communication) and obtain ECG data, or it may communicate with a blood pressure cuff-system (e.g., wrist band of a wearable or an armband cuff) to automatically obtain a blood pressure measurement, or it may communicate with an implanted device such as a pacemaker or ECG electrodes.
- Systems for remotely obtaining an ECG are provided, for example, by AliveCor, Inc.; such systems include (without limitation) one or more sensors contacting the user in two or more locations, where the sensor collects electrical cardiac data that is transmitted, either wired or wirelessly, to a mobile computing device, where an app generates an ECG strip from the data, which can be analyzed by algorithms, a medical professional or both.
- the sensor may be a blood pressure monitor, where the blood pressure data are transmitted, either wired or wirelessly, to the mobile computing device.
- the wearable itself may be a blood pressure system having a cuff with ability to measure health-indicator data and optionally with an ECG sensor similar to that described above.
- the wearable may also include an ECG sensor such as that described in co-owned U.S. Provisional Application No. 61/872,555, the contents of which are incorporated herein by reference.
- the mobile computing device may be, for example and not by way of limitation, a computer tablet (e.g., iPad), smart phone (e.g., iPhone®), wearable (e.g., Apple Watch) or a device (maybe mounted on a cart) in a healthcare facility.
- the mobile computing device could be, in some embodiments, a laptop computer or a computer in communication with some other mobile device.
- a wearable or smartwatch will also be considered mobile computing devices in terms of the capabilities provided in the context of embodiments described herein.
- the sensor may be placed on the band of the wearable where the sensor may transmit the data wirelessly or by wire to the computing device/wearable, or the band may also be a blood pressure monitoring cuff, or both as previously described.
- the sensor may be pads attached to or remote from the phone, where the pads sense electrical cardiac signals and transmit them, wirelessly or by hardwire, to the computing device.
- Step 912 analyzes the high-fidelity data and provides a description or diagnosis, as previously described.
- in step 914, a diagnosis or categorization of the high-fidelity measurement is received by a computing system, which may be in some embodiments the mobile or wearable computing system used to collect the user’s heart rate data (or other health-indicator data), and in step 916 the low-fidelity health-indicator data sequence (heart rate data in this example) is labeled with the diagnosis.
- the labeled user’s low-fidelity data sequence is used to train a high-fidelity machine learning model, and optionally an other-factor data sequence is also provided to train the model.
- the trained high-fidelity machine learning model has the capability to receive measured low-fidelity health-indicator data sequence (e.g., heart rate data or PPG data) and optionally other-factor data and give a probability or predict or diagnose or detect when a user is experiencing an event typically diagnosed or detected using high-fidelity data.
- the trained high-fidelity machine learning model is able to do this because it has been trained on user’s health-indicator data (and optionally other-factor data) labeled with diagnoses of the high-fidelity data.
- the trained model has the ability to predict when a user is having an event associated with one or more of the labels (e.g., Afib, high blood pressure etc.) solely based on measured low-fidelity health-indicator input data sequence, e.g. heart rate or ppg data (and optionally other-factor data).
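The capability described above can be illustrated with a toy sketch in which the trained high-fidelity model is reduced to a logistic scorer over two simple features of a low-fidelity heart-rate sequence (mean rate and beat-to-beat variability). The feature choices, function names, and weights here are assumptions for demonstration only, not the model or parameters of any embodiment.

```python
import math

def hr_features(heart_rates):
    """Extract toy features: mean rate and mean beat-to-beat variability."""
    mean_hr = sum(heart_rates) / len(heart_rates)
    diffs = [abs(b - a) for a, b in zip(heart_rates, heart_rates[1:])]
    variability = sum(diffs) / len(diffs)
    return mean_hr, variability

def afib_probability(heart_rates, w_mean=0.02, w_var=0.35, bias=-4.0):
    """Map a low-fidelity heart-rate sequence to a probability of an AF-like event.

    The weights are made up for illustration; a real model would learn them
    from labeled training examples as described in the text.
    """
    mean_hr, variability = hr_features(heart_rates)
    score = w_mean * mean_hr + w_var * variability + bias
    return 1.0 / (1.0 + math.exp(-score))

steady = [72, 73, 71, 72, 74, 73]        # regular rhythm
erratic = [65, 110, 78, 140, 60, 125]    # highly irregular rhythm
```

An erratic sequence scores a higher event probability than a steady one, mirroring how irregular low-fidelity data can flag a condition normally diagnosed from high-fidelity data.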
- the training of the high-fidelity model can take place on the user’s mobile device, remote from the user’s mobile device, a combination of the two, or in a distributed network.
- the user’s health-indicator data could be stored in a cloud system, and this data can be labeled in the cloud using the diagnosis from step 914.
- a global trained high-fidelity model could be used, which would be trained on labeled training examples from a population of people experiencing these conditions typically diagnosed or detected with high-fidelity measurements. These global training examples would provide low-fidelity data sequences (e.g., heart rate) labeled with conditions diagnosed using a high-fidelity measurement (e.g., Afib called from an ECG by a medical professional or an algorithm).
- plot 902 shows a schematic of heart rate plotted as a function of time.
- Aberrations 920 from the user’s normal heart rate data occurred at times t1, t2, t3, t4, t5, t6, t7, t8.
- Normal, as described above, means that the predicted data for this particular user was within a threshold of the measured data, where the aberrations are outside the threshold.
- At aberrations from normal, some embodiments prompt the user to obtain a more definitive or high-fidelity reading, by way of example and not limitation an ECG reading, identified as ECG1, ECG2, ECG3, ECG4, ECG5, ECG6, ECG7, ECG8.
- the high-fidelity reading could be automatically obtained, the user may obtain it, and it could be things other than an ECG, e.g., blood pressure.
- High-fidelity readings are analyzed by algorithm, health professional or both to identify the high-fidelity data as normal/abnormal and to further identify/diagnose abnormal, AFib for example and not by way of limitation. This information is used to label the health-indicator data (e.g., heart rate or PPG data) at the point(s) of anomaly 920 in the user’s sequenced data.
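The labeling step above can be sketched as follows, assuming (hypothetically) that each anomaly time carries the diagnosis returned for its high-fidelity reading, and that low-fidelity samples within a small window of an anomaly inherit that label. The function name and window size are illustrative, not part of the described embodiments.

```python
def label_sequence(samples, anomalies, window=2):
    """Label low-fidelity samples with high-fidelity diagnoses.

    samples: list of (time, heart_rate) tuples.
    anomalies: dict mapping anomaly time -> diagnosis from the high-fidelity
    reading (e.g., an ECG analyzed by an algorithm or professional).
    """
    labeled = []
    for t, hr in samples:
        label = "normal"
        for t_anom, diagnosis in anomalies.items():
            if abs(t - t_anom) <= window:   # sample falls near the anomaly
                label = diagnosis
        labeled.append((t, hr, label))
    return labeled

# Elevated heart rate around t=5, where an ECG reading was diagnosed as AFib.
samples = [(t, 70 + (15 if 4 <= t <= 6 else 0)) for t in range(10)]
labeled = label_sequence(samples, {5: "AFib"})
```

The resulting (time, heart_rate, label) tuples are the training examples used to train the high-fidelity machine learning model.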
- the distinction between high-fidelity and low-fidelity data is that high-fidelity data or measurements are typically used to make a determination, detection or diagnosis, whereas low-fidelity data cannot readily be used for such.
- an ECG scan may be used to identify, detect or diagnose arrhythmias, whereas heart rate or PPG data do not typically provide this capability.
- suitable machine learning algorithms include, e.g., Bayes, Markov, Gaussian process, clustering, generative model, kernel and neural network algorithms.
- arrhythmias, particularly AF, may not present symptoms, and even when symptoms do present it is notoriously difficult to record an ECG at that moment; without expensive, bulky and sometimes invasive monitoring devices it is incredibly difficult to continuously monitor the user.
- AF burden may have similar import.
- Some embodiments allow for continuous monitoring of arrhythmias (e.g., AF) or other serious conditions using only the continuous monitoring of low-fidelity health-indicator data, such as heart rate or PPG, along with optional other-factor data.
- FIG. 10 depicts a method 1000 in accordance with some embodiments of health monitoring systems and methods.
- Step 1002 receives measured or actual user low-fidelity health-indicator data (e.g., heart rate or PPG data from a sensor on a wearable), and optionally receives corresponding (in time) other-factor data, which may impact the health-indicator data as described herein.
- the low-fidelity health-indicator data may be measured by a mobile computing device, such as a smart watch, other wearable, or computer tablet.
- in step 1004 the user’s low-fidelity health-indicator data (and optionally the other-factor data) is input into a trained high-fidelity machine learning model, which, in step 1006, outputs a predicted identification or diagnosis for the user based on the measured low-fidelity data.
- Step 1008 asks if the identification or diagnosis is normal; if yes, the process starts over. If the identification or diagnosis is abnormal, step 1010 notifies the user of the problem or detection.
- the system, method or platform may be set up to notify any combination of the user, family, friends, healthcare professionals, emergency 911, or the like. Which of these people are notified may depend on the identification, detection or diagnosis. If the identification, detection or diagnosis is life threatening, then certain people may be contacted or notified that may not be notified if the diagnosis is not life threatening.
- the measured health-indicator data sequence is input into the trained high-fidelity machine learning model and the amount of time a user is experiencing an abnormal event (e.g., difference between onset and cessation of the predicted abnormal event) is calculated, permitting a better understanding of the abnormal burden on the user.
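A minimal burden calculation consistent with this description can be sketched as follows, assuming the model emits one normal/abnormal prediction per fixed time step: the burden is the fraction of the monitored period spent between predicted onset and cessation of the abnormal event. The 5-second step and function name are assumptions for illustration.

```python
def abnormal_burden(predictions, step_seconds=5):
    """Fraction of the monitored period predicted abnormal.

    predictions: sequence of booleans, one per time step (True = abnormal).
    """
    abnormal_seconds = sum(step_seconds for p in predictions if p)
    total_seconds = step_seconds * len(predictions)
    return abnormal_seconds / total_seconds

# One predicted abnormal episode (3 steps) in a 12-step monitoring window.
preds = [False] * 6 + [True] * 3 + [False] * 3
burden = abnormal_burden(preds)   # 3 of 12 steps -> 0.25
```

The same tally, restricted to predictions labeled AF, yields the AF burden discussed below.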
- an abnormal event e.g., difference between onset and cessation of the predicted abnormal event
- AF burden may be highly important to understand in preventing stroke and other serious conditions.
- some embodiments allow continuous monitoring of abnormal events with a mobile computing device, a wearable computing device or other portable device capable of only acquiring low-fidelity health-factor data, and optionally other-factor data.
- Fig. 11 depicts example data 1100 analyzed based on low-fidelity data to generate a high-fidelity output prediction or detection, according to some embodiments as described herein. While described with reference to detection of atrial fibrillation, similar data may be generated for additional predictions of high-fidelity diagnosis based on low-fidelity measurements.
- the first chart 1110 shows heart rate calculations over time for a user. The heart rate may be determined based on PPG data or other heart rate sensors.
- the second chart 1120 shows activity data for a user during the same time period. For example, the activity data may be determined based on step count, or other measurements of movement of the user.
- the third chart 1130 shows a classifier output from a machine learning model and a horizontal threshold for when a notification is generated.
- a machine learning model may generate the prediction based on an input of low-fidelity measurements.
- the data in the first chart 1110 and the second chart 1120 may be analyzed by a machine learning system as described further above.
- the result of the machine learning system analysis may be provided as the atrial fibrillation probability shown in chart 1130.
- a health monitoring system can trigger a notification or other alert for the user, a physician, or other users associated with the user.
- the data in charts 1110 and 1120 may be provided as continuous measurements to a machine learning system.
- the heart rate and activity levels may be generated as measurements every 5 seconds in order to provide an accurate measurement.
- a segment of time with multiple measurements can then be input to a machine learning model.
- the previous hour of data can be used as an input to the machine learning model.
- shorter or longer periods of time may be provided rather than one hour.
- the output chart 1130 provides an indication of periods of time in which a user is undergoing an abnormal health event. For example, the periods when the prediction is over a certain confidence level may be used by a health monitoring system to determine atrial fibrillation. This value can then be used to determine an atrial fibrillation burden on the user during the measured time period.
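The thresholding shown in chart 1130 can be sketched as follows: contiguous runs of classifier outputs above a confidence threshold are returned as candidate atrial fibrillation periods, whose total length yields the burden for the measured time period. The threshold value of 0.8 is illustrative, not one specified herein.

```python
def afib_periods(probabilities, threshold=0.8):
    """Return (start_index, end_index) pairs of contiguous above-threshold runs."""
    periods, start = [], None
    for i, p in enumerate(probabilities):
        if p >= threshold and start is None:
            start = i                           # onset of a candidate period
        elif p < threshold and start is not None:
            periods.append((start, i - 1))      # cessation of the period
            start = None
    if start is not None:                       # run extends to the end
        periods.append((start, len(probabilities) - 1))
    return periods

probs = [0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1, 0.9]  # classifier output over time
periods = afib_periods(probs)                        # [(2, 4), (7, 7)]
```

Each returned pair marks a span a health monitoring system could treat as atrial fibrillation when computing burden or triggering a notification.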
- a machine learning model to generate the predicted output in chart 1130 may be trained based on labeled user data.
- the labeled user data may be provided based on high-fidelity data (such as an ECG reading) taken at a time period when low- fidelity data (e.g., PPG, heart rate) and other data (e.g., activity level or steps) is also available.
- the machine learning model is designed to determine if there was likely atrial fibrillation during a preceding time period. For example, the machine learning model may take an hour of low-fidelity data as an input and provide a likelihood there was an event.
- training data may include hours of recorded data for a population of individuals.
- the data can be health-event-labeled-times when a condition was diagnosed based on high- fidelity data.
- the machine learning model may determine that any one-hour window of low-fidelity data containing that event, when input into the untrained machine learning model, should yield a prediction of the health event.
- the untrained machine learning model can then be updated based on comparing the prediction with the label. After repeating for a number of iterations and determining that the machine learning model has converged, it may be used by a health monitoring system to monitor for atrial fibrillation of users based on low-fidelity data. In various embodiments, conditions other than atrial fibrillation may be detected using low-fidelity data.
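The training loop described above, reduced to a toy example: logistic regression stands in for the unspecified model architecture, each example is a window of low-fidelity features carrying a 0/1 label derived from a high-fidelity diagnosis, and the weights are updated by comparing the prediction with the label over repeated iterations. All names, features, and values are illustrative assumptions.

```python
import math

def train(examples, lr=0.5, epochs=200):
    """Fit logistic-regression weights by prediction-vs-label updates."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in examples:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability of the event
            err = p - y                       # compare prediction with label
            w[0] -= lr * err * x[0]           # update toward the label
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Features: (mean heart rate / 100, variability / 10); label 1 = AF diagnosed
# from high-fidelity data during that window.
examples = [((0.70, 0.2), 0), ((0.72, 0.1), 0), ((1.10, 0.9), 1), ((1.20, 1.1), 1)]
w, b = train(examples)
```

After convergence on such labeled windows, the model scores new one-hour windows of low-fidelity data.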
- Figure 12 illustrates a diagrammatic representation of a machine in the example form of a computer system 1200 within which a set of instructions, for causing the machine to perform any one or more of the methodologies discussed herein, may be executed.
- the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet.
- the machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, a hub, an access point, a network access control device, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- computer system 1200 may be representative of a
- the exemplary computer system 1200 includes a processing device 1202, a main memory 1204 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM)), a static memory 1206 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 1218, which communicate with each other via a bus 1230.
- Processing device 1202 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or other processing device. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 1202 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 1202 is configured to execute processing logic 1226, which may be one example of a health-monitor 1250 and related systems for performing the operations and steps discussed herein.
- the data storage device 1218 may include a machine-readable storage medium 1228, on which is stored one or more sets of instructions 1222 (e.g., software) embodying any one or more of the methodologies of functions described herein, including instructions to cause the processing device 1202 to execute a health-monitor 1250 and related processes as described herein.
- the instructions 1222 may also reside, completely or at least partially, within the main memory 1204 or within the processing device 1202 during execution thereof by the computer system 1200; the main memory 1204 and the processing device 1202 also constituting machine-readable storage media.
- the instructions 1222 may further be transmitted or received over a network 1220 via the network interface device 1208.
- the machine-readable storage medium 1228 may also be used to store instructions to perform a method for monitoring user health, as described herein. While the machine-readable storage medium 1228 is shown in an exemplary embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) that store the one or more sets of instructions.
- a machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer).
- the machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
- some embodiments may be practiced in distributed computing environments where the machine-readable medium is stored on and/or executed by more than one computer system.
- the information transferred between computer systems may either be pulled or pushed across the communication medium connecting the computer systems.
- Embodiments of the claimed subject matter include, but are not limited to, various operations described herein. These operations may be performed by hardware components, software, firmware, or a combination thereof.
- the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances.
- the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
- Some example implementations provide a method of monitoring a user’s cardiac health.
- the method can include receiving measured health-indicator data and other-factor data of a user at a first time, inputting, by a processing device, the health-indicator data and other-factor data into a machine learning model, wherein the machine learning model generates predicted health-indicator data at the next time step, receiving the user’s data at the next time step, determining, by the processing device, a loss at the next time step, wherein the loss is a measure between the predicted health-indicator data at the next time step and the user’s measured health-indicator data at the next time step, determining that the loss exceeds a threshold, and outputting, in response to determining that the loss exceeds the threshold, a notification to the user.
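A minimal sketch of this method, with a moving-average predictor standing in as an assumption for the trained machine learning model (the document contemplates neural networks): the loss is the absolute difference between the predicted and measured value at the next time step, and a notification is recorded when the loss exceeds a threshold. The threshold of 20 bpm and window size are illustrative values.

```python
def predict_next(history, k=3):
    """Predict the next health-indicator value from the last k measurements."""
    recent = history[-k:]
    return sum(recent) / len(recent)

def monitor(measurements, threshold=20.0, k=3):
    """Return indices where |predicted - measured| exceeds the threshold."""
    notifications = []
    for i in range(k, len(measurements)):
        predicted = predict_next(measurements[:i], k)
        loss = abs(predicted - measurements[i])   # loss at the next time step
        if loss > threshold:
            notifications.append(i)               # would notify the user here
    return notifications

heart_rates = [70, 72, 71, 73, 72, 120, 71, 70]   # aberration at index 5
alerts = monitor(heart_rates)                      # flags the spike at index 5
```

A real implementation would replace `predict_next` with the trained model and route the notification as described in the method.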
- the trained machine learning model is a trained generative neural network. In some example implementations of the method of any example implementations the trained machine learning model is a feed-forward network. In some example implementations of the method of any example implementations the trained machine learning model is a RNN. In some example implementations of the method of any example implementations the trained machine learning model is a CNN.
- the trained machine learning model is trained on training examples from one or more of: a healthy population, a population with heart disease, and the user.
- the loss at the next time step is the absolute value of the difference between the predicted health-indicator data at the next time step and the user’s measured health-indicator at the next time step.
- the predicted health-indicator data is a probability distribution, and wherein the predicted health- indicator data at the next time step is sampled from the probability distribution.
- the predicted health-indicator data at the next time step is sampled according to a sampling technique selected from the group consisting of: the predicted health-indicator data at maximum probability; and random sampling the predicted health-indicator data from the probability distribution.
- the predicted health-indicator data is a probability distribution (b), and wherein the loss is determined based on a negative logarithm of the probability distribution at the next time step evaluated with the user’s measured health-indicator at the next time step.
- the method further includes self-sampling of the probability distribution.
- the method further includes averaging the predicted health-indicator data over a period of time steps, averaging the user’s measured health-indicator data over the period of time steps, and determining the loss based on an absolute value difference between the predicted health-indicator data and the measured health-indicator data.
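The averaged-loss variant above can be sketched directly: both sequences are averaged over the window of time steps before the absolute difference is taken, which smooths over single-step sampling noise. The window contents below are illustrative.

```python
def averaged_loss(predicted, measured):
    """Absolute difference between window averages of predicted and measured data."""
    mean_pred = sum(predicted) / len(predicted)
    mean_meas = sum(measured) / len(measured)
    return abs(mean_pred - mean_meas)

predicted = [70, 71, 72, 71]
measured = [69, 74, 70, 75]    # noisy step-by-step, but similar on average
loss = averaged_loss(predicted, measured)
```

Per-step absolute differences here reach 3 bpm, but the averaged loss is only 1 bpm, so brief noise does not trip the threshold.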
- the measured health-indicator data comprises PPG data. In some example implementations of the method of any example implementations the measured health-indicator data comprises heart rate data. In some example implementations of the method of any example implementations the method further includes resampling irregularly spaced heart rate data onto a regularly spaced grid, wherein the heart rate data is sampled from the regularly spaced grid.
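Linear interpolation is one way to perform this resampling; the document does not fix the method, so the sketch below is an assumption. Irregularly spaced (time, heart-rate) samples are interpolated onto a grid with a fixed step.

```python
def resample(times, values, step=1.0):
    """Resample (times, values) onto a grid t0, t0+step, ... by linear interpolation."""
    grid, out = [], []
    t = times[0]
    while t <= times[-1]:
        # find the pair of measured samples bracketing grid time t
        j = 1
        while times[j] < t:
            j += 1
        t0, t1 = times[j - 1], times[j]
        v0, v1 = values[j - 1], values[j]
        frac = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
        grid.append(t)
        out.append(v0 + frac * (v1 - v0))   # linear interpolation between samples
        t += step
    return grid, out

times = [0.0, 0.7, 2.5, 3.0]           # irregular sample times (seconds)
values = [70.0, 72.0, 76.0, 78.0]      # heart rate at those times
grid, hr = resample(times, values, step=1.0)
```

The regularly spaced output `hr` is then what gets sampled as the model's input.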
- the measured health-indicator data is one or more health-indicator data selected from the group consisting of: PPG data, heart rate data, pulse oximeter data, ECG data, and blood pressure data.
- Some example implementations provide an apparatus comprising a mobile computing device comprising a processing device, a display, a health-indicator data sensor, and a memory having instructions stored thereon that, when executed by the processing device, cause the processing device to: receive measured health-indicator data from the health-indicator data sensor and other-factor data at a first time, input the health-indicator data and other-factor data into a trained machine learning model, wherein the trained machine learning model generates predicted health-indicator data at a next time step, receive measured health-indicator data and other-factor data at the next time step, determine a loss at the next time step, wherein the loss is a measure between the predicted health-indicator data at the next time step and the measured health-indicator data at the next time step, and output a notification if the loss at the next time step exceeds a threshold.
- the trained machine learning model comprises a trained generative neural network.
- the trained machine learning model comprises a feed forward network.
- the trained machine learning model is a RNN.
- the trained machine learning model is a CNN.
- the trained machine learning model is trained on training examples from one of the group consisting of: a healthy population, a population with heart disease and the user.
- the predicted health-indicator data is a point prediction of the user’s health-indicator at the next time step, and wherein the loss is the absolute value of the difference between the predicted health-indicator data and the measured health-indicator data at the next time step.
- the predicted health-indicator data is sampled from a probability distribution generated from the machine learning model.
- the predicted health-indicator data is sampled according to a sampling technique selected from the group consisting of: a maximum probability; and random sampling from the probability distribution.
- the predicted health-indicator data is a probability distribution (b), and wherein the loss is determined based on a negative logarithm of b evaluated with the user’s measured health-indicator at the next time step.
- the processing device is further to define a function α ranging from 0 to 1, wherein I_t comprises a linear combination of the user’s measured health-indicator data and the predicted health-indicator data as a function of α.
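The α-blend above can be written as a convex combination of the measured and predicted values; which endpoint of α corresponds to the measurement versus the prediction is an assumption in this sketch, as the document does not specify the orientation.

```python
def blended_input(measured, predicted, alpha):
    """I_t as a convex combination: alpha weights the measurement (assumed),
    1 - alpha weights the prediction, with alpha in [0, 1]."""
    assert 0.0 <= alpha <= 1.0
    return alpha * measured + (1.0 - alpha) * predicted

pure_measurement = blended_input(80.0, 70.0, 1.0)   # alpha = 1: measurement only
pure_prediction = blended_input(80.0, 70.0, 0.0)    # alpha = 0: prediction only
halfway = blended_input(80.0, 70.0, 0.5)            # equal weighting
```

Intermediate α values let the system feed the model an input that partially trusts its own prediction, a form of self-sampling.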
- the processing device is further to perform self-sampling of the probability distribution.
- the processing device is further to: average, using an averaging method, the predicted health-indicator data sampled from the probability distribution over a period of time steps, average, using the averaging method, the user’s measured health-indicator data over the period of time steps, and define the loss as the absolute value of the difference between the averaged predicted health-indicator data and the averaged measured health-indicator data.
- the averaging method comprises one or more methods selected from the group consisting of: calculating an average, calculating an arithmetic mean, calculating a median and calculating a mode.
- health-indicator data comprises PPG data from a PPG signal.
- the measured health-indicator data is heart rate data.
- the heart rate data is collected by resampling irregularly spaced heart rate data onto a regularly spaced grid, and the heart rate data is sampled from the regularly spaced grid.
- the measured health-indicator data is one or more health-indicator data selected from the group consisting of: PPG data, heart rate data, pulse oximeter data, ECG data, and blood pressure data.
- the mobile device is selected from the group consisting of: a smart watch; a fitness band; a computer tablet; and a laptop computer.
- the mobile device further comprises a user high-fidelity sensor, wherein the notification requests the user to obtain high-fidelity measurement data.
- the processing device is further to: receive an analysis of the high-fidelity measurement data; label the user measured health-indicator data with the analysis to generate labeled user health-indicator data; and use labeled user health-indicator data as a training example to train a trained personalized high-fidelity machine learning model.
- the trained machine learning model is stored on the memory. In some example implementations of any example apparatus the trained machine learning model is stored on a remote memory, wherein the remote memory is separate from the computing device and wherein the mobile computing device is a wearable computing device. In some example implementations of any example apparatus the trained personalized high-fidelity machine learning model is stored on the memory. In some example implementations of any example apparatus the trained personalized high-fidelity machine learning model is stored on a remote memory, wherein the remote memory is separate from the computing device and wherein the mobile computing device is a wearable computing device.
- the processing device is further to predict that the user is experiencing atrial fibrillation and determine an atrial fibrillation burden of the user.
- Some example implementations provide a method of monitoring a user’s cardiac health.
- the method can include receiving measured low fidelity user health-indicator data and other-factor data at a first time, inputting data comprising the user health-indicator data and other-factor data at the first time, into a personalized high-fidelity trained machine learning model, wherein the personalized high-fidelity trained machine learning model makes a prediction if the user’s health-indicator data is abnormal, and if the prediction is abnormal, sending a notification that the user’s health is abnormal.
- the trained personalized high-fidelity machine learning model is trained on measured low fidelity user health-indicator data labeled with an analysis of high-fidelity measurement data.
- the analysis of high-fidelity measurement data is based on user specific high-fidelity measurement data.
- the personalized high-fidelity machine learning model outputs a probability distribution, wherein the prediction is sampled from the probability distribution.
- the prediction is sampled according to a sampling technique selected from the group consisting of the prediction at a maximum probability and random sampling the prediction from the probability distribution.
- an averaged prediction is determined by averaging, using an averaging method, the prediction over a period of time steps, and wherein the averaged prediction is used to determine if the user’s health-indicator data is normal or abnormal.
- the averaging method comprises one or more methods selected from the group consisting of: calculating an average, calculating an arithmetic mean, calculating a median and calculating a mode.
- the personalized high-fidelity trained machine learning model is stored in a memory of a user wearable device.
- the measured health-indicator data and other-factor data are time segments of data over a time period.
- the personalized high-fidelity trained machine learning model is stored in a remote memory, wherein the remote memory is located remotely from a user wearable computing device.
- a health monitoring apparatus may include a mobile computing device comprising a microprocessor, a display, a user health-indicator data sensor, and a memory having instructions stored thereon that, when executed by the microprocessor, cause the microprocessor to: receive measured low-fidelity health-indicator data and other-factor data at a first time, wherein the measured health-indicator data is obtained by the user health-indicator data sensor; input data comprising the health-indicator data and other-factor data at the first time into a trained high-fidelity machine learning model, wherein the trained high-fidelity machine learning model makes a prediction whether the user’s health-indicator data is normal or abnormal; and in response to the prediction being abnormal, send a notification to at least the user that the user’s health is abnormal.
- the trained high-fidelity machine learning model is a trained high-fidelity generative neural network.
- the trained high-fidelity machine learning model is a trained recurrent neural network (RNN).
- the trained high-fidelity machine learning model is a trained feed-forward neural network.
- the trained high-fidelity machine learning model is a CNN.
- the trained high-fidelity machine learning model is trained on measured user health-indicator data labeled based on user-specific high-fidelity measurement data.
- the trained high-fidelity machine learning model is trained on low fidelity health-indicator data labeled based on high-fidelity measurement data, wherein the low fidelity health-indicator data and the high-fidelity measurement data is from a population of subjects.
- the high-fidelity machine learning model outputs a probability distribution, wherein the prediction is sampled from the probability distribution.
- the prediction is sampled according to a sampling technique selected from the group consisting of: the prediction at a maximum probability; and random sampling the prediction from the probability distribution.
- an averaged prediction is determined by averaging, using an averaging method, the prediction over a period of time steps, and wherein the averaged prediction is used to determine if the user’s health-indicator data is normal or abnormal.
- the measured health-indicator data and other-factor data are time segments of data over a time period.
- the averaging method comprises one or more methods selected from the group consisting of: calculating an average, calculating an arithmetic mean, calculating a median and calculating a mode.
- the personalized high-fidelity trained machine learning model is stored in the memory. In some example implementations of health monitoring apparatus of any example implementation the personalized high-fidelity trained machine learning model is stored in a remote memory, wherein the remote memory is located remotely from the wearable computing device. In some example implementations of health monitoring apparatus of any example implementation the mobile device is selected from the group consisting of: a smart watch; a fitness band; a computer tablet; and a laptop computer.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Biomedical Technology (AREA)
- Public Health (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Pathology (AREA)
- Cardiology (AREA)
- Physics & Mathematics (AREA)
- Heart & Thoracic Surgery (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Molecular Biology (AREA)
- Veterinary Medicine (AREA)
- Biophysics (AREA)
- Physiology (AREA)
- Artificial Intelligence (AREA)
- Primary Health Care (AREA)
- Epidemiology (AREA)
- Signal Processing (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Psychiatry (AREA)
- Databases & Information Systems (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Fuzzy Systems (AREA)
- Evolutionary Computation (AREA)
- Mathematical Physics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Pulmonology (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Measuring Pulse, Heart Rate, Blood Pressure Or Blood Flow (AREA)
Abstract
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/153,403 US20190038148A1 (en) | 2013-12-12 | 2018-10-05 | Health with a mobile device |
US16/580,574 US11877830B2 (en) | 2013-12-12 | 2019-09-24 | Machine learning health analysis with a mobile device |
PCT/US2019/054882 WO2020073013A1 (fr) | 2018-10-05 | 2019-10-04 | Machine learning health analysis with a mobile device |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3860436A1 true EP3860436A1 (fr) | 2021-08-11 |
Family
ID=70051487
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19791147.2A Pending EP3860436A1 (fr) | 2018-10-05 | 2019-10-04 | Machine learning health analysis with a mobile device |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP3860436A1 (fr) |
JP (1) | JP7495398B2 (fr) |
CN (1) | CN113164057B (fr) |
WO (1) | WO2020073013A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20230290506A1 (en) * | 2020-07-22 | 2023-09-14 | REHABILITATION INSTITUTE OF CHICAGO d/b/a Shirley Ryan AbilityLab | Systems and methods for rapidly screening for signs and symptoms of disorders |
WO2024043748A1 (fr) * | 2022-08-25 | 2024-02-29 | 서울대학교병원 | Method and device for simultaneously measuring six electrocardiogram leads using a smartphone |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100106036A1 (en) * | 2008-10-27 | 2010-04-29 | Cardiac Pacemakers, Inc. | Arrhythmia adjudication and therapy training systems and methods |
US9351654B2 (en) | 2010-06-08 | 2016-05-31 | Alivecor, Inc. | Two electrode apparatus and methods for twelve lead ECG |
US8509882B2 (en) | 2010-06-08 | 2013-08-13 | Alivecor, Inc. | Heart monitoring system usable with a smartphone or computer |
WO2014074913A1 (fr) | 2012-11-08 | 2014-05-15 | Alivecor, Inc. | Electrocardiogram signal detection |
US9247911B2 (en) | 2013-07-10 | 2016-02-02 | Alivecor, Inc. | Devices and methods for real-time denoising of electrocardiograms |
US20150018660A1 (en) | 2013-07-11 | 2015-01-15 | Alivecor, Inc. | Apparatus for Coupling to Computing Devices and Measuring Physiological Data |
WO2015089484A1 (fr) * | 2013-12-12 | 2015-06-18 | Alivecor, Inc. | Methods and systems for arrhythmia tracking and scoring |
JP2017513626A (ja) | 2014-04-21 | 2017-06-01 | アライヴコア・インコーポレーテッド | Methods and systems for cardiac monitoring using a mobile device and accessories |
WO2015171764A1 (fr) | 2014-05-06 | 2015-11-12 | Alivecor, Inc. | Blood pressure monitoring device |
EP3282933B1 (fr) | 2015-05-13 | 2020-07-08 | Alivecor, Inc. | Discordance monitoring |
TWI610655B (zh) * | 2015-11-13 | 2018-01-11 | 慶旺科技股份有限公司 | Blood pressure monitor with a heart rate analysis module |
US20180242863A1 (en) | 2016-01-08 | 2018-08-30 | Heartisans Limited | Wearable device for assessing the likelihood of the onset of cardiac arrest and a method thereof |
AU2017246369B2 (en) * | 2016-04-06 | 2019-07-11 | Cardiac Pacemakers, Inc. | Confidence of arrhythmia detection |
CN108606798B (zh) * | 2018-05-10 | 2021-03-02 | 东北大学 | Non-contact intelligent atrial fibrillation detection system based on a deep convolutional residual network |
- 2019
- 2019-10-04 EP EP19791147.2A patent/EP3860436A1/fr active Pending
- 2019-10-04 JP JP2021518647A patent/JP7495398B2/ja active Active
- 2019-10-04 WO PCT/US2019/054882 patent/WO2020073013A1/fr active Application Filing
- 2019-10-04 CN CN201980080631.XA patent/CN113164057B/zh active Active
Also Published As
Publication number | Publication date |
---|---|
JP7495398B2 (ja) | 2024-06-04 |
CN113164057A (zh) | 2021-07-23 |
CN113164057B (zh) | 2024-08-09 |
JP2022504288A (ja) | 2022-01-13 |
WO2020073013A1 (fr) | 2020-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11877830B2 (en) | Machine learning health analysis with a mobile device | |
US10561321B2 (en) | Continuous monitoring of a user's health with a mobile device | |
US20190076031A1 (en) | Continuous monitoring of a user's health with a mobile device | |
WO2019071201A1 (fr) | Continuous monitoring of a user's health with a mobile device | |
US10321871B2 (en) | Determining sleep stages and sleep events using sensor data | |
JP5841196B2 (ja) | Residual-based management of human health | |
CN114929094A (zh) | Systems and methods for seizure prediction and detection | |
US20200265950A1 (en) | Biological information processing system, biological information processing method, and computer program recording medium | |
US20210298648A1 (en) | Calibration of a noninvasive physiological characteristic sensor based on data collected from a continuous analyte sensor | |
JP7495397B2 (ja) | Continuous monitoring of a user's health status using a mobile device | |
US20240099593A1 (en) | Machine learning health analysis with a mobile device | |
Arpaia et al. | Conceptual design of a machine learning-based wearable soft sensor for non-invasive cardiovascular risk assessment | |
WO2021127566A1 (fr) | Devices and methods for measuring physiological parameters | |
JP7495398B2 (ja) | Machine learning health analysis using a mobile device | |
US20240321447A1 (en) | Method and System for Personalized Prediction of Infection and Sepsis | |
US20210117782A1 (en) | Interpretable neural networks for cuffless blood pressure estimation | |
KR20220129283A (ko) | System and method for notification of abnormal biosignal measurement states based on an artificial intelligence algorithm | |
CN115775627A (zh) | Early diabetes warning method, device, and system | |
KR20240157089A (ko) | Heart failure diagnosis tool and method using signal tracking analysis | |
WO2023220245A2 (fr) | Method and apparatus for non-invasively determining cardiac abnormalities |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20210318 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 40047131 Country of ref document: HK |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20230411 |
|
RAP3 | Party data changed (applicant data changed or rights of an application transferred) |
Owner name: ALIVECOR, INC. |