WO2019079475A1 - Machine learning based system for identifying and monitoring neurological disorders - Google Patents

Machine learning based system for identifying and monitoring neurological disorders

Info

Publication number
WO2019079475A1
WO2019079475A1 (PCT/US2018/056320)
Authority
WO
WIPO (PCT)
Prior art keywords
patient
data
recording
trained
diagnostic
Prior art date
Application number
PCT/US2018/056320
Other languages
French (fr)
Inventor
Satish Rao
Matthew Wilder
Original Assignee
Satish Rao
Matthew Wilder
Priority date
Filing date
Publication date
Application filed by Satish Rao, Matthew Wilder
Priority to KR1020207011443A (publication KR20200074951A)
Priority to CA3077481A (publication CA3077481A1)
Priority to AU2018350984A (publication AU2018350984A1)
Priority to CN201880068046.3A (publication CN111225612A)
Priority to EP18868878.2A (publication EP3697302A4)
Priority to JP2020522316A (publication JP2020537579A)
Publication of WO2019079475A1
Priority to IL273789A (publication IL273789A)

Classifications

    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A61B 5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • A61B 5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb, using image analysis
    • A61B 5/1114 Tracking parts of the body
    • A61B 5/112 Gait analysis
    • A61B 5/4023 Evaluating sense of balance
    • A61B 5/4082 Diagnosing or monitoring movement diseases, e.g. Parkinson, Huntington or Tourette
    • A61B 5/4094 Diagnosing or monitoring seizure diseases, e.g. epilepsy
    • A61B 5/4803 Speech analysis specially adapted for diagnostic purposes
    • A61B 5/4836 Diagnosis combined with treatment in closed-loop systems or methods
    • A61B 5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network, characterised by features of the telemetry system
    • A61B 5/7475 User input or interface means, e.g. keyboard, pointing device, joystick
    • A61B 2560/0223 Operational features of calibration, e.g. protocols for calibrating sensors
    • A61B 2562/0204 Acoustic sensors
    • A61B 2562/0219 Inertial sensors, e.g. accelerometers, gyroscopes, tilt switches
    • G06N 20/00 Machine learning
    • G06N 20/20 Ensemble learning
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06N 5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • G06N 5/022 Knowledge engineering; Knowledge acquisition
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G16H 40/40 ICT specially adapted for the management of medical equipment or devices, e.g. scheduling maintenance or upgrades
    • G16H 50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/30 ICT specially adapted for calculating health indices; for individual health risk assessment
    • G16H 50/70 ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • Dizziness is a common symptom that is difficult to diagnose.
  • The prevalence of dizziness and related complaints, such as vertigo and unsteadiness, may be between 40% and 50% (Front Neurol. 2013;4:29).
  • Dizziness as a chief complaint in the emergency department (ED) accounts for nearly 3.9 million visits annually, and dizziness can be a component symptom of up to 50% of all ED visits.
  • A secondary challenge, especially for physicians (commonly emergency physicians, neurologists, and internal medicine hospitalists) providing acute care in the emergency department, urgent care, clinics, or hospital, is the physical exam, which centers on discriminating normal from abnormal eye movements. Indeed, even seasoned neurologists can have difficulty accurately examining eye movements. There can also be very subtle abnormalities in motor speech production or facial symmetry.
  • An epileptic seizure is a brief electrical event (mean duration of approximately one minute) that occurs in the cerebral cortex and is caused by an excessive volume of neurons depolarizing ('firing') hypersynchronously.
  • One in ten people will have a seizure at some point in their life, but only around one in 100 (1%) of the population develops epilepsy.
  • Epilepsy is an enduring propensity towards recurrent, unprovoked seizures.
  • ES: epileptic seizures.
  • This disorder has multiple names in the medical literature, adding confusion for the patients suffering from, and the nonspecialists treating, these conditions. These names include: pseudoseizures, nonepileptic seizures, psychogenic seizures, psychogenic nonepileptic seizures, nonepileptic attack disorder, and nonepileptic behavioral spell.
  • NBS: nonepileptic behavioral spell.
  • Nonepileptic behavioral spells are a psychological condition that typically stems from a severe emotional trauma prior to the onset of the NBS. In some cases, the trauma may have occurred 40-50 years prior to the onset.
  • The emotional trauma, for unclear reasons, manifests as physical symptoms. This process is broadly termed a 'conversion disorder,' referring to the central nervous system converting emotional pain into physical symptoms. These physical symptoms often manifest as chronic, unexplained abdominal pain or headaches, for example. Sometimes the emotional pain or stress manifests as episodes of convulsing, or what appears to be an alteration of consciousness; these events are NBS.
  • V-EEG: video-electroencephalography.
  • Time-synchronized digital video, scalp EEG, electrocardiogram (ECG), and pulse oximetry are all recorded continuously, 24/7, to capture a habitual event.
  • The diagnosis primarily relies on the 'ictal EEG' pattern. Ictal or ictus refers to the event; this refers to what is happening in the brain waves during the actual episode. For most epileptic seizures, there is a distinct change in the EEG, i.e., the seizure manifests as a self-limited rhythmic focal or generalized pattern. There is typically some post-seizure slowing of brain wave frequencies for a few minutes afterwards, and then a resumption of normal patterns.
  • Neurologists have long recognized that ES and NBS have distinct differences in their physical manifestations, and that with proper education, training, and exposure to a high volume of examples, a neurologist can become fairly accurate in diagnosing NBS from digital video or direct observation. These neurologists, who have usually completed a 1- to 2-year fellowship after neurology residency, are termed epileptologists. A shortage of all neurology providers, including epileptologists, is predicted.
  • An additional challenge is monitoring the progression of a neurological disorder over time.
  • the ability to quantitatively measure this progression could have significant impacts in the development and administration of treatments for these diseases.
  • the ability to monitor the state of the disease may enable patients to adjust their treatments without requiring a specialist visit.
  • the system is tailored to diagnose patients presenting with symptoms of a stroke, patients suffering from a potential movement disorder, patients who have recently undergone a seizure, and patients suffering from dizziness.
  • DBSs: deep brain stimulation devices.
  • the system will comprise a series of sensors to collect data from the patient that are relevant to the diagnosis.
  • sensors may include light sensors, such as video or still cameras, audio sensors, such as those found on standard cellular phones, gyroscopes, accelerometers, pressure sensors, and sensors sensitive to other electromagnetic wavelengths, such as infrared.
  • these sensors will be in communication with an artificial intelligence system.
  • this system will be a machine learning system that, once trained, will process the inputs from the various sensors and produce a diagnostic prediction for the patient based on the analysis.
  • This system may then produce an output indicating the diagnosis to the patient or a physician.
  • the output may be a simple "yes", "no", or "inconclusive" diagnosis for a particular disease.
  • the output may be a list of the most likely diseases, with a probability score assigned to each one.
  • One key advantage of such a system is that, by training the system to reach a diagnosis in an unbiased manner, the system may be able to identify new clinical indicia of disease, or recognize previously unidentified
  • the system of the present invention may operate by assigning a "severity" score to a patient and comparing that score to one derived by the system at an earlier timepoint.
  • Such information can be beneficial to a patient, as it allows the patient to, for example, monitor the success of a course of treatment or determine if a more invasive form of treatment may be justified.
  • the diagnostic system of the present invention is housed in a remotely accessible location, and is capable of performing all of the data processing and analysis necessary to render a diagnosis.
  • a physician or patient with limited access to resources or in a remote location may submit raw data collected on the sensors available to them, and receive a diagnosis from the system.
  • a system for diagnosing a patient comprising: at least one sensor in communication with a processor and a memory; wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient; wherein said raw patient data comprises at least one of a video recording and an audio recording; a data processing module in communication with the processor and the memory; wherein said data processing module converts said raw patient data into processed diagnostic data; a diagnosis module in communication with the data processing module; wherein said diagnosis module is remote from the at least one sensor; wherein said diagnosis module comprises a trained diagnostic system; wherein said trained diagnostic system comprises a plurality of diagnostic models; wherein each of said plurality of diagnostic models comprises a plurality of algorithms trained to assign a classification to at least one aspect of said processed diagnostic data; and wherein said trained diagnostic system integrates the classifications of said plurality of diagnostic models to output a diagnostic prediction for said patient.
  • diagnosis module is housed on a remote server.
  • diagnostic prediction further comprises a confidence value.
  • said machine learning system comprises at least one of a convolutional neural network (e.g., Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25.).
  • said video recording comprises a recording of a patient performing repetitive movements.
  • said repetitive movements comprise at least one of rapid finger tapping, opening and closing the hand, hand rotations, and heel tapping.
  • said raw patient data comprises a video recording, wherein said video recording comprises at least one of: a recording of the patient's face while performing simple expressions; a recording of the patient's blink rate; a recording of the patient's gaze variations; a recording of the patient while seated; a recording of the patient's face while reading a prepared statement; a recording of the patient performing repetitive tasks; and a recording of the patient while walking.
  • said raw patient data comprises an audio recording
  • said audio recording comprises at least one of: a recording of the patient repeating a prepared statement; a recording of the patient reading a sentence; and a recording of the patient making plosive sounds.
  • said machine learning system comprises at least one of: a convolutional neural network; a recurrent neural network; a long short-term memory network; support vector machines; and a random forest regression model.
  • said implanted medical device comprises a deep brain stimulation device (DBS).
  • said calibration recommendation comprises a change to the programming settings of said DBS comprising at least one of: amplitude, pulse width, rate, polarity, electrode selection, stimulation mode, cycle, power source, and calculated charge density.
  • said raw patient data comprises a video recording
  • said video recording comprises at least one of: a recording of the patient's face while performing simple expressions; a recording of the patient's blink rate; a recording of the patient's gaze variations; a recording of the patient while seated; a recording of the patient's face while reading a prepared statement; a recording of the patient performing repetitive tasks; and a recording of the patient while walking.
  • said raw patient data comprises an audio recording
  • said audio recording comprises at least one of: a recording of the patient repeating a prepared statement; a recording of the patient reading a sentence; and a recording of the patient making plosive sounds.
  • said machine learning system comprises at least one of: a convolutional neural network; a recurrent neural network; a long short-term memory network; support vector machines; and a random forest regression model.
  • Figure 1 Block diagram of one embodiment of the training procedure of the artificial intelligence based diagnostic system.
  • Figure 2 Block diagram of one embodiment of the diagnostic system as used in practice.
  • Figure 3 Diagram illustrating one possible implementation of the system of the present invention.
  • Figure 4 Diagram illustrating one possible embodiment of the system of the present invention.
  • phrases "comprising at least one of X and Y" refers to situations where X is selected alone, situations where Y is selected alone, and situations where both X and Y are selected together.
  • a "confidence value” indicates the relative confidence that the diagnostic system has in the accuracy of a particular diagnosis.
  • a "mobile device" is an electronic device which may be carried and used by a person outside of the home or office. Such devices include, but are not limited to, smartphones, tablets, laptop computers, and PDAs. Such devices typically possess a processor coupled to a memory, an input mechanism, such as a touchscreen or keyboard, output devices, such as a display screen or audio output, and a wired or wireless interface capability, such as Wi-Fi, BLUETOOTH™, cellular network, or wired LAN connection, that enables the device to communicate with other computer devices.
  • a software "module” comprises a program or set of programs executable on a processor and configured to accomplish the designated task.
  • a module may operate autonomously, or may require a user to input certain commands.
  • a "server” is a computer system, such as one or more computers and/or devices, that provides services to other computer systems over a network.
  • the system consists of a collection of sensors used to record a patient's behaviors over a period of time producing a temporal sequence of data.
  • the primary system preferably involves utilizing the video and audio sensors commonly available on smart-phones, tablets, and laptops.
  • other sensors including range imaging camera, gyroscope, accelerometer, touch screen / pressure sensor, etc. may be used to provide input to the machine learning and diagnostic system. It will be apparent to those having skill in the art that the more sensor data that is available to the system, the more accurate the resulting diagnosis is likely to be once diagnostic systems have been trained using the relevant sensor data.
  • the purpose of the machine learning system is to take as input the temporal or static data recorded from the sensors and produce as output a probability score for each of a collection of diagnoses.
  • the system may also output a confidence score for each of the diagnostic probabilities.
  • the system may be used to calibrate implanted devices, such as deep brain stimulation devices, to optimize the therapeutic efficacy of such devices.
  • one goal of the machine learning system is to serve as an inexpensive means for detecting neurological disorders, including movement disorders.
  • the output of the system will guide physicians in making a decision about a patient, however, this state of affairs may change as confidence grows in the accuracy of the system.
  • Because the system will initially be used primarily to identify at-risk patients, it may be tuned to have a low false negative rate (i.e., high sensitivity) at the cost of a higher false positive rate (i.e., lower specificity).
  • the system of the present invention may be used to monitor patients after a diagnosis has been made. Such monitoring may be used, for example, to determine disease progression, to guide treatment plans for patients, such as recommending dosages of medication to treat a movement disorder, or to suggest programming changes for an implanted medical device such as a deep brain stimulation device.
  • the system will include a collection of tests the patient will be asked to perform during which time sensor data will be recorded. These tests will be designed to elicit specific diagnostic information.
  • the device used to collect the data will prompt the user or patient to perform the preferred tests. Such prompts may be made, by way of example, by using a written description of the test, by providing a video demonstration to be displayed on the screen of the device (if available), or by providing a frame or other outline on a live video feed displayed on the device to indicate where the camera should be centered.
  • the system will be flexible such that it can produce a diagnostic decision without needing results from every test (for example in cases where a particular sensor is unavailable).
  • the patient may repeat the suite of tests at regular or irregular intervals of time. For example, the patient may repeat the test once every two weeks to continually monitor the progression of the disease.
  • the diagnostic system may integrate across all data points to derive an evaluation of the state of the disease.
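  • As one illustration of integrating repeated test sessions over time, the sketch below fits a simple trend line to severity scores produced at successive visits. It is a minimal sketch, not part of the disclosed system: the function name, dates, and score values are illustrative assumptions.

```python
# Minimal sketch: summarizing disease progression from repeated test sessions.
# Assumes the diagnostic system has already produced a severity score
# (e.g., a UPDRS-like value) at each visit; all names and values are illustrative.
from datetime import date

import numpy as np


def severity_trend(visit_dates, severity_scores):
    """Fit a least-squares line to severity over time.

    Returns the estimated change in severity per 30 days, which could help a
    clinician or patient judge whether the disorder is progressing, stable,
    or improving under the current treatment.
    """
    t0 = min(visit_dates)
    days = np.array([(d - t0).days for d in visit_dates], dtype=float)
    scores = np.asarray(severity_scores, dtype=float)
    slope, _intercept = np.polyfit(days, scores, deg=1)  # slope in score units per day
    return slope * 30.0


if __name__ == "__main__":
    dates = [date(2018, 1, 1), date(2018, 1, 15), date(2018, 2, 1), date(2018, 2, 14)]
    scores = [12.0, 12.5, 13.1, 13.4]  # hypothetical severity values
    print(f"Estimated change per 30 days: {severity_trend(dates, scores):+.2f}")
```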
  • the machine learning system as a whole will take the data acquired during these tests and use them to produce the desired output.
  • the system may also integrate background information about a patient including but not limited to age, sex, prior medical history, family history, and results from any additional or alternate medical tests.
  • the whole machine learning system may include components that utilize specific machine learning algorithms to produce diagnoses from a single test or a subset of the tests. If the system includes multiple diagnostic components, the system will utilize an additional machine learning algorithm to combine across the results in order to produce the final system output.
  • the machine learning system may have a subset of required tests that must be completed for every patient or it can be designed to operate with the data from any available tests. Additionally, the system may prescribe additional tests in order to strengthen the diagnosis.
  • the processing performed by the machine learning system can be performed on device, on a local desktop machine, or in a remote location via an electronic connection.
  • Where processing is not performed on the same device that collected the sensor data, it is assumed that the data will be transmitted to the appropriate computing device, such as a server, using any commonly available wired or wireless technology.
  • the remote computer will be configured to receive the data from the initial device, analyze such data, and transmit the result to the appropriate location.
  • the machine learning system for identifying potential diseases comprises one or more machine learning algorithms combined with data processing methods.
  • the machine learning algorithms typically involve several stages of processing to obtain the output including: data preprocessing, data normalization, feature extraction, and classification/regression.
  • the components of the system may be implemented separately for each sensor in which case, the final output results from the fusion of the classification/regression outputs associated with each sensor.
  • some of the sensor data can be fused at the feature extraction stage and passed on to a shared classification/regression model.
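  • To make the stages above concrete, the sketch below runs two independent sensor streams through preprocessing, normalization, feature extraction, and a classifier, then fuses the per-sensor outputs at the decision level. The feature extractors and the stand-in classifier are illustrative placeholders, not the patent's actual models.

```python
# Sketch of the per-sensor processing chain with decision-level fusion.
import numpy as np


def preprocess(signal, target_len=256):
    """Resample (interpolate) a 1-D signal to a fixed length."""
    x_old = np.linspace(0.0, 1.0, num=len(signal))
    x_new = np.linspace(0.0, 1.0, num=target_len)
    return np.interp(x_new, x_old, signal)


def normalize(signal):
    """Zero-mean, unit-variance normalization."""
    return (signal - signal.mean()) / (signal.std() + 1e-8)


def extract_features(signal):
    """Toy feature set: simple summary statistics of the signal."""
    return np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])


def toy_classifier(features):
    """Stand-in for a trained model; returns [P(healthy), P(disorder)]."""
    score = 1.0 / (1.0 + np.exp(-features.sum()))
    return np.array([1.0 - score, score])


def fuse_decisions(per_sensor_probs):
    """Decision-level fusion: average the class probabilities across sensors."""
    return np.mean(np.stack(per_sensor_probs), axis=0)


# Example: two synthetic sensor streams (e.g., an audio envelope and an
# accelerometer magnitude trace), each handled by its own pipeline.
sensors = [np.random.randn(300), np.random.randn(512)]
probs = [toy_classifier(extract_features(normalize(preprocess(s)))) for s in sensors]
print("Fused diagnostic probabilities:", fuse_decisions(probs))
```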
  • Data preprocessing Temporally aligning data, subsampling or supersampling (interpolation) in time and space, basic filtering.
  • Detection/localization, e.g., face detection (Viola, P. and Jones, M. (2001). Robust real-time face detection. International Journal of Computer Vision (IJCV), 57(2):137-154.).
  • Facial keypoint detection, e.g., Ren, S., Cao, X., Wei, Y., and Sun, J. (2014). Face alignment at 3000 fps via regressing local binary features. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1685-1692.
  • Speech detection; motion detection.
  • Feature Extraction Application of filters or other methods to obtain an abstract feature set that captures the relevant aspects of the input data.
  • An example of this is the extraction of optical flow features from image sequences.
  • MFCC: Mel Frequency Cepstral Coefficients.
  • the feature extraction may be implicitly implemented within the classification/regression model (this is commonly the case with deep learning methods). Alternately, feature extraction may be performed prior to passing the data to an artificial neural network.
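  • For the two feature types mentioned above, the sketch below shows MFCC extraction from an audio clip and a crude optical-flow motion measure between video frames. It assumes the librosa and OpenCV packages are available and is only an illustration of the kind of feature extraction described, not the system's specific pipeline.

```python
# Illustrative feature extraction: MFCCs and dense optical flow.
import cv2
import librosa
import numpy as np


def mfcc_features(wav_path, n_mfcc=13):
    """Return an (n_mfcc, n_frames) array of Mel Frequency Cepstral Coefficients."""
    y, sr = librosa.load(wav_path, sr=None)  # keep the file's native sample rate
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)


def optical_flow_magnitude(frame_prev, frame_next):
    """Mean optical-flow magnitude between two grayscale frames (a simple motion feature)."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_prev, frame_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel flow vector length
    return magnitude.mean()
```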
  • Classification/Regression A supervised machine learning algorithm that is trained from data to produce a desired output.
  • the system's goal is to determine which of a set of diagnoses is most likely given the input.
  • the set of diagnoses will preferably include a null option that represents no disease or movement disorder.
  • the output of a classification system is generally a probability associated with each possible diagnosis (where the probabilities across all outputs sum to 1).
  • In the regression setting, real-valued outputs are predicted independently. For example, the system could be trained to predict scores that fall on an institutional scale for measuring the severity of a disorder (e.g., the Unified Parkinson's Disease Rating Scale (UPDRS)).
  • Machine learning classification/regression algorithms that might be used to produce the final output include artificial neural networks (relatively shallow or deep) (Goodfellow, I., Bengio, Y., and Courville, A. (2016). Deep Learning. The MIT Press.), recurrent neural networks, support vector machines (Hearst, M. (1998). Support Vector Machines. IEEE Intelligent Systems 13, 4 (July), 18-28.), and random forests.
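  • The sketch below illustrates the classification and regression variants just described using scikit-learn stand-ins (random forest and support vector machine). The feature matrices and labels are synthetic placeholders, not real patient data or the disclosed models.

```python
# Classification with per-diagnosis probabilities, and regression of a severity score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))             # 200 recordings, 12 extracted features
y_class = rng.integers(0, 3, size=200)     # 0 = no disorder, 1 = PD, 2 = other (toy labels)
y_updrs = rng.uniform(0, 40, size=200)     # hypothetical severity targets

# Classification: probabilities across the set of diagnoses sum to 1 per patient.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_class)
probs = clf.predict_proba(X[:1])
print("Diagnosis probabilities:", probs, "sum =", probs.sum())

# An SVM alternative; probability=True enables calibrated probability estimates.
svm = SVC(probability=True, random_state=0).fit(X, y_class)

# Regression: predict a real-valued severity score (e.g., a UPDRS-like rating).
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_updrs)
print("Predicted severity:", reg.predict(X[:1]))
```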
  • the system may also utilize an ensemble of machine learning methods to generate the output (Zhang, C. and Ma, Y. (2012). Ensemble Machine Learning: Methods and Applications. Springer.).
  • a range of sensors may be employed to collect data from the patient to be used as input to the machine learning system.
  • Sensors are discussed below along with examples of how the data from them may be processed. These examples are meant to illustrate the types of analyses that may be applied but do not cover the full range of analyses the system can include.
  • Video analysis of the patient may include analysis of the patient's face and facial movements, mouth-specific movements, arm movements, full-body movement, gait, and finger tapping.
  • the video camera will be positioned in a manner to completely capture the relevant content (e.g., if the focus is just the face, the camera will be close to the face but will not cut off any part of the face/head, or if the focus is the hand for finger tapping, just the patient's hand will be in frame).
  • the system may aid the user in collecting the appropriate images by providing an on-screen prompt, such as a frame on the video display of the device.
  • initial processing may be done to accurately localize the body part and its sub components (e.g., the face and parts of the face such as eye and mouth locations).
  • the localization may be used to constrain the region over which further processing and feature extraction is performed.
  • Audio analysis from video or microphone: Throughout the course of video recording, the audio signal may also be recorded. Alternately, a microphone may be used to acquire audio data independently of a video. In some cases, when the focus is purely on movement, the audio data will not be used. However, in other aspects of the test, the audio signal may include speech from the patient or other sounds that are relevant to the task being performed and may provide diagnostic information (e.g., Zhang, Y. (2017). Can a Smartphone Diagnose Parkinson Disease? A Deep Neural Network Method and Telediagnosis System Implementation. Parkinson's Disease, vol. 2017.).
  • the patient may be prompted to read a specific statement aloud to provide a standardized audio sample across all patients, or make repetitive plosive sounds ("PA,” "KA,” and “TA") for a specific duration.
  • the processing may involve detection of speech and other sounds, statistical analysis of the audio data, and filtering of the signal for feature extraction.
  • the raw audio data and/or any derived features could then be provided as input to a recurrent neural network to perform further feature extraction.
  • the intermediate representation might be passed to another neural network to generate the desired output, or could be combined with features from other modalities before being passed to the final decision making component.
  • Range imaging systems (e.g., infrared time-of-flight, LiDAR, etc.): Range imaging systems record information about the structure of objects in view. Typically they record a depth value for every pixel in the image (though in the case of LiDAR, they may produce a full 3D point cloud for the visible scene). 2D depth data or 3D point cloud data can be integrated into the machine learning system to assist in object localization, keypoint detection, motion feature extraction, and classification/regression decisions. In many instances, this data is processed in a similar manner to image and audio data in that it often requires preprocessing, normalization, and feature extraction.
  • Gyroscope and accelerometer: Most handheld devices (e.g., smartphones and tablets) include sensors that measure orientation and movement of the device. These sensors may be used by the machine learning system to provide supplemental diagnostic information. In particular, the sensors can be used to record movement information about the patient while he or she is performing a particular task. The movement data can be the primary source data for the task or can be combined with video data recorded at the same time. The temporal movement data can be processed in a similar way to the video data, using preprocessing stages to prepare the data and feature extraction to obtain a discriminative representation that can be passed to the machine learning algorithm.
  • Touch screen / pressure sensors: Many devices have an onboard touch screen that captures physical interactions with the device. In some cases, the device also has finer-resolution pressure sensors that can differentiate between different types of tactile interactions. These sensors can be integrated into the machine learning system as an additional source of diagnostic information. For example, the patient may be directed to perform a sequence of tasks that involve interacting with the touch screen. The timing, location, and pressure of the patient's responses can be integrated as supplemental features in the machine learning system.
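  • As one illustration of such supplemental features, the sketch below summarizes a repetitive tapping task from tap timestamps and pressures. The timestamps, pressures, and feature names are hypothetical examples of the kind of data such a task might record.

```python
# Sketch: summary features derived from touch-screen tapping data.
import numpy as np


def tap_features(tap_times_s, tap_pressures):
    """Rate, rhythm variability, and pressure statistics of a tapping sequence."""
    intervals = np.diff(np.asarray(tap_times_s, dtype=float))
    return {
        "tap_rate_hz": 1.0 / intervals.mean(),
        "interval_cv": intervals.std() / intervals.mean(),  # rhythm variability
        "mean_pressure": float(np.mean(tap_pressures)),
        "pressure_std": float(np.std(tap_pressures)),
    }


print(tap_features([0.00, 0.21, 0.44, 0.63, 0.90], [0.55, 0.60, 0.48, 0.52, 0.58]))
```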
  • the machine learning system may be trained to produce the expected output for a given input set.
  • expert neurologists who have viewed and annotated the raw input data will define the data outputs used in training the machine learning system.
  • the outputs for some tests may be defined by information known about the patient. For example, if a patient is known to have a particular movement disorder, that information may be associated with the input of a particular test even if the expert neurologist cannot diagnose the movement disorder from that particular test alone.
  • An annotated dataset covering a range of healthy and diseased patients will be assembled and used to train and validate the machine learning system.
  • the artificial intelligence system may integrate additional expert knowledge that is not learned from the data but is deemed important for the diagnosis (for example, a supplemental decision tree (Quinlan, J. (1986). Induction of Decision Trees. Machine Learning 1 (1): 81-106.) defined by an expert neurologist).
  • the dataset will be generated in part from recordings performed on devices similar to those that will be used when the system is deployed. However, training may also rely on data generated from other sources (e.g., existing video recordings of patients with and without movement disorders).
  • additional data may be collected (with the patient's permission) and used to train and improve future versions of the machine learning system.
  • This data may be recorded on the device and transferred to permanent computer storage at a later time, or may be transmitted to an off-device storage system in real or near-real time.
  • the means of transfer may include any commonly available wired or wireless technology.
  • a deep learning approach may be used to perform the desired classification/regression task.
  • the deep learning system will internally generate an abstract feature representation relevant to the problem.
  • the temporal data may be processed using a recurrent neural network, such as a long short-term memory (LSTM) network, to obtain a deep, abstract feature representation.
  • This feature representation may then be provided to a standard deep neural network architecture to obtain the final classification or regression outputs.
  • The artificial intelligence system of the present invention may be trained as illustrated in Figure 1.
  • the raw data (101) is acquired from a number of healthy individuals, as well as from individuals who have been diagnosed with the disease (or diseases) of interest.
  • Such data may be collected from a number of different sensor types, including video, audio, or touch based sensors.
  • multiple different types of data will be collected from each sensor as described above.
  • the data will then be classified by experts trained in diagnosing the relevant disease (102). This classification may be specific to the test performed (such as using the UPDRS scale for a specific task related to Parkinson's Disease), or it may be a simple binary designation relating to the patient's overall diagnosis, regardless of whether the specific test at issue is indicative of the disease.
  • This raw data will then undergo data processing (103). It will be apparent to those having skill in the art that the data processing may take place on the device used to collect the data, or the raw data may be transmitted to a remote server using any wired or wireless technology to be processed there. Also, it will be apparent that feature extraction may be performed as part of the data processing stage of the system, or may be performed by the machine learning system during the training and model generation stage, depending on the specific machine learning system used. Furthermore, it is possible that the classification step described in (102) above may be performed after the data is processed, rather than before.
  • the system of the present invention will compare the subjects classified as having a particular neurological disorder to the subjects classified as "healthy” to facilitate training of the diagnostic models.
  • the sensor data may be processed using image processing, signal processing, or machine learning to extract measurements associated with some action (e.g., jaw displacement in tremor, finger tapping rate, repetitive speech rate, facial expression, etc.). These measurements can then be compared to normative values for healthy and diseased patients collected via the system or referenced in the literature for various disorders.
  • a common speech test for Parkinson's Disease is to repeatedly say a syllable (e.g., "PA") as many times as possible in 5 seconds.
  • a diagnosis could be obtained by comparing the total utterance count to the distribution of counts observed across a population of healthy people. Additionally, the measurement could serve as a feature for a downstream machine learning system that learns to make a diagnosis from a collection of varying measurements perhaps combined with other features extracted from additional sensor data.
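  • The sketch below illustrates this comparison for the "PA" repetition test: a patient's 5-second utterance count is compared against a normative distribution. The normative mean and standard deviation shown are made-up illustrative values, not published norms.

```python
# Sketch: compare an utterance count to a hypothetical normative distribution.
import numpy as np
from scipy.stats import norm

HEALTHY_MEAN_COUNT = 25.0   # hypothetical normative values, for illustration only
HEALTHY_STD_COUNT = 4.0


def plosive_test(utterance_count):
    """Return a z-score and percentile of the count relative to healthy speakers."""
    z = (utterance_count - HEALTHY_MEAN_COUNT) / HEALTHY_STD_COUNT
    percentile = norm.cdf(z) * 100.0
    return z, percentile


z, pct = plosive_test(16)
print(f"z = {z:.2f}, percentile = {pct:.1f}%")
# The count (or its z-score) could also be passed downstream as one feature among
# many to the machine learning system, as described above.
```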
  • the data is used to train a plurality of machine learning systems to generate a number of classification models (104) that, when combined, are used to produce a predictive diagnostic model.
  • each of the trained diagnostic models will focus on a single aspect (or subset of aspects) of the collected patient data. For example, diagnostic model 1 may focus exclusively on the blink rate of a video of the patient's face, while diagnostic model 2 may focus on the frequency of a repetitive finger tapping test.
  • Such diagnostic models will be trained by comparing the data from subjects which have been classified as possessing a certain neurological disorder to the data from subjects which have been classified as "healthy." Preferably, a large number of such trained diagnostic models will be generated for each possible disease. Doing so will enable the overall system to accommodate instances where an individual test is inconclusive or missing. The classifications produced by these trained diagnostic models will then be aggregated (105) by an additional Artificial Intelligence (AI) system to produce a final predictive diagnostic model (106).
  • the trained system may be used to produce a predictive diagnosis for a patient ( Figure 2).
  • the data acquisition (201) and processing (202) steps will be similar or identical to the methods used during the training of the diagnostic system.
  • the system will pass the data to the relevant trained diagnostic model, whereby each model will assign a classification to the data based on the results of the training described above (203).
  • the outputs of each diagnostic model will then be aggregated (204), and the system will thereby produce a predictive diagnostic output (205).
  • the data acquisition, processing, training, and diagnosis steps can be performed on the device used to collect the data, or can be performed on different devices by transmitting the data from one device to another using any known wired or wireless technology.
  • Figure 3 illustrates one possible implementation of the system of the present invention to diagnose a patient who may potentially have a neurological disorder.
  • the user instructs a mobile device, such as a cell phone or tablet computer, to run an application that can execute the program of the present invention (301).
  • the user is then prompted to perform a series of tests on the subject to be diagnosed (302). It will be apparent that the user and the subject can be the same person, or different people.
  • the application has prompted the user to perform three tests: one focusing on recording various facial expressions using the device's built-in camera, one focusing on fine motor control using an accelerometer equipped within the device, and one focusing on speech patterns by having the user read a sentence displayed on the screen and recording the speech using the device's microphone.
  • the relevant data is collected (303).
  • the data is then transmitted to a remote cloud server, where a trained AI program of the present invention processes and analyzes the data (304) to produce a clinical result based on the particular test (305).
  • the individual clinical results are then aggregated by a trained AI program (306) to produce a final clinical result (307) which is output to the user.
  • additional sensor inputs could also be used, and that any individual AI program could incorporate data from one or more sensors to produce an individual clinical result.
  • the trained AI program could be housed on the device used to collect the data, provided the device has sufficient computing power and storage to run the full application.
  • the following Working Example provides one exemplary embodiment of the present invention, and is not intended to limit the scope of the invention in any way.
  • This is one specific embodiment of a general system that diagnoses movement disorders.
  • Such disorders include, but are not limited to, the following: Parkinson's Disease (PD), vascular PD, drug-induced PD, multisystem atrophy, Progressive Supranuclear Palsy, Corticobasal Syndrome, frontotemporal dementia, psychogenic tremor, psychogenic movement disorder, and Normal Pressure Hydrocephalus; ataxia, including Friedreich's Ataxia, spinocerebellar ataxias 1-14, X-linked congenital ataxia, adult onset ataxia with tocopherol deficiency, Ataxia-telangiectasia, and Canavan Disease; Huntington's disease, neuroacanthocytosis, benign hereditary chorea, and Lesch-Nyhan syndrome; Dystonia, including Oppenheim's torsion dystonia, X-linked dys
  • Paroxysmal dyskinesias, including kinesigenic, non-kinesigenic, and exertional
  • Tourette's syndrome and Rett syndrome; essential tremor, primary head tremor, and primary voice tremor.
  • the training process involves six primary stages: 1) data acquisition, 2) data annotation, 3) data preparation, 4) training diagnostic models, 5) training model aggregation and 6) model deployment.
  • multiple tests are used for diagnosing Parkinson's disease and, as such, the details of these six stages may vary somewhat from one test to another.
  • the methods below utilize only data that can be collected via a standard video camera (e.g., on a smart phone or computer). However, data from other sensors could be added as extra input.
  • a range of tests may be recorded using a video camera with a functional microphone. The procedure for recording these data should be consistent from one patient to the next. These video recordings will be used for training models to diagnose PD and will serve as the input for the deployed system when making a diagnosis for a new patient.
  • the preferred tests can be broken down into the following tests (some of which may require multiple recordings), although it will be apparent to those having skill in the art that fewer or alternate tests may also be performed while maintaining diagnostic accuracy:
  • Record the patient getting up from his or her chair, walking 10-15 steps, turning 180 degrees, and walking back. This should be recorded in a way that captures a frontal view of the patient getting out of the chair. Additionally, the recording should include a frontal view of the patient at some point during the walking.
  • the above data will be recorded for a population of diseased and healthy individuals. Ultimately, recordings for a large population of individuals are desired. However, the dataset may grow iteratively with intermediate models being trained on available data.
  • the system could be deployed in a smart phone app that directs a patient to perform the above tests. The app could use existing trained models to offer a diagnosis for the patient and the data from that patient could then be added to the set of available training data for future models.
  • a data annotation phase will be required for labeling properties of the video recordings.
  • a trained expert will review each video recording and provide a collection of relevant assessments. When appropriate, the expert will assign a Unified Parkinson's Disease Rating Scale (UPDRS) rating for various observable properties of the patient. For example, for the face recording in Test 1, a UPDRS score will be assigned for facial expression and face/jaw tremor. For situations where the UPDRS is not applicable, the expert may assign an alternative label to the video recording. For example, for the face recording in Test 1, the expert may classify the patient's blink rate into 5 categories ranging from normal to severely reduced. For Test 2, the expert will assign a UPDRS score for the amount of tremor in each extremity.
  • the expert will assign a UPDRS score for the patient's speech based on the number of plosive sounds in a specific duration, or on the resonance, articulation, prosody, volume, voice quality, and articulatory precision of the prompted paragraph.
  • the expert will assign a UPDRS score for each repetitive movement task performed.
  • the expert will assign a UPDRS score for arising from the chair, posture, gait, and body bradykinesia/hypokinesia.
  • the expert may identify and label any other discriminative properties of the video recordings that could assist in a diagnosis, such as muscle tone (rigidity, spasticity, hypotonia, hypertonia, dystonia, and flaccidity), through video analysis of specific tasks, including alternating motion rates (AMRs) and gait analysis.
  • the data may require other forms of non-expert annotation.
  • these annotations are not concerned with diagnosing PD and are instead focused on labeling relevant properties of the video.
  • Examples of this include: trimming the ends of a video recording to remove irrelevant data, marking the beginning and end of speech, identifying and labeling each blink in a video sequence, labeling the location of a hand or foot throughout a video sequence, marking the taps in a video of finger tapping, segmenting actions in the video from Test 5 (e.g., arising from chair, walking, turning), etc.
  • Consistent annotations should be provided for all of the data available for training models. For the diagnostic annotations (UPDRS or other classification), all training examples must be labeled. Non-diagnostic annotations may not be required for every training example as they will generally be used for training data preparation stages rather than for training the final diagnostic models.
  • the raw video and audio data usually needs to go through several stages of preparation before it can be used to train models. These stages include data preprocessing (e.g., trimming video/audio, cropping video, adjusting audio gain, subsampling or supersampling time series, temporal smoothing, etc.), normalization (e.g., aligning audio clips to standard template, transforming face image to canonical view, detecting object of interest and cropping around it, etc.), and feature extraction (e.g., deriving Mel Frequency Cepstral Coefficients (MFCC) from acoustic data, computing optical flow features for video data, extracting and representing actions such as blinks or finger taps, etc.)
  • Test 1 The data from Test 1 includes a close-up view of the patient's face at rest and performing some actions. This data could be used to identify and measure tremors in the jaw and other regions of the face. For simplicity here, we will assume that Test 1 was divided into sub collections and that the data available for this task contains a recording of only the face at rest.
  • The facial expression test asks the patient to observe a combination of video and audio that will likely elicit changes in facial expression. This may include (but is not limited to) humorous, disgusting, or startling videos, photographs with similar characteristics, or startling audio clips. While the patient is observing these stimuli, the camera (in 'selfie mode,' or otherwise directed at the subject's face) records the patient's facial responses.
  • the first stage in processing the raw video data is to find a continuous region(s) within the video where the face is present, unobstructed, and at rest.
  • Off-the-shelf face detection algorithms (e.g., Viola-Jones or more advanced convolutional neural networks) or online services such as Amazon Rekognition™ can be used to identify video frames where the face is present. Regions of the video where a face is not present will be discarded. If there are not enough continuous sections with the face present, the video will need to be re-recorded or the data will be discarded from the training set.
  • the face detection algorithms run during this stage will also be used to crop the video to a region that only contains the face (with the face roughly centered). This process helps control for varying sizes of the face across different recordings.
• the next step in face processing is to identify the locations of standard facial landmarks (e.g., eye corners, mouth, nose, jaw line, etc.).
  • This can be done using freely licensed software or via online APIs.
  • a custom solution for this problem can be trained using data from freely available facial landmark datasets.
  • the algorithm extracts regions of interest from the video by cropping a rectangular region around a portion of the face.
  • One such region includes the jaw area and extends roughly from slightly below the chin to the middle of the nose in the vertical direction and to the sides of the face in the horizontal direction.
• Other regions of the face where tremors occur may also be extracted at this point. Additionally, a crop of the whole face may be retained.
  • image stabilization techniques are used to assure a smooth view of the object of interest within the cropped video sequence. These techniques may rely on the change in the detected face box region from one frame to the next or similarly the change in the location of specific facial landmarks. The goal of this normalization is to obtain a clear, steady view of the regions of interest. For example, the view of the jaw region should be smooth and consistent such that a tremor in the jaw would be visible as up and down movement within the region of interest and would not result in jitter in the overall view of the jaw region.
  • the prepared data consists of a collection of videos that are zoomed in on specific views of the face. As a final processing step, the duration of these clips may be modified to achieve a standard duration across patient recordings.
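By way of illustration only, the following Python sketch shows one way the face-detection, cropping, and duration-standardization steps described above might be realized with an off-the-shelf detector (here OpenCV's bundled Haar cascade, which is only one of the options mentioned above); the cascade file name, frame count, and crop size are illustrative assumptions rather than part of the original disclosure.

import cv2
import numpy as np

def prepare_face_clip(video_path, out_frames=150, out_size=(224, 224)):
    # off-the-shelf Haar cascade bundled with OpenCV (one possible detector)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    crops = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue  # frames without a detected face are discarded
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest detection
        crops.append(cv2.resize(frame[y:y + h, x:x + w], out_size))
    cap.release()
    if len(crops) < out_frames:
        return None  # not enough usable frames: re-record or discard from the training set
    idx = np.linspace(0, len(crops) - 1, out_frames).astype(int)  # standard clip duration
    return np.stack([crops[i] for i in idx])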
  • the dataset prepared according to the description above contains one or more video sequences of face regions of interest. These sequences have been standardized to include a fixed number of frames. Additionally, for each sequence, we have an expert annotation for the UPDRS score associated with the face/jaw tremor observed. For the sake of simplicity, we will describe a model for a single region of interest and then briefly discuss how this framework could be extended to multiple regions of interest.
  • each block includes a combination of convolutional operators and optional pooling and normalization layers.
  • the blocks may also include skip connections that feed the input data or a modified version of it forward in the network.
  • the features are flattened into a single feature vector.
  • the model learns the weights of the convolutional blocks so as to generate a single feature vector for each image that is useful for the discriminative task at hand.
  • the LSTM network in turn generates a feature vector for the whole sequence that can be used for generating a final real-valued prediction for the UPDRS score.
  • Learning in the network is performed by back propagating the loss associated with the predicted UPDRS score up through the LSTM layer and then through the convolutional blocks using standard optimization methods such as stochastic gradient descent.
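By way of example and not limitation, the following PyTorch sketch illustrates the kind of architecture described above: a stack of convolutional blocks encodes each frame, an LSTM summarizes the frame sequence, and a linear head outputs a real-valued UPDRS score trained with a regression loss and stochastic gradient descent. The layer sizes, clip dimensions, and training snippet are illustrative assumptions, not the specific model of the invention.

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2))
    def forward(self, x):
        return self.net(x)

class UPDRSRegressor(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # per-frame encoder: convolutional blocks producing one feature vector per image
        self.encoder = nn.Sequential(ConvBlock(3, 16), ConvBlock(16, 32), ConvBlock(32, 64),
                                     nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, clips):               # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.encoder(clips.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)       # sequence-level feature vector
        return self.head(h_n[-1]).squeeze(-1)  # predicted UPDRS score per clip

model = UPDRSRegressor()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# one hypothetical training step on a batch of (clips, expert UPDRS labels)
clips, scores = torch.randn(2, 16, 3, 64, 64), torch.tensor([1.0, 3.0])
loss = loss_fn(model(clips), scores)
loss.backward()
optimizer.step()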
• Training Model Aggregation: The goal of a general system for diagnosing PD is to produce a final diagnosis for a patient or to provide an overall UPDRS score for the patient. In order to do this, a final model must be trained to learn how to aggregate the predictions from the set of models that are trained to identify particular movement abnormalities.
  • a standard random forest regression model is trained to predict the overall UPDRS score from the input data.
  • Such a model can be trained and deployed using standard machine learning libraries such as scikit-learn. Many different models could be used to learn to make the overall diagnosis and random forest regression is suggested as just one example.
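As a minimal sketch of that aggregation stage using scikit-learn (which is named above), the per-test predictions and overall UPDRS labels below are placeholder values chosen only to show the shape of the data:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# rows = patients, columns = outputs of the individual movement-abnormality models
per_test_predictions = np.array([[1.2, 0.4, 2.0],
                                 [0.1, 0.0, 0.3],
                                 [2.8, 1.9, 3.1]])
overall_updrs = np.array([25.0, 4.0, 48.0])  # hypothetical expert labels

aggregator = RandomForestRegressor(n_estimators=200, random_state=0)
aggregator.fit(per_test_predictions, overall_updrs)
print(aggregator.predict([[1.0, 0.5, 1.8]]))  # overall UPDRS estimate for a new patient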
  • such a system could be implemented in a smart phone app.
  • Data for the patient would be collected by following a process within the app that records video and prompts for the appropriate patient actions.
  • the app would cycle through a series of discrete tests that correspond roughly to the tests above (though some of the above tests would be divided into multiple subtests).
  • Data from each test would be saved on the device or uploaded to the cloud.
  • the data would be passed to the appropriate data preparation methods that in turn would pass the prepared data to the appropriate diagnostic model.
  • the data from a single test might be passed to multiple different diagnostic pipelines (consisting of data preparation and model evaluation).
  • the diagnostic pipelines may be implemented on device, on a remote computer, or some combination of both.
  • the system would output the final diagnostic prediction to the patient along with intermediate model predictions.
• the system may display such an output on the screen of the device used to collect the initial sensor data, or may transmit it to the relevant parties via other means, such as SMS messaging to a mobile device or sending an email to a designated party.
  • the system might present additional information relevant to the diagnostic prediction (e.g., confidence scores, assessment of recording quality, recommendations for follow up tests, etc.).
  • the app may also log relevant information and data from the tests and could pass along information regarding the diagnosis to a selected medical professional.
• the artificial intelligence system will autonomously decide whether tissue plasminogen activator (tPA, or "clot buster"), or other treatment such as endovascular treatment or use of an antithrombotic treatment, is appropriate to deliver to patients presenting with a stroke emergency.
• the Acute Stroke Artificial Intelligence System (ASAIS) will have at least one of three general types of sensors to assess the patient, including video, audio, and an infrared generator/sensor.
• there will be a 'clinical data' input. The clinical data can be manually entered by a nurse or medical assistant, or be linked with the facility's electronic health record (EHR) for direct transfer of some of the data.
  • the clinical data includes: biographic data, time of onset of symptoms or last time the patient was seen as 'normal', laboratory data (platelet count, international normalized ratio and prothrombin time), brain imaging data (typically head computed tomogram without contrast) and blood pressure.
• the sensors will determine factors including, but not limited to, detection of patient signs relevant to the assessment of each aspect of the modified National Institutes of Health Stroke Scale (mNIHSS). Such tests include the following:
• Dysarthria assessment: having the patient read from the list of words provided with the stroke scale and distinguishing between normal (clear and smooth speech), mild-to-moderate dysarthria (some slurring of speech, though the patient can be understood), and severe dysarthria (speech so slurred that he or she cannot be understood, or the patient cannot produce any speech).
  • This aggregate data will then be analyzed by the ASAIS.
  • the collection component of ASAIS may be locally housed in a laptop with software being
  • the ASAIS decision making algorithms will generate one of three ultimate outputs: YES, NO or MAYBE to administering tPA to the patient.
• the emergency physician can use his or her own judgement along with the output of the ASAIS to make a final decision as to whether or not to give tPA.
  • Flow chart 1 shows this basic process.
• teleneurology service to further scale up the neurologist's volume of hospitals covered (within limits) and provide a human neurologist 'back-up' for any cases that are deemed uncertain by the emergency physician.
  • the second output is NO to administering tPA.
  • the neurologist will be directly involved in only those cases in which the emergency physician questions or is uncertain of the output, as outlined above.
  • the third output option is MAYBE to administering tPA. The neurologist will be involved in all of these cases via telemedicine.
• the National Institutes of Health Stroke Scale (NIHSS) is a standardized neurologic exam scale used widely to rate the severity of stroke deficits. The range is from 0 (normal) to 42 (most severe stroke). In broad terms, NIHSS scores of 0-5 correlate to small strokes and scores of 20 and above correlate to large strokes. Due to anticipated technical limitations, the NIHSS may be modified.
  • the invention will have a mobile application version for home self-testing use. This application will utilize the video, audio and, if available on the device, infrared time-of-flight.
  • Neurostimulation devices are medical devices that provide electrical current to specific regions of the brain or other parts of the nervous system for a therapeutic effect.
  • one variant of such neurostimulation devices are termed deep brain stimulation (DBS) devices, such as those described in U.S. Patent No. 8,024,049.
• DBS is an FDA-approved therapy for Parkinson's Disease, tremor and dystonia. In the future, DBS will likely gain FDA approval for stroke recovery. The first DBS implant for stroke recovery occurred on December 19, 2016 at the Cleveland Clinic (Ohio) using a device produced by Boston Scientific.
• the system of the present invention may be used to produce specific programming suggestions to optimize the performance of the implanted device in the patient, both to improve therapeutic efficacy (such as, but not limited to, improving rigidity, tremor, akinesia/bradykinesia or induction of dyskinesia) and to reduce unintended side effects of the device (such as, but not limited to, dysarthria, tonic contraction, diplopia, mood changes, paresthesia, or visual phenomena).
• the sensor inputs described in the working example above may be used to train a machine learning algorithm to make specific suggestions regarding the various programming variables available on DBS devices.
• Such suggestions include changes in AMPLITUDE (in volts or mA), PULSE WIDTH (in microseconds [μsec]), RATE (in Hertz), POLARITY (of electrodes), ELECTRODE SELECTION, STIMULATION MODE (unipolar or bipolar), CYCLE (on/off times in seconds or minutes), POWER SOURCE (in amplitude) and calculated CHARGE DENSITY (in μC/cm² per stimulation phase).
• the system of the present invention may use similar data collected from individual patients to make specific recommendations for altering the programming variables for each patient's implanted device.
• One key benefit of the system of the present invention is that such programming changes may be made in real time, with the system monitoring the patient both to validate any suggested programming changes and to potentially suggest additional changes that may further improve the function of the medical device for the patient.
• the sensor data may be analyzed in real time by machine learning and optimization systems through an iterative process testing a large number (thousands to millions) of possible DBS stimulation patterns via direct communication with the implanted pulse generator (IPG) through standard telemetry, radiofrequency signals, Bluetooth™ or other means of wireless communication between the application and the IPG.
  • the system finds the optimized DBS stimulation pattern and is able to set this stimulation pattern as a baseline.
  • This baseline DBS stimulation pattern can be modified anytime manually by the healthcare provider-programmer or using this application for optimization at a later time.
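A highly simplified sketch of such an iterative search is shown below. The functions send_to_ipg() and score_symptoms() are hypothetical placeholders for, respectively, the wireless command to the implanted pulse generator and the sensor-based assessment of symptom severity; the parameter ranges are illustrative only and not drawn from the original text.

import itertools
import random

amplitudes = [1.0, 1.5, 2.0, 2.5, 3.0]   # volts (illustrative values only)
pulse_widths = [60, 90, 120]             # microseconds
rates = [130, 160, 185]                  # Hertz

def send_to_ipg(settings):
    """Placeholder: transmit candidate settings to the IPG over telemetry/Bluetooth."""

def score_symptoms(settings):
    """Placeholder: run the sensor tests and return a symptom severity score (lower is better)."""
    return random.random()

best_settings, best_score = None, float("inf")
for amp, pw, rate in itertools.product(amplitudes, pulse_widths, rates):
    settings = {"amplitude": amp, "pulse_width": pw, "rate": rate}
    send_to_ipg(settings)
    score = score_symptoms(settings)
    if score < best_score:
        best_settings, best_score = settings, score

print("optimized baseline stimulation pattern:", best_settings)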
  • the system of the present invention may use the same iterative process, described above to optimize stimulation patterns for other neuropsychiatric disorders, including obsessive-compulsive disorder, major depressive disorder, drug-resistant epilepsy, central pain and
• Figure 4 illustrates one possible implementation of the system of the present invention to produce recommendations for programming a DBS in a patient.
  • a mobile device such as a cell phone or tablet computer
• the user is then prompted to perform a series of tests on the subject to be diagnosed (402). It will be apparent that the user and the subject can be the same person, or different people.
• the application has prompted the user to perform three tests: one focusing on recording various facial expressions using the device's built-in camera, one focusing on fine motor control using an accelerometer equipped within the device, and one focusing on speech patterns by having the user read a sentence displayed on the screen and recording the speech using the device's microphone.
  • the relevant data is collected (403).
  • the data is then transmitted to a remote cloud server, where a trained AI program of the present invention processes and analyzes the data (404) to produce a DBS result based on the particular test (405).
• the individual DBS results are then aggregated by a trained AI program (406) to produce a final DBS result (407), which is output to the user, such as suggested programming settings for the variables described above.
• the trained AI program could be housed on the device used to collect the data, provided the device has sufficient computing power and storage to run the full application.
Dizziness:
  • the role of this invention is to aid the physician, in any clinical setting, to help diagnose the cause of dizziness.
• the invention includes an Artificial Intelligence based system that uses video, audio and (if available) infrared time-of-flight INPUTS to analyze the patient's motor activity, movements, gait, eye movements, facial expression and speech. It will also have inputs regarding the temporal profile of the dizziness (acute severe dizziness, recurrent positional dizziness or recurrent attacks of nonpositional dizziness). This data can be entered manually by a medical assistant or via natural language processing by the patient via prompts.
• the purpose of the invention is to aid in the differentiation of ES and NBS using machine learning algorithms primarily analyzing digital video. In other embodiments, additional inputs may also be utilized.
• the software can be embedded within the existing infrastructure of EMUs and will have a mobile/tablet version for patient home use. This will help motivate patients to record the events. In addition to having the analysis from the invention, they will be able to share the video with their neurologist for confirmation.

Abstract

A system and methods of diagnosing and monitoring neurological disorders in a patient utilizing an artificial intelligence based system. The system may comprise a plurality of sensors, a collection of trained machine learning based diagnostic and monitoring tools, and an output device. The plurality of sensors may collect data relevant to neurological disorders. The trained diagnostic tool will learn to use the sensor data to assign risk assessments for various neurological disorders. The trained monitoring tool will track the development of a disorder over time and may be used to recommend or modify the administration of relevant treatments. The goal of the system is to render an accurate evaluation of the presence and severity of neurological disorders in a patient without requiring input from an expertly trained neurologist.

Description

MACHINE LEARNING BASED SYSTEM FOR IDENTIFYING AND MONITORING
NEUROLOGICAL DISORDERS
PATENT COOPERATION TREATY (PCT) PATENT APPLICATION
Cross Reference To Related Applications
[0001] This application claims priority from U.S. Provisional Patent Application No. 62/573,622, filed October 17, 2017, which is incorporated herein by reference, and U.S. Patent Application No. 16/162,711, filed October 17, 2018.
Background
[0002] The total economic burden of neurologic disease is currently estimated to exceed $800 Billion annually in the United States. Early detection and diagnosis of these diseases typically leads to earlier treatment and a decrease in the total cost of care over an individual's lifetime.
[0003] Currently, diagnosis of such diseases requires the involvement of a physician. In the United States, it is predicted that there will be a shortage of between 90,000 and 140,000 physicians by the year 2025. Worldwide, the shortfall is expected to exceed 12.9 Million healthcare providers by 2035.
[0004] Furthermore, many general practitioner (GP) physicians lack the necessary training to accurately diagnose movement disorders. For instance, a 1999 study conducted in Britain found that GPs had an error rate of just under 50% when diagnosing Parkinson's disease. (Jolyon Meara et al., Accuracy of Diagnosis in Patients with presumed Parkinson's disease; Age and Ageing (1999); 28:99-102.) This state of affairs is partially due to the fact that with most movement disorders, the symptoms at onset may be very subtle, and there is typically no obvious trauma to the patient (such as a blow to the head) which would lead the GP to suspect a problem with the patient's nervous system.
[0005] While neurologists specializing in the disease are much more accurate in their diagnoses, even general neurologists have a significant error rate. As such there is a need for a diagnostic system that can accurately diagnose a neurological disorder, thus reducing the burden on our medical system by both aiding GPs in making an initial diagnosis and reducing the loss and suffering that result from a potential misdiagnosis.
[0006] Additionally, many patients suffering from such diseases are located in remote areas, or otherwise find it difficult to access a trained neurologist to secure an accurate diagnosis of their disease. Thus there is a need for some system of rendering an accurate diagnosis that can be used in a simple clinic setting, or even in the patient's own home, by otherwise untrained individuals.
[0007] In addition to movement disorders, dizziness is a common and difficult symptom to diagnose. The prevalence of dizziness and related complaints, such as vertigo and unsteadiness, may be between 40% and 50% (Front Neurol. 2013;4:29). Dizziness as a chief complaint in the emergency department (ED) accounts for nearly 3.9 million visits annually, and dizziness can be a component symptom of up to 50% of all ED visits. In terms of the primary care office, there are approximately 8 million visits annually with the chief complaint of dizziness, and 50% of the elderly population will seek medical attention for dizziness.
[0008] The challenge for the clinician is twofold: first, the broad use of the word "dizzy" by the patient, and second, the wide range of root causes that can manifest those symptoms. The root causes range from benign (the common cold) to deadly (stroke).
[0009] People very commonly use the word "dizzy" as a catch-all term for a variety of more specific symptoms, such as vertigo (hallucination of motion), presyncope (light headedness) or ataxia (lack of balance or coordination). Often the patients themselves, even with skilled probing from the doctor, will not be specific and revert to using the word 'dizzy'.
[0010] The other primary challenge relates to the wide variety of causes of dizziness. These may be due to inner ear / vestibular (benign paroxysmal positional vertigo, vestibular neuronitis, Meniere's disease), neurologic (acute stroke, brain tumor), cardiac (heart failure, low blood pressure), psychiatric (anxiety) and a variety of other medical disorders.
[0011] A secondary challenge, especially for physicians (commonly emergency physicians, neurologists and internal medicine hospitalists) providing acute care in the emergency department, urgent care, clinics, or hospital is the physical exam. This is centered on discriminating normal from abnormal eye movements. Indeed, even seasoned neurologists can have difficulty accurately examining eye movements. There can also be very subtle abnormalities in motor speech production or facial symmetry.
[0012] It is the above three challenges that finally coalesce into the acute evaluation: Is this dizziness life threatening or not? A dangerous cause of dizziness that is difficult to diagnose solely on history and physical exam is acute stroke affecting the posterior circulation.
[0013] Indeed, there is data showing that strokes affecting the posterior circulation (vertebro-basilar system supplying blood to the brainstem and back of the brain) are more often missed in the ED than strokes occurring in the anterior circulation (carotid system supplying blood to the front of the brain). (Stroke. 2016;STROKEAHA.115.010613)
[0014] Furthermore, physicians have a difficult time quickly and accurately diagnosing epileptic seizures. An epileptic seizure is a brief electrical event (mean duration ~ 1 minute) that occurs in the cerebral cortex and is caused by an excessive volume of neurons depolarizing ('firing') hypersynchronously. One in ten people will have a seizure at some point in their life, but only around one in 100 (1%) of the population develop epilepsy. Epilepsy is an enduring propensity towards recurrent, unprovoked seizures.
[0015] Sometimes patients have episodes that resemble seizures to the observer but they are not epileptic seizures. These 'nonepileptic events' must then be further categorized into physiologic (passing out, heart arrhythmia etc) versus psychogenic. Psychogenic events are the most common diagnostic alternative to epileptic seizures in epilepsy centers, and will be described further.
[0016] Psychogenic events are a physiologically different condition that resemble epileptic seizures (ES) to the observer (i.e., falling to the ground and convulsing, etc.). This disorder, unfortunately, has multiple names in the medical literature, adding confusion for patients suffering from, and nonspecialists treating, these conditions. These names include: pseudoseizures, nonepileptic seizures, psychogenic seizures, psychogenic nonepileptic seizures, nonepileptic attack disorder, or nonepileptic behavioral spell.
[0017] These terms are synonymous. In this discussion, the preferred term will be nonepileptic behavioral spell (NBS).
[0018] Nonepileptic behavioral spells are a psychologic condition that typically stems from a severe emotional trauma prior to the onset of the NBS. In some cases, the trauma may have occurred 40-50 years prior to the onset. The emotional trauma, for unclear reasons, manifests as physical symptoms. This process is broadly termed 'conversion disorders', referring to the central nervous system converting emotional pain into physical symptoms. These physical symptoms can often manifest as chronic, unexplained abdominal pain or headaches, for example. Sometimes the emotional pain or stress manifests as episodes of convulsing, or what appears to be alteration of consciousness; these events are NBS.
[0019] The gold standard for diagnosing NBS is inpatient video-electroencephalography (V-EEG) monitoring in an epilepsy monitoring unit (EMU). This is a time, labor and cost intensive procedure. Patients are typically admitted to the hospital as inpatients for three to seven days.
[0020] Time synchronized digital video, scalp EEG, electrocardiogram (ECG) and pulse oximetry are all recorded continuously 24/7 to record a habitual event.
[0021] The diagnosis primarily relies on the 'ictal EEG' pattern. Ictal or ictus refers to the event. Therefore, this refers to what is happening in the brain waves during the actual episode. For most epileptic seizures, there is a distinct change in the EEG, i.e., the seizure manifests as a self-limited rhythmic focal or generalized pattern. There is typically some post-seizure slowing of brain wave frequencies for a few minutes afterwards, and then resumption of normal patterns.
[0022] In contrast, during NBS, there is no change in the EEG during the event. There are typically normal background rhythms of wakefulness with superimposed movement / muscle artifacts.
[0023] The neurologist considers this 'ictal EEG' along with the digital video.
Neurologists have long recognized that ES and NBS have distinct differences in their physical manifestations. Furthermore, with proper education, training and exposure to a high volume of examples, a neurologist can become fairly accurate in diagnosing NBS from digital video or direct observation. These neurologists, who have usually done a 1-2-year fellowship after neurology residency, are termed epileptologists. There is a predicted shortage looming of all neurology providers, including epileptologists.
[0024] Even with this body of knowledge, there can be diagnostic uncertainty in the EMU. For example, there is a type of seizure termed 'simple partial seizure' (SPS) that involves only a focal region of the cerebral cortex and does not alter consciousness. Only 15% of SPS will have a distinct ictal EEG pattern. In these cases, the patient's history, imaging and other seizure types are critical to diagnosis. Another example is mesial frontal lobe seizures. These are seizures which originate on the surface of the frontal lobe at midline, where the neurons are no longer directly underneath the skull. Ironically, seizures from these regions can create bizarre seizure types (swirling movements, behavioral changes that appear intentional, etc.) and, due to the biophysics of EEG, typically do not produce clear ictal EEG changes.
[0025] The burden of NBS is large. Approximately 25% of patients referred to specialized epilepsy centers for 'drug-resistant' epilepsy are found to actually have NBS. There is an average delay of 1-7 years in diagnosing NBS. This leads to unnecessary exposure to antiseizure medications, side effects and health care utilization.
[0026] An additional challenge is monitoring the progression of a neurological disorder over time. The ability to quantitatively measure this progression could have significant impacts in the development and administration of treatments for these diseases. Additionally, the ability to monitor the state of the disease may enable patients to adjust their treatments without requiring a specialist visit.
[0027] As such, there is a need for a system which can, either on its own or in conjunction with a physician, accurately diagnose a specific neurological disorder in a patient without the need for the patient or physician to have any prior training in diagnosing such conditions.
Summary of the Invention
[0028] It is one aspect of the present invention to provide a system that provides accurate and rapid diagnosis of a patient. In certain embodiments, the system is tailored to diagnose patients presenting with symptoms of a stroke, patients suffering from a potential movement disorder, patients who have recently undergone a seizure, and patients suffering from dizziness.
[0029] It is another aspect of the present invention to provide a system that provides useful programing recommendations of medical devices implanted in a patient. In certain embodiments, such programming recommendations will improve therapeutic efficacy of the implanted device, or reduce unwanted side effects. In certain embodiments such implanted medical devices include deep brain stimulation devices (DBSs), which may be implanted to improve symptoms associated with Parkinson's Disease or stroke.
[0030] In certain embodiments of the present invention, the system will comprise a series of sensors to collect data from the patient that are relevant to the diagnosis. These sensors may include light sensors, such as video or still cameras, audio sensors, such as those found on standard cellular phones, gyroscopes, accelerometers, pressure sensors, and sensors sensitive to other electromagnetic wavelengths, such as infrared.
[0031] In certain embodiments, these sensors will be in communication with an artificial intelligence system. Preferably, this system will be a machine learning system that, once trained, will process the inputs from the various sensors and produce a diagnostic prediction for the patient based on the analysis. This system may then produce an output indicating the diagnosis to the patient or a physician. In some embodiments, the output may be a simple "yes", "no", "inconclusive" diagnosis for a particular disease. In alternate embodiments, the output may be a list of the most likely diseases, with a probability score assigned to each one. One key advantage of such a system is that, by training the system to reach a diagnosis in an unbiased manner, the system may be able to identify new clinical indicia of disease, or recognize previously unidentified
combinations of symptoms that allow it to accurately diagnose a disorder where even an expert clinician would fail to do so.
[0032] In embodiments where the progression of the disease is monitored, the system of the present invention may operate by assigning a "severity" score to a patient and comparing that score to one derived by the system at an earlier timepoint. Such information can be beneficial to a patient, as it allows the patient to, for example, monitor the success of a course of treatment or determine if a more invasive form of treatment may be justified.
[0033] In another aspect of the present invention, the diagnostic system of the present invention is housed in a remotely accessible location, and is capable of performing all of the data processing and analysis necessary to render a diagnosis. Thus in certain embodiments, a physician or patient with limited access to resources or in a remote location may submit raw data collected on the sensors available to them, and receive a diagnosis from the system.
[0034] Thus, it is one embodiment of the present invention to provide a system for diagnosing a patient , the system comprising: at least one sensor in communication with a processor and a memory; wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient; wherein said raw patient data comprises at least one of a video recording and an audio recording; a data processing module in communication with the processor and the memory; wherein said data processing module converts said raw patient data into processed diagnostic data; a diagnosis module in communication with the data processing module; wherein said diagnosis module is remote from the at least one sensor; wherein said diagnosis module comprises a trained diagnostic system; wherein said trained diagnostic system comprises a plurality of diagnostic models; wherein each of said plurality of diagnostic models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed diagnostic data; and wherein said trained diagnostic system integrates the classifications of said plurality of diagnostic models to output a diagnostic prediction for said patient.
[0035] It is another embodiment of the present invention to provide such a system, wherein said diagnosis module is housed on a remote server.
[0036] It is yet another embodiment of the present invention to provide such a system, wherein said diagnostic prediction further comprises a confidence value.
[0037] It is still another embodiment of the present invention to provide such a system, wherein said at least one sensor is housed within a mobile device.
[0038] It is yet another embodiment of the present invention to provide such a system, wherein said trained diagnostic system is trained using a machine learning system.
[0039] It is still another embodiment of the present invention to provide such a system, wherein said machine learning system comprises at least one of a convolutional neural network (e.g., Krizhevsky, A., Sutskever, I, and Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in Neural
Information Processing Systems (NIPS 2012)), a recurrent neural network (Jain, L. and Medsker, L. (1999). Recurrent Neural Networks: Design and Applications (1st ed.). CRC Press, Inc., Boca Raton, FL, USA.), a long-term short-term memory network (Hochreiter, S. and Schmidhuber, J. (1997). Long Short-Term Memory. Neural Comput. 9, 8
(November 1997), 1735-1780.), and a random forest regression model (Breiman, L. (2001). Random Forests. Machine Learning. 45 (1): 5-32.).
[0040] It is yet another embodiment of the present invention to provide such a system, wherein said raw patient data comprises a video recording.
[0041] It is still another embodiment of the present invention to provide such a system, wherein said video recording comprises a recording of a patient performing repetitive movements.
[0042] It is yet another embodiment of the present invention to provide such a system, wherein said repetitive movements comprise at least one of rapid finger tapping, opening and closing the hand, hand rotations, and heel tapping.
[0043] It is still another embodiment of the present invention to provide such a system, wherein said raw patient data comprises an audio recording.
[0044] It is yet another embodiment of the present invention to provide such a system, wherein said audio recording comprises the patient reading a prompted sentence aloud.
[0045] It is an additional embodiment of the present invention to provide a system for diagnosing a neurological disorder in a patient, the system comprising: at least one sensor in communication with a processor and a memory; wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient; wherein said raw patient data comprises at least one of a video recording and an audio recording, a data processing module in communication with the processor and the memory; wherein said data processing module converts said raw patient data into processed diagnostic data, a diagnosis module in communication with the data processing module; wherein said diagnosis module comprises a trained diagnostic system; wherein said trained diagnostic system comprises a plurality of diagnostic models; wherein each of said plurality of diagnostic models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed diagnostic data; and wherein said trained diagnostic system integrates said classifications of said plurality of diagnostic models to output a diagnostic prediction for said patient.
[0046] It is another embodiment of the present invention to provide such a system, wherein the program executing said diagnosis module is executed on a device that is remote from the at least one sensor.
[0047] It is yet another embodiment of the present invention to provide such a system, wherein said trained diagnostic system is trained to diagnose a movement disorder.
[0048] It is still another embodiment of the present invention to provide such a system, wherein said movement disorder is Parkinson's Disease.
[0049] It is yet another embodiment of the present invention to provide such a system, wherein said raw patient data comprises a video recording, wherein said video recording comprises at least one of: a recording of the patient's face while performing simple expressions; a recording of the patient's blink rate; a recording of the patient's gaze variations; a recording of the patient while seated; a recording of the patient's face while reading a prepared statement; a recording of the patient performing repetitive tasks; and a recording of the patient while walking.
[0050] It is still another embodiment of the present invention to provide such a system, wherein said raw patient data comprises an audio recording, wherein said audio recording comprises at least one of: a recording of the patient repeating a prepared statement; a recording of the patient reading a sentence; and a recording of the patient making plosive sounds.
[0051] It is yet another embodiment of the present invention to provide such a system, wherein said plurality of algorithms are trained using a machine learning system.
[0052] It is still another embodiment of the present invention to provide such a system, wherein said machine learning system comprises at least one of: a convolutional neural network; a recurrent neural network; a long-term short-term memory network; support vector machines; and a random forest regression model.
[0053] It is another embodiment of the present invention to provide a system for calibrating an implanted medical device in a patient, the system comprising: at least one sensor in communication with a processor and a memory; wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient; wherein said raw patient data comprises at least one of a video recording and an audio recording; a data processing module in communication with the processor and the memory; wherein said data processing module converts said raw patient data into processed calibration data; a calibration module in communication with the data processing module; wherein said calibration module comprises a trained calibration system; wherein said trained calibration system comprises a plurality of calibration models; wherein each of said plurality of calibration models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed calibration data; and wherein said trained calibration system integrates said classifications of said plurality of calibration models to output a calibration recommendation for said implanted medical device of said patient.
[0054] It is another embodiment of the present invention to provide such a system, wherein the program executing said calibration module is executed on a device that is remote from the at least one sensor.
[0055] It is yet another embodiment of the present invention to provide such a system, wherein said implanted medical device comprises a deep brain stimulation device (DBS).
[0056] It is still another embodiment of the present invention to provide such a system, wherein said calibration recommendation comprises a change to the programming settings of said DBS comprising at least one of: amplitude, pulse width, rate, polarity, electrode selection, stimulation mode, cycle, power source, and calculated charge density.
[0057] It is yet another embodiment of the present invention to provide such a system, wherein said raw patient data comprises a video recording, wherein said video recording comprises at least one of: a recording of the patient's face while performing simple expressions; a recording of the patient's blink rate; a recording of the patient's gaze variations; a recording of the patient while seated; a recording of the patient's face while reading a prepared statement; a recording of the patient performing repetitive tasks; and a recording of the patient while walking.
[0058] It is still another embodiment of the present invention to provide such a system, wherein said raw patient data comprises an audio recording, wherein said audio recording comprises at least one of: a recording of the patient repeating a prepared statement; a recording of the patient reading a sentence; and a recording of the patient making plosive sounds.
[0059] It is yet another embodiment of the present invention to provide such a system, wherein said plurality of algorithms are trained using a machine learning system.
[0060] It is still another embodiment of the present invention to provide such a system, wherein said machine learning system comprises at least one of: a convolutional neural network; a recurrent neural network; a long-term short-term memory network; support vector machines; and a random forest regression model.
[0061] It is another embodiment of the present invention to provide a system for monitoring the progression of a neurological disorder in a patient diagnosed with such a disorder, the system comprising: at least one sensor in communication with a processor and a memory; wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient; wherein said raw patient data comprises at least one of a video recording and an audio recording; a data processing module in communication with the processor and the memory; wherein said data processing module converts said raw patient data into processed diagnostic data; a progression module in communication with the data processing module; wherein said progression module comprises a trained diagnostic system; wherein said trained diagnostic system comprises a plurality of diagnostic models; wherein each of said plurality of diagnostic models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed diagnostic data; wherein said trained diagnostic system integrates said classifications of said plurality of diagnostic models to generate a current progression score for said patient; and wherein said progression module compares said current progression score for said patient to a progression score from said patient generated at an earlier timepoint to create a current disease progression state, and output said disease progression state.
[0062] These, and other, embodiments of the invention will be better appreciated and understood when considered in conjunction with the following description and the accompanying tables. It should be understood, however, that the following description, while indicating various embodiments of the invention and numerous specific details thereof, is given by way of illustration and not of limitation. Many substitutions, modifications, additions and/or rearrangements may be made within the scope of the invention without departing from the spirit thereof, and the invention includes all such substitutions, modifications, additions and/or rearrangements.
Description of the Figures:
[0063] Figure 1: Block diagram of one embodiment of the training procedure of the artificial intelligence based diagnostic system.
[0064] Figure 2: Block diagram of one embodiment of the diagnostic system as used in practice.
[0065] Figure 3: Diagram illustrating one possible implementation of the system of the present invention.
[0066] Figure 4: Diagram illustrating one possible embodiment of the system of the present invention.
Detailed description of the Invention:
[0067] Definitions:
[0068] The phrase "comprising at least one of X and Y" refers to situations where X is selected alone, situations where Y is selected alone, and situations where both X and Y are selected together.
[0069] A "confidence value" indicates the relative confidence that the diagnostic system has in the accuracy of a particular diagnosis.
[0070] A "mobile device" is an electronic device which may be carried and used by a person outside of the home or office. Such devices include, but are not limited to, smartphones, tablets, laptop computers, and PDAs. Such devices typically possess a processor coupled to a memory, an input mechanism, such as a touchscreen or keyboard, and output devices such as a display screen or audio output, and a wired or wireless interface capability, such as wifi, BLUETOOTH™, cellular network, or wired LAN connection that will enable the device to communicate with other computer devices.
[0071] A software "module" comprises a program or set of programs executable on a processor and configured to accomplish the designated task. A module may operate autonomously, or may require a user to input certain commands.
[0072] A "server" is a computer system, such as one or more computers and/or devices, that provides services to other computer systems over a network. [0073] In certain embodiments, the system consists of a collection of sensors used to record a patient's behaviors over a period of time producing a temporal sequence of data. The primary system preferably involves utilizing the video and audio sensors commonly available on smart-phones, tablets, and laptops. In addition to these primary sensors, when available, other sensors including range imaging camera, gyroscope, accelerometer, touch screen / pressure sensor, etc. may be used to provide input to the machine learning and diagnostic system. It will be apparent to those having skill in the art that the more sensor data that is available to the system, the more accurate the resulting diagnosis is likely to be once diagnostic systems have been trained using the relevant sensor data.
[0074] Thus, in certain embodiments, the purpose of the machine learning system is to take as input the temporal or static data recorded from the sensors and produce as output a probability score for each of a collection of diagnoses. The system may also output a confidence score for each of the diagnostic probabilities. Furthermore, the system may be used to calibrate implanted devices, such as deep brain stimulation devices, to optimize the therapeutic efficacy of such devices.
[0075] In light of the challenges described above, one goal of the machine learning system is to serve as an inexpensive means for detecting neurological disorders, including movement disorders. Initially, it is expected that the output of the system will guide physicians in making a decision about a patient; however, this state of affairs may change as confidence grows in the accuracy of the system. As the system will initially be used primarily to identify at-risk patients, it may be tuned to have a low false negative rate (i.e., high sensitivity) at the cost of a higher false positive rate (i.e., lower specificity). In alternate embodiments, the system of the present invention may be used to monitor patients after a diagnosis has been made. Such monitoring may be used, for example, to determine disease progression, guide treatment plans for patients, such as recommending dosages of medication to treat a movement disorder, or suggest programming changes for an implanted medical device such as a deep brain stimulation device.
[0076] Preferably, the system will include a collection of tests the patient will be asked to perform during which time sensor data will be recorded. These tests will be designed to elicit specific diagnostic information. In certain embodiments, the device used to collect the data will prompt the user or patient to perform the preferable tests. Such prompts may be made, by way of example, by using a written description of the test, by providing a video demonstration to be displayed on the screen of the device (if available), or by providing a frame or other outline on a live video feed displayed on the device to indicate where the camera should be centered. Preferably, the system will be flexible such that it can produce a diagnostic decision without needing results from every test (for example in cases where a particular sensor is unavailable).
[0077] In certain embodiments, the patient may repeat the suite of tests at regular or irregular intervals of time. For example, the patient may repeat the test once every two weeks to continually monitor the progression of the disease. In cases where data is collected from multiple points in time, the diagnostic system may integrate across all data points to derive an evaluation of the state of the disease.
[0078] In certain embodiments, the machine learning system as a whole will take the data acquired during these tests and use them to produce the desired output. In other embodiments, the system may also integrate background information about a patient including but not limited to age, sex, prior medical history, family history, and results from any additional or alternate medical tests.
[0079] The whole machine learning system may include components that utilize specific machine learning algorithms to produce diagnoses from a single test or a subset of the tests. If the system includes multiple diagnostic components, the system will utilize an additional machine learning algorithm to combine across the results in order to produce the final system output. The machine learning system may have a subset of required tests that must be completed for every patient or it can be designed to operate with the data from any available tests. Additionally, the system may prescribe additional tests in order to strengthen the diagnosis.
[0080] The processing performed by the machine learning system can be performed on device, on a local desktop machine, or in a remote location via an electronic connection. When processing is not performed on the same device which collected the sensor data, it is assumed that the data will be transmitted to the appropriate computing device, such as a server, using any commonly available wired or wireless technology. It will be apparent to those having skill in the art that in such cases, the remote computer will be configured to receive the data from the initial device, analyze such data, and transmit the result to the appropriate location.
[0081] In certain embodiments, the machine learning system for identifying potential diseases comprises one or more machine learning algorithms combined with data processing methods. The machine learning algorithms typically involve several stages of processing to obtain the output including: data preprocessing, data normalization, feature extraction, and classification/regression. The components of the system may be implemented separately for each sensor in which case, the final output results from the fusion of the classification/regression outputs associated with each sensor. Alternatively, some of the sensor data can be fused at the feature extraction stage and passed on to a shared classification/regression model.
[0082] In what follows, examples are provided for what each stage of processing entails. This is meant to help elucidate the role of each component, but by no means covers the full range of methods that may be included.
[0083] Data preprocessing: Temporally aligning data, subsampling or supersampling (interpolation) in time and space, basic filtering.
[0084] Data Normalization: General organization of the data to identify the most important components and to normalize the data across collections. Face
detection/localization (e.g., Viola, P. and Jones, M. (2001). Robust real-time face detection. International Journal of Computer Vision (UCV),57(2): 137-154.), facial keypoint detection (e.g., Ren, S., Cao, X., Wei, Y., Sun, J. (2014). Face alignment at 3000 fps via regressing local binary features. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1685-1692.), speech detection, motion detection.
[0085] Feature Extraction: Application of filters or other methods to obtain an abstract feature set that captures the relevant aspects of the input data. An example of this is the extraction of optical flow features from image sequences. In audio, Mel Frequency Cepstral Coefficients (MFCC) might be extracted from the acoustic signal. The feature extraction may be implicitly implemented within the classification/regression model (this is commonly the case with deep learning methods). Alternately, feature extraction may be performed prior to passing the data to an artificial neural network.
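By way of illustration only, the MFCC extraction mentioned above could be realized with a library such as librosa (one possible choice, not specified in the original text); the file name and parameters below are assumptions:

import librosa

# load a hypothetical patient speech recording at a fixed sampling rate
y, sr = librosa.load("patient_speech.wav", sr=16000)
# 13 Mel Frequency Cepstral Coefficients per analysis frame -> array of shape (13, num_frames)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
# the coefficient matrix (or statistics computed over it) is then passed to the classifier/regressor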
[0086] Classification/Regression: A supervised machine learning algorithm that is trained from data to produce a desired output. In the case of classification, the system's goal is to determine which of a set of diagnoses is most likely given the input. The set of diagnoses will preferably include a null option that represents no disease or movement disorder. In certain embodiments, the output of a classification system is generally a probability associated with each possible diagnosis (where the probabilities across all output sum to 1). In a regression system, real valued outputs are predicted independently. For example, the system could be trained to predict scores that fall on an institutional scale for measuring the severity of a disorder (e.g., Unified Parkinson's Disease Rating Scale (UPDRS)). As will be apparent to those with skill in the art, machine learning classification/regression algorithms that might be used to produce the final output are artificial neural networks (relatively shallow or deep) (Goodfellow, I, Bengio, Y., and Courville, A. (2016). Deep Learning. The MIT Press.), recurrent neural networks, support vector machines (Hearst, M. (1998). Support Vector Machines. IEEE Intelligent Systems 13, 4 (July), 18-28.), and random forests. The system may also utilize an ensemble of machine learning methods to generate the output (Zhang, C. and Ma, Y. (2012).
Ensemble Machine Learning: Methods and Applications. Springer Publishing
Company.).
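A small sketch of the classification case described in this paragraph is shown below: a trained model outputs a probability for each possible diagnosis (including a null "no disorder" option), with the probabilities summing to 1. The feature vectors and labels are random placeholders, and the random forest is just one of the algorithm families listed above:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.random.rand(100, 20)                                      # placeholder extracted feature vectors
y = np.random.choice(["none", "parkinsons", "other"], size=100)  # placeholder diagnoses

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = clf.predict_proba(np.random.rand(1, 20))   # one probability per possible diagnosis
print(dict(zip(clf.classes_, probs[0])))           # probabilities sum to 1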
[0087] A range of sensors may be employed to collect data from the patient to be used as input to the machine learning system. By way of example and not limitation, sensors are discussed below along with examples of how the data from them may be processed. These examples are meant to illustrate the types of analyses that may be applied but does not cover the full range of analyses the system can include.
[0088] Image analysis (from video): Video analysis of the patient may include analysis of the patient's face and facial movements, mouth specific movements, arm movements, full body movement, gait analysis, finger tapping. The video camera will be positioned in a manner to completely capture the relevant content (e.g., if the focus is just the face, the camera will be close to the face but will not cut off any part of the face/head, or if the focus is the hand for finger tapping, just the patient's hand will be in frame). The system may aid the user in collecting the appropriate images by providing an on-screen prompt, such as a frame on the video display of the device. Given a video sequence of the specific body location being observed, initial processing may be done to accurately localize the body part and its sub components (e.g., the face and parts of the face such as eye and mouth locations). The localization may be used to constrain the region over which further processing and feature extraction is performed.
[0089] Audio analysis (from video or microphone): Throughout the course of video recording, the audio signal may also be recorded. Alternately, a microphone may be used to acquire audio data independently of a video. In some cases, when the focus is purely on movement, the audio data will not be used. However, in other aspects of the test, the audio signal may include speech from the patient or other sounds that are relevant to the task being performed and may provide diagnostic information (e.g., Zhang, Y. (2017). Can a Smartphone Diagnose Parkinson Disease? A Deep Neural Network Method and Telediagnosis System Implementation. Parkinson's Disease, vol. 2017.). Furthermore, the patient may be prompted to read a specific statement aloud to provide a standardized audio sample across all patients, or make repetitive plosive sounds ("PA," "KA," and "TA") for a specific duration. In the case that the audio is being used, the processing may involve detection of speech and other sounds, statistical analysis of the audio data, and filtering of the signal for feature extraction. The raw audio data and/or any derived features could then be provided as input to a recurrent neural network to perform further feature extraction. Finally, the intermediate representation might be passed to another neural network to generate the desired output or could be combined with features from other modalities before being passed to the final decision making component.
[0090] Range imaging system (e.g., Infrared Time-of-flight, LiDAR, etc.): Range imaging systems record information about the structure of objects in view. Typically they record a depth value for every pixel in the image (though in the case of LiDAR, they may produce a full 3D point cloud for the visible scene). 2D depth data or 3D point cloud data can be integrated into the machine learning system to assist in object localization, keypoint detection, motion feature extraction, and classification/regression decisions. In many instances, this data is processed in a similar manner to image and audio data in that it often requires preprocessing, normalization, and feature extraction.
[0091] Gyroscope and accelerometer: Most hand held devices (e.g., smartphones and tablets) include sensors that measure orientation and movement of the device. These sensors may be used by the machine learning system to provide supplemental diagnostic information. In particular, the sensors can be used to record movement information about the patient while he or she is performing a particular task. The movement data can be the primary source data for the task or can be combined with video data recorded at the same time. The temporal movement data can be processed in a similar way to the video data using preprocessing stages to prepare the data and feature extraction to obtain a discriminative representation that can be passed to the machine learning algorithm.
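As one illustration of how such temporal movement data might be processed, the sketch below estimates a dominant tremor frequency from a 1-D accelerometer trace; the sampling rate and frequency band limits are illustrative assumptions, not diagnostic criteria.

import numpy as np

def dominant_tremor_frequency(accel, sample_rate_hz=100.0, band=(3.0, 12.0)):
    """Estimate the dominant oscillation frequency (Hz) from an accelerometer
    trace, restricted to a band where pathological tremor is commonly reported.
    The band limits and sampling rate are placeholders for illustration."""
    accel = np.asarray(accel, dtype=float)
    accel = accel - accel.mean()                       # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / sample_rate_hz)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[in_band][np.argmax(spectrum[in_band])]

The resulting frequency (or the full band-limited spectrum) could be used directly as a measurement or passed as a supplemental feature alongside video-derived features.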
[0092] Touch screen / pressure sensors: Many devices have an onboard touch screen that captures physical interactions with the device. In some cases, the device also has finer-resolution pressure sensors that can differentiate between different types of tactile interactions. These sensors can be integrated into the machine learning system as an additional source of diagnostic information. For example, the patient may be directed to perform a sequence of tasks that involve interacting with the touch screen. The timing, location, and pressure of the patient's responses can be integrated as supplemental features in the machine learning system.
[0093] The machine learning system may be trained to produce the expected output for a given input set. In certain embodiments, expert neurologists who have viewed and annotated the raw input data will define the data outputs used in training the machine learning system. Alternately (or in addition), the outputs for some tests may be defined by information known about the patient. For example, if a patient is known to have a particular movement disorder, that information may be associated with the input of a particular test even if the expert neurologist cannot diagnose the movement disorder from that particular test alone. An annotated dataset covering a range of healthy and diseased patients will be assembled and used to train and validate the machine learning system. The artificial intelligence system may integrate additional expert knowledge that is not learned from the data but is deemed important for the diagnosis (for example, a supplemental decision tree (Quinlan, J. (1986). Induction of Decision Trees. Machine Learning 1 (1): 81-106.) defined by an expert neurologist).
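As a minimal illustration of such supplemental expert knowledge, i.e., a rule defined by a clinician rather than learned from data, a hand-coded decision tree could be represented as a simple function; the questions and thresholds below are hypothetical placeholders.

def expert_supplemental_rule(blink_rate_per_min, rest_tremor_present, age_years):
    """A hand-coded decision tree supplied by an expert rather than learned from
    data. Returns a coarse flag that can be combined with the learned models.
    All thresholds are hypothetical and shown only to illustrate the idea."""
    if rest_tremor_present:
        return "elevated_concern"
    if blink_rate_per_min < 10 and age_years > 50:
        return "elevated_concern"
    return "no_supplemental_flag"

The flag produced by such a rule could be provided to the aggregation stage alongside the outputs of the trained diagnostic models.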
[0094] The dataset will be generated in part from recordings performed on devices similar to those that will be used when the system is deployed. However, training may also rely on data generated from other sources (e.g., existing video recordings of patients with and without movement disorders).
[0095] Preferably, once the system is in operation, additional data may be collected (with the patient's permission) and used to train and improve future versions of the machine learning system. This data may be recorded on the device and transferred to permanent computer storage at a later time, or may be transmitted to an off-device storage system in real or near-real time. The means of transfer may include any commonly available wired or wireless technology.
[0096] In certain embodiments, a deep learning approach may be used to perform the desired classification/regression task. In this case, the deep learning system will internally generate an abstract feature representation relevant to the problem. In particular, the temporal data may be processed using a recurrent neural network, such as a long short-term memory (LSTM) network, to obtain a deep, abstract feature representation. This feature representation may then be provided to a standard deep neural network architecture to obtain the final classification or regression outputs.
[0097] Turning now to the figures, a block diagram of one embodiment of the present invention is described. Figure 1 illustrates one example of how the Artificial Intelligence system of the present invention may be trained. First, the raw data (101) is acquired from a number of healthy individuals, as well as from individuals who have been diagnosed with the disease (or diseases) of interest. Such data may be collected from a number of different sensor types, including video, audio, or touch based sensors. Preferably, multiple different types of data will be collected from each sensor as described above. During the training process, the data will then be classified by experts trained in diagnosing the relevant disease (102). This classification may be specific to the test performed (such as using the UPDRS scale for a specific task related to Parkinson's Disease), or it may be a simple binary designation relating to the patient's overall diagnosis, regardless of whether the specific test at issue is indicative of the disease.
[0098] This raw data will then undergo data processing (103). It will be apparent to those having skill in the art that the data processing may take place on the device used to collect the data, or the raw data may be transmitted to a remote server using any wired or wireless technology to be processed there. Also, it will be apparent that feature extraction may be performed as part of the data processing stage of the system, or may be performed by the machine learning system during the training and model generation stage, depending on the specific machine learning system used. Furthermore, it is possible that the classification step described in (102) above may be performed after the data is processed, rather than before.
[0099] Preferably, the system of the present invention will compare the subjects classified as having a particular neurological disorder to the subjects classified as "healthy" to facilitate training of the diagnostic models.
[0100] In certain embodiments, the sensor data may be processed using image processing, signal processing, or machine learning to extract measurements associated with some action (e.g., jaw displacement in tremor, finger tapping rate, repetitive speech rate, facial expression, etc.). These measurements can then be compared to normative values for healthy and diseased patients collected via the system or referenced in the literature for various disorders. As an example, a common speech test for Parkinson's Disease is to repeatedly say a syllable (e.g., "PA") as many times as possible in 5 seconds. The system would record audio of a person completing this task and would use signal processing or machine learning methods to count the total number of utterances within the 5 second window (see the sketch following the next paragraph). A diagnosis could be obtained by comparing the total utterance count to the distribution of counts observed across a population of healthy people. Additionally, the measurement could serve as a feature for a downstream machine learning system that learns to make a diagnosis from a collection of varying measurements, perhaps combined with other features extracted from additional sensor data.

[0101] Once the data has been prepared, it is used to train a plurality of machine learning systems to generate a number of classification models (104) that, when combined, are used to produce a predictive diagnostic model. Preferably, each of the trained diagnostic models will focus on a single aspect (or subset of aspects) of the collected patient data. For example, diagnostic model 1 may focus exclusively on the blink rate of a video of the patient's face, while diagnostic model 2 may focus on the frequency of a repetitive finger tapping test. Preferably such diagnostic models will be trained by comparing the data from subjects which have been classified as possessing a certain neurological disorder to the data from subjects which have been classified as "healthy." Preferably, a large number of such trained diagnostic models will be generated for each possible disease. Doing so will enable the overall system to accommodate instances where an individual test is inconclusive or missing. The classifications produced by these trained diagnostic models will then be aggregated (105) by an additional Artificial Intelligence (AI) system to produce a final predictive diagnostic model (106).
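As an illustration of the utterance-counting measurement described in paragraph [0100], the following sketch counts energy bursts in a short recording; the window sizes and peak-detection parameters are illustrative assumptions, not validated settings.

import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def count_utterances(wav_path, window_ms=25, hop_ms=10):
    """Estimate the number of plosive utterances (e.g., "PA") in a short clip by
    counting peaks in the short-time energy envelope. Parameters are illustrative."""
    sr, audio = wavfile.read(wav_path)
    audio = audio.astype(np.float32)
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    audio /= (np.abs(audio).max() + 1e-8)   # normalize amplitude

    win = int(sr * window_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    energy = np.array([np.sqrt(np.mean(audio[i:i + win] ** 2))
                       for i in range(0, len(audio) - win, hop)])

    # Each syllable should appear as a distinct energy burst; the prominence and
    # minimum spacing below are placeholders, not tuned clinical values.
    peaks, _ = find_peaks(energy, prominence=0.05, distance=int(100 / hop_ms))
    return len(peaks)

The returned count could be compared against a normative distribution, or supplied as one feature among many to a downstream diagnostic model.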
[0102] Upon deployment, the trained system may be used to produce a predictive diagnosis for a patient (Figure 2). Preferably, the data acquisition (201) and processing (202) steps will be similar or identical to the methods used during the training of the diagnostic system. Once processed, the system will pass the data to the relevant trained diagnostic models, whereby each model will assign a classification to the data based on the results of the training described above (203). The outputs of each diagnostic model will then be aggregated (204), and the system will thereby produce a predictive diagnostic output (205).
[0103] It will be apparent to those having skill in the art that, when deployed, the data acquisition, processing, training, and diagnosis steps can be performed on the device used to collect the data, or can be performed on different devices by transmitting the data from one device to another using any known wired or wireless technology.
[0104] Figure 3 illustrates one possible implementation of the system of the present invention to diagnose a patient who may potentially have a neurological disorder. First, the user instructs a mobile device, such as a cell phone or tablet computer, to run an application that can execute the program of the present invention (301). The user is then prompted to perform a series of tests on the subject to be diagnosed (302). It will be apparent that the user and the subject can be the same person, or different people. In this example, the application has prompted the user to perform three tests, one focusing on recording various facial expressions using the device's built-in camera, one focusing on fine motor control using an accelerometer equipped within the device, and one focusing on speech patterns by having the user read a sentence displayed on the screen and recording the speech using the device's microphone. As the user performs the prompted tests, the relevant data is collected (303). In this example, the data is then transmitted to a remote cloud server, where a trained AI program of the present invention processes and analyzes the data (304) to produce a clinical result based on the particular test (305). The individual clinical results are then aggregated by a trained AI program (306) to produce a final clinical result (307) which is output to the user. It will be apparent to those having skill in the art that additional sensor inputs could also be used, and that any individual AI program could incorporate data from one or more sensors to produce an individual clinical result. It will further be apparent that the trained AI program could be housed on the device used to collect the data, provided the device has sufficient computing power and storage to run the full application.
Working Example:
[0105] The following Working Example provides one exemplary embodiment of the present invention, and is not intended to limit the scope of the invention in any way. This is one specific embodiment of a general system that diagnoses movement disorders. Such disorders include, but are not limited to, the following: Parkinson's Disease (PD), Vascular PD, drug induced PD, Multisystem atrophy, Progressive Supranuclear Palsy, Corticobasal Syndrome, Fronto-temporal dementia, Psychogenic tremor, Psychogenic movement disorder, and Normal Pressure hydrocephalus; Ataxia, including Friedreich's Ataxia, spinocerebellar ataxias 1-14, X-linked congenital ataxia, Adult onset ataxia with tocopherol deficiency, Ataxia-telangiectasia, and Canavan Disease; Huntington's disease, Neuroacanthocytosis, benign hereditary chorea, and Lesch-Nyhan syndrome; Dystonia, including Oppenheim's torsion dystonia, X-linked dystonia-Parkinsonism, Dopa-responsive dystonia, Cranio-cervical dystonia, Rapid onset dystonia parkinsonism, Niemann-Pick Type C, Neurodegeneration with iron deposition, spasmodic dysphonia, and spasmodic torticollis; Hereditary hyperekplexia, Unverricht-Lundborg disease, Lafora body disease, myoclonic epilepsies, Creutzfeldt-Jakob Disease (familial and sporadic), and Dentatorubral-pallidoluysian atrophy (DRPLA); Episodic Ataxias 1 and 2, Paroxysmal dyskinesias, including kinesigenic, non-kinesigenic, and exertional; Tourette's syndrome and Rett Syndrome; Essential tremor, primary head tremor, and primary voice tremor.
[0106] The training process involves six primary stages: 1) data acquisition, 2) data annotation, 3) data preparation, 4) training diagnostic models, 5) training model aggregation, and 6) model deployment. Generally, multiple tests are used for diagnosing Parkinson's disease and, as such, the details of these six stages may vary somewhat from one test to another. The methods below utilize only data that can be collected via a standard video camera (e.g., on a smart phone or computer). However, data from other sensors could be added as extra input.
1. Data Acquisition
[0107] A range of tests may be recorded using a video camera with a functional microphone. The procedure for recording these data should be consistent from one patient to the next. These video recordings will be used for training models to diagnose PD and will serve as the input for the deployed system when making a diagnosis for a new patient. The preferred protocol can be broken down into the following tests (some of which may require multiple recordings), although it will be apparent to those having skill in the art that fewer or alternate tests may also be performed while maintaining diagnostic accuracy:
[0108] Record close-up video of the patient's face while prompting a sequence of actions. The goal of this test is to collect video that contains the face at rest, the face performing simple expressions, blink rate information, and gaze variations (side-to-side, up-down, convergence).
[0109] Record video of the patient's whole body while the patient is seated. The goal of this test is to capture video that contains the patient's hands and feet in a rested position. The data will also contain video of the patient raising their arms and holding them straight in front of themselves.
[0110] Record close-up video (with audio) of the patient's face while they say a prompted sentence or perform an alternative method of speech analysis. The speech analysis may ask the patient to say repetitive plosive sounds ("PA", "TA", "KA", and "PA-TA-KA") for a specified duration, or to read aloud a paragraph.
[0111] Record multiple clips of the patient performing repetitive movements. These movements include finger tapping, opening and closing the hand repetitively, hand rotations (pronation/supination), and heel tapping. In each case, the video will be zoomed in on the body part performing the action (i.e., for finger/hand movements, the hand should nearly fill the video frame and for foot movements, the foot should nearly fill the video frame).
[0112] Record the patient getting up from his or her chair, walking 10-15 steps, turning 180 degrees and walking back. This should be recorded in a way that captures a frontal view of the patient getting out of the chair. Additionally, the recording should include a frontal view of the patient at some point during the walking.
[0113] For the purpose of training diagnostic models, the above data will be recorded for a population of diseased and healthy individuals. Ultimately, recordings for a large population of individuals are desired. However, the dataset may grow iteratively with intermediate models being trained on available data. For example, the system could be deployed in a smart phone app that directs a patient to perform the above tests. The app could use existing trained models to offer a diagnosis for the patient and the data from that patient could then be added to the set of available training data for future models.
2. Data Annotation
[0114] Following data acquisition, a data annotation phase will be required for labeling properties of the video recordings. A trained expert will review each video recording and provide a collection of relevant assessments. When appropriate, the expert will assign a Unified Parkinson's Disease Rating Scale (UPDRS) rating for various observable properties of the patient. For example, for the face recording in Test 1, a UPDRS score will be assigned for facial expression and face/jaw tremor. For situations where the UPDRS is not applicable, the expert may assign an alternative label to the video recording. For example, for the face recording in Test 1, the expert may classify the patient's blink rate into 5 categories ranging from normal to severely reduced. For Test 2, the expert will assign a UPDRS score for the amount of tremor in each extremity. For Test 3, the expert will assign a UPDRS score for the patient's speech based on the number of plosive sounds within a specific duration, or on the resonance, articulation, prosody, volume, voice quality, and articulatory precision of the prompted paragraph. For Test 4, the expert will assign a UPDRS score for each repetitive movement task performed. For Test 5, the expert will assign a UPDRS score for arising from the chair, posture, gait, and body bradykinesia/hypokinesia. The expert may identify and label any other discriminative properties of the video recordings that could assist in a diagnosis, such as muscle tone (rigidity, spasticity, hypotonia, hypertonia, dystonia and flaccidity) through video analysis of specific tasks, including alternating motion rates (AMRs) and gait analysis.
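For concreteness, the annotations described above might be stored as simple structured records; the following is a minimal sketch, and all field names and label values are hypothetical rather than a required format.

# Hypothetical annotation record for one recording of Test 1 (face at rest).
annotation = {
    "patient_id": "P0001",                    # anonymized identifier
    "test": "face_rest",                      # which recording this record covers
    "updrs": {
        "facial_expression": 2,               # expert-assigned UPDRS item score
        "face_jaw_tremor": 1,
    },
    "blink_rate_category": "mildly_reduced",  # non-UPDRS label on a 5-point scale
    "non_diagnostic": {
        "trim_start_s": 1.2,                  # trimming irrelevant data at the ends
        "trim_end_s": 31.7,
        "blink_times_s": [3.4, 7.9, 15.2, 26.8],
    },
}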
[0115] In addition to the expert annotations described above, the data may require other forms of non-expert annotation. Generally, these annotations are not concerned with diagnosing PD and are instead focused on labeling relevant properties of the video.
Examples of this include: trimming the ends of a video recording to remove irrelevant data, marking the beginning and end of speech, identifying and labeling each blink in a video sequence, labeling the location of a hand or foot throughout a video sequence, marking the taps in a video of finger tapping, segmenting actions in the video from Test 5 (e.g., arising from chair, walking, turning), etc.
[0116] Consistent annotations should be provided for all of the data available for training models. For the diagnostic annotations (UPDRS or other classification), all training examples must be labeled. Non-diagnostic annotations may not be required for every training example as they will generally be used for training data preparation stages rather than for training the final diagnostic models.
3. Data Preparation

[0117] The raw video and audio data usually needs to go through several stages of preparation before it can be used to train models. These stages include data preprocessing (e.g., trimming video/audio, cropping video, adjusting audio gain, subsampling or supersampling time series, temporal smoothing, etc.), normalization (e.g., aligning audio clips to a standard template, transforming the face image to a canonical view, detecting the object of interest and cropping around it, etc.), and feature extraction (e.g., deriving Mel Frequency Cepstral Coefficients (MFCC) from acoustic data, computing optical flow features for video data, extracting and representing actions such as blinks or finger taps, etc.).
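As one concrete example of the feature extraction step, MFCC features can be derived from an audio clip; the sketch below assumes the librosa library is available, and the sampling rate and coefficient count are illustrative choices.

import numpy as np
import librosa  # assumed available; any MFCC implementation would work similarly

def extract_mfcc_features(audio_path, sr=16000, n_mfcc=13):
    """Load an audio clip and derive MFCC features as one possible acoustic
    representation for downstream models. Parameter choices are illustrative."""
    y, sr = librosa.load(audio_path, sr=sr)                    # resample to a common rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)     # shape: (n_mfcc, frames)
    # Summary statistics give a fixed-length vector; a recurrent model could
    # instead consume the full frame-by-frame sequence.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])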
[0118] Given the data collected from the tests above, there are many different analyses that can be applied to obtain a final diagnosis. In what follows, examples of several such analyses are provided to illustrate the methods required to achieve a diagnosis in each case. In a final system, many diagnostic models (including those not described herein) would be trained and combined to achieve the overall diagnosis. The following examples were chosen to roughly cover methods appropriate for the first test described above. The various analyses within each of the 5 tests will generally exhibit more similarity. These same examples will be used in the subsequent section where the model training is described.
[0119] Face/Jaw Tremor Assessment (Data Preparation)
[0120] The data from Test 1 includes a close-up view of the patient's face at rest and performing some actions. This data could be used to identify and measure tremors in the jaw and other regions of the face. For simplicity here, we will assume that Test 1 was divided into sub collections and that the data available for this task contains a recording of only the face at rest.
[0121] In certain embodiments, the facial expression test asks the patient to observe a combination of video and audio that will likely elicit changes in facial expression. This may include (but is not limited to) humorous, disgusting or startling videos, photographs with similar characteristics, or startling audio clips. While the patient is observing these stimuli, the camera (in "selfie mode," or otherwise directed at the subject's face) is focused on the patient's face to analyze changes in facial expression and the presence or absence of jaw tremor.

[0122] The first stage in processing the raw video data is to find a continuous region(s) within the video where the face is present, unobstructed, and at rest. For this task, off-the-shelf face detection algorithms (e.g., Viola-Jones or more advanced convolutional neural networks) or those available via an online API such as Amazon Rekognition™ can be used to identify video frames where the face is present. Regions of the video where a face is not present will be discarded. If there are not enough continuous sections with the face present, the video will need to be re-recorded or the data will be discarded from the training set. The face detection algorithms run during this stage will also be used to crop the video to a region that only contains the face (with the face roughly centered). This process helps control for varying sizes of the face across different recordings.
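A minimal sketch of this detect-and-crop stage using OpenCV's bundled Haar cascade (one off-the-shelf detector option) is shown below; the output size and detector parameters are illustrative assumptions.

import cv2  # OpenCV; the Haar cascade is one example of an off-the-shelf detector

def crop_face_frames(video_path, output_size=(256, 256)):
    """Detect the face in each frame and return centered face crops.
    Frames without a detected face are skipped, mirroring the discard step above."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    crops = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue                       # no face detected: frame is discarded
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep largest detection
        crops.append(cv2.resize(frame[y:y + h, x:x + w], output_size))
    cap.release()
    return crops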
[0123] The next step in face processing is to identify the locations of standard facial landmarks (e.g., eye corners, mouth, nose, jaw line, etc.). This can be done using freely licensed software or via online APIs. Alternatively, a custom solution for this problem can be trained using data from freely available facial landmark datasets.
[0124] Once the locations of key facial features are known, the algorithm extracts regions of interest from the video by cropping a rectangular region around a portion of the face. One such region includes the jaw area and extends roughly from slightly below the chin to the middle of the nose in the vertical direction and to the sides of the face in the horizontal direction. Other regions of the face where tremors occur may also be extracted at this point. Additionally, a crop of the whole face may be retained.
[0125] During the extraction of the regions of interest, image stabilization techniques are used to assure a smooth view of the object of interest within the cropped video sequence. These techniques may rely on the change in the detected face box region from one frame to the next, or similarly the change in the location of specific facial landmarks. The goal of this normalization is to obtain a clear, steady view of the regions of interest. For example, the view of the jaw region should be smooth and consistent such that a tremor in the jaw would be visible as up and down movement within the region of interest and would not result in jitter in the overall view of the jaw region.

[0126] At the end of this stage, the prepared data consists of a collection of videos that are zoomed in on specific views of the face. As a final processing step, the duration of these clips may be modified to achieve a standard duration across patient recordings.
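One simple way to realize this stabilization is to smooth the detected face box trajectory before cropping; the sketch below uses a moving average over box centers, and the window size and crop dimensions are assumptions for illustration.

import numpy as np

def stabilize_crops(frames, boxes, crop_h=128, crop_w=256, window=5):
    """Smooth per-frame face box centers with a moving average, then cut a
    fixed-size region around the smoothed center so that jaw motion, rather than
    camera or detector jitter, dominates the cropped view."""
    centers = np.array([[x + w / 2.0, y + h / 2.0] for (x, y, w, h) in boxes])
    kernel = np.ones(window) / window
    cx = np.convolve(centers[:, 0], kernel, mode="same")
    cy = np.convolve(centers[:, 1], kernel, mode="same")

    stabilized = []
    for frame, x, y in zip(frames, cx, cy):
        top = int(max(y - crop_h / 2, 0))
        left = int(max(x - crop_w / 2, 0))
        stabilized.append(frame[top:top + crop_h, left:left + crop_w])
    return stabilized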
4. Training Diagnostic Models
[0127] Once the raw video and audio data has been prepared using the techniques described above, models are trained to make accurate diagnostic decisions. Many different models would be trained to diagnose different aspects of the patient's movements. As in the previous section, several specific examples are described in detail here. However, those not described here would be similar in nature.
[0128] Furthermore, additional medical information not derived from the tests above could be used as a training input for the models. For example, relevant information such as the age, weight, medical history, or family history of the patient could be provided directly to the system of the present invention. Such information could be automatically extracted from the patient's Electronic Health Records, or entered manually by the patient or physician in response to a questionnaire presented by the system.
[0129] 4.1. Face/Jaw Tremor Assessment (Model Training)
[0130] The dataset prepared according to the description above contains one or more video sequences of face regions of interest. These sequences have been standardized to include a fixed number of frames. Additionally, for each sequence, we have an expert annotation for the UPDRS score associated with the face/jaw tremor observed. For the sake of simplicity, we will describe a model for a single region of interest and then briefly discuss how this framework could be extended to multiple regions of interest.
[0131] Consider a video sequence of a jaw recorded at 30 frames per second for 10 seconds. Assume that the cropped region around the jaw has a dimension of 128 x 256 pixels (rows x columns). The data would then be a sequence of 300 sample images, each of size 128 x 256 (these numbers are merely for illustration purposes and do not reflect the exact dimensions used in the model). For each patient, we have such a sequence and an associated UPDRS score for that patient. The goal of training a model is to learn to predict the UPDRS score from the input sequence derived from the data.

[0132] To learn this mapping, we use a combination of convolutional neural networks and recurrent neural networks (in particular, long short-term memory (LSTM) networks). We define a standard collection of convolutional blocks that operate on the independent image frames. Each block includes a combination of convolutional operators and optional pooling and normalization layers. The blocks may also include skip connections that feed the input data, or a modified version of it, forward in the network. At the end of the convolutional blocks, the features are flattened into a single feature vector. The model learns the weights of the convolutional blocks so as to generate a single feature vector for each image that is useful for the discriminative task at hand. At this point in the network processing pipeline, there is a feature vector for each image frame in the video sequence. This sequence of features is passed to an LSTM network that learns to integrate across the temporal dimension in the data. The LSTM network in turn generates a feature vector for the whole sequence that can be used for generating a final real-valued prediction for the UPDRS score. Learning in the network is performed by back-propagating the loss associated with the predicted UPDRS score up through the LSTM layer and then through the convolutional blocks using standard optimization methods such as stochastic gradient descent. It should be noted that the above description is just a sketch of one such model that could be applied to this problem, and there are many reasonable variants that could be equally effective. Implementation, training and deployment of such a network can be achieved using standard neural network libraries such as TensorFlow, Caffe, etc.
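A minimal TensorFlow/Keras sketch of such a network follows; the layer sizes follow the illustrative frame count and dimensions above and are not the exact architecture described in this example.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_jaw_tremor_model(frames=300, height=128, width=256, channels=1):
    """Convolutional blocks applied per frame, followed by an LSTM over time and
    a single real-valued output for the predicted UPDRS score."""
    conv = models.Sequential([
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),   # per-frame feature vector
    ])

    inputs = layers.Input(shape=(frames, height, width, channels))
    per_frame = layers.TimeDistributed(conv)(inputs)   # one feature vector per frame
    sequence = layers.LSTM(64)(per_frame)              # integrate across time
    score = layers.Dense(1)(sequence)                  # real-valued UPDRS prediction
    model = models.Model(inputs, score)
    model.compile(optimizer="sgd", loss="mse")         # stochastic gradient descent
    return model

Training then amounts to calling model.fit on an array of shape (patients, 300, 128, 256, 1) paired with the expert-assigned UPDRS scores.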
[0133] The description above is of a model that operates over a single region of interest. However, the technique generalizes to multiple regions of interest and a whole model operating on all regions can be trained in one pass. The general approach is to run several of these models concurrently to generate a prediction or feature representation for each of the regions of interest. These predictions or features can then be combined in the network architecture and used via a final fully connected network to make an overall UPDRS score prediction. The learning error can propagate from this final end prediction up through all of the branches of the model associated with specific regions of interest.
5. Training Model Aggregation

[0134] The goal of a general system for diagnosing PD is to produce a final diagnosis for a patient or to provide an overall UPDRS score for the patient. In order to do this, a final model must be trained to learn how to aggregate the predictions from the set of models that are trained to identify particular movement abnormalities.
[0135] As input for the final model, we have the predictions from each intermediate model, which may be real-valued scores, ordinal classifications or general classifications. In addition to these predictions, we may have confidence values for the predictions and other relevant outputs from the intermediate models. For each patient, we assume that we have an expert annotation for the overall UPDRS score for that patient.
[0136] A standard random forest regression model is trained to predict the overall UPDRS score from the input data. Such a model can be trained and deployed using standard machine learning libraries such as scikit-learn. Many different models could be used to learn to make the overall diagnosis and random forest regression is suggested as just one example.
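A minimal sketch of this aggregation step using scikit-learn is shown below; the feature layout (one column per intermediate model output, with confidences interleaved) and the numeric values are assumptions for illustration only.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X_train: one row per patient; columns hold intermediate model outputs (e.g.,
# face/jaw tremor score, its confidence, finger-tapping score, its confidence).
# y_train: the expert-annotated overall UPDRS score for each patient.
X_train = np.array([[1.8, 0.9, 2.1, 0.7],
                    [0.2, 0.1, 0.4, 0.9],
                    [3.1, 0.8, 2.7, 0.6]])     # illustrative values only
y_train = np.array([24.0, 3.0, 38.0])

aggregator = RandomForestRegressor(n_estimators=200, random_state=0)
aggregator.fit(X_train, y_train)

# At deployment, the same intermediate outputs computed for a new patient yield
# the overall UPDRS prediction.
overall_updrs = aggregator.predict([[1.2, 0.85, 1.6, 0.8]])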
6. Model Deployment
[0137] When deploying this system for diagnosing PD, the same data acquisition process would be applied for a given patient. There would be no annotation of the data as the goal is for the system to perform this. The raw data would be prepared according to the methods in Section 3 above, and would be passed on to the trained models described in Section 4 (though no actual training would be done at this stage). The output of each of the trained diagnostic models would then be passed to the final model to make the overall diagnostic prediction. The predictions from the intermediate models may also be made available in the final diagnosis.
[0138] As an example, such a system could be implemented in a smart phone app. Data for the patient would be collected by following a process within the app that records video and prompts for the appropriate patient actions. The app would cycle through a series of discrete tests that correspond roughly to the tests above (though some of the above tests would be divided into multiple subtests). Data from each test would be saved on the device or uploaded to the cloud. Additionally, the data would be passed to the appropriate data preparation methods that in turn would pass the prepared data to the appropriate diagnostic model. The data from a single test might be passed to multiple different diagnostic pipelines (consisting of data preparation and model evaluation). The diagnostic pipelines may be implemented on device, on a remote computer, or some combination of both. Once all of the diagnostic models have been run, their output would be passed to the final model to obtain the overall diagnostic prediction. Again, this processing could be done on device, in the cloud, or some combination of both. The system would output the final diagnostic prediction to the patient along with intermediate model predictions. The system may display such an output on the screen of the device used to collect the initial sensor data, or may transmit it to the relevant parties via other means, such as SMS messaging to a mobile device or sending an email to a designated party. The system might present additional information relevant to the diagnostic prediction (e.g., confidence scores, assessment of recording quality, recommendations for follow up tests, etc.). The app may also log relevant information and data from the tests and could pass along information regarding the diagnosis to a selected medical professional.
[0139] In addition to the working example relating to movement disorders presented above, the system of the present invention would also be applicable to diagnosing the following diseases, as well as many others.
[0140] Stroke:
[0141] In one embodiment, the artificial intelligence system will autonomously decide whether tissue plasminogen activator (tPA, or "clot buster"), or other treatment such as endovascular treatment or use of an antithrombotic treatment, is appropriate to deliver to patients presenting with a stroke emergency. The patient presenting with acute stroke symptoms will be evaluated simultaneously by the emergency physician and the Acute Stroke Artificial Intelligence System (ASAIS). The ASAIS will have at least one of three general types of sensors to assess the patient, including video, audio, and infrared generator/sensor. In addition, there will be 'clinical data' input. The clinical data input can be manually entered by a nurse or medical assistant OR be linked with the facility's electronic health record (EHR) for direct transfer of some of the data. The clinical data includes: biographic data, time of onset of symptoms or last time the patient was seen as 'normal', laboratory data (platelet count, international normalized ratio and prothrombin time), brain imaging data (typically head computed tomogram without contrast) and blood pressure. Lastly, there will be a brief set of 'yes/no' questions that are required and will need to be manually entered. These will include:
1. Any KNOWN internal bleeding? - yes or no
2. Any KNOWN history of recent (within 3 months) intracranial or intraspinal surgery, or serious head trauma? - yes or no
3. Any KNOWN intracranial conditions that may increase the risk of bleeding? - yes or no
4. Any KNOWN bleeding diathesis? - yes or no
5. Any KNOWN arterial puncture at a non-compressible site within the last 7 days? - yes or no
[0142] In certain embodiments, the sensors will determine factors including, but not limited to, detection of patient signs relevant to the assessment of each aspect of the modified National Institutes of Health Stroke Scale (mNIHSS). Such tests include the following:
[0143] Horizontal eye movement, distinguishing among normal movement, partial gaze palsy, and total gaze paresis.
[0144] Visual field assessment, distinguishing among: normal visual field; partial hemianopia or complete quadrantanopia (the patient recognizes no visual stimulus in one specific quadrant); complete hemianopia (the patient recognizes no visual stimulus in one half of the visual field); and total blindness.
[0145] Motor arm assessment for both left and right arms independently, distinguishing among: no arm drift (the arm remains in the initial position for 10 seconds); drift (the arm drifts to an intermediate position prior to the end of the full 10 seconds, but at no point relies on a support); limited effort against gravity (the arm is able to obtain the starting position, but drifts down from the initial position to a physical support prior to the end of the 10 seconds); no effort against gravity (the arm falls immediately after being helped to the initial position, however the patient is able to move the arm in some form, e.g. a shoulder shrug); and no movement (the patient has no ability to enact voluntary movement in this arm).
[0146] Motor leg assessment for both left and right legs independently, distinguishing among: no leg drift (the leg remains in the initial position for 5 seconds); drift (the leg drifts to an intermediate position prior to the end of the full 5 seconds, but at no point touches the bed for support); limited effort against gravity (the leg is able to obtain the starting position, but drifts down from the initial position to a physical support prior to the end of the 5 seconds); no effort against gravity (the leg falls immediately after being helped to the initial position, however the patient is able to move the leg in some form, e.g. hip flexion); and no movement (the patient has no ability to enact voluntary movement in this leg).
[0147] Language assessment, distinguishing among: normal speech; mild-to-moderate aphasia (detectable loss in fluency, but with some information content); severe aphasia (all speech is fragmented, and the patient's speech has no discernible information content); and the patient being unable to speak.
[0148] Dysarthria assessment, having the patient read from the list of words provided with the stroke scale and distinguishing among: normal (clear and smooth speech); mild-to-moderate dysarthria (some slurring of speech, however the patient can be understood); and severe dysarthria (speech is so slurred that he or she cannot be understood, or the patient cannot produce any speech).
[0149] Assessment of extinction and inattention, distinguishing among: normal; inattention on one side in one modality (visual, tactile, auditory, or spatial); and hemi-inattention (does not recognize stimuli in more than one modality on the same side).
[0150] This aggregate data will then be analyzed by the ASAIS. The collection component of ASAIS may be locally housed in a laptop, with the software being stored/operated via cloud technology. In one embodiment, the ASAIS decision making algorithms will generate one of three ultimate outputs: YES, NO or MAYBE to administering tPA to the patient. The emergency physician can use his or her own judgement along with the output of the ASAIS to make a final decision as to whether to give tPA or not. Flow chart 1 shows this basic process.
[0151] It is important to note that, currently, due to significant shortages of neurologists, there is pervasive use of telemedicine in many emergency departments across the US. Therefore, the ASAIS could be embedded within an existing teleneurology service to further scale up the volume of hospitals each neurologist can cover (within limits) and provide a human neurologist 'back-up' for any cases that are deemed uncertain by the emergency physician.
[0152] In the preferred embodiment, there are three possible outputs from the ASAIS: YES, NO and MAYBE. One output is YES to administering tPA to the patient. If the emergency physician agrees with the output, tPA will be administered. If the emergency physician questions or is uncertain of the output, a remote neurologist may use telemedicine technology to be directly involved in the case and give the final
recommendation. The second output is NO to administering tPA. In this case, the neurologist will be directly involved in only those cases in which the emergency physician questions or is uncertain of the output, as outlined above. The third output option is MAYBE to administering tPA. The neurologist will be involved in all of these cases via telemedicine.
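As a purely illustrative sketch of how such a three-way output might be produced, the logic below combines the required contraindication answers with a hypothetical model score; the threshold values and field names are assumptions for illustration, not clinical recommendations.

def asais_output(contraindications, model_score, high=0.85, low=0.35):
    """Combine the required yes/no contraindication answers with a hypothetical
    stroke-model score in [0, 1] to produce YES, NO, or MAYBE for tPA.
    Thresholds are placeholders; any deployed logic would be clinically validated."""
    if any(contraindications.values()):
        return "NO"                    # any known contraindication rules out tPA
    if model_score >= high:
        return "YES"
    if model_score <= low:
        return "NO"
    return "MAYBE"                     # uncertain cases go to the teleneurologist

decision = asais_output(
    {"internal_bleeding": False, "recent_surgery_or_trauma": False,
     "intracranial_risk": False, "bleeding_diathesis": False,
     "recent_arterial_puncture": False},
    model_score=0.62)                   # illustrative value, yielding "MAYBE"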
[0153] In addition to the primary ultimate outputs (YES, NO and MAYBE to tPA administration), there may also be a simultaneous modified National Institutes of Health Stroke Scale (mNIHSS) output for physician utilization. The National Institutes of Health Stroke Scale (NIHSS) is a standardized neurologic exam scale used widely to rate the severity of stroke deficits. The range is from 0 (normal) to 42 (most severe stroke). In broad terms, NIHSS scores of 0-5 correlate to small strokes and scores of 20 and above correlate to large strokes. Due to anticipated technical limitations, the NIHSS may be modified.

[0154] In an alternate embodiment, the invention will have a mobile application version for home self-testing use. This application will utilize the video, audio and, if available on the device, infrared time-of-flight sensors.
[0155] Neurostimulation device calibration:
[0156] Neurostimulation devices are medical devices that provide electrical current to specific regions of the brain or other parts of the nervous system for a therapeutic effect. In movement disorders, one variant of such neurostimulation devices is termed a deep brain stimulation (DBS) device, such as those described in U.S. Patent No. 8,024,049. DBS is an FDA approved therapy for Parkinson's Disease, tremor and dystonia. In the future, DBS will likely gain FDA approval for stroke recovery. The first DBS implant for stroke recovery occurred on December 19, 2016 at the Cleveland Clinic (Ohio) using a device produced by Boston Scientific.
[0157] It will be apparent to those having skill in the art that such implanted medical devices require special programming to ensure that the device behaves appropriately and provides the optimal outcome for the patient. As such, each implanted device must be specifically calibrated to the patient to maximize its therapeutic effect. Currently, the best practices for programming a DBS device (both initially and during follow-up visits) involve a significant amount of trial and error, which results in significant uncertainty for the patient, and has the potential to result in sub-optimal outcomes. See Picillo et al. (2016), Programming Deep Brain Stimulation for Parkinson's Disease: The Toronto Western Hospital Algorithms, Brain Stimulation 9(3), 425-437. As such, there is a need for a system that can make accurate programming recommendations for a patient.
[0158] As such, in certain embodiments of the present invention, the system of the present invention may be used to produce specific programming suggestions to optimize the performance of the implanted device in the patient, both to improve therapeutic efficacy, such as, but not limited to, improving rigidity, tremor, akinesia/bradykinesia or induction of dyskinesia, and to reduce unintended side effects of the device such as, but not limited to, dysarthria, tonic contraction, diplopia, mood changes, paresthesia, or visual phenomena.

[0159] Utilizing the sensor and diagnostic system of the present invention, the sensor inputs described in the working example above, preferably including facial expression, motor control, and speech pattern diagnostics, may be used to train a machine learning algorithm to make specific suggestions regarding the various programming variables available on DBS devices. Such suggestions include changes in AMPLITUDE (in volts or mA), PULSE WIDTH (in microseconds (μsec)), RATE (in Hertz), POLARITY (of electrodes), ELECTRODE SELECTION, STIMULATION MODE (unipolar or bipolar), CYCLE (on/off times in seconds or minutes), POWER SOURCE (in amplitude) and calculated CHARGE DENSITY (in μC/cm² per stimulation phase).
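The sketch below illustrates one way such programming variables could be represented and iterated over by an optimization loop; the parameter names, ranges, candidate count, and the scoring function are hypothetical placeholders rather than clinically validated settings or device-specific limits.

import random

# Hypothetical, coarse parameter grid for illustration only; real devices have
# device-specific ranges and safety constraints that must be respected.
PARAMETER_GRID = {
    "amplitude_ma": [1.0, 1.5, 2.0, 2.5],
    "pulse_width_usec": [60, 90, 120],
    "rate_hz": [130, 160, 185],
    "stimulation_mode": ["unipolar", "bipolar"],
}

def suggest_settings(score_fn, n_candidates=200, seed=0):
    """Randomly sample candidate DBS settings and return the one with the best
    predicted symptom score. score_fn is a stand-in for a trained model that maps
    settings (plus current sensor-derived measurements) to a score, lower being better."""
    rng = random.Random(seed)
    best_settings, best_score = None, float("inf")
    for _ in range(n_candidates):
        candidate = {k: rng.choice(v) for k, v in PARAMETER_GRID.items()}
        score = score_fn(candidate)
        if score < best_score:
            best_settings, best_score = candidate, score
    return best_settings, best_score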
[0160] Once trained, the system of the present invention may use similar data collected from individual patients to make specific recommendations for altering the programming variables for each patient's implanted device.
[0161] One key benefit of the system of the present invention is that such programming changes may be made in real time, with the system monitoring the patient both to validate any suggested programming changes and to potentially suggest additional changes that may further improve the function of the medical device for the patient.
[0162] Thus, in certain embodiments the sensor data may be analyzed in real time by machine learning and optimization systems through an iterative process testing a large number (thousands to millions) of possible DBS stimulation patterns via direct communication with the implanted pulse generator (IPG) through standard telemetry, radiofrequency signals, Bluetooth™ or other means of wireless communication between the application and the IPG. The system finds the optimized DBS stimulation pattern and is able to set this stimulation pattern as a baseline. This baseline DBS stimulation pattern can be modified anytime manually by the healthcare provider-programmer or using this application for optimization at a later time. In further embodiments, the system of the present invention may use the same iterative process, described above to optimize stimulation patterns for other neuropsychiatric disorders, including obsessive-compulsive disorder, major depressive disorder, drug-resistant epilepsy, central pain and
cognitive/memory disorders.

[0163] Figure 4 illustrates one possible implementation of the system of the present invention to produce recommendations for programming a DBS device in a patient. First, the user instructs a mobile device, such as a cell phone or tablet computer, to run an application that can execute the program of the present invention (401). The user is then prompted to perform a series of tests on the subject to be diagnosed (402). It will be apparent that the user and the subject can be the same person, or different people. In this example, the application has prompted the user to perform three tests, one focusing on recording various facial expressions using the device's built-in camera, one focusing on fine motor control using an accelerometer equipped within the device, and one focusing on speech patterns by having the user read a sentence displayed on the screen and recording the speech using the device's microphone. As the user performs the prompted tests, the relevant data is collected (403). In this example, the data is then transmitted to a remote cloud server, where a trained AI program of the present invention processes and analyzes the data (404) to produce a DBS result based on the particular test (405). The individual DBS results are then aggregated by a trained AI program (406) to produce a final DBS result (407) which is output to the user, such as suggested programming settings for the variables described above. It will be apparent to those having skill in the art that additional sensor inputs could also be used, and that any individual AI program could incorporate data from one or more sensors to produce an individual clinical result. It will further be apparent that the trained AI program could be housed on the device used to collect the data, provided the device has sufficient computing power and storage to run the full application.

Dizziness:
[0164] The role of this invention is to aid the physician, in any clinical setting, to help diagnose the cause of dizziness. The invention includes an Artificial Intelligence based system that uses video, audio and (if available) infrared time-of-flight INPUTS to analyze the patient's motor activity, movements, gait, eye movements, facial expression and speech. It will also have inputs regarding the temporal profile of the dizziness (acute severe dizziness, recurrent positional dizziness or recurrent attacks of nonpositional dizziness). This data can be entered manually by a medical assistant or via natural language processing by the patient via prompts.

[0165] Seizures:
[0166] The purpose of the invention is to aid in the differentiation of ES and BS using machine learning algorithms primarily analyzing digital video. In other embodiments, additional inputs may also be utilized.
[0167] Preferably, the software can be embedded within the existing infrastructure of EMUs and will have a mobile/tablet version for patient home use. This will help motivate patients to record the events. In addition to having the analysis from the invention, they will be able to share the video with their neurologist for confirmation.
[0168] Methods and components are described herein. However, methods and components similar or equivalent to those described herein can be also used to obtain variations of the present invention. The materials, articles, components, methods, and examples are illustrative only and not intended to be limiting.
[0169] Although only a few embodiments have been disclosed in detail above, other embodiments are possible and the inventors intend these to be encompassed within this specification. The specification describes specific examples to accomplish a more general goal that may be accomplished in another way. This disclosure is intended to be exemplary, and the claims are intended to cover any modification or alternative which might be predictable to a person having ordinary skill in the art.
[0170] Having illustrated and described the principles of the invention in exemplary embodiments, it should be apparent to those skilled in the art that the described examples are illustrative embodiments and can be modified in arrangement and detail without departing from such principles. Techniques from any of the examples can be incorporated into one or more of any of the other examples. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.

Claims

We Claim:
1. A system for diagnosing a neurological disorder in a patient, the system comprising: i. at least one sensor in communication with a processor and a memory; a. wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient;
i. wherein said raw patient data comprises at least one of a video recording and an audio recording;
ii. a data processing module in communication with the processor and the memory;
a. wherein said data processing module converts said raw patient data into processed diagnostic data;
iii. a diagnosis module in communication with the data processing module; a. wherein said diagnosis module comprises a trained diagnostic system; i. wherein said trained diagnostic system comprises a plurality of diagnostic models;
1. wherein each of said plurality of diagnostic models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed diagnostic data; and
ii. wherein said trained diagnostic system integrates said
classifications of said plurality of diagnostic models to output a diagnostic prediction for said patient.
2. The system of claim 1, wherein the program executing said diagnosis module is executed on a device that is remote from the at least one sensor.
3. The system of claim 1, wherein said trained diagnostic system is trained to diagnose a movement disorder.
4. The system of claim 3, wherein said movement disorder is Parkinson's Disease.
5. The system of claim 3, wherein said raw patient data comprises a video recording, wherein said video recording comprises at least one of: a recording of the patient's face while performing simple expressions; a recording of the patient's blink rate; a recording of the patient's gaze variations; a recording of the patient while seated; a recording of the patient's face while reading a prepared statement; a recording of the patient performing repetitive tasks; and a recording of the patient while walking.
6. The system of claim 3, wherein said raw patient data comprises an audio recording, wherein said audio recording comprises at least one of: a recording of the patient repeating a prepared statement; a recording of the patient reading a sentence; and a recording of the patient making plosive sounds.
7. The system of claim 1, wherein said plurality of algorithms are trained using a machine learning system.
8. The system of claim 7, wherein said machine learning system comprises at least one of: a convolutional neural network; a recurrent neural network; a long short-term memory network; support vector machines; and a random forest regression model.
9. A system for calibrating an implanted medical device in a patient, the system comprising: i. at least one sensor in communication with a processor and a memory; a. wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient;
i. wherein said raw patient data comprises at least one of a video recording and an audio recording;
ii. a data processing module in communication with the processor and the memory;
a. wherein said data processing module converts said raw patient data into processed calibration data;
iii. a calibration module in communication with the data processing module; a. wherein said calibration module comprises a trained calibration
system; i. wherein said trained calibration system comprises a plurality of calibration models;
1. wherein each of said plurality of calibration models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed calibration data; and
ii. wherein said trained calibration system integrates said
classifications of said plurality of calibration models to output a calibration recommendation for said implanted medical device of said patient.
10. The system of claim 8, wherein the program executing said calibration module is executed on a device that is remote from the at least one sensor.
11. The system of claim 8, wherein said implanted medical device comprises a deep brain stimulation device (DBS).
12. The system of claim 10, wherein said calibration recommendation comprises a change to the programming settings of said DBS comprising at least one of: amplitude, pulse width, rate, polarity, electrode selection, stimulation mode, cycle, power source, and calculated charge density.
13. The system of claim 8, wherein said raw patient data comprises a video recording, wherein said video recording comprises at least one of: a recording of the patient's face while performing simple expressions; a recording of the patient's blink rate; a recording of the patient's gaze variations; a recording of the patient while seated; a recording of the patient's face while reading a prepared statement; a recording of the patient performing repetitive tasks; and a recording of the patient while walking.
14. The system of claim 8, wherein said raw patient data comprises an audio recording, wherein said audio recording comprises at least one of: a recording of the patient repeating a prepared statement; a recording of the patient reading a sentence; and a recording of the patient making plosive sounds.
15. The system of claim 8, wherein said plurality of algorithms are trained using a machine learning system.
16. The system of claim 15, wherein said machine learning system comprises at least one of: a convolutional neural network; a recurrent neural network; a long short-term memory network; support vector machines; and a random forest regression model.
17. A system for monitoring the progression of a neurological disorder in a patient diagnosed with such a disorder, the system comprising: i. at least one sensor in communication with a processor and a memory; a. wherein said at least one sensor in communication with a processor and a memory acquires raw patient data from said patient;
i. wherein said raw patient data comprises at least one of a video recording and an audio recording;
ii. a data processing module in communication with the processor and the memory;
a. wherein said data processing module converts said raw patient data into processed diagnostic data;
iii. a progression module in communication with the data processing module; a. wherein said progression module comprises a trained diagnostic
system;
i. wherein said trained diagnostic system comprises a plurality of diagnostic models;
1. wherein each of said plurality of diagnostic models comprise a plurality of algorithms trained to assign a classification to at least one aspect of said processed diagnostic data;
ii. wherein said trained diagnostic system integrates said
classifications of said plurality of diagnostic models to generate a current progression score for said patient; and
iii. wherein said progression module compares said current
progression score for said patient to a progression score from said patient generated at an earlier timepoint to create a current disease progression state, and output said disease progression state.
PCT/US2018/056320 2017-10-17 2018-10-17 Machine learning based system for identifying and monitoring neurological disorders WO2019079475A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
KR1020207011443A KR20200074951A (en) 2017-10-17 2018-10-17 Machine learning-based system for identification and monitoring of nervous system disorders
CA3077481A CA3077481A1 (en) 2017-10-17 2018-10-17 Machine learning based system for identifying and monitoring neurological disorders
AU2018350984A AU2018350984A1 (en) 2017-10-17 2018-10-17 Machine learning based system for identifying and monitoring neurological disorders
CN201880068046.3A CN111225612A (en) 2017-10-17 2018-10-17 Neural obstacle identification and monitoring system based on machine learning
EP18868878.2A EP3697302A4 (en) 2017-10-17 2018-10-17 Machine learning based system for identifying and monitoring neurological disorders
JP2020522316A JP2020537579A (en) 2017-10-17 2018-10-17 Machine learning-based system for identifying and monitoring neuropathy
IL273789A IL273789A (en) 2017-10-17 2020-04-02 Machine learning based system for identifying and monitoring neurological disorders

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201762573622P 2017-10-17 2017-10-17
US62/573,622 2017-10-17
US16/162,711 2018-10-17
US16/162,711 US20190110754A1 (en) 2017-10-17 2018-10-17 Machine learning based system for identifying and monitoring neurological disorders

Publications (1)

Publication Number Publication Date
WO2019079475A1 true WO2019079475A1 (en) 2019-04-25

Family

ID=66097206

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/056320 WO2019079475A1 (en) 2017-10-17 2018-10-17 Machine learning based system for identifying and monitoring neurological disorders

Country Status (9)

Country Link
US (1) US20190110754A1 (en)
EP (1) EP3697302A4 (en)
JP (1) JP2020537579A (en)
KR (1) KR20200074951A (en)
CN (1) CN111225612A (en)
AU (1) AU2018350984A1 (en)
CA (1) CA3077481A1 (en)
IL (1) IL273789A (en)
WO (1) WO2019079475A1 (en)

Families Citing this family (88)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10558785B2 (en) 2016-01-27 2020-02-11 International Business Machines Corporation Variable list based caching of patient information for evaluation of patient rules
US10528702B2 (en) 2016-02-02 2020-01-07 International Business Machines Corporation Multi-modal communication with patients based on historical analysis
US11037658B2 (en) 2016-02-17 2021-06-15 International Business Machines Corporation Clinical condition based cohort identification and evaluation
US10937526B2 (en) 2016-02-17 2021-03-02 International Business Machines Corporation Cognitive evaluation of assessment questions and answers to determine patient characteristics
US10565309B2 (en) * 2016-02-17 2020-02-18 International Business Machines Corporation Interpreting the meaning of clinical values in electronic medical records
US10685089B2 (en) 2016-02-17 2020-06-16 International Business Machines Corporation Modifying patient communications based on simulation of vendor communications
US10311388B2 (en) 2016-03-22 2019-06-04 International Business Machines Corporation Optimization of patient care team based on correlation of patient characteristics and care provider characteristics
US10923231B2 (en) 2016-03-23 2021-02-16 International Business Machines Corporation Dynamic selection and sequencing of healthcare assessments for patients
JP6268628B1 (en) * 2017-11-02 2018-01-31 パナソニックIpマネジメント株式会社 Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method and program
US11540749B2 (en) * 2018-01-22 2023-01-03 University Of Virginia Patent Foundation System and method for automated detection of neurological deficits
US20190290128A1 (en) * 2018-03-20 2019-09-26 Aic Innovations Group, Inc. Apparatus and method for user evaluation
EP3787481B1 (en) 2018-05-01 2023-08-23 Neumora Therapeutics, Inc. Machine learning-based diagnostic classifier
WO2019235335A1 (en) * 2018-06-05 2019-12-12 住友化学株式会社 Diagnosis support system, diagnosis support method, and diagnosis support program
WO2020018469A1 (en) * 2018-07-16 2020-01-23 The Board Of Trustees Of The Leland Stanford Junior University System and method for automatic evaluation of gait using single or multi-camera recordings
US10973454B2 (en) * 2018-08-08 2021-04-13 International Business Machines Corporation Methods, systems, and apparatus for identifying and tracking involuntary movement diseases
EP3921850A1 (en) * 2019-02-06 2021-12-15 AIC Innovations Group, Inc. Biomarker identification
US11752349B2 (en) * 2019-03-08 2023-09-12 Battelle Memorial Institute Meeting brain-computer interface user performance expectations using a deep neural network decoding framework
US11915827B2 (en) * 2019-03-14 2024-02-27 Kenneth Neumann Methods and systems for classification to prognostic labels
US11250062B2 (en) * 2019-04-04 2022-02-15 Kpn Innovations Llc Artificial intelligence methods and systems for generation and implementation of alimentary instruction sets
JPWO2020218013A1 (en) * 2019-04-25 2020-10-29
US11392854B2 (en) 2019-04-29 2022-07-19 Kpn Innovations, Llc. Systems and methods for implementing generated alimentary instruction sets based on vibrant constitutional guidance
US11157822B2 (en) 2019-04-29 2021-10-26 Kpn Innovatons Llc Methods and systems for classification using expert data
US11636955B1 (en) * 2019-05-01 2023-04-25 Verily Life Sciences Llc Communications centric management platform
US10593431B1 (en) 2019-06-03 2020-03-17 Kpn Innovations, Llc Methods and systems for causative chaining of prognostic label classifications
US11607167B2 (en) * 2019-06-05 2023-03-21 Tencent America LLC User device based parkinson disease detection
CN110292377B (en) * 2019-06-10 2022-04-01 东南大学 Electroencephalogram signal analysis method based on instantaneous frequency and power spectrum entropy fusion characteristics
GB201909176D0 (en) * 2019-06-26 2019-08-07 Royal College Of Art Wearable device
JP7269122B2 (en) * 2019-07-18 2023-05-08 株式会社日立ハイテク Data analysis device, data analysis method and data analysis program
US20220346699A1 (en) * 2019-09-17 2022-11-03 Hoffmann-La Roche Inc. Improvements in Personalized Healthcare for Patients with Movement Disorders
CN110751032B (en) * 2019-09-20 2022-08-02 华中科技大学 Training method of brain-computer interface model without calibration
CN110674773A (en) * 2019-09-29 2020-01-10 燧人(上海)医疗科技有限公司 Dementia recognition system, device and storage medium
US11495210B2 (en) * 2019-10-18 2022-11-08 Microsoft Technology Licensing, Llc Acoustic based speech analysis using deep learning models
CN110960195B (en) * 2019-12-25 2022-05-31 中国科学院合肥物质科学研究院 Convenient and rapid neural cognitive function assessment method and device
US20210202090A1 (en) * 2019-12-26 2021-07-01 Teladoc Health, Inc. Automated health condition scoring in telehealth encounters
WO2021155136A1 (en) * 2020-01-31 2021-08-05 Olleyes, Inc. A system and method for providing visual tests
US11809149B2 (en) 2020-03-23 2023-11-07 The Boeing Company Automated device tuning
US11896817B2 (en) 2020-03-23 2024-02-13 The Boeing Company Automated deep brain stimulation system tuning
CN111462108B (en) * 2020-04-13 2023-05-02 山西新华防化装备研究院有限公司 Machine learning-based head-face product design ergonomics evaluation operation method
EP3901963B1 (en) * 2020-04-24 2024-03-20 Cognes Medical Solutions AB Method and device for estimating early progression of dementia from human head images
EP4142580A1 (en) * 2020-04-29 2023-03-08 iSchemaView, Inc. Assessment of facial paralysis and gaze deviation
US11276498B2 (en) * 2020-05-21 2022-03-15 Schler Baruch Methods for visual identification of cognitive disorders
US11923091B2 (en) 2020-05-21 2024-03-05 Baruch SCHLER Methods for remote visual identification of congestive heart failures
CN111724899A (en) * 2020-06-28 2020-09-29 湘潭大学 Parkinson audio intelligent detection method and system based on Fbank and MFCC fusion characteristics
CN111990967A (en) * 2020-07-02 2020-11-27 北京理工大学 Gait-based Parkinson disease recognition system
CN112233785B (en) * 2020-07-08 2022-04-22 华南理工大学 Intelligent identification method for Parkinson's disease
TWI823015B (en) * 2020-07-13 2023-11-21 神經元科技股份有限公司 Decision support system and method thereof for neurological disorders
US20220007936A1 (en) * 2020-07-13 2022-01-13 Neurobit Technologies Co., Ltd. Decision support system and method thereof for neurological disorders
CN111870253A (en) * 2020-07-27 2020-11-03 上海大学 Method and system for monitoring condition of tic disorder disease based on vision and voice fusion technology
CN111883251A (en) * 2020-07-28 2020-11-03 平安科技(深圳)有限公司 Medical misdiagnosis detection method and device, electronic equipment and storage medium
US11762466B2 (en) 2020-07-29 2023-09-19 Penumbra, Inc. Tremor detecting and rendering in virtual reality
US11376434B2 (en) 2020-07-31 2022-07-05 Medtronic, Inc. Stimulation induced neural response for detection of lead movement
US11623096B2 (en) 2020-07-31 2023-04-11 Medtronic, Inc. Stimulation induced neural response for parameter selection
CN111899894B (en) * 2020-08-03 2021-06-25 东南大学 System and method for evaluating prognosis drug effect of depression patient
CN112037908A (en) * 2020-08-05 2020-12-04 复旦大学附属眼耳鼻喉科医院 Aural vertigo diagnosis and treatment device and system and big data analysis platform
KR102478613B1 (en) * 2020-08-24 2022-12-16 경희대학교 산학협력단 Evolving symptom-disease prediction system for smart healthcare decision support system
KR20220028967A (en) 2020-08-31 2022-03-08 서울여자대학교 산학협력단 Treatement apparatus and method based on neurofeedback
US20230363679A1 (en) * 2020-09-17 2023-11-16 The Penn State Research Foundation Systems and methods for assisting with stroke and other neurological condition diagnosis using multimodal deep learning
US11004462B1 (en) * 2020-09-22 2021-05-11 Omniscient Neurotechnology Pty Limited Machine learning classifications of aphasia
CN112185558A (en) * 2020-09-22 2021-01-05 珠海中科先进技术研究院有限公司 Mental health and rehabilitation evaluation method, device and medium based on deep learning
CN112401834B (en) * 2020-10-19 2023-04-07 南方科技大学 Movement-obstructing disease diagnosis device
AT524365A1 (en) * 2020-10-20 2022-05-15 Vertify Gmbh Procedure for assigning a vertigo patient to a medical specialty
CN112370659B (en) * 2020-11-10 2023-03-14 四川大学华西医院 Implementation method of head stimulation training device based on machine learning
WO2022118306A1 (en) 2020-12-02 2022-06-09 Shomron Dan Head tumor detection apparatus for detecting head tumor and method therefor
KR102381219B1 (en) * 2020-12-09 2022-04-01 영남대학교 산학협력단 Motor function prediction apparatus and method for determining need of ankle-foot-orthosis of stroke patients
US20220189637A1 (en) * 2020-12-11 2022-06-16 Cerner Innovation, Inc. Automatic early prediction of neurodegenerative diseases
CN112331337B (en) * 2021-01-04 2021-04-16 中国科学院自动化研究所 Automatic depression detection method, device and equipment
CN113440101B (en) * 2021-02-01 2023-06-23 复旦大学附属眼耳鼻喉科医院 Vertigo diagnosis device and system based on ensemble learning
WO2022191332A1 (en) * 2021-03-12 2022-09-15 住友ファーマ株式会社 Prediction of amount of in vivo dopamine etc., and application thereof
CN113012815B (en) * 2021-04-06 2023-09-01 西北工业大学 Multi-mode data-based parkinsonism health risk assessment method
DE102021205548A1 (en) 2021-05-31 2022-12-01 VitaFluence.ai GmbH Software-based, voice-driven, and objective diagnostic tool for use in the diagnosis of a chronic neurological disorder
CN113274023B (en) * 2021-06-30 2021-12-14 中国科学院自动化研究所 Multi-modal mental state assessment method based on multi-angle analysis
US20230047438A1 (en) * 2021-07-29 2023-02-16 Precision Innovative Data Llc Dba Innovative Precision Health (Iph) Method and system for assessing disease progression
AU2022330129A1 (en) * 2021-08-18 2024-01-18 Advanced Neuromodulation Systems, Inc. Systems and methods for providing digital health services
CN113823267B (en) * 2021-08-26 2023-12-29 中南民族大学 Automatic depression recognition method and device based on voice recognition and machine learning
US20230070665A1 (en) * 2021-09-09 2023-03-09 GenoEmote LLC Method and system for validation of disease condition reprogramming based on personality to disease condition mapping
CN117794453A (en) * 2021-09-16 2024-03-29 麦克赛尔株式会社 Measurement processing terminal, method and computer program for performing measurement processing on finger movement
CN113729709B (en) * 2021-09-23 2023-08-11 中科效隆(深圳)科技有限公司 Nerve feedback device, nerve feedback method, and computer-readable storage medium
CN113709073B (en) * 2021-09-30 2024-02-06 陕西长岭电子科技有限责任公司 Demodulation method of quadrature phase shift keying modulation signal
US20230142121A1 (en) * 2021-11-02 2023-05-11 Chemimage Corporation Fusion of sensor data for persistent disease monitoring
WO2023095321A1 (en) * 2021-11-29 2023-06-01 マクセル株式会社 Information processing device, information processing system, and information processing method
CN114171162B (en) * 2021-12-03 2022-10-11 广州穗海新峰医疗设备制造股份有限公司 Mirror neuron rehabilitation training method and system based on big data analysis
WO2023107430A1 (en) * 2021-12-09 2023-06-15 Boston Scientific Neuromodulation Corporation Neurostimulation programming and triage based on freeform text inputs
CN114305398B (en) * 2021-12-15 2023-11-24 上海长征医院 System for be used for detecting spinal cord type cervical spondylosis of object to be tested
WO2023115558A1 (en) * 2021-12-24 2023-06-29 Mindamp Limited A system and a method of health monitoring
CN114927215B (en) * 2022-04-27 2023-08-25 苏州大学 Method and system for directly predicting tumor respiratory motion based on body surface point cloud data
US11596334B1 (en) * 2022-04-28 2023-03-07 Gmeci, Llc Systems and methods for determining actor status according to behavioral phenomena
US20240087743A1 (en) * 2022-09-14 2024-03-14 Videra Health, Inc. Machine learning classification of video for determination of movement disorder symptoms
CN117297546A (en) * 2023-09-25 2023-12-29 首都医科大学宣武医院 Automatic detection system for capturing seizure symptomology information of epileptic

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2939922A1 (en) * 2014-02-24 2015-08-27 Brain Power, Llc Systems, environment and methods for evaluation and management of autism spectrum disorder using a wearable data collection device
WO2017106770A1 (en) * 2015-12-18 2017-06-22 Cognoa, Inc. Platform and system for digital personalized medicine
US10485471B2 (en) * 2016-01-07 2019-11-26 The Trustees Of Dartmouth College System and method for identifying ictal states in a patient
US20170258390A1 (en) * 2016-02-12 2017-09-14 Newton Howard Early Detection Of Neurodegenerative Disease

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100169409A1 (en) * 2008-08-04 2010-07-01 Fallon Joan M Systems and methods employing remote data gathering and monitoring for diagnosing, staging, and treatment of parkinsons disease, movement and neurological disorders, and chronic pain
US20170119302A1 (en) * 2012-10-16 2017-05-04 University Of Florida Research Foundation, Incorporated Screening for neurological disease using speech articulation characteristics
US20160189371A1 (en) * 2014-12-30 2016-06-30 Cognizant Technology Solutions India Pvt. Ltd System and method for predicting neurological disorders

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3697302A4 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748644B2 (en) 2018-06-19 2020-08-18 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11120895B2 (en) 2018-06-19 2021-09-14 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11942194B2 (en) 2018-06-19 2024-03-26 Ellipsis Health, Inc. Systems and methods for mental health assessment
JP2020199072A (en) * 2019-06-10 2020-12-17 国立大学法人滋賀医科大学 Cerebral apoplexy determination device, method, and program
CN111292851A (en) * 2020-02-27 2020-06-16 平安医疗健康管理股份有限公司 Data classification method and device, computer equipment and storage medium
WO2022033442A1 (en) * 2020-08-10 2022-02-17 杭州享励数字科技有限公司 Cloud technology-based intelligent multi-channel disease diagnostic system and method
WO2023178437A1 (en) * 2022-03-25 2023-09-28 Nuralogix Corporation System and method for contactless predictions of vital signs, health risks, cardiovascular disease risk and hydration from raw videos

Also Published As

Publication number Publication date
JP2020537579A (en) 2020-12-24
US20190110754A1 (en) 2019-04-18
EP3697302A4 (en) 2021-10-20
CA3077481A1 (en) 2019-04-25
CN111225612A (en) 2020-06-02
IL273789A (en) 2020-05-31
AU2018350984A1 (en) 2020-05-07
EP3697302A1 (en) 2020-08-26
KR20200074951A (en) 2020-06-25

Similar Documents

Publication Publication Date Title
US20190110754A1 (en) Machine learning based system for identifying and monitoring neurological disorders
Pereira et al. A survey on computer-assisted Parkinson's disease diagnosis
US20210106265A1 (en) Real time biometric recording, information analytics, and monitoring systems and methods
US11553870B2 (en) Methods for modeling neurological development and diagnosing a neurological impairment of a patient
Parisi et al. Body-sensor-network-based kinematic characterization and comparative outlook of UPDRS scoring in leg agility, sit-to-stand, and gait tasks in Parkinson's disease
US20200060566A1 (en) Automated detection of brain disorders
US20170258390A1 (en) Early Detection Of Neurodegenerative Disease
JP6124140B2 (en) Assessment of patient cognitive function
US11699529B2 (en) Systems and methods for diagnosing a stroke condition
US20210339024A1 (en) Therapeutic space assessment
US11278230B2 (en) Systems and methods for cognitive health assessment
Palliya Guruge et al. Advances in multimodal behavioral analytics for early dementia diagnosis: A review
Sigcha et al. Deep learning and wearable sensors for the diagnosis and monitoring of Parkinson’s disease: a systematic review
Frick et al. Detection of schizophrenia: A machine learning algorithm for potential early detection and prevention based on event-related potentials.
Mantri et al. Real time multimodal depression analysis
Deb How Does Technology Development Influence the Assessment of Parkinson's Disease? A Systematic Review
WO2019227690A1 (en) Screening of behavioral paradigm indicators and application thereof
Ngo et al. Technological evolution in the instrumentation of ataxia severity measurement
Chadha et al. Assistance for Facial Palsy using Quantitative Technology
Jung et al. Identifying depression in the elderly using gait accelerometry
Davids et al. AIM in Neurodegenerative Diseases: Parkinson and Alzheimer
Isaev Use of Machine Learning and Computer Vision Methods for Building Behavioral and Electrophysiological Biomarkers for Brain Disorders
Chandurkar et al. Introducing an IoT-Enabled Multimodal Emotion Recognition System for Women Cancer Survivors
Paruchuri ParkinSense: A Novel Approach to Remote Idiopathic Parkinson’s Disease Diagnosis, Severity Profiling, and Telemonitoring via Ensemble Learning and Multimodal Data Fusion on Webcam-Derived Digital Biomarkers
Pereira Aprendizado de máquina aplicado ao auxílio do diagnóstico da doença de Parkinson [Machine learning applied to aid the diagnosis of Parkinson's disease]

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18868878

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3077481

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2020522316

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018350984

Country of ref document: AU

Date of ref document: 20181017

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2018868878

Country of ref document: EP

Effective date: 20200518