WO2018132483A1 - Cognitive platform configured for determining the presence or likelihood of onset of a neuropsychological deficit or disorder - Google Patents

Cognitive platform configured for determining the presence or likelihood of onset of a neuropsychological deficit or disorder

Info

Publication number: WO2018132483A1
Application number: PCT/US2018/013182
Authority: WO (WIPO/PCT)
Prior art keywords: individual, interference, task, response, primary
Other languages: French (fr)
Inventors: Jeffrey BOWER, Titiimaea ALAILIMA
Original assignee: Akili Interactive Labs, Inc.
Application filed by Akili Interactive Labs, Inc.
Publication of WO2018132483A1

Classifications

    • A61B 5/4082: Diagnosing or monitoring movement diseases, e.g. Parkinson, Huntington or Tourette
    • A61B 5/168: Evaluating attention deficit, hyperactivity
    • A61B 5/24: Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B 5/4088: Diagnosing or monitoring cognitive diseases, e.g. Alzheimer, prion diseases or dementia
    • A61B 5/7475: User input or interface means, e.g. keyboard, pointing device, joystick
    • G16H 10/20: ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • G16H 20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G16H 50/20: ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • A61B 5/0022: Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B 5/021: Measuring pressure in heart or blood vessels
    • A61B 5/024: Detecting, measuring or recording pulse rate or heart rate
    • A61B 5/053: Measuring electrical impedance or conductance of a portion of the body
    • A61B 5/14542: Measuring characteristics of blood in vivo for measuring blood gases
    • A61B 5/162: Testing reaction times
    • A61B 5/163: Evaluating the psychological state by tracking eye movement, gaze, or pupil change
    • A61B 5/318: Heart-related electrical modalities, e.g. electrocardiography [ECG]
    • A61B 5/369: Electroencephalography [EEG]

Definitions

  • Neurodegenerative conditions can cause individuals to experience a certain amount of cognitive decline. This can cause an individual to experience increased difficulty in challenging situations, such as time-limited, attention-demanding conditions.
  • Certain cognitive conditions, diseases, or executive function disorders can result in compromised performance at tasks that require attention, memory, motor function, reaction, executive function, decision-making skills, problem-solving skills, or language skills.
  • Alzheimer's disease and Huntington's disease, among other types of neurodegenerative conditions, eventually cause diminished cognitive abilities.
  • an apparatus includes a user interface; a memory to store processor-executable instructions; and a processing unit communicatively coupled to the user interface and the memory.
  • Upon execution of the processor-executable instructions, the processing unit is configured to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; and render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference.
  • the interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction.
  • the processing unit is configured to: instruct, using the user interface, the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor; generate a performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response; receive data indicative of one or both of an age or a gender identifier of the individual; and generate a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the data indicative of (i) at least one of the age or the gender identifier, (ii) the performance score, and (iii) one or more of the first primary response and the first secondary response.
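As a purely illustrative sketch of this scoring flow, the following computes an interference-cost performance score from the two sets of primary-task responses and applies a logistic-style predictive model to the age/gender data, the score, and a baseline response measure. The choice of metric, the model form, and every name here are assumptions, not the disclosed implementation:

```python
import numpy as np

def performance_score(primary_rts_alone, primary_rts_with_interference):
    """Interference cost: difference between the individual's median
    response times to the primary task with vs. without interference.
    A hypothetical stand-in for the disclosed performance score."""
    return float(np.median(primary_rts_with_interference)
                 - np.median(primary_rts_alone))

def scoring_output(weights, age, gender_id, perf_score, baseline_response):
    """Apply a predictive model to the claimed inputs -- age/gender
    identifier, performance score, and a baseline response measure.
    The logistic form and feature order are assumptions."""
    x = np.array([1.0, age, gender_id, perf_score, baseline_response])
    return float(1.0 / (1.0 + np.exp(-weights @ x)))  # score in (0, 1)
```

With all-zero weights the model is uninformative and returns 0.5; usable weights would have to be learned from training datasets such as those the disclosure describes.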
  • an apparatus for enhancing one or more cognitive skills in an individual includes: a user interface; a memory to store processor-executable instructions; and a processing unit communicatively coupled to the user interface and the memory.
  • Upon execution of the processor-executable instructions, the processing unit is configured to: execute a first trial at a first time interval, the first trial including: rendering a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; rendering a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; instructing, using the user interface, the individual not to respond to an interference with the primary task that is configured as a distraction and to respond to an interference with the primary task that is configured as an interruptor; and rendering at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference.
  • the first interference is configured to divert the individual's attention from the second instance of the primary task and is rendered as an interruptor or a distraction.
  • the processing unit is configured to: generate a first performance score based on the differences in the individual's performance from performing the primary task without interference and with interference, at least in part by determining differences between the data indicative of the first primary response and the second primary response; receive data indicative of one or both of an age or a gender identifier of the individual; and, based on the performance score and the data indicative of one or both of the age or the gender identifier of the individual, adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders, at a second difficulty level, one or more of a third instance of the primary task or a second interference.
  • the processing unit is configured to execute a second trial at a second time interval that is subsequent to the first time interval, the second trial including: rendering at the user interface the third instance of the primary task with the second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference.
  • the second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction.
  • the processing unit is configured to generate a second performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the third primary response to provide an indication of cognitive skills of the individual.
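One plausible way to derive the second difficulty level from a performance score is a staircase rule, sketched below. The rule, step size, and bounds are illustrative assumptions, not taken from the disclosure:

```python
def adjust_difficulty(level, perf_score, target=0.0, step=0.1,
                      lo=0.0, hi=1.0):
    """Staircase-style adjustment (hypothetical): a perf_score at or
    below `target` means the interference cost was small, so raise the
    difficulty level; otherwise lower it. Step size and bounds are
    illustrative defaults."""
    if perf_score <= target:
        return min(hi, round(level + step, 10))
    return max(lo, round(level - step, 10))
```

A trial loop would call this between the first and second trials, so the third instance of the primary task and the second interference are rendered at the adjusted level.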
  • a computer-implemented method for enhancing one or more cognitive skills in an individual includes executing, using a processing unit communicatively coupled to a user interface and a memory, a first trial at a first time interval, the first trial comprising: rendering a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; rendering a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; and rendering at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference.
  • the first interference is configured to divert the individual's attention from the second instance of the primary task and is rendered as an interruptor or a distraction.
  • the individual is instructed not to respond to the first interference that is configured as a distraction and to respond to the first interference that is configured as an interruptor.
  • the method includes executing, using the processing unit, at least one second trial at a second time interval that is subsequent to the first time interval, the second trial comprising: rendering at the user interface a third instance of the primary task with a second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference.
  • the second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction.
  • the individual is instructed not to respond to the second interference that is configured as the distraction and to respond to the second interference that is configured as the interruptor.
  • the method includes generating, using the processing unit, a performance score based at least in part on the data indicative of the first primary response, the second primary response, and the third primary response; receiving data indicative of one or both of an age or a gender identifier of the individual; and generating, using the processing unit, a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the performance score and one or both of the data indicative of the age or the gender identifier.
  • FIG. 1 shows an example plot of data analysis from applying an example predictive model, according to the principles herein.
  • FIGs. 2 and 3 show example plots of data derived from a cross-validation using the example predictive model, according to the principles herein.
  • FIG. 4 shows an example plot of data derived from applying the example predictive model, according to the principles herein.
  • FIG. 5 shows a block diagram of an example apparatus, according to the principles herein.
  • FIG. 6 shows a block diagram of an example computing device, according to the principles herein.
  • FIGs. 7A - 7B show example systems, according to the principles herein.
  • FIG. 8 shows another example system, according to the principles herein.
  • FIGs. 9A - 9D show example user interfaces with instructions to a user that can be rendered to an example user interface, according to the principles herein.
  • FIGs. 10A - 10D show examples of the time-varying features of example objects (targets or non-targets) that can be rendered to an example user interface, according to the principles herein.
  • FIGs. 11A - 11T show examples of the rendering of tasks and interferences at user interfaces, according to the principles herein.
  • FIGs. 12A - 12D show examples of the rendering of tasks and interferences at user interfaces, according to the principles herein.
  • FIGs. 13A - 13B show example physiological measurement data from a plurality of individuals, according to the principles herein.
  • FIGs. 14A - 14B show plots of example performance metrics derived from measures of the individuals' performance, according to the principles herein.
  • FIG. 15 shows plots of the results of measures of a test for episodic memory, according to the principles herein.
  • FIG. 16 shows plots of the results of measures of a test for sustained attention, according to the principles herein.
  • FIGs. 17A - 17C-2 show flowcharts of example methods, according to the principles herein.
  • FIG. 18 shows a block diagram of an example computer system, according to the principles herein.
  • A non-limiting example computer-implemented cognitive platform described herein can be configured for generating an assessment of one or more cognitive skills of an individual based on data collected from as few as a single initial moment (as non-limiting examples, the first few seconds, about the first 5 seconds, about the first 10 seconds, about the first 20 seconds, about the first 30 seconds, about the first 45 seconds, about the first minute, about the first 1.5 minutes, about the first 3 minutes, about the first 5 minutes, about the first 7.5 minutes, about the first 10 minutes, or about the first 15 minutes) of interaction of the individual with the cognitive platform, as a biomarker or other marker for the cognitive condition of an individual.
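To make "a single initial moment" concrete, the following sketch restricts interaction data to such a window and derives a crude indicator from it. The 30-second default, the (timestamp, response) event format, and the mean-response indicator are all illustrative assumptions:

```python
def initial_window(events, window_s=30.0):
    """Keep only (timestamp_s, response) events from roughly the first
    `window_s` seconds of interaction -- the "initial moments" from
    which an assessment may be derived. The 30 s default is one of the
    candidate windows listed in the text; the event format is assumed."""
    return [(t, r) for (t, r) in events if t <= window_s]

def indicator(events, window_s=30.0):
    """A crude performance indicator: mean response value (e.g.
    accuracy) within the initial window. Purely illustrative."""
    window = initial_window(events, window_s)
    return sum(r for _, r in window) / len(window) if window else float("nan")
```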
  • the example computer-implemented cognitive platform described herein can be configured for generating an assessment of one or more cognitive skills of an individual based on data collected from the initial moment of interaction and at least one subsequent interaction of the individual with the cognitive platform, as a biomarker or other marker for the cognitive condition of an individual.
  • A non-limiting example computer-implemented cognitive platform described herein also can be configured for enhancing the cognitive skills of the individual, and for serving as a biomarker or other marker for any change in the cognitive condition of the individual as a result of the enhanced cognitive skills.
  • the cognitive condition can be a neurodegenerative condition, such that the example computer-implemented cognitive platform can be configured to serve as a biomarker or other marker for the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition in the individual.
  • the cognitive platform could be configured to serve as a biomarker for the likelihood of onset of the neurodegenerative condition based on indications that the neurodegenerative condition may develop in the near term, later in time, or potentially at some unspecified time in the future.
  • the cognitive platform could be configured to serve as a biomarker for the likelihood of onset of the neurodegenerative condition based on indications that the neurodegenerative condition may develop, but not necessarily a definitive projection that it will develop.
  • An example system or apparatus including the computer-implemented cognitive platforms can be configured to apply a predictive model to an indicator of the individual's performance that is derived based on data collected from at least the initial moments of interaction of the individual, measured using components of the cognitive platform.
  • the cognitive platform can be configured to render a primary task and/or a secondary task, collect data indicative of the measured response(s) from the individual to the instance of the primary task, and analyze the collected data to determine at least one indicator of the cognitive ability of the individual.
  • the example system or apparatus is configured to apply the predictive model to the at least one indicator and data indicative of an age and/or a gender identifier of the individual to generate a scoring output indicative of the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition, thereby facilitating use of the cognitive platform as a biomarker or other marker.
  • the predictive model can be computed based on datasets including (i) data indicative of the responses of each individual of a plurality of individuals from at least the initial moments of interaction of that individual in their performance of the primary task and/or secondary task, (ii) data indicative of one or more physiological measurement from the individuals, and (iii) data indicative of an age and/or a gender identifier of the individual.
  • the one or more physiological measurements can be made either before or after the individual interacts with the cognitive platform, and/or during at least a portion of time periods during which the individual is interacting with the cognitive platform.
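As a rough illustration of how a predictive model might be computed from those three dataset components, the following fits a simple logistic model by gradient descent. The model family, the optimizer, and every name here are assumptions standing in for whatever model the platform actually computes:

```python
import numpy as np

def fit_predictive_model(responses, physio, demographics, labels,
                         lr=0.1, n_iter=500):
    """Fit a simple logistic model on the three dataset components the
    text lists: (i) task-response data, (ii) physiological measurements,
    and (iii) age/gender identifiers, against known labels (e.g. known
    amyloid status). Gradient descent on a logistic loss is a minimal
    stand-in, not the disclosed method."""
    X = np.column_stack([responses, physio, demographics])
    X = np.column_stack([np.ones(len(X)), X])       # prepend bias term
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted likelihoods
        w -= lr * (X.T @ (p - y)) / len(y)          # mean-gradient step
    return w
```

The returned weight vector could then be applied to a new individual's features to produce a scoring output, and validated by cross-validation as the figures describe.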
  • the cognitive platform can be implemented to present one or more computer-implemented task(s) to a user, collect data indicative of the user's responses to the one or more computer- implemented task(s), and compute the at least one indicator of the individual's performance.
  • Application of the predictive model to the at least one indicator and data indicative of an age and/or a gender identifier of the individual provides the scoring output that serves as a biomarker or other marker of the neurodegenerative condition, by providing an indication of the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition in the individual.
  • An advantage of the example systems and apparatus herein including the cognitive platform is that presenting the computer-implemented task(s) to the user and collecting the user's responses to the computer-implemented task(s) is comparatively easy and convenient for the user.
  • the indication of the user's cognitive skills and the markers of neurodegenerative condition can be evaluated without performing physiological measurements (such as but not limited to collecting samples of tissue or fluid from the user, or performing positron emission tomography (PET) scans).
  • the computer-implemented task(s) presented by the cognitive platform can be made interesting, engaging, and fun so that the user is motivated to interact with the cognitive platform regularly, e.g., on a daily basis, or several days in a given month.
  • This allows the indication of the cognitive skills of the user, and the markers of neurodegenerative condition of the user (derived based on a scoring using the predictive model), to be evaluated regularly in a convenient manner.
  • the cognitive platform can track the at least one indicator of the user's cognitive skills over time, and the scoring using the predictive model may be used as a marker of the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition in the individual.
  • remedial measures may be performed relative to the individual's cognitive condition.
  • the computer-implemented cognitive platform provides a technical solution to the technical problem of using computers or machines to assist a medical practitioner or healthcare provider to evaluate the individual's cognitive condition (including the neurodegenerative condition of the individual).
  • the use of the computer-implemented cognitive platform provides several improvements over conventional methods that may rely on physiological measurement data to detect a neurodegenerative condition or track the progression of the neurodegenerative condition of the user. Since the physiological measurements (e.g., measurements of types of protein and/or conformation of proteins in the tissue or fluid of an individual, or PET scans) often need to be performed by select medical or healthcare professionals, the physiological measurement data are updated infrequently, e.g., perhaps once or twice a year.
  • the individual's cognitive skills or neurodegenerative condition may markedly degrade (e.g., if there is no intervention to enhance the cognitive skills of the individual and/or to administer a drug, biologic or other pharmaceutical agent).
  • The example cognitive platform according to the principles herein is configured for ease of use; it can be operated by users in a more comfortable setting (such as but not limited to at home) and can be more conveniently administered.
  • the measures of the individual's cognitive abilities using the cognitive platform may, with the individual's consent, provide an early biomarker or other marker of a likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition.
  • an output message may be transmitted to a medical practitioner or healthcare provider.
  • the cognitive platform can be used in a user's home so that the individualized treatment can be conveniently administered to the user.
  • Described herein are inventive methods, apparatus, and systems comprising a cognitive platform and/or platform product configured for coupling with one or more other types of measurement components (such as but not limited to one or more physiological components), and for analyzing data collected from user interaction with the cognitive platform and/or from at least one measurement of the one or more other types of measurement components.
  • the cognitive platform and/or platform product can be configured for cognitive training and/or for clinical purposes.
  • the cognitive platform may be coupled to (including being in communication with), or integrated with, one or more physiological or monitoring components and/or cognitive testing components.
  • the cognitive platform may be separate from, and configured for coupling with, the one or more physiological or monitoring components and/or one or more cognitive testing components.
  • the cognitive platform and systems including the cognitive platform can be configured to present computerized tasks and platform interactions that inform cognitive assessment (including screening and/or monitoring) or to deliver cognitive treatment.
  • the platform product herein may be formed as, be based on, or be integrated with, an AKILI® platform product by Akili Interactive Labs, Inc. (Boston, MA), which is configured for presenting computerized tasks and platform interactions that inform cognitive assessment (including screening and/or monitoring) or to deliver cognitive treatment.
  • the methods, apparatus and systems comprising the cognitive platform or platform product can be used to determine a predictive model tool of amyloid status, and/or as a clinical trial tool to aid in the assessment of the amyloid status of one or more individuals, and/or as a tool to aid in the assessment of amyloid status.
  • the example tools can be built and trained using one or more training datasets obtained from individuals having known amyloid status.
  • the methods, apparatus and systems comprising the cognitive platform or platform product can be used to determine a predictive model tool of the presence or likelihood of onset of a neuropsychological deficit or disorder, and/or as a clinical trial tool to aid in the assessment of the presence or likelihood of onset of a neuropsychological deficit or disorder of one or more individuals.
  • the example tools can be built and trained using one or more training datasets obtained from individuals having known neuropsychological deficit or disorder.
  • the term "includes" means "includes but is not limited to"; the term "including" means "including but not limited to."
  • the term "stimulus” refers to a sensory event configured to evoke a specified functional response from an individual.
  • the degree and type of response can be quantified based on the individual's interactions with a measuring component (including using sensor devices or other measuring components).
  • the degree of response can be generated based on a degree of an action measured using a sensor (such as but not limited to the degree of rotation measured using a motion sensor or a gyroscope).
  • Non-limiting examples of a stimulus include a navigation path (with an individual being instructed to control an avatar or other processor-rendered guide to navigate the path), or a discrete object, whether a target or a non-target, rendered to a user interface (with an individual being instructed to control a computing component to provide input or other indication relative to the discrete object).
  • the task and/or interference includes a stimulus, which can be a time-varying feature as described hereinbelow.
  • target refers to a type of stimulus that is specified to an individual (e.g., in instructions) to be the focus for an interaction.
  • a target differs from a non-target in at least one characteristic or feature.
  • Two targets may differ from each other in at least one characteristic or feature but are both indicated to the individual as targets, e.g., in an example where the individual is instructed/required to make a response that indicates a choice between them.
  • non-target refers to a type of stimulus that is not to be the focus for an interaction, whether indicated explicitly or implicitly to the individual.
  • the term "task” refers to a goal and/or objective to be accomplished by an individual.
  • the computerized task is rendered using programmed computerized components, and the individual is instructed (e.g., using a computing device) as to the intended goal or objective from the individual for performing the computerized task.
  • the task may require the individual to provide or withhold a response to a particular stimulus, using at least one component of the computing device (e.g., one or more sensor components of the computing device).
  • the "task" can be configured as a baseline cognitive function that is being measured.
  • the term "interference" refers to a type of stimulus presented to the individual such that it interferes with the individual's performance of a primary task.
  • an interference is a type of task that is presented/rendered in such a manner that it diverts or interferes with an individual's attention in performing another task (including the primary task).
  • the interference is configured as an instance of a secondary task that is presented simultaneously with a primary task, either over a discrete time period (e.g., a short, discrete time period) or over an extended time period (e.g., less than the time frame over which the primary task is presented), or over the entire period of time of the primary task.
  • the interference can be presented/rendered continuously, or continually (i.e., repeated in a certain frequency, irregularly, or somewhat randomly).
  • the interference can be presented at the end of the primary task or at discrete, interim periods during presentation of the primary task.
  • the degree of interference can be modulated based on the type, amount, and/or temporal length of presentation of the interference relative to the primary task.
  • a "trial" includes at least one iteration of rendering of a task and/or interference (either or both with a time-varying feature) and at least one receiving of the individual's response(s) to the task and/or interference (either or both with a time-varying feature).
  • a trial can include at least a portion of a single-tasking task and/or at least a portion of a multi-tasking task.
  • a trial can be a period of time during a navigation task (including a visuo-motor navigation task) in which the individual's performance is assessed, such as but not limited to, assessing whether, or the degree of success to which, an individual's actions in interacting with the platform result in a guide (including a computerized avatar) navigating along at least a portion of a certain path or in an environment for a time interval (such as but not limited to, fractions of a second, a second, several seconds, or more) and/or cause the guide (including the computerized avatar) to cross (or avoid crossing) performance milestones along the path or in the environment.
  • a trial can be a period of time during a targeting task in which the individual's performance is assessed, such as but not limited to, assessing whether, or the degree of success to which, an individual's actions in interacting with the platform result in identification/selection of a target versus a non-target (e.g., red object versus yellow object), or in discrimination between two different types of targets.
  • the segment of the individual's performance that is designated as a trial for the navigation task does not need to be co-extensive or aligned with the segment of the individual's performance that is designated as a trial for the targeting task.
  • an object may be rendered as a depiction of a physical object (including a polygonal or other object), a face (human or non-human), a caricature, or another type of object.
  • instructions can be provided to the individual to specify how the individual is expected to perform the task and/or interference (either or both with time-varying feature) in a trial and/or a session.
  • the instructions can inform the individual of the expected performance of a navigation task (e.g., stay on this path, go to these parts of the environment, cross or avoid certain milestone objects in the path or environment) or a targeting task (e.g., describe or show the type of object that is the target object versus the non-target object, or the two different types of target object that the individual is expected to choose between), and/or describe how the individual's performance is to be scored.
  • the instructions may be provided visually (e.g., based on a rendered user interface) or via sound.
  • the instructions may be provided once prior to the performance of two or more trials or sessions, or repeated each time prior to the performance of a trial or a session, or some combination thereof.
  • example systems, methods, and apparatus described herein are based on an individual being instructed/required to decide/select between a target versus a non-target.
  • the example systems, methods, and apparatus can be configured such that the individual is instructed/required to decide/choose between two different types of targets (such as but not limited to between two different degrees of a facial expression or other characteristic/feature difference).
  • example systems, methods, and apparatus may be described herein relative to an individual, in other example implementations, the example systems, methods, and apparatus can be configured such that two or more individuals, or members of a group (including a clinical population), perform the tasks and/or interference (either or both with time-varying feature), either individually or concurrently.
  • the example platform products and cognitive platforms according to the principles described herein can be applicable to many different types of neuropsychological conditions, such as but not limited to dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, Huntington's disease, or other neurodegenerative condition, autism spectrum disorder (ASD), presence of the 16p11.2 duplication, and/or an executive function disorder (such as but not limited to attention deficit hyperactivity disorder (ADHD), sensory-processing disorder (SPD), mild cognitive impairment (MCI), Alzheimer's disease, multiple sclerosis, schizophrenia, depression, or anxiety).
  • the computing device can include an application (an "App program") to perform such functionalities as analyzing the data.
  • the data from the at least one sensor component can be analyzed as described herein by a processor executing the App program on an example computing device to receive (including to measure), substantially simultaneously, the response of the individual to a primary task and a secondary response of the individual to a secondary task rendered as an interference with the primary task.
  • the data from the at least one sensor component can be analyzed as described herein by a processor executing the App program on an example computing device to analyze the data indicative of the response of the individual to the primary task and to the secondary task to compute at least one performance metric including at least one indicator of cognitive condition.
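The performance-metric computation described above is not given a fixed formula in this description. As one illustrative possibility (the function name, inputs, and the specific formula are assumptions, not taken from the disclosure), a conventional dual-task measure compares performance on the primary task alone against performance under interference:

```python
def interference_cost(single_task_score: float, multi_task_score: float) -> float:
    """Relative change in performance when an interference is added.

    Hypothetical metric: the disclosure computes 'at least one
    performance metric' from the responses to the primary and
    secondary tasks but does not fix a formula; this is one
    conventional dual-task measure (negative values indicate a
    performance drop under interference).
    """
    if single_task_score == 0:
        raise ValueError("single-task score must be non-zero")
    return (multi_task_score - single_task_score) / single_task_score


# Example: accuracy of 0.90 on the task alone vs. 0.72 with interference
cost = interference_cost(0.90, 0.72)  # roughly -0.2, i.e., a 20% drop
```

A metric of this shape is one candidate "indicator of cognitive condition" that could feed the predictive models discussed below.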
  • An example system can be configured to implement a predictive model (including using a machine learning predictive model, such as but not limited to a machine learning classifier) to enable an assessment of cognitive skills in an individual using a predictive model and/or to enhance cognitive skills in an individual.
  • the predictive model can include one or more of, e.g., a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, or an artificial neural network.
  • the example system employs an App program executing on a mobile communication device or other hand-held devices.
  • Non-limiting examples of such mobile communication devices or hand-held device include a smartphone, such as but not limited to an iPhone®, a BlackBerry®, or an Android- based smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other computing system that can be used to render game-like elements.
  • the example system can include a head-mounted device, such as smart eyeglasses with built-in displays, a smart goggle with built-in displays, or a smart helmet with built-in displays, and the user can hold a controller or an input device having one or more sensors in which the controller or the input device communicates wirelessly with the head-mounted device.
  • the computing system may be stationary, such as a desktop computing system that includes a main computer and a desktop display (or a projector display), in which the user provides inputs to the App program using a keyboard, a computer mouse, a joystick, handheld consoles, wristbands, or other wearable devices having sensors that communicate with the main computer using wired or wireless communication.
  • the example system may be a virtual reality system, an augmented reality system, or a mixed reality system.
  • the sensors can be configured to measure movements of the user's hands, feet, and/or any other part of the body.
  • the example system can be formed as a virtual reality (VR) system (a simulated environment including as an immersive, interactive 3-D experience for a user), an augmented reality (AR) system (including a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as but not limited to sound, video, graphics and/or GPS data), or a mixed reality (MR) system (also referred to as a hybrid reality which merges the real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact substantially in real time).
  • the term "predictive model” encompasses models trained and developed based on continuous output values and/or models based on discrete labels.
  • the predictive model encompasses a classifier model.
  • the predictive model can be configured to determine scoring outputs that are continuous output values (such as but not limited to values of a psychometric curve) or discrete values (such as but not limited to a classification output).
  • scoring outputs that are continuous output values can be binned into two or more bins (each bin corresponding to a preset range of output values), to provide the classification output.
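The binning of continuous scoring outputs into a classification output, described above, can be sketched as follows (the bin edges here are hypothetical; the description leaves the preset ranges to the implementer):

```python
def bin_score(score: float, edges: list[float]) -> int:
    """Map a continuous scoring output to a discrete class label.

    Each bin corresponds to a preset range of output values; `edges`
    lists the interior bin boundaries in ascending order. With one
    edge, this yields a binary classification output.
    """
    label = 0
    for edge in edges:
        if score >= edge:
            label += 1
    return label


# Two bins from a single (hypothetical) boundary at 0.5:
low = bin_score(0.31, [0.5])   # falls in class 0
high = bin_score(0.74, [0.5])  # falls in class 1
```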
  • Any example predictive model according to the principles herein can be trained using a plurality of training datasets.
  • Each training dataset corresponds to a previously measured individual of a plurality of individuals.
  • Each training dataset includes data representing at least one indicator of the cognitive ability of the previously measured individual, generated based on the data indicative of the individual's responses from previous interactions with the tasks and/or interference executed by the cognitive platform, and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
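A minimal sketch of the training-dataset layout described above, pairing cData-derived indicators with an nData diagnosis label. The per-label centroid computation stands in for the model-fitting step (the description names logistic regression, random decision forests, support vector machines, and neural networks as candidate models); all field names and numeric values are illustrative:

```python
# Hypothetical training-dataset layout: each record pairs the cognitive
# indicators (derived from cData) with the diagnostic label (from nData).
training_set = [
    {"indicators": [0.12, 0.31], "diagnosis": "A+"},  # values illustrative
    {"indicators": [0.04, 0.18], "diagnosis": "A-"},
]


def centroids(dataset):
    """Per-label mean indicator vector -- a minimal stand-in for
    fitting one of the predictive models named in the description."""
    sums, counts = {}, {}
    for rec in dataset:
        lbl = rec["diagnosis"]
        counts[lbl] = counts.get(lbl, 0) + 1
        acc = sums.setdefault(lbl, [0.0] * len(rec["indicators"]))
        for i, v in enumerate(rec["indicators"]):
            acc[i] += v
    return {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}
```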
  • the trained predictive model can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's responses) to generate a scoring output.
  • the scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition.
  • the instant disclosure is directed to computer-implemented devices formed as example cognitive platforms or platform products configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more tasks, to provide at least one performance metric.
  • the example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's condition (including physiological condition and/or cognitive condition).
  • Non-limiting example cognitive platforms or platform products can be configured to classify an individual as to a neuropsychological condition, including as to amyloid group, and/or as to an apolipoprotein E (APOE) expression group based on APOE expression level (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a condition, including a neurodegenerative condition), and/or as to potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, based on the data collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that data.
  • Yet other non-limiting example cognitive platforms or platform products can be configured to classify an individual as to likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition, based on the data collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that data.
  • the neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
  • Any scoring output of a predictive model (including a classification output of a classifier model) for an individual providing an indication as to likelihood of onset and/or stage of progression of a neurodegenerative condition according to the principles herein can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, or other healthcare professional.
  • the platform product or cognitive platform can be configured as any combination of a medical device platform, a monitoring device platform, a screening device platform, or other device platform.
  • the instant disclosure is also directed to example systems that include platform products and cognitive platforms that are configured for coupling with one or more physiological or monitoring component and/or cognitive testing component.
  • the systems include platform products and cognitive platforms that are integrated with the one or more other physiological or monitoring component and/or cognitive testing component.
  • the systems include platform products and cognitive platforms that are separately housed from and configured for communicating with the one or more physiological or monitoring component and/or cognitive testing component, to receive data indicative of measurements made using such one or more components.
  • cData refers to data collected from measures of an interaction of a user with a computer-implemented device formed as a platform product or a cognitive platform.
  • nData refers to other types of data that can be collected according to the principles herein. Any component used to provide nData is referred to herein as an nData component.
  • the data (including cData and nData) is collected with user consent.
  • the cData and/or nData can be collected in realtime.
  • the data (cData or nData) being collected in real-time can be data collected in a time interval at a resolution of up to about 1.0 millisecond or greater.
  • the time interval can be, but is not limited to, about 2.0 milliseconds, about 3.0 milliseconds, about 5.0 milliseconds, about 10 milliseconds, about 25 milliseconds, about 40 milliseconds, about 50 milliseconds, about 60 milliseconds, about 70 milliseconds, about 100 milliseconds, about 500 milliseconds, or about a second or greater.
  • the nData can be collected from measurements using one or more physiological or monitoring components and/or cognitive testing components.
  • the one or more physiological components are configured for performing physiological measurements.
  • the physiological measurements provide quantitative measurement data of physiological parameters and/or data that can be used for visualization of physiological structure and/or functions.
  • nData can be collected from measurements of types of protein and/or conformation of proteins in the tissue or fluid (including blood) of an individual and/or in tissue or fluid (including blood) collected from the individual.
  • the tissue and/or fluid can be in or taken from the individual's brain.
  • the measurement of the conformation of the proteins can provide an indication of amyloid formation (e.g., whether the proteins are forming aggregates).
  • nData can be collected from measurements made using a positron emission tomography (PET) scanner to provide data indicative of an individual's amyloid level, and/or using a test to measure the type and level of expression of a protein of clinical interest (e.g., a DNA test to provide data indicative of an individual's genotype and/or expression level of the apolipoprotein E ε4 allele (referred to herein as "APOE expression group")).
  • the expression group can be defined based on a threshold expression level of the protein of clinical interest in the neurodegenerative condition, where a measured value of expression level above a pre-specified threshold defines a first expression group and a measured value of expression level approximately equal to or below the pre-specified threshold defines a second expression group.
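The threshold rule above maps a measured expression level to one of two expression groups. A direct transcription (the threshold value itself is application-specific and not fixed by the description):

```python
def expression_group(measured_level: float, threshold: float) -> int:
    """Assign an expression group from a measured expression level.

    Levels above the pre-specified threshold define the first group;
    levels approximately equal to or below it define the second.
    """
    return 1 if measured_level > threshold else 2


# With a hypothetical threshold of 2.5 units:
group = expression_group(3.2, 2.5)  # first expression group
```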
  • the nData can be collected from measurements of beta amyloid, cystatin, alpha-synuclein, huntingtin protein, and/or tau proteins.
  • the nData can be collected from measurements of other types of proteins that may be implicated in the onset and/or progression of a neurodegenerative condition, such as but not limited to Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
  • nData can be used to provide a classification or other grouping that can be assigned to an individual based on measurement data from the one or more physiological or monitoring components and/or cognitive testing components. For example, an individual can be classified in an amyloid group of amyloid positive (A+) or amyloid negative (A-) based on analysis of an image from a PET scan.
  • certain cData collected from the individual's interaction with the cognitive platform and/or platform product can co-vary or otherwise correlate with the type of amyloid group the individual may be classified to.
  • a non-limiting example system, method and apparatus can be executed to measure cData indicative of the response of the individual(s) to the tasks and/or interference presented to the individual(s), analyze the cData to generate at least one indicator of the cognitive ability of the individual, and apply a predictive model to the at least one indicator of the cognitive ability of the individual derived from the cData, to provide a scoring output indicative of a likelihood of onset and/or stage of progression of a neurodegenerative condition of the individual(s).
  • a predictive model configured as a classifier model trained to provide a classification output of amyloid status (or grouping)
  • an example system, method and apparatus could be implemented to classify an individual according to amyloid status (or grouping).
  • An example system, method and apparatus according to the principles herein can be used as an intelligent proxy for a nData measurement or analysis.
  • the system, method and apparatus can be implemented as an intelligent proxy for nData measurement or analysis indicative of a likelihood of onset and/or stage of progression of a neurodegenerative condition of the individual(s), through use of the predictive model to provide the scoring output.
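As a concrete (and deliberately simplified) illustration of the "intelligent proxy" idea, the sketch below assigns an amyloid group from an individual's cData-derived indicators by distance to per-group centroids learned from PET-labelled training data. A deployed system would use whichever trained predictive model applies (logistic regression, random forest, SVM, neural network); the centroid values here are hypothetical:

```python
def classify_amyloid(indicators, centroid_pos, centroid_neg):
    """Nearest-centroid stand-in for a trained amyloid-status classifier.

    Returns "A+" or "A-" depending on which group's centroid the
    individual's indicator vector is closer to (squared Euclidean
    distance).
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    near_pos = sq_dist(indicators, centroid_pos) <= sq_dist(indicators, centroid_neg)
    return "A+" if near_pos else "A-"


# Hypothetical centroids learned from PET-labelled (A+/A-) individuals:
status = classify_amyloid([0.11, 0.30], [0.12, 0.31], [0.04, 0.18])  # "A+"
```

The point of the proxy is that, once trained, such a model provides an amyloid-status indication from platform interactions alone, without a new PET scan.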
  • the nData can be an identification of a type of biologic, drug or other pharmaceutical agent administered or to be administered to an individual, and/or data collected from measurements of a level of the biologic, drug or other pharmaceutical agent in the tissue or fluid (including blood) of an individual, whether the measurement is made in situ or using tissue or fluid (including blood) collected from the individual.
  • Non-limiting examples of a biologic, drug, or other pharmaceutical agent applicable to any example described herein include methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, and crenezumab.
  • the term "drug" herein encompasses a drug, a biologic, and/or other pharmaceutical agent.
  • the physiological instrument can be an fMRI.
  • the nData can be measurement data indicative of the cortical thickness, brain functional activity changes, or other measure.
  • nData can include any data that can be used to characterize an individual's status, such as but not limited to age, gender or other similar data.
  • the nData can be data indicative of an individual's performance using a testing component, such as but not limited to the Rey Auditory Verbal Learning Test (RAVLT™) by Western Psychological Services (Torrance, CA) and/or the Test of Variables of Attention (T.O.V.A.®) by The TOVA Company (Los Alamitos, CA).
  • the data (including cData and nData) is collected with the individual's consent.
  • the one or more physiological components can include any means of measuring physical characteristics of the body and nervous system, including electrical activity, heart rate, blood flow, and oxygenation levels, to provide the nData. This can include camera-based heart rate detection, measurement of galvanic skin response, blood pressure measurement, electroencephalogram, electrocardiogram, magnetic resonance imaging, near-infrared spectroscopy, and/or pupil dilation measures, to provide the nData.
  • physiological measurements to provide nData include, but are not limited to, the measurement of body temperature, heart or other cardiac-related functioning using an electrocardiograph (ECG), electrical activity using an electroencephalogram (EEG), event-related potentials (ERPs), blood pressure, electrical potential at a portion of the skin, and/or galvanic skin response (GSR).
  • instruments or techniques for performing the physiological measurements to provide nData include, but are not limited to, the use of body functional magnetic resonance imaging (fMRI), magneto-encephalogram (MEG), eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation, functional near-infrared spectroscopy (fNIRS), and/or a positron emission tomography (PET) scanner.
  • the fMRI also can be used to provide measurement data (nData) indicative of neuronal activation, based on the difference in magnetic properties of oxygenated versus de-oxygenated blood supply to the brain.
  • the fMRI can provide an indirect measure of neuronal activity by measuring regional changes in blood supply, based on a positive correlation between neuronal activity and brain blood flow.
  • a PET scanner can be used to perform functional imaging to observe metabolic processes and other physiological measures of the body through detection of gamma rays emitted indirectly by a positron-emitting radionuclide (a tracer).
  • the tracer can be introduced into the user's body using a biologically-active molecule.
  • Indicators of the metabolic processes and other physiological measures of the body can be derived from the scans, including from computer reconstruction of two- and three-dimensional images from nData of tracer concentration from the scans.
  • the nData can include measures of the tracer concentration and/or the PET images (such as two- or three-dimensional images).
  • the cognitive platform and systems including the cognitive platform can be configured to present computerized tasks and platform interactions that inform cognitive assessment (screening or monitoring) or deliver treatment.
  • a task can involve one or more activities that a user is required to engage in. Any one or more of the tasks can be computer-implemented as computerized stimuli or interaction (described in greater detail below).
  • the cognitive platform may require temporally-specific and/or position-specific responses from a user.
  • the cognitive platform may require position-specific and/or motion-specific responses from the user.
  • the multitasking tasks can include any combination of two or more tasks.
  • the user response to tasks can be recorded using an input device of the cognitive platform.
  • input devices can include a touch, swipe or other gesture relative to a user interface or image capture device (such as but not limited to a touch-screen or other pressure sensitive screen, or a camera), including any form of user interface configured for recording a user interaction.
  • the user response recorded using the cognitive platform for tasks can include user actions that cause changes in a position, orientation, or movement of a computing device including the cognitive platform.
  • Such changes in a position, orientation, or movement of a computing device can be recorded using an input device disposed in or otherwise coupled to the computing device, such as but not limited to a sensor.
  • sensors include a motion sensor, position sensor, and/or an image capture device (such as but not limited to a camera).
  • the computing device is configured (such as using at least one specially-programmed processing unit) to cause the cognitive platform to present to a user two or more different types of tasks, such as but not limited to, targeting and/or navigation and/or facial expression recognition or object recognition tasks, during a short time frame (including in real- time and/or substantially simultaneously).
  • the computing device is also configured (such as using at least one specially-programmed processing unit) to collect data indicative of the type of user response received to the multi-tasking tasks, within the short time frame (including in real-time and/or substantially simultaneously).
  • the two or more different types of tasks can be presented to the individual within the short time frame (including in real-time and/or substantially simultaneously), and the computing device can be configured to receive data indicative of the user response(s) relative to the two or more different types of tasks within the short time frame (including in real-time and/or substantially simultaneously).
  • the short time frame (including substantially simultaneously) can be of any time interval at a resolution of up to about 1.0 millisecond or greater.
  • the time intervals can be, but are not limited to, durations of time of any division of a periodicity of about 2.0 milliseconds or greater, up to any reasonable end time.
  • the time intervals can be, but are not limited to, about 3.0 milliseconds, about 5.0 milliseconds, about 10 milliseconds, about 25 milliseconds, about 40 milliseconds, about 50 milliseconds, about 60 milliseconds, about 70 milliseconds, about 100 milliseconds, or greater.
  • the short time frame can be, but is not limited to, fractions of a second, about a second, between about 1.0 and about 2.0 seconds, or up to about 2.0 seconds, or more.
  • the platform product or cognitive platform can be configured to collect data indicative of a reaction time of a user's response relative to the time of presentation of the tasks.
  • the computing device can be configured to cause the platform product or cognitive platform to provide a smaller or larger reaction-time window for a user to provide a response to the tasks, as a way of adjusting the difficulty level.
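The reaction-time-window adjustment above can be sketched as a simple staircase: tighten the window after a timely response, relax it after a miss. The step size and bounds below are assumptions for illustration, not values taken from the description:

```python
def adjust_window(window_ms: float, responded_in_time: bool,
                  step: float = 0.05, floor_ms: float = 200.0,
                  ceil_ms: float = 2000.0) -> float:
    """Shrink or grow the response window to modulate difficulty.

    A minimal staircase sketch: a timely response tightens the window
    by `step` (harder); a miss relaxes it (easier). The step size and
    the floor/ceiling bounds are hypothetical defaults.
    """
    factor = (1 - step) if responded_in_time else (1 + step)
    return max(floor_ms, min(ceil_ms, window_ms * factor))


# A 1000 ms window shrinks toward ~950 ms after a hit,
# and grows toward ~1050 ms after a miss.
harder = adjust_window(1000.0, True)
easier = adjust_window(1000.0, False)
```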
  • the term "computerized stimuli or interaction” or “CSI” refers to a computerized element that is presented to a user to facilitate the user's interaction with a stimulus or other interaction.
  • the computing device can be configured to present auditory stimulus or initiate other auditory-based interaction with the user, and/or to present vibrational stimuli or initiate other vibrational-based interaction with the user, and/or to present tactile stimuli or initiate other tactile-based interaction with the user, and/or to present visual stimuli or initiate other visual-based interaction with the user.
  • Any task according to the principles herein can be presented to a user via a computing device, actuating component, or other device that is used to implement one or more stimuli or other interactive element.
  • the task can be presented to a user by rendering a user interface to present the computerized stimuli or interaction (CSI) or other interactive elements.
  • the task can be presented to a user as auditory, tactile, or vibrational computerized elements.
  • the CSI can be rendered using at least one user interface to be presented to a user.
  • at least one user interface is configured for measuring responses as the user interacts with CSI computerized element rendered using the at least one user interface.
  • the user interface can be configured such that the CSI computerized element(s) are active, and may require at least one response from a user, such that the user interface is configured to measure data indicative of the type or degree of interaction of the user with the platform product.
  • the user interface can be configured such that the CSI computerized element(s) are passive and are presented to the user using the at least one user interface but may not require a response from the user.
  • the at least one user interface can be configured to exclude the recorded response of an interaction of the user, to apply a weighting factor to the data indicative of the response (e.g., to weight the response to lower or higher values), or to measure data indicative of the response of the user with the platform product as a measure of a misdirected response of the user (e.g., to issue a notification or other feedback to the user of the misdirected response).
  • the cognitive platform and/or platform product can be configured as a processor-implemented system, method or apparatus that includes at least one processing unit.
  • the at least one processing unit can be programmed to render at least one user interface to present the computerized stimuli or interaction (CSI) or other interactive elements to the user for interaction.
  • the at least one processing unit can be programmed to cause an actuating component of the platform product to effect auditory, tactile, or vibrational computerized elements (including CSIs) to effect the stimulus or other interaction with the user.
  • the at least one processing unit can be programmed to cause a component of the program product to receive data indicative of at least one user response based on the user interaction with the CSI or other interactive element (such as but not limited to cData), including responses provided using the input device.
  • the at least one processing unit can be programmed to cause the user interface to receive the data indicative of at least one user response.
  • the at least one processing unit also can be programmed to: analyze the cData to provide a measure of the individual's cognitive condition, and/or analyze the differences in the individual's performance based on determining the differences between the user's responses (including based on differences in the cData), and/or adjust the difficulty level of the auditory, tactile, or vibrational computerized elements (including CSIs), the CSIs or other interactive elements based on the analysis of the cData (including the measures of the individual's performance determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance, and/or cognitive assessment, and/or projected response to cognitive treatment, and/or assessed measures of cognition.
  • the at least one processing unit also can be programmed to classify an individual as to a neuropsychological condition, including as to amyloid group, and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition).
  • the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition, based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData.
  • the neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
  • the platform product can be configured as a processor- implemented system, method or apparatus that includes a display component, an input device, and the at least one processing unit.
  • the at least one processing unit can be programmed to render at least one user interface, for display at the display component, to present the computerized stimuli or interaction (CSI) or other interactive elements to the user for interaction.
  • the at least one processing unit can be programmed to cause an actuating component of the platform product to effect auditory, tactile, or vibrational computerized elements (including CSIs) to effect the stimulus or other interaction with the user.
  • Non-limiting examples of an input device include a touch-screen, or other pressure-sensitive or touch-sensitive surface, a motion sensor, a position sensor, a pressure sensor, joystick, exercise equipment, and/or an image capture device (such as but not limited to a camera).
  • the input device is configured to include at least one component configured to receive input data indicative of a physical action of the individual(s), where the data provides a measure of the physical action of the individual(s) in interacting with the cognitive platform and/or platform product, e.g., to perform the one or more tasks and/or tasks with interference.
  • the analysis of the individual's performance may include using the computing device to compute percent accuracy, number of hits and/or misses during a session or from a previously completed session.
  • Other indicia that can be used to compute performance measures include the amount of time the individual takes to respond after the presentation of a task (e.g., as a targeting stimulus).
  • Other indicia can include, but are not limited to, reaction time, response variance, number of correct hits, omission errors, false alarms, learning rate, spatial deviance, subjective ratings, and/or performance threshold, etc.
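As a non-limiting illustration, several of the indicia above (percent accuracy, number of hits and misses, reaction time, and response variance) can be computed from per-trial records as in the following sketch; the record field names and the function name are illustrative assumptions.

```python
# Non-limiting sketch: summarizing per-session performance indicia.
# The trial record fields ("hit", "rt_ms") are illustrative assumptions.
from statistics import mean, pvariance

def summarize_session(trials):
    """Compute percent accuracy, hits/misses, mean reaction time,
    and reaction-time (response) variance for one session."""
    hits = sum(1 for t in trials if t["hit"])
    misses = len(trials) - hits
    # Only trials with a recorded response contribute to reaction-time stats.
    rts = [t["rt_ms"] for t in trials if t["rt_ms"] is not None]
    return {
        "percent_accuracy": 100.0 * hits / len(trials) if trials else 0.0,
        "hits": hits,
        "misses": misses,
        "mean_rt_ms": mean(rts) if rts else None,
        "rt_variance": pvariance(rts) if len(rts) > 1 else None,
    }
```

The same summary can be computed for the current session or re-run over stored data from a previously completed session.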
  • the user's performance can be further analyzed to compare the effects of two different types of tasks on the user's performances, where these tasks present different types of interferences (e.g., a distraction or an interruptor).
  • the computing device is configured to present the different types of interference as CSIs or other interactive elements that divert the user's attention from a primary task.
  • the computing device is configured to instruct the individual to provide a primary response to the primary task and not to provide a response to the interference (i.e., to ignore the distraction).
  • the computing device is configured to instruct the individual to provide a response as a secondary task, and the computing device is configured to obtain data indicative of the user's secondary response to the interruptor within a short time frame (including at substantially the same time) as the user's response to the primary task (where the response is collected using at least one input device).
  • the computing device is configured to compute measures of one or more of a user's performance at the primary task without an interference, performance with the interference being a distraction, and performance with the interference being an interruption.
  • the user's performance metrics can be computed based on these measures. For example, the user's performance can be computed as a cost (performance change) for each type of interference (e.g., distraction cost and interruptor/multi-tasking cost).
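As a non-limiting illustration, such interference costs can be expressed as performance changes relative to the no-interference baseline. Normalizing by the baseline is one plausible convention and is an assumption here, since the disclosure does not fix a formula.

```python
# Non-limiting sketch: interference costs as normalized performance
# changes relative to the single-task (no-interference) baseline.
# Negative values indicate degraded performance under interference.

def interference_costs(perf_alone: float, perf_distraction: float,
                       perf_interruptor: float) -> dict:
    """Return distraction cost and interruptor/multi-tasking cost.
    The normalization convention is an illustrative assumption."""
    return {
        "distraction_cost": (perf_distraction - perf_alone) / perf_alone,
        "multitasking_cost": (perf_interruptor - perf_alone) / perf_alone,
    }
```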
  • the user's performance level on the tasks can be analyzed and reported as feedback, including either as feedback to the cognitive platform for use to adjust the difficulty level of the tasks, and/or as feedback to the individual concerning the user's status or progress.
  • the computing device can also be configured to analyze, store, and/or output the reaction time for the user's response and/or any statistical measures for the individual's performance (e.g., percentage of correct or incorrect responses in the last number of sessions, over a specified duration of time, or specific to a type of task (including non-target and/or target stimuli, a specific type of task, etc.)).
  • the computerized element includes at least one task rendered at a user interface as a visual task or presented as an auditory, tactile, or vibrational task.
  • Each task can be rendered as interactive mechanics that are designed to elicit a response from a user after the user is exposed to stimuli for the purpose of cData and/or nData collection.
  • the computerized element includes at least one platform interaction (gameplay) element of the platform rendered at a user interface, or as auditory, tactile, or vibrational element of a program product.
  • Each platform interaction (gameplay) element of the platform product can include interactive mechanics (including in the form of videogame-like mechanics) or visual (or cosmetic) features that may or may not be targets for cData and/or nData collection.
  • gameplay encompasses a user interaction (including other user experience) with aspects of the platform product.
  • the computerized element includes at least one element to indicate positive feedback to a user.
  • Each element can include an auditory signal and/or a visual signal emitted to the user that indicates success at a task or other platform interaction element, i.e., that the user's responses at the platform product have exceeded a threshold success measure on a task or platform interaction (gameplay) element.
  • the computerized element includes at least one element to indicate negative feedback to a user.
  • Each element can include an auditory signal and/or a visual signal emitted to the user that indicates failure at a task or platform interaction (gameplay) element, i.e., that the user's responses at the platform product have not met a threshold success measure on a task or platform interaction element.
  • the computerized element includes at least one element for messaging, i.e., a communication to the user that is different from positive feedback or negative feedback.
  • the computerized element includes at least one element for indicating a reward.
  • a reward computer element can be a computer generated feature that is delivered to a user to promote user satisfaction with the CSIs and as a result, increase positive user interaction (and hence enjoyment of the user experience).
  • the cognitive platform can be configured to render multi-task interactive elements.
  • the multi-task interactive elements are referred to as multi-task gameplay (MTG).
  • the multi-task interactive elements include interactive mechanics configured to engage the user in multiple temporally-overlapping tasks, i.e., tasks that may require multiple, substantially simultaneous responses from a user.
  • the cognitive platform can be configured to render single-task interactive elements.
  • the single-task interactive elements are referred to as single-task gameplay (STG).
  • the single-task interactive elements include interactive mechanics configured to engage the user in a single task in a given time interval.
  • the term "cognition” or “cognitive” refers to the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. This includes, but is not limited to,
  • An example computer-implemented device can be configured to collect data indicative of user interaction with a platform product, and to compute metrics that quantify user performance.
  • the quantifiers of user performance can be used to provide measures of cognition (for cognitive assessment) or to provide measures of status or progress of a cognitive treatment.
  • an example platform product herein may be formed as, be based on, or be integrated with, an AKILI® platform product (also referred to herein as an "APP") by Akili Interactive Labs, Inc., Boston, MA.
  • treatment refers to any manipulation of CSI in a platform product (including in the form of an APP) that results in a measurable change (including improvement) of the measures of cognitive abilities of a user, such as but not limited to improvements related to cognition, a user's mood, emotional state, and/or level of engagement or attention to the cognitive platform.
  • the degree or level of change (including improvement) can be quantified based on user performance measures as described herein.
  • the term “treatment” may also refer to a therapy.
  • the term "session” refers to a discrete time period, with a clear start and finish, during which a user interacts with a platform product to receive assessment or treatment from the platform product (including in the form of an APP).
  • a session can include two or more trials, including up to multiple trials.
  • the term "session” refers to a portion of a trial that is less than the full trial.
  • the term “assessment” refers to at least one session of user interaction with CSIs or other feature or element of a platform product.
  • the data collected from one or more assessments performed by a user using a platform product can be used to derive measures or other quantifiers of cognition, or other aspects of a user's abilities.
  • cognitive load refers to the amount of mental resources that a user may need to expend to complete a task. This term also can be used to refer to the challenge or difficulty level of a task or gameplay.
  • the platform product includes a computing device that is configured to present to a user a cognitive platform based on interference processing.
  • At least one processing unit is programmed to render at least one first user interface or cause an actuating component to generate an auditory, tactile, or vibrational signal, to present first CSIs as a first task that requires a first type of response from a user.
  • the example system, method and apparatus is also configured to cause the at least one processing unit to render at least one second user interface or cause the actuating component to generate an auditory, tactile, or vibrational signal, to present second CSIs as a first interference with the first task, requiring a second type of response from the user to the first task in the presence of the first interference.
  • the second type of response can include the first type of response to the first task and a secondary response to the first interference. In another non-limiting example, the second type of response may not include, and be quite different from, the first type of response.
  • the at least one processing unit is also programmed to receive data indicative of the first type of response and the second type of response based on the user interaction with the platform product (such as but not limited to cData), such as but not limited to by rendering the at least one user interface to receive the data.
  • the platform product also can be configured to receive nData indicative of measurements made before, during, and/or after the user interacts with the cognitive platform (including nData from measurements of physiological or monitoring components and/or cognitive testing components).
  • the at least one processing unit also can be programmed to: analyze the cData and/or nData to provide a measure of the individual's condition (including physiological and/or cognitive condition), and/or analyze the differences in the individual's performance based on determining the differences between the measures of the user's first type and second type of responses (including based on differences in the cData) and differences in the associated nData.
  • the at least one processing unit also can be programmed to: adjust the difficulty level of the first task and/or the first interference based on the analysis of the cData and/or nData (including the measures of the individual's performance and/or condition (including physiological and/or cognitive condition) determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance, and/or cognitive assessment, and/or projected response to cognitive treatment, and/or assessed measures of cognition.
  • the at least one processing unit also can be programmed to classify an individual as to a neuropsychological condition, including as to amyloid group, and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition), and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, based on nData and the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData.
  • the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition, based on nData and the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData.
  • the neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
  • the feedback from the differences in the individual's performance based on determining the differences between the measures of the user's first type and second type of responses and the nData can be used as an input in the cognitive platform that indicates real-time performance of the individual during one or more session(s).
  • the data of the feedback can be used as an input to a computation component of the computing device to determine a degree of adjustment that the cognitive platform makes to a difficulty level of the first task and/or the first interference that the user interacts with within the same ongoing session and/or within a subsequently-performed session.
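As a non-limiting illustration, the degree of adjustment can be derived from the real-time performance feedback as in the following sketch; the linear mapping, the target value, and the gain are illustrative assumptions, not a computation specified by this disclosure.

```python
# Non-limiting sketch: mapping real-time performance feedback to
# signed difficulty changes for the first task and first interference.
# The target, gain, and linear form are illustrative assumptions.

def difficulty_deltas(primary_perf: float, interference_perf: float,
                      target: float = 0.8, gain: float = 0.5) -> dict:
    """Raise task difficulty when no-interference performance exceeds
    the target; lower the interference difficulty in proportion to the
    performance drop caused by the interference."""
    return {
        "task": gain * (primary_perf - target),
        "interference": -gain * max(0.0, primary_perf - interference_perf),
    }
```

The deltas can be applied within the ongoing session or carried into a subsequently-performed session.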
  • the cognitive platform based on interference processing can be a cognitive platform based on the Project: EVOTM platform by Akili Interactive Labs, Inc. (Boston, MA).
  • the user interface is configured such that, as a component of the interference processing, one of the discriminating features of the targeting task that the user responds to is a feature in the platform that displays an emotion, a shape, a color, and/or a position that serves as an interference element in interference processing.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to set baseline metrics of CSI levels/attributes in APP session(s) based on measurements of nData indicative of physiological condition and/or cognition condition (including indicators of neuropsychological disorders), to increase accuracy of assessment and efficiency of treatment.
  • the CSIs may be used to calibrate a nData component to individual user dynamics of nData.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use nData to detect states of attentiveness or
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use analysis of nData with CSI cData to detect and direct attention to specific CSIs related to treatment or assessment through subtle or overt manipulation of CSIs.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use analysis of CSIs patterns of cData with nData within or across assessment or treatment sessions to generate user profiles (including profiles of ideal, optimal, or desired user responses) of cData and nData and manipulate CSIs across or within sessions to guide users to replicate these profiles.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to monitor nData for indicators of parameters related to user engagement and to optimize the cognitive load generated by the CSIs to align with time in an optimal engaged state to maximize neural plasticity and transfer of benefit resulting from treatment.
  • the term "neural plasticity" refers to targeted re-organization of the central nervous system.
  • an EEG measurement of the individual can be used to provide nData measures indicative of an attention of the individual as the individual interacts with the task and/or interference.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to monitor nData indicative of anger and/or frustration to promote continued user interaction (also referred to as "play") with the cognitive platform by offering alternative CSIs or disengagement from CSIs.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to change CSI dynamics within or across assessment or treatment sessions to optimize nData related to cognition or other physiological or cognitive aspects of the user.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to adjust the CSIs or CSI cognitive load if nData signals of task automation are detected, or the physiological measurements that relate to task learning show signs of attenuation.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to combine signals from CSI cData with nData to optimize individualized treatment promoting improvement of indicators of cognitive abilities, and thereby, cognition.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use a profile of nData to confirm/verify/authenticate a user's identity.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use nData to detect positive emotional response to CSIs in order to catalog individual user preferences to customize CSIs to optimize enjoyment and promote continued engagement with assessment or treatment sessions.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to generate user profiles of cognitive improvement (such as but not limited to, user profiles associated with users classified or known to exhibit improved working memory, attention, processing speed, and/or perceptual detection/discrimination), and deliver a treatment that adapts CSIs to optimize the profile of a new user as confirmed by profiles from nData.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to provide to a user a selection of one or more profiles configured for cognitive improvement.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to monitor nData from auditory and visual physiological measurements to detect interference from external environmental sources that may interfere with the assessment or treatment being performed by a user using a cognitive platform or program product.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use cData and/or nData (including metrics from analyzing the data) as a determinant or to make a decision as to whether a user (including a patient using a medical device) is likely to respond or not to respond to a treatment (such as but not limited to a cognitive treatment and/or a treatment using a biologic, a drug or other pharmaceutical agent).
  • the system, method, and apparatus can be configured to select whether a user (including a patient using a medical device) should receive treatment based on specific physiological or cognitive measurements that can be used as signatures that have been validated to predict efficacy of the cognitive platform in a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status).
  • Such an example system, method, and apparatus configured to perform the analysis (and associated computation) described herein can be used as a biomarker to perform monitoring and/or screening.
  • the example system, method and apparatus can be configured to provide a quantitative measure of the degree of efficacy of a cognitive treatment (including the degree of efficacy in conjunction with use of a biologic, a drug or other pharmaceutical agent) for a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status).
  • the individual or certain individuals of the population may be classified as having a certain neurodegenerative condition.
  • An example system, method, and apparatus includes a cognitive platform and/or platform product (which may include an APP) that is configured to use nData to monitor a user's ability to anticipate CSI(s) and manipulate CSIs patterns and/or rules to disrupt user anticipation of response to CSIs, to optimize treatment or assessment in use of a cognitive platform or program product.
  • Non-limiting examples of analysis (and associated computations) that can be performed based on various combinations of different types of nData and cData are described.
  • the following example analyses and associated computations can be implemented using any example system, method and apparatus according to the principles herein.
  • Non-limiting example system, method, and apparatus provide a cognitive platform and/or platform product that is configured to produce a fast and accurate assessment for amyloid status or a neuropsychological condition in older individuals.
  • the example cognitive platform and/or platform product is configured to implement a predictive model (such as but not limited to a classifier model) trained using clinical trial data set that includes an indication of the amyloid status or a neuropsychological condition of individuals participating in the clinical trial.
  • an individual about 50, about 55, or about 60 years of age or older can be classified as an older individual.
  • Non-limiting example system, method, and apparatus provide a cognitive platform and/or platform product that is configured to implement an example predictive model (such as but not limited to a classifier model) that is configured to identify individuals having a positive amyloid status versus a negative amyloid status with a high degree of accuracy based on measurement data (including cData) from at least one user interaction with the example cognitive platform and/or platform product.
  • the example predictive model (such as but not limited to a classifier model) can be configured to identify individuals that have positive amyloid status with about a 77% degree of accuracy, and to identify individuals that have negative amyloid status with about a 90% degree of accuracy, based on measurement data (including cData) from at least one user interaction with the example cognitive platform and/or platform product (including a single user interaction).
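The per-class accuracy figures above are, in effect, sensitivity (A+ individuals correctly identified) and specificity (A- individuals correctly identified) at a chosen score threshold. A minimal sketch of that computation follows; the scores, labels, threshold, score direction, and function name are all illustrative, not from the clinical data described herein:

```python
def sensitivity_specificity(scores, labels, threshold):
    """scores: one targeting score per individual.
    labels: True = amyloid-positive (A+), False = amyloid-negative (A-).
    An individual is predicted A+ when the score falls below the
    threshold (assumed direction, for illustration only)."""
    tp = sum(1 for s, l in zip(scores, labels) if l and s < threshold)
    fn = sum(1 for s, l in zip(scores, labels) if l and s >= threshold)
    tn = sum(1 for s, l in zip(scores, labels) if not l and s >= threshold)
    fp = sum(1 for s, l in zip(scores, labels) if not l and s < threshold)
    sensitivity = tp / (tp + fn)  # fraction of A+ correctly identified
    specificity = tn / (tn + fp)  # fraction of A- correctly identified
    return sensitivity, specificity
```

Sweeping the threshold over the score range yields sensitivity/specificity curves of the kind plotted in FIG. 1.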
  • FIG. 1 shows data derived from applying an example predictive model (such as but not limited to a classifier model) to data indicative of user interaction (a screen) with an example cognitive platform and/or platform product in an initial screen.
  • the graph shows plots of data indicative of sensitivity and specificity as values of percentage (y-axis) versus values of cData (as score on targeting tasks) derived from the user interaction with the example cognitive platform and/or platform product (x-axis).
  • the targeting score for users having negative amyloid status (indicated using triangles) appears at a first set of values, while the targeting score for users having positive amyloid status appears at a second set of values.
  • the graph shows the predictive model (such as but not limited to a classifier model) based on data from the initial screen can be used to separate a population of users according to an indication of amyloid status.
  • FIGs. 2 and 3 show plots of data derived from a cross-validation routine conducted on the predictive model (such as but not limited to a classifier model) of FIG. 1, to show the predictive accuracy of the model.
  • the non-limiting example predictive model (such as but not limited to a classifier model) can be trained to generate predictors of the amyloid status of individuals using training cData and corresponding nData, and based on metrics collected from at least one interaction of users with an example cognitive platform and/or platform product.
  • the training nData can include data indicative of the amyloid status and age of each user that corresponds to cData collected for a given user (such as but not limited to that user's score from at least one interaction with any example cognitive platform and/or platform product herein).
  • the nData can include data indicative of the gender of the user.
  • the cData can be collected based on a limited user interaction, e.g., on the order of a few minutes, with any example cognitive platform and/or platform product herein.
  • the length of time of the limited user interaction can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes.
  • the example cognitive platform and/or platform product can be configured to implement an assessment session (such as but not limited to an assessment implemented using a Project: EVO™ platform).
  • Non-limiting example system, method, and apparatus also provide a cognitive platform and/or platform product that is configured to implement an example predictive model (such as but not limited to a classifier model) that is configured to identify individuals having a positive amyloid status versus a negative amyloid status with a high degree of accuracy based on measurement data (including cData) from a plurality of user interactions with the example cognitive platform and/or platform product.
  • the example predictive model (such as but not limited to a classifier model) can be configured to identify individuals that have positive amyloid status with about an 83% degree of accuracy, and to identify individuals that have negative amyloid status with about a 79% degree of accuracy, based on measurement data (including cData) from comparing baseline performance data in the first moments of the user performance of a first assessment using the example cognitive platform and/or platform product with values of performance data from the user performance of three (3) subsequent assessments using the example cognitive platform and/or platform product.
  • FIG. 4 shows data derived from applying an example predictive model (such as but not limited to a classifier model) to data indicative of user interactions (screens) with an example cognitive platform and/or platform product in a plurality of screens (in this example, four (4) screens). Each screen is at least one trial or session of interaction with the cognitive platform and/or platform product.
  • the graph shows plots of data indicative of sensitivity and specificity as values of percentage (y-axis) versus values of cData (as score on targeting tasks) derived from the user interaction with the example cognitive platform and/or platform product (x-axis).
  • the targeting score for users having negative amyloid status appears at a first set of values, while the targeting score for users having positive amyloid status appears at a second set of values.
  • the graph shows the predictive model (such as but not limited to a classifier model) based on data from the multiple screens can be used to separate a population of users according to an indication of amyloid status.
  • the non-limiting example predictive model (such as but not limited to a classifier model) according to the principles herein can be trained to generate predictors of the amyloid status or a neuropsychological condition, including as to a neurodegenerative condition, of individuals using training cData and corresponding nData, and based on metrics collected from a plurality of interactions of users with an example cognitive platform and/or platform product.
  • the training nData can include data indicative of the amyloid status or a neuropsychological condition, including as to a neurodegenerative condition, and age of each user.
  • the nData can include data indicative of the gender of the user.
  • the corresponding cData is collected for a given user (such as but not limited to that user's score from at least one interaction with any example cognitive platform and/or platform product herein).
  • the cData can be collected based on a plurality of interaction sessions of a user using a cognitive platform and/or platform product herein, e.g., two or more interaction sessions.
  • the length of time of each interaction session can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes.
  • the example cognitive platform and/or platform product can be configured to implement the plurality of assessment sessions (such as but not limited to an assessment implemented using a Project: EVO™ platform).
  • Example systems, methods, and apparatus according to the principles herein also provide a cognitive platform and/or platform product (which may include an APP) that is configured to implement computerized tasks to produce cData.
  • the example cognitive platform and/or platform product can be configured to use cData from a user interaction as inputs to a predictive model (such as but not limited to a classifier model) that determines the likelihood of positive amyloid burden of the user to a high degree of accuracy using a classifier model.
  • the example cognitive platform and/or platform product can be configured to use cData from a user interaction as inputs to a predictive model (such as but not limited to a classifier model) that determines the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, such as but not limited to attention deficit hyperactivity disorder (ADHD), sensory-processing disorder (SPD), mild cognitive impairment (MCI), Alzheimer's disease, multiple sclerosis, or schizophrenia.
  • the example cognitive platform and/or platform product (which may include an APP) can be configured to collect performance data from a single assessment procedure that is configured to sequentially present a user with tasks that challenge cognitive control and executive function to varying degrees, and use the resulting cData representative of time ordered performance measures as the basis for the determination of a user's amyloid status, or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, using a classifier model.
  • an example cognitive platform and/or platform product (which may include an APP) can be configured to implement a predictive model (such as but not limited to a classifier model) and computerized tasks, such that data indicative of an individual's performance of the computerized tasks (including cData) in the first moments of the individual's interaction with the example cognitive platform and/or platform product can be compared to data indicative of the individual's performance of the computerized tasks (including cData) in subsequent moments to provide an indication of the user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
  • the data indicative of the first moments of the individual's interaction with the example cognitive platform and/or platform product can be collected in time periods ranging from the first moments of the individual's interaction, e.g., the first few seconds, about the first 5 seconds, about the first 10 seconds, about the first 20 seconds, about the first 30 seconds, about the first 45 seconds, about the first minute, about the first 1.5 minutes, about the first 3 minutes, about the first 5 minutes, about the first 7.5 minutes, about the first 10 minutes, or other reasonable initial time interval, of the individual's initial interaction with the example cognitive platform and/or platform product.
  • the data indicative of the individual's performance of the computerized tasks (including cData) in the subsequent moments of the individual's interaction with the example cognitive platform and/or platform product can be collected over other time points or time intervals of the individual's interaction with the example cognitive platform and/or platform product.
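The first-moments/subsequent-moments comparison described above can be sketched as a simple windowing of time-ordered performance samples. The window length, data layout, and function name below are assumptions for illustration, not the platform's implementation:

```python
def split_baseline(samples, window_s=30.0):
    """Split per-trial performance samples into a baseline window and the rest.

    samples: time-ordered list of (t_seconds_from_start, score) tuples.
    window_s: length of the initial "first moments" window (e.g. ~30 s,
    one of the intervals listed above).
    Returns the mean score in the baseline window and in the subsequent
    moments, for comparison by a downstream model."""
    baseline = [s for t, s in samples if t <= window_s]
    subsequent = [s for t, s in samples if t > window_s]
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(baseline), mean(subsequent)
```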
  • the validity of the predictive model (such as but not limited to a classifier model) and the principles of sequential testing of novel executive function tasks of this innovation are evaluated using a cross-validation procedure.
  • the results indicate that the example cognitive platform and/or platform product (which may include an APP) can be configured to implement the predictive model (such as but not limited to a classifier model) to detect amyloid burden in individuals with a high degree of accuracy.
  • the example cognitive platforms or platform products are configured to present assessments that sufficiently challenge a user's cognitive control, attention, working memory, and task engagement.
  • the example classifier models according to the principles herein can be used to predict, with a greater degree of accuracy, a user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, based on data (including cData) generated from a user's first interaction with the example cognitive platform and/or platform product (e.g., as an initial screening).
  • the example classifier models according to the principles herein can be used to predict, with a greater degree of accuracy, a user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, based on a comparison of data (including cData) generated from a user's first moments of interaction (including first trial or first session) with the example cognitive platform and/or platform product and the subsequent moments of interaction (including one or more subsequent trials or sessions) with the example cognitive platform and/or platform product.
  • the length of time of the user interaction in the first trial or first session can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes. In any example herein, the length of time of the user interaction in each of the one or more subsequent trials or sessions can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes.
  • computations can be implemented by applying one or more linear mixed model regression models to the data (including data and metrics derived from the cData and/or nData).
  • the analysis can be based on a covariate adjustment of comparisons of data for given individuals, i.e., an analysis of factors with multiple measurements (usually longitudinal) for each individual.
  • the analysis can be configured to account for the correlation between measurements, since the data originates from the same source. In this example as well, the analysis can be based on a covariate adjustment of comparisons of data for given individuals.
  • the cData is obtained based on interactions of each individual with any one or more of the example cognitive platforms and/or platform products described herein.
  • the cData used can be derived as described herein using an example cognitive platform and/or platform product that is configured to implement a sequence that could include at least one initial assessment session.
  • additional assessments can include a first challenge session (including one or more sessions that are rendered with an adaptation (i.e., adjustment or other change) in the difficulty level of the primary task and/or the interference over a previous session, as described herein), a first training session (including one or more sessions that are rendered with a similar difficulty level of the primary task and/or the interference over a previous session, as described herein), a second training session (including one or more sessions that are rendered with a similar difficulty level of the primary task and/or the interference over a previous session, as described herein), and/or a second challenge session (including one or more sessions that are rendered with an adaptation (i.e., adjustment or other change) in the difficulty level of the primary task and/or the interference over a previous session, as described herein).
  • the cData is collected based on measurements of the responses of the individual with the example cognitive platform and/or platform product during one or more segments of the assessment(s).
  • the cData can include data collected by the cognitive platform and/or platform product to quantify the interaction of the individual with the first moments of an initial assessment as well as data collected to quantify the interaction of the individual with the subsequent moments of an initial assessment.
  • the cData can include data collected by the cognitive platform and/or platform product to quantify the interaction of the individual with the initial assessment as well as data collected to quantify the interaction of the individual with one or more additional assessments.
  • the example cognitive platform and/or platform product can be configured to present computerized tasks and platform interactions that inform cognitive assessment (screening or monitoring) or deliver treatment.
  • the tasks can be single-tasking tasks and/or multi-tasking tasks (that include primary tasks with an interference).
  • One or more of the tasks can include CSIs.
  • Non-limiting examples of the types of cData that can be derived from the interactions of an individual with the cognitive platform and/or platform product are as follows.
  • the cData can be one or more scores generated by the cognitive platform and/or platform product based on the individual's response(s) in performance of a single-tasking task presented by the cognitive platform and/or platform product.
  • the single-tasking task can be, but is not limited to, a targeting task, a navigation task, a facial expression recognition task, or an object recognition task.
  • the cData can be one or more scores generated by the cognitive platform and/or platform product based on the individual's response(s) in performance of a multi-tasking task presented by the cognitive platform and/or platform product.
  • the multi-tasking task can include a targeting task and/or a navigation task and/or a facial expression recognition task and/or an object recognition task, where one or more of the multi-tasking tasks can be presented as an interference with one or more primary tasks.
  • the cData collected can be a scoring representative of the individual's response(s) to each task of the multitask task(s) presented, and/or combination scores representative of the individual's overall response(s) to the multi-task task(s).
  • the combination score can be derived based on computation using any one or more of the scores collected from the individual's response(s) to each task of the multi-task task(s) presented, such as but not limited to a mean, mode, median, average, difference (or delta), standard deviation, or other type of combination.
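The combination-score computation named above (mean, mode, median, difference, standard deviation) can be sketched as follows, assuming one numeric score per component task of the multi-task trial; the function and key names are hypothetical:

```python
import statistics

def combination_scores(task_scores):
    """task_scores: one score per component task of a multi-task trial.
    Returns several candidate combination scores of the types named above."""
    return {
        "mean": statistics.mean(task_scores),
        "median": statistics.median(task_scores),
        "mode": statistics.mode(task_scores),
        # one reading of "difference (or delta)": the spread across tasks
        "delta": max(task_scores) - min(task_scores),
        "stdev": statistics.stdev(task_scores),
    }
```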
  • the cData can include measures of the individual's reaction time to one or more of the tasks.
  • the cData can be generated based on an analysis (and associated computation) performed using the other cData collected or derived using the cognitive platform and/or platform product.
  • the analysis can include computation of an interference cost or other cost function.
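The interference-cost formula is not specified here; one common reading, used only as an illustration, is the normalized change in performance from the single-tasking condition to the multi-tasking (primary task plus interference) condition:

```python
def interference_cost(single_task_score, multi_task_score):
    """Assumed formula (for illustration): relative performance change when
    the interference is added. Negative values indicate performance lost
    to the interference."""
    return (multi_task_score - single_task_score) / single_task_score
```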
  • the cData can also include data indicative of an individual's compliance with a pre-specified set and type of interactions with the cognitive platform and/or platform product, such as but not limited to a percentage completion of the pre- specified set and type of interactions.
  • the cData can also include data indicative of an individual's progression of performance using the cognitive platform and/or platform product, such as but not limited to a measure of the individual's score versus a pre- specified trend in progress.
  • the cData can be collected from a user interaction with the example cognitive platform and/or platform product at, and/or two or more separate times during the time period leading up to, one or more specific timepoints: an initial timepoint (T1 ) representing an endpoint of the first moments (as defined herein) of an initial assessment session, and a second timepoint (T2) and/or a third timepoint (T3) representing endpoints of the subsequent moments of the initial assessment session.
  • the measurement timepoints T2 and T1 can be separated by about 5 minutes, about 7 minutes, about 15 minutes, about 1 hour, about 12 hours, about 1 day, about 5 days, about 10 days, about 15 days, about 20 days, about 28 days, about a month, or more.
  • the measurement timepoints T3 and T2 can be separated by about 5 minutes, about 7 minutes, about 15 minutes, about 1 hour, about 12 hours, about 1 day, about 5 days, about 10 days, about 15 days, about 20 days, about 28 days, about a month, or more.
  • the example cognitive platform and/or platform product can be configured for interaction with the individual over multiple different assessment sessions.
  • the cData can be collected at timepoints Ti associated with the initial assessment session and later timepoints TL associated with the interactions of the individual with the multiple additional assessment sessions.
  • the example cognitive platform and/or platform product can be configured for screening, for monitoring, and/or for treatment, as described in the various examples herein.
  • the example analyses (and associated computations) can be implemented based at least in part on the cData and nData such as but not limited to data indicative of age, gender, APOE level, and fMRI measures (e.g., cortical thickness, or brain functional activity changes).
  • the results of these example analyses (and associated computations) can be used to provide data indicative of amyloid status of individual(s) and/or the individual's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
  • the example cData and nData can be used to train an example classifier model.
  • the example predictive model (such as but not limited to a classifier model) can be implemented using a cognitive platform and/or platform product to provide data indicative of differences between the individuals having an amyloid positive (A+) status and the individuals having an amyloid negative (A-) status.
  • Example system, method and apparatus herein are configured to perform the analysis (and associated computation) described herein using a predictive model (such as but not limited to a classifier model) to designate individuals of the population as having an amyloid positive (A+) status or an amyloid negative (A-) status, and/or indicate the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
  • the example analyses can be implemented by applying one or more ANCOVA (analysis of covariance) models to the data (including data and metrics derived from the cData and/or nData).
  • ANCOVA provides a linear model that blends analysis of variance (ANOVA) and regression.
  • the analysis can be based on a covariate adjustment of comparisons of data between individuals using a single dependent variable or multiple variables.
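An ANCOVA of this kind reduces to an ordinary-least-squares fit of the dependent variable on a group code plus the covariate. The sketch below assumes a single 0/1 group factor (e.g. A-/A+) adjusted for age; the variable names and data are illustrative:

```python
import numpy as np

def ancova_fit(score, group, age):
    """Fit score ~ intercept + group + age by ordinary least squares.

    score: dependent variable (e.g. a cData-derived metric).
    group: 0/1 group codes (e.g. A- vs A+); age: continuous covariate.
    Returns (intercept, group_effect, age_slope): the group effect is the
    covariate-adjusted between-group difference."""
    X = np.column_stack([np.ones(len(age)), group, age])
    beta, *_ = np.linalg.lstsq(X, np.asarray(score, dtype=float), rcond=None)
    return beta
```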
  • a non-limiting example predictive model (such as but not limited to a classifier model) can be configured to perform the analysis (and associated computation) using the cData and nData based on various analysis models. Differing analysis models can be applied to data collected from user interactions with the cognitive platform or the platform product (cData) collected at, and/or collected two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and/or the later timepoints (T2, and/or T3, and/or TL).
  • the analysis model can be based on an ANCOVA model and/or a linear mixed model regression model, applied to a restricted data set (based on age and gender nData) or a larger data set (based on age, gender, APOE expression group, fMRI, and other nData).
  • the example cognitive platform or platform product can be used to collect cData at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL), to apply the predictive model (such as but not limited to a classifier model) to compare the cData collected at the initial timepoints to the cData collected at the later timepoints to derive an indicator that designates individuals of the population as having an amyloid positive (A+) status or an amyloid negative (A-) status, and/or that indicates the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
  • the analysis can be performed to determine a measure of the sensitivity and specificity of the cognitive platform or the platform product to identify and classify the individuals of the population that have an amyloid positive (A+) status, based on applying a logistic regression model to the data collected (including the cData and/or the nData).
  • the analysis can be performed to determine a measure of the sensitivity and specificity of the cognitive platform or the platform product to identify and classify the individuals of the population according to APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition), based on applying a logistic regression model to the data collected (including the cData and/or the nData), including the amyloid level data.
  • certain cData collected from the individual's interaction with the tasks (and associated CSIs) presented by the cognitive platform and/or platform product, and/or metrics computed using the cData based on the analysis (and associated computations) described can co-vary or otherwise correlate with the nData, such as but not limited to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition).
  • An example cognitive platform and/or platform product can be configured to classify an individual as to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition) based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations).
  • the example cognitive platform and/or platform product can include, or communicate with, a machine learning tool or other computational platform that can be trained using the cData and nData to perform the classification using the example classifier model.
  • the example analysis (and associated computation) can be performed by comparing each variable using any example model described herein for the nData corresponding to the drug group along with a covariate set.
  • the example analysis (and associated computation) also can be performed by comparing effects of group classification (such as but not limited to grouping based on amyloid status) versus drug interactions, where the cData (from performance of single-tasking tasks and/or multitasking tasks) are compared to determine the efficacy of the drug on the individual's performance.
  • the example analysis (and associated computation) also can be performed by comparing effects of group classification (such as but not limited to grouping based on amyloid status) versus drug interactions for sessions of user interaction with the cognitive platform and/or platform product, where the cData (from performance of single-tasking tasks and/or multi-tasking tasks) are compared to determine the efficacy of the drug on the individual's performance.
  • the example analysis (and associated computation) also can be performed by comparing effects of group classification (such as but not limited to grouping based on amyloid status) versus drug interactions for sessions (and types of tasks) of user interaction with the cognitive platform and/or platform product, where the cData (from performance of single-tasking tasks and/or multi-tasking tasks) are compared to determine the efficacy of the drug on the individual's performance.
  • certain cData collected from the individual's interaction with the tasks (and associated CSIs) presented by the cognitive platform and/or platform product, and/or metrics computed using the cData based on the analysis (and associated computations) described can co-vary or otherwise correlate with the nData, such as but not limited to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition) and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent.
  • An example cognitive platform and/or platform product can be configured to classify an individual as to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition) and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations).
  • the example cognitive platform and/or platform product can include, or communicate with, a machine learning tool or other computational platform that can be trained using the cData and nData to perform the classification using the example classifier model.
  • a non-limiting example predictive model (such as but not limited to a classifier model) based on data from an initial screen can be derived from cData (assessment measurement data) collected from a single initial session (such as but not limited to a session lasting as few as about 5 to 7 minutes). Additional inputs can be nData such as but not limited to the participant's age and/or gender.
  • the example predictive model (such as but not limited to a classifier model) can be generated based on a formulation of a linear model and fitting techniques to estimate the model's parameters.
  • the predictive model can be expressed as a function of variables related to (i) the age (ageScore) and/or gender (genderCode) of the individual, and (ii) a performance score that depends on measurement data indicative of the individual's physical actions in response to interactions with computerized primary tasks in the presence of an interference (intScore) as described herein.
  • the performance score of the predictive model can be expressed based on differences between measurement data indicative of the individual's physical actions in response to interactions with computerized primary tasks in the presence of an interference (intScore) and measurement data indicative of the individual's physical actions in response to interactions with computerized tasks without interference (isoScore), such as a primary task without interference or a secondary task without interference.
  • the model can be fit to the data in the plurality of training datasets described herein.
  • each training dataset corresponds to a previously classified individual of a plurality of individuals, and each training dataset includes: (i) data representing at least one of the performance score, age, or gender identifier of the classified individual and (ii) data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
  • the first example predictive model (such as but not limited to a classifier model) can be expressed as follows:
  • ŷ = β_int·x_intScore + β_age·x_ageScore + β_gender·x_genderCode + ε, where each coefficient β indicates an estimated coefficient, each variable x represents a participant's score or categorical assignment, ŷ is the predicted value of the classifier model, and ε is the model error.
  • the non-limiting example predictive model computes a scoring based on values such as data indicative of an age and/or gender of the individual together with the performance score described above.
  • This example predictive model (such as but not limited to a classifier model) can be trained using a machine learning tool or other computational platform using the cData and nData collected from user interactions (with the example cognitive platform and/or platform product) of individuals having known amyloid status, to indicate the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
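A minimal sketch of such a linear-model fit, assuming a small synthetic training set (the feature values, labels, and helper names here are illustrative assumptions, not data from the specification):

```python
import numpy as np

# Each row: [x_intScore, x_ageScore, x_genderCode]; illustrative values only.
X = np.array([
    [0.42, 71.0, 0],
    [0.55, 68.0, 1],
    [0.31, 74.0, 0],
    [0.60, 66.0, 1],
    [0.28, 79.0, 1],
    [0.50, 70.0, 0],
], dtype=float)
# 1 = previously classified amyloid positive, 0 = amyloid negative (synthetic).
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)

# Add an intercept column and estimate coefficients by least squares,
# one simple instance of "a formulation of a linear model and fitting
# techniques to estimate the model's parameters".
A = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x_int, x_age, x_gender):
    """Predicted value y-hat of the fitted linear model."""
    return beta[0] + beta[1] * x_int + beta[2] * x_age + beta[3] * x_gender
```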
  • a second example predictive model (such as but not limited to a classifier model) can be based on data from user interaction with multiple screens, i.e., user interaction with an initial screen and with multiple additional sessions, such as but not limited to three (3) additional assessments.
  • the second example predictive model (such as but not limited to a classifier model) is constructed similarly to the first example predictive model, except the participant's interference score (x_intScore) (i.e., a score for a task involving a primary task presented with an interference as described herein) is replaced by the average of the score on a single-tasking task (isoScore) from about three (3) subsequent assessments (x_meanPostIsoScore).
  • the computation for the predictive model (such as but not limited to a classifier model) can be expressed as follows: ŷ = β_iso·x_meanPostIsoScore + β_age·x_ageScore + β_gender·x_genderCode + ε.
  • This example predictive model (such as but not limited to a classifier model) can be trained using a machine learning tool or other computational platform using the cData and nData collected from user interactions (with the example cognitive platform and/or platform product) of individuals having known amyloid status, to indicate the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
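Under the same assumed formulation, the only change for the second model is the input feature: the single-session interference score is replaced by the mean isoScore over about three subsequent assessments. A hypothetical helper:

```python
def mean_post_iso_score(iso_scores):
    """Average single-tasking (isoScore) performance over subsequent
    assessments, e.g. three follow-up sessions (illustrative helper)."""
    return sum(iso_scores) / len(iso_scores)

# x_meanPostIsoScore takes the place of x_intScore in the linear model.
x_mean_post_iso = mean_post_iso_score([0.78, 0.74, 0.82])
```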
  • An example cognitive platform and/or platform product configured to implement the predictive model provides certain attributes.
  • the example cognitive platform and/or platform product can be configured to classify a user according to the user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, based on faster data collection.
  • the data collection from an assessment performed using the example cognitive platform and/or platform product herein can be completed in a few minutes (e.g., in as few as about 5 to 7 minutes for an example predictive model (such as but not limited to a classifier model) based on an initial screen).
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be easily and remotely deployable on a mobile device such as but not limited to a smart phone or tablet.
  • Existing assessments may require clinician participation, may require the test to be performed in a laboratory/clinical setting, and/or may require invasive on-site medical procedures.
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be delivered in an engaging format (such as but not limited to a 'game-like' format) that encourages user engagement and improves effective use of the assessment, thus increasing accuracy.
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be configured to combine orthogonal metrics from different tasks collected in a single session for highly accurate results.
  • An example cognitive platform and/or platform product herein configured to implement the predictive model provides an easily deployable, cost-effective, engaging, short-duration assessment of amyloid status, and/or of the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, with a high degree of accuracy.
  • At least a portion of the example predictive model (such as but not limited to a classifier model) herein can be implemented in the source code of an example cognitive platform and/or platform product, and/or within a data processing application program interface housed in an internet server.
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be used to provide data indicative of a likelihood of positive amyloid status to one or more of an individual, a physician, a clinician, or other medical or healthcare practitioner, or physical therapist.
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be used as a screening tool to determine amyloid positive or amyloid negative status of individuals, such as but not limited to, for clinical trials, or other drug trials, or for use by a private physician/clinician practice, and/or for an individual's self-assessment (with corroboration by a medical practitioner).
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be used as a screening tool to provide an accurate assessment of an individual's amyloid status to inform whether additional tests, such as but not limited to a PET scan, are to be performed to confirm or clarify amyloid status.
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be used as a clinical trial screening product that increases the efficiency of identifying amyloid status (whether amyloid positive or negative) of individuals and provides significant cost savings by eliminating the need for unnecessary and expensive traditional detection methods (such as but not limited to PET scans).
  • An example cognitive platform and/or platform product herein configured to implement the predictive model can be used in a clinical or private healthcare setting to provide an indication of the likelihood of a positive or negative amyloid status of an individual without need for expensive traditional tests (which may be unnecessary).
  • FIG. 5 shows an example apparatus 500 according to the principles herein that can be used to implement the cognitive platform and/or platform product including the predictive model (such as but not limited to a classifier model) described hereinabove.
  • the example apparatus 500 includes at least one memory 502 and at least one processing unit 504.
  • the at least one processing unit 504 is communicatively coupled to the at least one memory 502.
  • Example memory 502 can include, but is not limited to, hardware memory, non-transitory tangible media, magnetic storage disks, optical disks, flash drives, computational device memory, random access memory, such as but not limited to DRAM, SRAM, EDO RAM, any other type of memory, or combinations thereof.
  • Example processing unit 504 can include, but is not limited to, a microchip, a processor, a microprocessor, a special purpose processor, an application specific integrated circuit, a microcontroller, a field programmable gate array, a general-purpose graphics processing unit, a neural network chip, any other suitable processor, or combinations thereof.
  • the at least one memory 502 is configured to store processor-executable instructions 506 and a computing component 508.
  • the computing component 508 can include a set of executable instructions that causes the processing unit 504 to analyze the cData and nData.
  • the computing component 508 can be used to analyze the cData and/or nData received from the cognitive platform and/or platform product coupled with the one or more physiological or monitoring components and/or cognitive testing components coupled to the cognitive platform as described herein.
  • As shown in FIG. 5, the memory 502 also can be used to store data 510, such as but not limited to the nData 512 (including computation results from application of an example classifier model, measurement data from measurement(s) using one or more physiological or monitoring components and/or cognitive testing components) and/or data indicative of the response of an individual to the one or more tasks (cData), including responses to tasks rendered at a user interface of the apparatus 500 and/or tasks generated using an auditory, tactile, or vibrational signal from an actuating component coupled to or integral with the apparatus 500.
  • the data 510 can be received from one or more physiological or monitoring components and/or cognitive testing components that are coupled to or integral with the apparatus 500.
  • the computing device can include the computing component 508.
  • the user interface can be a graphical user interface.
  • the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at least to analyze the cData and/or nData received from the cognitive platform and/or platform product coupled with the one or more physiological or monitoring components and/or cognitive testing components coupled to the cognitive platform as described herein, using the computing component 508.
  • the at least one processing unit 504 also can be configured to execute processor-executable instructions 506 stored in the memory 502 to apply the example predictive model (such as but not limited to a classifier model) to the cData and nData, to generate computation results indicative of the classification of an individual according to likelihood of amyloid burden.
  • the at least one processing unit 504 also executes processor-executable instructions 506 to control a transmission unit to transmit values indicative of the analysis of the cData and/or nData received from the cognitive platform and/or platform product coupled with the one or more physiological or monitoring components and/or cognitive testing components as described herein, and/or controls the memory 502 to store values indicative of the analysis of the cData and/or nData.
  • the measurement data can be collected from measurements using one or more physiological or monitoring components and/or cognitive testing components.
  • the one or more physiological components are configured for performing physiological measurements.
  • the physiological measurements provide quantitative measurement data of physiological parameters and/or data that can be used for visualization of physiological structure and/or functions.
  • the cData can include reaction time, response variance, correct hits, omission errors, number of false alarms (such as but not limited to a response to a non-target), learning rate, spatial deviance, subjective ratings, and/or performance threshold, or data from an analysis, including percent accuracy, hits, and/or misses in the latest completed trial or session.
  • Measures of the reaction time indicate the time the individual takes to initiate a response to an interference (such as but not limited to a target or non-target) from the moment the interference is launched.
  • the performance threshold can be set by the example system or apparatus based on previous measurement cData indicating the individual's performance of the tasks and/or interference.
  • cData include response time (total of reaction time and the time it takes the individual to complete the response to the interference), task completion time, number of tasks completed in a set amount of time, preparation time for task, accuracy of responses, accuracy of responses under set conditions (e.g., stimulus difficulty or magnitude level and association of multiple stimuli), number of responses a participant can register in a set time limit, number of responses a participant can make with no time limit, number of attempts at a task needed to complete a task, movement stability, accelerometer and gyroscope data, and self-rating.
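As a sketch only, a few of these cData measures might be derived from raw trial records as follows (the trial-record format is an assumption made for illustration):

```python
# Each trial record: (is_target, responded, reaction_time_seconds or None).
trials = [
    (True,  True,  0.42),   # correct hit
    (True,  False, None),   # omission error (missed target)
    (False, True,  0.51),   # false alarm (response to a non-target)
    (True,  True,  0.38),   # correct hit
    (False, False, None),   # correct rejection
]

hits = sum(1 for t, r, _ in trials if t and r)
omissions = sum(1 for t, r, _ in trials if t and not r)
false_alarms = sum(1 for t, r, _ in trials if not t and r)

# Reaction time: time from stimulus onset to initiation of a response.
rts = [rt for t, r, rt in trials if t and r and rt is not None]
mean_reaction_time = sum(rts) / len(rts)

correct_rejections = sum(1 for t, r, _ in trials if not t and not r)
percent_accuracy = 100.0 * (hits + correct_rejections) / len(trials)
```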
  • the one or more physiological components can include any means of measuring physical characteristics of the body and nervous system, including electrical activity, heart rate, blood flow, and oxygenation levels, to provide the measurement data (nData 512).
  • This can include camera-based heart rate detection, measurement of galvanic skin response, blood pressure measurement, electroencephalogram, electrocardiogram, magnetic resonance imaging, near-infrared spectroscopy, and/or pupil dilation measures, to provide the measurement data (nData 512).
  • the one or more physiological components can include one or more sensors for measuring parameter values of the physical characteristics of the body and nervous system, and one or more signal processors for processing signals detected by the one or more sensors.
  • Other examples of physiological measurements to provide measurement data (nData 512) include, but are not limited to, the measurement of body temperature, heart or other cardiac-related functioning using an electrocardiograph (ECG), electrical activity using an electroencephalogram (EEG), event-related potentials (ERPs), functional magnetic resonance imaging (fMRI), blood pressure, electrical potential at a portion of the skin, and/or galvanic skin response (GSR).
  • the physiological measurements can be made using, e.g., functional magnetic resonance imaging (fMRI), magneto-encephalogram (MEG), and/or functional near-infrared spectroscopy (fNIRS).
  • the devices for making physiological measurements can include, e.g., an eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation, functional near-infrared spectroscopy (fNIRS), and/or a positron emission tomography (PET) scanner.
  • An EEG-fMRI or MEG-fMRI measurement allows for simultaneous acquisition of electrophysiology (EEG/MEG) data and hemodynamic (fMRI) data.
  • the example apparatus 500 of FIG. 5 can be configured as a computing device for performing any of the example methods described herein.
  • the computing device can include an App program for performing some of the functionality of the example methods described herein.
  • the example apparatus 500 can be configured to communicate with one or more of a monitoring component, a disease monitoring component, and a physiological measurement component, to provide for biofeedback and/or neurofeedback of data to the computing device, for adjusting a type or a difficulty level of one or more of the task, and the interference, to achieve the desired performance level of the individual.
  • Examples of a monitoring component include a device for performing a TOVA® test.
  • Examples of the disease monitoring component include any type of device that can be used for monitoring symptoms of a disease.
  • the biofeedback can be based on physiological measurements of the individual as the individual interacts with the apparatus 500, in which the biofeedback can be used by the computing component 508 to modify the type or a difficulty level of one or more of the task, and the interference.
  • the neurofeedback can be based on measurement and monitoring of the individual using a cognitive and/or a disease monitoring component as the individual interacts with the apparatus 500, in which the neurofeedback can be used by the computing component 508 to modify the type or a difficulty level of one or more of the task, and the interference, based on the measurement data indicating, e.g., the individual's cognitive state, to facilitate generating a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at least to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; and render at the user interface a second instance of the primary task with an interference (configured as a second instance of a secondary task), requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference.
  • the at least one processing unit 504 is configured to render the interference such that it diverts the individual's attention from the second instance of the primary task and is configured as an interruptor or a distraction.
  • the at least one processing unit 504 is configured to use the user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor.
  • the at least one processing unit 504 is further configured to receive data indicative of the first primary response, the first secondary response, the second primary response, and the response to the interference, and to analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response, and analyzing one or both of the first secondary response or the response to the interference, to determine at least one indicator of the cognitive ability of the individual.
  • the at least one processing unit 504 is further configured to execute a first predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • the computing component 508 can be used to receive (including to measure) substantially simultaneously two or more of: (i) the response from the individual to a primary task (providing at least a portion of cData), (ii) a secondary response of the individual to an interference as an instance of a secondary task (providing at least a portion of cData), and (iii) at least one physiological measure of the individual (using a measurement of at least one physiological component to provide at least a portion of nData).
  • the computing component 508 can be used to analyze the cData and/or nData received from the cognitive platform coupled with one or more physiological components as described herein to compute at least one indicator of cognitive abilities.
  • the computing component 508 can be used to compute the at least one indicator and/or apply the predictive model to generate the scoring output.
  • the memory 502 also can be used to store data 510, such as but not limited to the physiological measurement data (nData 512) received from a physiological component coupled to or integral with the apparatus 500 and/or data indicative of the response of an individual to the one or more tasks, including responses to tasks rendered at a user interface of the apparatus 500 and/or tasks generated using an auditory, tactile, and/or vibrational signal from an actuating component coupled to or integral with the apparatus 500, and/or data indicative of one or more of an amount, concentration, or dose titration, or other treatment regimen of a drug, pharmaceutical agent, biologic, or other medication being or to be administered to an individual.
  • the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at least to measure substantially simultaneously two or more of: (i) the response from the individual to a task (providing at least a portion of cData), (ii) a secondary response of the individual to an interference as an instance of a secondary task (providing at least a portion of cData), and (iii) at least one physiological measure of the individual (using a measurement of at least one physiological component to provide at least a portion of nData).
  • the at least one processing unit 504 also executes the processor-executable instructions 506 stored in the memory 502 at least to analyze the cData and/or nData received from the one or more physiological components coupled with the cognitive platform, to compute at least one indicator of cognitive abilities, using the computing component 508.
  • the at least one processing unit 504 also executes processor-executable instructions 506 to control a transmission unit to transmit values indicative of the analysis of the cData and/or nData received from the one or more physiological components coupled with the cognitive platform, and/or controls the memory 502 to store values indicative of the analysis of the cData and/or nData (including the at least one performance metric).
  • the at least one processing unit 504 also may be programmed to execute processor-executable instructions 506 to control a transmission unit to transmit values indicative of the computed performance metrics and/or controls the memory 502 to store values indicative of the performance metrics.
  • the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at, and/or two or more separate times during the time period leading up to, the first timepoint T1 at least to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; and render a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference.
  • the at least one processing unit 504 is configured to use the user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor.
  • the at least one processing unit 504 is configured to render the interference such that it diverts the individual's attention from the second instance of the primary task and is configured as an interruptor or a distraction.
  • the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at, and/or two or more separate times during the time period leading up to, the second timepoint T2 (subsequent to first timepoint T1 ) at least to: render a third instance of a primary task at the user interface, requiring a third primary response from the individual to the third instance of the primary task; and render a fourth instance of the primary task with an interference, requiring a fourth primary response from the individual to the fourth instance of the primary task in the presence of the interference.
  • the at least one processing unit 504 is further configured to receive data indicative of the first primary response, the second primary response, the third primary response, and the fourth primary response, and to analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of two or more of: the first primary response, the second primary response, the third primary response, or the fourth primary response, to determine at least a first indicator of the cognitive ability of the individual.
  • the at least one processing unit 504 is further configured to apply a predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • the at least one processing unit 504 may be further configured to execute the processor-executable instructions 506 stored in the memory 502 at least to: adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders at least one subsequent instance (i.e., at a later timepoint) of the primary task and/or the interference at a second difficulty level.
  • the third or fourth instance of the primary task can be rendered at a same difficulty level, or at a second (different) difficulty level than one or both of the first or second instance of the primary task.
  • the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T2 may be the same or different than the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T1.
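One common way to realize such difficulty adjustment is a simple staircase rule. The sketch below (step sizes, bounds, and success criterion are assumptions, not from the specification) raises the difficulty level after a correct response and lowers it after an incorrect one:

```python
def adjust_difficulty(level, last_response_correct, step_up=1, step_down=1,
                      min_level=1, max_level=10):
    """Staircase adjustment: increase difficulty on success, decrease on
    failure, clamped to [min_level, max_level]. Illustrative rule only."""
    if last_response_correct:
        level = min(max_level, level + step_up)
    else:
        level = max(min_level, level - step_down)
    return level

# A subsequent instance of the primary task and/or interference would then
# be rendered at the adjusted (second) difficulty level.
level = 5
level = adjust_difficulty(level, True)    # success raises difficulty
level = adjust_difficulty(level, False)   # failure lowers it again
```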
  • the at least one processing unit 504 may be further configured to execute the processor-executable instructions 506 stored in the memory 502 at, and/or two or more separate times during the time period leading up to, the third timepoint T3 (subsequent to second timepoint T2) at least to: render a fifth instance of a primary task at the user interface, requiring a fifth primary response from the individual to the fifth instance of the primary task; and render a sixth instance of the primary task with an interference, requiring a sixth primary response from the individual to the sixth instance of the primary task in the presence of the interference.
  • the at least one processing unit 504 is further configured to receive data indicative of the first primary response, the second primary response, the third primary response, the fourth primary response, the fifth primary response and the sixth primary response, and to analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of three or more of: the first primary response, the second primary response, the third primary response, the fourth primary response, the fifth primary response or the sixth primary response, to determine at least a first indicator of the cognitive ability of the individual.
  • the at least one processing unit 504 is further configured to apply a predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • one or both of the fifth or sixth instance of the primary task can be rendered at a same difficulty level, or at a second (different) difficulty level than one or both of the third or fourth instance of the primary task.
  • the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T3 may be the same or different than the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T2.
  • the at least one processing unit 504 further executes the processor-executable instructions 506 stored in the memory 502 to apply a predictive model based at least in part on the at least one indicator determined using any of the examples herein to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • the predictive model can be trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing the at least the first indicator of the cognitive ability of the classified individual and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
  • the trained predictive model can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output.
  • the scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
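As a sketch of this final step, assuming model coefficients have already been estimated from training datasets as described above (the coefficient values and threshold below are placeholders, not values from the specification), applying the trained model and thresholding its predicted value yields a classification-style scoring output:

```python
# Placeholder coefficients, as if previously fit on training datasets of
# individuals with known status: [intercept, indicator, ageScore, genderCode].
beta = [0.10, -1.8, 0.012, 0.05]

def scoring_output(indicator, age_score, gender_code, threshold=0.5):
    """Apply the (assumed already-trained) linear model and map the
    predicted value y-hat to a classification output."""
    y_hat = (beta[0] + beta[1] * indicator
             + beta[2] * age_score + beta[3] * gender_code)
    label = ("amyloid-positive likely" if y_hat >= threshold
             else "amyloid-negative likely")
    return y_hat, label

# Indicator here is a difference-based performance score, e.g. intScore - isoScore.
y_hat, label = scoring_output(indicator=-0.19, age_score=72.0, gender_code=1)
```

The returned label (or the raw predicted value) could then be transmitted as the signal described above for use by a practitioner or healthcare computing system.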
  • the scoring output (such as but not limited to a classification output) for an individual as to a likelihood of onset and/or stage of progression of a neurodegenerative condition can be transmitted as a signal to one or more of a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual.
  • the results of the analysis may be used to modify the difficulty level or other property of the computerized stimuli or interaction (CSI) or other interactive elements.
  • FIG. 6 shows another example apparatus according to the principles herein, configured as a computing device 600 that can be used to implement the cognitive platform according to the principles herein.
  • the example computing device 600 can include a communication module 610 and an analysis engine 612.
  • the communication module 610 can be implemented to receive data indicative of at least one response of an individual to the primary task in the absence of an interference, and/or at least one response of an individual to the primary task that is being rendered in the presence of the interference.
  • the communication module 610 can be implemented to receive substantially simultaneously two or more of: (i) the response from the individual to a primary task, and (ii) a secondary response of the individual to an interference.
  • the analysis engine 612 can be implemented to analyze the data from the at least one sensor component as described herein and/or to analyze the data indicative of the responses to compute at least one indicator of cognitive abilities.
  • the analysis engine 612 can be implemented to apply a predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • the predictive model can be trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing at least the first indicator of the cognitive ability of the classified individual and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
  • the computing device 600 can include processor-executable instructions such that a processor unit can execute an application program (App 614) that a user can implement to initiate the analysis engine 612.
  • the processor-executable instructions can include software, firmware, or other instructions.
  • the analysis engine 612 can be configured to apply the trained predictive model to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output.
  • the scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
  • the example communication module 610 can be configured to implement any wired and/or wireless communication interface by which information may be exchanged between the computing device 600 and another computing device or computing system.
  • wired communication interfaces include, but are not limited to, USB ports, RS232 connectors, RJ45 connectors, Thunderbolt™ connectors, and Ethernet connectors, and any appropriate circuitry associated therewith.
  • wireless communication interfaces may include, but are not limited to, interfaces implementing Bluetooth® technology, Wi-Fi, Wi-Max, IEEE 802.11 technology, radio frequency (RF) communications, Infrared Data Association (IrDA) compatible protocols, Local Area Networks (LAN), Wide Area Networks (WAN), and Shared Wireless Access Protocol (SWAP).
  • the example computing device 600 includes at least one other component that is configured to transmit a signal from the apparatus to a second computing device.
  • the at least one component can include a transmitter or a transceiver configured to transmit a signal including data indicative of a measurement by at least one sensor component to the second computing device.
  • the App 614 on the computing device 600 can include processor-executable instructions such that a processor unit of the computing device implements an analysis engine to analyze data indicative of the individual's response to the rendered tasks and/or interference (either or both with computer-implemented time-varying element) and the response of the individual to the at least one computer-implemented time-varying element to compute at least one indicator of cognitive abilities.
  • the App 614 on the computing device 600 can include processor-executable instructions such that a processor unit of the computing device implements an analysis engine to analyze the data indicative of the individual's response to the rendered tasks and/or interference (either or both with computer-implemented time-varying element) and the response of the individual to the at least one computer-implemented time-varying element to provide a predictive model based on the computed values of the performance metric, to generate a predictive model output indicative of a measure of cognition, a mood, a level of cognitive bias, or an affective bias of the individual.
  • the App 614 can include processor-executable instructions such that the processing unit of the computing device implements the analysis engine to apply a predictive model trained to provide a scoring output as to a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition based at least in part on applying the first predictive model to the at least one indicator, and other metrics and analyses described herein.
  • the App 614 can include processor-executable instructions to provide one or more of: (i) a predictive model output indicative of the cognitive capabilities of the individual, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (iv) a change in the individual's cognitive capabilities, (v) a recommended treatment regimen, (vi) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vii) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
  • the App 614 can include processor-executable instructions such that the processing unit of the computing device implements the analysis engine to apply a second predictive model (including a second classifier) that is trained to provide a second scoring that is indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
  • the second predictive model can be trained using a plurality of training datasets, where each training dataset is associated with a previously classified individual from a group of individuals.
  • Each training dataset includes data representing the scoring output derived as described herein from the application of a first predictive model to the at least one indicator of the cognitive ability of the classified individual.
  • Each training dataset also includes data indicative of one or more of (i) an indication (including a description) of the adverse event the individual experienced in response to administration of the pharmaceutical agent, drug, or biologic, or in response to a change in one or more of the amount, concentration, or dose titration of that pharmaceutical agent, drug, or biologic, (ii) the treatment regimen of the individual and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of the regimen in connection with the treatment or management of symptoms of the neurodegenerative condition, or (iii) a type and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of any behavioral therapy, counseling, or physical exercise the individual is undergoing in connection with the treatment or management of symptoms of the neurodegenerative condition.
  • the trained second predictive model can be applied to the scoring output (including a classification output) of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate the second scoring.
  • the App 614 can be configured to receive measurement data including physiological measurement nData of an individual received from a physiological component, and/or cData indicative of the response of an individual to a task and/or an interference rendered at a user interface of the apparatus 500 (as described in greater detail below), and/or data indicative of one or more of an amount, concentration, or dose titration, or other treatment regimen of a drug, pharmaceutical agent, biologic, or other medication being or to be administered to an individual.
  • FIG. 7A shows a non-limiting example system, method, and apparatus according to the principles herein, where the platform product (which may include an APP) is configured as a cognitive platform 702 that is separate from, but configured for coupling with, one or more of the physiological components 704.
  • FIG. 7B shows another non-limiting example system, method, and apparatus according to the principles herein, where the platform product (which may include an APP) is configured as an integrated device 710, where the cognitive platform 712 is integrated with one or more of the physiological components 714.
  • FIG. 8 shows a non-limiting example implementation where the platform product (which may include an APP) is configured as a cognitive platform 802 that is configured for coupling with a physiological component 804.
  • the cognitive platform 802 is configured as a tablet including at least one processor programmed to implement the processor-executable instructions associated with the tasks and CSIs described hereinabove, to receive cData associated with user responses from the user interaction with the cognitive platform 802, to receive the nData from the physiological component 804, to analyze the cData and/or nData as described hereinabove, and to analyze the cData and/or nData to provide a measure of the individual's physiological condition and/or cognitive condition.
  • the cognitive platform can be configured to analyze the differences in the individual's performance based on determining the differences between sets of data, each set of data including data indicative of the user's responses to the tasks in the presence and in the absence of interference and the nData, and/or adjust the difficulty level of the computerized stimuli or interaction (CSI) or other interactive elements based on the individual's performance determined in the analysis and based on the analysis of the cData and/or nData.
  • the cognitive platform can be configured to provide an output or other feedback from the platform product indicative of at least one of: (i) the individual's performance, (ii) cognitive assessment, (iii) projected response to cognitive treatment, or (iv) assessed measures of cognition.
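The analysis-then-adjustment loop described above can be read as: compute the performance difference between the with-interference and without-interference conditions, then move the difficulty level accordingly. The sketch below is a hypothetical illustration — the cost formula, the thresholds, and the step size are assumptions, not values from this disclosure:

```python
def interference_cost(perf_isolation, perf_interference):
    """Relative drop in primary-task performance when the interference is
    present, versus the primary task performed in isolation."""
    return (perf_isolation - perf_interference) / perf_isolation

def adjust_difficulty(level, cost, low=0.10, high=0.30, step=1):
    """Raise the difficulty level when the individual tolerates the
    interference well (low cost); lower it when the cost is high."""
    if cost < low:
        return level + step
    if cost > high:
        return max(1, level - step)
    return level
```

A per-session controller could call `adjust_difficulty` between blocks of trials, feeding back each block's measured interference cost.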
  • the physiological component 804 is configured as an EEG device mounted to a user's head, to perform the measurements before, during and/or after user interaction with the cognitive platform 802, to provide the nData.
  • the EEG device can be a low-cost EEG device for medical treatment validation and personalized medicine.
  • the low-cost EEG device can be easier to use and has the potential to vastly improve the accuracy and the validity of medical applications.
  • the platform product may be configured as an integrated device including the EEG component coupled with the cognitive platform, or as a cognitive platform that is separate from, but configured for coupling with the EEG component.
  • the user interacts with a cognitive platform, and the EEG device is used to perform physiological measurements of the user. Changes, if any, in EEG measurements data (such as brainwaves) are monitored based on the actions of the user in interacting with the cognitive platform.
  • the nData (e.g., brainwave measurements) using the EEG device can be collected and analyzed to detect changes in the EEG measurements. This analysis can be used to determine the types of response from the user, such as whether the user is performing according to an optimal or desired profile.
  • the nData from the EEG measurements can be used to identify changes in user performance/condition that indicate that the cognitive platform treatment is having the desired effect.
  • the nData from the EEG measurements can also be used to determine the type of tasks and/or CSIs that work for a given user.
  • the analysis can be used to determine whether the cognitive platform should be configured to provide tasks and/or CSIs to enforce or diminish these user results detected by the EEG device by adjusting the user's experience when interacting with the cognitive platforms.
  • measurements are made using a cognitive platform that is configured for coupling with an fMRI device, for use in medical application validation and personalized medicine.
  • Consumer-level fMRI devices may be used to improve the accuracy and the validity of medical applications by tracking and detecting changes in brain part stimulation.
  • fMRI measurements can be used to provide measurement data of the cortical thickness and other similar measurement data.
  • the user interacts with a cognitive platform, and the fMRI is used to measure physiological data.
  • the user is expected to have stimulation of a particular brain part or combination of brain parts (such as but not limited to the prefrontal cortex, the visual cortex, or the hippocampus) based on the actions of the user while interacting with the cognitive platform.
  • the platform product may be configured as an integrated device including the fMRI component coupled with the cognitive platform, or as a cognitive platform that is separate from, but configured for coupling with the fMRI component.
  • measurement can be made of the stimulation of portions of the user's brain, and analysis can be performed to detect changes in the measurements to determine whether the user exhibits the desired responses.
  • the fMRI can be used to collect measurement data to be used to identify the progress of the user in interacting with the cognitive platform.
  • the analysis can be used to determine whether the cognitive platform should be configured to provide tasks and/or CSIs to enforce or diminish these user results detected by the fMRI, by adjusting the user's experience in interacting with the cognitive platform.
  • systems, methods, or apparatus can be configured to make adjustments in real time to one or both of the difficulty levels or the type of the tasks and/or interference (including CSIs).
  • An example system, method, and apparatus can be configured to train a predictive model of a measure of the cognitive capabilities of individuals based on feedback data from the output for individuals that are previously classified as to the measure of cognitive abilities of interest.
  • a predictive model (including a condition classifier) can be trained using a plurality of training datasets, in which each training dataset is associated with a previously classified individual from a group of individuals.
  • Each training dataset includes data representing the at least one indicator of the cognitive ability of the classified individual and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual, based on the classified individual's interaction with an example apparatus, system, or computing device described herein.
  • the example classifier also can take as input at least one of: (i) data indicative of the performance of the classified individual at a cognitive test, (ii) data indicative of the performance of the classified individual at a behavioral test, or (iii) data indicative of a diagnosis of a status or progression of the neurodegenerative condition.
  • the trained classifier can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output (such as but not limited to a classification output).
  • the scoring output (including a classification output) can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
  • the at least one processing unit can be programmed to cause an actuating component of a system (including the cognitive platform) to effect auditory, tactile, and/or vibrational computerized elements to effect the stimulus or other interaction with the individual.
  • the at least one processing unit can be programmed to cause a component of the cognitive platform to receive data indicative of at least one response from the individual based on the user interaction with the task and/or interference, including responses received at an input device.
  • the at least one processing unit can be programmed to cause the user interface to receive the data indicative of at least one response from the individual.
  • the data indicative of the response of the individual to a task and/or an interference can be measured using at least one sensor device contained in and/or coupled to an example system or apparatus herein, such as but not limited to a gyroscope, an accelerometer, a motion sensor, a position sensor, a pressure sensor, an optical sensor, an auditory sensor, a vibrational sensor, a video camera, a pressure-sensitive surface, a touch-sensitive surface, or other type of sensor.
  • the data indicative of the response of the individual to the task and/or an interference can be measured using other types of sensor devices, including a video camera, a microphone, a joystick, a keyboard, a mouse, a treadmill, an elliptical machine, a bicycle, a stepper, or a gaming system (including a Wii®, a Playstation®, an Xbox®, or another gaming system).
  • the data can be generated based on physical actions of the individual that are detected and/or measured using the at least one sensor device when the individual executes a response to the stimuli presented with the task and/or interference.
  • the user may respond to tasks by interacting with the computer device.
  • the user may execute a response using a keyboard for alpha-numeric or directional inputs; a mouse for GO/NO-GO clicking, screen location inputs, and movement inputs; a joystick for movement inputs, screen location inputs, and clicking inputs; a microphone for audio inputs; a camera for still or motion optical inputs; sensors such as accelerometer and gyroscopes for device movement inputs; among others.
  • Non-limiting example inputs for a game system include but are not limited to a game controller for navigation and clicking inputs, a game controller with accelerometer and gyroscope inputs, and a camera for motion optical inputs.
  • Example inputs for a mobile device or tablet include a touch screen for screen location information inputs, virtual keyboard alpha-numeric inputs, GO/NO-GO tapping inputs, and touch screen movement inputs; accelerometer and gyroscope motion inputs; a microphone for audio inputs; and a camera for still or motion optical inputs, among others.
  • data indicative of the individual's response can include physiological sensors/measures to incorporate inputs from the user's physical state, such as but not limited to electroencephalogram (EEG), magnetoencephalography (MEG), heart rate, heart rate variability, blood pressure, weight, eye movements, pupil dilation, electrodermal responses such as the galvanic skin response, blood glucose level, respiratory rate, and blood oxygenation.
  • the individual may be instructed to provide a response via a physical action of clicking a button and/or moving a cursor to a correct location on a screen, head movement, finger or hand movement, vocal response, eye movement, or other action of the individual.
  • an individual's response to a task or interference rendered at the user interface that requires a user to navigate a course or environment or perform other visuo-motor activity may require the individual to make movements (such as but not limited to steering) that are detected and/or measured using at least one type of the sensor device.
  • the data from the detection or measurement provides the data indicative of the response.
  • an individual's response to a task or interference rendered at the user interface that requires a user to discriminate between a target and a non-target may require the individual to make movements (such as but not limited to tapping or other spatially or temporally discriminating indication) that are detected and/or measured using at least one type of the sensor device.
  • the data that is collected by a component of the system or apparatus based on the detection or other measurement of the individual's movements provides the data indicative of the individual's responses.
  • the example system, method, and apparatus can be configured to apply the predictive model, using computational techniques and machine learning tools, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, or artificial neural networks, to the data indicative of the individual's response to the tasks and/or interference, and/or data from one or more physiological measures, to create composite variables or profiles that are more sensitive than each measurement alone for generating a predictive model output indicative of the cognitive response capabilities of the individual.
  • the predictive model output can be configured for other indications such as but not limited to detecting an indication of a disease, disorder or cognitive condition, or assessing cognitive health.
  • the example predictive models herein can be trained to be applied to data collected from interaction sessions of individuals with the cognitive platform to provide the output.
  • the predictive model can be used to generate a standards table, which can be applied to the data collected from the individual's response to task and/or interference to classify the individual's cognitive response capabilities.
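A minimal, hypothetical sketch of the composite-variable and standards-table idea follows — the norms, weights, and classification bounds below are invented for illustration. Each raw measure is standardized against population norms, the standardized measures are combined into a weighted composite (which can be more sensitive than any single measure alone), and the composite is looked up in a standards table:

```python
def zscores(values, norms):
    """Standardize each raw measure against population norms (mean, sd)."""
    return [(v - m) / s for v, (m, s) in zip(values, norms)]

def composite_score(values, norms, weights):
    """Weighted composite of the standardized measures."""
    return sum(w * z for w, z in zip(weights, zscores(values, norms)))

def classify_with_table(score, standards):
    """Look up the composite score in a standards table given as
    (upper_bound, label) rows ordered by increasing bound."""
    for bound, label in standards:
        if score <= bound:
            return label
    return standards[-1][1]
```

For example, a reaction-time measure and an accuracy measure could be standardized and averaged, then classified against bounds derived from a normative cohort.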
  • Non-limiting examples of assessment of cognitive abilities include assessment scales or surveys such as the Mini Mental State Exam, CANTAB cognitive battery, Test of Variables of Attention (TOVA), Repeatable Battery for the Assessment of Neuropsychological Status, Clinical Global Impression scales relevant to specific conditions, Clinician's Interview-Based Impression of Change, Severe Impairment Battery, Alzheimer's Disease Assessment Scale, Positive and Negative Syndrome Scale, Schizophrenia Cognition Rating Scale, Conners Adult ADHD Rating Scales, Hamilton Rating Scale for Depression, Hamilton Anxiety Scale, Montgomery-Åsberg Depression Rating Scale, Young Mania Rating Scale, Children's Depression Rating Scale, Penn State Worry Questionnaire, Hospital Anxiety and Depression Scale, Aberrant Behavior Checklist, Activities for Daily Living scales, ADHD self-report scale, Positive and Negative Affect Schedule, Depression Anxiety Stress Scales, Quick Inventory of Depressive Symptomatology, and PTSD Checklist.
  • the assessment may test specific functions of a range of cognitions in cognitive or behavioral studies, including tests for perceptive abilities, reaction and other motor functions, visual acuity, long-term memory, working memory, short-term memory, logic and decision-making, and other specific example measurements, including but not limited to TOVA, MOT (motion-object tracking), SART (Sustained Attention to Response Task), CDT (change detection task), UFOV (useful field of view), Filter task, WAIS (Wechsler Adult Intelligence Scale) digit symbol, Stroop task, Simon task, Attentional Blink, N-back task, PRP (Psychological Refractory Period) task, task-switching test, and Flanker task.
  • the example systems, methods, and apparatus according to the principles described herein can be applicable to many different types of neuropsychological conditions, such as but not limited to dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, Huntington's disease, or other neurodegenerative condition.
  • the instant disclosure is directed to computer-implemented devices formed as example cognitive platforms configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more tasks, to provide a user performance metric.
  • the example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's condition (including physiological condition and/or cognitive condition).
  • Non-limiting example cognitive platforms can be configured to classify an individual as to a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition, and/or potential efficacy of use of the cognitive platform when the individual is being administered (or about to be administered) a drug, biologic or other pharmaceutical agent, based on the data collected from the individual's interaction with the cognitive platform and/or metrics computed based on the analysis (and associated computations) of that data.
  • the neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
  • Any classification of an individual as to likelihood of onset and/or stage of progression of a neurodegenerative condition can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual.
  • the cognitive platform can be configured as any combination of a medical device platform, a monitoring device platform, a screening device platform, or other device platform.
  • the instant disclosure is also directed to example systems that include cognitive platforms that are configured for coupling with one or more physiological or monitoring component and/or cognitive testing component.
  • the systems include cognitive platforms that are integrated with the one or more other physiological or monitoring component and/or cognitive testing component.
  • the systems include cognitive platforms that are separately housed from and configured for communicating with the one or more physiological or monitoring component and/or cognitive testing component, to receive data indicative of measurements made using such one or more components.
  • the processing unit can be programmed to control the user interface to modify a temporal length of the response window associated with a response-deadline procedure.
  • the processing unit can be further programmed to control the user interface to render the task as a continuous visuo-motor tracking task.
  • the processing unit controls the user interface to render the interference as a target discrimination task.
  • a target discrimination task may also be referred to as a perceptual reaction task, in which the individual is instructed to perform a two-feature reaction task including target stimuli and non-target stimuli through a specified form of response.
  • that specified type of response can be for the individual to make a specified physical action in response to a target stimulus (e.g., move or change the orientation of a device, tap on a sensor-coupled surface such as a screen, move relative to an optical sensor, make a sound, or other physical action that activates a sensor device) and refrain from making such specified physical action in response to a non-target stimulus.
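The trial bookkeeping implied by such a two-feature reaction task can be sketched as follows; the outcome labels are conventional signal-detection terms rather than terms from this disclosure:

```python
from collections import Counter

def score_trial(is_target, responded):
    """Classify one trial of a target discrimination task in which the
    individual should respond to targets and withhold for non-targets."""
    if is_target:
        return "hit" if responded else "miss"
    return "false_alarm" if responded else "correct_rejection"

def tally(trials):
    """Aggregate (is_target, responded) pairs across a block of trials."""
    return Counter(score_trial(t, r) for t, r in trials)
```

The four outcome counts are the raw material for accuracy and hit-rate measures discussed later in this section.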
  • the individual is required to perform a visuomotor task (as a primary task) with a target discrimination task as an interference (an instance of a secondary task) (either or both including a computer-implemented time-varying element).
  • a programmed processing unit renders visual stimuli that require fine motor movement as reaction of the individual to the stimuli.
  • the visuomotor task is a continuous visuomotor task.
  • the processing unit is programmed to alter the visual stimuli and record data indicative of the motor movements of the individual over time (e.g., at regular intervals including 1 , 5, 10, or 30 times per second).
  • Example stimuli rendered using the programmed processing unit for a visuomotor task requiring fine motor movement may be a visual presentation of a path that an avatar is required to remain within.
  • the programmed processing unit may render the path with certain types of obstacles that the individual is either required to avoid or to navigate towards.
  • the fine motor movements effected by the individual, such as but not limited to tilting or rotating a device, are measured using an accelerometer and/or a gyroscope (e.g., to steer or otherwise guide the avatar on the path while avoiding or crossing the obstacles as specified).
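By way of illustration only, the mapping from measured device tilt to avatar steering described above might be sketched as follows; the function names, tilt limit, and speed scaling are illustrative assumptions and not part of the disclosure:

```python
import math

def steering_from_tilt(roll_radians, max_tilt=math.radians(30), max_speed=4.0):
    """Map a device roll angle (read from the accelerometer and/or gyroscope)
    to a lateral avatar speed, clamping at a maximum usable tilt."""
    tilt = max(-max_tilt, min(max_tilt, roll_radians))
    return max_speed * tilt / max_tilt

def on_path(avatar_x, path_center_x, path_half_width):
    """True while the avatar remains within the rendered path."""
    return abs(avatar_x - path_center_x) <= path_half_width
```

Here the roll angle is clamped at ±30 degrees and scaled linearly to a lateral speed; any monotonic mapping could be substituted.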
  • the apparatus may be configured to instruct the individual to provide the response to the computer-implemented time-varying element as an action that is read by one or more sensors, such as a movement that is sensed using a gyroscope or accelerometer or a motion or position sensor, or a touch that is sensed using a touch-sensitive, pressure sensitive or capacitance-sensitive sensor.
  • the task and/or interference can be a visuomotor task, a target discrimination task, and/or a memory task.
  • the response-deadline can be adjusted between trials or blocks of trials to manipulate the individual's performance characteristics towards certain goals.
  • a common goal is driving the individual's average response accuracy towards a certain value by controlling the response deadline.
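A minimal sketch of a between-block response-deadline adjustment that drives average response accuracy toward a target value; the function name, step size, and bounds are assumptions for illustration:

```python
def adjust_response_deadline(deadline_ms, block_accuracy, target_accuracy=0.80,
                             step_ms=50, min_ms=200, max_ms=2000):
    """Nudge the response window between trials or blocks of trials so the
    individual's average response accuracy is driven toward the target value.

    If the individual is more accurate than the target, the deadline is
    shortened (the task gets harder); if less accurate, it is lengthened.
    """
    if block_accuracy > target_accuracy:
        deadline_ms -= step_ms
    elif block_accuracy < target_accuracy:
        deadline_ms += step_ms
    # keep the deadline inside a usable range
    return max(min_ms, min(max_ms, deadline_ms))
```

For example, a block at 90% accuracy against an 80% target shortens a 600 ms deadline to 550 ms, while a block at 70% lengthens it to 650 ms.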
  • the hit rate may be defined as the number of correct responses to target stimuli divided by the total number of target stimuli presented.
  • the false alarm rate can be calculated as the number of responses to distractor stimuli divided by the total number of distractor stimuli presented.
  • the miss rate can be calculated as the number of non-responses to target stimuli divided by the total number of incorrect responses (i.e., the number of non-responses to target stimuli plus the number of responses to distractor stimuli).
  • the correct response rate can be calculated as the proportion of correct responses on trials not containing a signal (e.g., where the individual correctly responds to the distractor by refraining from effecting an action, such as a tap or other action, when the distractor is rendered).
  • the correct response rate may be calculated as the number of non-responses to the distractor stimuli divided by the number of non-responses to the distractor stimuli plus the number of responses to the target stimuli.
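The rate measures defined above can be computed directly from trial counts; the following is a minimal sketch (the function and variable names are illustrative only):

```python
def response_rates(hits, misses, false_alarms, correct_rejections):
    """Compute the rate measures described above.

    hits: responses to target stimuli
    misses: non-responses to target stimuli
    false_alarms: responses to distractor stimuli
    correct_rejections: non-responses to distractor stimuli
    """
    targets = hits + misses                          # target stimuli presented
    distractors = false_alarms + correct_rejections  # distractor stimuli presented

    hit_rate = hits / targets
    false_alarm_rate = false_alarms / distractors
    # miss rate: non-responses to targets over all incorrect responses
    miss_rate = misses / (misses + false_alarms)
    # correct response rate: non-responses to distractors over
    # non-responses to distractors plus responses to targets
    correct_response_rate = correct_rejections / (correct_rejections + hits)
    return hit_rate, false_alarm_rate, miss_rate, correct_response_rate
```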
  • the tasks and/or interference are presented to the individual in two or more trials and/or sessions, with an interspersed interval between each trial and/or session.
  • the computing system is configured to implement the tasks and/or interference in the subsequent trial(s) and/or session(s) at a difficulty level that is changed or maintained the same from one trial to another and/or from one session to another.
  • the difficulty level in each subsequent trial and/or each subsequent session can be dependent on the performance of the individual in the previous trial and/or previous session.
  • Based on an analysis by the computing system indicating that the number of correct inputs in the responses made by the individual in a previous trial and/or session increases or reaches a specified threshold, the computing system is configured to implement the tasks and/or interference in the subsequent trial and/or session at a higher difficulty level than the previous trial and/or session.
  • Based on an analysis by the computing system indicating that the number of correct inputs in the responses made by the individual is decreased, is at or below a specified threshold, achieves a specified level of failure, or fails to achieve a level of success, in the previous trial and/or session, the computing system is configured to implement the tasks and/or interference in the subsequent trial and/or session at a lower difficulty level than the previous trial and/or session.
  • the computing system is configured to implement the tasks and/or interference in the subsequent trial(s) and/or session(s) at a difficulty level that varies in a step-wise and/or peaks-and-valleys fashion.
  • the computing system can be configured to modify the difficulty level of the primary task, or of the interference, or of some combination of the primary task and the interference.
  • the modulation of the difficulty level may be based on either the data indicative of the actual performance of the individual in performing the task or interference (as determined by measurement as the input to a task or interference) or a more indirect parameter governed by the analysis, e.g., a performance metric such as but not limited to the interference cost.
  • the level of difficulty of the task and/or the interference can be adjusted based on an adaptive staircase algorithm at an accuracy of about 50%, about 55%, about 60%, about 65%, about 70%, about 75%, about 80%, about 85%, or about 90% or more.
  • the computing system can be configured to modify the difficulty level such that the platform is specifically tailored to an individual, e.g., by maintaining the difficulty level at or around a threshold success rate for the individual.
  • the computing system can be configured to target the difficulty level to maintain a substantially constant error rate for an individual (e.g., to maintain approximately 80% response accuracy).
  • the computing system can be configured to target the difficulty level to maintain an accuracy of performance from the individual of about 50%, about 55%, about 60%, about 65%, about 70%, about 75%, about 80%, about 85%, or about 90% or more.
  • the difficulty level of a task for a given individual may be determined by first implementing the task without interference.
  • the difficulty level can be changed until analysis of the measured data indicates that the individual is performing at a specified threshold level (e.g., a specified percent accuracy).
  • the difficulty of the task (potentially including a computer-implemented time-varying element) adapts with every stimulus that is presented, which can occur more frequently than adaptation on a regular time interval (e.g., every 5 seconds, every 10 seconds, every 20 seconds, or another regular schedule).
  • the difficulty of a continuous task (potentially including a computer-implemented time-varying element) can be adapted on a set schedule, such as but not limited to every 30 seconds, 10 seconds, 1 second, 2 times per second, or 30 times per second.
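One non-limiting way to implement the adaptive staircase described above is a weighted (Kaernbach-style) up/down rule; the function name and step sizes below are assumptions, chosen so the procedure converges near 80% accuracy:

```python
def staircase_update(difficulty, correct, up_step=0.01, down_step=0.04,
                     lo=0.0, hi=1.0):
    """Weighted up/down staircase on a normalized difficulty level.

    After a correct response the difficulty increases by up_step; after an
    incorrect response it decreases by down_step.  The level converges where
    p * up_step = (1 - p) * down_step, i.e. p = down_step / (up_step + down_step);
    with the defaults above the target accuracy is 0.04 / 0.05 = 80%.
    """
    difficulty = difficulty + up_step if correct else difficulty - down_step
    # clamp to the valid difficulty range
    return max(lo, min(hi, difficulty))
```

Other target accuracies (e.g., about 50%, 70%, or 90%) follow by changing the ratio of the two step sizes.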
  • the length of time of a trial depends on the number of iterations of rendering (of the tasks/interference) and receiving (of the individual's responses) and can vary in time.
  • a trial can be on the order of about 500 milliseconds, about 1 second (s), about 10 s, about 20 s, about 25 s, about 30 s, about 45 s, about 60 s, about 2 minutes, about 3 minutes, about 4 minutes, about 5 minutes, or greater.
  • Each trial may have a pre-set length or may be dynamically set by the processing unit (e.g., dependent on an individual's performance level or a requirement of adapting from one level to another).
  • the task and/or interference can be modified to target changes in one or more specific metrics, by selecting the features, trajectory, and response window of the targeting task, and the level/type of parallel task interference, to progressively require improvements in those metrics in order for the apparatus to indicate to an individual that the task has been successfully performed.
  • This could include specific reinforcement, including explicit messaging, to guide the individual to modify performance according to the desired goals.
  • the indication of the modification of the cognitive abilities can include a change in a measure of one or more of affective bias, mood, level of cognitive bias, sustained attention, selective attention, attention deficit, impulsivity, inhibition, perceptive abilities, reaction and other motor functions, visual acuity, long-term memory, working memory, short-term memory, logic, and decision-making.
  • Example systems, methods and apparatus according to the principles herein can be implemented using a programmed computing device including at least one processing unit, to determine a potential biomarker for clinical populations.
  • a programmed processing unit is configured to execute processor- executable instructions at least to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; and render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task.
  • the programmed processing unit is configured to use the user interface to instruct the individual not to respond to an interference with the primary task that is configured as a distraction and to respond to an interference with the primary task that is configured as an interruptor.
  • the programmed processing unit is configured to render at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference.
  • the first interference is configured to divert the individual's attention from the second instance of the primary task and is configured as an instance of the secondary task that is rendered as an interruptor or a distraction.
  • the programmed processing unit is also configured to receive data indicative of the first primary response, the first secondary response, the second primary response, and the response to the interference; and to adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders one or both of a third instance of the primary task or the interference at a second difficulty level.
  • the programmed processing unit is further configured to render at the user interface a third instance of the primary task with a second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference; receive data indicative of the third primary response and the response to the second interference; and analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of two or more of the first primary response, the second primary response, and the third primary response, and also analyzing at least one of the first secondary response, to determine at least one indicator of the cognitive ability of the individual.
  • the programmed processing unit is further configured to execute a first predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
  • the first predictive model is trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing the at least one indicator of the cognitive ability of the classified individual and data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
  • the trained predictive model can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output.
  • the scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
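By way of illustration only, training a predictive model on indicator/diagnosis training datasets and applying it to a new individual's indicators might be sketched with a simple nearest-centroid classifier; the disclosure does not prescribe a model type, and the names and labels here are assumptions:

```python
def train_centroids(training_datasets):
    """Train a minimal predictive model: one centroid of the cognitive-ability
    indicators per diagnostic label.

    training_datasets: list of (indicator_vector, diagnosis_label) pairs,
    one per previously classified individual.
    """
    sums, counts = {}, {}
    for indicators, label in training_datasets:
        acc = sums.setdefault(label, [0.0] * len(indicators))
        for i, value in enumerate(indicators):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def scoring_output(centroids, indicators):
    """Apply the trained model to a new individual's indicators and return
    a classification output: the label of the nearest centroid."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(indicators, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))
```

In practice the first predictive model could be any trained classifier or regressor; the point illustrated is only the mapping from indicator data to a scoring output.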
  • the example processing unit is also configured, based at least in part on the scoring output (such as but not limited to a classification output), to generate an output to the user interface indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to a change in one or more of a recommended amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
  • the processing unit also can be configured to apply a second predictive model (including a second classifier) that is trained to provide a second scoring that is indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
  • the second predictive model can be trained using a plurality of training datasets, where each training dataset is associated with a previously classified individual from a group of individuals.
  • Each training dataset includes data representing the scoring output derived as described herein from the application of a first predictive model to the at least one indicator of the cognitive ability of the classified individual.
  • Each training dataset also includes data indicative of one or more of (i) an indication (including a description) of the adverse event the individual experienced in response to administration of the pharmaceutical agent, drug, or biologic, or in response to a change in one or more of the amount, concentration, or dose titration of that pharmaceutical agent, drug, or biologic, (ii) the treatment regimen of the individual and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of the regimen in connection with the treatment or management of symptoms of the neurodegenerative condition, or (iii) a type and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of any behavioral therapy, counseling, or physical exercise the individual is undergoing in connection with the treatment or management of symptoms of the neurodegenerative condition.
  • the trained second predictive model can be applied to the scoring output (including a classification output) of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate the second scoring.
  • the processing unit can be further configured to render a second instance of the task at the user interface, requiring a second response from the individual to the second instance of the task, and analyze a difference between the data indicative of the first response and the second response to compute an interference cost as a measure of at least one additional indication of cognitive abilities of the individual.
  • a medical, healthcare, or other professional can gain a better understanding of potential adverse events which may occur (or potentially are occurring) if the individual is administered a particular type of, amount, concentration, or dose titration of a pharmaceutical agent, drug, biologic, or other medication, including potentially affecting cognition.
  • a searchable database is provided herein that includes data indicative of the results of the analysis of the performance metrics (including a condition indicator) for particular individuals, along with known levels of efficacy of at least one type of pharmaceutical agent, drug, biologic, or other medication experienced by the individuals in interacting with the cognitive platform, and/or quantifiable information on one or more adverse events experienced by the individual with administration of the at least one type of pharmaceutical agent, drug, biologic, or other medication.
  • the searchable database can be configured to provide metrics for use in determining whether a given individual is a candidate for benefiting from a particular type of pharmaceutical agent, drug, biologic, or other medication, based on the performance metrics obtained for the individual in interacting with the task and/or interference rendered at the computing device, and/or the scoring output (such as but not limited to a classification output) generated using the predictive model, and/or a level of expression of one or more of an amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, a tau protein, or an apolipoprotein, and/or an indicator of efficacy of an administered drug, biologic, or other pharmaceutical agent (such as but not limited to one or more of methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, or crenezumab) in the individual's interaction(s) with the cognitive platform.
  • performance metrics can assist with identifying whether the individual is a candidate for a particular type of drug (such as but not limited to a stimulant, e.g., methylphenidate or amphetamine) or whether it might be beneficial for the individual to have the drug administered in conjunction with a regimen of specified repeated interactions with the tasks and/or interference rendered at the computing device.
  • Other non-limiting examples of a biologic, drug or other pharmaceutical agent applicable to any example described herein include methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, and crenezumab.
  • a medical, healthcare, or other professional can gain a better understanding of potential adverse events which may occur (or potentially are occurring) if the individual is administered a different amount, concentration, or dose titration of a pharmaceutical agent, drug, biologic, or other medication, including potentially affecting cognition.
  • data and other information from an individual is collected, transmitted, and analyzed with their consent.
  • the cognitive platform described in connection with any example system, method and apparatus herein, including a cognitive platform based on interference processing can be based on or include the Project: EVOTM platform by Akili Interactive Labs, Inc., Boston, MA.
  • EEG measurements can provide useful results concerning modulation of the gamma band.
  • Muller et al. (2000). "Modulation of induced gamma band activity in the human EEG by attention and visual information processing.” International Journal of Psychophysiology 38.3: 283-299.
  • modification in the EEG alpha band signal during attentional shifts has also been reported. See, e.g., Sauseng et al. (2005) "A shift of visual spatial attention is selectively associated with human EEG alpha activity." European Journal of Neuroscience 22.11: 2917-2926.
  • the P300 event-related potential (ERP) also provides data cues about attention.
  • Naatanen et al. (1978) "Early selective-attention effect on evoked potential reinterpreted", Acta Psychologica, 42, 313-329, discloses studies of auditory attention, which show that the evoked potential has an enhanced negative response when a subject is presented with infrequent stimuli as compared to frequent stimuli.
  • Naatanen et al. disclose that this negative component, called the mismatch negativity, occurs 100 to 200 ms after the stimulus, a time which falls within the range of the pre-attentive phase of attention.
  • emotional processing and cognitive processing each require interactions within and among specific brain networks.
  • the degree to which a cognitive assessment, monitor, or treatment is successful can depend on the degree of user engagement, attention, and focus.
  • Major depressive disorder and other similar or related disorders can be associated with changes to cognitive capabilities in multiple cognitive domains including attention (concentration), memory (learning), decision making (judgment), comprehension, judgment, reasoning, understanding, learning, and remembering.
  • the cognitive changes associated with depression can contribute to some of the disabilities experienced by individuals with this disorder.
  • the individual's response to a stimulus can vary depending on the state of the individual, including based on the individual's cognitive condition, disease, or executive function disorder. Measurements of the individual's performance can provide insight into the individual's status relative to a cognitive condition, disease, or executive function disorder, including the likelihood of onset and/or stage of progression of the cognitive condition, disease, or executive function disorder.
  • the foregoing non-limiting examples of physiological measurement data, behavioral data, and other cognitive data show that the responses of an individual to tasks can differ based on the type of stimuli. Furthermore, the foregoing examples indicate that the degree to which an individual is affected by a computer-implemented time-varying element, and the degree to which the performance of the individual at a task is affected in the presence of the computer-implemented time-varying element, is dependent on the degree to which the individual exhibits a form of emotional or affective bias. As described herein, the differences in the individual's performance may be quantifiably sensed and measured based on the performance of the individual at cognitive tasks versus stimuli with computer-implemented time-varying elements (e.g., emotional or affective elements).
  • the reported physiological measurement data, behavioral data, and other cognitive data also show that the cognitive or emotional load evoked by a stimulus can vary depending on the state of an individual, including based on the individual's cognitive condition, disease state, or presence or absence of executive function disorder.
  • measurements of the differences in the individual's performance at cognitive tasks versus stimuli with computer-implemented time-varying elements can provide quantifiable insight into the likelihood of onset and/or stage of progression of a cognitive condition, disease, and/or executive function disorder in the individual, such as but not limited to social anxiety, depression, bipolar disorder, major depressive disorder, post-traumatic stress disorder, schizophrenia, autism spectrum disorder, attention deficit hyperactivity disorder (ADHD), dementia, Parkinson's disease, Huntington's disease or other neurodegenerative condition, Alzheimer's disease, or multiple sclerosis.
  • Attention selectivity was found to depend on neural processes involved in ignoring goal-irrelevant information and on processes that facilitate the focus on goal-relevant information.
  • the publications report neural data showing that when two objects are simultaneously placed in view, focusing attention on one can pull visual processing resources away from the other.
  • Studies were also reported showing that memory depended more on effectively ignoring distractions, and the ability to maintain information in mind is vulnerable to interference by both distraction and interruption. Interference by distraction can be, e.g., an interference that is a non-target, that distracts the individual's attention from the primary task, but that the instructions indicate the individual is not to respond to.
  • Interference by interruption/interruptor can be, e.g., an interference that is a target or two or more targets, that also distracts the individual's attention from the primary task, but that the instructions indicate the individual is to respond to (e.g., for a single target) or choose between/among (e.g., a forced-choice situation where the individual decides between differing degrees of a feature).
  • An example cost measure disclosed in the publications is the percentage change in an individual's performance at a single-tasking task as compared to a multi-tasking task, such that a greater cost (that is, a more negative percentage cost) indicates increased interference when the individual is engaged in multi-tasking as compared to single-tasking.
  • the publications describe an interference cost determined as the difference between an individual's performance on a primary task in isolation versus the primary task with one or more interferences applied, where the interference cost provides an assessment of the individual's susceptibility to interference.
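The percentage-change cost measure described above can be sketched in a few lines (the function and parameter names are illustrative assumptions):

```python
def interference_cost(perf_without, perf_with):
    """Percentage change in performance when the interference is applied,
    relative to performance on the primary task in isolation.  A more
    negative cost indicates greater susceptibility to interference."""
    return 100.0 * (perf_with - perf_without) / perf_without
```

For example, accuracy dropping from 0.90 in isolation to 0.72 under interference yields a cost of -20%.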
  • the tangible benefits of computer-implemented interference processing are also reported.
  • the Nature paper states that multi-tasking performance assessed using computer-implemented interference processing was able to quantify a linear age-related decline in performance in adults from 20 to 79 years of age.
  • the Nature paper also reports that older adults (60 to 85 years old) who interacted with an adaptive form of the computer-implemented interference processing exhibited reduced multitasking costs, with the gains persisting for six (6) months.
  • the Nature paper also reported that age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by the multitasking training (using the computer-implemented interference processing), with enhanced midline frontal theta power and frontal-posterior theta coherence.
  • Interacting with the computer-implemented interference processing resulted in performance benefits that extended to untrained cognitive control abilities (enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting a boost in sustained attention and preservation of multitasking improvement six (6) months later.
  • the example systems, methods, and apparatus are configured to classify an individual as to cognitive abilities and/or to enhance those cognitive abilities based on implementation of interference processing using a computerized cognitive platform.
  • the example systems, methods, and apparatus are configured to implement a form of multi-tasking using the capabilities of a programmed computing device, where an individual is required to perform a primary task and an interference substantially simultaneously, where the task and/or the interference includes a computer-implemented time-varying element, and the individual is required to respond to the computer-implemented time-varying element.
  • the sensing and measurement capabilities of the computing device are configured to collect data indicative of the physical actions taken by the individual during the response execution time to respond to the task at substantially the same time as the computing device collects the data indicative of the physical actions taken by the individual to respond to the computer-implemented time-varying element.
  • the capabilities of the computing devices and programmed processing units to render the task and/or the interference in real time to a user interface, and to measure the data indicative of the individual's responses to the task and/or the interference and the computer-implemented time-varying element in real time and substantially simultaneously can provide quantifiable measures of an individual's cognitive capabilities.
  • the computing devices and programmed processing units are configured to rapidly switch to and from different tasks and interferences.
  • the computing devices and programmed processing units are configured to perform multiple, different tasks or interferences in a row (including for single-tasking, where the individual is required to perform a single type of task for a set period of time).
  • the task and/or interference includes a response deadline, such that the user interface imposes a limited time period for receiving at least one type of response from the individual interacting with the apparatus or computing device.
  • the period of time that an individual is required to interact with a computing device or other apparatus to perform a primary task and/or an interference can be a predetermined amount of time, such as but not limited to about 30 seconds, about 1 minute, about 4 minutes, about 7 minutes, about 10 minutes, or greater than 10 minutes.
  • the example systems, methods, and apparatus can be configured to implement a form of multi-tasking to provide measures of the individual's capabilities in deciding whether to perform one action instead of another and to activate the rules of the current task in the presence of an interference such that the interference diverts the individual's attention from the task, as a measure of an individual's cognitive abilities in executive function control.
  • the example systems, methods, and apparatus can be configured to implement a form of single-tasking, where measures of the individual's performance at interacting with a single type of task (i.e., with no interference) for a set period of time (such as but not limited to navigation task only or a target discriminating task only) can also be used to provide a measure of an individual's cognitive abilities.
  • a session can include a first single-tasking trial (with a first type of task), a second single-tasking trial (with a second type of task), and a multi-tasking trial (a primary task rendered with an interference).
  • a session can include two or more multi-tasking trials (a primary task rendered with an interference).
  • a session can include two or more single-tasking trials (all based on the same type of tasks or at least one being based on a different type of task).
  • the performance can be further analyzed to compare the effects of two different types of interference (e.g., distraction or interruption) on the performance of the various tasks. Some comparisons can include performance without interference, performance with distraction, and performance with interruption.
  • the cost of each type of interference (e.g., distraction cost and interruptor/multi-tasking cost) is analyzed and reported to the individual.
  • the interference processing provides a quantifiable way to measure and improve the ability to process interference events (interruptions and distractions).
  • Interference susceptibility is recognized as a limiting factor across global executive function (including attention and memory) and is known to be fragile in multiple diseases. Changes in EEG signals are shown to occur at neurological loci associated with cognitive control. For example, midline frontal theta (MFT) power as measured by stimulus-locked electroencephalography (EEG) before, during, or after an individual performs the interference processing can provide indications of attention and interference susceptibility.
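The description does not specify how MFT power would be computed; as a rough sketch under stated assumptions, theta-band power can be estimated from a single EEG channel with a simple periodogram. The 4-8 Hz band limits, sampling rate, and electrode choice below are illustrative, not taken from the description.

```python
import numpy as np

def theta_band_power(eeg, fs, band=(4.0, 8.0)):
    """Estimate power in the theta band of a single EEG channel.

    eeg: 1-D array of samples (e.g. from a midline frontal electrode).
    fs:  sampling rate in Hz. The band limits are illustrative.
    """
    # Simple periodogram via the real FFT.
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].sum())

# A 6 Hz oscillation carries far more theta-band power than a 20 Hz one.
fs = 256
t = np.arange(2 * fs) / fs
theta_signal = np.sin(2 * np.pi * 6 * t)
beta_signal = np.sin(2 * np.pi * 20 * t)
```

In practice a windowed estimator (e.g. Welch's method) over stimulus-locked epochs would replace this single-shot periodogram.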
  • the interference can be an instance of a secondary task that includes a stimulus that is either a non-target (as a distraction) or a target (as an interruptor), or a stimulus that is differing types of targets (e.g., differing degrees of a facial expression or other characteristic/feature difference).
  • the example systems, methods, and apparatus herein can be used to collect quantitative measures of the responses from an individual to the task and/or interference, which could not be achieved using normal human capabilities.
  • the example systems, methods, and apparatus herein can be configured to implement a programmed processing unit to render the interference substantially simultaneously with the task over certain time periods.
  • the example systems, methods, and apparatus herein also can be configured to receive the data indicative of the measure of the degree and type of the individual's response to the task substantially simultaneously as the data indicative of the measure of the degree and type of the individual's response to the interference is collected (whether the interference includes a target or a non-target).
  • the example systems, methods, and apparatus are configured to perform the analysis by applying scoring or weighting factors to the measured data indicative of the individual's response to a non-target that differ from the scoring or weighting factors applied to the measured data indicative of the individual's response to a target, in order to compute a cost measure (including an interference cost).
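The differing scoring/weighting factors described above can be sketched as a simple additive score in which responses to targets and non-targets carry different weights. The specific weight values here are hypothetical.

```python
def weighted_response_score(responses, target_weight=1.0, nontarget_weight=-0.5):
    """Score measured responses with differing weights for targets vs. non-targets.

    responses: iterable of (is_target, responded) pairs, where `responded` is
    True when the individual made the physical indication (e.g. a tap).
    The weight values are hypothetical.
    """
    score = 0.0
    for is_target, responded in responses:
        if responded and is_target:
            score += target_weight        # hit on a target (interruptor)
        elif responded and not is_target:
            score += nontarget_weight     # false alarm on a non-target (distraction)
    return score

# Two hits and one false alarm under these weights score 1.0 + 1.0 - 0.5.
score = weighted_response_score([(True, True), (False, True), (True, True)])
```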
  • the cost measure can be computed based on the difference in measures of the performance of the individual at one or more tasks in the absence of interference as compared to the measures of the performance of the individual at the one or more tasks in the presence of interference, where the one or more tasks and/or the interference includes one or more computer-implemented time-varying elements.
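A minimal sketch of the cost measure described above, as a difference between performance without and with interference. The use of mean reaction time as the performance measure and the normalization by the single-task value are assumptions for illustration.

```python
def interference_cost(single_task_rts, multi_task_rts):
    """Interference cost from mean reaction times (seconds) measured on a
    task performed without interference vs. with interference.

    Normalizing by the single-task mean is an assumption made here so the
    cost is comparable across individuals; positive values indicate slowing
    (degraded performance) in the presence of interference.
    """
    single = sum(single_task_rts) / len(single_task_rts)
    multi = sum(multi_task_rts) / len(multi_task_rts)
    return (multi - single) / single

# Slowing from 0.50 s to 0.60 s under interference is a cost of 0.2 (20%).
cost = interference_cost([0.50, 0.50], [0.60, 0.60])
```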
  • the requirement of the individual to interact with (and provide a response to) the computer- implemented time-varying element(s) can introduce cognitive or emotional load that quantifiably affects the individual's capability at performing the task(s) and/or interference due to the requirement for emotional processing to respond to the computer-implemented time-varying element.
  • the interference cost computed based on the data collected herein can provide a quantifiable assessment of the individual's susceptibility to interference.
  • the determination of the difference between an individual's performance on a task in isolation versus a task in the presence of one or more interferences provides an interference cost metric that can be used to assess and classify cognitive capabilities of the individual.
  • the interference cost computed based on the individual's performance of tasks and/or interferences can also provide a quantifiable measure of the individual's cognitive condition, disease state, or presence or stage of an executive function disorder, such as but not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
  • the example systems, methods, and apparatus herein can be configured to perform the analysis of the individual's susceptibility to interference (including as a cost measure such as the interference cost), as a reiterating, cyclical process. For example, where an individual is determined to have minimized interference cost for a given task and/or interference, the example systems, methods, and apparatus can be configured to require the individual to perform a more challenging task and/or interference (i.e., having a higher difficulty level) until the individual's performance metric indicates a minimized interference cost in that given condition, at which point example systems, methods, and apparatus can be configured to present the individual with an even more challenging task and/or interference until the individual's performance metric once again indicates a minimized interference cost for that condition. This can be repeated any number of times until a desired end-point of the individual's performance is obtained.
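The reiterating, cyclical process described above can be sketched as a loop that advances to a harder difficulty level once the measured interference cost falls below a threshold taken (as an assumption here) to indicate a minimized cost.

```python
def run_adaptive_sessions(measure_cost, difficulty_levels, threshold=0.05, max_trials=50):
    """Cycle trials, advancing to a harder level once the interference cost
    is minimized (here: falls below a hypothetical threshold).

    measure_cost(level) -> interference cost for one trial at that level;
    in a real system this would come from measured response data.
    """
    level_idx, history = 0, []
    for _ in range(max_trials):
        cost = measure_cost(difficulty_levels[level_idx])
        history.append((difficulty_levels[level_idx], cost))
        if cost < threshold and level_idx + 1 < len(difficulty_levels):
            level_idx += 1  # minimized cost at this level: present a harder one
    return history

# Simulated individual whose cost shrinks with practice at each level.
counts = {}
def simulated_cost(level):
    counts[level] = counts.get(level, 0) + 1
    return max(0.0, 0.3 - 0.1 * counts[level])

history = run_adaptive_sessions(simulated_cost, [1, 2, 3], max_trials=20)
```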
  • the interference cost can be computed based on measurements of the individual's performance at a single-tasking task (without an interference) as compared to a multi-tasking task (with interference), to provide an assessment.
  • Example systems, apparatus and methods herein are configured to analyze data indicative of the degree to which an individual is affected by a computer- implemented time-varying element, and/or the degree to which the performance of the individual at a task is affected in the presence of the computer-implemented time- varying element, to provide performance metric including a quantified indicator of cognitive abilities of the individual.
  • the performance metric can be used as an indicator of the degree to which the individual exhibits a form of emotional or affective bias.
  • the example systems, methods, and apparatus herein also can be configured to selectively receive data indicative of the measure of the degree and type of the individual's response to an interference that includes a target stimulus (i.e., an interruptor) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the task is collected and to selectively not collect the measure of the degree and type of the individual's response to an interference that includes a non-target stimulus (i.e., a distraction) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the task is collected.
  • the example systems, methods, and apparatus are configured to discriminate between the windows of response of the individual to the target versus non-target by selectively controlling the state of the sensing/measurement components for measuring the response either temporally and/or spatially. This can be achieved by selectively activating or deactivating sensing/measurement components based on the presentation of a target or non-target, or by receiving the data measured for the individual's response to a target and selectively not receiving (e.g., disregarding, denying, or rejecting) the data measured for the individual's response to a non-target.
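A minimal sketch of "selectively not receiving" response data, in which measurements taken while a non-target is presented are disregarded rather than stored. The class and method names are illustrative, not taken from the description.

```python
class ResponseGate:
    """Selectively receive response measurements: data measured while a
    target (interruptor) is presented is accepted, while data measured for
    a non-target (distraction) is disregarded. Names are illustrative."""

    def __init__(self):
        self.accepted = []

    def on_response(self, stimulus_is_target, measurement):
        if stimulus_is_target:
            self.accepted.append(measurement)   # receive target-window data
        # Non-target data is deliberately not received (disregarded).

gate = ResponseGate()
gate.on_response(True, 0.42)    # response to an interruptor: kept
gate.on_response(False, 0.55)   # response to a distraction: dropped
```

The same gating could equally be implemented by deactivating the sensing component itself during non-target windows, per the temporal/spatial control described above.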
  • using the example systems, methods, and apparatus herein can be implemented to provide a measure of the cognitive abilities of an individual in the area of attention, including based on capabilities for sustainability of attention over time, selectivity of attention, and reduction of attention deficit.
  • Other areas of an individual's cognitive abilities that can be measured using the example systems, methods, and apparatus herein include affective bias, mood, level of cognitive bias, impulsivity, inhibition, perceptive abilities, reaction and other motor functions, visual acuity, long-term memory, working memory, short-term memory, logic, and decision-making.
  • the processing unit is configured to control parameters of the tasks and/or interference, such as but not limited to the timing, positioning, and nature of the stimuli, so that the physical actions of the individual can be recorded during the interaction(s).
  • the individual's physical actions are affected by their neural activity during the interactions with the computing device to perform single-tasking and multi-tasking tasks.
  • the science of interference processing shows (based on the results from physiological and behavioral measurements) that the aspect of adaptivity can result in changes in the brain of an individual in response to the training from multiple sessions (or trials) based on neuroplasticity, thereby enhancing the cognitive skills of the individual.
  • the example systems, methods, and apparatus are configured to implement tasks and/or interference with at least one computer-implemented time-varying element, where the individual performs the interference processing.
  • the effect on an individual of performing tasks can tap into novel aspects of cognitive training to enhance the cognitive abilities of the individual.
  • FIGs. 9A - 12D show non-limiting example user interfaces that can be rendered using example systems, methods, and apparatus herein to render the tasks and/or interferences (either or both with computer-implemented time-varying element) for user interactions.
  • the non-limiting example user interfaces of FIGs. 9A - 12D also can be used for one or more of: to display instructions to the individual for performing the tasks and/or interferences, to interact with the computer-implemented time-varying element, to collect the data indicative of the individual's responses to the tasks and/or the interferences and the computer-implemented time-varying element, to show progress metrics, and to provide analysis metrics.
  • FIGs. 9A - 9D show non-limiting example user interfaces rendered using example systems, methods, and apparatus herein.
  • an example programmed processing unit can be used to render to the user interfaces (including graphical user interfaces) display features 900 for displaying instructions to the individual for performing the tasks and/or interference, and metric features 902 to show status indicators from progress metrics and/or results from application of analytics to the data collected from the individual's interactions (including the responses to tasks/interferences) to provide the analysis metrics.
  • the predictive model can be used to provide the analysis metrics provided as a response output.
  • the data collected from the user interactions can be used as input to train the predictive model.
  • as shown in FIG. 9A, an example programmed processing unit also may be used to render to the user interfaces (including graphical user interfaces) an avatar or other processor-rendered guide 904 that an individual is required to control (such as but not limited to navigate a path or other environment in a visuo-motor task, and/or to select an object in a target discrimination task).
  • the display features 900 can be used to instruct the individual what is expected to perform a navigation task while the user interface depicts (using the dashed line) the type of movement of the avatar or other processor-rendered guide 904 required for performing the navigation task.
  • the navigation task may include milestone objects 910 that the individual is required to steer an avatar to cross or avoid, in order to determine the scoring.
  • the display features 900 can be used to instruct the individual what is expected to perform a target discrimination task while the user interface depicts the type of object(s) 906 and 908 that may be rendered to the user interface, with one type of object 906 designated as a target while the other type of object 908 is designated as a non-target, e.g., by being crossed out in this example.
  • the display features 900 can be used to instruct the individual what is expected to perform both a navigation task as a primary task and a target discrimination as a secondary task (i.e., an interference) while the user interface depicts (using the dashed line) the type of movement of the avatar or other processor-rendered guide 904 required for performing the navigation task, and the user interface renders the object type designated as a target object 906 and the object type designated as a non-target object 908.
  • the measured data indicative of the individual's response to the single- tasking task rendered as a targeting task can be analyzed to provide quantitative insight into the cognitive domains of perception (detection & discrimination), motor function (detection & discrimination), impulsivity/inhibitory control, and visual working memory.
  • the measured data indicative of the individual's response to the single- tasking task rendered as a navigation task can be analyzed to provide quantitative insight into the cognitive domains of visuomotor tracking and motor function.
  • the measured data indicative of the individual's response to a primary task (rendered as a navigation task) in the presence of an interference (rendered as a targeting task), in a multi-tasking task, can be analyzed to provide quantitative insight into the cognitive domains of divided attention and interference management.
  • FIGs. 10A - 10D show examples of the features of object(s) (targets or non- targets) that can be rendered as time-varying characteristics to an example user interface, according to the principles herein.
  • FIG. 10A shows an example where the modification to the time-varying characteristics of an aspect of the object 1000 rendered to the user interface is a dynamic change in position and/or speed of the object 1000 relative to environment rendered in the graphical user interface.
  • FIG. 10B shows an example where the modification to the time-varying characteristics of an aspect of the object 1002 rendered to the user interface is a dynamic change in size and/or direction of trajectory/motion, and/or orientation of the object 1002 relative to the environment rendered in the graphical user interface.
  • FIG. 10C shows an example where the modification to the time-varying characteristics of an aspect of the object 1004 rendered to the user interface is a dynamic change in shape or other type of the object 1004 relative to the environment rendered in the graphical user interface.
  • the time-varying characteristic of object 1004 is effected using morphing from a first type of object (a star object) to a second type of object (a round object).
  • the time-varying characteristic of object 1004 is effected by rendering a blendshape as a proportionate combination of a first type of object and a second type of object.
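Rendering a blendshape as a proportionate combination of two shapes can be sketched as per-vertex linear interpolation. Matching vertex order and equal vertex counts between the two shapes are assumptions made here.

```python
def blend_shapes(shape_a, shape_b, t):
    """Blendshape as a proportionate combination of two shapes.

    shape_a, shape_b: lists of (x, y) vertex positions with matching
    vertex order (assumed equal length); t in [0, 1] is the proportion
    of the second shape (t=0 renders the first, t=1 the second).
    """
    return [
        ((1.0 - t) * ax + t * bx, (1.0 - t) * ay + t * by)
        for (ax, ay), (bx, by) in zip(shape_a, shape_b)
    ]

# Halfway through the morph, a vertex moving from (0, 0) to (2, 2) sits at (1, 1).
midpoint = blend_shapes([(0.0, 0.0)], [(2.0, 2.0)], 0.5)
```

Sweeping t over time yields the star-to-round morph described for object 1004.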
  • FIG. 10D shows an example where the modification to the time-varying characteristics of an aspect of the object 1006 rendered to the user interface is a dynamic change in pattern, or color, or visual feature of the object 1006 relative to environment rendered in the graphical user interface (in this non-limiting example, from a star object having a first pattern to a star object having a second pattern).
  • the time-varying characteristic of the object can be a rate of change of a facial expression depicted on or relative to the object.
  • FIGs. 11A - 11T show a non-limiting example of the dynamics of tasks and interferences that can be rendered at user interfaces, according to the principles herein.
  • the primary task is a visuo-motor navigation task
  • the interference is target discrimination (as a secondary task).
  • the individual is required to perform the navigation task by controlling the motion of the avatar 1102 along a path that coincides with the milestone objects 1104.
  • FIGs. 11A - 11T show a non-limiting example implementation where the individual is expected to actuate an apparatus or computing device (or other sensing device) to cause the avatar 1102 to coincide with the milestone object 1104 as the response in the navigation task, with scoring based on the success of the individual at crossing paths with (e.g., hitting) the milestone objects 1104.
  • the individual is expected to actuate an apparatus or computing device (or other sensing device) to cause the avatar 1102 to miss the milestone object 1104, with scoring based on the success of the individual at avoiding the milestone objects 1104.
  • FIGs. 11A - 11C show the dynamics of a target object 1106 (a star having a first type of pattern).
  • FIGs. 11E - 11H show the dynamics of a non-target object 1108 (a star having a second type of pattern).
  • FIGs. 11I - 11T show the dynamics of other portions of the navigation task, where the individual is expected to guide the avatar 1102 to cross paths with the milestone object 1104 in the absence of an interference (an instance of a secondary task).
  • the processing unit of the example system, method, and apparatus is configured to receive data indicative of the individual's physical actions to cause the avatar 1102 to navigate the path.
  • the individual may be required to perform physical actions to "steer" the avatar, e.g., by changing the rotational orientation or otherwise moving a computing device.
  • Such action can cause a gyroscope or accelerometer or other motion or position sensor device to detect the movement, thereby providing measurement data indicative of the individual's degree of success in performing the navigation task.
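A rough sketch of how a motion-sensor reading could be mapped to a steering command for the avatar. The axis convention, clamp range, and scaling are illustrative assumptions, not taken from the description.

```python
def steering_from_tilt(lateral_tilt_g, max_tilt=0.5, sensitivity=1.0):
    """Map a lateral accelerometer reading (in g) to a steering command
    in [-1, 1] for the avatar. The axis convention, clamp range, and
    scaling here are illustrative assumptions.
    """
    clamped = max(-max_tilt, min(max_tilt, lateral_tilt_g))
    return sensitivity * clamped / max_tilt

left_full = steering_from_tilt(-0.5)   # full left tilt -> -1.0
level = steering_from_tilt(0.0)        # level device -> 0.0
```

The same mapping applies to gyroscope-derived rotational orientation; only the input units change.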
  • the processing unit of the example system, method, and apparatus is configured to receive data indicative of the individual's physical actions to perform the target discrimination task.
  • the individual may be instructed prior to a trial or other session to tap, or make other physical indication, in response to display of a target object 1106, and not to tap or make the physical indication in response to display of a non-target object 1108.
  • the target discrimination task acts as an interference (i.e., an instance of a secondary task) to the primary navigation task, in an interference processing multi-tasking implementation.
  • the example systems, methods, and apparatus can cause the processing unit to render a display feature to display the instructions to the individual as to the expected performance.
  • the processing unit of the example system, method, and apparatus can be configured to (i) receive the data indicative of the measure of the degree and type of the individual's response to the primary task substantially simultaneously as the data indicative of the measure of the degree and type of the individual's response to the interference is collected (whether the interference includes a target or a non-target), or (ii) to selectively receive data indicative of the measure of the degree and type of the individual's response to an interference that includes a target stimulus (i.e., an interruptor) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the primary task is collected.
  • FIGs. 12A - 12D show other non-limiting examples of the dynamics of tasks and interferences that can be rendered at user interfaces, according to the principles herein.
  • the primary task is a visuo-motor navigation task
  • the interference is target discrimination (as an instance of a secondary task).
  • the individual is required to perform the visuo-motor navigation task by controlling the motion of the avatar 1202 along a path.
  • the individual is required to provide a response to the tasks in the presence or absence of an interference 1204 (where the interference is rendered at the user interface as a discrimination task between a target or non-target object).
  • the difficulty of a task and/or interference may be adapted with each different stimulus that is presented as a computer-implemented time-varying element.
  • the example system, method, and apparatus herein can be configured to adapt a difficulty level of a task and/or interference one or more times in fixed time intervals or on another set schedule, such as but not limited to each second, in 10 second intervals, every 30 seconds, or at frequencies of once per second, 2 times per second, or more (such as but not limited to 30 times per second).
  • one or more of navigation speed, shape of the course (changing frequency of turns, changing turning radius), and number of obstacles and/or size of obstacles can be changed to modify the difficulty of a navigation game level, with the difficulty level increasing with at least one of increasing the speed, increasing the numbers, or increasing the sizes of obstacles (including types of milestone objects, e.g., some milestone objects to avoid and some milestone objects to cross/coincide with).
  • the difficulty level of a task and/or interference of a subsequent level can also be changed in real-time as feedback, e.g., the difficulty of a subsequent level can be increased or decreased in relation to the data indicative of the performance of the task.
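The real-time difficulty feedback described above can be sketched as a simple staircase rule. The 80% accuracy target and the step size are illustrative values, not taken from the description.

```python
def next_difficulty(current, accuracy, target_rate=0.8, step=0.05):
    """Staircase-style feedback: raise the difficulty of the subsequent
    level when recent accuracy exceeds a target rate, lower it otherwise,
    clamped to [0, 1]. Target rate and step size are illustrative.
    """
    adjusted = current + step if accuracy > target_rate else current - step
    return min(1.0, max(0.0, adjusted))

harder = next_difficulty(0.50, accuracy=0.90)   # strong performance -> harder
easier = next_difficulty(0.50, accuracy=0.50)   # weak performance -> easier
```

The scalar difficulty could then be mapped onto the concrete parameters listed above (navigation speed, turn frequency, obstacle count/size).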
  • the response recorded for the targeting task can be, but is not limited to, a touch, swipe or other gesture relative to a user interface or image collection device (including a touch-screen or other pressure sensitive screen, or a camera) to interact with a user interface.
  • the response recorded for the targeting task can be, but is not limited to, user actions that cause changes in a position, orientation, or movement of a computing device including the cognitive platform, that is recorded using a sensor disposed in or otherwise coupled to the computing device (such as but not limited to a motion sensor or position sensor).
  • the cData and/or nData can be collected in real-time.
  • the adjustments to the type of tasks and/or CSIs can be made in real-time.
  • FIGs. 13A through 16 show the results of non-limiting example measurements made using an example system, method, and apparatus described herein.
  • FIGs. 13A - 13B show physiological measurement data (nData) and other data for a set of individuals aged 70 years or older (both male and female individuals).
  • the example physiological measurement data include data indicative of an amyloid grouping of the individuals (i.e., whether determined to be amyloid positive (+) or amyloid negative (-)), an APOE ε4 status, standard uptake value ratios (SUVR) computed based on positron emission tomography (PET) imaging data, cortical thickness in signature regions for Alzheimer's disease (AD Signature regions), normalized bilateral hippocampal volumes, normalized bilateral whole-brain volumes, and MRI microbleed count (deep vs. lobar).
  • FIGs. 13A - 13B also show that certain of the individuals are also administered a drug (methylphenidate) as compared to a placebo. None of the structural MRI measures differentiated the populations (i.e., the amyloid positive (+) vs. the amyloid negative (-) individuals). See also C. Leurent et al., "A randomized, double blind, placebo controlled trial to study difference in cognitive learning associated with repeated self-administration of remote computer tablet-based application assessing dual-task performance based on amyloid status in healthy elderly volunteers," Clinical Trials on Alzheimer's Disease (CTAD), December 9, 2016.
  • FIGs. 13A - 13B show example data (including nData) which can be used as part of a training dataset for training a predictive model to generate a scoring output indicative of, e.g., amyloid status or APOE expression levels.
  • FIGs. 13A - 13B show the type of data that can be used as part of a training dataset for training another predictive model to generate an output indicative of, e.g., a likelihood of an individual experiencing an adverse event in response to administration of an amount, concentration, or dose titration of the drug methylphenidate, or changes to that amount, concentration, or dose titration of the drug methylphenidate.
  • the example data can be used as part of a training dataset for training a predictive model to generate a scoring output indicative of, e.g., amyloid status or APOE status, or other type of output indicative of a likelihood of the individual experiencing an adverse event in response to administration of the drug methylphenidate, or a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the drug methylphenidate.
  • the predictive model could also be trained as a classifier to generate a classification output to classify individuals as to amyloid status.
  • the generated indication of amyloid status may be used to give some insight into the likelihood of onset or stage of progression of Alzheimer's disease.
  • FIGs. 14A - 14B show plots of the results of performance metrics computed based on the response data measured from multiple interactions of the individuals (characteristics/measurements summarized in FIGs. 13A - 13B) from day 0 to day 28, with an example system or apparatus (configured as a cognitive platform). The analyses are adjusted for the covariates.
  • the computed performance metrics act as indicators, to provide an indication of the effect of the multiple interactions with the cognitive training on divided attention, based on computation of a performance metric (as an interference cost) on reaction time on hits.
  • the example performance metrics act as indicators, with lower values indicating better performance.
  • amyloid (-) individuals have a measurable (and quantifiably significant) improvement after multiple interactions between day 0 and day 28 days with the example cognitive platform, whereas no measurable statistical change is indicated from the measured response data from the amyloid (+) individuals.
  • the effect on divided attention as measured by reaction time on hits shows a positive training effect in both populations (p < 0.001) in both conditions, with greater reduction in the amyloid (-) population in the performance metric (interference cost) after training (p < 0.003).
  • the performance metrics computed based on the individual's interactions provide a distinction in performance metric between individuals having differing amyloid conditions, thereby providing a classification of the individuals as to amyloid group.
  • the amyloid status of an individual potentially may be used to correlate with (or at least provide insight into) a potential likelihood of onset of and/or a stage of progression of Alzheimer's disease (a neurodegenerative condition).
  • the RAVLT™ test is observed to provide some differentiation between the amyloid (+) and (-) individuals of the population.
  • FIG. 16 shows plots of the results of measures from the Test of Variables of Attention (TOVA®) test (a test for sustained attention), where greater performance at Day 28 is observed for both populations (p < 0.001) (i.e., the amyloid positive (+) vs. the amyloid negative (-) individuals), where the numerical improvements in standardized score are 9.0 for amyloid (-) and 9.4 for amyloid (+). No performance differences are observed between populations at either time point.
  • TOVA® Test of Variables of Attention
  • FIG. 17A shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
  • the at least one processing unit is used to render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task.
  • the at least one processing unit is used to render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task.
  • the at least one processing unit is used to render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference.
  • the interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction.
  • the at least one processing unit is used to render a user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor.
  • the at least one processing unit is used to generate a performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response.
  • the at least one processing unit is used to receive data indicative of one or both of an age or a gender identifier of the individual.
  • the at least one processing unit also is used to generate a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the data indicative of (i) at least one of the age or the gender identifier, (ii) the performance score, and (iii) at least one of the first primary response or the first secondary response.
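The description does not disclose the form of the predictive model; as a stand-in sketch, a logistic model with made-up weights maps the inputs listed above (age, gender identifier, performance score, and a first-response measure) to a scoring output in (0, 1). A deployed model would be trained on collected data rather than using these weights.

```python
import math
from dataclasses import dataclass

@dataclass
class ScoringInput:
    age: float                     # years
    gender_id: int                 # e.g. a 0/1 identifier
    performance_score: float       # e.g. interference cost
    first_primary_response: float  # e.g. mean reaction time (s)

def likelihood_of_deficit(x, weights=(0.02, 0.10, 1.50, 0.80), bias=-3.0):
    """Stand-in logistic model producing a scoring output in (0, 1)
    interpretable as a likelihood of onset or stage of progression of a
    deficit. The weights and bias are made up for illustration only."""
    z = bias + (weights[0] * x.age
                + weights[1] * x.gender_id
                + weights[2] * x.performance_score
                + weights[3] * x.first_primary_response)
    return 1.0 / (1.0 + math.exp(-z))

p = likelihood_of_deficit(ScoringInput(72.0, 1, 0.20, 0.50))
```

Under this form, a larger performance score (interference cost) monotonically raises the output likelihood, matching the intended direction of the indicator.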
  • FIGs. 17B-1 - 17B-2 show a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
  • the processing unit is configured to execute a first trial at a first time interval, where the first trial is described in blocks 1722 through 1728 as follows.
  • the at least one processing unit is used to render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task.
  • the at least one processing unit is used to render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task.
  • the at least one processing unit is used to render a user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor.
  • the at least one processing unit is used to render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference.
  • the interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction.
  • the at least one processing unit is used to generate a first performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response.
  • the at least one processing unit is used to receive data indicative of one or both of an age or a gender identifier of the individual.
  • the at least one processing unit also is used, based on the first performance score and the data indicative of one or both of an age or a gender identifier of the individual, to adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders at a second difficulty level one or more of a third instance of the primary task or a second interference.
  • the processing unit is configured to execute a second trial at a second time interval that is subsequent to the first time interval, where the second trial is described in block 1736 as follows.
  • the at least one processing unit also is used to render at the user interface the third instance of the primary task with the second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference.
  • the second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction.
  • the at least one processing unit also is used to generate a second performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the third primary response to provide an indication of cognitive skills of the individual.
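The difficulty-adjustment step between the first and second trials can be sketched as follows. The staircase-style policy and the age-dependent threshold are hypothetical constructions for illustration; the disclosure only requires that difficulty be adjusted based on the first performance score and the age and/or gender data.

```python
def adjust_difficulty(level, perf_score, age, threshold_by_age=None):
    """Staircase-style adjustment (an illustrative policy, not the
    disclosed one): raise the difficulty level when the individual's
    interference cost is below an age-dependent threshold, and lower
    it otherwise. `threshold_by_age` is a hypothetical lookup mapping
    age to an acceptable interference cost in milliseconds."""
    if threshold_by_age is None:
        threshold_by_age = lambda a: 150.0 + 2.0 * max(0, a - 30)
    if perf_score < threshold_by_age(age):
        return level + 1          # performing well under interference -> harder
    return max(1, level - 1)      # struggling under interference -> easier

# Trial 1 ran at difficulty 3; the second trial renders the third
# instance of the primary task and the second interference at the
# adjusted level.
level_for_trial_2 = adjust_difficulty(level=3, perf_score=120.0, age=40)
```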
  • FIGs. 17C-1 - 17C-2 show a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit.
  • the at least one processing unit is used to render at least one user interface to present computerized stimuli or interaction (CSI) or other interactive elements to the user, or cause an actuating component of the cognitive platform and/or platform product to effect auditory, tactile, or vibrational computerized elements (including CSIs) to effect the stimulus or other interaction with a user.
  • the at least one processing unit is used to cause a component of the program product to receive data indicative of at least one user response based on the user interaction with the CSI or other interactive element (such as but not limited to cData) at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL).
  • the at least one processing unit can be programmed to cause the user interface to receive the data indicative of the at least one user response.
  • the at least one processing unit is used to cause a component of the program product to receive nData indicative of the measurements made using the one or more nData components (including the one or more physiological or monitoring components and/or cognitive testing components) before, during, and/or after the user interacts with the cognitive platform and/or platform product.
  • block 1754 may be performed in a similar timeframe, or substantially simultaneously with block 1756. In another example implementation of the method, block 1754 may be performed at different timepoints than block 1756.
  • the at least one processing unit also is used to: analyze the cData and/or nData to provide a measure of the individual's physiological condition and/or cognitive condition; and/or analyze the differences in the individual's performance based on determining the differences between the user's responses (including based on differences in the cData) and differences in the associated nData; and/or apply an example predictive model (such as but not limited to a classifier model) to the cData and nData; and/or adjust the difficulty level of the task(s) comprising the computerized stimuli or interaction (CSI) or other interactive elements based on the analysis of the cData and/or nData (including the measures of the individual's performance and/or physiological condition determined in the analysis); and/or provide an output or other feedback from the cognitive platform and/or platform product that can be indicative of the individual's performance, and/or cognitive assessment, and/or response to cognitive treatment, and/or assessed measures of cognition; and/or classify an individual as to an amyloid group.
  • the at least one processing unit can be programmed to execute processor executable instructions to apply a predictive model (such as but not limited to a classifier model) to the nData and cData collected at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TY), including to compare the cData at the initial timepoints and later timepoints, to perform the classifications.
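The classification step that applies a predictive model to cData and nData across timepoints can be sketched as below. The field names (`accuracy`, `eeg_theta_power`), the decline cutoff, and the two-group labeling are all illustrative assumptions; the disclosure contemplates any predictive model, such as a trained classifier, rather than this hand-written rule.

```python
def classify_amyloid_group(cdata_t1, cdata_t2, ndata, decline_cutoff=0.15):
    """Toy classifier combining the change in cognitive data (cData)
    between the initial (T1) and later (T2) timepoints with a
    physiological (nData) measurement. All dictionary keys and the
    cutoff values are hypothetical placeholders."""
    # Relative decline in task accuracy between the two timepoints.
    decline = (cdata_t1["accuracy"] - cdata_t2["accuracy"]) / cdata_t1["accuracy"]
    # Flag the individual if either the cognitive decline or the
    # physiological measurement exceeds its (assumed) cutoff.
    flagged = decline > decline_cutoff or ndata["eeg_theta_power"] > 0.6
    return "elevated-risk" if flagged else "typical"

group = classify_amyloid_group(
    cdata_t1={"accuracy": 0.90},
    cdata_t2={"accuracy": 0.72},
    ndata={"eeg_theta_power": 0.4},
)
```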
  • Any classification of an individual as to likelihood of onset and/or stage of progression of a neurodegenerative condition in block 1758 can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic, or other pharmaceutical agent to the individual or to determine an optimal type or
  • the results of the analysis may be used to modify the difficulty level or other property of the computerized stimuli or interaction (CSI) or other interactive elements.
  • FIG. 18 is a block diagram of an example computing device 1810 that can be used as a computing component according to the principles herein.
  • computing device 1810 can be configured as a console that receives user input to implement the computing component, including to apply the signal detection metrics in computer-implemented adaptive response-deadline procedures.
  • FIG. 18 also refers back to and provides greater detail regarding various elements of the example system of FIG. 5.
  • the computing device 1810 can include one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing examples.
  • the non-transitory computer-readable media can include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives), and the like.
  • computing device 1810 can store computer-readable and computer-executable instructions or software for performing the operations disclosed herein.
  • the memory 502 can store a software application 1840 which is configured to perform various of the disclosed operations (e.g., analyze cognitive platform and/or platform product measurement data and response data, apply an example classifier model, or perform a computation).
  • the computing device 1810 also includes configurable and/or programmable processor 504 and an associated core 1814, and optionally, one or more additional configurable and/or programmable processing devices, e.g., processor(s) 1812' and associated core(s) 1814' (for example, in the case of computational devices having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 502 and other programs for controlling system hardware.
  • Processor 504 and processor(s) 1812' can each be a single core processor or multiple core (1814 and 1814') processor.
  • Virtualization can be employed in the computing device 1810 so that infrastructure and resources in the console can be shared dynamically.
  • a virtual machine 1824 can be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
  • Memory 502 can include a computational device memory or random access memory, such as but not limited to DRAM, SRAM, EDO RAM, and the like.
  • Memory 502 can include a non-volatile memory, such as but not limited to a hard-disk or flash memory.
  • Memory 502 can include other types of memory as well, or combinations thereof.
  • the memory 502 and at least one processing unit 504 can be components of a peripheral device, such as but not limited to a dongle (including an adapter) or other peripheral hardware.
  • the example peripheral device can be programmed to communicate with or otherwise couple to a primary computing device, to provide the functionality of any of the example cognitive platform and/or platform product, apply an example classifier model, and implement any of the example analyses (including the associated computations) described herein.
  • the peripheral device can be programmed to directly communicate with or otherwise couple to the primary computing device (such as but not limited to via a USB or HDMI input), or indirectly via a cable (including a coaxial cable), copper wire (including, but not limited to, PSTN, ISDN, and DSL), optical fiber, or other connector or adapter.
  • the peripheral device can be programmed to communicate wirelessly (such as but not limited to Wi-Fi or Bluetooth®) with the primary computing device.
  • the example primary computing device can be a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an AndroidTM-based smartphone), a television, a workstation, a desktop computer, a laptop, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other equivalent form of computing device.
  • a user can interact with the computing device 1810 through a visual display unit 1828, such as a computer monitor, which can display one or more user interfaces 1830 that can be provided in accordance with example systems and methods.
  • the computing device 1810 can include other I/O devices for receiving input from a user, for example, a keyboard or any suitable multi-point touch interface 1818, a pointing device 1820 (e.g., a mouse), a camera or other image recording device, a microphone or other sound recording device, an accelerometer, a gyroscope, a sensor for tactile, vibrational, or auditory signal, and/or at least one actuator.
  • the keyboard 1818 and the pointing device 1820 can be coupled to the visual display unit 1828.
  • the computing device 1810 can include other suitable conventional I/O peripherals.
  • the computing device 1810 can also include one or more storage devices 1838 (including a single core processor or multiple core processor 1836), such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that perform operations disclosed herein.
  • Example storage device 1838 (including a single core processor or multiple core processor 1836) can also store one or more databases for storing any suitable information required to implement example systems and methods. The databases can be updated manually or automatically at any suitable time to add, delete, and/or update one or more items in the databases.
  • the computing device 1810 can include a network interface 1822 configured to interface via one or more network devices 1832 with one or more networks, for example, a local area network (LAN), a metropolitan area network (MAN), a wide area network (WAN), or the Internet, through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56 kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above.
  • the network interface 1822 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 1810 to any type of network capable of communication and performing the operations described herein.
  • the computing device 1810 can be any computational device, such as a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an AndroidTM -based smartphone), a television, a workstation, a desktop computer, a server, a laptop, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other equivalent form of gaming console, or other equivalent form of computing or telecommunications device that is capable of communication and that has or can be coupled to sufficient processor power and memory capacity to perform the operations described herein.
  • the one or more network devices 1832 may communicate using different types of protocols, such as but not limited to WAP (Wireless Application Protocol), TCP/IP (Transmission Control Protocol/Internet Protocol), or IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange).
  • the computing device 1810 can run any operating system 1826, such as any of the versions of the Microsoft® Windows® operating systems, iOS® operating system, AndroidTM operating system, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the console and performing the operations described herein.
  • the operating system 1826 can be run in native mode or emulated mode.
  • the operating system 1826 can be run on one or more cloud machine instances.
  • various aspects of the invention may be embodied at least in part as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, compact disks, optical disks, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium or non-transitory medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the technology discussed above.
  • the computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present technology as discussed above.
  • The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present technology as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present technology need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present technology.
  • Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • functionality of the program modules may be combined or distributed as desired in various embodiments.
  • the technology described herein may be embodied as a method, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • a reference to "A and/or B", when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • the phrase "at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
  • The phrase "at least one of A and B" can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Neurology (AREA)
  • Surgery (AREA)
  • Developmental Disabilities (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Neurosurgery (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Physiology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Educational Technology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

Example systems, methods, and apparatus are provided for using data collected from an individual's responses to the computerized tasks of a cognitive platform to derive performance metrics as an indicator of cognitive abilities, and for applying predictive models to the performance metrics and to data indicative of one or both of the individual's age and gender to generate an indication of a neurodegenerative condition. The example systems, methods, and apparatus also can be configured to adapt the computerized tasks to enhance the individual's cognitive abilities, to use data collected from the individual's responses to the adapted computerized tasks to derive performance metrics, and to apply predictive models to the performance metrics and the age and/or gender data to generate the indication of the neurodegenerative condition.

Description

COGNITIVE PLATFORM CONFIGURED FOR DETERMINING THE PRESENCE OR LIKELIHOOD OF ONSET OF A NEUROPSYCHOLOGICAL DEFICIT OR
DISORDER
CROSS-REFERENCE TO RELATED APPLICATIONS
[001] This application claims priority benefit of U.S. provisional application no. 62/444,791, entitled "COGNITIVE PLATFORM FOR DETERMINING THE PRESENCE OR LIKELIHOOD OF ONSET OF A NEUROPSYCHOLOGICAL DEFICIT OR DISORDER," filed on January 10, 2017, and is a continuation-in-part of U.S. international application no. PCT/US2017/058103, entitled "COGNITIVE PLATFORM CONFIGURED AS A BIOMARKER OR OTHER TYPE OF MARKER," filed on October 24, 2017, each of which is incorporated herein by reference in its entirety, including drawings.
BACKGROUND OF THE DISCLOSURE
[002] Neurodegenerative conditions can cause individuals to experience a certain amount of cognitive decline. This can cause an individual to experience increased difficulty in challenging situations, such as time-limited, attention-demanding conditions. In both older and younger individuals, certain cognitive conditions, diseases, or executive function disorders can result in compromised performance at tasks that require attention, memory, motor function, reaction, executive function, decision-making skills, problem-solving skills, language processing, or comprehension. Alzheimer's disease and Huntington's disease, among other types of neurodegenerative conditions, eventually cause diminished cognitive abilities.
[003] To detect the potential onset or stage of progression of a neurodegenerative condition, medical and healthcare practitioners use physiological techniques. But such physiological measurements can require massive equipment, can be expensive and time-consuming, and some can be painful.
SUMMARY OF THE DISCLOSURE
[004] In view of the foregoing, apparatus, systems and methods are provided for generating an assessment of one or more cognitive skills in an individual as an indication of a neuropsychological deficit or disorder of the individual. In a general aspect, an apparatus includes a user interface; a memory to store processor-executable instructions; and a processing unit communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the processing unit, the processing unit is configured to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; and render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference. The interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction.
The processing unit is configured to: instruct, using the user interface, the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor; generate a performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response; receive data indicative of one or both of an age or a gender identifier of the individual; and generate a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the data indicative of (i) at least one of the age or the gender identifier, (ii) the performance score, and (iii) one or more of the first primary response and the first secondary response.
[005] In another general aspect, an apparatus for enhancing one or more cognitive skills in an individual is provided. The apparatus includes: a user interface; a memory to store processor-executable instructions; and a processing unit communicatively coupled to the user interface and the memory. Upon execution of the processor-executable instructions by the processing unit, the processing unit is configured to: execute a first trial at a first time interval, the first trial including: rendering a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; rendering a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; instructing, using the user interface, the individual not to respond to an interference with the primary task that is configured as a distraction and to respond to an interference with the primary task that is configured as an interruptor; and rendering at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference. The first interference is configured to divert the individual's attention from the second instance of the primary task and is rendered as an interruptor or a distraction. 
The processing unit is configured to generate a first performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response; receive data indicative of one or both of an age or a gender identifier of the individual; and based on the performance score and the data indicative of one or both of an age or a gender identifier of the individual, adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders at a second difficulty level one or more of a third instance of the primary task or a second interference. The processing unit is configured to execute a second trial at a second time interval that is subsequent to the first time interval, the second trial including: rendering at the user interface the third instance of the primary task with the second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference. The second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction. The processing unit is configured to generate a second performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the third primary response to provide an indication of cognitive skills of the individual.
[006] In another general aspect, a computer-implemented method for enhancing one or more cognitive skills in an individual is provided. The method includes executing, using a processing unit communicatively coupled to a user interface and a memory, a first trial at a first time interval, the first trial comprising: rendering a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; rendering a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; and rendering at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference. The first interference is configured to divert the individual's attention from the second instance of the primary task and is rendered as an interruptor or a distraction. The individual is instructed not to respond to the first interference that is configured as a distraction and to respond to the first interference that is configured as an interruptor. The method includes executing, using the processing unit, at least one second trial at a second time interval that is subsequent to the first time interval, the second trial comprising: rendering at the user interface a third instance of the primary task with a second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference. The second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction. 
The individual is instructed not to respond to the second interference that is configured as the distraction and to respond to the second interference that is configured as the interruptor. The method includes generating, using the processing unit, a performance score based at least in part on the data indicative of the first primary response, the second primary response, and the third primary response; receiving data indicative of one or both of an age or a gender identifier of the individual; and generating, using the processing unit, a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the performance score and one or both of the data indicative of the age or the gender identifier.
[007] The details of one or more of the above aspects and implementations are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF DRAWINGS
[008] The skilled artisan will understand that the figures, described herein, are for illustration purposes only. It is to be understood that in some instances various aspects of the described implementations may be shown exaggerated or enlarged to facilitate an understanding of the described implementations. In the drawings, like reference characters generally refer to like features, functionally similar and/or structurally similar elements throughout the various drawings. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the teachings. The drawings are not intended to limit the scope of the present teachings in any way. The system and method may be better understood from the following illustrative description with reference to the following drawings in which:
[009] FIG. 1 shows an example plot of data analysis from applying an example predictive model, according to the principles herein.
[0010] FIGs. 2 and 3 show example plots of data derived from a cross-validation using the example predictive model, according to the principles herein.
[0011] FIG. 4 shows an example plot of data derived from applying the example predictive model, according to the principles herein.
[0012] FIG. 5 shows a block diagram of an example apparatus, according to the principles herein.
[0013] FIG. 6 shows a block diagram of an example computing device, according to the principles herein.
[0014] FIGs. 7A - 7B show example systems, according to the principles herein.
[0015] FIG. 8 shows another example system, according to the principles herein.
[0016] FIGs. 9A - 9D show example user interfaces with instructions to a user that can be rendered to an example user interface, according to the principles herein.
[0017] FIGs. 10A - 10D show examples of the time-varying features of example objects (targets or non-targets) that can be rendered to an example user interface, according to the principles herein.
[0018] FIGs. 11A - 11T show examples of the rendering of tasks and interferences at user interfaces, according to the principles herein.
[0019] FIGs. 12A - 12D show examples of the rendering of tasks and interferences at user interfaces, according to the principles herein.
[0020] FIGs. 13A - 13B show example physiological measurement data from a plurality of individuals, according to the principles herein.
[0021] FIGs. 14A - 14B show plots of example performance metrics derived from measures of the individuals' performance, according to the principles herein.
[0022] FIG. 15 shows plots of the results of measures of a test for episodic memory, according to the principles herein.
[0023] FIG. 16 shows plots of the results of measures of a test for sustained attention, according to the principles herein.
[0024] FIGs. 17A - 17C-2 show flowcharts of example methods, according to the principles herein.
[0025] FIG. 18 shows a block diagram of an example computer system, according to the principles herein.
DETAILED DESCRIPTION
[0026] Non-limiting example computer-implemented cognitive platforms described herein can be configured for generating an assessment of one or more cognitive skills of an individual based on data collected from as few as a single initial moment (as non-limiting examples, the first few seconds, about the first 5 seconds, about the first 10 seconds, about the first 20 seconds, about the first 30 seconds, about the first 45 seconds, about the first minute, about the first 1.5 minutes, about the first 3 minutes, about the first 5 minutes, about the first 7.5 minutes, about the first 10 minutes, or about the first 15 minutes) of interaction of the individual with the cognitive platform, as a biomarker or other marker for the cognitive condition of an individual. In another non-limiting example, the example computer-implemented cognitive platform described herein can be configured for generating an assessment of one or more cognitive skills of an individual based on data collected from the initial moment of interaction and at least one subsequent interaction of the individual with the cognitive platform, as a biomarker or other marker for the cognitive condition of an individual. Non-limiting example computer-implemented cognitive platforms described herein also can be configured for enhancing the cognitive skills of the individual, and serving as a biomarker or other marker for any change in the cognitive condition of the individual as a result of the enhanced cognitive skills. The cognitive condition can be a neurodegenerative condition, such that the example computer-implemented cognitive platform can be configured to serve as a biomarker or other marker for the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition in the individual.
[0027] In any example herein, the cognitive platform could be configured to serve as a biomarker for the likelihood of onset of the neurodegenerative condition based on indications that the neurodegenerative condition may develop in the near term, later in time, or potentially at some unspecified time in the future. In any example herein, the cognitive platform could be configured to serve as a biomarker for the likelihood of onset of the neurodegenerative condition based on indications that the neurodegenerative condition may develop, but not necessarily a definitive projection that it will develop.
[0028] An example system or apparatus including the computer-implemented cognitive platforms can be configured to apply a predictive model to an indicator of the individual's performance that is derived based on data collected from at least the initial moments of interaction of the individual, measured using components of the cognitive platform. For example, the cognitive platform can be configured to render a primary task and/or a secondary task, collect data indicative of the measured response(s) from the individual to the instance of the primary task, and analyze the collected data to determine at least one indicator of the cognitive ability of the individual. The example system or apparatus is configured to apply the predictive model to the at least one indicator and data indicative of an age and/or a gender identifier of the individual to generate a scoring output indicative of the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition, thereby facilitating use of the cognitive platform as a biomarker or other marker.
[0029] Methods for training the example predictive model are also described. For example, the predictive model can be computed based on datasets including (i) data indicative of the responses of each individual of a plurality of individuals from at least the initial moments of interaction of that individual in their performance of the primary task and/or secondary task, (ii) data indicative of one or more physiological measurements from the individuals, and (iii) data indicative of an age and/or a gender identifier of the individual. The one or more physiological measurements can be made either before or after the individual interacts with the cognitive platform, and/or during at least a portion of time periods during which the individual is interacting with the cognitive platform. With the trained predictive model established, the cognitive platform can be implemented to present one or more computer-implemented task(s) to a user, collect data indicative of the user's responses to the one or more computer-implemented task(s), and compute the at least one indicator of the individual's performance. Application of the predictive model to the at least one indicator and data indicative of an age and/or a gender identifier of the individual provides the scoring output that serves as a biomarker or other marker of the neurodegenerative condition, by providing an indication of the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition in the individual.
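As a non-limiting sketch of such a training workflow, the following fits a simple logistic-regression predictive model on synthetic rows combining (i) a performance indicator, (ii) a physiological measurement, and (iii) age and gender identifier, against a known-diagnosis label. The synthetic data, feature layout, and function names are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

# Synthetic training rows: [(i) performance indicator, (ii) physiological
# measurement, (iii) age, (iii) gender identifier]; label: known diagnosis.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(0.6, 0.15, 200),   # indicator of cognitive ability
    rng.normal(1.2, 0.30, 200),   # physiological measurement
    rng.integers(55, 85, 200),    # age
    rng.integers(0, 2, 200),      # gender identifier
])
y = (X[:, 0] + 0.1 * X[:, 1] + 0.01 * X[:, 2]
     + rng.normal(0, 0.1, 200) > 1.4).astype(float)

# Standardize features and fit logistic regression by gradient descent.
mu, sigma = X.mean(0), X.std(0)
Xb = np.column_stack([np.ones(len(X)), (X - mu) / sigma])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

def scoring_output(indicator, physio, age, gender):
    """Apply the trained model to one individual's features and return a
    likelihood-style score in [0, 1]."""
    x = (np.array([indicator, physio, age, gender]) - mu) / sigma
    return 1.0 / (1.0 + np.exp(-(w[0] + x @ w[1:])))

likelihood = scoring_output(0.45, 1.1, 72, 1)
```

In practice the trained weights would be applied to indicators derived from the individual's platform interactions to produce the scoring output described above.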
[0030] An advantage of the example systems and apparatus herein including the cognitive platform is that presenting the computer-implemented task(s) to the user and collecting the user's response to the computer-implemented task(s) is comparatively easier and more convenient for the user. In some examples, the indication of the user's cognitive skills and the markers of neurodegenerative condition (derived based on a scoring using the predictive model) can be evaluated without performing physiological measurements (such as but not limited to collecting samples of tissue or fluid from the user, or performing positron emission tomography (PET) scans). The computer-implemented task(s) presented by the cognitive platform can be made interesting, engaging, and fun so that the user is motivated to interact with the cognitive platform regularly, e.g., on a daily basis, or several days in a given month. This allows the indication of the cognitive skills of the user, and the markers of neurodegenerative condition of the user (derived based on a scoring using the predictive model), to be evaluated regularly in a convenient manner. The cognitive platform can track the at least one indicator of the user's cognitive skills over time, and the scoring using the predictive model may be used as a marker of the likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition in the individual. In an example where the scoring may be used to facilitate an early detection of a neurodegenerative condition in an individual, remedial measures may be performed relative to the individual's cognitive condition.
[0031] The computer-implemented cognitive platform provides a technical solution to the technical problem of using computers or machines to assist a medical practitioner or healthcare provider to evaluate the individual's cognitive condition (including the neurodegenerative condition of the individual). The use of the computer-implemented cognitive platform provides several improvements over conventional methods that may rely on physiological measurement data to detect a neurodegenerative condition or track the progression of the neurodegenerative condition of the user. Since the physiological measurements (e.g., measurements of types of protein and/or conformation of proteins in the tissue or fluid of an individual, or PET scans) often need to be performed by select medical or healthcare professionals, the physiological measurement data are updated infrequently, e.g., perhaps once or twice a year. For an individual with a neurodegenerative condition that progresses rapidly, such a delay could prove detrimental, and in the time delay between the annual or semi-annual checkups, the individual's cognitive skills or neurodegenerative condition may markedly degrade (e.g., if there is no intervention to enhance the cognitive skills of the individual and/or to administer a drug, biologic or other pharmaceutical agent).
[0032] By contrast, the example cognitive platform according to the principles herein is configured for ease of use, can be operated by users in a more comfortable setting (such as but not limited to at home), and can be more conveniently administered. In an example where the measures of the individual's cognitive abilities using the cognitive platform may provide an early sign of a biomarker or other marker of a likelihood of onset of the neurodegenerative condition and/or the stage of progression of the neurodegenerative condition, with the individual's consent, an output message may be transmitted to a medical practitioner or healthcare provider. The cognitive platform can be used in a user's home so that the individualized treatment can be conveniently administered to the user.
[0033] It should be appreciated that all combinations of the concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. It also should be appreciated that terminology explicitly employed herein that also may appear in any disclosure incorporated by reference should be accorded a meaning most consistent with the particular concepts disclosed herein.
[0034] Following below are more detailed descriptions of various concepts related to, and embodiments of, inventive methods, apparatus and systems comprising a cognitive platform and/or platform product configured for coupling with one or more other types of measurement components (such as but not limited to one or more physiological components), and for analyzing data collected from user interaction with the cognitive platform and/or from at least one measurement of the one or more other types of measurement components. As non-limiting examples, the cognitive platform and/or platform product can be configured for cognitive training and/or for clinical purposes.
[0035] According to the principles herein, the cognitive platform may be coupled to (including being in communication with), or integrated with, one or more physiological or monitoring components and/or cognitive testing components.
[0036] In another example implementation, the cognitive platform may be separate from, and configured for coupling with, the one or more physiological or monitoring components and/or one or more cognitive testing components.
[0037] In any example herein, the cognitive platform and systems including the cognitive platform can be configured to present computerized tasks and platform interactions that inform cognitive assessment (including screening and/or monitoring) or to deliver cognitive treatment.
[0038] In any example herein, the platform product herein may be formed as, be based on, or be integrated with, an AKILI® platform product by Akili Interactive Labs, Inc. (Boston, MA), which is configured for presenting computerized tasks and platform interactions that inform cognitive assessment (including screening and/or monitoring) or to deliver cognitive treatment.
[0039] It should be appreciated that various concepts introduced above and discussed in greater detail below may be implemented in any of numerous ways, as the disclosed concepts are not limited to any particular manner of implementation. Examples of specific implementations and applications are provided primarily for illustrative purposes. The example methods, apparatus and systems comprising the cognitive platform or platform product can be used by an individual, a clinician, a physician, and/or other medical or healthcare practitioner to provide data that can be used for an assessment of the individual.
[0040] In non-limiting examples, the methods, apparatus and systems comprising the cognitive platform or platform product can be used to determine a predictive model tool of amyloid status, and/or as a clinical trial tool to aid in the assessment of the amyloid status of one or more individuals, and/or as a tool to aid in the assessment of amyloid status. The example tools can be built and trained using one or more training datasets obtained from individuals having known amyloid status.
[0041] In non-limiting examples, the methods, apparatus and systems comprising the cognitive platform or platform product can be used to determine a predictive model tool of the presence or likelihood of onset of a neuropsychological deficit or disorder, and/or as a clinical trial tool to aid in the assessment of the presence or likelihood of onset of a neuropsychological deficit or disorder of one or more individuals. The example tools can be built and trained using one or more training datasets obtained from individuals having a known neuropsychological deficit or disorder.
[0042] As used herein, the term "includes" means includes but is not limited to, the term "including" means including but not limited to. The term "based on" means based at least in part on.
[0043] As used herein, the term "stimulus" refers to a sensory event configured to evoke a specified functional response from an individual. The degree and type of response can be quantified based on the individual's interactions with a measuring component (including using sensor devices or other measuring components). For example, the degree of response can be generated based on a degree of an action measured using a sensor (such as but not limited to the degree of rotation measured using a motion sensor or a gyroscope). Non-limiting examples of a stimulus include a navigation path (with an individual being instructed to control an avatar or other processor-rendered guide to navigate the path), or a discrete object, whether a target or a non-target, rendered to a user interface (with an individual being instructed to control a computing component to provide input or other indication relative to the discrete object). In any example herein, the task and/or interference includes a stimulus, which can be a time-varying feature as described hereinbelow.
[0044] As used herein, the term "target" refers to a type of stimulus that is specified to an individual (e.g., in instructions) to be the focus for an interaction. A target differs from a non-target in at least one characteristic or feature. Two targets may differ from each other by at least one characteristic or feature, but overall are still instructed to an individual as a target, in an example where the individual is instructed/required to make a response that indicates a choice.
[0045] As used herein, the term "non-target" refers to a type of stimulus that is not to be the focus for an interaction, whether indicated explicitly or implicitly to the individual.
[0046] As used herein, the term "task" refers to a goal and/or objective to be accomplished by an individual. Using the example systems, methods, and apparatus described herein, the computerized task is rendered using programmed computerized components, and the individual is instructed (e.g., using a computing device) as to the intended goal or objective from the individual for performing the computerized task. The task may require the individual to provide or withhold a response to a particular stimulus, using at least one component of the computing device (e.g., one or more sensor components of the computing device). The "task" can be configured as a baseline cognitive function that is being measured.
[0047] As used herein, the term "interference" refers to a type of stimulus presented to the individual such that it interferes with the individual's performance of a primary task. In any example herein, an interference is a type of task that is presented/rendered in such a manner that it diverts or interferes with an individual's attention in performing another task (including the primary task). In some examples herein, the interference is configured as an instance of a secondary task that is presented simultaneously with a primary task, either over a discrete time period (e.g., a short, discrete time period) or over an extended time period (e.g., less than the time frame over which the primary task is presented), or over the entire period of time of the primary task. In any example herein, the interference can be presented/rendered continuously, or continually (i.e., repeated at a certain frequency, irregularly, or somewhat randomly). For example, the interference can be presented at the end of the primary task or at discrete, interim periods during presentation of the primary task. The degree of interference can be modulated based on the type, amount, and/or temporal length of presentation of the interference relative to the primary task.
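The continuous-versus-continual presentation distinction above can be sketched as a simple scheduler; the function name, mode strings, and default timing parameters here are illustrative assumptions.

```python
def interference_schedule(primary_duration_s: float, mode: str,
                          period_s: float = 2.0, burst_s: float = 0.5):
    """Return (start, end) intervals, in seconds, at which an interference
    is rendered alongside a primary task of the given duration.

    mode "continuous": one interval spanning the whole primary task.
    mode "continual":  short bursts repeated at a fixed frequency.
    """
    if mode == "continuous":
        return [(0.0, primary_duration_s)]
    intervals, t = [], 0.0
    while t < primary_duration_s:
        intervals.append((t, min(t + burst_s, primary_duration_s)))
        t += period_s
    return intervals

# A 6-second primary task with 0.5-second interference bursts every 2 seconds:
bursts = interference_schedule(6.0, "continual")
```

Irregular or randomized repetition, as also contemplated above, could be obtained by drawing `period_s` from a distribution per burst.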
[0048] As used herein, a "trial" includes at least one iteration of rendering of a task and/or interference (either or both with time-varying feature) and at least one receiving of the individual's response(s) to the task and/or interference (either or both with time-varying feature). As non-limiting examples, a trial can include at least a portion of a single-tasking task and/or at least a portion of a multi-tasking task. For example, a trial can be a period of time during a navigation task (including a visuo-motor navigation task) in which the individual's performance is assessed, such as but not limited to, assessing whether, or the degree of success with which, an individual's actions in interacting with the platform result in a guide (including a computerized avatar) navigating along at least a portion of a certain path or in an environment for a time interval (such as but not limited to, fractions of a second, a second, several seconds, or more) and/or cause the guide (including computerized avatar) to cross (or avoid crossing) performance milestones along the path or in the environment. In another example, a trial can be a period of time during a targeting task in which the individual's performance is assessed, such as but not limited to, assessing whether, or the degree of success with which, an individual's actions in interacting with the platform result in identification/selection of a target versus a non-target (e.g., red object versus yellow object), or discriminate between two different types of targets. In these examples, the segment of the individual's performance that is designated as a trial for the navigation task does not need to be co-extensive or aligned with the segment of the individual's performance that is designated as a trial for the targeting task.
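The targeting-task trial assessment described above can be illustrated with a small sketch; the `Stimulus` class, outcome labels, and function name are hypothetical conventions, not terms from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Stimulus:
    color: str        # distinguishing feature, e.g., "red" vs "yellow"
    is_target: bool   # whether the individual was instructed to respond to it

def assess_targeting_trial(stimulus: Stimulus, responded: bool) -> str:
    """Classify a single targeting-task trial: the individual is instructed
    to respond to targets and withhold responses to non-targets."""
    if stimulus.is_target:
        return "hit" if responded else "miss"
    return "false_alarm" if responded else "correct_rejection"

outcomes = [
    assess_targeting_trial(Stimulus("red", True), responded=True),
    assess_targeting_trial(Stimulus("yellow", False), responded=True),
    assess_targeting_trial(Stimulus("yellow", False), responded=False),
]
```

Aggregating such per-trial outcomes over a session is one way a degree-of-success measure could be derived.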
[0049] In any example herein, an object may be rendered as a depiction of a physical object (including a polygonal or other object), a face (human or non-human), a caricature, or other type of object.
[0050] In any of the examples herein, instructions can be provided to the individual to specify how the individual is expected to perform the task and/or interference (either or both with time-varying feature) in a trial and/or a session. In non-limiting examples, the instructions can inform the individual of the expected performance of a navigation task (e.g., stay on this path, go to these parts of the environment, cross or avoid certain milestone objects in the path or environment) or a targeting task (e.g., describe or show the type of object that is the target object versus the non-target object, or two different types of target object that the individual is expected to choose between), and/or describe how the individual's performance is to be scored. In examples, the instructions may be provided visually (e.g., based on a rendered user interface) or via sound. In various examples, the instructions may be provided once prior to the performance of two or more trials or sessions, or repeated each time prior to the performance of a trial or a session, or some combination thereof.
[0051 ] While some example systems, methods, and apparatus described herein are based on an individual being instructed/required to decide/select between a target versus a non-target, in other example implementations, the example systems, methods, and apparatus can be configured such that the individual is instructed/required to decide/choose between two different types of targets (such as but not limited to between two different degrees of a facial expression or other characteristic/feature difference).
[0052] In addition, while example systems, methods, and apparatus may be described herein relative to an individual, in other example implementations, the example systems, methods, and apparatus can be configured such that two or more individuals, or members of a group (including a clinical population), perform the tasks and/or interference (either or both with time-varying feature), either individually or concurrently.
[0053] The example platform products and cognitive platforms according to the principles described herein can be applicable to many different types of neuropsychological conditions, such as but not limited to dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, Huntington's disease, or other neurodegenerative condition, autism spectrum disorder (ASD), presence of the 16p11.2 duplication, and/or an executive function disorder (such as but not limited to attention deficit hyperactivity disorder (ADHD), sensory-processing disorder (SPD), mild cognitive impairment (MCI), Alzheimer's disease, multiple sclerosis, schizophrenia, depression, or anxiety).
[0054] As described in greater detail below, the computing device can include an application (an "App program") to perform such functionalities as analyzing the data. For example, the data from the at least one sensor component can be analyzed as described herein by a processor executing the App program on an example computing device to receive (including to measure), substantially simultaneously, the response from the individual to a primary task and a secondary response of the individual to a secondary task rendered as an interference with the primary task. As another example, the data from the at least one sensor component can be analyzed as described herein by a processor executing the App program on an example computing device to analyze the data indicative of the response of the individual to the primary task and to the secondary task to compute at least one performance metric including at least one indicator of cognitive condition.
[0055] An example system according to the principles herein can be configured to implement a predictive model (including using a machine learning predictive model, such as but not limited to a machine learning classifier) to enable an assessment of cognitive skills in an individual using a predictive model and/or to enhance cognitive skills in an individual. The predictive model can include one or more of, e.g., a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, or an artificial neural network. In an example implementation, the example system employs an App program executing on a mobile communication device or other hand-held device. Non-limiting examples of such mobile communication devices or hand-held devices include a smartphone, such as but not limited to an iPhone®, a BlackBerry®, or an Android-based smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other computing system that can be used to render game-like elements. In some example implementations, the example system can include a head-mounted device, such as smart eyeglasses with built-in displays, smart goggles with built-in displays, or a smart helmet with built-in displays, and the user can hold a controller or an input device having one or more sensors in which the controller or the input device communicates wirelessly with the head-mounted device.
In some example implementations, the computing system may be stationary, such as a desktop computing system that includes a main computer and a desktop display (or a projector display), in which the user provides inputs to the App program using a keyboard, a computer mouse, a joystick, handheld consoles, wristbands, or other wearable devices having sensors that communicate with the main computer using wired or wireless communication. In other examples herein, the example system may be a virtual reality system, an augmented reality system, or a mixed reality system. In examples herein, the sensors can be configured to measure movements of the user's hands, feet, and/or any other part of the body. In some example implementations, the example system can be formed as a virtual reality (VR) system (a simulated environment providing an immersive, interactive 3-D experience for a user), an augmented reality (AR) system (including a live direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as but not limited to sound, video, graphics and/or GPS data), or a mixed reality (MR) system (also referred to as a hybrid reality, which merges the real and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact substantially in real time).
[0056] As used herein, the term "predictive model" encompasses models trained and developed based on continuous output values and/or models based on discrete labels. In any example herein, the predictive model encompasses a classifier model. For example, the predictive model can be configured to determine scoring outputs that are continuous output values (such as but not limited to values of a psychometric curve) or discrete values (such as but not limited to a classification output). In another example of the systems, methods, and apparatus, scoring outputs that are continuous output values can be binned into two or more bins (each bin corresponding to a preset range of output values), to provide the classification output.
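The binning of a continuous scoring output into a classification output, as described above, can be sketched as follows; the bin edges and labels are illustrative assumptions, not disclosed thresholds.

```python
def bin_scoring_output(score: float, edges=(0.33, 0.66)) -> str:
    """Map a continuous scoring output in [0, 1] to a discrete
    classification output using preset bin boundaries."""
    labels = ("low likelihood", "intermediate", "high likelihood")
    for edge, label in zip(edges, labels):
        if score < edge:
            return label
    return labels[-1]

# A continuous score of 0.72 falls in the highest preset range:
category = bin_scoring_output(0.72)
```

Each bin corresponds to a preset range of output values, so the same continuous model can serve both continuous-score and classifier roles.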
[0057] Any example predictive model according to the principles herein can be trained using a plurality of training datasets. Each training dataset corresponds to a previously measured individual of a plurality of individuals. Each training dataset includes data representing at least one indicator of the cognitive ability of the previously measured individual, generated based on the data indicative of the individual's responses from previous interactions with the tasks and/or interference executed by the cognitive platform, and data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual. The trained predictive model can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output. The scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
[0058] The instant disclosure is directed to computer-implemented devices formed as example cognitive platforms or platform products configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more tasks, to provide a user performance metric. The example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's condition (including physiological condition and/or cognitive condition). Non-limiting example cognitive platforms or platform products according to the principles herein can be configured to classify an individual as to a neuropsychological condition, including as to amyloid group, and/or apolipoprotein E (APOE) expression group based on APOE expression level (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a condition, including a neurodegenerative condition), and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, based on the data collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that data. Yet other non-limiting example cognitive platforms or platform products according to the principles herein can be configured to classify an individual as to likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition, based on the data collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that data. The neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
[0059] Any scoring output of a predictive model (including a classification output of a classifier model) for an individual providing an indication as to likelihood of onset and/or stage of progression of a neurodegenerative condition according to the principles herein can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine
practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual.
[0060] In any example herein, the platform product or cognitive platform can be configured as any combination of a medical device platform, a monitoring device platform, a screening device platform, or other device platform.
[0061] The instant disclosure is also directed to example systems that include platform products and cognitive platforms that are configured for coupling with one or more physiological or monitoring components and/or cognitive testing components. In some examples, the systems include platform products and cognitive platforms that are integrated with the one or more other physiological or monitoring components and/or cognitive testing components. In other examples, the systems include platform products and cognitive platforms that are separately housed from and configured for communicating with the one or more physiological or monitoring components and/or cognitive testing components, to receive data indicative of measurements made using such one or more components.
[0062] As used herein, the term "cData" refers to data collected from measures of an interaction of a user with a computer-implemented device formed as a platform product or a cognitive platform.
[0063] As used herein, the term "nData" refers to other types of data that can be collected according to the principles herein. Any component used to provide nData is referred to herein as an nData component.
[0064] In any example herein, the data (including cData and nData) is collected with user consent.
[0065] In any example herein, the cData and/or nData can be collected in real-time. In any example herein, the data (cData or nData) being collected in real-time can be data collected in a time interval at a resolution of up to about 1.0 millisecond or greater. For example, the time interval can be, but is not limited to, about 2.0 milliseconds, about 3.0 milliseconds, about 5.0 milliseconds, about 10 milliseconds, about 25 milliseconds, about 40 milliseconds, about 50 milliseconds, about 60 milliseconds, about 70 milliseconds, about 100 milliseconds, about 500 milliseconds, about a second or greater.
[0066] In non-limiting examples, the nData can be collected from measurements using one or more physiological or monitoring components and/or cognitive testing components. In any example herein, the one or more physiological components are configured for performing physiological measurements. The physiological measurements provide quantitative measurement data of physiological parameters and/or data that can be used for visualization of physiological structure and/or functions.
[0067] As a non-limiting example, nData can be collected from measurements of types of protein and/or conformation of proteins in the tissue or fluid (including blood) of an individual and/or in tissue or fluid (including blood) collected from the individual. In some examples, the tissue and/or fluid can be in or taken from the individual's brain. In other examples, the measurement of the conformation of the proteins can provide an indication of amyloid formation (e.g., whether the proteins are forming aggregates). For example, nData can be collected from measurements made using a positron emission tomography (PET) scanner to provide data indicative of an individual's amyloid level, and/or using a test to measure the type and level of expression of a protein of clinical interest (e.g., a DNA test to provide data indicative of an individual's genotype and/or expression level of the apolipoprotein E ε4 allele (referred to herein as "APOE expression group")). The expression group can be defined based on a threshold expression level of the protein of clinical interest in the neurodegenerative condition, where a measured value of expression level above a pre-specified threshold defines a first expression group and a measured value of expression level approximately equal to or below the pre-specified threshold defines a second expression group.
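Purely as a non-limiting illustrative sketch, the two-group threshold rule described above can be expressed in a few lines; the function name, group labels, and any numeric values used with it are hypothetical and not drawn from the disclosure:

```python
def expression_group(measured_level: float, threshold: float) -> str:
    """Assign an expression group from a measured expression level of a
    protein of clinical interest.

    A measured value above the pre-specified threshold defines a first
    expression group; a value approximately equal to or below the
    threshold defines a second expression group.
    """
    return "group-1" if measured_level > threshold else "group-2"
```

For example, `expression_group(1.8, 1.2)` would assign the first expression group, while a level at or below the threshold would assign the second.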
[0068] As a non-limiting example, the nData can be collected from measurements of beta amyloid, cystatin, alpha-synuclein, huntingtin protein, and/or tau proteins. In some examples, the nData can be collected from measurements of other types of proteins that may be implicated in the onset and/or progression of a neurodegenerative condition, such as but not limited to Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
[0069] In a non-limiting example, nData can be used to provide a classification or other grouping that can be assigned to an individual based on measurement data from the one or more physiological or monitoring components and/or cognitive testing components. For example, an individual can be classified in an amyloid group of amyloid positive (A+) or amyloid negative (A-) based on analysis of an image from a PET scan. In an example, certain cData collected from the individual's interaction with the cognitive platform and/or platform product can co-vary or otherwise correlate with the type of amyloid group the individual may be classified to.
[0070] A non-limiting example system, method and apparatus according to the principles herein can be executed to measure cData indicative of the response of the individual(s) to the tasks and/or interference presented to the individual(s), analyze the cData to generate at least one indicator of the cognitive ability of the individual, and apply a predictive model to the at least one indicator of the cognitive ability of the individual derived from the cData, to provide a scoring output indicative of a likelihood of onset and/or stage of progression of a neurodegenerative condition of the individual(s). As a non-limiting example, using a predictive model configured as a classifier model trained to provide a classification output of amyloid status (or grouping), an example system, method and apparatus could be implemented to classify an individual according to amyloid status (or grouping).
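A minimal, non-limiting sketch of that pipeline follows: raw cData is reduced to an indicator of cognitive ability, and a predictive model configured as a classifier provides the classification output. The linear classifier here, and its weight and bias values, are hypothetical stand-ins for a model that would in practice be trained on labeled data:

```python
from typing import Sequence

def cognitive_indicator(reaction_times_ms: Sequence[float]) -> float:
    """Reduce raw cData (here, per-trial reaction times in milliseconds)
    to a single indicator of cognitive ability -- the mean, for simplicity."""
    return sum(reaction_times_ms) / len(reaction_times_ms)

def classify_amyloid_group(indicator: float,
                           weight: float = -0.01,
                           bias: float = 6.0) -> str:
    """Apply a (hypothetical) pre-trained linear classifier to the
    indicator and emit an amyloid-status classification output."""
    score = weight * indicator + bias
    return "A+" if score >= 0.0 else "A-"
```

A usage sketch: `classify_amyloid_group(cognitive_indicator([450.0, 500.0, 550.0]))` scores the mean reaction time against the trained decision boundary and returns an amyloid grouping label.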
[0071] An example system, method and apparatus according to the principles herein can be used as an intelligent proxy for an nData measurement or analysis. For example, the system, method and apparatus can be implemented as an intelligent proxy for nData measurement or analysis indicative of a likelihood of onset and/or stage of progression of a neurodegenerative condition of the individual(s), through use of the predictive model to provide the scoring output.
[0072] In some examples, the nData can be an identification of a type of biologic, drug or other pharmaceutical agent administered or to be administered to an individual, and/or data collected from measurements of a level of the biologic, drug or other pharmaceutical agent in the tissue or fluid (including blood) of an individual, whether the measurement is made in situ or using tissue or fluid (including blood) collected from the individual. Non-limiting examples of a biologic, drug or other pharmaceutical agent applicable to any example described herein include methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, and crenezumab.
[0073] It is understood that reference to "drug" herein encompasses a drug, a biologic and/or other pharmaceutical agent.
[0074] In a non-limiting example, the physiological instrument can be an fMRI, and the nData can be measurement data indicative of the cortical thickness, brain functional activity changes, or other measure.
[0075] In other non-limiting examples, nData can include any data that can be used to characterize an individual's status, such as but not limited to age, gender or other similar data.
[0076] In another non-limiting example, the nData can be data indicative of an individual's performance using a testing component, such as but not limited to the Rey Auditory Verbal Learning Test (RAVLT™) by Western Psychological Services (Torrance, CA) and/or the Test of Variables of Attention (T.O.V.A.®) by The TOVA Company (Los Alamitos, CA).
[0077] In any example herein, the data (including cData and nData) is collected with the individual's consent. [0078] In any example herein, the one or more physiological components can include any means of measuring physical characteristics of the body and nervous system, including electrical activity, heart rate, blood flow, and oxygenation levels, to provide the nData. This can include camera-based heart rate detection, measurement of galvanic skin response, blood pressure measurement, electroencephalogram, electrocardiogram, magnetic resonance imaging, near-infrared spectroscopy, and/or pupil dilation measures, to provide the nData.
[0079] Other examples of physiological measurements to provide nData include, but are not limited to, the measurement of body temperature, heart or other cardiac-related functioning using an electrocardiograph (ECG), electrical activity using an electroencephalogram (EEG), event-related potentials (ERPs), blood pressure, electrical potential at a portion of the skin, and galvanic skin response (GSR). Examples of instruments or techniques for performing the physiological measurements to provide nData include, but are not limited to, the use of functional magnetic resonance imaging (fMRI), magneto-encephalogram (MEG), an eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation, functional near-infrared spectroscopy (fNIRS), and/or a positron emission tomography (PET) scanner. An EEG-fMRI or MEG-fMRI measurement allows for simultaneous acquisition of electrophysiology (EEG/MEG) nData and hemodynamic (fMRI) nData.
[0080] The fMRI also can be used to provide measurement data (nData) indicative of neuronal activation, based on the difference in magnetic properties of oxygenated versus de-oxygenated blood supply to the brain. The fMRI can provide an indirect measure of neuronal activity by measuring regional changes in blood supply, based on a positive correlation between neuronal activity and brain
metabolism.
[0081 ] A PET scanner can be used to perform functional imaging to observe metabolic processes and other physiological measures of the body through detection of gamma rays emitted indirectly by a positron-emitting radionuclide (a tracer). The tracer can be introduced into the user's body using a biologically-active molecule. Indicators of the metabolic processes and other physiological measures of the body can be derived from the scans, including from computer reconstruction of two- and three-dimensional images from nData of tracer concentration from the scans. The nData can include measures of the tracer concentration and/or the PET images (such as two- or three-dimensional images).
[0082] In any example herein, the cognitive platform and systems including the cognitive platform can be configured to present computerized tasks and platform interactions that inform cognitive assessment (screening or monitoring) or deliver treatment.
[0083] In any example herein, a task can involve one or more activities that a user is required to engage in. Any one or more of the tasks can be computer-implemented as computerized stimuli or interaction (described in greater detail below). For a targeting task, the cognitive platform may require temporally-specific and/or position-specific responses from a user. For a navigation task, the cognitive platform may require position-specific and/or motion-specific responses from the user. For a facial expression recognition or object recognition task, the cognitive platform may require temporally-specific and/or position-specific responses from the user. The multitasking tasks can include any combination of two or more tasks. In non-limiting examples, the user response to tasks, such as but not limited to targeting and/or navigation and/or facial expression recognition or object recognition task(s), can be recorded using an input device of the cognitive platform. Non-limiting examples of such input devices include devices configured to register a touch, swipe or other gesture relative to a user interface or to capture images (such as but not limited to a touch-screen or other pressure-sensitive screen, or a camera), including any form of user interface configured for recording a user interaction. In other non-limiting examples, the user response recorded using the cognitive platform for tasks, such as but not limited to targeting and/or navigation and/or facial expression recognition or object recognition task(s), can include user actions that cause changes in a position, orientation, or movement of a computing device including the cognitive platform. Such changes in a position, orientation, or movement of a computing device can be recorded using an input device disposed in or otherwise coupled to the computing device, such as but not limited to a sensor.
Non-limiting examples of sensors include a motion sensor, position sensor, and/or an image capture device (such as but not limited to a camera).
[0084] In an example implementation involving multi-tasking tasks, the computing device is configured (such as using at least one specially-programmed processing unit) to cause the cognitive platform to present to a user two or more different types of tasks, such as but not limited to, targeting and/or navigation and/or facial expression recognition or object recognition tasks, during a short time frame (including in real-time and/or substantially simultaneously). The computing device is also configured (such as using at least one specially-programmed processing unit) to collect data indicative of the type of user response received to the multi-tasking tasks, within the short time frame (including in real-time and/or substantially simultaneously). In these examples, the two or more different types of tasks can be presented to the individual within the short time frame (including in real-time and/or substantially simultaneously), and the computing device can be configured to receive data indicative of the user response(s) relative to the two or more different types of tasks within the short time frame (including in real-time and/or substantially simultaneously).
[0085] In some examples, the short time frame (including substantially simultaneously) can be of any time interval at a resolution of up to about 1.0 millisecond or greater. The time intervals can be, but are not limited to, durations of time of any division of a periodicity of about 2.0 milliseconds or greater, up to any reasonable end time. The time intervals can be, but are not limited to, about 3.0 milliseconds, about 5.0 milliseconds, about 10 milliseconds, about 25 milliseconds, about 40 milliseconds, about 50 milliseconds, about 60 milliseconds, about 70 milliseconds, about 100 milliseconds, or greater. In other examples, the short time frame can be, but is not limited to, fractions of a second, about a second, between about 1.0 and about 2.0 seconds, or up to about 2.0 seconds, or more.
[0086] In some examples, the platform product or cognitive platform can be configured to collect data indicative of a reaction time of a user's response relative to the time of presentation of the tasks. For example, the computing device can be configured to cause the platform product or cognitive platform to provide a smaller or larger reaction-time window for a user to provide a response to the tasks as a way of adjusting the difficulty level.
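One non-limiting way to realize such a difficulty adjustment is a simple staircase on the reaction-time window: shrink it after a correct response and widen it after a miss. The step factor and the clamping bounds below are illustrative assumptions, not values specified by the disclosure:

```python
def adjust_response_window(window_ms: float, was_correct: bool,
                           step: float = 0.9,
                           min_ms: float = 100.0,
                           max_ms: float = 2000.0) -> float:
    """Shrink the reaction-time window after a correct response (raising
    the difficulty level) and widen it after an incorrect or missed
    response (lowering the difficulty level), clamped to a sensible range."""
    new_window = window_ms * step if was_correct else window_ms / step
    return max(min_ms, min(max_ms, new_window))
```

A multiplicative staircase of this kind converges toward the window length at which the user succeeds a fixed fraction of the time, which is one common way such adaptive procedures are designed.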
[0087] As used herein, the term "computerized stimuli or interaction" or "CSI" refers to a computerized element that is presented to a user to facilitate the user's interaction with a stimulus or other interaction. As non-limiting examples, the computing device can be configured to present auditory stimulus or initiate other auditory-based interaction with the user, and/or to present vibrational stimuli or initiate other vibrational-based interaction with the user, and/or to present tactile stimuli or initiate other tactile-based interaction with the user, and/or to present visual stimuli or initiate other visual-based interaction with the user.
[0088] Any task according to the principles herein can be presented to a user via a computing device, actuating component, or other device that is used to implement one or more stimuli or other interactive element. For example, the task can be presented to a user by rendering a user interface to present the computerized stimuli or interaction (CSI) or other interactive elements. In other examples, the task can be presented to a user as auditory, tactile, or vibrational computerized elements
(including CSIs) using an actuating component. Description of use of (and analysis of data from) one or more CSIs in the various examples herein also encompasses use of (and analysis of data from) tasks including the one or more CSIs in those examples.
[0089] In an example where the computing device is configured to present visual CSI, the CSI can be rendered using at least one user interface to be presented to a user. In some examples, at least one user interface is configured for measuring responses as the user interacts with a CSI computerized element rendered using the at least one user interface. In a non-limiting example, the user interface can be configured such that the CSI computerized element(s) are active, and may require at least one response from a user, such that the user interface is configured to measure data indicative of the type or degree of interaction of the user with the platform product. In another example, the user interface can be configured such that the CSI computerized element(s) are passive and are presented to the user using the at least one user interface but may not require a response from the user. In this example, the at least one user interface can be configured to exclude the recorded response of an interaction of the user, to apply a weighting factor to the data indicative of the response (e.g., to weight the response to lower or higher values), or to measure data indicative of the response of the user with the platform product as a measure of a misdirected response of the user (e.g., to issue a notification or other feedback to the user of the misdirected response).
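The three treatments of a response to a passive CSI element described above (exclusion, weighting, or flagging as misdirected) can be sketched as follows, purely as a non-limiting illustration; the mode names and default weight value are hypothetical:

```python
def handle_passive_response(response_value: float, mode: str = "exclude",
                            weight: float = 0.5):
    """Handle a recorded user response to a passive CSI element.

    "exclude"     -- drop the response from the analyzed data;
    "weight"      -- apply a weighting factor (lower or higher) to it;
    "misdirected" -- keep it, flagged so feedback can be issued to the user.
    """
    if mode == "exclude":
        return None
    if mode == "weight":
        return response_value * weight
    if mode == "misdirected":
        return {"value": response_value, "notify_user": True}
    raise ValueError(f"unknown mode: {mode}")
```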
[0090] In an example, the cognitive platform and/or platform product can be configured as a processor-implemented system, method or apparatus that includes at least one processing unit. In an example, the at least one processing unit can be programmed to render at least one user interface to present the computerized stimuli or interaction (CSI) or other interactive elements to the user for interaction. In other examples, the at least one processing unit can be programmed to cause an actuating component of the platform product to effect auditory, tactile, or vibrational computerized elements (including CSIs) to effect the stimulus or other interaction with the user. The at least one processing unit can be programmed to cause a component of the platform product to receive data indicative of at least one user response based on the user interaction with the CSI or other interactive element (such as but not limited to cData), including responses provided using the input device. In an example where at least one user interface is rendered to present the computerized stimuli or interaction (CSI) or other interactive elements to the user, the at least one processing unit can be programmed to cause the user interface to receive the data indicative of at least one user response. 
The at least one processing unit also can be programmed to: analyze the cData to provide a measure of the individual's cognitive condition, and/or analyze the differences in the individual's performance based on determining the differences between the user's responses (including based on differences in the cData), and/or adjust the difficulty level of the auditory, tactile, or vibrational computerized elements (including CSIs) or other interactive elements based on the analysis of the cData (including the measures of the individual's performance determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance, and/or cognitive assessment, and/or projected response to cognitive treatment, and/or assessed measures of cognition. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to a neuropsychological condition, including as to amyloid group, and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a
neurodegenerative condition), and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition, based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData. The neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
[0091] In other examples, the platform product can be configured as a processor-implemented system, method or apparatus that includes a display component, an input device, and the at least one processing unit. The at least one processing unit can be programmed to render at least one user interface, for display at the display component, to present the computerized stimuli or interaction (CSI) or other interactive elements to the user for interaction. In other examples, the at least one processing unit can be programmed to cause an actuating component of the platform product to effect auditory, tactile, or vibrational computerized elements (including CSIs) to effect the stimulus or other interaction with the user.
[0092] Non-limiting examples of an input device include a touch-screen, or other pressure-sensitive or touch-sensitive surface, a motion sensor, a position sensor, a pressure sensor, joystick, exercise equipment, and/or an image capture device (such as but not limited to a camera).
[0093] In any example, the input device is configured to include at least one component configured to receive input data indicative of a physical action of the individual(s), where the data provides a measure of the physical action of the individual(s) in interacting with the cognitive platform and/or platform product, e.g., to perform the one or more tasks and/or tasks with interference.
[0094] The analysis of the individual's performance may include using the computing device to compute percent accuracy, number of hits and/or misses during a session or from a previously completed session. Another indicium that can be used to compute performance measures is the amount of time the individual takes to respond after the presentation of a task (e.g., as a targeting stimulus). Other indicia can include, but are not limited to, reaction time, response variance, number of correct hits, omission errors, false alarms, learning rate, spatial deviance, subjective ratings, and/or performance threshold, etc.
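Several of the indicia named above reduce to straightforward computations over session data. A non-limiting sketch (the helper names are illustrative, not from the disclosure):

```python
from statistics import mean, pvariance
from typing import Sequence

def percent_accuracy(correct_hits: int, total_trials: int) -> float:
    """Percent accuracy over a session: correct hits over total trials."""
    return 100.0 * correct_hits / total_trials

def reaction_time_stats(reaction_times_ms: Sequence[float]) -> dict:
    """Mean reaction time (time from task presentation to response) and
    response variance over a session."""
    return {"mean_ms": mean(reaction_times_ms),
            "variance": pvariance(reaction_times_ms)}
```

For instance, 45 correct hits over 50 trials gives a percent accuracy of 90.0, and the reaction-time statistics summarize how fast and how consistently the individual responds after each task is presented.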
[0095] In a non-limiting example, the user's performance can be further analyzed to compare the effects of two different types of tasks on the user's performance, where these tasks present different types of interferences (e.g., a distraction or an interruptor). The computing device is configured to present the different types of interference as CSIs or other interactive elements that divert the user's attention from a primary task. For a distraction, the computing device is configured to instruct the individual to provide a primary response to the primary task and not to provide a response to the distraction (i.e., to ignore the distraction). For an interruptor, the computing device is configured to instruct the individual to provide a response as a secondary task, and the computing device is configured to obtain data indicative of the user's secondary response to the interruptor within a short time frame (including at substantially the same time) as the user's response to the primary task (where the response is collected using at least one input device). The computing device is configured to compute measures of one or more of a user's performance at the primary task without an interference, performance with the interference being a distraction, and performance with the interference being an interruption. The user's performance metrics can be computed based on these measures. For example, the user's performance can be computed as a cost (performance change) for each type of interference (e.g., distraction cost and interruptor/multi-tasking cost). The user's performance level on the tasks can be analyzed and reported as feedback, including either as feedback to the cognitive platform for use to adjust the difficulty level of the tasks, and/or as feedback to the individual concerning the user's status or
progression.
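The cost computation described above is, at its simplest, a difference between performance measured at the primary task alone and performance measured under interference. A non-limiting sketch (variable names are illustrative):

```python
def interference_cost(single_task_score: float,
                      with_interference_score: float) -> float:
    """Performance change attributable to an interference, relative to
    performance at the primary task without interference. A positive
    cost indicates degraded performance under interference."""
    return single_task_score - with_interference_score

# Distraction cost and interruptor/multi-tasking cost take the same form:
#   distraction_cost = interference_cost(p_single, p_with_distraction)
#   multitask_cost   = interference_cost(p_single, p_with_interruptor)
```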
[0096] In a non-limiting example, the computing device can also be configured to analyze, store, and/or output the reaction time for the user's response and/or any statistical measures for the individual's performance (e.g., percentage of correct or incorrect responses in the last number of sessions, over a specified duration of time, or specific for a type of task (including non-target and/or target stimuli, a specific type of task, etc.)).
[0097] In a non-limiting example, the computerized element includes at least one task rendered at a user interface as a visual task or presented as an auditory, tactile, or vibrational task. Each task can be rendered as interactive mechanics that are designed to elicit a response from a user after the user is exposed to stimuli for the purpose of cData and/or nData collection.
[0098] In a non-limiting example, the computerized element includes at least one platform interaction (gameplay) element of the platform rendered at a user interface, or as an auditory, tactile, or vibrational element of a platform product. Each platform interaction (gameplay) element of the platform product can include interactive mechanics (including in the form of videogame-like mechanics) or visual (or cosmetic) features that may or may not be targets for cData and/or nData collection. [0099] As used herein, the term "gameplay" encompasses a user interaction (including other user experience) with aspects of the platform product.
[00100] In a non-limiting example, the computerized element includes at least one element to indicate positive feedback to a user. Each element can include an auditory signal and/or a visual signal emitted to the user that indicates success at a task or other platform interaction element, i.e., that the user's responses at the platform product have exceeded a threshold success measure on a task or platform interaction (gameplay) element.
[00101] In a non-limiting example, the computerized element includes at least one element to indicate negative feedback to a user. Each element can include an auditory signal and/or a visual signal emitted to the user that indicates failure at a task or platform interaction (gameplay) element, i.e., that the user's responses at the platform product have not met a threshold success measure on a task or platform interaction element.
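The positive-feedback and negative-feedback elements above share a single threshold comparison on the success measure; as a non-limiting sketch (the signal labels are illustrative):

```python
def feedback_signal(success_measure: float, threshold: float) -> str:
    """Select positive feedback when the user's responses exceed the
    threshold success measure on a task or platform interaction
    (gameplay) element, and negative feedback otherwise."""
    return "positive" if success_measure > threshold else "negative"
```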
[00102] In a non-limiting example, the computerized element includes at least one element for messaging, i.e., a communication to the user that is different from positive feedback or negative feedback.
[00103] In a non-limiting example, the computerized element includes at least one element for indicating a reward. A reward computer element can be a computer generated feature that is delivered to a user to promote user satisfaction with the CSIs and as a result, increase positive user interaction (and hence enjoyment of the user experience).
[00104] In a non-limiting example, the cognitive platform can be configured to render multi-task interactive elements. In some examples, the multi-task interactive elements are referred to as multi-task gameplay (MTG). The multi-task interactive elements include interactive mechanics configured to engage the user in multiple temporally-overlapping tasks, i.e., tasks that may require multiple, substantially simultaneous responses from a user.
[00105] In a non-limiting example, the cognitive platform can be configured to render single-task interactive elements. In some examples, the single-task interactive elements are referred to as single-task gameplay (STG). The single-task interactive elements include interactive mechanics configured to engage the user in a single task in a given time interval. [00106] According to the principles herein, the term "cognition" or "cognitive" refers to the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. This includes, but is not limited to,
psychological concepts/domains such as executive function, memory, perception, attention, emotion, motor control, and interference processing. An example computer-implemented device according to the principles herein can be configured to collect data indicative of user interaction with a platform product, and to compute metrics that quantify user performance. The quantifiers of user performance can be used to provide measures of cognition (for cognitive assessment) or to provide measures of status or progress of a cognitive treatment.
[00107] In a non-limiting example implementation, an example platform product herein may be formed as, be based on, or be integrated with, an AKILI® platform product (also referred to herein as an "APP") by Akili Interactive Labs, Inc., Boston, MA.
[00108] According to the principles herein, the term "treatment" or "treat" refers to any manipulation of CSI in a platform product (including in the form of an APP) that results in a measurable change (including improvement) of the measures of cognitive abilities of a user, such as but not limited to improvements related to cognition, a user's mood, emotional state, and/or level of engagement or attention to the cognitive platform. The degree or level of change (including improvement) can be quantified based on user performance measures as described herein. In an example, the term "treatment" may also refer to a therapy.
[00109] According to the principles herein, the term "session" refers to a discrete time period, with a clear start and finish, during which a user interacts with a platform product to receive assessment or treatment from the platform product (including in the form of an APP). A session can include two or more trials, including up to multiple trials.
[00110] According to the principles herein, the term "segment" refers to a portion of a trial that is less than the full trial.
[00111] According to the principles herein, the term "assessment" refers to at least one session of user interaction with CSIs or other features or elements of a platform product. The data collected from one or more assessments performed by a user using a platform product (including in the form of an APP) can be used to derive measures or other quantifiers of cognition, or other aspects of a user's abilities.
[00112] According to the principles herein, the term "cognitive load" refers to the amount of mental resources that a user may need to expend to complete a task. This term also can be used to refer to the challenge or difficulty level of a task or gameplay.
[00113] In an example, the platform product includes a computing device that is configured to present to a user a cognitive platform based on interference
processing. In an example system, method and apparatus that implements interference processing, at least one processing unit is programmed to render at least one first user interface or cause an actuating component to generate an auditory, tactile, or vibrational signal, to present first CSIs as a first task that requires a first type of response from a user. The example system, method and apparatus is also configured to cause the at least one processing unit to render at least one second user interface or cause the actuating component to generate an auditory, tactile, or vibrational signal, to present second CSIs as a first interference with the first task, requiring a second type of response from the user to the first task in the presence of the first interference. In a non-limiting example, the second type of response can include the first type of response to the first task and a secondary response to the first interference. In another non-limiting example, the second type of response may not include, and be quite different from, the first type of response. The at least one processing unit is also programmed to receive data indicative of the first type of response and the second type of response based on the user interaction with the platform product (such as but not limited to cData), such as but not limited to by rendering the at least one user interface to receive the data. The platform product also can be configured to receive nData indicative of measurements made before, during, and/or after the user interacts with the cognitive platform (including nData from measurements of physiological or monitoring components and/or cognitive testing components). 
The at least one processing unit also can be programmed to: analyze the cData and/or nData to provide a measure of the individual's condition (including physiological and/or cognitive condition), and/or analyze the differences in the individual's performance based on determining the differences between the measures of the user's first type and second type of responses (including based on differences in the cData) and differences in the associated nData. The at least one processing unit also can be programmed to: adjust the difficulty level of the first task and/or the first interference based on the analysis of the cData and/or nData
(including the measures of the individual's performance and/or condition (including physiological and/or cognitive condition) determined in the analysis), and/or provide an output or other feedback from the platform product that can be indicative of the individual's performance, and/or cognitive assessment, and/or projected response to cognitive treatment, and/or assessed measures of cognition. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to a neuropsychological condition, including as to amyloid group, and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition), and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent, based on nData and the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData. In non-limiting examples, the at least one processing unit also can be programmed to classify an individual as to likelihood of onset and/or stage of progression of a
neuropsychological condition, including as to a neurodegenerative condition, based on nData and the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations) of that cData and the nData. The neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
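As a non-limiting illustration of the interference processing described above, the difference between the first type of response (to the primary task alone) and the second type of response (to the primary task in the presence of the interference) can be summarized as an "interference cost." The Python sketch below is not code from the disclosure; the reaction-time values and the proportional-cost formula are hypothetical choices for illustration.

```python
from statistics import mean

def interference_cost(single_task_rts, multitask_rts):
    """Interference cost: how much slower (proportionally) responses to the
    primary task become when an interference is presented alongside it.
    Inputs are lists of reaction times (in seconds) drawn from cData;
    positive values mean the interference degraded performance."""
    single = mean(single_task_rts)
    multi = mean(multitask_rts)
    return (multi - single) / single

# Hypothetical reaction times from the two presentation modes.
cost = interference_cost([0.42, 0.45, 0.40], [0.55, 0.60, 0.52])
print(round(cost, 3))
```

A near-zero cost would indicate that the individual's responses to the primary task are largely unaffected by the interference, while a large positive cost indicates substantial interference.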
[00114] In an example, the feedback from the differences in the individual's performance based on determining the differences between the measures of the user's first type and second type of responses and the nData can be used as an input in the cognitive platform that indicates real-time performance of the individual during one or more session(s). The data of the feedback can be used as an input to a computation component of the computing device to determine a degree of adjustment that the cognitive platform makes to a difficulty level of the first task and/or the first interference with which the user interacts in the same ongoing session and/or within a subsequently-performed session. [00115] As a non-limiting example, the cognitive platform based on interference processing can be a cognitive platform based on the Project: EVO™ platform by Akili Interactive Labs, Inc. (Boston, MA).
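The feedback loop described above, in which real-time performance drives the degree of difficulty adjustment, can be sketched as a simple staircase procedure. The accuracy target, step size, and clamping range below are illustrative assumptions, not values from the disclosure.

```python
def adjust_difficulty(level, accuracy, target=0.8, step=0.05,
                      lo=0.0, hi=1.0):
    """One feedback step: raise the difficulty level when the individual's
    accuracy in the current session exceeds the target, lower it when
    accuracy falls short, and clamp the result to the allowed range."""
    if accuracy > target:
        level += step
    elif accuracy < target:
        level -= step
    return max(lo, min(hi, level))

# Hypothetical per-session accuracies feeding the adjustment loop.
level = 0.5
for acc in [0.9, 0.9, 0.6, 0.85]:
    level = adjust_difficulty(level, acc)
print(round(level, 2))
```

In an actual platform the adjustment could also weigh nData (e.g., physiological measures) alongside the accuracy term; this sketch shows only the cData-driven path.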
[00116] In an example system, method and apparatus according to the principles herein that is based on interference processing, the user interface is configured such that, as a component of the interference processing, one of the discriminating features of the targeting task that the user responds to is a feature in the platform that displays an emotion, a shape, a color, and/or a position that serves as an interference element in interference processing.
[00117] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to set baseline metrics of CSI levels/attributes in APP session(s) based on measurements of nData indicative of physiological condition and/or cognitive condition (including indicators of neuropsychological disorders), to increase accuracy of assessment and efficiency of treatment. The CSIs may be used to calibrate an nData component to individual user dynamics of nData.
[00118] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use nData to detect states of attentiveness or
inattentiveness to optimize delivery of CSIs related to treatment or assessment.
[00119] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use analysis of nData with CSI cData to detect and direct attention to specific CSIs related to treatment or assessment through subtle or overt manipulation of CSIs.
[00120] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use analysis of CSI patterns of cData with nData within or across assessment or treatment sessions to generate user profiles (including profiles of ideal, optimal, or desired user responses) of cData and nData and to manipulate CSIs across or within sessions to guide users to replicate these profiles.
[00121] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to monitor nData for indicators of parameters related to user engagement and to optimize the cognitive load generated by the CSIs to align with time in an optimal engaged state to maximize neural plasticity and transfer of benefit resulting from treatment. As used herein, the term "neural plasticity" refers to targeted re-organization of the central nervous system. As a non-limiting example, an EEG measurement of the individual can be used to provide nData measures indicative of an attention of the individual as the individual interacts with the task and/or interference.
[00122] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to monitor nData indicative of anger and/or frustration to promote continued user interaction (also referred to as "play") with the cognitive platform by offering alternative CSIs or disengagement from CSIs.
[00123] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to change CSI dynamics within or across assessment or treatment sessions to optimize nData related to cognition or other physiological or cognitive aspects of the user.
[00124] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to adjust the CSIs or CSI cognitive load if nData signals of task automation are detected, or the physiological measurements that relate to task learning show signs of attenuation.
[00125] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to combine signals from CSI cData with nData to optimize individualized treatment promoting improvement of indicators of cognitive abilities, and thereby, cognition.
[00126] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use a profile of nData to confirm/verify/authenticate a user's identity.
[00127] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use nData to detect positive emotional response to CSIs in order to catalog individual user preferences to customize CSIs to optimize enjoyment and promote continued engagement with assessment or treatment sessions.
[00128] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to generate user profiles of cognitive improvement (such as but not limited to, user profiles associated with users classified or known to exhibit improved working memory, attention, processing speed, and/or perceptual detection/discrimination), and deliver a treatment that adapts CSIs to optimize the profile of a new user as confirmed by profiles from nData.
[00129] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to provide to a user a selection of one or more profiles configured for cognitive improvement.
[00130] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to monitor nData from auditory and visual physiological measurements to detect interference from external environmental sources that may interfere with the assessment or treatment being performed by a user using a cognitive platform or program product.
[00131] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use cData and/or nData (including metrics from analyzing the data) as a determinant or to make a decision as to whether a user (including a patient using a medical device) is likely to respond or not to respond to a treatment (such as but not limited to a cognitive treatment and/or a treatment using a biologic, a drug or other pharmaceutical agent). For example, the system, method, and apparatus can be configured to select whether a user (including a patient using a medical device) should receive treatment based on specific physiological or cognitive measurements that can be used as signatures that have been validated to predict efficacy of the cognitive platform in a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status). Such an example system, method, and apparatus configured to perform the analysis (and associated computation) described herein can be used as a biomarker to perform monitoring and/or screening. As a non-limiting example, the example system, method and apparatus can be configured to provide a quantitative measure of the degree of efficacy of a cognitive treatment (including the degree of efficacy in conjunction with use of a biologic, a drug or other pharmaceutical agent) for a given individual or certain individuals of the population (e.g., individual(s) classified to a given group based on amyloid status). In some examples, the individual or certain individuals of the population may be classified as having a certain neurodegenerative condition.
[00132] An example system, method, and apparatus according to the principles herein includes a cognitive platform and/or platform product (which may include an APP) that is configured to use nData to monitor a user's ability to anticipate CSI(s) and to manipulate CSI patterns and/or rules to disrupt user anticipation of response to CSIs, to optimize treatment or assessment in use of a cognitive platform or program product.
[00133] Non-limiting examples of analysis (and associated computations) that can be performed based on various combinations of different types of nData and cData are described. The following example analyses and associated computations can be implemented using any example system, method and apparatus according to the principles herein.
[00134] Non-limiting example system, method, and apparatus according to the principles herein provide a cognitive platform and/or platform product that is configured to produce a fast and accurate assessment for amyloid status or a neuropsychological condition in older individuals.
[00135] The example cognitive platform and/or platform product is configured to implement a predictive model (such as but not limited to a classifier model) trained using a clinical trial data set that includes an indication of the amyloid status or a neuropsychological condition of individuals participating in the clinical trial.
[00136] In various examples herein, an individual about 50, about 55, or about 60 years of age or older can be classified as an older individual.
[00137] Non-limiting example system, method, and apparatus according to the principles herein provide a cognitive platform and/or platform product that is configured to implement an example predictive model (such as but not limited to a classifier model) that is configured to identify individuals having a positive amyloid status versus a negative amyloid status with a high degree of accuracy based on measurement data (including cData) from at least one user interaction with the example cognitive platform and/or platform product. For example, the example predictive model (such as but not limited to a classifier model) can be configured to identify individuals that have positive amyloid status with about a 77% degree of accuracy, and to identify individuals that have negative amyloid status with about a 90% degree of accuracy, based on measurement data (including cData) from at least one user interaction with the example cognitive platform and/or platform product (including a single user interaction).
[00138] FIG. 1 shows data derived from applying an example predictive model (such as but not limited to a classifier model) to data indicative of user interaction (a screen) with an example cognitive platform and/or platform product in an initial screen. The graph shows plots of data indicative of sensitivity and specificity as values of percentage (y-axis) versus values of cData (as score on targeting tasks) derived from the user interaction with the example cognitive platform and/or platform product (x-axis). The targeting scores for users having negative amyloid status (indicated using triangles) appear at a first set of values, and the targeting scores for users having positive amyloid status (indicated using circles) appear at a second set of values. The graph shows that the predictive model (such as but not limited to a classifier model) based on data from the initial screen can be used to separate a population of users according to an indication of amyloid status.
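The separation of amyloid-positive and amyloid-negative groups by targeting score described above amounts to choosing a decision threshold on the score axis and reading off the sensitivity and specificity it yields. As a non-limiting sketch (the scores, labels, threshold, and the convention that lower scores indicate positive status are invented for illustration, not the trial data):

```python
def sensitivity_specificity(scores, labels, threshold):
    """Classify an individual as amyloid-positive when their targeting score
    falls below `threshold` (an assumed convention), then compare the
    predictions against the true labels to compute sensitivity (true-positive
    rate) and specificity (true-negative rate)."""
    tp = fn = tn = fp = 0
    for score, positive in zip(scores, labels):
        predicted_positive = score < threshold
        if positive and predicted_positive:
            tp += 1
        elif positive:
            fn += 1
        elif predicted_positive:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical targeting scores; True marks a known amyloid-positive individual.
scores = [52, 58, 61, 70, 74, 80, 85, 91]
labels = [True, True, True, False, True, False, False, False]
sens, spec = sensitivity_specificity(scores, labels, threshold=72)
print(sens, spec)
```

Sweeping the threshold across the score range produces the paired sensitivity and specificity curves of the kind plotted in FIG. 1.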
[00139] FIGs. 2 and 3 show plots of data derived from a cross-validation routine conducted on the predictive model (such as but not limited to a classifier model) of FIG. 1, to show the predictive accuracy of the model.
[00140] The non-limiting example predictive model (such as but not limited to a classifier model) can be trained to generate predictors of the amyloid status of individuals using training cData and corresponding nData, and based on metrics collected from at least one interaction of users with an example cognitive platform and/or platform product. The training nData can include data indicative of the amyloid status and age of each user that corresponds to cData collected for a given user (such as but not limited to that user's score from at least one interaction with any example cognitive platform and/or platform product herein). In some examples, the nData can include data indicative of the gender of the user. For example, the cData can be collected based on a limited user interaction, e.g., on the order of a few minutes, with any example cognitive platform and/or platform product herein. The length of time of the limited user interaction can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes. The example cognitive platform and/or platform product can be configured to implement an assessment session (such as but not limited to an assessment implemented using a Project: EVO™ platform).
[00141] Non-limiting example system, method, and apparatus according to the principles herein also provide a cognitive platform and/or platform product that is configured to implement an example predictive model (such as but not limited to a classifier model) that is configured to identify individuals having a positive amyloid status versus a negative amyloid status with a high degree of accuracy based on measurement data (including cData) from a plurality of user interactions with the example cognitive platform and/or platform product. For example, the example predictive model (such as but not limited to a classifier model) can be configured to identify individuals that have positive amyloid status with about an 83% degree of accuracy, and to identify individuals that have negative amyloid status with about a 79% degree of accuracy, based on measurement data (including cData) from comparing baseline performance data in the first moments of the user performance of a first assessment using the example cognitive platform and/or platform product with values of performance data from the user performance of three (3) subsequent assessments using the example cognitive platform and/or platform product.
[00142] FIG. 4 shows data derived from applying an example predictive model (such as but not limited to a classifier model) to data indicative of user interactions (screens) with an example cognitive platform and/or platform product in a plurality of screens (in this example, four (4) screens). Each screen is at least one trial or session of interaction with the cognitive platform and/or platform product. The graph shows plots of data indicative of sensitivity and specificity as values of percentage (y-axis) versus values of cData (as score on targeting tasks) derived from the user interaction with the example cognitive platform and/or platform product (x-axis). The targeting scores for users having negative amyloid status (indicated using triangles) appear at a first set of values, and the targeting scores for users having positive amyloid status (indicated using circles) appear at a second set of values. The graph shows that the predictive model (such as but not limited to a classifier model) based on data from the multiple screens can be used to separate a population of users according to an indication of amyloid status. [00143] The non-limiting example predictive model (such as but not limited to a classifier model) according to the principles herein can be trained to generate predictors of the amyloid status or a neuropsychological condition, including as to a neurodegenerative condition, of individuals using training cData and corresponding nData, and based on metrics collected from a plurality of interactions of users with an example cognitive platform and/or platform product. The training nData can include data indicative of the amyloid status or a neuropsychological condition, including as to a neurodegenerative condition, and age of each user. In some examples, the nData can include data indicative of the gender of the user.
The corresponding cData is collected for a given user (such as but not limited to that user's score from at least one interaction with any example cognitive platform and/or platform product herein). For example, the cData can be collected based on a plurality of interaction sessions of a user using a cognitive platform and/or platform product herein, e.g., two or more interaction sessions. The length of time of each interaction session can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes. The example cognitive platform and/or platform product can be configured to implement the plurality of assessment sessions (such as but not limited to an assessment implemented using a Project: EVO™ platform).
[00144] Example systems, methods, and apparatus according to the principles herein also provide a cognitive platform and/or platform product (which may include an APP) that is configured to implement computerized tasks to produce cData. The example cognitive platform and/or platform product can be configured to use cData from a user interaction as inputs to a predictive model (such as but not limited to a classifier model) that determines the likelihood of positive amyloid burden of the user to a high degree of accuracy using a classifier model. The example cognitive platform and/or platform product can be configured to use cData from a user interaction as inputs to a predictive model (such as but not limited to a classifier model) that determines the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, such as but not limited to attention deficit
hyperactivity disorder (ADHD), sensory-processing disorder (SPD), mild cognitive impairment (MCI), Alzheimer's disease, multiple sclerosis, schizophrenia,
depression, or anxiety. [00145] The example cognitive platform and/or platform product (which may include an APP) can be configured to collect performance data from a single assessment procedure that is configured to sequentially present a user with tasks that challenge cognitive control and executive function to varying degrees, and use the resulting cData representative of time ordered performance measures as the basis for the determination of a user's amyloid status, or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, using a classifier model.
[00146] According to the principles herein, an example cognitive platform and/or platform product (which may include an APP) can be configured to implement a predictive model (such as but not limited to a classifier model) and computerized tasks, such that data indicative of an individual's performance of the computerized tasks (including cData) in the first moments of the individual's interaction with the example cognitive platform and/or platform product can be compared to data indicative of the individual's performance of the computerized tasks (including cData) in subsequent moments to provide an indication of the user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder. In various examples, the data indicative of the first moments of the individual's interaction with the example cognitive platform and/or platform product can be collected in time periods ranging from the first moments of the individual's interaction, e.g., the first few seconds, about the first 5 seconds, about the first 10 seconds, about the first 20 seconds, about the first 30 seconds, about the first 45 seconds, about the first minute, about the first 1.5 minutes, about the first 3 minutes, about the first 5 minutes, about the first 7.5 minutes, about the first 10 minutes, or other reasonable initial time interval, of the individual's initial interaction with the example cognitive platform and/or platform product. In these examples, the data indicative of the individual's performance of the computerized tasks (including cData) in the subsequent moments of the individual's interaction with the example cognitive platform and/or platform product can be collected over other time points or time intervals of the individual's interaction with the example cognitive platform and/or platform product.
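Comparing performance in the first moments of an interaction with performance in the subsequent moments, as described above, reduces in a minimal sketch to splitting time-stamped cData at a cutoff and differencing the two summaries. The field layout and the 30-second cutoff below are assumptions for illustration, not values from the disclosure.

```python
from statistics import mean

def early_late_delta(samples, cutoff_s=30.0):
    """Split (timestamp_seconds, score) samples into an early window
    (the 'first moments') and the remainder, and return the change in
    mean score between the two windows."""
    early = [s for t, s in samples if t <= cutoff_s]
    late = [s for t, s in samples if t > cutoff_s]
    if not early or not late:
        raise ValueError("need samples on both sides of the cutoff")
    return mean(late) - mean(early)

# Hypothetical (time, score) samples from one interaction.
samples = [(5, 0.6), (15, 0.62), (25, 0.61), (60, 0.72), (120, 0.75)]
print(round(early_late_delta(samples), 3))
```

The resulting delta (or a set of such deltas across assessments) is the kind of comparison metric that could then be fed to a predictive model as an input feature.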
[00147] According to the principles herein, an example cognitive platform and/or platform product (which may include an APP) can be configured to implement a computational predictive model (such as but not limited to a classifier model) that combines data collected during these first moments and subsequent moments to provide a metric that can be compared to a population sample of matched age and gender, to provide an indication of the user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a
neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
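The comparison of a combined metric to a population sample matched on age and gender, as described above, can be sketched as a normative z-score lookup. The norm table, age bands, and values below are invented for illustration; a deployed platform would draw on a validated normative data set.

```python
from statistics import mean, stdev

# Hypothetical normative samples keyed by (age band, gender).
NORMS = {
    ("60-69", "F"): [0.58, 0.61, 0.63, 0.60, 0.66, 0.59],
    ("60-69", "M"): [0.55, 0.57, 0.62, 0.54, 0.60, 0.58],
}

def normative_z(metric, age_band, gender):
    """Express an individual's combined metric as the number of standard
    deviations it falls from the mean of the matched population sample."""
    sample = NORMS[(age_band, gender)]
    return (metric - mean(sample)) / stdev(sample)

z = normative_z(0.50, "60-69", "F")
print(round(z, 2))
```

A strongly negative z-score relative to the matched sample is the kind of signal that could contribute to an indication of likelihood of positive amyloid burden or of a neuropsychological condition.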
[00148] The validity of the predictive model (such as but not limited to a classifier model) and the principles of sequential testing of novel executive function tasks of this innovation is evaluated using a cross-validation procedure. The results indicate that the example cognitive platform and/or platform product (which may include an APP) can be configured to implement the predictive model (such as but not limited to a classifier model) to detect amyloid burden in individuals with a high degree of accuracy.
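The cross-validation procedure mentioned above can be sketched as k-fold validation of a simple threshold classifier: the model is fitted on the training folds and scored only on held-out individuals. The classifier, fold scheme, and data here are illustrative stand-ins for the trained predictive model, not the procedure of the disclosure.

```python
def best_threshold(scores, labels):
    """Pick the score cutoff that best separates positives (score below the
    cutoff, an assumed convention) from negatives on the training fold."""
    def accuracy(t):
        return sum((s < t) == y for s, y in zip(scores, labels)) / len(scores)
    return max(sorted(set(scores)), key=accuracy)

def k_fold_accuracy(scores, labels, k=4):
    """Fit the threshold on k-1 folds and score it on the held-out fold,
    accumulating held-out accuracy across all folds."""
    n = len(scores)
    correct = 0
    for i in range(k):
        test_idx = set(range(i, n, k))   # simple striped folds
        train_s = [s for j, s in enumerate(scores) if j not in test_idx]
        train_y = [y for j, y in enumerate(labels) if j not in test_idx]
        t = best_threshold(train_s, train_y)
        correct += sum((scores[j] < t) == labels[j] for j in test_idx)
    return correct / n

# Hypothetical scores and amyloid labels (True = positive status).
scores = [50, 55, 58, 62, 75, 80, 84, 90]
labels = [True, True, True, True, False, False, False, False]
print(k_fold_accuracy(scores, labels))
```

Held-out accuracy below the training accuracy would indicate overfitting; a cross-validated estimate of this kind underlies plots such as FIGs. 2 and 3.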
[00149] The example cognitive platforms or platform products are configured to present assessments that sufficiently challenge a user's cognitive control, attention, working memory, and task engagement.
[00150] The example classifier models according to the principles herein can be used to predict, with a greater degree of accuracy, a user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, based on data (including cData) generated from a user's first interaction with the example cognitive platform and/or platform product (e.g., as an initial screening).
[00151 ] The example classifier models according to the principles herein can be used to predict, with a greater degree of accuracy, a user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, based on a comparison of data (including cData) generated from a user's first moments of interaction (including first trial or first session) with the example cognitive platform and/or platform product and the subsequent moments of interaction (including one or more subsequent trials or sessions) with the example cognitive platform and/or platform product.
[00152] In any example herein, the length of time of the user interaction in the first trial or first session can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes. In any example herein, the length of time of the user interaction in each of the one or more
subsequent trials or sessions can be, e.g., about 5 minutes, about 7 minutes, about 10 minutes, about 15 minutes, about 20 minutes, or about 30 minutes.
[00153] In a non-limiting example, the example analyses (and associated
computations) can be implemented by applying one or more linear mixed model regression models to the data (including data and metrics derived from the cData and/or nData). As a non-limiting example, the analysis can be based on a covariate adjustment of comparisons of data for given individuals, i.e., an analysis of factors with multiple measurements (usually longitudinal) for each individual. As a non-limiting example, the analysis can be configured to account for the correlation between measurements, since the data originates from the same source. In this example as well, the analysis can be based on a covariate adjustment of
comparisons of data between individuals using a single dependent variable or multiple variables.
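A full linear mixed model calls for a statistics library, but the core idea described above (accounting for the correlation among repeated, longitudinal measurements originating from the same individual) can be sketched by centering each individual's measurements on that individual's own mean before comparing across individuals. This is an illustrative simplification of the random-intercept term of a mixed model, not the regression used in the disclosure.

```python
from statistics import mean
from collections import defaultdict

def within_individual_center(records):
    """records: (individual_id, measurement) pairs with repeated measures
    per individual. Returns pairs with each measurement re-expressed as a
    deviation from that individual's own mean, so stable between-individual
    offsets (the 'random intercepts' of a mixed model) are removed before
    any between-individual comparison."""
    by_id = defaultdict(list)
    for ind, value in records:
        by_id[ind].append(value)
    means = {ind: mean(vals) for ind, vals in by_id.items()}
    return [(ind, value - means[ind]) for ind, value in records]

# Hypothetical longitudinal measurements for two individuals.
records = [("p1", 10.0), ("p1", 12.0), ("p2", 30.0), ("p2", 34.0)]
print(within_individual_center(records))
```

After centering, both individuals' trajectories are on a common scale even though their raw measurement levels differ substantially.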
[00154] In each example implementation, the cData is obtained based on interactions of each individual with any one or more of the example cognitive platforms and/or platform products described herein.
[00155] In a non-limiting example implementation, the cData used can be derived as described herein using an example cognitive platform and/or platform product that is configured to implement a sequence that could include at least one initial assessment session. Examples of additional assessments can include a first challenge session (including one or more sessions that are rendered with an adaptation (i.e., adjustment or other change) in the difficulty level of the primary task and/or the interference over a previous session, as described herein), a first training session (including one or more sessions that are rendered with a similar difficulty level of the primary task and/or the interference over a previous session, as described herein), a second training session (including one or more sessions that are rendered with a similar difficulty level of the primary task and/or the interference over a previous session, as described herein), and/or a second challenge session (including one or more sessions that are rendered with an adaptation (i.e., adjustment or other change) in the difficulty level of the primary task and/or the interference over a previous session, as described herein). The cData is collected based on measurements of the responses of the individual with the example cognitive platform and/or platform product during one or more segments of the assessment(s). For example, the cData can include data collected by the cognitive platform and/or platform product to quantify the interaction of the individual with the first moments of an initial assessment as well as data collected to quantify the interaction of the individual with the subsequent moments of an initial assessment. 
In another example, the cData can include data collected by the cognitive platform and/or platform product to quantify the interaction of the individual with the initial assessment as well as data collected to quantify the interaction of the individual with one or more additional assessments. For one or more of the sessions (i.e., sessions of the initial assessments and/or the additional assessment), the example cognitive platform and/or platform product can be configured to present computerized tasks and platform interactions that inform cognitive assessment (screening or monitoring) or deliver treatment. The tasks can be single-tasking tasks and/or multi-tasking tasks (that include primary tasks with an interference). One or more of the tasks can include CSIs.
[00156] Non-limiting examples of the types of cData that can be derived from the interactions of an individual with the cognitive platform and/or platform product are as follows. The cData can be one or more scores generated by the cognitive platform and/or platform product based on the individual's response(s) in performance of a single-tasking task presented by the cognitive platform and/or platform product. The single-tasking task can be, but is not limited to, a targeting task, a navigation task, a facial expression recognition task, or an object recognition task. The cData can be one or more scores generated by the cognitive platform and/or platform product based on the individual's response(s) in performance of a multi-tasking task presented by the cognitive platform and/or platform product. The multi-tasking task can include a targeting task and/or a navigation task and/or a facial expression recognition task and/or an object recognition task, where one or more of the multi-tasking tasks can be presented as an interference with one or more primary tasks. The cData collected can be a scoring representative of the individual's response(s) to each task of the multitask task(s) presented, and/or combination scores representative of the individual's overall response(s) to the multi-task task(s). The combination score can be derived based on computation using any one or more of the scores collected from the individual's response(s) to each task of the multi-task task(s) presented, such as but not limited to a mean, mode, median, average, difference (or delta), standard deviation, or other type of combination. In a non-limiting example, the cData can include measures of the individual's reaction time to one or more of the tasks. The cData can be generated based on an analysis (and associated computation) performed using the other cData collected or derived using the cognitive platform and/or platform product. 
The analysis can include computation of an interference cost or other cost function. The cData can also include data indicative of an individual's compliance with a pre-specified set and type of interactions with the cognitive platform and/or platform product, such as but not limited to a percentage completion of the pre- specified set and type of interactions. The cData can also include data indicative of an individual's progression of performance using the cognitive platform and/or platform product, such as but not limited to a measure of the individual's score versus a pre- specified trend in progress.
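The interference cost and combination scores described above can be sketched in code. This is a minimal illustration, assuming scores normalized to [0, 1] and a fractional-drop convention for the interference cost; the platform's actual cost function and score scales are not specified here, and the mean shown is only one of the combination types listed above.

```python
# Sketch of an interference-cost computation, assuming cData scores are
# normalized to [0, 1]. The fractional-drop formula (isolated minus multitask
# performance, scaled by isolated performance) is one common convention; the
# platform may use a different cost function.

def interference_cost(iso_score: float, multi_score: float) -> float:
    """Fractional drop in performance when an interference is added."""
    if iso_score == 0:
        return 0.0
    return (iso_score - multi_score) / iso_score

def combination_score(task_scores: list) -> float:
    """One possible combination score: the mean of the per-task scores."""
    return sum(task_scores) / len(task_scores)

cost = interference_cost(iso_score=0.80, multi_score=0.60)
print(round(cost, 3))  # 0.25
```

A larger cost indicates a larger performance penalty under interference, which is the kind of derived cData the analyses below operate on.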
[00157] In the non-limiting example implementations, the cData can be collected from a user interaction with the example cognitive platform and/or platform product at, and/or two or more separate times during the time period leading up to, one or more specific timepoints: an initial timepoint (T1) representing an endpoint of the first moments (as defined herein) of an initial assessment session, and a second timepoint (T2) and/or a third timepoint (T3) representing endpoints of the subsequent moments of the initial assessment session.
[00158] As non-limiting examples, the measurement timepoints T2 and T1 can be separated by about 5 minutes, about 7 minutes, about 15 minutes, about 1 hour, about 12 hours, about 1 day, about 5 days, about 10 days, about 15 days, about 20 days, about 28 days, about a month, or more. As non-limiting examples, the measurement timepoints T3 and T2 can be separated by about 5 minutes, about 7 minutes, about 15 minutes, about 1 hour, about 12 hours, about 1 day, about 5 days, about 10 days, about 15 days, about 20 days, about 28 days, about a month, or more.
[00159] In the non-limiting example implementations, the example cognitive platform and/or platform product can be configured for interaction with the individual over multiple different assessment sessions. In an example, the cData can be collected at timepoints Ti associated with the initial assessment session and later timepoints TL associated with the interactions of the individual with the multiple additional assessment sessions. For one or more of these multiple different sessions, the example cognitive platform and/or platform product can be configured for screening, for monitoring, and/or for treatment, as described in the various examples herein.
[00160] In a non-limiting example implementation, the example analyses (and associated computations) can be implemented based at least in part on the cData and nData such as but not limited to data indicative of age, gender, APOE level, and fMRI measures (e.g., cortical thickness, or brain functional activity changes). The results of these example analyses (and associated computations) can be used to provide data indicative of amyloid status of individual(s) and/or the individual's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder. As described herein, the example cData and nData can be used to train an example classifier model. The example predictive model (such as but not limited to a classifier model) can be implemented using a cognitive platform and/or platform product to provide data indicative of differences between the individuals having an amyloid positive (A+) status and the individuals having an amyloid negative (A-) status. Example systems, methods, and apparatus herein are configured to perform the analysis (and associated computation) described herein using a predictive model (such as but not limited to a classifier model) to designate individuals of the population as having an amyloid positive (A+) status or an amyloid negative (A-) status, and/or indicate the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
[00161 ] In a non-limiting example, the example analyses (and associated computations) can be implemented by applying one or more ANCOVA (analysis of covariance) models to the data (including data and metrics derived from the cData and/or nData). ANCOVA provides a linear model that blends analysis of variance (ANOVA) and regression. As a non-limiting example, the analysis can be based on a covariate adjustment of comparisons of data between individuals using a single dependent variable or multiple variables.
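The covariate adjustment underlying the ANCOVA comparison can be sketched as follows. This is a minimal plain-Python illustration, assuming a single covariate (e.g., age from the nData) and a single dependent variable (e.g., a cData performance score); the group data are invented for the example, and the full analysis would use a complete linear-model fit rather than this two-group adjusted-means shortcut.

```python
# Minimal sketch of the covariate adjustment behind ANCOVA: group means of a
# dependent variable (e.g., a cData score) are adjusted for a covariate
# (e.g., age) using the pooled within-group slope. Data are illustrative.

def pooled_slope(groups):
    """Pooled within-group regression slope of y on x."""
    sxy = sxx = 0.0
    for xs, ys in groups:
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        sxy += sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx += sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def adjusted_means(groups):
    """ANCOVA-adjusted group means of y, evaluated at the grand mean of x."""
    b = pooled_slope(groups)
    all_x = [x for xs, _ in groups for x in xs]
    grand_mx = sum(all_x) / len(all_x)
    return [sum(ys) / len(ys) - b * (sum(xs) / len(xs) - grand_mx)
            for xs, ys in groups]

# Two groups of (age, score) observations, split into x and y lists.
group_a = ([60, 65, 70], [0.80, 0.75, 0.70])
group_b = ([70, 75, 80], [0.70, 0.65, 0.60])
print([round(m, 3) for m in adjusted_means([group_a, group_b])])  # [0.7, 0.7]
```

In this toy data the raw group means differ (0.75 vs. 0.65), but the age-adjusted means coincide, illustrating how a covariate adjustment can change a between-group comparison.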
[00162] A non-limiting example predictive model (such as but not limited to a classifier model) can be configured to perform the analysis (and associated computation) using the cData and nData based on various analysis models. Differing analysis models can be applied to data collected from user interactions with the cognitive platform or the platform product (cData) collected at, and/or collected two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and/or collected two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL). The analysis model can be based on an ANCOVA model and/or a linear mixed model regression model, applied to a restricted data set (based on age and gender nData) or a larger data set (based on age, gender, APOE expression group, fMRI, and other nData). The example cognitive platform or platform product can be used to collect cData at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL), to apply the predictive model (such as but not limited to a classifier model) to compare the cData collected at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) to the cData collected at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL) to derive an indicator that designates individuals of the population as having an amyloid positive (A+) status or an amyloid negative (A-) status, and/or that indicates the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
[00163] In a non-limiting example classifier model, the analysis (and associated computation) can be performed to determine a measure of the sensitivity and specificity of the cognitive platform or the platform product to identify and classify the individuals of the population that have an amyloid positive (A+) status, based on applying a logistic regression model to the data collected (including the cData and/or the nData).
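The logistic-regression step and the sensitivity/specificity computation can be sketched as follows. The (score, label) data, the single-feature model, and the 0.5 decision threshold are all illustrative assumptions, not the patent's fitted model; a production analysis would use a full statistical package rather than this hand-rolled gradient descent.

```python
# Hedged sketch: a tiny logistic-regression classifier (plain gradient
# descent, no external libraries) fit to illustrative (score, label) data,
# followed by the sensitivity/specificity computation described above.
# Labels: 1 = amyloid positive (A+), 0 = amyloid negative (A-).
import math

def fit_logistic(xs, ys, lr=0.5, steps=2000):
    """Fit y ~ sigmoid(b0 + b1*x) by stochastic gradient descent."""
    b0 = b1 = 0.0
    for _ in range(steps):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            b0 += lr * (y - p)
            b1 += lr * (y - p) * x
    return b0, b1

def sensitivity_specificity(ys, preds):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for y, p in zip(ys, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(ys, preds) if y == 0 and p == 0)
    fn = sum(1 for y, p in zip(ys, preds) if y == 1 and p == 0)
    fp = sum(1 for y, p in zip(ys, preds) if y == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative interference-cost scores; higher cost -> A+ in this toy data.
xs = [0.10, 0.15, 0.20, 0.35, 0.40, 0.45]
ys = [0, 0, 0, 1, 1, 1]
b0, b1 = fit_logistic(xs, ys)
preds = [1 if 1.0 / (1.0 + math.exp(-(b0 + b1 * x))) >= 0.5 else 0 for x in xs]
sens, spec = sensitivity_specificity(ys, preds)
print(sens, spec)
```

On this cleanly separable toy data the classifier reaches perfect sensitivity and specificity; real cData/nData would of course yield values below 1.0.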
[00164] In the non-limiting example classifier model, the analysis (and associated computation) can be performed to determine a measure of the sensitivity and specificity of the cognitive platform or the platform product to identify and classify the individuals of the population according to APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition), based on applying a logistic regression model to the data collected (including the cData and/or the nData), including the amyloid level data.
[00165] In this example implementation, certain cData collected from the individual's interaction with the tasks (and associated CSIs) presented by the cognitive platform and/or platform product, and/or metrics computed using the cData based on the analysis (and associated computations) described, can co-vary or otherwise correlate with the nData, such as but not limited to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition). An example cognitive platform and/or platform product according to the principles herein can be configured to classify an individual as to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition) based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations). The example cognitive platform and/or platform product can include, or communicate with, a machine learning tool or other computational platform that can be trained using the cData and nData to perform the classification using the example classifier model.
[00166] The example analysis (and associated computation) can be performed by comparing each variable using any example model described herein for the nData corresponding to the drug group along with a covariate set. The example analysis (and associated computation) also can be performed by comparing effects of group classification (such as but not limited to grouping based on amyloid status) versus drug interactions, where the cData (from performance of single-tasking tasks and/or multitasking tasks) are compared to determine the efficacy of the drug on the individual's performance. The example analysis (and associated computation) also can be performed by comparing effects of group classification (such as but not limited to grouping based on amyloid status) versus drug interactions for sessions of user interaction with the cognitive platform and/or platform product, where the cData (from performance of single-tasking tasks and/or multi-tasking tasks) are compared to determine the efficacy of the drug on the individual's performance. The example analysis (and associated computation) also can be performed by comparing effects of group classification (such as but not limited to grouping based on amyloid status) versus drug interactions for sessions (and types of tasks) of user interaction with the cognitive platform and/or platform product, where the cData (from performance of single-tasking tasks and/or multi-tasking tasks) are compared to determine the efficacy of the drug on the individual's performance.
[00167] In this example implementation of a classifier model, certain cData collected from the individual's interaction with the tasks (and associated CSIs) presented by the cognitive platform and/or platform product, and/or metrics computed using the cData based on the analysis (and associated computations) described, can co-vary or otherwise correlate with the nData, such as but not limited to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition) and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent. An example cognitive platform and/or platform product according to the principles herein can be configured to classify an individual as to amyloid group and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition) and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent based on the cData collected from the individual's interaction with the cognitive platform and/or platform product and/or metrics computed based on the analysis (and associated computations). The example cognitive platform and/or platform product can include, or communicate with, a machine learning tool or other computational platform that can be trained using the cData and nData to perform the classification using the example classifier model.
[00168] A non-limiting example predictive model (such as but not limited to a classifier model) based on data from an initial screen can be derived from
assessment measurement data (cData) collected from a single initial session (such as but not limited to a session lasting as few as about 5 to 7 minutes). Additional inputs can be nData such as but not limited to the participant's age and/or gender. The example predictive model (such as but not limited to a classifier model) can be generated based on a formulation of a linear model and fitting techniques to estimate the model's parameters.
[00169] In a non-limiting example, the predictive model can be expressed as a function of variables related to (i) the age (ageScore) and/or gender (genderCode) of the individual, and (ii) a performance score that depends on measurement data indicative of the individual's physical actions in response to interactions with computerized primary tasks in the presence of an interference (intScore) as described herein. In another non-limiting example, the performance score of the predictive model can be expressed based on differences between measurement data indicative of the individual's physical actions in response to interactions with computerized primary tasks in the presence of an interference (intScore) and measurement data indicative of the individual's physical actions in response to interactions with computerized tasks without interference (isoScore), such as a primary task without interference or a secondary task without interference.
[00170] In a non-limiting example, to determine the parameters of the example predictive model, the model can be fit to the data in the plurality of training datasets described herein. As described, each training dataset corresponds to a previously classified individual of a plurality of individuals, and each training dataset includes: (i) data representing at least one of the performance score, age, or gender identifier of the classified individual and (ii) data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
[00171 ] In a non-limiting example, the first example predictive model (such as but not limited to a classifier model) can be expressed as follows:
y_isoScore = β_0 + β_intScore · x_intScore + β_age · x_ageScore + β_gender · x_genderCode + β_amyloidStatus · x_amyloidCode + ε
where coefficient β indicates an estimated coefficient, coefficient/variable x represents a participant's score or categorical assignment, γ is the predicted value of the classifier model, and ε is the model error. The non-limiting example predictive model computes a scoring based on values such as data indicative of an age (ageScore) and a gender identifier (genderCode) of the individual, data indicative of the individual's performance of tasks without interference (isoScore) and in the presence of interference (e.g., intScore), and data indicative of a physiological measure (e.g., amyloidStatus or amyloidCode). This example predictive model (such as but not limited to a classifier model) can be trained using a machine learning tool or other computational platform using the cData and nData collected from user interactions (with the example cognitive platform and/or platform product) of individuals having known amyloid status, to indicate the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder. [00172] A second example predictive model (such as but not limited to a classifier model) can be based on data from user interaction with multiple screens, i.e., user interaction with an initial screen and with multiple additional sessions, such as but not limited to three (3) additional assessments. The second example predictive model (such as but not limited to a classifier model) is constructed similarly to the first example predictive model, except the participant's interference score (x_intScore) (i.e., a score for a task involving a primary task presented with an interference as described herein) is replaced by the average of the score on a single-tasking task (isoScore) from about three (3) subsequent assessments (x_meanPostIsoScore). In this non-limiting example, the computation for the predictive model (such as but not limited to a classifier model) can be expressed as follows:
y_isoScore = β_0 + β_postIso · x_meanPostIsoScore + β_age · x_ageScore + β_gender · x_genderCode + β_amyloidStatus|type · x_amyloidCode · x_typeCode + ε
where coefficient β indicates an estimated coefficient, coefficient/variable x represents a participant's score or categorical assignment, γ is the predicted value of the classifier model, and ε is the model error. This example predictive model (such as but not limited to a classifier model) can be trained using a machine learning tool or other computational platform using the cData and nData collected from user interactions (with the example cognitive platform and/or platform product) of individuals having known amyloid status, to indicate the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder.
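The x_meanPostIsoScore input used by this second model can be sketched as an average of the single-tasking (isoScore) results from the subsequent assessment sessions, fed with the other inputs through a linear predictor of the same general form. The coefficient values and feature values below are placeholders, not estimated parameters from any fitted model.

```python
# Hedged sketch: forming x_meanPostIsoScore from the single-tasking
# (isoScore) results of three subsequent assessments, then evaluating a
# linear predictor of the same general form as the model above.
# Coefficient values are illustrative placeholders.

def mean_post_iso(iso_scores):
    """Average isoScore over the subsequent assessment sessions."""
    return sum(iso_scores) / len(iso_scores)

def predict(beta0, coeffs, features):
    """Linear predictor: beta0 + sum of beta_i * x_i."""
    return beta0 + sum(b * x for b, x in zip(coeffs, features))

post_iso = mean_post_iso([0.70, 0.74, 0.78])  # three later sessions
# Features: isoScore, meanPostIsoScore, age score, gender code (placeholders).
features = [0.80, post_iso, 0.65, 1.0]
y = predict(0.1, [0.5, -0.3, 0.2, 0.05], features)
print(round(post_iso, 2))  # 0.74
```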
[00173] An example cognitive platform and/or platform product configured to implement the predictive model (such as but not limited to a classifier model) provides certain attributes. The example cognitive platform and/or platform product can be configured to classify a user according to the user's likelihood of positive amyloid burden, and/or the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, based on faster data collection. For example, the data collection from an assessment performed using the example cognitive platform and/or platform product herein can be in a few minutes (e.g., in as few as about 5 or 7 minutes for an example predictive model (such as but not limited to a classifier model) based on an initial screen). This is much faster than existing assessments, which can require lengthy office visits or time-consuming medical procedures. In an example where a predictive model (such as but not limited to a classifier model) based on multiple assessment sessions is implemented for additional accuracy, the time requirements are still acceptably short (e.g., up to about 40 minutes for a total of four (4) assessments).
[00174] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be easily and remotely deployable on a mobile device such as but not limited to a smart phone or tablet. Existing assessments may require clinician participation, may require the test to be performed in a laboratory/clinical setting, and/or may require invasive on-site medical procedures.
[00175] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be delivered in an engaging format (such as but not limited to a 'game-like' format) that encourages user engagement and improves effective use of the assessment, thus increasing accuracy.
[00176] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be configured to combine orthogonal metrics from different tasks collected in a single session for highly accurate results.
[00177] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) provides an easily deployable, cost-effective, engaging, short-duration assessment of amyloid status, and/or an indication of the user's likelihood of onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder, with a high degree of accuracy.
[00178] As non-limiting examples, at least a portion of the example predictive model (such as but not limited to a classifier model) herein can be implemented in the source code of an example cognitive platform and/or platform product, and/or within a data processing application program interface housed in an internet server. [00179] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be used to provide data indicative of a likelihood of positive amyloid status to one or more of an individual, a physician, a clinician, or other medical or healthcare practitioner, or physical therapist.
[00180] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be used as a screening tool to determine amyloid positive or amyloid negative status of individuals, such as but not limited to, for clinical trials, or other drug trials, or for use by a private physician/clinician practice, and/or for an individual's self- assessment (with corroboration by a medical practitioner).
[00181] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be used as a screening tool to provide an accurate assessment of an individual's amyloid status to inform whether additional tests, such as but not limited to a PET scan, are to be performed to confirm or clarify amyloid status.
[00182] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be used as a clinical trial screening product that increases the efficiency of identifying amyloid status (whether amyloid positive or negative) of individuals and provides significant cost savings by eliminating the need for unnecessary and expensive traditional detection methods (such as but not limited to PET scans).
[00183] An example cognitive platform and/or platform product herein configured to implement the predictive model (such as but not limited to a classifier model) can be used in a clinical or private healthcare setting to provide an indication of the likelihood of a positive or negative amyloid status of an individual without need for expensive traditional tests (which may be unnecessary).
[00184] As described hereinabove, the example systems, methods, and apparatus according to the principles herein can be implemented, using at least one processing unit of a programmed computing device, to provide the cognitive platform and/or platform product. FIG. 5 shows an example apparatus 500 according to the principles herein that can be used to implement the cognitive platform and/or platform product including the predictive model (such as but not limited to a classifier model) described hereinabove. The example apparatus 500 includes at least one memory 502 and at least one processing unit 504. The at least one processing unit 504 is communicatively coupled to the at least one memory 502.
[00185] Example memory 502 can include, but is not limited to, hardware memory, non-transitory tangible media, magnetic storage disks, optical disks, flash drives, computational device memory, random access memory, such as but not limited to DRAM, SRAM, EDO RAM, any other type of memory, or combinations thereof.
Example processing unit 504 can include, but is not limited to, a microchip, a processor, a microprocessor, a special purpose processor, an application specific integrated circuit, a microcontroller, a field programmable gate array, a general- purpose graphics processing unit, a neural network chip, any other suitable processor, or combinations thereof.
[00186] The at least one memory 502 is configured to store processor-executable instructions 506 and a computing component 508. The computing component 508 can include a set of executable instructions that causes the processing unit 504 to analyze the cData and nData. In a non-limiting example, the computing component 508 can be used to analyze the cData and/or nData received from the cognitive platform and/or platform product coupled with the one or more physiological or monitoring components and/or cognitive testing components coupled to the cognitive platform as described herein. As shown in FIG. 5, the memory 502 also can be used to store data 510, such as but not limited to the nData 512 (including computation results from application of an example classifier model, measurement data from measurement(s) using one or more physiological or monitoring components and/or cognitive testing components) and/or data indicative of the response of an individual to the one or more tasks (cData), including responses to tasks rendered at a user interface of the apparatus 500 and/or tasks generated using an auditory, tactile, or vibrational signal from an actuating component coupled to or integral with the apparatus 500. The data 510 can be received from one or more physiological or monitoring components and/or cognitive testing components that are coupled to or integral with the apparatus 500.
[00187] In any example herein, the computing device can include the computing component 508.
[00188] In any example herein, the user interface can be a graphical user interface. [00189] In a non-limiting example, the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at least to analyze the cData and/or nData received from the cognitive platform and/or platform product coupled with the one or more physiological or monitoring components and/or cognitive testing components coupled to the cognitive platform as described herein, using the computing component 508. The at least one processing unit 504 also can be configured to execute processor-executable instructions 506 stored in the memory 502 to apply the example predictive model (such as but not limited to a classifier model) to the cData and nData, to generate computation results indicative of the classification of an individual according to likelihood of amyloid burden
(status), and/or likelihood of onset and/or stage of progression of a
neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder. The at least one processing unit 504 also executes processor-executable instructions 506 to control a transmission unit to transmit values indicative of the analysis of the cData and/or nData received from the cognitive platform and/or platform product coupled with the one or more
physiological or monitoring components and/or cognitive testing components as described herein, and/or controls the memory 502 to store values indicative of the analysis of the cData and/or nData.
[00190] In another non-limiting example, the measurement data (nData 512) can be collected from measurements using one or more physiological or monitoring components and/or cognitive testing components. In any example herein, the one or more physiological components are configured for performing physiological measurements. The physiological measurements provide quantitative measurement data of physiological parameters and/or data that can be used for visualization of physiological structure and/or functions.
[00191 ] In any example herein, the cData can include reaction time, response variance, correct hits, omission errors, number of false alarms (such as but not limited to a response to a non-target), learning rate, spatial deviance, subjective ratings, and/or performance threshold, or data from an analysis, including percent accuracy, hits, and/or misses in the latest completed trial or session. Measures of the reaction time indicate the time the individual takes to initiate a response to an interference (such as but not limited to a target or non-target) from the moment the interference is launched. The performance threshold can be set by the example system or apparatus based on previous measurement cData indicating the individual's performance of the tasks and/or interference. Other non-limiting examples of cData include response time (total of reaction time and the time it takes the individual to complete the response to the interference), task completion time, number of tasks completed in a set amount of time, preparation time for task, accuracy of responses, accuracy of responses under set conditions (e.g., stimulus difficulty or magnitude level and association of multiple stimuli), number of responses a participant can register in a set time limit, number of responses a participant can make with no time limit, number of attempts at a task needed to complete a task, movement stability, accelerometer and gyroscope data, and self-rating.
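Two of the timing measures defined above, reaction time (stimulus onset to response initiation) and response time (stimulus onset to response completion), along with a percent-accuracy measure, can be sketched from raw trial timestamps. The timestamps and trial counts below are purely illustrative.

```python
# Hedged sketch of deriving timing and accuracy cData from raw event
# timestamps (in seconds). Values are illustrative, not platform data.

def reaction_time(stimulus_onset, response_start):
    """Time from interference launch to initiation of the response."""
    return response_start - stimulus_onset

def response_time(stimulus_onset, response_end):
    """Reaction time plus the time taken to complete the response."""
    return response_end - stimulus_onset

def percent_accuracy(hits, trials):
    """Percent of correct responses over completed trials."""
    return 100.0 * hits / trials

# One trial: the interference appears at t = 2.00 s; the individual begins
# responding at t = 2.45 s and completes the response at t = 2.80 s.
rt = reaction_time(2.00, 2.45)
total = response_time(2.00, 2.80)
print(round(rt, 2), round(total, 2))  # 0.45 0.8
```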
[00192] In any example herein, the one or more physiological components can include any means of measuring physical characteristics of the body and nervous system, including electrical activity, heart rate, blood flow, and oxygenation levels, to provide the measurement data (nData 512). This can include camera-based heart rate detection, measurement of galvanic skin response, blood pressure measurement, electroencephalogram, electrocardiogram, magnetic resonance imaging, near- infrared spectroscopy, and/or pupil dilation measures, to provide the measurement data (nData 512). The one or more physiological components can include one or more sensors for measuring parameter values of the physical characteristics of the body and nervous system, and one or more signal processors for processing signals detected by the one or more sensors.
[00193] Other examples of physiological measurements to provide measurement data (nData 512) include, but are not limited to, the measurement of body temperature, heart or other cardiac-related functioning using an electrocardiograph (ECG), electrical activity using an electroencephalogram (EEG), event-related potentials (ERPs), functional magnetic resonance imaging (fMRI), blood pressure, electrical potential at a portion of the skin, and/or galvanic skin response (GSR). The physiological measurements can be made using, e.g., functional magnetic resonance imaging (fMRI), magneto-encephalogram (MEG), and/or functional near-infrared spectroscopy (fNIRS). The devices for making physiological measurements can include, e.g., an eye-tracking device or other optical detection device including processing units programmed to determine degree of pupillary dilation, functional near-infrared spectroscopy (fNIRS), and/or a positron emission tomography (PET) scanner. An EEG-fMRI or MEG-fMRI measurement allows for simultaneous acquisition of electrophysiology (EEG/MEG) data and hemodynamic (fMRI) data.
[00194] The example apparatus 500 of FIG. 5 can be configured as a computing device for performing any of the example methods described herein. The computing device can include an App program for performing some of the functionality of the example methods described herein.
[00195] In any example herein, the example apparatus 500 can be configured to communicate with one or more of a monitoring component, a disease monitoring component, and a physiological measurement component, to provide for biofeedback and/or neurofeedback of data to the computing device, for adjusting a type or a difficulty level of one or more of the task, and the interference, to achieve the desired performance level of the individual. Uses of the biofeedback and neurofeedback are described below. Examples of the monitoring component include a device for performing a TOVA® test. Examples of the disease monitoring component include any type of device that can be used for monitoring symptoms of a disease. As a non-limiting example, the biofeedback can be based on physiological measurements of the individual as the individual interacts with the apparatus 500, in which the biofeedback can be used by the computing component 508 to modify the type or a difficulty level of one or more of the task, and the interference. As a non-limiting example, the neurofeedback can be based on measurement and monitoring of the individual using a cognitive and/or a disease monitoring component as the individual interacts with the apparatus 500, in which the neurofeedback can be used by the computing component 508 to modify the type or a difficulty level of one or more of the task, and the interference, based on the measurement data indicating, e.g., the individual's cognitive state, to facilitate generating a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
[00196] In a non-limiting example, the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at least to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task; and render at the user interface a second instance of the primary task with an interference (configured as a second instance of a secondary task), requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference. The at least one processing unit 504 is configured to render the interference such that it diverts the individual's attention from the second instance of the primary task and is configured as an interruptor or a distraction. The at least one processing unit 504 is configured to use the user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor. The at least one processing unit 504 is further configured to receive data indicative of the first primary response, the first secondary response, the second primary response, and the response to the interference, and to analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response, and analyzing one or both of the first secondary response or the response to the interference, to determine at least one indicator of the cognitive ability of the individual. 
The at least one processing unit 504 is further configured to execute a first predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
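The comparison described above — the individual's performance on the primary task without interference versus in the presence of the interference — can be sketched as a simple indicator. The normalized-slowing formula and names below are assumptions for illustration only; the disclosure does not prescribe a particular formula.

```python
# Hypothetical interference-cost indicator: the relative slowing of the
# primary response when the interference is present, compared to baseline.
def interference_cost(rt_without_ms, rt_with_ms):
    """0.0 means no cost; larger values mean greater disruption of the
    primary task by the interference."""
    if rt_without_ms <= 0:
        raise ValueError("baseline reaction time must be positive")
    return (rt_with_ms - rt_without_ms) / rt_without_ms

# First primary response (no interference) vs. second (with interference):
cost = interference_cost(rt_without_ms=400.0, rt_with_ms=520.0)
# A predictive model would then map indicators like this to a scoring output.
```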
[00197] In a non-limiting example, the computing component 508 can be used to receive (including to measure) substantially simultaneously two or more of: (i) the response from the individual to a primary task (providing at least a portion of cData), (ii) a secondary response of the individual to an interference as an instance of a secondary task (providing at least a portion of cData), and (iii) at least one physiological measure of the individual (using a measurement of at least one physiological component to provide at least a portion of nData). In a non-limiting example, the computing component 508 can be used to analyze the cData and/or nData received from the cognitive platform coupled with one or more physiological components as described herein to compute at least one indicator of cognitive abilities. In another non-limiting example, the computing component 508 can be used to compute the at least one indicator and/or apply the predictive model to generate the scoring output. As shown in FIG. 5, the memory 502 also can be used to store data 510, such as but not limited to the physiological measurement data (nData 512) received from a physiological component coupled to or integral with the apparatus 500 and/or data indicative of the response of an individual to the one or more tasks, including responses to tasks rendered at a user interface of the apparatus 500 and/or tasks generated using an auditory, tactile, and/or vibrational signal from an actuating component coupled to or integral with the apparatus 500, and/or data indicative of one or more of an amount, concentration, or dose titration, or other treatment regimen of a drug, pharmaceutical agent, biologic, or other medication being or to be administered to an individual.
[00198] In a non-limiting example, the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at least to measure substantially simultaneously two or more of: (i) the response from the individual to a task (providing at least a portion of cData), (ii) a secondary response of the individual to an interference as an instance of a secondary task (providing at least a portion of cData), and (iii) at least one physiological measure of the individual (using a measurement of at least one physiological component to provide at least a portion of nData). The at least one processing unit 504 also executes the processor-executable instructions 506 stored in the memory 502 at least to analyze the cData and/or nData received from the one or more physiological components coupled with the cognitive platform, to compute at least one indicator of cognitive abilities, using the computing component 508. The at least one processing unit 504 also executes processor-executable instructions 506 to control a transmission unit to transmit values indicative of the analysis of the cData and/or nData received from the one or more physiological components coupled with the cognitive platform, and/or controls the memory 502 to store values indicative of the analysis of the cData and/or nData (including the at least one performance metric). The at least one processing unit 504 also may be programmed to execute processor-executable instructions 506 to control a transmission unit to transmit values indicative of the computed performance metrics and/or controls the memory 502 to store values indicative of the performance metrics.
[00199] In a non-limiting example, the at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at, and/or two or more separate times during the time period leading up to, the first timepoint T1 at least to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; and render a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference. The at least one processing unit 504 is configured to use the user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor. The at least one processing unit 504 is configured to render the interference such that it diverts the individual's attention from the second instance of the primary task and is configured as an interruptor or a distraction. The at least one processing unit 504 executes the processor-executable instructions 506 stored in the memory 502 at, and/or two or more separate times during the time period leading up to, the second timepoint T2 (subsequent to first timepoint T1) at least to: render a third instance of a primary task at the user interface, requiring a third primary response from the individual to the third instance of the primary task; and render a fourth instance of the primary task with an interference, requiring a fourth primary response from the individual to the fourth instance of the primary task in the presence of the interference.
The at least one processing unit 504 is further configured to receive data indicative of the first primary response, the second primary response, the third primary response, and the fourth primary response, and to analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of two or more of: the first primary response, the second primary response, the third primary response, or the fourth primary response, to determine at least a first indicator of the cognitive ability of the individual. The at least one processing unit 504 is further configured to apply a predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
[00200] In a non-limiting example, the at least one processing unit 504 may be further configured to execute the processor-executable instructions 506 stored in the memory 502 at least to: adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders at least one subsequent instance (i.e., at a later timepoint) of the primary task and/or the interference at a second difficulty level. For example, one or both of the third or fourth instance of the primary task can be rendered at a same difficulty level, or at a second (different) difficulty level than one or both of the first or second instance of the primary task. Independently, the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T2 may be the same or different than the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T1.
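One plausible way to realize the difficulty adjustment described above is a simple one-up/one-down staircase rule applied between instances or timepoints. The step size and bounds below are arbitrary assumptions, not the disclosed method.

```python
# Illustrative 1-up/1-down staircase for adjusting task or interference
# difficulty; step size and level bounds are arbitrary assumptions.
def next_difficulty(current, last_response_correct, step=1, lo=1, hi=10):
    """Raise the difficulty level after a correct response, lower it after
    a miss, clamped to the range [lo, hi]."""
    proposed = current + step if last_response_correct else current - step
    return max(lo, min(hi, proposed))

level = 5
level = next_difficulty(level, last_response_correct=True)   # -> 6
level = next_difficulty(level, last_response_correct=False)  # -> 5
```

Under such a rule, the third or fourth instance of the primary task would simply be rendered at whatever level the staircase has reached at timepoint T2.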
[00201] In a non-limiting example, the at least one processing unit 504 may be further configured to execute the processor-executable instructions 506 stored in the memory 502 at, and/or two or more separate times during the time period leading up to, the third timepoint T3 (subsequent to second timepoint T2) at least to: render a fifth instance of a primary task at the user interface, requiring a fifth primary response from the individual to the fifth instance of the primary task; and render a sixth instance of the primary task with an interference, requiring a sixth primary response from the individual to the sixth instance of the primary task in the presence of the interference. The at least one processing unit 504 is further configured to receive data indicative of the first primary response, the second primary response, the third primary response, the fourth primary response, the fifth primary response and the sixth primary response, and to analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of three or more of: the first primary response, the second primary response, the third primary response, the fourth primary response, the fifth primary response or the sixth primary response, to determine at least a first indicator of the cognitive ability of the individual. The at least one processing unit 504 is further configured to apply a predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition.
[00202] In some examples, one or both of the fifth or sixth instance of the primary task can be rendered at a same difficulty level, or at a second (different) difficulty level than one or both of the third or fourth instance of the primary task. Independently, the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T3 may be the same or different than the interference rendered at, and/or two or more separate times during the time period leading up to, the timepoint T2.
[00203] In a non-limiting example, the at least one processing unit 504 further executes the processor-executable instructions 506 stored in the memory 502 to apply a predictive model based at least in part on the at least one indicator determined using any of the examples herein to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition. The predictive model can be trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing at least the first indicator of the cognitive ability of the classified individual and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual. The trained predictive model can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output. The scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
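The train-then-apply flow described in this paragraph can be sketched with a deliberately simple stand-in model (a nearest-centroid scorer). The real platform could use any classifier; the feature layout, binary labels, and scoring formula here are assumptions for illustration only.

```python
# Minimal stand-in for the predictive model: each training dataset pairs an
# indicator vector with a diagnosis label (0 = no condition, 1 = condition).
# Scoring returns a value in [0, 1] that grows as the individual's indicators
# approach the "condition" centroid. All details are illustrative assumptions.
def train_centroids(training_sets):
    """training_sets: list of (indicator_vector, label) pairs, label in {0, 1}."""
    groups = {0: [], 1: []}
    for features, label in training_sets:
        groups[label].append(features)
    def centroid(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]
    return centroid(groups[0]), centroid(groups[1])

def score(indicators, centroid_neg, centroid_pos):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    d_neg = dist(indicators, centroid_neg)
    d_pos = dist(indicators, centroid_pos)
    return d_neg / (d_neg + d_pos)  # nearer the positive centroid -> nearer 1

training = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.8, 0.9], 1), ([0.9, 0.8], 1)]
c_neg, c_pos = train_centroids(training)
likelihood = score([0.85, 0.85], c_neg, c_pos)  # near 1.0
```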
[00204] Using any example system, method or apparatus according to the principles herein, the scoring output (such as but not limited to a classification output) for an individual as to a likelihood of onset and/or stage of progression of a neurodegenerative condition can be transmitted as a signal to one or more of a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual.
[00205] In some examples, the results of the analysis may be used to modify the difficulty level or other property of the computerized stimuli or interaction (CSI) or other interactive elements.
[00206] FIG. 6 shows another example apparatus according to the principles herein, configured as a computing device 600 that can be used to implement the cognitive platform according to the principles herein. The example computing device 600 can include a communication module 610 and an analysis engine 612. The communication module 610 can be implemented to receive data indicative of at least one response of an individual to the primary task in the absence of an interference, and/or at least one response of an individual to the primary task that is being rendered in the presence of the interference. In an example, the communication module 610 can be implemented to receive substantially simultaneously two or more of: (i) the response from the individual to a primary task, and (ii) a secondary response of the individual to an interference. The analysis engine 612 can be implemented to analyze the data from the at least one sensor component as described herein and/or to analyze the data indicative of the responses to compute at least one indicator of cognitive abilities. In another example, the analysis engine 612 can be implemented to apply a predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition. The predictive model can be trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing at least the first indicator of the cognitive ability of the classified individual and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual. As shown in the example of FIG. 
6, the computing device 600 can include processor-executable instructions such that a processor unit can execute an application program (App 614) that a user can implement to initiate the analysis engine 612. In an example, the processor-executable instructions can include software, firmware, or other instructions. The analysis engine 612 can be configured to apply the trained predictive model to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output. The scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
[00207] The example communication module 610 can be configured to implement any wired and/or wireless communication interface by which information may be exchanged between the computing device 600 and another computing device or computing system. Non-limiting examples of wired communication interfaces include, but are not limited to, USB ports, RS232 connectors, RJ45 connectors, Thunderbolt™ connectors, and Ethernet connectors, and any appropriate circuitry associated therewith. Non-limiting examples of wireless communication interfaces may include, but are not limited to, interfaces implementing Bluetooth® technology, Wi-Fi, Wi-Max, IEEE 802.11 technology, radio frequency (RF) communications, Infrared Data Association (IrDA) compatible protocols, Local Area Networks (LAN), Wide Area Networks (WAN), and Shared Wireless Access Protocol (SWAP).
[00208] In an example implementation, the example computing device 600 includes at least one other component that is configured to transmit a signal from the apparatus to a second computing device. For example, the at least one component can include a transmitter or a transceiver configured to transmit a signal including data indicative of a measurement by at least one sensor component to the second computing device.
[00209] In any example herein, the App 614 on the computing device 600 can include processor-executable instructions such that a processor unit of the computing device implements an analysis engine to analyze data indicative of the individual's response to the rendered tasks and/or interference (either or both with computer-implemented time-varying element) and the response of the individual to the at least one computer-implemented time-varying element to compute at least one indicator of cognitive abilities. In another example, the App 614 on the computing device 600 can include processor-executable instructions such that a processor unit of the computing device implements an analysis engine to analyze the data indicative of the individual's response to the rendered tasks and/or interference (either or both with computer-implemented time-varying element) and the response of the individual to the at least one computer-implemented time-varying element to provide a predictive model based on the computed values of the performance metric, to generate a predictive model output indicative of a measure of cognition, a mood, a level of cognitive bias, or an affective bias of the individual. In some examples, the App 614 can include processor-executable instructions such that the processing unit of the computing device implements the analysis engine to apply a predictive model trained to provide a scoring output as to a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition based at least in part on applying the first predictive model to the at least one indicator, and other metrics and analyses described herein.
[00210] In some examples, the App 614 can include processor-executable instructions to provide one or more of: (i) a predictive model output indicative of the cognitive capabilities of the individual, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (iv) a change in the individual's cognitive capabilities, (v) a recommended treatment regimen, (vi) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vii) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
[00211] In an example, the App 614 can include processor-executable instructions such that the processing unit of the computing device implements the analysis engine to apply a second predictive model (including a second classifier) that is trained to provide a second scoring that is indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise. The second predictive model can be trained using a plurality of training datasets, where each training dataset is associated with a previously classified individual from a group of individuals. Each training dataset includes data representing the scoring output derived as described herein from the application of a first predictive model to the at least one indicator of the cognitive ability of the classified individual.
Each training dataset also includes data indicative of one or more of (i) an indication (including a description) of the adverse event the individual experienced in response to administration of the pharmaceutical agent, drug, or biologic, or in response to a change in one or more of the amount, concentration, or dose titration of that pharmaceutical agent, drug, or biologic, (ii) the treatment regimen of the individual and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of the regimen in connection with the treatment or management of symptoms of the neurodegenerative condition, (iii) a type and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of any behavioral therapy, counseling, or physical exercise the individual is undergoing in connection with the treatment or management of symptoms of the neurodegenerative condition. The trained second predictive model can be applied to the scoring output (including a classification output) of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate the second scoring as an output.
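The fields enumerated above for each training dataset of the second predictive model can be sketched as a record-assembly step. All field names below are hypothetical, chosen only to mirror the items listed in this paragraph.

```python
# Assembling one training record for the second predictive model: the first
# model's scoring output plus the outcome annotations enumerated above.
# Every field name here is a hypothetical illustration.
def make_second_model_record(first_model_score, adverse_event=None,
                             regimen=None, regimen_effectiveness=None,
                             therapy_type=None, therapy_effectiveness=None):
    return {
        "first_score": first_model_score,                # output of the first model
        "adverse_event": adverse_event,                  # description, or None
        "regimen": regimen,                              # treatment regimen
        "regimen_effectiveness": regimen_effectiveness,  # success/failure rating
        "therapy": therapy_type,                         # behavioral therapy, etc.
        "therapy_effectiveness": therapy_effectiveness,  # success/failure rating
    }

record = make_second_model_record(0.72, adverse_event="headache",
                                  regimen="drug A, 10 mg daily",
                                  regimen_effectiveness="partial")
```

A collection of such records, one per previously classified individual, would form the training data for the second model.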
[00212] In any example herein, the App 614 can be configured to receive measurement data including physiological measurement nData of an individual received from a physiological component, and/or cData indicative of the response of an individual to a task and/or an interference rendered at a user interface of the apparatus 500 (as described in greater detail below), and/or data indicative of one or more of an amount, concentration, or dose titration, or other treatment regimen of a drug, pharmaceutical agent, biologic, or other medication being or to be administered to an individual.
[00213] FIG. 7 A shows a non-limiting example system, method, and apparatus according to the principles herein, where the platform product (which may include an APP) is configured as a cognitive platform 702 that is separate from, but configured for coupling with, one or more of the physiological components 704.
[00214] FIG. 7B shows another non-limiting example system, method, and apparatus according to the principles herein, where the platform product (which may include an APP) is configured as an integrated device 710, where the cognitive platform 712 is integrated with one or more of the physiological components 714.
[00215] FIG. 8 shows a non-limiting example implementation where the platform product (which may include an APP) is configured as a cognitive platform 802 that is configured for coupling with a physiological component 804. In this example, the cognitive platform 802 is configured as a tablet including at least one processor programmed to implement the processor-executable instructions associated with the tasks and CSIs described hereinabove, to receive cData associated with user responses from the user interaction with the cognitive platform 802, to receive the nData from the physiological component 804, to analyze the cData and/or nData as described hereinabove, and to analyze the cData and/or nData to provide a measure of the individual's physiological condition and/or cognitive condition. The cognitive platform can be configured to analyze the differences in the individual's performance based on determining the differences between sets of data, each set of data including data indicative of the user's responses to the tasks in the presence and in the absence of interference and the nData, and/or adjust the difficulty level of the computerized stimuli or interaction (CSI) or other interactive elements based on the individual's performance determined in the analysis and based on the analysis of the cData and/or nData. The cognitive platform can be configured to provide an output or other feedback from the platform product indicative of at least one of: (i) the individual's performance, (ii) cognitive assessment, (iii) projected response to cognitive treatment, or (iv) assessed measures of cognition. In this example, the physiological component 804 is configured as an EEG device mounted to a user's head, to perform the measurements before, during and/or after user interaction with the cognitive platform 802, to provide the nData.
[00216] In a non-limiting example implementation, the EEG device can be a low-cost EEG device for medical treatment validation and personalized medicine. The low-cost EEG device can be easier to use and has the potential to vastly improve the accuracy and the validity of medical applications. In this example, the platform product may be configured as an integrated device including the EEG component coupled with the cognitive platform, or as a cognitive platform that is separate from, but configured for coupling with the EEG component.
[00217] In a non-limiting example use for treatment validation, the user interacts with a cognitive platform, and the EEG device is used to perform physiological measurements of the user. Changes, if any, in EEG measurements data (such as brainwaves) are monitored based on the actions of the user in interacting with the cognitive platform. The nData (e.g., brainwave measurements) using the EEG device can be collected and analyzed to detect changes in the EEG measurements. This analysis can be used to determine the types of response from the user, such as whether the user is performing according to an optimal or desired profile.
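The monitoring loop described above — collect EEG-derived nData during interaction with the cognitive platform, then test for a change against a baseline — might be realized as a simple before/after comparison. The z-score rule, threshold, and example band-power values below are assumptions, not the disclosed analysis.

```python
# Illustrative change detection on EEG-derived values (nData): flag a change
# when the session mean departs from the baseline mean by more than `z`
# baseline standard deviations. The thresholding rule is an assumption.
from statistics import mean, pstdev

def eeg_change_detected(baseline, session, z=2.0):
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return mean(session) != mu
    return abs(mean(session) - mu) / sigma > z

baseline = [10.1, 9.8, 10.0, 10.2, 9.9]   # e.g., alpha-band power (a.u.)
session = [12.4, 12.1, 12.6, 12.3, 12.2]  # measured during interaction
changed = eeg_change_detected(baseline, session)  # True
```

A detected change could then feed the determination of whether the user is performing according to an optimal or desired profile.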
[00218] In a non-limiting example use for personalized medicine, the nData from the EEG measurements can be used to identify changes in user performance/condition that indicate that the cognitive platform treatment is having the desired effect. The nData from the EEG measurements can also be used to determine the type of tasks and/or CSIs that work for a given user. The analysis can be used to determine whether the cognitive platform should be configured to provide tasks and/or CSIs to enforce or diminish these user results detected by the EEG device by adjusting the user's experience when interacting with the cognitive platform.
[00219] In a non-limiting example implementation, measurements are made using a cognitive platform that is configured for coupling with an fMRI, for use in medical application validation and personalized medicine. Consumer-level fMRI devices may be used to improve the accuracy and the validity of medical applications by tracking and detecting changes in brain part stimulation.
[00220] In a non-limiting example, fMRI measurements can be used to provide measurement data of the cortical thickness and other similar measurement data.
[00221] In a non-limiting example use for treatment validation, the user interacts with a cognitive platform, and the fMRI is used to measure physiological data. The user is expected to have stimulation of a particular brain part or combination of brain parts (such as but not limited to the prefrontal cortex, the visual cortex, or the hippocampus) based on the actions of the user while interacting with the cognitive platform. In this example, the platform product may be configured as an integrated device including the fMRI component coupled with the cognitive platform, or as a cognitive platform that is separate from, but configured for coupling with, the fMRI component. Using the cognitive platform with the fMRI, measurement can be made of the stimulation of portions of the user's brain, and analysis can be performed to detect changes in the measurements to determine whether the user exhibits the desired responses.
[00222] In a non-limiting example use for personalized medicine, the fMRI can be used to collect measurement data to be used to identify the progress of the user in interacting with the cognitive platform. The analysis can be used to determine whether the cognitive platform should be configured to provide tasks and/or CSIs to enforce or diminish these user results detected by the fMRI, by adjusting the user's experience in interacting with the cognitive platform.
[00223] In any example herein, the system, methods or apparatus can be configured to make adjustments in real-time to one or both of the difficulty levels or the type of the tasks and/or interference (including CSIs).
[00224] An example system, method, and apparatus according to the principles herein can be configured to train a predictive model of a measure of the cognitive capabilities of individuals based on feedback data from the output for individuals previously classified as to the measure of cognitive abilities of interest. For example, a predictive model (including a condition classifier) can be trained using a plurality of training datasets, in which each training dataset is associated with a previously classified individual from a group of individuals. Each training dataset includes data representing the at least one indicator of the cognitive ability of the classified individual and nData indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual, based on the classified individual's interaction with an example apparatus, system, or computing device described herein. The example classifier also can take as input at least one of: (i) data indicative of the performance of the classified individual at a cognitive test, (ii) data indicative of the performance of the classified individual at a behavioral test, and/or (iii) data indicative of a diagnosis of a status or progression of the neurodegenerative condition. The trained classifier can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output (such as but not limited to a classification output). The scoring output (including a classification output) can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
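The train-then-apply flow described in this paragraph can be sketched as follows. This is a non-limiting illustration only: the nearest-centroid decision rule, the label names, and the two-element indicator vectors are hypothetical stand-ins for whatever predictive model (e.g., logistic regression, random forest) and feature set the platform actually employs.

```python
# Illustrative sketch of training a condition classifier on datasets from
# previously classified individuals, then scoring a new individual's
# indicators. All names and the centroid rule are assumptions for clarity.

def train_centroid_classifier(training_datasets):
    """training_datasets: iterable of (indicator_vector, label) pairs,
    where the label encodes the diagnosed status/progression stage."""
    sums, counts = {}, {}
    for features, label in training_datasets:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    # Each class is summarized by the mean of its training vectors.
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def score(centroids, indicators):
    """Return (label, distance) for the class centroid closest to the
    individual's indicator vector; the label is the scoring output."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(((lab, dist(c, indicators)) for lab, c in centroids.items()),
               key=lambda t: t[1])
```

In use, each training vector would hold the indicator(s) of cognitive ability described above, and the returned label would serve as the classification output indicating likelihood of onset or stage of progression.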
[00225] In any example herein, the at least one processing unit can be programmed to cause an actuating component of a system (including the cognitive platform) to effect auditory, tactile, and/or vibrational computerized elements to effect the stimulus or other interaction with the individual. In a non-limiting example, the at least one processing unit can be programmed to cause a component of the cognitive platform to receive data indicative of at least one response from the individual based on the user interaction with the task and/or interference, including responses received at an input device. In an example where at least one user interface (including a graphical user interface) is rendered to present the computerized stimulus to the individual, the at least one processing unit can be programmed to cause the user interface to receive the data indicative of at least one response from the individual.
[00226] In any example herein, the data indicative of the response of the individual to a task and/or an interference can be measured using at least one sensor device contained in and/or coupled to an example system or apparatus herein, such as but not limited to a gyroscope, an accelerometer, a motion sensor, a position sensor, a pressure sensor, an optical sensor, an auditory sensor, a vibrational sensor, a video camera, a pressure-sensitive surface, a touch-sensitive surface, or other type of sensor. In other examples, the data indicative of the response of the individual to the task and/or an interference can be measured using other types of sensor devices, including a video camera, a microphone, a joystick, a keyboard, a mouse, a treadmill, an elliptical, a bicycle, steppers, or a gaming system (including a Wii®, a Playstation®, or an Xbox® or another gaming system). The data can be generated based on physical actions of the individual that are detected and/or measured using the at least one sensor device when the individual executes a response to the stimuli presented with the task and/or interference.
[00227] The user may respond to tasks by interacting with the computer device. In an example, the user may execute a response using a keyboard for alpha-numeric or directional inputs; a mouse for GO/NO-GO clicking, screen location inputs, and movement inputs; a joystick for movement inputs, screen location inputs, and clicking inputs; a microphone for audio inputs; a camera for still or motion optical inputs; sensors such as accelerometers and gyroscopes for device movement inputs; among others. Non-limiting example inputs for a game system include but are not limited to a game controller for navigation and clicking inputs, a game controller with accelerometer and gyroscope inputs, and a camera for motion optical inputs. Example inputs for a mobile device or tablet include a touch screen for screen location information inputs, virtual keyboard alpha-numeric inputs, GO/NO-GO tapping inputs, and touch screen movement inputs; accelerometer and gyroscope motion inputs; a microphone for audio inputs; and a camera for still or motion optical inputs, among others. In other examples, data indicative of the individual's response can include physiological sensors/measures to incorporate inputs from the user's physical state, such as but not limited to electroencephalogram (EEG), magnetoencephalography (MEG), heart rate, heart rate variability, blood pressure, weight, eye movements, pupil dilation, electrodermal responses such as the galvanic skin response, blood glucose level, respiratory rate, and blood oxygenation.
[00228] In any example herein, the individual may be instructed to provide a response via a physical action of clicking a button and/or moving a cursor to a correct location on a screen, head movement, finger or hand movement, vocal response, eye movement, or other action of the individual.
[00229] As a non-limiting example, an individual's response to a task or interference rendered at the user interface that requires a user to navigate a course or environment or perform other visuo-motor activity may require the individual to make movements (such as but not limited to steering) that are detected and/or measured using at least one type of the sensor device. The data from the detection or measurement provides the data indicative of the response.
[00230] As a non-limiting example, an individual's response to a task or interference rendered at the user interface that requires a user to discriminate between a target and a non-target may require the individual to make movements (such as but not limited to tapping or other spatially or temporally discriminating indication) that are detected and/or measured using at least one type of the sensor device. The data that is collected by a component of the system or apparatus based on the detection or other measurement of the individual's movements (such as but not limited to at least one sensor or other device or component described herein) provides the data indicative of the individual's responses.
[00231] The example system, method, and apparatus can be configured to apply the predictive model, using computational techniques and machine learning tools, such as but not limited to linear/logistic regression, principal component analysis, generalized linear mixed models, random decision forests, support vector machines, or artificial neural networks, to the data indicative of the individual's response to the tasks and/or interference, and/or data from one or more physiological measures, to create composite variables or profiles that are more sensitive than each measurement alone for generating a predictive model output indicative of the cognitive response capabilities of the individual. In an example, the predictive model output can be configured for other indications such as but not limited to detecting an indication of a disease, disorder or cognitive condition, or assessing cognitive health.
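One simple way to form such a composite variable is to normalize each measure against a reference population and combine the normalized values. The sketch below is a non-limiting illustration under that assumption; the measure names, the z-score normalization, and the equal default weighting are all hypothetical choices, not the platform's prescribed method.

```python
# Hedged sketch: combine several response/physiological measures into one
# composite variable by z-scoring each against reference norms and taking
# a weighted sum. Norms and weights here are illustrative assumptions.

def zscore(value, mean, std):
    """Standardize a raw measure against reference (mean, std)."""
    return (value - mean) / std

def composite_score(measures, norms, weights=None):
    """measures: dict name -> raw value; norms: dict name -> (mean, std);
    weights: optional dict name -> weight (defaults to equal weights)."""
    names = list(measures)
    weights = weights or {n: 1.0 / len(names) for n in names}
    return sum(weights[n] * zscore(measures[n], *norms[n]) for n in names)
```

For example, a reaction time one standard deviation faster than the norm and an accuracy one standard deviation above the norm would offset or reinforce each other depending on the sign convention chosen for each measure.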
[00232] The example predictive models herein can be trained to be applied to data collected from interaction sessions of individuals with the cognitive platform to provide the output. In a non-limiting example, the predictive model can be used to generate a standards table, which can be applied to the data collected from the individual's response to task and/or interference to classify the individual's cognitive response capabilities.
[00233] Non-limiting examples of assessment of cognitive abilities include assessment scales or surveys such as the Mini Mental State Exam, CANTAB cognitive battery, Test of Variables of Attention (TOVA), Repeatable Battery for the Assessment of Neuropsychological Status, Clinical Global Impression scales relevant to specific conditions, Clinician's Interview-Based Impression of Change, Severe Impairment Battery, Alzheimer's Disease Assessment Scale, Positive and Negative Syndrome Scale, Schizophrenia Cognition Rating Scale, Conners Adult ADHD Rating Scales, Hamilton Rating Scale for Depression, Hamilton Anxiety Scale, Montgomery-Asberg Depression Rating Scale, Young Mania Rating Scale, Children's Depression Rating Scale, Penn State Worry Questionnaire, Hospital Anxiety and Depression Scale, Aberrant Behavior Checklist, Activities for Daily Living scales, ADHD self-report scale, Positive and Negative Affect Schedule, Depression Anxiety Stress Scales, Quick Inventory of Depressive Symptomatology, and PTSD Checklist.
[00234] In other examples, the assessment may test specific functions of a range of cognitions in cognitive or behavioral studies, including tests for perceptive abilities, reaction and other motor functions, visual acuity, long-term memory, working memory, short-term memory, logic, and decision-making, and other specific example measurements, including but not limited to TOVA, MOT (motion-object tracking), SART (Sustained Attention to Response Task), CDT (change detection task), UFOV (useful field of view), Filter task, WAIS (Wechsler Adult Intelligence Scale) digit symbol, Stroop task, Simon task, Attentional Blink, N-back task, PRP (Psychological Refractory Period) task, task-switching test, and Flanker task.
[00235] In non-limiting examples, the example systems, methods, and apparatus according to the principles described herein can be applicable to many different types of neuropsychological conditions, such as but not limited to dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, Huntington's disease, or other neurodegenerative condition.
[00236] The instant disclosure is directed to computer-implemented devices formed as example cognitive platforms configured to implement software and/or other processor-executable instructions for the purpose of measuring data indicative of a user's performance at one or more tasks, to provide a user performance metric. The example performance metric can be used to derive an assessment of a user's cognitive abilities and/or to measure a user's response to a cognitive treatment, and/or to provide data or other quantitative indicia of a user's condition (including physiological condition and/or cognitive condition). Non-limiting example cognitive platforms according to the principles herein can be configured to classify an individual as to a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition, and/or potential efficacy of use of the cognitive platform when the individual is being administered (or about to be administered) a drug, biologic or other pharmaceutical agent, based on the data collected from the individual's interaction with the cognitive platform and/or metrics computed based on the analysis (and associated computations) of that data. The neurodegenerative condition can be, but is not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
[00237] Any classification of an individual as to likelihood of onset and/or stage of progression of a neurodegenerative condition according to the principles herein can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent to the individual.
[00238] In any example herein, the cognitive platform can be configured as any combination of a medical device platform, a monitoring device platform, a screening device platform, or other device platform.
[00239] The instant disclosure is also directed to example systems that include cognitive platforms that are configured for coupling with one or more physiological or monitoring component and/or cognitive testing component. In some examples, the systems include cognitive platforms that are integrated with the one or more other physiological or monitoring component and/or cognitive testing component. In other examples, the systems include cognitive platforms that are separately housed from and configured for communicating with the one or more physiological or monitoring component and/or cognitive testing component, to receive data indicative of measurements made using such one or more components.
[00240] In an example system, method, and apparatus herein, the processing unit can be programmed to control the user interface to modify a temporal length of the response window associated with a response-deadline procedure.
[00241] In an example system, method, and apparatus herein, the processing unit can be further programmed to control the user interface to render the task as a continuous visuo-motor tracking task.
[00242] In an example system, method, and apparatus herein, the processing unit controls the user interface to render the interference as a target discrimination task.
[00243] As used herein, a target discrimination task may also be referred to as a perceptual reaction task, in which the individual is instructed to perform a two-feature reaction task including target stimuli and non-target stimuli through a specified form of response. As a non-limiting example, that specified type of response can be for the individual to make a specified physical action in response to a target stimulus (e.g., move or change the orientation of a device, tap on a sensor-coupled surface such as a screen, move relative to an optical sensor, make a sound, or other physical action that activates a sensor device) and refrain from making such specified physical action in response to a non-target stimulus.
[00244] In a non-limiting example, the individual is required to perform a visuomotor task (as a primary task) with a target discrimination task as an interference (an instance of a secondary task) (either or both including a computer-implemented time-varying element). To effect the visuomotor task, a programmed processing unit renders visual stimuli that require fine motor movement as reaction of the individual to the stimuli. In some examples, the visuomotor task is a continuous visuomotor task. The processing unit is programmed to alter the visual stimuli and record data indicative of the motor movements of the individual over time (e.g., at regular intervals including 1, 5, 10, or 30 times per second). Example stimuli rendered using the programmed processing unit for a visuomotor task requiring fine motor movement may be a visual presentation of a path that an avatar is required to remain within. The programmed processing unit may render the path with certain types of obstacles that the individual is either required to avoid or to navigate towards. In an example, the fine motor movements effected by the individual, such as but not limited to tilting or rotating a device, are measured using an accelerometer and/or a gyroscope (e.g., to steer or otherwise guide the avatar on the path while avoiding or crossing the obstacles as specified). The target discrimination task (serving as the interference) can be based on targets and non-targets that differ in shape and/or color.
[00245] In any example, the apparatus may be configured to instruct the individual to provide the response to the computer-implemented time-varying element as an action that is read by one or more sensors, such as a movement that is sensed using a gyroscope or accelerometer or a motion or position sensor, or a touch that is sensed using a touch-sensitive, pressure sensitive or capacitance-sensitive sensor.
[00246] In some examples, the task and/or interference can be a visuomotor task, a target discrimination task, and/or a memory task.
[00247] Within the context of a computer-implemented adaptive response-deadline procedure, the response-deadline can be adjusted between trials or blocks of trials to manipulate the individual's performance characteristics towards certain goals. A common goal is driving the individual's average response accuracy towards a certain value by controlling the response deadline.
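A minimal rule for driving average accuracy toward a target value by adjusting the response deadline can be sketched as follows. This is a non-limiting illustration: the step size, the bounds, and the 80% default target are hypothetical parameters, not values prescribed by the procedure described above.

```python
# Hedged sketch of an adaptive response-deadline rule: tighten the response
# window when block accuracy exceeds the target, relax it when accuracy
# falls below. Step size and clamping bounds are illustrative assumptions.

def adapt_deadline(deadline_ms, block_accuracy, target=0.8,
                   step_ms=50, min_ms=250, max_ms=2000):
    if block_accuracy > target:
        deadline_ms -= step_ms   # performing well: shorten the deadline
    elif block_accuracy < target:
        deadline_ms += step_ms   # struggling: lengthen the deadline
    # Keep the deadline within plausible physiological bounds.
    return max(min_ms, min(max_ms, deadline_ms))
```

Applied between trials or blocks of trials, this rule makes the deadline converge to the point where the individual's accuracy hovers around the target value.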
[00248] In a non-limiting example, the hit rate may be defined as the number of correct responses to target stimuli divided by the total number of target stimuli presented. The false alarm rate can be calculated as the number of responses to distractor stimuli divided by the number of distractor stimuli presented. The miss rate can be calculated as the number of non-responses to target stimuli divided by the total number of incorrect responses, i.e., the number of non-responses to target stimuli added to the number of responses to distractor stimuli. The correct response rate can be calculated as the proportion of correct responses to trials not containing a signal (e.g., where the individual correctly responds to the distractor by refraining from effecting an action, such as a tap or other action, if the distractor is rendered). In an example, the correct response rate may be calculated as the number of non-responses to the distractor stimuli divided by the number of non-responses to the distractor stimuli plus the number of responses to the target stimuli.
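The rate definitions in this paragraph translate directly into code. The sketch below is a non-limiting transcription of those definitions from per-session trial tallies; the function and parameter names are chosen for clarity and are not part of the specification.

```python
# Direct transcription of the rate definitions above, computed from four
# trial counts: hits (responses to targets), misses (non-responses to
# targets), false alarms (responses to distractors), and correct
# rejections (non-responses to distractors).

def response_rates(hits, misses, false_alarms, correct_rejections):
    targets = hits + misses                   # total target stimuli shown
    distractors = false_alarms + correct_rejections
    return {
        "hit_rate": hits / targets,
        "false_alarm_rate": false_alarms / distractors,
        # Miss rate per the definition above: misses over all incorrect
        # responses (misses plus false alarms).
        "miss_rate": misses / (misses + false_alarms),
        # Correct response rate per the final definition above: correct
        # rejections over (correct rejections plus responses to targets).
        "correct_response_rate":
            correct_rejections / (correct_rejections + hits),
    }
```

For example, a session with 8 hits, 2 misses, 1 false alarm, and 9 correct rejections yields a hit rate of 0.8 and a false alarm rate of 0.1.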
[00249] In some examples, the tasks and/or interference are presented to the individual in two or more trials and/or sessions, with an interspersed interval between each trial and/or session. In some examples, the computing system is configured to implement the tasks and/or interference in the subsequent trial(s) and/or session(s) at a difficulty level that is changed or maintained the same from one trial to another and/or from one session to another. For example, the difficulty level in each subsequent trial and/or each subsequent session can be dependent on the performance of the individual in the previous trial and/or previous session. Based on an analysis by the computing system indicating that the number of correct inputs in the responses made by the individual in a previous trial and/or session increases or reaches a specific threshold (e.g., a pre-determined percentage of correct responses), the computing system is configured to implement the tasks and/or interference in the subsequent trial and/or session at a higher difficulty level than the previous trial and/or session. Based on an analysis by the computing system indicating that the number of correct inputs in the responses made by the individual decreases, is at or below a specified threshold, achieves a specified level of failure, or fails to achieve a level of success in the previous trial and/or session, the computing system is configured to implement the tasks and/or interference in the subsequent trial and/or session at a lower difficulty level than the previous trial and/or session. In some examples, the computing system is configured to implement the tasks and/or interference in the subsequent trial(s) and/or session(s) at a difficulty level that changes in a step-wise fashion and/or in a peaks-and-valleys fashion.
[00250] To modulate the difficulty level of a trial and/or a session, the computing system can be configured to modify the difficulty level of the primary task, or of the interference, or of some combination of the primary task and the interference. The modulation of the difficulty level may be based on either the data indicative of the actual performance of the individual in performing the task or interference (as determined by measurement as the input to a task or interference) or a more indirect parameter governed by the analysis, e.g., a performance metric such as but not limited to the interference cost. In some examples, the level of difficulty of the task and/or the interference can be adjusted based on an adaptive staircase algorithm at an accuracy of about 50%, about 55%, about 60%, about 65%, about 70%, about 75%, about 80%, about 85%, or about 90% or more.
[00251] In another example, the computing system can be configured to modify the difficulty level such that the platform is specifically tailored to an individual, e.g., by maintaining the difficulty level at or around a threshold success rate for the individual. For example, the computing system can be configured to target the difficulty level to maintain a substantially constant error rate from an individual (e.g., to maintain approximately 80% response accuracy). In other examples, the computing system can be configured to target the difficulty level to maintain an accuracy of performance from the individual of about 50%, about 55%, about 60%, about 65%, about 70%, about 75%, about 80%, about 85%, or about 90% or more. The difficulty level of a task for a given individual may be determined by implementing the task without interference (e.g., single-tasking) initially at a default difficulty level for a category of individuals (e.g., average for an age range), a lowest level of difficulty, or a level comparable based on the individual's prior assessment. In subsequent trials and/or sessions, the difficulty level can be changed until analysis of the measured data indicates that the individual is performing at a specific threshold level (e.g., percent accuracy).
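A classic way to hold accuracy near a fixed target is an n-up/m-down adaptive staircase, one of the staircase algorithms mentioned above. The sketch below is a non-limiting illustration of a 2-down/1-up variant, whose equilibrium accuracy (approximately 70.7%) is near one of the listed target values; the step size, level bounds, and function names are hypothetical.

```python
# Hedged sketch of a 2-down/1-up adaptive staircase for the difficulty
# level: two consecutive correct responses raise difficulty by one step,
# a single error lowers it. Step size and bounds are assumptions.

def staircase_step(level, history, step=1, lo=1, hi=10):
    """history: list of booleans (True = correct), most recent trial last.
    Returns the difficulty level for the next trial, clamped to [lo, hi]."""
    if len(history) >= 2 and history[-1] and history[-2]:
        level += step           # two correct in a row -> harder
    elif history and not history[-1]:
        level -= step           # one error -> easier
    return max(lo, min(hi, level))
```

Run repeatedly across trials, this rule oscillates around the difficulty at which the individual answers correctly about 70.7% of the time; other up/down ratios converge to other accuracy targets.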
[00252] In an example, the difficulty of the task (potentially including a computer-implemented time-varying element) adapts with each stimulus that is presented, which may occur more often than adaptation at regular time intervals (e.g., every 5 seconds, every 10 seconds, every 20 seconds, or another regular schedule).

[00253] In another example, the difficulty of a continuous task (potentially including a computer-implemented time-varying element) can be adapted on a set schedule, such as but not limited to every 30 seconds, 10 seconds, 1 second, 2 times per second, or 30 times per second.
[00254] In an example, the length of time of a trial depends on the number of iterations of rendering (of the tasks/interference) and receiving (of the individual's responses) and can vary in time. In an example, a trial can be on the order of about 500 milliseconds, about 1 second (s), about 10 s, about 20 s, about 25 s, about 30 s, about 45 s, about 60 s, about 2 minutes, about 3 minutes, about 4 minutes, about 5 minutes, or greater. Each trial may have a pre-set length or may be dynamically set by the processing unit (e.g., dependent on an individual's performance level or a requirement of adapting from one level to another).
[00255] In an example, the task and/or interference (either or both including a computer-implemented time-varying element) can be modified based on targeting changes in one or more specific metrics by selecting features, trajectory, and response window of the targeting task, and level/type of parallel task interference to progressively require improvements in those metrics in order for the apparatus to indicate to an individual that they have successfully performed the task. This could include specific reinforcement, including explicit messaging, to guide the individual to modify performance according to the desired goals.
[00256] In an example, the indication of the modification of the cognitive abilities can include a change in a measure of one or more of affective bias, mood, level of cognitive bias, sustained attention, selective attention, attention deficit, impulsivity, inhibition, perceptive abilities, reaction and other motor functions, visual acuity, long-term memory, working memory, short-term memory, logic, and decision-making.
[00257] Example systems, methods and apparatus according to the principles herein can be implemented using a programmed computing device including at least one processing unit, to determine a potential biomarker for clinical populations.
[00258] An example system, method, and apparatus according to the principles herein can be configured to enhance the cognitive skills in an individual. In an example implementation, a programmed processing unit is configured to execute processor-executable instructions at least to: render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task; and render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task. The programmed processing unit is configured to use the user interface to instruct the individual not to respond to an interference with the primary task that is configured as a distraction and to respond to an interference with the primary task that is configured as an interruptor. The programmed processing unit is configured to render at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference. The first interference is configured to divert the individual's attention from the second instance of the primary task and is configured as an instance of the secondary task that is rendered as an interruptor or a distraction. The programmed processing unit is also configured to receive data indicative of the first primary response, the first secondary response, the second primary response, and the response to the interference; and to adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders one or both of a third instance of the primary task or the interference at a second difficulty level.
The programmed processing unit is further configured to render at the user interface a third instance of the primary task with a second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference; receive data indicative of the third primary response and the response to the second interference; and analyze the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of two or more of the first primary response, the second primary response, and the third primary response, and also analyzing at least one of the first secondary response, to determine at least one indicator of the cognitive ability of the individual. The programmed processing unit is further configured to execute a first predictive model based at least in part on the at least one indicator to generate a scoring output (such as but not limited to a classification output) indicative of a likelihood of onset of a neurodegenerative condition of the individual and/or a stage of progression of the neurodegenerative condition. As described herein, the first predictive model is trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset including data representing the at least one indicator of the cognitive ability of the classified individual and data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual. The trained predictive model can be applied to the at least one indicator of the cognitive ability of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate a scoring output. 
The scoring output can be used to provide an indication of a likelihood of onset of a neurodegenerative condition and/or a stage of progression of the neurodegenerative condition of an individual.
[00259] The example processing unit is also configured, based at least in part on the scoring output (such as but not limited to a classification output), to generate an output to the user interface indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to a change in one or more of a recommended amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
[00260] In an example, the processing unit also can be configured to apply a second predictive model (including a second classifier) that is trained to provide a second scoring that is indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise. The second predictive model can be trained using a plurality of training datasets, where each training dataset is associated with a previously classified individual from a group of individuals. Each training dataset includes data representing the scoring output derived as described herein from the application of a first predictive model to the at least one indicator of the cognitive ability of the classified individual. 
Each training dataset also includes data indicative of one or more of (i) an indication (including a description) of the adverse event the individual experienced in response to administration of the pharmaceutical agent, drug, or biologic, or in response to a change in one or more of the amount, concentration, or dose titration of that pharmaceutical agent, drug, or biologic, (ii) the treatment regimen of the individual and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of the regimen in connection with the treatment or management of symptoms of the neurodegenerative condition, (iii) a type and a rating as to a degree of effectiveness (i.e., a rating of a success or failure) of any behavioral therapy, counseling, or physical exercise the individual is undergoing in connection with the treatment or management of symptoms of the neurodegenerative condition. The trained second predictive model can be applied to the scoring output (including a classification output) of the individual (derived from the data indicative of the individual's interactions with the tasks and/or interference executed by the cognitive platform), to generate the second scoring as an output.
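As a non-limiting illustrative sketch of the two-stage scoring described above (the function names, toy models, and data layout here are assumptions for illustration only, not the actual predictive models of this disclosure), a second model can be trained on scoring outputs of a first model paired with observed outcome labels from previously classified individuals:

```python
def first_model_score(indicators):
    # Toy first-stage "predictive model": average of cognitive-ability
    # indicator values (a stand-in for any trained first classifier).
    return sum(indicators) / len(indicators)

def train_second_model(training_datasets):
    # Each training dataset pairs a previously classified individual's
    # scoring output with an observed outcome label (e.g., whether an
    # adverse event occurred).  Toy second stage: nearest-class-mean.
    by_label = {}
    for score, label in training_datasets:
        by_label.setdefault(label, []).append(score)
    means = {lbl: sum(v) / len(v) for lbl, v in by_label.items()}
    def second_model(score):
        # Classify a new scoring output by its nearest class mean.
        return min(means, key=lambda lbl: abs(means[lbl] - score))
    return second_model

# Hypothetical training data: (scoring output, observed outcome label)
data = [(0.2, "adverse_event"), (0.3, "adverse_event"),
        (0.8, "no_adverse_event"), (0.9, "no_adverse_event")]
model = train_second_model(data)
prediction = model(first_model_score([0.85, 0.95]))
```

In practice each stage would be a trained classifier (e.g., a regression or neural-network model); the threshold-style toy models above only illustrate the data flow from first-stage scoring output into second-stage training and prediction.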
[00261] In a non-limiting example, the processing unit can be further configured to render a second instance of the task at the user interface, requiring a second response from the individual to the second instance of the task, and analyze a difference between the data indicative of the first response and the second response to compute an interference cost as a measure of at least one additional indication of cognitive abilities of the individual.
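A minimal sketch of such an interference cost computed as a difference between two performance measures (the function name and score layout are illustrative assumptions, not part of this disclosure):

```python
def interference_cost(single_task_score, multi_task_score):
    # Difference between performance on the task with interference and
    # performance on the task in isolation; a more negative value
    # indicates greater susceptibility to interference.
    return multi_task_score - single_task_score

# Hypothetical accuracy scores (fraction of correct responses)
cost = interference_cost(single_task_score=0.90, multi_task_score=0.72)
```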
[00262] In a non-limiting example, based on the results of the analysis of the performance metrics, a medical, healthcare, or other professional (with consent of the individual) can gain a better understanding of potential adverse events which may occur (or potentially are occurring) if the individual is administered a particular type of, amount, concentration, or dose titration of a pharmaceutical agent, drug, biologic, or other medication, including one potentially affecting cognition.
[00263] In a non-limiting example, a searchable database is provided herein that includes data indicative of the results of the analysis of the performance metrics (including a condition indicator) for particular individuals, along with known levels of efficacy of at least one type of pharmaceutical agent, drug, biologic, or other medication experienced by the individuals in interacting with the cognitive platform, and/or quantifiable information on one or more adverse events experienced by the individual with administration of the at least one type of pharmaceutical agent, drug, biologic, or other medication. The searchable database can be configured to provide metrics for use to determine whether a given individual is a candidate for benefiting from a particular type of pharmaceutical agent, drug, biologic, or other medication, based on the data obtained for the individual in interacting with the task and/or interference rendered at the computing device, and/or the scoring output (such as but not limited to a classification output) generated using the predictive model, and/or a level of expression of one or more of an amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, a tau protein, or an apolipoprotein, and/or an indicator of efficacy of an administered drug, biologic or other pharmaceutical agent (such as but not limited to one or more of methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, or crenezumab) in the individual's interaction(s) with the cognitive platform.
[00264] As a non-limiting example, performance metrics can assist with identifying whether the individual is a candidate for a particular type of drug (such as but not limited to a stimulant, e.g., methylphenidate or amphetamine) or whether it might be beneficial for the individual to have the drug administered in conjunction with a regimen of specified repeated interactions with the tasks and/or interference rendered at the computing device. Other non-limiting examples of a biologic, drug or other pharmaceutical agent applicable to any example described herein include methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, and crenezumab.
[00265] In a non-limiting example, based on the results of the analysis of the performance metric (including a condition indicator), a medical, healthcare, or other professional (with consent of the individual) can gain a better understanding of potential adverse events which may occur (or potentially are occurring) if the individual is administered a different amount, concentration, or dose titration of a pharmaceutical agent, drug, biologic, or other medication, including one potentially affecting cognition.
[00266] In any example implementation, data and other information from an individual is collected, transmitted, and analyzed with their consent.
[00267] As a non-limiting example, the cognitive platform described in connection with any example system, method and apparatus herein, including a cognitive platform based on interference processing, can be based on or include the Project: EVO™ platform by Akili Interactive Labs, Inc., Boston, MA.

Non-limiting Example Tasks and Interference
[00268] Following is a summary of reported results showing the extensive physiological, behavioral, and cognitive measurement data and analysis of the brain regions, neural activity, and/or neural pathway mechanisms involved (e.g., activated or suppressed) as an individual interacts with stimuli under differing cognitive or emotional load. The articles also describe the differences that can be sensed and quantifiably measured based on the individual's performance at cognitive tasks versus stimuli with computer-implemented time-varying elements.
[00269] Based on physiological and other measurements, regions of the brain implicated in emotional processing and cognitive tasks are reported. For example, in the review article by Pourtois et al., 2013, "Brain mechanisms for emotional influences on perception and attention: What is magic and what is not," Biological Psychology, 92, 492-512, it is reported that the amygdala monitors the emotional value of stimuli, projects to several other areas of the brain, and sends feedback to sensory pathways (including striate and extrastriate visual cortex). It is also reported that, due to an individual's limited processing capacity, the individual cannot fully analyze simultaneous stimuli in parallel, and these stimuli compete for processing resources in order to gain access to higher cognitive stages and awareness of the individual. With an individual having to direct attention to the location or features of a given stimulus, neural activity in brain regions representing this stimulus increases, at the expense of other concurrent stimuli. Pourtois et al. indicates that this phenomenon has been extensively demonstrated by neuronal recordings as well as imaging methods (EEG, PET, fMRI), and attributed to a gain control. Pourtois et al. concludes that emotion signals may enhance processing efficiency and competitive strength of emotionally significant events through gain control mechanisms similar to those of other attentional systems, but mediated by distinct neural mechanisms in the amygdala and interconnected prefrontal areas, and indicates that alterations in these brain mechanisms might be associated with psychopathological conditions, such as anxiety or phobia. It is also reported that anxious or depressed patients can show maladaptive attentional biases towards negative information. Pourtois et al.
also reports that imaging results from EEG and fMRI support a conclusion that the processing of emotional (such as fearful or threat-related) stimuli yields a gain control effect in the visual cortex and the emotional gain control effect can account for the more efficient processing of threat-related stimuli, in addition to or in parallel with any concurrent modulation by other task-dependent or exogenous stimulus-driven mechanisms of attention (see also Brosch et al., 2011, "Additive effects of emotional, endogenous, and exogenous attention: behavioral and electrophysiological evidence," Neuropsychologia 49, 1779-1787).
[00270] During selective visual attention tests, EEG measurements can provide useful results in the modulation of the gamma band. (See, e.g., Muller et al., (2000). "Modulation of induced gamma band activity in the human EEG by attention and visual information processing." International Journal of Psychophysiology 38.3: 283-299). There are also studies showing modification in the EEG alpha band signal during attentional shifts. (See, e.g., Sauseng et al. (2005) "A shift of visual spatial attention is selectively associated with human EEG alpha activity." European Journal of Neuroscience 22.11: 2917-2926.) The P300 event-related potential (ERP) also provides cues about attention. For example, Naatanen et al., (1978) "Early selective-attention effect on evoked potential reinterpreted", Acta Psychologica, 42, 313-329, discloses studies of auditory attention, which show that the evoked potential has an enhanced negative response when a subject is presented with infrequent stimuli as compared to frequent stimuli. Naatanen et al. discloses that this negative component, called the mismatch negativity, occurs 100 to 200 ms after the stimuli, a time squarely within the range of the pre-attentive phase.
[00271] As described hereinabove, emotional processing and cognitive processing each require interactions within and among specific brain networks. The degree to which a cognitive assessment, monitor, or treatment is successful can depend on the degree of user engagement, attention, and focus. Major depressive disorder and other similar or related disorders can be associated with changes to cognitive capabilities in multiple cognitive domains including attention (concentration), memory (learning), decision making (judgment), comprehension, judgment, reasoning, understanding, learning, and remembering. The cognitive changes associated with depression can contribute to some of the disabilities experienced by individuals with this disorder.
[00272] As described hereinabove, the individual's response to a stimulus can vary depending on the state of the individual, including based on the individual's cognitive condition, disease, or executive function disorder. Measurements of the individual's performance can provide insight into the individual's status relative to a cognitive condition, disease, or executive function disorder, including the likelihood of onset and/or stage of progression of the cognitive condition, disease, or executive function disorder.
[00273] The foregoing non-limiting examples of physiological measurement data, behavioral data, and other cognitive data, show that the responses of an individual to tasks can differ based on the type of stimuli. Furthermore, the foregoing examples indicate that the degree to which an individual is affected by a computer-implemented time-varying element, and the degree to which the performance of the individual at a task is affected in the presence of the computer-implemented time-varying element, is dependent on the degree to which the individual exhibits a form of emotional or affective bias. As described herein, the differences in the individual's performance may be quantifiably sensed and measured based on the performance of the individual at cognitive tasks versus stimuli with computer-implemented time-varying elements (e.g., emotional or affective elements). The reported physiological measurement data, behavioral data, and other cognitive data, also show that the cognitive or emotional load evoked by a stimulus can vary depending on the state of an individual, including based on the individual's cognitive condition, disease state, or presence or absence of executive function disorder. As described herein, measurements of the differences in the individual's performance at cognitive tasks versus stimuli with computer-implemented time-varying elements can provide quantifiable insight into the likelihood of onset and/or stage of progression of a cognitive condition, disease, and/or executive function disorder, in the individual, such as but not limited to, social anxiety, depression, bipolar disorder, major depressive disorder, post-traumatic stress disorder, schizophrenia, autism spectrum disorder, attention deficit hyperactivity disorder, dementia, Parkinson's disease, Huntington's disease, or other neurodegenerative condition, Alzheimer's disease, or multiple sclerosis.
[00274] The effects of interference processing on the cognitive control abilities of individuals have been reported. See, e.g., A. Anguera, Nature 501, p. 97 (September 5, 2013) (the "Nature article"). See, also, U.S. Publication No. 20140370479A1 (U.S. Application No. 13/879,589), filed on Nov. 10, 2011, which is incorporated herein by reference. Some of those cognitive abilities include cognitive control abilities in the areas of attention (selectivity, sustainability, etc.), working memory (capacity and the quality of information maintenance in working memory) and goal management (ability to effectively parallel process two attention-demanding tasks or to switch tasks). As an example, children diagnosed with ADHD (attention deficit hyperactivity disorder) exhibit difficulties in sustaining attention. Attention selectivity was found to depend on neural processes involved in ignoring goal-irrelevant information and on processes that facilitate the focus on goal-relevant information. The publications report neural data showing that when two objects are simultaneously placed in view, focusing attention on one can pull visual processing resources away from the other. Studies were also reported showing that memory depended more on effectively ignoring distractions, and the ability to maintain information in mind is vulnerable to interference by both distraction and interruption. Interference by distraction can be, e.g., an interference that is a non-target, that distracts the individual's attention from the primary task, but that the instructions indicate the individual is not to respond to.
Interference by interruption/interruptor can be, e.g., an interference that is a target or two or more targets, that also distracts the individual's attention from the primary task, but that the instructions indicate the individual is to respond to (e.g., for a single target) or choose between/among (e.g., a forced-choice situation where the individual decides between differing degrees of a feature).
[00275] There were also fMRI results reported showing that diminished memory recall in the presence of a distraction can be associated with a disruption of a neural network involving the prefrontal cortex, the visual cortex, and the hippocampus (involved in memory consolidation). Prefrontal cortex networks (which play a role in selective attention) can be vulnerable to disruption by distraction. The publications also report that goal management, which requires cognitive control in the areas of working memory or selective attention, can be impacted by a secondary goal that also demands cognitive control. The publications also reported data indicating beneficial effects of interference processing as an intervention with effects on an individual's cognitive abilities, including to diminish the detrimental effects of distractions and interruptions. The publications described cost measures that can be computed (including an interference cost) to quantify the individual's performance, including to assess single-tasking or multitasking performance.
[00276] An example cost measure disclosed in the publications is the percentage change in an individual's performance at a single-tasking task as compared to a multitasking task, such that a greater cost (that is, a more negative percentage cost) indicates increased interference when an individual is engaged in multi-tasking vs. single-tasking. The publications describe an interference cost determined as the difference between an individual's performance on a primary task in isolation versus a primary task with one or more interference applied, where the interference cost provides an assessment of the individual's susceptibility to interference.
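The percentage-change cost measure described above can be sketched as follows (a minimal illustration; the function name and score layout are assumptions, not part of the publications):

```python
def percent_interference_cost(single_task_score, multi_task_score):
    # Percentage change in performance from single-tasking to
    # multi-tasking; a more negative percentage indicates a greater
    # interference cost.
    return 100.0 * (multi_task_score - single_task_score) / single_task_score

# Hypothetical scores: performance drops from 0.80 alone to 0.60 with
# interference, i.e., a -25% interference cost.
cost_pct = percent_interference_cost(0.80, 0.60)
```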
[00277] The tangible benefits of computer-implemented interference processing are also reported. For example, the Nature article states that multi-tasking performance assessed using computer-implemented interference processing was able to quantify a linear age-related decline in performance in adults from 20 to 79 years of age. The Nature article also reports that older adults (60 to 85 years old) who interacted with an adaptive form of the computer-implemented interference processing exhibited reduced multitasking costs, with the gains persisting for six (6) months. The Nature article also reported that age-related deficits in neural signatures of cognitive control, as measured with electroencephalography, were remediated by the multitasking training (using the computer-implemented interference processing), with enhanced midline frontal theta power and frontal-posterior theta coherence. Interacting with the computer-implemented interference processing resulted in performance benefits that extended to untrained cognitive control abilities (enhanced sustained attention and working memory), with an increase in midline frontal theta power predicting a boost in sustained attention and preservation of multitasking improvement six (6) months later.
[00278] The example systems, methods, and apparatus according to the principles herein are configured to classify an individual as to cognitive abilities and/or to enhance those cognitive abilities based on implementation of interference processing using a computerized cognitive platform. The example systems, methods, and apparatus are configured to implement a form of multi-tasking using the capabilities of a programmed computing device, where an individual is required to perform a primary task and an interference substantially simultaneously, where the task and/or the interference includes a computer-implemented time-varying element, and the individual is required to respond to the computer-implemented time-varying element. The sensing and measurement capabilities of the computing device are configured to collect data indicative of the physical actions taken by the individual during the response execution time to respond to the task at substantially the same time as the computing device collects the data indicative of the physical actions taken by the individual to respond to the computer-implemented time-varying element. The capabilities of the computing devices and programmed processing units to render the task and/or the interference in real time to a user interface, and to measure the data indicative of the individual's responses to the task and/or the interference and the computer-implemented time-varying element in real time and substantially simultaneously can provide quantifiable measures of an individual's cognitive capabilities. In some examples, the computing devices and programmed processing units are configured to rapidly switch to and from different tasks and interferences. In some examples, the computing devices and programmed processing units are configured to perform multiple, different, tasks or interferences in a row (including for single-tasking, where the individual is required to perform a single type of task for a set period of time).
[00279] In some examples herein, the task and/or interference includes a response deadline, such that the user interface imposes a limited time period for receiving at least one type of response from the individual interacting with the apparatus or computing device. For example, the period of time that an individual is required to interact with a computing device or other apparatus to perform a primary task and/or an interference can be a predetermined amount of time, such as but not limited to about 30 seconds, about 1 minute, about 4 minutes, about 7 minutes, about 10 minutes, or greater than 10 minutes.
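A response-deadline window of the type described above can be sketched as follows (the function name, labels, and the 2-second deadline are arbitrary illustrative assumptions):

```python
def score_response(response_time_s, correct, deadline_s=2.0):
    # Score one response against an imposed response deadline: responses
    # arriving after the limited time period (or not at all) are scored
    # as no-responses regardless of content.
    if response_time_s is None or response_time_s > deadline_s:
        return "no_response"   # response window closed
    return "hit" if correct else "miss"
```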
[00280] The example systems, methods, and apparatus can be configured to implement a form of multi-tasking to provide measures of the individual's capabilities in deciding whether to perform one action instead of another and to activate the rules of the current task in the presence of an interference such that the interference diverts the individual's attention from the task, as a measure of an individual's cognitive abilities in executive function control.
[00281] The example systems, methods, and apparatus can be configured to implement a form of single-tasking, where measures of the individual's performance at interacting with a single type of task (i.e., with no interference) for a set period of time (such as but not limited to a navigation task only or a target discrimination task only) can also be used to provide a measure of an individual's cognitive abilities.
[00282] The example systems, methods, and apparatus can be configured to implement sessions that involve differing sequences and combinations of single-tasking and multi-tasking trials. In a first example implementation, a session can include a first single-tasking trial (with a first type of task), a second single-tasking trial (with a second type of task), and a multi-tasking trial (a primary task rendered with an interference). In a second example implementation, a session can include two or more multi-tasking trials (a primary task rendered with an interference). In a third example implementation, a session can include two or more single-tasking trials (all based on the same type of tasks or at least one being based on a different type of task).
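The three example session compositions can be encoded as simple data, as in the following sketch (the dictionary layout and task names are illustrative assumptions):

```python
# Hypothetical encodings of the three example session compositions
sessions = {
    "first":  [("single", "navigation"), ("single", "targeting"),
               ("multi", "navigation+interference")],
    "second": [("multi", "navigation+interference"),
               ("multi", "navigation+interference")],
    "third":  [("single", "navigation"), ("single", "navigation")],
}

def trial_counts(session):
    # Tally single-tasking vs multi-tasking trials in a session.
    counts = {"single": 0, "multi": 0}
    for kind, _task in session:
        counts[kind] += 1
    return counts
```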
[00283] The performance can be further analyzed to compare the effects of two different types of interference (e.g., distraction or interruptor) on the performances of the various tasks. Some comparisons can include performance without interference, performance with distraction, and performance with interruption. The cost of each type of interference (e.g., distraction cost and interruptor/multi-tasking cost) on the performance level of a task is analyzed and reported to the individual.
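The per-interference-type comparison described above can be sketched as follows (a minimal illustration relative to a no-interference baseline; names and score layout are assumptions):

```python
def per_interference_costs(no_interference, with_distraction, with_interruption):
    # Cost of each interference type relative to the no-interference
    # baseline; inputs are performance scores for the same task.
    return {
        "distraction_cost": with_distraction - no_interference,
        "interruptor_cost": with_interruption - no_interference,
    }

# Hypothetical per-condition performance scores
costs = per_interference_costs(no_interference=0.90,
                               with_distraction=0.80,
                               with_interruption=0.65)
```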
[00284] The interference processing provides a quantifiable way to measure and improve the ability to process interference events (interruptions and distractions). Interference susceptibility is recognized as a limiting factor across global executive function (including attention and memory) and is known to be fragile in multiple diseases. Changes in EEG signals are shown to occur at neurological loci associated with cognitive control. For example, midline frontal theta (MFT) power as measured by stimulus-locked electroencephalography (EEG) before, during, or after an individual performs the interference processing can provide indications of attention and interference susceptibility.
[00285] In any example herein, the interference can be an instance of a secondary task that includes a stimulus that is either a non-target (as a distraction) or a target (as an interruptor), or a stimulus that is differing types of targets (e.g., differing degrees of a facial expression or other characteristic/feature difference).
[00286] Based on the capability of a programmed processing unit to control the effecting of multiple separate sources (including sensors and other measurement components) and the receiving of data selectively from these multiple different sources substantially simultaneously (i.e., at roughly the same time or within a short time interval) and in real-time, the example systems, methods, and apparatus herein can be used to collect quantitative measures of the responses from an individual to the task and/or interference, which could not be achieved using normal human capabilities. As a result, the example systems, methods, and apparatus herein can be configured to implement a programmed processing unit to render the interference substantially simultaneously with the task over certain time periods.
[00287] In some example implementations, the example systems, methods, and apparatus herein also can be configured to receive the data indicative of the measure of the degree and type of the individual's response to the task substantially simultaneously as the data indicative of the measure of the degree and type of the individual's response to the interference is collected (whether the interference includes a target or a non-target). In some examples, the example systems, methods, and apparatus are configured to perform the analysis by applying scoring or weighting factors to the measured data indicative of the individual's response to a non-target that differ from the scoring or weighting factors applied to the measured data indicative of the individual's response to a target, in order to compute a cost measure (including an interference cost).
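The differential scoring of responses to targets versus non-targets can be sketched as follows (the weight values are assumptions for illustration, not weights disclosed herein):

```python
# Illustrative weighting scheme: a response to a non-target, which the
# individual should withhold, is weighted differently than a response
# to a target when computing a cost measure.
WEIGHTS = {"target": 1.0, "nontarget": -0.5}

def weighted_response_score(responses):
    # responses: list of (stimulus_type, responded) pairs
    score = 0.0
    for stimulus_type, responded in responses:
        if responded:
            score += WEIGHTS[stimulus_type]
    return score
```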
[00288] In example systems, methods, and apparatus herein, the cost measure can be computed based on the difference in measures of the performance of the individual at one or more tasks in the absence of interference as compared to the measures of the performance of the individual at the one or more tasks in the presence of interference, where the one or more tasks and/or the interference includes one or more computer-implemented time-varying elements. As described herein, the requirement of the individual to interact with (and provide a response to) the computer-implemented time-varying element(s) can introduce cognitive or emotional load that quantifiably affects the individual's capability at performing the task(s) and/or interference due to the requirement for emotional processing to respond to the computer-implemented time-varying element. In an example, the interference cost computed based on the data collected herein can provide a quantifiable assessment of the individual's susceptibility to interference. The determination of the difference between an individual's performance on a task in isolation versus a task in the presence of one or more interferences (the task and/or interference including the computer-implemented time-varying element) provides an interference cost metric that can be used to assess and classify cognitive capabilities of the individual. The interference cost computed based on the individual's performance of the tasks and/or interference can also provide a quantifiable measure of the individual's cognitive condition, disease state, or presence or stage of an executive function disorder, such as but not limited to, Alzheimer's disease, dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, or Huntington's disease.
[00289] The example systems, methods, and apparatus herein can be configured to perform the analysis of the individual's susceptibility to interference (including as a cost measure such as the interference cost), as a reiterating, cyclical process. For example, where an individual is determined to have minimized interference cost for a given task and/or interference, the example systems, methods, and apparatus can be configured to require the individual to perform a more challenging task and/or interference (i.e., having a higher difficulty level) until the individual's performance metric indicates a minimized interference cost in that given condition, at which point example systems, methods, and apparatus can be configured to present the individual with an even more challenging task and/or interference until the individual's performance metric once again indicates a minimized interference cost for that condition. This can be repeated any number of times until a desired end-point of the individual's performance is obtained.
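The reiterating, cyclical difficulty adjustment described above can be sketched as follows (the threshold, step size, and function name are illustrative assumptions):

```python
def next_difficulty(level, interference_cost, threshold=-0.05, step=1):
    # Advance to a more challenging task/interference once the measured
    # interference cost is minimized (i.e., close to zero) for the
    # current condition; otherwise hold the current difficulty level.
    if interference_cost >= threshold:
        return level + step
    return level
```

Repeatedly applying this rule across sessions yields the cycle described: the individual stays at a difficulty level until the interference cost is minimized, then is presented with an even more challenging condition, until a desired end-point is obtained.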
[00290] As a non-limiting example, the interference cost can be computed based on measurements of the individual's performance at a single-tasking task (without an interference) as compared to a multi-tasking task (with interference), to provide an assessment. For example, an individual's performance at a multi-tasking task (e.g., targeting task with interference) can be compared to their performance at a single- tasking targeting task without interference to provide the interference cost.
[00291] Example systems, apparatus and methods herein are configured to analyze data indicative of the degree to which an individual is affected by a computer-implemented time-varying element, and/or the degree to which the performance of the individual at a task is affected in the presence of the computer-implemented time-varying element, to provide a performance metric including a quantified indicator of cognitive abilities of the individual. The performance metric can be used as an indicator of the degree to which the individual exhibits a form of emotional or affective bias.
[00292] In some example implementations, the example systems, methods, and apparatus herein also can be configured to selectively receive data indicative of the measure of the degree and type of the individual's response to an interference that includes a target stimulus (i.e., an interruptor) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the task is collected and to selectively not collect the measure of the degree and type of the individual's response to an interference that includes a non-target stimulus (i.e., a distraction) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the task is collected. That is, the example systems, methods, and apparatus are configured to discriminate between the windows of response of the individual to the target versus non-target by selectively controlling the state of the sensing/measurement components for measuring the response either temporally and/or spatially. This can be achieved by selectively activating or deactivating sensing/measurement components based on the presentation of a target or non-target, or by receiving the data measured for the individual's response to a target and selectively not receiving (e.g., disregarding, denying, or rejecting) the data measured for the individual's response to a non-target.
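The selective collection described above can be sketched at the data level as follows (a minimal illustration with assumed names; a real system might instead activate or deactivate the sensing/measurement components themselves):

```python
def gate_response_data(stimulus_type, measurement):
    # Keep measured response data for targets (interruptors); selectively
    # disregard data measured for responses to non-targets (distractions).
    return measurement if stimulus_type == "target" else None

# Hypothetical stream of (stimulus type, measured response) pairs
stream = [("target", 0.42), ("nontarget", 0.30), ("target", 0.55)]
kept = [m for s, m in stream if gate_response_data(s, m) is not None]
```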
[00293] As described herein, the example systems, methods, and apparatus herein can be implemented to provide a measure of the cognitive abilities of an individual in the area of attention, including based on capabilities for sustainability of attention over time, selectivity of attention, and reduction of attention deficit. Other areas of an individual's cognitive abilities that can be measured using the example systems, methods, and apparatus herein include affective bias, mood, level of cognitive bias, impulsivity, inhibition, perceptive abilities, reaction and other motor functions, visual acuity, long-term memory, working memory, short-term memory, logic, and decision-making.
[00294] As described herein, the example systems, methods, and apparatus herein can be implemented to adapt the tasks and/or interference (at least one including a computer-implemented time-varying element) from one user session to another (or even from one user trial to another) to enhance the cognitive skills of an individual based on the science of brain plasticity. Adaptivity is a beneficial design element for any effective plasticity-harnessing tool. In example systems, methods, and apparatus, the processing unit is configured to control parameters of the tasks and/or interference, such as but not limited to the timing, positioning, and nature of the stimuli, so that the physical actions of the individual can be recorded during the interaction(s). As described hereinabove, the individual's physical actions are affected by their neural activity during the interactions with the computing device to perform single-tasking and multi-tasking tasks. The science of interference processing shows (based on the results from physiological and behavioral measurements) that the aspect of adaptivity can result in changes in the brain of an individual in response to the training from multiple sessions (or trials) based on neuroplasticity, thereby enhancing the cognitive skills of the individual. The example systems, methods, and apparatus are configured to implement tasks and/or interference with at least one computer-implemented time-varying element, where the individual performs the interference processing. As supported in the published research results described hereinabove, the effect on an individual of performing tasks can tap into novel aspects of cognitive training to enhance the cognitive abilities of the individual.
[00295] FIGs. 9A - 12D show non-limiting example user interfaces that can be rendered using example systems, methods, and apparatus herein to render the tasks and/or interferences (either or both with a computer-implemented time-varying element) for user interactions. The non-limiting example user interfaces of FIGs. 9A - 12D also can be used for one or more of: to display instructions to the individual for performing the tasks and/or interferences, to interact with the computer-implemented time-varying element, to collect the data indicative of the individual's responses to the tasks and/or the interferences and the computer-implemented time-varying element, to show progress metrics, and to provide analysis metrics.
[00296] FIGs. 9A - 9D show non-limiting example user interfaces rendered using example systems, methods, and apparatus herein. As shown in FIGs. 9A - 9B, an example programmed processing unit can be used to render to the user interfaces (including graphical user interfaces) display features 900 for displaying instructions to the individual for performing the tasks and/or interference, and metric features 902 to show status indicators from progress metrics and/or results from application of analytics to the data collected from the individual's interactions (including the responses to tasks/interferences) to provide the analysis metrics. In any example systems, methods, and apparatus herein, the predictive model can be used to provide the analysis metrics as a response output. In any example systems, methods, and apparatus herein, the data collected from the user interactions can be used as input to train the predictive model. As shown in FIGs. 9A - 9B, an example programmed processing unit also may be used to render to the user interfaces (including graphical user interfaces) an avatar or other processor-rendered guide 904 that an individual is required to control (such as but not limited to navigating a path or other environment in a visuo-motor task, and/or selecting an object in a target discrimination task). As shown in FIG. 9B, the display features 900 can be used to instruct the individual as to what is expected in performing a navigation task, while the user interface depicts (using the dashed line) the type of movement of the avatar or other processor-rendered guide 904 required for performing the navigation task. In an example, the navigation task may include milestone objects 910 that the individual is required to steer the avatar to cross or avoid, in order to determine the scoring. As shown in FIG. 9C, the display features 900 can be used to instruct the individual as to what is expected in performing a target discrimination task, while the user interface depicts the types of object(s) 906 and 908 that may be rendered to the user interface, with one type of object 906 designated as a target while the other type of object 908 is designated as a non-target, e.g., by being crossed out in this example. As shown in FIG. 9D, the display features 900 can be used to instruct the individual as to what is expected in performing both a navigation task as a primary task and a target discrimination task as a secondary task (i.e., an interference), while the user interface depicts (using the dashed line) the type of movement of the avatar or other processor-rendered guide 904 required for performing the navigation task, and the user interface renders the object type designated as a target object 906 and the object type designated as a non-target object 908.
[00297] The measured data indicative of the individual's response to the single-tasking task rendered as a targeting task can be analyzed to provide quantitative insight into the cognitive domains of perception (detection & discrimination), motor function (detection & discrimination), impulsivity/inhibitory control, and visual working memory. The measured data indicative of the individual's response to the single-tasking task rendered as a navigation task can be analyzed to provide quantitative insight into the cognitive domains of visuomotor tracking and motor function. The measured data indicative of the individual's response to a primary task (rendered as a navigation task) in the presence of an interference (rendered as a targeting task), in a multi-tasking task, can be analyzed to provide quantitative insight into the cognitive domains of divided attention and interference management.
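The divided-attention analysis described above can be illustrated with a toy computation of an "interference cost" on reaction time. This is a hedged sketch, not the platform's actual metric: the formula (relative reaction-time slowdown on hits under interference) and all function names are assumptions for illustration.

```python
# Hypothetical sketch of an "interference cost" performance metric: the
# relative change in reaction time (RT) on correct responses ("hits") when
# the primary task is performed with vs. without interference.

def mean(values):
    return sum(values) / len(values)

def interference_cost(rt_single_ms, rt_multi_ms):
    """Relative RT slowdown under interference; lower values indicate
    better interference management."""
    single = mean(rt_single_ms)
    multi = mean(rt_multi_ms)
    return (multi - single) / single

# Example: hits averaging 450 ms single-tasking vs. 540 ms under interference.
cost = interference_cost([440, 450, 460], [530, 540, 550])
print(round(cost, 3))  # 0.2 -> a 20% slowdown under interference
```

A later section of this document notes that lower values of such a metric indicate better performance, which is the convention this sketch follows.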
[00298] FIGs. 10A - 10D show examples of the features of object(s) (targets or non-targets) that can be rendered as time-varying characteristics to an example user interface, according to the principles herein. FIG. 10A shows an example where the modification to the time-varying characteristics of an aspect of the object 1000 rendered to the user interface is a dynamic change in position and/or speed of the object 1000 relative to the environment rendered in the graphical user interface. FIG. 10B shows an example where the modification to the time-varying characteristics of an aspect of the object 1002 rendered to the user interface is a dynamic change in size and/or direction of trajectory/motion and/or orientation of the object 1002 relative to the environment rendered in the graphical user interface. FIG. 10C shows an example where the modification to the time-varying characteristics of an aspect of the object 1004 rendered to the user interface is a dynamic change in shape or other type of the object 1004 relative to the environment rendered in the graphical user interface. In this non-limiting example, the time-varying characteristic of object 1004 is effected using morphing from a first type of object (a star object) to a second type of object (a round object). In another non-limiting example, the time-varying characteristic of object 1004 is effected by rendering a blendshape as a proportionate combination of a first type of object and a second type of object. FIG. 10D shows an example where the modification to the time-varying characteristics of an aspect of the object 1006 rendered to the user interface is a dynamic change in the pattern, color, or other visual feature of the object 1006 relative to the environment rendered in the graphical user interface (in this non-limiting example, from a star object having a first pattern to a star object having a second pattern). In another non-limiting example, the time-varying characteristic of the object can be a rate of change of a facial expression depicted on or relative to the object.
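The blendshape described above (a proportionate combination of a first object type and a second object type) can be sketched as per-vertex linear interpolation. This is a minimal illustration under stated assumptions: shapes are modeled as lists of 2-D vertex positions, and a real renderer would apply the same blend per frame to actual mesh data.

```python
# Illustrative sketch of a time-varying "blendshape": rendering an object as
# a proportionate combination of a first shape and a second shape.

def blend_shapes(shape_a, shape_b, t):
    """Linearly interpolate corresponding vertices; t=0 gives shape_a,
    t=1 gives shape_b, and intermediate t gives a proportionate morph."""
    assert len(shape_a) == len(shape_b)
    return [((1 - t) * ax + t * bx, (1 - t) * ay + t * by)
            for (ax, ay), (bx, by) in zip(shape_a, shape_b)]

# One vertex of a star shape morphing toward the matching circle vertex.
star_vertex = [(0.0, 1.0)]
circle_vertex = [(0.0, 0.5)]
print(blend_shapes(star_vertex, circle_vertex, 0.5))  # [(0.0, 0.75)]
```

Animating `t` from 0 to 1 over successive frames produces the star-to-round morph of FIG. 10C as a smooth time-varying characteristic.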
[00299] FIGs. 11A - 11T show a non-limiting example of the dynamics of tasks and interferences that can be rendered at user interfaces, according to the principles herein. In this example, the primary task is a visuo-motor navigation task, and the interference is target discrimination (as a secondary task). As shown in FIGs. 11D, 11I - 11K, and 11O - 11Q, the individual is required to perform the navigation task by controlling the motion of the avatar 1102 along a path that coincides with the milestone objects 1104. FIGs. 11A - 11T show a non-limiting example implementation where the individual is expected to actuate an apparatus or computing device (or other sensing device) to cause the avatar 1102 to coincide with the milestone object 1104 as the response in the navigation task, with scoring based on the success of the individual at crossing paths with (e.g., hitting) the milestone objects 1104. In another example, the individual is expected to actuate an apparatus or computing device (or other sensing device) to cause the avatar 1102 to miss the milestone object 1104, with scoring based on the success of the individual at avoiding the milestone objects 1104. FIGs. 11A - 11C show the dynamics of a target object 1106 (a star having a first type of pattern). FIGs. 11E - 11H show the dynamics of a non-target object 1108 (a star having a second type of pattern). FIGs. 11I - 11T show the dynamics of other portions of the navigation task, where the individual is expected to guide the avatar 1102 to cross paths with the milestone object 1104 in the absence of an interference (an instance of a secondary task).
[00300] In the example of FIGs. 11A - 11T, the processing unit of the example system, method, and apparatus is configured to receive data indicative of the individual's physical actions to cause the avatar 1102 to navigate the path. For example, the individual may be required to perform physical actions to "steer" the avatar, e.g., by changing the rotational orientation or otherwise moving a computing device. Such action can cause a gyroscope or accelerometer or other motion or position sensor device to detect the movement, thereby providing measurement data indicative of the individual's degree of success in performing the navigation task.
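The tilt-to-steer mapping described above can be sketched as a simple per-frame update. This is an illustrative assumption rather than the platform's control law: the sensor reading format, gain, and clamping range are all hypothetical, and real code would consume live gyroscope/accelerometer events rather than the sample values below.

```python
# Minimal sketch: map device-tilt readings (e.g., roll from a gyroscope or
# accelerometer) to the avatar's lateral position in the navigation task.

def steer(avatar_x, roll_deg, gain=0.02, x_min=-1.0, x_max=1.0):
    """Move the avatar laterally in proportion to device roll, clamped
    to the track bounds."""
    new_x = avatar_x + gain * roll_deg
    return max(x_min, min(x_max, new_x))

x = 0.0
for roll in [10.0, 10.0, -5.0]:  # simulated per-frame tilt readings
    x = steer(x, roll)
print(round(x, 2))  # 0.3
```

Comparing the resulting avatar trajectory against the milestone-object positions yields the hit/avoid measurement data used for scoring.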
[00301] In the example of FIGs. 11A - 11C and 11E - 11H, the processing unit of the example system, method, and apparatus is configured to receive data indicative of the individual's physical actions to perform the target discrimination task. For example, the individual may be instructed prior to a trial or other session to tap, or make another physical indication, in response to display of a target object 1106, and not to tap or make the physical indication in response to display of a non-target object 1108. In FIGs. 11A - 11C and 11E - 11H, the target discrimination task acts as an interference (i.e., an instance of a secondary task) to the primary navigation task, in an interference processing multi-tasking implementation. As described hereinabove, the example systems, methods, and apparatus can cause the processing unit to render a display feature to display the instructions to the individual as to the expected performance. As also described hereinabove, the processing unit of the example system, method, and apparatus can be configured to (i) receive the data indicative of the measure of the degree and type of the individual's response to the primary task substantially simultaneously as the data indicative of the measure of the degree and type of the individual's response to the interference is collected (whether the interference includes a target or a non-target), or (ii) selectively receive data indicative of the measure of the degree and type of the individual's response to an interference that includes a target stimulus (i.e., an interruptor) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the task is collected, and selectively not collect the measure of the degree and type of the individual's response to an interference that includes a non-target stimulus (i.e., a distraction) substantially simultaneously (i.e., at substantially the same time) as the data indicative of the measure of the degree and type of the individual's response to the task is collected.
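Option (ii) above — recording the secondary-task response only when the interference is a target (interruptor) and deliberately not collecting it when the interference is a non-target (distraction) — can be sketched as a filter over response events. The event format and names here are assumptions for illustration only.

```python
# Hedged sketch of selective response collection: keep responses to target
# stimuli (interruptors), discard responses to non-target stimuli
# (distractions) when selective collection is enabled.

def collect_responses(events, selective=True):
    """events: (stimulus_kind, response) pairs, with stimulus_kind in
    {'target', 'non-target'}. Returns the responses retained for analysis;
    response may be None when the individual withheld a response."""
    kept = []
    for kind, response in events:
        if selective and kind == 'non-target':
            continue  # distraction: deliberately not collected
        kept.append((kind, response))
    return kept

events = [('target', 'tap'), ('non-target', 'tap'), ('target', None)]
print(collect_responses(events))
# [('target', 'tap'), ('target', None)]
```

Passing `selective=False` corresponds to option (i), where responses are collected whether the interference includes a target or a non-target.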
[00302] FIGs. 12A - 12D show other non-limiting examples of the dynamics of tasks and interferences that can be rendered at user interfaces, according to the principles herein. In this example, the primary task is a visuo-motor navigation task, and the interference is target discrimination (as an instance of a secondary task). Similarly to FIGs. 11A - 11T, the individual is required to perform the visuo-motor navigation task by controlling the motion of the avatar 1202 along a path. The individual is required to provide a response to the tasks in the presence or absence of an interference 1204 (where the interference is rendered at the user interface as a discrimination task between a target or non-target object).
[00303] In a non-limiting example, the difficulty of a task and/or interference may be adapted with each different stimulus that is presented as a computer-implemented time-varying element.
[00304] In another non-limiting example, the example system, method, and apparatus herein can be configured to adapt a difficulty level of a task and/or interference one or more times at fixed time intervals or on another set schedule, such as but not limited to every second, every 10 seconds, every 30 seconds, or at frequencies of once per second, 2 times per second, or more (such as but not limited to 30 times per second).
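A fixed-interval adaptation schedule like the one just described can be sketched as a tick-driven scheduler. The class name, tick rate, and the decision to count (rather than apply) adaptations are illustrative assumptions; a real implementation would invoke the platform's difficulty-adjustment logic at each firing.

```python
# Illustrative scheduler that fires a difficulty adaptation on a fixed
# cadence (here once per simulated second), independent of stimulus timing.

class DifficultyScheduler:
    def __init__(self, adapt_every_s=1.0):
        self.adapt_every_s = adapt_every_s
        self.elapsed = 0.0
        self.adaptations = 0

    def tick(self, dt_s):
        """Advance the clock by dt seconds; fire an adaptation each time
        the configured interval elapses."""
        self.elapsed += dt_s
        while self.elapsed >= self.adapt_every_s:
            self.elapsed -= self.adapt_every_s
            self.adaptations += 1

sched = DifficultyScheduler(adapt_every_s=1.0)
for _ in range(12):          # twelve 0.25 s frames = 3 seconds
    sched.tick(0.25)
print(sched.adaptations)  # 3
```

Setting `adapt_every_s` to 1/30 would approximate the 30-times-per-second end of the schedule range mentioned above.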
[00305] In a non-limiting example of a visuo-motor task (a type of navigation task), one or more of navigation speed, shape of the course (changing frequency of turns, changing turning radius), and number and/or size of obstacles can be changed to modify the difficulty of a navigation game level, with the difficulty level increasing with at least one of increasing the speed, increasing the number, or increasing the sizes of obstacles (including types of milestone objects, e.g., some milestone objects to avoid and some milestone objects to cross/coincide with).
[00306] In a non-limiting example, the difficulty level of a task and/or interference of a subsequent level can also be changed in real-time as feedback, e.g., the difficulty of a subsequent level can be increased or decreased in relation to the data indicative of the performance of the task.
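The performance-driven feedback just described can be sketched as a simple up/down staircase: raise the next level's difficulty after good performance, lower it after poor performance. The accuracy threshold, step size, and bounds below are assumptions for illustration, not the platform's actual adaptive procedure.

```python
# Minimal up/down staircase sketch for real-time difficulty feedback.

def next_difficulty(level, accuracy, target=0.8, step=1, lo=1, hi=10):
    """Return the difficulty for the subsequent level given the accuracy
    measured on the current one, clamped to [lo, hi]."""
    if accuracy >= target:
        level += step   # good performance -> make the next level harder
    else:
        level -= step   # poor performance -> make the next level easier
    return max(lo, min(hi, level))

print(next_difficulty(5, 0.9))  # 6
print(next_difficulty(5, 0.6))  # 4
```

In a per-stimulus adaptation scheme (paragraph [00303]), the same rule could be applied after each individual response rather than after each level.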
[00307] In an example, the response recorded for the targeting task can be, but is not limited to, a touch, swipe or other gesture relative to a user interface or image collection device (including a touch-screen or other pressure sensitive screen, or a camera) to interact with a user interface. In another example, the response recorded for the targeting task can be, but is not limited to, user actions that cause changes in a position, orientation, or movement of a computing device including the cognitive platform, that is recorded using a sensor disposed in or otherwise coupled to the computing device (such as but not limited to a motion sensor or position sensor).
[00308] In this example and any other example herein, the cData and/or nData can be collected in real-time.
[00309] In this example and any other example herein, the adjustments to the type of tasks and/or CSIs can be made in real-time.
[00310] FIGs. 13A through 16 show the results of non-limiting example measurements made using an example system, method, and apparatus described herein.
[00311] FIGs. 13A - 13B show physiological measurement data (nData) and other data for a set of individuals aged 70 years or older (both male and female individuals). The example physiological measurement data include data indicative of an amyloid grouping of the individuals (i.e., whether determined to be amyloid positive (+) or amyloid negative (-)), an ApoE ε4 status, standard uptake value ratios (SUVR) computed based on positron emission tomography (PET) imaging data, cortical thickness in signature regions for Alzheimer's disease (AD Signature regions), normalized bilateral hippocampal volumes, normalized bilateral whole-brain volumes, and MRI microbleed count (deep vs. lobar). FIGs. 13A - 13B also show that certain of the individuals are administered a drug (methylphenidate) as compared to a placebo. None of the structural MRI measures differentiated the populations (i.e., the amyloid positive (+) vs. the amyloid negative (-) individuals). See also C. Leurent et al., "A randomized, double blind, placebo controlled trial to study difference in cognitive learning associated with repeated self-administration of remote computer tablet-based application assessing dual-task performance based on amyloid status in healthy elderly volunteers," Clinical Trials on Alzheimer's Disease (CTAD), December 9, 2016.
[00312] FIGs. 13A - 13B show example data (including nData) which can be used as part of a training dataset for training a predictive model to generate a scoring output indicative of, e.g., amyloid status or APOE expression levels. In another example, FIGs. 13A - 13B show the type of data that can be used as part of a training dataset for training another predictive model to generate an output indicative of, e.g., a likelihood of an individual experiencing an adverse event in response to administration of an amount, concentration, or dose titration of the drug methylphenidate, or changes to that amount, concentration, or dose titration of the drug methylphenidate. The example data can be used as part of a training dataset for training a predictive model to generate a scoring output indicative of, e.g., amyloid status or APOE status, or other type of output indicative of a likelihood of the individual experiencing an adverse event in response to administration of the drug methylphenidate, or a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the drug methylphenidate. The predictive model could also be trained as a classifier to generate a classification output to classify individuals as to amyloid status. The generated indication of amyloid status (whether a scoring output or classification output) may be used to give some insight into the likelihood of onset or stage of progression of Alzheimer's disease.
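The classifier training described above can be sketched end-to-end with a tiny hand-rolled logistic regression standing in for whatever predictive model the platform actually uses. Everything here is an assumption for illustration: the feature choice (an interference-cost metric plus SUVR), the synthetic values, and the training hyperparameters; none of it is real trial data from FIGs. 13A - 13B.

```python
# Hedged sketch: train a stand-in classifier on performance metrics (cData)
# plus physiological measures (nData), with amyloid status as the label.

import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Stochastic-gradient logistic regression on feature rows X and
    binary labels y; returns learned weights and bias."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(amyloid +)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0.0 else 0

# Synthetic features: [interference cost, SUVR]; label 1 = amyloid (+).
X = [[0.10, 1.1], [0.12, 1.0], [0.30, 1.6], [0.28, 1.7]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # [0, 0, 1, 1]
```

In practice a validated model with proper held-out evaluation would be required before any such classification output could inform a likelihood of onset or stage of progression of Alzheimer's disease.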
[00313] FIGs. 14A - 14B show plots of the results of performance metrics computed based on the response data measured from multiple interactions of the individuals (characteristics/measurements summarized in FIGs. 13A - 13B) from day 0 to day 28, with an example system or apparatus (configured as a cognitive platform). The analyses are adjusted for the covariates. The computed performance metrics act as indicators, to provide an indication of the effect of the multiple interactions with the cognitive training on divided attention, based on computation of a performance metric (as an interference cost) on reaction time on hits. As shown in FIGs. 14A - 14B, the example performance metrics act as indicators, with lower values indicating better performance. The analysis of the data indicates that the amyloid (-) individuals have a measurable (and quantifiably significant) improvement after multiple interactions between day 0 and day 28 with the example cognitive platform, whereas no measurable statistical change is indicated from the measured response data from the amyloid (+) individuals. After multiple interactions with the example cognitive platform, the effect on divided attention as measured by reaction time on hits shows a positive training effect in both populations (p<0.001) in both conditions, with greater reduction in the amyloid (-) population in the performance metric (interference cost) after training (p<0.003). The results indicate that, after multiple interactions with the example cognitive platform, the performance metrics computed based on the individual's interactions provide a distinction in performance metric between individuals having differing amyloid condition, thereby providing a classification of the individuals as to amyloid group.
The amyloid status of an individual potentially may be used to correlate with (or at least provide insight into) a potential likelihood of onset of and/or a stage of progression of Alzheimer's disease (a neurodegenerative condition).
[00314] FIG. 15 shows plots of the results of measures from the Rey Auditory Verbal Learning Test (RAVLT™) (a test for episodic memory), where some difference is shown between populations (i.e., the amyloid positive (+) vs. the amyloid negative (-) individuals) on the RAVLT™ test on Day 0 (p=0.0553) for total recall score. The RAVLT™ test is observed to provide some differentiation between the amyloid (+) and (-) individuals of the population.
[00315] FIG. 16 shows plots of the results of measures from the Test of Variables of Attention (TOVA®) test (a test for sustained attention), where greater performance at Day 28 is observed for both populations (p<0.001) (i.e., the amyloid positive (+) vs. the amyloid negative (-) individuals), where the numerical improvements in standardized score are 9.0 for amyloid (-) and 9.4 for amyloid (+). No performance differences are observed between populations at either time point.
[00316] FIG. 17A shows a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. In block 1702, the at least one processing unit is used to render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task. In block 1704, the at least one processing unit is used to render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task. In block 1706, the at least one processing unit is used to render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference. The interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction. In block 1708, the at least one processing unit is used to render a user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor. In block 1710, the at least one processing unit is used to generate a performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response. In block 1712, the at least one processing unit is used to receive data indicative of one or both of an age or a gender identifier of the individual. 
In block 1714, the at least one processing unit also is used to generate a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the data indicative of (i) at least one of the age or the gender identifier, (ii) the performance score, and (iii) at least one of the first primary response or the first secondary response.
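The flow of FIG. 17A can be sketched in a few lines: a difference-based performance score (block 1710) combined with demographic data and fed to a predictive model (block 1714). This is a hedged illustration only: the score definition (a reaction-time difference), the stand-in linear model, and its weights are all hypothetical, not the patented method's actual computation.

```python
# Illustrative sketch of blocks 1710 and 1714 of FIG. 17A.

def performance_score(rt_no_interference_ms, rt_with_interference_ms):
    """Block 1710 (sketch): difference between the with-interference and
    no-interference primary responses; larger means more slowdown."""
    return rt_with_interference_ms - rt_no_interference_ms

def scoring_output(score_ms, age_years, weights=(0.004, 0.01), bias=-1.0):
    """Block 1714 (sketch): stand-in linear model mapping the performance
    score and age to a clamped 0-1 likelihood-style output. The weights
    and bias are hypothetical placeholders for a trained predictive model."""
    z = weights[0] * score_ms + weights[1] * age_years + bias
    return max(0.0, min(1.0, z))

score = performance_score(450.0, 540.0)   # 90 ms slowdown under interference
print(score)                              # 90.0
print(round(scoring_output(score, 72), 2))
```

A deployed system would replace `scoring_output` with a model actually trained on response data, performance scores, and age/gender identifiers, as the surrounding text describes.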
[00317] FIGs. 17B-1 - 17B-2 show a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. The processing unit is configured to execute a first trial at a first time interval, where the first trial is described in blocks 1722 through 1728 as follows. In block 1722, the at least one processing unit is used to render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task. In block 1724, the at least one processing unit is used to render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task. In block 1726, the at least one processing unit is used to render a user interface to instruct the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor. In block 1728, the at least one processing unit is used to render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference. The interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction. In block 1730, the at least one processing unit is used to generate a first performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response.
In block 1732, the at least one processing unit is used to receive data indicative of one or both of an age or a gender identifier of the individual. In block 1734, the at least one processing unit also is used, based on the first performance score and the data indicative of one or both of an age or a gender identifier of the individual, to adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders at a second difficulty level one or more of a third instance of the primary task or a second interference. The processing unit is configured to execute a second trial at a second time interval that is subsequent to the first time interval, the second trial is described in block 1736 as follows. In block 1736, the at least one processing unit also is used to render at the user interface the third instance of the primary task with the second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference. The second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction. In block 1738, the at least one processing unit also is used to generate a second performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the third primary response to provide an indication of cognitive skills of the individual.
[00318] FIGs. 17C-1 - 17C-2 show a flowchart of a non-limiting example method that can be implemented using a cognitive platform or platform product that includes at least one processing unit. In block 1752, the at least one processing unit is used to render at least one user interface to present a computerized stimuli or interaction (CSI) or other interactive elements to the user, or cause an actuating component of the cognitive platform and/or platform product to effect auditory, tactile, or vibrational computerized elements (including CSIs) to effect the stimulus or other interaction with a user. In block 1754, the at least one processing unit is used to cause a component of the program product to receive data indicative of at least one user response based on the user interaction with the CSI or other interactive element (such as but not limited to cData) at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL). In an example where at least one user interface is rendered to present the computerized stimuli or interaction (CSI) or other interactive elements to the user, the at least one processing unit can be programmed to cause the user interface to receive the data indicative of the at least one user response. In block 1756, the at least one processing unit is used to cause a component of the program product to receive nData indicative of the measurements made using the one or more nData components (including the one or more physiological or monitoring components and/or cognitive testing components) before, during, and/or after the user interacts with the cognitive platform and/or platform product. In an example implementation of the method, block 1754 may be performed in a similar timeframe, or substantially simultaneously, with block 1756.
In another example implementation of the method, block 1754 may be performed at different timepoints than block 1756. In block 1758, the at least one processing unit also is used to: analyze the cData and/or nData to provide a measure of the individual's physiological condition and/or cognitive condition; and/or analyze the differences in the individual's performance based on determining the differences between the user's responses (including based on differences in the cData) and differences in the associated nData; and/or apply an example predictive model (such as but not limited to a classifier model) to the cData and nData; and/or adjust the difficulty level of the task(s) comprising the computerized stimuli or interaction (CSI) or other interactive elements based on the analysis of the cData and/or nData (including the measures of the individual's performance and/or physiological condition determined in the analysis); and/or provide an output or other feedback from the cognitive platform and/or platform product that can be indicative of the individual's performance, and/or cognitive assessment, and/or response to cognitive treatment, and/or assessed measures of cognition; and/or classify an individual as to amyloid group, and/or APOE expression group (or other expression group based on the expression level of other protein(s) that can be of clinical interest in a neurodegenerative condition), and/or potential efficacy of use of the cognitive platform and/or platform product when the individual is administered a drug, biologic or other pharmaceutical agent; and/or generate computation results indicative of the classification of an individual according to likelihood of amyloid burden (status), and/or likelihood of onset of a neuropsychological condition and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder; and/or classify an individual as to likelihood of
onset and/or stage of progression of a neuropsychological condition, including as to a neurodegenerative condition and/or an executive function disorder; and/or determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or determine an optimal type or combination of drug, biologic or other pharmaceutical agent for the individual. For any one or more of the analyses and classifications described in block 1758, the at least one processing unit can be programmed to execute processor-executable instructions to apply a predictive model (such as but not limited to a classifier model) to the nData and cData collected at, and/or two or more separate times during the time period leading up to, the initial timepoints (T1 and/or Ti) and at, and/or two or more separate times during the time period leading up to, the later timepoints (T2, and/or T3, and/or TL), including to compare the cData at the initial timepoints and later timepoints, to perform the classifications.
[00319] Any classification of an individual as to likelihood of onset and/or stage of progression of a neurodegenerative condition in block 1758 can be transmitted as a signal to a medical device, healthcare computing system, or other device, and/or to a medical practitioner, a health practitioner, a physical therapist, a behavioral therapist, a sports medicine practitioner, a pharmacist, or other practitioner, to allow formulation of a course of treatment for the individual or to modify an existing course of treatment, including to determine a change in dosage of a drug, biologic or other pharmaceutical agent to the individual or to determine an optimal type or combination of drug, biologic or other pharmaceutical agent for the individual.
[00320] In some examples, the results of the analysis may be used to modify the difficulty level or other property of the computerized stimuli or interaction (CSI) or other interactive elements.
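As one hedged, non-limiting illustration of such difficulty modification, a staircase-style update rule can nudge the difficulty level up when measured performance exceeds a target and down when it falls below. The target accuracy, step size, and bounds below are assumptions chosen for illustration, not values specified in this disclosure.

```python
def adjust_difficulty(level, performance_score, target=0.8,
                      step=0.05, lo=0.0, hi=1.0):
    """Staircase-style update (illustrative): raise the difficulty level
    when the individual's measured performance is above the target,
    lower it when below, and clamp the result to the allowed range."""
    if performance_score > target:
        level += step
    elif performance_score < target:
        level -= step
    return max(lo, min(hi, level))
```

In this sketch, the returned level could parameterize the next rendering of the CSI or other interactive elements, for example the speed of a tracking target or the frequency of distractions.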
[00321] FIG. 18 is a block diagram of an example computing device 1810 that can be used as a computing component according to the principles herein. In any example herein, computing device 1810 can be configured as a console that receives user input to implement the computing component, including to apply the signal detection metrics in computer-implemented adaptive response-deadline procedures. For clarity, FIG. 18 also refers back to and provides greater detail regarding various elements of the example system of FIG. 5. The computing device 1810 can include one or more non-transitory computer-readable media for storing one or more computer-executable instructions or software for implementing examples. The non-transitory computer-readable media can include, but are not limited to, one or more types of hardware memory, non-transitory tangible media (for example, one or more magnetic storage disks, one or more optical disks, one or more flash drives), and the like. For example, memory 502 included in the
computing device 1810 can store computer-readable and computer-executable instructions or software for performing the operations disclosed herein. For example, the memory 502 can store a software application 1840 which is configured to perform various of the disclosed operations (e.g., analyze cognitive platform and/or platform product measurement data and response data, apply an example classifier model, or perform a computation). The computing device 1810 also includes configurable and/or programmable processor 504 and an associated core 1814, and optionally, one or more additional configurable and/or programmable processing devices, e.g., processor(s) 1812' and associated core(s) 1814' (for example, in the case of computational devices having multiple processors/cores), for executing computer-readable and computer-executable instructions or software stored in the memory 502 and other programs for controlling system hardware. Processor 504 and processor(s) 1812' can each be a single core processor or multiple core (1814 and 1814') processor.
[00322] Virtualization can be employed in the computing device 1810 so that infrastructure and resources in the console can be shared dynamically. A virtual machine 1824 can be provided to handle a process running on multiple processors so that the process appears to be using only one computing resource rather than multiple computing resources. Multiple virtual machines can also be used with one processor.
[00323] Memory 502 can include a computational device memory or random access memory, such as but not limited to DRAM, SRAM, EDO RAM, and the like. Memory 502 can include a non-volatile memory, such as but not limited to a hard-disk or flash memory. Memory 502 can include other types of memory as well, or combinations thereof.
[00324] In a non-limiting example, the memory 502 and at least one processing unit 504 can be components of a peripheral device, such as but not limited to a dongle (including an adapter) or other peripheral hardware. The example peripheral device can be programmed to communicate with or otherwise couple to a primary computing device, to provide the functionality of any of the example cognitive platform and/or platform product, apply an example classifier model, and implement any of the example analyses (including the associated computations) described herein. In some examples, the peripheral device can be programmed to directly communicate with or otherwise couple to the primary computing device (such as but not limited to via a USB or HDMI input), or indirectly via a cable (including a coaxial cable), copper wire (including, but not limited to, PSTN, ISDN, and DSL), optical fiber, or other connector or adapter. In another example, the peripheral device can be programmed to communicate wirelessly (such as but not limited to Wi-Fi or Bluetooth®) with the primary computing device. The example primary computing device can be a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an Android™-based smartphone), a television, a workstation, a desktop computer, a laptop, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other equivalent form of computing device.
[00325] A user can interact with the computing device 1810 through a visual display unit 1828, such as a computer monitor, which can display one or more user interfaces 1830 that can be provided in accordance with example systems and methods. The computing device 1810 can include other I/O devices for receiving input from a user, for example, a keyboard or any suitable multi-point touch interface 1818, a pointing device 1820 (e.g., a mouse), a camera or other image recording device, a microphone or other sound recording device, an accelerometer, a gyroscope, a sensor for tactile, vibrational, or auditory signal, and/or at least one actuator. The keyboard 1818 and the pointing device 1820 can be coupled to the visual display unit 1828. The computing device 1810 can include other suitable conventional I/O peripherals.
[00326] The computing device 1810 can also include one or more storage devices 1838 (including a single core processor or multiple core processor 1836), such as a hard-drive, CD-ROM, or other computer readable media, for storing data and computer-readable instructions and/or software that perform operations disclosed herein. Example storage device 1838 (including a single core processor or multiple core processor 1836) can also store one or more databases for storing any suitable information required to implement example systems and methods. The databases can be updated manually or automatically at any suitable time to add, delete, and/or update one or more items in the databases.
[00327] The computing device 1810 can include a network interface 1822 configured to interface via one or more network devices 1832 with one or more networks, for example, Local Area Network (LAN), metropolitan area network (MAN), Wide Area Network (WAN) or the Internet through a variety of connections including, but not limited to, standard telephone lines, LAN or WAN links (for example, 802.11, T1, T3, 56kb, X.25), broadband connections (for example, ISDN, Frame Relay, ATM), wireless connections, controller area network (CAN), or some combination of any or all of the above. The network interface 1822 can include a built-in network adapter, network interface card, PCMCIA network card, card bus network adapter, wireless network adapter, USB network adapter, modem or any other device suitable for interfacing the computing device 1810 to any type of network capable of communication and performing the operations described herein. Moreover, the computing device 1810 can be any computational device, such as a smartphone (such as but not limited to an iPhone®, a BlackBerry®, or an Android™-based smartphone), a television, a workstation, a desktop computer, a server, a laptop, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, or other electronic reader or hand-held, portable, or wearable computing device, or any other equivalent device, an Xbox®, a Wii®, or other equivalent form of gaming console, or other equivalent form of computing or telecommunications device that is capable of communication and that has or can be coupled to sufficient processor power and memory capacity to perform the operations described herein. The one or more network devices 1832 may communicate using different types of protocols, such as but not limited to WAP (Wireless Application Protocol), TCP/IP (Transmission
Control Protocol/Internet Protocol), NetBEUI (NetBIOS Extended User Interface), or IPX/SPX (Internetwork Packet Exchange/Sequenced Packet Exchange).
[00328] The computing device 1810 can run any operating system 1826, such as any of the versions of the Microsoft® Windows® operating systems, iOS® operating system, Android™ operating system, the different releases of the Unix and Linux operating systems, any version of the MacOS® for Macintosh computers, any embedded operating system, any real-time operating system, any open source operating system, any proprietary operating system, or any other operating system capable of running on the console and performing the operations described herein. In some examples, the operating system 1826 can be run in native mode or emulated mode. In an example, the operating system 1826 can be run on one or more cloud machine instances.

[00329] Conclusion
[00330] The above-described embodiments can be implemented in any of numerous ways. For example, some embodiments may be implemented using hardware, software or a combination thereof. When any aspect of an embodiment is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
[00331] In this respect, various aspects of the invention may be embodied at least in part as a computer readable storage medium (or multiple computer readable storage media) (e.g., a computer memory, compact disks, optical disks, magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium or non-transitory medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the technology discussed above. The computer readable medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present technology as discussed above.
[00332] The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present technology as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present technology need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present technology.
[00333] Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.

[00334] Also, the technology described herein may be embodied as a method, of which at least one example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
[00335] All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms.
[00336] The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
[00337] The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[00338] As used herein in the specification and in the claims, "or" should be understood to have the same meaning as "and/or" as defined above. For example, when separating items in a list, "or" or "and/or" shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as "only one of" or "exactly one of," or, when used in the claims, "consisting of," will refer to the inclusion of exactly one element of a number or list of elements. In general, the term "or" as used herein shall only be interpreted as indicating exclusive alternatives (i.e. "one or the other but not both") when preceded by terms of exclusivity, such as "either," "one of," "only one of," or "exactly one of." "Consisting essentially of," when used in the claims, shall have its ordinary meaning as used in the field of patent law.
[00339] As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
[00340] In the claims, as well as in the specification above, all transitional phrases such as "comprising," "including," "carrying," "having," "containing," "involving," "holding," "composed of," and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases "consisting of" and "consisting essentially of" shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03.

Claims

WHAT IS CLAIMED IS:
1. An apparatus for generating an assessment of one or more cognitive skills in an individual as an indication of a neuropsychological deficit or disorder of the individual, said apparatus comprising:
a user interface;
a memory to store processor-executable instructions; and
a processing unit communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the processing unit, the processing unit is configured to:
render a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task;
render a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task;
render at the user interface a second instance of the primary task with an interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the interference; wherein the interference is configured to divert the individual's attention from the second instance of the primary task and is configured as a second instance of the secondary task that is rendered as an interruptor or a distraction;
instruct, using the user interface, the individual not to respond to the interference that is configured as a distraction and to respond to the interference that is configured as an interruptor;

generate a performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response;

receive data indicative of one or both of an age or a gender identifier of the individual; and

generate a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the data indicative of (i) at least one of the age or the gender identifier, (ii) the performance score, and (iii) at least one of the first primary response or the first secondary response.
2. The apparatus of claim 1, wherein the processing unit is further configured to:
render at the user interface a third instance of the primary task with a second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference;

wherein the second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction; and

wherein a difficulty level of one or both of the third instance of the primary task or the second interference is determined based on the performance score; and
generate a performance change score based on the differences in the individual's performance at least in part by determining differences between the data indicative of the first primary response and the third primary response to provide an indication of cognitive skills of the individual.
3. An apparatus for enhancing one or more cognitive skills in an individual, said apparatus comprising:
a user interface;
a memory to store processor-executable instructions; and
a processing unit communicatively coupled to the user interface and the memory, wherein upon execution of the processor-executable instructions by the processing unit, the processing unit is configured to:
execute a first trial at a first time interval, the first trial comprising:
rendering a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task;

rendering a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task;
instructing, using the user interface, the individual not to respond to an interference with the primary task that is configured as a distraction and to respond to an interference with the primary task that is configured as an interruptor; and
rendering at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference;
wherein the first interference is configured to divert the individual's attention from the second instance of the primary task and is rendered as an interruptor or a distraction;
generate a first performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the second primary response;
receive data indicative of one or both of an age or a gender identifier of the individual;
based on the first performance score and the data indicative of one or both of an age or a gender identifier of the individual, adjust a difficulty of one or both of the primary task or the interference such that the apparatus renders at a second difficulty level one or more of a third instance of the primary task or a second interference;

execute a second trial at a second time interval that is subsequent to the first time interval, the second trial comprising:
rendering at the user interface the third instance of the primary task with the second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference;
wherein the second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction; and
generate a second performance score based on the differences in the individual's performance from performing the primary task without interference and with interference at least in part by determining differences between the data indicative of the first primary response and the third primary response to provide an indication of cognitive skills of the individual.
4. The apparatus of claim 1 or 3, wherein:
prior to rendering the first instance of the primary task at the user interface, the processing unit is configured to receive data indicative of one or more of an amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic being or to be administered to an individual; and
based at least in part on the scoring output, generate an output to the user interface indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to a change in one or more of a recommended amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
5. The apparatus of claim 4, wherein the processing unit is configured to generate an output to the user interface indicative of a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, or a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic; and
wherein the biologic, drug or other pharmaceutical agent comprises one or more of methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, or crenezumab.
6. The apparatus of claim 1 or 3, wherein the apparatus comprises one or more sensor components, and wherein the processing unit is configured to control the one or more sensor components to measure the data indicative of one or both of the first primary response and the second primary response.
7. The apparatus of claim 6, wherein the one or more sensor components comprises one or more of a gyroscope, an accelerometer, a motion sensor, a position sensor, a pressure sensor, an optical sensor, a video camera, an auditory sensor, or a vibrational sensor.
8. The apparatus of claim 1 or 3, wherein the processing unit is configured to render the primary task as a continuous visuo-motor tracking task, and wherein the first instance of the primary task is a discrete time interval of the continuous visuo-motor task.
9. The apparatus of claim 1 or 3, wherein the processing unit is configured to control the user interface to render one or both of the interferences as a target discrimination interference.
10. A system comprising an apparatus of claim 1 or 3, wherein the system is configured as at least one of a virtual reality system, an augmented reality system, or a mixed reality system.
11. A system comprising an apparatus of claim 1 or 3, wherein the apparatus is configured as at least one of a smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, a portable computing device, a wearable computing device, or a gaming device.
12. The apparatus of claim 1 or 3, wherein the performance score is computed as an interference cost.
13. The apparatus of claim 1, wherein the performance score is computed based on data collected during about a first 5 seconds, about a first 10 seconds, about a first 20 seconds, about a first 30 seconds, about a first 45 seconds, about a first minute, about a first 1.5 minutes, about a first 3 minutes, about a first 5 minutes, about a first 7.5 minutes, or about a first 10 minutes of at least one of the first primary response or the second primary response.
14. The apparatus of claim 1, wherein the predictive model is trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset comprising: (i) data representing at least one of the performance score, age, or gender identifier of the classified individual and (ii) data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
15. The apparatus of claim 1, wherein the predictive model is computed at least in part as a function of variables related to (i) the age and/or gender identifier of the individual, and (ii) the performance score.
16. The apparatus of claim 3, wherein the first time interval and the second time interval are separated by about 5 minutes, about 7 minutes, about 15 minutes, about 1 hour, about 12 hours, about 1 day, about 5 days, about 10 days, about 15 days, about 20 days, about 28 days, about a month, or greater than a month.
17. The apparatus of claim 3, wherein at least one of the first time interval or the second time interval is about 3 minutes, about 5 minutes, about 7.5 minutes, about 10 minutes, about 15 minutes, about 30 minutes, or about 1 hour.
18. The apparatus of claim 3, wherein the first performance score is computed based on data collected during about a first 5 seconds, about a first 10 seconds, about a first 20 seconds, about a first 30 seconds, about a first 45 seconds, about a first minute, about a first 1.5 minutes, about a first 3 minutes, about a first 5 minutes, about a first 7.5 minutes, or about a first 10 minutes of at least one of the first primary response or the second primary response.
19. The apparatus of claim 3, wherein the predictive model is computed at least in part as a function of variables related to (i) the age and/or gender identifier of the individual, (ii) the first performance score, and (iii) the second performance score.
20. The apparatus of claim 3, wherein the processing unit is configured to generate a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the data indicative of the first performance score, the second performance score, and one or both of the age or the gender identifier.
21. The apparatus of claim 1 or 20, wherein the predictive model comprises one or more of a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, or an artificial neural network.
22. The apparatus of claim 1 or 20, wherein the neuropsychological deficit or disorder is one or more of dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, Huntington's disease, autism spectrum disorder (ASD), presence of the 16p11.2 duplication, attention deficit hyperactivity disorder (ADHD), sensory-processing disorder (SPD), mild cognitive impairment (MCI), Alzheimer's disease, multiple sclerosis, schizophrenia, depression, or anxiety.
23. The apparatus of claim 1 or 20, wherein the predictive model is trained to classify the individual as to a level of expression of one or more of an amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, a tau protein, or an apolipoprotein E.
24. The apparatus of claim 1 or 20, wherein the processing unit is configured to transmit the scoring output to the user and/or display the scoring output on the user interface.
25. The apparatus of claim 1 or 20, wherein the predictive model serves as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
26. A system comprising one or more physiological components and an apparatus of any one of claim 1 or 20, wherein upon execution of the processor-executable instructions by the processing unit, the processing unit is configured to:

receive data indicative of one or more measurements of the physiological component; and
analyze the data indicative of the first primary response and the second primary response, and the data indicative of the one or more measurements of the physiological component to compute the scoring output.
27. A computer-implemented method for enhancing one or more cognitive skills in an individual, said method comprising:
executing, using a processing unit communicatively coupled to a user interface and a memory, a first trial at a first time interval, the first trial comprising:
rendering a first instance of a primary task at the user interface, requiring a first primary response from the individual to the first instance of the primary task;
rendering a first instance of a secondary task at the user interface, requiring a first secondary response from the individual to the first instance of the secondary task;
rendering at the user interface a second instance of the primary task with a first interference, requiring a second primary response from the individual to the second instance of the primary task in the presence of the first interference;
wherein the first interference is configured to divert the individual's attention from the second instance of the primary task and is rendered as an interruptor or a distraction; and
wherein the individual is instructed not to respond to the first interference that is configured as a distraction and to respond to the first interference that is configured as an interruptor;
executing, using the processing unit, at least one second trial at a second time interval that is subsequent to the first time interval, the second trial comprising:

rendering at the user interface a third instance of the primary task with a second interference, requiring a third primary response from the individual to the third instance of the primary task in the presence of the second interference;
wherein the second interference is configured to divert the individual's attention from the third instance of the primary task and is rendered as the interruptor or the distraction; and
wherein the individual is instructed not to respond to the second interference that is configured as the distraction and to respond to the second interference that is configured as the interruptor;
generating, using the processing unit, a performance score based at least in part on the data indicative of the first primary response, the second primary response, and the third primary response;
receiving data indicative of one or both of an age or a gender identifier of the individual; and
generating, using the processing unit, a scoring output indicative of a likelihood of onset of a neuropsychological deficit or disorder and/or a stage of progression of the neuropsychological deficit or disorder based at least in part on applying a predictive model to the performance score and one or both of the data indicative of the age or the gender identifier.
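Claim 27's performance score aggregates the primary responses collected across both trials. One simple way to combine response accuracy and speed into a single number is sketched below; the combination rule, function name, and response encoding are illustrative assumptions, as the claim does not fix a formula:

```python
def performance_score(responses):
    """Aggregate primary responses into a single score.

    responses: list of (correct: bool, reaction_time_s: float) tuples,
    one per primary response across the first and second trials.
    Higher is better: rewards accuracy, penalizes slow responding.
    """
    accuracy = sum(1 for correct, _ in responses if correct) / len(responses)
    mean_rt = sum(rt for _, rt in responses) / len(responses)
    return accuracy / mean_rt

# Three primary responses (first, second, third), all correct at 0.5 s each
score = performance_score([(True, 0.5), (True, 0.5), (True, 0.5)])  # -> 2.0
```

Any monotone combination of accuracy and reaction time would serve the same role here; the point is only that the three primary responses reduce to one scalar fed to the predictive model.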
28. The method of claim 27, wherein the predictive model is trained using a plurality of training datasets, each training dataset corresponding to a previously classified individual of a plurality of individuals, and each training dataset comprising: (i) data representing at least one of the performance score, age, or gender identifier of the classified individual and (ii) data indicative of a diagnosis of a status or progression of the neurodegenerative condition in the classified individual.
29. The method of claim 27, wherein the predictive model comprises one or more of a linear/logistic regression, principal component analysis, a generalized linear mixed model, a random decision forest, a support vector machine, or an artificial neural network.
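As an illustrative sketch of the simplest model family listed in claim 29, the following pure-Python logistic regression maps a (performance score, age, gender) feature vector to a likelihood of onset. The training data, feature encoding, learning rate, and function names are hypothetical assumptions for demonstration, not the claimed implementation:

```python
import math

def sigmoid(z):
    # Numerically safe logistic function (avoids overflow for large |z|).
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def train_logistic(features, labels, lr=0.1, epochs=3000):
    """Fit a logistic-regression predictive model by stochastic gradient descent.

    features: rows of [performance_score, age_decades, gender_flag].
    labels: 1 if the previously classified individual was diagnosed, else 0.
    """
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            err = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def likelihood_of_onset(w, b, x):
    # Scoring output: probability-like value in (0, 1).
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical training datasets in the shape of claim 28: performance
# score, age in decades, binary gender identifier, and a diagnosis label.
X = [[0.05, 5.5, 0], [0.08, 6.0, 1], [0.30, 7.0, 0], [0.35, 7.5, 1]]
y = [0, 0, 1, 1]
w, b = train_logistic(X, y)
```

The other model families in claim 29 (random forests, SVMs, neural networks) would slot into the same train/score interface; only the fitting procedure changes.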
30. The method of claim 27, wherein the predictive model is computed at least in part as a function of variables related to (i) the age and/or gender identifier of the individual, and (ii) the performance score.
31. The method of claim 27, wherein:
prior to rendering the first instance of the primary task at the user interface, the processing unit is configured to receive data indicative of one or more of an amount, concentration, or dose titration of a pharmaceutical agent, drug, or biologic being or to be administered to an individual; and
based at least in part on the scoring output, generate an output to the user interface indicative of one or more of: (i) a likelihood of the individual experiencing an adverse event in response to a change of one or more of a recommended amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic, (ii) a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, (iii) a change in the individual's cognitive capabilities, (iv) a recommended treatment regimen, (v) a recommendation of at least one of a behavioral therapy, counseling, or physical exercise, or (vi) a determination of a degree of effectiveness of at least one of a behavioral therapy, counseling, or physical exercise.
32. The method of claim 27, wherein the processing unit is configured to generate an output to the user interface indicative of a likelihood of the individual experiencing an adverse event in response to administration of the pharmaceutical agent, drug, or biologic, or a likelihood of the individual experiencing an adverse event in response to a change in one or more of the amount, concentration, or dose titration of the pharmaceutical agent, drug, or biologic; and
wherein the biologic, drug or other pharmaceutical agent comprises one or more of methylphenidate (MPH), scopolamine, donepezil hydrochloride, rivastigmine tartrate, memantine HCl, solanezumab, aducanumab, or crenezumab.
33. The method of claim 27, wherein the neuropsychological deficit or disorder is one or more of dementia, Parkinson's disease, cerebral amyloid angiopathy, familial amyloid neuropathy, Huntington's disease, autism spectrum disorder (ASD), presence of the 16p11.2 duplication, attention deficit hyperactivity disorder (ADHD), sensory-processing disorder (SPD), mild cognitive impairment (MCI), Alzheimer's disease, multiple sclerosis, schizophrenia, depression, or anxiety.
34. The method of claim 27, wherein the predictive model is trained to classify the individual as to a level of expression of one or more of an amyloid, a cystatin, an alpha-synuclein, a huntingtin protein, a tau protein, or an apolipoprotein E.
35. The method of claim 27, wherein the method is performed using an apparatus comprising one or more sensor components, and wherein the processing unit is configured to control the one or more sensor components to measure the data indicative of one or both of the first primary response and the second primary response.
36. The method of claim 35, wherein the one or more sensor components comprises one or more of a gyroscope, an accelerometer, a motion sensor, a position sensor, a pressure sensor, an optical sensor, a video camera, an auditory sensor, or a vibrational sensor.
37. The method of claim 27, wherein the processing unit is configured to transmit the scoring output to the user and/or display the scoring output on the user interface.
38. The method of claim 27, wherein the predictive model serves as an intelligent proxy for subsequent measures of the neurodegenerative condition of the individual.
39. The method of claim 27, wherein the processing unit is configured to render the primary task as a continuous visuo-motor tracking task, and wherein the first instance of the primary task is a first time interval of the continuous visuo-motor tracking task.
40. The method of claim 27, wherein the processing unit is configured to control the user interface to render one or both of the first interference and the second interference as a target discrimination interference.
41. The method of claim 27, wherein the first time interval and the second time interval are separated by about 5 minutes, about 7 minutes, about 15 minutes, about 1 hour, about 12 hours, about 1 day, about 5 days, about 10 days, about 15 days, about 20 days, about 28 days, about a month, or greater than a month.
42. The method of claim 27, wherein at least one of the first time interval or the second time interval is about 3 minutes, about 5 minutes, about 7.5 minutes, about 10 minutes, about 15 minutes, about 30 minutes, or about 1 hour.
43. The method of claim 27, wherein the performance score is computed based on data collected during about a first 5 seconds, about a first 10 seconds, about a first 20 seconds, about a first 30 seconds, about a first 45 seconds, about a first minute, about a first 1.5 minutes, about a first 3 minutes, about a first 5 minutes, about a first 7.5 minutes, or about a first 10 minutes of at least one of the first primary response or the second primary response.
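Claim 43 scores only an initial window of each response stream. A minimal sketch of that windowing follows; the timestamped-sample encoding and function name are assumptions for illustration:

```python
def windowed_score(samples, window_s=30.0):
    """Score using only samples from the first `window_s` seconds.

    samples: (timestamp_s, correct) tuples from one primary-response stream,
    with timestamps measured from the start of the response period.
    Returns the fraction of correct responses inside the window.
    """
    window = [int(correct) for t, correct in samples if t <= window_s]
    return sum(window) / len(window) if window else 0.0

# Only the first two samples fall inside a 30-second window
score = windowed_score([(5.0, True), (12.0, True), (41.0, False)])  # -> 1.0
```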
44. A system comprising a processing unit configured to execute the method of claim 27, wherein the system is configured as at least one of a virtual reality system, an augmented reality system, or a mixed reality system.
45. A system comprising a processing unit configured to execute the method of claim 27, wherein the system comprises at least one of a smartphone, a tablet, a slate, an electronic-reader (e-reader), a digital assistant, a portable computing device, a wearable computing device, or a gaming device.
46. A system comprising one or more physiological components and a processing unit configured to execute the method of claim 27, wherein the processing unit is configured to:
receive data indicative of one or more measurements of the physiological component; and
analyze the data indicative of the first primary response and the second primary response, and the data indicative of the one or more measurements of the physiological component to compute the scoring output.
47. The method of claim 27, wherein the performance score is computed as an interference cost.
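An interference cost, as recited in claim 47, is commonly computed as the relative drop in primary-task performance when interference is present versus absent. A minimal Python sketch follows; the function name and the specific normalization are illustrative assumptions, since the claim does not fix a formula:

```python
def interference_cost(single_task_scores, interference_scores):
    """Relative performance drop attributable to interference.

    single_task_scores: per-trial scores on the primary task alone.
    interference_scores: per-trial scores on the primary task with interference.
    Returns 0.0 for no drop; positive values indicate a cost.
    """
    single = sum(single_task_scores) / len(single_task_scores)
    dual = sum(interference_scores) / len(interference_scores)
    return (single - dual) / single

# 90% mean accuracy alone vs. 72% mean accuracy under interference -> cost of 0.2
cost = interference_cost([0.90, 0.90, 0.90], [0.70, 0.74, 0.72])
```

Under this normalization a larger cost indicates poorer interference management, which is the behavioral signal the performance score of claim 27 would carry into the predictive model.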
PCT/US2018/013182 2017-01-10 2018-01-10 Cognitive platform configured for determining the presence or likelihood of onset of a neuropsychological deficit or disorder WO2018132483A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762444791P 2017-01-10 2017-01-10
US62/444,791 2017-01-10

Publications (1)

Publication Number Publication Date
WO2018132483A1 true WO2018132483A1 (en) 2018-07-19

Family

ID=62839989

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/013182 WO2018132483A1 (en) 2017-01-10 2018-01-10 Cognitive platform configured for determining the presence or likelihood of onset of a neuropsychological deficit or disorder

Country Status (1)

Country Link
WO (1) WO2018132483A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110399853A (en) * 2019-07-30 2019-11-01 苏州智乐康医疗科技有限公司 Autism information processing system based on expression data and deep convolutional neural networks
US10748644B2 (en) 2018-06-19 2020-08-18 Ellipsis Health, Inc. Systems and methods for mental health assessment
WO2020191292A1 (en) * 2019-03-20 2020-09-24 University Of Washington Systems and methods for measurement of retinal acuity
WO2021063935A1 (en) 2019-09-30 2021-04-08 F. Hoffmann-La Roche Ag Prediction of disease status
US20210142199A1 (en) * 2019-11-12 2021-05-13 Optum Services (Ireland) Limited Predictive data analysis with cross-temporal probabilistic updates
US11120895B2 (en) 2018-06-19 2021-09-14 Ellipsis Health, Inc. Systems and methods for mental health assessment
CN113730750A (en) * 2020-05-29 2021-12-03 德尔格制造股份两合公司 Connecting assembly having a volume flow sensor and a homogenization unit for artificial respiration of a patient, and method for producing the same
US20220139505A1 (en) * 2018-10-02 2022-05-05 Origent Data Sciences, Inc. Systems and methods for designing clinical trials
EP4000521A1 (en) * 2020-11-13 2022-05-25 University Of Fribourg Technique for controlling a human machine interface
US11386712B2 (en) 2019-12-31 2022-07-12 Wipro Limited Method and system for multimodal analysis based emotion recognition
WO2023240089A1 (en) * 2022-06-07 2023-12-14 The Board Of Regents Of The University Of Texas System Neurological condition characterization and diagnosis systems, devices, and methods
CN117243569A (en) * 2023-10-12 2023-12-19 国家康复辅具研究中心 Cognitive function assessment method and system based on multi-source information fusion
EP4322177A1 (en) * 2022-08-12 2024-02-14 Universität Zürich Method for determining a fatigue value of a person

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160078780A1 (en) * 2014-09-15 2016-03-17 The Arizona Board Of Regents For And On Behalf Of The University Of Arizona Method and System for Aerobic and Cognitive Training
US20160262680A1 (en) * 2015-03-12 2016-09-15 Akili Interactive Labs, Inc. Processor Implemented Systems and Methods for Measuring Cognitive Abilities


Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10748644B2 (en) 2018-06-19 2020-08-18 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11120895B2 (en) 2018-06-19 2021-09-14 Ellipsis Health, Inc. Systems and methods for mental health assessment
US11942194B2 (en) 2018-06-19 2024-03-26 Ellipsis Health, Inc. Systems and methods for mental health assessment
US12002553B2 (en) * 2018-10-02 2024-06-04 Origent Data Sciences, Inc. Systems and methods for designing clinical trials
US20220139505A1 (en) * 2018-10-02 2022-05-05 Origent Data Sciences, Inc. Systems and methods for designing clinical trials
WO2020191292A1 (en) * 2019-03-20 2020-09-24 University Of Washington Systems and methods for measurement of retinal acuity
CN110399853A (en) * 2019-07-30 2019-11-01 苏州智乐康医疗科技有限公司 Autism information processing system based on expression data and deep convolutional neural networks
WO2021063935A1 (en) 2019-09-30 2021-04-08 F. Hoffmann-La Roche Ag Prediction of disease status
US11645565B2 (en) * 2019-11-12 2023-05-09 Optum Services (Ireland) Limited Predictive data analysis with cross-temporal probabilistic updates
US20210142199A1 (en) * 2019-11-12 2021-05-13 Optum Services (Ireland) Limited Predictive data analysis with cross-temporal probabilistic updates
US11386712B2 (en) 2019-12-31 2022-07-12 Wipro Limited Method and system for multimodal analysis based emotion recognition
CN113730750A (en) * 2020-05-29 2021-12-03 德尔格制造股份两合公司 Connecting assembly having a volume flow sensor and a homogenization unit for artificial respiration of a patient, and method for producing the same
CN113730750B (en) * 2020-05-29 2024-05-17 德尔格制造股份两合公司 Connection assembly with a volume flow sensor and a homogenization unit for artificial respiration of a patient and method for producing the same
EP4000521A1 (en) * 2020-11-13 2022-05-25 University Of Fribourg Technique for controlling a human machine interface
US11837107B2 (en) 2020-11-13 2023-12-05 Université De Fribourg Technique for controlling a human machine interface
WO2023240089A1 (en) * 2022-06-07 2023-12-14 The Board Of Regents Of The University Of Texas System Neurological condition characterization and diagnosis systems, devices, and methods
EP4322177A1 (en) * 2022-08-12 2024-02-14 Universität Zürich Method for determining a fatigue value of a person
WO2024033541A1 (en) * 2022-08-12 2024-02-15 Universität Zürich Method for determining a fatigue value of a person
CN117243569A (en) * 2023-10-12 2023-12-19 国家康复辅具研究中心 Cognitive function assessment method and system based on multi-source information fusion
CN117243569B (en) * 2023-10-12 2024-05-07 国家康复辅具研究中心 Cognitive function assessment method and system based on multi-source information fusion

Similar Documents

Publication Publication Date Title
US20240000370A1 (en) Cognitive platform configured as a biomarker or other type of marker
US12016700B2 (en) Cognitive platform coupled with a physiological component
KR102369850B1 (en) Cognitive platform including computerized associative elements
WO2018132483A1 (en) Cognitive platform configured for determining the presence or likelihood of onset of a neuropsychological deficit or disorder
US11846964B2 (en) Cognitive platform including computerized elements
WO2019161050A1 (en) Cognitive platform including computerized elements coupled with a therapy for mood disorder
US20200380882A1 (en) Cognitive platform including computerized evocative elements in modes
WO2019173189A1 (en) Cognitive screens, monitor and cognitive treatments targeting immune-mediated and neuro-degenerative disorders

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18739036

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18739036

Country of ref document: EP

Kind code of ref document: A1