
Systems and methods for augmented neurologic rehabilitation

Info

Publication number
EP4185192A1
Authority
EP
European Patent Office
Prior art keywords
patient
beat
data
time
tempo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21845844.6A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP4185192A4 (en)
Inventor
Owen MCCARTHY
Brian Harris
Alex Kalpaxis
Jeffrey Chu
Brian Bousquet-Smith
Eric Richardson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Medrhythms Inc
Original Assignee
Medrhythms Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Medrhythms Inc filed Critical Medrhythms Inc
Publication of EP4185192A1 publication Critical patent/EP4185192A1/en
Publication of EP4185192A4 publication Critical patent/EP4185192A4/en
Pending legal-status Critical Current

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/02Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B5/0205Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/02055Simultaneously evaluating both cardiovascular condition and temperature
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/08Detecting, measuring or recording devices for evaluating the respiratory organs
    • A61B5/0816Measuring devices for examining respiratory frequency
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/1036Measuring load distribution, e.g. podologic studies
    • A61B5/1038Measuring plantar pressure during gait
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/112Gait analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/16Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
    • A61B5/162Testing reaction times
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4836Diagnosis combined with treatment in closed-loop systems or methods
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802Sensor mounted on worn items
    • A61B5/6803Head-worn items, e.g. helmets, masks, headphones or goggles
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235Details of waveform analysis
    • A61B5/7264Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/7455Details of notification to user or communication with user or patient ; user input means characterised by tactile indication, e.g. vibration or electrical stimulation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4803Speech analysis specially adapted for diagnostic purposes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/486Bio-feedback
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813Specially adapted to be attached to a specific body part
    • A61B5/6824Arm or wrist
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/68Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813Specially adapted to be attached to a specific body part
    • A61B5/6829Foot or ankle
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/72Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7225Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/7405Details of notification to user or communication with user or patient ; user input means using sound
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/74Details of notification to user or communication with user or patient ; user input means
    • A61B5/742Details of notification to user or communication with user or patient ; user input means using visual displays
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/003Repetitive work cycles; Sequence of movements

Definitions

  • Provisional Patent Application No. 63/054,599 titled “Systems and Methods for Augmented Neurologic Rehabilitation,” to McCarthy et al., filed July 21, 2020, and further is a continuation-in-part of U.S. Patent Application No. 16/569,388 for “Systems and Methods for Neurologic Rehabilitation,” to McCarthy et al., which is a continuation of U.S. Patent No. 10,448,888, titled “Systems and Methods for Neurologic Rehabilitation,” issued October 22, 2019, which is based on and claims priority to U.S. Provisional Patent Application No. 62/322,504 filed on April 14, 2016, entitled “Systems and Methods for Neurologic Rehabilitation,” which are each hereby incorporated by reference as if set forth in their respective entireties herein.
  • the present disclosure relates generally to systems and methods for rehabilitation of a user having a physical impairment by providing music therapy.
  • a system for augmented neurologic rehabilitation of a patient comprises a computing system having a processor configured by software modules comprising machine-readable instructions stored in a non-transitory storage medium.
  • the software modules include an AA/AR modelling module that, when executed by the processor, configures the processor to generate an augmented-reality (AR) visual content and rhythmic auditory stimulus (RAS) for output to a patient during a therapy session.
  • the RAS comprises beat signals output at a beat tempo and the AR visual content includes visual elements moving in a prescribed spatial and temporal sequence based on the beat tempo.
  • the system further comprises an input interface in communication with the processor for receiving real-time patient data including time-stamped biomechanical data of the patient relating to repetitive movements performed by the patient in time with the AR visual content and RAS.
  • the biomechanical data is measured using a sensor associated with the patient.
  • the software modules further include a critical thinking algorithm (CTA) module that configures the processor to analyze the time-stamped biomechanical data to determine a temporal relationship of the patient’s repetitive movements relative to the visual elements and beat signals output at the beat tempo, to determine a level of entrainment relative to a target parameter.
  • the AA/AR modelling module further configures the processor to dynamically adjust the AR visual and RAS output to the patient in synchrony and based on the determined level of entrainment.
  • the method includes the step of providing rhythmic auditory stimulus (RAS) for output to a patient via an audio output device during a therapy session.
  • the RAS comprises beat signals output at a beat tempo.
  • FIG. 5 illustrates an exemplary display of a component of a system for rehabilitation of a user by providing music therapy in accordance with exemplary embodiments of the disclosed subject matter
  • FIGS. 16-17 illustrate a patient response in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 20 illustrates an implementation of a technique for intonation training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 21 illustrates an implementation of a technique for musical stimulation training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 22 illustrates an implementation of a technique for gross motor training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 23 illustrates an implementation of a technique for grip strength training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 24 illustrates an implementation of a technique for speech cueing training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIGS. 26-28 illustrate an implementation of a technique for attention training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 30 illustrates an implementation of a technique for oral motor training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 31 illustrates an implementation of a technique for respiratory training of a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 32 is a diagram illustrating an augmented neurologic rehabilitation, recovery or maintenance (“ANR”) system for providing therapy to a patient in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 33 is a graphical visualization of measured parameters, system responses and target/goal parameters during a therapy session performed using the ANR system of FIG. 32 in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 34 is a graph depicting exemplary results relating to metabolic change resulting from a training session performed using the ANR system of FIG. 32 in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 35 is an exemplary augmented reality (AR) display generated by the ANR system for display to the patient during a therapy session in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 36B is an exemplary AR display generated by the ANR system for display to the patient during a therapy session in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 37 illustrates an implementation of a technique for gait training by providing augmented audio and visual stimulus to a patient in accordance with exemplary embodiments of the disclosed subject matter
  • FIG. 38 is a hybrid system and process diagram conceptually illustrating the ANR system configured for implementing the gait-training technique in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 39 is a hybrid system and process diagram conceptually illustrating the augmented audio (AA) device component of the ANR system of FIG. 38 in greater detail in accordance with exemplary embodiments of the disclosed subject matter;
  • FIG. 40 is an exemplary AR display generated by the ANR system for display to the patient during a therapy session in accordance with exemplary embodiments of the disclosed subject matter; and
  • FIG. 41 is an exemplary AR display generated by the ANR system for display to the patient during a therapy session in accordance with exemplary embodiments of the disclosed subject matter;
  • the present invention relates generally to systems, methods, and apparatus for implementing a dynamic closed-loop rehabilitation platform system that monitors and directs human behavior and functional changes in language, movement, and cognition that are temporally triggered by musical rhythm, harmony, melody, and force cues.
  • the sensor 200 can include a foot pressure pad 202 having a heel pad
  • the multiple zone pressure sensing with the 6-degrees of freedom motion capture device allows for map-able spatial and temporal gait dynamics tracking while walking.
  • a schematic diagram of the sensor 200 is illustrated in FIG. 3.
  • the patient P uses two foot sensors 200, one for each foot designated as Right 200R and Left 200L.
  • the right foot sensor 200R wirelessly communicates time-stamped internal measurement unit data and heel strike pressure data over a first channel, e.g., channel 5, in the IEEE 802.15.4 direct sequence spread spectrum (DSSS) RF band.
  • a video analytics domain can be used to extract patient semantic and event information about therapy sessions.
  • Patient actions and interactions are components in the therapy that affect the therapy context and regimen.
  • one or more image capture devices 206, such as video cameras (see FIG. 4), are used with a time-synched video feed. Any appropriate video capture may be incorporated into the system to capture patient movement; however, near-infrared (NIR) video capture is useful to preserve the patient’s privacy and to reduce the video data to be processed.
  • The NIR video capture device captures NIR video images of a patient’s body, such as the position of the patient’s torso and limbs. Further, it captures the patient's real-time dynamic gait characteristics as a function of a music therapy session.
  • the video is captured with a stationary camera, in which the background is subtracted to segment out foreground pixels.
  • In some embodiments, wearable wireless real-time electromyogram (EMG) devices 208 can be worn by the patient.
  • EMG sensors provide the entire biped profile of major muscle firing for locomotion. Such sensors provide data regarding the exact time at which the muscles fire.
  • the edge processor 104 can be a microprocessor, such as a 32-bit microprocessor incorporated into the foot pressure/6-degrees of freedom motion capture device that enables fast multiple zone scanning at a rate of 100 to 400 complete foot pressure/6-degrees of freedom motion profiles per second.
  • raw frame data is pre-processed by taking the instant data and “gating” it, e.g., identifying a window, and then analyzing data within that window to identify outliers and to perform analysis on the data, e.g., exponential analysis or averaging data among multiple windows. Fusion of sensor data, by including both IMU data and heel-strike pressure data, allows for more precise identification of onset times for a single stride or other repeated units of motion than using data from a single sensor.
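As a concrete illustration of the gating and fusion described above, the following minimal Python sketch windows the incoming samples, rejects outliers, and declares a stride onset only where the IMU and heel-strike pressure streams agree. The window size, thresholds, and function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def gate_and_fuse(imu_accel, heel_pressure, timestamps, window=50, z_max=3.0):
    """Windowed 'gating' with outlier rejection, then IMU/pressure fusion.
    All three inputs are equal-length 1-D NumPy arrays."""
    onsets = []
    for start in range(0, len(timestamps) - window, window):
        sl = slice(start, start + window)
        a, p, t = imu_accel[sl], heel_pressure[sl], timestamps[sl]
        # Outlier rejection: drop samples more than z_max std-devs from the mean.
        keep = np.abs(a - a.mean()) < z_max * a.std()
        a, p, t = a[keep], p[keep], t[keep]
        # Fusion: declare an onset only when both modalities agree --
        # a pressure rise coincident with an acceleration peak.
        agree = (p > p.mean() + p.std()) & (a > a.mean() + a.std())
        if agree.any():
            onsets.append(t[agree][0])  # first agreeing sample in the window
    return onsets
```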
  • SPISendChar is called to send a 0x7E byte, which is the 2nd code byte, and then SPIWaitTransferDone is called again to verify the send is done. With these code bytes sent, the rest of the packet is sent using a for loop, where psTxPkt->u8DataLength+1 is the number of iterations of a series of sequential calls to SPISendChar, SPIWaitTransferDone, and SPIClearRecieveDataReg.
  • the RF transceiver is loaded with the packet to send.
  • the ANTENNA_SWITCH is set to transmit, the LNA_ON mode is enabled, and finally an RTXENAssert call is made to actually send the packet.
  • the collector 106 operates on a local computer that includes a memory, a processor and a display.
  • Exemplary devices on which the collector is installed can include augmented reality (AR) devices, virtual reality (VR) devices, tablets, mobile devices, laptop computers, desktop computers, and the like.
  • FIG. 2 illustrates a handheld device 220 having a display 222, and which performs the collector functions.
  • the connection parameters for transferring data between the patient sensor and the collector can be configured using Device Manager in Windows (e.g., baud rate: 38400; data bits: 8; parity: none; stop bits: 1).
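By way of illustration, a collector script could open the serial link with exactly these parameters using the pyserial package; the port name below is a placeholder for whatever Device Manager reports.

```python
import serial  # pyserial

# 38400 baud, 8 data bits, no parity, 1 stop bit, per the parameters above.
link = serial.Serial(
    port="COM3",               # placeholder port name
    baudrate=38400,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    timeout=1.0,
)
packet = link.read(64)         # read up to 64 bytes of sensor data
```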
  • the collector 106 includes a processor that is held or worn by the music therapy patient.
  • the collector 106 includes a processor that is remote from the music therapy patient and carried by a therapist, and connected wirelessly or via a wired connection to the music therapy patient.
  • The session-level process for node-to-node RF packet communication is the analysis of the RF packet data payload.
  • This payload contains the foot pressure profile based on the current variable pressure following the 6-degrees of freedom motion. It is structured as follows: | 0x10 | start | F1 | F2 | F3 | F4 | Ax | Ay | Az | Pi | Yi | Ri | XOR checksum |.
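Assuming one byte per field, a checksum computed as the XOR of all preceding bytes, and Pi/Yi/Ri read as pitch, yaw, and roll (all assumptions for illustration; the text does not specify field widths), the payload could be parsed as follows:

```python
from functools import reduce
from operator import xor

def parse_payload(pkt: bytes):
    """Parse the 13-byte payload sketched above into named fields."""
    if len(pkt) != 13 or pkt[0] != 0x10:
        raise ValueError("not a foot-sensor payload")
    if reduce(xor, pkt[:-1]) != pkt[-1]:
        raise ValueError("XOR checksum mismatch")
    return {
        "start": pkt[1],
        "pressure_zones": list(pkt[2:6]),   # F1..F4
        "accel": tuple(pkt[6:9]),           # Ax, Ay, Az
        "orientation": tuple(pkt[9:12]),    # Pi, Yi, Ri
    }

raw = bytes([0x10, 0x01, 30, 40, 35, 50, 3, 0, 9, 1, 2, 0])
raw += bytes([reduce(xor, raw)])            # append the XOR checksum
print(parse_payload(raw))
```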
  • control of the foot pressure/6-degrees of freedom motion collecting node's RF transceiver and data transfers are accomplished by means of a Serial Peripheral Interface (SPI).
  • the normal SPI protocol is based on 8-bit transfers
  • the foot pressure/6-degrees of freedom motion collecting node's RF transceiver imposes a higher-level transaction protocol that is based on multiple 8-bit transfers per transaction.
  • a singular SPI read or write transaction consists of an 8-bit header transfer followed by two 8-bit data transfers.
  • the header denotes access type and register address.
  • the following bytes are read or write data.
  • the SPI also supports recursive ‘data burst’ transactions in which additional data transfers can occur.
  • the recursive mode is primarily intended for Packet RAM access and fast configuration of the foot pressure/6-degrees of freedom motion collecting node's RF transceiver.
  • a call is made to SPIDrvWrite to update the TX packet length field.
  • A call to SPIClearRecieveStatReg is made to clear the status register, followed by a call to SPIClearRecieveDataReg to clear the receive data register, making the SPI interface ready for reading or writing.
  • SPISendChar is called, sending a 0xFF character, which represents the 1st code byte, and then SPIWaitTransferDone is called to verify the send is done.
  • FIG. 5 is an exemplary output 300 that may be provided on display 222 of the handheld device.
  • the display output 300 may include a portion for the right foot 302 and a portion for the left foot 304.
  • the display for the right foot includes accelerations Ax 310a, Ay 312a, and Az 314a, and foot pressure 316a.
  • the display for the left foot includes accelerations Ax 310b, Ay 312b, and Az 314b, and foot pressure 316b.
  • Context refers to the circumstances or facts that form the setting for an event, statement, situation, or idea.
  • Context-aware algorithms examine the “who,” “what,” “when” and “where” related to the environment and time in which the algorithm is executed against certain data.
  • Some context-aware attributes include the identity, location, time, and activity being executed. In using contextual information to formulate a deterministic action, context interfaces occur among the patient, the environment, and the music therapy session.
  • the patient’s reaction context to a music therapy session can involve a layer of algorithms that interpret the fused sensor data to infer higher-level information. These algorithms distill the patient reaction context. For example, a patient's bio-mechanical gait sequence is analyzed as it relates to a specific portion of the music therapy session. In one example, “lateral tremor” is the classifier of interest. Accordingly, it is determined that the patient’s gait becomes more fluid with less lateral tremor.
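As an illustration of such a classifier layer, the sketch below scores a "lateral tremor" feature from a segment of medio-lateral acceleration and bins it into the categorical labels used later in this document; the variance measure and thresholds are assumptions for illustration.

```python
import numpy as np

def lateral_tremor_class(ml_accel, thresholds=(0.05, 0.15)):
    """Score medio-lateral acceleration variance over a gait segment and
    map it to a categorical tremor label."""
    score = float(np.var(ml_accel))
    if score < thresholds[0]:
        return "mild"
    if score < thresholds[1]:
        return "medium"
    return "severe"

# A fluid gait with little lateral sway classifies as "mild".
print(lateral_tremor_class(np.array([0.01, -0.02, 0.015, -0.01])))
```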
  • the analytics systems 108 store large models/archives and include machine learning/analytics processing, with the models described herein.
  • a web interface for login to view archived data, and a dashboard is also provided.
  • the analytics system 108 is located on a remote server computer which receives data from the collector 106 running on a handheld unit such as handheld device or tablet 220. It is contemplated that the processing capability needed to perform the analytics and machine learning functions of the analytics system 108 may be also located on the handheld device 220.
  • the analytics processing 400 includes a user-interface 402 for receiving data from the collector 106.
  • a database storage 404 receives incoming data from the collector 106 for storage.
  • Training data as well as outputs of the analytics processing, e.g., the ensemble machine learning system 410, may also be stored on storage 404 to facilitate the creation and refinement of the predictive models and classifiers.
  • a data bus 406 allows flow of data through the analytics processing.
  • a training process 408 is performed on training data to derive one or more predictive models.
  • An ensemble machine learning system 410 utilizes the predictive models. The output of the ensemble machine learning system 410 is an aggregation of these predictive models.
  • This aggregated output is also used for classification requirements with template classifiers 412, such as tremor, symmetry, fluidity, or learned biomechanical parameters such as entrainment, initiation, etc.
  • An API 418 connects to the collector and/or music therapy Center.
  • Therapy algorithms 414 and predictive algorithms 416 include multi-layer perceptron neural networks, hidden Markov models, radial basis function networks, Bayesian inference models, etc.
  • An exemplary application of the systems and methods described herein is analysis of a patient’s bio-mechanical gait.
  • the gait sequence is feature-extracted into a series of characteristic features. The presence of these and other features in captured sensor-fused data informs the context detection algorithm as to whether the patient’s bio-mechanical gait sequence is valid.
  • Bio-mechanical gait sequence capture requires robust context detection, which is then abstracted over a representative population of music therapy patients.
  • An example of such an activity is the location of a patient at an instance in time and their response to the music therapy at that time.
  • the recognition and correlation of patient music therapy responses allows for recognition of specific patterns of music therapy patient responses.
  • Specific music therapy regimes are then benchmarked and analyzed for performance and efficacy by creating a baseline of music therapy patient responses and correlating them to future music therapy patient responses.
  • a distance metric with gait biomechanics capture is used to determine patient path trajectory using temporal and spatial variations/deviations between two or more music therapy sessions. From this sensor-fused data capture, features are extracted and classified to label various key patient therapy responses. Further sensor-fused data analysis uses histograms to allow for initial music therapy response pattern detection.
  • the prediction routine, a Multi-Layer Perceptron Neural Network (MLPNN), uses a directed graph node-based model having a top-layer root-node which predicts requirements for reaching a subsequent node and obtaining a patient’s sensor-fused data feature vector.
  • This sensor fused data feature vector contains time-series processed motion data, music signature data, and video image data that is specifically significant for further processing.
  • The directed graph, in this case, looks like a tree drawn upside down, where the leaves are at the bottom of the tree and the root-node is at the top.
  • the model uses two types of input variables: ordered variables and categorical variables.
  • An ordered variable is a value that is compared with a threshold that is also stored in a node.
  • a categorical variable is a discrete value that is tested to see whether it belongs to a certain limited subset of values stored in a node. This can be applied to various classifications. For example, mild, medium, and severe can be used to describe tremor and are an example of a categorical variable. Conversely, a fine-grained range of values, or a numerical scale, can be used to describe tremor similarly but in a numerical fashion.
  • If the decision rule is satisfied, the routine goes to the left node; if not, it goes to the right node.
  • a pair of entities, variable_index and decision_rule (threshold/subset), is used to make this decision. This pair is called a split, which splits on the variable variable_index.
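A minimal sketch of one such split, covering both ordered (threshold) and categorical (subset) decision rules; the node representation is hypothetical:

```python
def descend(node, sample):
    """Apply one split: ordered variables compare against a threshold,
    categorical variables test subset membership; return the chosen child."""
    value = sample[node["variable_index"]]
    if node["kind"] == "ordered":
        go_left = value < node["threshold"]
    else:  # categorical
        go_left = value in node["subset"]
    return node["left"] if go_left else node["right"]

# Illustrative node: tremor severity as a categorical variable.
node = {"variable_index": 0, "kind": "categorical",
        "subset": {"mild", "medium"}, "left": "low-risk", "right": "review"}
print(descend(node, ["severe"]))  # -> "review"
```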
  • Once the Multi-Layer Perceptron Neural Network is built, it may be pruned using a cross-validation routine. To avoid model over-fitting, some of the branches of the tree are cut off. This routine may be applied to standalone decisions.
  • One salient property of the decision algorithm (MLPNN), described above, is an ability to compute the relative decisive power and importance of each variable.
  • FIG. 8 illustrates the ensemble machine learning system 410 as an aggregation of the predictive models M1 506a, M2 506b, M3 506c ... MN 506n on sample data 602, e.g., feature-extracted data, to provide multiple predictive outcome data 606a, 606b, 606c ... 606n.
  • An aggregation layer 608, e.g., including decision rules and voting, is used to derive the output 610, given a plurality of predictive models.
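A minimal sketch of this aggregation as a majority vote over the predictive models' outcomes; a deployed aggregation layer could combine richer decision rules with the voting shown here.

```python
from collections import Counter

def ensemble_predict(models, features):
    """Run models M1..MN on the sample and aggregate outcomes by vote."""
    outcomes = [m(features) for m in models]       # 606a..606n
    return Counter(outcomes).most_common(1)[0][0]  # aggregated output 610

# Toy callables standing in for the trained models.
models = [lambda f: "tremor", lambda f: "tremor", lambda f: "fluid"]
print(ensemble_predict(models, {"stride_ms": 610}))  # -> "tremor"
```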
  • the MR ConvNet system has two layers, where the first layer is a convolutional layer with mean pooling support.
  • the MR ConvNet system second layer is a fully connected layer that supports multinomial logistic regression.
  • Multinomial logistic regression, also called softmax regression, is a generalization of logistic regression for handling multiple classes. In the case of logistic regression, the labels are binary.
  • the cross-entropy function is the standard softmax loss: $J(\theta) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} 1\{y^{(i)}=k\}\,\log\frac{e^{\theta_k^{\top} x^{(i)}}}{\sum_{j=1}^{K} e^{\theta_j^{\top} x^{(i)}}}$, where $1\{\cdot\}$ is the indicator function over the $K$ class labels.
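A numerically stable NumPy sketch of this loss:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean softmax cross-entropy. logits: (N, K) scores; labels: (N,) ids."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize the exponents
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

print(softmax_cross_entropy(np.array([[2.0, 0.5], [0.1, 1.2]]),
                            np.array([0, 1])))
```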
  • an EP Ratio is calculated as the ratio of the time duration between beats to the time duration between steps: $\mathrm{EP} = \Delta t_{\text{beat}} / \Delta t_{\text{step}}$, where $\Delta t_{\text{beat}}$ is the interval between consecutive beat signals and $\Delta t_{\text{step}}$ is the interval between consecutive steps.
  • if the entrainment potential is not constant within a tolerance, adjustments are made at step 1818, e.g., speed up or slow down the beat tempo, increase volume, increase sensory input, overlay a metronome or other related sound, etc.
  • if the entrainment is accurate, e.g., the entrainment potential is constant within a tolerance, an incremental change is made to the tempo at step 1820. For example, the baseline tempo of the music played with the handheld device is increased towards a goal tempo, e.g., by 5%.
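Putting these steps together, a minimal sketch of the adjustment loop follows; the tolerance, the 5% increment, and the cadence-matching rule are illustrative assumptions rather than the patented algorithm.

```python
def ep_ratio(beat_interval_s, step_interval_s):
    # EP ratio: duration between beats over duration between steps.
    return beat_interval_s / step_interval_s

def update_tempo(ep_history, tempo_bpm, goal_bpm, tol=0.05, step=0.05):
    """One pass of the adjust-or-advance logic described above."""
    if max(ep_history) - min(ep_history) > tol:
        # Not entrained (step 1818): adapt the beat to the patient. Since
        # tempo * EP = (60/beat_interval) * (beat_interval/step_interval),
        # this sets the beat to the patient's current cadence.
        return tempo_bpm * (sum(ep_history) / len(ep_history))
    # Entrained (step 1820): increment the baseline tempo toward the goal.
    return min(goal_bpm, tempo_bpm * (1 + step))

print(update_tempo([1.01, 0.99, 1.00], tempo_bpm=90, goal_bpm=110))  # 94.5
```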
  • a connection is made to the entrainment model for prediction and classification. (It is understood that such connection may be pre-existing or initiated at this time.)
  • An optional entrainment analysis 1846 is applied to the sensor data, substantially as described above in step 1816, with the differences noted herein. For example, entrainment may be compared with previous entrainment data from earlier in the session, from previous sessions with the patient, or with data relating to entrainment of other patients. As an output from the entrainment analysis, a determination is made regarding the accuracy of the entrainment, e.g., how closely the patient’s gait matches the baseline tempo. If the entrainment is not accurate, adjustments are made at step 1848, substantially in the same manner as described above at step 1818.
  • FIG. 21 illustrates a technique useful for musical stimulation training.
  • The flow diagram illustrated in FIG. 22 for gross motor training is substantially identical to the flow illustrated in FIG. 18 for gait training, with the differences noted herein.
  • the patient is provided with cues to move in time with the baseline beats of a musical selection.
  • the analytics system 108 evaluates the patient’s responses and classifies the responses in terms of accuracy of motion and entrainment as discussed above and provides instructions to increase or decrease the tempo of the music played.
  • FIG. 23 illustrates a technique useful for grip strength training.
  • the hardware includes a gripper device having pressure sensors and a connected speaker associated with a handheld device 220. Key inputs are the pressure applied by the patient to the gripping device, in a similar manner to the heel strike pressure measured by sensor 200.
  • the appropriate populations include patients with neurological and orthopedic conditions affecting strength, endurance, balance, posture, and range of motion, as well as TBI, SCI, stroke, and cerebral palsy.
  • FIG. 24 illustrates a technique useful for speech cueing training.
  • the hardware can include a speaker for receiving and processing the singing by the patient, and in some embodiments a therapist can manually provide an input regarding speech accuracy.
  • Key inputs are the tone of voice, the words spoken, the rhythm of speech, and music preferences.
  • the appropriate populations include patients with robotic speech, word-finding, and stuttering speech issues.
  • FIG. 27 illustrates a flow diagram for alternating attention training in which the instructions are provided, either by cues appearing on the display 222 or verbally by a therapist, to follow along or perform a task to audio cues which alternate between the left and the right ear.
  • FIG. 28 illustrates a flow diagram for divided attention in which the instructions are provided to follow along or perform a task to audio cues with audio signals in both the left and right ear.
  • exemplary embodiments of the augmented neurologic rehabilitation, recovery or maintenance (“ANR”) systems and methods disclosed herein build upon entrainment techniques by utilizing additional sensor streams to make determinations of therapeutic benefit and inform a closed loop therapeutic algorithm.
  • Exemplary systems and methods for neurologic rehabilitation, which can be utilized to realize embodiments of the ANR systems and methods, are shown and described above and in co-pending and commonly assigned U.S. Patent Application No. 16/569,388 for “Systems and Methods for Neurologic Rehabilitation,” to McCarthy et al., which is a continuation of U.S. Patent No. 10,448,888.
  • the ANR systems and methods include a method for combining AA techniques for repetitive motion activities.
  • the neuroscience of rhythm at its core uses the stimulus to engage the motor system for repetitive motion activities such as walking.
  • Adding AA to the therapy protocols enhances therapeutic effect, increases adherence to the therapy protocols and provides greater safety in the form of enhanced situational awareness to the patient.
  • the disclosed embodiments configured for adding AA can mix many audio signals, including external environmental sound inputs, recorded content, rhythmic content, and voice guidance into a synchronized state taking advantage of the neuroscience of music.
  • Neuroplasticity, entrainment, and the science of mirror neurons are the foundational scientific components supporting the disclosed embodiments. Entrainment is a term for the activation of the motor centers of the brain in response to an external rhythmic stimulus. Studies have shown that audio-motor pathways exist in the reticulospinal tract, the part of the brain that is responsible for movement. Priming and timing of movements via these pathways demonstrate the motor system’s ability to couple with the auditory system in order to drive movement patterns (Rossignol and Melville, 1976). The entrainment process has been shown to effectively increase walking speed (Cha, 2014), decrease gait variability (Wright, 2016), and lower fall risk (Trombetti, 2011).
  • the ANR systems and methods are configured to process the images/videos to remove/add people/objects smaller or larger than a specified size from the images/videos, in response to patient or therapist exception conditions received as inputs to the ANR system.
  • exceptions can be a patient response, such as an instruction to reduce scene complexity, or a therapist instruction to introduce occlusions, which could be people/objects increasing scene complexity.
  • the embodiments can support recording all data of all patient or therapist exception conditions in addition to the session data itself.
  • the ANR systems and methods include a telepresence method allowing the linking of a therapist to a remotely located patient using the system.
  • the telepresence method, besides fully supporting all the local features experienced by the patient and therapist when in the same location, includes biomechanical motion tracking of the patient relative to the AR 3-D dynamic model of people/objects.
  • FIG. 32 depicts a conceptual overview of principal components of an exemplary ANR system 3200 that uses a closed loop feedback that measures, analyzes, and acts on a person to facilitate outcomes towards a clinical or training goal.
  • the ANR system 3200 can be realized using the various hardware and/or software components of the system 100 described above.
  • the ANR system measures or receives inputs relating to gait parameters, environment, context / user intent (including past performance), physiological parameters, and real time feedback on outcomes (e.g. closed loop, real time decision making).
  • the software modules include a clinical thinking algorithm (“CTA”) module 3208 and an AR/AA output modelling module 3210 programmed to dynamically generate/modify outputs for the patient.
  • the outputs are provided to the patient via one or more output devices 3220, such as visual and/or audio output devices and/or tactile feedback devices.
  • AR visual content can be output to AR glasses 3222 worn by the patient.
  • Augmented audio content can be provided to the patient via audio speakers or headphones 3225.
  • other suitable visual or audio display devices can be used without departing from the scope of the disclosed embodiments.
  • the inputs to the ANR system 3200 are important to enable the system to measure, analyze, and act in a continuous loop facilitating outcomes towards a clinical or training goal.
  • sensors could be used to measure other input parameters, which could include respiratory rate, heart rate, oxygen level, temperature, electroencephalogram (EEG) for recording the brain's spontaneous electrical activity, electrocardiogram (ECG or EKG) for measuring the electrical activity of the heart, electromyogram (EMG) for evaluating and recording the electrical activity produced by skeletal muscles, photoplethysmogram (PPG) for detecting blood volume changes in the microvascular bed of tissue, often using a pulse oximeter which measures changes in light absorption of skin, optical sensors, inertial measurement units, video cameras, microphones, accelerometers, gyroscopes, infrared, ultrasonic, radar, RF motion detection, GPS, barometers, RFIDs, humidity, or other sensors that detect physiological or biomechanical parameters.
  • gait parameters can be measured using one or more sensors 3252 such as IMUs, footpad sensors, smartphone sensors (e.g., accelerometers) and environmental sensors.
  • physiology parameters can be measured using one or more sensors 3254 such as PPG, EMG/EKG and respiratory rate sensors.
  • contextual information about the desired outcomes, the use environment, data from past sessions, other technologies, and other environmental conditions can be received as inputs to the CTA module and can adjust the CTA’s response.
  • contextual information 3258 and environment input 3256 information can be received as inputs that further inform operation of the CTA 3208.
  • An example of using contextual information is that information from the past about a user’s gait pattern could be used in combination with Artificial Intelligence (AI) or Machine Learning (ML) systems to provide more personalized clinical goals and actions for the patient. These goals could modify target parameters such as limits on steps per minute, walking velocity, heart rate variability, oxygen consumption (VO2 max), breathing rate, session length, asymmetry, variability, distance walked, or desired heart rate.
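Such goals could be carried in a simple per-patient container like the following; the field names and units are illustrative and not the ANR system's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetParameters:
    """Illustrative per-patient goal limits for the CTA."""
    steps_per_minute_max: Optional[float] = None
    walking_velocity_m_s: Optional[float] = None
    heart_rate_bpm: Optional[float] = None
    vo2_max_ml_kg_min: Optional[float] = None
    breathing_rate_bpm: Optional[float] = None
    session_length_min: Optional[float] = None
    gait_asymmetry_pct: Optional[float] = None
    distance_walked_m: Optional[float] = None

# e.g., goals personalized by an AI/ML model from past-session gait data:
goals = TargetParameters(steps_per_minute_max=115, walking_velocity_m_s=1.1)
```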
  • the ANR system can use Bluetooth Low Energy (BLE) beacons or other wireless proximity techniques, such as wireless triangulation or angle of arrival, to facilitate wireless location triggers that have people/objects appear and/or disappear in the patient's field of view with respect to an AR 3-D dynamic model, depending on the detected location.
  • the AR 3-D dynamic model that is output by the ANR system can be controlled by the therapist and/or beacon triggers to change or maintain navigation requirements for the patient.
  • These triggers could be used with gait or physiological data as described above to provide additional triggers besides the wireless beacon triggers.
  • gait data feedback from IMU products allows for a gait feedback loop that provides the ANR system 3200 with the ability to effect change in the AR 3-D dynamic model software process.
  • the CTA module 3208 implements clinical thinking algorithms that are configured to control the applied therapy to facilitate outcomes towards a clinical or training goal.
  • Clinical goals could include items such as those discussed in connection with FIGS. 18 through 31 and, by way of further example, interventions for agitation in Alzheimer’s, dementia, bipolar disorder, and schizophrenia, as well as training/physical activity goals.
  • This section discusses different non-limiting exemplary techniques that can be used to deliver an appropriate rehabilitation response as determined by the CTA, for example, modulating the rhythmic tempo and the synchronized AR visual scenery. Each of these techniques could be implemented using a stand-alone CTA or combined with each other.
  • the system 3200 can be configured to combine CTA(s) with entrainment principles for repetitive motion activities and, in other cases, they can be combined with each other towards other goals.
  • ANR system 3200 can be configured to utilize the combination of the biomechanical data, physiological data, and context to create a virtual treadmill output via the AR/AA output interfaces. While a treadmill keeps pace for someone with the movement of the physical belt, the virtual treadmill is dynamically adjusted using the CTA in accordance with the entrainment principle to modulate a person’s walking or movement pace in a free-standing manner, similar to other movement interventions. However, in addition to or instead of using a rhythmic stimulus to drive the individual towards a bio-mechanical goal as discussed previously, the virtual treadmill can be generated and dynamically controlled based on entrainment of the patient towards target parameters such as those listed above.
  • the CTA module 3208 is configured to utilize the biomechanical data, physiological data, and context to provide gait training therapy in the form of a virtual treadmill and rhythmic auditory stimulus output via the AR and AA output interfaces 3220.
  • FIG. 37 is a process flow diagram illustrating an exemplary routine 3750 for providing gait training therapy to a patient using the ANR system 3200.
  • FIG. 38 is a hybrid system and process diagram conceptually illustrating aspects of the ANR system 3200 for implementing the gait-training routine 3750 in accordance with exemplary embodiments of the disclosed subject matter.
  • sensors 3252, particularly foot-mounted IMUs, capture the sensor data relating to gait parameters that are provided to the CTA module 3208.
  • the AA/AR modelling component 3210, comprising an audio engine, receives inputs from the CTA and is configured to generate an audio cueing ensemble comprising one or more of rhythmic music and cuing content, interactive voice guidance, and spatial and audio effects processing.
  • the AA/AR modelling component 3210, comprising an AR/VR modelling engine (also referred to as the AR 3-D dynamic model), is shown receiving inputs from the CTA and is configured to generate a visual cueing ensemble comprising one or more of virtual AR actors and objects (e.g., a virtual person walking), background motion animation (e.g., virtual treadmill, steps/footprints and animations), and scene lighting and shading.
  • FIG. 39 is a hybrid system and process diagram conceptually illustrating an exemplary audio output device 3225 and augmented audio generation components of the ANR system 3200 in greater detail.
  • the AA device can capture environmental sounds using, for example, stereo microphones.
  • the AA device can also generate audio outputs using stereo transducers.
  • the AA device can also comprise head-mounted IMUs.
  • the AA device can also comprise audio signal processing hardware and software components for receiving, processing and outputting the augmented audio content received from the AA/AR module 3210 alone or in combination with other content such as environmental sounds.
  • the CTA module 3208 receives gait parameters, including those received from sensors such as foot-mounted IMUs and head-mounted IMUs. Additionally, in an embodiment, the CTA receives data relating to physiological parameters from other sensor devices such as PPG sensors.
  • a patient wearing the AR/AA output device 3220 and IMU sensor 3252 starts to walk as the ANR system 3200 calibrates and collects preliminary gait data such as stride length, speed, gait cycle time and symmetry.
  • the CTA module 3208 determines the baseline rhythmic tempo for both the music playback and virtual AR scene to be displayed.
  • the baseline rhythmic tempo can be determined by the CTA as described in connection with FIG. 18.
  • the audio engine (i.e., the audio modelling component of the AR/AA modelling module 3210) generates the music/rhythm playback at the baseline tempo.
  • the visual AR engine will generate a moving virtual scene, such as those understood in the video game industry. More specifically, in an embodiment, the virtual scene includes visual elements that are presented under the control of the CTA and share the common timing reference with the audio engine, in order to synchronize the elements of the visual scene to the music and rhythm tempo.
  • While the AR scene described herein includes a virtual treadmill or a virtual person and footsteps, the AR scene could be any one or more of a variety of examples discussed herein, such as a virtual treadmill, a virtual person walking, a virtual crowd, or a dynamic virtual scene.
  • the music/rhythm and visual content are delivered to the patient using an AR/AA device 3220 such as a lightweight heads-up display (e.g., AR goggles 3222) with earphones 3225.
  • the patient receives instructions at steps 3706 and 3707 regarding the therapy via voiceover cues generated by the audio engine. This could include a pre-walk training preview in order for the patient to become accustomed to and practice with the visual scenery and audio experience.
  • FIG. 40 shows an exemplary AR virtual scene 4010 presented for the patient to entrain with.
  • the scene can comprise an animated 3-D image of another person walking “in front” of the patient and whose steps and walking motions are synchronized to the music tempo.
  • the AR actor walks to the same tempo as the baseline beat tempo of the audio content generated by the CTA and audio engine.
  • the patient goal can be to match their steps both rhythmically with the audio, and visually with the actor.
  • the AR scene can comprise a plurality of footsteps with additional cues such as L and R indicating left and right foot.
  • the scene including the footsteps can be virtually moving toward the patient at a prescribed rate, while the virtual actor is walking in front of the patient in the direction away from the patient.
  • additional cues generated in connection with the AR scene can include rhythmic audio cues that reinforce the visual cues.
  • one effective reinforcement method can include the AA system 3210 generating the sound of the virtual actor’s footfalls in synchrony with the rhythm, simulating group-marching to a common beat.
  • FIG. 41 shows an exemplary AR virtual scene 4110 presented for the patient to entrain with.
  • the virtual treadmill can be generated and dynamically controlled based on entrainment of the patient towards target parameters such as those listed above.
  • the generated AR treadmill animates movement of the treadmill surface 4115 and generates virtual steps at the same tempo as defined by the CTA for the auditory stimulus.
  • the 3-D animation of a virtual treadmill could include visually highlighted steps or tiles that a patient can use as visual goals while simultaneously entraining to the rhythm generated under control of the CTA. Accordingly, in this example, the patient goal is to match their steps both rhythmically to the audio and visually with the animated goal steps.
  • Using real-time data from the biomechanical sensors (e.g., sensors 3252), the entrainment potential is determined by the CTA module 3208 and used to determine how the training session goal is to be met.
  • entrainment potential can be the basis for modifying the rhythmic audio stimulus and visual scenery, which occurs at step 3710.
  • the CTA analyzes the incoming data history of the patient’s gait cycle times in comparison to the rhythmic intervals of the beats delivered to the patient by the audio device.
  • Exemplary approaches for modifying the audio stimulus based on entrainment potential are similarly described above.
  • the CTA module can instruct the AA/AR modelling module to adjust (e.g., reduce) the tempo of the RAS and correspondingly adjust the motion speed of the AR scene in sync with the RAS.
  • the patient is considered by the CTA to be entraining.
  • the CTA evaluates whether the patient has reached a goal. If a goal has not been reached, then one or more target parameters can be adjusted at step 3712. For instance, in one example, the CTA compares the RAS tempo and associated AR scene speed to a target tempo parameter (e.g., a training/therapy goal) before a rhythmic tempo and/or scenery motion speed is changed in view of the comparison. Exemplary methods that the CTA module 3208 can implement for adjusting the rhythmic auditory stimulus according to entrainment potential are shown and described above, for example, in connection with FIG. 18.
  • modifying the target parameters could include increasing or decreasing the music tempo at step 3712. This would drive the patient to walk faster or slower using the RAS mechanism of action.
  • another training goal could be to lengthen a patient’s stride, which can be achieved by slowing down the imagery’s motion speed parameter.
  • the audio and visual outputs are mutually reinforcing stimuli: the visual scenery is layered together in synchrony to the rhythmic stimulus.
  • the CTA module 3208 makes dynamic adjustments to the visual scenery and rhythmic tempo in order to meet the therapy goal.
  • CTA module 3208 can control the synchronization of music tempo and AR scenery based on biomechanical sensor inputs in furtherance of gait training. It should be understood that the principles of this embodiment are applicable to many disease indications and rehabilitation scenarios.
  • the virtual treadmill can be generated by the ANR system 3200 to modulate the patient’s walking towards a target parameter of oxygen consumption.
  • the virtual treadmill is generated and controlled in order to modulate the walking speed towards an oxygen consumption or efficiency target parameter using entrainment.
  • FIG. 33 is a graphical visualization of a real-time session performed using the ANR system 3200 with VO2 max as the target parameter, tempo changes used as interim goals, and entrainment used to drive the physiological changes related to VO2 max.
  • FIG. 33 shows an example of how this process works in real time. More specifically, FIG. 33 is a graphical user interface illustrating various salient data-points and parameter values that are measured, calculated and/or adjusted by the ANR system in real-time during a session.
  • the top of the interface shows a chart of entrainment potential values calculated for each step in real-time throughout the session.
  • the top bar shows individual EP calculated per step, which in this example is the phase correlation between step time intervals and beat time intervals.
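One plausible reading of such a per-step phase measure, sketched under the assumption that EP reflects where each step lands within the current beat interval (0 on the beat, approaching 1 at the midpoint between beats):

```python
def phase_ep(step_time, prev_beat_time, beat_interval):
    """Fold the step's offset within the beat interval to a 0..1 phase error."""
    phase = ((step_time - prev_beat_time) % beat_interval) / beat_interval
    return min(phase, 1.0 - phase) * 2

# A step 20 ms after a beat, at 120 BPM (0.5 s beats), scores 0.08.
print(phase_ep(step_time=10.02, prev_beat_time=10.0, beat_interval=0.5))
```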
  • the next window down provides a status bar showing whether parameters are within a safe range.
  • the next window down shows a real time response driven by the CTA based on, inter alia, the measured parameters, entrainment and other aforementioned inputs and feedback to the CTA.
  • the circle icons represent algorithm responses, which include both the tempo changes and rhythmic stimulus level (e.g. volume) changes.
  • the bar below that shows just the tempo and tempo changes by themselves.
  • the next window down shows the real-time tempo of rhythmic stimulus provided to the patient over time in accordance with the CTA response.
  • the bottom window shows measured oxygen consumption over time and target parameter.
  • the patient can be presented with an augmented reality scene (e.g., the virtual treadmill) with visual elements animated in synchrony to the rhythmic stimulus and dynamically adjusted in synchrony with the adjustments to the real-time tempo of the rhythmic stimulus.
  • An example of how the AA/VR module 3210 can be configured to synchronize the visual animation speed and audio can include defining the relationship between displayed repetitive motion rates and the tempo of the audio cues. For example, based on the beat tempo, the rate of the treadmill and spacing of the steps are calculated to define the relationships between audio and visual elements. Furthermore, a reference position of the treadmill, the timing of the footsteps, and any beat-timed animations are synchronized to the output time of the beats comprising the beat tempo. Using time scaling and video frame interpolation techniques known in the animation industry, a wide range of synchronized virtual scenes can be programmatically generated by the AA/VR module 3210 on demand according to the defined relationships between audio and visual elements.
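A minimal sketch of one such defined relationship between the beat tempo and the visual-element rates; the stride-length and frame-rate defaults are illustrative assumptions.

```python
def scene_sync_params(beat_tempo_bpm, stride_length_m=0.7, fps=60):
    """Derive treadmill speed and footstep timing locked to the beat tempo,
    assuming one footstep per beat."""
    beat_period_s = 60.0 / beat_tempo_bpm
    return {
        "beat_period_s": beat_period_s,
        "treadmill_speed_m_s": stride_length_m / beat_period_s,
        "footstep_spacing_m": stride_length_m,
        "frames_per_step": round(fps * beat_period_s),  # animation quantum
    }

print(scene_sync_params(100))  # 0.6 s beats -> ~1.17 m/s belt, 36 frames/step
```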
  • FIG. 34 is a graph depicting metabolic change during a first training session for 7 patients (denoted by respective sets of two dots connected by a dashed line).
  • FIG. 34 shows data supporting that purposeful entrainment can improve the oxygen consumption of an individual.
  • the graph shows a person’s oxygen consumption (ml of oxygen/kg/meter) pre-training to rhythm and post-training to rhythm with the ANR system.
  • This figure shows an average reduction of 10%. The results indicate that the entrainment process can improve endurance and reduce energy expenditure while walking.
  • the ANR system 3200 can be configured to compare real-time measured information concerning movements of a person to AR images and/or components of music content (e.g., instantaneous tempo, rhythm, harmony, melody, etc.) being output to a patient during the therapy session. This can be used by the system to calculate an entrainment parameter, determine phase entrainment, or establish baseline and carryover characteristics.
  • the AR images could be moving at the same speed or cadence as the target parameter.
  • the AR relevant movements of the images could be entrained to the rhythmic stimulus in synchrony with how the person should be moving.
  • An example of an AR 3-D dynamic model output can include projecting a therapist or other person (a virtual actor) walking in the patient’s field of view, initiated by the person administering the therapy (the real therapist).
  • FIG. 35 for instance illustrates the view of a therapist or coach projected in front of patient or trainer via AR, using for example AR glasses known in the art.
  • This AR 3-D dynamic model is controlled with one or more CTAs.
  • the virtual therapist could start with the approach shown and described in connection with FIG. 22, and then proceed with a gait training regimen, as shown and described in connection with FIG. 18. Alternatively, these could be performed simultaneously with dual tasking.
  • the virtual actor can be controllably displayed by the system as walking or moving backwards or forward with a smooth movement similar to the non-affected side of the patient.
  • This process can also include providing an audio stimulus to sync the virtual and/or physical person to the stimulus.
  • the AR 3-D dynamic model can be configured to simulate a scenario in which the patient is walking in or around a crowd of people, and/or people with objects, in front of and/or to the side of the patient.
  • FIG. 36A illustrates the view of a crowd of people projected in front of the patient via AR.
  • the system can be configured to project the crowd or person traveling faster or slower than the baseline of the person, to encourage them to move at a similar speed or to stop and start in a real-world environment.
  • the crowd or person could be entrained to the beats of the rhythmic auditory stimulus or another desired goal. Varying levels of difficulty in navigation can be initiated by the AR 3-D dynamic model.
  • the AR view of a therapist, crowd, person, obstacles and the like can be dynamically adjusted using the AR 3-D dynamic model according to the outputs of the CTAs.
  • the AR 3-D dynamic model can be configured to simulate a scenario in which the patient is walking in or around an arrangement of cones which implements a virtual obstacle course for the patient to navigate.
  • Cones are a normal obstacle in a therapy environment; however, other embodiments could be configured to simulate normal activities of daily living (e.g., grocery shopping).
  • These cones, along with other virtual obstacles, can encourage direction changes involving side steps to each side and backwards walking, rather than just forward-walking directional changes.
  • wireless beacon triggers can be used to cause the ANR system to present cones that appear and/or disappear. The beacons would be triggered based on detecting the location of the person relative to the cones.
  • the target parameter for this example can be a measure of walking speed or walking quality. Successful navigation would be to navigate around the cones without virtually hitting them.
  • the system can be configured to present levels that get more difficult (e.g. more obstacles and faster speeds) as long as the person is successfully avoiding the obstacles and the quality of walking does not degrade (as measured by increase in variability or worsening asymmetry).
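  • A minimal sketch of such a progression rule follows, assuming stride-time variability and step-time asymmetry as the gait-quality measures; the thresholds and function name are illustrative assumptions:

```python
def next_level(level, obstacles_hit, cv_stride, asymmetry,
               cv_limit=0.05, asym_limit=0.10):
    """Hypothetical progression rule: advance only while navigation is clean
    and gait quality has not degraded.
    cv_stride: coefficient of variation of stride time
    asymmetry: |left - right| step-time asymmetry ratio"""
    if obstacles_hit == 0 and cv_stride <= cv_limit and asymmetry <= asym_limit:
        return level + 1                  # more obstacles, faster speeds
    if obstacles_hit > 2 or cv_stride > 2 * cv_limit:
        return max(1, level - 1)          # back off difficulty
    return level

print(next_level(3, obstacles_hit=0, cv_stride=0.03, asymmetry=0.06))  # -> 4
```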
  • the AR 3-D dynamic model can be configured to simulate a scenario in which the patient is walking in or around caves and/or cliffs which can include obstacles for a reality effect.
  • the added realism heightens the level of navigational detail required relative to the previously presented use cases.
  • a winding path can be presented that requires the person to take a longer step on their affected side. The winding path could also comprise separate cliffs, requiring the patient to step across a valley so as not to fall off.
  • Wireless beacon triggers can be used to cause the ANR system to make cave and/or cliff obstacles appear and/or disappear, thus varying levels of difficulty in navigation times and path lengths.
  • Sensor data can be used by the system to sync movements to the winding path.
  • the navigation requirements for the patient could be biomechanical responses for navigating changes in a baseline prescribed course.
  • the system is configured such that wireless spatial and temporal beacon triggers affect the changes in the AR 3-D dynamic model.
  • the temporal aspect of these wireless triggers is the ability to turn them on and off. This would allow for maximum flexibility in scripting navigation paths for the courses that patients should take as part of the therapy sessions.
  • the target parameter for this instance is a measure of walking speed or walking quality. Successful navigation would be to navigate the paths without stepping off the path or falling off the cliff.
  • the system can be configured to present levels that would get more difficult (e.g. more obstacles and faster speeds) as long as the person is successfully staying on the path and the quality of walking does not degrade (as measured by increase in variability or worsening asymmetry).
  • the AR 3-D dynamic model can be configured to simulate a scenario in which the patient is standing or seated stationary and asked to march as a virtual object is presented and approaches each foot.
  • FIG. 36B illustrates the view of footprints projected in front of the patient via AR.
  • the ANR system can generate a virtual scene in which the object may approach to the left or right of the patient to encourage side stepping. The object will be presented as approaching the patient at a pre-defined tempo or beat which will follow a decision tree as described in FIG. 22. A visual of the correct movement by therapist or patient from past therapy may also be projected.
  • the ANR system can be configured to incorporate haptic feedback into the therapy.
  • Haptic feedback, for example, can be employed by the system as a signal if the user gets too close to objects or people in the projected AR surroundings.
  • Rhythmic haptic feedback may also be synced with the auditory cue to amplify sensory input.
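  • One possible way to sync rhythmic haptic feedback with the auditory cue is to schedule a short vibration pulse at each beat time; the sketch below assumes a generic pulse descriptor and is not tied to any particular haptic API:

```python
def haptic_schedule(beat_times, pulse_ms=40, amplitude=0.8):
    """Sketch: schedule a short vibration pulse at each auditory beat time so
    the haptic and auditory channels reinforce the same rhythm."""
    return [{"t": t, "duration_ms": pulse_ms, "amplitude": amplitude}
            for t in beat_times]

print(haptic_schedule([0.0, 0.5, 1.0, 1.5])[:2])
```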
  • AR may also be adaptively and individually enabled to cue initiation of movement, for example, during a freezing of gait episode in someone with Parkinson’s Disease.
  • the ANR system can be further configured to incorporate optical and head tracking.
  • This tracking may be incorporated as feedback to the ANR system, which can be configured to trigger auditory input in response to where the patient’s eyes or head are facing. For example, for someone with left neglect who is interacting with only the right side of their environment, eye and head tracking can provide input into how much of their left-hemisphere environment is being engaged and trigger the system to generate an auditory cue to drive more attention to the left side.
  • This data can also be used to track progress over time, as clinical improvement can be measured by degrees of awareness in each hemisphere. Another example of this is with people who have ocular motor disorders, where visual scanning from left to right may be improved by doing it to an external auditory rhythm.
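  • A hypothetical sketch of such a trigger: estimate the fraction of gaze/head samples oriented into the left hemifield and emit a left-panned cue when engagement falls below a threshold. The sign convention, threshold value, and cue descriptor are assumptions:

```python
def left_engagement(yaw_samples_deg):
    """Fraction of gaze/head samples oriented into the left hemifield
    (negative yaw = left of midline, by this sketch's convention)."""
    left = sum(1 for y in yaw_samples_deg if y < 0)
    return left / len(yaw_samples_deg)

def maybe_cue_left(yaw_samples_deg, threshold=0.25):
    """Trigger a left-panned auditory cue when left-side engagement is low."""
    ratio = left_engagement(yaw_samples_deg)
    if ratio < threshold:
        return {"cue": "left_attention_tone", "pan": -1.0, "ratio": ratio}
    return None

print(maybe_cue_left([12, 30, 5, -2, 18, 25, 40, 8]))
```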
  • the ANR system can be configured to provide a digital presence of past sessions to display a user’s improvement.
  • These models could be replayed after a session to compare from session to session or across the lifetime of the treatment.
  • the digital presence of past sessions (or augmented session) when paired with the audio input of that session, could be used as a mental imagery task for practice in between walking sessions and limit fatigue.
  • the model would display differences in walking speed, cadence, stride length, and symmetry to help show the user’s changes over time and how the treatment may be improving their gait.
  • This presence could also be used by therapists before a session to help prepare training plans or techniques for follow-on sessions.
  • This modeled presence could also be used by researchers and clinicians to better visualize and reanimate in 3-D imagery the evolution of a patient’s progress.
  • AR/VR environments synced with the music content could create different walking or dance patterns to include ovals, spirals, serpentines, crossing paths with others, and dual task walking.
  • Dance rhythms such as a Tango have been shown to have benefits stemming from Neurologic Music Therapy (NMT) and RAS that can apply to the entire human body.
  • the ANR system can be configured to utilize AA techniques to enhance the entrainment process, provide environmental context to a person, and aid in the AR experience.
  • the system can be configured to generate the exemplary AA experiences further described herein based on inputs taken from the environment, sensor data, AR environments, entrainment, and other methods.
  • An example of AA of a therapy/medical use case would be to address safety concerns and mitigate risk to patients who are performing therapy exercises.
  • the ANR system can be configured to improve situational awareness while listening to music with headphones by instantaneously mixing external sounds that exceed a minimum audio loudness threshold into the therapy’s rhythmic and audio cueing content.
  • An example of an external sound would be the honking of a car or the siren of an emergency vehicle, which would automatically interrupt the normal auditory stimulus to make the person aware of the potential danger.
  • the listening apparatus could have additional microphones and digital signal processing dedicated to performing this task.
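  • A frame-wise sketch of this threshold-based mixing, assuming normalized mono buffers and an RMS loudness measure; the threshold, ducking gain, and frame size below are illustrative assumptions:

```python
import numpy as np

def mix_with_passthrough(music, mic, threshold_rms=0.1, duck=0.3, frame=1024):
    """When an external-microphone frame exceeds an RMS loudness threshold,
    duck the therapy audio and mix the ambient sound into the output."""
    out = np.copy(music)
    for start in range(0, len(music) - frame, frame):
        seg = mic[start:start + frame]
        if np.sqrt(np.mean(seg ** 2)) > threshold_rms:
            out[start:start + frame] = duck * music[start:start + frame] + seg
    return np.clip(out, -1.0, 1.0)

# Example with synthetic signals (1 s at 16 kHz, siren-like burst mid-stream).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
music = 0.2 * np.sin(2 * np.pi * 220 * t)
mic = np.zeros(sr)
mic[8000:12000] = 0.5 * np.sin(2 * np.pi * 900 * t[8000:12000])
mixed = mix_with_passthrough(music, mic)
```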
  • the ANR system implementing AA can be configured to combine aspects of AA and the manipulation of spatial perception by aligning the rhythmic auditory cueing with a patient’s “affected side” while they are performing a walking therapy session. If, for example, the right side of the patient requires a greater degree of therapy, the audio cueing content can be spatially aligned with the right side for emphasis.
  • Exemplary systems and methods for neurologic rehabilitation using techniques for side-specific rhythmic auditory stimulus are disclosed in co-pending and commonly assigned U.S. Patent Application Number 62/934,457, titled SYSTEMS AND METHODS FOR NEUROLOGIC REHABILITATION, filed on November 12, 2019, which is hereby incorporated by reference as if set forth in its entirety herein.
  • the ANR system implementing AA can be configured to provide unique auditory cueing to increase spatial awareness of head position while gait training, encouraging the user to keep head up, at midline and eyes forward, improving balance and spatial awareness while going through an entrainment process or other CTA experience.
  • the ANR system implementing AA can be configured to provide binaural beat sound and tie it into human physiology (e.g. breathing rate, electrical brain activity (EEG) and heart rate) to improve cognition and enhance memory.
  • the ANR system can be configured to provide the binaural beat audio signal input in complement to RAS signal input.
  • the real-time entrainment and quality of gait measurements being made by the system would likewise be complemented by physiological measurements.
  • the system as configured for binaural beat audio uses differential frequency signals output to the left and right ears, whose difference is 40 Hz, the “gamma” frequency of neural oscillation. These frequencies can reduce amyloid buildup in Alzheimer’s patients and can help with cognitive flexibility.
  • a second type of neural entrainment can be achieved simultaneously with the biomechanical-RAS entrainment.
  • the network hypothesis of brain activation implies that both walking and cognition would be impacted.
  • Such auditory sensory stimulation would therefore entrain neural oscillations in the brain while rhythmic auditory stimulation entrains the motor system.
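  • For illustration, a binaural beat with a 40 Hz differential can be synthesized by outputting two sinusoids whose frequencies differ by 40 Hz to the left and right channels; the carrier frequency in this sketch is an assumption:

```python
import numpy as np

def binaural_beat(base_hz=200.0, beat_hz=40.0, seconds=5.0, sr=44100):
    """Left and right channels at frequencies differing by beat_hz; the
    listener perceives the 40 Hz ("gamma") difference. base_hz is assumed."""
    t = np.linspace(0, seconds, int(sr * seconds), endpoint=False)
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape (n_samples, 2) stereo

stereo = binaural_beat()
print(stereo.shape)  # (220500, 2)
```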
  • the ANR system implementing AA can be configured to provide a phase coherent soundstage (e.g. the correct audio spatial perspective) when a patient rotates their head or changes its attitude.
  • a soundstage is the imaginary 3-D image created by stereo speakers or headphones; it allows the listener to accurately hear the location of sound sources.
  • An example of manipulating the soundstage in a therapeutic session would be keeping the voice sound of a virtual coach “in-front” of the patient, even while their head may be turned to the side. This feature could help avoid disorientation, thus creating a more stable, predictable and safe audio experience while performing the therapy.
  • This feature could be combined with the AR virtual coach/therapist projected in front of a person, as in FIG. 35. It could also be combined with knowledge of the course or the direction the person needs to take in the real world.
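  • A minimal sketch of keeping a virtual voice anchored in front of the patient’s body: render the source at its fixed azimuth minus the tracked head yaw, so the perceived location does not rotate with the head. The angle convention here is an assumption:

```python
def world_locked_azimuth(source_az_deg, head_yaw_deg):
    """Render azimuth for a source fixed in the body/world frame: subtract
    the head yaw and wrap the result to (-180, 180] degrees."""
    rel = source_az_deg - head_yaw_deg
    return (rel + 180.0) % 360.0 - 180.0

# Coach fixed straight ahead (0 deg); patient turns head 60 deg to the right:
print(world_locked_azimuth(0.0, 60.0))  # -> -60.0 (voice rendered to the left)
```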
  • the ANR system can be configured to combine AA with Augmented Reality (AR) in such a way to create virtual instrument therapy.
  • Instruments such as bells, drums, piano, and guitar can be common training tools for patients.
  • the patient can be given an immersive experience and the perception that they are physically playing an instrument. This could be modified for difficulty to help the progression of a patient over time and show improvements. Examples of modifications could include adding more keys on a piano or more strings on a guitar.
  • virtual sheet music or musical notation could be displayed in real time as the patient is playing the instruments, either virtual instruments or real instruments.
  • Other examples could be in combination with the concepts discussed in connection with FIG. 19, wherein the connected hardware could be replaced by AR. Similar logic could be applied to others of the documented interventions.
  • the ANR system can be configured to implement AA in combination with a telepresence to provide a spatially accurate audio experience for the therapist.
  • the audio could also be generated to create a perception of distance from, or nearness to, an object.
  • the ANR system can implement a type of AA, namely, a Rhythmic Stimulus Engine (RSE).
  • the rhythmic stimulus engine is a bespoke rhythmic auditory stimulus, which embodies the principles of entrainment to drive a therapeutic benefit while generating original and custom auditory rhythmic content for the patient.
  • An RSE could be configured to perform this continuous background rhythmic neurostimulation without the need to access pre-recorded music.
  • the ANR system can be configured to implement AA in combination with the Rhythmic Stimulus Engine (RSE) and AR to create a completely synchronized feedback state linking incoming biometric data and external audio inputs from the therapy environment to the generated rhythmic content and the AR and AA outputs.
  • the system can be configured to modulate the tempo of the rhythmic audio content generated by an RSE by the walking cadence of the patient in an interactive fashion.
  • the tempo and time signature of the rhythmic audio content generated by the RSE could be modulated, in an interactive fashion, by the entrainment precision and beat factor of the patient user, such as one using a cane or other assistive device.
  • an RSE could provide the neurostimulation that, in combination with assistive technologies such as exo-suits, exoskeletons and/or FES devices, increases the effectiveness of walking therapy.
  • an RSE could generate from a stored library of traditional dance rhythm templates, the rhythmic audio content that could extend therapy to the patient’s upper body and limbs. This could be extended to combine with AR techniques mentioned above, such as a dancing crowd or virtual dancefloor.
  • machine learning techniques, such as self-learning AI and/or a rules-based system, could generate rhythm in real time, moderated by inertial motion unit (IMU) inputs that report cadence and quality-of-gait parameters such as symmetry and gait cycle time variability.
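  • A sketch of one such rules-based moderation, assuming IMU-derived cadence, a symmetry index and gait cycle-time variability as inputs; all thresholds and step sizes are illustrative assumptions:

```python
def rse_next_tempo(tempo_bpm, cadence_spm, symmetry, cv_gait, step=2.0):
    """Nudge the generated rhythm's tempo toward the measured cadence, but
    ease off when gait quality (symmetry, cycle-time variability) degrades."""
    if cv_gait > 0.06 or symmetry < 0.85:
        return tempo_bpm - step                            # quality degraded: slow down
    delta = max(-step, min(step, cadence_spm - tempo_bpm))  # bounded pursuit
    return tempo_bpm + delta

print(rse_next_tempo(100.0, cadence_spm=104.0, symmetry=0.95, cv_gait=0.03))  # 102.0
```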
  • the ANR system can implement a type of AA, namely, Sonification, which means applying varying amounts of signal distortion to the music content depending on how close the patient is to a target goal.
  • the degree and type of sonification helps nudge the patient to a correction state.
  • the novel combination of sonification and entrainment could provide a feed-forward mechanism for auditory-motor synchrony through entrainment, while simultaneously providing a feedback mechanism via distortion of the music content tied to some other biomechanical or physiological parameter that the individual can adjust. For example, adding signal distortion to the music signal while increasing the volume of the rhythmic cueing could, in combination, have greater effectiveness than either method by itself.
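  • For illustration, the amount of distortion can be made proportional to the distance of a measured parameter from its target, e.g. with a soft-clipping stage whose drive grows outside a tolerance band; the mapping and constants below are assumptions:

```python
import numpy as np

def sonify(music, value, target, tolerance, max_drive=8.0):
    """Apply distortion scaled by how far `value` is from `target`: nearly
    clean inside the tolerance band, increasingly clipped outside it."""
    error = max(0.0, abs(value - target) / tolerance - 1.0)
    drive = 1.0 + min(error, 1.0) * (max_drive - 1.0)
    return np.tanh(drive * music) / np.tanh(drive)  # normalized soft clip

t = np.linspace(0, 1, 16000, endpoint=False)
music = 0.8 * np.sin(2 * np.pi * 220 * t)
on_target = sonify(music, value=1.00, target=1.0, tolerance=0.1)   # near-clean
off_target = sonify(music, value=1.35, target=1.0, tolerance=0.1)  # distorted
```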
  • the ANR system can implement a CTA in combination with a neurotoxin injection, as follows.
  • the CTA could apply the entrainment principle to work towards improving a motor function, such as gait.
  • the neurotoxin injection can also be used to target gait improvements by targeting spasticity in the muscles. These injections take 2-4 days to take effect and remain effective for up to 90 days (the effectiveness period).
  • the dosing of the CTA for entrainment purposes (e.g., the setting of one or more parameters of the CTA) can be coordinated with the effectiveness period of the injection.
  • the ANR system can be configured to calculate the entrainment parameter based on syncing of the heartbeat or the respiration rate to the music content, instead of the biomechanical movement parameter.
  • a use case is for people with agitation from various forms of dementia, Alzheimer’s, bipolar disorder, schizophrenia, etc.
  • the baseline parameter could be determined by the heart rate or respiration rate.
  • the entrainment or phased entrainment could be determined by the comparison of the music content to the heart rate or respiration. Additionally, goals could be set to lower the amount of agitation to enhance the quality of life of these people.
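  • A hypothetical sketch of scoring entrainment from heart rate instead of steps: compare the beat-to-beat rate to the music tempo, allowing simple integer ratios, and decay the score with relative error. The ratio set and decay constant are assumptions:

```python
def physiologic_entrainment(hr_bpm_series, music_bpm):
    """Score 1.0 when the heart rate matches the music tempo (or a simple
    subdivision/doubling of it), decaying toward 0.0 with relative error."""
    best = 0.0
    for ratio in (0.5, 1.0, 2.0):          # allow half- and double-time locking
        target = music_bpm * ratio
        err = sum(abs(hr - target) / target for hr in hr_bpm_series)
        err /= len(hr_bpm_series)
        best = max(best, max(0.0, 1.0 - err * 5.0))
    return best

print(round(physiologic_entrainment([72, 74, 71, 73], music_bpm=72), 2))
```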
  • systems for augmented neurologic rehabilitation of a patient in accordance with one or more embodiments of the disclosure can comprise one or more of the following points:
  • a computing system having one or more physical processors configured by software modules comprising machine-readable instructions.
  • the software modules can include a 3D AR modelling module that, when executed by the processor, configures the processor to generate and present augmented-reality visual and audio content to a patient during a therapy session.
  • the content includes visual elements moving in a prescribed spatial and temporal sequence and rhythmic audio elements output at a beat tempo.
  • the computing system also includes an input interface in communication with the processor for receiving inputs including time-stamped biomechanical data of the patient relating to the movements performed by the patient in relation to the AR visual and audio content and physiological parameters measured using one or more sensors associated with the patient.
  • the software modules also include a critical thinking algorithm that configures the processor to analyze the time-stamped biomechanical data to determine a spatial and temporal relationship of the patient’s movements relative to the visual and audio elements and determine a level of entrainment of the patient relative to a target physiological parameter. Additionally, the 3D AR modelling module further configures the processor to dynamically adjust the augmented-reality visual and audio content output to the patient based on the determined level of entrainment relative to the target parameter.
  • the above systems, devices, methods, processes, and the like may be realized in hardware, software, or any combination of these suitable for an application.
  • the hardware may include a general-purpose computer and/or dedicated computing device. This includes realization in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices or processing circuitry, along with internal and/or external memory. This may also, or instead, include one or more application specific integrated circuits, programmable gate arrays, programmable array logic components, or any other device or devices that may be configured to process electronic signals.
  • a realization of the processes or devices described above may include computer-executable code created using a structured programming language such as C, an object-oriented programming language such as C++, or any other high-level or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies) that may be stored, compiled or interpreted to run on one of the above devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software.
  • the methods may be embodied in systems that perform the steps thereof, and may be distributed across devices in several ways. At the same time, processing may be distributed across devices such as the various systems described above, or all the functionality may be integrated into a dedicated, standalone device or other hardware.
  • Embodiments disclosed herein may include computer program products comprising computer-executable code or computer-usable code that, when executing on one or more computing devices, performs any and/or all the steps thereof.
  • the code may be stored in a non-transitory fashion in a computer memory, which may be a memory from which the program executes (such as random access memory associated with a processor), or a storage device such as a disk drive, flash memory or any other optical, electromagnetic, magnetic, infrared or other device or combination of devices.
  • any of the systems and methods described above may be embodied in any suitable transmission or propagation medium carrying computer-executable code and/or any inputs or outputs from same.
  • performing the step of X includes any suitable method for causing another party such as a remote user, a remote processing resource (e.g., a server or cloud computer) or a machine to perform the step of X.
  • performing steps X, Y and Z may include any method of directing or controlling any combination of such other individuals or resources to perform steps X, Y and Z to obtain the benefit of such steps.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Pulmonology (AREA)
  • Cardiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Psychiatry (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Fuzzy Systems (AREA)
  • Social Psychology (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Child & Adolescent Psychology (AREA)
  • Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Rehabilitation Tools (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • User Interface Of Digital Computer (AREA)
  • Medical Treatment And Welfare Office Work (AREA)
EP21845844.6A 2020-07-21 2021-07-21 SYSTEMS AND METHODS FOR ENHANCED NEUROLOGICAL REHABILITATION Pending EP4185192A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063054599P 2020-07-21 2020-07-21
PCT/US2021/042606 WO2022020493A1 (en) 2020-07-21 2021-07-21 Systems and methods for augmented neurologic rehabilitation

Publications (2)

Publication Number Publication Date
EP4185192A1 true EP4185192A1 (en) 2023-05-31
EP4185192A4 EP4185192A4 (en) 2024-08-21

Family

ID=79728942

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21845844.6A Pending EP4185192A4 (en) 2020-07-21 2021-07-21 SYSTEMS AND METHODS FOR ENHANCED NEUROLOGICAL REHABILITATION

Country Status (6)

Country Link
EP (1) EP4185192A4 (ko)
JP (1) JP2023537681A (ko)
KR (1) KR20230042066A (ko)
CN (1) CN116096289A (ko)
CA (1) CA3186120A1 (ko)
WO (1) WO2022020493A1 (ko)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115316982B (zh) * 2022-09-02 2024-08-20 中国科学院沈阳自动化研究所 Intelligent muscle deformation detection system and method based on multimodal sensing
CN115868967A (zh) * 2023-01-10 2023-03-31 杭州程天科技发展有限公司 IMU-based human motion capture method, system and storage medium
JP7449463B1 2023-11-06 2024-03-14 株式会社Tree Oceans Walking-assistance wearable device, control method, and program
CN117594245B (zh) * 2024-01-18 2024-03-22 凝动万生医疗科技(武汉)有限公司 Orthopedic patient rehabilitation progress tracking method and system
CN117766098B (zh) * 2024-02-21 2024-07-05 江苏森讯达智能科技有限公司 Fitness optimization training method and system based on virtual reality technology
CN117929173B (zh) * 2024-03-18 2024-07-12 中国汽车技术研究中心有限公司 Benchmarking method and device for mechanical property testing of rib components of automotive crash dummies

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6644976B2 (en) * 2001-09-10 2003-11-11 Epoch Innovations Ltd Apparatus, method and computer program product to produce or direct movements in synergic timed correlation with physiological activity
US20070074617A1 (en) * 2005-10-04 2007-04-05 Linda Vergo System and method for tailoring music to an activity
US9675776B2 (en) * 2013-01-20 2017-06-13 The Block System, Inc. Multi-sensory therapeutic system
CN109875501B (zh) * 2013-09-25 2022-06-07 曼德美姿集团股份公司 Physiological parameter measurement and feedback system
US10448888B2 (en) * 2016-04-14 2019-10-22 MedRhythms, Inc. Systems and methods for neurologic rehabilitation
KR102491130B1 (ko) * 2016-06-20 2023-01-19 Magic Leap, Inc. Augmented reality display system for evaluation and modification of neurological conditions, including visual processing and perception conditions
KR20230150407A (ko) * 2017-07-24 2023-10-30 MedRhythms, Inc. Music enhancement for repetitive motion activities

Also Published As

Publication number Publication date
CA3186120A1 (en) 2022-01-27
JP2023537681A (ja) 2023-09-05
CN116096289A (zh) 2023-05-09
EP4185192A4 (en) 2024-08-21
WO2022020493A1 (en) 2022-01-27
KR20230042066A (ko) 2023-03-27

Similar Documents

Publication Publication Date Title
US11779274B2 (en) Systems and methods for neurologic rehabilitation
US12127851B2 (en) Systems and methods for augmented neurologic rehabilitation
EP4185192A1 (en) Systems and methods for augmented neurologic rehabilitation
US12128270B2 (en) Systems and methods for neurologic rehabilitation
US10593349B2 (en) Emotional interaction apparatus
US11690530B2 (en) Entrainment sonification techniques
EP3341093A1 (en) Systems and methods for movement skill analysis and skill augmentation and cueing
US11786147B2 (en) Distributed sensor-actuator system for synchronized movement
US20230364469A1 (en) Distributed sensor-actuator system for synchronized motion
JP7510499B2 (ja) 神経性リハビリテーションのためのシステムおよび方法
Sunela Real-time musical sonification in rehabilitation technology

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230119

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230629

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20240719

RIC1 Information provided on ipc code assigned before grant

Ipc: G09B 19/00 20060101ALI20240715BHEP

Ipc: G06F 3/01 20060101ALI20240715BHEP

Ipc: A61B 5/16 20060101ALI20240715BHEP

Ipc: A61B 5/08 20060101ALI20240715BHEP

Ipc: A61B 5/0205 20060101ALI20240715BHEP

Ipc: A61B 5/11 20060101ALI20240715BHEP

Ipc: A61B 5/103 20060101ALI20240715BHEP

Ipc: A61B 5/00 20060101AFI20240715BHEP