CN116096289A - Systems and methods for enhancing neurological rehabilitation - Google Patents

Systems and methods for enhancing neurological rehabilitation

Publication number: CN116096289A
Application number: CN202180062559.5A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: Owen McCarthy, Brian Harris, Alex Kalpaxis, Jeffrey Zhu, Brian Bousquet Smith, Eric Richardson
Current assignee: Medrhythms Inc
Original assignee: Medrhythms Inc
Application filed by Medrhythms Inc
Legal status: Pending

Classifications

    • A61B 5/02055: Simultaneously evaluating both cardiovascular condition and temperature
    • A61B 5/0816: Measuring devices for examining respiratory frequency
    • A61B 5/1038: Measuring plantar pressure during gait
    • A61B 5/112: Gait analysis
    • A61B 5/4836: Diagnosis combined with treatment in closed-loop systems or methods
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • A61B 5/7455: Notification to user or communication with user or patient characterised by tactile indication, e.g. vibration or electrical stimulation
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/1128: Measuring movement of the entire body or parts thereof using image analysis
    • A61B 5/486: Bio-feedback
    • A61B 5/6824: Detecting, measuring or recording means specially adapted to be attached to or worn on the arm or wrist
    • A61B 5/6829: Detecting, measuring or recording means specially adapted to be attached to or worn on the foot or ankle
    • A61B 5/7225: Details of analog processing, e.g. isolation amplifier, gain or sensitivity adjustment, filtering, baseline or drift compensation
    • A61B 5/7405: Notification to user or communication with user or patient using sound
    • A61B 5/742: Notification to user or communication with user or patient using visual displays

Abstract

A method and system for augmented neurologic rehabilitation (ANR) of a patient is disclosed. The ANR system generates Rhythmic Auditory Stimuli (RAS) and visual Augmented Reality (AR) scenes that are synchronized to a common beat tempo and output to the patient during a treatment session. Sensors worn by the patient capture biomechanical data relating to repetitive movements performed by the patient in time with the AR visual content and the RAS. A critical thinking algorithm analyzes the sensor data to determine the spatial and temporal relationship of the patient's motion to the visual and audio elements, and to determine the patient's entrainment level and progress toward clinical/therapeutic objectives. Further, a 3D AR modeling module configures the processor to dynamically adjust the augmented reality visual and audio content output to the patient based on the determined entrainment level and on whether the training objectives have been achieved.

Description

Systems and methods for enhancing neurological rehabilitation
Cross Reference to Related Applications
The present application is based on and claims priority to and the benefit of U.S. provisional patent application No. 63/054,599, entitled "Systems and Methods for Augmented Neurologic Rehabilitation," filed by McCarthy et al. on July 21, 2020, and is further a continuation-in-part of U.S. patent application No. 16/569,388, entitled "Systems and Methods for Neurologic Rehabilitation," by McCarthy et al., which is a continuation of the application that issued on October 22, 2019 as U.S. Patent No. 10,448,888, entitled "Systems and Methods for Neurologic Rehabilitation"; U.S. Patent No. 10,448,888 is in turn based on and claims priority to U.S. provisional patent application No. 62/322,504, entitled "Systems and Methods for Neurologic Rehabilitation," filed April 14, 2016. Each of the foregoing is incorporated herein by reference as if set forth in its entirety herein.
Technical Field
The present disclosure relates generally to systems and methods for rehabilitating a physically impaired user by providing musical therapy.
Background
Numerous controlled studies over the past decade have highlighted the clinical role of music in neurological rehabilitation. For example, it is well established that regular musical therapy can directly improve cognition, motor function, and language. Listening to music enhances brain activity in many forms, engaging a broad bilateral network of brain regions related to attention, semantic processing, memory, cognition, motor function, and emotional processing.
Clinical data support the use of musical therapy to enhance memory, attention, executive function, and mood. PET studies of the neural mechanisms underlying music indicate that pleasurable music can stimulate a broad network of cortical and subcortical areas, including the ventral striatum, nucleus accumbens, amygdala, insula, hippocampus, hypothalamus, ventral tegmental area, anterior cingulate, orbitofrontal cortex, and ventromedial prefrontal cortex. The ventral tegmental area produces dopamine and communicates directly with the amygdala, hippocampus, anterior cingulate, and prefrontal cortex. This mesocorticolimbic system, which can be activated by music, plays a key role in regulating arousal, mood, reward, memory, attention, and executive function.
Neuroscience research has revealed how the basic organizational processes of memory formation in music share mechanisms with non-musical memory processes. The foundations of phrase grouping, hierarchical abstraction, and musical patterning closely resemble the temporal chunking principles of non-musical memory processes. This suggests that music-activated memory processes can transfer to and enhance non-musical processes.
Accordingly, there remains a need for improved devices, systems, and methods for rehabilitating patients with neurologic impairment through music-based therapy.
Disclosure of Invention
In one aspect of the disclosed subject matter, a system for enhancing neurological rehabilitation of a patient is provided. The system includes a computing system having a processor configured by a software module including machine readable instructions stored in a non-transitory storage medium.
The software modules include an AA/AR modeling module that, when executed by the processor, configures the processor to generate augmented reality (AR) visual content and rhythmic auditory stimuli (RAS) for output to the patient during a treatment session. In particular, the RAS includes a beat signal output at a beat tempo, and the AR visual content includes visual elements that move in a prescribed spatial and temporal sequence based on the beat tempo.
The system also includes an input interface in communication with the processor for receiving real-time patient data, including time-stamped biomechanical data of the patient relating to repetitive movements performed by the patient in time with the AR visual content and the RAS. In particular, the biomechanical data is measured using sensors associated with the patient.
The software modules also include a critical thinking algorithm (CTA) module that configures the processor to analyze the time-stamped biomechanical data to determine the temporal relationship of the patient's repetitive motion relative to the visual elements and the beat signal output at the beat tempo, and thereby determine an entrainment level relative to a target parameter. In addition, the AA/AR modeling module further configures the processor to dynamically adjust the AR visual content and RAS output to the patient synchronously and based on the determined entrainment level.
According to another aspect, a method for enhancing neurological rehabilitation of a patient with physical injury is provided. The method is implemented on a computer system having a physical processor configured with machine-readable instructions that, when executed, perform the method.
The method includes the step of providing rhythmic auditory stimuli (RAS) for output to the patient via an audio output device during a treatment session. In particular, the RAS includes a beat signal output at a beat tempo.
The method further includes the step of generating augmented reality (AR) visual content for output to the patient via an AR display device. In particular, the AR visual content includes visual elements that move in a prescribed spatial and temporal sequence based on the beat tempo and that are output in synchronization with the RAS. The method further includes the step of instructing the patient to perform repetitive movements in time with the beat signal of the RAS and the corresponding movements of the visual elements of the AR visual content.
The method further includes the step of receiving real-time patient data, including time-stamped biomechanical data of the patient relating to the repetitive movements performed by the patient in time with the AR visual content and the RAS. In particular, the biomechanical data is measured using sensors associated with the patient.
The method further includes the step of analyzing the time-stamped biomechanical data to determine the temporal relationship of the patient's repetitive motion relative to the visual elements and the beat signal output at the beat tempo, and thereby determine an entrainment potential. Furthermore, the method includes the steps of dynamically adjusting the AR visual content and the RAS for output to the patient synchronously and based on a determination that the entrainment potential does not meet a prescribed entrainment potential, and continuing the treatment session with the RAS using the adjusted AR visual content.
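For orientation only, the following is a minimal sketch of the closed-loop step described above. All class, function, and attribute names (SessionState, run_session_step, read_window, phase_offset_s, the 0.95 tempo adjustment, and the 10% beat-period tolerance) are illustrative assumptions and are not taken from the patent.

```python
# Hypothetical sketch of one closed-loop treatment step; names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class SessionState:
    beat_tempo_bpm: float          # current RAS beat tempo
    target_entrainment: float      # prescribed entrainment potential, 0..1

def estimate_entrainment(samples, tempo_bpm):
    """Placeholder metric: fraction of motion events landing near a beat (assumption)."""
    beat_period = 60.0 / tempo_bpm
    hits = [s for s in samples if abs(s.phase_offset_s) < 0.1 * beat_period]
    return len(hits) / max(len(samples), 1)

def run_session_step(state: SessionState, sensor, ar_display, audio_out):
    # Output the beat signal and the AR visual elements synchronized to the same tempo.
    audio_out.play_beat(state.beat_tempo_bpm)
    ar_display.render_visual_elements(state.beat_tempo_bpm)

    # Receive time-stamped biomechanical data for the most recent repetitions.
    samples = sensor.read_window()

    # Analyze the temporal relationship between motion events and beat times.
    entrainment = estimate_entrainment(samples, state.beat_tempo_bpm)

    # Dynamically adjust RAS (and, in step, the AR content) when entrainment misses the target.
    if entrainment < state.target_entrainment:
        state.beat_tempo_bpm *= 0.95   # e.g. slow the cue to help the patient lock in
    return entrainment
```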
Drawings
The foregoing and other objects, features and advantages of the apparatus, systems and methods described herein will be apparent from the following description of particular embodiments, as illustrated in the accompanying drawings. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the devices, systems and methods described herein.
FIG. 1 is a diagram illustrating a system for treating a user by providing musical therapy in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 2 is a diagram illustrating several components of a system for rehabilitating a user by providing musical therapy, according to an exemplary embodiment of the disclosed subject matter;
FIG. 3 is a schematic diagram of a sensor for measuring biomechanical motion of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 4 is a diagram illustrating several components of a system in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 5 illustrates an exemplary display of components of a system for rehabilitating a user by providing musical therapy, according to an exemplary embodiment of the disclosed subject matter;
FIG. 6 is a flow chart of one implementation of an analysis process in accordance with an exemplary embodiment of the disclosed subject matter;
FIGS. 7-10 are flowcharts of one implementation of a process according to an exemplary embodiment of the disclosed subject matter;
FIG. 11 is a time plot showing music and body movements of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIGS. 12-13 illustrate patient responses in accordance with an exemplary embodiment of the disclosed subject matter;
FIGS. 14-15 illustrate patient responses in accordance with an exemplary embodiment of the disclosed subject matter;
FIGS. 16-17 illustrate patient responses in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 18 illustrates an implementation of a technique for gait training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 19 illustrates an implementation of a technique for neglect training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 20 illustrates an implementation of a technique for intonation training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 21 illustrates an implementation of a technique for musical stimulation training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 22 illustrates an implementation of a technique for gross motor training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 23 illustrates an implementation of a technique for grip training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 24 illustrates an implementation of a technique for voice prompt training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 25 illustrates an implementation of a technique for training a minimally conscious patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIGS. 26-28 illustrate implementations of techniques for attention training of a patient in accordance with exemplary embodiments of the disclosed subject matter;
FIG. 29 illustrates an implementation of a technique for dexterity training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 30 illustrates an implementation of a technique for oral motor training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 31 illustrates an implementation of a technique for respiratory training of a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 32 is a diagram illustrating an enhanced neurological rehabilitation, restoration, or maintenance ("ANR") system for providing therapy to a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 33 is a graphical visualization of parameters, system responses, and index/target parameters measured during a treatment session performed using the ANR system of FIG. 32, in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 34 is a graph depicting exemplary results related to metabolic changes resulting from a training session performed using the ANR system of FIG. 32, in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 35 is an exemplary Augmented Reality (AR) display generated by the ANR system for display to a patient during a treatment session in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 36A is an exemplary AR display generated by the ANR system for display to a patient during a treatment session in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 36B is an exemplary AR display generated by the ANR system for display to a patient during a treatment session in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 37 illustrates an implementation of a technique for gait training by providing enhanced audio and visual stimuli to a patient in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 38 is a hybrid system and process diagram conceptually illustrating an ANR system configured for implementing gait training techniques in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 39 is a hybrid system and process diagram conceptually illustrating enhanced audio (AA) device components of the ANR system of FIG. 38, in accordance with an exemplary embodiment of the disclosed subject matter;
FIG. 40 is an exemplary AR display generated by the ANR system for display to a patient during a treatment session in accordance with an exemplary embodiment of the disclosed subject matter; and
Fig. 41 is an exemplary AR display generated by the ANR system for display to a patient during a treatment session in accordance with an exemplary embodiment of the disclosed subject matter.
Detailed Description
The present invention relates generally to systems, methods, and apparatus for implementing a dynamic closed-loop rehabilitation platform system that monitors and directs changes in human behavior and function. Such changes are changes in language, movement, and cognition that are triggered in time by musical tempo, harmony, melody, and dynamics cues.
In various embodiments of the present invention, a dynamic closed-loop rehabilitation platform music therapy system 100, shown in FIG. 1, is provided that includes a sensor assembly and system 102, an edge processing assembly 104, a collector assembly 106, an analysis system 108, and a music therapy center 110. As will be described in more detail below, the sensor components, edge processing components, collector components, machine learning processes, and music therapy center may be provided on various hardware components. For example, in one embodiment, the sensor assembly and edge processing assembly may be worn by or positioned on the patient. In such an embodiment, the collector assembly and music therapy center may be provided on a handheld device, and the analysis system may be located on a remote server.
Sensor system
Throughout the description herein, the term "patient" is used to refer to an individual receiving musical therapy. The term "therapist" refers to an individual who provides musical therapy. In some embodiments, the patient is able to interact with the systems described herein to administer the treatment without the presence of a therapist.
The sensor assembly 102 provides sensed biomechanical data about the patient. In some embodiments, the sensor assembly may include (1) a wearable wireless real-time motion sensing device or inertial measurement unit (IMU), (2) a wearable wireless real-time combined multi-zone plantar pressure/6-dimensional motion capture (IMU) device, such as sensor 200, (3) a wearable wireless real-time electromyography (EMG) device, such as sensor 208, and (4) a real-time wireless near-infrared (NIR) video capture device, such as imaging device 206 (see FIG. 4).
As shown in FIG. 2, the systems and methods described herein may be used to treat a patient's gait disorder. Accordingly, the example sensor 200 may be a combined multi-zone plantar pressure/6-degree-of-freedom motion capture device. As the patient walks during the musical therapy session, the sensor 200 records the patient's foot pressure and 6-degree-of-freedom motion profile. In some embodiments, the foot pressure/6-degree-of-freedom motion capture device has a variable recording duration interval, with a sampling rate of 100 Hz for a foot pressure profile comprising 1 to 4 zones, resulting in 100 to 400 pressure data points per foot per second.
Sensor 200 may include a foot pad 202 ranging from a heel pad (for measuring one pressure zone, such as heel strike pressure) to a full insole (for measuring 4 pressure zones). Pressure measurements are made by sensing changes in resistance of the transducer material due to compression caused by the patient's weight being transferred to the foot. These foot pressure maps are obtained for each sampling interval or at particular times during a musical therapy session.
The sensor 200 may include a 6-dimensional motion capture device 204 that detects motion changes through a 6-degree-of-freedom microelectromechanical systems (MEMS) based sensor, which determines the three-dimensional linear accelerations Ax, Ay, Az as well as the rotational motions of pitch, yaw, and roll. Sampling at 100 Hz produces 600 motion data points per second. These foot motion captures are obtained for each sampling interval or at particular times during a musical therapy session.
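For illustration, one way to represent a single fused sample from the combined plantar-pressure/6-degree-of-freedom device is sketched below. The field names and layout are assumptions for clarity; the patent does not specify a data structure.

```python
from dataclasses import dataclass

@dataclass
class FootSensorSample:
    """One 100 Hz sample from the combined pressure / 6-DOF IMU device (illustrative only)."""
    t: float          # timestamp, seconds
    ax: float         # linear acceleration, x (lateral to the foot sensor)
    ay: float         # linear acceleration, y (anterior)
    az: float         # linear acceleration, z (vertical)
    pitch: float      # rotational motion
    yaw: float
    roll: float
    pressure: tuple   # 1 to 4 plantar pressure zones, e.g. (heel,) or (heel, lateral, medial, toe)

# At 100 Hz, each foot yields 600 motion data points per second (6 axes x 100 samples),
# plus 100-400 pressure points per second depending on the number of zones, as described above.
```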
Multi-zone pressure sensing with a 6-degree-of-freedom motion capture device allows for mappable spatial and temporal gait dynamics tracking while walking. A schematic of the sensor 200 is shown in fig. 3.
From a system perspective, as shown in FIG. 4, patient P uses two foot sensors 200, one for each foot, denoted right 200R and left 200L, respectively. In an exemplary embodiment, right foot sensor 200R wirelessly transmits time-stamped inertial measurement unit data and heel-strike pressure data on a first channel (e.g., channel 5) in the IEEE 802.15.4 direct sequence spread spectrum (DSSS) RF band. Left foot sensor 200L wirelessly transmits time-stamped inertial measurement unit data and heel-strike pressure data on a second channel (e.g., channel 6) in the IEEE 802.15.4 DSSS RF band. As described below, the tablet or laptop 220 optionally used by therapist T includes a wireless USB hub containing two IEEE 802.15.4 DSSS RF transceivers tuned to the first and second channels (e.g., channels 5 and 6) to capture right-foot/left-foot sensor RF data. A handheld wireless trigger 250 is used to start and stop video and/or to mark and index time streams, as discussed in more detail below.
The video analytics domain may be used to extract patient semantic and event information about the treatment session. Patient actions and interactions are components of the treatment that affect the treatment environment and team. In some embodiments, one or more image capture devices 206 (e.g., video cameras) (see FIG. 4) are used with time-synchronized video feeds. Any suitable video may be incorporated into the system to capture patient motion; however, near-infrared (NIR) video capture is useful for protecting patient privacy and reducing the video data to be processed. The NIR video capture device captures NIR video images of the patient's body, such as the position of the patient's torso and limbs. In addition, it captures the patient's real-time dynamic gait characteristics during the musical therapy session. In some embodiments, the video is captured with a stationary camera, and the background is subtracted to segment out the foreground pixels.
As shown in fig. 4, when a treatment session begins, the tablet or laptop application triggers one or more video cameras 206. The therapist may stop or activate the video camera 206 by holding the wireless trigger unit 250. This allows for creating a marked timestamp index in the captured biomechanical sensor data and video data stream.
In some embodiments, the patient may wear a wearable wireless real-time electromyography (EMG) device 208. The EMG sensors provide the full bilateral lower-limb profile of primary muscle activation for the movement. Such sensors provide data regarding the exact timing of muscle activation.
Edge processing
In some embodiments, edge processing is performed at the sensor 200, where sensor data is captured from the IMU and the pressure sensor. The sensor data is filtered, grouped into various array sizes for further processing into frames reflecting the extracted attributes and features, and these frames are sent, for example wirelessly, to a collector 106 on a tablet or laptop. It should be appreciated that raw biomechanical sensor data obtained from sensor 200 may alternatively be transmitted to a remote processor where the edge processing functions are performed.
The wearable sensors 200, 208 and the video capture device 206 generate sensor data streams that are processed together to facilitate biomechanical feature extraction and classification. Sensor fusion, which combines the outputs of multiple sensors capturing a common event, produces better results than any single constituent sensor input.
Capturing patient activity in the musical therapy environment makes it possible to formalize the interactions applied in musical therapy and to develop patient-specific and generalized formal indicators of musical therapy performance and efficacy. Video features are extracted and then analyzed to capture semantic, high-level information about patient behavior.
In processing the video, a background model is created using learned background differencing techniques that account for changes in lighting conditions and occlusions in the physical region where the musical therapy occurs. The result of the background differencing is a binary foreground map of foreground blobs with two-dimensional contours. The video is thus segmented into individual image frames for further image processing and sensor fusion. The video information is supplemented with additional metadata by combining it with edge-processed sensor data from the IMU, foot pads, and EMG sensors. The sensor data may be time-synchronized with the other data using RF triggers. The data may be sent directly to the collector, stored in memory on the internal board, or analyzed at the edge using the OpenCV library.
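A minimal OpenCV sketch of the learned background differencing and foreground-contour extraction described above follows. The choice of the MOG2 subtractor and the specific parameter values are assumptions; the patent names the OpenCV library but not a particular background-modeling algorithm.

```python
import cv2

# Learned background model that adapts to lighting changes (MOG2 is one common choice;
# the patent does not name a specific algorithm, so this is an illustrative assumption).
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

def foreground_contours(frame):
    """Return the binary foreground map and 2-D contours of the foreground blobs."""
    fg_mask = bg_model.apply(frame)                  # background differencing
    fg_mask = cv2.medianBlur(fg_mask, 5)             # suppress speckle noise
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return fg_mask, contours
```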
The edge processor 104 may be a microprocessor, such as a 32-bit microprocessor, that is incorporated into a foot pressure/6 degree of freedom motion capture device capable of fast multi-zone scanning at a rate of 100 to 400 full foot pressure/6 degree of freedom motion profiles per second.
The foot pressure/6-degree-of-freedom motion capture device collects foot pressure/6-degree-of-freedom motion distribution data for real-time gait analysis, from which feature extraction and classification are performed. In some embodiments, the foot pressure/6-degree-of-freedom motion capture device initializes the microcontroller unit (MCU), continuous operator process (COP), general purpose input/output (GPIO), serial peripheral interface (SPI), and interrupt requests (IRQ), and sets the desired RF transceiver clock frequency by calling routines including MCUInit (microcontroller unit initialization), GPIOInit, SPIInit, IRQInit, IRQACK (interrupt request acknowledge), SPIDrvRead (serial peripheral interface driver read), and IRQPinEnable. MCUInit is the master initialization routine that disables the MCU watchdog and sets the timer module prescaler to 32 using the bus clock (BUSCLK) as a reference.
The state variable gu8RTxMode is set to SYSTEM_RESET_MODE, and the routines GPIOInit, SPIInit, and IRQInit are called. The state variable gu8RTxMode is then set to RF_TRANSCEIVER_RESET_MODE, and IRQFLAG is checked to see whether an IRQ is asserted. The RF transceiver interrupts are first cleared using SPIDrvRead, and the RF transceiver is then checked for the ATTNIRQ interrupt. Finally, for MCUInit, PLMEPhyReset is called to reset the physical MAC layer, IRQACK acknowledges any pending IRQ interrupt, and IRQPinEnable enables the IRQ pin interrupt (IE) on the negative edge of the signal.
The foot pressure/6 degree of freedom motion sensor 200 will wait for a response from the foot pressure/6 degree of freedom motion collection node, e.g., 250 milliseconds, to determine whether a default full foot pressure scan will be made or a mapped foot pressure scan will be initiated. In the case of a mapping foot pressure scan, the foot pressure/6 degree of freedom motion acquisition node will send foot pressure scan mapping configuration data to the appropriate electrode.
One aspect of the analysis pipeline is the feature set engineering process, which defines the captured sensor values and the sensor fusion values generated from them that are used to create feature vectors defining the input data structure for analysis. Representative values are Ax(i), Ay(i), Az(i), and Ph(i), where i is the i-th sample; Ax(i) is the acceleration in the x-direction, which is lateral to the foot sensor; Ay(i) is the acceleration in the y-direction, which is anterior to the foot sensor; Az(i) is the acceleration in the z-direction, which is upward relative to the foot sensor; and Ph(i) is the heel strike pressure. The sensor values are shown in Table 1:
[Table 1: captured sensor values (presented as an image in the original)]
In some embodiments, the sensor fusion technique uses the heel-strike pressure value Ph(i) to "gate" the analysis of the following exemplary feature values and derive a data window, as described below. For example, "start" may be determined based on the heel pressure exceeding a threshold indicative of heel strike, and "stop" may be determined based on the heel pressure falling below a threshold indicative of heel lift, as shown in Table 2 below. It will be appreciated that heel strike pressure is one example of a parameter that may be used for the "gating" analysis. In some embodiments, "gating" is determined using IMU sensor data, video data, and/or EMG data.
[Table 2: gated feature values (presented as an image in the original)]
Higher level feature values are calculated from the fused sensor values, such as the example values in table 3:
[Table 3: higher-level feature values computed from the fused sensor values (presented as an image in the original)]
The systems described herein are capable of "gating" patient biomechanical data, or providing a "window" onto the patient biomechanical data. Gating of biomechanical data is particularly useful for repetitive patient movements, such as repeated strides while the patient is walking. Sensor data from one or more sources, such as pressure sensors, IMU sensors, video data, and EMG data, is used to identify a period of motion that repeats over time. For example, as a patient walks, foot pressure repeatedly increases and decreases as the patient's foot contacts the ground and is then lifted off the ground. Likewise, the speed of the foot increases as the foot moves forward and drops to zero when the foot is placed on the ground. As a further example, the Y position, or height, of the patient's foot cycles between a low position (at ground level) and a high position (at about mid-stride). The "gating" technique identifies repeated periods, or "windows," in these data. For a walking patient, the cycle repeats with each step. Although there may be differences between cycles (e.g., between steps), certain patterns repeat with each cycle. Selecting the start time of each cycle involves locating an identifiable point (maximum or minimum) of a biomechanical parameter. The parameter selected for the start time depends on the available data. Thus, in some embodiments, the time at which the heel strike pressure exceeds a threshold may be used to calibrate the start time of each cycle. (See, e.g., FIG. 5; pressures 316a and 316b exhibit cyclic characteristics, and the "start" may be determined when the pressure exceeds a threshold.) Similarly, the start time may be calibrated to when the foot speed drops to zero.
In some embodiments, raw frame data is preprocessed to obtain and "gate" the streaming data, e.g., to identify a window; the data within the window is then analyzed to identify outliers, and the data is further analyzed, e.g., exponentially averaged, across multiple windows. Fusion of sensor data (by including both IMU data and heel strike pressure data) allows the start time of a single stride or other repetitive motion unit to be identified more accurately than with data from a single sensor. The sensor data captured within a single stride is treated as a "window," and the information extracted from this analysis includes, for example, stride length, stride count, cadence, time of stride occurrence, distance traveled, stance/swing, double support time, speed, symmetry analysis (e.g., between left and right legs), outward swing, foot strike, power vector, lateral acceleration, stride width, the variability of each of these dimensions, additional parameters derived from the above information, and the like. Feature extraction may be processed on a microprocessor chip (e.g., a 32-bit chip). The wireless capture of synchronously gated biomechanical sensor data together with the video data capture function allows time series templates to be created.
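The sketch below illustrates the heel-strike gating and per-stride feature extraction described above. The threshold values, function names, and the particular features computed are illustrative assumptions chosen for clarity, not values taken from the patent.

```python
import numpy as np

HEEL_STRIKE_THRESHOLD = 20.0   # pressure units; illustrative value, tuned per patient
HEEL_LIFT_THRESHOLD = 5.0

def gate_strides(heel_pressure):
    """Split a pressure time series into per-stride windows via heel-strike gating.

    "start" = pressure rises above the heel-strike threshold,
    "stop"  = pressure falls below the heel-lift threshold.
    Returns a list of (start_index, stop_index) windows.
    """
    windows, start = [], None
    for i in range(1, len(heel_pressure)):
        if start is None and heel_pressure[i] >= HEEL_STRIKE_THRESHOLD > heel_pressure[i - 1]:
            start = i
        elif start is not None and heel_pressure[i] < HEEL_LIFT_THRESHOLD <= heel_pressure[i - 1]:
            windows.append((start, i))
            start = None
    return windows

def stride_features(t, windows):
    """Example per-stride features derived from the gated windows (illustrative only)."""
    starts = [s for s, _ in windows]
    stride_times = np.diff([t[s] for s in starts])            # heel strike to heel strike
    stance_times = [t[stop] - t[start] for start, stop in windows]
    cadence = 60.0 / float(np.mean(stride_times)) if len(stride_times) else 0.0
    return {"stride_count": len(windows),
            "mean_stride_time_s": float(np.mean(stride_times)) if len(stride_times) else 0.0,
            "mean_stance_time_s": float(np.mean(stance_times)) if stance_times else 0.0,
            "cadence_steps_per_min": cadence}
```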
The patient or therapist may index the data during the musical therapy session. The "gating" function described above may be used to relate an abnormal situation to a particular stride or step. For example, a therapist may observe a specific abnormality or behavior (e.g., an anomaly or event) in the patient's movements. The indexing function allows the therapist to "record" (e.g., capture) the abnormal condition or behavior through a user interface (e.g., the wireless trigger unit 250 shown in FIG. 4) or voice control on a handheld tablet or laptop. A marker may be created that contains a timestamp and an annotation, such as a "trip" of the patient while walking. Such indexing facilitates the creation of time series templates. These time series templates are studied to examine treatment session events and to develop time series templates for training machine learning algorithms, such as nonlinear multilayer perceptrons (NLMLP), convolutional neural networks (CNN), and recurrent neural networks (RNN) with long short-term memory (LSTM).
In one embodiment, a communication protocol is provided to transmit sensor data from the edge processing 104 (e.g., at the sensor 200) to the collector 106; see Table 4 below. In some embodiments, the RF link times out if the connection is idle for more than 100 ms.
[Table 4: communication protocol for transmitting sensor data from edge processing to the collector (presented as an image in the original)]
In one embodiment, foot pressure sensor zone scanning is performed by a FootScan routine, in which FootDataBufferIndex is initialized and the foot pressure sensor zone is activated by setting the MCU direction mode to output [PTCDD_PTCDDN=output] and driving the associated port line low [PTCDDPTCD6=0]. When a foot pressure sensor zone is activated based on the foot pressure sensor zone scan, the foot pressure sensor zone connected to the MCU analog signal port is sampled, and the current voltage reading is then converted into digital form (the instantaneous zone foot pressure).
Several variables (e.g., footDataBufferIndex and IMUBufferIndex) are used to prepare IEEE802.15.4RF packets gsTxPacket.gau8TxDataBuffer [ ], for transmitting data to be used in FootDataBuffer [ ] and IMUBBuffer [ ]. The RF data packets are sent using an RFSendRequest (& gstxppacket) routine. The routine checks to see if gu8RTxMode is set to idle_mode and uses gstxppacket as a pointer to call the RAMDrvWriteTx routine, which then calls spidrread to read the TX packet length register contents of the RF transceiver. Using these, the mask length setting is updated, then CRC is incremented by 2, and codeword sections are incremented by 2.
SPISendChar is called to send the 0x7E byte, which is the second code byte, and SPIWaitTransferDone is then called again to verify that the transfer is complete. After these code bytes are transmitted, the rest of the packet is transmitted using a for loop, where psTxPkt->u8DataLength+1 is the number of iterations of the SPISendChar, SPIWaitTransferDone, SPIClearRecieveDataReg sequence. Upon completion, the RF transceiver has loaded the data packet to be transmitted. ANTENNA_SWITCH is set to transmit, LNA_ON mode is enabled, and finally an RTXENAssert call is made to actually send the packet.
Collector device
The main functions of the collector 106 are to capture data from the edge processing 104, transmit data to the analysis system 108 and receive processed data from the analysis system 108, and transmit data to the music therapy center 110, as described below. In some embodiments, the collector 106 provides control functions, such as a user interface for logging in, configuring the system, and interacting with a user, and includes a display unit for visualizing/displaying data. The collector 106 may include a lightweight analysis or machine learning algorithm for classification (e.g., lateral tremors, asymmetry, instability, etc.).
The collector 106 receives body, motion, and positioning data from the edge processor 104. The data received at the collector 106 may be raw or processed at the edge 104 before being transmitted to the collector. For example, the collector 106 receives fused sensor data that has been "windowed" and feature extracted. The transmitted data may include two levels of data: (1) RF data packets sent from the right/left foot sensors as described in table 1, (2) RF data packets containing higher level attributes and features sent from the left/right foot sensors as described in tables 2 and 3. The collector 106 stores the data locally. In some embodiments, the collector 106 classifies the motion according to the received data, e.g., compares it to models stored locally (pre-downloaded from the analysis system) or sent to the analysis system for classification. The collector may include a display unit to visualize/display the data.
In some embodiments, collector 106 operates on a local computer that includes a memory, a processor, and a display. Exemplary devices on which the collector may be installed include augmented reality (AR) devices, virtual reality (VR) devices, tablet computers, mobile devices, laptop computers, desktop computers, and the like. FIG. 2 shows a handheld device 220 having a display 222 and performing the collector function. In some embodiments, the connection parameters for transferring data between the patient sensors and the collector are set using the device manager in Windows (e.g., baud rate: 38400; data bits: 8; parity: none; stop bits: 1). In some embodiments, the collector 106 includes a processor that is held or worn by the music therapy patient. In some embodiments, the collector 106 includes a processor that is remote from the patient and carried by the therapist, connected to the music therapy patient either wirelessly or via a wired connection.
In one embodiment, the foot pressure/6 degree of freedom motion collection node captures RF transmission data packets containing real-time foot pressure/6 degree of freedom motion distribution data from a foot pressure/6 degree of freedom motion capture device. This begins with a foot pressure/6 degree of freedom action collection node that creates an RF packet receive queue driven by a callback function upon interruption of RF transceiver packet reception.
When an RF data packet is received from the foot pressure/6-degree-of-freedom motion capture device 200, a check is first made to determine whether it comes from a new foot pressure/6-degree-of-freedom motion capture device or from an existing one. If it is from an existing device, the RF data packet sequence number is checked to confirm continuous synchronization before the data packet is analyzed further. If it is from a new device, a foot pressure/6-degree-of-freedom motion capture device environmental status block is created and initialized. The environmental status block includes information such as the foot pressure profile.
Above the RF packet session-level procedure for node-to-node communication is the analysis of the RF packet data payload. The payload contains the foot pressure profile based on the current variable pressures together with the 6-degree-of-freedom motion. The structure is as follows: | 0x10 | START | F1 | F2 | F3 | F4 | Ax | Ay | Az | Pi | Yi | Ri | XOR checksum |.
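A sketch of how such a payload might be packed and validated follows, assuming single-byte fields and the XOR checksum named above. The byte widths, the START marker value, and the scaling of sensor values to bytes are assumptions; the patent does not give byte-level field sizes.

```python
def xor_checksum(payload: bytes) -> int:
    """XOR of all payload bytes, as named in the packet structure above."""
    c = 0
    for b in payload:
        c ^= b
    return c

def build_packet(f1, f2, f3, f4, ax, ay, az, pitch, yaw, roll) -> bytes:
    """Pack a foot-pressure / 6-DOF payload: |0x10|START|F1..F4|Ax|Ay|Az|Pi|Yi|Ri|XOR|.

    Values are assumed already scaled to single bytes for illustration; 0x01 is an
    assumed START marker value.
    """
    body = bytes([0x10, 0x01, f1, f2, f3, f4, ax, ay, az, pitch, yaw, roll])
    return body + bytes([xor_checksum(body)])

def parse_packet(pkt: bytes) -> dict:
    body, checksum = pkt[:-1], pkt[-1]
    if xor_checksum(body) != checksum:
        raise ValueError("XOR checksum mismatch")
    return {"pressure_zones": list(body[2:6]),     # F1..F4
            "accel": list(body[6:9]),              # Ax, Ay, Az
            "pitch_yaw_roll": list(body[9:12])}    # Pi, Yi, Ri
```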
The IEEE 802.15.4 standard specifies a maximum packet size of 127 bytes, of which the time synchronized mesh protocol (TSMP) reserves 47 bytes for operation, leaving 80 bytes for the payload. The IEEE 802.15.4-compliant radio frequency (RF) transceivers operate in the 2.4 GHz industrial, scientific, and medical (ISM) band.
The RF module contains a complete 802.15.4 physical layer (PHY) modem designed to support the IEEE 802.15.4 wireless standard for peer-to-peer, star, and mesh networks. It is combined with the MCU to create the required wireless RF data links and networks. The IEEE 802.15.4 transceiver supports 250 kbps O-QPSK data in 5.0 MHz channels and full spread-spectrum encoding and decoding.
In some embodiments, control, status reading, data writing, and data reading are performed through the RF transceiver interface port of the sensing system node device. The MPU of the sensing system node device accesses the RF transceiver of the sensing system node device through an interface "transaction" in which multiple bursts of byte-length data are transmitted over an interface bus. Each transaction has three or more bursts long, depending on the type of transaction. Transactions always have read or write access to a register address. The length of the associated data accessed by any single register is always 16 bits.
In some embodiments, control of the RF transceiver and data transmission of the foot pressure/6 degree of freedom motion collection node is achieved through a serial peripheral interface (Serial Peripheral Interface, SPI). While the normal SPI protocol is based on 8-bit transmissions, the RF transceiver of the foot pressure/6-degree of freedom action collection collector node imposes a higher level transaction protocol based on multiple 8-bit transmissions per transaction. A single SPI read or write transaction consists of one 8-bit header transmission and two 8-bit data transmissions.
The header indicates the access type and register address. The following bytes are read or write data. SPI also supports recursive "data burst" transactions in which additional data transfers may occur. The recursive mode is mainly used for packet RAM access and fast configuration of the foot pressure/6 degree of freedom action collection node RF.
In some embodiments, all foot pressure sensor areas are scanned sequentially, and the entire process repeats until a reset condition or an inactive power-down mode occurs. The 6-degree-of-freedom motion is captured through a serial UART interface between the MCU and the inertial measurement unit (IMU). The sampling rate for all sensing dimensions (i.e., Ax, Ay, Az, pitch, yaw, and roll) is 100-300 Hz, and the sampled data is stored in IMUBuffer[].
SPIDrvWrite is called to update the TX packet length field. Next, SPIClearRecieveStatReg is called to clear the status register, and SPIClearRecieveDataReg is then called to clear the receive data register, leaving the SPI interface ready for reading or writing. After the SPI interface is ready, SPISendChar is called to send a 0xFF character (representing the first codeword section), and SPIWaitTransferDone is then called to verify that the send is complete.
FIG. 5 shows an exemplary output 300 that may be provided on the display 222 of the handheld device. For example, when providing therapy for a patient's gait, the display output 300 may include a portion for the right foot 302 and a portion for the left foot 304. The display for the right foot over time includes accelerations Ax 310a, Ay 312a, and Az 314a, and foot pressure 316a. Similarly, the display for the left foot includes accelerations Ax 310b, Ay 312b, and Az 314b, and foot pressure 316b.
Classification is understood as the correlation of data (e.g., sensor fusion data, feature data, or attribute data) with real-world events (e.g., patient activity or treatment). Typically, classifications are created and performed on the analysis system 108. In some embodiments, the collector 106 holds local copies of some of the "templates"; therefore, the input sensor data and feature-extracted data may be classified either at the collector or at the analysis system.
An environment refers to the circumstances or facts that constitute the setting of an event, statement, condition, or idea. A context-aware algorithm examines the "who," "what," "when," and "where" related to the environment and the time at which the algorithm is executed on specific data. Some context-aware attributes include identity, location, time, and the activity being performed. The environmental interface between the patient, the environment, and the musical therapy session arises when the environmental information is used to formulate deterministic actions.
The patient's response environment to the musical therapy session may involve a layer of algorithms that interpret the fused sensor data to infer higher-level information. These algorithms extract the patient response environment. For example, the patient's biomechanical gait sequence is analyzed as it relates to a specific portion of the musical therapy session. In one example, "lateral tremor" is the classifier of interest; as lateral tremor decreases, the patient's gait is determined to be smoother.
Analysis system
The analysis system 108 (sometimes referred to as a backend system) stores large modules/profiles and includes machine learning/analysis processing, as well as the modules described herein. In some embodiments, a web interface for logging in to view archived data is provided, and a dashboard is also provided. In some embodiments, the analysis system 108 is located on a remote server computer that receives data from the collector 106 running on a handheld unit 220, such as a handheld device or tablet computer. It is contemplated that the processing power required to perform the analysis and machine learning functions of analysis system 108 may also be located on handheld device 220.
Data is transmitted from the collector 106 to the analysis system 108 for analysis processing. As shown in FIG. 6, the analysis process 400 includes a user interface 402 for receiving data from the collector 106. Database memory 404 receives input data from the collector 106 for storage. The training data and the output of the analysis process (e.g., the integrated machine learning system 410) may also be stored in memory 404 to facilitate the creation and refinement of predictive models and classifiers. A data bus 406 allows the data streams to be processed by the analysis. A training process 408 is performed on the training data to derive one or more predictive models. The integrated machine learning system 410 utilizes the predictive models, and its output is an aggregation of these predictive models. The aggregated output is also used for the classification requirements of the template classifier 412, such as tremor, symmetry, and fluidity, or learned biomechanical parameters such as entrainment, actuation, and the like. The API 418 connects to the collector and/or the music therapy center. The treatment algorithm 414 and the prediction algorithm 416 include multilayer perceptron neural networks, hidden Markov models, radial basis function networks, Bayesian inference models, and the like.
An exemplary application of the systems and methods described herein is the analysis of a patient's biomechanical gait. A gait sequence is characterized as a series of typical features. The presence of these and other features in the captured sensor fusion data informs the environment detection algorithm that the patient's biomechanical gait sequence is valid. Biomechanical gait sequence capture requires robust environment detection, which is then abstracted across a representative population of musical therapy patients.
An example of such an activity is the patient's position at a certain point in time and their response to the musical therapy at that time. Identifying and correlating patient musical therapy responses allows specific patterns of musical therapy patient response to be recognized. Benchmarking and performance and efficacy analysis are then performed on specific musical treatment regimens by creating a baseline of musical therapy patient responses and correlating it with future musical therapy patient responses.
In conjunction with motion sensing, using distance metrics captured from the gait biomechanics, the patient path trajectory is determined using temporal and spatial variations/offsets between two or more musical therapy sessions. From this sensor-fused data capture, features are extracted and classified to label various critical patient treatment responses. Further sensor fusion data analysis uses histograms to allow initial musical therapy response pattern detection.
For musical therapy session sensor fusion data analysis, a patient-specific Bayesian inference model with a Markov chain is initially used. The states of the chain represent the patient-specific response patterns captured from the musical therapy baseline session. The inference is based on knowledge of the occurrence of patient response patterns and the temporal links to previous states for each sample interval.
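As a concrete illustration of this kind of inference, the sketch below propagates a belief over response-pattern states through a Markov chain and reweights it by the likelihood of the current observation. The state names, transition probabilities, and likelihood values are illustrative assumptions, not values from the patent.

```python
import numpy as np

# States of the patient-specific Markov chain = response patterns captured from the
# baseline musical therapy session (names and probabilities are illustrative assumptions).
states = ["entrained", "lagging", "irregular"]
T = np.array([[0.80, 0.15, 0.05],    # transition probabilities between response patterns
              [0.30, 0.55, 0.15],    # across successive sample intervals
              [0.10, 0.30, 0.60]])

def next_belief(belief, likelihood):
    """One Bayesian update: propagate the belief through the chain (temporal link to the
    previous state), then weight it by the likelihood of the current interval's observation."""
    predicted = belief @ T                 # prior from the previous state
    posterior = predicted * likelihood     # evidence from the current sample interval
    return posterior / posterior.sum()

belief = np.array([1 / 3, 1 / 3, 1 / 3])                   # uninformative starting belief
belief = next_belief(belief, np.array([0.7, 0.2, 0.1]))    # example observation likelihoods
```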
A prediction routine, the multilayer perceptron neural network (MLPNN), uses a directed-graph, node-based model with a top-level root node; it predicts which subsequent nodes need to be reached and obtains the sensor fusion data feature vector for the patient. The sensor fusion data feature vector contains motion data for time series processing, music signature data, and video image data of particular importance for further processing. In this case, the directed graph looks like a tree drawn upside down, with the leaves at the bottom and the root node at the top. From each node, the routine may go left, selecting the left child node as the next observation node, or it may go right, based on the value of the variable whose index is stored in the observation node. If the value is less than a threshold, the routine goes to the left node; if it is greater than the threshold, it goes to the right node. These regions (here, left and right) become the prediction spaces.
The model uses two types of input variables: ordered variables and categorical variables. An ordered variable is a value that is compared against a threshold stored in the node. A categorical variable is a discrete value that is tested for membership in a finite subset of values stored in the node. This can be applied to various classifications. For example, mild, moderate, and severe may be used to describe tremor and are one example of a categorical variable. Alternatively, a fine-grained range of values or a numerical scale may be used to describe tremor numerically.
If the categorical variable belongs to the stored subset of values, the routine goes to the left node; if not, it goes to the right node. In each node, the decision is made using a pair of entities: the variable index and the decision rule (threshold or subset). This pair is the split, and the variable it tests (variable_index) is the split variable.
Once a leaf node is reached, the value assigned to that node is used as the output of the prediction routine. The multi-layer perceptron neural network is built recursively starting from the root node. As previously described, all training data, feature vectors, and responses are used to split the root node, where the pair (variable_index, decision_rule (threshold/subset)) partitions the prediction region. In each node, the best decision rule (the best primary split) is found based on the Gini "purity" criterion for classification and the sum of squared errors for regression. The Gini index is a measure of the total variance across the set of classes. The Gini "purity" criterion refers to a small Gini index value, indicating that the node contains predominantly observations from a single class, which is the desired state.
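A minimal sketch of the Gini "purity" computation used to score a candidate primary split is shown below. It assumes class labels are simple integers and is illustrative only; it is not tied to any particular feature set or split-search strategy of the described system.

```python
from collections import Counter
from typing import Sequence

def gini_index(labels: Sequence[int]) -> float:
    """Gini impurity: small values mean the node holds mostly one class."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(left: Sequence[int], right: Sequence[int]) -> float:
    """Size-weighted Gini impurity of a candidate primary split."""
    n = len(left) + len(right)
    return (len(left) / n) * gini_index(left) + (len(right) / n) * gini_index(right)

# Hypothetical labels: 0 = no tremor observed, 1 = tremor observed.
print(gini_index([1, 1, 1, 0]))      # mixed node, impurity > 0
print(split_gini([1, 1, 1], [0]))    # pure children, weighted impurity 0
```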
Once the multi-layer perceptron neural network is established, it may be pruned using a cross-validation routine. To avoid overfitting of the model, some branches of the tree are cut off. This routine may be applied to independent decisions. As described above, one significant attribute of the decision algorithm (MLPNN) is the ability to calculate the relative decision strength and importance of each variable.
The variable importance ratings are used to determine the most frequent type of interaction for the patient interaction feature vector. Pattern recognition begins with defining a decision space that is suitable for distinguishing between different categories of musical therapy responses and musical therapy events. The decision space may be represented by a graph having N dimensions, where N is the number of attributes or measures that are considered to represent musical therapy responses and musical therapy events. The N attributes constitute a feature vector or signature, which may be plotted in a graph. After enough samples are entered, the decision space reveals clusters of musical therapy responses and musical therapy events belonging to different categories for associating new vectors with these clusters.
The dynamic closed-loop rehabilitation platform music therapy system utilizes multiple deep learning neural networks to learn and recall patterns. In one embodiment, a non-linear decision space is constructed using an adaptive radial basis function (RBF) model generator. A new vector may be classified using the RBF model and/or a K-nearest neighbor classifier. Fig. 6 shows the workflow of the machine learning subsystem of the dynamic closed-loop rehabilitation platform music therapy system.
Fig. 7 shows a supervised training process 408 comprising a plurality of training samples 502. For example, the inputs are features as described in table 3 above, and example outputs are terms such as tremor, asymmetry, and power, the extent of these terms, predictions of change, and classifications of patient recovery. New outputs are learned as part of the process. This provides a basis for a higher level of abstraction of predictions and classifications, as it applies to different use cases (e.g., different disease states, combinations with drugs, notifications to providers, fitness, and fall prevention). These training samples 502 are run through learning algorithms A1 504a, A2 504b, A3 504c … AN 504n to obtain predictive models M1 506a, M2 506b, M3 506c … MN 506n. Exemplary algorithms include multi-layer perceptron neural networks, hidden Markov models, radial basis function networks, and Bayesian inference models.
Fig. 8 illustrates the integrated machine learning system 410, which aggregates the predictive models M1 506a, M2 506b, M3 506c … MN 506n over sample data 602 (e.g., feature extraction data) to provide a plurality of predictive outcome data 606a, 606b … 606n. Given the multiple predictive models, an aggregation layer 608 (e.g., including decision rules and votes) is used to derive an output 610.
The MR ConvNet system has two layers, the first being a convolutional layer with average pooling support. The second layer of the MR ConvNet system is a fully connected layer supporting multinomial logistic regression. Multinomial logistic regression, also known as Softmax, is a generalization of logistic regression that handles multiple classes. In logistic regression, the labels are binary.
Softmax is a model for predicting the probability of different possible outputs. The following assumes that the final output layer has a multi-class classifier of m discrete classes by Softmax:
Y1=Softmax(W11*X1+W12*X2+W13*X3+B1) [1]
Y2=Softmax(W21*X1+W22*X2+W23*X3+B2) [2]
Y3=Softmax(W31*X1+W32*X2+W33*X3+B3) [3]
Ym=Softmax(Wm1*X1+Wm2*X2+Wm3*X3+Bm) [4]
Overall, Y = Softmax(W*X + B) [5]
Softmax(X)i = exp(Xi) / sum(exp(Xj)), j = 1 to N [6]
where Y = classifier output; X = sample input (all scaled (normalized) feature values); W = weight matrix. For example, the classification may score asymmetry, such as "a moderate asymmetry score of 6 out of 10 (10 being highly asymmetric, 0 being no asymmetry)," or gait fluidity, such as "a normal gait fluidity score of 8 out of 10," and so on. The analysis pipeline is shown in fig. 9.
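Equations [1]-[6] amount to a single linear layer followed by Softmax. The sketch below, with made-up weights and feature values, shows how a scaled gait feature vector X could be turned into class probabilities and a 0-10 style score; it is illustrative only and is not the trained classifier of the described system.

```python
import math
from typing import Sequence

def softmax(z: Sequence[float]) -> list:
    """Softmax(X)i = exp(Xi) / sum(exp(Xj)), with the usual max-shift for stability."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def classify(x, W, b):
    """Y = Softmax(W*X + B) for a single sample x (eq. [5])."""
    logits = [sum(wij * xj for wij, xj in zip(row, x)) + bi for row, bi in zip(W, b)]
    return softmax(logits)

# Hypothetical 3-feature sample and an 11-class head (asymmetry score 0..10).
x = [0.4, -1.2, 0.7]                                       # scaled feature values
W = [[0.1 * k, -0.05 * k, 0.02 * k] for k in range(11)]    # made-up weights
b = [0.0] * 11
probs = classify(x, W, b)
print("asymmetry score:", max(range(11), key=lambda k: probs[k]), "out of 10")
```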
Softmax regression allows more than two classes to be handled. For logistic regression: p(x) = 1/(1 + exp(-Wx)), where W contains the model parameters trained to minimize the cost function, and x is the input feature vector. The training set is represented as
((x(1), y(1)), …, (x(i), y(i))) [7]
For multi-class classification, Softmax regression is used, where y can take N different values representing the classes, instead of 1 and 0 as in the binary case. Thus, for the training set ((x(1), y(1)), …, (x(i), y(i))), y(n) may be any value in the range of 1 to N classes.
Next, p(y=n|x;W) is the probability for each value n = 1, …, N. The Softmax regression process is illustrated mathematically as follows:
Y(x) = (p(y=1|x;W), p(y=2|x;W), … p(y=N|x;W)) [8]
where Y(x) is the output of the hypothesis: given an input x, it outputs the probability distribution over all classes such that the normalized probabilities sum to 1.
The MR ConvNet system convolves each windowed biomechanical data frame as a vector and each biomechanical template filter as a vector, and then generates a response using an averaging pool function that averages the characteristic responses. The convolution process calculates Wx while adding any bias, which is then passed to a logistic regression (sigmoid) function.
Next, in the second layer of the MR ConvNet system, the sub-sampled biomechanical template filter responses are moved into a two-dimensional matrix, with each column representing a windowed biomechanical data frame as a vector. The Softmax regression activation process is now initiated using the following method:
Y(x) = (1/(exp(W1x) + exp(W2x) + … + exp(WNx))) * (exp(W1x), exp(W2x), …, exp(WNx)) [9]
The MRConvNet system is trained with an optimization algorithm (gradient descent) in which a cost function J (W) is defined and minimized:
J(W) = (1/j) * (H(t(1), p(y=1|x;W)) + H(t(2), p(y=2|x;W)) + … + H(t(j), p(y=N|x;W))) [10]
where t(j) is the target class. This averages the cross entropy over the j training samples. The cross entropy function is:
H(t(j), p(y=N|x;W)) = -(t(1)*log(p(y=1|x;W)) + t(2)*log(p(y=2|x;W)) + … + t(N)*log(p(y=N|x;W))) [11]
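Equations [10] and [11] describe the averaged cross-entropy cost minimized by gradient descent. The following sketch computes that cost for one-hot targets; the data values are placeholders and are not parameters or outputs of the MR ConvNet system.

```python
import math
from typing import Sequence

def cross_entropy(target: Sequence[float], predicted: Sequence[float]) -> float:
    """H(t, p) = -sum_n t(n) * log(p(y=n|x;W))  (eq. [11])."""
    eps = 1e-12  # avoid log(0)
    return -sum(t * math.log(max(p, eps)) for t, p in zip(target, predicted))

def cost(targets, predictions) -> float:
    """J(W): average cross entropy over the j training samples (eq. [10])."""
    j = len(targets)
    return sum(cross_entropy(t, p) for t, p in zip(targets, predictions)) / j

# Hypothetical one-hot targets and Softmax outputs for two samples, three classes.
targets = [[1, 0, 0], [0, 0, 1]]
predictions = [[0.7, 0.2, 0.1], [0.2, 0.2, 0.6]]
print("J(W) =", cost(targets, predictions))
```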
In fig. 10, the integrated machine learning system 408 includes a plurality of predictive models, such as template series 1 (tremor) 706a, template series 2 (symmetry) 706b, template series 3 (fluidity) 706c, …, and additional templates (other learned biomechanical parameters, e.g., entrainment, initiation, etc.) 706n, applied to the condition input 702, which may be, for example: the stride length of the right and left feet (X1, X2), the variance of the stride length of the right and left feet (X3, X4), the cadence of the right and left feet (X6, X7), the cadence variance of the right and left feet (X8, X9), and so on, where the sample (X1, X2, … Xn) is referred to as the vector X that forms the input 702 to the set of ML algorithms. These inputs are normalized and/or scaled relative to the baseline condition. The aggregation classifier 708 outputs information such as a tremor scale, a symmetry scale, a fluidity scale, and the like.
Music therapy center
The music therapy center 110 is a decision-making system running on a processor, such as the handheld device 220 or laptop of fig. 2. The music therapy center 110 takes as input the features extracted from the sensor data at the collector 106, compares them to a defined process for delivering therapy, and then delivers the auditory stimulus content played through the music delivery system 230.
Embodiments of the present invention use contextual information to determine why a condition occurred and then encode the observed actions, resulting in dynamic and modulated changes in system state, so that the music therapy session is conducted in a closed-loop manner.
The interaction between the patient and the musical therapy session provides real-time data for determining the environmental perception of the musical therapy patient, including motion, posture, stride, and gait response. After the sensing nodes collect the input data (at the sensors), the embedded nodes process the context-aware data (at the edge processing) and provide immediate dynamic actions and/or transmit the data to the analytics system 108 (e.g., elastic network-based processing cloud environment) for storage and further processing and analysis.
Depending on the input, the program will change the tempo, chords, beats, and musical cues (e.g., melody, harmony, tempo, and dynamics cues) using any existing song content. The system may superimpose a metronome on existing songs. The song content may be beat-mapped (e.g., in the case of a WAV or MP3 file) or in MIDI format, so that accurate knowledge of beat occurrence times can be used to calculate entrainment potentials. The sensor on the patient may be configured to provide haptic/vibratory feedback pulses in time with the musical content.
Example
Exemplary applications of the method are described herein. Gait training analyzes the real-time relationship between the music beats played to the patient and the individual steps taken by the patient in response to those beats. As described above, gating analysis is used to determine a window of data that repeats with each step or repeated motion with some variation. In some embodiments, the beginning of the window is determined as the time at which the heel strike pressure exceeds a threshold (or other sensor parameter). Fig. 11 is an exemplary time plot showing the beats of the music ("time beats") and the steps taken by the patient ("time steps"). The start time in this case thus corresponds to a "time step." In particular, the plot shows the time beat 1101 of the music at time beat 1. After a period of time, the patient takes a step at time step 1, i.e., time step 1102, in response to time beat 1101. The entrainment potential 1103 represents the delay (if any) between time beat 1 and time step 1.
Fig. 12-13 illustrate examples of entraining patient gait using the systems described herein. Fig. 12 shows "perfect" entrainment, e.g., constant entrainment potential of zero. This occurs when there is no delay or a negligible delay between the time beat and the associated time step taken in response to the time beat. Fig. 13 shows a phase shift entrainment, e.g. where the entrainment potential is non-zero but remains constant over time or with minimal variation. This occurs when there is a consistent delay between the beat of time and the step of time within a tolerance range.
With continued reference to fig. 11, the EP ratio is calculated as the ratio of the duration between time beats to the duration between time steps:
EP ratio = (time beat 2 - time beat 1) / (time step 2 - time step 1)
where time beat 1 1101 corresponds to the time of the first music beat and time step 1 1102 corresponds to the time of the patient step in response to time beat 1. Time beat 2 1106 corresponds to the time of the second music beat, and time step 2 1108 corresponds to the time of the patient step in response to time beat 2. The objective is EP ratio = 1, or EP ratio/coefficient = 1.
The coefficient allows the beats to be subdivided, for example letting the patient step once every 3 beats or every 4 beats. It provides flexibility for different scenarios.
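The EP ratio and its tolerance check can be sketched as below. The tolerance value and the treatment of the coefficient as a beats-per-step factor are illustrative assumptions, not values specified by the described system.

```python
def ep_ratio(beat_times, step_times):
    """Ratio of the duration between successive time beats to the duration
    between the corresponding time steps (values near 1 indicate entrainment)."""
    beat_interval = beat_times[1] - beat_times[0]
    step_interval = step_times[1] - step_times[0]
    return beat_interval / step_interval

def is_entrained(beat_times, step_times, coefficient=1.0, tolerance=0.1):
    """True when EP ratio / coefficient stays within +/- tolerance of 1."""
    return abs(ep_ratio(beat_times, step_times) / coefficient - 1.0) <= tolerance

# Hypothetical timestamps in seconds: beats at 1.00 s intervals (60 bpm),
# steps landing ~50 ms after each beat (a constant phase shift).
beats = [10.00, 11.00]
steps = [10.05, 11.05]
print(ep_ratio(beats, steps), is_entrained(beats, steps))
```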
Figs. 14 and 15 illustrate entrainment responses over time for patients using the techniques described herein. Fig. 14 (left Y-axis: EP ratio; right Y-axis: beats per minute; X-axis: time) shows a scatter of points 1402 representing the average EP ratio for a first patient's gait. The figure shows an upper limit 1404 of +0.1 and a lower limit 1406 of -0.1. Line 1408 shows the tempo over time (starting at 60 beats per minute and increasing in steps to 100 bpm). Fig. 14 shows that the EP ratio remains around 1 (±0.1) as the tempo increases from 60 bpm to 100 bpm. Fig. 15 shows the EP ratio for a second patient's gait, where the EP ratio likewise remains close to 1 (±0.1) as the tempo increases from 60 bpm to 100 bpm.
Figs. 16 and 17 (Y-axis: entrainment potential; X-axis: time) show the responses of two patients to a change in the time beat (e.g., a tempo change) and/or a chord change, haptic feedback change, foot cue change (e.g., left-right, or left-right-cane cues), etc. Fig. 16 shows a time-based plot in which the patient's gait settles into equilibrium with "perfect entrainment" (constant zero or negligible entrainment potential) or a constant phase-shifted entrainment potential. As shown, a certain period of time, the golden time 1602, is required until equilibrium occurs. Fig. 17 shows a time-based plot in which the patient's gait does not reach equilibrium, e.g., does not achieve perfect entrainment or a constant phase-shifted entrainment potential after the time beat changes. The golden time is useful because it provides a measure that is independent of the accuracy of the entrainment measurement. The golden time parameter may also be used to screen the suitability of future songs. For example, when a patient exhibits a long golden time value while using a musical composition, that composition is less suitable for treatment.
Fig. 18 illustrates a technique for gait training, where the repetitive motion refers to the steps taken by the patient while walking. Gait training is adapted to individual patient populations, diagnoses, and situations to provide personalized and individualized musical intervention. Based on the inputs, the program alters the content, cadence, major/minor chords, meters, and musical cues (e.g., melody, harmony, and dynamics cues) as applicable. The program may provide a passive music playlist for periodic use, selecting music using the date of birth, listed music preferences, and entrainment rate. Key inputs to gait training are the cadence, symmetry, and stride length with which the user performs physical activities (e.g., walking). The program provides haptic/vibratory feedback at the BPM of the music using connected hardware. Suitable populations for gait training include traumatic brain injury (TBI), stroke, Parkinson's disease, MS, and elderly patients.
The method begins at step 1802. At step 1804, biomechanical data is received at the collector 106 based on data from the sensors (e.g., sensors 200, 206, 208). Biomechanical data includes initiation, stride length, cadence, symmetry, assistive-device data stored and generated by the analysis system 108, or other such patient feature sets. Exemplary biomechanical data parameters are listed in tables 1, 2, and 3 above. The baseline condition is determined from one or more data sources. First, the patient's gait is sensed without any music playing; the sensor and feature data regarding the patient's initiation, stride length, cadence, symmetry, assistive-device use, etc., constitute the patient's baseline biomechanical data for the treatment session. Second, sensor data from previous sessions of the same patient, along with any higher-level classification data from the analysis system 108, constitutes the patient's historical data. Third, sensor data and higher-level classification data for other similarly situated patients constitute population data. Thus, the baseline condition may include data from one or more of the following: (a) patient baseline biomechanical data for the treatment session, (b) data from the patient's previous sessions, and (c) population data. A baseline beat rate is then selected from the baseline condition. For example, the baseline beat rate may be selected to match the patient's current cadence before music is played. Alternatively, the baseline beat rate may be selected as a fraction or multiple of the patient's current cadence. As another alternative, the baseline beat rate may be selected to match the baseline beat rate used in a previous session with the same patient. As yet another alternative, the baseline beat rate may be selected based on the baseline beat rates of other patients with similar physical conditions. Finally, the baseline beat rate can be selected based on a combination of any of the above. A target beat rate can also be determined from the data. For example, the target beat rate may be selected as a percentage increase over the baseline beat rate, by reference to improvements exhibited by other similarly situated patients. Tempo is understood to mean the frequency of beats in the music.
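One way to read the baseline and target beat rate selection described above is as a simple precedence rule over the available data sources. The sketch below is a minimal, hypothetical illustration; the fallback order, parameter names, default value, and the 10% target increase are assumptions, not values prescribed by the method.

```python
from typing import Optional

def select_baseline_beat_rate(current_cadence: Optional[float],
                              previous_session_rate: Optional[float],
                              population_rate: Optional[float]) -> float:
    """Pick a baseline beat rate (beats per minute) from the baseline condition."""
    if current_cadence is not None:
        return current_cadence            # match the patient's unaided cadence
    if previous_session_rate is not None:
        return previous_session_rate      # reuse the prior session's baseline
    if population_rate is not None:
        return population_rate            # fall back to similarly situated patients
    return 60.0                           # arbitrary default for illustration

def select_target_beat_rate(baseline: float, expected_improvement: float = 0.10) -> float:
    """Target beat rate as a percentage increase over the baseline (assumed 10%)."""
    return baseline * (1.0 + expected_improvement)

baseline = select_baseline_beat_rate(current_cadence=84.0,
                                     previous_session_rate=None,
                                     population_rate=None)
print(baseline, select_target_beat_rate(baseline))
```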
At step 1806, music provided to the patient from the handheld device 220 on the music delivery device 230 (e.g., earbud or earphone or speaker) begins at a baseline speed or subdivision of the baseline speed. To provide music to the patient at a baseline rate, music with a constant baseline rate is selected from the database, or existing music is modified, e.g., selectively accelerated or decelerated, to provide beat signals at a constant rate.
At step 1808, the patient is instructed to listen to the beat of the music. At step 1810, the patient is instructed to walk at a baseline beat rate, optionally receiving prompts for left and right feet. The patient is instructed to walk with each step closely matching the beat of the music, e.g., to follow the beat speed "in time". Steps 1806, 1808 and 1810 may be initiated by a therapist or by audible or visual instructions on handheld device 220.
At step 1812, the sensors 200, 206, 208 on the patient are used to record patient data, such as heel strike pressure, 6-dimensional motion, EMG activity, and video recordings of patient motion. All sensor data is time-stamped. Data analysis is performed on the time-stamped sensor data, including the "gating" analysis discussed herein. For example, sensor data (e.g., heel strike pressure) is analyzed to determine the start time of each step. Additional data received includes the time associated with each beat signal of the music provided to the patient.
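The step-onset portion of this "gating" analysis can be sketched as a threshold crossing on the time-stamped heel-strike pressure stream. The threshold value, units, and sample structure below are illustrative assumptions rather than parameters of the described system.

```python
def detect_step_onsets(samples, threshold=20.0):
    """Return timestamps where heel-strike pressure rises through the threshold.

    `samples` is an iterable of (timestamp_seconds, pressure) pairs, assumed to be
    time-ordered; `threshold` is an assumed pressure level marking heel strike.
    """
    onsets = []
    previous_pressure = 0.0
    for timestamp, pressure in samples:
        if previous_pressure < threshold <= pressure:
            onsets.append(timestamp)      # start of a new step window
        previous_pressure = pressure
    return onsets

# Hypothetical stream: two heel strikes at ~10.02 s and ~11.03 s.
stream = [(10.00, 2.0), (10.02, 35.0), (10.50, 5.0),
          (11.00, 3.0), (11.03, 40.0), (11.60, 4.0)]
print(detect_step_onsets(stream))  # -> [10.02, 11.03]
```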
At step 1814, an entrainment model (e.g., integrated machine learning system 410 of analysis system 108 or a model downloaded at collector 106 and running on handheld device 220) is connected to make predictions and classifications. (it will be appreciated that such a connection may be pre-existing or initiated at this time.) such a connection is typically very fast or transient.
At step 1816, an optional entrainment analysis performed by the analysis system 108 is applied to the sensor data. The entrainment analysis includes determining the delay between the beat signal and the beginning of each step taken by the patient. As an output of the entrainment analysis, the accuracy of entrainment is determined, e.g., a measure of the instantaneous relationship between the baseline speed and the patient's steps, as discussed above with respect to the entrainment potential and EP ratio. If entrainment is inaccurate, e.g., the entrainment potential is not constant within tolerance, adjustments are made at step 1818, e.g., speeding up or slowing down the beat speed, increasing volume, increasing sensory input, superimposing a metronome or other related sounds, etc. If entrainment is accurate, e.g., the entrainment potential is constant within tolerance, the speed is incrementally changed at step 1820. For example, the baseline speed of music played with the handheld device is increased toward the target speed, for example by 5%.
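Steps 1816-1820 describe a closed-loop decision: adjust the stimulus when entrainment is inaccurate, otherwise nudge the tempo toward the target. The sketch below assumes the ±0.1 EP-ratio tolerance and 5% increment mentioned above; the function and parameter names are hypothetical, and the "hold tempo" branch stands in for the various cue adjustments of step 1818.

```python
def next_tempo(current_bpm: float, target_bpm: float, ep_ratios,
               tolerance: float = 0.1, increment: float = 0.05) -> float:
    """Return the tempo for the next interval of the session.

    `ep_ratios` is the recent history of EP ratios; entrainment is treated as
    accurate when every recent ratio is within +/- tolerance of 1.
    """
    entrained = all(abs(r - 1.0) <= tolerance for r in ep_ratios)
    if not entrained:
        # Step 1818: keep the tempo and strengthen cues instead
        # (e.g., superimpose a metronome, raise volume).
        return current_bpm
    # Step 1820: incremental change toward the target tempo.
    step = current_bpm * increment
    if current_bpm < target_bpm:
        return min(current_bpm + step, target_bpm)
    return max(current_bpm - step, target_bpm)

print(next_tempo(60.0, 100.0, [0.98, 1.02, 1.05]))  # entrained -> 63.0
print(next_tempo(60.0, 100.0, [0.80, 1.25, 1.00]))  # not entrained -> 60.0
```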
At step 1822, a connection is made to the entrainment model for prediction and classification. (It should be appreciated that this connection may be pre-existing or initiated at this time.) At step 1824, an optional symmetry analysis is applied to the sensor data. As an output of the symmetry analysis, the symmetry of the patient's gait is determined, e.g., how well the patient's left foot motion matches the patient's right foot motion in terms of stride length, speed, stance phase, swing phase, etc. If the steps are not symmetric, e.g., symmetry falls below a threshold, the music played to the patient is adjusted by the handheld device at step 1826. A first modification may be made to the music played during movement of one foot, and a second modification to the music played during movement of the other foot. For example, a minor chord (or increased volume, sensory input, speed change, or superimposed sounds/beats) may be played on one side (e.g., the affected side) and a major chord on the other side (e.g., the unaffected side). The machine learning system 410 predicts in advance when a symmetry problem will occur based on a "fingerprint" of the scenario that causes the symmetry problem, e.g., by analyzing motion that indicates asymmetry. Asymmetry can be determined by comparing a person's gait parameters to their normal background, determining which side is affected, and comparing it to the other side.
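The symmetry check of step 1824 can be sketched as a simple left/right comparison of a gait parameter such as stride length. The symmetry threshold and the returned cue strings below are assumed for illustration and are not values defined by the described method.

```python
def symmetry_ratio(left_values, right_values) -> float:
    """Ratio of mean left to mean right stride parameter, folded so 1.0 = symmetric."""
    left_mean = sum(left_values) / len(left_values)
    right_mean = sum(right_values) / len(right_values)
    ratio = left_mean / right_mean
    return ratio if ratio <= 1.0 else 1.0 / ratio

def choose_cue(left_strides, right_strides, threshold: float = 0.9) -> str:
    """Pick a per-side musical modification when symmetry falls below the threshold."""
    if symmetry_ratio(left_strides, right_strides) >= threshold:
        return "no change"
    affected = "left" if sum(left_strides) < sum(right_strides) else "right"
    # e.g., minor chord (or louder cue) on the affected side, major chord on the other.
    return f"minor chord on {affected} steps, major chord on the other side"

print(choose_cue([0.55, 0.57, 0.54], [0.72, 0.70, 0.71]))  # asymmetric gait
print(choose_cue([0.70, 0.71], [0.72, 0.70]))              # symmetric gait
```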
In step 1828, a connection is made to the entrainment model for prediction and classification. (it should be appreciated that this connection may be pre-existing or initiated at this time.) at step 1830, an optional center of balance analysis is performed on the sensor data, e.g., whether the patient is leaning forward. Analysis may be performed by combining the outputs of the foot sensors and the video output. As an output from the balance center analysis, it is determined whether the patient is leaning forward. If the patient is leaning forward, the patient is prompted to "stand straight" at step 1832, provided by the therapist or by audible or visual instructions on the handheld device.
In step 1834, a connection is made to the entrainment model for prediction and classification. (it should be appreciated that this connection may be pre-existing or initiated at this time.) at step 1836, an initiation analysis is applied to the sensor data, e.g., the patient presents hesitation or difficulty initiating walking. As an output of the start-up analysis, it is determined whether the patient exhibits a start-up problem. If the patient exhibits an initiating problem, e.g., below a threshold, tactile feedback may be provided to the patient, which may include a countdown of the tempo at step 1838 or a countdown before the song begins.
At step 1840, it is optionally determined whether the patient is using an assistive device, such as a cane, crutch, walker, or the like. In some embodiments, the handheld device 220 provides a user interface for the patient or therapist to enter information regarding use of the assistive device. If a cane is present, the analysis is changed to a three-count pattern, e.g., cane, right foot, left foot, and "left foot," "right foot," and "cane" prompts are provided by the therapist or by audible or visual instructions on the handheld device at step 1842.
At step 1844, a connection is made to the entrainment model for prediction and classification. (it should be appreciated that this connection may be pre-existing or initiated at this point.) optional entrainment analysis 1846 is applied to the sensor data, substantially as described above in step 1816, but with the differences noted herein. For example, previous entrainment data early in the entrainment session, data related to previous sessions of the patient or entrainment of other patients may be compared. As an output of the entrainment analysis, the accuracy of the entrainment, such as the degree of matching of the patient's gait to the baseline speed, is determined. If entrainment is inaccurate, then the adjustment is made at step 1848, essentially in the same manner as step 1818 described above.
If entrainment is accurate, a determination is made at step 1850 as to whether the patient is walking at the target speed. If the target speed has not been reached, the method proceeds to step 1820 (described above) to make an incremental change to the speed. For example, the baseline speed of music played with the handheld device is increased or decreased toward the target speed, e.g., by 5%. If the target speed has been reached, the patient may continue treatment for the remaining time in the session (step 1852). At step 1854, music at the desired tempo, for use when not in a treatment session, may be sorted and retained on the device 220 of fig. 2. This music content serves as homework/exercise for the patient between dedicated treatment sessions. The procedure then ends.
It will be appreciated that the steps described above and shown in fig. 18 may be performed in a different order than that disclosed. For example, the evaluations at steps 1816, 1824, 1830, 1836, 1840, 1846 and 1850 may be performed simultaneously. Further, multiple connections to the analysis system 108 (e.g., steps 1814, 1822, 1828, 1834, and 1844) may be performed at once throughout the described treatment session.
Fig. 19 illustrates a technique for neglect training. For neglect training, the systems and methods described herein use connected hardware to provide haptic/vibratory feedback when the patient strikes the target correctly. The connected hardware includes a device, a video motion capture system, or a connected bell. All of these devices are connected to the described system, vibrate when tapped, and have a speaker to play audible feedback. For example, a connected bell provides data to the system in the same manner as the sensor 200, e.g., data regarding the patient's bell taps. The video motion capture system provides video data to the system in the same manner as the video camera 206. The key input for neglect training is information about tracking movement to a particular location. The program uses the connected hardware to provide tactile/vibratory feedback when the patient strikes the target correctly. Suitable populations for neglect training include patients with spatial or unilateral visual neglect.
The flow chart for neglect training shown in fig. 19 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. For example, a baseline test determines the patient's status and/or improvement over previous tests. In some embodiments, the baseline test includes displaying four objects on a screen (e.g., the display 222 of the handheld device 220), evenly spaced from left to right. The patient is instructed, by a prompt shown on the display 222 or a verbal prompt from the therapist, to strike the objects in time with the beat of the background music. As with gait training, the patient is instructed to strike the bell in time with the beat of the background music. Feedback is provided for each accurate strike. After the baseline information is collected, a plurality of objects evenly distributed from left to right are displayed on the screen. As described above, the patient is instructed to strike the objects sequentially from left to right in time with the tempo of the background music. Feedback is provided for each accurate strike. As with gait training, the analysis system 108 evaluates and classifies the patient's responses and provides instructions to increase or decrease the number of objects, or to increase or decrease the tempo of the music toward a target tempo.
Fig. 20 illustrates a technique for intonation training. For intonation training, the systems and methods described herein rely on speech processing algorithms. Commonly selected phrases are common words in the following categories: bilabials, glottals, and vowels. The hardware is connected to the patient, and tactile feedback is provided to one of the patient's hands at the beats per minute of the music. Key inputs for intonation training are pitch, the spoken words, and speaking cadence. Suitable populations for intonation training include patients with Broca's aphasia, expressive aphasia, non-fluent aphasia, autism spectrum disorders, and Down syndrome.
The flow chart for intonation training shown in fig. 20 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. For example, tactile feedback is provided to the patient's hand to encourage tapping. The patient is then instructed, by a prompt shown on the display 222 or a verbal prompt from the therapist, to listen to the played music. The spoken phrase to be learned is played divided into two parts, the first at a higher pitch and the second at a lower pitch. The patient is then instructed, by a prompt shown on the display 222 or a verbal prompt from the therapist, to sing the phrase with the device using the two pitches being played. As with gait training, the analysis system 108 evaluates the patient's response, classifies it according to the pitch accuracy, spoken words, and rating provided by the patient or assistant/therapist, and provides instructions to present alternative phrases and compare the response to the target speech parameters.
Fig. 21 illustrates a technique for musical stimulation training. For musical stimulation training, the systems and methods described herein rely on speech processing algorithms. Familiar songs are used with algorithms to isolate a desired portion (referred to as an expectancy violation). The hardware includes speakers for receiving and processing the patient's singing, and in some embodiments the therapist may manually provide input regarding singing accuracy. The key inputs are information related to speech intonation, the words spoken, speech cadence, and music preferences. Suitable populations include patients with Broca's aphasia, non-fluent aphasia, TBI, stroke, and primary progressive aphasia.
The flow chart for musical stimulation training shown in fig. 21 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. For example, a song is played for the patient, and the patient is instructed to listen to the song via a prompt presented on the display 222 or a verbal prompt from the therapist. Musical cues are added to the song. Then, at an expected point, a word or sound is omitted, and a gestural musical prompt is played to cue the patient to sing the missing word or sound. As with gait training, the analysis system 108 evaluates the patient's response, classifies it according to the pitch accuracy, spoken words, and rating of the patient or assistant/therapist, and provides instructions to play additional portions of the song to shape the speech toward the target speech parameters.
Fig. 22 illustrates a technique for gross motor training. For gross motor training, the systems and methods described herein are intended to aid with movement disorders, range of motion, or initiation. The more challenging part of the exercise is musically "accented," for example by using melody, harmony, rhythm, and/or dynamics cues. The key input is information related to motion in X, Y, and Z captured by the connected hardware or a video camera system. Suitable populations include neurological, orthopedic, strength, endurance, balance, posture, range-of-motion, TBI, SCI, stroke, and cerebral palsy patients.
The flow chart for gross motor training shown in fig. 22 is substantially the same as the flow chart for gait training shown in fig. 18, with some differences noted here. As with gait training, the patient is provided with cues to move in time with the baseline beat of the selected music. The analysis system 108 evaluates the patient's response, classifies it according to the accuracy of the movements and the entrainment as described above, and provides instructions to increase or decrease the tempo of the music being played.
Fig. 23 illustrates a technique for grip training. For grip training, the systems and methods described herein rely on sensors associated with a gripper device. The hardware includes a gripper device with a pressure sensor and a connected speaker associated with the handheld device 220. The key input is the pressure the patient applies to the gripper device, measured in a manner similar to the heel strike pressure measured by sensor 200. Suitable populations include neurological, orthopedic, strength, endurance, balance, posture, range-of-motion, TBI, SCI, stroke, and cerebral palsy patients.
The flow chart for grip training shown in fig. 23 is substantially the same as the flow chart for gait training shown in fig. 18, with some differences noted here. As with gait training, the patient is provided with cues to apply force to the gripper device in time with the baseline beat of the music selection. The analysis system 108 evaluates the patient's response, classifies it according to the accuracy of the actions and the entrainment as described above, and provides instructions to increase or decrease the tempo at which the music is played.
Fig. 24 illustrates a technique for voice prompt training. For voice prompt training, the systems and methods described herein rely on speech processing algorithms. The hardware may include a speaker for receiving and processing the patient's speech, and in some embodiments the therapist may manually provide input regarding the accuracy of the speech. The key inputs are the speech utterances, the spoken words, speech cadence, and music preferences. Suitable populations include patients with robotic speech, word-finding difficulties, and stuttering.
The flow chart for voice prompt training shown in fig. 24 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. As with gait training, the patient is prompted, by a prompt shown on the display 222 or a verbal prompt from the therapist, to speak a sentence by uttering one syllable in time with each beat of the selected music. The analysis system 108 evaluates the patient's speech, classifies the response according to the accuracy of the speech and the entrainment as described above, and provides instructions to increase or decrease the tempo at which the music is played.
Fig. 25 illustrates a technique for training a minimally conscious patient. The systems and methods described herein rely on an imaging system, such as a 3D camera, to measure whether the patient's eyes are open, the direction in which the patient is looking, and the resulting pulse or heart rate of the patient. The program monitors and seeks to optimize heart rate, stimulation, respiratory rate, eye closure, posture, and anxiety. Suitable populations include unconscious and consciousness-impaired patients.
The flow chart for training a minimally conscious patient shown in fig. 25 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. As with gait training, the patient is provided with increasing stimulation at the patient's breathing rate (PBR). For example, the patient is first provided with a stimulus of musical chords at the PBR, and it is observed whether the patient's eyes open. If the patient's eyes do not open, the stimulation is increased in sequence, from humming a simple melody at the PBR, to singing "aah" at the PBR, to singing the patient's name at the PBR (or playing a recording of such sounds), checking at each input whether the patient's eyes are open. The analysis system 108 evaluates the patient's eye tracking, classifies the response according to the level of consciousness, and provides instructions to alter the stimulation.
Figs. 26-28 illustrate techniques for attention training. For attention training, the systems and methods described herein operate in a closed-loop manner to help the patient with sustained, divided, alternating, and selective attention. No visual cues are allowed to signal which movements are to be performed. Suitable populations include patients with brain tumors, multiple sclerosis, Parkinson's disease, and other neurological disorders and injuries.
The flow chart for sustained attention training shown in fig. 26 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. As with gait training, the patient is provided with an instrument (e.g., any instrument may work, such as a drumstick, drum, or keyboard, or a wirelessly connected version of each) and is instructed, via prompts presented on the display 222 or verbal prompts by the therapist, to follow or perform the audio-cued tasks defined by levels 1 through 9 shown in fig. 26. The analysis system 108 evaluates the patient's ability to complete the task accurately and classifies the response in order to change the speed or difficulty of the task. Similarly, fig. 27 shows a flow chart for alternating attention training, in which instructions are provided, by prompts appearing on the display 222 or therapist verbal prompts, to follow or perform tasks whose audio cues alternate between the left and right ears. Fig. 28 shows a flow chart for divided attention training, in which instructions are provided to follow or perform tasks cued by audio signals in both the left and right ears.
The flow chart for agility training shown in fig. 29 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. For agility training, the patient is instructed to tap a piano keyboard with his or her fingers to collect baseline motion and range-of-motion information. The song starts at a particular beats-per-minute, and the patient begins tapping with a baseline number of fingers. The analysis system 108 evaluates the patient's ability to complete the task accurately and classifies the response in order to change the speed or difficulty of the task.
The flow chart for oral exercise training shown in fig. 30 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. For oral exercise training, the patient is instructed to alternate between two sounds (e.g., "ooh" and "aah"). The analysis system 108 evaluates the patient's ability to complete the task accurately and classifies the response in order to change the speed or difficulty of the task, such as by providing different target sounds.
The flow chart for respiratory training shown in fig. 31 is substantially the same as the flow chart for gait training shown in fig. 18, with the differences noted here. For respiratory training, a baseline respiratory rate and breathing depth are determined. Music is provided at a baseline tempo matching the patient's respiratory rate, and the patient is instructed to perform the respiratory tasks depicted in the levels in fig. 31. The analysis system 108 evaluates the patient's ability to complete the task accurately and classifies the response in order to change the speed or difficulty of the task, for example by providing different breathing patterns.
Methods, systems, and devices are further described herein for supporting next-generation medical and therapeutic systems that use augmented reality (AR) and augmented audio (AA) to improve or maintain motor function. Exemplary embodiments of the augmented neurologic rehabilitation or maintenance ("ANR") systems and methods disclosed herein build on entrainment techniques by utilizing additional sensor streams to determine therapeutic benefit and inform closed-loop therapeutic algorithms. Exemplary systems and methods for neurologic rehabilitation that may be used to implement embodiments of the ANR systems and methods are shown and described above and in McCarthy et al., co-pending and commonly assigned U.S. Patent Application Ser. No. 16/569,388, "Systems and Methods for Neurologic Rehabilitation," a continuation of U.S. Patent No. 10,448,888, entitled "Systems and Methods for Neurologic Rehabilitation" and issued in 2019, which is based on and claims priority to U.S. Provisional Patent Application Ser. No. 62/322,504, entitled "Systems and Methods for Neurologic Rehabilitation," filed in April 2016, each of which is incorporated by reference as if set forth in its entirety herein.
According to one or more embodiments, the ANR systems and methods may include a method for providing an AR 3D dynamic model of a particular person or object, the method including obtaining images or videos of the person or object by querying cloud and local databases. The ANR systems and methods may also be configured to fuse and/or synchronize the dynamic model, the patient or person, the audio content, and the environment relative to the natural environment into a synchronized state, using the neuroscience of how music improves motor function and how visual imagery affects recovery (e.g., the use of mirror neurons).
In accordance with one or more embodiments, the ANR systems and methods include methods for incorporating AA technology into repetitive motion activities. Augmented audio (AA) combines real-world sounds with additional computer-generated audio "layers" that enhance sensory input. At the heart of rhythmic neuroscience is the use of stimuli to engage the motor system in repetitive motion activities, such as walking. Adding AA to a treatment regimen can enhance the therapeutic effect, increase compliance with the regimen, and provide greater safety by enhancing the patient's situational awareness. The disclosed embodiments for adding AA can utilize the neuroscience of music to mix many audio signals, including external natural-environment sound input, recorded content, rhythmic content, and voice guidance, into a synchronized state. Furthermore, for patients with movement or physical disorders, they may include the ability to combine algorithmically generated music with basal tempo cues as described in [00218]. This may be achieved by fusing such generated rhythms with input from the patient's real-time biometric data into an interactive feedback state. Exemplary systems and methods for algorithmically producing auditory stimuli for neurological rehabilitation are shown and described in co-pending and commonly assigned U.S. Patent Application Ser. No. 16/743,946, entitled "Enhancing Music for Repetitive Motion Activities," filed January 15, 2020, a continuation of U.S. Patent Application Ser. No. 16/044,240, filed July 24, 2018, which issued on February 11, 2020 as U.S. Patent No. 10,556,087, entitled "Enhancing Music for Repetitive Motion Activities," and which claims priority under 35 U.S.C. 119(e) to a provisional application filed on July 25, all of which are incorporated by reference as if set forth in their entirety herein.
Neuroplasticity, entrainment, and mirror neuron science are the fundamental scientific components supporting the disclosed embodiments. Entrainment is the term for activation of the brain's motor centers in response to an external rhythmic stimulus. Studies have shown that audio-motor pathways are present in the reticulospinal tract, part of the brain circuitry responsible for movement. Initiation and timing of motion through these pathways suggests that the motor system can be coupled with the auditory system to drive movement patterns (Rossignol and Melville, 1976). The entrainment process has been shown to be effective in improving walking speed (Cha, 2014), reducing gait variability (Wright, 2016), and reducing fall risk (Trombetti, 2011). Neuroplasticity refers to the ability of the brain to strengthen pre-existing neural connections, allowing an individual to acquire new skills over time. Studies have shown that music promotes changes in certain motor areas of the brain, indicating that music can promote neuroplasticity (Moore et al., 2017). Mirror neurons fire both when a person performs an action and when the person observes another performing it. Thus, when you see another person performing an action, your brain reacts as if you were performing it. This allows behavior to be learned through imitation. This is important in the context of the disclosed embodiments because, when patients observe an augmented reality human simulation, they can use mirror neurons to imitate its actions.
In accordance with one or more embodiments, the ANR systems and methods are configured to process images/video, removing or adding people/objects smaller or larger than a specified size from or to the images/video, in response to patient or therapist events received as input to the ANR system. Such an event may be a patient reaction (e.g., an instruction to reduce scene complexity) or a therapist instruction to introduce occlusion by a person/object, which may increase scene complexity. Embodiments may support recording all data for all patient or therapist events in addition to the session data itself.
According to one or more embodiments, the ANR systems and methods include a telepresence method that allows a therapist to link to a remote patient using the system. In addition to fully supporting all the local features experienced when the patient and therapist are in the same location, the telepresence method includes biomechanical motion tracking of the patient relative to the AR 3D dynamic model of the person/object.
The AR 3D dynamic model is a software-based algorithmic process of the ANR systems and methods for generating an AR/VR visual scene that is animated based on the basic principles of neuroplasticity, entrainment, and mirror neuron science to facilitate progress toward clinical or training goals. The telepresence method is configured to provide the ability to manage an interactive video link between the therapist and the remote patient using the present invention. The telepresence method of the present invention supports projecting images/video from a remotely located patient to indicate the patient's relative position in the AR 3D dynamic model. The telepresence method may also provide the therapist with the ability to adjust the AR model in real time (its spatial position or which items are available) and allow the session to be modified. It also allows the therapist to view the relationship between the patient and the model.
Fig. 32 depicts a conceptual overview of the major components of an exemplary ANR system 3200 that uses closed-loop feedback to measure, analyze, and act on a person to facilitate progress toward clinical or training goals. It should be appreciated that the ANR system 3200 may be implemented using the various hardware and/or software components of the system 100 described above. As shown, the ANR system measures or receives inputs related to gait parameters, the natural environment, context/user intent (including past performance), physiological parameters, and real-time feedback of results (e.g., closed-loop, real-time decisions). One or a combination of these inputs is directed into a clinical thinking algorithm module (CTA) 3208, which controls the analysis of, and action on, this information. In some embodiments, an example of real-time feedback would be the CTA 3208 determining that the quality of the user's measured gait metrics (e.g., symmetry, stability, and gait cycle time variability) has exceeded the module's safety threshold, triggering a new prompting response. Such prompting responses may be visual and audio, for example adding a metronome audio layer and modifying the AR scene to increase beat salience in the music, in order to stimulate or guide the user toward a safer gait speed and locomotor behavior.
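The real-time feedback example above (gait quality crossing a safety threshold and triggering visual and audio prompts) can be sketched as follows. The metric names, threshold values, and response strings are hypothetical placeholders, not parameters of the CTA module 3208.

```python
from dataclasses import dataclass

@dataclass
class GaitMetrics:
    symmetry: float          # 1.0 = perfectly symmetric
    stability: float         # arbitrary 0..1 stability score
    cycle_time_cv: float     # coefficient of variation of gait cycle time

# Assumed safety thresholds, for illustration only.
SAFETY = {"symmetry_min": 0.85, "stability_min": 0.6, "cycle_time_cv_max": 0.08}

def cta_prompt(metrics: GaitMetrics):
    """Return prompting actions when measured gait quality crosses the safety threshold."""
    unsafe = (metrics.symmetry < SAFETY["symmetry_min"]
              or metrics.stability < SAFETY["stability_min"]
              or metrics.cycle_time_cv > SAFETY["cycle_time_cv_max"])
    if not unsafe:
        return []
    return ["add metronome audio layer",              # increase beat salience
            "modify AR scene to cue slower, safer gait"]

print(cta_prompt(GaitMetrics(symmetry=0.8, stability=0.7, cycle_time_cv=0.05)))
```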
In the CTA module 3208, the actions performed by the ANR system are defined based on the analysis of the inputs. An action may be output to the patient in the form of various types of stimuli, including music (or components thereof), cadence, sonification, spoken words, augmented reality, virtual reality, augmented audio, or tactile feedback. As shown in fig. 32, the decisions made by the clinical thinking algorithm regarding the various inputs, results, etc. are provided to an AR/AA output modeling module 3210, which is programmed to dynamically generate/modify the output for the patient. The output is provided to the patient via one or more output devices 3220 (such as visual and/or audio output devices and/or haptic feedback devices). For example, AR visual content may be output to AR glasses 3222 worn by the patient. Augmented audio content may be provided to the patient via audio speakers or headphones 3225. As will be appreciated, other suitable visual or audio display devices may be used without departing from the scope of the disclosed embodiments. Furthermore, while the various elements of the system are shown separately in fig. 32, it should be understood that features and functions of various aspects of the system may be combined.
The clinical thinking algorithm module 3208 in the medical/therapeutic system may be configured to implement clinical thinking algorithms focused on the restoration, maintenance, or enhancement of motor function, including but not limited to upper limb, lower limb, agitation, postural stability, foot drop, dynamic stability, wheezing, oral exercise, respiration, endurance, heart rate training, respiratory rate, optical flow, boundary support training, strategy training (ankle, knee, and hip), attractor coupling, muscle firing, training optimization, and gait. One or more of these CTAs may also be implemented in synchronization or in combination with other interventions having similar objectives. Examples may include implementing a CTA in conjunction with functional electrical stimulation, deep brain stimulation, transcutaneous electrical nerve stimulation (TENS), gamma-frequency audio entrainment (20-50 Hz), or other electrical stimulation systems. In addition, the administration and manipulation of a CTA may be combined with administration of an anti-spasmodic drug or a neurotoxin injection. For example, a CTA may be applied or initiated during the time window in which these interventions have been shown to be at peak effect, or used before that point is reached, as a method of priming the motor system.
ANR system input
The input to the ANR system 3200 is important to enable the system to measure, analyze, and operate in a continuous loop manner, helping to achieve clinical or training goals.
Receiving input at the system may include measuring the person's motion with sensors to determine biomechanical parameters of the motion (e.g., temporal, spatial, and left/right comparisons). These motion sensors may be placed anywhere on the body and may be a single sensor or an array of sensors. Other types of sensors may be used to measure other input parameters, which may include respiratory rate, heart rate, oxygen level, and temperature, using, for example, electroencephalography (EEG) for recording spontaneous brain activity, electrocardiography (ECG or EKG) for measuring cardiac electrical activity, electromyography (EMG) for evaluating and recording electrical activity generated by skeletal muscle, photoplethysmography (PPG) for detecting blood volume changes in the microvascular bed of tissue (typically using a pulse oximeter), optical sensors, inertial measurement units, video cameras, microphones, accelerometers, gyroscopes, infrared, ultrasound, radar, RF motion detection, GPS, barometers, RFID, humidity sensors, or other sensors for detecting physiological or biomechanical parameters. For example, in the exemplary ANR system 3200 shown in fig. 32, one or more sensors 3252 (e.g., an IMU, a footpad sensor, smartphone sensors (e.g., accelerometers), and environmental sensors) may be used to measure gait parameters. Further, one or more sensors 3254 (e.g., PPG, EMG/EKG, and respiratory rate sensors) may be used to measure physiological parameters.
In addition, contextual information regarding the intended outcome, usage environment, past session data, other techniques, and other environmental conditions may be received as input to the CTA module and adjust the CTA's response. For example, as shown in fig. 32, contextual information 3258 and environmental input 3256 may be received as inputs to further inform the operation of the CTA 3208. One example of using contextual information is that past information about the user's gait patterns may be used in conjunction with artificial intelligence (AI) or machine learning (ML) systems to provide more personalized clinical goals and actions for the patient. These goals may modify target parameters such as steps-per-minute limits, walking speed, heart rate variability, oxygen consumption (VO2 max), respiration rate, session length, asymmetry, variability, walking distance, or desired heart rate.
In addition, Bluetooth Low Energy (BLE) beacons or other wireless proximity techniques (such as wireless triangulation or angle of arrival) may be used to detect environmental information so that wireless location triggers cause elements to appear and/or disappear in the patient's field of view relative to the AR 3D dynamic model based on the detected location. In some embodiments, the AR 3D dynamic model output by the ANR system may be controlled by the therapist and/or beacon triggers to change or maintain the patient's navigation requirements. These triggers may be used with gait or physiological data as described above to provide additional triggers beyond the wireless beacon trigger. For example, gait data feedback from IMU products provides a gait feedback loop that gives the ANR system 3200 the ability to implement changes in the AR 3D dynamic model software process.
Clinical thinking algorithm
According to one or more embodiments, CTA module 3208 implements a clinical thought algorithm configured to control applied therapy to facilitate outcome toward a clinical or training goal. Clinical goals may include items related to fig. 18-31, and as further examples, anxiety interventions for alzheimer's disease, dementia, bipolar disorder, and schizophrenia, as well as training/physical activity goals. This section discusses different non-limiting exemplary techniques that may be used to provide appropriate rehabilitation responses for CTA determination, e.g., adjusting cadence speed and synchronizing AR vision scenes. Each of these techniques may be implemented using separate CTAs or combined with each other. In one or more embodiments, the system 3200 can be configured to combine CTA with entrainment principles for repetitive motion activities, and in other cases they can be combined with each other toward other targets.
By way of example and not limitation, the CTA module 3208 of the ANR system 3200 may be configured to create a virtual treadmill output through an AR/AA output interface using a combination of biomechanics, physiological data, and environment. While the treadmill maintains pace for someone through the movement of the physical belt, the virtual treadmill uses CTA to dynamically adjust according to the entrainment principle, in an independent manner, to adjust the person's walking or movement speed, similar to other athletic interventions. However, in addition to or instead of using rhythmic stimulation to drive an individual toward a biomechanical target as described above, the virtual treadmill may be generated and dynamically controlled based on entrainment of the patient toward target parameters (e.g., those listed above as target parameters).
The target parameters may be set based on clinician or user input, historical data, baseline conditions, or points of clinical significance. In addition, the target parameters may be set to follow recommendations for specific exercise length, duration, and intensity for certain conditions, such as heart disease, asthma, chronic obstructive pulmonary disease (COPD), fall prevention, musculoskeletal disease, osteoarthritis, and general aging.
Further described herein are exemplary embodiments of the ANR system 3200, wherein the CTA module 3208 is configured to utilize biomechanical data, physiological data, and environment to provide gait training therapy in the form of a virtual treadmill and rhythmic auditory stimuli output via the AR and AA output interface 3220.
Fig. 37 is a process flow diagram illustrating an exemplary routine 3750 for providing gait training therapy to a patient using the ANR system 3200. Fig. 38 is a hybrid system and process diagram conceptually illustrating aspects of the ANR system 3200 for implementing the gait training routine 3750, in accordance with an exemplary embodiment of the disclosed subject matter. As shown, the sensors 3252, particularly a foot-mounted IMU, capture sensor data related to gait parameters, which are provided to the CTA module 3208. Further, the AA/AR modeling component 3210, including an audio engine, receives input from the CTA and is configured to generate a set of audio cues including one or more of rhythmic music and cues, interactive speech guidance, and spatial and audio effects processing. Similarly, the AA/AR modeling component 3210, which includes an AR/VR modeling engine (also referred to as the AR 3D dynamic model), is shown receiving input from the CTA and is configured to generate a set of visual cues including one or more of virtual AR actors and objects (e.g., a virtual human walking), background motion animations (e.g., a virtual treadmill, footsteps/footprints, and animations), and scene lighting and shadows.
Fig. 39 is a hybrid system and process diagram conceptually illustrating an exemplary audio output device 3225 and the enhanced audio generating components of the ANR system 3200 in more detail. As shown in fig. 39, in one embodiment, the AA device may capture ambient sound using, for example, a stereo microphone. The AA device may also generate audio output using stereo transducers. The AA device may also include a head-mounted IMU. As will be appreciated, the AA device may also include audio signal processing hardware and software components for receiving, processing, and outputting the enhanced audio content received from the AA/AR module 3210, alone or in combination with other content such as ambient sound. As shown, the CTA module 3208 receives gait parameters, including gait parameters received from sensors including a foot-worn IMU and a head-worn IMU. Furthermore, in one embodiment, the CTA receives data related to physiological parameters from other sensor devices such as PPG sensors.
Returning now to fig. 37, at step 3700 the patient, wearing the AR/AA output device 3220 and the IMU sensors 3252, begins walking while the ANR system 3200 calibrates and collects preliminary gait data (such as stride length, speed, gait cycle time, and symmetry). At step 3701, the CTA module 3208 determines a baseline tempo for both the music playback and the virtual AR scene to be displayed. For example, as described in connection with fig. 18, the baseline tempo may be determined by the CTA.
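By way of a non-limiting illustration, the baseline-tempo determination of step 3701 might be sketched as follows in Python; the use of the median and the clamping range are assumptions introduced for the example, not the CTA's actual logic.

```python
# Hypothetical sketch of the baseline-tempo step (step 3701), assuming gait
# cycle times (seconds per full left+right cycle) collected during calibration.

from statistics import median

def baseline_tempo_bpm(gait_cycle_times_s, lo_bpm=40.0, hi_bpm=140.0):
    """Return a starting beat rate (beats per minute) matched to the
    patient's self-selected cadence, clamped to a safe playback range."""
    gct = median(gait_cycle_times_s)      # robust to outlier strides
    step_time = gct / 2.0                 # one step = half a gait cycle
    bpm = 60.0 / step_time                # one beat per step
    return max(lo_bpm, min(hi_bpm, bpm))

# e.g. a ~1.1 s gait cycle -> roughly 109 BPM starting tempo
print(round(baseline_tempo_bpm([1.12, 1.08, 1.10, 1.11]), 1))
```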
Using the user's gait cycle time as input, the audio engine (i.e., the audio modeling component of the AR/AA modeling module 3210) generates correspondingly tempo-adjusted music and any supplemental tempo cues, such as metronome sounds, to enhance beat salience at step 3702.
At step 3703, the visual AR engine (i.e., the visual modeling component of the AR/AA modeling module 3210) generates a moving virtual scene, as understood in the video game industry. More specifically, in one embodiment, the virtual scene includes visual elements presented under the control of the CTA and shares a common timing reference with the audio engine so as to synchronize the elements of the visual scene with the music and tempo. Although the AR scene described herein includes a virtual treadmill or a virtual person and footsteps, the AR scene may be any one or more of the various examples discussed herein, such as a virtual treadmill, a walking virtual person, a virtual crowd, or a dynamic virtual scene.
At steps 3704 and 3705, the music/cadence and visual content are delivered to the patient using the AR/AA device 3220, such as a lightweight heads-up display with headphones 3225 (e.g., AR goggles 3222). Under the control of the CTA, the patient receives instructions regarding the treatment via voice-over cues generated by the audio engine at steps 3706 and 3707. This may include a pre-gait-training preview so that the patient can become accustomed to, and practice with, the visual scene and audio experience.
Fig. 40 shows an exemplary AR virtual scene 4010 presented for entrainment of a patient. As shown, the scene may include an animated 3D image of another person walking "in front of" the patient, with steps and walking actions synchronized with the music tempo. More specifically, in one embodiment, the AR actor walks at the same rate as the baseline beat speed of the audio content generated by the CTA and audio engine. In this example, the patient's goal may be to have his or her steps rhythmically matched to the audio and visually matched to the actor. Further, as shown in fig. 40, the AR scene may include multiple footsteps, with additional cues, such as L and R, indicating the left and right feet.
A scene including footsteps may move virtually toward the patient at a prescribed rate while a virtual actor walks in front of the patient in a direction away from the patient. In one embodiment, the scene may move according to the gait cycle time, with the CTA-defined music tempo corresponding to one beat per step, i.e., tempo (beats per minute) = 60 / (GCT(right foot) / 2). Further, the additional cues generated in connection with the AR scene may include rhythmic audio cues that reinforce the visual cues. For example, one effective method of reinforcement may include the AA system 3210 generating the footstep sounds of the virtual actor in synchronization with the cadence, simulating a group marching to a common beat. By providing the patient with a rhythmic audio component generated based on the visual stimulus, and timing the movement of the visual elements with the rhythmic audio stimulus, the system further reinforces the virtuous cycle between the underlying therapeutic concepts of entrainment and mirror neurons.
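As a purely illustrative sketch of this audio-visual coupling (the function name, stride value, and the one-footprint-per-beat convention are assumptions, not the specification's implementation), the relationship can be expressed in a few lines of Python:

```python
# Illustrative sketch of coupling scene motion to the CTA-defined tempo:
# footprints spaced one step apart scroll toward the patient so that exactly
# one footprint arrives per beat.

def scene_motion(tempo_bpm: float, stride_length_m: float):
    beat_period_s = 60.0 / tempo_bpm                 # time between beats/steps
    step_length_m = stride_length_m / 2.0            # distance between footprints
    surface_speed = step_length_m / beat_period_s    # m/s the virtual surface scrolls
    return {"beat_period_s": beat_period_s,
            "footprint_spacing_m": step_length_m,
            "surface_speed_m_per_s": surface_speed}

# e.g. 110 BPM and a 1.3 m stride -> the virtual surface scrolls at ~1.19 m/s
print(scene_motion(110.0, 1.3))
```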
By way of further example, fig. 41 shows an exemplary AR virtual scene 4110 presented to a patient. As shown in fig. 41, the virtual treadmill may be generated and dynamically controlled based on entrainment of the patient toward the target parameters (e.g., the target parameters listed above). In this example, the generated AR treadmill animates the motion of the treadmill surface 4115 and generates virtual steps at the same rate as the CTA defines for the auditory stimulation. Further, the 3D animation of the virtual treadmill may include visually highlighted steps or tiles that the patient may use as visual targets while entraining to the cadence generated under CTA control. Thus, in this example, the patient's goal is to rhythmically match their steps with the audio and visually match them with the animated target steps. As further shown in fig. 41, animated footsteps 4120 may be shown on the surface of the virtual treadmill 4115 moving in the direction toward the patient, indicated by the directional arrow, which may reverse if the patient is performing a backward-walking training exercise. In addition, the animations highlighting the target steps 4125L (left step) and/or 4125R (right step) are highlighted in synchronization with the CTA cadence to prompt the user to step with the corresponding foot (e.g., left or right foot). Furthermore, as further described herein, virtual scenes such as those shown in fig. 40-41 may be dynamically adjusted in accordance with corresponding changes in the patient's Entrainment Potential (EP) and the audio stimulus. Other adjustments to the virtual scene may include changing the virtual background environment to simulate different walking scenes, weather, surfaces, lighting, and inclines.
Returning now to fig. 37, as the patient begins to walk, at step 3708 the biomechanical sensors (e.g., sensors 3252) measure real-time data for assessing the patient's entrainment level. At step 3709, the CTA module 3208 determines an entrainment potential (e.g., as shown in fig. 18), which is used to determine how to achieve the training session objective. As discussed in previous embodiments of the present disclosure, the entrainment potential may be the basis for modifying the rhythmic audio stimulus and the visual scene (occurring at step 3710). For example, the CTA analyzes the input data history of the patient's gait cycle times as compared to the cadence intervals of the beats transmitted to the patient by the audio device. Exemplary methods for modifying audio stimuli based on entrainment potentials are described above. In one example, if the EP values calculated for the patient's steps over a period of time are not within a prescribed range of acceptable EP values and/or are not sufficiently consistent, the CTA module may instruct the AA/AR modeling module to adjust (e.g., decrease) the speed of the RAS and adjust the speed of movement of the AR scene accordingly, in synchronization with the RAS. In another example, if enough of the step times are in phase with the beat times, the patient is considered by the CTA to be entrained.
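For illustration only, a per-step EP computed as the ratio of the step interval to the beat interval (consistent with the EP ≈ 1 reading described for fig. 33 below) might be sketched as follows; the tolerance band and the "enough steps in phase" fraction are assumed values, not the CTA's prescribed thresholds.

```python
# Minimal sketch of a per-step entrainment check, assuming EP is taken as the
# ratio of the measured step interval to the beat interval (EP ~ 1.0 when the
# patient steps once per beat); tolerance values are illustrative only.

def entrainment_potentials(step_times_s, beat_period_s):
    """Return one EP value per step interval."""
    intervals = [t2 - t1 for t1, t2 in zip(step_times_s, step_times_s[1:])]
    return [iv / beat_period_s for iv in intervals]

def is_entrained(eps, tolerance=0.1, min_fraction=0.8):
    """Consider the patient entrained when enough recent EP values sit
    within the prescribed band around 1.0."""
    in_band = [abs(ep - 1.0) <= tolerance for ep in eps]
    return sum(in_band) / len(in_band) >= min_fraction if eps else False

beat_period = 60.0 / 110.0                        # 110 BPM RAS
steps = [0.00, 0.56, 1.11, 1.66, 2.21, 2.77]      # measured step times (s)
eps = entrainment_potentials(steps, beat_period)
print([round(e, 2) for e in eps], is_entrained(eps))
```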
Once entrained, the CTA evaluates whether the patient has reached the target at step 3711. If the target has not been reached, one or more target parameters may be adjusted at step 3712. For example, the CTA compares the RAS speed and the associated AR scene speed to a target speed parameter (e.g., the training/therapy target) before changing the cadence speed and/or scene movement speed in view of the comparison. An exemplary method that may be implemented by the CTA module 3208 for adjusting rhythmic auditory stimulation according to an entrainment potential is shown and described above, for example, in connection with fig. 18.
For example, if the patient has not reached their training speed target, modifying the target parameters may include increasing or decreasing the music tempo at step 3712. This drives the patient faster or slower via the RAS mechanism of action. Alternatively, another training goal may be to extend the patient's stride length, which may be accomplished by slowing the motion speed parameters of the imagery. By modifying the visual scene, the patient is driven to imitate the visual example presented to them through the mirroring mechanism of action.
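A minimal, hypothetical sketch of the step 3711/3712 decision follows; the 5% increment and the hold-until-entrained rule are illustrative assumptions rather than the CTA's prescribed behavior.

```python
# Hedged sketch of the step-3711/3712 decision: if the patient is entrained
# but below the training-speed target, nudge the tempo (and the synchronized
# scene speed) upward; the 5% increment is an assumed, illustrative value.

def next_tempo(current_bpm, target_bpm, entrained, increment=0.05):
    if not entrained:
        return current_bpm                  # hold (or lower) until entrained
    if current_bpm < target_bpm:
        return min(target_bpm, current_bpm * (1.0 + increment))
    return current_bpm                      # target reached; maintain

print(next_tempo(100.0, 112.0, entrained=True))   # -> 105.0
```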
It is important to understand that the audio and visual outputs are mutually reinforcing stimuli: the visual scene is layered in synchronization with the rhythmic stimulus. Depending on the program selected (e.g., the CTA), the CTA module 3208 dynamically adjusts the visual scene and cadence speed to meet the treatment goal.
The above examples illustrate how the ANR system 3200 using the CTA module 3208 controls synchronization of musical tempo and AR scene based on biomechanical sensor inputs to facilitate gait training. It should be appreciated that the principles of the present embodiments are applicable to many disease indications and rehabilitation scenarios.
In one example configuration, the ANR system 3200 may generate a virtual treadmill to adjust a patient's walking toward a target parameter of oxygen consumption. In this example, the virtual treadmill is generated and controlled to adjust the walking speed toward an oxygen consumption or efficiency target parameter using entrainment. Fig. 33 is a graphical visualization of a real-time session performed using the ANR system 3200, with VO2max as the target parameter, speed variation serving as a temporary target, and entrainment used to drive the physiological changes related to VO2max. Fig. 33 shows an example of how this process works in real time. More specifically, fig. 33 is a graphical user interface illustrating various salient data points and parameter values measured, calculated, and/or adjusted in real time by the ANR system during a session. As shown, the top of the interface shows a chart of entrainment potential values for each step, calculated in real time throughout the session. In particular, the top bar graph shows the individual EP calculated for each step, which in this case is the phase relationship between the step time interval and the beat time interval. The central area around EP = 1 represents steps that are sufficiently entrained to the beat, or in other words, steps whose EP value is within a prescribed range. The next window provides a status bar showing whether the parameter is within the safe range. The next window shows the real-time responses driven by the CTA, particularly based on measured parameters, entrainment, and the other aforementioned inputs and feedback to the CTA. In particular, the circle icons represent algorithmic responses, which include changes in speed and changes in rhythmic stimulus level (e.g., volume). The next column shows only the speed and the speed changes themselves. The next window shows the real-time rate of rhythmic stimulation provided to the patient over time according to the CTA response. The bottom window displays measured oxygen consumption and the target parameter over time. Although not shown in fig. 33, it should be understood that an augmented reality scene (e.g., a virtual treadmill) may be presented to the patient, wherein the visual elements animate in synchronization with the rhythmic stimulus and dynamically adjust in synchronization with adjustments to the real-time speed of the rhythmic stimulus. Examples of how the AA/VR module 3210 is configured to synchronize visual animation speed and audio may include defining a relationship between the rate of the displayed repetitive motion and the speed of the audio cues. For example, based on the beat speed, the speed and stride of the treadmill are calculated to define a relationship between audio and visual elements. Furthermore, the reference position of the treadmill, the timing of the steps, and any beat-timed animations are synchronized with the output times of the beats, including the beat speed. Using time-scaling and video frame interpolation techniques known in the animation industry, a wide range of synchronized virtual scenes can be generated programmatically by the AA/VR module 3210 based on the defined relationships between audio and visual elements.
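As an illustrative sketch of the time-scaling relationship described above (the authored reference tempo, frame rate, and function names are assumptions, and a production system would rely on its animation engine's own retiming and frame-interpolation facilities), an animation clock can be retimed against the current RAS tempo as follows:

```python
# Illustrative only: if the walking animation was authored at a reference
# tempo, a time-scale factor keeps its footfalls locked to the current RAS
# tempo, and frame interpolation fills in the retimed frames.

def playback_rate(current_tempo_bpm, authored_tempo_bpm=100.0):
    """Scale factor applied to the AR actor/treadmill animation clock."""
    return current_tempo_bpm / authored_tempo_bpm

def retimed_frame_index(t_seconds, fps, rate):
    """Which source frame to sample at wall-clock time t (a fractional index
    implies interpolating between neighbouring frames)."""
    return t_seconds * fps * rate

rate = playback_rate(88.0)          # RAS slowed to 88 BPM -> rate 0.88
print(rate, retimed_frame_index(2.0, 30, rate))   # frame ~52.8 at t = 2 s
```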
Fig. 34 is a graph depicting metabolic changes of 7 patients during initial training (each represented by a corresponding pair of points connected by a dashed line). Fig. 34 shows data supporting the premise that intentional entrainment may improve an individual's oxygen consumption. In this case, the graph shows the oxygen consumption (ml oxygen/kg/meter) of a person before and after rhythmic training using the ANR system. The figure shows an average reduction of 10%. The results indicate that the entrainment process can improve endurance and reduce energy consumption while walking.
The above-described process may be similarly implemented for each of the various possible target parameters described above (e.g., oxygen consumption may be exchanged for an alternative target, such as heart rate), and may be used for walking or other interventions discussed in connection with fig. 18-31.
According to one or more embodiments, the ANR system 3200 may be configured to compare real-time measurement information about a person's movements with the AR imagery and/or the composition of the music content (e.g., instantaneous speed, tempo, harmony, melody, etc.) output to the patient during a treatment session. The system may use this comparison to calculate entrainment parameters, determine phase entrainment, or establish baseline and entrainment characteristics. For example, the AR imagery may move at the same speed or pace as the target parameter. Alternatively, the motion of the AR imagery may be entrained to the rhythmic stimulus in synchronization with how the person should move.
One example of an AR 3D dynamic model output may include a projected therapist (virtual actor) or a person walking in the patient's field of view (virtual actor), initiated by the person administering the treatment (the real therapist). For example, fig. 35 shows a view of a therapist or trainer projected in front of a patient or trainee via AR, using AR glasses such as are known in the art. The AR 3D dynamic model is controlled by one or more CTAs. In combination with the CTA, the virtual therapist may begin with the method shown and described in connection with fig. 22 and then have the patient perform a gait training regimen as shown and described in connection with fig. 18. Alternatively, both tasks may be performed simultaneously. In these cases, the virtual actor may be controllably displayed by the system to walk or move backward or forward in a smooth motion similar to the patient's unaffected side. This may activate mirror neurons, whereby the affected neurons are "encouraged" to mirror what appears to be unaffected movement. The process may also include providing audio stimuli to synchronize the virtual and/or actual person with the stimuli.
In another example, the AR 3D dynamic model may be configured to simulate a scene of the patient walking in or around a crowd of people and/or people with objects in front of and/or to the sides of the patient. For example, fig. 36A shows a view of a crowd projected in front of a patient by AR. The system may be configured to project a crowd or person traveling faster or slower than the person's baseline to encourage them to move, or to stop/start, at a similar speed in a real-world natural environment. The crowd or person may be entrained to the rhythmic auditory stimulus or to the beat of another desired target. The AR 3D dynamic model may initiate navigation at different difficulties. It should be appreciated that the AR views of therapists, crowds, people, obstructions, etc. may be dynamically adjusted using the AR 3D dynamic model based on the CTA output.
In another example, the AR 3D dynamic model may be configured to simulate a scene of the patient walking in or around an arrangement of cones that implements a virtual obstacle course for patient navigation. Cones are common obstacles in therapeutic environments; however, other embodiments may be configured to simulate normal activities of daily living (e.g., grocery shopping). These cones and virtual obstacles may encourage changes of direction: walking sideways and backward, not just forward. Here, wireless beacon triggers may also be used to cause the ANR system to make cones appear and/or disappear. The beacons trigger based on detecting the location of the person relative to the cones. In addition, different degrees of difficulty can be set for navigation time and path length. The target parameter in this example may be a measure of walking speed or walking quality. A successful navigation is one that goes around the cones without actually touching them. The system may be configured to present more difficult levels (e.g., more obstacles and faster speeds) as long as the person successfully avoids the obstacles and walking quality is not reduced (as measured by increased variability or worsening asymmetry).
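One hypothetical way to express this progression rule (the thresholds, level bounds, and sign conventions are illustrative assumptions, not the system's defined logic) is:

```python
# Hedged sketch of the progression rule described above: advance to a harder
# obstacle level only when the course was cleared and gait quality did not
# degrade; thresholds and bounds are assumptions for illustration.

def next_level(level, cleared, variability_change, asymmetry_change,
               max_level=10, tol=0.0):
    """variability_change / asymmetry_change: positive values mean worse."""
    quality_ok = variability_change <= tol and asymmetry_change <= tol
    if cleared and quality_ok:
        return min(level + 1, max_level)
    if not cleared:
        return max(level - 1, 1)            # ease off after a failed run
    return level                            # cleared, but gait quality slipped

print(next_level(3, cleared=True, variability_change=-0.01, asymmetry_change=0.0))
```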
In another example, the AR 3D dynamic model may be configured to simulate a scene of the patient walking in or around a cave and/or cliff, which may include realistic obstacles. The added realism enhances the navigational detail relative to the previously described use cases. In another example, for a person with an asymmetric gait pattern, a serpentine path may be presented that requires the person to take a longer step on the affected side. This serpentine path may also be a single cliff edge, where the person must cross the valley without falling. Wireless beacon triggers may be used to cause the ANR system to make cave and/or cliff obstacles appear and/or disappear, thereby changing the difficulty of the navigation time and path length. The system may use the sensor data to synchronize motion to the serpentine path. The patient's navigational requirement may be a biomechanical response for navigating changes from a baseline prescribed heading. The system is configured such that wireless spatial and temporal beacon triggers effect changes in the AR 3D dynamic model. The temporal aspect of these wireless triggers is the ability to turn them on and off, which allows maximum flexibility in scripting the navigation path for the heading that the patient should take as part of the treatment session. The target parameter here is a measure of walking speed or walking quality. A successful navigation is one that stays on the path without stepping off it or falling off the cliff. The system may be configured to present more difficult levels (e.g., more obstacles and faster speeds) as long as the person successfully stays on the path and walking quality is not reduced (as measured by increased variability or worsening asymmetry).
In another example, the AR 3D dynamic model may be configured to simulate a scenario in which the patient is standing or sitting still but is required to step as virtual objects appear and approach each foot. For example, fig. 36B shows a view of footprints projected in front of a patient by AR. The ANR system may generate a virtual scene in which objects approach the patient's left or right side to encourage side stepping. The objects are presented so as to approach the patient at a predefined speed or beat, following the decision tree depicted in fig. 22. A visual representation of the correct motion, obtained from the therapist or from the patient's past treatment, can also be projected.
In another exemplary AR 3D dynamic model implementation, the ANR system may be configured to incorporate haptic feedback into the therapy. For example, if the user is too close to an object or person in the projected AR environment, the system may use haptic feedback as a signal. Rhythmic tactile feedback may also be synchronized with the audible cues to amplify the sensory input. Such cueing may also be adaptively and individually activated to signal the onset of locomotion, for example during freezing of gait in Parkinson's disease patients.
In another exemplary AR 3D dynamic model implementation, the ANR system may be further configured to combine eye and head tracking. The tracking may be incorporated as feedback to an ANR system configured to trigger audible input in response to the position that the eyes or head are facing. For example, for a person with left-side neglect who interacts only with the right side of the natural environment, eye and head tracking may provide input as to how much of the left half of their environment is being attended to, and trigger the system to generate audible cues to divert more attention to the left side. These data can also be used to track progress over time, since clinical improvement can be measured by the degree of awareness of each hemifield. Another example is a person suffering from an eye movement disorder, whose left-to-right visual scanning can be improved by scanning to an external auditory rhythm.
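A minimal sketch of such a neglect-cue trigger, assuming head-yaw samples in which negative values indicate leftward orientation and an illustrative attention threshold, might look like this:

```python
# Minimal sketch (assumed thresholds) of the left-neglect aid: if too little
# recent gaze/head time is spent on the left half of the scene, trigger an
# audible cue panned to the left.

def left_attention_fraction(yaw_samples_deg):
    """Fraction of samples in which the head/eyes point left of the midline."""
    left = sum(1 for yaw in yaw_samples_deg if yaw < 0.0)
    return left / len(yaw_samples_deg) if yaw_samples_deg else 0.0

def needs_left_cue(yaw_samples_deg, min_fraction=0.35):
    return left_attention_fraction(yaw_samples_deg) < min_fraction

yaws = [12, 8, 15, 3, -2, 10, 18, 7, 1, 9]      # mostly rightward orientation
print(needs_left_cue(yaws))                      # -> True: cue the left side
```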
In another exemplary AR 3D dynamic model implementation, the ANR system may be configured to provide a digital presentation of past sessions to display improvements to the user. These models may be played back after a session to compare progress across different sessions or treatments. Digital presentations of past sessions (or augmented sessions), when paired with the audio inputs of the session, may be used as a mental imagery task to exercise and limit fatigue between walking sessions. The model will show differences in walking speed, cadence, stride length, and symmetry to help show the user's changes over time and how the treatment might improve their gait. The therapist may also use this presentation prior to the start of a session to assist in preparing a training program or technique for the subsequent session. Researchers and clinicians can also use such modeling to better visualize and replay evolving 3D images of patient progress.
In another exemplary AR 3D dynamic model implementation, an AR/VR environment synchronized with music content may create different walking or dance patterns, including elliptical, spiral, and serpentine paths, paths intersecting with others, and dual-task walking. Dance rhythms such as the tango have been shown to provide the benefits of Neurologic Music Therapy (NMT) and RAS, applicable to the whole body.
According to one or more embodiments, the ANR system may be configured to utilize AA technology to enhance the entrainment process, provide an environmental context for humans, and aid in the AR experience. To enhance the recovery process, the system may be configured to generate an exemplary AA experience described further herein based on input obtained from the environment, sensor data, AR environment, entrainment, and other methods.
One example of AA for a treatment/medical use case is to address safety issues and mitigate risk to the patient undergoing treatment exercises. The ANR system may be configured to improve situational awareness while listening to music with headphones by transiently mixing external sounds exceeding a minimum loudness threshold into the therapeutic rhythm and audio cue content. Examples of such external sounds are a car horn or an emergency vehicle siren, which automatically interrupt the normal auditory stimulus, making the person aware of the potential hazard. To perform this and other functions, the listening device may have additional microphones and digital signal processing dedicated to this task.
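Purely as an illustration of the gain logic (the dBFS threshold and gain pairs are assumed values, and a real implementation would run in the device's DSP path), the mixing decision might be sketched as:

```python
# Illustrative gain logic only: when ambient sound captured by the headset
# microphones exceeds a loudness threshold, duck the therapy audio and pass
# the ambient signal through.

def mix_gains(ambient_dbfs, threshold_dbfs=-30.0,
              normal=(1.0, 0.0), alert=(0.3, 1.0)):
    """Return (therapy_gain, ambient_passthrough_gain)."""
    return alert if ambient_dbfs >= threshold_dbfs else normal

print(mix_gains(-45.0))   # quiet street    -> (1.0, 0.0)
print(mix_gains(-18.0))   # car horn/siren  -> (0.3, 1.0)
```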
In another embodiment, an ANR system implementing AA may be configured to combine aspects of AA and spatial-perception manipulation by aligning rhythmic auditory cues with the "affected side" of a patient as the patient is engaged in an ambulatory treatment session. For example, if a greater degree of treatment is required on the right side of the patient, the audio cue may be spatially aligned with the right side for emphasis. Exemplary systems and methods for neurological rehabilitation using side-specific rhythmic auditory stimulation techniques are disclosed in co-pending and co-assigned U.S. patent application No. 62/934,457, entitled "Systems and Methods for Neurologic Rehabilitation," filed on 11/12/2019, the entire contents of which are hereby incorporated by reference as if set forth in their entirety herein.
In another embodiment, an ANR system implementing AA may be configured to provide unique audible cues to increase spatial awareness of head position during gait training, encouraging the user to keep the head up, on the midline, with eyes forward, thereby improving balance and spatial perception while undergoing the entrainment process or another CTA experience.
In another embodiment, an ANR system implementing AA may be configured to provide binaural beat sounds and correlate them with human physiology, such as respiratory rate, brain electrical activity (EEG), and heart rate, to improve cognition and enhance memory. The ANR system may be configured to provide a binaural beat audio signal input complementary to the RAS signal input. The real-time entrainment and gait quality measurements made by the system are also supplemented by physiological measurements. For example, a system configured for binaural beat audio uses signals of different frequencies output to the left and right ears, with a difference of 40 Hz, the "gamma" frequency of neural oscillations. These frequencies can reduce amyloid accumulation in Alzheimer's patients and help improve cognitive flexibility. By delivering such audio signals to the user via the AA device as the user performs RAS gait training, a second type of neural entrainment may be achieved simultaneously with the biomechanical RAS entrainment. The network hypothesis of brain activation suggests that walking and cognition may be jointly affected. Thus, such auditory sensory stimuli may entrain neural oscillations in the brain, while the rhythmic auditory stimulus entrains the motor system.
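For illustration, a binaural-beat signal with a 40 Hz inter-ear difference can be sketched as two sine tones of slightly different frequency delivered separately to each ear; the 240 Hz carrier and the sample rate are assumptions for the example, not values specified by the system.

```python
# Minimal sketch (assuming a 240 Hz carrier) of a binaural-beat signal with a
# 40 Hz inter-ear difference, generated as separate left/right sine tones.

import math

def binaural_beat(duration_s=1.0, fs=44100, carrier_hz=240.0, beat_hz=40.0):
    n = int(duration_s * fs)
    left = [math.sin(2 * math.pi * carrier_hz * i / fs) for i in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * i / fs)
             for i in range(n)]
    return left, right        # play left[] to the left ear, right[] to the right

l, r = binaural_beat(0.01)    # 10 ms of samples for a quick check
print(len(l), len(r))         # -> 441 441
```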
In another embodiment, an ANR system implementing AA may be configured to provide a phase-coherent sound field (e.g., a correct audio spatial perspective) when the patient rotates his or her head or changes pose. A sound field is a perceived three-dimensional image created by stereo speakers or headphones; it allows the listener to hear the position of a sound source accurately. One example of manipulating the sound field during treatment is keeping the sound of a virtual trainer "in front of" the patient even when the patient's head turns to one side. This function helps avoid disorientation, creating a more stable, predictable, and safe audio experience during treatment. This feature may be combined with the AR virtual trainer/therapist projected in front of the person in fig. 35. It may also be combined with knowledge of the heading or direction a person needs to take in the real world.
In another embodiment, the ANR system may be configured to combine AA with augmented reality (AR) in such a way that, when the patient is synchronized with a virtual crowd, virtual sound effects (e.g., encouragement and crowd footsteps) create a coherent sound field aligned with the patient's visual gaze. The audio may also produce a sensation of distance from, or proximity to, an object. Changing the spatial position or loudness in this way may also be used to define a target object in combination with AR and 3D imagery.
In another embodiment, the ANR system may be configured to combine AA with augmented reality (AR) in a manner that creates virtual instrument therapies. Instruments such as bells, drums, pianos, and guitars are training tools commonly used with patients. By creating digital models of these instruments and providing AA feedback on interaction, the patient can have an immersive experience and feel that they are actually playing the instruments. The difficulty can be modified to help the patient make progress over time and show improvement. Examples of modifications may include adding more keys on a piano or more strings on a guitar. In addition to the virtual instrument, a virtual sheet-music score or musical notation may be displayed in real time as the patient plays the instrument (virtual or real). Other examples may be combined with the concepts discussed in connection with fig. 19, where the connected hardware may be replaced by AR. Similar logic may be applied to other recorded interventions.
In another embodiment, the ANR system may be configured to implement AA in conjunction with telepresence, thereby providing a spatially accurate audio experience for therapists. Audio may also be generated to create a perception of distance from, or proximity to, an object. By changing the spatial position or loudness of the AA and utilizing the AR model, the system can more effectively determine whether the patient meets the goals associated with playing the virtual instrument and provide them with a more accurate spatial experience.
According to one or more embodiments, the ANR system may implement a class of AA, namely a Rhythmic Stimulus Engine (RSE). The rhythmic stimulus engine is a custom rhythmic auditory stimulus that embodies the principles of entrainment to drive the therapeutic effect while generating original, customized auditory rhythmic content for the patient. For certain disease states, such as Parkinson's disease, it is also beneficial to have a constant rhythmic "score" in the patient's environment. The RSE can be configured to provide such continuous background rhythmic neural stimulation without accessing pre-recorded music. In one example, the ANR system may be configured to implement AA in conjunction with a Rhythmic Stimulus Engine (RSE) and AR to create a fully synchronized feedback state between input biometric data, external audio input from the treatment environment, and the generated rhythmic content, AR, and AA output. In another example, the system may be configured to interactively adjust the speed of the rhythmic audio content generated by the RSE based on the patient's walking pace. In another example, the speed and time signature of the rhythmic audio content generated by the RSE may be interactively adjusted based on the entrainment accuracy and beat factor of the patient user (e.g., a user using a cane or assistive device). In another example, the RSE can provide neural stimulation in combination with assistive technologies (e.g., exoskeleton and/or FES devices) to enhance the effectiveness of walking therapy. In another example, the RSE may generate rhythmic audio content from a stored library of traditional dance rhythm templates, extending therapy to the patient's upper body and limbs. This can be extended to be combined with the AR technology described above (e.g., crowd dancing or a virtual dance floor). In another example, machine learning techniques such as self-learning AI and/or rule-based systems may generate a real-time modulated cadence from Inertial Measurement Unit (IMU) inputs reporting the quality of gait parameters, e.g., symmetry and gait cycle time variability. Using unsupervised ML clustering or decision tree models, various gait patterns can be used as inputs to the generative music system.
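A hedged, rule-based sketch of such IMU-driven tempo modulation follows; the symmetry and variability thresholds and the 5% slowdown are assumptions used only to illustrate the idea, not the RSE's actual model.

```python
# Hedged sketch of a rule-based RSE adjustment: tempo modulation driven by
# IMU-derived gait-quality measures. The specific rules and magnitudes are
# assumptions, not the engine's actual model.

def modulate_tempo(base_bpm, symmetry, gct_cv):
    """symmetry in [0, 1] (1 = perfectly symmetric); gct_cv is the
    coefficient of variation of gait cycle time (higher = less steady)."""
    bpm = base_bpm
    if gct_cv > 0.05:            # unsteady timing: slow down slightly
        bpm *= 0.95
    if symmetry < 0.9:           # marked asymmetry: do not exceed the base tempo
        bpm = min(bpm, base_bpm)
    return round(bpm, 1)

print(modulate_tempo(104.0, symmetry=0.95, gct_cv=0.08))   # -> 98.8
```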
According to one or more embodiments, the ANR system may implement a class of AA, namely sonification, in which different amounts of signal distortion are applied to the music content depending on how far the patient is from the target. The degree and type of sonification helps push the patient toward the corrected condition. The combination of sonification and entrainment may provide a feed-forward mechanism for auditory-motor synchronization through entrainment, while a feedback mechanism is provided through distortion of the music content as a function of some other biomechanical or physiological parameter that the individual can adjust. For example, increasing signal distortion in the music signal while increasing the volume of the tempo cue may be more effective in combination than either approach alone.
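As a purely illustrative mapping (the monitored parameter, full-scale error, and maximum drive are assumptions), the distortion amount might scale with the deviation from a target as follows:

```python
# Illustrative sonification mapping only: the amount of distortion applied to
# the music scales with how far a monitored parameter is from its target, so
# the signal "cleans up" as the patient approaches the corrected state.

def distortion_amount(value, target, full_scale_error, max_drive=0.8):
    """Return a distortion drive level in [0, max_drive] proportional to the error."""
    error = abs(value - target) / full_scale_error
    return max_drive * min(1.0, error)

# e.g. trunk lean of 12 deg vs a 0 deg target, with 20 deg treated as full scale
print(distortion_amount(12.0, 0.0, 20.0))    # -> 0.48 of maximum drive
```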
According to one or more embodiments, the ANR system may apply the CTA in conjunction with neurotoxin injection, as follows. The CTA can apply the entrainment principle to improve motor functions such as gait. Neurotoxin injections can also improve gait by addressing muscle spasticity. These injections take 2-4 days to take effect, and their effect lasts on the order of 90 days (e.g., the expiration date). The dosing of the CTA (e.g., the setting of one or more parameters of the CTA) used for the entrainment principle can be matched to the efficacy profile of the neurotoxin injection, with lower training intensity during the period before the injection takes effect and increasing intensity over the effective period.
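One hypothetical way to align training intensity with the stated injection profile (the ramp shape and intensity bounds are assumptions used only for illustration) is:

```python
# Hedged sketch of aligning training intensity with the stated injection
# profile (2-4 days to take effect, ~90-day duration); the ramp shape is an
# assumption used only to illustrate the idea.

def training_intensity(days_since_injection, onset_days=3, duration_days=90):
    """Return a relative training intensity between 0.3 and 1.0."""
    if days_since_injection < onset_days:
        return 0.3                                   # light sessions pre-onset
    if days_since_injection > duration_days:
        return 0.5                                   # taper after the effect expires
    progress = (days_since_injection - onset_days) / (duration_days - onset_days)
    return 0.3 + 0.7 * min(1.0, progress)            # ramp up over the effective window

print(training_intensity(2), training_intensity(45), training_intensity(90))
```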
According to one or more embodiments, the ANR system may be configured to calculate entrainment parameters using heart rate or respiration rate synchronized with the musical content, rather than biomechanical motion parameters. An example use case is a person suffering from anxiety associated with conditions such as dementia, Alzheimer's disease, bipolar disorder, or schizophrenia. In this use case, the baseline parameter may be determined from heart rate or respiration rate. Entrainment or phase entrainment may be determined by comparing the musical content to the heart rate or respiration. In addition, goals may be set to reduce anxiety and thereby improve the quality of life of these individuals.
As can be seen from the above discussion, in accordance with one or more embodiments of the present disclosure, a system for enhancing neurological rehabilitation of a patient may include one or more of the following:
a computing system having one or more physical processors configured by software modules comprising machine-readable instructions. The software modules may include a 3D AR modeling module that, when executed by the processor, configures the processor to generate and present augmented reality visual and audio content to the patient during a treatment session. The content includes visual elements that move in a prescribed spatial and temporal sequence and rhythmic audio elements that are output at a beat rate.
The computing system also includes an input interface in communication with the processor for receiving input including time-stamped biomechanical data of the patient related to movements performed by the patient in response to the AR visual and audio content, measured as physical parameters by one or more sensors associated with the patient.
The software modules also include a critical thinking algorithm that configures the processor to analyze the time-stamped biomechanical data to determine spatial and temporal relationships of the patient's motion with respect to the visual and audio elements, and to determine the patient's level of entrainment with respect to a target physiological parameter. In addition, the 3D AR modeling module configures the processor to dynamically adjust the augmented reality visual and audio content output to the patient based on the determined entrainment level relative to the target parameter.
The above-described systems, devices, methods, processes, etc. may be implemented by hardware, software, or any combination thereof, as appropriate for the application. The hardware may include a general purpose computer and/or a special purpose computing device. This includes implementation in one or more microprocessors, microcontrollers, embedded microcontrollers, programmable digital signal processors or other programmable devices or processing circuits, and internal and/or external memory. This may also or alternatively include one or more application specific integrated circuits, programmable gate arrays, programmable array logic components, or any other device or devices that may be configured to process electronic signals. It will also be appreciated that implementations of the above-described processes or devices may include computer executable code created using a structured programming language such as C, an object oriented programming language such as C++, or any other high- or low-level programming language (including assembly languages, hardware description languages, and database programming languages and technologies), which may be stored, compiled or interpreted to run on one of the above-described devices, as well as heterogeneous combinations of processors, processor architectures, or combinations of different hardware and software. In another aspect, the methods may be embodied in a system that performs their steps and may be distributed across devices in a variety of ways. Alternatively, the processing may be distributed across devices such as the various systems described above, or all of the functionality may be integrated into a dedicated stand-alone device or other hardware. In another aspect, the means for performing the steps associated with the processes described above may comprise any of the hardware and/or software described above. All such permutations and combinations are intended to fall within the scope of the present disclosure.
Embodiments disclosed herein may include a computer program product comprising computer executable code or computer usable code that, when executed on one or more computing devices, performs any and/or all of its steps. The code may be stored in a non-transitory manner in a computer memory, which may be a memory from which the program is executed (e.g., a random access memory associated with a processor), or a storage device, such as a disk drive, flash memory, or any other optical, electromagnetic, magnetic, infrared, or other device or combination of devices. In another aspect, any of the above-described systems and methods may be embodied in any suitable transmission or propagation medium carrying computer-executable code and/or any input or output thereof.
It should be understood that the above-described apparatus, systems, and methods are set forth by way of example and not limitation. Absent an explicit indication to the contrary, the disclosed steps may be modified, supplemented, omitted, and/or reordered without departing from the scope of the present disclosure. Many variations, additions, omissions, and other modifications will be apparent to those of ordinary skill in the art. Furthermore, the order or presentation of method steps in the above description and drawings is not intended to require that the steps be performed in that order, unless a particular order is explicitly required or the context clearly requires otherwise.
The method steps of the embodiments described herein are intended to include any suitable method for enabling the execution of such method steps and to comply with the patentability of the following claims, unless a different meaning or context is specifically indicated. Thus, for example, performing step X includes any suitable method for causing another party, such as a remote user, a remote processing resource (e.g., a server or cloud computer), or a machine, to perform step X. Similarly, performing steps X, Y and Z may include any method of directing or controlling any combination of these other individuals or resources to perform steps X, Y and Z to obtain the benefits of these steps. Accordingly, method steps of embodiments described herein are intended to include any suitable method of causing one or more other parties or entities to perform the steps in accordance with the patentability of the following claims, unless a different meaning or context is specifically provided. These parties or entities need not be guided or controlled by any other parties or entities nor need they be located in a particular jurisdiction.
It should also be understood that the above-described methods are provided by way of example. Absent an explicit indication to the contrary, the disclosed steps may be modified, supplemented, omitted, and/or reordered without departing from the scope of the present disclosure.
It should be understood that the methods and systems described above are set forth by way of example and not limitation. Many variations, additions, omissions, and other modifications will be apparent to those of ordinary skill in the art. Furthermore, the order or presentation of method steps in the above description and drawings is not intended to require that the steps be performed in that order, unless a particular order is explicitly required or otherwise clear from the context. Thus, while particular embodiments have been shown and described, it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made without departing from the spirit and scope of the disclosure.

Claims (19)

1. An enhanced neurological rehabilitation system for a patient, comprising:
a computing system having a processor configured by a software module comprising machine-readable instructions stored in a non-transitory storage medium,
the software module includes:
an AA/AR modeling module that, when executed by the processor, configures the processor to generate augmented reality, AR, visual content and a rhythmic auditory stimulus, RAS, for output to a patient during a treatment session, wherein the RAS comprises beat signals output at a beat rate, and wherein the AR visual content comprises visual elements that move in a prescribed spatial and temporal sequence based on the beat rate;
An input interface in communication with the processor for receiving real-time patient data including time-stamped biomechanical data of the patient related to repeated movements performed by the patient in time with the AR visual content and RAS, and wherein the biomechanical data is measured using a sensor associated with the patient; and
the software module further includes:
a critical thinking algorithm CTA module for configuring the processor to analyze the time-stamped biomechanical data to determine a temporal relationship of repetitive motion of the patient relative to the visual element and beat signals output at the beat speed to determine an entrainment level relative to a target parameter;
wherein the AA/AR modeling module further configures the processor to dynamically and synchronously adjust the AR visual content and RAS output to the patient based on the determined entrainment level.
2. The system of claim 1, wherein the processor dynamically adjusts the AR visual content and RAS based on the determined entrainment level by: adjusting a beat speed of the RAS according to the entrainment level, and adjusting a prescribed spatial and temporal sequence of the visual elements in synchronization with the adjusted beat speed.
3. The system according to claim 2,
wherein the beat signals are respectively output at corresponding output times;
wherein the entrainment level is determined based on the timing of the repetitive motion relative to the corresponding output time of the beat signal.
4. The system of claim 3, wherein the CTA module configures the processor to determine the entrainment level by:
analyzing the time-stamped biomechanical data to identify respective times of respective repeated movements,
measuring a time relationship between the respective times of one or more repetitive movements and the respective output times of one or more associated beat signals,
calculating an entrainment potential based on the measured time relationship for one or more of the respective repeated movements; and
wherein the processor is configured to dynamically adjust one or more of the AR vision and RAS outputs based on the entrainment potential.
5. The system according to claim 2,
wherein the CTA module further configures the processor to determine whether biomechanical data or physiological data measured for the patient meets training target parameters;
wherein the AA/AR modeling module further configures the processor to dynamically adjust AR visual content and RAS output to the patient in response to the training target parameter not being met.
6. The system of claim 5, wherein the target parameter is a beat speed, and wherein the training target parameter is a rehabilitation outcome.
7. The system of claim 1, further comprising:
an AR video output device configured to present the AR visual content to the patient;
an audio output device configured to output the RAS to the patient; and
the sensor is associated with the patient and configured to measure time stamped biomechanical data of the patient, wherein the sensor comprises an inertial measurement unit, IMU, device.
8. The system of claim 5, further comprising:
a sensor associated with the patient and configured to measure physiological data of the patient, and wherein the training target parameter is a physiological parameter, and
wherein the physiological parameter is one or more of heart rate, blood oxygen, respiratory rate, VO2, and brain electrical activity (EEG).
9. The system of claim 1, wherein the AR visual content comprises a visual scene comprising one or more of:
a virtual treadmill that is animated so that the top surface of the treadmill appears to approach the patient at a rate corresponding to the beat rate, and
a plurality of footprints superimposed on the top surface of the virtual treadmill, wherein the footprints are arranged spatially and appear to approach the patient at a rate corresponding to the beat speed, and
an animated person performing a repetitive motion at a rate corresponding to the beat speed.
10. The system of claim 1, wherein modifying the AR visual content comprises one or more of: changing the spacing of the footprints, changing the rate at which the animated person performs the repetitive motion, changing the rate at which the virtual top surface of the treadmill appears to approach the patient, and changing the rate at which virtual obstacles or scene disturbances appear in front of the patient, in accordance with the change in beat speed.
11. A method of enhanced neurological rehabilitation for a patient with physical injury, the method implemented on a computer system having a physical processor configured with machine-readable instructions that when executed perform the method comprising:
providing a rhythmic auditory stimulus, RAS, for output to a patient via an audio output device during a treatment session, wherein the RAS comprises beat signals output at a beat rate;
generating augmented reality, AR, visual content for output to the patient via an AR display device, wherein the AR visual content includes visual elements that move in a prescribed spatial and temporal sequence based on the beat speed and are output in synchronization with the RAS;
instructing the patient to perform repetitive movements in time with the beat signal of the RAS and the corresponding movements of the visual elements of the AR visual content;
receiving real-time patient data comprising time-stamped biomechanical data of the patient related to repeated movements performed by the patient in time with the AR visual content and RAS, and wherein the biomechanical data is measured using a sensor associated with the patient;
analyzing the time-stamped biomechanical data to determine a temporal relationship of the patient's repetitive motion relative to the visual elements and the beat signals output at the beat rate, to determine an entrainment potential;
dynamically and synchronously adjusting the AR visual content and RAS for output to the patient based on the determined entrainment potential not meeting a prescribed entrainment potential; and
continuing the treatment session using the adjusted AR visual content and RAS.
12. The method of claim 11, further comprising:
measuring biomechanical data from the patient, wherein measuring biomechanical data from the patient includes providing a sensor associated with the patient and measuring one or more of motion, acceleration, and pressure associated with movement of the patient.
13. The method of claim 11, wherein dynamically adjusting the AR visual content and RAS based on the determined entrainment potential comprises: adjusting a beat speed of the RAS based on the entrainment potential, and adjusting a prescribed spatial and temporal sequence of the visual elements in synchronization with the adjusted beat speed.
14. The method of claim 13, further comprising:
comparing the beat speed with training target parameters including a target beat speed; and
dynamically adjusting the RAS and the AR visual content based on the comparison and the determined entrainment potential for the beat speed.
15. The method of claim 14, further comprising: if the entrainment potential meets a prescribed level and the beat speed is lower than the target beat speed, increasing the beat speed of the RAS toward the target beat speed, and adjusting the prescribed spatial and temporal sequence of the visual element in synchronization with the adjusted beat speed.
16. The method of claim 11, wherein the beat signals are each output at a respective output time as a function of the beat speed, wherein measuring the entrainment potential comprises comparing a respective start time of each of a plurality of repetitive motions to a respective output time of an associated beat signal in real time, and wherein a start time of a given repetitive motion is a time of a prescribed identifiable event occurring during the given repetitive motion.
17. The method of claim 11, further comprising:
determining whether patient data comprising biomechanical data or physiological data measured for the patient meets training target parameters;
wherein the AA/AR modeling module further configures the processor to dynamically adjust AR visual content and RAS output to the patient in response to the training target parameter not being met.
18. The method of claim 17, further comprising:
measuring physiological data from the patient, wherein measuring physiological data from the patient includes providing a sensor associated with the patient, the sensor configured to measure a physiological parameter, wherein the physiological parameter is selected from a heart rate, a respiratory rate, and a maximum oxygen uptake VO2max.
19. The method of claim 11, wherein the AR visual content comprises a visual scene comprising one or more of:
a virtual treadmill that is animated so that the top surface of the treadmill appears to approach the patient at a rate corresponding to the beat rate, and
a plurality of footprints superimposed on the top surface of the virtual treadmill, wherein the footprints are arranged spatially and appear to approach the patient at a rate corresponding to the beat speed, and
an animated person performing a repetitive motion at a rate corresponding to the beat speed.

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US202063054599P 2020-07-21 2020-07-21
US63/054,599 2020-07-21
PCT/US2021/042606 WO2022020493A1 (en) 2020-07-21 2021-07-21 Systems and methods for augmented neurologic rehabilitation

Publications (1)

Publication Number Publication Date
CN116096289A true CN116096289A (en) 2023-05-09

Family

ID=79728942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180062559.5A Pending CN116096289A (en) 2020-07-21 2021-07-21 Systems and methods for enhancing neurological rehabilitation

Country Status (6)

Country Link
EP (1) EP4185192A1 (en)
JP (1) JP2023537681A (en)
KR (1) KR20230042066A (en)
CN (1) CN116096289A (en)
CA (1) CA3186120A1 (en)
WO (1) WO2022020493A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117766098A (en) * 2024-02-21 2024-03-26 江苏森讯达智能科技有限公司 Body-building optimization training method and system based on virtual reality technology
CN117929173A (en) * 2024-03-18 2024-04-26 中国汽车技术研究中心有限公司 Method and device for testing and calibrating mechanical properties of rib components of automobile collision dummy

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115316982A (en) * 2022-09-02 2022-11-11 中国科学院沈阳自动化研究所 Muscle deformation intelligent detection system and method based on multi-mode sensing
CN115868967A (en) * 2023-01-10 2023-03-31 杭州程天科技发展有限公司 Human body motion capture method and system based on IMU and storage medium
JP7449463B1 (en) 2023-11-06 2024-03-14 株式会社Tree Oceans Walking assistance wearable device, control method, and program
CN117594245B (en) * 2024-01-18 2024-03-22 凝动万生医疗科技(武汉)有限公司 Orthopedic patient rehabilitation process tracking method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9675776B2 (en) * 2013-01-20 2017-06-13 The Block System, Inc. Multi-sensory therapeutic system
CN114190924B (en) * 2016-04-14 2023-12-15 医学节奏股份有限公司 Systems and methods for nerve rehabilitation
WO2017222997A1 (en) * 2016-06-20 2017-12-28 Magic Leap, Inc. Augmented reality display system for evaluation and modification of neurological conditions, including visual processing and perception conditions


Also Published As

Publication number Publication date
WO2022020493A1 (en) 2022-01-27
EP4185192A1 (en) 2023-05-31
KR20230042066A (en) 2023-03-27
CA3186120A1 (en) 2022-01-27
JP2023537681A (en) 2023-09-05

