WO2019210087A1 - Methods, systems, and computer readable media for testing visual function using virtual mobility tests - Google Patents

Methods, systems, and computer readable media for testing visual function using virtual mobility tests

Info

Publication number
WO2019210087A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
virtual mobility
mobility test
test
virtual
Prior art date
Application number
PCT/US2019/029173
Other languages
French (fr)
Inventor
Jean Bennett
Tomas S. ALEMAN
Manzar ASHTARI
Alexander Jacob MILLER
Nancy Bennett
Original Assignee
The Trustees Of The University Of Pennsylvania
Priority date
Filing date
Publication date
Application filed by The Trustees Of The University Of Pennsylvania filed Critical The Trustees Of The University Of Pennsylvania
Publication of WO2019210087A1 publication Critical patent/WO2019210087A1/en
Priority to US17/079,119 priority Critical patent/US20210045628A1/en

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016 - Operational features thereof
    • A61B 3/0041 - Operational features thereof characterised by display arrangements
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 - Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 - Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 - Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/01 - Head-up displays
    • G02B 27/017 - Head mounted
    • G02B 27/0172 - Head mounted characterised by optical features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 - Eye tracking input arrangements

Definitions

  • the subject matter described herein relates to virtual reality. More particularly, the subject matter described herein relates to methods, systems, and computer readable media for testing visual function using virtual mobility tests.
  • Visual function can encompass many different aspects or parameters of vision, including visual acuity (resolution), visual field extent (peripheral vision), contrast sensitivity, motion detection, color vision, light sensitivity, and the pattern recovery or adaptation to different light exposures, to name a few.
  • Functional vision, i.e., the ability to use vision to carry out different tasks, may therefore be considered a direct behavioral consequence of visual function.
  • a physical mobility test involving an obstacle course having various obstacles in a room may be used to evaluate one or more aspects of vision function.
  • a mobility test can involve a number of issues including time-consuming setup, limited configurability, risk of injury to users, and limited quantitation of results.
  • One system includes a processor and a memory.
  • the system is configured for receiving configuration information for setting up a virtual mobility test for testing visual function of a user; generating the virtual mobility test; and analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data from body movement detection sensors.
  • One method includes configuring a virtual mobility test for testing visual function of a user; generating the virtual mobility test; and analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors.
  • the subject matter described herein may be implemented in hardware, software, firmware, or any combination thereof.
  • the terms "function" or "node" as used herein refer to hardware, which may also include software and/or firmware components, for implementing the feature(s) being described.
  • in some exemplary implementations, the subject matter described herein may be implemented using a computer readable medium having stored thereon computer executable instructions that, when executed by the processor of a computer, control the computer to perform steps.
  • Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory computer readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits.
  • a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms. In some exemplary implementations, the subject matter described herein may be implemented using hardware, software, and/or firmware delivering augmented reality.
  • Figure 1 is a diagram illustrating an example virtual mobility test system (VMTS) for testing visual function;
  • Figure 2 is a diagram illustrating an example template for a virtual mobility test;
  • Figure 3 is a diagram illustrating a user performing a virtual mobility test;
  • Figure 4 is a diagram illustrating example obstacles in a virtual mobility test;
  • Figure 5 is a diagram illustrating various sized obstacles in a virtual mobility test;
  • Figure 6 is a diagram illustrating virtual mobility tests with various lighting conditions;
  • Figure 7 is a diagram illustrating example data captured during a virtual mobility test;
  • Figure 8 is a diagram illustrating various aspects of an example virtual mobility test;
  • Figure 9 is a flow chart illustrating a process for testing visual function using a virtual mobility test; and
  • Figure 10 is a flow chart illustrating a process for dissecting or analyzing two different parameters of visual function.
  • a conventional mobility test for testing visual function of a user may involve one or more physical obstacle courses and/or other physical activities to perform. Such courses and/or physical activities may be based on real-life scenarios and/or activities, e.g., walking in a dim hallway or walking on a floor cluttered with obstacles.
  • Existing mobility tests have limited configurability and other issues. For example, conventional mobility tests are, by design, generally inflexible and difficult to implement and reproduce since these tests are usually designed using a particular implementation and equipment, e.g., a test designer's specific hardware, obstacles, and physical space requirements.
  • a "real-life" or physical mobility test is a "RPE65" test for testing for a retinal disease that affects the ability to see in low luminance conditions, e.g., a retinal dystrophy due to retinal pigment epithelium 65 (RPE65) gene mutations.
  • This physical test measures how a person functions in a vision-related activity of avoiding obstacles while following a pathway in different levels of illumination. While this physical test reflects the everyday life level of vision for RPE65-associated disease, the "RPE65" test suffers from a number of limitations. Example limitations of the "RPE65" test are discussed below.
  • The "RPE65" test is limited in usefulness for other populations of low vision patients. For example, the test cannot be used reliably to elicit visual limitations of individuals with fairly good visual acuity (e.g., 20/60 or better) but limited fields of vision.
  • the test area for the "RPE65" test must be capable of holding a 17 feet (ft) x 10 ft obstacle course, the test user (and companion), the test operators, and cameras.
  • the room must be light-tight (e.g., not transmitting or reflecting light) and capable of presenting lighting conditions at a range of calibrated, accurate luminance levels (e.g., 1, 4, 10, 50, 125, 250, and 400 lux). Further, this illumination must be uniform in the test area.
  • Physical objects on a physical obstacle course are an injury risk to patients (e.g., obstacles can cause a test user to fall or trip).
  • A "RPE65" test user can cheat during the test by using "echo-location" of objects instead of their vision to identify large objects. In addition, a "RPE65" test user must be guided back to the course by the test operator if the user goes off course.
  • The "RPE65" test has difficult and limited quantitation for evaluating a test user's performance.
  • the scoring system for this test is challenging as it requires review of videos by masked graders and subjective grading of collisions and other aspects of the performance.
  • Because the data is collected through videos showing the performance in two dimensions and focuses generally on the feet, there is no opportunity to collect additional relevant data, such as direction of gaze, likelihood of collision with objects beyond the view of the camera lens, velocity in different directions, acceleration, etc.
  • a virtual mobility test system (e.g., a computer, a VR headset, and body movement detection sensors) may configure, generate, and analyze a virtual mobility test for testing visual function of a user.
  • the test operator or the virtual mobility test system may change virtually any aspect of the virtual mobility test, including, for example, the size, shape, and placement of obstacles and the lighting conditions, may provide haptic and audio user feedback, and may use these capabilities to test for various diseases and/or eye conditions.
  • a virtual mobility test may be configured to efficiently capture and store relevant data not obtained in conventional physical tests (e.g., eye or head movements) and/or may capture data with more precision (e.g., via body movement detection sensors) than in conventional physical tests.
  • the scene can be displayed to one eye or the other or to both eyes simultaneously.
  • a virtual mobility test system or a related entity may produce more objective and/or accurate test results (e.g., user performance scores).
  • Figure 1 is a diagram illustrating an example virtual mobility test system (VMTS) 100 for testing visual function. In Figure 1, VMTS 100 may include a processing platform 101, a user display 108, and one or more sensors 110.
  • VMTS 100 may represent any suitable entity or entities (e.g., a VIVE virtual reality (VR) system, one or more servers, a desktop computer, a phone, a tablet computer, or a laptop) for testing visual function in a virtual (e.g., VR) environment.
  • VMTS 100 may be a laptop or a desktop computer executing one or more applications and may interact with various modules or components therein.
  • VMTS 100 may include a communications interface for receiving configuration information associated with generating or setting up a virtual environment for testing various aspects of visual function and/or detecting whether a user (e.g., a test participant or subject) may be displaying symptoms or characteristics of one or more eye issues or related conditions.
  • VMTS 100 may include one or more communications interfaces for receiving sensor data or feedback from one or more physical sensors 110 associated with a test user.
  • sensors 110 may detect body movement (e.g., of feet, arms, and head), along with related characteristics (e.g., speed and/or direction of movement). In some embodiments, VMTS 100 may include one or more processors, memories, and/or other hardware for generating a virtual environment, displaying a virtual environment (e.g., including various user interactions with the environment and related effects caused by the interactions, such as collisions with virtual obstacles, movements that the user makes to avoid collision (such as lifting up a leg to step over an object or ducking under a sign), or purposeful touch such as stepping on a "step" or touching a finish line), and recording or storing test output and related test results (e.g., including text-based logs indicating sensor data and/or a video recreation of the test involving a user's progress during the test).
  • VMTS 100 may utilize processing platform 101.
  • Processing platform 101 may represent any suitable entity or entities (e.g., one or more processors, computers, nodes, or computing platforms) for implementing various modules or system components.
  • processing platform 101 may include a server or computing device containing one or more processors and memory (e.g., flash, random-access memory, or data storage).
  • various software and/or firmware modules may be implemented using the hardware at processing platform 101.
  • processing platform 101 may be communicatively connected to user display 108 and/or sensors 110.
  • VMTS 100 or processing platform 101 may include a test controller (TC) 102, a sensor data collector 104, and a data storage 106.
  • TC 102 may represent any suitable entity or entities (e.g., software executing on one or more processors) for performing one or more aspects associated with visual function testing in a virtual environment.
  • TC 102 may include functionality for configuring and generating a virtual environment for testing visual function of a user.
  • TC 102 may also be configured for executing a related mobility test, providing output to user display 108 (e.g., a virtual reality (VR) display) or other device, receiving input from one or more sensors 110 (e.g., accelerometers, gyroscopes, eye trackers, or other body movement sensing devices) or other devices (e.g., video cameras).
  • TC 102 may be configured to analyze various input associated with a virtual mobility test and provide various metrics and test results, e.g., a virtual recreation or replay of a user performing the virtual mobility test.
  • TC 102 may communicate or interact with user display 108.
  • User display 108 may represent any suitable entity or entities for receiving and providing information (e.g., audio, video, and/or haptic feedback) to a user.
  • user display 108 may include a VR headset (e.g., a VIVE VR headset), glasses, a mobile device, and/or another device that includes software executing on one or more processors.
  • user display 108 may include various communications interfaces capable of communicating with TC 102, VMTS 100, sensors 110, and/or other entities. In some embodiments, TC 102 or VMTS 100 may stream data for displaying a virtual environment to user display 108.
  • TC 102 or VMTS 100 may receive input during testing from various sensors 110 related to a user’s progress through a mobility test (e.g., obstacle course) in the virtual environment and may send data (e.g., in real-time or near real time) to reflect or depict a user’s progress along the course based on a variety of factors, e.g., a preconfigured obstacle course map and user interactions or received feedback from sensors 110.
  • TC 102 may communicate or interact with sensor data collector 104.
  • Sensor data collector 104 may represent any suitable entity or entities (e.g., software executing on one or more processors and/or one or more communications interfaces) for receiving or obtaining sensor data and/or other information from sensors 110 (e.g., body movement detection sensors).
  • sensor data collector 104 may include an antenna or other hardware for receiving input via wireless technologies, e.g., Wi-Fi, Bluetooth, etc.
  • sensor data collector 104 may be capable of identifying, collating, and/or analyzing input from various sensors 110.
  • sensors 110 may include accelerometers and/or gyroscopes to detect various aspects of body movements.
  • sensors 110 may include one or more surface electrodes attached to the skin of a user, and sensor data collector 104 (or TC 102) may analyze and interpret electromyography (EMG) data into body movement.
  • sensors 110 or related components may be part of an integrated and/or wearable device, such as a VR display, a wristband, armband, glove, leg band, sock, headband, mask, sleeve, shirt, pants, or other device.
  • sensors 110 may be located at or near user display 108.
  • sensors 110 may be configured to identify or track eye movement, squinting, pupil changes, and/or other aspects related to eyesight
  • VMTS 100 or one or more modules therein may provide functionality for tailoring a virtual mobility test (e.g., a mobility test in a virtual environment using a VR system) to an individual and/or an eye condition or disease.
  • a virtual mobility test as described herein may be administered such that each eye is used separately or both eyes are used together. In this example, by using this ability, the progression of an eye disease or the impact of an intervention can be measured according to the effects on each individual eye. In many eye diseases, there is symmetry between the eyes.
  • when an intervention is delivered to one eye, the other eye can serve as an untreated control and the difference in the performance between the two eyes can be used to evaluate safety and efficacy of the intervention.
  • data can be gathered from monocular (e.g., single eye) tests on the user’s perspective (e.g., is one object in front of another) and from binocular (e.g., two eyes) tests on the user’s depth perception and stereopsis.
  • VMTS 100 or one or more modules therein may configure a virtual mobility test for monitoring eye health and/or related vision over a period of time. For example, changes from a baseline and/or changes in one eye compared to the other eye may measure the clinical utility of a treatment in that an increase in visually based orientation and mobility skills increases an individual’s safety and independence. Further, gaining the ability to orient and navigate under different conditions (e.g., using lower light levels than previously possible) may reflect an improvement of those activities of daily living that depend on vision.
  • VMTS 100 or one or more modules therein may perform virtual mobility tests for a variety of purposes.
  • a virtual mobility test may be used for rehabilitation purposes (e.g., as part of exercises that can potentially improve the use of vision function or maintain existing vision function).
  • a virtual mobility test may also be used for machine learning and artificial intelligence purposes.
  • a virtual mobility test may be configured (e.g., using operator preferences or settings) to include content tailored to a particular eye condition or disease.
  • the configured content may be usable to facilitate or rule out a diagnosis and may at least in part be based on known symptoms associated with a particular eye condition. For example, different ophthalmic diseases involve different deficits, such as deficits in light sensitivity, color detection, contrast perception, depth perception, focus, and movement perception.
  • a virtual mobility test may be configured such that each of these features can be tested, e.g., by controlling these variables (e.g., by adjusting lighting conditions and/or other conditions in the virtual environment where the virtual mobility test is to occur).
  • a virtual mobility test may be configured to include obstacles that represent challenges an individual can face in daily life, such as doorsteps, holes in the ground, objects that jut into a user's path, objects at various heights (e.g., waist high, head high, etc.), and objects which can swing into the user's path. In such embodiments, risk of injury may be significantly reduced relative to a conventional mobility test since the obstacles in the virtual mobility test are virtual and not real.
  • virtual obstacles (e.g., obstacles in a virtual mobility test or a related virtual environment) can be adjusted or resized dynamically or prior to testing.
  • virtual obstacles, as a group or individually, may be enlarged or reduced by a certain factor (e.g., 50%) by a test operator and/or VMTS 100.
  • a virtual mobility test may be configured to include dynamic obstacles that increase or decrease in size, e.g., if a user repeatedly hits the obstacle or cannot move past the obstacle.
  • a virtual mobility test or a related obstacle course therein may be adjustable based on a user’s profile or related characteristics, e.g., height, weight, fitness level, age, or known deficiencies.
  • scalable obstacle courses may be useful for comparisons of performance of individuals who differ in height, as a user's height (e.g., distance of the eyes to the objects on the ground) affects visual resolution (e.g., visual acuity).
  • scalable obstacle courses may be useful for following the visual performance of a child over time, e.g., as the child will grow and become an adult. In some embodiments, scaling an obstacle course may also be useful to ensure that obstacles or elements in the virtual environment (e.g., tiles that make up course segments) are sized appropriately (e.g., so that a user's foot can fit along an approved path through the virtual obstacle course).
  • a virtual mobility test or a related obstacle course therein may be configured, generated, or displayed based on various configurable settings. For example, a test operator may input or modify a configuration file with various settings. In this example, VMTS 100 or one or more modules therein may use the settings to configure, generate, and/or display the virtual mobility test or a related obstacle course therein.
  • VMTS 100 or one or more modules therein may configure a virtual mobility test for testing a user's visual function in a variety of lighting conditions.
  • light levels utilized for a virtual mobility test may be routinely encountered in day-to-day situations, such as walking through an office building, crossing a street at dusk, or locating objects in a dimly-lit restaurant.
  • VMTS 100 or one or more modules therein may adjust lighting conditions for a virtual environment or related obstacle course.
  • VMTS 100 or one or more modules therein may adjust luminance of obstacles, path arrows, hands and feet, finish line, and/or floor tiles associated with the virtual environment or related obstacle course. In another example, VMTS 100 or one or more modules therein may design aspects (e.g., objects, obstacles, terrain, etc.) of the virtual environment to minimize light bleeding and/or other issues that can affect test results (e.g., by using Gaussian textures on various virtual obstacles or other virtual objects).
  • a virtual mobility test may be configured such that various types of user feedback are provided to a user.
  • three-dimensional (3-D) spatial auditory feedback may be provided to a user (e.g., via speakers associated with user display 108 or VMTS 100) when the user collides with an obstacle during a virtual mobility test.
  • the auditory feedback may emulate a real-life sound or response (e.g., a 'clanging' sound or a 'scraping' sound depending on the obstacle, or a click when the user climbs up a "step") and may be usable by the user to correct their direction or movements.
  • in another example, haptic feedback may be provided to a user (e.g., via user display 108 or VMTS 100) when the user goes off-course (e.g., away from a designated path) in the virtual environment. In this example, by using haptic feedback, the user can be made aware of this occurrence without requiring a test operator to physically guide them back on-course, and the system can also test whether the user can self-redirect appropriately without assistance.
  • VMTS 100 or one or more modules therein may analyze various data associated with a virtual mobility test.
  • VMTS 100 or modules therein may record a virtual mobility user’s performance using sensors 110 and/or one or more video cameras.
  • the data captured may be measured and analyzed using quantitative analysis (e.g., based on objective criteria).
  • the time to complete a virtual mobility test (e.g., timed from when the virtual environment or scene is displayed until the user touches a finish flag at the finish line), details of each collision, and details of movement of the user's head, hands, and feet may be recorded and analyzed.
  • additional sensors (e.g., eyesight trackers and/or other devices) may be used to detect and record movements of other parts of the body.
  • an obstacle in a virtual mobility test may include an object adjacent to a user's path (e.g., a rectangular object, a hanging sign, a floating object), a black tile or an off-course (e.g., off-path) area, a "push-down" or "step-down" object that a user must press or depress (otherwise there is a penalty for collision or avoidance of this object), or an object on the user's path that needs to be stepped over.
  • data captured digitally during testing may be analyzed for performance of the user.
  • an audiotape and/or videotape may be generated during a virtual mobility test.
  • in addition to digital records (e.g., sensor data or related information), the audiotape and/or videotape may comprise source data for analyzing a user's performance.
  • VMTS 100 or related entities may score or measure a user’s performance during a virtual mobility test by using one or more scoring parameters.
  • scoring parameters may include: a collision penalty assigned each time a particular obstacle is bumped, or a score penalty for each obstacle bumped (even if an obstacle is bumped multiple times); an off-course penalty assigned if both feet are on tile(s) that do not have arrows or if the user bypasses tiles with arrows on the course (if one foot straddles the border of an adjacent tile or if the user steps backward on the course to take a second look, this may not be considered off-course); and a guidance penalty assigned if a user needs to be directed back on course by the test giver (or the virtual environment). A minimal scoring sketch is shown below.
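  • The Python sketch below is a minimal illustration of how the collision, off-course, and guidance penalties described above could be combined into a single quantitative score. The penalty weights, field names, and the score_run function are assumptions made for illustration and are not specified by the subject matter described herein; completion time could be reported alongside the penalty score or folded into it.

    # Hypothetical scoring sketch for one virtual mobility test run.
    # Penalty weights and field names are assumptions for illustration.
    from dataclasses import dataclass

    COLLISION_PENALTY = 1.0   # per obstacle bumped (assumed weight)
    OFF_COURSE_PENALTY = 2.0  # both feet on tiles without arrows (assumed weight)
    GUIDANCE_PENALTY = 3.0    # user had to be redirected back on course (assumed weight)

    @dataclass
    class RunEvents:
        obstacles_bumped: set      # IDs of obstacles bumped at least once
        off_course_episodes: int   # times both feet left the arrowed path
        guidance_events: int       # times the user had to be redirected
        completion_time_s: float   # seconds from scene onset to finish-flag touch

    def score_run(events: RunEvents) -> float:
        """Return a penalty score; lower is better."""
        return (len(events.obstacles_bumped) * COLLISION_PENALTY
                + events.off_course_episodes * OFF_COURSE_PENALTY
                + events.guidance_events * GUIDANCE_PENALTY)

    example = RunEvents(obstacles_bumped={"box_3", "hang_1"},
                        off_course_episodes=1,
                        guidance_events=0,
                        completion_time_s=74.2)
    print(score_run(example))  # -> 4.0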
  • VMTS 100 or related entities may store test data and/or record a user’s performance in a virtual mobility test.
  • VMTS 100 or another element may record a user’s progress by recording frame by frame movement of head, hands, and feet using data from one or more sensors 110.
  • data associated with each collision between a user and an obstacle may be recorded and/or captured.
  • a captured collision may include data related to bodies or items involved, velocity of the body part(s) (e.g., head, foot, arm, etc.) involved in the collision, acceleration of the body part(s) (e.g., head, foot, arm, etc.) involved in the collision, the point of impact, the time and/or duration of impact, and scene or occurrence playback (e.g., the playback may include a replay (e.g., a video) of an avatar (e.g., graphics representing the user or body parts thereof) performing the body part movements that cause the collision).
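  • The per-collision and frame-by-frame data described above could be organized as simple records; the Python sketch below is one hypothetical layout. The field names (timestamp_s, body_part, impact_point, etc.) and example identifiers are assumptions for illustration and do not reflect a specific log format used by any particular implementation.

    # Hypothetical records for collisions and frame-by-frame avatar poses.
    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class CollisionRecord:
        timestamp_s: float        # time since scene onset
        body_part: str            # e.g., "head", "left_foot", "right_hand"
        obstacle_id: str          # e.g., "floating_medium_04" (hypothetical ID)
        impact_point: Vec3        # x, y, z in meters
        velocity_mps: Vec3        # velocity of the body part at impact
        acceleration_mps2: Vec3   # acceleration of the body part at impact
        duration_s: float         # how long the contact lasted

    @dataclass
    class TestRecording:
        frames: List[Dict] = field(default_factory=list)   # per-frame sensor poses
        collisions: List[CollisionRecord] = field(default_factory=list)

        def add_frame(self, t: float, poses: Dict[str, Vec3]) -> None:
            """Store head/hand/foot positions for one frame (enables avatar replay)."""
            self.frames.append({"t": t, **poses})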
  • administering a virtual mobility test may include a user (e.g., a test participant) and one or more test operators, observers, or assistants.
  • a virtual mobility test may be conducted by a study team member and a technical assistant.
  • the study team member may alternatively both administer the test and monitor the equipment used in the testing.
  • the study team member may be present to help the user with course redirects or physical guidance, if necessary.
  • the test operators, observers, and/or assistants may not give instructions or advice during the virtual mobility test.
  • a virtual mobility test may be conducted on a level floor in a space appropriate for the test, e.g., in a room with clearance of a 12 feet (ft) x 7 ft rectangular space, since the test may include one or more courses that require a user to turn in different directions and avoid obstacles of various sizes and heights along the way.
  • the virtual mobility test may be described to the user and the goals of the test may be explained (e.g., complete the course(s) as accurately and as quickly as possible).
  • the user may be instructed to do their best to avoid all of the obstacles except for the steps, and to stay on the path.
  • the user may be encouraged to take their time and focus on accuracy.
  • the user may be reminded not only to look down for guidance arrows showing the direction to walk, but also to scan back and forth with their eyes so as to avoid obstacles that may be on the ground or at any height up to their head.
  • a user may be given a practice session so that they understand how to use equipment, recognize guidance arrows that must be followed, are familiar with the obstacles and how to avoid or overcome them (e.g., how to step on the“push down” obstacles), and also how to complete the virtual mobility test (e.g., by touching a finish flag to mark completion of the test).
  • the user may be reminded that during the official or scored test, that the course may be displayed to one eye or the other or to both eyes. The user may be told that they will not receive directions while the test is in progress.
  • the tester or an element of the virtual mobility test may recommend that the user choose a direction.
  • the tester may also assist and/or assure the user regarding safety issues, e.g., the tester may stop the user if a particular direction puts the user at risk of injury.
  • a user may be given one or more different practice tests (e.g., three tests or as many as are necessary to ensure that the user understands how to take the test).
  • a practice test may use one or two courses that are different from courses used in the non-practice tests (e.g., tests that are scored) to be given. The same practice courses may be used for each user.
  • the practice runs of a user may be recorded; however, the practice runs may not be scored.
  • the user when a user is ready for an official (e.g., scored) virtual mobility test, the user may be fitted with user display 108 (e.g., a VR headset) and sensors 110 (e.g., body movement detection sensors). The user may also be dark adapted prior to the virtual mobility test. The user may be led to the virtual mobility test origin area and instructed to begin the test once the VR scene (e.g., the virtual environment) is displayed in user display 108. Alternatively, the user may be asked to move to a location containing a virtual illuminated circle on the floor which, when the test is illuminated, will become the starting point of the test. The onset of VR scene in user display 108 may mark the start of the test.
  • an obstacle course may be traversed first with one eye "patched" or unable to see the VR scene (e.g., user display 108 may not show visuals on the left (eye) display, but show visuals on the right (eye) display), then the other eye "patched", then both eyes "un-patched" or able to see the VR scene (e.g., user display 108 may show visuals on both the left and the right (eye) displays).
  • the mobility test may involve various iterations of an obstacle course at different light intensities (e.g., incrementally dimmer or brighter) and at different layouts or configurations of elements therein (e.g., the path taken and the obstacles along the path may be changed after each iteration). For example, each obstacle course attempted by a user may have the same number of guidance arrows, turns, and obstacles, but to preclude a learning effect, each attempt by the user may be performed using a different iteration of the obstacle course.
  • user display 108 and processing platform 101 may be integrated into a single computing device, module, or system.
  • a VIVE VR system, a mobile computing device, or a smartphone configured with appropriate VR software, hardware, and mobility testing logic may generate a virtual environment and may perform and analyze a mobility test in the virtual environment.
  • the mobile computing device or smartphone may also display and record a user’s progress through the virtual mobility test.
  • FIG. 2 is a diagram 200 illustrating an example template for a virtual mobility test.
  • a virtual environment and/or a related obstacle course may utilize a template generated using a program called Tiled Map Editor (http://www.mapeditor.org).
  • a user may select the File Menu, select the New Option, and then select the New Map Option (File Menu->New->New Map) to generate a new map.
  • the user may configure various aspects of the new map, e.g., the new map may be set to 'orthogonal' orientation, the tile format may be set to 'CSV', and the tile render order may be set to 'Left Up'.
  • the tile size may be a width of 85 pixels (px) and a height of 85 px.
  • the map size may be fixed, and the number of tiles may be user-configurable. In some embodiments, the dimensions may be set to a width of 5 tiles and a length of 10 tiles (e.g., 5 ft x 10 ft).
  • the name of the map may be selected or provided when the map is saved (e.g., File Menu->Save As).
  • a tile set may be added to the template (File Menu->New->New Tile set)
  • a tile set may include a number of tile types, e.g., basic path tiles are straight, turn left, and turn right.
  • a tile set may also include variations of a tile type, e.g., a straight tile type may include a hanging obstacle tiles, button tiles, and step over tiles.
  • a tile set may also provide one or more colors, images, and/or textures for the path or tiles in a template.
  • a tile set may be named and a browse button may be used to select an image file source and appropriate tile width and height may also be inputted (e.g., 85 px for tile width and height).
  • to place a tile on the map, select or click a tile from the tile set and then click on the square on which to place the selected tile. For example, to create a standard mobility test or related obstacle course, a continuous path may be created from one of the start tiles to the finish tile.
  • the template may be exported and saved to a location for use by VMTS 100 or other entities (File Menu->Export As).
  • the file may be stored in a folder along with a config.csv containing additional configuration information associated with a virtual mobility test. In this example, VMTS 100 or related entities may use the CSV files to generate a virtual mobility test.
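  • Tiled exports each map layer as rows of comma-separated tile IDs. The Python sketch below is a minimal illustration of how such an exported course file could be read back; the tile-ID-to-element mapping and the file name course01.csv are assumptions made for illustration, since an actual implementation would use its own tile set definitions.

    # Minimal sketch for loading a course template exported from Tiled as CSV.
    import csv

    # Hypothetical mapping from tile IDs to course elements.
    TILE_MEANINGS = {0: "empty", 1: "straight", 2: "turn_left", 3: "turn_right",
                     4: "step_over", 5: "button", 6: "hanging_obstacle",
                     7: "start", 8: "finish"}

    def load_course(path):
        """Return the course as a list of rows of tile names."""
        with open(path, newline="") as f:
            return [[TILE_MEANINGS.get(int(cell), "unknown")
                     for cell in row if cell != ""]
                    for row in csv.reader(f)]

    # Example usage (assuming a 5 x 10 tile course saved as course01.csv):
    # for row in load_course("course01.csv"):
    #     print(" ".join(f"{name:>16}" for name in row))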
  • a configuration file (e.g., config.csv) may be used to add or remove courses and/or configure courses used in a virtual mobility test.
  • Example configuration settings for a virtual mobility test are listed below; a hypothetical config.csv sketch follows the list:
  • the value is the width and height of the VMTS’s active area in meters. This may be determined when the VMTS is configured with room setup.
  • the value is the desired length and width of each tile in meters.
  • the value indicates the number of seconds it takes for a swinging obstacle to make its full range of motion.
  • the value is the height of the user (test participant) in meters.
  • the value is the distance between the floor and the bottom of the obstacle. (The value may be a fraction of the height of the user.)
  • the value is the distance between the guiding arrows and the floor. (The value may be a fraction of the height of the user.)
  • the value is the distance between the center of low floating obstacles and the floor and height of low box obstacles. (The value may be a fraction of the height of the user.)
  • the value is the distance between the center of medium floating obstacles and the floor and height of high box obstacles. (The value may be a fraction of the height of the user.)
  • the value is the distance between the center of high floating obstacles and the floor. (The value may be a fraction of the height of the user.)
  • the value is the radius of small floating obstacles. (The value may be a fraction of the height of the user.)
  • the value is the radius of medium floating obstacles. (The value may be a fraction of the height of the user.)
  • the value is the radius of large floating obstacles. (The value may be a fraction of the height of the user.)
  • the value is the height of very small step-over obstacles. (The value may be a fraction of the height of the user.)
  • the value is the height of small step-over obstacles. (The value may be a fraction of the height of the user.)
  • the value is the height of big step-over obstacles. (The value may be a fraction of the height of the user.)
  • the value is the height of very big step-over obstacles. (The value may be a fraction of the height of the user.)
  • the value is the width and depth of box obstacles. (The value may be a fraction of tile length.)
  • for a post obstacle, the height is 5 feet with the shape of a parking meter.
  • the door of a box-like dishwasher may be open anywhere between a 5-90 degree angle and jut into the pathway.
  • The value indicates the local scale of the arrow (relative to tile length).
  • a value of 1 is 100% of normal scale, which is one half the length of a tile.
  • arrow luminance: The value indicates the luminance of guiding arrows in the scene (the virtual environment or course).
  • the luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
  • the value indicates the luminance of all buttons in the scene (the virtual environment or course).
  • the luminance may be measured in lux and the maximum may be operator- configurable or may be user display dependent.
  • the value indicates the luminance of all obstacles (e.g., box shaped, floating, swinging, or hanging obstacles) in the scene.
  • the luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
  • the value indicates the luminance of the user’s hands and feet in the scene.
  • the luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
  • the value indicates the luminance of the finish line in the scene.
  • the luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
  • the value indicates the number of file names referenced in this configuration file. Whatever this value is, there should be that many file names ending in .csv following it, and each one of those file names should correspond to a .csv file that is in the same folder as the configuration file.
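  • A hypothetical config.csv sketch is shown below. The key names and example values are invented for illustration; the settings above describe what each value means, but not the exact keys or ordering used by any particular implementation. The trailing course file names follow the count convention described in the last setting.

    play_area_size_m,3.5
    tile_size_m,0.30
    swing_period_s,4
    user_height_m,1.75
    hanging_obstacle_bottom_frac,0.80
    arrow_height_frac,0.05
    low_obstacle_height_frac,0.25
    medium_obstacle_height_frac,0.50
    high_obstacle_height_frac,0.85
    small_float_radius_frac,0.05
    medium_float_radius_frac,0.08
    large_float_radius_frac,0.12
    very_small_step_frac,0.02
    small_step_frac,0.05
    big_step_frac,0.10
    very_big_step_frac,0.15
    box_obstacle_width_frac,0.50
    arrow_scale,1
    arrow_luminance_lux,4
    button_luminance_lux,4
    obstacle_luminance_lux,4
    hands_feet_luminance_lux,50
    finish_line_luminance_lux,400
    course_count,2
    course01.csv
    course02.csv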
  • Figure 3 is a diagram 300 illustrating a user performing a virtual mobility test.
  • a test observer’s view is shown on the left panel and a test user’s view is shown on the right panel.
  • the test user may be at the termination point of the course looking in the direction of a green arrow.
  • a test user’s head, eyes, hands and feet may appear white.
  • the test observer’s view may be capable of displaying various views (e.g., first person view, overhead (bird’s eye) view, or a third person view) of a related obstacle course and may be adjustable on-the-fly.
  • the test user’s view may also be capable of displaying various views and may be adjustable on-the-fly, but may default to the first-person view.
  • the termination point may be indicated by a black flag and the test user may mark the completion of the course by touching the flag with his/her favored hand. Alternatively, the test user may walk into the flag.
  • the user’s“touch” may be indicated with a red sphere.
  • Figure 3 is for illustrative purposes; various virtual mobility tests may include additional and/or different features than those depicted in Figure 3.
  • Figure 4 is a diagram 400 illustrating example objects in a virtual mobility test
  • Images A-F depict example objects that may be included in a virtual mobility test. Image A depicts tiles and arrows showing the path (e.g., pointing forward, to the left, or to the right). In some embodiments, arrows may be depicted on the tiles (and not floating above the tiles).
  • Image B depicts step-over obstacles and arrows showing the required direction of movement.
  • Image C depicts box-shaped obstacles.
  • Image D depicts small floating obstacles (e.g., a group of 12) at different levels of a user's body (e.g., from ankle to head height).
  • Image E depicts large floating obstacles (e.g., a group of 10) at different levels of a user's body (e.g., from ankle to head height).
  • Image F depicts obstacles that a user must step on (e.g., to mimic stairs, rocks, etc.). These obstacles may depress (e.g., sink into the floor or tiles) as the user steps on them.
  • arrows may be depicted on the tiles (and not floating above or adjacent to the tiles).
  • a virtual mobility test may include an obstacle course containing small, medium, and large floating obstacles, parking meter shaped posts, or doors (e.g., partially open dishwasher doors) or gates that jut into the path.
  • Figure 5 is a diagram 500 illustrating various sized obstacles in a virtual environment.
  • a virtual mobility test or related objects therein may be adjustable.
  • VMTS 100 or one or more modules therein may scale or resize obstacles based on a test user's height or other physical characteristics.
  • scalable obstacle courses may be useful for comparisons of performance of individuals who differ in height as the user’s height (e.g., distance of the eyes to the objects on the ground) affects visual resolution (e.g., visual acuity).
  • the ability to resize objects in a virtual mobility test is also useful for following the visual performance of a child over time, e.g., as the child will grow and become an adult.
  • scaling an obstacle course may also be useful to ensure that obstacles or elements in the virtual environment (e.g., tiles that make up course segments) are sized appropriately (e.g., so that a user's foot can fit along an approved path through the virtual obstacle course).
  • images A-C depict different size obstacles from an overhead view.
  • image A depicts an overhead view of small floating obstacles
  • image B depicts an overhead view of medium floating obstacles
  • image C depicts an overhead view of large floating obstacles.
  • Figure 5 is for illustrative purposes; objects may be scaled in more precise terms and/or with more granularity (e.g., a percentage or fraction of a test user's height).
  • a virtual mobility test may include an obstacle course containing obstacles that appear to be 18.76% of the height of a test user.
  • Figure 6 is a diagram 600 illustrating virtual mobility tests with various lighting conditions. In some embodiments, lighting conditions in a virtual mobility test may be adjustable.
  • VMTS 100 or one or more modules therein may adjust lighting conditions for a virtual environment or related obstacle course associated with a virtual mobility test.
  • VMTS 100 or one or more modules therein may adjust luminance of various objects (e.g., obstacles, path arrows, hands, head, and feet, finish line, and/or floor tiles) associated with a virtuai mobility test.
  • individual obstacles and/or groups of obstacles can be assigned different luminance, contrast, shading, outlines, and/or color.
  • each condition or setting may be assigned a relative value or an absolute value. For example, assuming luminance can be from 0.1 lux to 400 lux, a first obstacle can be displayed at 50 lux and a second obstacle can be assigned a percentage of the first obstacle (e.g., 70%, or 35 lux). In this example, regardless of a luminance value, some objects in a virtual mobility test may have a fixed luminance (e.g., a finish flag). A small sketch of this calculation is shown below.
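  • The Python sketch below resolves a relative luminance setting against a reference object and clamps the result to the example display range of 0.1-400 lux. The function name and range are assumptions made only to illustrate the relative-versus-absolute assignment described above.

    # Resolve luminance settings that may be absolute (lux) or relative (a fraction
    # of a reference object's luminance), clamped to an assumed displayable range.
    MIN_LUX, MAX_LUX = 0.1, 400.0

    def resolve_luminance(value, reference_lux=None):
        lux = value * reference_lux if reference_lux is not None else value
        return max(MIN_LUX, min(MAX_LUX, lux))

    first_obstacle = resolve_luminance(50)                     # 50 lux (absolute)
    second_obstacle = resolve_luminance(0.70, first_obstacle)  # 70% of 50 -> 35 lux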
  • images A-C depict a mobility test under different luminance conditions with arrows highlighted for illustrative purposes.
  • image A shows a mobility test displayed under low luminance conditions (e.g., about 1 lux);
  • image B shows a mobility test with a step obstacle displayed under medium luminance conditions (e.g., about 100 lux); and
  • image C shows a mobility test with a step obstacle and other objects displayed under high luminance conditions (e.g., about 400 lux).
  • Figure 6 is for illustrative purposes; a virtual mobility test may include different and/or additional aspects than those depicted in Figure 6.
  • FIG. 7 is a diagram 700 illustrating example data captured during a virtual mobility test.
  • VMTS 100 or one or more modules therein may analyze various data associated with a virtual mobility test.
  • VMTS 100 or modules therein may gather data from sensors 110, information regarding the virtual environment (e.g., locations and sizes of obstacles, path, etc.), and/or one or more video cameras. In this example, the data captured may be measured and analyzed using quantitative analysis (e.g., based on objective criteria).
  • captured data may be stored in one or more files (e.g., test__events.csv and test_scene.csv files).
  • Example captured data may include details of an obstacle course in a virtual mobility test (e.g., play area, etc.), a particular configuration of the course, a height of a test user, sensor locations (e.g., head, hands, feet) as a function of time (e.g., indicative of body movements), direction of gaze, acceleration and deceleration, leaning over to look more closely at an object, and/or amount of time interacting with each obstacle.
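  • As an illustration of the quantitative analysis such captured data makes possible, the Python sketch below computes the distance travelled and the average speed of each tracked sensor from a per-frame position log. The file name test_events.csv and the column names (time_s, sensor, x, y, z) are assumptions made for illustration; an actual log layout may differ.

    # Sketch of post-test analysis over a captured per-frame sensor log.
    import csv
    from collections import defaultdict
    from math import dist

    def sensor_path_statistics(path="test_events.csv"):
        """Return total distance travelled and average speed per tracked sensor."""
        samples = defaultdict(list)   # sensor name -> list of (t, (x, y, z))
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                point = (float(row["x"]), float(row["y"]), float(row["z"]))
                samples[row["sensor"]].append((float(row["time_s"]), point))
        stats = {}
        for sensor, track in sorted(samples.items()):
            track.sort()
            distance = sum(dist(p, q) for (_, p), (_, q) in zip(track, track[1:]))
            duration = track[-1][0] - track[0][0] if len(track) > 1 else 0.0
            stats[sensor] = {"distance_m": distance,
                             "avg_speed_mps": distance / duration if duration else 0.0}
        return stats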
  • Figure 7 is for illustrative purposes; different and/or additional data than that depicted in Figure 7 may be captured or obtained during a virtual mobility test.
  • Figure 8 is a diagram 800 illustrating various aspects of an example virtual mobility test.
  • VMTS 100 or one or more modules therein may be capable of providing real-time or near real-time playback of a user’s performance during a virtual mobility test.
  • VMTS 100 or one or more modules therein may be capable of recording a user’s performance during a virtual mobility test.
  • VMTS 100 or modules therein may use gathered data from sensors 110, and/or other input, to create an avatar representing the user in the virtual mobility test and may depict the avatar interacting with various objects in the virtual mobility test.
  • a video or playback of a user performing a virtual mobility test may depict the user's head, eyes, hands, and feet appearing white on the video and can depict the user walking through an obstacle course toward the termination point (e.g., a finish flag) of the course.
  • a green arrow may point to a red circle located at the location of the collision.
  • start location 802 represents the start of an obstacle course;
  • avatar 804 represents the user's head, hands, and feet;
  • floating obstacle 806 represents a head-height obstacle in the obstacle course (e.g., one way to avoid such an obstacle is to duck);
  • indicators 808 represent a collision circle indicating where a collision between the user and the virtual environment occurred and an arrow pointing to the collision circle; and
  • finish location 810 represents the end of the obstacle course.
  • Figure 8 is for illustrative purposes; different and/or additional aspects than those depicted in Figure 8 may be part of a virtual mobility test.
  • Figure 9 is a flow chart illustrating an example process 900 for testing visual function using a virtual environment.
  • example process 900 described herein, or portions thereof, may be performed at or performed by VMTS 100, processing platform 101 , TC 102, sensor data collector 104, user display 108, and/or another module or node.
  • a virtual mobility test may be configured for testing visual function of a user.
  • VMTS 100 may use configuration files containing settings and/or configuration information for configuring a virtual mobility test or a related obstacle course.
  • configuring a virtual mobility test may include configuring the virtual mobility test based on the user, e.g., physical characteristics or a vision condition (e.g., an eye disease or other condition that affects vision).
  • the virtual mobility test may be generated.
  • VMTS 100 may generate and display a virtual mobility test to user display 108.
  • in step 908, performance of the user during the virtual mobility test may be analyzed for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors.
  • VMTS 100 or related entities may receive data collected from sensors 110 to determine whether a user collided with an obstacle in a virtual mobility test. In this example, the number or amount of collisions may affect a generated score indicating performance of the user regarding the virtual mobility test.
  • configuring a virtual mobility test may include configuring the virtual mobility test to test a right eye, a left eye, or both eyes.
  • configuring a virtual mobility test may include configuring luminance, shadow, color, contrast, gradients of contrast/color on the surface of the obstacles or enhanced contrast,“reflectance” or color of borders of obstacles or a lighting condition associated with one or more of the objects in the virtual mobility test based on configuration information.
  • configuring a virtual mobility test includes: configuring the height of one or more of the objects in the virtual mobility test and/or configuring the size of one or more of the objects in the virtual mobility test based on configuration information.
  • configuration information may include the height or other attributes of the user, condition-based information (e.g., aspects of cognition, perception, eye condition, etc.), user-inputted information, operator-inputted information, or dynamic information.
  • generating a virtual mobility test may include providing auditory or haptic feedback to the user when a feedback condition occurs, wherein the feedback condition may include a collision between the user and one or more of the objects in the virtual mobility test, the user leaving a designated path or course associated with the virtual mobility test, or a predetermined amount of progress not having occurred in a predetermined amount of time.
  • generating a virtual mobility test may include capturing the data from body movement detection sensors and using the data to output a video of the user's progress through the virtual mobility test.
  • a video of a user's progress through the virtual mobility test may include an avatar representing the user and/or their body movements.
  • objects in a virtual mobility test may include a tile, an obstacle, a box obstacle, a step-over obstacle, a hanging or swinging obstacle, a floating obstacle, a starting line, a finish line, a finish flag, a guide arrow, or a button obstacle.
  • testing may be done in a multi-step fashion in order to isolate the role of central vision versus peripheral vision.
  • a virtual mobility test or a related test may be configured to initially identify a luminance threshold value for the subject to identify colored (red, for example) arrows on the path. This luminance threshold value may then be held constant in subsequent tests for central vision while luminance of the obstacles is modified in order to elicit the sensitivity of the subject’s peripheral vision.
  • Figure 10 is a flow chart illustrating a process 1000 for dissecting or analyzing two different parameters of visual function.
  • example process 1000 described herein, or portions thereof, may be performed at or performed by VMTS 100, processing platform 101 , TC 102, sensor data collector 104, user display 108, and/or another module or node.
  • a virtual mobility test may be configured for testing visual function of a user.
  • VMTS 100 may use configuration files containing settings and/or configuration information for configuring a virtual mobility test or a related obstacle course.
  • configuring a virtual mobility test may include configuring the virtual mobility test based on the user, e.g., physical characteristics or a vision condition (e.g., an eye disease or other condition that affects vision).
  • in step 1004, the threshold luminance for cone photoreceptor (e.g., foveal or center of vision) function of the user may be established or determined.
  • VMTS 100 may generate and display a pathway of red (or other color) arrows, where the arrows gradually increase in luminance.
  • a virtual mobility test may be generated using the threshold luminance established in step 1004.
  • VMTS 100 may generate and display a virtual mobility test to user display 108, where the objects (e.g., obstacles) at the start of the virtual mobility test are of a low luminance (e.g., as determined by a standard or based on the user’s established threshold luminance from step 1004).
  • the encountered objects may gradually increase in luminance.
  • performance of the user during the virtual mobility test may be analyzed for determining the visual function of the user based on speed of test completion and user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors.
  • VMTS 100 or related entities may receive data collected from sensors 110 to determine whether a user collided with an obstacle in a virtual mobility test. In this example, the number or amount of collisions may affect a generated score indicating performance of the user regarding the virtual mobility test.
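  • The Python sketch below illustrates the two-stage procedure of Figure 10: first find the arrow (cone or central vision) luminance threshold, then run a course whose obstacles start dim and brighten to probe peripheral sensitivity. The helper functions (show_arrows, user_confirms_arrows, run_course, score_run) are hypothetical placeholders for operations a virtual mobility test system would provide, and the starting luminance, step factor, and maximum are likewise assumptions made for illustration.

    # Sketch of the two-stage procedure: arrow threshold search, then course run.
    def find_arrow_threshold(show_arrows, user_confirms_arrows,
                             start_lux=0.1, step=1.25, max_lux=400.0):
        """Gradually raise arrow luminance until the user reports seeing the arrows."""
        lux = start_lux
        while lux <= max_lux:
            show_arrows(luminance_lux=lux)
            if user_confirms_arrows():
                return lux          # threshold for central (cone-mediated) vision
            lux *= step
        return None                 # arrows never seen within the displayable range

    def two_stage_test(vmts):
        threshold = find_arrow_threshold(vmts.show_arrows, vmts.user_confirms_arrows)
        # Hold arrow luminance at the threshold; obstacles start at low luminance
        # and gradually increase along the course.
        results = vmts.run_course(arrow_luminance_lux=threshold,
                                  obstacle_luminance_schedule="increasing")
        return vmts.score_run(results)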
  • VMTS 100 and/or functionality described herein may constitute a special purpose computing device. Further, VMTS 100 and/or functionality described herein can improve the technological field of eye treatments and/or diagnosis. For example, by generating and using a virtual mobility test, a significant number of benefits can be achieved, including the ability to quickly and easily test visual function of a user without requiring the expensive and time-consuming setup (e.g., extensive lighting requirements) needed for conventional mobility tests.
  • the virtual mobility test can also use data collected from sensors and the VR system to more effectively and more objectively analyze and/or score a user’s performance.
  • the details provided here would also be applicable to augmented reality (AR) systems which could be delivered through glasses, thus facilitating usage.
  • AR augmented reality

Abstract

Methods, systems, and computer readable media for testing visual function using virtual mobility tests are disclosed. One system includes a processor and a memory. The system is configured for configuring a virtual mobility test for testing visual function of a user; generating the virtual mobility test; and analyzing a user's performance during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test and using data from body movement detection sensors.

Description

DESCRIPTION
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR TESTING VISUAL FUNCTION USING VIRTUAL MOBILITY TESTS
PRIORITY CLAIM
This application claims the benefit of U.S. Provisional Patent Application Serial No. 62/682,737, filed April 25, 2018, the disclosure of which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
The subject matter described herein relates to virtual reality. More particularly, the subject matter described herein relates to methods, systems, and computer readable media for testing visual function using virtual mobility tests.
BACKGROUND
One challenge with developing treatments for eye disorders involves developing test paradigms that can quickly, accurately, and reproducibly characterize the level of visual function and functional vision in real-life situations. Visual function can encompass many different aspects or parameters of vision, including visual acuity (resolution), visual field extent (peripheral vision), contrast sensitivity, motion detection, color vision, light sensitivity, and the pattern recovery or adaptation to different light exposures, to name a few. Functional vision, i.e., the ability to use vision to carry out different tasks, may therefore be considered a direct behavioral consequence of visual function. These attributes of vision are typically tested in isolation, e.g., in a scenario detached from the real-life use of vision. For example, a physical mobility test involving an obstacle course having various obstacles in a room may be used to evaluate one or more aspects of vision function. However, such a mobility test can involve a number of issues including time-consuming setup, limited configurability, risk of injury to users, and limited quantitation of results.
Accordingly, there exists a need for methods, systems, and computer readable media for testing visual function using virtual mobility tests.
SUMMARY
Methods, systems, and computer readable media for testing visual function using virtual mobility tests are disclosed. One system includes a processor and a memory. The system is configured for receiving configuration information for setting up a virtual mobility test for testing visual function of a user; generating the virtual mobility test; and analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data from body movement detection sensors.
One method includes configuring a virtual mobility test for testing visual function of a user; generating the virtual mobility test; and analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors.
The subject matter described herein may be implemented in hardware, software, firmware, or any combination thereof. As such, the terms "function" or "node" as used herein refer to hardware, which may also include software and/or firmware components, for implementing the feature(s) being described. In some exemplary implementations, the subject matter described herein may be implemented using a computer readable medium having stored thereon computer executable instructions that, when executed by the processor of a computer, control the computer to perform steps. Exemplary computer readable media suitable for implementing the subject matter described herein include non-transitory computer readable media, such as disk memory devices, chip memory devices, programmable logic devices, and application specific integrated circuits. In addition, a computer readable medium that implements the subject matter described herein may be located on a single device or computing platform or may be distributed across multiple devices or computing platforms. In some exemplary implementations, the subject matter described herein may be implemented using hardware, software, or firmware delivering augmented reality.
BRIEF DESCRIPTION OF THE DRAWINGS
The subject matter described herein will now be explained with reference to the accompanying drawings of which:
Figure 1 is a diagram illustrating an example virtual mobility test system (VMTS) for testing visual function;
Figure 2 is a diagram illustrating an example template for a virtual mobility test;
Figure 3 is a diagram illustrating a user performing a virtual mobility test;
Figure 4 is a diagram illustrating example obstacles in a virtual mobility test;
Figure 5 is a diagram illustrating various sized obstacles in a virtual mobility test;
Figure 6 is a diagram illustrating virtual mobility tests with various lighting conditions;
Figure 7 is a diagram illustrating example data captured during a virtual mobility test;
Figure 8 is a diagram illustrating various aspects of an example virtual mobility test;
Figure 9 is a flow chart illustrating a process for testing visual function using a virtual mobility test; and
Figure 10 is a flow chart illustrating a process for dissecting or analyzing two different parameters of visual function.
DETAILED DESCRIPTION
The subject matter described herein relates to methods, systems, and computer readable media for testing visual function using virtual mobility tests. A conventional mobility test for testing visual function of a user may involve one or more physical obstacle courses and/or other physical activities to perform. Such courses and/or physical activities may be based on real-life scenarios and/or activities, e.g., walking in a dim hallway or walking on a floor cluttered with obstacles. Existing mobility tests, however, have limited configurability and other issues. For example, conventional mobility tests are, by design, generally inflexible and difficult to implement and reproduce since these tests are usually designed using a particular implementation and equipment, e.g., a test designer's specific hardware, obstacles, and physical space requirements.
One example of a ‘real-life’ or physical mobility test is a “RPE65” test for testing for retinal disease that affects the ability to see in low luminance conditions, e.g., a retinal dystrophy due to retinal pigment epithelium 65 (RPE65) gene mutations. This physical test measures how a person functions in a vision-related activity of avoiding obstacles while following a pathway in different levels of illumination. While this physical test reflects the everyday life level of vision for RPE65-associated disease, the “RPE65” test suffers from a number of limitations. Example limitations for the “RPE65” test are discussed below.
1) The “RPE65” test is limited in usefulness for other populations of low vision patients. For example, the test cannot be used reliably to elicit visual limitations of individuals with fairly good visual acuity (e.g., 20/60 or better) but limited fields of vision.
2) The set-up of the “RPE65” test is challenging in that it requires a dedicated, large space. For example, the test area for the “RPE65” test must be capable of holding a 17 feet (ft) x 10 ft obstacle course, the test user (and companion) and the test operators, and cameras. Further, the room must be light-tight (e.g., not transmitting or reflecting light) and capable of presenting lighting conditions at a range of calibrated, accurate luminance levels (e.g., 1, 4, 10, 50, 125, 250, and 400 lux). Further, this illumination must be uniform in the test area.
3) Setting up a physical obstacle course and randomizing assignment and positions of obstacles for the “RPE65” test (even for a limited number of layouts) is time-consuming.
4) Physical objects on a physical obstacle course pose an injury risk to patients (e.g., obstacles can cause a test user to fall or trip).
5) An “RPE65” test user can cheat during the test by using “echo-location” of objects instead of their vision to identify large objects.
6) A “RPE65” test user must be guided back to the course by the test operator if the user goes off course.
7) The “RPE65” test does not take into account that different individuals have different heights (and thus different visual angles).
8) The “RPE65” test captures video recordings of the subject’s performance which are then graded by outside consultants. This results in potential disclosure of personal identifiers.
The “RPE65” test provides only difficult and limited quantitation for evaluating a test user’s performance. For example, the scoring system for this test is challenging as it requires review of videos by masked graders and subjective grading of collisions and other aspects of the performance. Further, since the data is collected through videos showing the performance in two dimensions and focuses generally on the feet, there is no opportunity to collect additional relevant data, such as direction of gaze, likelihood of collision with objects beyond the view of the camera lens, velocity in different directions, acceleration, etc.
In accordance with some aspects of the subject matter described herein, techniques, methods, systems, or mechanisms are disclosed for using a virtual (e.g., virtual reality (VR) based) mobility test. For example, a virtual mobility test system (e.g., a computer, a VR headset, and body movement detection sensors) may configure, generate, and analyze a virtual mobility test for testing visual function of a user. In this example, the test operator or the virtual mobility test system may change virtually any aspect of the virtual mobility test, including, for example, the size, shape, and placement of obstacles and the lighting conditions, may provide haptic and audio user feedback, and may use these capabilities to test for various different diseases and/or eye conditions. Moreover, since a virtual mobility test does not involve real or physical obstacles, cost and time associated with setting up and administering the virtual mobility test may be significantly reduced compared to a physical mobility test. Further, a virtual mobility test may be configured to efficiently capture and store relevant data not obtained in conventional physical tests (e.g., eye or head movements) and/or may capture data with more precision (e.g., via body movement detection sensors) than in conventional physical tests. With the VR system, the scene can be displayed to one eye or the other or to both eyes simultaneously. Furthermore, with additional and more precise data, a virtual mobility test system or a related entity may produce more objective and/or accurate test results (e.g., user performance scores).
Reference will now be made in detail to exemplary embodiments of the subject matter described herein, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers may be used throughout the drawings to refer to the same or like parts.
Figure 1 is a diagram illustrating an example virtual mobility test system (VMTS) 100 for testing visual function. In Figure 1, VMTS 100 is depicted as including a processing platform 101, a user display 108, and one or more sensors 110. VMTS 100 may represent any suitable entity or entities (e.g., a VIVE virtual reality (VR) system, one or more servers, a desktop computer, a phone, a tablet computer, or a laptop) for testing visual function in a virtual (e.g., VR) environment. For example, VMTS 100 may be a laptop or a desktop computer executing one or more applications and may interact with various modules or components therein. In some embodiments, VMTS 100 may include a communications interface for receiving configuration information associated with generating or setting up a virtual environment for testing various aspects of visual function and/or detecting whether a user (e.g., a test participant or subject) may be displaying symptoms or characteristics of one or more eye issues or related conditions. In some embodiments, VMTS 100 may include one or more communications interfaces for receiving sensor data or feedback from one or more physical sensors 110 associated with a test user. For example, sensors 110 may detect body movement (e.g., of feet, arms, and head), along with related characteristics (e.g., speed and/or direction of movement). In some embodiments, VMTS 100 may include one or more processors, memories, and/or other hardware for generating a virtual environment, displaying a virtual environment (e.g., including various user interactions with the environment and related effects caused by the interactions, such as collisions with virtual obstacles, movements that the user makes to avoid collision (such as lifting up a leg to step over an object or ducking under a sign), or purposeful touch such as stepping on a “step” or touching a finish line), and recording or storing test output and related test results (e.g., including text based logs indicating sensor data and/or a video recreation of the test involving a user’s progress during the test).
In some embodiments, VMTS 100 may utilize processing platform 101 for providing various functionality. Processing platform 101 may represent any suitable entity or entities (e.g., one or more processors, computers, nodes, or computing platforms) for implementing various modules or system components. For example, processing platform 101 may include a server or computing device containing one or more processors and memory (e.g., flash, random-access memory, or data storage). In this example, various software and/or firmware modules may be implemented using the hardware at processing platform 101. In some embodiments, processing platform 101 may be communicatively connected to user display 108 and/or sensors 110.
In some embodiments, VMTS 100 or processing platform 101 may include a test controller (TC) 102, a sensor data collector 104, and a data storage 106. TC 102 may represent any suitable entity or entities (e.g., software executing on one or more processors) for performing one or more aspects associated with visual function testing in a virtual environment. For example, TC 102 may include functionality for configuring and generating a virtual environment for testing visual function of a user. In this example, TC 102 may also be configured for executing a related mobility test, providing output to user display 108 (e.g., a virtual reality (VR) display) or other device, and receiving input from one or more sensors 110 (e.g., accelerometers, gyroscopes, eye trackers, or other body movement sensing devices) or other devices (e.g., video cameras). Continuing with this example, TC 102 may be configured to analyze various input associated with a virtual mobility test and provide various metrics and test results, e.g., a virtual recreation or replay of a user performing the virtual mobility test. In some embodiments, TC 102 may communicate or interact with user display 108. User display 108 may represent any suitable entity or entities for receiving and providing information (e.g., audio, video, and/or haptic feedback) to a user. For example, user display 108 may include a VR headset (e.g., a VIVE VR headset), glasses, a mobile device, and/or another device that includes software executing on one or more processors. In this example, user display 108 may include various communications interfaces capable of communicating with TC 102, VMTS 100, sensors 110, and/or other entities. In some embodiments, TC 102 or VMTS 100 may stream data for displaying a virtual environment to user display 108. For example, TC 102 or VMTS 100 may receive input during testing from various sensors 110 related to a user’s progress through a mobility test (e.g., obstacle course) in the virtual environment and may send data (e.g., in real-time or near real time) to reflect or depict a user’s progress along the course based on a variety of factors, e.g., a preconfigured obstacle course map and user interactions or received feedback from sensors 110.
In some embodiments, TC 102 may communicate or interact with sensor data collector 104. Sensor data collector 104 may represent any suitable entity or entities (e.g., software executing on one or more processors and/or one or more communications interfaces) for receiving or obtaining sensor data and/or other information from sensors 110 (e.g., body movement detection sensors). For example, sensor data collector 104 may include an antenna or other hardware for receiving input via wireless technologies, e.g., Wi-Fi, Bluetooth, etc. In this example, sensor data collector 104 may be capable of identifying, collating, and/or analyzing input from various sensors 110. In some embodiments, sensors 110 may include accelerometers and/or gyroscopes to detect various aspects of body movements. In some embodiments, sensors 110 may include one or more surface electrodes attached to the skin of a user, and sensor data collector 104 (or TC 102) may analyze and interpret electromyography (EMG) data into body movement.
In some embodiments, sensors 110 or related components may be part of an integrated and/or wearable device, such as a VR display, a wristband, armband, glove, leg band, sock, headband, mask, sleeve, shirt, pants, or other device. For example, sensors 110 may be located at or near user display 108. In this example, such sensors 110 may be configured to identify or track eye movement, squinting, pupil changes, and/or other aspects related to eyesight.
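Purely as an illustration of how body movement sensor data might be represented and used, the following Python sketch defines a hypothetical sensor sample and a simple speed estimate between two samples; the field names and sensor labels are assumptions and are not taken from the actual implementation.

from dataclasses import dataclass

@dataclass
class SensorSample:
    """One hypothetical body-tracking sample: which sensor it came from, when, and where it was."""
    sensor: str   # e.g., "head", "left_hand", "right_foot"
    time_s: float
    x: float
    y: float
    z: float

def speed(a: SensorSample, b: SensorSample) -> float:
    """Approximate speed (meters per second) of a tracked body part between two samples."""
    dt = b.time_s - a.time_s
    if dt <= 0:
        return 0.0
    dist = ((b.x - a.x) ** 2 + (b.y - a.y) ** 2 + (b.z - a.z) ** 2) ** 0.5
    return dist / dt

# Example: the head moves 5 cm in 0.1 s, i.e., 0.5 m/s
print(speed(SensorSample("head", 0.0, 0.0, 0.0, 1.7), SensorSample("head", 0.1, 0.05, 0.0, 1.7)))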
In some embodiments, VMTS 100 or one or more modules therein (e.g., TC 102 and/or sensor data collector 104) may provide functionality for tailoring a virtual mobility test (e.g., a mobility test in a virtual environment using a VR system) to an individual and/or an eye condition or disease. For example, a virtual mobility test as described herein may be administered such that each eye is used separately or both eyes are used together. In this example, by using this ability, the progression of an eye disease or the impact of an intervention can be measured according to the effects on each individual eye. In many eye diseases, there is symmetry between the eyes. Thus, if an intervention is tested in one eye, the other eye can serve as an untreated control and the difference in the performance between the two eyes can be used to evaluate safety and efficacy of the intervention. For example, data can be gathered from monocular (e.g., single eye) tests on the user’s perspective (e.g., is one object in front of another) and from binocular (e.g., two eyes) tests on the user’s depth perception and stereopsis.
In some embodiments, VMTS 100 or one or more modules therein may configure a virtual mobility test for monitoring eye health and/or related vision over a period of time. For example, changes from a baseline and/or changes in one eye compared to the other eye may measure the clinical utility of a treatment in that an increase in visually based orientation and mobility skills increases an individual’s safety and independence. Further, gaining the ability to orient and navigate under different conditions (e.g., using lower light levels than previously possible) may reflect an improvement of those activities of daily living that depend on vision.
In some embodiments, VMTS 100 or one or more modules therein may perform virtual mobility tests for a variety of purposes. For example, a virtual mobility test may be used for rehabilitation purposes (e.g., as part of exercises that can potentially improve the use of vision function or maintain existing vision function). In another example, a virtual mobility test may also be used for machine learning and artificial intelligence purposes.
In some embodiments, a virtual mobility test may be configured (e.g., using operator preferences or settings) to include content tailored to a particular eye condition or disease. In some embodiments, the configured content may be usable to facilitate or rule out a diagnosis and may at least in part be based on known symptoms associated with a particular eye condition. For example, there are different deficits in different ophthalmic diseases, including light sensitivity, color detection, contrast perception, depth perception, focus, movement perception, etc. In this example, a virtual mobility test may be configured such that each of these features can be tested, e.g., by controlling these variables (e.g., by adjusting lighting conditions and/or other conditions in the virtual environment where the virtual mobility test is to occur).
In some embodiments, a virtual mobility test may be configured to include obstacles that represent challenges an individual can face in daily life, such as doorsteps, holes in the ground, objects that jut into a user’s path, objects at various heights (e.g., waist high, head high, etc.), and objects which can swing into the user’s path. In such embodiments, risk of injury may be significantly reduced relative to a conventional mobility test since the obstacles in the virtual mobility test are virtual and not real.
In some embodiments, virtual obstacles (e.g., obstacles in a virtual mobility test or a related virtual environment) can be adjusted or resized dynamically or prior to testing. For example, virtual obstacles, as a group or individually, may be enlarged or reduced by a certain factor (e.g., 50%) via a test operator and/or VMTS 100. In this example, a virtual mobility test may be configured to include dynamic obstacles that increase or decrease in size, e.g., if a user repeatedly hits the obstacle or cannot move past the obstacle.
In some embodiments, a virtual mobility test or a related obstacle course therein may be adjustable based on a user’s profile or related characteristics, e.g., height, weight, fitness level, age, or known deficiencies. For example, scalable obstacle courses may be useful for comparisons of performance of individuals who differ in height, as a user’s height (e.g., distance of the eyes to the objects on the ground) affects visual resolution (e.g., visual acuity). In another example, scalable obstacle courses may be useful for following the visual performance of a child over time, e.g., as the child will grow and become an adult. In some embodiments, scaling an obstacle course may also be useful to ensure that obstacles or elements in the virtual environment (e.g., tiles that make up course segments) are sized appropriately (e.g., so that a user’s foot can fit along an approved path through the virtual obstacle course).
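As a rough sketch of how such height-based scaling could be computed, the following Python example converts fractional settings into absolute obstacle dimensions; the parameter names resemble configuration settings discussed elsewhere herein, but the fractional values shown are hypothetical.

def scale_obstacles(subject_height_m, height_fractions):
    """Convert fractions of the user's height into absolute obstacle dimensions in meters."""
    return {name: fraction * subject_height_m for name, fraction in height_fractions.items()}

# Hypothetical fractions; actual values would come from a configuration file.
fractions = {"hanging_obstacle_height": 0.75, "low_height": 0.25, "med_height": 0.50, "high_height": 0.90}
print(scale_obstacles(1.80, fractions))  # course scaled for a 1.80 m adult
print(scale_obstacles(1.20, fractions))  # same course rescaled for a 1.20 m child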
In some embodiments, a virtual mobility test or a related obstacle course therein may be configured, generated, or displayed based on various configurable settings. For example, a test operator may input or modify a configuration file with various settings. In this example, VMTS 100 or one or more modules therein may use the settings to configure, generate, and/or display the virtual mobility test or a related obstacle course therein.
In some embodiments, VMTS 100 or one or more modules therein may configure a virtual mobility test for testing a user’s vision function in a variety of lighting conditions. For example, light levels utilized for a virtual mobility test may be those routinely encountered in day-to-day situations, such as walking through an office building, crossing a street at dusk, or locating objects in a dimly-lit restaurant.
In some embodiments, VMTS 100 or one or more modules therein may adjust lighting conditions for a virtual environment or related obstacle course. For example, VMTS 100 or one or more modules therein may adjust luminance of obstacles, path arrows, hands and feet, the finish line, and/or floor tiles associated with the virtual environment or related obstacle course. In another example, VMTS 100 or one or more modules therein may design aspects (e.g., objects, obstacles, terrain, etc.) of the virtual environment to minimize light bleeding and/or other issues that can affect test results (e.g., by using Gaussian textures on various virtual obstacles or other virtual objects).
In some embodiments, a virtual mobility test may be configured such that various types of user feedback are provided to a user. For example, three-dimensional (3-D) spatial auditory feedback may be provided to a user (e.g., via speakers associated with user display 108 or VMTS 100) when the user collides with an obstacle during a virtual mobility test. In this example, the auditory feedback may emulate a real-life sound or response (e.g., a ‘clanging’ sound or a ‘scraping’ sound depending on the obstacle, or a click when the user climbs up a “step”) and may be usable by the user to correct their direction or movements. In another example, haptic feedback may be provided to a user (e.g., via speakers associated with user display 108 or VMTS 100) when the user goes off-course (e.g., away from a designated path) in the virtual environment. In this example, by using haptic feedback, the user can be made aware of this occurrence without requiring a test operator to physically guide them back on-course, and the test can also evaluate whether the user can self-redirect appropriately without assistance.
In some embodiments, VMTS 100 or one or more modules therein (e.g., TC 102 and/or sensor data collector 104) may analyze various data associated with a virtual mobility test. For example, VMTS 100 or modules therein may record a virtual mobility user’s performance using sensors 110 and/or one or more video cameras. In this example, the data captured may be measured and analyzed using quantitative analysis (e.g., based on objective criteria). In some embodiments, in contrast to conventional mobility tests, there may be little to no subjective interpretation of the performance. For example, from the start to the finish of a virtual mobility test (e.g., timed from when the virtual environment or scene is displayed until the user touches a finish flag at the finish line), details of each collision and details of movement of the user’s head, hands, and feet may be recorded and analyzed. In some embodiments, additional sensors (e.g., eyesight trackers and/or other devices) may be used to detect and record movements of other parts of the body.
In some embodiments, an obstacle in a virtual mobility test may include an object adjacent to a user’s path (e.g., a rectangular object, a hanging sign, a floating object), a black tile or an off-course (e.g., off-path) area, a “push-down” or “step-down” object that a user must press or depress (otherwise there is a penalty for collision or avoidance of this object), or an object on the user’s path that needs to be stepped over. In some embodiments, data captured digitally during testing may be analyzed for performance of the user. For example, the time before taking the first step, the time necessary to complete a virtual mobility test, the number of errors (e.g., bumping into obstacles, using feet to ‘feel’ one’s way, and/or going off course and then correcting themselves after receiving auditory feedback), and the attempts of the user to correct themselves after they have collided with an obstacle may all be assessed to develop a composite analysis metric or score. In some embodiments, an audiotape and/or videotape may be generated during a virtual mobility test. In such embodiments, digital records (e.g., sensor data or related information) and the audiotape and/or videotape may comprise source data for analyzing a user’s performance.
In some embodiments, VMTS 100 or related entities may score or measure a user’s performance during a virtual mobility test by using one or more scoring parameters. Some example scoring parameters may include: a collision penalty assigned each time a particular obstacle is bumped, or a score penalty for each obstacle bumped (even if an obstacle is bumped multiple times); an off-course penalty assigned if both feet are on tile(s) that do not have arrows or if the user bypasses tiles with arrows on the course (if one foot straddles the border of an adjacent tile or if the user steps backward on the course to take a second look, this may not be considered off-course); and a guidance penalty assigned if a user needs to be directed back on course by the test giver (or the virtual environment).
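As one non-authoritative illustration of how such penalties could be combined, the following Python sketch computes a composite score from completion time and penalty counts; the weights and the formula are hypothetical and are not specified by the test itself.

def composite_score(completion_time_s, collisions, off_course_events, guidance_events,
                    w_time=1.0, w_collision=5.0, w_off_course=3.0, w_guidance=10.0):
    """Return a composite performance score (lower is better) from time and penalty counts."""
    return (w_time * completion_time_s
            + w_collision * collisions
            + w_off_course * off_course_events
            + w_guidance * guidance_events)

# Example: a 45-second run with 2 collisions, 1 off-course event, and no guidance needed
print(composite_score(45.0, collisions=2, off_course_events=1, guidance_events=0))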
In some embodiments, VMTS 100 or related entities (e.g., a data storage 106, sensor data collector 104, or external device) may store test data and/or record a user’s performance in a virtual mobility test. For example, VMTS 100 or another element may record a user’s progress by recording frame-by-frame movement of the head, hands, and feet using data from one or more sensors 110. In some embodiments, data associated with each collision between a user and an obstacle may be recorded and/or captured. In such embodiments, a captured collision may include data related to bodies or items involved, velocity of the body part(s) (e.g., head, foot, arm, etc.) involved in the collision, acceleration of the body part(s) (e.g., head, foot, arm, etc.) involved in the collision, the point of impact, the time and/or duration of impact, and scene or occurrence playback (e.g., the playback may include a replay (e.g., a video) of an avatar (e.g., graphics representing the user or body parts thereof) performing the body part movements that cause the collision).
In some embodiments, administering a virtual mobility test may include a user (e.g., a test participant) and one or more test operators, observers, or assistants. For example, a virtual mobility test may be conducted by a study team member and a technical assistant. The study team member may alternatively both administer the test and monitor equipment used in the testing. The study team member may be present to help the user with course redirects or physical guidance, if necessary. The test operators, observers, and/or assistants may not give instructions or advice during the virtual mobility test. In some embodiments, a virtual mobility test may be conducted on a level floor in a space appropriate for the test, e.g., in a room with clearance of a 12 feet (ft) x 7 ft rectangular space, since the test may include one or more courses that require a user to turn in different directions and avoid obstacles of various sizes and heights along the way.
In some embodiments, before administering a virtual mobility test, the test may be described to the user and the goals of the test may be explained (e.g., complete the course(s) as accurately and as quickly as possible). The user may be instructed to do their best to avoid all of the obstacles except for the steps, and to stay on the path. The user may be encouraged to take their time and focus on accuracy. The user may be reminded not only to look down for guidance arrows showing the direction to walk, but also to scan back and forth with their eyes so as to avoid obstacles that may be on the ground or at any height up to their head.
In some embodiments, a user may be given a practice session so that they understand how to use equipment, recognize guidance arrows that must be followed, are familiar with the obstacles and how to avoid or overcome them (e.g., how to step on the “push-down” obstacles), and also how to complete the virtual mobility test (e.g., by touching a finish flag to mark completion of the test). The user may be reminded that during the official or scored test, the course may be displayed to one eye or the other or to both eyes. The user may be told that they will not receive directions while the test is in progress. However, under certain circumstances (e.g., if the user does not know which way to go and pauses for more than 15 seconds), the tester or an element of the virtual mobility test (e.g., flashing arrows, words, sounds, etc.) may recommend that the user choose a direction. The tester may also assist and/or assure the user regarding safety issues, e.g., the tester may stop the user if a particular direction puts the user at risk of injury.
In some embodiments, a user may be given one or more different practice tests (e.g., three tests or as many as are necessary to ensure that the user understands how to take the test). A practice test may use one or two courses that are different from courses used in the non-practice tests (e.g., tests that are scored) to be given. The same practice courses may be used for each user. The practice runs of a user may be recorded; however, the practice runs may not be scored.
In some embodiments, when a user is ready for an official (e.g., scored) virtual mobility test, the user may be fitted with user display 108 (e.g., a VR headset) and sensors 110 (e.g., body movement detection sensors). The user may also be dark adapted prior to the virtual mobility test. The user may be led to the virtual mobility test origin area and instructed to begin the test once the VR scene (e.g., the virtual environment) is displayed in user display 108. Alternatively, the user may be asked to move to a location containing a virtual illuminated circle on the floor which, when the test is illuminated, will become the starting point of the test. The onset of the VR scene in user display 108 may mark the start of the test. During the test, an obstacle course may be traversed first with one eye “patched” or unable to see the VR scene (e.g., user display 108 may not show visuals on the left (eye) display, but show visuals on the right (eye) display), then the other eye “patched”, then both eyes “un-patched” or able to see the VR scene (e.g., user display 108 may show visuals on both the left and the right (eye) displays). The mobility test may involve various iterations of an obstacle course at different light intensities (e.g., incrementally dimmer or brighter), and at different layouts or configurations of elements therein (e.g., the path taken and the obstacles along the path may be changed after each iteration). For example, each obstacle course attempted by a user may have the same number of guidance arrows, turns, and obstacles, but to preclude a learning effect, each attempt by the user may be performed using a different iteration of the obstacle course.
It will also be appreciated that the above described modules, components, and nodes in Figure 1 are for illustrative purposes and that features or portions of features described herein may be performed by different and/or additional modules, components, or nodes than those depicted in Figure 1. It will also be appreciated that some modules and/or components may be combined and/or integrated. For example, user display 108 and processing platform 101 may be integrated into a single computing device, module, or system. For example, a VIVE VR system, a mobile computing device, or a smartphone configured with appropriate VR software, hardware, and mobility testing logic may generate a virtual environment and may perform and analyze a mobility test in the virtual environment. In this example, the mobile computing device or smartphone may also display and record a user’s progress through the virtual mobility test.
Figure 2 is a diagram 200 illustrating an example template for a virtual mobility test. In some embodiments, a virtual environment and/or a related obstacle course may utilize a template generated using a program called Tiled Map Editor (http://www.mapeditor.org). In such embodiments, a user may select the File Menu, select the New Option, and then select the New Map Option (File Menu->New->New Map) to generate a new map. In some embodiments, the user may configure various aspects of the new map, e.g., the new map may be set to ‘orthogonal’ orientation, the tile format may be set to ‘CSV’, and the tile render order may be set to ‘Left Up’.
In some embodiments, the tile size may be a width of 85 pixels (px) and a height of 85 px. The map size may be fixed, and the number of tiles may be user-configurable. In some embodiments, the dimensions may be set to a width of 5 tiles and a length of 10 tiles (e.g., 5 ft x 10 ft). In some embodiments, the name of the map may be selected or provided when the map is saved (e.g., File Menu->Save As).
In some embodiments, a tile set may be added to the template (File Menu->New->New Tile Set). A tile set may include a number of tile types, e.g., basic path tiles are straight, turn left, and turn right. A tile set may also include variations of a tile type, e.g., a straight tile type may include hanging obstacle tiles, button tiles, and step-over tiles. In some embodiments, a tile set may also provide one or more colors, images, and/or textures for the path or tiles in a template. In some embodiments, a tile set may be named, a browse button may be used to select an image file source, and an appropriate tile width and height may also be inputted (e.g., 85 px for tile width and height).
In some embodiments, to place a tile on the map, select or click a tile from the tile set and then click on the square on which to place the selected tile. For example, to create a standard mobility test or related obstacle course, a continuous path may be created from one of the start tiles to the finish tile.
In some embodiments, after a template is created, the template may be exported and saved to a location for use by VMTS 100 or other entities (File Menu->Export As). For example, after exporting a template as a file named map.csv, the file may be stored in a folder along with a config.csv containing additional configuration information associated with a virtual mobility test. In this example, VMTS 100 or related entities may use the CSV files to generate a virtual mobility test.
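For illustration only, the following Python sketch shows one way an exported map.csv could be read back as a grid of tile identifiers; the file layout and the meaning of individual tile values are assumptions rather than part of the described template format.

import csv

def load_tile_map(path):
    """Read a CSV tile map into a list of rows of integer tile identifiers."""
    with open(path, newline="") as f:
        return [[int(cell) for cell in row if cell != ""] for row in csv.reader(f)]

# Example usage (hypothetical file and tile IDs):
# grid = load_tile_map("map.csv")
# e.g., 0 might be an off-course tile, 1 a straight path tile, 2 a left turn, and so on.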
In some embodiments, a configuration file (e.g., config.csv) may be used to add or remove courses and/or configure courses used in a virtual mobility test. Example configuration settings for a virtual mobility test are listed below:
• play_area_width and play_area_height
o The value is the width and height of the VMTS’s active area in meters. This may be determined when the VMTS is configured with room setup.
• tile_length
o The value is the desired length and width of each tile in meters.
• swings_per_sec
o The value indicates the number of seconds it takes for a swinging obstacle to make its full range of motion.
• subject_height
o The value is the height of the user (test participant) in meters. (Some values in the configuration file may be a fraction of the user’s height. Changing this value may affect hanging_obstacle_height, arrow_height, low_height, med_height, high_height, med_obstacle_radius, and big_obstacle_radius.)
• hanging_obstacle_height
o The value is the distance between the floor and the bottom of the obstacle. (The value may be a fraction of the height of the user.)
• arrow_height
o The value is the distance between the guiding arrows and the floor. (The value may be a fraction of the height of the user.)
• low_height
o The value is the distance between the center of low floating obstacles and the floor and the height of low box obstacles. (The value may be a fraction of the height of the user.)
• med_height
o The value is the distance between the center of medium floating obstacles and the floor and the height of high box obstacles. (The value may be a fraction of the height of the user.)
• high_height
o The value is the distance between the center of high floating obstacles and the floor. (The value may be a fraction of the height of the user.)
• small_obstacle_radius
o The value is the radius of small floating obstacles. (The value may be a fraction of the height of the user.)
• med_obstacle_radius
o The value is the radius of medium floating obstacles. (The value may be a fraction of the height of the user.)
• big_obstacle_radius
o The value is the radius of large floating obstacles. (The value may be a fraction of the height of the user.)
• tiny_step
o The value is the height of very small step-over obstacles. (The value may be a fraction of the height of the user.)
• small_step
o The value is the height of small step-over obstacles. (The value may be a fraction of the height of the user.)
• big_step
o The value is the height of big step-over obstacles. (The value may be a fraction of the height of the user.)
• huge_step
o The value is the height of very big step-over obstacles. (The value may be a fraction of the height of the user.)
• box_length
o The value is the width and depth of box obstacles. (The value may be a fraction of tile length.)
• parking meter
o The height is 5 feet with the shape of a parking meter.
• open dishwasher door
o The door of a box-like dishwasher may be open anywhere between a 5-90 degree angle and jut into the pathway.
• arrow_local_scale
o The value indicates the local scale of the arrow (relative to tile length). A value of 1 is 100% of normal scale, which is one half the length of a tile.
• arrow_luminance
o The value indicates the luminance of guiding arrows in the scene (the virtual environment or course). The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• button_luminance
o The value indicates the luminance of all buttons in the scene (the virtual environment or course). The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• obstacle_luminance
o The value indicates the luminance of all obstacles (e.g., box shaped, floating, swinging, or hanging obstacles) in the scene. The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• foot_luminance
o The value indicates the luminance of the user’s hands and feet in the scene. The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• finish_line_luminance
o The value indicates the luminance of the finish line in the scene. The luminance may be measured in lux and the maximum value may be operator-configurable or may be user display dependent.
• num_courses
o The value indicates the number of file names referenced in this configuration file. Whatever this value is, there should be that many file names ending in .csv following it, and each one of those file names should correspond to a .csv file that is in the same folder as the configuration file.
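To make the settings above concrete, the following Python sketch embeds a hypothetical config.csv fragment and parses it; every value shown is made up for illustration, and the exact file layout (key/value rows followed by course file names) is an assumption.

import csv, io

# Hypothetical config.csv contents (made-up values; key,value rows followed by course file names)
SAMPLE_CONFIG = """play_area_width,4.0
play_area_height,3.0
tile_length,0.5
subject_height,1.75
arrow_luminance,50
obstacle_luminance,10
num_courses,2
course_a.csv
course_b.csv
"""

def read_config(text):
    """Parse key/value rows into settings; single-cell rows are treated as course file names."""
    settings, courses = {}, []
    for row in csv.reader(io.StringIO(text)):
        if len(row) >= 2:
            settings[row[0]] = float(row[1])
        elif row:
            courses.append(row[0])
    return settings, courses

settings, courses = read_config(SAMPLE_CONFIG)
print(settings["subject_height"], courses)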
It will also be appreciated that the above described files and data in Figure 2 are for illustrative purposes and that VMTS 100 or related entities may use additional and/or different files and data than those depicted in Figure 2.
Figure 3 is a diagram 300 illustrating a user performing a virtual mobility test. In Figure 3, a test observer’s view is shown on the left panel and a test user’s view is shown on the right panel. Referring to Figure 3, the test user may be at the termination point of the course looking in the direction of a green arrow. In the VR scene, a test user’s head, eyes, hands, and feet may appear white. The test observer’s view may be capable of displaying various views (e.g., first person view, overhead (bird’s eye) view, or a third person view) of a related obstacle course and may be adjustable on-the-fly. The test user’s view may also be capable of displaying various views and may be adjustable on-the-fly, but may default to the first-person view. The termination point may be indicated by a black flag, and the test user may mark the completion of the course by touching the flag with his/her favored hand. Alternatively, the test user may walk into the flag. On the right panel, the user’s “touch” may be indicated with a red sphere.
It will be appreciated that Figure 3 is for illustrative purposes and that various virtual mobility tests may include additional and/or different features than those depicted in Figure 3.
Figure 4 is a diagram 400 illustrating example objects in a virtual mobility test. Referring to Figure 4, images A-F depict example objects that may be included in a virtual mobility test. Image A depicts tiles and arrows showing the path (e.g., pointing forward, to the left, or to the right). In some embodiments, arrows may be depicted on the tiles (and not floating above the tiles). Image B depicts step-over obstacles and arrows showing the required direction of movement. Image C depicts box-shaped obstacles. Image D depicts small floating obstacles (e.g., a group of 12) at different levels of a user’s body (e.g., from ankle to head height). Image E depicts large floating obstacles (e.g., a group of 10) at different levels of a user’s body (e.g., from ankle to head height). Image F depicts obstacles that a user must step on (e.g., to mimic stairs, rocks, etc.). These obstacles may depress (e.g., sink into the floor or tiles) as the user steps on them. In some embodiments, arrows may be depicted on the tiles (and not floating above or adjacent to the tiles).
It will be appreciated that Figure 4 is for illustrative purposes and that other objects may be used in a virtual mobility test than those depicted in Figure 4. For example, a virtual mobility test may include an obstacle course containing small, medium, and large floating obstacles, parking meter shaped posts, or doors (e.g., partially open dishwasher doors) or gates that jut into the path.
Figure 5 is a diagram 500 illustrating various sized obstacles in a virtual environment. In some embodiments, a virtual mobility test or related objects therein may be adjustable. For example, VMTS 100 or one or more modules therein may scale or resize obstacles based on a test user’s height or other physical characteristics. In this example, scalable obstacle courses may be useful for comparisons of performance of individuals who differ in height as the user’s height (e.g., distance of the eyes to the objects on the ground) affects visual resolution (e.g., visual acuity). The ability to resize objects in a virtual mobility test is also useful for following the visual performance of a child over time, e.g., as the child will grow and become an adult. In some embodiments, scaling an obstacle course may also be useful to ensure that obstacles or elements in the virtual environment (e.g., tiles that make up course segments) are sized appropriately (e.g., so that a user’s foot can fit along an approved path through the virtual obstacle course).
Referring to Figure 5, images A-C depict different size obstacles from an overhead view. For example, image A depicts an overhead view of small floating obstacles, image B depicts an overhead view of medium floating obstacles, and image C depicts an overhead view of large floating obstacles.
It will be appreciated that Figure 5 is for illustrative purposes and that objects may be scaled in more precise terms and/or with more granularity (e.g., a percentage or fraction of a test user’s height). For example, a virtual mobility test may include an obstacle course containing obstacles that appear to be 18.76% of the height of a test user.

Figure 6 is a diagram 600 illustrating virtual mobility tests with various lighting conditions. In some embodiments, lighting conditions in a virtual mobility test may be adjustable. For example, VMTS 100 or one or more modules therein may adjust lighting conditions for a virtual environment or related obstacle course associated with a virtual mobility test. For example, VMTS 100 or one or more modules therein may adjust luminance of various objects (e.g., obstacles, path arrows, hands, head, and feet, finish line, and/or floor tiles) associated with a virtual mobility test.
In some embodiments, individual obstacles and/or groups of obstacles can be assigned different luminance, contrast, shading, outlines, and/or color. In some embodiments, each condition or setting may be assigned a relative value or an absolute value. For example, assuming luminance can be from 0.1 lux to 400 lux, a first obstacle can be displayed at 50 lux and a second obstacle can be assigned to a percentage of the first obstacle (e.g., 70%, or 35 lux). In this example, regardless of a luminance value, some objects in a virtual mobility test may have a fixed luminance (e.g., a finish flag).
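The relative-versus-absolute assignment described above can be illustrated with a short Python helper; the 0.1-400 lux clamp mirrors the example range given here, but the function name and interface are hypothetical.

def resolve_luminance(value, reference_lux=None, min_lux=0.1, max_lux=400.0):
    """Treat value as absolute lux, or as a fraction of reference_lux when a reference is given."""
    lux = value * reference_lux if reference_lux is not None else value
    return max(min_lux, min(max_lux, lux))

first = resolve_luminance(50.0)                         # absolute: 50 lux
second = resolve_luminance(0.70, reference_lux=first)   # relative: 70% of 50 lux = 35 lux
print(first, second)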
Referring to Figure 6, images A-C depict a mobility test under different luminance conditions with arrows highlighted for illustrative purposes. For example, image A shows a mobility test displayed under low luminance conditions (e.g., about 1 lux); image B shows a mobility test with a step obstacle displayed under medium luminance conditions (e.g., about 100 lux); and image C shows a mobility test with a step obstacle and other objects displayed under high luminance conditions (e.g., about 400 lux).
It will be appreciated that Figure 6 is for illustrative purposes and that a virtual mobility test may include different and/or additional aspects than those depicted in Figure 6.
Figure 7 is a diagram 700 illustrating example data captured during a virtual mobility test. In some embodiments, VMTS 100 or one or more modules therein (e.g., TC 102 and/or sensor data collector 104) may analyze various data associated with a virtual mobility test. For example, VMTS 100 or modules therein may gather data from sensors 110, information regarding the virtual environment (e.g., locations and sizes of obstacles, path, etc.), and/or one or more video cameras. In this example, the data captured may be measured and analyzed using quantitative analysis (e.g., based on objective criteria).
Referring to Figure 7, captured data may be stored in one or more files (e.g., test_events.csv and test_scene.csv files). Example captured data may include details of an obstacle course in a virtual mobility test (e.g., play area, etc.), a particular configuration of the course, a height of a test user, sensor locations (e.g., head, hands, feet) as a function of time (e.g., indicative of body movements), direction of gaze, acceleration and deceleration, leaning over to look more closely at an object, and/or amount of time interacting with each obstacle.
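As a hedged example of how such captured files might be post-processed, the Python sketch below derives a run duration and collision count from a hypothetical test_events.csv; the column names are assumptions and not the actual file schema.

import csv

def summarize_events(path):
    """Compute run duration and collision count from a CSV with columns time_s, event, body_part, x, y, z."""
    times, collisions = [], 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            times.append(float(row["time_s"]))
            if row["event"] == "collision":
                collisions += 1
    duration = max(times) - min(times) if times else 0.0
    return duration, collisions

# Example usage (hypothetical file and columns):
# duration_s, n_collisions = summarize_events("test_events.csv")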
It will be appreciated that Figure 7 is for illustrative purposes and that different and/or additional data than those depicted in Figure 7 may be captured or obtained during a virtual mobility test.
Figure 8 is a diagram 800 illustrating various aspects of an example virtual mobility test. In some embodiments, VMTS 100 or one or more modules therein may be capable of providing real-time or near real-time playback of a user’s performance during a virtual mobility test. In some embodiments, VMTS 100 or one or more modules therein may be capable of recording a user’s performance during a virtual mobility test. For example, VMTS 100 or modules therein may use gathered data from sensors 110, and/or other input, to create an avatar representing the user in the virtual mobility test and may depict the avatar interacting with various objects in the virtual mobility test. For example, a video or playback of a user performing a virtual mobility test may depict the user’s head, eyes, hands, and feet appearing white on the video and can depict the user walking through an obstacle course toward the termination point (e.g., a finish flag) of the course. In this example, to emphasize the user bumping into the hanging sign and stepping backwards, a green arrow may point to a red circle located at the location of the collision.
Referring to Figure 8, a snapshot from a playback of a user performing a virtual mobility test is shown. In the snapshot, start location 802 represents the start of an obstacle course; avatar 804 represents the user’s head, hands, and feet; floating obstacle 806 represents a head height obstacle in the obstacle course (e.g., one way to avoid such an obstacle is to duck); indicators 808 represent a collision circle indicating where a collision between the user and the virtual environment occurred and an arrow pointing to the collision circle; and finish location 810 represents the end of the obstacle course.
It will be appreciated that Figure 8 is for illustrative purposes and that different and/or additional aspects than those depicted in Figure 8 may be part of a virtual mobility test.
Figure 9 is a flow chart illustrating an example process 900 for testing visual function using a virtual environment. In some embodiments, example process 900 described herein, or portions thereof, may be performed at or performed by VMTS 100, processing platform 101 , TC 102, sensor data collector 104, user display 108, and/or another module or node.
Referring to example process 900, in step 902, a virtual mobility test may be configured for testing visual function of a user. For example, VMTS 100 may use configuration files containing settings and/or configuration information for configuring a virtual mobility test or a related obstacle course.
In some embodiments, configuring a virtual mobility test may include configuring the virtual mobility test based on the user, e.g., physical characteristics or a vision condition (e.g., an eye disease or other condition that affects vision).
In step 904, the virtual mobility test may be generated. For example, VMTS 100 may generate and display a virtual mobility test to user display 108.
In step 906, performance of the user during the virtual mobility test may be analyzed for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors. For example, VMTS 100 or related entities may receive data collected from sensors 110 to determine whether a user collided with an obstacle in a virtual mobility test. In this example, the number or amount of collisions may affect a generated score indicating performance of the user regarding the virtual mobility test. In some embodiments, configuring a virtual mobility test may include configuring the virtual mobility test to test a right eye, a left eye, or both eyes.
In some embodiments, configuring a virtual mobility test may include configuring luminance, shadow, color, contrast, gradients of contrast/color on the surface of the obstacles or enhanced contrast, “reflectance” or color of borders of obstacles, or a lighting condition associated with one or more of the objects in the virtual mobility test based on configuration information.
In some embodiments, configuring a virtual mobility test includes: configuring the height of one or more of the objects in the virtual mobility test and/or configuring the size of one or more of the objects in the virtual mobility test based on configuration information.
In some embodiments, configuration information may include the height or other attributes of the user, condition-based information (e.g., aspects of cognition, perception, eye condition, etc.), user-inputted information, operator-inputted information, or dynamic information.
In some embodiments, generating a virtual mobility test may include providing auditory or haptic feedback to the user when a feedback condition occurs, wherein the feedback condition may include a collision between the user and one or more of the objects in the virtual mobility test, the user leaving a designated path or course associated with the virtual mobility test, or a predetermined amount of progress not occurring in a predetermined amount of time.
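A minimal sketch of how these feedback conditions might be checked each frame is shown below in Python; the cue names and the 15-second stall limit (borrowed from the practice-session description above) are illustrative assumptions.

def feedback_cues(collided, off_course, seconds_since_progress, stall_limit_s=15.0):
    """Return which feedback cues to trigger for the current frame (hypothetical cue names)."""
    cues = []
    if collided:
        cues.append("audio_collision")    # e.g., a 3-D spatial 'clang' at the point of impact
    if off_course:
        cues.append("haptic_off_course")  # e.g., vibration until the user returns to the path
    if seconds_since_progress > stall_limit_s:
        cues.append("visual_prompt")      # e.g., a flashing guide arrow
    return cues

print(feedback_cues(collided=False, off_course=True, seconds_since_progress=2.0))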
In some embodiments, generating a virtual mobility test may include capturing the data from body movement detection sensors and using the data to output a video of the user’s progress through the virtual mobility test. For example, a video of a user’s progress through the virtual mobility test may include an avatar representing the user and/or their body movements.
In some embodiments, objects in a virtual mobility test may include a tile, an obstacle, a box obstacle, a step-over obstacle, a hanging or swinging obstacle, a floating obstacle, a starting line, a finish line, a finish flag, a guide arrow, or a button obstacle.
In some embodiments, testing may be done in a multi-step fashion in order to isolate the role of central vision versus peripheral vision. For example, a virtual mobility test or a related test may be configured to initially identify a luminance threshold value for the subject to identify colored (red, for example) arrows on the path. This luminance threshold value may then be held constant in subsequent tests for central vision while luminance of the obstacles is modified in order to elicit the sensitivity of the subject’s peripheral vision.
Figure 10 is a flow chart illustrating a process 1000 for dissecting or analyzing two different parameters of visual function. In some embodiments, example process 1000 described herein, or portions thereof, may be performed at or performed by VMTS 100, processing platform 101 , TC 102, sensor data collector 104, user display 108, and/or another module or node.
Referring to example process 1000, in step 1002, a virtual mobility test may be configured for testing visual function of a user. For example, VMTS 100 may use configuration files containing settings and/or configuration information for configuring a virtual mobility test or a related obstacle course.
In some embodiments, configuring a virtual mobility test may include configuring the virtual mobility test based on the user, e.g., physical characteristics or a vision condition (e.g., an eye disease or other condition that affects vision).
In step 1004, a threshold luminance for cone photoreceptor (e.g., foveal or central vision) function of the user may be established or determined. For example, VMTS 100 may generate and display a pathway of red (or other color) arrows, where the arrows gradually increase in luminance.
In step 1006, using the threshold luminance established in step 1004, a virtual mobility test may be generated. For example, VMTS 100 may generate and display a virtual mobility test on user display 108, where the objects (e.g., obstacles) at the start of the virtual mobility test are of a low luminance (e.g., as determined by a standard or based on the user's established threshold luminance from step 1004). In this example, as the user moves through the virtual mobility test, the encountered objects may gradually increase in luminance. In step 1008, performance of the user during the virtual mobility test may be analyzed for determining the visual function of the user based on speed of test completion and user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors. For example, VMTS 100 or related entities may receive data collected from sensors 110 to determine whether a user collided with an obstacle in a virtual mobility test. In this example, the number or amount of collisions may affect a generated score indicating performance of the user regarding the virtual mobility test.
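As an illustration of analyzing such a course, the sketch below estimates the lowest obstacle luminance at which the user stopped colliding, which could be reported alongside speed and collision counts; the data format is an assumption made for the example.

def luminance_detection_estimate(obstacle_events):
    # obstacle_events: list of (luminance_cd_m2, collided) tuples ordered along
    # the course, where luminance increases toward the finish line.
    # Returns the luminance of the first obstacle after the last collision,
    # or None if collisions continued through the final obstacle.
    last_collision_index = None
    for i, (_, collided) in enumerate(obstacle_events):
        if collided:
            last_collision_index = i
    if last_collision_index is None:
        # No collisions at all: the dimmest obstacle was already avoided.
        return obstacle_events[0][0] if obstacle_events else None
    if last_collision_index + 1 < len(obstacle_events):
        return obstacle_events[last_collision_index + 1][0]
    return None

events = [(0.1, True), (0.3, True), (1.0, False), (3.0, False)]
print(luminance_detection_estimate(events))  # -> 1.0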
It should be noted that VMTS 100 and/or functionality described herein may constitute a special purpose computing device. Further, VMTS 100 and/or functionality described herein can improve the technological field of eye treatments and/or diagnosis. For example, by generating and using a virtual mobility test, a significant number of benefits can be achieved, including the ability to quickly and easily test visual function of a user without requiring the expensive and time-consuming setup (e.g., extensive lighting requirements) needed for a conventional mobility test. In this example, the virtual mobility test can also use data collected from sensors and the VR system to more effectively and more objectively analyze and/or score a user's performance. The details provided here would also be applicable to augmented reality (AR) systems, which could be delivered through glasses, thus facilitating usage.
It may be understood that various details of the subject matter described herein may be changed without departing from the scope of the subject matter described herein. Furthermore, the foregoing description is for the purpose of illustration only, and not for the purpose of limitation, as the subject matter described herein is defined by the claims as set forth hereinafter.

Claims

What is claimed is:
1. A system comprising:
at least one processor; and
a memory, wherein the system is configured for:
configuring a virtual mobility test for testing visual function of a user;
generating the virtual mobility test; and analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data from body movement detection sensors.
2. The system of claim 1 wherein configuring the virtual mobility test includes configuring the virtual mobility test based on the user or a vision condition.
3. The system of claim 1 wherein configuring the virtual mobility test includes configuring the virtual mobility test to test a right eye, a left eye, or both eyes.
4. The system of claim 1 wherein configuring the virtual mobility test includes configuring luminance, shadow, color, contrast, gradients of contrast or color on the surface of the objects, reflectance or color of borders of the objects, or a lighting condition associated with one or more of the objects in the virtual mobility test based on configuration information.
5. The system of claim 1 wherein configuring the virtual mobility test includes: configuring the height of one or more of the objects in the virtual mobility test and/or configuring the size of one or more of the objects in the virtual mobility test based on configuration information.
6. The system of claim 5 wherein the configuration information includes the height or other attributes of the user, condition-based information, user-inputted information, operator-inputted information, or dynamic information.
7. The system of claim 1 wherein generating the virtual mobility test includes providing auditory or haptic feedback to the user when a feedback condition occurs, wherein the feedback condition includes a collision between a user and an obstacle in the virtual mobility test, a user leaving a designated path or course associated with the virtual mobility test, or a predetermined amount of progress not occurring within a predetermined amount of time.
8. The system of claim 1 wherein generating the virtual mobility test includes capturing the data from the body movement detection sensors and using the data to output a video of the user's progress through the virtual mobility test.
9. The system of claim 1 wherein the objects in the virtual mobility test may include a tile, an obstacle, a box obstacle, a step-over obstacle, a hanging or swinging obstacle, a floating obstacle, a starting line, a finish line, a finish flag, a guide arrow, or a button obstacle.
10. A method, the method comprising:
configuring a virtual mobility test for testing visual function of a user;
generating the virtual mobility test; and
analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors.
11. The method of claim 10 wherein configuring the virtual mobility test includes configuring the virtual mobility test based on the user or a vision condition.
12. The method of claim 10 wherein configuring the virtual mobility test includes configuring the virtual mobility test to test a right eye, a left eye, or both eyes.
13. The method of claim 10 wherein configuring the virtual mobility test includes configuring luminance, shadow, color, contrast, gradients of contrast or color on the surface of the objects, reflectance or color of borders of the objects, or a lighting condition associated with one or more of the objects in the virtual mobility test based on configuration information.
14. The method of claim 10 wherein configuring the virtual mobility test includes: configuring the height of one or more of the objects in the virtual mobility test and/or configuring the size of one or more of the objects in the virtual mobility test based on configuration information.
15. The method of claim 14 wherein the configuration information includes the height or other attributes of the user, condition-based information, user-inputted information, operator-inputted information, or dynamic information.
16. The method of claim 10 wherein generating the virtual mobility test includes providing auditory or haptic feedback to the user when a feedback condition occurs, wherein the feedback condition includes a collision between the user and one or more of the objects in the virtual mobility test, the user leaving a designated path or course associated with the virtual mobility test, or a predetermined amount of progress not occurring within a predetermined amount of time.
17. The method of claim 10 wherein generating the virtual mobility test includes capturing the data from body movement detection sensors and using the data to output a video of the user's progress through the virtual mobility test.
18. The method of claim 10 wherein the objects in the virtual mobility test may include a tile, an obstacle, a box obstacle, a step-over obstacle, a hanging or swinging obstacle, a floating obstacle, a starting line, a finish line, a finish flag, a guide arrow, or a button obstacle.
19. A non-transitory computer readable medium having stored thereon executable instructions that when executed by at least one processor of a computer cause the computer to perform steps comprising:
configuring a virtual mobility test for testing visual function of a user;
generating the virtual mobility test; and
analyzing performance of the user during the virtual mobility test for determining the visual function of the user based on user interaction with objects in the virtual mobility test using data obtained from body movement detection sensors.
20. The non-transitory computer readable medium of claim 19 wherein configuring the virtual mobility test includes configuring the virtual mobility test based on the user or an eye condition.
PCT/US2019/029173 2018-04-25 2019-04-25 Methods, systems, and computer readable media for testing visual function using virtual mobility tests WO2019210087A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/079,119 US20210045628A1 (en) 2018-04-25 2020-10-23 Methods, systems, and computer readable media for testing visual function using virtual mobility tests

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862662737P 2018-04-25 2018-04-25
US62/662,737 2018-04-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/079,119 Continuation-In-Part US20210045628A1 (en) 2018-04-25 2020-10-23 Methods, systems, and computer readable media for testing visual function using virtual mobility tests

Publications (1)

Publication Number Publication Date
WO2019210087A1 true WO2019210087A1 (en) 2019-10-31

Family

ID=68294241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/029173 WO2019210087A1 (en) 2018-04-25 2019-04-25 Methods, systems, and computer readable media for testing visual function using virtual mobility tests

Country Status (2)

Country Link
US (1) US20210045628A1 (en)
WO (1) WO2019210087A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024003319A1 (en) 2022-07-01 2024-01-04 Dublin City University Video analysis gait assessment system and method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210134049A1 (en) * 2017-08-08 2021-05-06 Sony Corporation Image processing apparatus and method
US20210357021A1 (en) * 2020-05-13 2021-11-18 Northwestern University Portable augmented reality system for stepping task therapy
FR3122326A1 (en) * 2021-04-28 2022-11-04 Streetlab Device for measuring visual ability to move

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140276130A1 (en) * 2011-10-09 2014-09-18 The Medical Research, Infrastructure and Health Services Fund of the Tel Aviv Medical Center Virtual reality for movement disorder diagnosis and/or treatment
US20170290504A1 (en) * 2016-04-08 2017-10-12 Vizzario, Inc. Methods and Systems for Obtaining, Aggregating, and Analyzing Vision Data to Assess a Person's Vision Performance
US20170340200A1 (en) * 2014-05-29 2017-11-30 Vivid Vision, Inc. Interactive System for Vision Assessment and Correction

Also Published As

Publication number Publication date
US20210045628A1 (en) 2021-02-18

Similar Documents

Publication Publication Date Title
US20240099575A1 (en) Systems and methods for vision assessment
CN107224261B (en) Visual impairment detection system using virtual reality
US20210045628A1 (en) Methods, systems, and computer readable media for testing visual function using virtual mobility tests
CA2767654C (en) Visualization testing and/or training
US20170340200A1 (en) Interactive System for Vision Assessment and Correction
CN103959357B (en) System and method for eye examination training
Chandra et al. Eye tracking based human computer interaction: Applications and their uses
CN110167421A (en) Integrally measure the system of the clinical parameter of visual performance
Kinateder et al. Using an augmented reality device as a distance-based vision aid—promise and limitations
KR102328089B1 (en) Apparatus and method for evaluating disorders of conscious based on eye tracking in virtual reality
US20150160474A1 (en) Corrective lens prescription adaptation system for personalized optometry
Kasprowski Eye tracking hardware: past to present, and beyond
Garro et al. A review of current trends on visual perception studies in virtual and augmented reality
US11331551B2 (en) Augmented extended realm system
US11768594B2 (en) System and method for virtual reality based human biological metrics collection and stimulus presentation
WO2023172768A1 (en) Methods, systems, and computer readable media for assessing visual function using virtual mobility tests
Wang et al. Design and evaluation of an exergame system of knee with the Azure Kinect
Li et al. A markerless visual-motor tracking system for behavior monitoring in DCD assessment
CN116153510B (en) Correction mirror control method, device, equipment, storage medium and intelligent correction mirror
Lopez Off-the-shelf gaze interaction
Neugebauer et al. Simulating vision impairment in virtual reality: a comparison of visual task performance with real and simulated tunnel vision
US20230337911A1 (en) Systems and methods to measure and improve eye movements and fixation
KR20240015687A (en) Virtual Reality Techniques for Characterizing Visual Abilities
Francisco Development of an Eye-Tracker for a HMD
Vannieuwenhuijsen et al. Research on the optimal setup for eye tracking and optimization of instruction set with eye tracking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19792837

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19792837

Country of ref document: EP

Kind code of ref document: A1