WO2018107108A1 - Method for visual field perimetry testing - Google Patents

Method for visual field perimetry testing

Info

Publication number: WO2018107108A1 (application PCT/US2017/065443)
Authority: WO (WIPO/PCT)
Prior art keywords: subject, target, processors, visual, computing devices
Application number: PCT/US2017/065443
Other languages: French (fr)
Inventor: David Huang
Original Assignee: Oregon Health & Science University
Priority date (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Application filed by Oregon Health & Science University
Publication of WO2018107108A1 (en)

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02: Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/024: Subjective types, i.e. testing apparatus requiring the active assistance of the patient, for determining the visual field, e.g. perimeter types
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016: Operational features thereof
    • A61B 3/0041: Operational features thereof characterised by display arrangements
    • A61B 3/005: Constructional features of the display

Definitions

  • Embodiments herein relate to the field of ophthalmology, and, specifically, to computer implemented methods of testing the visual field perimetry of a subject.
  • VF testing is a versatile diagnostic tool for glaucoma and other optic neuropathies, as well as retinal disease.
  • its most common use is in the evaluation of glaucoma, which tends to damage the peripheral vision early and only affects central vision much later. Therefore, the detection of early peripheral visual field loss is an important part of glaucoma diagnosis and monitoring.
  • Glaucoma is a leading cause of blindness worldwide. It is a degeneration of the optic nerve associated with cupping of the optic nerve head (optic disc). Glaucoma is often associated with elevated intraocular pressure (IOP). However, the IOP is normal in a large minority of cases and therefore IOP alone is not an accurate means of diagnosing glaucoma.
  • One-time examination of the optic disc is usually not sufficient to diagnose glaucoma either, as there is a great variation in the degree of physiologic cupping among normal eyes. Glaucoma eventually damages vision, usually starting in the peripheral region. Therefore, visual field (VF) tests that cover a wide area of vision (for example, ±24° of visual angle) are a standard for diagnosing glaucoma.
  • Visual field testing, the systematic measurement of visual field function, is also called perimetry or "perimetry testing"; automated testing is called automated perimetry or automated perimetry testing.
  • a single standard VF test has poor reliability, however, due to large test-retest variation. Therefore, several VF tests are required to establish an initial diagnosis of glaucoma or to show a worsening of glaucoma over time.
  • There are drawbacks to standard visual field testing. Dedicated instruments installed at an eye specialist's clinic are needed. This prevents frequent repetition of the test to confirm a glaucoma diagnosis or to monitor the progression of the disease. The test requires fixation at a fixed spot for many minutes. This is unnatural, tiring, and often not achieved. Fixation loss is a common cause of unreliable tests.
  • Subject input typically consists of simple yes-or-no clicking of a button. Since the timing of the click can be affected by poor subject attention, this contributes toward higher false positive and false negative responses. It also requires long intervals to separate presentation of visual stimuli. This causes boredom and loss of attention. This prevents frequent repetition of the test.
  • the visual stimuli are uninteresting. This causes boredom and loss of attention.
  • the auditory environment is quiet. This causes boredom and loss of attention. There is no immediate feedback on how the subject is doing. This causes boredom and loss of attention.
  • the head is held in a chin rest to maintain fixed distance to the visual stimuli. This is uncomfortable over extended periods of time. This prevents frequent repetition of the test.
  • the need exists for new and innovative ways to test VF for example as a test for glaucoma and/or glaucoma progression.
  • Disclosed herein are computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media that enable the determination of a subject's visual field perimetry using virtual reality with one or more computing devices, such as a mobile computing device, for example, a smart phone.
  • the computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media are provided in the form of a video game, which makes the test more enjoyable to the subject undergoing the test.
  • the method may include any and all of the following steps: presenting, using one or more processors of one or more computing devices, a fixation target at a known location on a virtual reality display screen, wherein the fixation target is presented to both eyes stereoscopically; determining, using the one or more processors of the one or more computing devices, if the subject's head has rotated to align a central site on the virtual reality display screen with the fixation target within a predetermined variable time window Ttarget; providing, using the one or more processors of the one or more computing devices, a visual stimulation target at a known location on the virtual reality display screen, wherein the visual stimulation target is presented to a single eye of the subject for a predetermined fixed amount of time Tstimulus; determining, using the one or more processors of
  • determining a subject's visual field perimetry with virtual reality includes: determining, using the one or more processors of the one or more computing devices, a positive identification of the fixation target if the subject's head is determined to align with the fixation target within the predetermined variable time window Ttarget, or a negative identification if the subject's head is determined not to align with the fixation target within the predetermined variable time window Ttarget.
  • determining a subject's visual field perimetry with virtual reality includes: choosing a new known location for the visual stimulation target for presentation from visual field locations that remain to be tested.
  • determining a subject's visual field perimetry with virtual reality includes that the known location of the visual stimulation target relative to the fixation target is randomly chosen.
  • determining a subject's visual field perimetry with virtual reality includes: determining, using the one or more processors of the one or more computing devices, if the head rotation of the subject in response to providing the visual stimulation target at the known location reaches a predetermined threshold magnitude within a predetermined variable time window Tdetection after the visual stimulation target presentation.
  • determining a positive stimulation target detection by the subject is based on whether there is head rotation directed toward the visual stimulation target within a predetermined variable time window Tdetection after the visual stimulation target presentation.
  • determining a subject's visual field perimetry with virtual reality includes: determining, using the one or more processors of the one or more computing devices, a negative stimulation target detection if no head rotation of the subject toward the visual stimulation target is detected in response to providing the visual stimulation target at the known location; storing, using the one or more processors of the one or more computing devices, the known location of the negative visual stimulation target detection; and constructing, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception for the subject from the known locations of the positive visual stimulation target detection and the known locations of the negative visual stimulation target detection.
  • determining a subject's visual field perimetry with virtual reality includes: outputting, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception.
  • outputting the visual field map of the threshold of perception comprises outputting a grid of sensitivity values.
  • outputting the visual field map of the threshold of perception comprises outputting a full-threshold visual field map.
  • outputting the visual field map of the threshold of perception comprises outputting a suprathreshold visual field map.
  • the one or more computing devices comprise at least one smart phone.
  • the one or more computing devices are in communication with a network.
  • the network is a
  • determining a subject's visual field perimetry with virtual reality includes: initiating, using the one or more processors of the one or more computing devices, a telemedicine session.
  • Figure 1 is a schematic of a Virtual Reality (VR) set that uses a smartphone coupled with a head-mounted VR adaptor.
  • Figure 2 is a schematic of a VR setup that uses a smartphone coupled with a head-mounted VR adapter.
  • Figure 3 is a schematic depicting a mobile computing device for Visual Field (VF) perimetry testing, according to embodiments herein.
  • Figure 4 is a schematic of an example computing device, according to embodiments herein.
  • Figures 5-14 are example screen shot illustrations of the dragon slayer VR game for VF testing, according to embodiments herein.
  • Figure 15 is a schematic illustration of a hyperacuity target, according to embodiments herein.
  • Figure 16 is a work flow diagram showing the testing cycle used to establish the threshold of visual stimulus perception, according to embodiments herein.
  • Figure 17 is a schematic of a full-threshold visual field output from the dragon slayer VR game for VF testing, according to embodiments herein.
  • Figure 18 is a schematic of a suprathreshold visual field output from the dragon slayer VR game for VF testing, according to embodiments herein.
  • Figure 19 is a work flow diagram showing the selection of visual stimulus and fixation target presentation locations for one round from the dragon slayer VR game for VF testing, according to embodiments herein.
  • Figure 20 is a schematic diagram of a networked mobile computing device for computer implemented methods of testing the visual field perimetry of a subject, in accordance with embodiments herein.
  • the description may use perspective-based descriptions such as up/down, back/front, and top/bottom. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of disclosed embodiments.
  • Coupled may mean that two or more elements are in direct physical contact. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other.
  • "A and/or B" means (A), (B), or (A and B).
  • a phrase in the form "at least one of A, B, and C” means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
  • a phrase in the form "(A)B” means (B) or (AB) that is, A is an optional element.
  • Disclosed herein are computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media that enable the determination of a subject's visual field perimetry using virtual reality with one or more computing devices, such as a mobile computing device, for example, a smart phone.
  • the computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media are provided in the form of a video game, which makes the test more enjoyable to the subject undergoing the test.
  • the disclosed methods, apparatuses, systems, and non-transitory computer-readable storage media offer several advantages over traditional methods of perimetry measurement as well as computer implemented methods of the same.
  • the current disclosure enables the presentation of both background graphics and a fixation target to both eyes to create a 3-dimensional stereoscopic immersive VR visual environment.
  • the fixation target is used as a shooting target in a game, which makes fixation more reliable and the testing more fun for the subject.
  • the test treats the visual stimulus and the fixation target differently.
  • the visual stimulus is displayed very briefly and its perception is detected by a small head motion that only needs to approximately match the direction of the stimulus and does not need to reach its position.
  • the fixation target is presented for an extended period of time as the target of a shooting game. Shooting the target requires precise and fast alignment of a sight onto the target and maintains the active attention of the subject.
  • This disclosure represents the first time a separate treatment of the fixation target and the visual stimulus to optimize the rigor of the VF test has been implemented within the excitement of a targeting game.
  • the game action i.e. shooting
  • in standard VF tests, typically one eye is tested at a time, and the contralateral eye is occluded with a patch.
  • the presentation of visual stimulus could be varied between the right and left eye in rapid alternation since the left and right eye displays are separately controlled. There is no need to occlude the contralateral eye. It is preferable to randomly vary the eye being tested with the visual stimulus to keep the game more interesting and the subject more engaged. And both eyes should be presented with the background scenery, head-tracking sight, and fixation target in a stereoscopic manner to maintain a 3D immersive VR visual environment for maximally enjoyable game play and subject interest.
  • the physical apparatus of the current disclosure (the device) is a virtual reality (VR) set.
  • the device comprises a mobile computing device, such as a smartphone, and a viewer that adapts the mobile computing device for binocular viewing.
  • a viewer that adapts the mobile computing device for binocular viewing. Examples of such viewers include the Google Cardboard (see, for example, Figure 1) and the Samsung Gear VR (see, for example, Figure 2), each of which works with specific smartphone models.
  • the VR device 100 comprises a mobile computing device 110 and a viewer 120.
  • the mobile computing device 110 has a VR display screen 111 that is divided into left eye display 112a and right eye display 112b.
  • the viewer 120 includes enclosure 122 to secure the mobile computing device 110 and ocular lenses 131 and 132 for right and left eye displays, respectively.
  • the VR device 100 is preferably mounted on the head
  • the viewer is coupled to a touch pad 125, for example, for manual input by the user, and a focus wheel 126 to adjust the focus of the ocular lenses.
  • Figure 3 illustrates a simplified diagram of an exemplary computing device; in embodiments, one or more such computing devices can be employed.
  • the one or more computing devices comprise a smart phone, such as a commercially available smart phone, for example an iPhone®, Samsung Galaxy®, Nokia Lumia®, Motorola Droid®, and the like.
  • the smartphone is an iPhone, for example an iPhone X.
  • the computing device may be a mobile computing device, such as a smart phone; for example, the computing device 110 could be mobile computing device 110, or even smart phone 110.
  • a smartphone is a handheld mobile computing device, typically with a mobile operating system and an integrated mobile broadband cellular network connection for voice, SMS, and Internet data communication; most if not all smartphones also support Wi-Fi.
  • Smartphones are typically pocket-sized, as opposed to tablets, which are much larger than a pocket. They are able to run a variety of third-party software components ("apps") from places like the Google Play Store or Apple App Store, and can receive bug fixes and gain additional functionality through operating system software updates.
  • Modern smartphones have a touchscreen color display with a graphical user interface that covers the front surface and enables the user to use a virtual keyboard to type and press onscreen icons to activate "app" features.
  • Typical smartphones will include one or more of the following sensors: magnetometer, proximity sensor, barometer, gyroscope and/or accelerometer.
  • the computing device 110 includes a VR display screen 111 and a touch pad 125 (for example integral to the computing device or external and coupled thereto), which may be part of the VR display screen 111, for example a video display touch screen.
  • the computing device 110 includes a number of components, such as one or more processors 140 and at least one communication module 142.
  • the communication module 142 allows communication from and to one or more other networked computing devices, for example having remote data storage and computing capabilities.
  • the one or more processors 140 each include one or more processor cores.
  • the at least one communication module 142 is physically and electrically coupled to the one or more processors 140.
  • the communication module 142 is part of the one or more processors 140.
  • computing device 110 includes printed circuit board (PCB) 155.
  • the computing device 110 includes other components that may or may not be physically and electrically coupled to the PCB.
  • these other components include, but are not limited to, a memory controller (not shown), volatile memory (e.g., dynamic random access memory (DRAM) (not shown)), non-volatile memory (not shown) such as read only memory (ROM), flash memory (not shown), an I/O port (not shown), a digital signal processor (not shown), a crypto processor (not shown), a graphics processor (not shown), one or more antennas (not shown), a touch-screen display, a touch-screen display controller (not shown), a battery (not shown), an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device (not shown), a compass (not shown), a speaker 113, a camera (not shown), and a mass storage device (such as a hard disk drive, a solid state drive, compact disk (CD) (not shown), or digital versatile disk (DVD) (not shown)).
  • the one or more processors 140 are operatively coupled to system memory through one or more links (e.g., interconnects, buses, etc.).
  • system memory is capable of storing information that the one or more processors 140 utilize to operate and execute programs and operating systems.
  • system memory is any usable type of readable and writeable memory such as a form of dynamic random access memory (DRAM).
  • the computing device 110 includes a microphone 146 configured to capture audio.
  • the computing device 110 includes a speaker 141 configured to transmit audio.
  • computing device 110 includes or is otherwise associated with various input and output/feedback devices to enable user interaction with the computing device 110 and/or peripheral components or devices associated with the computing device 110 by way of one or more user interfaces or peripheral component interfaces.
  • the user interfaces include, but are not limited to a physical keyboard or keypad, a touchpad 125, a display device (touchscreen or non-touchscreen), speakers, microphones, image sensors, haptic feedback devices and/or one or more actuators, and the like.
  • the computing device can comprise a memory element (not shown), which can exist within a removable smart chip or a secure digital (“SD”) card or which can be embedded within a fixed chip.
  • SIM: Subscriber Identity Module
  • the memory element may allow a software application to be resident on the device.
  • an I/O link connecting a peripheral device to a computing device is protocol-specific with a protocol-specific connector port that allows a compatible peripheral device to be attached to the protocol-specific connector port (i.e., a USB keyboard device would be plugged into a USB port, a router device would be plugged into a
  • Any single connector port would be limited to peripheral devices with a compatible plug and compatible protocol. Once a compatible peripheral device is plugged into the connector port, a communication link would be established between the peripheral device and a protocol-specific controller.
  • a non-protocol-specific connector port is configured to couple the I/O interconnect with a connector port of the computing device 110, allowing multiple device types to attach to the computing device 110 through a single physical connector port.
  • the I/O link between the computing device 110 and the I/O complex is configured to carry multiple I/O protocols (e.g., PCI Express®, USB,
  • the connector port is capable of providing the full bandwidth of the link in both directions with no sharing of bandwidth between ports or between upstream and downstream directions.
  • the connection between the I/O interconnect and the computing device 110 supports electrical connections, optical connections, or both.
  • the one or more processors 140, flash memory, and/or a storage device includes associated firmware storing programming instructions configured to enable the computing device 110, in response to execution of the programming instructions by one or more processors 140, to practice all or selected aspects of a computer implemented method of determining the visual field perimetry of a subject, in accordance with
  • the communication module 142 enables wired and/or wireless communications for the transfer of data to and from the computing device 110.
  • the computing device 110 also includes a network interface configured to connect the computing device 110 to one or more networked computing devices wirelessly via a transmitter and a receiver (or optionally a transceiver) and/or via a wired connection using a communications port.
  • transmitter/receiver and/or communications port are collectively referred to as a
  • the wireless transmitter/receiver and/or transceiver may be configured to operate in accordance with one or more wireless communications standards.
  • the term “wireless” and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non- solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not.
  • the computing device 110 includes a wireless communication module 142 for transmitting to and receiving data, for example for transmitting and receiving data from a network, such as a telecommunications network.
  • the communication module transmits data, including video data, through a cellular network or mobile network, such as a Global System for Mobile Communications (GSM) network, a General Packet Radio Service (GPRS) network, cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), and the like.
  • the mobile computing device 110 is directly connected with one or more devices via the direct wireless connection by using, for example, Bluetooth and/or BLE protocols, WiFi protocols, Infrared Data Association (IrDA) protocols, ANT and/or ANT+ protocols, LTE ProSe standards, and the like.
  • the communications port is configured to operate in accordance with one or more known wired communications protocols, such as a serial communications protocol (e.g., the Universal Serial Bus (USB), FireWire, Serial Digital Interface (SDI), and/or other like serial communications protocols), a parallel communications protocol (e.g., IEEE 1284, Computer Automated Measurement And Control (CAMAC), and/or other like parallel communications protocols), and/or a network communications protocol (e.g., Ethernet, token ring, Fiber Distributed Data Interface (FDDI), and/or other like network communications protocols).
  • the computing device 110 is configured to run, execute, or otherwise operate one or more applications, such as native applications, web applications, and hybrid applications.
  • the native applications are used for operating the computing device 110, such as using a camera or other like sensor of the computing device 110, cellular phone functionality of the computing device 110, and other like functions of the computing device 110.
  • native applications are platform or operating system (OS) specific or non-specific.
  • native applications are developed for a specific platform using platform-specific development tools, programming languages, and the like. Such platform-specific development tools and/or programming languages are provided by a platform vendor.
  • native applications are pre-installed on computing device 110 during manufacturing, or provided to the computing device 110 by an application server via a network.
  • Web applications are applications that load into a web browser of the computing device 110 in response to requesting the web application from a service provider.
  • the web applications are websites that are designed or customized to run on a computing device by taking into account various computing device parameters, such as resource availability, display size, touch-screen input, and the like. In this way, web applications may provide an experience that is similar to a native application within a web browser.
  • Web applications may be any server-side application that is developed with any server-side development tools and/or programming languages.
  • Hybrid applications may be a hybrid between native applications and web applications.
  • Hybrid applications may be standalone skeletons or other like application containers that may load a website within the application container.
  • Hybrid applications may be written using website development tools and/or programming languages, such as HTML5, CSS, JavaScript, and the like.
  • hybrid applications use a browser engine of the computing device 110, without using a web browser of the computing device 110, to render a website's services locally.
  • hybrid applications also access computing device capabilities that are not accessible in web applications, such as the accelerometer, camera, local storage, and the like.
  • the computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium.
  • the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage device.
  • the computer-usable or computer-readable medium can even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
  • a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
  • the computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave.
  • the computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc.
  • Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computing device, partly on the user's computing device, as a stand-alone software package, partly on the user's computing device and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computing device, through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computing device, (for example, through the Internet using an Internet Service Provider), or wireless network, such as described above.
  • example embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium.
  • a code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, program code, a software package, a class, or any combination of instructions, data structures, program statements, and the like.
  • an article of manufacture may be employed to implement one or more methods as disclosed herein.
  • the article of manufacture may include a computer-readable non-transitory storage medium.
  • the storage medium may include programming instructions configured to cause an apparatus to practice some or all aspects of a computer implemented method of determining the visual field perimetry of a subject, in accordance with embodiments of the present disclosure.
  • the storage medium may represent a broad range of persistent storage media known in the art, including but not limited to flash memory, optical disks or magnetic disks.
  • the programming instructions may enable an apparatus, in response to their execution by the apparatus, to perform various operations described herein.
  • the storage medium may include programming instructions configured to cause an apparatus to practice some or all aspects of a computer implemented method of determining the visual field perimetry of a subject using a computing device, in accordance with embodiments of the present disclosure.
  • internal to the computing device are computing, input, and output modules. These include a computer comprising a central processing unit, graphics processing unit, motion processing unit, and memory to process the inputs and produce the outputs for VR game play.
  • the inputs include head motion sensors that comprise a gyroscope and an accelerometer. Another input is a touch pad which allows for manual input used in game setup.
  • the outputs include the binocular stereo VR display corresponding to the smartphone screen sections shown before in Figure 1. There is also a speaker to produce the sounds of the game action.
  • Figure 4 depicts an example gaming method for determining the visual field perimetry of a subject using a computing device, in accordance with embodiments of the present disclosure.
  • the methods disclosed herein are preferably performed by a human operator or user in addition to the human subject, although in certain situations the operator and the subject can be the same individual; in that case, that individual is both the subject and the operator or user.
  • the method 200 is described with reference to the components illustrated in Figures 1-3 and the screen shots shown in Figures 5-14.
  • the computing device 110 prompts the user to set up the device for a new test of visual field perimetry. For example, the first time the subject is taking the test, the subject's identifying information and date of birth (or age) are entered into the computing device. Based on this information, the computing device retrieves an age-stratified average VF (maps of visual stimulus perception threshold for right and left eyes) of a normal population to use as the initial estimate of the subject's current VF map, for example from memory of the computing device.
  • the subject may enter a previously stored user name so the computing device can be directed to retrieve recent VF results for the subject that are stored in memory.
  • the average of recent VF maps is used as the initial estimate of the VF for the current test. Since the VR game is used to perform a VF test, the terms "game" and "test" are used interchangeably throughout.
  • the user of the mobile computing device is both subject of the VF test and the game player. Therefore the terms “user,” “subject,” and “player” are also used interchangeably.
  • before each game, the player may be directed to adjust the focus of the viewer by turning the focusing wheels 126 (see Figure 2). Then the player aligns the gun sight 220 (see Figure 5). This is done by holding the head in a neutral position and looking straight ahead, then pushing the touch pad 125 to activate the sight 220 (Figure 5) in straight-ahead gaze. Thereafter, the sight 220 is linked to head position and is used to control gun firing within the game.
  • the position of the sight in the screen display can be assumed to be the position of visual fixation of the subject during the active shooting portion of the game.
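Below is a minimal Python sketch of the sight calibration and head tracking just described: pressing the touch pad while looking straight ahead zeroes the sight, and the sight position is thereafter derived from head yaw and pitch. The class, method names, and the (yaw, pitch) sensor convention are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

class HeadTrackedSight:
    """Illustrative head-tracked sight (names and conventions assumed)."""

    def __init__(self):
        self.offset = np.zeros(2)  # calibration offset (yaw, pitch), degrees

    def calibrate(self, head_yaw_pitch):
        # Called when the player presses the touch pad while holding the
        # head in a neutral position: the current pose becomes the zero point.
        self.offset = np.asarray(head_yaw_pitch, dtype=float)

    def position(self, head_yaw_pitch):
        # Sight position in degrees of visual angle, relative to the
        # straight-ahead gaze established at calibration.
        return np.asarray(head_yaw_pitch, dtype=float) - self.offset
```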
  • a game playing and visual field test cycle is begun by the computing device.
  • Many game scenarios could be devised based on the principles of the current disclosure.
  • a "dragon slayer" VF game is illustrated in Figures 5-14 is described.
  • the VR display area 200 has a mostly uniform blue field background.
  • the bottom of the display is anchored by a horizon 210.
  • the game play action is above the horizon and therefore the scenery on and below the horizon mainly provides a background frame of reference to the viewer in the VR environment. In this case, an ocean scene was chosen, which is not overly distracting.
  • the center of the display is marked by a sight 220, which, for the purpose of stereo display, is preferably set at a far distance.
  • the scene in the whole display 200, along with central sight 220 pans with head movement to give a sense of immersion in the VR environment. This VR scene is displayed to both the right and left eyes of the player.
  • the game cycle begins with the fixation target 230 which takes the form of an animated dragon, which is displayed to both the right and left eyes of the player, and, for the purpose of stereo display, is set at a moderately far distance.
  • the player is tasked to move the sight 220 over to the dragon before it can escape.
  • the player moves the sight by head rotation, which is detected by the head motion sensors 115 in the VR system 100.
  • the player is able to quickly move the head so that the sight 220 directly overlies the fixation target 230 (dragon). This automatically activates gun fire 240, which slays the dragon. The player does not have to touch the touch pad 125 or perform any manual action to fire the gun.
  • the automatic firing activation is an important aspect of the gaming embodiment that functions to make the game proceed quickly. The firing may be accompanied by sound effects, for example from speaker 142, which adds to the excitement of the game.
  • the dragon disappears into a poof 231.
  • the score board 211 indicates that one more dragon has been slain.
  • Tstimulus is set sufficiently short (for example, a fraction of a second) so that the gaze does not have sufficient time to wander off fixation location 220.
  • the stimulus 250 is only presented to the eye being tested (for example, the right eye), unlike the rest of the visual scene, which is presented to both eyes.
  • the stimulus 250 takes the form of a checker board with moderate contrast between the light and dark squares within.
  • the checker board is animated by light-dark pattern reversal at several Hz (cycles per second) to activate the retinal motion sensing system and enhance stimulus perception.
  • the average brightness and color of the checker board as a whole are matched to (made the same as) the background.
  • the stimulus is preferably larger (both in terms of the overall size and the internal squares making up the checker board) when a peripheral VF location is tested and smaller when a central VF location is tested.
  • the central 12° square area could be considered the central VF, though other schemes of classification could also be used.
  • the stimulus is outside the central VF area and therefore a larger stimulus size is used.
  • the strength of the stimulus is controlled by varying the contrast between the light and dark squares.
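The contrast-controlled, pattern-reversing checkerboard described above can be sketched as follows; luminance values are in arbitrary units, and all names and parameters are illustrative assumptions, not the disclosure's implementation.

```python
import numpy as np

def checkerboard_frame(n_squares, px_per_square, background_lum, contrast, phase):
    """One frame of a pattern-reversal checkerboard (illustrative sketch).

    `contrast` in [0, 1] sets the light/dark difference symmetrically about
    the background luminance, so the board's average matches the background.
    Toggling `phase` between 0 and 1 at several Hz produces the reversal.
    """
    idx = np.add.outer(np.arange(n_squares), np.arange(n_squares)) % 2
    board = np.where(idx == phase,
                     background_lum * (1 + contrast),   # light squares
                     background_lum * (1 - contrast))   # dark squares
    # Expand each checker square to its on-screen size in pixels.
    return np.kron(board, np.ones((px_per_square, px_per_square)))
```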
  • the location of the stimulus within the VF is specified by the vector 221 between the stimulus 250 and the fixation location 220.
  • the vector 221 can be specified in polar coordinates using a distance measured in degrees of visual angle and a direction also measured in degrees.
  • the vector can also be specified in Cartesian coordinates as horizontal (azimuthal) and vertical (elevation) displacements measured in degrees of visual angle. If the player perceives the visual stimulus 250, then the player should anticipate that a dragon would appear shortly in that direction and move the sight toward the stimulus. This expectation is part of the pregame instructions to the player.
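For concreteness, the two equivalent specifications of the location vector can be related as in this short sketch (the angle convention, measured counterclockwise from horizontal, is an assumption):

```python
import math

def polar_to_cartesian(eccentricity_deg, direction_deg):
    """Convert a stimulus location vector from polar form (distance in
    degrees of visual angle, direction in degrees) to Cartesian azimuth
    (horizontal) and elevation (vertical) components."""
    azimuth = eccentricity_deg * math.cos(math.radians(direction_deg))
    elevation = eccentricity_deg * math.sin(math.radians(direction_deg))
    return azimuth, elevation
```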
  • a new fixation target 230 (dragon) is introduced in a quadrant different from the recently shown visual stimulus ( Figure 6) that the player failed to perceive.
  • the game task for the player is now to move sight 220 over to the position of the dragon in order to fire upon it. This must occur within a preset time window Ttarget during which the dragon will remain stationary.
  • the length of the time window is varied within a preset range to provide unpredictability that makes the game more interesting to play.
  • the range is set according to the level of play appropriate for the player's previous record of reaction time. Faster players are assigned a higher level of play where the game proceeds more quickly, while slower players are given more time to react.
  • the time window for a stationary dragon is only a fraction of a second for the average player, and the game and VF testing proceed quickly. If the player fails to move sight 220 over to dragon 230 in time, then the dragon escapes.
  • the dragon 230 flies away to the distance, and eventually disappears over the horizon.
  • the score board 211 indicates that one more dragon has escaped. This also means that fixation on the target was not achieved and yet another new fixation target must be introduced. This is shown in Figure 10, where new fixation target 230 (dragon) appears in a quadrant different from the previous target. For the purpose of demonstration, let's suppose that the player is now able to move sight 220 over to the fixation target in time.
  • the dragon disappears into poof 231.
  • then, for the fixed time Tstimulus, the visual stimulus 260 is presented.
  • the location of the stimulus 260 relative to fixation 220 is described by vector 222, which has a distance measured in a unit of visual angle and a direction.
  • the size of the stimulus is chosen according to its distance from fixation (also called "eccentricity" in perimetry terminology). In this case, a smaller stimulus is chosen because it is within the central 12° VF area.
  • the motion of sight 220 is described by vector 270.
  • Perception of the visual stimulus is detected if the motion vector 270 exceeds a magnitude threshold Theta (for example, the Theta could be 1.5° visual angle equivalent) within an allotted time Tdetection (e.g. 0.8 second), and agrees with the direction 222 (Figure 11) of the stimulus within an angular tolerance Alpha (e.g. ±15°).
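The perception criterion just described (magnitude threshold Theta within the window Tdetection, direction within the tolerance Alpha) can be expressed compactly as below. The function signature and the 2D motion-vector convention are illustrative assumptions; the default values are the examples given in the text.

```python
import math

def stimulus_detected(motion_vec, stimulus_dir_deg, elapsed_s,
                      theta_deg=1.5, t_detection_s=0.8, alpha_deg=15.0):
    """Return True if sight motion indicates perception of the stimulus."""
    if elapsed_s > t_detection_s:
        return False                                  # too slow: not detected
    if math.hypot(*motion_vec) < theta_deg:
        return False                                  # motion too small
    motion_dir = math.degrees(math.atan2(motion_vec[1], motion_vec[0]))
    angle_err = (motion_dir - stimulus_dir_deg + 180.0) % 360.0 - 180.0
    return abs(angle_err) <= alpha_deg                # direction agrees
```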
  • a new fixation target 230 in the form of an animated dragon appears in the immediate path of sight motion 270 so that the player can easily lock sight onto the target and slay the dragon, as described before.
  • This is another important element of the game design of the current disclosure to motivate the player/test subject to move the sight toward the stimulus. Once the player locks sight onto the target, fixation is reestablished and this sets up the conditions for another round of visual stimulus presentation and perception testing.
  • the game concludes, once a sufficient number of targets have been presented to map the visual field perimetry.
  • the method of game play ends.
  • the visual stimulus can take any form that contrasts with the background in terms of brightness, color, pattern, or motion.
  • a bright white dot could be used on a gray background
  • a yellow dot could be used on a blue background
  • a stimulus could oscillate in brightness or location over the presentation time.
  • a particular type of stimulus that is useful for testing the central visual field (i.e., that subtending the macular portion of the retina) is the hyperacuity stimulus, in which distortion in an extended target can be detected with higher sensitivity than an ordinary acuity target (i.e., identification of letters).
  • An example is shown in Figure 15, where the circular target 280 has an arc segment 281 that is offset from the rest of the circle.
  • the offset of the arc segment 281 constitutes the visual stimulus in this hyperacuity target.
  • the location vector 222 of the hyperacuity stimulus 281 could constitute the goal of sight movement in the dragon slayer game described above, in place of the checker board reversal stimulus.
  • the dragon slayer game is only one example of the current disclosure.
  • the fixation target can take forms other than a dragon.
  • a fish in an aquatic background can serve an equivalent purpose.
  • catching fireflies in a dimly lit prairie at night could be an alternate game.
  • a more general description of the game/test cycle is given in the flow chart of Figure 16.
  • a fixation target is presented in the VR display.
  • the target and the background scene are always presented to both eyes stereoscopically.
  • the initial target should be close to the central sight but the location should be randomly varied to keep the game interesting.
  • the player is tasked to rotate the head to move the sight onto the target within a short variable time window Ttarget, for example, within about 4 seconds.
  • if the sight does not reach the target in time, the target moves off screen and a new target (at block 304) is presented in a different quadrant of the VR display, and the operation returns to block 301.
  • Blocks 301 to 307 can be considered a subroutine to establish visual fixation.
  • the presentation at block 310 of the visual stimulus immediately follows.
  • the stimulus is shown briefly only to the eye being tested, for a fixed amount of time Tstimulus, for example, about 0.5 seconds.
  • the location of the stimulus relative to the fixation point is randomly chosen from a map of VF locations that remains to be tested.
  • the player is tasked to move sight toward the stimulus, if the stimulus is perceived.
  • the VF test decides whether the subject detected the stimulus based on whether the direction of sight motion is headed correctly toward the stimulus. The motion must also reach a threshold magnitude within a time window Tdetection, for example, within about 2 seconds. Although 2 seconds has been chosen as an exemplary time period, other time periods, either shorter or longer, are contemplated. If stimulus perception is detected, then this is recorded at block 313 and a new target is presented in the path of the sight movement at block 305. If stimulus perception is not detected, then this is recorded at block 314 and a new target is presented (at block 304) in a quadrant different from that of the stimulus. Either way, the fixation establishment subroutine is re-entered at block 301. Blocks 310 to 314 constitute the visual stimulus subroutine. Repeated testing of visual stimulus perception builds up a VF map of the threshold of perception. This is described in the following section.
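A condensed sketch of this cycle (blocks 301-314) is given below. The `display` and `sensors` helper objects and all of their methods are hypothetical stand-ins for the game engine, and the retesting bookkeeping of Figure 19 is omitted for brevity.

```python
import random

def run_vf_test(display, sensors, locations_to_test,
                t_target_s=4.0, t_stimulus_s=0.5, t_detection_s=2.0):
    """Illustrative game/test cycle; `locations_to_test` is a set of
    VF locations (e.g., (azimuth, elevation) tuples in degrees)."""
    results = {}
    while locations_to_test:
        # Fixation subroutine: present a target and wait for alignment.
        target = display.show_fixation_target()
        if not sensors.sight_reaches(target, within_s=t_target_s):
            display.move_target_to_new_quadrant()
            continue                       # re-enter fixation subroutine
        display.slay_dragon(target)        # fixation established

        # Stimulus subroutine: brief monocular stimulus at a random
        # untested location, then watch for head motion toward it.
        loc = random.choice(list(locations_to_test))
        display.show_stimulus(loc, eye=random.choice(("left", "right")),
                              duration_s=t_stimulus_s)
        perceived = sensors.motion_toward(loc, within_s=t_detection_s)
        results[loc] = perceived
        locations_to_test.discard(loc)
    return results
```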
  • the output of the VF test is a map of the threshold for perceiving the visual stimulus.
  • the VR display is wide and does not limit the width of the VF test.
  • the VF 400 is presented as a grid of sensitivity values.
  • the retinal sensitivity is the inverse of the minimum stimulus strength needed for the eye to perceive it at the particular location in the VF.
  • the strength of the stimulus is specified as a combination of the size, contrast, and duration.
  • the dark-bright reversal checker board stimulus strength is primarily determined by the contrast between the dark and bright checker squares.
  • the size is fixed according to eccentricity and the presentation duration is fixed, unless an increase is necessary to increase the stimulus strength at maximum contrast.
  • sensitivity is expressed on a logarithmic scale; the standard unit of the logarithmic scale is the decibel (dB).
  • the normative reference (0 dB) is calibrated to the average perception threshold of a healthy human population, typically chosen to have a similar range of age to that of the population to be tested for disease.
  • the numbers in the squares are dB sensitivity values relative to the normative reference. The sensitivity values are determined by the minimal strength of the visual stimulus that could be perceived by the eye being tested using the VR game of the present disclosure.
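One plausible way to compute such dB sensitivities for the contrast-based stimulus is sketched below; the disclosure does not specify an exact formula, so this log-contrast form and its names are assumptions.

```python
import math

def sensitivity_db(threshold_contrast, normative_contrast):
    """dB sensitivity relative to the normative reference: 0 dB when the
    eye's contrast threshold equals the healthy-population average,
    positive when less contrast is needed (better than normal)."""
    return 10.0 * math.log10(normative_contrast / threshold_contrast)
```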
  • the center 401 represents the fixation point, corresponding to the foveal center anatomically.
  • the blind spot 402 corresponding to the optic nerve head anatomically, is to the right of and slightly inferior to fixation.
  • the 2 grid squares around the blind spot are not tested (marked by "NT" in Figure 17).
  • the VF map format of the left eye is the mirror image.
  • the sizes of the grid squares determine the resolution of the VF test.
  • the central VF within a 12° square area is tested at a sampling interval of 3° (width of the central 4x4 grid squares around the fixation 401).
  • outside the central VF, the sampling interval is increased to 6° (width of the larger peripheral squares), where a larger visual stimulus is used.
  • the overall VF test area extends 18° above fixation and 24° to the right, left, and below fixation.
  • the superior extension is less because many people have relatively droopy upper lids that limit the superior VF.
  • the numerical perimetry map 400 is called a "full-threshold" VF because it precisely shows the threshold of perception at each sampled VF location.
  • the full-threshold test requires a series of visual stimuli to be shown at each location to bracket in the threshold. This takes a longer time, but provides more information on the severity of disease. Thus the full-threshold test is used to monitor the rate of disease progression in patients already known to have glaucoma. If the purpose of the test is simply to discriminate between a healthy and diseased eye, it is not necessary to precisely determine the numerical value of the thresholds of perception, but only to determine if it is worse than normal limits. This simplified testing is called a suprathreshold VF test.
  • the suprathreshold VF test is often used for screening purposes to decide if an individual person suspected of having glaucoma has evidence of VF damage.
  • a suprathreshold VF map 500 is shown in Figure 18. This map is obtained by testing each VF location using a stimulus strength that almost all normal people could perceive. For example, this strength could be set at 5 dB above the average stimulus strength threshold that could be perceived by the normal population. If the eye being tested could not perceive this stimulus at a given location, then the sensitivity is worse than -5 dB and clearly abnormal.
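Under the log-contrast convention assumed above, the 5 dB suprathreshold margin corresponds to roughly a 3.16-fold contrast increase over the normative threshold; the numbers here are purely illustrative.

```python
normative_contrast = 0.10                              # assumed normal threshold
suprathreshold = normative_contrast * 10 ** (5 / 10)   # ≈ 0.316 (5 dB stronger)
```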
  • abnormal VF grid locations are marked by black squares such as the central VF location 504. Locations with normal sensitivities, such as grid location 503, are left blank. The cluster of VF defects in the superonasal quadrant is typical of glaucoma.
  • the suprathreshold VF map is compiled from information obtained from probing each VF location with a suprathreshold stimulus several times.
  • a flow chart for generating the sequence of testing is shown in Figure 19. At the beginning of the game, all locations on the VF grid are eligible for selection and both counters M and N at each location are set to zero. Each location has counters N and M to keep track of the number of times a visual stimulus was or was not perceived, respectively. Whenever fixation is established (see Figure 16), a stimulus location is randomly selected.
  • this random selection at block 601 initiates the stimulus testing routine.
  • the suprathreshold stimulus is displayed (at block 603).
  • if the stimulus is perceived, the perception counter N is incremented by 1;
  • if it is not perceived, the nonperception counter M is incremented by 1.
  • if N>1, then the VF location is marked as normal (having visual sensitivity that is within normal range or better). Otherwise, the operation is passed to block 609, where, if M>1, the VF location is marked as abnormal (having visual sensitivity significantly worse than normal). If the location is determined to be normal or abnormal, then no further testing is needed at this location and it is precluded from random selection in future rounds (at block 611).
  • VF testing and game playing is completed when all of the VF points ( Figure 18) have been found to be either normal or abnormal.
  • each VF point requires 1 to 3 rounds of stimulus presentation. Since there are 64 locations on the map (Figure 18), 64 to 192 rounds of stimulus presentation are needed for each eye. If each round of testing takes 2 seconds, then 4 to 12 minutes are needed to test both eyes, which is tolerable.
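The per-location bookkeeping of Figure 19 can be summarized in a few lines; the data structure and return convention are illustrative, not taken from the disclosure.

```python
def update_location(counters, loc, perceived):
    """Update counters N (perceived) and M (not perceived) for one round
    at `loc`; return "normal"/"abnormal" once settled, else None."""
    n, m = counters.get(loc, (0, 0))
    if perceived:
        n += 1
    else:
        m += 1
    counters[loc] = (n, m)
    if n > 1:
        return "normal"      # exclude from further random selection
    if m > 1:
        return "abnormal"    # exclude from further random selection
    return None              # still eligible for selection
```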
  • the full-threshold VF game follows a similar scheme, but tests each VF location using a range of stimulus strengths.
  • the initial strength is chosen based on the average threshold of several previous tests by the same individual. If no previous test has been performed, then the initial stimulus strength could be set at the average of the normal population. If the stimulus is not perceived, then the stimulus strength is increased at that location until it is perceived (up to a limit). Then the threshold of perception is established to the desirable precision by bracketing the stimulus strength. There are various methods of bracketing the stimulus strength well known to those skilled in the art of perimetry and these are not detailed here. But generally the full-threshold test requires more time than the suprathreshold test.
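Because the disclosure defers to bracketing methods well known in perimetry, the staircase below is only one illustrative variant: the `perceives(strength)` callback, step sizes, and limits are all assumptions.

```python
def bracket_threshold(perceives, initial_strength,
                      coarse_step=4.0, fine_step=2.0,
                      min_strength=0.1, max_strength=100.0):
    """Estimate the perception threshold (in arbitrary strength units,
    e.g., dB of contrast) with a simple two-step staircase."""
    strength, step = initial_strength, coarse_step
    seen = perceives(strength)
    while min_strength < strength < max_strength:
        # Step weaker after a perceived stimulus, stronger after a miss.
        strength += -step if seen else step
        now_seen = perceives(strength)
        if now_seen != seen:            # response reversal
            if step == fine_step:       # second reversal: threshold bracketed
                return strength if now_seen else strength + fine_step
            step = fine_step            # refine after the first reversal
        seen = now_seen
    return strength  # reached a strength limit without a second reversal
```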
  • because any VF test is susceptible to error due to variation in the subject's response and occasional loss of attention or fixation, it is best to make a diagnosis of glaucoma based on several VF tests. Likewise, worsening of the VF over time is best confirmed over several VF tests.
  • the advantage of the VR game VF test is that it is not as tedious and boring as a conventional VF test and therefore repeat testing is better tolerated. It can also be performed at home so that testing can be done continually between visits to a physician.
  • FIG. 20 illustrates a networked telemedicine system 700, in accordance with embodiments herein.
  • the networked telemedicine system 700 includes the computing device 110 in wireless communication with the network 705.
  • the networked telemedicine system 700 also includes other networked devices 710, which may be in wired or wireless communication therewith.
  • the computing device 110 includes application software with executable instructions configured to transmit and receive information from the network 705. The information can be transmitted to and/or received from another device, such as one or more networked devices 710, through the network 705.
  • the computing device 110 is also capable of transmitting information about the visual field perimetry of a subject to one or more of a doctor, such as an eye doctor, or other medical practitioner, for example as part of a telemedicine session.
  • Telemedicine and/or a telemedicine session is the use of telecommunication and information technology to provide clinical health care from a distance, for example using a telemedicine system 700. It has been used to overcome distance barriers and to improve access to medical services that would often not be consistently available in distant rural communities.
  • the telemedicine system 700 distributes and receives information to and from one or more networked devices 110, 710 through one or more networks 705.
  • network 705 may be any network that allows computers to exchange data.
  • network 705 includes one or more network elements (not shown) capable of physically or logically connecting computers.
  • the network 705 may include any appropriate network, including an intranet, the Internet, a cellular network, a local area network (LAN), a wide area network (WAN), a personal network or any other such network or combination thereof. Components used for such a system can depend at least in part upon the type of network and/or environment selected.
  • Each network 705 includes a wired or wireless telecommunication means by which network systems and networked devices 110, 710 may communicate and exchange data.
  • each network 705 is implemented as, or may be a part of, a storage area network (SAN), personal area network (PAN), a metropolitan area network (MAN), a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a virtual private network (VPN), an intranet, an Internet, a mobile telephone network, such as Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Integrated Digital Enhanced Network (iDEN), Long-Term Evolution (LTE), a 3rd generation (3G), 4th generation (4G), or 5th generation (5G) mobile network, a card network, Bluetooth, a near field communication network (NFC), any form of standardized radio frequency, or any combination thereof, or any other appropriate architecture or system that facilitates the communication of signals, data, and/or messages (generally referred to as data).
  • Each network system and networked device 110, 710 includes a device having a communication component capable of transmitting and/or receiving data over the network 705.
  • Each networked device 110 and 710 may comprise a server, personal computer, mobile device (for example, notebook computer, tablet computer, netbook computer, personal digital assistant (PDA), video game device, GPS locator device, cellular telephone, smartphone, or other mobile device), a television with one or more processors embedded therein and/or coupled thereto, or other appropriate technology that includes or is coupled to a web browser or other application for communicating via the network 705.
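As referenced in the list above, the bracketing of stimulus strength for a full-threshold test can take many forms. The following is a minimal Python sketch of one common scheme (a reversal-halving staircase), offered for illustration only and not as the specific procedure of this disclosure; the function name, step sizes, dB range, trial cap, and the perceived() callback are all assumptions.

    def staircase_threshold(perceived, start_db, step_db=4.0, min_step_db=1.0,
                            floor_db=0.0, ceil_db=40.0, max_trials=20):
        """Estimate a perception threshold by bracketing, halving the step on
        each response reversal. perceived(strength_db) presents a stimulus at
        the given attenuation (higher dB = weaker stimulus) and returns True
        if the subject responded."""
        strength, last_seen = start_db, None
        for _ in range(max_trials):
            if step_db < min_step_db:
                break                        # desired precision reached
            seen = perceived(strength)
            if last_seen is not None and seen != last_seen:
                step_db /= 2.0               # reversal: halve step to bracket in
            last_seen = seen
            # Seen: try a weaker stimulus (more dB); not seen: try a stronger one.
            strength = strength + step_db if seen else strength - step_db
            strength = min(ceil_db, max(floor_db, strength))
        return strength                      # estimated threshold of perception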

Abstract

Described herein are methods, apparatuses, systems, and non-transitory media for virtual-reality testing of a subject's visual field perimetry with a computing device, such as a mobile computing device.

Description

METHOD FOR VISUAL FIELD PERIMETRY TESTING
Cross-reference to Related Applications
[0001] This application claims the priority benefit of the earlier filing date of U.S.
Provisional Patent Application No. 62/431,597, filed December 8, 2016, which is hereby incorporated herein by reference in its entirety.
Technical Field
[0002] Embodiments herein relate to the field of ophthalmology, and, specifically, to computer-implemented methods of testing the visual field perimetry of a subject.
Background
[0003] Visual field (VF) testing is a versatile diagnostic tool for glaucoma and other optic neuropathies, as well as retinal disease. However, its most common use is in the evaluation of glaucoma, which tends to damage the peripheral vision early and affect central vision only much later. Therefore, the detection of early peripheral visual field loss is an important part of glaucoma diagnosis and monitoring.
[0004] Glaucoma is a leading cause of blindness worldwide. It is a degeneration of the optic nerve associated with cupping of the optic nerve head (optic disc). Glaucoma is often associated with elevated intraocular pressure (IOP). However, the IOP is normal in a large minority of cases and therefore IOP alone is not an accurate means of diagnosing glaucoma. One-time examination of the optic disc is usually not sufficient to diagnose glaucoma either, as there is great variation in the degree of physiologic cupping among normal eyes. Glaucoma eventually damages vision, usually starting in the peripheral region. Therefore visual field (VF) tests that cover a wide area of vision (for example, ±24° of visual angle) are a standard for diagnosing glaucoma. Visual field testing, the systematic
measurement of visual field function, is also called "perimetry" or "perimetry testing" and automated testing is called automated perimetry or automated perimetry testing. A single standard VF test is poorly reliable, however, due to large test-retest variation. Therefore several VF tests are required to establish an initial diagnosis of glaucoma or to show a worsening of glaucoma over time.
[0005] There are drawbacks to standard visual field testing. Dedicated instruments installed at an eye specialist's clinic are needed. This prevents frequent repetition of the test to confirm a glaucoma diagnosis or to monitor the progression of the disease. The test requires fixation at a fixed spot for many minutes. This is unnatural, tiring, and often not achieved. Fixation loss is a common cause of unreliable tests. Subject input typically consists of simple yes-or-no clicking of a button. Since the timing of the click can be affected by poor subject attention, this contributes toward higher false positive and false negative responses. The test also requires long intervals to separate presentations of visual stimuli, the visual stimuli are uninteresting, the auditory environment is quiet, and there is no immediate feedback on how the subject is doing; each of these causes boredom and loss of attention. The head is held in a chin rest to maintain a fixed distance to the visual stimuli. This is uncomfortable over extended periods of time and prevents frequent repetition of the test. In view of the foregoing, the need exists for new and innovative ways to test VF, for example as a test for glaucoma and/or glaucoma progression.
Summary
[0006] Disclosed herein are computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media that enable the determination of a subject's visual field perimetry using virtual reality with one or more computing devices, such as a mobile computing device, for example, a smart phone. In certain exemplary
embodiments, the computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media are provided in the form of a video game, which makes the test more enjoyable to the subject undergoing the test. With respect to the disclosed method, the method may include any and all of the following steps: presenting, using one or more processors of one or more computing devices, a fixation target at a known location on a virtual reality display screen, wherein the fixation target is presented to both eyes stereoscopically; determining, using the one or more processors of the one or more computing devices, if the subject's head has rotated to align a central site on the virtual reality display screen with the fixation target within a predetermined variable time window Ttarget; providing, using the one or more processors of the one or more computing devices, a visual stimulation target at a known location on the virtual reality display screen, wherein the visual stimulation target is presented to a single eye of the subject for a predetermined fixed amount of time Tstimulus; determining, using the one or more processors of the one or more computing devices, a positive stimulation target detection by detecting head rotation of the subject in response to providing the visual stimulation target at the known location; storing, using the one or more processors of the one or more computing devices, the known location of the positive visual stimulation target detection; and repeating, using the one or more processors of the one or more computing devices, the above steps, to construct a visual field map of a threshold of perception for the subject from the known locations of the positive visual stimulation target detection, thereby determining a subject's visual field perimetry with virtual reality.
[0007] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes: determining, using the one or more processors of the one or more computing devices, a positive identification of the fixation target if the subject's head is determined to align with the fixation target within the predetermined variable time window Ttarget, or a negative identification if the subject's head is determined not to align with the fixation target within the predetermined variable time window Ttarget.
[0008] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes: choosing a new known location for the visual stimulation target for presentation from visual field locations that remain to be tested.
[0009] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes that the known location of the visual stimulation target relative to the fixation target is randomly chosen.
[0010] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes: determining, using the one or more processors of the one or more computing devices, if the head rotation of the subject in response to providing the visual stimulation target at the known location reaches a predetermined threshold magnitude within a predetermined variable time window Tdetection after the visual stimulation target presentation.
[0011] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a positive stimulation target detection by the subject is based on whether there is head rotation directed toward the visual stimulation target within a predetermined variable time window Tdetection after the visual stimulation target presentation.
[0012] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes: determining, using the one or more processors of the one or more computing devices, a negative stimulation target detection if no head rotation of the subject toward the visual stimulation target is detected in response to providing the visual stimulation target at the known location; storing, using the one or more processors of the one or more computing devices, the known location of the negative visual stimulation target detection; and constructing, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception for the subject from the known locations of the positive visual stimulation target detection and the known locations of the negative visual stimulation target detection.
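As a minimal sketch of how the stored positive and negative detections at known locations might be organized into the visual field map described above (the data structure and the threshold-estimation rule here are illustrative assumptions, not specified by the disclosure):

    from collections import defaultdict

    class VFMap:
        """Accumulates positive/negative stimulus detections at known
        locations and derives a threshold-of-perception map."""
        def __init__(self):
            # (x_deg, y_deg) location -> list of (strength_db, detected) trials
            self.trials = defaultdict(list)

        def record(self, location, strength_db, detected):
            self.trials[location].append((strength_db, detected))

        def threshold_map(self):
            # Assumed rule: threshold is the weakest stimulus detected (highest
            # attenuation in dB); None where nothing was ever detected.
            return {loc: (max(s for s, d in t if d) if any(d for _, d in t) else None)
                    for loc, t in self.trials.items()}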
[0013] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality, if the subject's head is not determined to have rotated to align a central site on the virtual reality display screen with the fixation target within the predetermined variable time window Ttarget, a new fixation target is presented to both eyes stereoscopically in a different quadrant of the virtual reality display screen, using the one or more processors of the one or more computing devices.
[0014] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes: outputting, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception.
[0015] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality, outputting the visual field map of the threshold of perception comprises outputting a grid of sensitivity values.
[0016] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality, outputting the visual field map of the threshold of perception comprises outputting a full-threshold visual field map.
[0017] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality, outputting the visual field map of the threshold of perception comprises outputting a suprathreshold visual field map.
[0018] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media, the one or more computing devices comprise at least one smart phone.
[0019] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media, the one or more computing devices are in communication with a network.
[0020] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media, the network is a
telecommunications network.
[0021] In various embodiments of computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media determining a subject's visual field perimetry with virtual reality includes: initiating, using the one or more processors of the one or more computing devices, a telemedicine session.
Brief Description of the Drawings
[0022] Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings and the appended claims. Embodiments are illustrated by way of example and not by way of limitation in the figures of the
accompanying drawings.
[0023] Figure 1 is a schematic of a Virtual Reality (VR) set that uses a smartphone coupled with a head-mounted VR adapter.
[0024] Figure 2 is a schematic of a VR setup that uses a smartphone coupled with a head-mounted VR adapter.
[0025] Figure 3 is a schematic depicting a mobile computing device for Visual Field
(VF) testing, according to embodiments herein.
[0026] Figure 4 is a schematic of an example computing device, according to embodiments herein.
[0027] Figures 5-14 are example screen shot illustrations of the dragon slayer VR game for VF testing, according to embodiments herein.
[0028] Figure 15 is a schematic illustration of a hyperacuity target, according to embodiments herein.
[0029] Figure 16 is a work flow diagram showing the testing cycle used to establish the threshold of visual stimulus perception, according to embodiments herein.
[0030] Figure 17 is a schematic of a full-threshold visual field output from the dragon slayer VR game for VF testing, according to embodiments herein.
[0031] Figure 18 is a schematic of a suprathreshold visual field output from the dragon slayer VR game for VF testing, according to embodiments herein.
[0032] Figure 19 is a work flow diagram showing the selection of visual stimulus and fixation target presentation locations for one round from the dragon slayer VR game for VF testing, according to embodiments herein.
[0033] Figure 20 is a schematic diagram of a networked mobile computing device for computer implemented methods of testing the visual field perimetry of a subject, in accordance with embodiments herein.
Detailed Description of Disclosed Embodiments
[0034] In the following detailed description, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope. Therefore, the following detailed description is not to be taken in a limiting sense, and the scope of embodiments is defined by the appended claims and their equivalents.
[0035] Various operations may be described as multiple discrete operations in turn, in a manner that may be helpful in understanding embodiments; however, the order of description should not be construed to imply that these operations are order dependent.
[0036] The description may use perspective-based descriptions such as up/down, back/front, and top/bottom. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application of disclosed embodiments.
[0037] The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, "connected" may be used to indicate that two or more elements are in direct physical contact with each other. "Coupled" may mean that two or more elements are in direct physical contact. However, "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other.
[0038] For the purposes of the description, a phrase in the form "A/B" or in the form
"A and/or B" means (A), (B), or (A and B). For the purposes of the description, a phrase in the form "at least one of A, B, and C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C). For the purposes of the description, a phrase in the form "(A)B" means (B) or (AB) that is, A is an optional element.
[0039] The description may use the terms "embodiment" or "embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments, are synonymous, and are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.).
[0040] With respect to the use of any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
[0041] Introduction
[0042] Disclosed herein are computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media that enable the determination of a subject's visual field perimetry using virtual reality with one or more computing devices, such as a mobile computing device, for example, a smart phone. In certain exemplary
embodiments the computer-implemented methods, apparatuses, systems, and non-transitory computer-readable storage media are provided in the form of a video game, which makes the test more enjoyable to the subject undergoing the test.
[0043] As will become apparent, the disclosed methods, apparatuses, systems, and non-transitory computer-readable storage media offer several advantages over traditional methods of perimetry measurement as well as computer implemented methods of the same. Compared to prior methods, the current disclosure enables the presentation of both background graphics and a fixation target to both eyes to create a 3-dimensional stereoscopic immersive VR visual environment. Furthermore, in certain embodiments, the fixation target is used as a shooting target in a game, which makes fixation more reliable and the testing more fun for the subject. The test treats the visual stimulus and the fixation target differently. The visual stimulus is displayed very briefly and its perception is detected by a small head motion that only needs to approximately match the direction of the stimulus and does not need to reach its position. This feature maximizes the chance that stimulus perception is detected regardless of the subject's game playing skill. On the other hand, the fixation target is presented for an extended period of time and as a target of a shooting game. Shooting the target requires precise and fast alignment of a sight onto the target and maintains the active attention of the subject. This disclosure represents the first time a separate treatment of the fixation target and the visual stimulus to optimize the rigor of the VF test has been implemented within the excitement of a targeting game. The game action (i.e., shooting) is activated when the sight dwells on a target for an extended period of time. This obviates the need for manual pressing of a button or other inputs described in previous perimetry disclosures and is better suited to a VR device.
[0044] Furthermore, in traditional VF tests, typically one eye is tested at a time, and the contralateral eye is occluded with a patch. In the VR-based VF tests of the present disclosure, the presentation of visual stimulus could be varied between the right and left eye in rapid alternation since the left and right eye displays are separately controlled. There is no need to occlude the contralateral eye. It is preferable to randomly vary the eye being tested with the visual stimulus to keep the game more interesting and the subject more engaged. And both eyes should be presented with the background scenery, head-tracking sight, and fixation target in a stereoscopic manner to maintain a 3D immersive VR visual environment for maximally enjoyable game play and subject interest.
[0045] Exemplary Device
[0046] The physical apparatus of the current disclosure (the device) is a virtual reality
(VR) platform that includes a virtual reality display screen and one or more computing devices that are coupled to the virtual reality display screen such that one or more computing devices can control the content displayed on the virtual reality display screen. In a preferred embodiment, the device comprises a mobile computing device, such as a smartphone, and a viewer that adapts the mobile computing device for binocular viewing. Examples of such viewers include the Google Cardboard (see, for example, Figure 1) and the Samsung Gear VR (see, for example, Figure 2), each of which works with specific smartphone models.
[0047] Referring to Figure 1, the VR device 100 comprises a mobile computing device 110 and a viewer 120. The mobile computing device 110 has a VR display screen 111 that is divided into left eye display 112a and right eye display 112b. In embodiments, there is a speaker 113 to generate sound effects, for example for the VR game. The viewer 120 includes enclosure 122 to secure the mobile computing device 110 and ocular lenses 131 and 132 for right and left eye displays, respectively.
[0048] Referring to Figure 2, the VR device 100 is preferably mounted on the head
140 of the test subject using head strap 124 that securely holds viewer 120 and mobile computing device 110. In embodiments, the viewer is coupled to a touch pad 125, for example for manual input by the user, and a focus wheel 126 to adjust the focus of the ocular lenses.
[0049] Figure 3 illustrates a simplified diagram of an exemplary computing device
110 for determining the visual field perimetry of a subject, in accordance with embodiments herein. As disclosed herein, the one or more computing devices can be employed. In certain embodiments, the one or more computing devices comprise a smart phone, such as a commercially available smart phone, for example an iPhone®, Samsung Galaxy®, Nokia Lumia®, Motorola Droid®, and the like. In certain embodiments, the smartphone is an iPhone, for example an iPhone X. In the description below, the computing device may be a mobile computing device, such as a smart phone; for example, the computing device 110 could be mobile computing device 110, or even smart phone 110.
[0050] In embodiments, a smartphone is a handheld mobile computing device, typically with a mobile operating system and an integrated mobile broadband cellular network connection for voice, SMS, and Internet data communication; most if not all smartphones also support Wi-Fi. Smartphones are typically pocket-sized, as opposed to tablets, which are much larger than a pocket. They are able to run a variety of third-party software components ("apps") from places like the Google Play Store or Apple App Store, and can receive bug fixes and gain additional functionality through operating system software updates. Modern smartphones have a touchscreen color display with a graphical user interface that covers the front surface and enables the user to use a virtual keyboard to type and press onscreen icons to activate "app" features. They integrate and now largely fulfill most people's needs for a telephone, digital camera and video camera, GPS navigation, a media player, clock, news, calculator, web browsing, handheld video games, flashlight, compass, an address book, a note-taking application, digital messaging, an event calendar, etc. Typical smartphones will include one or more of the following sensors: magnetometer, proximity sensor, barometer, gyroscope and/or accelerometer.
[0051] In embodiments, the computing device 110 includes a VR display screen 111 and a touch pad 125 (for example, integral to the computing device or external and coupled thereto), which may be part of the VR display screen 111, for example a video display touch screen. In embodiments, the computing device 110 includes a number of components, such as one or more processors 140 and at least one communication module 142. The communication module 142 allows communication from and to one or more other networked computing devices, for example having remote data storage and computing capabilities. It is useful for updates of game software; upload of information on the subject such as identifying information, previous VF performance, and related personalized game setting/testing parameters; archiving of a series of VF tests; analysis of the series to detect long-term trends; and communication with healthcare providers to provide periodic reports or a special alert for significant VF change (for the worse). In various embodiments, the one or more processors 140 each include one or more processor cores. In various embodiments, the at least one communication module 142 is physically and electrically coupled to the one or more processors 140. In further implementations, the communication module 142 is part of the one or more processors 140. In various embodiments, computing device 110 includes printed circuit board (PCB) 155. For these embodiments, the one or more processors 140 and communication module 142 may be disposed thereon. Depending on its applications, the computing device 110 includes other components that may or may not be physically and electrically coupled to the PCB. These other components include, but are not limited to, a memory controller (not shown), volatile memory (e.g., dynamic random access memory (DRAM)) (not shown), non-volatile memory (not shown) such as read only memory (ROM), flash memory (not shown), an I/O port (not shown), a digital signal processor (not shown), a crypto processor (not shown), a graphics processor (not shown), one or more antennas (not shown), a touch-screen display, a touch-screen display controller (not shown), a battery (not shown), an audio codec (not shown), a video codec (not shown), a global positioning system (GPS) device (not shown), a compass (not shown), a speaker 113, a camera (not shown), a mass storage device (such as a hard disk drive, a solid state drive, a compact disk (CD), or a digital versatile disk (DVD)) (not shown), a microphone (not shown), and a head motion sensor 115, which may include a gyroscope 116 and an accelerometer 117, and so forth. The head motion sensor 115 provides the data to establish head orientation so the VR display can be adjusted to produce an immersive visual environment.
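The disclosure names a gyroscope 116 and an accelerometer 117 but does not specify how their readings are combined into a head orientation. As a hedged sketch only, a standard one-axis complementary filter might look like the following; the blend factor alpha and the axis conventions are assumptions.

    import math

    def fuse_pitch(pitch_prev_deg, gyro_rate_dps, accel_xyz, dt, alpha=0.98):
        """One-axis complementary filter: integrate the gyro rate (smooth but
        drifting) and correct it with the gravity direction taken from the
        accelerometer (absolute but noisy)."""
        ax, ay, az = accel_xyz
        accel_pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
        gyro_pitch = pitch_prev_deg + gyro_rate_dps * dt   # integrate over dt
        return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch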
[0052] In some embodiments, the one or more processors 140 are operatively coupled to system memory through one or more links (e.g., interconnects, buses, etc.). In
embodiments, system memory is capable of storing information that the one or more processors 140 utilize to operate and execute programs and operating systems. In different embodiments, system memory is any usable type of readable and writeable memory such as a form of dynamic random access memory (DRAM). In embodiments, the computing device 110 includes a microphone 146 configured to capture audio. In embodiments, the computing device 110 includes a speaker 141 configured to transmit audio. In embodiments, computing device 110 includes or is otherwise associated with various input and output/feedback devices to enable user interaction with the computing device 110 and/or peripheral components or devices associated with the computing device 110 by way of one or more user interfaces or peripheral component interfaces. In embodiments, the user interfaces include, but are not limited to a physical keyboard or keypad, a touchpad 125, a display device (touchscreen or non-touchscreen), speakers, microphones, image sensors, haptic feedback devices and/or one or more actuators, and the like. In some embodiments, the computing device can comprise a memory element (not shown), which can exist within a removable smart chip or a secure digital ("SD") card or which can be embedded within a fixed chip. In certain example embodiments, Subscriber Identity Component ("SIM") cards may be used. In various embodiments, the memory element may allow a software application resident on the device.
[0053] In embodiments, an I/O link connecting a peripheral device to a computing device is protocol-specific with a protocol-specific connector port that allows a compatible peripheral device to be attached to the protocol-specific connector port (i.e., a USB keyboard device would be plugged into a USB port, a router device would be plugged into a
LAN/Ethernet port, etc.) with a protocol-specific cable. Any single connector port would be limited to peripheral devices with a compatible plug and compatible protocol. Once a compatible peripheral device is plugged into the connector port, a communication link would be established between the peripheral device and a protocol-specific controller.
[0054] In embodiments, a non-protocol-specific connector port is configured to couple the I/O interconnect with a connector port of the computing device 110, allowing multiple device types to attach to the computing device 110 through a single physical connector port. Moreover, the I/O link between the computing device 110 and the I/O complex is configured to carry multiple I/O protocols (e.g., PCI Express®, USB,
DisplayPort, HDMI, etc.) simultaneously. In various embodiments, the connector port is capable of providing the full bandwidth of the link in both directions with no sharing of bandwidth between ports or between upstream and downstream directions. In various embodiments, the connection between the I/O interconnect and the computing device 110 supports electrical connections, optical connections, or both.
[0055] In some embodiments, the one or more processors 140, flash memory, and/or a storage device includes associated firmware storing programming instructions configured to enable the computing device 110, in response to execution of the programming instructions by one or more processors 140, to practice all or selected aspects of a computer implemented method of determining the visual field perimetry of a subject, in accordance with
embodiments of the present disclosure.
[0056] In embodiments, the communication module 142 enables wired and/or wireless communications for the transfer of data to and from the computing device 110. In various embodiments, the computing device 110 also includes a network interface configured to connect the computing device 110 to one or more networked computing devices wirelessly via a transmitter and a receiver (or optionally a transceiver) and/or via a wired connection using a communications port. In embodiments, the network interface and the
transmitter/receiver and/or communications port are collectively referred to as a
"communication module". In embodiments, the wireless transmitter/receiver and/or transceiver may be configured to operate in accordance with one or more wireless communications standards. The term "wireless" and its derivatives may be used to describe circuits, devices, systems, methods, techniques, communications channels, etc., that may communicate data through the use of modulated electromagnetic radiation through a non- solid medium. The term does not imply that the associated devices do not contain any wires, although in some embodiments they might not. In embodiments, the computing device 110 includes a wireless communication module 142 for transmitting to and receiving data, for example for transmitting and receiving data from a network, such as a telecommunications network. In examples, the communication module transmits data, including video data, though a cellular network or mobile network, such as a Global System for Mobile
Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless
Telecommunications (DECT), Digital AMPS (IS-136/TDMA), Integrated Digital Enhanced Network (iDEN), Long-Term Evolution (LTE), 3rd generation mobile network (3G), 4th generation mobile network (4G), and/or 5th generation mobile network (5G) networks. In embodiments, the mobile computing device 110 is directly connected with one or more devices via the direct wireless connection by using, for example, Bluetooth and/or BLE protocols, WiFi protocols, Infrared Data Association (IrDA) protocols, ANT and/or ANT+ protocols, LTE ProSe standards, and the like. In embodiments, the communications port is configured to operate in accordance with one or more known wired communications protocols, such as a serial communications protocol (e.g., the Universal Serial Bus (USB), FireWire, Serial Digital Interface (SDI), and/or other like serial communications protocols), a parallel communications protocol (e.g., IEEE 1284, Computer Automated Measurement And Control (CAMAC), and/or other like parallel communications protocols), and/or a network communications protocol (e.g., Ethernet, token ring, Fiber Distributed Data Interface (FDDI), and/or other like network communications protocols).
[0057] In embodiments, the computing device 110 is configured to run, execute, or otherwise operate one or more applications. In embodiments, the applications include native applications, web applications, and hybrid applications. For example, the native applications are used for operating the computing device 110, such as using a camera or other like sensor of the computing device 110, cellular phone functionality of the computing device 110, and other like functions of the computing device 110. In embodiments, native applications are platform or operating system (OS) specific or non-specific. In embodiments, native applications are developed for a specific platform using platform-specific development tools, programming languages, and the like. Such platform-specific development tools and/or programming languages are provided by a platform vendor. In embodiments, native applications are pre-installed on computing device 110 during manufacturing, or provided to the computing device 110 by an application server via a network. Web applications are applications that load into a web browser of the computing device 110 in response to requesting the web application from a service provider. In embodiments, the web applications are websites that are designed or customized to run on a computing device by taking into account various computing device parameters, such as resource availability, display size, touch-screen input, and the like. In this way, web applications may provide an experience that is similar to a native application within a web browser. Web applications may be any server-side application that is developed with any server-side development tools and/or
programming languages, such as PHP, Node.js, ASP.NET, and/or any other like technology that renders HTML. Hybrid applications may be a hybrid between native applications and web applications. Hybrid applications may be standalone, skeletons, or other like application containers that may load a website within the application container. Hybrid applications may be written using website development tools and/or programming languages, such as HTML5, CSS, JavaScript, and the like. In embodiments, hybrid applications use a browser engine of the computing device 110, without using a web browser of the computing device 110, to render a website's services locally. In some embodiments, hybrid applications also access computing device capabilities that are not accessible in web applications, such as the accelerometer, camera, local storage, and the like.
[0058] Any combination of one or more computer usable or computer readable medium(s) may be utilized with the embodiments disclosed herein. The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, transmission media such as those supporting the Internet or an intranet, or a magnetic storage device. Note that the computer-usable or computer-readable medium can even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. The computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer usable program code may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc.
[0059] Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's computing device, as a stand-alone software package, partly on the user's computing device and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computing device through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computing device (for example, through the Internet using an Internet Service Provider), or wireless network, such as described above.
[0060] Furthermore, example embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium. A code segment may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, program code, a software package, a class, or any combination of instructions, data structures, program statements, and the like.
[0061] In various embodiments, an article of manufacture may be employed to implement one or more methods as disclosed herein. The article of manufacture may include a computer-readable non-transitory storage medium and a storage medium. The storage medium may include programming instructions configured to cause an apparatus to practice some or all aspects of a computer implemented method of determining the visual field perimetry of a subject, in accordance with embodiments of the present disclosure.
[0062] The storage medium may represent a broad range of persistent storage media known in the art, including but not limited to flash memory, optical disks or magnetic disks. The programming instructions, in particular, may enable an apparatus, in response to their execution by the apparatus, to perform various operations described herein. For example, the storage medium may include programming instructions configured to cause an apparatus to practice some or all aspects of a computer implemented method of determining the visual field perimetry of a subject using a computing device, in accordance with embodiments of the present disclosure.
[0063] In embodiments, internal to the computing device are computing, input, and output modules. These include a computer comprising central processing unit, graphic processing unit, motion processing unit, and memory to process the inputs and produce the outputs for VR game play.
[0064] In embodiments, the inputs include head motion sensors that comprise a gyroscope and an accelerometer. Another input is a touch pad which allows for manual input used in game setup. The outputs include the binocular stereo VR display corresponding to the smartphone screen sections shown before in Figure 1. There is also a speaker to produce the sounds of the game action.
[0065] Example Gaming Embodiment
[0066] Figure 4 depicts an example gaming method for determining the visual field perimetry of a subject using a computing device, in accordance with embodiments of the present disclosure. The methods disclosed herein are preferably performed by a human operator or user in addition to the human subject, although in certain situations the operator and the subject can be the same individual; in that case, that individual is both the test subject and the operator or user. The method 200 is described with reference to the components illustrated in Figures 1-3 and the screen shots shown in Figures 5-14.
[0067] In block 210, the computing device 110 prompts the user to set up the device for a new test of visual field perimetry. For example, the first time the subject is taking the test, the subject's identifying information and date of birth (or age) are entered into the computing device. Based on this information, the computing device retrieves an age-stratified average VF (maps of visual stimulus perception threshold for right and left eyes) of a normal population to use as the initial estimate of the subject's current VF map, for example from memory of the computing device.
[0068] For repeat tests, the subject may enter a previously stored user name so the computing device can be directed to retrieve recent VF results for the subject that are stored in memory. The average of recent VF maps is used as the initial estimate of the VF for the current test. Since the VR game is used to perform a VF test, the terms "game" and "test" are used interchangeably throughout. The user of the mobile computing device is both the subject of the VF test and the game player. Therefore the terms "user," "subject," and "player" are also used interchangeably.
[0069] Before each game, the player may be directed to adjust the focus of the viewer by turning the focusing wheels 126 (see Figure 2). Then the player aligns the gun sight 220 (see Figure 5). This is done by having the head in a neutral position and looking straight ahead, then pushing the touch pad 125 to activate the sight 220 (Figure 5) in straight-ahead gaze. Thereafter the sight 220 is linked to head position and used to control gun firing within the game. The position of the sight in the screen display can be assumed to be the position of visual fixation of the subject during the active shooting portion of the game.
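A minimal sketch of this sight calibration, assuming head orientation is available as yaw/pitch angles in degrees (the class and method names here are illustrative, not part of the disclosure):

    class HeadSight:
        """Stores the head orientation captured at the touch-pad press as the
        zero reference; thereafter the sight position is the head orientation
        relative to that reference."""
        def __init__(self):
            self.ref = (0.0, 0.0)

        def calibrate(self, yaw_deg, pitch_deg):
            # Called once, with the head neutral and the gaze straight ahead.
            self.ref = (yaw_deg, pitch_deg)

        def position(self, yaw_deg, pitch_deg):
            # Sight offset from straight-ahead, in degrees of visual angle.
            return (yaw_deg - self.ref[0], pitch_deg - self.ref[1])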
[0070] If the player has significant astigmatism or a difference between the spherical equivalent refractions of the 2 eyes, then contact lens or spectacle correction would be needed for best vision during the game play. The VR viewers are designed to accommodate most spectacles, within certain size limits.
[0071] In block 220, a game playing and visual field test cycle is begun by the computing device. Many game scenarios could be devised based on the principles of the current disclosure. For the purpose of demonstration, a "dragon slayer" VF game, illustrated in Figures 5-14, is described.
[0072] Referring to Figure 5, the VR display area 200 has a mostly uniform blue field background. The bottom of the display is anchored by a horizon 210. The game play action is above the horizon and therefore the scenery on and below the horizon mainly provides a background frame of reference to the viewer in the VR environment. In this case, an ocean scene was chosen, which is not overly distracting. There is a score display 211 to help the player keep track of game progress and provide motivation for good performance. The center of the display is marked by a sight 220, which, for the purpose of stereo display, is preferably set at a far distance. The scene in the whole display 200, along with central sight 220, pans with head movement to give a sense of immersion in the VR environment. This VR scene is displayed to both the right and left eyes of the player.
[0073] Again referring to Figure 5, the game cycle begins with the fixation target 230 which takes the form of an animated dragon, which is displayed to both the right and left eyes of the player, and, for the purpose of stereo display, is set at a moderately far distance. The player is tasked to move the sight 220 over to the dragon before it can escape. The player moves the sight by head rotation, which is detected by the head motion sensors 115 in the VR system 100.
[0074] Referring to Figure 6, the player is able to quickly move the head so that the sight 220 directly overlies the fixation target 230 dragon. This automatically activates gun fire 240 which slays the dragon. The player does not have to touch the touch pad 125 or perform any manual action to fire the gun. The automatic firing activation is an important aspect of the gaming embodiment that functions to make the game proceed quickly. The firing may be accompanied by sound effects, for example from speaker 113, which adds to the excitement of the game.
[0075] Referring to Figure 7, within a second of firing, the dragon disappears into a poof 231. The score board 211 indicates that one more dragon has been slayed. At this instant, it can be safely assumed that the player's eyes are gazing directly at sight 220. This provides the opportunity to briefly present visual stimulus 250 within a time window Tstimulus. Tstimulus is set sufficiently short (for example, a fraction of a second) so that the gaze does not have sufficient time to wander off fixation location 220. The stimulus 250 is only presented to the eye being tested (for example, the right eye), unlike the rest of the visual scene, which is presented to both eyes. For the present demonstration, the stimulus 250 takes the form of a checker board with moderate contrast between the light and dark squares within. The checker board is animated by light-dark pattern reversal at several Hz (cycles per second) to activate the retinal motion sensing system and enhance stimulus perception. The average brightness and color of the checker board as a whole is matched to (made the same as) the background. The stimulus is preferably larger (both in terms of the overall size and the internal squares making up the checker board) when a peripheral VF location is tested and smaller when a central VF location is tested. For example, the central 12° square area could be considered the central VF, though other schemes of classification could also be used. In this figure, the stimulus is outside the central VF area and therefore a larger stimulus size is used. The strength of the stimulus is controlled by varying the contrast between the light and dark squares. The location of the stimulus within the VF is specified by the vector 221 between the stimulus 250 and the fixation location 220. The vector 221 can be specified in polar coordinates using a distance measured in degrees of visual angle and a direction also measured in degrees. The vector can also be specified in Cartesian coordinates as horizontal (azimuthal) and vertical (elevation) displacements measured in degrees of visual angle. If the player perceives the visual stimulus 250, then the player should anticipate that a dragon would appear shortly in that direction and move the sight toward the stimulus. This expectation is part of the pregame instructions to the player. For the purpose of demonstration, let's suppose that the player did not perceive the visual stimulus 250 and did not move the sight 220 toward the stimulus in time. Then the fixation position can no longer be assured, and a new fixation target must be introduced to re-establish fixation. This is shown in Figure 8.
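A brief sketch of the two coordinate descriptions of the stimulus location vector, assuming stimulus and fixation positions are given as (azimuth, elevation) in degrees of visual angle; the flat-angle treatment and the square central-field test are simplifying assumptions:

    import math

    def stimulus_vector(stim, fix):
        """Return the vector from fixation to stimulus in both Cartesian
        (dx, dy) and polar (eccentricity, direction) forms."""
        dx, dy = stim[0] - fix[0], stim[1] - fix[1]   # azimuthal, elevation
        eccentricity = math.hypot(dx, dy)             # degrees of visual angle
        direction = math.degrees(math.atan2(dy, dx)) % 360.0
        return (dx, dy), (eccentricity, direction)

    def in_central_field(dx, dy, half_width_deg=6.0):
        # The central 12 deg square area around fixation gets the smaller stimulus.
        return max(abs(dx), abs(dy)) <= half_width_deg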
[0076] Referring to Figure 8, a new fixation target 230 (dragon) is introduced in a quadrant different from the recently shown visual stimulus (Figure 6) that the player failed to perceive. The game task for the player is now to move sight 220 over to the position of the dragon in order to fire upon it. This must occur within a preset time window Ttarget during which the dragon will remain stationary. The length of the time window is varied within a preset range to provide unpredictability that makes the game more interesting to play. The range is set according to the level of play appropriate for the player's previous record of reaction time. Faster players are assigned a higher level of play where the game proceeds more quickly, while slower players are given more time to react. Generally, the time window for a stationary dragon is only a fraction of a second for the average player and the game and VF testing proceeds quickly. If the player fails to move sight 220 over to dragon 230 in time, then the dragon escapes.
[0077] Referring to Figure 9, the dragon 230 flies away into the distance, and eventually disappears over the horizon. The score board 211 indicates that one more dragon has escaped. This also means that fixation on the target was not achieved and yet another new fixation target must be introduced. This is shown in Figure 10, where new fixation target 230 (dragon) appears in a quadrant different from the previous target. For the purpose of demonstration, let's suppose that the player is now able to move sight 220 over to the fixation target on time.
[0078] Referring to Figure 11, fixation is established when the player locks the sight 220 over the fixation target 230. This automatically activates gun fire 240 to slay the dragon 230.
[0079] Referring now to Figure 12, the dragon disappears into poof 231. For a brief time window, we can assume that fixation is maintained. Within this time (Tstimulus), the visual stimulus 260 is presented. The location of the stimulus 260 relative to fixation 220 is described by vector 222, which has a distance measured in a unit of visual angle and a direction. As mentioned before, the size of the stimulus is chosen according to its distance from fixation (also called "eccentricity" in perimetry terminology). In this case, a smaller stimulus is chosen because it is within the central 12° VF area. Let's suppose that the player perceives the stimulus; then, according to the instructions, the player should rotate his/her head to move sight 220 toward the stimulus.
[0080] Referring to Figure 13, the player rotates his/her head and thereby moves sight 220 toward the recent location of the stimulus 261. The motion of sight 220 is described by vector 270. Perception of the visual stimulus is detected if the motion vector 270 exceeds a magnitude threshold Theta (for example, Theta could be the equivalent of 1.5° of visual angle) within an allotted time Tdetection (e.g., 0.8 second), and agrees with the direction 222 (Figure 12) of the stimulus within an angular tolerance Alpha (e.g., ±15°). Note that the test does not require the sight to be moved to the exact location of the visual stimulus, but only to move minimally in that general direction. This is an important element of the present disclosure that enables fast and reliable VF testing even in subjects with limited game playing ability. Once perception is detected, the player is rewarded by facilitated target acquisition.
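A minimal sketch of this detection rule; the sight-track input format (timestamped azimuth/elevation samples with t = 0 at stimulus onset) and the choice to evaluate direction at the first threshold crossing are assumptions:

    import math

    def stimulus_perceived(sight_track, stim_direction_deg,
                           theta_deg=1.5, alpha_deg=15.0, t_detection_s=0.8):
        """Detect perception if sight motion exceeds magnitude Theta within
        T_detection and agrees with the stimulus direction within +/-Alpha."""
        t0, az0, el0 = sight_track[0]
        for t, az, el in sight_track[1:]:
            if t - t0 > t_detection_s:
                break                                  # allotted time elapsed
            dx, dy = az - az0, el - el0
            if math.hypot(dx, dy) < theta_deg:
                continue                               # motion still too small
            moved = math.degrees(math.atan2(dy, dx)) % 360.0
            diff = abs((moved - stim_direction_deg + 180.0) % 360.0 - 180.0)
            return diff <= alpha_deg                   # direction agreement test
        return False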
[0081] Referring to Figure 14, a new fixation target 230 in the form of an animated dragon appears in the immediate path of sight motion 270 so that the player can easily lock sight onto the target and slay the dragon, as described before. This is another important element of the game design of the current disclosure to motivate the player/test subject to move the sight toward the stimulus. Once the player locks sight onto the target, fixation is reestablished and this sets up the conditions for another round of visual stimulus presentation and perception testing.
[0082] At block 230, the game concludes once a sufficient number of targets have been presented to map the visual field perimetry. At block 240, the method of game play ends.
[0083] Although a checker board reversal stimulus was illustrated above, this should not be construed as limiting the current disclosure. The visual stimulus can take any form that contrasts with the background in terms of brightness, color, pattern, or motion. For example, a bright white dot could be used on a gray background, a yellow dot could be used on a blue background, or a stimulus could oscillate in brightness or location over the presentation time. A particular type of stimulus that is useful for testing the central visual field (i.e., that subtending the macular portion of the retina) is the hyperacuity stimulus, where distortion in an extended target can be detected with higher sensitivity than an ordinary acuity target (i.e., identification of letters). An example is shown in Figure 15, where the circular target 280 has an arc segment 281 that is offset from the rest of the circle. The offset of the arc segment 281 constitutes the visual stimulus in this hyperacuity target. The location vector 222 of the hyperacuity stimulus 281 could constitute the goal of sight movement in the dragon slayer game described above, in place of the checker board reversal stimulus.
[0084] The dragon slayer game is only one example of the current disclosure. The fixation target can take forms other than a dragon. For example, a fish in an aquatic background can serve an equivalent purpose. Or catching fireflies in a dimly lit prairie at night could be an alternate game. A more general description of the game/test cycle is given in the flow chart of Figure 16.
[0085] Exemplary Methods
[0086] Referring to the flow chart in Figure 16, at block 300 a fixation target is presented in the VR display. The target and the background scene are always presented to both eyes stereoscopically. In embodiments, the initial target should be close to the central sight but the location should be randomly varied to keep the game interesting.
[0087] At block 301, the player is tasked to rotate the head to move the sight onto the target within a short variable time window Ttarget, for example, within about 4 seconds.
Although 4 seconds has been chosen as an exemplary time period, other time periods, either shorter or longer, are contemplated.
[0088] At block 302, if the sight is not on target in time, then the player scores a miss (block 303), the target moves off screen, a new target is presented (at block 304) in a different quadrant of the VR display, and the operation returns to block 301.
[0089] On the other hand, if at block 302 the sight was put on target in time, then the player scores a hit (at block 306) and visual fixation is established at the location of the sight (at block 307). Blocks 301 to 307 can be considered a subroutine to establish visual fixation. Once fixation is established (at block 307), the presentation of the visual stimulus at block 310 immediately follows. The stimulus is shown briefly only to the eye being tested, for a fixed amount of time Tstimulus, for example, about 0.5 seconds. Although 0.5 seconds has been chosen as an exemplary time period, other time periods, either shorter or longer, are contemplated. In embodiments, the location of the stimulus relative to the fixation point is randomly chosen from a map of VF locations that remain to be tested.
[0090] At block 311, the player is tasked to move the sight toward the stimulus, if the stimulus is perceived.
[0091] At block 312, the VF test decides whether the subject detected the stimulus based on whether the direction of sight motion is headed correctly toward the stimulus. The motion must also reach a threshold magnitude within a time window Tdetection, for example, within about 2 seconds. Although 2 seconds has been chosen as an exemplary time period, other time periods, either shorter or longer, are contemplated. If stimulus perception is detected, then this is recorded at block 313 and a new target is presented in the path of the sight movement at block 305. If stimulus perception is not detected, then this is recorded at block 314 and a new target is presented (at block 304) in a quadrant different from that of the stimulus. Either way, the fixation establishment subroutine is re-entered at block 301. Blocks 310 to 314 constitute the visual stimulus subroutine. Repeated testing of visual stimulus perception builds up a VF map of the threshold of perception. This is described in the following section.
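The game/test cycle of Figure 16 can be summarized as a simple loop. The following runnable Python sketch walks through blocks 300 to 314; the stub functions merely simulate the VR engine and the subject's responses (their names and behaviors are assumptions for illustration), and the simplification of one stimulus presentation per location is the sketch's own; the multi-round suprathreshold scheme is described below.

```python
import random

def present_target(**kwargs):            # blocks 300/304/305: show a fixation target
    return kwargs

def sight_on_target_within(target, t):   # blocks 301-302: sight aligned in time?
    return random.random() < 0.9         # simulated subject succeeds 90% of the time

def show_stimulus(loc, duration, eye):   # block 310: brief monocular stimulus
    pass

def motion_toward(loc, within):          # blocks 311-312: sight moved toward stimulus?
    return random.random() < 0.8         # simulated 80% perception rate

def run_vf_game(locations, t_target=4.0, t_stimulus=0.5, t_detection=2.0):
    results = {}                                   # location -> perceived (True/False)
    untested = list(locations)
    target = present_target()
    while untested:
        if not sight_on_target_within(target, t_target):
            target = present_target(new_quadrant=True)     # miss: block 303 -> 304
            continue
        loc = random.choice(untested)                      # fixation established: block 307
        show_stimulus(loc, duration=t_stimulus, eye="tested")
        if motion_toward(loc, within=t_detection):
            results[loc] = True                            # perceived: block 313
            target = present_target(in_motion_path=True)   # block 305
        else:
            results[loc] = False                           # missed: block 314
            target = present_target(new_quadrant=True)     # block 304
        untested.remove(loc)                               # simplified: one round per location
    return results

print(run_vf_game([(x, y) for x in range(-2, 3) for y in range(-2, 3)]))
```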
[0092] Mapping of Stimulus Perception Threshold
[0093] The output of the VF test is a map of the threshold for perceiving the visual stimulus. The VR display is wide and does not limit the width of the VF test. For the purpose of glaucoma testing, ±24° (48° full field width) usually suffices. Referring to Figure 17, the VF 400 is presented as a grid of sensitivity values. The retinal sensitivity is the inverse of the minimum stimulus strength needed for the eye to perceive the stimulus at the particular location in the VF. The strength of the stimulus is specified as a combination of size, contrast, and duration. For the dragon slayer game, the dark-bright reversal checker board stimulus strength is primarily determined by the contrast between the dark and bright checker squares. The size is fixed according to eccentricity and the presentation duration is fixed, unless an increase is necessary to increase the stimulus strength at maximum contrast. These parameters are described on a logarithmic scale relative to a normative reference. The standard unit of the logarithmic scale is the decibel (dB). The normative reference (0 dB) is calibrated to the average perception threshold of a healthy human population, typically chosen to have a range of age similar to that of the population to be tested for disease. In Figure 17, the numbers in the squares are dB sensitivity values relative to the normative reference. The sensitivity values are determined by the minimal strength of the visual stimulus that could be perceived by the eye being tested using the VR game of the present disclosure. The center 401 represents the fixation point, corresponding anatomically to the foveal center. In this example of the VF of the right eye, the blind spot 402, corresponding anatomically to the optic nerve head, is to the right of and slightly inferior to fixation. The 2 grid squares around the blind spot are not tested (marked by "NT" in Figure 17). The VF map format of the left eye is the mirror image. The sizes of the grid squares determine the resolution of the VF test. As shown, the central VF within a 12° square area is tested at a sampling interval of 3° (the width of the central 4x4 grid squares around the fixation 401). Thus there are 16 test locations within the central VF, where a smaller visual stimulus is used. Outside the central area, the sampling interval is increased to 6° (the width of the larger peripheral squares), and a larger visual stimulus is used. The overall VF test area extends 18° above fixation and 24° to the right, left, and below fixation. The superior extension is less because many people have relatively droopy upper lids that limit the superior VF. The numerical perimetry map 400 is called a "full-threshold" VF because it precisely shows the threshold of perception at each sampled VF location. The full-threshold test requires a series of visual stimuli to be shown at each location to bracket the threshold. This takes a longer time, but provides more information on the severity of disease. Thus the full-threshold test is used to monitor the rate of disease progression in patients already known to have glaucoma. If the purpose of the test is simply to discriminate between a healthy and a diseased eye, it is not necessary to precisely determine the numerical value of the threshold of perception, but only to determine if it is worse than normal limits. This simplified testing is called a "suprathreshold" VF test and is often used for screening purposes to decide if an individual suspected of having glaucoma has evidence of VF damage.
[0094] A suprathreshold VF map 500 is shown in Figure 18. This map is obtained by testing each VF location using a stimulus strength that almost all normal people could perceive. For example, this strength could be set at 5 dB above the average stimulus strength threshold that could be perceived by the normal population. If the eye being tested could not perceive this stimulus at a given location, then the sensitivity is worse than -5 dB and clearly abnormal. Referring to Figure 18, abnormal VF grid locations are marked by black squares such as the central VF location 504. Locations with normal sensitivities, such as grid location 503, are left blank. The cluster of VF defects in the superonasal quadrant is typical of glaucoma.
[0095] The suprathreshold VF map is compiled from information obtained by probing each VF location with a suprathreshold stimulus several times. A flow chart for generating the sequence of testing is shown in Figure 19. At the beginning of the game, all locations on the VF grid are eligible for selection, and the counters N and M at each location are set to zero. Each location has counters N and M to keep track of the number of times a visual stimulus was or was not perceived, respectively. Whenever fixation is established (see Figure 16), a stimulus location is randomly selected.
[0096] Referring to Figure 19 again, this random selection at block 601 initiates the stimulus testing routine. The suprathreshold stimulus is displayed (at block 603). At block 604, if the stimulus is perceived, then the perception counter N is incremented by 1; if instead it is not perceived, then the nonperception counter M is incremented by 1. Then at block 607, if N>M, the VF location is marked as normal (having visual sensitivity that is within the normal range or better). Otherwise the operation is passed to block 609, where, if M>1, the VF location is marked as abnormal (having visual sensitivity significantly worse than normal). If the location is determined to be normal or abnormal, then no further testing is needed at this location and it is precluded from random selection in future rounds (at block 611). If the suprathreshold stimulus has been missed once but then perceived at the second testing of the same location, then a third testing is still needed and the location is not removed from the map for future rounds. This completes one round of the stimulus routine and the loop returns to block 601. The VF testing and game playing are completed when all of the VF points (Figure 18) have been found to be either normal or abnormal. In this scheme, each VF point requires 1 to 3 rounds of stimulus presentation. Since there are 64 locations on the map (Figure 18), 64 to 192 rounds of stimulus presentation are needed for each eye. If each round of testing takes 2 seconds, then 4 to 12 minutes are needed to test both eyes, which is tolerable.
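The counter scheme of Figure 19 can be expressed directly in code. The following runnable Python sketch implements the N/M decision rule described above; perceive() merely simulates the subject's response and is an assumption for illustration. Note how a location missed once and then seen once remains eligible for a third round, so each location is resolved in 1 to 3 presentations.

```python
import random

def perceive(location) -> bool:
    return random.random() < 0.85      # simulated response; assumption only

def suprathreshold_map(locations):
    counters = {loc: {"N": 0, "M": 0} for loc in locations}   # seen / missed
    eligible = set(locations)
    verdicts = {}
    while eligible:
        loc = random.choice(tuple(eligible))       # block 601: random selection
        if perceive(loc):                          # blocks 603-604
            counters[loc]["N"] += 1
        else:
            counters[loc]["M"] += 1
        n, m = counters[loc]["N"], counters[loc]["M"]
        if n > m:                                  # block 607: mark normal
            verdicts[loc] = "normal"
            eligible.remove(loc)                   # block 611: no further testing
        elif m > 1:                                # block 609: mark abnormal
            verdicts[loc] = "abnormal"
            eligible.remove(loc)
        # else: one miss, one hit -> a third round is still needed
    return verdicts

grid = [(x, y) for x in range(-4, 4) for y in range(-4, 4)]   # 64 locations
print(suprathreshold_map(grid))
```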
[0097] The full-threshold VF game follows a similar scheme, but tests each VF location using a range of stimulus strengths. The initial strength is chosen based on the average threshold of several previous tests by the same individual. If no previous test has been performed, then the initial stimulus strength could be set at the average of the normal population. If the stimulus is not perceived, then the stimulus strength is increased at that location until it is perceived (up to a limit). Then the threshold of perception is established to the desired precision by bracketing the stimulus strength. There are various methods of bracketing the stimulus strength that are well known to those skilled in the art of perimetry, and these are not detailed here. In general, the full-threshold test requires more time than the suprathreshold test.
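For illustration, one common bracketing approach is a staircase that steps in coarse and then fine increments, reversing direction whenever the response changes. The 4-2 dB step sizes and the idealized noise-free subject in the following Python sketch are assumptions, not a procedure specified by the disclosure; perceive_at() would in practice be one game round at a given strength.

```python
def perceive_at(strength_db: float, true_threshold_db: float) -> bool:
    return strength_db >= true_threshold_db      # idealized, noise-free subject

def bracket_threshold(start_db: float, true_threshold_db: float,
                      steps=(4.0, 2.0), max_db=30.0) -> float:
    """Step down while the stimulus is seen and up while it is missed,
    first in coarse then in fine steps, until the threshold is bracketed."""
    strength = start_db
    for step in steps:
        seen = perceive_at(strength, true_threshold_db)
        for _ in range(20):                      # safety bound per pass
            strength += -step if seen else step
            if strength >= max_db:               # strength capped at device limit
                return max_db
            if perceive_at(strength, true_threshold_db) != seen:
                break                            # response reversed: bracketed
    return strength

# Example: starting at the population average of 25 dB, a true threshold of
# 18 dB is bracketed to within the fine 2 dB step.
print(bracket_threshold(start_db=25.0, true_threshold_db=18.0))   # -> 19.0
```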
[0098] Since any VF test is susceptible to error due to variation in the subject's response and loss of attention or fixation from time to time, it is best to make a diagnosis of glaucoma based on several VF tests. Likewise, worsening of the VF over time is best confirmed over several VF tests. The advantage of the VR game VF test is that it is not as tedious and boring as a conventional VF test, and therefore repeat testing is better tolerated. It can also be performed at home, so that testing can be done continually between visits to a physician.
[0099] Example Telemedicine System
[00100] FIG. 20 illustrates a networked telemedicine system 700, in accordance with embodiments herein. The networked telemedicine system 700 includes the computing device 110 in wireless communication therewith. The networked telemedicine system 700 also includes other networked devices 710, which may be in wired or wireless communication therewith. In some embodiments, the computing device 110 includes application software with executable instructions configured to transmit and receive information from the network 705. The information can be transmitted to and/or received from another device, such as one or more networked devices 710, through a network. In certain examples, the computing device 110 is also capable of transmitting information about the visual field perimetry of a subject to one or more of a doctor, such as an eye doctor, or other medical practitioner, for example as part of a telemedicine session. Telemedicine and/or a telemedicine session is the use of telecommunication and information technology to provide clinical health care from a distance, for example using a telemedicine system 700. It has been used to overcome distance barriers and to improve access to medical services that would often not be consistently available in distant rural communities.
[00101] As depicted in FIG. 20, the telemedicine system 700 distributes and receives information to and from one or more networked devices 110, 710 through one or more networks 705. According to various embodiments, network 705 may be any network that allows computers to exchange data. In some embodiments, network 705 includes one or more network elements (not shown) capable of physically or logically connecting computers. The network 705 may include any appropriate network, including an intranet, the Internet, a cellular network, a local area network (LAN), a wide area network (WAN), a personal network, or any other such network or combination thereof. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. In embodiments, communication over the network 705 is enabled by wired or wireless connections, and combinations thereof. Each network 705 includes a wired or wireless telecommunication means by which network systems and networked devices 110, 710 may communicate and exchange data. For example, each network 705 is implemented as, or may be a part of, a storage area network (SAN), a personal area network (PAN), a metropolitan area network (MAN), a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a virtual private network (VPN), an intranet, the Internet, a mobile telephone network, such as Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), cdmaOne, CDMA2000, Evolution-Data Optimized (EV-DO), Enhanced Data Rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Digital Enhanced Cordless Telecommunications (DECT), Digital AMPS (IS-136/TDMA), Integrated Digital Enhanced Network (iDEN), Long-Term Evolution (LTE), 3rd generation (3G), 4th generation (4G), and/or 5th generation (5G) mobile networks, a card network, Bluetooth, near field communication (NFC), any form of standardized radio frequency, or any combination thereof, or any other appropriate architecture or system that facilitates the communication of signals, data, and/or messages (generally referred to as data). Throughout this specification, it should be understood that the terms "data" and "information" are used interchangeably herein to refer to text, images, audio, video, or any other form of information that can exist in a computer-based environment.
[00102] In an example embodiment, each network system (networked devices 110 and 710) includes a device having a communication component capable of transmitting and/or receiving data over the network 705. For example, each networked device 110 and 710 may comprise a server, personal computer, mobile device (for example, notebook computer, tablet computer, netbook computer, personal digital assistant (PDA), video game device, GPS locator device, cellular telephone, smartphone, or other mobile device), a television with one or more processors embedded therein and/or coupled thereto, or other appropriate technology that includes or is coupled to a web browser or other application for communicating via the network 705.
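For illustration, one way a computing device 110 could transmit a completed VF map over the network 705 is sketched below. The endpoint URL, payload fields, and JSON transport are hypothetical assumptions; the disclosure does not specify a transmission format.

```python
import json
import urllib.request

def send_vf_map(vf_map: dict, eye: str, subject_id: str,
                url: str = "https://example.invalid/api/vf-results") -> int:
    """Post a VF sensitivity map to a (hypothetical) clinician server and
    return the HTTP status code. vf_map keys must be strings for JSON,
    e.g. {"(0,3)": -2.0, ...}."""
    payload = json.dumps({
        "subject_id": subject_id,
        "eye": eye,                      # "OD" or "OS"
        "sensitivities_db": vf_map,      # dB values relative to the reference
    }).encode("utf-8")
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        return response.status
```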
[00103] Although various example methods, apparatus, systems, and articles of manufacture have been described herein, the scope of coverage of the present disclosure is not limited thereto. On the contrary, the present disclosure covers all methods, apparatus, and articles of manufacture fairly falling within the scope of the appended claims either literally or under the doctrine of equivalents. For example, although the above discloses example systems including, among other components, software or firmware executed on hardware, it should be noted that such systems are merely illustrative and should not be considered as limiting. In particular, it is contemplated that any or all of the disclosed hardware, software, and/or firmware components can be embodied exclusively in hardware, exclusively in software, exclusively in firmware or in some combination of hardware, software, and/or firmware.

Claims

What is claimed is:
1. A computer-implemented virtual reality method for determining a subject's visual field perimetry, comprising:
a) presenting, using one or more processors of one or more computing devices, a fixation target at a known location on a virtual reality display screen, wherein the fixation target is presented to both eyes stereoscopically;
b) determining, using the one or more processors of the one or more computing devices, if the subject's head has rotated to align a central sight on the virtual reality display screen with the fixation target within a predetermined variable time window Ttarget;
c) providing, using the one or more processors of the one or more computing devices, a visual stimulation target at a known location on the virtual reality display screen, wherein the visual stimulation target is presented to a single eye of the subject for a predetermined fixed amount of time Tstimulus;
d) determining, using the one or more processors of the one or more computing devices, a positive stimulation target detection by detecting head rotation of the subject in response to providing the visual stimulation target at the known location;
e) storing, using the one or more processors of the one or more computing devices, the known location of the positive visual stimulation target detection; and
f) repeating, using the one or more processors of the one or more computing devices, steps a-e, to construct a visual field map of a threshold of perception for the subject from the known locations of the positive visual stimulation target detection, thereby determining a subject's visual field perimetry with virtual reality.
2. The computer-implemented method of claim 1, further comprising:
determining, using the one or more processors of the one or more computing devices, a positive identification of the fixation target if the subject's head is determined to align with the fixation target within the predetermined variable time window Ttarget, or a negative identification if the subject's head is determined not to align with the fixation target within the predetermined variable time window Ttarget.
3. The computer-implemented method of claim 1 or 2, further comprising: choosing a new known location for the visual stimulation target for presentation from visual field locations that remain to be tested.
4. The computer-implemented method of any one of claims 1-3, wherein the known location of the visual stimulation target relative to the fixation target is randomly chosen.
5. The computer-implemented method of any one of claims 1-4, further comprising: determining, using the one or more processors of the one or more computing devices, if the head rotation of the subject in response to providing the visual stimulation target at the known location reaches a predetermined threshold magnitude within a predetermined variable time window Tdetection after the visual stimulation target presentation.
6. The computer-implemented method of any one of claims 1-5, wherein determining a positive stimulation target detection by the subject is based on whether there is head rotation directed toward the visual stimulation target within a predetermined variable time window Tdetection after the visual stimulation target presentation.
7. The computer-implemented method of any one of claims 1-6, further comprising: determining, using the one or more processors of the one or more computing devices, a negative stimulation target detection if no head rotation of the subject toward the visual stimulation target is detected in response to providing the visual stimulation target at the known location; storing, using the one or more processors of the one or more computing devices, the known location of the negative visual stimulation target detection; and constructing, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception for the subject from the known locations of the positive visual stimulation target detection and the known locations of the negative visual stimulation target detection.
8. The computer-implemented method of any one of claims 1-7, wherein if the subject's head is not determined to have rotated to align a central sight on the virtual reality display screen with the fixation target within the predetermined variable time window Ttarget, a new fixation target is presented to both eyes stereoscopically in a different quadrant of the virtual reality display screen, using the one or more processors of the one or more computing devices.
9. The computer-implemented method of any one of claims 1-8, further comprising: outputting, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception.
10. The computer-implemented method of claim 9, wherein outputting the visual field map of the threshold of perception comprises outputting a grid of sensitivity values.
11. The computer-implemented method of claim 9, wherein outputting the visual field map of the threshold of perception comprises outputting a full-threshold visual field map.
12. The computer-implemented method of claim 9, wherein outputting the visual field map of the threshold of perception comprises outputting a suprathreshold visual field map.
13. The computer-implemented method of any one of claims 1-12, wherein the one or more computing devices comprise at least one smart phone.
14. The computer-implemented method of any one of claims 1-13, wherein the one or more computing devices are in communication with a network.
15. The computer-implemented method of claim 14, wherein the network is a telecommunications network.
16. The computer-implemented method of any one of claims 1-15, further comprising: initiating, by the one or more computing devices, a telemedicine session.
17. An apparatus for conducting a virtual reality method for determining a subject's visual field perimetry, comprising: a virtual reality display screen; and one or more computing devices coupled to the virtual reality display screen, the one or more computing devices comprising one or more processors coupled to memory, wherein the one or more processors are programmed to:
a) present a fixation target at a known location on the virtual reality display screen, wherein the fixation target is presented to both eyes stereoscopically;
b) determine if a subject's head has rotated to align a central sight on the virtual reality display screen with the fixation target within a predetermined variable time window Ttarget;
c) provide a visual stimulation target at a known location on the virtual reality display screen, wherein the visual stimulation target is presented to a single eye of the subject for a predetermined fixed amount of time Tstimulus;
d) determine a positive stimulation target detection by detecting head rotation of the subject in response to providing the visual stimulation target at the known location;
e) store the known location of the positive visual stimulation target detection; and
f) repeat steps a-e, to construct a visual field map of a threshold of perception for the subject from the known locations of the positive visual stimulation target detection.
18. The apparatus of claim 17, wherein the one or more processors are programmed to: determine a positive identification of the fixation target if the subject's head is determined to align with the fixation target within the predetermined variable time window Ttarget, or a negative identification if the subject's head is determined not to align with the fixation target within the predetermined variable time window Ttarget.
19. The apparatus of claim 17 or 18, wherein the one or more processors are programmed to: choose a new known location for the visual stimulation target for presentation from visual field locations that remain to be tested.
20. The apparatus of any one of claims 17-19, wherein the known location of the visual stimulation target relative to the fixation target is randomly chosen.
21. The apparatus of any one of claims 17-20, wherein the one or more processors are programmed to: determine if the head rotation of the subject in response to providing the visual stimulation target at the known location reaches a predetermined threshold magnitude within a predetermined variable time window Tdetection after the visual stimulation target presentation.
22. The apparatus of any one of claims 17-21, wherein a determination of a positive stimulation target detection by the subject is based on whether there is head rotation directed toward the visual stimulation target within a predetermined variable time window Tdetection after the visual stimulation target presentation.
23. The apparatus of any one of claims 17-22, wherein the one or more processors are programmed to: determine a negative stimulation target detection if no head rotation of the subject toward the visual stimulation target is detected in response to providing the visual stimulation target at the known location; store the known location of the negative visual stimulation target detection; and construct the visual field map of the threshold of perception for the subject from the known locations of the positive visual stimulation target detection and the known locations of the negative visual stimulation target detection.
24. The apparatus of any one of claims 17-23, wherein if the subject's head is not determined to have rotated to align a central sight on the virtual reality display screen with the fixation target within the predetermined variable time window Ttarget, the one or more processors are programmed to: present a new fixation target to both eyes stereoscopically in a different quadrant of the virtual reality display screen.
25. The apparatus of any one of claims 17-24, wherein the one or more processors are programmed to: output the visual field map of the threshold of perception.
26. The apparatus of claim 25, wherein the visual field map of the threshold of perception comprises a grid of sensitivity values.
27. The apparatus of claim 25, wherein the visual field map of the threshold of perception comprises a full-threshold visual field map.
28. The apparatus of claim 25, wherein the visual field map of the threshold of perception comprises a suprathreshold visual field map.
29. The apparatus of any one of claims 17-28, wherein the one or more computing devices comprise at least one smart phone.
30. The apparatus of any one of claims 17-29, wherein the one or more computing devices are in communication with a network.
31. The apparatus of claim 30, wherein the network is a telecommunications network.
32. The apparatus of any one of claims 17-31, wherein the one or more processors are programmed to: initiate a telemedicine session.
33. A non-transitory computer-readable storage medium with an executable program stored thereon for conducting a virtual reality method for determining a subject's visual field perimetry with one or more computing devices, wherein the program instructs a microprocessor to perform the steps of:
a) presenting, using one or more processors of one or more computing devices, a fixation target at a known location on a virtual reality display screen, wherein the fixation target is presented to both eyes stereoscopically;
b) determining, using the one or more processors of the one or more computing devices, if the subject's head has rotated to align a central sight on the virtual reality display screen with the fixation target within a predetermined variable time window Ttarget;
c) providing, using the one or more processors of the one or more computing devices, a visual stimulation target at a known location on the virtual reality display screen, wherein the visual stimulation target is presented to a single eye of the subject for a predetermined fixed amount of time Tstimulus;
d) determining, using the one or more processors of the one or more computing devices, a positive stimulation target detection by detecting head rotation of the subject in response to providing the visual stimulation target at the known location;
e) storing, using the one or more processors of the one or more computing devices, the known location of the positive visual stimulation target detection; and
f) repeating, using the one or more processors of the one or more computing devices, steps a-e, to construct a visual field map of a threshold of perception for the subject from the known locations of the positive visual stimulation target detection, thereby determining a subject's visual field perimetry with virtual reality.
34. The non-transitory computer-readable storage medium of claim 33, wherein the executable program further instructs a microprocessor to perform the step of: determining, using the one or more processors of the one or more computing devices, a positive identification of the fixation target if the subject's head is determined to align with the fixation target within the predetermined variable time window Ttarget, or a negative identification if the subject's head is determined not to align with the fixation target within the predetermined variable time window Ttarget.
35. The non-transitory computer-readable storage medium of claim 33 or 34, wherein the executable program further instructs a microprocessor to perform the step of: choosing a new known location for the visual stimulation target for presentation from visual field locations that remain to be tested.
36. The non-transitory computer-readable storage medium of any one of claims 33-35, wherein the known location of the visual stimulation target relative to the fixation target is randomly chosen.
37. The non-transitory computer-readable storage medium of any one of claims 33-36, wherein the executable program further instructs a microprocessor to perform the step of: determining, using the one or more processors of the one or more computing devices, if the head rotation of the subject in response to providing the visual stimulation target at the known location reaches a predetermined threshold magnitude within a predetermined variable time window Tdetection after the visual stimulation target presentation.
38. The non-transitory computer-readable storage medium of any one of claims 33-37, wherein determining a positive stimulation target detection by the subject is based on whether there is head rotation directed toward the visual stimulation target within a predetermined variable time window Tdetection after the visual stimulation target presentation.
39. The non-transitory computer-readable storage medium of any one of claims 33-38, wherein the executable program further instructs a microprocessor to perform the steps of: determining, using the one or more processors of the one or more computing devices, a negative stimulation target detection if no head rotation of the subject toward the visual stimulation target is detected in response to providing the visual stimulation target at the known location; and storing, using the one or more processors of the one or more computing devices, the known location of the negative visual stimulation target detection; and constructing, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception for the subject from the known locations of the positive visual stimulation target detection and the known locations of the negative visual stimulation target detection.
40. The non-transitory computer-readable storage medium of any one of claims 33-39, wherein if the subject's head is not determined to have rotated to align a central sight on the virtual reality display screen with the fixation target within the predetermined variable time window Ttarget, a new fixation target is presented to both eyes stereoscopically in a different quadrant of the virtual reality display screen, using the one or more processors of the one or more computing devices.
41. The non-transitory computer-readable storage medium of any one of claims 39-40, wherein the executable program further instructs a microprocessor to perform the step of: outputting, using the one or more processors of the one or more computing devices, the visual field map of the threshold of perception.
42. The non-transitory computer-readable storage medium of claim 41, wherein outputting the visual field map of the threshold of perception comprises outputting a grid of sensitivity values.
43. The non-transitory computer-readable storage medium of claim 41, wherein outputting the visual field map of the threshold of perception comprises outputting a full- threshold visual field map.
44. The non-transitory computer-readable storage medium of claim 41, wherein outputting the visual field map of the threshold of perception comprises outputting a suprathreshold visual field map.
45. The non-transitory computer-readable storage medium of any one of claims 33-44, wherein the one or more computing devices comprise at least one smart phone.
46. The non-transitory computer-readable storage medium of any one of claims 33-45, wherein the one or more computing devices are in communication with a network.
47. The non-transitory computer-readable storage medium of claim 46, wherein the network is a telecommunications network.
48. The non-transitory computer-readable storage medium of any one of claims 33-47, wherein the executable program further instructs a microprocessor to perform the step of: initiating, by the one or more computing devices, a telemedicine session.
PCT/US2017/065443 2016-12-08 2017-12-08 Method for visual field perimetry testing WO2018107108A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662431597P 2016-12-08 2016-12-08
US62/431,597 2016-12-08

Publications (1)

Publication Number Publication Date
WO2018107108A1 true WO2018107108A1 (en) 2018-06-14

Family

ID=62491364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/065443 WO2018107108A1 (en) 2016-12-08 2017-12-08 Method for visual field perimetry testing

Country Status (1)

Country Link
WO (1) WO2018107108A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1994021162A1 (en) * 1993-03-24 1994-09-29 University College London Device for measuring the vestibulo-ocular reflex function
EP1767155A1 (en) * 2000-10-11 2007-03-28 Yeda Research And Development Co. Ltd. Kinetic method for efficient high-resolution visual field mapping
US20160242642A1 (en) * 2013-10-03 2016-08-25 Neuroscience Research Australia (Neura) Systems and methods for diagnosis and therapy of vision stability dysfunction
WO2016001902A1 (en) * 2014-07-04 2016-01-07 Libra At Home Ltd Apparatus comprising a headset, a camera for recording eye movements and a screen for providing a stimulation exercise and an associated method for treating vestibular, ocular or central impairment
US20160262608A1 (en) * 2014-07-08 2016-09-15 Krueger Wesley W O Systems and methods using virtual reality or augmented reality environments for the measurement and/or improvement of human vestibulo-ocular performance

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180110409A1 (en) * 2016-10-20 2018-04-26 Stylianos Georgios Tsapakis Visual field test method/perimeter using virtual reality glasses/headset and a smartphone or tablet or other portable device
RU2682932C1 (en) * 2018-06-26 2019-03-22 Федеральное государственное бюджетное научное учреждение "Научно-исследовательский институт глазных болезней" Method for carrying out perimetry in patients with no central vision
EP3827735A1 (en) 2019-11-28 2021-06-02 Health E Health Coorp, SL Device, method and computer programs for visual field rehabilitation
WO2021104965A1 (en) 2019-11-28 2021-06-03 Health E Health Coorp, Sl Device, method and computer programs for visual field rehabilitation
CN111938672A (en) * 2020-08-20 2020-11-17 京东方科技集团股份有限公司 Visual characteristic detection method based on virtual reality environment and related equipment
CN111938672B (en) * 2020-08-20 2024-01-23 京东方科技集团股份有限公司 Visual characteristic detection method based on virtual reality environment and related equipment
RU2759239C1 (en) * 2021-03-05 2021-11-11 Вячеслав Николаевич БЕТИН Device for perimetry in patients with lack of central vision
WO2022226141A1 (en) * 2021-04-21 2022-10-27 Olleyes, Inc. System and method for providing visual field tests

Similar Documents

Publication Publication Date Title
WO2018107108A1 (en) Method for visual field perimetry testing
US20240148244A1 (en) Interactive system for vision assessment and correction
EP3052004B1 (en) Diagnosis of vision stability dysfunction
WO2019099952A1 (en) Smartphone-based measurements of the refractive error in an eye
US20110267577A1 (en) Ophthalmic diagnostic apparatus
US20130155376A1 (en) Video game to monitor visual field loss in glaucoma
US11612316B2 (en) Medical system and method operable to control sensor-based wearable devices for examining eyes
US20100292999A1 (en) Ophthalmic diagnostic apparatus
US20150190048A1 (en) Circular preferential hyperacuity perimetry video game to monitor macular and retinal diseases
WO2013078406A1 (en) Video game to monitor retinal diseases
US10709328B2 (en) Main module, system and method for self-examination of a user's eye
US10299674B2 (en) Visual field measuring device and system
US20210007599A1 (en) Visual testing using mobile devices
KR20180062034A (en) Hearing test system and hearing test method
US20180042543A1 (en) Application for screening vestibular functions with cots components
US20220369924A1 (en) Head-mounted vision detection equipment, vision detection method and electronic device
US20220230749A1 (en) Systems and methods for ophthalmic digital diagnostics via telemedicine
CN115407504A (en) Virtual display apparatus and virtual display method
US20220322930A1 (en) Method and apparatus for evaluation and therapeutic relaxation of eyes
US20230404388A1 (en) Method and apparatus for measuring relative afferent pupillary defects
SRINIVAS VR-Phore: A Novel Virtual Reality system for diagnosis and therapeutics of Binocular Vision
CN114822127A (en) Training method and training device based on virtual reality equipment

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17879348; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 17879348; Country of ref document: EP; Kind code of ref document: A1)