WO2023044520A1 - Methods and systems for screening for central visual field loss

Methods and systems for screening for central visual field loss

Info

Publication number
WO2023044520A1
Authority
WO
WIPO (PCT)
Prior art keywords
subject
search target
eye
visual
target
Application number
PCT/AU2022/050281
Other languages
French (fr)
Inventor
Allison Mckendrick
Andrew Turpin
Rekha Srinivasan
Original Assignee
The University Of Melbourne
Priority claimed from AU2021903043A external-priority patent/AU2021903043A0/en
Application filed by The University Of Melbourne filed Critical The University Of Melbourne
Publication of WO2023044520A1 publication Critical patent/WO2023044520A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016 Operational features thereof
    • A61B3/0041 Operational features thereof characterised by display arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0091 Fixation targets for viewing direction
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B3/024 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for determining the visual field, e.g. perimeter types
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02 Viewing or reading apparatus
    • G02B27/022 Viewing apparatus
    • G02B27/024 Viewing apparatus comprising a light source, e.g. for viewing photographic slides, X-ray transparencies
    • G02B27/026 Viewing apparatus comprising a light source, e.g. for viewing photographic slides, X-ray transparencies and a display device, e.g. CRT, LCD, for adding markings or signs or to enhance the contrast of the viewed object

Definitions

  • Embodiments generally relate to systems, methods and devices for screening for vision loss.
  • In particular, described embodiments are directed to systems, methods and devices for detecting and assessing central visual field loss in a subject.
  • Some embodiments relate to a method for detecting a vision defect in a subject, the method comprising: displaying a non-uniform background on a display screen; displaying a visual search target superimposed on the background; receiving eye tracking data related to movement of an eye of the subject viewing the display screen and locating the visual search target; determining an eye movement parameter relating to the received eye tracking data, wherein the eye movement parameter relates to the movement of the eye while locating the visual search target; comparing the eye movement parameter with a predetermined threshold; and upon determining that the eye movement parameter exceeds the threshold, determining that the subject has a vision defect.
  • the vision defect is central visual field loss.
  • the eye movement parameter relates to a number of fixations taken to locate the visual search target.
  • the eye movement parameter comprises the number of fixations taken to locate the visual search target.
  • the eye movement parameter comprises a duration of time taken to locate the target.
  • the non-uniform background comprises complex spatial frequency content and complex orientation content.
  • the non-uniform background is free of real object content.
  • the non-uniform background comprises a noise background. According to some embodiments, the non-uniform background comprises a 1/f noise background.
  • the non-uniform background comprises a random filtered Gaussian noise image.
  • the non-uniform background is displayed to subtend between 0 and 30 degrees of a visual field of the subject. In some embodiments, the non-uniform background is displayed to subtend 15 degrees of a visual field of the subject.
  • the visual search target is a Gabor patch.
  • the visual search target is a six cycles per degree Gabor patch.
  • the visual search target is oriented orthogonal to the visual field meridian.
  • Some embodiments further comprise presenting a fixation target for the eye of the subject to fixate on before presentation of the visual search target.
  • Some embodiments further comprise performing a fixation check to ensure that the eye of the subject is fixated on the fixation target before presentation of the visual search target.
  • the eye of the subject is determined to be fixated on the fixation target if the eye of the subject is determined to be fixated on a location within a predetermined threshold of the fixation target.
  • the predetermined threshold is a 1 degree visual angle threshold.
  • Some embodiments relate to a device comprising: a processor; and memory storing program code that is accessible to the processor; wherein when executing the program code, the processor is caused to perform the method of some other described embodiments.
  • Some embodiments relate to a system comprising: the device of some other described embodiments; and the display screen for displaying the non-uniform background and the visual search target.
  • Some embodiments further comprise an eye tracking device configured to generate the eye tracking data.
  • Some embodiments relate to a method for detecting a vision defect in a subject, the method comprising: displaying a non-uniform background on a display screen; at random, determining whether to display a visual search target superimposed on the background, and in response to determining to display the visual search target superimposed on the background, displaying the visual search target superimposed on the background; receiving user input data related to whether a subject perceived a visual search target superimposed on the background; determining whether the user input data was correct based on whether the visual search target was displayed; and upon determining that the user input data was incorrect, determining that the subject has a vision defect.
  • Figure 1 shows a block diagram of a vision assessment system according to some embodiments;
  • Figure 2 shows a flow diagram illustrating an example method of performing vision assessment using the system of Figure 1;
  • Figure 3A shows an example layout for a fixation check image as presented by the system of Figure 1;
  • Figure 3B shows an example stimulus image layout as presented by the system of Figure 1;
  • Figure 4A shows an example instance of a fixation check image as presented by the system of Figure 1;
  • Figure 4B shows an example instance of a stimulus image as presented by the system of Figure 1;
  • Figure 5 shows an example layout illustrating possible positions of a target within a stimulus presented by the system of Figure 1;
  • Figure 6 shows the example layout of Figure 5 with Gabor patches as the targets and a non-uniform background;
  • Figure 7 shows a graph illustrating the results of performing the method of Figure 2 on a control group;
  • Figure 8 shows a histogram illustrating the results of performing the method of Figure 2 on a control group;
  • Figure 9 shows a graph illustrating the results of performing the method of Figure 2 on an individual subject with vision loss; and
  • Figure 10 shows a flow diagram illustrating an alternative example method of performing vision assessment using the system of Figure 1.
  • Central visual field loss or impairment may be caused by conditions such as glaucoma, neurodegenerative diseases, critical ocular diseases such as macular degeneration, and other causes of macular lesions such as diabetes, for example.
  • Central visual field loss or impairment can have a significant impact on a patient’s daily life. Assessing and diagnosing central visual field loss or impairment at an early stage may allow for a wider variety of treatment options and management strategies.
  • Described embodiments relate to systems, methods and devices for detecting and assessing central visual field loss in a subject by having subjects perform visual search tasks within a presented stimulus image designed to represent natural image statistics, but that does not contain specific objects within the image content.
  • By measuring eye movement parameters while a subject performs a visual search task within a stimulus image designed to represent natural image statistics, the subject’s central field vision can be assessed.
  • some described embodiments relate to a method of assessing central visual field loss by determining a number of fixations that a subject takes to find a target on a background having spatial frequency content similar to a natural scene. As central visual field loss affects fixation behaviour, comparing the number of fixations a subject takes to locate a target to a predetermined threshold can provide an indication of central visual field loss exhibited by the subject.
  • Some described embodiments relate to a computer implemented method for detecting central visual field loss.
  • a fixation target is presented to a subject on a screen. After the subject fixates on the fixation target and passes a fixation check, a non-uniform background is presented, which may be a 1/f noise background in some embodiments.
  • a visual target is also presented, which may be a Gabor patch in some embodiments. The location of the visual target is randomized to one of a predetermined number of positions. As the subject searches for and locates the target, the subject’s eye movements are tracked, and a parameter relating to the eye movements while the subject searches for the target is determined. The parameter might relate to the number of fixations it takes the subject to locate the target.
  • the number of fixations it takes the subject to locate the target is counted.
  • a duration of time taken to locate the target may be measured, where the duration of time taken to locate the target is related to the number of fixations it takes the subject to locate the target.
  • the exercise may be repeated a number of times, and a measure of central tendency relating to the number of fixations may be compared to a threshold value.
  • the measure of central tendency may be the median number of fixations. This may provide a more accurate measure of central tendency than a mean, as the number of fixations may form a skewed distribution, in some embodiments.
  • the mean, or a different measure of central tendency may be used instead.
  • Described methods may be used as a patient risk assessment or a method of providing information to support clinical decisions such as treatment plans.
  • FIG. 1 shows a visual field assessment system 100 according to some embodiments.
  • System 100 comprises a computing device 110 in communication with a number of input/output (I/O) peripherals 160. While I/O peripherals 160 are illustrated as separate from computing device 110, according to some embodiments one or more of the I/O peripherals 160 may be integrated with computing device 110. According to some embodiments, computing device 110 may also be in communication with one or more external processing devices 180.
  • I/O peripherals 160 are illustrated as separate from computing device 110, according to some embodiments one or more of the I/O peripherals 160 may be integrated with computing device 110. According to some embodiments, computing device 110 may also be in communication with one or more external processing devices 180.
  • Computing device 110 comprises a processor 120 and memory 130.
  • Processor 120 may include one or more data processors for executing instructions, and may include one or more of a microprocessor, microcontroller-based platform, a suitable integrated circuit, and one or more application-specific integrated circuits (ASICs).
  • Memory 130 may include one or more memory storage locations, and may be in the form of ROM, RAM, flash or other memory types. Memory 130 is arranged to be accessible to processor 120, and to store program code that is executable by processor 120, in the form of executable code modules. These may include stimulus presentation module 132 for causing visual stimuli to be delivered to a subject, and data processing module 133 for processing data received relating to a subject’s response to a delivered stimulus, for example.
  • Memory 130 may further store data that can be read and written to by processor 120.
  • memory 130 may store stimulus data 136, which may comprise visual stimulus data to be delivered to a subject, and normative data 137, which may be used to assess whether a subject’s response to presented stimuli indicates a visual field loss condition.
  • stimulus data 136 may comprise visual stimulus data to be delivered to a subject
  • normative data 137 which may be used to assess whether a subject’s response to presented stimuli indicates a visual field loss condition.
  • Computing device 110 further comprises an output module 140 and an input module 150, both in communication with processor 120.
  • Output module 140 may be configured to facilitate delivering output signals to one or more output peripherals of I/O peripherals 160
  • input module 150 may be configured to facilitate receiving input signals from one or more input peripherals of I/O peripherals 160.
  • I/O peripherals 160 may comprise a number of different input and output devices and components configured to send data signals to and/or receive data signals from computing device 110.
  • I/O peripherals 160 may include user I/O peripherals configured to facilitate communication between a user and computing device 110.
  • I/O peripherals 160 comprise a display screen 162, a speaker 164, an eye tracking device 166 and a user input device 168.
  • I/O peripherals 160 may comprise other input and output devices, such as one or more of a mouse, keyboard, motion sensor, motor, microphone, camera, or other I/O peripheral.
  • Display screen 162 may be configured to receive control signals via output module 140 and to display visual data.
  • display screen 162 may be used to present one or more visual search tasks to a patient or subject.
  • Display screen 162 may include one or more screens, which may be LCD or LED screen displays in some embodiments.
  • display screen 162 may be a 32-inch gamma-corrected Display++ monitor by Cambridge Research Systems Ltd. with a refresh rate of 120 Hz, a resolution of 1920 x 1080 pixels, and a mean luminance of 50 cd/m², for example.
  • Speaker 164 may be configured to receive control signals via output module 140 and to cause data to be audibly communicated to a user.
  • speaker 164 may be used to present instructions to a patient or subject, such as when to begin performing a visual search task.
  • Eye tracking device 166 may be configured to generate data signals and to communicate these to input module 150.
  • the data signals may comprise eye tracking data related to the movement of at least one eye of a patient or subject.
  • eye tracking device 166 may be configured to track only one eye of the subject.
  • the data signals may relate to patterns of natural eye movements such as number of saccades, number of fixations, saccadic amplitude and/or saccadic latency observed as being performed by an eye of a subject.
  • eye tracking device 166 may comprise at least one camera to record visual image data relating to the position and movement of at least one eye of a patient or user.
  • eye tracking device 166 may further comprise computing components configured to derive eye movement parameters based on the generated visual image data, and to communicate these to input module 150. According to some alternative embodiments, eye tracking device 166 may communicate the generated visual image data to input module 150, and computing device 110 may be configured to derive the eye movement parameters. According to some embodiments, eye tracking device 166 may comprise a Gazepoint GP3 eye tracker with a refresh rate of 60 Hz, for example.
  • User input device 168 may be configured to generate data signals based on interaction with a user such as a patient or subject, and to communicate these to input module 150.
  • the data signals may indicate that the patient or subject is ready to commence a visual search task, for example.
  • user input device 168 may generate data signals in response to physical interaction between the user and user input device 168 or physical manipulation of user input device 168 by a user.
  • user input device 168 may be a button or switch that generates data signals when pressed or switched.
  • user input device 168 may form part of a touch screen display, and virtual buttons or switches may be presented on the touch screen display for a user to interact with.
  • I/O peripherals 160 may be integrated into a single device.
  • display screen 162 and eye tracking device 166 may be integrated into a single device such as the imo® by CREWT Medical Systems Inc., or the Octopus 600 by Haag-Streit Diagnostics, for example.
  • Computing device 110 may also include a communications module 170.
  • Communications module 170 may allow for wired or wireless communication between computing device 110 and external computing devices, and may utilise Wi-Fi, USB, Bluetooth, or other communications protocols.
  • communications module 170 may facilitate communication between computing device 110 and external computing device 180.
  • External computing device 180 may comprise one or more computing devices, server systems and/or databases, and may be in communication with computing device 110 via a network such as the internet, for example.
  • Figure 2 illustrates a method 200 that may be performed by system 100 to conduct a central visual field assessment of a subject. While method 200 is described as being performed by processor 120 of computing device 110, according to some embodiments, some or all of the method steps of method 200 may be performed by external computing device 180.
  • method 200 may be performed while a subject is located in a viewing position of display screen 162, and eye tracking device 166 is configured to monitor at least one eye of the subject.
  • the subject may be positioned between 40cm and 120cm from display screen 162.
  • the subject may be positioned around 80cm from display screen 162, for example.
  • display screen 162 may be integrated into a head mounted display device to be worn on the head of the subject. In such embodiments, display screen 162 may be located relatively close to the subject’s eye, such as between 5 and 15cm from the subject’s eye in some embodiments.
  • the subject may be positioned so that an area of display screen 162 occupies their central visual field area.
  • the subject may be positioned so that an area of display screen 162 occupies between a 0° and 30° visual angle of the subject’s visual field.
  • the subject may be positioned so that an area of display screen 162 occupies around a 15° visual angle of the subject’s visual field, for example.
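The relationship between the subtended visual angle, the viewing distance and the required on-screen stimulus size follows directly from flat-screen geometry. The sketch below is a minimal illustration of that computation; the function name and parameters are illustrative, not taken from the described system.

```python
import math

def stimulus_size_mm(visual_angle_deg: float, viewing_distance_mm: float) -> float:
    """On-screen width needed for a stimulus to subtend the given visual angle
    when viewed on-axis from the given distance."""
    return 2.0 * viewing_distance_mm * math.tan(math.radians(visual_angle_deg) / 2.0)

# For example, a 15 degree central field at an 80 cm viewing distance
# needs a background roughly 211 mm across:
print(round(stimulus_size_mm(15.0, 800.0)))  # -> 211
```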
  • processor 120 executing stimulus presentation module 132 is caused to present a fixation check image to a subject via display screen 162.
  • Processor 120 may retrieve a fixation check image from stimulus data 136, and communicate it to output module 140 for display on display screen 162.
  • the fixation check image may comprise a uniform background with a visual target superimposed on the background. Example fixation check images are described below with reference to Figures 3A and 4A.
  • the subject may be instructed to fixate on the target with at least one eye that is being tracked with eye tracking device 166. The subject may be instructed to indicate when they have fixated on the target by interacting with user input device 168.
  • the instructions may be delivered audibly to the subject via speaker 164.
  • processor 120 executing data processing module 133 receives data indicating that the subject has fixated on the target presented as part of the fixation check image on display screen 162.
  • processor 120 may receive a data signal generated by user input device 168 and delivered via input module 150. The data signal may be generated based on the subject interacting with user input device 168 in response to an instruction to indicate that they have fixated on the presented visual target.
  • processor 120 executing data processing module 133 receives eye track data from eye tracking device 166 and delivered via input module 150.
  • the eye track data may be generated by eye tracking device 166 tracking at least one eye of the subject as the subject fixates on the target presented as part of the fixation check image shown on display screen 162.
  • the eye track data may comprise at least one coordinate relating to a position of the subject’s eye, and at least one timestamp relating to a time at which the data was captured.
  • processor 120 executing data processing module 133 performs a fixation check to confirm that the subject is actually fixating on the presented target.
  • Processor 120 may process the eye tracking data as received at step 215 to correlate it to a position on the fixation check image being displayed on display screen 162.
  • processor 120 determines whether the subject is fixating on the presented target. According to some embodiments, processor 120 may determine that the subject is fixating on the presented target if the eye tracking data indicates that the position of fixation of the eye of the subject is within a predetermined threshold of the location of the presented target. For example, processor 120 may determine that the subject is fixating on the presented target if the eye tracking data indicates that the position of fixation of the eye of the subject is within a 1° visual angle tolerance of the location of the presented target.
  • If processor 120 determines that the subject is not fixating on the presented target, processor 120 may return to step 205 of method 200, and re-present the fixation check image and/or re-instruct the subject to fixate on the presented target. If processor 120 determines that the subject is fixating on the presented target, then processor 120 may proceed to step 230.
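As a concrete illustration, a fixation check of this kind can be expressed as converting the tracked gaze offset to degrees of visual angle and testing it against the tolerance. This is a minimal sketch assuming gaze and target coordinates in millimetres relative to the screen centre and on-axis flat-screen viewing; the helper names and the 80 cm default distance are assumptions.

```python
import math

def offset_to_degrees(offset_mm: float, viewing_distance_mm: float = 800.0) -> float:
    """Convert an on-screen offset from the line of sight to visual angle in degrees."""
    return math.degrees(math.atan2(offset_mm, viewing_distance_mm))

def passes_fixation_check(gaze_mm, target_mm, viewing_distance_mm=800.0, tol_deg=1.0):
    """True if the gaze position lies within tol_deg visual angle of the fixation target."""
    dx = offset_to_degrees(gaze_mm[0] - target_mm[0], viewing_distance_mm)
    dy = offset_to_degrees(gaze_mm[1] - target_mm[1], viewing_distance_mm)
    return math.hypot(dx, dy) <= tol_deg
```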
  • processor 120 executing stimulus presentation module 132 is caused to present a visual search task to the subject.
  • Processor 120 may do this by causing a stimulation image to be presented to the subject via display screen 162.
  • Processor 120 may retrieve the stimulation image from stimulus data 136, and communicate it to output module 140 for display on display screen 162.
  • the stimulation image may comprise a non-uniform background with a visual target superimposed on the background.
  • the non-uniform background comprises complex spatial frequency content and orientation information similar to a natural scene.
  • the non-uniform background does not comprise any real object content.
  • the visual target may comprise a Gabor patch.
  • Example stimulation images and example targets are described in further detail below with reference to Figures 3B, 4B, 5 and 6.
  • the subject may be instructed to locate and fixate on the target with at least one eye that is being tracked with eye tracking device 166.
  • the subject may be instructed to indicate when they have fixated on the target by interacting with user input device 168.
  • the instructions may be delivered audibly to the subject via speaker 164.
  • processor 120 executing data processing module 133 receives data indicating that the subject has located and fixated on the target presented as part of the stimulation image on display screen 162.
  • processor 120 may receive a data signal generated by user input device 168 and delivered via input module 150.
  • the data signal may be generated based on the subject interacting with user input device 168 in response to an instruction to indicate that they have located and fixated on the presented visual target.
  • processor 120 executing data processing module 133 receives eye track data from eye tracking device 166 and delivered via input module 150.
  • the eye track data may be generated by eye tracking device 166 tracking at least one eye of the subject as the subject searches for the target presented as part of the stimulation image shown on display screen 162.
  • the eye track data may comprise at least one coordinate relating to a position of the subject’s eye, and at least one timestamp relating to a time at which the data was captured.
  • the eye track data may comprise data generated before the eye of the subject fixates on the presented target, being data generated while the eye is searching for and locating the target.
  • processor 120 executing data processing module 133 determines an eye movement parameter based on the eye track data received from eye tracking device 166.
  • the eye movement parameter may relate to the movement of the eye while searching for and locating the target, before the eye fixates on the target.
  • the eye movement parameter may relate to a number of fixations measured before the eye fixated on the target.
  • the eye movement parameter may relate to a duration of time that was measured before the eye fixated on the target.
  • the eye movement parameter is a fixation parameter.
  • the fixation parameter may be a number of fixations taken for the eye of the subject to fixate on the presented target.
  • the fixation parameter may be a duration of time taken for the eye of the subject to fixate on the presented target.
  • the duration of time taken for the eye of the subject to fixate on the presented target may be related to the number of fixations measured before the eye fixated on the target.
  • the eye movement parameter may be another parameter relating to the movement of the eye of the subject as it searched for and/or fixated on the presented target, such as a number of saccades, saccadic amplitude and/or saccadic latency, for example.
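By way of illustration, the fixation-count parameter can be derived by counting fixation events reported by the eye tracker up to and including the first fixation that lands on the target. The sketch below assumes the tracker yields (x, y, timestamp) fixation events in degrees; the names and the 1 degree on-target tolerance are assumptions for illustration.

```python
import math

def fixations_to_locate(fixation_events, target_deg, tol_deg=1.0):
    """Number of fixations taken to locate the target, including the fixation
    that lands on it; returns None if the target was never fixated."""
    for count, (x, y, _timestamp) in enumerate(fixation_events, start=1):
        if math.hypot(x - target_deg[0], y - target_deg[1]) <= tol_deg:
            return count
    return None
```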
  • processor 120 executing data processing module 133 compares the eye movement parameter determined at step 245 with a predetermined threshold.
  • the predetermined threshold may be determined based on normative data 137.
  • Normative data 137 may be used to determine the range of values seen in a majority of a control population for the fixation parameter, where the control population does not exhibit central visual field loss.
  • a threshold value can then be determined by calculating a range for the eye movement parameter that encompasses the majority of the range of values seen in the control population. Outside of the calculated range, it may be likely that central visual field loss exists.
  • comparing the eye movement parameter determined with a predetermined threshold may comprise determining whether the eye movement parameter is outside a predetermined threshold.
  • Processor 120 may be configured to determine whether the eye movement parameter is higher than the threshold for the eye movement parameter for a particular location of the visual search target based on normative data 137. As normative data 137 may result in different normative limits for different locations of a visual search target, the threshold may vary depending on where on the non-uniform background the target was displayed. According to some embodiments, where steps 205 to 245 of method 200 are performed multiple times to present the visual search target in multiple locations, processor 120 may be configured to determine a number of locations for which the eye movement parameter was outside the normative limits, and to compare this with a predetermined threshold value.
  • processor 120 executing data processing module 133 determines whether the subject exhibits central visual field loss in the eye being monitored, based on the comparison conducted at step 250. For example, if the measured eye movement parameter falls outside the normative limits in over a predetermined threshold of locations in which the visual search target is presented, processor 120 may determine that the eye being monitored exhibits central visual field loss.
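The comparison at steps 250 and 255 can be sketched as a simple counting rule over per-location results. The sketch below assumes per-location median fixation counts and location-specific normative limits keyed the same way; the default of three locations mirrors the example criterion reported later in this document, but is otherwise an assumption.

```python
def flags_visual_field_loss(median_fixations, normative_limits, min_locations=3):
    """Flag central visual field loss when the median fixation count exceeds
    the location-specific normative limit at min_locations or more locations."""
    outside = sum(
        1 for location, median in median_fixations.items()
        if median > normative_limits[location]
    )
    return outside >= min_locations
```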
  • processor 120 may be configured to export the results determined at step 255. Processor 120 may do this by storing the results to a location in memory 130, or by communicating the results to external computing device 180 via communications module 170, for example. According to some embodiments, processor 120 may export the data by causing the data to be communicated via an I/O peripheral 160 such as display screen 162 or speaker 164.
  • Figure 3A shows an example layout 300 for a fixation check image as presented on display screen 162 of system 100 at step 205 of method 200.
  • Layout 300 comprises a background 310 and a fixation target 320.
  • background 310 may be a uniform background.
  • background 310 may be a uniform white, grey, or black background in some embodiments.
  • fixation target 320 is located in a central region of background 310.
  • fixation target 320 is located at the centre of background 310.
  • Fixation target 320 may comprise one or more shapes, symbols, or other visual features that are visually distinct from background 310.
  • fixation target 320 may comprise a visual feature that is distinct from background 310 in colour, shade or tone.
  • fixation target 320 may comprise one or more, dots, lines, squares, circles, triangles or crosses.
  • Figure 3B shows an example layout 350 for a stimulation image as presented on display screen 162 of system 100 at step 230 of method 200.
  • Layout 350 comprises the background 310 and fixation target 320 as described above with reference to Figure 3A.
  • Layout 350 further comprises a non-uniform background 330 and a visual search target 340.
  • non-uniform background 330 may be presented as an overlay over background 310.
  • non-uniform background 330 may be displayed to fill display screen 162 entirely, so that none of background 310 is visible.
  • non-uniform background 330 may be free of real object content.
  • non-uniform background 330 may comprise complex spatial frequency content.
  • non-uniform background 330 may comprise complex orientation information. As natural scenes contain objects of varied orientation and spatial frequency content, generally biased towards low spatial frequencies and lower contrasts, non-uniform background 330 may comprise visual characteristics similar to a natural scene image. Non-uniform background 330 may be biased towards low spatial frequencies and/or lower contrasts, for example. Non-uniform background 330 may be generated to exhibit the spatial characteristics of a natural scene image without any of the real object content, to allow for aspects of visual processing in visual search tasks to be investigated without the added complexity of scene information such as colour, context, object information, or scene gist, for example.
  • non-uniform background 330 may comprise a textured cloudy background. According to some embodiments, non-uniform background 330 may comprise a noise background. As the power spectrum (or 2D representation of spatial frequency) of natural scenes is approximately proportional to the inverse of the frequency of the spectrum, and the relative log amplitude falls off roughly by a factor of 1/f, non-uniform background 330 may comprise a 1/f noise background in some embodiments. Non-uniform background 330 may comprise a random filtered noise image. In some embodiments, non-uniform background 330 may comprise an image filtered using a non-Gaussian band-pass filter, a low pass filter, or a high pass filter.
  • non-uniform background 330 may comprise a random filtered Gaussian noise image
  • non-uniform background 330 may comprise a random filtered Gaussian noise image adjusted to the factor of 1/f^α.
  • α may be 1.7.
  • α may be between 1.0 and 2.0.
  • α may be any value between 0.5 and 10.0.
  • the Root Mean Square (RMS) contrast of non-uniform background 330 may be defined using two times the standard deviation of the normalised pixel luminance across the whole image.
  • the RMS contrast of non-uniform background 330 may be scaled to 0.20 in some embodiments.
  • the RMS contrast of non-uniform background 330 may be scaled to between 0.10 and 0.30.
  • the RMS contrast of non-uniform background 330 may be scaled to another value.
  • non-uniform background 330 may be isotropic.
  • non-uniform background 330 may be displayed to subtend between 0 and 30 degrees of a visual field of a subject.
  • non-uniform background 330 may be displayed to subtend around 15 degrees of the visual field of the subject.
  • the size of non-uniform background 330 as displayed on display screen 162 may be adjusted based on a position of a subject from display screen 162, to ensure that non-uniform background 330 is displayed to subtend the desired angle of the visual field of the subject.
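A background with these characteristics can be generated by filtering white Gaussian noise in the Fourier domain and rescaling the contrast. The sketch below is a minimal NumPy illustration under the stated parameters (α = 1.7 and an RMS contrast of 0.20, where RMS contrast is two times the standard deviation of normalised pixel luminance); it is an assumption-laden sketch, not necessarily the exact generation procedure used.

```python
import numpy as np

def make_noise_background(size=512, alpha=1.7, rms_contrast=0.20, seed=None):
    """Random filtered Gaussian noise image with an approximately 1/f^alpha
    amplitude spectrum, scaled to the requested RMS contrast."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    freq = np.hypot(fx, fy)                  # radial spatial frequency
    freq[0, 0] = 1.0                         # avoid dividing by zero at DC
    filtered = np.real(np.fft.ifft2(np.fft.fft2(noise) / freq ** alpha))
    filtered -= filtered.mean()              # zero-mean luminance modulation
    filtered *= rms_contrast / (2.0 * filtered.std())   # 2 * SD = RMS contrast
    return np.clip(0.5 + filtered, 0.0, 1.0)            # normalised luminance about mid-grey
```

A set of such images could be generated up front and stored, with one selected at random per presentation, consistent with the storage and random selection described below.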
  • a number of non-uniform images may be generated and stored in stimulus data 136.
  • processor 120 may select an image at random from the stored images in stimulus data 136 to display as non-uniform background 330.
  • Visual search target 340 may be displayed in a random location on non-uniform background 330. According to some embodiments, visual search target 340 may be displayed in a randomly selected location selected from a set of predefined locations, such as the 25 locations described below with reference to Figure 5. In some embodiments, the visual search target may be displayed in a different location to those shown in Figure 5. For example, while Figure 5 shows locations within a 15 degree visual field, the locations may be different if a different visual angle was used.
  • Visual search target 340 may comprise one or more shapes, symbols, or other visual features that are visually distinct from non-uniform background 330. According to some embodiments, visual search target 340 may comprise a visual feature that is distinct from background 310 in colour, shade, pattern or tone. According to some embodiments, visual search target 340 may comprise one or more, dots, lines, squares, circles, triangles or crosses.
  • visual search target 340 may comprise a striped shape.
  • visual search target 340 may comprise a Gabor patch.
  • Visual search target 340 may comprise a six cycles per degree Gabor patch, for example.
  • visual search target 340 may comprise between a one cycles per degree and an eight cycles per degree Gabor patch.
  • the Gabor patch may be generated by a sine wave grating masked by a Gaussian distribution of SD 0.17° in some embodiments.
  • the SD of the Gaussian window may be selected as a different value, based on the desired size of the resulting visual search target 340.
  • the Gabor patch may be oriented orthogonal to the visual field meridian. This may result in more uniform sensitivity in the presented visual field rings, so that all stimuli presented at a particular eccentricity from the centre of the visual field would be expected to have a similar contrast.
  • visual search target 340 may comprise a different visual feature.
  • visual search target 340 may comprise a sixth derivative (D6) of a spatial Gaussian function, in some embodiments.
  • visual search target 340 may be vertically oriented.
  • visual search target 340 may be rotated to be orthogonal to the visual field meridian.
  • visual search target 340 may be adjusted to have a contrast with non-uniform background 330 that provides a 95% probability of being perceptible to a subject without vision loss, which may be calculated using normative limits estimated from the frequency of seeing curves of subjects without vision loss, for example.
  • the normative limits may be calculated using a method such as method 1000 described below with reference to Figure 10.
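A Gabor patch as described can be synthesised as a sine grating multiplied by a Gaussian envelope. The sketch below assumes a pixels-per-degree display scale and takes the grating orientation as a free parameter, so it can be rotated orthogonal to the visual field meridian (90° from the target location's polar angle); the contrast value and parameter names are assumptions, since in practice the contrast is set from the seeing normative limits described above.

```python
import numpy as np

def make_gabor(cycles_per_deg=6.0, sigma_deg=0.17, orientation_deg=0.0,
               contrast=0.5, px_per_deg=40):
    """Sine-wave grating masked by a Gaussian window of SD sigma_deg."""
    half = int(round(3 * sigma_deg * px_per_deg))        # support out to 3 SD
    coords = np.arange(-half, half + 1) / px_per_deg     # coordinates in degrees
    x, y = np.meshgrid(coords, coords)
    theta = np.deg2rad(orientation_deg)
    u = x * np.cos(theta) + y * np.sin(theta)            # distance along modulation axis
    grating = np.sin(2.0 * np.pi * cycles_per_deg * u)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_deg ** 2))
    return contrast * grating * envelope   # add to the background at the target location
```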
  • Figure 4A shows a specific instance 400 of layout 300 for a fixation check image as presented on display screen 162 of system 100 at step 205 of method 200.
  • Instance 400 comprises a background 410, being a specific instance of background 310, and a fixation target 420, being a specific instance of fixation target 320.
  • Background 410 is a uniform background, being a uniform grey background.
  • Fixation target 420 is located at the centre of background 410.
  • Fixation target 420 comprises four white dots forming a diamond shape, having a 1° visual angle radius in the visual field of the subject.
  • Figure 4B shows a specific instance 450 of layout 350 for a stimulation image as presented on display screen 162 of system 100 at step 230 of method 200.
  • Layout 450 comprises the background 410 and fixation target 420 as described above with reference to Figure 4A.
  • Layout 450 further comprises a non-uniform background 430, being a specific instance of non-uniform background 330, and a visual search target 440, being a specific instance of visual search target 340.
  • Non-uniform background 430 comprises a 1/f noise background, being a random filtered Gaussian noise image adjusted to the factor of 1/f^α where α is 1.7.
  • Non-uniform background 430 has the appearance of a cloudy texture.
  • The RMS contrast of non-uniform background 430 is defined using two times the standard deviation of the normalised pixel luminance across the whole image, scaled to 0.20.
  • Non-uniform background 430 is isotropic and displayed to subtend around 15 degrees of the visual field of the subject.
  • Visual search target 440 is shown displayed to the top right of fixation target 420.
  • Visual search target 440 is a six cycles per degree Gabor patch, generated by a sine wave grating masked by a Gaussian distribution of SD 0.17°, and oriented orthogonal to the visual field meridian.
  • Visual search target 440 has the appearance of a soft circle shape filled with black and white stripes, the stripes angled on a diagonal extending from top left to bottom right.
  • the contrast of visual search target 440 against background 430 is set at the 95% probability of seeing normative limits estimated from the frequency of seeing curves of a set of subjects having no vision loss.
  • Figure 5 shows an example layout 500 illustrating a number of possible locations 540 of a visual search target 340 on non-uniform background 330.
  • the locations 540 include a central location 505, with three concentric rings each comprising 8 further locations, aligned to form a starburst pattern. While 25 possible locations 540 are illustrated, according to some embodiments visual search target 340 may be displayed in any location on non-uniform background 330.
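A location set of this shape, one central point plus three concentric rings of eight locations aligned to form a starburst, can be generated as in the sketch below. The ring eccentricities are placeholders, since the exact values are given only by Figure 5; everything here is illustrative.

```python
import math

def target_locations(ring_eccentricities_deg=(2.0, 4.0, 6.0)):
    """Central location plus three rings of 8 locations each (25 in total),
    with the rings angularly aligned to form a starburst pattern."""
    locations = [(0.0, 0.0)]
    for eccentricity in ring_eccentricities_deg:
        for i in range(8):
            angle = math.radians(45.0 * i)
            locations.append((eccentricity * math.cos(angle),
                              eccentricity * math.sin(angle)))
    return locations
```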
  • Figure 6 shows a specific instance of layout 500, comprising the non-uniform background 430 as illustrated in Figure 4.
  • the possible locations of visual search target 340 are displayed as Gabor patches 640 oriented orthogonal to the visual field meridian, so that the stripes of each Gabor patch 640 appear to extend around the perimeter of each concentric circle surrounding the central Gabor patch 605.
  • Each Gabor patch 640 may be identical to visual search target 440 as illustrated in Figure 4, except in orientation.
  • each participant was seated at 80 cm from a display screen 162 with individual refractive correction applied for the 80 cm distance in a trial frame.
  • the participant was asked to look at a fixation target 420 as illustrated in Figure 4, which was displayed at the centre of display screen 162 before the start of the experiment.
  • the participants were instructed to start the task by a button press using a user input device 168 when looking at fixation target 420.
  • the stimulus presentation began with an auditory tone played via speaker 164.
  • If fixation was outside the 1° tolerance, the participants were re-instructed to look at the fixation target and the stimulus presentation began after they had passed the fixation check.
  • the fixation check at the beginning of each stimulus presentation was performed to ensure that all participants commenced the visual search from the centre of the fixation target.
  • eye tracking device 166 generated the gaze points and the timestamp for each fixation, and provided these to computing device 110 via input module 150.
  • the number of fixations for each presentation of visual search target 440 was estimated using the number of timestamps.
  • the initial fixation from the centre of the fixation target 420 was verified and included in the fixation count.
  • the outcome measure was the number of fixations required by an eye of the subject to find the target, measured using an eye tracking device 166. Normative limits of 95%, 98%, and 99% for the number of fixations at each location were estimated based on the control group.
  • the procedure performance was assessed by calculating sensitivity and specificity for different combinations of normative limits, target locations with fixations outside normative limits and number of repeated target presentations.
  • the highest Area Under Curve (AUC) (computed as a partial AUC (pAUC) for specificity higher than 80%) was chosen. From this curve, the criterion with the highest sensitivity for specificity greater than 95% was selected as a thresholding criterion for determining central visual field loss in the experimental group. Specifically, the visual field of a subject was flagged “abnormal” when the fixation number was greater than the 99% normative limit for three or more locations averaged over two repeated presentations. This threshold gave an 85% sensitivity, 95.2% specificity, and 0.88 pAUC.
  • Figure 7 shows a graph 700 illustrating the results of the experiment as performed on the control group.
  • the x-axis 710 and the y-axis 720 represent the eccentricity in degrees of the presented visual search targets 340, and circles 740 represent each of the locations in which the targets were presented.
  • the first number in each circle 740 corresponds to the median number of fixations made to locate a target 340 at that location as displayed on a non-uniform background 330, while the second number corresponds to the interquartile range for the number of fixations made to locate the target 340.
  • a median of 2 or 3 fixations were required depending on the location of target 340. This number includes the initial fixation on the central fixation target 320 at the beginning of the presentation.
  • Figure 8 shows a graph 800 illustrating the number of fixations for a single target location as a histogram.
  • Graph 800 was generated based on fixations recorded when a target was displayed at a position of (0, 6) degrees.
  • X-axis 810 shows the number of fixations taken to locate a target, while Y-axis 820 shows the proportion of people in each group.
  • Bars 830 illustrate the proportion of subjects for each number of fixations that it took to locate the target.
  • the 95%, 98% and 99% quantiles were extracted and then rounded as normative limits. These are illustrated as lines 850, 860, and 870, respectively.
  • the 95% normative limit shown by line 850 was a fixation number of 6.20, meaning that subjects with a fixation number higher than 6 would be considered as outside 95% normative limits for that location.
  • the 98% normative limit shown by line 860 was a fixation number of 7, and the 99% normative limit shown by line 870 was a fixation number of 8. For each of the experimental group, any fixation number higher than the normative limits was considered as abnormal for that location.
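Deriving these per-location normative limits amounts to taking upper quantiles of the control group's fixation counts and rounding to whole fixations, as described. A minimal sketch, assuming the control counts for one location are pooled into an array:

```python
import numpy as np

def normative_limits(control_counts, quantiles=(0.95, 0.98, 0.99)):
    """Upper quantiles of control fixation counts at one target location,
    rounded to whole fixations, keyed by quantile level."""
    return {q: int(round(float(np.quantile(control_counts, q))))
            for q in quantiles}
```

A count above a limit (for example, above 6 for the 95% limit described above) would then be considered outside the normative limits for that location.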
  • a criterion for the entire central visual field to be considered abnormal was determined.
  • Various parameters were explored to determine the criteria of abnormal. These included the number of locations in which the number of fixations was outside the normative limits (referred to as k); the level of the normative limits (being either 95%, 98% or 99%, for example, and referred to as p); and the number of trials required (referred to as n, being the number of repeats of the stimulus target at each location).
  • Receiver Operating Characteristic (ROC) curves were plotted to determine the specificity, sensitivity, and the area under the curve for the screening approach for varied k values across each combination of n and p values, with higher specificity, higher sensitivity, and a greater area under the curve (AUC) being considered as the better performance.
  • a combination of parameters was chosen that gave the highest partial AUC (pAUC calculated as AUC computed for specificities higher than 80%) and the highest specificity possible less than 100% (being 95.2% for the data collected).
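Evaluating any one (k, p, n) combination reduces to computing the sensitivity and specificity of the resulting binary "abnormal" flag against the known group labels; sweeping combinations and plotting ROC curves then identifies the best-performing criterion. A minimal sketch of the per-criterion evaluation, with illustrative names:

```python
import numpy as np

def sensitivity_specificity(flagged_abnormal, has_field_loss):
    """Sensitivity and specificity of a binary 'abnormal' rule.

    flagged_abnormal: boolean predictions per subject under one (k, p, n)
    criterion; has_field_loss: True for the experimental (vision loss) group."""
    flagged = np.asarray(flagged_abnormal, dtype=bool)
    truth = np.asarray(has_field_loss, dtype=bool)
    sensitivity = flagged[truth].mean()       # true positive rate
    specificity = (~flagged[~truth]).mean()   # true negative rate
    return float(sensitivity), float(specificity)
```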
  • Figure 9 shows an example graph 900 of the proportion of the median number of fixations over 20 trials from a single subject in the experimental group, and illustrates locations where the fixation number falls outside normal limits.
  • the x-axis 910 and the y-axis 920 represent the eccentricity in degrees of the presented visual search targets 340, and circles 940 represent each of the locations in which the targets were presented.
  • the number in each circle 940 corresponds to the median number of fixations made to locate a target 340 at that location as displayed on a non-uniform background 330.
  • the grey circles 944 represent the locations where the median number of fixations fell outside the normative limit p, which was 95% in the illustrated example.
  • the white circles 942 represent the locations where the median number of fixations fell within the normative limit p. As seen from the graph, this subject had a median number of fixations ranging from 3 to 22. 19 locations represented by circles 944 had median numbers of fixations outside the normative limit, and only 6 locations represented by circles 942 had median numbers within the normative limit.
  • the described testing protocol could detect abnormal central visual field with a 95.2% specificity and an 85% sensitivity within an average time of about 1.5 minutes using low-cost hardware.
  • the total testing time (median) taken by the control participants to perform the task for two trials was 85.71 (66.49 - 113.53) seconds.
  • Figure 10 illustrates a method 1000 that may be performed by system 100 to conduct a central visual field assessment of a subject.
  • method 1000 may be used as an alternative or additional assessment to method 200 as described above with reference to Figure 2.
  • method 1000 may be used to calculate normative limits of the vision of subjects without vision loss. Such normative limits may be used to determine whether visual search target 340 is likely to be perceptible to a subject without vision loss, for example.
  • Method 1000 may include processor 120 initially conducting a fixation check by executing steps 205 to 225, as described above with reference to method 200.
  • processor 120 executing stimulus presentation module 132 is caused to then present a first stimulus image to a subject via display screen 162.
  • Processor 120 may retrieve the stimulus image from stimulus data 136, and communicate it to output module 140 for display on display screen 162.
  • the stimulation image may comprise a non-uniform background.
  • the non-uniform background comprises complex spatial frequency content and orientation information similar to a natural scene.
  • the non-uniform background does not comprise any real object content.
  • the first stimulus image may be displayed for between 100ms and 500ms. The first stimulus image may be displayed for around 250ms, for example.
  • processor 120 executing stimulus presentation module 132 is optionally caused to present an inter-stimulus image to a subject via display screen 162.
  • Processor 120 may retrieve the inter-stimulus image from stimulus data 136, and communicate it to output module 140 for display on display screen 162.
  • the inter-stimulus image may be the same as the fixation check image presented at step 205.
  • the inter-stimulus image may be any image that is different to the image presented at step 1010.
  • the inter-stimulus image may be displayed for between 250ms and 1000ms. The inter-stimulus image may be displayed for around 500ms, for example.
  • processor 120 executing stimulus presentation module 132 is optionally caused to then present a subsequent stimulus image to a subject via display screen 162.
  • Processor 120 may retrieve the subsequent stimulus image from stimulus data 136, and communicate it to output module 140 for display on display screen 162.
  • the stimulation image may comprise a non-uniform background.
  • the non-uniform background comprises complex spatial frequency content and orientation information similar to a natural scene.
  • the non-uniform background does not comprise any real object content.
  • the subsequent stimulus image may be displayed for between 100ms and 500ms.
  • the subsequent stimulus image may be displayed for around 250ms, for example.
  • processor 120 may optionally repeat steps 1020 and 1030 one or more times.
  • processor 120 may skip steps 1020 and 1030, and perform step 1040 directly after step 1010.
  • At least one of the first stimulus image presented at step 1010 and the subsequent stimulus image presented at step 1030 may include a visual target superimposed on the background.
  • the visual target may comprise a Gabor patch.
  • processor 120 may randomly select whether to include the visual target when presenting the stimulus images at steps 1010 and 1030.
  • at least one of the stimulus images presented at steps 1010 and 1030 comprises a visual target.
  • only one of the stimulus images presented at steps 1010 and 1030 comprises a visual target.
  • processor 120 executing data processing module 133 receives user input data indicating which, if any, of the one or more stimulus images the subject perceived to include the visual target.
  • processor 120 may receive a data signal generated by user input device 168 and delivered via input module 150.
  • the data signal may be generated based on the subject interacting with user input device 168 in response to an instruction to indicate which one or more stimulus images included the visual target.
  • the subject may indicate that they did not perceive the visual target in any presented stimulus images.
  • steps 205 to 1040 may be repeated one or more times to collect a plurality of user response data.
  • processor 120 executing data processing module 133 determines how many of the user responses were correct. Processor 120 may do this by comparing the user responses with stored data indicating the correct response corresponding to which, if any, stimulus image or images included the visual target.
  • processor 120 executing data processing module 133 optionally determines whether the subject exhibits central visual field loss in the eye being monitored, based on the number of correct responses determined at step 1050. According to some embodiments, the number of correct responses, or a percentage of correct responses, may be compared to a predetermined threshold. If processor 120 determines that the number or percentage of incorrect answers was over a predetermined threshold, processor 120 may determine that the eye being monitored exhibits central visual field loss.
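The response scoring at steps 1050 and 1060 can be sketched as comparing each reported response against the ground truth of where, if anywhere, the target was shown, then thresholding the error rate. The names and the example threshold below are assumptions for illustration only; the document does not specify the threshold value.

```python
def exhibits_field_loss(reported, actual, max_error_rate=0.25):
    """Compare per-trial responses with ground truth and threshold the errors.

    reported/actual: per-trial values identifying which stimulus image (if
    any) contained the target, e.g. 1, 2, or None for 'no target seen'."""
    errors = sum(r != a for r, a in zip(reported, actual))
    return errors / len(actual) > max_error_rate
```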
  • processor 120 may be configured to export the results determined at step 1060. Processor 120 may do this by storing the results to a location in memory 130, or by communicating the results to external computing device 180 via communications module 170, for example. According to some embodiments, processor 120 may export the data by causing the data to be communicated via an I/O peripheral 160 such as display screen 162 or speaker 164.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biophysics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Embodiments generally relate to a method for detecting a vision defect in a subject. The method comprises displaying a non-uniform background on a display screen; displaying a visual search target superimposed on the background; receiving eye tracking data related to movement of an eye of the subject viewing the display screen and locating the visual search target; determining an eye movement parameter relating to the received eye tracking data, wherein the eye movement parameter relates to a number of fixations taken to locate the visual search target; comparing the eye movement parameter with a predetermined threshold; and upon determining that the eye movement parameter exceeds the threshold, determining that the subject has a vision defect.

Description

"Methods and systems for screening for central visual field loss"
Embodiments generally relate to systems, methods and devices for screening for vision loss. In particular, described embodiments are directed to systems, methods and devices for detecting and assessing central visual field loss in a subject.
Background
Assessment of visual field loss is a common clinical requirement when assessing the vision of a patient. Vision loss can present in many forms, and affect different areas of a patient’s visual field. In particular, damage to the central visual field has been found to have a specific and direct impact on daily living activities.
While methods for testing for loss of the central visual field exist, these assessment techniques can be burdensome on the patient and may require the patients to perform tasks that are not representative of typical visual behaviour. For example, patients may be required to maintain steady central fixation throughout the testing, rather than making eye movements in a natural way. This may result in discomfort and fatigue to the patient, and can cause the patient to exhibit Troxler fading. Furthermore, the visual stimuli presented to the patients during the testing process are generally dissimilar to natural visual environments, meaning that these testing techniques may not accurately predict visual performance in more natural environments. These testing techniques may also require specialised, expensive hardware, and a need for training on the task, making the testing expensive and inaccessible.
It is desired to address or ameliorate one or more shortcomings or disadvantages associated with prior systems for visual field assessment, or to at least provide a useful alternative thereto.
Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.
Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each of the appended claims.
Summary
Some embodiments relate to a method for detecting a vision defect in a subject, the method comprising: displaying a non-uniform background on a display screen; displaying a visual search target superimposed on the background; receiving eye tracking data related to movement of an eye of the subject viewing the display screen and locating the visual search target; determining an eye movement parameter relating to the received eye tracking data, wherein the eye movement parameter relates to the movement of the eye while locating the visual search target; comparing the eye movement parameter with a predetermined threshold; and upon determining that the eye movement parameter exceeds the threshold, determining that the subject has a vision defect.
In some embodiments, the vision defect is central visual field loss.
According to some embodiments, the eye movement parameter relates to a number of fixations taken to locate the visual search target.
According to some embodiments, the eye movement parameter comprises the number of fixations taken to locate the visual search target.
In some embodiments, the eye movement parameter comprises a duration of time taken to locate the target.
In some embodiments, the non-uniform background comprises complex spatial frequency content and complex orientation content.
According to some embodiments, the non-uniform background is free of real object content.
In some embodiments, the non-uniform background comprises a noise background. According to some embodiments, the non-uniform background comprises a 1/f noise background.
According to some embodiments, the non-uniform background comprises a random filtered Gaussian noise image.
In some embodiments, the non-uniform background is displayed to subtend between 0 and 30 degrees of a visual field of the subject. In some embodiments, the non-uniform background is displayed to subtend 15 degrees of a visual field of the subject.
According to some embodiments, the visual search target is a Gabor patch.
According to some embodiments, the visual search target is a six cycles per degree Gabor patch.
In some embodiments, the visual search target is oriented orthogonal to the visual field meridian.
Some embodiments further comprise presenting a fixation target for the eye of the subject to fixate on before presentation of the visual search target.
Some embodiments further comprise performing a fixation check to ensure that the eye of the subject is fixated on the fixation target before presentation of the visual search target.
According to some embodiments, the eye of the subject is determined to be fixated on the fixation target if the eye of the subject is determined to be fixated on a location within a predetermined threshold of the fixation target. In some embodiments, the predetermined threshold is a 1 degree visual angle threshold.
Some embodiments relate to a device comprising: a processor; and memory storing program code that is accessible to the processor; wherein when executing the program code, the processor is caused to perform the method of some other described embodiments. Some embodiments relate to a system comprising: the device of some other described embodiments; and the display screen for displaying the non-uniform background and the visual search target.
Some embodiments further comprise an eye tracking device configured to generate the eye tracking data.
Some embodiments relate to a method for detecting a vision defect in a subject, the method comprising: displaying a non-uniform background on a display screen; at random, determining whether to display a visual search target superimposed on the background, and in response to determining to display the visual search target superimposed on the background, displaying the visual search target superimposed on the background; receiving user input data related to whether a subject perceived a visual search target superimposed on the background; determining whether the user input data was correct based on whether the visual search target was displayed; and upon determining that the user input data was incorrect, determining that the subject has a vision defect.
Brief Description of Drawings
Embodiments are described in further detail below, by way of example and with reference to the accompanying drawings, in which:
Figure 1 shows a block diagram of a vision assessment system according to some embodiments;
Figure 2 shows a flow diagram illustrating an example method of performing vision assessment using the system of Figure 1;
Figure 3A shows an example layout for a fixation check image as presented by the system of Figure 1;
Figure 3B shows an example stimulus image layout as presented by the system of Figure 1;
Figure 4A shows an example instance of a fixation check image as presented by the system of Figure 1;
Figure 4B shows an example instance of a stimulus image as presented by the system of Figure 1;
Figure 5 shows an example layout illustrating possible positions of a target within a stimulus presented by the system of Figure 1;
Figure 6 shows the example layout of Figure 5 with Gabor patches as the targets and a non-uniform background;
Figure 7 shows a graph illustrating the results of performing the method of Figure 2 on a control group;
Figure 8 shows a histogram illustrating the results of performing the method of Figure 2 on a control group;
Figure 9 shows a graph illustrating the results of performing the method of Figure 2 on an individual subject with vision loss; and
Figure 10 shows a flow diagram illustrating an alternative example method of performing vision assessment using the system of Figure 1.
Detailed Description
Embodiments generally relate to systems, methods and devices for screening for vision loss. In particular, described embodiments are directed to systems, methods and devices for detecting and assessing central visual field loss in a subject.
Central visual field loss or impairment may be caused by conditions such as glaucoma, neurodegenerative diseases, critical ocular diseases such as macular degeneration, and other causes of macular lesions such as diabetes, for example. Central visual field loss or impairment can have a significant impact on a patient’s daily life. Assessing and diagnosing central visual field loss or impairment at an early stage may allow for a wider variety of treatment options and management strategies.
However, existing techniques for assessing central visual field loss result in discomfort and fatigue to the patient, may be difficult for the patients to learn and perform, and can be expensive and time consuming. For example, some existing techniques require the patient to fixate on a target for a long period of time, and discard any data that is generated while the patient is not fixated on the target. Concentrating on fixating on a target for a long period of time can be difficult and fatiguing. Furthermore, as these tests do not simulate natural visual environments having objects of interest embedded in more complex backgrounds, these techniques may not predict visual performance in more natural environments.
To address some of the limitations of existing visual field testing procedures, alternative approaches have been developed that incorporate visual search behaviour in more natural scenarios, such as while subjects watch television or movies. However, presenting such widely varying visual content having varying basic visual attributes such as spatial, colour and motion cues, in addition to higher level attributes such as differences in object and scene gist, makes it highly complex to predict a subject's performance for any individual visual scene and thus arrive at an accurate assessment of their central visual field loss.
Described embodiments relate to systems, methods and devices for detecting and assessing central visual field loss in a subject by having subjects perform visual search tasks within a presented stimulus image designed to represent natural image statistics, but that does not contain specific objects within the image content. By measuring eye movement parameters while a subject performs a visual search task within a stimulus image designed to represent natural image statistics, the subject’s central field vision can be assessed. For example, some described embodiments relate to a method of assessing central visual field loss by determining a number of fixations that a subject takes to find a target on a background having spatial frequency content similar to a natural scene. As central visual field loss affects fixation behaviour, comparing the number of fixations a subject takes to locate a target to a predetermined threshold can provide an indication of central visual field loss exhibited by the subject.
Some described embodiments relate to a computer implemented method for detecting central visual field loss. A fixation target is presented to a subject on a screen. After the subject fixates on the fixation target and passes a fixation check, a non-uniform background is presented, which may be a 1/f noise background in some embodiments. A visual target is also presented, which may be a Gabor patch in some embodiments. The location of the visual target is randomized to one of a predetermined number of positions. As the subject searches for and locates the target, the subject's eye movements are tracked, and a parameter relating to the eye movements while the subject searches for the target is determined. The parameter might relate to the number of fixations it takes the subject to locate the target. For example, in some embodiments, the number of fixations it takes the subject to locate the target is counted. In some embodiments, a duration of time taken to locate the target may be measured, where the duration of time taken to locate the target is related to the number of fixations it takes the subject to locate the target. The exercise may be repeated a number of times, and a measure of central tendency relating to the number of fixations may be compared to a threshold value. For example, the measure of central tendency may be the median number of fixations. This may provide a more accurate measure of central tendency than a mean, as the number of fixations may form a skewed distribution, in some embodiments. In some alternative embodiments, the mean, or a different measure of central tendency may be used instead. If the measure of central tendency relating to the number of fixations is higher than a predetermined threshold, it is determined that the subject may have a central visual field defect. Described methods may be used as a patient risk assessment or a method of providing information to support clinical decisions such as treatment plans.
Figure 1 shows a visual field assessment system 100 according to some embodiments. System 100 comprises a computing device 110 in communication with a number of input/output (I/O) peripherals 160. While I/O peripherals 160 are illustrated as separate from computing device 110, according to some embodiments one or more of the I/O peripherals 160 may be integrated with computing device 110. According to some embodiments, computing device 110 may also be in communication with one or more external processing devices 180.
Computing device 110 comprises a processor 120 and memory 130. Processor 120 may include one or more data processors for executing instructions, and may include one or more of a microprocessor, microcontroller-based platform, a suitable integrated circuit, and one or more application-specific integrated circuits (ASICs). Memory 130 may include one or more memory storage locations, and may be in the form of ROM, RAM, flash or other memory types. Memory 130 is arranged to be accessible to processor 120, and to store program code that is executable by processor 120, in the form of executable code modules. These may include stimulus presentation module 132 for causing visual stimuli to be delivered to a subject, and data processing module 133 for processing data received relating to a subject’s response to a delivered stimulus, for example. Memory 130 may further store data that can be read and written to by processor 120. For example, memory 130 may store stimulus data 136, which may comprise visual stimulus data to be delivered to a subject, and normative data 137, which may be used to assess whether a subject’s response to presented stimuli indicates a visual field loss condition. The content and functionality of the stored program code 131 and data 135 are described in further detail below with reference to Figure 2.
Computing device 110 further comprises an output module 140 and an input module 150, both in communication with processor 120. Output module 140 may be configured to facilitate delivering output signals to one or more output peripherals of I/O peripherals 160, while input module 150 may be configured to facilitate receiving input signals from one or more input peripherals of I/O peripherals 160. I/O peripherals 160 may comprise a number of different input and output devices and components configured to send data signals to and/or receive data signals from computing device 110. According to some embodiments, I/O peripherals 160 may include user I/O peripherals configured to facilitate communication between a user and computing device 110. In the illustrated embodiment, I/O peripherals 160 comprise a display screen 162, a speaker 164, an eye tracking device 166 and a user input device 168. According to some embodiments, I/O peripherals 160 may comprise other input and output devices, such as one or more of a mouse, keyboard, motion sensor, motor, microphone, camera, or other I/O peripheral.
Display screen 162 may be configured to receive control signals via output module 140 and to display visual data. For example, display screen 162 may be used to present one or more visual search tasks to a patient or subject. Display screen 162 may include one or more screens, which may be LCD or LED screen displays in some embodiments. According to some embodiments, display screen 162 may be a 32-inch gamma-corrected Display++ monitor by Cambridge Research Systems Ltd. with a refresh rate of 120 Hz, a resolution of 1920 × 1080 pixels, and a mean luminance of 50 cd·m⁻², for example.
Speaker 164 may be configured to receive control signals via output module 140 and to cause data to be audibly communicated to a user. For example, speaker 164 may be used to present instructions to a patient or subject, such as when to begin performing a visual search task.
Eye tracking device 166 may be configured to generate data signals and to communicate these to input module 150. The data signals may comprise eye tracking data related to the movement of at least one eye of a patient or subject. According to some embodiments, eye tracking device 166 may be configured to track only one eye of the subject. The data signals may relate to patterns of natural eye movements such as number of saccades, number of fixations, saccadic amplitude and/or saccadic latency observed as being performed by an eye of a subject. According to some embodiments, eye tracking device 166 may comprise at least one camera to record visual image data relating to the position and movement of at least one eye of a patient or user. According to some embodiments, eye tracking device 166 may further comprise computing components configured to derive eye movement parameters based on the generated visual image data, and to communicate these to input module 150. According to some alternative embodiments, eye tracking device 166 may communicate the generated visual image data to input module 150, and computing device 110 may be configured to derive the eye movement parameters. According to some embodiments, eye tracking device 166 may comprise a Gazepoint GP3 eye tracker with a refresh rate of 60Hz, for example.
User input device 168 may be configured to generate data signals based on interaction with a user such as a patient or subject, and to communicate these to input module 150. The data signals may indicate that the patient or subject is ready to commence a visual search task, for example. According to some embodiments, user input device 168 may generate data signals in response to physical interaction between the user and user input device 168 or physical manipulation of user input device 168 by a user. For example, user input device 168 may be a button or switch that generates data signals when pressed or switched. According to some embodiments, user input device 168 may form part of a touch screen display, and virtual buttons or switches may be presented on the touch screen display for a user to interact with.
According to some embodiments, one or more of I/O peripherals 160 may be integrated into a single device. For example, display screen 162 and eye tracking device 166 may be integrated into a single device such as the imo® by CREWT Medical Systems Inc., or the Octopus 600 by Haag-Streit Diagnostics, for example.
Computing device 110 may also include a communications module 170. Communications module 170 may allow for wired or wireless communication between computing device 110 and external computing devices, and may utilise Wi-Fi, USB, Bluetooth, or other communications protocols. For example, communications module 170 may facilitate communication between computing device 110 and external computing device 180. External computing device 180 may comprise one or more computing devices, server systems and/or databases, and may be in communication with computing device 110 via a network such as the internet, for example.

Figure 2 illustrates a method 200 that may be performed by system 100 to conduct a central visual field assessment of a subject. While method 200 is described as being performed by processor 120 of computing device 110, according to some embodiments, some or all of the method steps of method 200 may be performed by external computing device 180. According to some embodiments, method 200 may be performed while a subject is located in a viewing position of display screen 162, and eye tracking device 166 is configured to monitor at least one eye of the subject. According to some embodiments, the subject may be positioned between 40cm and 120cm from display screen 162. According to some embodiments, the subject may be positioned around 80cm from display screen 162, for example. According to some alternative embodiments, display screen 162 may be integrated into a head mounted display device to be worn on the head of the subject. In such embodiments, display screen 162 may be located relatively close to the subject's eye, such as between 5cm and 15cm from the subject's eye in some embodiments.
According to some embodiments, the subject may be positioned so that an area of display screen 162 occupies their central visual field area. For example, the subject may be positioned so that an area of display screen 162 occupies between a 0° and 30° visual angle of the subject’s visual field. According to some embodiments, the subject may be positioned so that an area of display screen 162 occupies around a 15° visual angle of the subject’s visual field, for example.
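By way of non-limiting illustration, the on-screen size required for the background to subtend a desired visual angle at a given viewing distance follows from simple trigonometry. The Python sketch below assumes a flat display; the function name and example values are illustrative only and do not form part of the described embodiments.

```python
import math

def stimulus_size_cm(visual_angle_deg: float, viewing_distance_cm: float) -> float:
    """Return the on-screen extent (in cm) that subtends visual_angle_deg
    at viewing_distance_cm, using the standard half-angle relation."""
    half_angle = math.radians(visual_angle_deg / 2.0)
    return 2.0 * viewing_distance_cm * math.tan(half_angle)

# Example: a background subtending 15 degrees viewed from 80 cm
# requires roughly 21 cm of screen.
print(round(stimulus_size_cm(15, 80), 1))  # 21.1
```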
At step 205, processor 120 executing stimulus presentation module 132 is caused to present a fixation check image to a subject via display screen 162. Processor 120 may retrieve a fixation check image from stimulus data 136, and communicate it to output module 140 for display on display screen 162. According to some embodiments, the fixation check image may comprise a uniform background with a visual target superimposed on the background. Example fixation check images are described below with reference to Figures 3A and 4A. According to some embodiments, the subject may be instructed to fixate on the target with at least one eye that is being tracked with eye tracking device 166. The subject may be instructed to indicate when they have fixated on the target by interacting with user input device 168. According to some embodiments, the instructions may be delivered audibly to the subject via speaker 164.
At step 210, processor 120 executing data processing module 133 receives data indicating that the subject has fixated on the target presented as part of the fixation check image on display screen 162. For example, processor 120 may receive a data signal generated by user input device 168 and delivered via input module 150. The data signal may be generated based on the subject interacting with user input device 168 in response to an instruction to indicate that they have fixated on the presented visual target.
At step 215, processor 120 executing data processing module 133 receives eye track data from eye tracking device 166 and delivered via input module 150. The eye track data may be generated by eye tracking device 166 tracking at least one eye of the subject as the subject fixates on the target presented as part of the fixation check image shown on display screen 162. According to some embodiments, the eye track data may comprise at least one coordinate relating to a position of the subject’s eye, and at least one timestamp relating to a time at which the data was captured.
At step 220, processor 120 executing data processing module 133 performs a fixation check to confirm that the subject is actually fixating on the presented target. Processor 120 may process the eye tracking data as received at step 215 to correlate it to a position on the fixation check image being displayed on display screen 162.
At step 225, processor 120 determines whether the subject is fixating on the presented target. According to some embodiments, processor 120 may determine that the subject is fixating on the presented target if the eye tracking data indicates that the position of fixation of the eye of the subject is within a predetermined threshold of the location of the presented target. For example, processor 120 may determine that the subject is fixating on the presented target if the eye tracking data indicates that the position of fixation of the eye of the subject is within a 1° visual angle tolerance of the location of the presented target.
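By way of non-limiting illustration, a fixation check of this kind may be implemented by converting the pixel offset between a gaze sample and the target into degrees of visual angle. The sketch below assumes a known viewing distance and display pixel density; the parameter pixels_per_cm and the function name are illustrative assumptions.

```python
import math

def gaze_within_tolerance(gaze_xy, target_xy, viewing_distance_cm,
                          pixels_per_cm, tolerance_deg=1.0):
    """Return True if a gaze sample (pixel coordinates) lies within
    tolerance_deg of visual angle of the fixation target."""
    dx_cm = (gaze_xy[0] - target_xy[0]) / pixels_per_cm
    dy_cm = (gaze_xy[1] - target_xy[1]) / pixels_per_cm
    offset_deg = math.degrees(math.atan2(math.hypot(dx_cm, dy_cm),
                                         viewing_distance_cm))
    return offset_deg <= tolerance_deg
```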
If processor 120 determines that the subject is not fixating on the presented target, then processor 120 may return to step 205 of method 200, and re-present the fixation check image and/or re-instruct the subject to fixate on the presented target. If processor 120 determines that the subject is fixating on the presented target, then processor 120 may proceed to step 230.
At step 230, processor 120 executing stimulus presentation module 132 is caused to present a visual search task to the subject. Processor 120 may do this by causing a stimulation image to be presented to the subject via display screen 162. Processor 120 may retrieve the stimulation image from stimulus data 136, and communicate it to output module 140 for display on display screen 162. According to some embodiments, the stimulation image may comprise a non-uniform background with a visual target superimposed on the background. According to some embodiments, the non-uniform background comprises complex spatial frequency content and orientation information similar to a natural scene. According to some embodiments, the non-uniform background does not comprise any real object content. According to some embodiments, the visual target may comprise a Gabor patch. Example stimulation images and example targets are described in further detail below with reference to Figures 3B, 4B, 5 and 6. According to some embodiments, the subject may be instructed to locate and fixate on the target with at least one eye that is being tracked with eye tracking device 166. The subject may be instructed to indicate when they have fixated on the target by interacting with user input device 168. According to some embodiments, the instructions may be delivered audibly to the subject via speaker 164.
At step 235, processor 120 executing data processing module 133 receives data indicating that the subject has located and fixated on the target presented as part of the stimulation image on display screen 162. For example, processor 120 may receive a data signal generated by user input device 168 and delivered via input module 150. The data signal may be generated based on the subject interacting with user input device 168 in response to an instruction to indicate that they have located and fixated on the presented visual target.
At step 240, processor 120 executing data processing module 133 receives eye track data from eye tracking device 166 and delivered via input module 150. The eye track data may be generated by eye tracking device 166 tracking at least one eye of the subject as the subject searches for the target presented as part of the stimulation image shown on display screen 162. According to some embodiments, the eye track data may comprise at least one coordinate relating to a position of the subject’s eye, and at least one timestamp relating to a time at which the data was captured. According to some embodiments, the eye track data may comprise data generated before the eye of the subject fixates on the presented target, being data generated while the eye is searching for and locating the target.
At step 245, processor 120 executing data processing module 133 determines an eye movement parameter based on the eye track data received from eye tracking device 166. According to some embodiments, the eye movement parameter may relate to the movement of the eye while searching for and locating the target, before the eye fixates on the target. According to some embodiments, the eye movement parameter may relate to a number of fixations measured before the eye fixated on the target. According to some embodiments, the eye movement parameter may relate to a duration of time measured before the eye fixated on the target. According to some embodiments, the eye movement parameter is a fixation parameter. According to some embodiments, the fixation parameter may be a number of fixations taken for the eye of the subject to fixate on the presented target. According to some embodiments, the fixation parameter may be a duration of time taken for the eye of the subject to fixate on the presented target. The duration of time taken for the eye of the subject to fixate on the presented target may be related to the number of fixations measured before the eye fixated on the target. According to some embodiments, the eye movement parameter may be another parameter relating to the movement of the eye of the subject as it searched for and/or fixated on the presented target, such as a number of saccades, saccadic amplitude and/or saccadic latency, for example.
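In the study described below, fixation counts were taken from the fixation timestamps reported by the eye tracker. Where only raw gaze samples are available, a dispersion-threshold pass (an I-DT style detector, named here explicitly because it is a substitute technique rather than the method prescribed by the described embodiments) is one common way to estimate the count. The sketch below assumes samples already converted to degrees of visual angle, and its thresholds are illustrative.

```python
def count_fixations(samples, dispersion_deg=1.0, min_duration_ms=100):
    """Estimate the number of fixations in a gaze trace with a simple
    dispersion-threshold (I-DT) pass. samples is a list of
    (t_ms, x_deg, y_deg) tuples in degrees of visual angle."""
    fixations, start = 0, 0
    while start < len(samples):
        end, window = start, samples[start:start + 1]
        # Grow the window while total x + y dispersion stays small.
        while end + 1 < len(samples):
            candidate = samples[start:end + 2]
            xs = [s[1] for s in candidate]
            ys = [s[2] for s in candidate]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_deg:
                break
            end, window = end + 1, candidate
        if len(window) > 1 and window[-1][0] - window[0][0] >= min_duration_ms:
            fixations += 1
            start = end + 1  # skip past the counted fixation
        else:
            start += 1       # no fixation here; slide forward one sample
    return fixations
```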
At step 250, processor 120 executing data processing module 133 compares the eye movement parameter determined at step 245 with a predetermined threshold. According to some embodiments, the predetermined threshold may be determined based on normative data 137. Normative data 137 may be used to determine the range of values seen in a majority of a control population for the fixation parameter, where the control population does not exhibit central visual field loss. A threshold value can then be determined by calculating a range for the eye movement parameter that encompasses the majority of the range of values seen in the control population. Outside of the calculated range, it may be likely that central visual field loss exists. According to some embodiments, comparing the eye movement parameter with a predetermined threshold may comprise determining whether the eye movement parameter is outside a predetermined threshold. Processor 120 may be configured to determine whether the eye movement parameter is higher than the threshold for the eye movement parameter for a particular location of the visual search target based on normative data 137. As normative data 137 may result in different normative limits for different locations of a visual search target, the threshold may vary depending on where on the non-uniform background the target was displayed. According to some embodiments, where steps 205 to 245 of method 200 are performed multiple times to present the visual search target in multiple locations, processor 120 may be configured to determine a number of locations for which the eye movement parameter was outside the normative limits, and to compare this with a predetermined threshold value.
At step 255, processor 120 executing data processing module 133 determines whether the subject exhibits central visual field loss in the eye being monitored, based on the comparison conducted at step 250. For example, if the measured eye movement parameter falls outside the normative limits in more than a predetermined threshold number of locations in which the visual search target is presented, processor 120 may determine that the eye being monitored exhibits central visual field loss.
At step 260, processor 120 may be configured to export the results determined at step 255. Processor 120 may do this by storing the results to a location in memory 130, or by communicating the results to external computing device 180 via communications module 170, for example. According to some embodiments, processor 120 may export the data by causing the data to be communicated via an I/O peripheral 160 such as display screen 162 or speaker 164.
Figure 3A shows an example layout 300 for a fixation check image as presented on display screen 162 of system 100 at step 205 of method 200. Layout 300 comprises a background 310 and a fixation target 320. According to some embodiments, background 310 may be a uniform background. For example, background 310 may be a uniform white, grey, or black background in some embodiments. According to some embodiments, fixation target 320 is located in a central region of background 310. According to some embodiments, fixation target 320 is located at the centre of background 310. Fixation target 320 may comprise one or more shapes, symbols, or other visual features that are visually distinct from background 310. According to some embodiments, fixation target 320 may comprise a visual feature that is distinct from background 310 in colour, shade or tone. According to some embodiments, fixation target 320 may comprise one or more dots, lines, squares, circles, triangles or crosses.
Figure 3B shows an example layout 350 for a stimulation image as presented on display screen 162 of system 100 at step 230 of method 200. Layout 350 comprises the background 310 and fixation target 320 as described above with reference to Figure 3A. Layout 350 further comprises a non-uniform background 330 and a visual search target 340. According to some embodiments, non-uniform background 330 may be presented as an overlay over background 310. According to some embodiments, non-uniform background 330 may be displayed to fill display screen 162 entirely, so that none of background 310 is visible. According to some embodiments, non-uniform background 330 may be free of real object content. According to some embodiments, non-uniform background 330 may comprise complex spatial frequency content. According to some embodiments, non-uniform background 330 may comprise complex orientation information. As natural scenes contain objects of varied orientation and spatial frequency content, generally biased towards low spatial frequencies and lower contrasts, non-uniform background 330 may comprise visual characteristics similar to a natural scene image. Non-uniform background 330 may be biased towards low spatial frequencies and/or lower contrasts, for example. Non-uniform background 330 may be generated to exhibit the spatial characteristics of a natural scene image without any of the real object content, to allow for aspects of visual processing in visual search tasks to be investigated without the added complexity of scene information such as colour, context, object information, or scene gist, for example.
According to some embodiments, non-uniform background 330 may comprise a textured cloudy background. According to some embodiments, non-uniform background 330 may comprise a noise background. As the power spectrum (or 2D representation of spatial frequency) of natural scenes is approximately proportional to the inverse of the frequency of the spectrum, and the relative log amplitude falls off roughly by a factor of 1/f, non-uniform background 330 may comprise a 1/f noise background in some embodiments. Non-uniform background 330 may comprise a random filtered noise image. In some embodiments, non-uniform background 330 may comprise an image filtered using a non-Gaussian band-pass filter, a low pass filter, or a high pass filter. In some embodiments, non-uniform background 330 may comprise a random filtered Gaussian noise image. For example, non-uniform background 330 may comprise a random filtered Gaussian noise image adjusted to the factor of 1/f^α. According to some embodiments, α may be 1.7. According to some embodiments, α may be between 1.0 and 2.0. According to some embodiments, α may be any value between 0.5 and 10.0.
According to some embodiments, the Root Mean Square (RMS) contrast of non-uniform background 330 may be defined using two times the standard deviation of the normalised pixel luminance across the whole image. For example, the RMS contrast of non-uniform background 330 may be scaled to 0.20 in some embodiments. In some embodiments, the RMS contrast of non-uniform background 330 may be scaled to between 0.10 and 0.30. In some embodiments, the RMS contrast of non-uniform background 330 may be scaled to another value. According to some embodiments, non-uniform background 330 may be isotropic. According to some embodiments, non-uniform background 330 may be displayed to subtend between 0 and 30 degrees of a visual field of a subject. According to some embodiments, non-uniform background 330 may be displayed to subtend around 15 degrees of the visual field of the subject. According to some embodiments, the size of non-uniform background 330 as displayed on display screen 162 may be adjusted based on a position of a subject from display screen 162, to ensure that non-uniform background 330 is displayed to subtend the desired angle of the visual field of the subject.
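By way of non-limiting illustration, a background with these properties may be synthesised by shaping the amplitude spectrum of Gaussian white noise to fall off as 1/f^α and then scaling the RMS contrast as defined above. The described embodiments do not prescribe an exact pipeline; the image size, mid-grey mean level and clipping in the NumPy sketch below are assumptions.

```python
import numpy as np

def make_noise_background(size=512, alpha=1.7, rms_contrast=0.20, seed=None):
    """Generate a random filtered Gaussian noise image whose amplitude
    spectrum falls off as 1/f**alpha, then scale its RMS contrast
    (taken as two times the standard deviation of normalised luminance)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx ** 2 + fy ** 2)
    f[0, 0] = 1.0                        # avoid division by zero at DC
    shaped = np.fft.fft2(noise) / f ** alpha
    img = np.real(np.fft.ifft2(shaped))
    img -= img.mean()                    # zero-mean luminance modulation
    img *= rms_contrast / (2.0 * img.std())
    return np.clip(img + 0.5, 0.0, 1.0)  # centre on mid-grey in [0, 1]
```

Calling make_noise_background(alpha=1.7, rms_contrast=0.20) would approximate the cloudy-textured background described for the study below, although the exact generation parameters used there are not specified.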
According to some embodiments, a number of non-uniform images may be generated and stored in stimulus data 136. At step 230 of method 200, processor 120 may select an image at random from the stored images in stimulus data 136 to display as non-uniform background 330.
Visual search target 340 may be displayed in a random location on non-uniform background 330. According to some embodiments, visual search target 340 may be displayed in a randomly selected location selected from a set of predefined locations, such as the 25 locations described below with reference to Figure 5. In some embodiments, the visual search target may be displayed in a different location to those shown in Figure 5. For example, while Figure 5 shows locations within a 15 degree visual field, the locations may be different if a different visual angle was used. Visual search target 340 may comprise one or more shapes, symbols, or other visual features that are visually distinct from non-uniform background 330. According to some embodiments, visual search target 340 may comprise a visual feature that is distinct from non-uniform background 330 in colour, shade, pattern or tone. According to some embodiments, visual search target 340 may comprise one or more dots, lines, squares, circles, triangles or crosses.
According to some embodiments, visual search target 340 may comprise a striped shape. According to some embodiments, visual search target 340 may comprise a Gabor patch. Visual search target 340 may comprise a six cycles per degree Gabor patch, for example. In some embodiments, visual search target 340 may comprise between a one cycle per degree and an eight cycles per degree Gabor patch. The Gabor patch may be generated by a sine wave grating masked by a Gaussian distribution of SD 0.17° in some embodiments. According to some embodiments, the SD of the Gaussian window may be selected as a different value, based on the desired size of the resulting visual search target 340. According to some embodiments, the Gabor patch may be oriented orthogonal to the visual field meridian. This may result in more uniform sensitivity in the presented visual field rings, so that all stimuli presented at a particular eccentricity from the centre of the visual field would be expected to have a similar contrast.
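By way of non-limiting illustration, a Gabor patch of the kind described may be rendered as a sine-wave grating under a circular Gaussian envelope. In the sketch below, the display calibration pixels_per_deg and the ±3 SD patch support are assumptions rather than parameters taken from the described embodiments.

```python
import numpy as np

def make_gabor(cpd=6.0, sigma_deg=0.17, pixels_per_deg=40,
               orientation_deg=0.0, contrast=1.0):
    """Render a Gabor patch: a sine-wave grating windowed by a circular
    Gaussian envelope of standard deviation sigma_deg."""
    half = int(round(3 * sigma_deg * pixels_per_deg))   # +/- 3 SD support
    axis = np.arange(-half, half + 1) / pixels_per_deg  # in degrees
    x, y = np.meshgrid(axis, axis)
    theta = np.radians(orientation_deg)                 # rotate the carrier
    xr = x * np.cos(theta) + y * np.sin(theta)
    grating = np.sin(2 * np.pi * cpd * xr)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma_deg ** 2))
    return contrast * grating * envelope                # values in [-1, 1]
```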
According to some embodiments, visual search target 340 may comprise a different visual feature. For example, visual search target 340 may comprise a sixth derivative (D6) of a spatial Gaussian function, in some embodiments. In some embodiments, visual search target 340 may be vertically oriented. In some embodiments, visual search target 340 may be rotated to be orthogonal to the visual field meridian.
In some embodiments, visual search target 340 may be adjusted to have a contrast with non-uniform background 330 that provides a 95% probability of being perceptible to a subject without vision loss, which may be calculated using normative limits estimated from the frequency of seeing curves of subjects without vision loss, for example. According to some embodiments, the normative limits may be calculated using a method such as method 1000 described below with reference to Figure 10.
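By way of non-limiting illustration, a contrast with a 95% probability of being seen may be read from a fitted frequency of seeing curve. The described embodiments do not prescribe a psychometric model; the cumulative-Gaussian form fitted by maximum likelihood in the SciPy sketch below is an assumption, as are the function and parameter names.

```python
import numpy as np
from scipy import optimize, stats

def contrast_for_seen_probability(contrasts, seen, p_target=0.95):
    """Fit a cumulative-Gaussian frequency of seeing curve to binary
    detection data and return the contrast seen with probability p_target."""
    contrasts = np.asarray(contrasts, dtype=float)
    seen = np.asarray(seen, dtype=float)

    def neg_log_likelihood(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        p = np.clip(stats.norm.cdf(contrasts, mu, sigma), 1e-6, 1 - 1e-6)
        return -np.sum(seen * np.log(p) + (1 - seen) * np.log(1 - p))

    result = optimize.minimize(neg_log_likelihood,
                               x0=[contrasts.mean(), contrasts.std() + 1e-3],
                               method="Nelder-Mead")
    mu, sigma = result.x
    return stats.norm.ppf(p_target, mu, sigma)
```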
Figure 4A shows a specific instance 400 of layout 300 for a fixation check image as presented on display screen 162 of system 100 at step 205 of method 200. Instance 400 comprises a background 410, being a specific instance of background 310, and a fixation target 420, being a specific instance of fixation target 320. Background 410 is a uniform background, being a uniform grey background. Fixation target 420 is located at the centre of background 410. Fixation target 420 comprises four white dots forming a diamond shape, having a 1° visual angle radius in the visual field of the subject.
Figure 4B shows a specific instance 450 of layout 350 for a stimulation image as presented on display screen 162 of system 100 at step 230 of method 200. Layout 450 comprises the background 410 and fixation target 420 as described above with reference to Figure 4A. Layout 450 further comprises a non-uniform background 430, being a specific instance of non-uniform background 330, and a visual search target 440, being a specific instance of visual search target 340. Non-uniform background 430 comprises a 1/f noise background, being a random filtered Gaussian noise image adjusted to the factor of 1/f^α where α is 1.7. Non-uniform background 430 has the appearance of a cloudy texture. The RMS contrast of non-uniform background 430 is defined using two times the standard deviation of the normalised pixel luminance across the whole image, scaled to 0.20. Non-uniform background 430 is isotropic and displayed to subtend around 15 degrees of the visual field of the subject.
Visual search target 440 is shown displayed to the top right of fixation target 420. Visual search target 440 is a six cycles per degree Gabor patch, generated by a sine wave grating masked by a Gaussian distribution of SD 0.17°, and oriented orthogonal to the visual field meridian. Visual search target 440 has the appearance of a soft circle shape filled with black and white stripes, the stripes angled on a diagonal extending from top left to bottom right. The contrast of visual search target 440 against background 430 is set at the 95% probability of seeing normative limits estimated from the frequency of seeing curves of a set of subjects having no vision loss.
Figure 5 shows an example layout 500 illustrating a number of possible locations 540 of a visual search target 340 on non-uniform background 330. The locations 540 include a central location 505, with three concentric rings each comprising 8 further locations, aligned to form a starburst pattern. While 25 possible locations 540 are illustrated, according to some embodiments visual search target 340 may be displayed in any location on non-uniform background 330.
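By way of non-limiting illustration, the 25 candidate positions (a central location plus three concentric rings of eight) may be generated as follows. The ring radii in this sketch are assumptions, since the described embodiments do not fix specific eccentricities; with the illustrative radii chosen here, one ring position falls at (0, 6) degrees, matching the example location discussed below.

```python
import math

def target_locations(ring_radii_deg=(2.0, 4.0, 6.0), n_per_ring=8):
    """Return 25 candidate target positions in degrees of eccentricity:
    one central location plus three concentric rings of eight."""
    locations = [(0.0, 0.0)]
    for radius in ring_radii_deg:
        for i in range(n_per_ring):
            angle = 2 * math.pi * i / n_per_ring
            locations.append((round(radius * math.cos(angle), 3),
                              round(radius * math.sin(angle), 3)))
    return locations

assert len(target_locations()) == 25  # central point + 3 rings of 8
```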
Figure 6 shows a specific instance of layout 500, comprising the non-uniform background 430 as illustrated in Figure 4. The possible locations of visual search target 340 are displayed as Gabor patches 640 oriented orthogonal to the visual field meridian, so that the stripes of each Gabor patch 640 appear to extend around the perimeter of each concentric circle surrounding the central Gabor patch 605. Each Gabor patch 640 may be identical to visual search target 440 as illustrated in Figure 4, except in orientation.
A study was performed to examine the effects of performing method 200 as described above on a group of subjects having no identified central visual field loss, and on a group of subjects having established visual field defects.
21 older adults having a mean age of 69 years and ranging in age from 61 to 79 years were selected as a control group. A further 20 people having a mean age of 72.5 years and ranging in age from 59 to 84 years and having a clinical diagnosis of glaucoma were chosen as the experimental group. The criterion for inclusion in the experimental group was the existence of established visual field defects, which were identified using the 24-2 test pattern.
In the study, each participant was seated at 80 cm from a display screen 162 with individual refractive correction applied for the 80 cm distance in a trial frame. The participant was asked to look at a fixation target 420 as illustrated in Figure 4, which was displayed at the centre of display screen 162 before the start of the experiment. The participants were instructed to start the task by a button press using a user input device 168 when looking at fixation target 420. As described above with reference to steps 220 and 225 of method 200, if their fixation was within 1° tolerance of the fixation target 420, the stimulus presentation began with an auditory tone played via speaker 164. If the fixation was outside 1° tolerance, the participants were re-instructed to look at the fixation target and the stimulus presentation began after they had passed the fixation check. The fixation check at the beginning of each stimulus presentation was performed to ensure that all participants commenced the visual search from the centre of the fixation target.
After the central fixation check, participants were required to search for a visual search target 440 embedded in the non-uniform background 430 as illustrated in Figure 4B. Once the participants located visual search target 440, they were asked to look at the target and simultaneously press the button on user input device 168 to indicate their final fixation on visual search target 440. If the participant could not find the target despite searching, they were asked to press another button on user input device 168 to indicate a “non-seen” target presentation. Testing was performed monocularly, where the dominant eye was chosen for the control group, and the worst eye as per the mean deviation of the 24-2 visual fields was chosen for the experimental group. Participants visually searched for a Gabor patch such as that described with reference to visual search target 440 of Figure 4 that appeared in one of 25 possible locations as described with reference to Figures 5 and 6. The Gabor patch was located within a 15° visual angle 1/f noise background with an RMS contrast of 0.20, as described above with reference to non-uniform background 430 of Figure 4.
A total of 20 trials were tested in the control group and 10-15 trials were tested in the experimental group, with each trial comprising randomly presenting the visual search target 440 in each of the 25 locations as described with reference to Figures 5 and 6. This enabled normative limits to be calculated from the control group. For each presentation, eye tracking device 166 generated the gaze points and the timestamp for each fixation, and provided these to computing device 110 via input module 150. The number of fixations for each presentation of visual search target 440 were estimated using the number of timestamps. The initial fixation from the centre of the fixation target 420 was verified and included in the fixation count.
The outcome measure was the number of fixations required by an eye of the subject to find the target, measured using an eye tracking device 166. Normative limits of 95%, 98%, and 99% for the number of fixations at each location were estimated based on the control group.
The procedure performance was assessed by calculating sensitivity and specificity for different combinations of normative limits, target locations with fixations outside normative limits, and number of repeated target presentations. The highest Area Under Curve (AUC) (computed as a partial AUC (pAUC) for specificity higher than 80%) was chosen. From this curve, the criterion with the highest sensitivity for specificity greater than 95% was selected as a thresholding criterion for determining central visual field loss in the experimental group. Specifically, the visual field of a subject was flagged “abnormal” when the fixation number was greater than the 99% normative limit for three or more locations averaged over two repeated presentations. This threshold gave an 85% sensitivity, 95.2% specificity, and 0.88 pAUC.
Figure 7 shows a graph 700 illustrating the results of the experiment as performed on the control group. The x-axis 710 and the y-axis 720 represent the eccentricity in degrees of the presented visual search targets 340, and circles 740 represent each of the locations in which the targets were presented. The first number in each circle 740 corresponds to the median number of fixations made to locate a target 340 at that location as displayed on a non-uniform background 330, while the second number corresponds to the interquartile range for the number of fixations made to locate the target 340. As illustrated in the graph, a median of 2 or 3 fixations was required depending on the location of target 340. This number includes the initial fixation on the central fixation target 320 at the beginning of the presentation.
Figure 8 shows a graph 800 illustrating the number of fixations for a single target location as a histogram. Graph 800 was generated based on fixations recorded when a target was displayed at a position of (0, 6) degrees. X-axis 810 shows the number of fixations taken to locate a target, while Y-axis 820 shows the proportion of people in each group. Bars 830 illustrate the proportion of subjects for each number of fixations that it took to locate the target.
As visible from graph 800, the number of fixations were not normally distributed. As the distribution of the number of fixations at all locations was positively skewed in the control group, a Gamma distribution 840 was fit to the number of fixations at each location using the ‘dgamma’ function in the R software environment, where the shape and rate parameters were adjusted by minimising the sum of squared differences.
From the fitted Gamma distributions, the 95%, 98% and 99% quantiles were extracted and then rounded as normative limits. These are illustrated as lines 850, 860, and 870, respectively. As illustrated, the 95% normative limit shown by line 850 was a fixation number of 6.20, meaning that subjects with a fixation number higher than 6 would be considered as outside 95% normative limits for that location. The 98% normative limit shown by line 860 was a fixation number of 7, and the 99% normative limit shown by line 870 was a fixation number of 8. For each subject in the experimental group, any fixation number higher than the normative limits was considered abnormal for that location.
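By way of non-limiting illustration, the fit described above (a Gamma density adjusted by minimising the sum of squared differences against the observed proportions, performed in the study with the R 'dgamma' function) may be approximated in Python as below. The moment-based starting values and the rounding of the quantiles are assumptions.

```python
import numpy as np
from scipy import optimize, stats

def gamma_normative_limits(fixation_counts, quantiles=(0.95, 0.98, 0.99)):
    """Fit a Gamma density to per-location fixation counts by minimising
    the sum of squared differences against the observed proportions,
    then return rounded quantiles as normative limits."""
    counts = np.asarray(fixation_counts, dtype=float)
    values, freqs = np.unique(counts, return_counts=True)
    props = freqs / freqs.sum()

    def sse(params):
        shape, rate = params
        if shape <= 0 or rate <= 0:
            return np.inf
        fitted = stats.gamma.pdf(values, a=shape, scale=1.0 / rate)
        return float(np.sum((fitted - props) ** 2))

    mean, var = counts.mean(), max(counts.var(), 1e-6)
    result = optimize.minimize(sse, x0=[mean ** 2 / var, mean / var],
                               method="Nelder-Mead")  # moment-based start
    shape, rate = result.x
    return [round(float(stats.gamma.ppf(q, a=shape, scale=1.0 / rate)))
            for q in quantiles]
```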
For the purpose of making recommendations regarding future implementation as a screening procedure, a criterion for the entire central visual field to be considered abnormal was determined. Various parameters were explored to determine the criteria for abnormality. These included the number of locations in which the number of fixations was outside the normative limits (referred to as k); the level of the normative limits (being either 95%, 98% or 99%, for example, and referred to as p); and the number of trials required (referred to as n and being the number of repeats of the stimulus target at each location). Receiver Operating Characteristic (ROC) curves were plotted to determine the specificity, sensitivity, and the area under the curve for the screening approach for varied k values across each combination of n and p values, with higher specificity, higher sensitivity, and a greater area under the curve (AUC) being considered as the better performance.
A combination of parameters was chosen that gave the highest partial AUC (pAUC, calculated as the AUC computed for specificities higher than 80%) and the highest specificity possible less than 100% (being 95.2% for the data collected). The best criterion under this definition with the data collected was determined to be with parameters n = 2 where k ≥ 3 locations at p = 99%. This gave a sensitivity of 85% (95% CI: 70% - 100%) with a pAUC of 0.88 (95% CI: 0.75 - 0.98).
A central visual field was therefore flagged as abnormal when the median number of fixations over two trials at each tested location was greater than the p = 99% normative limit for k = 3 or more tested locations.
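By way of non-limiting illustration, this selected screening rule (n = 2 trials, p = 99% limits, abnormal when k = 3 or more locations exceed their limit) may be expressed as follows; the data-structure choices and function name are illustrative only.

```python
import statistics

def flag_abnormal_field(fixations_by_location, limits_99, k=3):
    """Flag the central visual field as abnormal when the median fixation
    count over the repeated trials exceeds the 99% normative limit at k
    or more locations. fixations_by_location maps location -> per-trial
    fixation counts; limits_99 maps location -> 99% normative limit."""
    n_outside = sum(
        1 for loc, trials in fixations_by_location.items()
        if statistics.median(trials) > limits_99[loc]
    )
    return n_outside >= k
```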
Figure 9 shows an example graph 900 of the median number of fixations over 20 trials from a single subject in the experimental group, and illustrates locations where the fixation number falls outside normal limits. The x-axis 910 and the y-axis 920 represent the eccentricity in degrees of the presented visual search targets 340, and circles 940 represent each of the locations in which the targets were presented. The number in each circle 940 corresponds to the median number of fixations made to locate a target 340 at that location as displayed on a non-uniform background 330. The grey circles 944 represent the locations where the median number of fixations fell outside the normative limit p, which was 95% in the illustrated example. The white circles 942 represent the locations where the median number of fixations fell within the normative limit p. As seen from the graph, this subject had a median number of fixations ranging from 3 to 22. 19 locations represented by circles 944 had median numbers of fixations outside the normative limit, and only 6 locations represented by circles 942 had median numbers within the normative limits.
The results of conducting the experiment on the experimental group showed that in some embodiments, the described testing protocol could detect an abnormal central visual field with a 95.2% specificity and an 85% sensitivity within an average time of about 1.5 minutes using low-cost hardware. The median total testing time taken by the control participants to perform the task for two trials was 85.71 (66.49 - 113.53) seconds.
Figure 10 illustrates a method 1000 that may be performed by system 100 to conduct a central visual field assessment of a subject. According to some embodiments, method 1000 may be used as an alternative or additional assessment to method 200 as described above with reference to Figure 2. In some embodiments, method 1000 may be used to calculate normative limits of the vision of subjects without vision loss. Such normative limits may be used to determine whether visual search target 340 is likely to be perceptible to a subject without vision loss, for example. Method 1000 may include processor 120 initially conducting a fixation check by executing steps 205 to 225, as described above with reference to method 200.
At step 1010, processor 120 executing stimulus presentation module 132 is caused to then present a first stimulus image to a subject via display screen 162. Processor 120 may retrieve the stimulus image from stimulus data 136, and communicate it to output module 140 for display on display screen 162. According to some embodiments, the stimulation image may comprise a non-uniform background. According to some embodiments, the non-uniform background comprises complex spatial frequency content and orientation information similar to a natural scene. According to some embodiments, the non-uniform background does not comprise any real object content. According to some embodiments, the first stimulus image may be displayed for between 100ms and 500ms. The first stimulus image may be displayed for around 250ms, for example.
At step 1020, processor 120 executing stimulus presentation module 132 is optionally caused to present an inter-stimulus image to a subject via display screen 162. Processor 120 may retrieve the inter-stimulus image from stimulus data 136, and communicate it to output module 140 for display on display screen 162. According to some embodiments, the inter-stimulus image may be the same as the fixation check image presented at step 205. According to some embodiments, the inter-stimulus image may be any image that is different to the image presented at step 1010. According to some embodiments, the inter-stimulus image may be displayed for between 250ms and 1000ms. The inter-stimulus image may be displayed for around 500ms, for example.
At step 1030, processor 120 executing stimulus presentation module 132 is optionally caused to then present a subsequent stimulus image to a subject via display screen 162. Processor 120 may retrieve the subsequent stimulus image from stimulus data 136, and communicate it to output module 140 for display on display screen 162. According to some embodiments, the stimulation image may comprise a non-uniform background. According to some embodiments, the non-uniform background comprises complex spatial frequency content and orientation information similar to a natural scene. According to some embodiments, the non-uniform background does not comprise any real object content. According to some embodiments, the subsequent stimulus image may be displayed for between 100ms and 500ms. The subsequent stimulus image may be displayed for around 250ms, for example. According to some embodiments, processor 120 may optionally repeat steps 1020 and 1030 one or more times. According to some alternative embodiments, processor 120 may skip steps 1020 and 1030, and perform step 1040 directly after step 1010.
At least one of the first stimulus image presented at step 1010 and a subsequent stimulus image presented at step 1030 may include a visual target superimposed on the background. According to some embodiments, the visual target may comprise a Gabor patch. According to some embodiments, processor 120 may randomly select whether to include the visual target when presenting the stimulus images at steps 1010 and 1030. According to some embodiments, at least one of the stimulus images presented at steps 1010 and 1030 comprises a visual target. According to some embodiments, only one of the stimulus images presented at steps 1010 and 1030 comprises a visual target.
At step 1040, processor 120 executing data processing module 133 receives user input data indicating which, if any, of the one or more stimulus images the subject perceived to include the visual target. For example, processor 120 may receive a data signal generated by user input device 168 and delivered via input module 150. The data signal may be generated based on the subject interacting with user input device 168 in response to an instruction to indicate which one or more stimulus images included the visual target. According to some embodiments, the subject may indicate that they did not perceive the visual target in any presented stimulus images.
According to some embodiments, steps 205 to 1040 may be repeated one or more times to collect a plurality of user response data.
At step 1050, processor 120 executing data processing module 133 determines how many of the user responses were correct. Processor 120 may do this by comparing the user responses with stored data indicating the correct response corresponding to which, if any, stimulus image or images included the visual target.
At step 1060, processor 120 executing data processing module 133 optionally determines whether the subject exhibits central visual field loss in the eye being monitored, based on the number of correct responses determined at step 1050. According to some embodiments, the number of correct responses, or a percentage of correct responses, may be compared to a predetermined threshold. If processor 120 determines that the number or percentage of incorrect responses exceeds a predetermined threshold, processor 120 may determine that the eye being monitored exhibits central visual field loss.
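Steps 1050 and 1060 reduce to tallying response errors across trials and comparing against a threshold. A minimal sketch, assuming the responses and stored correct answers are parallel sequences and using a purely illustrative 25% error threshold:

```python
def central_field_loss_suspected(responses, correct_answers,
                                 error_threshold=0.25):
    """Count incorrect responses across trials (step 1050) and flag
    possible central visual field loss when the proportion of errors
    exceeds a predetermined threshold (step 1060)."""
    errors = sum(r != c for r, c in zip(responses, correct_answers))
    return errors / len(responses) > error_threshold
```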
At step 1070, processor 120 may be configured to export the results determined at step 1060. Processor 120 may do this by storing the results to a location in memory 130, or by communicating the results to external computing device 180 via communications module 170, for example. According to some embodiments, processor 120 may export the data by causing the data to be communicated via an I/O peripheral 160 such as display screen 162 or speaker 164.
It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.

Claims

CLAIMS:
1. A method for detecting a vision defect in a subject, the method comprising: displaying a non-uniform background on a display screen; displaying a visual search target superimposed on the background; receiving eye tracking data related to movement of an eye of the subject viewing the display screen and locating the visual search target; determining an eye movement parameter relating to the received eye tracking data, wherein the eye movement parameter relates to a number of fixations taken to locate the visual search target; comparing the eye movement parameter with a predetermined threshold; and upon determining that the eye movement parameter exceeds the threshold, determining that the subject has a vision defect.
2. The method of claim 1, wherein the vision defect is central visual field loss.
3. The method of claim 1 or claim 2, wherein the eye movement parameter relates to a number of fixations taken to locate the visual search target.
4. The method of claim 3, wherein the eye movement parameter comprises the number of fixations taken to locate the visual search target.
5. The method of claim 3, wherein the eye movement parameter comprises a duration of time taken to locate the target.
6. The method of any one of claims 1 to 5, wherein the non-uniform background comprises complex spatial frequency content and complex orientation content.
7. The method of any one of claims 1 to 6, wherein the non-uniform background is free of real object content.
8. The method of any one of claims 1 to 7, wherein the non-uniform background comprises a noise background.
9. The method of any one of claims 1 to 8, wherein the non-uniform background comprises a 1/f noise background.
10. The method of any one of claims 1 to 9, wherein the non-uniform background comprises a random filtered Gaussian noise image.
11. The method of any one of claims 1 to 10, wherein the non-uniform background is displayed to subtend between 0 and 30 degrees of a visual field of the subject.
12. The method of claim 11, wherein the non-uniform background is displayed to subtend 15 degrees of a visual field of the subject.
13. The method of any one of claims 1 to 12, wherein the visual search target is a Gabor patch.
14. The method of any one of claims 1 to 13, wherein the visual search target is a six cycles per degree Gabor patch.
15. The method of any one of claims 1 to 14, wherein the visual search target is oriented orthogonal to the visual field meridian.
16. The method of any one of claims 1 to 15, further comprising presenting a fixation target for the eye of the subject to fixate on before presentation of the visual search target.
17. The method of claim 16, further comprising performing a fixation check to ensure that the eye of the subject is fixated on the fixation target before presentation of the visual search target.
18. The method of claim 17, wherein the eye of the subject is determined to be fixated on the fixation target if the eye of the subject is determined to be fixated on a location within a predetermined threshold of the fixation target.
19. The method of claim 18, wherein the predetermined threshold is a 1 degree visual angle threshold.
20. A method for detecting a vision defect in a subject, the method comprising: displaying a non-uniform background on a display screen; at random, determining whether to display a visual search target superimposed on the background, and in response to determining to display the visual search target superimposed on the background, displaying the visual search target superimposed on the background; receiving user input data related to whether the subject perceived a visual search target superimposed on the background; determining whether the user input data was correct based on whether the visual search target was displayed; and upon determining that the user input data was incorrect, determining that the subject has a vision defect.
21. The method of claim 20, wherein the vision defect is central visual field loss.
22. The method of claim 20 or claim 21, wherein the non-uniform background comprises complex spatial frequency content and complex orientation content.
23. The method of any one of claims 20 to 22, wherein the non-uniform background is free of real object content.
24. The method of any one of claims 20 to 23, wherein the non-uniform background comprises a noise background.
25. The method of any one of claims 20 to 24, wherein the non-uniform background comprises a 1/f noise background.
26. The method of any one of claims 20 to 25, wherein the non-uniform background comprises a random filtered Gaussian noise image.
27. The method of any one of claims 20 to 26, wherein the non-uniform background is displayed to subtend between 0 and 30 degrees of a visual field of the subject.
28. The method of claim 27, wherein the non-uniform background is displayed to subtend 15 degrees of a visual field of the subject.
29. The method of any one of claims 20 to 28, wherein the visual search target is a Gabor patch.
30. The method of any one of claims 20 to 29, wherein the visual search target is a six cycles per degree Gabor patch.
31. The method of any one of claims 20 to 30, wherein the visual search target is oriented orthogonal to the visual field meridian.
32. A device comprising: a processor; and memory storing program code that is accessible to the processor; wherein when executing the program code, the processor is caused to perform the method of any one of claims 1 to 31.
33. A system comprising: the device of claim 32; and the display screen for displaying the non-uniform background and the visual search target.
34. The system of claim 33, further comprising an eye tracking device configured to generate the eye tracking data.
PCT/AU2022/050281 2021-09-22 2022-03-29 Methods and systems for screening for central visual field loss WO2023044520A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
AU2021903043 2021-09-22
AU2021903043A AU2021903043A0 (en) 2021-09-22 Methods and systems for screening for central visual field loss

Publications (1)

Publication Number Publication Date
WO2023044520A1 (en) 2023-03-30

Family

ID=85719101

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/AU2022/050281 WO2023044520A1 (en) 2021-09-22 2022-03-29 Methods and systems for screening for central visual field loss

Country Status (1)

Country Link
WO (1) WO2023044520A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4257690A (en) * 1979-04-16 1981-03-24 Massachusetts Institute Of Technology Eye testing chart
US4615594A (en) * 1985-01-23 1986-10-07 The United States Of America As Represented By The Secretary Of The Air Force Vision test chart and method using Gaussians
WO1999015071A1 (en) * 1997-09-23 1999-04-01 Virtual-Eye.Com Color on color vision test and apparatus
US6736511B2 (en) * 2000-07-13 2004-05-18 The Regents Of The University Of California Virtual reality peripheral vision scotoma screening
WO2019099572A1 (en) * 2017-11-14 2019-05-23 Vivid Vision, Inc. Systems and methods for visual field analysis
WO2021051162A1 (en) * 2019-09-16 2021-03-25 Eyonic Pty Limited Method and/or system for testing visual function
US20210298593A1 (en) * 2020-03-30 2021-09-30 Research Foundation For The State University Of New York Systems, methods, and program products for performing on-off perimetry visual field tests


Similar Documents

Publication Publication Date Title
US9039182B2 (en) Video game to monitor retinal diseases
US9433346B2 (en) Circular preferential hyperacuity perimetry video game to monitor macular and retinal diseases
US8500278B2 (en) Apparatus and method for objective perimetry visual field test
US8337019B2 (en) Testing vision
US7665847B2 (en) Eye mapping
US20030223038A1 (en) Methods, devices and systems for assessing eye disease
JP2018520820A (en) Method and system for inspecting visual aspects
US20210007599A1 (en) Visual testing using mobile devices
WO2013170091A1 (en) Rapid measurement of visual sensitivity
EP2566379B1 (en) A supra-threshold test and a sub-pixel strategy for use in measurements across the field of vision
Geringswald et al. Impairment of visual memory for objects in natural scenes by simulated central scotomata
CN115553707A (en) Contrast sensitivity measurement method and device based on eye movement tracking
Zeppieri et al. Frequency doubling technology (FDT) perimetry
WO2023044520A1 (en) Methods and systems for screening for central visual field loss
Hung et al. Low-contrast acuity under strong luminance dynamics and potential benefits of divisive display augmented reality
Groth et al. Evaluation of virtual reality perimetry and standard automated perimetry in normal children. Transl Vis Sci Technol. 2023; 12 (1): 6
RU2788962C2 (en) System and method for digital measurement of characteristics of stereoscopic vision
Arefin et al. Mapping Eye Vergence Angle to the Depth of Real and Virtual Objects as an Objective Measure of Depth Perception
Mao et al. Different eye movement patterns on simulated visual field defects in a video-watching task
Sipatchin et al. Eye-Tracking for Clinical Ophthalmology with Virtual Reality (VR): A Case Study of the HTC Vive Pro Eye’s Usability. Healthcare 2021, 9, 180
Liu et al. Central Vision Assessment through Gaze Tracking

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22871176

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE