WO2020232309A1 - Method and apparatus to track binocular eye motion - Google Patents

Method and apparatus to track binocular eye motion

Info

Publication number
WO2020232309A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
image
reference frame
gaze
light
Prior art date
2019-05-15
Application number
PCT/US2020/032996
Other languages
French (fr)
Inventor
Austin John ROORDA
Martin S. BANKS
Agostino GIBALDI
Norick R. BOWERS
Original Assignee
The Regents Of The University Of California
Priority date
2019-05-15
Filing date
Publication date
Application filed by The Regents Of The University Of California
Publication of WO2020232309A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/02 Subjective types, i.e. testing apparatus requiring the active assistance of the patient
    • A61B 3/08 Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing binocular or stereoscopic vision, e.g. strabismus
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016 Operational features thereof
    • A61B 3/0041 Operational features thereof characterised by display arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes

Definitions

  • The system 800 may be incorporated into a single hardware device. For example, the system 800 may be part of a head mounted display. Alternatively, the system 800 may be deployed as one or more separate devices.
  • The design of the present optical apparatus may provide sufficient eye relief to enable simultaneous use of corneal-reflection-based eye trackers. In other words, the present method and apparatus may allow another eye tracker to be deployed between it and the eyes of a subject or user, and may then be used to calibrate or determine the accuracy of the additional eye tracker, aiding in the development of other eye trackers.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Ophthalmology & Optometry (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A method and apparatus to track eye motion are disclosed. The method includes generating a reference frame, capturing a video frame of an eye while the eye is engaged in a fixation task, calculating a direction and rotation of gaze of the eye based on the video frame relative to the reference frame, and updating an image on a display based on the direction and rotation of gaze of the eye that is calculated.

Description

METHOD AND APPARATUS TO TRACK BINOCULAR EYE MOTION
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to United States Provisional Patent Application Serial No. 62/848,121, filed May 15, 2019, which is herein incorporated by reference in its entirety.
FIELD OF THE INVENTION
[0002] The present disclosure relates generally to eye tracking, binocular eye movements, retinal imaging, and stabilization of images onto the retina.
BACKGROUND
[0003] Human eyes are in constant motion. The eye motions that redirect gaze consist of drift (a random-walk-like motion) and saccades (fast, jerk-like redirections of gaze). The motions of the left and the right eyes are strongly correlated. In general, binocular eye movements comprise version, where two eyes redirect gaze in the same direction, and vergence, where the two eyes move in opposite directions. Horizontal vergence eye movements, for example, are required when two eyes need to converge to fixate on near objects.
[0004] In addition to the movements that redirect gaze, the eyes also undergo torsional eye movements, or rotations about an axis that is approximately in line with the optical axis. Like lateral eye movements, torsional eye movements are correlated between the two eyes and can be in the same or opposite directions. Different systems and techniques have been developed to measure eye motion, gaze direction, and/or retinal image motion. However, currently existing methods may not be able to track eye motion accurately over a large range.
SUMMARY
[0005] According to aspects illustrated herein, there is provided a method and apparatus to track eye motion. One disclosed feature of the embodiments is a method comprising generating a reference frame, capturing a video frame of an eye while the eye is engaged in a fixation task, calculating a direction and rotation of gaze of the eye based on the video frame relative to the reference frame, and updating an image on a display based on the direction and rotation of gaze of the eye that is calculated.
[0006] In another aspect, the present disclosure provides an apparatus to track eye motion. The apparatus comprises a light source to emit a light, a beam-splitter to split the light into a first portion of the light and a second portion of the light, a series of optics to direct the light towards an eye of a user, and a camera to capture the second portion of the light that includes an image of the eye of the user, wherein a direction and rotation of gaze of the eye is to be calculated based on the image and a reference frame of the eye.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The teaching of the present disclosure can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
[0008] FIG. 1 illustrates an example of the optical apparatus of the present disclosure;
[0009] FIG. 2 illustrates an example of two of the optical apparatuses to form a binocular apparatus of the present disclosure;
[0010] FIG. 3 illustrates an example of a reference frame that is generated of the present disclosure;
[0011] FIG. 4 illustrates an example step of a large-field eye tracking of the present disclosure;
[0012] FIG. 5 illustrates an example of an additional step of the large-field eye tracking of the present disclosure;
[0013] FIG. 6 illustrates an example of an additional step of the large-field eye tracking of the present disclosure;
[0014] FIG. 7 illustrates an example of an additional step of the large-field eye tracking of the present disclosure;
[0015] FIG. 8 illustrates an example of a system using the optical apparatus of the present disclosure; and
[0016] FIG. 9 illustrates a flowchart of an example method for large-field eye tracking of the present disclosure.

DETAILED DESCRIPTION
[0017] The present disclosure provides a method and apparatus to track binocular eye motion over a large range. As noted above, human eyes are in constant motion. The eye motions that redirect gaze consist of drift (a random-walk-like motion) and saccades (fast, jerk-like redirections of gaze). The motions of the left and the right eyes are strongly correlated. In general, binocular eye movements comprise version, where two eyes redirect gaze in the same direction, and vergence, where the two eyes move in opposite directions. Horizontal vergence eye movements, for example, are required when two eyes need to converge to fixate on near objects.
[0018] In addition to the movements that redirect gaze, the eyes also undergo torsional eye movements, or rotations about an axis that is approximately in line with the optical axis. Like lateral eye movements, torsional eye movements are correlated between the two eyes and can be in the same or opposite directions.
[0019] There are different systems and techniques to measure eye motion, gaze direction and/or retinal image motion; some of the more commonly used methods are described briefly here. In a first method, called the dual-Purkinje-image eye tracker, the relative displacements between the reflections of a light source from the corneal first surface and the lens back surface are used to compute gaze direction.
[0020] In a second method, the relative displacements of the pupil position and the reflection of a light source from the first surface of the cornea are used to compute gaze direction.
[0021] In a third method, often referred to as an optical lever method, gaze direction is measured by recording the displacements of a beam of light reflected off a small mirror fixed to the cornea.
[0022] In a fourth method, called the magnetic search coil method, changes in the gaze direction are recorded from the voltage generated in a coil of wire embedded in a scleral contact lens worn by the subject, while the subject is within a magnetic field.
[0023] The four methods described above can all be classified as those that measure gaze direction based on measurements from the anterior structures in the eye (cornea, pupil and lens). Of the above four methods, only the search coil method can also be used directly to measure torsional eye motion. Torsion can also be measured using a camera to image and measure rotation of the iris. However, the appearance of the iris in an off-axis gaze position will be distorted by the cornea.
[0024] The class of eye tracker on which the current disclosure is based involves measurement of gaze direction and torsion by direct observation of the retina. In the simplest case, with a full field camera, the lateral eye motion is measured by quantifying the shift of the entire image, frame-by-frame over time. As such, the sampling rate of eye motion measurement is limited by the frame rate of the camera. In a scanning system (point or line scanning), the eye motion can be measured by determining the position of a line, or a strip (collection of lines) of the image, relative to a reference image. As such, the rate of eye motion measurement is the product of the number of strips per frame and the frame rate. For image-based tracking, torsion is measured by quantifying the relative rotation of each line, strip or image.
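A minimal sketch of that rate product, using the 140 kHz line rate and 512-line frames quoted later in paragraph [0039]; the 16-strip split is an illustrative assumption, not a prescribed parameter:

```python
# Effective eye-motion sampling rate of a strip-based scanning tracker.
line_rate_hz = 140_000          # linear-array camera line rate ([0039])
lines_per_frame = 512
strips_per_frame = 16           # assumed for illustration

frame_rate = line_rate_hz / lines_per_frame      # ~273 frames/s
sample_rate = frame_rate * strips_per_frame      # ~4,375 eye-position samples/s
print(f"{frame_rate:.0f} fps -> {sample_rate:.0f} samples/s")
```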
[0025] The accuracy of tracking systems is generally dependent on the sampling resolution and the signal-to-noise ratio of the data they record. In retinal-image-based systems, the accuracy increases with image sampling density and retinal image resolution. Scanning imaging systems that use adaptive optics and have sampling densities finer than the diffraction-limited point spread function of the system have tracking accuracies of a fraction of a minute of arc, which is the highest accuracy reported in the literature.
[0026] Another advantage of retinal-image-based eye trackers is that they measure the motion of the retinal image directly. Some published research has suggested that there are discrepancies between eye motion inferred from measurements of the anterior structures of the eye and the actual movement of the image on the retina. These discrepancies arise because, in a moving eye, the lens will move independently of other ocular structures, and these displacements give rise to prismatic effects, which in turn give rise to displacements of the retinal image. These displacements can be measured directly by recording the retinal motion.

[0027] High speed eye trackers can be used to guide the presentation of imagery to the retina, or delivery of light to the retina, so that the light lands at targeted retinal locations. A foveated display, for example, uses information from the eye tracker to render the display with highest resolution along and around the line of sight. In other applications, eye tracking could be used to guide the delivery of light to specific cones in the retina. In yet another application, eye tracking could be used to guide the delivery of a therapeutic or treatment laser to a targeted retinal location.
[0028] The faster and more accurate the eye tracker, the higher the performance of targeted light delivery to the retina will be. Retina-image-based eye trackers are reported to be the most accurate. However, the manner in which the tracking is implemented may impose fundamental limitations in the range of eye motion that can be measured. The limitation is due to the size of the reference frame. If the reference frame is selected from the video sequence, then the tracking will only be possible when the current frame in the video overlaps with the reference. To increase the dynamic range, a larger field size could be used for imaging, but this often comes at the cost of reduced sampling resolution.
[0029] In the current disclosure, a large reference frame is generated prior to the tracking. In one embodiment, "large" may be defined to be a plurality of video frames that are stitched together to form the reference frame. The large reference frame may be generated using the same camera that is used to capture the individual video frames. As a result, the large reference frame may have the same resolution and accuracy as each video frame.
[0030] The large reference frame has an identical scale (in pixels per degree) as the video used for tracking. A consideration for the large field reference frame is that the image may contain distortions from the eye’s optics. However, these optical distortions only correspond to the retinal image. Each pixel in the image corresponds to an exact visual direction, and the changes in visual direction between adjacent pixels in the image remain constant across the field of the image. In other words, the distortions that affect the video of the retina may affect the retina’s view of the outside world in precisely the same fashion. Therefore, it would be disadvantageous to remove such optical distortions from the reference frame.
[0031] The present disclosure possesses all the accuracy advantages of a small-field tracker, without imposing any limitations on the range of the tracker. A second advantage of using a small-field eye tracker within a large-field eye tracker is that it is more practical to create a system with a lot of eye relief, which is the distance between the last element in the system and the eye of the subject being imaged. Having a lot of eye relief makes it possible to add auxiliary components into the system, such as beam-splitters for secondary displays, or other ocular measurement devices.
[0032] The present disclosure and its various embodiments provide a system that is designed for accurate eye tracking over a large range of gaze directions of both eyes simultaneously. There are several advantages of this eye tracker over other eye trackers that are in current use. First, the use of the retinal image allows for direct and unambiguous measurement of the gaze direction. By comparison, estimations of gaze direction that are based on measurements of the anterior segment, which include pupil trackers, dual Purkinje image eye trackers, and search coil based eye trackers, fail to take into consideration shifts in the retinal image caused by tilt and decentration of the crystalline lens.
[0033] Second, the fidelity of the eye trace that is measured from the distortions in the image can be ensured by measuring the accuracy of the registration of the strips in the video with the reference frame. A confidence value for each sample in the eye motion trace can be obtained in this manner.
[0034] Third, a retinal-image-based eye tracker has no offset or ambiguity in the absolute direction of gaze, since the fovea is part of the reference image. Once a reference frame is acquired, no further calibrations are needed.
[0035] Fourth, the use of the retinal image for eye tracking allows for direct, unambiguous, and simultaneous measurements of torsional eye movements without having to add any additional components to the system.
[0036] All of the aforementioned eye trackers can be applied to both eyes to form a binocular eye tracker. In the current disclosure, a method and a system to track both eyes using a binocular retinal-image-based eye tracker are described.
[0037] To record a single-line image of the retina, a line scanning ophthalmoscope (LSO) may be used. An LSO collects the scattered light from a line-illumination on the retina using a linear array camera. An image of the retina is generated by collecting images of multiple adjacent lines.
[0038] In one embodiment, the adjacent lines are collected by scanning a line of illumination across the retina using a scanning mirror that is in the path of the system and conjugate to the pupil of the eye. In this embodiment, the light returning from the eye is de-scanned by the same scanning mirror and redirected to a linear array camera. A processor, or computer, reads the lines from the camera in sequence and renders them into a retinal image that is displayed and/or saved on the computer.
[0039] In this embodiment, a high line rate is achieved by using a fast linear array detector. CMOS linear arrays, for example, have line rates of 140 kHz. With such a line rate, the frame rate is 140,000 divided by the number of lines per frame. For example, a 512 line image can be acquired at 273 frames per second. In this embodiment, the extent of the line-scanned image may be kept as small as is necessary to optimally sample the retinal features that may be used to track the eye motion.
[0040] In practice, a typical line separation of 0.5 minutes of arc or less may be employed, where an angle of 1 minute of arc spans a retinal distance of approximately 5 microns, depending on the exact dimension of the eye. With a pixel dimension of 0.5 minutes of arc, a 512 x 512 pixel retinal image will have a field size of 4.26 x 4.26 degrees and the Nyquist sampling limit will be 1 minute of arc (approximately 5 microns, depending on the length of the eye), which is about the same size of cone photoreceptors 300 microns outside the fovea.
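The field-size and Nyquist arithmetic of paragraphs [0039]-[0040] can be checked in a few lines; a minimal sketch (the 5 microns-per-arcmin conversion is the approximate figure given above, which depends on eye length):

```python
# Check of the sampling arithmetic in [0039]-[0040].
pixel_arcmin = 0.5                       # pixel dimension, minutes of arc
lines = 512

field_deg = lines * pixel_arcmin / 60    # 4.27 degrees across the frame
nyquist_arcmin = 2 * pixel_arcmin        # 1 arcmin Nyquist sampling limit
microns_per_arcmin = 5                   # approximate, depends on eye length

print(f"field: {field_deg:.2f} deg, Nyquist: {nyquist_arcmin:.0f} arcmin "
      f"(~{nyquist_arcmin * microns_per_arcmin:.0f} microns on the retina)")
```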
[0041] In this embodiment, the LSO may be designed to be diffraction- limited over a minimum field size of 5 degrees for a beam diameter of 5 millimeters (mm) or less (at the eye) so that the only limits to image quality will be the imperfections of the eye being imaged. An adjustable aperture may be placed in the system to control the diameter of the beam that enters the subject’s eye. The aperture can be controlled for each individual to find the optimal beam diameter for image quality and image contrast.
[0042] FIG. 1 illustrates an example of an optical apparatus 100 of the present disclosure that can capture an image of the retina. In one embodiment, light of any wavelength is emitted from a light source 101 (e.g., a single mode fiber). The light may be collimated by lens 102, then passed through a cylindrical lens 103 to form a line focus of a specific length. Mirrors 104 and 105 may be placed at the position of the line focus and directed toward a lens 106 placed one focal length away.

[0043] Following the lens 106, the light may be reflected by a beam-splitter 107 with a low reflection to transmission ratio (typically 10%:90%). An adjustable aperture 108 is placed at a distance along the reflected beam that is optically conjugate to the cylindrical lens 103. A first 4f mirror-based telescope assembly (e.g., mirror 109 and mirror 110) may be used to form an image of the entrance pupil onto a galvanometric scanning mirror 111.
[0044] The scanning mirror 111 scans the beam in a saw-tooth pattern in a direction perpendicular to the direction of the line focus. A second 4f mirror-based telescope (e.g., mirror 112 and lens 114) may be located downstream from the first 4f mirror-based telescope assembly and used to form an image of the galvanometric scanning mirror 111 on the pupil of the eye. In this embodiment, the final component of the second 4f telescope is the lens 114.
[0045] In one embodiment, an optical trombone 113 may be placed in the optical path between the lens 114 and mirror 112 of the second 4f telescope assembly. The optical trombone 113 may be a simple optical system in which one can change the position of 2 of 4 flat mirrors to adjust the optical path length between the mirror 112 and the lens 114. Adjusting the optical path length with the optical trombone 113 may be a way to adjust the vergence of light at the plane of the eye without changing pupil position or scan field size.
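To first order, extra path δ between the intermediate retinal-conjugate focus and lens 114 shifts the vergence at the eye by roughly δ/f², where f is the focal length of lens 114 (the usual Badal-style relation). A minimal sketch, with an assumed focal length, since the disclosure does not specify one:

```python
# First-order estimate of trombone travel vs. vergence change at the eye.
f_lens114_m = 0.1                # focal length of lens 114 (assumed), meters
delta_path_m = 0.01              # extra path added by the trombone, meters

delta_vergence_D = delta_path_m / f_lens114_m**2    # ~ delta / f^2, diopters
print(f"{delta_vergence_D:.1f} D of vergence change")   # 1.0 D in this example
```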
[0046] In one embodiment, a pair of mirrors 115 may adjust the optical path from the lens 114 to the eye 116. The pair of mirrors 115 may form a periscope assembly that can be used to adjust the relative height and/or separation between two beams (e.g., when used with another apparatus 100 as part of a binocular system).

[0047] In one embodiment, the pupil of the eye 116 may be placed at a position that is optically conjugate to the galvanometric scanning mirror 111. As configured, the scanning beam will pivot about the pupil plane of the eye 116 and scan the line across the retina of the eye in a direction that is perpendicular to the line of illumination. Those skilled in the art of basic optical principles will understand that the optical system so described will serve to scan and de-scan a line focus across the retina of the subject. The present system may differ from other systems that may exclude the optical trombone.
[0048] The light that scatters from the line of illumination on the retina of the eye 116 may pass back through the optical system in the reverse direction, and will be 'de-scanned' by the galvanometric scanning mirror 111. The light that transmits through the beam-splitter 107 may pass through two cylindrical lenses 119 and 117 that may form an image of the illuminated line on the retina of the eye 116 onto a linear array camera 118 (e.g., part of an LSO). The two cylindrical lenses 119 and 117 are used to form the image with an anamorphic point spread function, which may increase the light collection efficiency without compromising sampling resolution.
[0049] In another embodiment, lens-based telescopes may replace the mirror-based telescopes (e.g., the mirrors 109 and 110, or the mirror 112 and the lens 114) with no change in optical resolution. An example embodiment uses mirror-based telescopes to minimize back-reflections. In both cases, optical design software may be used to determine the exact component placements to render the entire system diffraction-limited over a specified field size.
[0050] In another embodiment, no scanning optics may be used. Rather, the adjacent lines may be generated by projecting a series of lines of illumination across the retina using a display (digital micromirror device, for example) and collecting the returning light with an area detector.
[0051] FIG. 2 illustrates an example of two optical apparatuses of FIG. 1 arranged for binocular operation. As illustrated in FIG. 2, a binocular LSO system comprises two systems, 201 and 202, possibly mirrored in layout but otherwise identical. The systems 201 and 202 may be similar to the optical apparatus 100 illustrated in FIG. 1.

[0052] The arrangement of the systems 201 and 202 may enable imaging of both eyes simultaneously. Both systems 201 and 202 may use the same scanner driver signal and frame acquisition to ensure that they will run synchronously. Careful positioning of the head and the eyes 204 relative to the scanning beams 205 and 206 may be important to optimize image quality and consequent tracking performance. Therefore, in order to accommodate different pupil separations and differences in relative height between the two eyes of different individuals, one of the scanning beams or LSOs 205 or 206 may be placed on a lateral translation and height adjustable stage (not shown).
[0053] In other embodiments, a pair of translating mirrors may be used to adjust for pupil distance and a goniometer stage will be used to adjust the orientation of the head relative to the line of sight. In yet another embodiment, two periscope assemblies 203 may be used to adjust the relative height and separation between the two beams.
[0054] In the LSO, the exact scale of the scanned image in pixels per degree may be determined by imaging a model eye, comprised of a high quality distortion-free achromatic doublet lens with a calibration target placed at its secondary focal point. Distortions of the LSO system, which might be caused by non-linearity of the scanner or distortions of the lens, may be quantified based on these images and used to generate a transformation that will remove them. The distortion correction will be identical for each LSO frame.
[0055] FIG. 9 illustrates a method 900 for large-field eye tracking using the binocular system illustrated in FIG. 2. In an example, the method 900 may be executed by the system 800 illustrated in FIG. 8 and discussed in further detail below. The method 900 may begin at block 902. At block 904, the method 900 generates a reference frame.
[0056] In the first stage to enable wide-field eye tracking, the LSO may be used to record videos from an overlapping series of regions of the retina spanning the extent of the visual field that is to be tracked. The process is illustrated in FIG. 3. Each video 304 and 305 may be processed to make a high-quality image that is free of eye movement distortions, and the videos or images 304 and 305 may be assembled together as a montage of the images 302 and 303 to form a single reference frame image 301. Each pixel in the reference frame image 301 so constructed may indicate a specific visual direction and the change in visual direction between each pixel may be constant across the field.
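A minimal sketch of the montage step, assuming FFT cross-correlation registration and simple pasting; real pipelines also correct intra-frame eye-motion distortion before stitching, which is omitted here, and the canvas size is an illustrative assumption:

```python
import numpy as np

def best_offset(canvas, patch):
    """Translation of `patch` that maximizes cross-correlation with `canvas`."""
    corr = np.fft.ifft2(np.fft.fft2(canvas) *
                        np.conj(np.fft.fft2(patch, s=canvas.shape)))
    y, x = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    return int(y), int(x)

def build_reference(patches, canvas_shape=(2048, 2048)):
    """Stitch overlapping retinal patch images into one reference frame."""
    canvas = np.zeros(canvas_shape)
    h, w = patches[0].shape
    canvas[:h, :w] = patches[0]                  # seed with the first patch
    for p in patches[1:]:
        y, x = best_offset(canvas, p)
        canvas[y:y + h, x:x + w] = p[:canvas_shape[0] - y, :canvas_shape[1] - x]
    return canvas
```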
[0057] At block 906, the method 900 may capture a video frame of an eye. In one embodiment, the second stage may be data collection. For example, a video may be recorded of the two eyes simultaneously while the subject, or user, is engaged in a fixation task. The fixation task may be to follow an image on a display, for example. The video may include a portion of a retina of the eye.
[0058] The video frame may be smaller than the reference frame. However, the same camera used to capture the video frame is also the camera used to capture the video frames that are stitched together into the reference frame. Thus, the video frame and the larger reference frame may still have the same resolution.
[0059] At block 908, the method 900 may calculate a direction and rotation of gaze of the eye based on the video frame relative to the reference frame. For example, the third stage may be a measurement/calculation stage. The measurement stage may be a multistep, or pyramidal, process. This step is illustrated in FIG. 4. In the first step, each entire frame of the video 402 is registered to the reference frame 401 using a cross-correlation and/or feature matching method (e.g., shown as the frame 403 being fit to the reference frame 401). In other words, a video 402 may be matched to a corresponding location on the reference frame 401. The registration may include a lateral and a torsional component 404. The coordinates of the video frame in the reference frame relative to the fovea 405 are xfrm^f, yfrm^f, θfrm^f, where the superscript f refers to the frame number.
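A minimal sketch of this whole-frame registration, assuming phase correlation for the lateral shift and a brute-force sweep over small trial rotations for the torsional component; the disclosure leaves the exact matching method open, so this is one plausible instance, not the method itself:

```python
import numpy as np
from scipy.ndimage import rotate

def phase_correlate(ref, img):
    """Lateral shift of `img` within `ref` by phase correlation; returns (y, x, peak)."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img, s=ref.shape))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    return int(y), int(x), float(corr.max())

def register_frame(ref, frame, torsions_deg=np.arange(-2.0, 2.25, 0.25)):
    """Lateral + torsional registration: try each rotation, keep the best peak."""
    results = [phase_correlate(ref, rotate(frame, t, reshape=False)) + (t,)
               for t in torsions_deg]
    y, x, peak, theta = max(results, key=lambda r: r[2])
    return x, y, theta, peak    # xfrm, yfrm, theta_frm, match confidence
```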
[0060] Then, as illustrated in FIG. 5, the frame 502 is broken into a series of strips in a direction perpendicular to the direction of the scan. The position and orientation of each strip 503 relative to the fovea 504 in the reference frame 501 is then computed as xstr_s^f, ystr_s^f, θstr_s^f, where the subscript s refers to the strip number.
[0061] Any number of similar stages can be employed, with the strips getting progressively smaller between steps, where the maximum number of strips is equal to the number of lines in the frame. The outcome of this process is a list of the X, Y and θ position of the eye for each strip.
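A minimal sketch of the strip refinement, reusing phase_correlate() from the sketch above and searching each strip only in a small window around the whole-frame estimate; torsion handling and the progressive (pyramidal) narrowing are omitted for brevity, and the strip count and window size are illustrative assumptions:

```python
def track_strips(ref, frame, yfrm, n_strips=16, search=32):
    """Per-strip (x, y, confidence) positions within the reference frame."""
    h = frame.shape[0] // n_strips
    positions = []
    for s in range(n_strips):
        strip = frame[s * h:(s + 1) * h, :]
        # search only near where the frame-level match places this strip
        y0 = max(yfrm + s * h - search, 0)
        roi = ref[y0:y0 + h + 2 * search, :]
        dy, dx, peak = phase_correlate(roi, strip)
        positions.append((dx, y0 + dy, peak))   # strip (x, y) in the reference
    return positions
```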
[0062] Then, the final motion trace may be generated as illustrated in FIG. 6. For example, the position of each strip is converted into a measurement of eye gaze direction. Here it should be noted that each strip in the image represents a fixed position in space. Computing the gaze direction means finding the position of the fovea 605 relative to the strip.
[0063] First, as illustrated in FIG. 6, the position of each strip in the raster is computed. The position of each strip in a frame 601 is given by d_1, d_2, ..., d_n, where n is the number of strips per frame. For example, if a frame was comprised of 512 lines and the final stage of the measurement involved 16 evenly spaced strips, then the strip positions d_1 ... d_16 (in pixels) would be -240, -208, -176, -144, -112, -80, -48, -16, 16, 48, 80, 112, 144, 176, 208, 240. The pixel positions are converted to angular units by multiplying by the image scale in degrees per pixel.
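That worked example reproduces with a one-liner; a minimal sketch (the 0.5 arcmin pixel scale is the figure from paragraph [0040]):

```python
import numpy as np

lines, n = 512, 16
d = (np.arange(n) + 0.5) * (lines / n) - lines / 2
print(d)                       # [-240. -208. -176. ... 176. 208. 240.]

deg_per_px = 0.5 / 60          # 0.5 arcmin pixels ([0040])
d_deg = d * deg_per_px         # strip offsets in degrees
```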
[0064] In FIG. 6, the position 603 of the first strip, xstr_1^f, ystr_1^f, θstr_1^f, may be converted to the position of the raster 604 relative to the fovea 605. In general, the position of the raster center within the reference frame is given by:

[raster-center equations rendered as images imgf000014_0001 and imgf000015_0001 in the published document]

where n is the number of strips per frame and N is the total number of frames in the movie.
[0065] The next step is illustrated in FIG. 7. For example, in a frame 701, the conversion from the raster position 702 to gaze direction 703 may be given by:

[gaze-direction equation rendered as image imgf000015_0002 in the published document]

where φfov is given by:

[equation rendered as image imgf000015_0003 in the published document]

and the bold terms represent the entire arrays of x, y and θ positions. In one embodiment, the eye position measurements and the generation of the final motion trace of block 908, illustrated by FIGs. 4-7, may be made in real time while the data is being recorded.
[0066] At block 912, the method 900 may update an image on a display based on the direction and rotation of gaze of the eye that is calculated. In other words, images on the display may be updated in real-time as the movement of the eyes is tracked. At block 910, the method 900 may end.
[0067] Real-time eye tracking enables the delivery of retina-contingent imagery to the retina. Retina-contingent imagery is imagery in which the display or projection system is updated based on the exact position of the retina. An example application is one in which an image is stabilized on the moving retina. There are several methods and systems to generate a retina-contingent display.
[0068] In an embodiment, the mirror 105 in FIG. 1 may be replaced by a computer-controllable digital micromirror device (DMD). While imaging, the reflectivity of the DMD is modulated across the line of illumination to project a patterned line across the retina. The DMD is modulated differently for each line across the scan field to project a 2-dimensional, or areal, image directly onto the retina of the subject. For a retina-contingent display, each line's modulation is contingent on the most recently recorded position of the retina.
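As a hedged illustration of how the per-line modulation could be made retina contingent, the sketch below shifts the source image opposite to the last measured retinal motion; every name in it (`modulate_line`, `retina_xy`, and so on) is a hypothetical stand-in, since real DMD drivers expose vendor-specific interfaces:

```python
import numpy as np

def modulate_line(target: np.ndarray, line_index: int,
                  retina_xy: tuple, deg_per_pixel: float) -> np.ndarray:
    """Pick the pattern for one scan line so the projected image stays
    fixed on the moving retina. `target` is the image to deliver;
    `retina_xy` is the most recently recorded retinal offset in degrees.
    All names here are hypothetical placeholders."""
    dx, dy = retina_xy
    # Shift the source row and column opposite to the measured retinal
    # motion so the pattern lands on the same retinal location each scan.
    row = int(round(line_index + dy / deg_per_pixel)) % target.shape[0]
    col_shift = int(round(dx / deg_per_pixel))
    return np.roll(target[row, :], col_shift)
```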
[0069] In another embodiment, the subject views a second areal display through a beam-splitter while the subject is being imaged and tracked with the binocular eye tracker. The positions of features on the display are updated based on the most recently recorded position of the retina.
[0070] In another embodiment, the subject views an areal display through a beam-splitter, a pair of scanning mirrors, and an image rotator (dove prism, for example). The scanners and prism adjust the apparent direction of the display. The scanner and prism positions are updated based on the last recorded position of the retina.
[0071] In all three embodiments described above, the accuracy of the retina-contingent display is governed by the accuracy of the most recently recorded position of the retina and by the time between the last recorded position of the retina and the updating of the display. This time is referred to as the latency. In a retina-image-based eye tracker, the latency is a sum of the following times: (i) the time it takes to read a line, or a strip, from the line-scanning camera; (ii) the time it takes to find a match between the last recorded line (or strip) and the reference frame; and (iii) the time it takes to update the DMD display (in the first embodiment), to update the display (in the second embodiment), or to change the scanner and prism positions (in the third embodiment). In all cases, it is possible to keep the latency as small as 1 millisecond.
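To make the 1 millisecond figure concrete, the budget below sums the three components; all of the numbers are assumed examples rather than measurements from the disclosure:

```python
# Hypothetical latency budget for one line-scan update cycle; the
# three terms mirror (i)-(iii) above, and every number is an assumed
# example, not a measurement from the disclosure.
t_read_ms   = 0.33   # (i) read one line/strip from the line-scanning camera
t_match_ms  = 0.40   # (ii) match the strip against the reference frame
t_update_ms = 0.25   # (iii) update the DMD, display, or scanner/prism
latency_ms = t_read_ms + t_match_ms + t_update_ms
print(f"total latency: {latency_ms:.2f} ms")   # total latency: 0.98 ms
```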
[0072] FIG. 8 illustrates a block diagram of a general system that may use the binocular apparatus illustrated in FIG. 2 to perform real-time binocular eye tracking to update an image on a display. For example, the system 800 may include an eye motion tracker 802, a display 804, and a controller 806. The eye motion tracker 802 may be the binocular apparatus illustrated in FIG. 2 and described above. The controller 806 may be a processor, an application-specific integrated circuit (ASIC), a computing device, and the like. The display 804 may be an external display or may be a projection of an image onto the retina of a subject.

[0073] In one embodiment, the large reference frame may be generated and stored in memory, as described above. The controller 806 may receive video frames of the eye captured by the eye motion tracker 802. The controller 806 may process the video frames of the eye to calculate a direction and rotation of gaze of the eyes, as described above. Based on the calculations, the controller 806 may adjust the image that is shown on the display 804. For example, the controller 806 may change a location of the image on the display 804, change a shape of the image, animate the image on the display 804, change a color of the image on the display 804, or any combination thereof.
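One way the control loop of system 800 could be organized is sketched below; every interface name is a hypothetical placeholder rather than an API defined by the disclosure:

```python
def tracking_loop(tracker, controller, display, reference):
    """Illustrative main loop for system 800. Every interface here
    (is_running, capture_frame, compute_gaze, update) is a hypothetical
    placeholder, not an API defined by the disclosure."""
    while tracker.is_running():
        frame = tracker.capture_frame()                    # video frame of the eye
        x, y, theta = controller.compute_gaze(frame, reference)
        display.update(gaze=(x, y), torsion=theta)         # move/rotate the image
```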
[0074] It should be noted that the system 800 may be incorporated into a single hardware device. For example, the system 800 may be part of a head mounted display. In another example, the system 800 may be deployed as one or more separate devices.
[0075] In addition, the design of the present optical apparatus may provide sufficient eye relief to enable simultaneous use of corneal-reflection-based eye trackers. In other words, the present method and apparatus may allow another eye tracker to be deployed between it and the eyes of a subject or user. The present method and apparatus may then be used to calibrate the additional eye tracker or to determine its accuracy, and may thereby aid in the development of other eye trackers.
[0076] While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

What is claimed is:
1. A method, comprising:
generating a reference frame;
capturing a video frame of an eye, while the eye is engaged in a fixation task;
calculating a direction and rotation of gaze of the eye based on the video frame relative to the reference frame; and
updating an image on a display based on the direction and rotation of gaze of the eye that is calculated.
2. The method of claim 1, wherein generating the reference frame comprises:
recording a combination of a plurality of overlapping video frames captured of the eye.
3. The method of claim 2, wherein the reference frame spans an extent of a visual field of the eye that is to be tracked.
4. The method of claim 1, wherein a geometric transformation is applied to the video frame of the eye relative to the reference frame to calculate the direction and the rotation of gaze of the eye.
5. The method of claim 4, wherein the geometric transformation comprises: registering the video frame that is captured to a corresponding location in the reference frame;
breaking the video frame into a plurality of strips in a direction perpendicular to a direction of a scan;
converting each one of the plurality of strips to a position of a raster relative to a fovea of the reference frame; and
converting the position of the raster to the direction and rotation of gaze.
6. The method of claim 5, wherein the registering includes a lateral component and a torsional component.
7. The method of claim 5, wherein a position and an orientation of each one of the plurality of strips is relative to the fovea of the reference frame.
8. The method of claim 5, wherein the plurality of strips is evenly spaced.
9. The method of claim 1, wherein the updating the image comprises at least one of: changing a shape of the image, animating the image, or changing a color of the image.
10. The method of claim 1, wherein the capturing and the calculating are performed for both eyes of a user to perform binocular eye motion tracking.
11. An apparatus, comprising:
a light source to emit a light;
a beam-splitter to split the light into a first portion of the light and a second portion of the light;
a series of optics to direct the light towards an eye of a user; and
a camera to capture the second portion of the light that includes an image of the eye of the user, wherein a direction and rotation of gaze of the eye is to be calculated based on the image and a reference frame of the eye.
12. The apparatus of claim 11, wherein the series of optics comprises:
a first 4f telescope mirror assembly;
a second 4f telescope assembly located downstream of the first 4f telescope mirror assembly; and
an optical trombone within an optical path of the second 4f telescope assembly to adjust a length of the optical path.
13. The apparatus of claim 12, wherein the series of optics comprises:
a periscope assembly to adjust a height or a separation of the light.
14. The apparatus of claim 12, further comprising:
a display to display an image, wherein the image is updated based on the direction and rotation of gaze of the eye that is calculated.
15. The apparatus of claim 12, further comprising:
a controller to apply a geometric transformation to the image relative to the reference frame to calculate the direction and the rotation of gaze of the eye.
PCT/US2020/032996 2019-05-15 2020-05-14 Method and apparatus to track binocular eye motion WO2020232309A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962848121P 2019-05-15 2019-05-15
US62/848,121 2019-05-15

Publications (1)

Publication Number Publication Date
WO2020232309A1

Family

ID=73289344

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2020/032996 WO2020232309A1 (en) 2019-05-15 2020-05-14 Method and apparatus to track binocular eye motion

Country Status (1)

Country Link
WO (1) WO2020232309A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160262608A1 (en) * 2014-07-08 2016-09-15 Krueger Wesley W O Systems and methods using virtual reality or augmented reality environments for the measurement and/or improvement of human vestibulo-ocular performance
US20170293356A1 (en) * 2016-04-08 2017-10-12 Vizzario, Inc. Methods and Systems for Obtaining, Analyzing, and Generating Vision Performance Data and Modifying Media Based on the Vision Performance Data
US20180341107A1 (en) * 2014-03-03 2018-11-29 Eyeway Vision Ltd. Eye projection system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20805968; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20805968; Country of ref document: EP; Kind code of ref document: A1)