WO2011057161A1 - Methods of improving accuracy of video-based eyetrackers - Google Patents

Methods of improving accuracy of video-based eyetrackers

Info

Publication number
WO2011057161A1
Authority
WO
WIPO (PCT)
Prior art keywords
pupil
relationship
eye
location
center
Application number
PCT/US2010/055749
Other languages
French (fr)
Inventor
Harry J. Wyatt
Original Assignee
The Research Foundation Of State University Of New York
Application filed by The Research Foundation Of State University Of New York filed Critical The Research Foundation Of State University Of New York
Publication of WO2011057161A1 publication Critical patent/WO2011057161A1/en

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/11Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
    • A61B3/112Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils for measuring diameter of pupils
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor

Definitions

  • the present invention is directed generally to methods of improving the accuracy of an eyetracking device, and more particularly to methods of improving the accuracy of a video-based eyetracking device.
  • a typical human eye 100 includes, among other structures, a cornea 110, a pupil 112, and an iris 114.
  • the cornea 110 is a transparent front part of the eye 100 that covers both the iris 114 and the pupil 112.
  • the cornea 110 is somewhat reflective and will reflect some of the light shone on it.
  • the image of a light source formed by reflection from the outer surface of the cornea 110 is referred to as a "first Purkinje image."
  • the pupil 112 and the iris 114 are positioned behind the cornea 110.
  • the pupil 112 is an opening in the iris 114 having a generally circularly-shaped outer edge "E" (defined by an inner edge "IE" of the iris 114) that allows light to enter the interior of the eye 100.
  • the light that enters the eye 100 encounters a retina 116, which is a layer of tissue lining a portion of the inside of the eye.
  • the iris 114 is connected to muscles (not shown) that change the shape of the iris and thereby change a diameter "D" and size of the pupil 112.
  • when the diameter "D" decreases, the pupil 112 is described as having contracted.
  • when the diameter "D" of the pupil 112 increases, the pupil 112 is described as having dilated.
  • as the size of the pupil 112 increases or decreases, the amount of light reaching the retina 116 increases or decreases, respectively.
  • as the eye 100 moves, the location of the pupil 112 changes.
  • the location of the pupil 112 may be used to determine the orientation of the eye 100 or eye position (also referred to as gaze direction).
  • Eye position may be used to determine where or at what the subject is looking. For example, eye movement information may be collected and used in research experiments, market research, website usability testing, clinical devices for monitoring patients' gaze, and assistive technologies that allow individuals to "speak" to a computing device by changing gaze direction.
  • FIG. 2 is a schematic illustrating some of the components of a conventional video-based eyetracking device 200.
  • the video-based eyetracking device 200 includes a camera 210 (e.g., a digital video camera) positioned in front of the eye 100 to capture images of the pupil 112 as it changes position.
  • the video-based eyetracking device 200 may include a separate camera for each of the subject's eyes.
  • the video-based eyetracking device 200 includes one or more infrared (“IR”) light sources 220 that each shines an IR light beam 222 onto the eye 100.
  • the video-based eyetracking device 200 may include one or more IR light sources for each of the subject's eyes. At least a portion of the IR light beam 222 creates an image (referred to as a corneal reflection "CR" illustrated in Figures 3A and 3B) of the IR light sources 220 by reflection at the corneal surface "S."
  • the corneal reflection "CR" is visible to the camera 210. In Figures 3A and 3B, the corneal reflection "CR" is illustrated as a small bright dot.
  • the IR light sources 220 may also illuminate the iris 114 (or the retina 116) for detection by the video-based eyetracking device 200.
  • the IR light sources 220 both illuminate the front of the eye 100 and provide a reflected image (the corneal reflection "CR" illustrated in Figures 3A and 3B) that is detectable by the camera 210.
  • Infrared light is used instead of visible light because visible light may cause the pupil 112 to contract.
  • Video-based eyetracking devices are either "dark-pupil” or "bright- pupil” in nature.
  • in a dark-pupil system, the iris 114 is illuminated by the IR light sources 220 from a direction off the optic axis (as illustrated in Figure 2), so the pupil 112 appears dark relative to the iris.
  • in a bright-pupil system, the IR light sources 220 and direction of view are positioned on the optic axis, and the resulting reflection from the retina 116 and a choroid (not shown) creates "redeye," causing the pupil 112 to appear brighter than the iris 114.
  • the video-based eyetracking device 200 may include a display 225 (e.g., a conventional computer monitor) configured to display visual targets 230 to the eye 100.
  • the display 225 may be a component separate from the video-based eyetracking device 200. Because the subject's left and right eyes typically operate together, a single display may be positioned to be viewable by both of the subject's eyes. Alternatively, if the video-based eyetracking device 200 is configured to track only a single eye, the display 225 is positioned to be viewable by the eye being tracked.
  • the subject is asked to fixate on or track the target(s) with the subject's eye(s) as the camera 210 captures images of the pupil 112 and the corneal reflection "CR" of one or both eyes. These images are used to determine the position(s) of the subject's eye(s) when fixating on or tracking the target(s).
  • Eye position within each image may be determined based on at least one relationship between the position of the eye 100 and the locations of a center "PC" of the pupil 112 and the corneal reflection "CR."
  • a typical human subject has a left eye and a right eye, and the relationship is usually different for each of the subject's eyes.
  • how the relationship is determined will be described with respect to the eye 100 (e.g., the right eye). However, the same process may be used to determine the relationship for a different eye (e.g., the left eye).
  • the location of the center "PC" of the pupil 112 shifts approximately linearly with changes in eye position within about ±30 degrees relative to straight-ahead, while the location of the corneal reflection "CR" shifts considerably less.
  • as the eye 100 moves, the locations of the iris 114 and the pupil 112 both shift, but the corneal reflection "CR" does not shift as much as the iris 114 and the pupil 112.
  • the location of the center "PC" of the pupil 112 may shift linearly, and the location of the corneal reflection "CR" may shift linearly or otherwise but by a smaller amount, with horizontal changes in eye position within about ±30 degrees relative to straight-ahead. That difference in shift amount lies at the heart of how images captured by video-based eyetracking devices, like the video-based eyetracking device 200, are used to determine eye position.
  • the difference in shift amount may be mapped (e.g., using a mathematical formula, lookup table, data structure, and the like) to eye position. For example, a mathematical relationship (e.g., a function) or model may be created in which the difference in shift amount is an input variable used to determine eye position as an output variable.
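The mapping just described can be sketched as a least-squares linear fit from the pupil-center/corneal-reflection offset to gaze angle. The function names, pixel offsets, and target angles below are illustrative assumptions, not values taken from the patent:

```python
# Hypothetical sketch: fit a linear map from the pupil-center-minus-CR
# offset (pixels) to gaze angle (degrees), using calibration samples
# collected while the eye fixated targets at known angles.

def fit_linear_map(offsets, angles):
    """Least-squares fit of angle = a * offset + b from calibration data."""
    n = len(offsets)
    mean_x = sum(offsets) / n
    mean_y = sum(angles) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(offsets, angles))
    var = sum((x - mean_x) ** 2 for x in offsets)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def offset_to_angle(offset, a, b):
    """Map an observed pupil-center/CR offset to an estimated gaze angle."""
    return a * offset + b

# Invented example: targets at -15, 0, +15 degrees produced offsets
# of -60, 0, +60 pixels during calibration.
a, b = fit_linear_map([-60.0, 0.0, 60.0], [-15.0, 0.0, 15.0])
print(offset_to_angle(20.0, a, b))  # 5.0 degrees
```

A lookup table or higher-order polynomial could replace the linear model where the offset-to-angle relationship is not well approximated by a straight line.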
  • the video-based eyetracking device 200 includes or is connected to a computing device 240 that stores and analyzes the images captured by the camera 210.
  • the computing device 240 may determine the relationship (e.g., a mathematical relationship) between eye position and the difference in shift amount (between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR").
  • a calibration process is performed in which the difference in shift amount is determined when the eye 100 is looking at a set of known predetermined target locations.
  • the relationship between the difference in shift amount and eye position is then formulated based on the differences in shift amount observed when the eye 100 was gazing towards the known predetermined locations.
  • if the subject's head moves slightly, the images of the front of the globe-shaped eye 100 captured by the camera 210 depict the location of the pupil 112 as having shifted a little, but also depict an unchanged relationship between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • thus, the eye position estimate is relatively stable even when the subject's head moves.
  • the images of the eye 100 captured by the camera 210 may include a two-dimensional array of pixels. Moving from one edge of an image of the eye 100 toward an opposite edge, one encounters the iris 114, followed by the pupil, which in turn is followed by the iris again. Thus, in a dark-pupil system, one encounters bright pixels (pixels depicting the iris 114) followed by dark pixels (pixels depicting the pupil 112), which in turn are followed by bright pixels (pixels depicting the iris); in a bright-pupil system, one encounters dark pixels (pixels depicting the iris) followed by bright pixels (pixels depicting the pupil), which in turn are followed by dark pixels (pixels depicting the iris).
  • By determining transition locations (locations where the pixels switch from dark to bright or vice versa), the computing system 240 identifies a series of chords (or linear arrays of pixels) extending across the image of the pupil 112, with the ends of each chord lying on the outer edge "E" of the pupil.
  • the set of transition points (or ends of the chords) can be fitted with a circle or an ellipse, and the two-dimensional center "PC" of the pupil 112 determined to be the center of the circle or ellipse.
  • the average x-coordinate and y-coordinate values of the chords in the two-dimensional array of pixels in the image may be used as an estimate of the location of the center "PC" of the pupil 112.
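The chord-based center estimate described above can be sketched as follows for a dark-pupil image. The threshold value, the synthetic frame, and the averaging of chord midpoints are illustrative assumptions; a production system would typically fit a circle or ellipse to the transition points instead:

```python
# Hypothetical sketch: scan each pixel row of a dark-pupil image for the
# bright-to-dark and dark-to-bright transitions, treat each row's dark run
# as a chord across the pupil, and average the chord midpoints.

def pupil_center(image, threshold=100):
    """image: 2-D list of grayscale pixel values. Returns (row, col)."""
    xs, ys = [], []
    for r, row in enumerate(image):
        dark = [c for c, v in enumerate(row) if v < threshold]
        if dark:
            left, right = dark[0], dark[-1]  # chord ends on the edge "E"
            xs.append((left + right) / 2.0)  # chord midpoint (column)
            ys.append(r)                     # chord row
    if not xs:
        return None  # no pupil found in this frame
    return sum(ys) / len(ys), sum(xs) / len(xs)

# Tiny synthetic frame: bright iris (200) surrounding a dark pupil (20).
frame = [[200] * 8 for _ in range(8)]
for r in range(3, 6):
    for c in range(2, 7):
        frame[r][c] = 20
print(pupil_center(frame))  # (4.0, 4.0)
```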
  • Pupil shape is typically approximately elliptical. See Wyatt infra.
  • Pupil diameter can be obtained from the fitted-circle or the fitted-ellipse (e.g., a horizontal extent or a vertical extent) depending on the approach.
  • the location of the center "PC" of the pupil does not remain fixed even if the eye 100 has not moved.
  • the location of the center "PC" of the pupil 112 may be determined by using edge detection techniques to locate the outer edge "E" of the pupil 112.
  • as the pupil 112 changes size, the position of the pupil 112 in the iris 114 changes as a result of interactions between the muscles of the iris and the structure of the iris. Even if the subject continues to look at the same target and no head or eye movement occurs, the location of the pupil 112 can shift horizontally or vertically as a result.
  • the computing device 240 interprets this as a change in eye position or gaze direction, even though none has occurred. This may cause the computing device 240 to incorrectly determine the location of the center "PC" of the pupil 112 when the size of the pupil 112 changes. Further, the computing device 240 may detect "pseudo eye movements," which are movements caused by changes in the diameter "D" of the pupil 112 and not by actual movement of the pupil.
  • the pupil 112 may change size for different reasons. For example, when the amount of light reaching the eye 100 from the environment increases, the pupil 112 gets smaller as a result of a "pupillary light reflex." In dark conditions, the size of the pupil 112 will typically increase (i.e., the pupil will typically dilate). In contrast, in light conditions, the size of the pupil 112 will typically decrease (i.e., the pupil will typically contract). Thus, the size of the pupil 112 will change when the lighting is changed from light to dark conditions and vice versa. Shifts in the location of the center "PC" of the pupil 112 between light and dark conditions can be as large as several tenths of a millimeter. Wyatt, H., Vision Res., vol. 35, no. 14, pp. 2021-2036 (1995). The direction of the shift in the location may differ from one subject to another. Id. Further, the direction of shift may be different for each of the eyes of the same subject. Id.
  • Pupil size may also change based on the distance of an object from the subject's face.
  • visual accommodation may cause changes in pupil size.
  • the pupil 112 may contract.
  • pupil size may change with changes in the subject's emotional state.
  • shifts in the location of the center "PC" of the pupil 112 caused by light and dark conditions may correspond to approximately one or two degrees of eye movement.
  • shifts in the location of the center "PC" of the pupil 112 associated with changes of pupil size can generate spurious indications of changes in gaze direction. Therefore, a need exists for methods of determining a relationship between change in pupil size and change in pupil position for a particular subject, and methods of using that information to correct for the spurious indications of changes in gaze direction.
  • the present application provides these and other advantages as will be apparent from the following detailed description and accompanying figures.
  • aspects of the invention include a computer implemented method for use with a camera and one or more light sources.
  • the camera is positioned to capture images of an eye (e.g., a human eye) that has a cornea and a pupil.
  • the one or more light sources are each positioned to illuminate the eye and at least one of the light sources generates a corneal reflection on the cornea of the eye.
  • the method includes obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection.
  • the first relationship may be recorded in memory or determined by performing a first calibration process.
  • the method includes determining a second relationship between the size of the pupil and the distance between the location of the center of the pupil and the location of the corneal reflection.
  • the second relationship may be determined by performing a second calibration process during which the pupil is contracted (e.g., by a bright light) and allowed to at least partially redilate.
  • the method also includes capturing an image of the eye with the camera, and detecting an observed location of the center of the pupil, an observed size of the pupil, and an observed location of the corneal reflection in the image captured. An observed distance between the observed locations of the center of the pupil and the corneal reflection is determined.
  • a position of the eye is determined based on the observed distance, the observed size of the pupil, the first relationship, and the second relationship.
  • the position of the eye may be determined by determining a first position of the eye based on the observed distance, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the observed distance, and the second relationship.
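The two-step determination just described can be sketched as follows. The linear forms and all coefficients are invented for illustration (the patent does not prescribe particular models or values):

```python
# Illustrative sketch: map the observed pupil-CR distance to a raw eye
# position via the first relationship, after removing the pupil-size-
# dependent shift predicted by the second relationship.

def raw_position(observed_distance, gain, offset):
    """First relationship: eye position from pupil-CR distance (assumed linear)."""
    return gain * observed_distance + offset

def size_correction(pupil_size, slope, reference_size):
    """Second relationship: spurious distance shift attributable to pupil size."""
    return slope * (pupil_size - reference_size)

def corrected_position(observed_distance, pupil_size,
                       gain=0.25, offset=0.0,
                       slope=0.05, reference_size=4.0):
    # Subtract the size-driven component of the distance before mapping
    # it to an eye position, suppressing "pseudo eye movements."
    spurious = size_correction(pupil_size, slope, reference_size)
    return raw_position(observed_distance - spurious, gain, offset)

# A 6 mm pupil shifts the apparent distance by 0.1 units; remove it first.
print(corrected_position(20.0, 6.0))  # 4.975
```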
  • the first calibration process mentioned above may include capturing a first set of calibration images of the eye with the camera as the eye fixates on each of a plurality of calibration targets.
  • the calibration targets are each arranged to position the pupil of the eye in a predetermined location. Further, each of the first set of calibration images depicts the pupil positioned in one of the predetermined locations.
  • the first calibration process includes detecting a location of a center of the pupil, and a location of the corneal reflection in each of the first set of calibration images captured. For each of the first set of calibration images captured, a distance between the locations of the center of the pupil and the corneal reflection is determined and associated with the predetermined location of the pupil depicted in the calibration image. Then, the first relationship is determined based on the distances and the associated predetermined locations of the pupil.
  • the second calibration process mentioned above may include capturing a second set of calibration images of the eye with the camera as the eye fixates on a target and the pupil contracts and redilates.
  • the target is arranged to position the pupil of the eye in a predetermined location.
  • Each of the second set of calibration images depicts the pupil positioned in the predetermined location.
  • the second calibration process includes detecting a location of a center of the pupil, a size of the pupil, and a location of the corneal reflection in each of the second set of calibration images captured. For each of the second set of calibration images, a distance between the locations of the center of the pupil and the corneal reflection is determined. Then, the second relationship is determined based on the distances determined for the second set of calibration images and the sizes of the pupil detected in those images.
  • the second relationship may be a mathematical relationship relating pupil size to a distance between the locations of the center of the pupil and the corneal reflection.
  • the method further includes deriving or formulating the mathematical relationship.
  • the mathematical relationship may be a linear or polynomial equation.
  • the second relationship may be implemented as a data structure that associates each of a plurality of pupil sizes with a distance between the locations of the center of the pupil and the corneal reflection.
  • the second relationship may include a horizontal relationship between pupil size and a horizontal distance between the locations of the center of the pupil and the corneal reflection, and a separate vertical relationship between pupil size and a vertical distance between the locations of the center of the pupil and the corneal reflection.
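Assuming the linear form mentioned above, the horizontal and vertical relationships could each be fitted by ordinary least squares. The calibration numbers below are invented for illustration only:

```python
# Sketch under assumed linear models: fit distance = slope * size + intercept
# separately for the horizontal and vertical pupil-CR distances, using
# samples collected while the pupil contracts and redilates at a fixed target.

def fit_line(sizes, distances):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(distances) / n
    slope = sum((s - mx) * (d - my) for s, d in zip(sizes, distances)) / \
            sum((s - mx) ** 2 for s in sizes)
    return slope, my - slope * mx

# Hypothetical calibration samples (pupil diameter in mm, distances in px):
sizes = [3.0, 4.0, 5.0, 6.0]
horiz = [10.0, 10.2, 10.4, 10.6]   # horizontal pupil-CR distance
vert  = [5.0, 4.9, 4.8, 4.7]       # vertical pupil-CR distance

h_slope, h_intercept = fit_line(sizes, horiz)
v_slope, v_intercept = fit_line(sizes, vert)
print(round(h_slope, 3), round(v_slope, 3))  # 0.2 -0.1
```

Separate fits let the horizontal and vertical shifts have different magnitudes and even opposite signs, consistent with the subject-to-subject variability noted earlier.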
  • Another aspect of the invention includes a computer implemented method for use with a camera and one or more light sources.
  • the camera is positioned to capture images of an eye (e.g., a human eye) that has a cornea and a pupil.
  • the one or more light sources are each positioned to illuminate the eye and at least one of the light sources generates a corneal reflection on the cornea of the eye.
  • the method includes obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection.
  • the first relationship may be recorded in memory or determined by performing the first calibration process (described above).
  • the method includes determining a second relationship between the size of the pupil and the location of the center of the pupil.
  • the second relationship may be determined using a third calibration process described below.
  • the method also includes capturing an image of the eye with the camera, and detecting an observed location of the center of the pupil, an observed size of the pupil, and an observed location of the corneal reflection in the image captured. An observed distance between the observed locations of the center of the pupil and the corneal reflection is determined. Then, a position of the eye is determined based on the observed distance, the observed size of the pupil, the observed location of the center of the pupil, the first relationship, and the second relationship.
  • the position of the eye may be determined by determining a first position of the eye based on the observed distance, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, and the second relationship.
  • the third calibration process includes capturing a third set of calibration images of the eye with the camera as the eye fixates on a target while the pupil contracts and redilates.
  • the target is arranged to position the pupil of the eye in a predetermined location.
  • Each of the third set of calibration images depicts the pupil positioned in the predetermined location.
  • the third calibration process includes detecting a location of a center of the pupil, and a size of the pupil in each of the third set of calibration images captured. Then, the second relationship is determined based on the sizes of the pupil detected for the third set of calibration images and the predetermined location of the pupil depicted in the third set of calibration images.
  • the second relationship may be a mathematical relationship relating pupil sizes to locations of the center of the pupil.
  • the third calibration process further comprises deriving or formulating the mathematical relationship.
  • the mathematical relationship may be a linear or polynomial equation.
  • the second relationship may be implemented as a data structure that associates each of a plurality of pupil sizes with a location of the center of the pupil.
  • the second relationship may include a horizontal relationship between pupil size and a horizontal component of the location of the center of the pupil, and a separate vertical relationship between pupil size and a vertical component of the location of the center of the pupil.
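A data-structure implementation of this size-to-location relationship could be sketched as a lookup table with linear interpolation between calibrated pupil sizes. The sizes and locations below are hypothetical:

```python
# Hypothetical sketch: store the third calibration's result as a sorted
# table of (pupil size -> pupil-center location) pairs and interpolate.

def expected_center(size, table):
    """table: sorted list of (pupil_size, (x, y)) calibration pairs."""
    if size <= table[0][0]:
        return table[0][1]
    if size >= table[-1][0]:
        return table[-1][1]
    for (s0, p0), (s1, p1) in zip(table, table[1:]):
        if s0 <= size <= s1:
            t = (size - s0) / (s1 - s0)  # interpolation weight in [0, 1]
            return (p0[0] + t * (p1[0] - p0[0]),
                    p0[1] + t * (p1[1] - p0[1]))

# Invented calibration table: pupil diameter (mm) -> center location (px).
table = [(3.0, (100.0, 80.0)), (5.0, (101.0, 79.0)), (7.0, (102.0, 78.0))]
print(expected_center(4.0, table))  # (100.5, 79.5)
```

Subtracting the table's predicted center from the observed center would isolate the component of pupil-center motion due to actual eye movement rather than pupil-size change.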
  • Another aspect of the invention includes a system that includes means for obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of a reflection from the cornea of the eye and means for determining a second relationship between the size of the pupil and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection.
  • the means for obtaining the first relationship may include structures described herein as performing the first calibration process.
  • the means for determining the second relationship may include structures described herein as performing the second calibration process and/or the third calibration process.
  • the system also includes means for capturing an image of the eye, and detecting an observed location of the center of the pupil, an observed size of the pupil, and an observed location of a corneal reflection in the image captured.
  • the system also includes means for determining a position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, the observed location of the corneal reflection, the first relationship, and the second relationship.
  • the means for determining the position of the eye may include means for determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship, and means for modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
  • Another aspect of the invention includes a system that includes at least one camera positioned to capture images of the eye and one or more light sources positioned to illuminate the eye and generate a corneal reflection on the cornea of the eye.
  • the camera may be implemented as a digital video camera and the one or more light sources may include one or more infrared light sources.
  • the system further includes a display positioned to display one or more targets viewable by the eye, and a computing device.
  • the computing device includes at least one processor and a memory configured to store instructions executable by the at least one processor. When executed by the at least one processor, the instructions cause the at least one processor to perform portions of the methods described above.
  • the method performed by the at least one processor may include obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection, and determining a second relationship between the size of the pupil and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection.
  • the first relationship may be stored in and obtained from the memory.
  • the first relationship may be obtained by performing the first calibration process described above.
  • the second relationship may be obtained by performing the second calibration process and/or the third calibration process described above.
  • the method may also include instructing the display to display a target to the eye and instructing the at least one camera to capture images of the eye while the eye views the target displayed by the display.
  • the method may further include detecting observed locations of the center of the pupil, observed sizes of the pupil, and observed locations of the corneal reflection in the image captured. Then, positions of the eye are determined based on the observed sizes of the pupil, the observed locations of the center of the pupil, the observed locations of the corneal reflection, the first relationship, and the second relationship.
  • the position of the eye may be determined by determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
  • Another aspect of the invention includes a non-transitory computer-readable medium comprising instructions executable by at least one processor and, when executed thereby, causing the at least one processor to perform a method.
  • the method includes obtaining a first relationship between eye position and a distance between a location of a center of a pupil and a location of a corneal reflection.
  • the first relationship may be stored in and obtained from the memory. Alternatively, the first relationship may be obtained by performing the first calibration process described above.
  • the method further includes determining a second relationship between pupil size and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection.
  • the second relationship may be obtained by performing the second calibration process and/or the third calibration process described above.
  • the method also includes detecting observed locations of the center of the pupil, observed sizes of the pupil, and observed locations of the corneal reflection in a plurality of images of the eye. Then, positions of the eye are determined based on the observed sizes of the pupil, the observed locations of the center of the pupil, the observed locations of the corneal reflection, the first relationship, and the second relationship.
  • the position of the eye may be determined by determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
  • Figure 1A is a cross-section of a human eye including an iris and a pupil.
  • Figure 1B is a front view of the human eye of Figure 1A.
  • Figure 2 is a schematic illustrating a conventional video-based eyetracking device including a camera.
  • Figure 3A is a front view of the human eye as viewed by the camera of the video-based eyetracking device of Figure 2 with the pupil positioned to look straight ahead.
  • Figure 3B is a front view of the human eye as viewed by the camera of the video-based eyetracking device of Figure 2 with the pupil positioned to look toward the left of the subject.
  • Figure 4A is a schematic illustrating an exemplary embodiment of an eyetracking device and a computing device having a system memory.
  • Figure 4B is a block diagram illustrating modules configured to analyze the image data stored in the system memory of the computing device of Figure 4A.
  • Figure 5 is a flow diagram of a method performed by the eyetracking device and/or the computing device of Figure 4A.
  • Figure 6 is a graph depicting data obtained from a calibration process performed by the eyetracking device of Figure 4A with a subject who, during the calibration process, fixated on a left target, a central target, and a right target.
  • Figure 7 depicts three graphs illustrating data obtained from one light/dark trial performed by the eyetracking device of Figure 4A and a subject.
  • Figure 8 depicts three graphs illustrating data obtained from one light/dark trial performed by the eyetracking device of Figure 4A and a different subject from the subject that performed the light/dark trial in Figure 7.
  • Figure 9A depicts a leftmost graph plotting a horizontal distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was constricted and a rightmost graph plotting a vertical distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was constricted.
  • Figure 9B depicts a leftmost graph plotting a horizontal distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was redilated and a rightmost graph plotting a vertical distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was redilated.
  • Figure 9C depicts a leftmost graph plotting an average relationship (illustrated as a solid thick line) determined from the data of the leftmost graph of Figure 9A, and an average relationship (illustrated as a dashed thick line) determined from the data of the leftmost graph of Figure 9B; and a rightmost graph plotting an average relationship (illustrated as a solid thick line) determined from the data of the rightmost graph of Figure 9A and an average relationship (illustrated as a dashed thick line) determined from the data of the rightmost graph of Figure 9B.
  • Figure 10A depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a second different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the second subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the second subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the second subject when the pupil was redilated.
  • Figure 10B depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a third different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the third subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the third subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the third subject when the pupil was redilated.
  • Figure 10C depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a fourth different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the fourth subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the fourth subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the fourth subject when the pupil was redilated.
  • Figure 10D depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a fifth different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the fifth subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the fifth subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the fifth subject when the pupil was redilated.
  • Figure 11A is a graph of horizontal distances (illustrated as a thick line) between the center of the pupil and the corneal reflection observed for a first subject during a single 16 second light/dark trial in which the subject fixated on a target and a visible light source was repeatedly turned "on" for about two seconds and then turned "off" for about two seconds, and corrected horizontal distances (illustrated as a thin line) that were corrected using a relationship determined from horizontal distances and pupil size observed for the first subject during a different single 16 second light/dark trial.
  • Figure 11B is a graph of horizontal distances (illustrated as a thick line) between the center of the pupil and the corneal reflection observed for a second subject during a single 16 second light/dark trial in which the subject fixated on a target and a visible light source was repeatedly turned "on" (or displayed) for about two seconds and then turned "off" for about two seconds, and corrected horizontal distances (illustrated as a thin line) that were corrected using a relationship determined from horizontal distances and pupil size observed for the second subject during a different single 16 second light/dark trial.
  • Figure 11C is a graph of horizontal distances (illustrated as a thick line) between the center of the pupil and the corneal reflection observed for a third subject during a single 16 second light/dark trial in which the subject fixated on a target and a visible light source was repeatedly turned "on" (or displayed) for about two seconds and then turned "off" for about two seconds, and corrected horizontal distances (illustrated as a thin line) that were corrected using a relationship determined from horizontal distances and pupil size observed for the third subject during a different single 16 second light/dark trial.
  • Figure 12 is a diagram of a hardware environment and an operating environment in which the computing device of Figure 4A may be implemented.
  • Eyetracker refers to a device that measures a direction in which the eye 100 is looking.
  • An eyetracker may be, but is not limited to being, video-based, in which a video camera is focused on the front of the subject's eye.
  • The direction of gaze of a human subject, sometimes referred to as "eye position," is most commonly measured with an eyetracker, such as a video-based eyetracking device.
  • conventional video-based eyetracking devices capture images of the front of the eye 100 illuminated with infrared illumination.
  • the computing device 240 detects the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" in the images captured, measures a horizontal distance "H" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR," and measures a vertical distance "V" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • the horizontal distance "H" is a horizontal difference "H_PC-CR" (see Figures 6, 9A-9C, 10A-10D, and 11A-11C) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • the vertical distance "V" is a vertical difference "V_PC-CR" (see Figures 9A-9C and 10A-10D) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
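The two distances above can be sketched as simple coordinate differences between the detected image locations; the pixel coordinates and the sign convention below are illustrative assumptions, not values from the patent.

```python
def pc_cr_offsets(pupil_center, corneal_reflection):
    """Return (H, V): the horizontal difference "H_PC-CR" and vertical
    difference "V_PC-CR" between the location of the center "PC" of the
    pupil and the location of the corneal reflection "CR"."""
    h = pupil_center[0] - corneal_reflection[0]  # horizontal distance "H"
    v = pupil_center[1] - corneal_reflection[1]  # vertical distance "V"
    return h, v

# Illustrative pixel locations detected in one captured image:
h, v = pc_cr_offsets(pupil_center=(312.4, 240.1), corneal_reflection=(300.0, 236.5))
```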
  • conventional video-based eyetracking devices typically require a calibration process be performed for each subject and may collect separate calibration data for each eye.
  • the horizontal and vertical distances "H” and “V” as well as one or more relationships between these distances and eye position are used to determine the direction of eye gaze (or eye position).
  • a relationship between the horizontal distance "H” and a horizontal component of eye position and a separate relationship between the vertical distance "V” and a vertical component of eye position may be determined using calibration data.
  • the location of the center "PC" of the pupil 112 may shift even if the position of the pupil 112 has not changed.
  • changes in the size of the pupil 112 may generate spurious indications of change in gaze direction (or "pseudo eye movements").
  • the importance of considering pupil size increases as the accuracy requirements of gaze direction measurement increase.
  • Pupil size changes may constitute a significant problem when measuring small changes in gaze direction (e.g., about one degree of arc).
  • pupil size changes and shifts in the location of the center "PC" of the pupil 112 are related in a systematic way, which makes correcting for shifts in the location of the center "PC" of the pupil 112 caused by changes in pupil size possible.
  • the relationship between changes in pupil size and the location of the center "PC" of the pupil 112 may be somewhat different for each eye.
  • the relationship between changes in pupil size and the horizontal component of the location of the center "PC" of the pupil 112 may be somewhat different than the relationship between changes in pupil size and the vertical component of the location of the center "PC" of the pupil 112.
  • While the relationship between pupil size and the location of the center "PC" of the pupil 112 is different for different eyes, a relatively fixed relationship exists for a particular eye (e.g., the eye 100 illustrated in Figures 1A, 1B, 2, 3A, 3B, and 4A). This relationship may be determined for the eye 100 by exposing the eye to different lighting conditions (e.g., light conditions and dark conditions) thereby causing changes in pupil size, and collecting pupil size calibration information. Once the relationship between pupil size and the location of the center "PC" of the pupil 112 is determined for the eye 100, gaze direction determinations can be corrected to account for pupil size changes.
  • a relationship between pupil size and the difference between the locations of the center “PC” and the corneal reflection "CR” may be determined for the eye 100 and used to correct gaze direction determinations.
  • Figure 4A illustrates exemplary components of an eyetracker device 400, which is configured to correct determinations of locations of the center "PC" of the pupil 112 based on pupil size. While the eyetracker device 400 is illustrated as being a video-based eyetracking device, through application of ordinary skill in the art to the present teachings, embodiments including other types of eyetrackers could be constructed. Because "pseudo eye movements" may occur with any eyetracker device that uses pupil position to estimate gaze direction (regardless of whether the eyetracker device also uses the corneal reflection "CR"), embodiments may be constructed using any eyetracker device that uses pupil position to estimate gaze direction.
  • the eyetracker device 400 includes one or more cameras (e.g., a camera 410) each substantially identical to the camera 210 of the video-based eyetracking device 200 (see Figure 2), and one or more IR sources 420.
  • the eyetracker device 400 may include a display 425 substantially identical to the display 225 (see Figure 2).
  • the display 425 may be a component separate from the eyetracking device 400.
  • the eyetracker device 400 may be configured to track a single eye or both of the subject's eyes. In embodiments in which the eyetracker device 400 is configured to track both of the subject's eyes, the eyetracker device 400 may include a separate camera (e.g., a digital video camera) for each of the subject's eyes. Further, the eyetracker device 400 may include one or more IR sources 420 for each of the subject's eyes.
  • the eyetracker device 400 is described below as tracking a single eye (e.g., the subject's right eye). However, through application of ordinary skill in the art to the present teachings, embodiments in which the eyetracker device 400 is configured to track both of the subject's eyes may be constructed and are therefore within the scope of the present teachings.
  • the video-based eyetracking device 400 includes or is connected to a computing device 440 that stores and analyzes the images captured by the camera(s) 410.
  • the eyetracking device 400 is configured to perform a calibration process.
  • the computing device 440 instructs the display 425 to display a plurality of calibration or fixation targets 432.
  • Each of the fixation targets 432 is arranged on the display 425 to position the pupil 112 of the eye 100 in a known predetermined location.
  • the fixation targets 432 may be arranged in a fixed array.
  • the fixation targets 432 may include a central fixation target "CFT", a left target "LT,” a top target "TT,” a right target “RT,” and a bottom target "BT” as viewed by the subject.
  • the subject looks sequentially at each of the fixation targets 432 (as they are displayed sequentially by the display 425) and the camera 410 captures images of the eye 100 as the eye looks at each of the fixation targets 432.
  • the computing device 440 may use the images of the eye 100 captured when the eye was looking at the fixation targets 432 to determine the relationship between eye position and the distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • the computing device 440 may determine a first (horizontal) relationship between the horizontal distance "H” and a horizontal component of eye position and a separate second (vertical) relationship between the vertical distance "V” and a vertical component of eye position.
  • the location of the center "PC" of the pupil 112 shifts approximately linearly with horizontal changes in eye position within about ±30 degrees relative to straight-ahead, while the location of the corneal reflection "CR" shifts considerably less.
  • the location of the center "PC" of the pupil 112 may shift linearly and the location of the corneal reflection "CR" may shift linearly or otherwise but by a smaller shift amount with horizontal changes in eye position within about ±30 degrees relative to straight-ahead. Therefore, the first (horizontal) relationship may be expressed as a linear equation in which a horizontal component of eye position is determined as a function of the horizontal distance "H."
  • the first (horizontal) relationship may be expressed as a polynomial equation.
  • the second (vertical) relationship may be expressed as a linear equation in which a vertical component of eye position is determined as a function of the vertical distance "V.”
  • the second (vertical) relationship may be expressed as a polynomial equation.
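As a concrete sketch of the linear case described above, the calibration pairs below are fit with ordinary least squares to express a horizontal component of eye position as a function of the horizontal distance "H"; the target angles and pixel values are illustrative assumptions, not data from the patent.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of ys = gain * xs + offset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    gain = sxy / sxx
    return gain, my - gain * mx

# Calibration data: the horizontal distance "H" measured while the subject
# fixated the left, central, and right targets (cf. Figure 6), and the known
# horizontal target eccentricities in degrees.  All numbers are illustrative.
h_px = [-24.0, 0.0, 24.0]
angles = [-3.0, 0.0, 3.0]
gain, offset = fit_linear(h_px, angles)

def horizontal_eye_position(h):
    """First (horizontal) relationship: eye position as a linear function of "H"."""
    return gain * h + offset
```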
  • the computing device 440 can determine gaze direction (or eye position) from images of the eye 100 captured by the camera 410 by measuring at least one distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" and using the relationship(s) determined during the calibration process between eye position and the distance(s) measured to determine at least one component (e.g., a horizontal component, or a vertical component) of eye position.
  • Each image (or frame) captured by the camera 410 may be sent as a video signal to the computing device 440 that may locate both the center "PC" of the pupil 112 and the corneal reflection "CR," and use that information to calculate or measure at least one distance between the two.
  • the computing device 440 may determine both the horizontal distance "H” and the vertical distance "V.” Then, the computing device 440 may use the first (horizontal) relationship (determined using data collected during the calibration process) and the horizontal distance "H” to provide an estimate of a horizontal component of eye position. Similarly, the computing device 440 may use the second (vertical) relationship (determined using data collected during the calibration process) and the vertical distance "V” to provide an estimate of a vertical component of eye position. Together the estimates of the horizontal and vertical components of eye position provide a two-dimensional estimate of eye position.
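The two estimates can be combined as in the following sketch, where the gains and offsets stand in for the first (horizontal) and second (vertical) relationships obtained during calibration; all numeric values are illustrative assumptions.

```python
H_GAIN, H_OFFSET = 0.125, 0.0   # degrees per pixel, degrees (illustrative)
V_GAIN, V_OFFSET = 0.150, 0.0

def eye_position(h, v):
    """Two-dimensional eye-position estimate from the horizontal distance
    "H" and vertical distance "V" between "PC" and "CR"."""
    horizontal = H_GAIN * h + H_OFFSET   # first (horizontal) relationship
    vertical = V_GAIN * v + V_OFFSET     # second (vertical) relationship
    return horizontal, vertical

x_deg, y_deg = eye_position(24.0, -10.0)
```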
  • Figure 4A also illustrates a visible light source 450 that may be turned "on" to cause the pupil 112 to constrict and turned "off" to allow the pupil 112 to redilate.
  • the visible light source 450 may be used to change or determine the size of the pupil.
  • the visible light source 450 may be controlled by the computing device 440. However, this is not a requirement. Nevertheless, when the visible light source 450 is "on” and when the visible light source 450 is “off” may be communicated to the computing device 440 for storage thereby.
  • the computing device 440 includes a system memory 22 (see Figure 12) configured to store image data captured by the camera 410.
  • the system memory 22 also stores other programming modules 37.
  • the other programming modules 37 may store one or more modules 442 configured to analyze the image data.
  • the modules 442 may include a PC-CR calibration module 443 that performs the calibration process (described above) to obtain at least one relationship (e.g., the first and second relationships) between at least one component (e.g., the horizontal and vertical components) of eye position and at least one distance (e.g., the horizontal and vertical distances "H" and "V") between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • the modules 442 may include a pupil center module 444 configured to detect the outer edge "E" of the pupil 112 and determine the location of the center "PC" of the pupil.
  • the pupil center module 444 records the location of the center "PC" of the pupil 112.
  • the pupil center module 444 may determine and record the size of the pupil 112.
  • the modules 442 may include a corneal reflection module 446 configured to determine the location of the corneal reflection "CR.”
  • the corneal reflection module 446 records the location of the corneal reflection "CR.”
  • the modules 442 may include a distance module 448 configured to determine at least one distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the location of the center "PC" of the pupil 112 determined by the pupil center module 444, and the location of the corneal reflections "CR" determined by the corneal reflection module 446.
  • the modules 442 may include an eye position module 448 configured to determine eye position (or direction of eye gaze) based on the at least one relationship determined by the PC-CR calibration module 443, and the distance(s) determined by the distance module 448.
  • the eye position module 448 may determine eye position (or direction of eye gaze) based on the first (horizontal) relationship determined by the PC-CR calibration module 443, the second (vertical) relationship determined by the PC-CR calibration module 443, the horizontal distance "H" between the location of the center "PC" of the pupil 112 and the location of the corneal reflection "CR," and the vertical distance "V" between the location of the center "PC" of the pupil 112 and the location of the corneal reflection "CR."
  • the modules 442 may include a pupil size module 490 configured to determine at least one relationship between pupil size and the location of the center "PC" of the pupil 112 determined by the pupil center module 444.
  • the pupil size module 490 may determine a third (horizontal) relationship between pupil size and a horizontal component of the location of the center "PC" of the pupil 112.
  • the pupil size module 490 may determine a fourth (vertical) relationship between pupil size and a vertical component of the location of the center "PC" of the pupil 112.
  • the pupil size module 490 may be configured to determine at least one relationship between pupil size and the distance(s) (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflections "CR" determined by the distance module 448.
  • the pupil size module 490 may determine a fifth (horizontal) relationship between pupil size and the horizontal distance "H" between the locations of the center "PC" of the pupil 112 and the corneal reflections "CR."
  • the pupil size module 490 may determine a sixth (vertical) relationship between pupil size and the vertical distance "V" between the locations of the center "PC" of the pupil 112 and the corneal reflections "CR."
  • the modules 442 may include a pupil size adjustment module 492 configured to correct the location of the center "PC" of the pupil 112 based on changes in pupil size using the at least one relationship (e.g., the third (horizontal) relationship, the fourth (vertical) relationship, and the like) between pupil size and the location of the center "PC" of the pupil 112 determined by the pupil size module 490.
  • the pupil size adjustment module 492 may be configured to correct the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflections "CR" (determined by the distance module 448) based on changes in pupil size using the at least one relationship (e.g., the fifth (horizontal) relationship, the sixth (vertical) relationship, and the like) determined by the pupil size module 490.
  • a method 500 may be performed to correct eye position data based on changes in pupil size.
  • the method 500 may be used to correct eye position determinations made using prior art methods that do not consider changes in pupil size when determining the location of the center "PC" of the pupil 112.
  • the method 500 may be used to correct eye position determinations made by the eye position module 448. Portions of the method 500 are described as being performed by the eyetracking device 400. However, in embodiments in which the computing device 440 is not a component of the eyetracking device 400, one or more of the portions of the method 500 described as being performed by the eyetracking device 400 may be performed by the eyetracking device 400 and the computing device 440 together or by the computing device 440 alone.
  • the eyetracking device 400 may perform the calibration process described above.
  • the PC-CR calibration module 443 may be executed in block 510 by the computing device 440 (see Figure 4A).
  • at least one relationship determined for an "average" eye may be used in place of the at least one relationship determined by performing the calibration process.
  • the at least one relationship (e.g., the first and second relationships) determined during the calibration process or the at least one relationship determined for the "average" eye may be associated with a reference pupil size.
  • the reference pupil size may be the pupil size observed when the at least one relationship was determined.
  • the eyetracker device 400 has the at least one relationship (e.g., the first and second relationships) between eye position and at least one distance (e.g., the horizontal and vertical distances "H" and "V") between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • the eyetracker device 400 may use the at least one distance (e.g., the horizontal and vertical distances "H" and "V") between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" to determine eye position.
  • this determination will not necessarily be correct because shifts in the location of the center "PC" of the pupil 112 caused by changes in pupil size have not yet been considered.
  • the visible light source 450 may be turned “on” and turned “off.”
  • When turned "on," the visible light source 450 is bright enough to drive the subject's pupillary light reflex to thereby cause the pupil 112 to constrict.
  • When the visible light source 450 is turned "off," the environmental lighting conditions may be dark enough to cause the pupil 112 to redilate.
  • several "on"/"off" cycles ("stimulus cycles") may be performed.
  • the state ("on" or "off") of the stimulus (the visible light source 450) is recorded. As the cycles are performed, images of the eye 100 are captured.
  • each of the images may be associated with the state of the stimulus when the image was captured.
  • the eyetracker device 400 determines the size of the pupil and the location of the center "PC" of the pupil 112 in each of the images captured in block 530.
  • these values may be associated with the state ("on" or "off") of the stimulus.
  • the pupil center module 444 may be executed in block 540 by the computing device 440 (see Figure 4A) to determine the location of the center "PC" of the pupil 112 and the size of the pupil.
  • the eyetracker device 400 determines at least one distance between the locations of the center "PC" of the pupil 112 and the corneal reflections "CR." In such embodiments, in block 540, the eyetracker device 400 also determines the location of the corneal reflection "CR."
  • the corneal reflection module 446 and the distance module 447 may be executed in block 540 by the computing device 440 (see Figure 4A).
  • In decision block 550, the eyetracking device 400 decides whether to repeat blocks 530 and 540 to collect more data.
  • If the decision in decision block 550 is "YES," in block 555 the eyetracker device 400 returns to block 530.
  • If the decision in decision block 550 is "NO," the eyetracking device 400 advances to block 560.
  • the pupil size module 490 may be executed. If one or more relationships between pupil size and the location of the center "PC" of the pupil 112 (instead of one or more relationships between pupil size and the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR") are to be determined by the method 500, in block 560, the eyetracker device 400 determines at least one relationship (e.g., the third (horizontal) relationship, the fourth (vertical) relationship, and the like) between pupil size and the location of the center "PC" of the pupil 112.
  • the relationship may be determined by plotting the locations of the center "PC" of the pupil 112 against the pupil diameters obtained in block 540, and fitting this plotted data with a mathematical relationship or function (e.g., a smooth curve like the ones illustrated in Figures 9C and 10A-10C).
  • the eyetracker device 400 determines at least one relationship (e.g., the fifth (horizontal) relationship, the sixth (vertical) relationship, and the like) between pupil size and the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
  • the relationship may be determined by plotting the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" against the pupil diameters obtained in block 540, and fitting this plotted data with a mathematical relationship or function (e.g., a smooth curve like the ones illustrated in Figures 9C and 10A-10C).
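A minimal sketch of this fitting step, assuming a straight line as the simplest "smooth curve" (the actual fits in Figures 9C and 10A-10D may use higher-order functions); all diameters and distances below are illustrative, not data from the patent.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of ys = slope * xs + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Pupil diameters and H_PC-CR values gathered over the light/dark cycles
# while gaze was held on a fixed target (illustrative numbers):
diameters = [3.0, 4.0, 5.0, 6.0, 7.0]      # pupil diameter, mm
h_pc_cr = [10.0, 10.4, 10.8, 11.2, 11.6]   # horizontal distance "H", px
slope, intercept = fit_linear(diameters, h_pc_cr)

def predicted_h(d):
    """Fifth (horizontal) relationship: expected "H" at pupil diameter d
    when the eye has not actually moved."""
    return slope * d + intercept
```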
  • the eyetracker device 400 may use pupil size to correct the at least one component (e.g., the horizontal and vertical components) of the location of the center "PC" of the pupil 112 before the location of the center "PC" of the pupil 112 is used to determine eye position thereby correcting the determination of eye position.
  • the eyetracker device 400 may analyze the images captured during the calibration process performed in optional block 510 and adjust the determinations of the locations of the center "PC" of the pupil 112 made during the calibration process to adjust for changes in pupil size (if any) that occurred during the calibration process.
  • the eyetracker device 400 may use pupil size to correct the at least one distance before it is used to determine eye position thereby correcting the determination of eye position.
  • the eyetracker device 400 may analyze the images captured during the calibration process performed in optional block 510 and adjust the determinations of the distance(s) (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" made during the calibration process to adjust for changes in pupil size (if any) that occurred during the calibration process.
  • the eyetracker device 400 captures images of the eye 100.
  • the subject may be fixated on or tracking one or more of the targets 432.
  • the eyetracker device 400 determines the location of the center "PC" of the pupil 112, the size of the pupil, and the location of the corneal reflection "CR" in the images captured in block 562.
  • the pupil center module 444 and the corneal reflection module 446 may be executed by the computing device 440 in block 564.
  • the eyetracker device 400 determines uncorrected eye positions for each of the images captured in block 562.
  • the size of the pupil 112 may have changed as the images were captured in block 562.
  • the eye position module 448 may be executed.
  • the eyetracker device 400 corrects the uncorrected eye positions determined in block 580.
  • the pupil size adjustment module 492 may be executed.
  • the eyetracker device 400 may correct the uncorrected eye positions using the relationship(s) determined in block 560. For each image captured in block 562, the relationship(s) determined in block 560 is/are used to estimate how much spurious indication of change of eye position has been created by changes in pupil diameter relative to the reference pupil size (or baseline). That amount is then added to or subtracted from, as is appropriate, the uncorrected eye position determined for the image.
  • the location of the center "PC" of the pupil 112 may be adjusted to account for the change in pupil size.
  • the eyetracker device 400 may adjust the at least one relationship determined in optional block 510 (or, alternatively, determined for an "average" eye) to account for the change in pupil size (i.e., from the reference pupil size to a first pupil size observed in the images captured in block 562). If the pupil size in any of the images is different from the first pupil size, the location of the center "PC" of the pupil 112 (and/or distance between the locations of the center "PC" of the pupil and the corneal reflection "CR") may be adjusted using the at least one relationship determined in block 560.
  • the eyetracker device 400 may correct the location of the center "PC" of the pupil 112 by doing the following:
  • the various embodiments described herein may be used in conjunction with any number of apparatuses for measuring gaze direction and are in no way limited to video-based eyetracking devices.
  • various embodiments described herein may be applied to and useful for any number of fields where accurate eyetracker and/or gaze direction measurements must be performed, including but not limited to, research experiments, market research, website usability testing, clinical devices for monitoring patients' gaze, and assistive technologies that allow individuals to "speak" to a computer by changing gaze direction.
  • the central fixation target "CFT" was located centrally on the display 425, and additional calibration targets were placed about ±3 degrees from the central fixation target along horizontal and vertical meridians.
  • the calibration targets included the left target "LT,” the top target “TT,” the right target “RT,” and the bottom target “BT.”
  • the central fixation target "CFT” was implemented as a large rectangle displayed in a central portion of the display 425.
  • Subjects sat in an examination chair (not shown) at a location that positioned their eyes at a distance of approximately 75 cm from the display 425.
  • a head-rest (not shown) was provided behind the subject's head (not shown), and the subjects were asked to lean back against the head-rest.
  • Other stabilizing devices (e.g., chin-rests) may also be used.
  • Data recorded by the computing device 440 included a location of the center "PC" of the pupil 112, a location of the corneal reflection "CR," and the diameter "D" of the pupil 112.
  • a digital indicator of stimulus behavior was also recorded.
  • whether the visible light source 450 was "on” or “off” was also recorded by the computing device 440.
  • a sampling rate of about 60 samples per second was used. Thus, about 60 images of the subjects' eyes were captured per second.
  • the visible light source 450 was turned "on" and "off" repeatedly (e.g., alternating between being "on" for approximately two seconds, and "off" for approximately two seconds) for approximately 16 seconds to provide a single cycle of visual stimulation, and image data was recorded.
  • the luminance of the visible light source 450 was approximately 54 cd/m² when the visible light source 450 was "on."
  • the luminance was approximately 0.001 cd/m² when the visible light source 450 was "off" (e.g., the environmental luminance, which may have been partially caused by the display 425).
  • periods of constriction were periods in which the change in pupil diameter over time was less than zero (d(pupil diameter)/dt < 0).
  • periods of redilation were periods in which the change in pupil diameter over time was greater than zero (d(pupil diameter)/dt > 0).
  • the smoothing illustrated in Figures 7, 8, and 11A-11C was performed using a local smoothing filter based on polynomial regression (linear regression in the case of a polynomial of degree one) with weights computed from a Gaussian density function.
  • For data records of length about 1000 points (16-second light/dark trials conducted with 60 samples (images) captured per second), the data proportion of 0.015 implies a filter array that is 15 coefficients long. As assessed directly with sinusoids, the filter has a steep response roll-off above about 2.5 Hz.
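A Gaussian-weighted local linear regression smoother of the kind described above can be sketched as follows. This is an illustration only: the exact SigmaPlot algorithm and its bandwidth handling may differ, and the Gaussian width used here is an assumption.

```python
import numpy as np

def gaussian_local_linear_smooth(y, span=15):
    """Local linear (degree-1 polynomial) regression smoother with
    Gaussian weights -- a sketch of the smoother described above.

    span: number of filter coefficients (15 for a data proportion of
          0.015 applied to a ~1000-point record, as in the text).
    """
    y = np.asarray(y, dtype=float)
    half = span // 2
    # Gaussian density evaluated across the window (width is assumed).
    offsets = np.arange(-half, half + 1)
    weights = np.exp(-0.5 * (offsets / (half / 2.0)) ** 2)
    out = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = max(0, i - half), min(len(y), i + half + 1)
        xi = np.arange(lo, hi) - i                # positions relative to i
        wi = weights[xi + half]
        # Weighted least-squares line; np.polyfit minimizes
        # sum((w * residual)**2), so pass the sqrt of the Gaussian weights.
        coeffs = np.polyfit(xi, y[lo:hi], 1, w=np.sqrt(wi))
        out[i] = np.polyval(coeffs, 0.0)          # smoothed value at i
    return out
```

Because the fit is locally linear, the smoother reproduces straight-line trends exactly while attenuating faster fluctuations.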
  • the smoothing may be performed using SigmaPlot software (available from Systat Software Inc.).
  • IGOR Pro software (available from Wavemetrics, Inc.) may be used to analyze the data to obtain approximate confidence interval estimates for the distance (e.g., the horizontal distance "H" or the vertical distance "V") versus pupil diameter "D." These confidence interval estimates approximated confidence intervals for pupil position versus pupil diameter "D." Calibration trials and light/dark trials were run at least twice on each subject and average data are generally discussed below and illustrated in drawings, except where examples of individual light/dark trials are presented (e.g., Figures 7, 8, 9A-9C, 10A-10D, and 11A-11C). A pupil size to image pixels calibration was performed using printed disks.
  • This calibration determined that, within an image of the eye 100, approximately 21 pixels correspond to a change in pupil diameter "D" of about one millimeter. Thus, if the image of the pupil 112 increased by 21 pixels, the diameter "D" of the pupil 112 increased by about one millimeter. Similarly, if the image of the pupil 112 decreased by 21 pixels, the diameter "D" of the pupil 112 decreased by about one millimeter.
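The pixel-to-millimeter conversion described above is a single scale factor; a sketch (constant and function names are illustrative):

```python
# Pixels-per-millimeter scale from the printed-disk calibration above.
PIXELS_PER_MM = 21.0

def pupil_diameter_mm(diameter_pixels):
    """Convert a pupil diameter measured in image pixels to millimeters."""
    return diameter_pixels / PIXELS_PER_MM
```

For example, the roughly 86-pixel pupil diameters reported below correspond to about 4.1 mm.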
  • Figures 7, 8, 9A-9C, 10A-10D, and 11A-11C illustrate data reported mainly for five of the subjects recruited. Data from the remaining two subjects showed substantial upper lid intrusion contaminating their vertical data. Their horizontal data were similar to data for the other five subjects; one case is included in the examples of horizontal data presented.
  • Figure 6 depicts data collected from a single subject.
  • examples of data are shown for three one second periods.
  • a horizontal axis "H1" is time and a vertical axis "V1" is eye (or pupil) position measured in pixels.
  • the subjects fixated on the left target "LT,” which was located at approximately 3 degrees left of center relative to the eye 100.
  • during a second period, at two seconds to three seconds along the horizontal axis "H1," the subjects fixated on a target (e.g., the central fixation target "CFT") located at approximately the center relative to the eye 100.
  • during a third period, at four seconds to five seconds along the horizontal axis "H1," the subjects fixated on the right target "RT," which was located at approximately 3 degrees right of center relative to the eye 100.
  • a horizontal position (measured in pixels) of the center "PC" of the pupil 112 is shown as upright triangles ("Δ")
  • a horizontal position (measured in pixels) of the corneal reflection "CR" is shown as inverted triangles ("∇")
  • a difference "H_PC-CR" (measured in pixels) between the horizontal positions of the center "PC" of the pupil 112 and the corneal reflection "CR" is also plotted.
  • Solid lines 600, 602, and 604 illustrate smoothing of the difference "H_PC-CR” for each of the first, second, and third fixation periods, respectively.
  • the solid lines 600, 602, and 604 are spline plots of the difference "H_PC-CR” for the first, second, and third periods, respectively, determined using a simple five-bin filter having weights of 0.3152, 0.2438, and 0.0986, and a corner frequency of approximately 9 Hz.
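The five-bin filter described above can be written out directly. The symmetric arrangement of the three quoted weights (center, inner pair, outer pair) is an assumption based on the filter being described as a simple smoothing filter; note the coefficients sum to 1, so the mean level is preserved.

```python
import numpy as np

# Symmetric five-bin kernel assembled from the three weights given above
# (outer pair, inner pair, center); the coefficients sum to 1.0.
kernel = np.array([0.0986, 0.2438, 0.3152, 0.2438, 0.0986])

def five_bin_smooth(x):
    """Apply the five-bin smoothing filter (interior samples only)."""
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="valid")
```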
  • An inset plot 610 is a plot of an average value of the difference "H_PC-CR" (measured in pixels) for each of the three fixation periods plotted against the direction of gaze (i.e., 3 degrees left of center, center, and 3 degrees right of center). The average values are plotted using a capital letter "I" inside a circle. The three average values were fitted by a linear regression (illustrated as dashed line 612) having a slope of about -1.81 pixels/degree and an r² value of about 0.997. The standard deviation of the difference "H_PC-CR" for these three fixation periods was, on average, about 0.38 pixels for raw data and about 0.20 pixels for smoothed data.
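The calibration regression described above can be inverted to convert a measured difference into a gaze direction. In this sketch, the three H_PC-CR means are hypothetical values chosen only to be consistent with the reported slope of about -1.81 pixels/degree; the real calibration would use the measured averages.

```python
import numpy as np

# Mean H_PC-CR (pixels) at each fixation target, as in the inset plot.
gaze_deg = np.array([-3.0, 0.0, 3.0])   # left target, center, right target
h_pc_cr = np.array([5.4, -0.1, -5.4])   # hypothetical measured means

# Fit the calibration line: H_PC-CR = slope * gaze + intercept.
slope, intercept = np.polyfit(gaze_deg, h_pc_cr, 1)

def gaze_from_difference(h):
    """Invert the calibration line to estimate gaze direction (degrees)."""
    return (h - intercept) / slope
```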
  • given the difference "H_PC-CR," the gaze direction in degrees may be determined using this linear equation.
  • the first (horizontal) relationship may be expressed as this linear equation obtained using linear regression analysis.
  • the pupil diameter for the first, second, and third fixation periods was about 86.1 ± 0.7 pixels, about 86.6 ± 0.9 pixels, and about 85.5 ± 1.0 pixels, respectively, or approximately 4.10 ± 0.05 mm, on average, for this subject.
  • the average standard deviation was reduced to about 0.6 pixels or about 0.03 mm.
  • Figure 6 illustrates the first (horizontal) relationship (in this case a linear relationship) between eye position and the horizontal distance "H" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." While performed only for horizontal pupil displacement, Figure 6 may be characterized as illustrating the results of a calibration process for the subject. Further, pupil size during the calibration process was substantially constant (approximately 4.10 ± 0.05 mm). Thus, the reference pupil size for this calibration process was approximately 4.10 ± 0.05 mm.
  • Figure 7 illustrates data for the same subject during 16 seconds of visual stimulation, while fixating the central fixation target "CFT.”
  • the 16 second trial contains four "on" periods during which the visible light source 450 was "on" and four "off" periods during which the visible light source 450 was "off."
  • the same smoothing used in Figure 6 has been applied to the data illustrated in Figure 7.
  • An upper graph 710 includes a plot 712 of pupil diameter observed during the 2-sec-on / 2-sec-off visual stimulation.
  • a plot 714 (which appears as a square wave at the bottom of the upper graph 710) indicates stimulus timing.
  • the stimulus was "on” when the plot 714 has a value of about 50 pixels and the stimulus was "off” when the plot 714 has a value significantly less than about 50 pixels.
  • pupil diameter (illustrated by the plot 712) varied from about 56.8 pixels to about 80.0 pixels, or from about 2.7 mm to about 3.8 mm (which corresponds to an amplitude of constriction of about 1.1 mm).
  • a center graph 720 illustrates a plot 722 of the horizontal difference "H_PC-CR" observed during the visual stimulation.
  • a bottom graph 730 illustrates a plot 732 of pupil "velocity” (rate of change of pupil diameter with respect to time).
  • a curve or line 734 is a plot of smoothed pupil "velocity” data (e.g., a spline plot of the pupil "velocity” data using a simple five-bin filter having weights of 0.3152, 0.2438, and 0.0986, and a corner frequency of approximately 9 Hz).
  • Figures 7 and 8 illustrate the same general pattern of results. Specifically, on average, pupil diameter varied from about 76.9 pixels to about 119.1 pixels, or about 3.7 mm to about 5.7 mm in this eye (which corresponds to an amplitude of constriction of about 2.0 mm).
  • the horizontal position of the center "PC" of the pupil 112 relative to the horizontal position of the corneal reflection "CR" varied by approximately 1.4 pixels, amounting to about 0.78 degrees of apparent (or pseudo) horizontal eye movement for this subject.
  • Figures 7 and 8 illustrate that the visible light source 450 caused changes in pupil size. Further, while the eye 100 remained fixated on the central fixation target "CFT," the eyetracking device 400 erroneously indicated the eye had moved (because no correction for pupil size changes had been performed). Thus, Figures 7 and 8 provide an example of the existence of the third (horizontal) relationship between pupil size and the horizontal component of the location of the center "PC" of the pupil 112 of the right eye.
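The conversion from pixels of apparent displacement to degrees of pseudo eye movement is a single division by the calibration sensitivity. A worked sketch, assuming a sensitivity of about 1.8 pixels/degree (consistent with the regression slope reported for Figure 6):

```python
# Apparent horizontal shift of the pupil center relative to the corneal
# reflection over the trial, in pixels (value reported in the text).
shift_px = 1.4

# Calibration sensitivity in pixels per degree (assumed value).
px_per_deg = 1.8

# Apparent (pseudo) horizontal eye movement, in degrees.
pseudo_movement_deg = shift_px / px_per_deg  # about 0.78 degrees
```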
  • In Figures 9A-9C, a graph 910 along the left-hand side of Figure 9A, a graph 920 along the left-hand side of Figure 9B, and a graph 930 along the left-hand side of Figure 9C each illustrate horizontal differences "H_PC-CR" plotted against pupil diameter "D."
  • a graph 940 along the right-hand side of Figure 9A, a graph 950 along the right-hand side of Figure 9B, and a graph 960 along the right-hand side of Figure 9C each illustrate vertical differences "V_PC-CR" plotted against pupil diameter "D.”
  • the graphs 910 and 940 of Figure 9A illustrate data collected during pupil constriction and the graphs 920 and 950 of Figure 9B illustrate data collected during pupil redilation.
  • each stimulus "on" period contained a relatively brief (approximately 0.5 sec) constriction response, followed by some redilation, and each stimulus "off" period contained further redilation.
  • a single pair of stimulus "on” and stimulus “off” periods contains one relatively brief constriction period and one longer redilation period.
  • the data in Figures 9A-9C were pooled from two 16-second trials, and include eight constriction periods and eight redilation periods.
  • the data points from each constriction or redilation period are connected by lines to indicate data points obtained from a single constriction or redilation.
  • the graph 930 of Figure 9C illustrates an average relationship (e.g., the fifth (horizontal) relationship) between pupil diameter "D" and the horizontal difference "H_PC-CR” during constriction as a thick line "HC1.”
  • Confidence intervals (e.g., 95% confidence intervals) for the thick line "HC1" are illustrated as thin lines.
  • the graph 930 illustrates an average relationship (e.g., the fifth (horizontal) relationship) between pupil diameter "D" and the horizontal difference "H_PC-CR” during redilation as a dashed thick line “HR1.”
  • Confidence intervals (e.g., 95% confidence intervals) for the dashed thick line "HR1" are illustrated as thin dashed lines.
  • the graph 960 illustrates an average relationship (e.g., the sixth (vertical) relationship) between pupil diameter "D” and the vertical difference "V_PC-CR" during constriction as a thick line "VC1.”
  • Confidence intervals (e.g., 95% confidence intervals) for the thick line "VC1" are illustrated as thin lines.
  • the graph 960 illustrates an average relationship (e.g., the sixth (vertical) relationship) between pupil diameter "D” and the vertical difference "V_PC-CR" during redilation as a dashed thick line “VR1.”
  • Confidence intervals (e.g., 95% confidence intervals) for the dashed thick line "VR1" are illustrated as thin dashed lines.
  • Figures 10A-10D illustrate data collected from four other subjects plotted in the same format as the graphs 930 and 960 of Figure 9C.
  • Figure 10A includes a graph 1010 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line “HC2”) and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR2”), and a graph 1050 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC2") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR2").
  • the graphs 1010 and 1050 also include thin lines and thin dashed lines illustrating confidence intervals.
  • Figure 10B includes a graph 1020 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC3") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR3"), and a graph 1060 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC3") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR3").
  • the graphs 1020 and 1060 also include thin lines and thin dashed lines illustrating confidence intervals.
  • Figure 10C includes a graph 1030 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC4") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR4"), and a graph 1070 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC4") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR4").
  • the graphs 1030 and 1070 also include thin lines and thin dashed lines illustrating confidence intervals.
  • Figure 10D includes a graph 1040 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC5") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR5"), and a graph 1080 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC5") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR5").
  • the graphs 1040 and 1080 also include thin lines and thin dashed lines illustrating confidence intervals.
  • the confidence intervals in Figures 9C and 10A-10D may be determined using IGOR Pro software.
  • the absolute values of the ordinate scale may differ substantially from one subject to another, reflecting differences in pupil position relative to the corneal reflection "CR" in different eyes.
  • the average relationships for three of the subjects had positive slopes
  • the average relationships for one of the subjects had essentially zero slope
  • the average relationships for one of the subjects had a negative slope.
  • for the vertical difference "V_PC-CR," hysteresis was about 0.09 ± 0.05 degrees (ranging from about 0.04 degrees to about 0.18 degrees). Changes in pupil size observed during the present experiments resulted in a range of vertical "pseudo eye movements" of about 0.54 ± 0.29 degrees (ranging from about 0.02 degrees to about 0.85 degrees).
  • each eye has an idiosyncratic pupil displacement (or location shift) caused by constriction and redilation
  • the direction and amplitude of the apparent (or pseudo) eye movement will also be idiosyncratic to some degree.
  • the horizontal pseudo eye movement will have the same sign (or be in the same direction) across subjects because the fifth (horizontal) relationships determined for the subjects studied each tended to have a negative slope when the horizontal differences "H_PC-CR" were plotted against pupil diameter (as in the graphs 930, 1010, 1020, 1030, and 1040 of Figures 9C and 10A-10D).
  • the size of the overall effect for the subjects in the present experiments was on average about 0.81 degrees (horizontal) and about 0.54 degrees (vertical), with the largest cases being about 1.22 degrees (horizontal) and about 0.85 degrees (vertical).
  • the negative slope of the horizontal plots implies that larger pupils had centers more temporal than smaller pupils.
  • Figures 11A-11C illustrate three examples of results obtained using this approach.
  • Figure 11A depicts a graph 1110, which illustrates data obtained from a first subject.
  • Figure 11B depicts a graph 1120, which illustrates data obtained from a second subject.
  • Figure 11C depicts a graph 1130, which illustrates data obtained from a third subject.
  • each of graphs 1110, 1120, and 1130 illustrates data obtained from a different subject.
  • Each of these subjects performed two 16-second light/dark trials (in which the visible light source 450 alternated between being "on" for two seconds and "off" for two seconds) during which the subject fixated on the central fixation target "CFT."
  • a function (an example of the fifth (horizontal) relationship) relating the horizontal difference "H_PC-CR" to pupil diameter was determined for one of the two trials and the resulting function was used to correct the data collected during the other trial.
  • the subject's eye was assumed to remain stationary during the light/dark trials.
  • in the data of graph 1130, several small saccadic eye movements were apparent, and those data segments were omitted when determining the function.
  • pupil position was assumed to be a single-valued function of pupil diameter. In other words, if the eye 100 does not move, it is assumed that the center "PC" of the pupil 112 will always be in the same location for a particular pupil diameter, independent of history. Thus, hysteresis was ignored.
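The train-on-one-trial, correct-the-other scheme described above can be sketched as follows. The polynomial fit is an assumption (the text does not specify the functional form used for the position-versus-diameter function), and all names are illustrative.

```python
import numpy as np

def fit_position_vs_diameter(diameters, positions, degree=2):
    """Fit pupil position as a single-valued function of pupil diameter
    (hysteresis ignored). The polynomial form and degree are assumptions;
    the text does not specify the functional form that was used."""
    return np.polyfit(diameters, positions, degree)

def correct_positions(coeffs, diameters, positions, reference_diameter):
    """Subtract the diameter-predicted pseudo-movement from a trial,
    re-referenced to the position predicted at the reference diameter."""
    predicted = np.polyval(coeffs, diameters)
    baseline = np.polyval(coeffs, reference_diameter)
    return positions - (predicted - baseline)
```

If the eye truly remains stationary and pupil position really is a single-valued function of diameter, this correction flattens the position record; real data retain the residual variability (and any genuine saccades) reported below.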
  • in graphs 1110, 1120, and 1130, thick lines 1112, 1122, and 1132, respectively, are plots of raw data, and thinner lines 1114, 1124, and 1134, respectively, are plots of corrected data.
  • the standard deviation of the raw data plotted in graphs 1110, 1120, and 1130 was reduced by about 55%, about 54%, and about 27%, respectively, as a result of the correction (using the function).
  • the reduction in standard deviation ranged from about 25% to about 55%, with an average of about 39%.
  • the axis showing the greatest variation of horizontal difference "H_PC-CR" with pupil size was selected to assess the possibility of correcting the eye position.
  • the graph 1130 illustrates that the correction process does not affect the small saccadic eye movements present. The reason is that pupil diameter does not change during small saccadic eye movements, the dynamics of the former being much slower than those of the latter.
  • the correlation was 0.62 ± 0.21 (standard deviation). Further, the correlation was always positive, and did not depend on whether the trial was the one selected for deriving the horizontal difference "H_PC-CR"/pupil diameter relationship, or the one corrected using the relationship.
  • a more successful correction might include consideration of the dynamics of the relationship between pupil diameter and pupil position.
  • Figure 12 is a diagram of hardware and an operating environment in conjunction with which implementations of the one or more modules 442 illustrated in Figure 4B may be practiced.
  • the description of Figure 12 is intended to provide a brief, general description of suitable computer hardware and a suitable computing environment in which implementations may be practiced.
  • implementations are described in the general context of computer-executable instructions, such as program modules, being executed by a computer, such as a personal computer.
  • program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
  • implementations may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Implementations may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
  • the exemplary hardware and operating environment of Figure 12 includes a general-purpose computing device in the form of a computing device 12.
  • Each of the one or more modules 442 illustrated in Figure 4B may be implemented using one or more computing devices like the computing device 12.
  • the computing device 440 may be implemented by computing devices substantially similar to the computing device 12.
  • the computing device 12 includes the system memory 22, a processing unit 21, and a system bus 23 that operatively couples various system components, including the system memory 22, to the processing unit 21.
  • the computing device 12 may be a conventional computer, a distributed computer, or any other type of computer.
  • the system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures.
  • the system memory may also be referred to as simply the memory, and includes read only memory (ROM) 24 and random access memory (RAM) 25.
  • a basic input/output system (BIOS), containing the basic routines that help to transfer information between elements within the computing device 12, may be stored in the ROM 24.
  • the computing device 12 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM, DVD, or other optical media.
  • the hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical disk drive interface 34, respectively.
  • the drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computing device 12. It should be appreciated by those skilled in the art that any type of computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, USB drives, digital video disks, Bernoulli cartridges, random access memories (RAMs), read only memories (ROMs), and the like, may be used in the exemplary operating environment.
  • the hard disk drive 27 and other forms of computer-readable media (e.g., the removable magnetic disk 29, the removable optical disk 31, flash memory cards, USB drives, and the like) accessible by the processing unit 21 may be considered components of the system memory 22.
  • a number of program modules may be stored on the hard disk drive 27, magnetic disk 29, optical disk 31, ROM 24, or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38.
  • a user may enter commands and information into the computing device 12 through input devices such as a keyboard 40 and pointing device 42.
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus 23, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB).
  • a monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48.
  • computers typically include other peripheral output devices (not shown), such as speakers and printers.
  • the input devices described above are operable to receive user input and selections. Together the input and display devices may be described as providing a user interface.
  • the computing device 12 may operate in a networked environment using logical connections to one or more remote computers, such as remote computer 49. These logical connections are achieved by a communication device coupled to or a part of the computing device 12 (as the local computer). Implementations are not limited to a particular type of communications device.
  • the remote computer 49 may be another computer, a server, a router, a network PC, a client, a memory storage device, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computing device 12.
  • the remote computer 49 may be connected to a memory storage device 50.
  • the logical connections depicted in Figure 12 include a local-area network (LAN) 51 and a wide-area network (WAN) 52.
  • When used in a LAN-networking environment, the computing device 12 is connected to the local area network 51 through a network interface or adapter 53, which is one type of communications device. When used in a WAN-networking environment, the computing device 12 typically includes a modem 54, a type of communications device, or any other type of communications device for establishing communications over the wide area network 52, such as the Internet.
  • the modem 54 which may be internal or external, is connected to the system bus 23 via the serial port interface 46.
  • program modules depicted relative to the personal computing device 12, or portions thereof, may be stored in the remote computer 49 and/or the remote memory storage device 50. It is appreciated that the network connections shown are exemplary and other means of and communications devices for establishing a communications link between the computers may be used.
  • the computing device 12 and related components have been presented herein by way of particular example and also by abstraction in order to facilitate a high-level view of the concepts disclosed.
  • the actual technical design and implementation may vary based on the particular implementation while maintaining the substance of the concepts disclosed herein.
  • Each of the one or more modules 442 may be implemented using software components that are executable by the processing unit 21 and when executed perform the functions described above. Further, the method 500 may be implemented as computer executable instructions that are executable by the processing unit 21. Such instructions may be encoded on one or more non-transitory computer-readable mediums for execution by one or more processing units.
  • the foregoing described embodiments depict different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality.
  • any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved.
  • any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components.
  • any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Medical Informatics (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

A method of determining eye position. Eye position is often determined based on a relationship between eye position and a distance between the center of the pupil and a corneal reflection. However, the location of the center of the pupil changes when the size of the pupil changes, even if the eye has not moved. Spurious indications of eye movement caused by changes in pupil size may be avoided or reduced by considering pupil size when determining eye position. In one embodiment, the method determines at least one relationship between pupil size and the location of the center of the pupil. In another embodiment, the method determines at least one relationship between pupil size and the distance between the center of the pupil and the corneal reflection. One or more of these relationships involving pupil size may be used when determining eye position to obtain a more accurate result.

Description

METHODS OF IMPROVING ACCURACY OF VIDEO-BASED EYETRACKERS

CROSS REFERENCE TO RELATED APPLICATION(S)

This application claims the benefit of U.S. Provisional Application No. 61/259,028, filed November 6, 2009, which is incorporated herein by reference in its entirety.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
This invention was made with U.S. government support under grant numbers 5T35EY00707917 and 5R03EY01454903 awarded by the National Eye Institute of the National Institutes of Health. The U.S. Government has certain rights in the invention.
BACKGROUND OF THE INVENTION
Field of the Invention
The present invention is directed generally to methods of improving the accuracy of an eyetracking device, and more particularly to methods of improving the accuracy of a video-based eyetracking device.
Description of the Related Art
All publications herein are incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference. The following description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.

Referring to Figures 1A and 1B, a typical human eye 100 includes, among other structures, a cornea 110, a pupil 112, and an iris 114. The cornea 110 is a transparent front part of the eye 100 that covers both the iris 114 and the pupil 112. The cornea 110 is somewhat reflective and will reflect some of the light shone on it. The image of a light source formed by reflection from the outer surface of the cornea 110 (or corneal first surface "S") is referred to as a "first Purkinje image."
The pupil 112 and the iris 114 are positioned behind the cornea 110. The pupil 112 is an opening in the iris 114 having a generally circular outer edge "E" (defined by an inner edge "IE" of the iris 114) that allows light to enter the interior of the eye 100. The light that enters the eye 100 encounters a retina 116, which is a layer of tissue lining a portion of the inside of the eye. The iris 114 is connected to muscles (not shown) that change the shape of the iris to thereby change a diameter "D" and size of the pupil 112. When the diameter "D" is decreased, the pupil 112 is described as having contracted. On the other hand, when the diameter "D" of the pupil 112 increases, the pupil 112 is described as having dilated. As the size of the pupil 112 increases or decreases, the amount of light reaching the retina 116 increases or decreases, respectively.
As the eye 100 rotates within its socket (not shown), the location of the pupil 112 changes. The location of the pupil 112 may be used to determine the orientation of the eye 100 or eye position (also referred to as gaze direction). Eye position may be used to determine where or at what the subject is looking. For example, eye movement information may be collected and used in research experiments, market research, website usability testing, clinical devices for monitoring patients' gaze, and assistive technologies that allow individuals to "speak" to a computing device by changing gaze direction.
Currently, gaze direction (or eye position) is typically measured or studied using video-based eyetracking devices (also referred to as video-based eyetrackers). Video-based eyetracking devices provide a relatively simple and cost-effective way to obtain two-dimensional (vertical and horizontal) eye position information. Figure 2 is a schematic illustrating some of the components of a conventional video-based eyetracking device 200. As illustrated in Figure 2, the video-based eyetracking device 200 includes a camera 210 (e.g., a digital video camera) positioned in front of the eye 100 to capture images of the pupil 112 as it changes position. The video-based eyetracking device 200 may include a separate camera for each of the subject's eyes.
The video-based eyetracking device 200 includes one or more infrared ("IR") light sources 220 that each shines an IR light beam 222 onto the eye 100. The video-based eyetracking device 200 may include one or more IR light sources for each of the subject's eyes. At least a portion of the IR light beam 222 creates an image (referred to as a corneal reflection "CR" illustrated in Figures 3A and 3B) of the IR light sources 220 by reflection at the corneal surface "S." The corneal reflection "CR" is visible to the camera 210. In Figures 3A and 3B, the corneal reflection "CR" is illustrated as a small bright dot. Returning to Figure 2, the IR light sources 220 may also illuminate the iris 114 (or the retina 116) for detection by the video-based eyetracking device 200. In other words, the IR light sources 220 both illuminate the front of the eye 100 and provide a reflected image (the corneal reflection "CR" illustrated in Figures 3A and 3B) that is detectable by the camera 210. Infrared light is used instead of visible light because visible light may cause the pupil 112 to contract.
Video-based eyetracking devices are either "dark-pupil" or "bright-pupil" in nature. In a dark-pupil system, the iris 114 is illuminated by the IR light sources 220 from a direction off the optic axis (as illustrated in Figure 2), so the pupil 112 appears dark relative to the iris. In a bright-pupil system, the IR light sources 220 and direction of view are positioned on the optic axis, and the resulting reflection from the retina 116 and a choroid (not shown) creates "redeye," causing the pupil 112 to appear brighter than the iris 114.
The video-based eyetracking device 200 may include a display 225 (e.g., a conventional computer monitor) configured to display visual targets 230 to the eye 100. Alternatively, the display 225 may be a component separate from the video-based eyetracking device 200. Because the subject's left and right eyes typically operate together, a single display may be positioned to be viewable by both of the subject's eyes. Alternatively, if the video-based eyetracking device 200 is configured to track only a single eye, the display 225 is positioned to be viewable by the eye being tracked. As the display 225 displays one or more of the visual targets 230 to the subject, the subject is asked to fixate on or track the target(s) with the subject's eye(s) as the camera 210 captures images of the pupil 112 and the corneal reflection "CR" of one or both eyes. These images are used to determine the position(s) of the subject's eye(s) when fixating on or tracking the target(s).
Eye position within each image may be determined based on at least one relationship between the position of the eye 100 and the locations of a center "PC" of the pupil 112 and the corneal reflection "CR." A typical human subject has a left eye and a right eye, and the relationship is usually different for each of the subject's eyes. For ease of illustration, how the relationship is determined will be described with respect to the eye 100 (e.g., the right eye). However, the same process may be used to determine the relationship for a different eye (e.g., the left eye).
According to the relationship between eye position and the locations of the center "PC" of the pupil 112 and the corneal reflection "CR," for the eye 100, the location of the center "PC" of the pupil 112 shifts approximately linearly with changes in eye position within about ±30 degrees relative to straight-ahead, while the location of the corneal reflection "CR" shifts considerably less. For example, turning to Figure 3B, when the subject is not looking straight ahead, but instead looks off to one side, the locations of the iris 114 and the pupil 112 both shift, but the corneal reflection "CR" does not shift as much as the iris 114 and the pupil 112. Specifically, the location of the center "PC" of the pupil 112 may shift linearly and the location of the corneal reflection "CR" may shift linearly or otherwise but by a smaller shift amount with horizontal changes in eye position within about ±30 degrees relative to straight-ahead. That difference in shift amount lies at the heart of how images captured by video-based eyetracking devices, like the video-based eyetracking device 200, are used to determine eye position.
The difference in shift amount may be mapped (e.g., using a mathematical formula, lookup table, data structure, and the like) to eye position. For example, a mathematical relationship (e.g., a function) or model may be created in which the difference in shift amount is an input variable used to determine eye position as an output variable. The video-based eyetracking device 200 includes or is connected to a computing device 240 that stores and analyzes the images captured by the camera 210. The computing device 240 may determine the relationship (e.g., a mathematical relationship) between eye position and the difference in shift amount (between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR"). A calibration process is performed in which the difference in shift amount is determined when the eye 100 is looking at a set of known predetermined target locations. The relationship between the difference in shift amount and eye position is then formulated based on the differences in shift amount observed when the eye 100 was gazing towards the known predetermined locations.
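By way of a non-limiting illustration, the calibration mapping just described may be sketched as a least-squares fit. The affine (linear-plus-offset) model and the function names below are illustrative assumptions for a minimal sketch, not a description of any particular device's implementation:

```python
import numpy as np

def fit_gaze_calibration(diffs, angles):
    """Fit an affine map from the pupil-center-minus-corneal-reflection
    difference (in pixels) to gaze angle (in degrees), using differences
    observed while the eye fixated known calibration target locations.

    diffs  -- (N, 2) array of (dx, dy) differences in shift amount
    angles -- (N, 2) array of known (horizontal, vertical) gaze angles
    """
    diffs = np.asarray(diffs, dtype=float)
    # Affine design matrix: one row [dx, dy, 1] per calibration sample.
    A = np.column_stack([diffs, np.ones(len(diffs))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(angles, dtype=float), rcond=None)
    return coeffs  # shape (3, 2): rows weight dx, dy, and the offset

def gaze_from_diff(coeffs, dx, dy):
    """Estimate (horizontal, vertical) gaze angle from one observed difference."""
    return np.array([dx, dy, 1.0]) @ coeffs
```

In practice the fit would be performed once per eye, since the relationship usually differs between a subject's left and right eyes.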
For small head movements, both translational and rotational, if fixation is maintained on the same relatively distant target, the images of the front of the globe-shaped eye 100 captured by the camera 210 depict the location of the pupil 112 as having shifted a little but also depict an unchanged relationship between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." Thus, the eye position estimate is relatively stable even when the subject's head moves.
The images of the eye 100 captured by the camera 210 may include a two-dimensional array of pixels. Moving from one edge of an image of the eye 100 toward an opposite edge, one encounters the iris 114 followed by the pupil, which in turn is followed by the iris again. Thus, in a dark-pupil system, one encounters bright pixels (pixels depicting the iris 114) followed by dark pixels (pixels depicting the pupil 112), which in turn are followed by bright pixels (pixels depicting the iris), and in a bright-pupil system one encounters dark pixels (pixels depicting the iris) followed by bright pixels (pixels depicting the pupil), which in turn are followed by dark pixels (pixels depicting the iris). By determining transition locations (locations where the pixels switch from dark to bright or vice versa), the computing device 240 identifies a series of chords (or linear arrays of pixels) extending across the image of the pupil 112 with the ends of each chord lying on the outer edge "E" of the pupil. The set of transition points (or ends of the chords) can be fitted with a circle or an ellipse, and the two-dimensional center "PC" of the pupil 112 determined to be the center of the circle or ellipse. Alternatively, the average x-coordinate and y-coordinate values of the chords in the two-dimensional array of pixels in the image may be used as an estimate of the location of the center "PC" of the pupil 112. Pupil shape is typically approximately elliptical. See Wyatt infra. Pupil diameter can be obtained from the fitted circle or ellipse (e.g., a horizontal extent or a vertical extent) depending on the approach.
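By way of a non-limiting illustration, the chord-based pupil-center estimate may be sketched as follows. The sketch assumes a thresholded binary image and uses an algebraic (Kasa) circle fit to the chord endpoints; a practical tracker would add noise rejection and possibly an ellipse fit instead:

```python
import numpy as np

def pupil_center_from_mask(mask):
    """Estimate the pupil center "PC" and diameter "D" from a binary
    image in which True marks pupil pixels (e.g., a thresholded
    dark-pupil image).

    Each row's first and last pupil pixels are taken as chord endpoints
    lying on the outer edge "E"; a circle is then fitted to those
    transition points by linear least squares (Kasa fit).
    """
    xs, ys = [], []
    for row in range(mask.shape[0]):
        cols = np.flatnonzero(mask[row])
        if cols.size >= 2:                     # chord across the pupil
            xs.extend([cols[0], cols[-1]])     # dark/bright transitions
            ys.extend([row, row])
    x = np.asarray(xs, dtype=float)
    y = np.asarray(ys, dtype=float)
    # Circle (x-a)^2 + (y-b)^2 = r^2 rewritten as a linear system:
    # x^2 + y^2 = 2a*x + 2b*y + c, with c = r^2 - a^2 - b^2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)
    return (a, b), 2.0 * radius                # center (x, y), diameter
```

The same endpoints could instead be averaged directly, or fitted with an ellipse, as the description notes.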
Although video-based eyetracking devices are simple and useful in many circumstances, there is at least one potential problem with using them to measure small changes in eye position: if the diameter of the pupil 112 changes, the location of the center "PC" of the pupil does not remain fixed even if the eye 100 has not moved. As discussed above, the location of the center "PC" of the pupil 112 may be determined by using edge detection techniques to locate the outer edge "E" of the pupil 112. When the pupil 112 changes size, the position of the pupil 112 in the iris 114 changes as a result of interactions between the muscles of the iris and the structure of the iris. Even if the subject continues to look at the same target and no head or eye movement occurs, the location of the pupil 112 can shift horizontally or vertically as a result. Since the eye 100 has not moved, the corneal reflection "CR" has not moved, but the pupil position has changed. As a result, the relationship between the location of the pupil 112 and the location of the corneal reflection "CR" changes and the computing device 240 interprets this as a change in eye position or gaze direction, even though none has occurred. This may cause the computing device 240 to incorrectly determine the location of the center "PC" of the pupil 112 when the size of the pupil 112 changes. Further, the computing device 240 may detect "pseudo eye movements," which are movements caused by changes in the diameter "D" of the pupil 112 and not by actual movement of the pupil.
The pupil 112 may change size for different reasons. For example, when the amount of light reaching the eye 100 from the environment increases, the pupil 112 gets smaller as a result of a "pupillary light reflex." In dark conditions, the size of the pupil 112 will typically increase (i.e., the pupil will typically dilate). In contrast, in light conditions, the size of the pupil 112 will typically decrease (i.e., the pupil will typically contract). Thus, the size of the pupil 112 will change when the lighting is changed from light to dark conditions and vice versa. Shifts in the location of the center "PC" of the pupil 112 between light and dark conditions can be as large as several tenths of a millimeter. Wyatt, H., Vision Res., vol. 35, no. 14, pp. 2021-2036 (1995). The direction of the shift in the location may differ from one subject to another. Id. Further, the direction of shift may be different for each of the eyes of the same subject. Id.
Pupil size may also change based on the distance of an object from the subject's face. In other words, visual accommodation may cause changes in pupil size. For example, when a subject looks at an object near the subject's face, the pupil 112 may contract. Further, pupil size may change with changes in the subject's emotional state.
If the diameter "D" (see Figure 1B) of the pupil 112 remains constant, a one millimeter shift of the location of the center "PC" of the pupil 112 corresponds to approximately ten degrees of eye movement (or eye rotation). Therefore, shifts in the location of the center "PC" of the pupil 112 caused by light and dark conditions (which may cause shifts of approximately 0.1 millimeters or 0.2 millimeters in the location of the center of the pupil) may correspond to approximately one or two degrees of eye movement. In other words, shifts in the location of the center "PC" of the pupil 112 associated with changes of pupil size can generate spurious indications of changes in gaze direction. Therefore, a need exists for methods of determining a relationship between change in pupil size and change in pupil position for a particular subject, and methods of using that information to correct for the spurious indications of changes in gaze direction. The present application provides these and other advantages as will be apparent from the following detailed description and accompanying figures.
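By way of a non-limiting illustration, the magnitude of the spurious indication follows directly from the approximately ten-degrees-per-millimeter figure cited above (the constant and function name are illustrative only):

```python
# Approximate conversion cited above: a 1 mm pupil-center shift looks
# like roughly 10 degrees of eye rotation to a video-based eyetracker.
DEG_PER_MM = 10.0

def spurious_rotation_deg(center_shift_mm):
    """Apparent (spurious) eye rotation, in degrees, produced by a
    pupil-center shift caused by pupil size change rather than by
    actual eye movement."""
    return center_shift_mm * DEG_PER_MM
```

Thus the 0.1 to 0.2 mm shifts observed between light and dark conditions map to apparent rotations of about one to two degrees.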
SUMMARY OF INVENTION
Aspects of the invention include a computer implemented method for use with a camera and one or more light sources. The camera is positioned to capture images of an eye (e.g., a human eye) that has a cornea and a pupil. The one or more light sources are each positioned to illuminate the eye and at least one of the light sources generates a corneal reflection on the cornea of the eye. The method includes obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection. By way of a non-limiting example, the first relationship may be recorded in memory or determined by performing a first calibration process. The method includes determining a second relationship between the size of the pupil and the distance between the location of the center of the pupil and the location of the corneal reflection. By way of a non-limiting example, the second relationship may be determined by performing a second calibration process during which the pupil is contracted (e.g., by a bright light) and allowed to at least partially redilate. The method also includes capturing an image of the eye with the camera, and detecting an observed location of the center of the pupil, an observed size of the pupil, and an observed location of the corneal reflection in the image captured. An observed distance between the observed locations of the center of the pupil and the corneal reflection is determined. Then, a position of the eye is determined based on the observed distance, the observed size of the pupil, the first relationship, and the second relationship. The position of the eye may be determined by determining a first position of the eye based on the observed distance, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the observed distance, and the second relationship.
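By way of a non-limiting illustration, the second relationship and its use may be sketched as follows. The sketch assumes the relationship is adequately modeled as linear in pupil diameter and treats a single scalar distance component (the horizontal and vertical components could each be fitted the same way); the function names and the reference-diameter convention are illustrative assumptions:

```python
import numpy as np

def fit_size_to_distance(diameters, distances):
    """Second relationship: a linear fit of the pupil-center-to-corneal-
    reflection distance against pupil diameter, from calibration frames
    captured while the eye fixated a single target and the pupil
    contracted and redilated.  Returns (slope, intercept)."""
    slope, intercept = np.polyfit(np.asarray(diameters, dtype=float),
                                  np.asarray(distances, dtype=float), 1)
    return slope, intercept

def corrected_distance(observed_distance, observed_diameter,
                       reference_diameter, slope):
    """Remove the pupil-size-dependent component of an observed
    distance, referring it back to a chosen reference diameter before
    the first relationship is applied to determine eye position."""
    return observed_distance - slope * (observed_diameter - reference_diameter)
```

The corrected distance can then be supplied to the first relationship, so that a change in pupil size alone no longer registers as a change in gaze direction.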
The first calibration process mentioned above may include capturing a first set of calibration images of the eye with the camera as the eye fixates on each of a plurality of calibration targets. The calibration targets are each arranged to position the pupil of the eye in a predetermined location. Further, each of the first set of calibration images depicts the pupil positioned in one of the predetermined locations. The first calibration process includes detecting a location of a center of the pupil, and a location of the corneal reflection in each of the first set of calibration images captured. For each of the first set of calibration images captured, a distance between the locations of the center of the pupil and the corneal reflection is determined and associated with the predetermined location of the pupil depicted in the calibration image. Then, the first relationship is determined based on the distances determined for the first set of calibration images and the predetermined locations of the pupil depicted in the first set of calibration images.
The second calibration process mentioned above may include capturing a second set of calibration images of the eye with the camera as the eye fixates on a target and the pupil contracts and redilates. The target is arranged to position the pupil of the eye in a predetermined location. Each of the second set of calibration images depicts the pupil positioned in the predetermined location. The second calibration process includes detecting a location of a center of the pupil, a size of the pupil, and a location of the corneal reflection in each of the second set of calibration images captured. For each of the second set of calibration images, a distance between the locations of the center of the pupil and the corneal reflection is determined. Then, the second relationship is determined based on the distances determined for the second set of calibration images and the predetermined location of the pupil depicted in the second set of calibration images. The second relationship may be a mathematical relationship relating pupil size to a distance between the locations of the center of the pupil and the corneal reflection. In such embodiments, the method further includes deriving or formulating the mathematical relationship. By way of a non-limiting example, the mathematical relationship may be a linear or polynomial equation.
The second relationship may be implemented as a data structure that associates each of a plurality of pupil sizes with a distance between the locations of the center of the pupil and the corneal reflection.
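By way of a non-limiting illustration, such a data structure may be sketched as a sorted lookup table with linear interpolation between stored pupil sizes (the class name and interpolation choice are illustrative assumptions, one alternative to a fitted equation):

```python
import bisect

class PupilSizeLookup:
    """Second relationship as a data structure: sorted pupil diameters
    mapped to pupil-center-to-corneal-reflection distances, with linear
    interpolation between entries and clamping beyond the stored range."""

    def __init__(self, diameters, distances):
        pairs = sorted(zip(diameters, distances))
        self.d = [p[0] for p in pairs]   # pupil diameters (ascending)
        self.v = [p[1] for p in pairs]   # associated distances

    def __call__(self, diameter):
        if diameter <= self.d[0]:
            return self.v[0]
        if diameter >= self.d[-1]:
            return self.v[-1]
        i = bisect.bisect_left(self.d, diameter)
        t = (diameter - self.d[i - 1]) / (self.d[i] - self.d[i - 1])
        return self.v[i - 1] + t * (self.v[i] - self.v[i - 1])
```

A table of this kind can stand in for the linear or polynomial equation when the measured relationship is not well described by a simple formula.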
The second relationship may include a horizontal relationship between pupil size and a horizontal distance between the locations of the center of the pupil and the corneal reflection, and a separate vertical relationship between pupil size and a vertical distance between the locations of the center of the pupil and the corneal reflection.
Another aspect of the invention includes a computer implemented method for use with a camera and one or more light sources. The camera is positioned to capture images of an eye (e.g., a human eye) that has a cornea and a pupil. The one or more light sources are each positioned to illuminate the eye and at least one of the light sources generates a corneal reflection on the cornea of the eye. The method includes obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection. By way of a non-limiting example, the first relationship may be recorded in memory or determined by performing the first calibration process (described above). The method includes determining a second relationship between the size of the pupil and the location of the center of the pupil. The second relationship may be determined using a third calibration process described below. The method also includes capturing an image of the eye with the camera, and detecting an observed location of the center of the pupil, an observed size of the pupil, and an observed location of the corneal reflection in the image captured. An observed distance between the observed locations of the center of the pupil and the corneal reflection is determined. Then, a position of the eye is determined based on the observed distance, the observed size of the pupil, the observed location of the center of the pupil, the first relationship, and the second relationship. The position of the eye may be determined by determining a first position of the eye based on the observed distance, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, and the second relationship.
The third calibration process includes capturing a third set of calibration images of the eye with the camera as the eye fixates on a target while the pupil contracts and redilates. The target is arranged to position the pupil of the eye in a predetermined location. Each of the third set of calibration images depicts the pupil positioned in the predetermined location. The third calibration process includes detecting a location of a center of the pupil, and a size of the pupil in each of the third set of calibration images captured. Then, the second relationship is determined based on the sizes of the pupil detected for the third set of calibration images and the predetermined location of the pupil depicted in the third set of calibration images.
The second relationship may be a mathematical relationship relating pupil sizes to locations of the center of the pupil. In such embodiments, the third calibration process further comprises deriving or formulating the mathematical relationship. By way of a non-limiting example, the mathematical relationship may be a linear or polynomial equation.
The second relationship may be implemented as a data structure that associates each of a plurality of pupil sizes with a location of the center of the pupil.
The second relationship may include a horizontal relationship between pupil size and a horizontal component of the location of the center of the pupil, and a separate vertical relationship between pupil size and a vertical component of the location of the center of the pupil.
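By way of a non-limiting illustration, the separate horizontal and vertical relationships may be sketched as two independent linear fits of the pupil-center coordinates against pupil diameter (the linear form, reference-diameter convention, and function names are illustrative assumptions):

```python
import numpy as np

def fit_center_vs_size(diameters, centers):
    """Second relationship from the third calibration process: separate
    linear fits of the horizontal and vertical pupil-center coordinates
    against pupil diameter, from frames in which the eye fixated one
    target while the pupil contracted and redilated."""
    d = np.asarray(diameters, dtype=float)
    c = np.asarray(centers, dtype=float)      # (N, 2) of (x, y) centers
    h = np.polyfit(d, c[:, 0], 1)             # horizontal relationship
    v = np.polyfit(d, c[:, 1], 1)             # vertical relationship
    return h, v                               # each is (slope, intercept)

def corrected_center(center, diameter, reference_diameter, h, v):
    """Shift an observed pupil center back to its expected location at
    a reference diameter, cancelling the pupil-size-induced drift."""
    x, y = center
    return (x - h[0] * (diameter - reference_diameter),
            y - v[0] * (diameter - reference_diameter))
```

The corrected center can then be used with the corneal reflection location and the first relationship to determine eye position without pseudo eye movements.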
Another aspect of the invention includes a system that includes means for obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of a reflection from the cornea of the eye and means for determining a second relationship between the size of the pupil and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection. The means for obtaining the first relationship may include structures described herein as performing the first calibration process. The means for determining the second relationship may include structures described herein as performing the second calibration process and/or the third calibration process. The system also includes means for capturing an image of the eye, and detecting an observed location of the center of the pupil, an observed size of the pupil, and an observed location of a corneal reflection in the image captured. The system also includes means for determining a position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, the observed location of the corneal reflection, the first relationship, and the second relationship. The means for determining the position of the eye may include means for determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship, and means for modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
Another aspect of the invention includes a system that includes at least one camera positioned to capture images of the eye and one or more light sources positioned to illuminate the eye and generate a corneal reflection on the cornea of the eye. By way of non-limiting examples, the camera may be implemented as a digital video camera and the one or more light sources may include one or more infrared light sources. The system further includes a display positioned to display one or more targets viewable by the eye, and a computing device. The computing device includes at least one processor and a memory configured to store instructions executable by the at least one processor. When executed by the at least one processor, the instructions cause the at least one processor to perform portions of the methods described above. For example, the method performed by the at least one processor may include obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection, and determining a second relationship between the size of the pupil and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection. The first relationship may be stored in and obtained from the memory. Alternatively, the first relationship may be obtained by performing the first calibration process described above. The second relationship may be obtained by performing the second calibration process and/or the third calibration process described above. The method may also include instructing the display to display a target to the eye and instructing the at least one camera to capture images of the eye while the eye views the target displayed by the display. The method may further include detecting observed locations of the center of the pupil, observed sizes of the pupil, and observed locations of the corneal reflection in the images captured.
Then, positions of the eye are determined based on the observed sizes of the pupil, the observed locations of the center of the pupil, the observed locations of the corneal reflection, the first relationship, and the second relationship. The position of the eye may be determined by determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
Another aspect of the invention includes a non-transitory computer-readable medium comprising instructions executable by at least one processor that, when executed, cause the at least one processor to perform a method. The method includes obtaining a first relationship between eye position and a distance between a location of a center of a pupil and a location of a corneal reflection. The first relationship may be stored in and obtained from memory. Alternatively, the first relationship may be obtained by performing the first calibration process described above. The method further includes determining a second relationship between pupil size and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection. The second relationship may be obtained by performing the second calibration process and/or the third calibration process described above. The method also includes detecting observed locations of the center of the pupil, observed sizes of the pupil, and observed locations of the corneal reflection in a plurality of images of the eye. Then, positions of the eye are determined based on the observed sizes of the pupil, the observed locations of the center of the pupil, the observed locations of the corneal reflection, the first relationship, and the second relationship. The position of the eye may be determined by determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship, and modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
Exemplary embodiments are illustrated in referenced figures. It is intended that the embodiments and figures disclosed herein are to be considered illustrative rather than restrictive.
Figure 1A is a cross-section of a human eye including an iris and a pupil.
Figure 1B is a front view of the human eye of Figure 1A.
Figure 2 is a schematic illustrating a conventional video-based eyetracking device including a camera.
Figure 3A is a front view of the human eye as viewed by the camera of the video-based eyetracking device of Figure 2 with the pupil positioned to look straight ahead.

Figure 3B is a front view of the human eye as viewed by the camera of the video-based eyetracking device of Figure 2 with the pupil positioned to look toward the left of the subject.
Figure 4A is a schematic illustrating an exemplary embodiment of an eyetracking device and a computing device having a system memory.
Figure 4B is a block diagram illustrating modules configured to analyze the image data stored in the system memory of the computing device of Figure 4A.
Figure 5 is a flow diagram of a method performed by the eyetracking device and/or the computing device of Figure 4A.
Figure 6 is a graph depicting data obtained from a calibration process performed by the eyetracking device of Figure 4A with a subject who, during the calibration process, fixated on a left target, a central target, and a right target.
Figure 7 depicts three graphs illustrating data obtained from one light/dark trial performed by the eyetracking device of Figure 4A and a subject.
Figure 8 depicts three graphs illustrating data obtained from one light/dark trial performed by the eyetracking device of Figure 4A and a different subject from the subject that performed the light/dark trial in Figure 7.
Figure 9A depicts a leftmost graph plotting a horizontal distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was constricted and a rightmost graph plotting a vertical distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was constricted.
Figure 9B depicts a leftmost graph plotting a horizontal distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was redilated and a rightmost graph plotting a vertical distance (y-axis) between the center of the pupil and the corneal reflection versus pupil diameter (x-axis) when the pupil was redilated.
Figure 9C depicts a leftmost graph plotting an average relationship (illustrated as a solid thick line) determined from the data of the leftmost graph of Figure 9A, and an average relationship (illustrated as a dashed thick line) determined from the data of the leftmost graph of Figure 9B; and a rightmost graph plotting an average relationship (illustrated as a solid thick line) determined from the data of the rightmost graph of Figure 9A and an average relationship (illustrated as a dashed thick line) determined from the data of the rightmost graph of Figure 9B.
Figure 10A depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a second different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the second subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the second subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the second subject when the pupil was redilated.
Figure 10B depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a third different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the third subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the third subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the third subject when the pupil was redilated.

Figure 10C depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a fourth different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the fourth subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the fourth subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the fourth subject when the pupil was redilated.
Figure 10D depicts a leftmost graph substantially similar to the leftmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from horizontal distance and pupil diameter data obtained from a fifth different subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from horizontal distance and pupil diameter data obtained from the fifth subject when the pupil was redilated; and a rightmost graph substantially similar to the rightmost graph of Figure 9C but plotting an average relationship (illustrated as a solid thick line) determined from vertical distance and pupil diameter data obtained from the fifth subject when the pupil was constricted, and an average relationship (illustrated as a dashed thick line) determined from vertical distance and pupil diameter data obtained from the fifth subject when the pupil was redilated.
Figure 11A is a graph of horizontal distances (illustrated as a thick line) between the center of the pupil and the corneal reflection observed for a first subject during a single 16-second light/dark trial in which the subject fixated on a target and a visible light source was repeatedly turned "on" for about two seconds and then turned "off" for about two seconds, and corrected horizontal distances (illustrated as a thin line) that were corrected using a relationship determined from horizontal distances and pupil size observed for the first subject during a different single 16-second light/dark trial.
Figure 11B is a graph of horizontal distances (illustrated as a thick line) between the center of the pupil and the corneal reflection observed for a second subject during a single 16-second light/dark trial in which the subject fixated on a target and a visible light source was repeatedly turned "on" (or displayed) for about two seconds and then turned "off" for about two seconds, and corrected horizontal distances (illustrated as a thin line) that were corrected using a relationship determined from horizontal distances and pupil size observed for the second subject during a different single 16-second light/dark trial.
Figure 11C is a graph of horizontal distances (illustrated as a thick line) between the center of the pupil and the corneal reflection observed for a third subject during a single 16-second light/dark trial in which the subject fixated on a target and a visible light source was repeatedly turned "on" (or displayed) for about two seconds and then turned "off" for about two seconds, and corrected horizontal distances (illustrated as a thin line) that were corrected using a relationship determined from horizontal distances and pupil size observed for the third subject during a different single 16-second light/dark trial.
Figure 12 is a diagram of a hardware environment and an operating environment in which the computing device of Figure 4A may be implemented.
DETAILED DESCRIPTION OF THE INVENTION
All references cited herein are incorporated by reference in their entirety as though fully set forth. Unless defined otherwise, technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Singleton et al., Dictionary of Microbiology and Molecular Biology 3rd ed., J. Wiley & Sons (New York, NY 2001); March, Advanced Organic Chemistry Reactions, Mechanisms and Structure 5th ed., J. Wiley & Sons (New York, NY 2001); and Sambrook and Russel, Molecular Cloning: A Laboratory Manual 3rd ed., Cold Spring Harbor Laboratory Press (Cold Spring Harbor, NY 2001), provide one skilled in the art with a general guide to many of the terms used in the present application.
"Eyetracker," as used herein, refers to a device that measures a direction in which the eye 100 is looking. An eyetracker may be, but is not limited to, video-based, where a video camera is focused on the front of the subject's eye.
As disclosed herein, the direction of gaze of a human subject, sometimes referred to as "eye position," is most commonly measured with an eyetracker, such as a video-based eyetracking device. As discussed in the
Background Section, referring to Figure 2, conventional video-based eyetracking devices (e.g., the video-based eyetracking device 200) capture images of the front of the eye 100 illuminated with infrared illumination. Referring to Figures 2 and 3B, the computing device 240 detects the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" in the images captured, measures a horizontal distance "H" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR," and measures a vertical distance "V" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." The horizontal distance "H" is a horizontal difference "H_PC-CR" (see Figures 6, 9A-9C, 10A-10D, and 11A-11C) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." The vertical distance "V" is a vertical difference "V_PC-CR" (see Figures 9A-9C and 10A-10D) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
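By way of a non-limiting illustration, the horizontal and vertical differences "H_PC-CR" and "V_PC-CR" reduce to coordinate subtraction between the two detected image locations. The following sketch is illustrative only (the function name, tuple layout, and the convention that x increases rightward and y increases downward in the image are assumptions, not taken from the present disclosure):

```python
def pc_cr_differences(pupil_center, corneal_reflection):
    """Return (H, V): the horizontal difference H_PC-CR and vertical
    difference V_PC-CR between the pupil center "PC" and the corneal
    reflection "CR", both given as (x, y) image coordinates in pixels."""
    h = pupil_center[0] - corneal_reflection[0]  # H_PC-CR
    v = pupil_center[1] - corneal_reflection[1]  # V_PC-CR
    return h, v

# Example with hypothetical pixel coordinates:
h, v = pc_cr_differences((320.5, 240.0), (318.0, 242.5))  # (2.5, -2.5)
```

Because the corneal reflection "CR" moves much less than the pupil center "PC" as the eye rotates, these two differences carry most of the gaze-direction information used in the calibration described below.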
As explained in the Background Section, conventional video-based eyetracking devices (e.g., the video-based eyetracking device 200 illustrated in Figure 2) typically require that a calibration process be performed for each subject and may collect separate calibration data for each eye. The horizontal and vertical distances "H" and "V," as well as one or more relationships between these distances and eye position (determined using the calibration data collected during the performance of the calibration process), are used to determine the direction of eye gaze (or eye position). A relationship between the horizontal distance "H" and a horizontal component of eye position and a separate relationship between the vertical distance "V" and a vertical component of eye position may be determined using calibration data.
However, conventional video-based eyetracking devices (e.g., the video-based eyetracking device 200 illustrated in Figure 2) do not consider changes in pupil size. Therefore, when changes in pupil size occur, measurements of small changes in gaze direction may become unreliable. As discussed in the
Background Section, when the pupil 112 changes size, the location of the center "PC" of the pupil 112 may shift even if the position of the pupil 112 has not moved. In other words, changes in the size of the pupil 112 may generate spurious indications of change in gaze direction (or "pseudo eye movements"). Thus, the importance of considering pupil size increases as the accuracy requirements of gaze direction measurement increase.
Pupil size changes may constitute a significant problem when measuring small changes in gaze direction (e.g., about one degree of arc). As further disclosed herein, pupil size changes and shifts in the location of the center "PC" of the pupil 112 are related in a systematic way, which makes correcting for shifts in the location of the center "PC" of the pupil 112 caused by changes in pupil size possible. However, the relationship between changes in pupil size and the location of the center "PC" of the pupil 112 may be somewhat different for each eye. Further, the relationship between changes in pupil size and the horizontal component of the location of the center "PC" of the pupil 112 may be somewhat different than the relationship between changes in pupil size and the vertical component of the location of the center "PC" of the pupil 112.
While the relationship between pupil size and the location of the center "PC" of the pupil 112 is different for different eyes, a relatively fixed relationship exists for a particular eye (e.g., the eye 100 illustrated in Figures 1A, 1B, 2, 3A, 3B, and 4A). This relationship may be determined for the eye 100 by exposing the eye to different lighting conditions (e.g., light conditions and dark conditions) thereby causing changes in pupil size, and collecting pupil size calibration information. Once the relationship between pupil size and the location of the center "PC" of the pupil 112 is determined for the eye 100, gaze direction determinations can be corrected to account for pupil size changes. Alternatively, a relationship between pupil size and the difference between the locations of the center "PC" and the corneal reflection "CR" (e.g., the horizontal and vertical differences "H_PC-CR" and "V_PC-CR") may be determined for the eye 100 and used to correct gaze direction determinations. By considering pupil size, calibration can be improved, which may allow for better and more accurate eyetracking and eye position measurements, especially when pupil size changes.
Figure 4A illustrates exemplary components of an eyetracker device 400, which is configured to correct determinations of locations of the center "PC" of the pupil 112 based on pupil size. While the eyetracker device 400 is illustrated as being a video-based eyetracking device, through application of ordinary skill in the art to the present teachings, embodiments including other types of eyetrackers could be constructed. Because "pseudo eye movements" may occur with any eyetracker device that uses pupil position to estimate gaze direction (regardless of whether the eyetracker device also uses the corneal reflection "CR"), embodiments may be constructed using any eyetracker device that uses pupil position to estimate gaze direction.
The eyetracker device 400 includes one or more cameras (e.g., a camera 410) each substantially identical to the camera 210 of the video-based eyetracking device 200 (see Figure 2), and one or more IR sources 420
substantially identical to the one or more IR sources 220 of the video-based eyetracking device 200 (see Figure 2). The eyetracker device 400 may include a display 425 substantially identical to the display 225 (see Figure 2). Alternatively, the display 425 may be a component separate from the eyetracking device 400.
The eyetracker device 400 may be configured to track a single eye or both of the subject's eyes. In embodiments in which the eyetracker device 400 is configured to track both of the subject's eyes, the eyetracker device 400 may include a separate camera (e.g., a digital video camera) for each of the subject's eyes. Further, the eyetracker device 400 may include one or more IR sources 420 for each of the subject's eyes.
For ease of illustration, the eyetracker device 400 is described below as tracking a single eye (e.g., the subject's right eye). However, through application of ordinary skill in the art to the present teachings, embodiments in which the eyetracker device 400 is configured to track both of the subject's eyes may be constructed and are therefore within the scope of the present teachings.
The video-based eyetracking device 400 includes or is connected to a computing device 440 that stores and analyzes the images captured by the camera(s) 410. A diagram of hardware and an operating environment in conjunction with which implementations of the computing device 440 may be practiced is provided in Figure 12 and described below.
Like the video-based eyetracking device 200, the eyetracking device 400 is configured to perform a calibration process. During the calibration process, the computing device 440 instructs the display 425 to display a plurality of calibration or fixation targets 432. Each of the fixation targets 432 is arranged on the display 425 to position the pupil 112 of the eye 100 in a known predetermined location. The fixation targets 432 may be arranged in a fixed array. By way of a non-limiting example, the fixation targets 432 may include a central fixation target "CFT", a left target "LT," a top target "TT," a right target "RT," and a bottom target "BT" as viewed by the subject.
During the calibration process, the subject looks sequentially at each of the fixation targets 432 (as they are displayed sequentially by the display 425) and the camera 410 captures images of the eye 100 as the eye looks at each of the fixation targets 432. Because the fixation targets 432 position the pupil 112 at known locations, the computing device 440 may use the images of the eye 100 captured when the eye was looking at the fixation targets 432 to determine the relationship between eye position and the distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." The computing device 440 may determine a first (horizontal) relationship between the horizontal distance "H" and a horizontal component of eye position and a separate second (vertical) relationship between the vertical distance "V" and a vertical component of eye position.
As explained above, the location of the center "PC" of the pupil 112 shifts approximately linearly with horizontal changes in eye position within about ±30 degrees relative to straight-ahead, while the location of the corneal reflection "CR" shifts considerably less. Thus, with horizontal changes in eye position within about ±30 degrees relative to straight-ahead, the location of the center "PC" of the pupil 112 may shift linearly while the location of the corneal reflection "CR" shifts (linearly or otherwise) by a smaller amount. Therefore, the first
(horizontal) relationship may be expressed as a linear equation in which a horizontal component of eye position is determined as a function of the horizontal distance "H." However, some degree of nonlinearity can be dealt with by fitting a polynomial (instead of a line) to the horizontal distance "H" data collected during the calibration process. In other words, the first (horizontal) relationship may be expressed as a polynomial equation. Similarly, the second (vertical) relationship may be expressed as a linear equation in which a vertical component of eye position is determined as a function of the vertical distance "V." Again, some degree of nonlinearity can be dealt with by fitting a polynomial (instead of a line) to the vertical distance "V" data collected during the calibration process. In other words, the second (vertical) relationship may be expressed as a polynomial equation.
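By way of a non-limiting sketch, the linear and polynomial fits described above can be expressed as a least-squares polynomial fit of calibration data; the function name, the sample calibration values (three horizontal targets at known angles), and the choice of degree are illustrative assumptions, not values from the present disclosure:

```python
import numpy as np

def fit_calibration(distances, eye_positions, degree=1):
    """Fit the relationship between a PC-CR distance (e.g., "H") and one
    component of eye position.  degree=1 gives the linear case described
    above; a higher degree absorbs some nonlinearity in the data."""
    coeffs = np.polyfit(distances, eye_positions, degree)
    return np.poly1d(coeffs)

# Hypothetical calibration data: horizontal distance "H" (pixels) measured
# while the subject fixated left, central, and right targets at known
# horizontal eye positions (degrees).
h_distances = [-40.0, 0.0, 40.0]
h_positions = [-15.0, 0.0, 15.0]
h_relationship = fit_calibration(h_distances, h_positions)
```

A separate call with vertical distance "V" data would produce the second (vertical) relationship in the same manner; evaluating `h_relationship` on a measured "H" yields an estimate of the horizontal component of eye position.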
After the calibration process has been performed, the computing device 440 can determine gaze direction (or eye position) from images of the eye 100 captured by the camera 410 by measuring at least one distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" and using the relationship(s) determined during the calibration process between eye position and the distance(s) measured to determine at least one component (e.g., a horizontal component, or a vertical component) of eye position.
Each image (or frame) captured by the camera 410 may be sent as a video signal to the computing device 440 that may locate both the center "PC" of the pupil 112 and the corneal reflection "CR," and use that information to calculate or measure at least one distance between the two. For example, the computing device 440 may determine both the horizontal distance "H" and the vertical distance "V." Then, the computing device 440 may use the first (horizontal) relationship (determined using data collected during the calibration process) and the horizontal distance "H" to provide an estimate of a horizontal component of eye position. Similarly, the computing device 440 may use the second (vertical) relationship (determined using data collected during the calibration process) and the vertical distance "V" to provide an estimate of a vertical component of eye position. Together the estimates of the horizontal and vertical components of eye position provide a two-dimensional estimate of eye position.
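By way of a non-limiting sketch, combining the two per-axis relationships into a two-dimensional eye-position estimate might look like the following; the linear gains of 0.375 and 0.40 degrees per pixel are hypothetical stand-ins for relationships determined during calibration, not values from the present disclosure:

```python
def estimate_eye_position(h, v, h_relationship, v_relationship):
    """Apply the first (horizontal) and second (vertical) calibration
    relationships to the measured PC-CR distances "H" and "V" (pixels),
    returning a (horizontal, vertical) eye-position estimate in degrees."""
    return h_relationship(h), v_relationship(v)

# Illustrative linear relationships standing in for calibration results:
h_relationship = lambda h: 0.375 * h  # degrees per pixel of H_PC-CR
v_relationship = lambda v: 0.40 * v   # degrees per pixel of V_PC-CR
gaze = estimate_eye_position(8.0, -5.0, h_relationship, v_relationship)
```

The two returned components together form the two-dimensional estimate of eye position described above.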
Figure 4A also illustrates a visible light source 450 that may be turned "on" to cause the pupil 112 to contract and turned "off" to allow the pupil 112 to redilate. In other words, the visible light source 450 may be used to change or determine the size of the pupil. The visible light source 450 may be controlled by the computing device 440. However, this is not a requirement. Nevertheless, when the visible light source 450 is "on" and when the visible light source 450 is "off" may be communicated to the computing device 440 for storage thereby.
As described in more detail below, the computing device 440 includes a system memory 22 (see Figure 12) configured to store image data captured by the camera 410. The system memory 22 also stores other
programming modules 37 (see Figure 12). Figure 4B illustrates at least a portion of the other programming modules 37 stored in the system memory 22. The other programming modules 37 may store one or more modules 442 configured to analyze the image data. By way of a non-limiting example, the modules 442 may include a PC-CR calibration module 443 that performs the calibration process (described above) to obtain at least one relationship (e.g., the first and second relationships) between at least one component (e.g., the horizontal and vertical components) of eye position and at least one distance (e.g., the horizontal and vertical distances "H" and "V") between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
The modules 442 may include a pupil center module 444 configured to detect the outer edge "E" of the pupil 112 and determine the location of the center "PC" of the pupil. The pupil center module 444 records the location of the center "PC" of the pupil 112. Optionally, the pupil center module 444 may determine and record the size of the pupil 112.
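By way of a non-limiting example, one common way a module such as the pupil center module 444 might locate the pupil in an infrared image is dark-pixel thresholding followed by a centroid computation; the threshold value, the equal-area-circle diameter estimate, and the synthetic test image below are illustrative assumptions, and the actual detection method of the device may differ:

```python
import numpy as np

def pupil_center_and_size(gray, threshold=50):
    """Treat pixels darker than `threshold` as pupil; return the centroid
    of those pixels as the center "PC" (x, y) and the diameter of the
    equal-area circle as a pupil-size estimate (both in pixels)."""
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None  # no pupil-dark pixels found
    center = (xs.mean(), ys.mean())
    diameter = 2.0 * np.sqrt(xs.size / np.pi)  # equal-area circle
    return center, diameter

# Synthetic image: a dark 10x10 "pupil" block on a bright background.
img = np.full((60, 80), 200, dtype=np.uint8)
img[20:30, 35:45] = 10
center, diameter = pupil_center_and_size(img)
```

A practical implementation would also need to exclude the bright corneal reflection and eyelid shadows, which this sketch ignores.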
The modules 442 may include a corneal reflection module 446 configured to determine the location of the corneal reflection "CR." The corneal reflection module 446 records the location of the corneal reflection "CR."
The modules 442 may include a distance module 447 configured to determine at least one distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the location of the center "PC" of the pupil 112 determined by the pupil center module 444, and the location of the corneal reflection "CR" determined by the corneal reflection module 446.
The modules 442 may include an eye position module 448 configured to determine eye position (or direction of eye gaze) based on the at least one relationship determined by the PC-CR calibration module 443, and the distance(s) determined by the distance module 447. In particular, the eye position module 448 may determine eye position (or direction of eye gaze) based on the first (horizontal) relationship determined by the PC-CR calibration module 443, the second (vertical) relationship determined by the PC-CR calibration module 443, the horizontal distance "H" between the location of the center "PC" of the pupil 112 and the location of the corneal reflection "CR," and the vertical distance "V" between the location of the center "PC" of the pupil 112 and the location of the corneal reflection "CR."
The modules 442 may include a pupil size module 490 configured to determine at least one relationship between pupil size and the location of the center "PC" of the pupil 112 determined by the pupil center module 444. By way of a non-limiting example, the pupil size module 490 may determine a third (horizontal) relationship between pupil size and a horizontal component of the location of the center "PC" of the pupil 112. By way of another non-limiting example, the pupil size module 490 may determine a fourth (vertical) relationship between pupil size and a vertical component of the location of the center "PC" of the pupil 112.
Alternatively, the pupil size module 490 may be configured to determine at least one relationship between pupil size and the distance(s) (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" determined by the distance module 447. By way of a non-limiting example, the pupil size module 490 may determine a fifth (horizontal) relationship between pupil size and the horizontal distance "H" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." By way of another non-limiting example, the pupil size module 490 may determine a sixth (vertical) relationship between pupil size and the vertical distance "V" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR."
The modules 442 may include a pupil size adjustment module 492 configured to correct the location of the center "PC" of the pupil 112 based on changes in pupil size using the at least one relationship (e.g., the third (horizontal) relationship, the fourth (vertical) relationship, and the like) between pupil size and the location of the center "PC" of the pupil 112 determined by the pupil size module 490.
Alternatively, the pupil size adjustment module 492 may be configured to correct the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" (determined by the distance module 447) based on changes in pupil size using the at least one relationship (e.g., the fifth (horizontal) relationship, the sixth (vertical) relationship, and the like) determined by the pupil size module 490.
A method 500 may be performed to correct eye position data based on changes in pupil size. The method 500 may be used to correct eye position determinations made using prior art methods that do not consider changes in pupil size when determining the location of the center "PC" of the pupil 112. By way of a non-limiting example, the method 500 may be used to correct eye position determinations made by the eye position module 448. Portions of the method 500 are described as being performed by the eyetracking device 400. However, in embodiments in which the computing device 440 is not a component of the eyetracking device 400, one or more of the portions of the method 500 described as being performed by the eyetracking device 400 may be performed by the eyetracking device 400 and the computing device 440 together or by the computing device 440 alone.
In optional block 510, the eyetracking device 400 may perform the calibration process described above. For example, the PC-CR calibration module 443 may be executed in block 510 by the computing device 440 (see Figure 4A). Alternatively, instead of performing the calibration process, at least one relationship determined for an "average" eye may be used in place of the at least one relationship determined by performing the calibration process.
Optionally, the at least one relationship (e.g., the first and second relationships) determined during the calibration process or the at least one relationship determined for the "average" eye may be associated with a reference pupil size. The reference pupil size may be the pupil size observed when the at least one relationship was determined.
At this point, the eyetracker device 400 has the at least one relationship (e.g., the first and second relationships) between eye position and at least one distance (e.g., the horizontal and vertical distances "H" and "V") between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." Thus, at this point, the eyetracker device 400 may use the at least one distance (e.g., the horizontal and vertical distances "H" and "V") between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" to determine eye position. However, this determination will not necessarily be correct because shifts in the location of the center "PC" of the pupil 112 caused by changes in pupil size have not yet been considered.
Then, in block 530, as the subject looks at the central fixation target "CFT," the size of the pupil 112 is changed. By way of a non-limiting example, in block 530, as the subject looks at the central fixation target "CFT," the visible light source 450 may be turned "on" and turned "off." When the visible light source 450 is turned "on," the visible light source 450 is bright enough to drive the subject's pupillary light reflex to thereby cause the pupil 112 to constrict. When the visible light source 450 is turned "off," the environmental lighting conditions may be dark enough to cause the pupil 112 to redilate. In block 530, several "on"/"off" cycles ("stimulus cycles") may be performed. Optionally, the state ("on" or "off") of the stimulus (the visible light source 450) is recorded. As the cycles are performed, images of the eye 100 are captured. Optionally, each of the images may be associated with the state of the stimulus when the image was captured.
In block 540, the eyetracker device 400 determines the size of the pupil and the location of the center "PC" of the pupil 112 in each of the images captured in block 530. Optionally, these values may be associated with the state ("on" or "off") of the stimulus. The pupil center module 444 may be executed in block 540 by the computing device 440 (see Figure 4A) to determine the location of the center "PC" of the pupil 112 and the size of the pupil.
If one or more relationships between pupil size and the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" (instead of one or more relationships between pupil size and the location of the center "PC" of the pupil 112) are to be determined by the method 500, in block 540, the eyetracker device 400 also determines at least one distance between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." In such embodiments, in block 540, the eyetracker device 400 also determines the location of the corneal reflection "CR." The corneal reflection module 446 and the distance module 447 may be executed in block 540 by the computing device 440 (see Figure 4A).
In decision block 550, the eyetracking device 400 decides whether to repeat blocks 530 and 540 to collect more data. When the decision in decision block 550 is "YES," in block 555, the eyetracker device 400 returns to block 530. On the other hand, when the decision in decision block 550 is "NO," the
eyetracking device 400 advances to block 560.
In block 560, the pupil size module 490 may be executed. If one or more relationships between pupil size and the location of the center "PC" of the pupil 112 (instead of one or more relationships between pupil size and the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR") are to be determined by the method 500, in block 560, the eyetracker device 400 determines at least one relationship (e.g., the third (horizontal) relationship, the fourth (vertical) relationship, and the like) between pupil size and the location of the center "PC" of the pupil 112. The relationship may be determined by plotting the locations of the center "PC" of the pupil 112 against the pupil diameters obtained in block 540, and fitting this plotted data with a mathematical relationship or function (e.g., a smooth curve like the ones illustrated in Figures 9C and 10A-10C).
On the other hand, if one or more relationships between pupil size and the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" (instead of one or more relationships between pupil size and the location of the center "PC" of the pupil 112) are to be determined by the method 500, in block 560, the eyetracker device 400 determines at least one relationship (e.g., the fifth (horizontal) relationship, the sixth (vertical) relationship, and the like) between pupil size and the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." The relationship may be determined by plotting the distance(s) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" against the pupil diameters obtained in block 540, and fitting this plotted data with a mathematical relationship or function (e.g., a smooth curve like the ones illustrated in Figures 9C and 10A-10C).
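The curve fitting described above for block 560 can be sketched as a low-order polynomial fit of the light/dark-trial samples; the polynomial degree and the sample diameter/distance values below are illustrative assumptions, not data from the figures:

```python
import numpy as np

def fit_pupil_size_relationship(pupil_diameters, measurements, degree=2):
    """Fit a smooth curve relating pupil diameter to a measured quantity
    (a component of the pupil-center location "PC", or a PC-CR distance
    such as "H").  A low-order polynomial stands in here for the smooth
    curves of the kind illustrated in Figures 9C and 10A-10C."""
    coeffs = np.polyfit(pupil_diameters, measurements, degree)
    return np.poly1d(coeffs)

# Hypothetical light/dark-trial samples: H_PC-CR (pixels) vs. diameter (mm).
diam = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
h_pc_cr = np.array([1.8, 1.2, 0.7, 0.3, 0.0])
relation = fit_pupil_size_relationship(diam, h_pc_cr)
```

Evaluating `relation` at any observed pupil diameter then predicts the component of the measurement attributable to pupil size, which is the quantity used for correction in block 582.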
At this point, if in block 560, the eyetracker device 400 determined the at least one relationship (e.g., the third and fourth relationships) between pupil size and at least one component (e.g., the horizontal and vertical components) of the location of the center "PC" of the pupil 112, the eyetracker device 400 may use pupil size to correct the at least one component (e.g., the horizontal and vertical components) of the location of the center "PC" of the pupil 112 before the location of the center "PC" of the pupil 112 is used to determine eye position thereby correcting the determination of eye position. For example, the eyetracker device 400 may analyze the images captured during the calibration process performed in optional block 510 and adjust the determinations of the locations of the center "PC" of the pupil 112 made during the calibration process to adjust for changes in pupil size (if any) that occurred during the calibration process.
On the other hand, if in block 560, the eyetracker device 400 determined at least one relationship (e.g., the fifth and sixth relationships) between pupil size and at least one distance (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR," at this point, the eyetracker device 400 may use pupil size to correct the at least one distance before it is used to determine eye position, thereby correcting the determination of eye position. For example, the eyetracker device 400 may analyze the images captured during the calibration process performed in optional block 510 and adjust the determinations of the distance(s) (e.g., the horizontal distance "H," the vertical distance "V," a combination thereof, and the like) between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" made during the calibration process to adjust for changes in pupil size (if any) that occurred during the calibration process.
In block 562, the eyetracker device 400 captures images of the eye 100. Optionally, in block 562, the subject may be fixated on or tracking one or more of the targets 432. Then, in block 564, the eyetracker device 400 determines the location of the center "PC" of the pupil 112, the size of the pupil, and the location of the corneal reflection "CR" in the images captured in block 562. In block 564, the pupil center module 444 and the corneal reflection module 446 may be executed by the computing device 440.
Then, in block 580, the eyetracker device 400 determines uncorrected eye positions for each of the images captured in block 562. As is apparent to those of ordinary skill in the art, the size of the pupil 112 may have changed as the images were captured in block 562. In block 580, the eye position module 448 may be executed.
Then, in block 582, the eyetracker device 400 corrects the uncorrected eye positions determined in block 580. In block 582, the adjustment module 492 may be executed. In block 582, the eyetracker device 400 may correct the uncorrected eye positions using the relationship(s) determined in block 560. For each image captured in block 562, the relationship(s) determined in block 560 is/are used to estimate how much spurious indication of change of eye position has been created by changes in pupil diameter relative to the reference pupil size (or baseline). That amount is then added to or subtracted from, as is appropriate, the uncorrected eye position determined for the image. Thus, in block 582, if the size of the pupil in an image differs from the reference pupil size, the location of the center "PC" of the pupil 112 (and/or the distance between the locations of the center "PC" of the pupil and the corneal reflection "CR") may be adjusted to account for the change in pupil size.
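The correction described here can be sketched as follows. This is a hypothetical illustration assuming numpy; the fitted relationship, the reference diameter, and all numeric values are invented for the example:

```python
import numpy as np

def correct_position(measured_px, pupil_diameter, relationship, reference_diameter):
    """Remove the spurious position shift caused by a pupil-size change.

    `relationship` maps pupil diameter to the pupil-center (or PC-CR)
    coordinate; the spurious shift is the coordinate predicted at the
    observed diameter minus the coordinate predicted at the reference
    (baseline) diameter, and it is subtracted from the raw measurement.
    """
    spurious = relationship(pupil_diameter) - relationship(reference_diameter)
    return measured_px - spurious

# Hypothetical linear relationship: center drifts -0.07 px per px of diameter.
relationship = np.poly1d([-0.07, 125.0])

# A pupil 20 px smaller than the reference shifts the center by +1.4 px,
# which is removed from the raw measurement.
corrected = correct_position(118.0, pupil_diameter=60.0,
                             relationship=relationship, reference_diameter=80.0)
```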
By way of another non-limiting example, in block 582, the eyetracker device 400 may adjust the at least one relationship determined in optional block 510 or alternatively, for an "average" eye to account for the change in pupil size (i.e., from the reference pupil size to a first pupil size observed in the images captured in block 562). If the pupil size in any of the images is different from the first pupil size, the location of the center "PC" of the pupil 112 (and/or distance between the locations of the center "PC" of the pupil and the corneal reflection "CR") may be adjusted using the at least one relationship determined in block 560.
Then, the method 500 terminates.
Therefore, the eyetracker device 400 may correct the location of the center "PC" of the pupil 112 by doing the following:
(a) Performing the conventional calibration process;
(b) Determining pupil size and the location of the center "PC" of the pupil 112 while the subject looks at a fixed target and the size of the subject's pupil changes (e.g., the subject's pupillary light reflex is driven with the visible light source 450 followed by darkness);
(c) Determining a relationship between pupil size and the location of the center "PC" of the pupil 112 or a distance between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR;"
(d) After collecting an image of the eye, detecting the size of the pupil, the location of the corneal reflection "CR," and the location of the center "PC" of the pupil 112 in the image; and
(e) Determining eye position (or direction of eye gaze) based on the relationship, the size of the pupil detected, the location of the center "PC" of the pupil 112 detected, and the location of the corneal reflection "CR" detected.
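Steps (a) through (e) can be sketched end to end as follows. This is a hypothetical illustration assuming numpy, in which the calibration constants, the pupil-size relationship, and the reference diameter are invented values standing in for per-subject calibration results:

```python
import numpy as np

# Step (a): calibration yields a linear mapping between the PC-CR
# difference (pixels) and gaze direction (degrees), of the form
#   PC-CR = OFFSET_PX + SLOPE_PX_PER_DEG * gaze
OFFSET_PX = 11.2
SLOPE_PX_PER_DEG = -1.8

# Steps (b)-(c): a fitted relationship between pupil diameter and the
# PC-CR difference, plus the reference (baseline) pupil diameter.
size_relationship = np.poly1d([-0.07, 0.0])
REFERENCE_DIAMETER = 86.0

def gaze_direction(pc_cr_px, pupil_diameter):
    """Steps (d)-(e): correct the measured PC-CR difference for the
    current pupil size, then invert the calibration line."""
    spurious = (size_relationship(pupil_diameter)
                - size_relationship(REFERENCE_DIAMETER))
    corrected = pc_cr_px - spurious
    return (corrected - OFFSET_PX) / SLOPE_PX_PER_DEG
```

At the reference pupil size the correction term vanishes and the function reduces to a plain inversion of the calibration line.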
As apparent to one of ordinary skill in the art, the various embodiments described herein may be used in conjunction with any number of apparatuses for measuring gaze direction and are in no way limited to video-based eyetracking devices. Similarly, as apparent to one of ordinary skill in the art, various embodiments described herein may be applied to and useful for any number of fields where accurate eyetracker and/or gaze direction measurements must be performed, including but not limited to, research experiments, market research, website usability testing, clinical devices for monitoring patients' gaze, and assistive technologies that allow individuals to "speak" to a computer by changing gaze direction.
EXPERIMENTAL EXAMPLES
The following examples are provided to better illustrate the claimed invention and are not to be interpreted as limiting the scope of the invention. To the extent that specific materials are mentioned, it is merely for purposes of illustration and is not intended to limit the invention. One skilled in the art may develop equivalent means or reactants without the exercise of inventive capacity and without departing from the scope of the invention.

Experimental Setup
Seven normal subjects were recruited from the community at the SUNY State College of Optometry. All subjects received a comprehensive ocular examination at the University Optometric Center at SUNY College of Optometry; no indications of ocular disease were found. Best corrected visual acuity was 20/20 or better for all subjects. The research adhered to the tenets of the Declaration of Helsinki and was approved by the SUNY State College of Optometry Institutional Review Board. After the nature of the experiment was explained, written informed consent was obtained from each subject prior to testing.
Subjects were studied with the video-based eyetracking device 400 (implemented using an ISCAN EC-101, ISCAN, Inc., Burlington, Massachusetts) while fixating on targets 430 displayed on the display 425 (implemented using a CRT computer monitor (Radius PressView 21 SR, Miro Displays, Inc., Germany)) driven by the computing device 440 (implemented as a Macintosh G3 computer). The central fixation target "CFT" was located centrally on the display 425, and additional calibration targets were placed about ±3 degrees from the central fixation target along horizontal and vertical meridians. Thus, the calibration targets included the left target "LT," the top target "TT," the right target "RT," and the bottom target "BT." The central fixation target "CFT" was implemented as a large rectangle displayed in a central portion of the display 425.
Subjects sat in an examination chair (not shown) at a location that positioned their eyes at a distance of approximately 75 cm from the display 425. A head-rest (not shown) was provided behind the subject's head (not shown), and the subjects were asked to lean back against the head-rest. Other stabilizing devices (e.g., chin-rests) were not employed.
Data recorded by the computing device 440 included a location (having horizontal and vertical components) of the corneal reflection "CR," a location (having horizontal and vertical components) of the center "PC" of the pupil 112, and pupil diameter "D." Where appropriate, a digital indicator of stimulus behavior was also recorded. In other words, whether the visible light source 450 was "on" or "off" was also recorded by the computing device 440. A sampling rate of about 60 samples per second was used. Thus, about 60 images of the subjects' eyes were captured per second.
On calibration trials, data was recorded (e.g., for about 1.5 seconds) while subjects fixated each of the calibration targets. Trials (e.g., about 1.5 second long) were conducted in which the subjects each fixated on the central fixation target "CFT" to examine fluctuations during repeated trials. All data were obtained from right eyes.
On trials studying the effect of light-induced changes in pupil diameter, subjects fixated steadily on the central fixation target "CFT." The visible light source 450 was turned "on" and "off" repeatedly (e.g., alternating between being "on" for approximately two seconds, and "off" for approximately two seconds) for approximately 16 seconds to provide a single trial of visual stimulation, and image data was recorded. The luminance of the visible light source 450 was approximately 54 cd/m2 when the visible light source 450 was "on." The luminance was approximately 0.001 cd/m2 when the visible light source 450 was "off" (e.g., the environmental luminance, which may have been partially caused by the display 425). Several cycles of visual stimulation were provided before recording began to permit pupil responses to settle into a steady cyclical behavior.
During analysis of the recorded image data, it proved useful to partition the image data into periods of pupil constriction, which were periods in which changes in pupil diameter over time were less than zero (i.e., d(pupil diameter)/dt < 0), and periods of redilation, which were periods in which changes in pupil diameter over time were greater than zero (i.e., d(pupil diameter)/dt > 0). Because changes in pupil diameter over time (or the time derivative of pupil diameter) are quite noisy, smoothing may be used. For example, a negative exponential smoothing (e.g., with a polynomial degree of one, and a data proportion of 0.015) may be used. The smoothing illustrated in Figures 7, 8, and 11A-11C was performed using a local smoothing filter using polynomial regression (linear regression for the case of polynomial degree of one) and weights computed from a Gaussian density function. For data records of about 1000 points in length (16 second long light-dark trials conducted with 60 samples (images) captured per second), the data proportion of 0.015 implies a filter array that is 15 coefficients long. As assessed directly with sinusoids, the filter has a steep response roll-off above about 2.5 Hz. By way of a non-limiting example, the smoothing may be performed using SigmaPlot software (available from Systat Software Inc.).
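The partitioning step can be sketched as follows. This is a minimal illustration assuming numpy, using a synthetic pupil-diameter record and a simple moving-average smoother as a stand-in for the Gaussian-weighted local polynomial regression described above:

```python
import numpy as np

def partition_phases(diameters, fs=60.0, win=15):
    """Split a pupil-diameter record into constriction (d/dt < 0) and
    redilation (d/dt > 0) samples.  The raw derivative is noisy, so it
    is smoothed first (15-coefficient window, matching the filter length
    implied above)."""
    velocity = np.gradient(diameters) * fs            # pixels per second
    kernel = np.ones(win) / win
    smoothed = np.convolve(velocity, kernel, mode="same")
    return smoothed < 0, smoothed > 0                 # constriction, redilation

# One light response: constriction then redilation (synthetic data,
# 60 samples per second over four seconds).
t = np.linspace(0, 4, 240)
diam = 80 - 12 * np.exp(-((t - 1.0) / 0.8) ** 2)
constricting, redilating = partition_phases(diam)
```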
IGOR Pro software (available from Wavemetrics, Inc.) may be used to analyze the data to obtain approximate confidence interval estimates for the distance (e.g., the horizontal distance "H" or the vertical distance "V") versus pupil diameter "D." These confidence interval estimates approximated confidence intervals for pupil position versus pupil diameter "D." Calibration trials and light/dark trials were run at least twice on each subject, and average data are generally discussed below and illustrated in drawings, except where examples of individual light/dark trials are presented (e.g., Figures 7, 8, 9A-9C, 10A-10D, and 11A-11C). A pupil size to image pixels calibration was performed using printed disks. This calibration determined that, within an image of the eye 100, approximately 21 pixels correspond to a change in pupil diameter "D" of about one millimeter. Thus, if the image of the pupil 112 increased by 21 pixels, the diameter "D" of the pupil 112 increased by about one millimeter. Similarly, if the image of the pupil 112 decreased by 21 pixels, the diameter "D" of the pupil 112 decreased by about one millimeter.
Results and Measurements
Figures 7, 8, 9A-9C, 10A-10D, and 11A-11C illustrate data reported mainly for five of the subjects recruited. Data from the remaining two subjects showed substantial upper lid intrusion contaminating vertical data. Their horizontal data were similar to data for the other five subjects; one case is included in the examples of horizontal data presented.
Figure 6 depicts data collected from a single subject. In Figure 6, examples of data are shown for three one-second periods. A horizontal axis "H1" is time and a vertical axis "V1" is eye (or pupil) position measured in pixels. In a first period (at zero to one second along the horizontal axis "H1"), the subject fixated on the left target "LT," which was located at approximately 3 degrees left of center relative to the eye 100. In a second period (at two seconds to three seconds along the horizontal axis "H1"), the subject fixated on a target (e.g., the central fixation target "CFT") located at approximately the center relative to the eye 100. In a third period (at four seconds to five seconds along the horizontal axis "H1"), the subject fixated on the right target "RT," which was located at approximately 3 degrees right of center relative to the eye 100. For each fixation period, a horizontal position (measured in pixels) of the center "PC" of the pupil 112 is shown as upright triangles "Δ," a horizontal position (measured in pixels) of the corneal reflection "CR" is shown as inverted triangles "∇," and a difference "H_PC-CR" (measured in pixels) between the horizontal positions of the center "PC" of the pupil 112 and the corneal reflection "CR" is illustrated as circles "o."
Solid lines 600, 602, and 604 illustrate smoothing of the difference "H_PC-CR" for each of the first, second, and third fixation periods, respectively. The solid lines 600, 602, and 604 are spline plots of the difference "H_PC-CR" for the first, second, and third periods, respectively, determined using a simple five-bin filter having weights of 0.3152, 0.2438, and 0.0986, and a corner frequency of approximately 9 Hz.
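The five-bin smoothing filter can be sketched as follows. This is a minimal illustration assuming numpy, and assuming the three listed weights form a symmetric, normalized five-tap kernel (center 0.3152, flanked by 0.2438 and 0.0986 on each side):

```python
import numpy as np

# Symmetric five-bin kernel built from the weights given above;
# 0.0986 + 0.2438 + 0.3152 + 0.2438 + 0.0986 = 1.0, so the filter
# preserves the mean level of the signal.
FIVE_BIN = np.array([0.0986, 0.2438, 0.3152, 0.2438, 0.0986])

def smooth_five_bin(signal):
    """Smooth a record such as the H_PC-CR difference; mode="same"
    preserves the record length (edge samples are only partially weighted)."""
    return np.convolve(np.asarray(signal, dtype=float), FIVE_BIN, mode="same")

# Because the weights sum to one, a constant signal is unchanged away
# from the edges.
flat = smooth_five_bin(np.full(11, 5.0))
```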
An inset plot 610 is a plot of an average value of the difference "H_PC-CR" (measured in pixels) for each of the three fixation periods plotted against the direction of gaze (i.e., 3 degrees left of center, center, and 3 degrees right of center). The average values are plotted using a capital letter "I" inside a circle. The three average values were fitted by a linear regression (illustrated as dashed line 612) having a slope of about -1.81 pixels/degree and an r² value of about 0.997. The standard deviation of the difference "H_PC-CR" for these three fixation periods was, on average, about 0.38 pixels for raw data and about 0.20 pixels for smoothed data. Using the linear regression, the average variability of the smoothed difference "H_PC-CR" (illustrated by lines 600, 602, and 604) was about 0.11 degrees of eye position (or about 7 min arc) for the three fixation periods. This average variability value was calculated by dividing the average standard deviation of the difference "H_PC-CR" for the three fixation periods by the slope of the linear regression ((0.20 pixels) / (1.81 pixels/degree) = 0.11 degrees of eye position). According to the linear regression, the difference "H_PC-CR" could be modeled (for the right eye of the subject) using the following linear equation: 11.2 - 1.8(gaze direction in degrees). Thus, if the difference "H_PC-CR" is known, the gaze direction in degrees may be determined using this linear equation. In other words, the first (horizontal) relationship may be expressed as this linear equation obtained using linear regression analysis.
The pupil diameter for the first, second, and third fixation periods was about 86.1 ± 0.7 pixels, about 86.6 ± 0.9 pixels, and about 85.5 ± 1.0 pixels, respectively, or approximately 4.10 ± 0.05 mm, on average, for this subject. When the five-bin smoothing used above was applied to the pupil diameter data, the average standard deviation was reduced to about 0.6 pixels or about 0.03 mm.
Thus, Figure 6 illustrates the first (horizontal) relationship (in this case a linear relationship) between eye position and the horizontal distance "H" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR." While performed only for horizontal pupil displacement, Figure 6 may be characterized as illustrating the results of a calibration process for the subject. Further, pupil size during the calibration process was substantially constant (approximately 4.10 ± 0.05 mm). Thus, the reference pupil size for this calibration process was approximately 4.10 ± 0.05 mm.
Figure 7 illustrates data for the same subject during 16 seconds of visual stimulation, while fixating the central fixation target "CFT." The 16 second trial contains four "on" periods during which the visible light source 450 was "on" and four "off" periods during which the visible light source 450 was "off." The same smoothing used in Figure 6 has been applied to the data illustrated in Figure 7. An upper graph 710 includes a plot 712 of pupil diameter observed during the 2-sec-on / 2-sec-off visual stimulation. A plot 714 (which appears as a square wave at the bottom of the upper graph 710) indicates stimulus timing. The stimulus (the visible light source 450) was "on" when the plot 714 has a value of about 50 pixels and the stimulus was "off" when the plot 714 has a value significantly less than about 50 pixels. On average, pupil diameter (illustrated by the plot 712) varied from about 56.8 pixels to about 80.0 pixels, or from about 2.7 mm to about 3.8 mm (which corresponds to an amplitude of constriction of about 1.1 mm).
A center graph 720 illustrates a plot 722 of the horizontal distance "H" (see Figure 3B) between the center "PC" of the pupil 112 and the corneal reflection "CR." It is immediately apparent that the plot 722, which is routinely scaled and used as a signal of horizontal eye position, covaried with pupil diameter (illustrated by the plot 712 in the upper graph 710). The extent of variation was, on average, about 2.1 pixels, amounting to about 1.15 degrees of horizontal apparent (or pseudo) eye movement for this subject.
A bottom graph 730 illustrates a plot 732 of pupil "velocity" (rate of change of pupil diameter with respect to time). A curve or line 734 is a plot of smoothed pupil "velocity" data (e.g., a spline plot of the pupil "velocity" data using a simple five-bin filter having weights of 0.3152, 0.2438, and 0.0986, and a corner frequency of approximately 9 Hz).
Data similar to that depicted in Figure 7 is presented for another subject in Figure 8. Figures 7 and 8 illustrate the same general pattern of results. Specifically, on average, pupil diameter varied from about 76.9 pixels to about 119.1 pixels, or about 3.7 mm to about 5.7 mm in this eye (which corresponds to an amplitude of constriction of about 2.0 mm). The horizontal position of the center "PC" of the pupil 112 relative to the horizontal position of the corneal reflection "CR" varied by approximately 1.4 pixels, amounting to about 0.78 degrees of apparent (or pseudo) horizontal eye movement for this subject.
Figures 7 and 8 illustrate that the visible light source 450 caused changes in pupil size. Further, while the eye 100 remained fixated on the central fixation target "CFT," the eyetracking device 400 erroneously indicated the eye had moved (because no correction for pupil size changes had been performed). Thus, Figures 7 and 8 provide an example of the existence of the third (horizontal) relationship between pupil size and the horizontal component of the location of the center "PC" of the pupil 112 of the right eye.
Pupil Position as a Function of Pupil Diameter

Data for the subject of Figure 7 are presented in a different manner in Figures 9A-9C. A graph 910 along the left-hand side of Figure 9A, a graph 920 along the left-hand side of Figure 9B, and a graph 930 along the left-hand side of Figure 9C each illustrate horizontal differences "H_PC-CR" plotted against pupil diameter "D." A graph 940 along the right-hand side of Figure 9A, a graph 950 along the right-hand side of Figure 9B, and a graph 960 along the right-hand side of Figure 9C each illustrate vertical differences "V_PC-CR" plotted against pupil diameter "D." The graphs 910 and 940 of Figure 9A illustrate data collected during pupil constriction and the graphs 920 and 950 of Figure 9B illustrate data collected during pupil redilation.
As may be seen in Figures 7 and 8, each stimulus "on" period contained a relatively brief (approximately 0.5 sec) constriction response, followed by some redilation, and each stimulus "off" period contained further redilation. Thus, a single pair of stimulus "on" and stimulus "off" periods contains one relatively brief constriction period and one longer redilation period. The data in Figures 9A-9C were pooled from two 16-second trials, and include eight constriction periods and eight redilation periods. The data points from each constriction or redilation period (illustrated as dots in the graphs 910 and 940 of Figure 9A, and the graphs 920 and 950 of Figure 9B) are connected by lines to indicate data points obtained from a single constriction or redilation.
The graph 930 of Figure 9C illustrates an average relationship (e.g., the fifth (horizontal) relationship) between pupil diameter "D" and the horizontal difference "H_PC-CR" during constriction as a thick line "HC1." Confidence intervals (e.g., 95% confidence intervals) for the relationship depicted using the thick line "HC1" are illustrated as a thin line above the thick line "HC1" and a thin line below the thick line "HC1."
The graph 930 illustrates an average relationship (e.g., the fifth (horizontal) relationship) between pupil diameter "D" and the horizontal difference "H_PC-CR" during redilation as a dashed thick line "HR1." Confidence intervals (e.g., 95% confidence intervals) for the relationship depicted using the dashed thick line "HR1" are illustrated as a dashed thin line above the thick line "HR1" and a dashed thin line below the thick line "HR1."
The graph 960 illustrates an average relationship (e.g., the sixth (vertical) relationship) between pupil diameter "D" and the vertical difference "V_PC-CR" during constriction as a thick line "VC1." Confidence intervals (e.g., 95% confidence intervals) for the relationship depicted using the thick line "VC1" are illustrated as a thin line above the thick line "VC1" and a thin line below the thick line "VC1."
The graph 960 illustrates an average relationship (e.g., the sixth (vertical) relationship) between pupil diameter "D" and the vertical difference "V_PC-CR" during redilation as a dashed thick line "VR1." Confidence intervals (e.g., 95% confidence intervals) for the relationship depicted using the dashed thick line "VR1" are illustrated as a dashed thin line above the thick line "VR1" and a dashed thin line below the thick line "VR1."
The average relationships illustrated as lines "HC1," "HR1," "VC1," and "VR1" are second order polynomials fit to the data depicted in the graph 910 of Figure 9A, the graph 920 of Figure 9B, the graph 940 of Figure 9A, and the graph 950 of Figure 9B, respectively.
For this subject, little difference was observed between the horizontal relationships for constriction and redilation (illustrated as the thick lines "HC1" and "HR1"). On the other hand, there was a modest difference in the vertical relationships for constriction and redilation (illustrated as the thick lines "VC1" and "VR1").
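Fitting separate second-order polynomials to the constriction and redilation branches can be sketched as follows. This is a minimal illustration assuming numpy, with invented (diameter, H_PC-CR) samples chosen so that, as in the data discussed below, the redilation branch lies above the constriction branch in the mid-range:

```python
import numpy as np

# Hypothetical (diameter, H_PC-CR) samples for one eye, split by phase.
d_con = np.array([60.0, 65.0, 70.0, 75.0, 80.0])
h_con = np.array([12.0, 11.6, 11.1, 10.7, 10.2])   # constriction branch
d_red = np.array([60.0, 65.0, 70.0, 75.0, 80.0])
h_red = np.array([12.3, 11.9, 11.5, 11.0, 10.5])   # redilation branch

# Second-order polynomial fit to each branch, as for the HC1/HR1 curves.
fit_con = np.poly1d(np.polyfit(d_con, h_con, 2))
fit_red = np.poly1d(np.polyfit(d_red, h_red, 2))

# Hysteresis at a mid-range diameter: the gap between the two fitted
# branches evaluated at the same pupil size.
hysteresis_px = fit_red(70.0) - fit_con(70.0)
```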
Figures 10A-10D illustrate data collected from four other subjects plotted in the same format as the graphs 930 and 960 of Figure 9C. Figure 10A includes a graph 1010 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC2") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR2"), and a graph 1050 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC2") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR2"). The graphs 1010 and 1050 also include thin lines and thin dashed lines illustrating confidence intervals.
Figure 10B includes a graph 1020 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC3") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR3"), and a graph 1060 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC3") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR3"). The graphs 1020 and 1060 also include thin lines and thin dashed lines illustrating confidence intervals.
Figure 10C includes a graph 1030 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC4") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR4"), and a graph 1070 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC4") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR4"). The graphs 1030 and 1070 also include thin lines and thin dashed lines illustrating confidence intervals.
Figure 10D includes a graph 1040 depicting an average relationship (e.g., the fifth (horizontal) relationship) observed during constriction (illustrated as thick line "HC5") and an average relationship (e.g., the fifth (horizontal) relationship) observed during redilation (illustrated as dashed thick line "HR5"), and a graph 1080 depicting an average relationship (e.g., the sixth (vertical) relationship) observed during constriction (illustrated as thick line "VC5") and an average relationship (e.g., the sixth (vertical) relationship) observed during redilation (illustrated as dashed thick line "VR5"). The graphs 1040 and 1080 also include thin lines and thin dashed lines illustrating confidence intervals.
The confidence intervals in Figure 9C and 10A-10D may be determined using IGOR software.
In comparing graphs for different subjects, it should be noted that the absolute values of the ordinate scale (the differences "H_PC-CR" and "V_PC-CR" expressed in pixels) may differ substantially from one subject to another, reflecting differences in pupil position relative to the corneal reflection "CR" in different eyes.
The data of Figures 9C and 10A-10D have some features in common. For example, the average relationships (illustrated as the thick lines "HC1" to "HC5," and the dashed thick lines "HR1" to "HR5") between the horizontal difference "H_PC-CR" and pupil diameter "D" during constriction and redilation all had a negative slope. Because all of the eyes studied were right eyes, this indicates that larger pupils had centers more temporal than smaller pupils.
The average relationships (illustrated as the thick lines "VC1" to "VC5," and the dashed thick lines "VR1" to "VR5") between the vertical difference "V_PC-CR" and pupil diameter "D" during constriction and redilation showed more variability in this regard. Specifically, the average relationships for three of the subjects (illustrated as the thick lines "VC1," "VC3," and "VC5" and the dashed thick lines "VR1," "VR3," and "VR5") had positive slopes, the average relationships for one of the subjects (illustrated as the thick line "VC4" and the dashed thick line "VR4") had essentially zero slope, and the average relationships for one of the subjects (illustrated as the thick line "VC2" and the dashed thick line "VR2") had a negative slope. In addition, a parabolic regression was used to fit the vertical difference "V_PC-CR" data of the graphs 960, 1050, 1060, 1070, and 1080, and many of the average relationships (illustrated as the thick lines "VC1" to "VC5," and the dashed thick lines "VR1" to "VR5") showed significant amounts of curvature.
A significant amount of hysteresis was present in the average relationships (illustrated as the thick lines "HC1" to "HC5" and "VC1" to "VC5," and the dashed thick lines "HR1" to "HR5" and "VR1" to "VR5"). This was especially true for the horizontal average relationships (illustrated as the thick lines "HC1" to "HC5," and the dashed thick lines "HR1" to "HR5"), where the average constriction relationships (illustrated as the thick lines "HC1" to "HC5") tended to be convex downward or straight, while the average redilation relationships (illustrated as the dashed thick lines "HR1" to "HR5") tended to be convex upward. This means that for the central portions of each plot of the average relationships (illustrated as the thick lines "HC1" to "HC5," and the dashed thick lines "HR1" to "HR5"), the horizontal difference "H_PC-CR" value was typically smaller during constriction than during redilation.
There was less hysteresis observed for the vertical average relationships (illustrated as the thick lines "VC1" to "VC5," and the dashed thick lines "VR1" to "VR5"). As may be seen in Figures 9C and 10A-10D, the average vertical differences "V_PC-CR" observed during constriction and redilation were quite similar.
To compare hysteresis for the horizontal differences "H_PC-CR" and vertical differences "V_PC-CR" directly, calibration data for each subject was used to convert the horizontal differences "H_PC-CR" and the vertical differences "V_PC-CR" to degrees of eye rotation. For the horizontal difference "H_PC-CR," hysteresis was about 0.26 ± 0.13 degrees (ranging from about 0.11 degrees to about 0.43 degrees). Changes in pupil size observed during the present experiments resulted in a range of horizontal "pseudo eye movements" of about 0.81 ± 0.25 degrees (ranging from about 0.53 degrees to about 1.22 degrees). For the vertical difference "V_PC-CR," hysteresis was about 0.09 ± 0.05 degrees (ranging from about 0.04 degrees to about 0.18 degrees). Changes in pupil size observed during the present experiments resulted in a range of vertical "pseudo eye movements" of about 0.54 ± 0.29 degrees (ranging from about 0.02 degrees to about 0.85 degrees).
Thus, the graphs 930, 1010, 1020, 1030, and 1040 on the left hand sides of Figures 9C and 10A-10D, respectively, illustrate examples of fifth (horizontal) relationships between pupil sizes and the horizontal distances "H" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" determined for the right eyes of five different subjects. The graphs 960, 1050, 1060, 1070, and 1080 on the right hand sides of Figures 9C and 10A-10D, respectively, illustrate examples of sixth (vertical) relationships between pupil sizes and the vertical distances "V" between the locations of the center "PC" of the pupil 112 and the corneal reflection "CR" determined for the right eyes of five different subjects. These graphs demonstrate that the fifth and sixth relationships may be determined for different individuals. Further, these graphs indicate that the fifth and sixth relationships may include some amount of hysteresis. These figures further illustrate that the fifth and sixth relationships may be linear (e.g., determined using a linear regression) or curved (e.g., determined using a nonlinear regression analysis or other type of curve fitting).
Discussion
Because each eye has an idiosyncratic pupil displacement (or location shift) caused by constriction and redilation, the direction and amplitude of the apparent (or pseudo) eye movement will also be idiosyncratic to some degree. However, the horizontal pseudo eye movement will tend to have the same sign (or be in the same direction) across subjects, because the fifth (horizontal) relationships determined for the subjects studied each tended to have a negative slope when the horizontal differences "H_PC-CR" were plotted against pupil diameter (as in the graphs 930, 1010, 1020, 1030, and 1040 of Figures 9C and 10A-10D). The size of the overall effect for the subjects in the present experiments was on average about 0.81 degrees (horizontal) and about 0.54 degrees (vertical), with the largest cases being about 1.22 degrees (horizontal) and about 0.85 degrees (vertical). As noted herein, the negative slope of the horizontal plots implies that larger pupils had centers more temporal than smaller pupils.
In addition to the overall apparent (or pseudo) eye movements created in this manner, there was some hysteresis present. Specifically, the relation between apparent (or pseudo) eye movement and pupil diameter "D" was somewhat different during pupil constriction than during pupil redilation. The maximum extent of the hysteresis for a given eye in these experiments was on average about 0.26 degrees (horizontal) and about 0.09 degrees (vertical), with the largest cases being about 0.43 degrees (horizontal) and about 0.18 degrees (vertical). Taken in the context of the general finding that the centers "PC" of more constricted pupils tended to be located more nasally, the finding that the horizontal difference "H_PC-CR" for the same pupil diameter was smaller during constriction than during redilation implies that the position of the pupil center lags behind the change in pupil diameter. That is, during constriction the center "PC" remains nearer to the position it occupies when the pupil is dilated, whereas during redilation it remains nearer to the position it occupies when the pupil is constricted. This has implications for anisotropic aspects of iris structure and function.
To sum up, the overall horizontal apparent (or pseudo) eye movements were about 60% larger than the vertical pseudo eye movements, and the horizontal hysteresis effect was about three times larger than the vertical hysteresis effect. The magnitude of the change in pupil diameter "D" in the present experiments (i.e., the difference between the largest diameter and the smallest diameter for each subject) ranged from about 1.15 mm to about 2.12 mm (or an average ± standard deviation of about 1.65 ± 0.43 mm). Pseudo eye movement per millimeter of change in pupil diameter was calculated to be about 0.53 ± 0.27 degrees/mm (horizontal) and about 0.31 ± 0.19 degrees/mm (vertical). This result was calculated without considering the curvilinear relationships between pupil diameter and either the horizontal differences "H_PC-CR" or the vertical differences "V_PC-CR."
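As a rough check, the per-millimeter rate can also be computed directly from the group averages (a sketch only; the reported 0.53 and 0.31 degrees/mm values were obtained by averaging per-subject ratios, so dividing the group averages yields slightly different numbers):

```python
# Sketch: dividing the average pseudo eye movement by the average change
# in pupil diameter. These group-average figures differ slightly from the
# reported per-subject averages (0.53 and 0.31 degrees/mm).
horizontal_pseudo_deg = 0.81  # average horizontal pseudo eye movement (degrees)
vertical_pseudo_deg = 0.54    # average vertical pseudo eye movement (degrees)
diameter_change_mm = 1.65     # average change in pupil diameter (mm)

h_rate = horizontal_pseudo_deg / diameter_change_mm  # ~0.49 degrees/mm
v_rate = vertical_pseudo_deg / diameter_change_mm    # ~0.33 degrees/mm
```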
Practical Implications
In particular, the present results suggest that "pseudo eye movements" caused by changes in pupil size are likely to be a problem when the required accuracy of eye position is less than about one degree. However, in some situations only limited accuracy is required; in those situations, the presence of "pseudo eye movements" may not present a problem. Further, in other situations, only saccadic eye movements may be of interest. Because of the slow dynamics of the pupil system, the effect of changing pupil size on the eye position signal is also slow. Therefore, relatively fast saccadic eye movements can be distinguished from both slow real eye movements and pseudo eye movements. Further, potential problems caused by pseudo eye movements may be minimal or nonexistent in situations where pupil diameter varies insignificantly. When experiments (or clinical tests) are performed on older subjects and illumination is near constant, substantial changes in pupil diameter are unlikely to occur. For example, during visual field testing of older patients, substantial variations in pupil size would be unlikely. However, the same is not the case for younger subjects in similar circumstances. Sporadic large changes in pupil diameter "D" have been observed during threshold sensitivity testing of younger subjects. Presumably, this is caused by fluctuations of autonomic innervation to the iris 114, which could be related to variations in mental state. Pupil diameter variations that are systematically related to accommodative effort may also constitute a potential problem when eye position is being monitored.
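The speed-based distinction described above can be sketched as a simple velocity threshold (a minimal illustration, not from the patent; the 60 Hz sample rate and 30 deg/s threshold are assumed values chosen only for the example):

```python
import numpy as np

# A minimal sketch of separating fast saccadic eye movements from slow
# pupil-driven pseudo eye movements by instantaneous velocity. The sample
# rate and threshold below are illustrative assumptions.

def find_saccade_samples(position_deg, sample_rate_hz=60.0,
                         threshold_deg_per_s=30.0):
    """Return a boolean mask marking samples whose instantaneous speed
    exceeds the threshold; slow pseudo eye movements stay below it."""
    velocity = np.gradient(np.asarray(position_deg)) * sample_rate_hz  # deg/s
    return np.abs(velocity) > threshold_deg_per_s
```

A slow drift of a fraction of a degree per second (typical of pupil-driven pseudo movement) never crosses such a threshold, while a saccade of even one or two degrees completed within a few samples does.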
Compensating for Changes in Pupil Size
Given the present results described herein, one approach to dealing with "pseudo eye movements" caused by changes in pupil size is to correct measured eye movements for changes in pupil diameter. Figures 11A-11C illustrate three examples of results obtained using this approach. Figure 11A depicts a graph 1110, which illustrates data obtained from a first subject. Figure 11B depicts a graph 1120, which illustrates data obtained from a second subject. Figure 11C depicts a graph 1130, which illustrates data obtained from a third subject. Thus, each of the graphs 1110, 1120, and 1130 illustrates data obtained from a different subject. Each of these subjects performed two 16-second light/dark trials (in which the visible light source 450 alternated between being "on" for two seconds and "off" for two seconds) during which the subject fixated on the central fixation target "CFT."
A function (an example of the fifth (horizontal) relationship) relating the horizontal difference "H_PC-CR" to pupil diameter was determined for one of the two trials, and the resulting function was used to correct the data collected during the other trial. In graphs 1110 and 1120, the subject's eye was assumed to remain stationary during the light/dark trials. However, in graph 1130, several small saccadic eye movements were apparent, and those data segments were omitted when determining the function. When determining the function, it was assumed that pupil position was a single-valued function of pupil diameter. In other words, if the eye 100 does not move, it is assumed that the center "PC" of the pupil 112 will always be in the same location for a particular pupil diameter, independent of history. Thus, hysteresis was ignored.
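The fit-then-correct procedure above can be sketched as follows (a minimal illustration with hypothetical variable names; the patent specifies no code, and the polynomial degree is an assumption — the text notes the relationship may be linear or curved). Hysteresis is ignored, as in the experiments:

```python
import numpy as np

# Sketch: fit the horizontal difference H_PC-CR as a function of pupil
# diameter on one trial, then remove the fitted, diameter-dependent
# component ("pseudo eye movement") from another trial's data.

def fit_pupil_correction(diameter_mm, h_pc_cr, degree=2):
    """Fit H_PC-CR vs. pupil diameter with a low-order polynomial."""
    return np.poly1d(np.polyfit(diameter_mm, h_pc_cr, degree))

def apply_pupil_correction(model, diameter_mm, h_pc_cr):
    """Subtract the predicted pseudo movement, re-centering on the mean."""
    return h_pc_cr - model(diameter_mm) + np.mean(h_pc_cr)
```

In the experiments described below, applying a function fitted on one trial to the raw data of the other trial reduced the standard deviation of the position signal by roughly a quarter to a half.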
In the graphs 1110, 1120, and 1130, the thick lines 1112, 1122, and 1132, respectively, are plots of raw data, and the thinner lines 1114, 1124, and 1134, respectively, are plots of corrected data. The standard deviation of the raw data plotted in graphs 1110, 1120, and 1130 (as the thick lines 1112, 1122, and 1132, respectively) was reduced by about 55%, about 54%, and about 27%, respectively, as a result of the correction (using the function). For all three subjects, the reduction in standard deviation ranged from about 25% to about 55%, with an average of about 39%. The resulting standard deviations, expressed in degrees of eye rotation, averaged about 0.18 degrees.
The axis showing the greatest variation of the horizontal difference "H_PC-CR" with pupil size was selected to assess the possibility of correcting the eye position. The graph 1130 illustrates that the correction process does not affect the small saccadic eye movements present. The reason is that pupil diameter does not change appreciably during small saccadic eye movements, the dynamics of pupil diameter changes being much slower than those of saccades.
A positive correlation between the corrected horizontal difference "H_PC-CR" data (illustrated by the thin lines 1114, 1124, and 1134) and the raw horizontal difference "H_PC-CR" data (illustrated by the thick lines 1112, 1122, and 1132) indicates that additional correction may be possible. The correlation was 0.62 ± 0.21 (standard deviation). Further, the correlation was always positive and did not depend on whether the trial was the one selected for deriving the "H_PC-CR"/pupil diameter relationship or the one corrected using the relationship. A more successful correction might include consideration of the dynamics of the relationship between pupil diameter and pupil position.
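The residual check described above can be computed directly (a sketch only; a persistently positive value suggests that diameter-related variation survives the static correction and that a dynamic model could do better):

```python
import numpy as np

# Sketch: Pearson correlation between the raw and corrected position
# traces. If the static (history-free) correction were complete, this
# residual correlation would be near zero.

def residual_correlation(raw, corrected):
    """Return the Pearson correlation between raw and corrected traces."""
    return np.corrcoef(raw, corrected)[0, 1]
```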
COMPUTING DEVICE
Figure 12 is a diagram of hardware and an operating environment in conjunction with which implementations of the one or more modules 442 illustrated in Figure 4B may be practiced. The description of Figure 12 is intended to provide a brief, general description of suitable computer hardware and a suitable computing environment in which implementations may be practiced. Although not required, implementations are described in the general context of computer- executable instructions, such as program modules, being executed by a computer, such as a personal computer. Generally, program modules include routines, programs, objects, components, data structures, etc., that perform particular tasks or implement particular abstract data types.
Moreover, those skilled in the art will appreciate that implementations may be practiced with other computer system configurations, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Implementations may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
The exemplary hardware and operating environment of Figure 12 includes a general-purpose computing device in the form of a computing device 12. Each of the one or more modules 442 illustrated in Figure 4B may be implemented using one or more computing devices like the computing device 12. Further, the computing device 440 may be implemented by computing devices substantially similar to the computing device 12. The computing device 12 includes the system memory 22, a processing unit 21, and a system bus 23 that operatively couples various system components, including the system memory 22, to the processing unit 21. There may be only one or there may be more than one processing unit 21, such that the processor of the computing device 12 comprises a single central-processing unit (CPU) or a plurality of processing units, commonly referred to as a parallel processing environment. The computing device 12 may be a conventional computer, a distributed computer, or any other type of computer.
The system bus 23 may be any of several types of bus structures including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. The system memory may also be referred to as simply the memory, and includes read only memory (ROM) 24 and random access memory (RAM) 25. A basic input/output system (BIOS) 26, containing the basic routines that help to transfer information between elements within the computing device 12, such as during start-up, is stored in ROM 24. The computing device 12 further includes a hard disk drive 27 for reading from and writing to a hard disk, not shown, a magnetic disk drive 28 for reading from or writing to a removable magnetic disk 29, and an optical disk drive 30 for reading from or writing to a removable optical disk 31 such as a CD ROM, DVD, or other optical media.
The hard disk drive 27, magnetic disk drive 28, and optical disk drive 30 are connected to the system bus 23 by a hard disk drive interface 32, a magnetic disk drive interface 33, and an optical disk drive interface 34, respectively. The drives and their associated computer-readable media provide nonvolatile storage of computer-readable instructions, data structures, program modules, and other data for the computing device 12. It should be appreciated by those skilled in the art that any type of computer-readable media which can store data that is accessible by a computer, such as magnetic cassettes, flash memory cards, USB drives, digital video disks, Bernoulli cartridges, random access memories (RAMs), read only memories (ROMs), and the like, may be used in the exemplary operating environment. As is apparent to those of ordinary skill in the art, the hard disk drive 27 and other forms of computer-readable media (e.g., the removable magnetic disk 29, the removable optical disk 31, flash memory cards, USB drives, and the like) accessible by the processing unit 21 may be considered components of the system memory 22.
A number of program modules may be stored on the hard disk drive 27, magnetic disk 29, optical disk 31, ROM 24, or RAM 25, including an operating system 35, one or more application programs 36, other program modules 37, and program data 38. A user may enter commands and information into the computing device 12 through input devices such as a keyboard 40 and pointing device 42. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 21 through a serial port interface 46 that is coupled to the system bus 23, but may be connected by other interfaces, such as a parallel port, game port, or a universal serial bus (USB). A monitor 47 or other type of display device is also connected to the system bus 23 via an interface, such as a video adapter 48. In addition to the monitor, computers typically include other peripheral output devices (not shown), such as speakers and printers.
The input devices described above are operable to receive user input and selections. Together the input and display devices may be described as providing a user interface.
Returning to Figure 12, the computing device 12 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 49. These logical connections are achieved by a communications device coupled to or a part of the computing device 12 (as the local computer). Implementations are not limited to a particular type of communications device. The remote computer 49 may be another computer, a server, a router, a network PC, a client, a memory storage device, a peer device, or other common network node, and typically includes many or all of the elements described above relative to the computing device 12. The remote computer 49 may be connected to a memory storage device 50. The logical connections depicted in Figure 12 include a local-area network (LAN) 51 and a wide-area network (WAN) 52. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
When used in a LAN-networking environment, the computing device 12 is connected to the local area network 51 through a network interface or adapter 53, which is one type of communications device. When used in a WAN-networking environment, the computing device 12 typically includes a modem 54, a type of communications device, or any other type of communications device for establishing communications over the wide area network 52, such as the Internet. The modem 54, which may be internal or external, is connected to the system bus 23 via the serial port interface 46. In a networked environment, program modules depicted relative to the personal computing device 12, or portions thereof, may be stored in the remote computer 49 and/or the remote memory storage device 50. It is appreciated that the network connections shown are exemplary and that other means of and communications devices for establishing a communications link between the computers may be used.
The computing device 12 and related components have been presented herein by way of particular example and also by abstraction in order to facilitate a high-level view of the concepts disclosed. The actual technical design and implementation may vary based on the particular implementation while maintaining the overall nature of the concepts disclosed.
Each of the one or more modules 442 may be implemented using software components that are executable by the processing unit 21 and that, when executed, perform the functions described above. Further, the method 500 may be implemented as computer executable instructions that are executable by the processing unit 21. Such instructions may be encoded on one or more non-transitory computer-readable mediums for execution by one or more processing units. The foregoing described embodiments depict different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality.
While particular embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that, based upon the teachings herein, changes and modifications may be made without departing from this invention and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of this invention. Furthermore, it is to be understood that the invention is solely defined by the appended claims. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims), are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations).
Accordingly, the invention is not limited except as by the appended claims.

Claims

The invention claimed is:
1. A computer implemented method for use with a camera and one or more light sources, the camera being positioned to capture images of an eye, the eye comprising a cornea and a pupil having a size and a center, the one or more light sources being positioned to illuminate the eye and generate a corneal reflection on the cornea of the eye, the method comprising:
obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection;
determining a second relationship between the size of the pupil and the distance between the location of the center of the pupil and the location of the corneal reflection;
capturing an image of the eye with the camera;
detecting an observed location of the center of the pupil in the image captured;
detecting an observed size of the pupil in the image captured; detecting an observed location of the corneal reflection in the image captured;
determining an observed distance between the observed locations of the center of the pupil and the corneal reflection; and
determining a position of the eye based on the observed distance, the observed size of the pupil, the first relationship, and the second relationship.
2. The computer implemented method of claim 1, wherein the first relationship is obtained by performing a first calibration process comprising:
capturing a first set of calibration images of the eye with the camera as the eye fixates on each of a plurality of calibration targets, each of the calibration targets being arranged to position the pupil of the eye in a predetermined location, each of the first set of calibration images depicting the pupil positioned in one of the predetermined locations;
detecting a location of a center of the pupil in each of the first set of calibration images captured;
detecting a location of the corneal reflection in each of the first set of calibration images captured;
for each of the first set of calibration images captured, determining a distance between the locations of the center of the pupil and the corneal reflection, and associating the distance determined with the predetermined location of the pupil depicted in the calibration image; and
determining the first relationship based on the distances determined for the first set of calibration images and the predetermined locations of the pupil depicted in the first set of calibration images.
3. The computer implemented method of claim 2, wherein the second relationship is determined by performing a second calibration process comprising:
capturing a second set of calibration images of the eye with the camera as the eye fixates on a target while the pupil contracts and redilates, the target being arranged to position the pupil of the eye in a predetermined location, each of the second set of calibration images depicting the pupil positioned in the predetermined location;
detecting a location of a center of the pupil in each of the second set of calibration images captured;
detecting a size of the pupil in each of the second set of calibration images captured;
detecting a location of the corneal reflection in each of the second set of calibration images captured;
for each of the second set of calibration images captured, determining a distance between the locations of the center of the pupil and the corneal reflection; and determining the second relationship based on the distances determined for the second set of calibration images and the predetermined location of the pupil depicted in the second set of calibration images.
4. The computer implemented method of claim 1, wherein the second relationship is determined by performing a second calibration process comprising:
capturing a second set of calibration images of the eye with the camera as the eye fixates on a target while the pupil contracts and redilates, the target being arranged to position the pupil of the eye in a predetermined location, each of the second set of calibration images depicting the pupil positioned in the predetermined location;
detecting a location of a center of the pupil in each of the second set of calibration images captured;
detecting a size of the pupil in each of the second set of calibration images captured;
detecting a location of the corneal reflection in each of the second set of calibration images captured;
for each of the second set of calibration images captured, determining a distance between the locations of the center of the pupil and the corneal reflection; and determining the second relationship based on the distances determined for the second set of calibration images and the predetermined location of the pupil depicted in the second set of calibration images.
5. The computer implemented method of claim 4, wherein the second relationship is a mathematical relationship relating pupil size to a distance between the locations of the center of the pupil and the corneal reflection, and
determining the second relationship further comprises deriving the mathematical relationship.
6. The computer implemented method of claim 5, wherein the mathematical relationship is a linear equation or a polynomial equation.
7. The computer implemented method of claim 1, wherein the second relationship is a data structure that associates each of a plurality of pupil sizes with a distance between the locations of the center of the pupil and the corneal reflection.
8. The computer implemented method of claim 1, wherein the second relationship is a mathematical relationship relating pupil sizes to locations of the center of the pupil.
9. The computer implemented method of claim 1, wherein in the second relationship, the distance between the locations of the center of the pupil and the corneal reflection comprises a horizontal distance and a vertical distance, and the second relationship comprises:
a horizontal relationship between pupil size and the horizontal distance; and
a separate vertical relationship between pupil size and the vertical distance.
10. The computer implemented method of claim 1, wherein determining the position of the eye based on the observed distance, the observed size of the pupil, the first relationship, and the second relationship further comprises:
determining a first position of the eye based on the observed distance, and the first relationship; and
modifying the first position of the eye based on the observed size of the pupil, the observed distance, and the second relationship.
11. A computer implemented method for use with a camera and one or more light sources, the camera being positioned to capture images of an eye, the eye comprising a cornea and a pupil having a size and a center, the one or more light sources being positioned to illuminate the eye and generate a corneal reflection on the cornea of the eye, the method comprising:
obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection;
determining a second relationship between the size of the pupil and the location of the center of the pupil;
capturing an image of the eye with the camera; detecting an observed location of the center of the pupil in the image captured;
detecting an observed size of the pupil in the image captured; detecting an observed location of the corneal reflection in the image captured;
determining an observed distance between the observed locations of the center of the pupil and the corneal reflection; and
determining a position of the eye based on the observed distance, the observed size of the pupil, the observed location of the center of the pupil, the first relationship, and the second relationship.
12. The computer implemented method of claim 11, wherein the second relationship is determined by performing a second calibration process comprising:
capturing a second set of calibration images of the eye with the camera as the eye fixates on a target while the pupil contracts and redilates, the target being arranged to position the pupil of the eye in a predetermined location, each of the second set of calibration images depicting the pupil positioned in the predetermined location;
detecting a location of a center of the pupil in each of the second set of calibration images captured;
detecting a size of the pupil in each of the second set of calibration images captured; and
determining the second relationship based on the sizes of the pupil detected for the second set of calibration images and the predetermined location of the pupil depicted in the second set of calibration images.
13. The computer implemented method of claim 12, wherein the second relationship is a mathematical relationship relating pupil sizes to locations of the center of the pupil, and
determining the second relationship further comprises deriving the mathematical relationship.
14. The computer implemented method of claim 13, wherein the mathematical relationship is a linear equation or a polynomial equation.
15. The computer implemented method of claim 11, wherein the second relationship is a data structure that associates each of a plurality of pupil sizes with a location of the center of the pupil.
16. The computer implemented method of claim 11, wherein the second relationship is a mathematical relationship relating pupil sizes to locations of the center of the pupil.
17. The computer implemented method of claim 11, wherein determining the position of the eye based on the observed distance, the observed size of the pupil, the observed location of the center of the pupil, the first relationship, and the second relationship further comprises:
determining a first position of the eye based on the observed distance, and the first relationship; and
modifying the first position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, and the second relationship.
18. The computer implemented method of claim 11, wherein the location of the center of the pupil has a horizontal component and a vertical component, and the second relationship comprises:
a horizontal relationship between pupil size and the horizontal component of the location of the center of the pupil; and
a separate vertical relationship between pupil size and the vertical component of the location of the center of the pupil.
19. A system for use with an eye comprising a cornea, and a pupil having a size and a center, the system comprising: means for obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of a reflection from the cornea of the eye;
means for determining a second relationship between the size of the pupil and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection;
means for capturing an image of the eye;
means for detecting an observed location of the center of the pupil in the image captured;
means for detecting an observed size of the pupil in the image captured; means for detecting an observed location of a corneal reflection in the image captured; and
means for determining a position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, the observed location of the corneal reflection, the first relationship, and the second relationship.
20. The system of claim 19, wherein the means for determining the position of the eye based on the observed size of the pupil, the observed location of the center of the pupil, the observed location of the corneal reflection, the first relationship, and the second relationship further comprises:
means for determining a first position of the eye based on the observed location of the center of the pupil, the observed location of the corneal reflection, and the first relationship; and
means for modifying the first position of the eye based on the observed size of the pupil, the second relationship, and at least one of the observed location of the center of the pupil and the observed location of the corneal reflection.
21. A system for use with an eye comprising a cornea, and a pupil having a size and a center, the system comprising:
at least one camera positioned to capture images of the eye; one or more light sources positioned to illuminate the eye and generate a corneal reflection on the cornea of the eye;
a display positioned to display one or more targets viewable by the eye; and
a computing device comprising at least one processor and a memory configured to store instructions executable by the at least one processor and, when executed thereby, causing the at least one processor to perform a method comprising:
obtaining a first relationship between eye position and a distance between a location of the center of the pupil and a location of the corneal reflection;
determining a second relationship between the size of the pupil and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection;
instructing the display to display a target to the eye;
instructing the at least one camera to capture images of the eye while the eye views the target displayed by the display;
detecting observed locations of the center of the pupil in the images captured;
detecting observed sizes of the pupil in the images captured;
detecting observed locations of the corneal reflection in the images captured; and
determining positions of the eye based on the observed sizes of the pupil, the observed locations of the center of the pupil, the observed locations of the corneal reflection, the first relationship, and the second relationship.
22. The system of claim 21, wherein the one or more light sources comprise one or more infrared light sources.
23. The system of claim 21, wherein the at least one camera is a digital video camera.
24. A non-transitory computer-readable medium comprising instructions that, when executed by at least one processor, cause the at least one processor to perform a method comprising:
obtaining a first relationship between eye position and a distance between a location of a center of a pupil and a location of a corneal reflection;
determining a second relationship between pupil size and at least one of (i) the location of the center of the pupil and (ii) the location of the corneal reflection;
detecting observed locations of the center of the pupil in a plurality of images of the eye;
detecting observed sizes of the pupil in the plurality of images of the eye;
detecting observed locations of the corneal reflection in the plurality of images of the eye; and
determining positions of the eye based on the observed sizes of the pupil, the observed locations of the center of the pupil, the observed locations of the corneal reflection, the first relationship, and the second relationship.
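The two-relationship correction recited in claims 19-24 can be sketched in code. The sketch below is an illustrative reconstruction only: the linear forms of both relationships, and all class, function, and parameter names, are assumptions for demonstration, not the patented implementation. Gaze is first estimated from the pupil-center-to-corneal-reflection (PCCR) vector via a calibrated mapping (the "first relationship"), and the apparent pupil center is corrected for its pupil-size-dependent shift (the "second relationship") before that vector is formed.

```python
# Hypothetical sketch of the claimed method: estimate eye position from
# the PCCR vector, after removing a pupil-size-dependent shift of the
# apparent pupil center. All names and linear forms are illustrative.
from dataclasses import dataclass


@dataclass
class FirstRelationship:
    """Assumed linear map from PCCR vector (pixels) to gaze angle (deg)."""
    gain_x: float
    gain_y: float
    offset_x: float
    offset_y: float

    def gaze(self, pccr_x: float, pccr_y: float) -> tuple:
        return (self.gain_x * pccr_x + self.offset_x,
                self.gain_y * pccr_y + self.offset_y)


@dataclass
class SecondRelationship:
    """Assumed linear shift of the apparent pupil center with pupil size."""
    slope_x: float       # pixels of pupil-center shift per mm of diameter
    slope_y: float
    ref_diameter: float  # pupil diameter (mm) at calibration

    def center_shift(self, diameter: float) -> tuple:
        d = diameter - self.ref_diameter
        return self.slope_x * d, self.slope_y * d


def eye_position(pupil_center: tuple, corneal_reflection: tuple,
                 pupil_diameter: float,
                 first: FirstRelationship,
                 second: SecondRelationship) -> tuple:
    """Estimate gaze, correcting the pupil-size artifact first."""
    shift_x, shift_y = second.center_shift(pupil_diameter)
    # Remove the size-dependent shift from the observed pupil center
    # before forming the pupil-center-minus-corneal-reflection vector.
    pccr_x = (pupil_center[0] - shift_x) - corneal_reflection[0]
    pccr_y = (pupil_center[1] - shift_y) - corneal_reflection[1]
    return first.gaze(pccr_x, pccr_y)
```

With zero slopes the correction is a no-op and the result reduces to the plain PCCR mapping; a nonzero slope shifts the estimated gaze in proportion to the deviation of pupil diameter from its calibration value, which is the effect the claims are directed at removing.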
PCT/US2010/055749 2009-11-06 2010-11-05 Methods of improving accuracy of video-based eyetrackers WO2011057161A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US25902809P 2009-11-06 2009-11-06
US61/259,028 2009-11-06

Publications (1)

Publication Number Publication Date
WO2011057161A1 true WO2011057161A1 (en) 2011-05-12

Family

ID=43970386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/055749 WO2011057161A1 (en) 2009-11-06 2010-11-05 Methods of improving accuracy of video-based eyetrackers

Country Status (1)

Country Link
WO (1) WO2011057161A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2596300A (en) * 2020-06-23 2021-12-29 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems
SE2150387A1 (en) * 2021-03-30 2022-10-01 Tobii Ab System and method for determining reference gaze data

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4859050A (en) * 1986-04-04 1989-08-22 Applied Science Group, Inc. Method and system for generating a synchronous display of a visual presentation and the looking response of many viewers
US5532784A (en) * 1992-09-14 1996-07-02 Nikon Corporation Eye-gaze detecting adapter
US5861940A (en) * 1996-08-01 1999-01-19 Sharp Kabushiki Kaisha Eye detection system for providing eye gaze tracking
US6598971B2 (en) * 2001-11-08 2003-07-29 Lc Technologies, Inc. Method and system for accommodating pupil non-concentricity in eyetracker systems

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2596300A (en) * 2020-06-23 2021-12-29 Sony Interactive Entertainment Inc Gaze tracking apparatus and systems
US11983310B2 (en) 2020-06-23 2024-05-14 Sony Interactive Entertainment Inc. Gaze tracking apparatus and systems
SE2150387A1 (en) * 2021-03-30 2022-10-01 Tobii Ab System and method for determining reference gaze data

Similar Documents

Publication Publication Date Title
US11786117B2 (en) Mobile device application for ocular misalignment measurement
Nyström et al. Pupil size influences the eye-tracker signal during saccades
US10307054B2 (en) Adaptive camera and illuminator eyetracker
US9380938B2 (en) System and methods for documenting and recording of the pupillary red reflex test and corneal light reflex screening of the eye in infants and young children
US9301675B2 (en) Method and apparatus for validating testing procedures in objective ophthalmic eye testing for eye evaluation applications requiring subject compliance with eye fixation to a visual target
Wyatt The human pupil and the use of video-based eyetrackers
JP5255277B2 (en) Screening device for retinal diseases
US6659611B2 (en) System and method for eye gaze tracking using corneal image mapping
Schaeffel Kappa and Hirschberg ratio measured with an automated video gaze tracker
Otero-Millan et al. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion
US9877649B2 (en) Photorefraction method and product
CN114931353B (en) Convenient and fast contrast sensitivity detection system
Wu et al. High-resolution eye-tracking via digital imaging of Purkinje reflections
WO2017171655A1 (en) Vision assessment based on gaze
WO2011057161A1 (en) Methods of improving accuracy of video-based eyetrackers
WO2023196186A1 (en) Improved systems and methods for testing peripheral vision
De Groot et al. The mechanism underlying the Brückner effect studied with an automated, high-resolution, continuously scanning Brückner device
Ren et al. Research on diopter detection method based on optical imaging
Khanna et al. Perimetry-Recent Advances
JPH05127A (en) Eyeball motion inspecting instrument

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10829207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10829207

Country of ref document: EP

Kind code of ref document: A1