CN116482854A - Eye tracking using self-mixing interferometry - Google Patents

Eye tracking using self-mixing interferometry

Info

Publication number
CN116482854A
Authority
CN
China
Prior art keywords
smi
eye
sensors
processor
tracking device
Prior art date
Legal status
Pending
Application number
CN202310073547.3A
Other languages
Chinese (zh)
Inventor
陈童
A·F·西罕
M·T·温克勒
W·尼斯蒂科
E·维尔
周锡斌
Current Assignee
Apple Inc
Original Assignee
Apple Inc
Priority date
Filing date
Publication date
Application filed by Apple Inc
Publication of CN116482854A


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/0093 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements

Abstract

The present disclosure relates to "eye tracking using self-mixing interferometry". An eye tracking device includes a head-mounted frame, an optical sensor subsystem mounted to the head-mounted frame, and a processor. The optical sensor subsystem includes a set of one or more SMI sensors. The processor is configured to operate the optical sensor subsystem to cause the set of one or more SMI sensors to emit a set of one or more light beams toward an eye of a user; receive a set of one or more SMI signals from the set of one or more SMI sensors; and track movement of the eye using the set of one or more SMI signals.

Description

Eye tracking using self-mixing interferometry
The present application is a divisional of Chinese patent application No. 202211169329.1, entitled "Eye tracking using self-mixing interferometry" and filed on September 22, 2022.
Cross Reference to Related Applications
This application is a non-provisional of, and claims the benefit under 35 U.S.C. §119(e) of, U.S. Provisional Patent Application No. 63/247,188, filed on September 22, 2021, the contents of which are incorporated herein by reference.
Technical Field
The described embodiments relate generally to optical sensing and, more particularly, to tracking eye movement using optical sensors.
Background
Eye monitoring techniques may be used to improve near-eye displays (e.g., head-mounted displays (HMDs)), augmented reality (AR) systems, virtual reality (VR) systems, and the like. For example, gaze vector tracking, also known as gaze location tracking, may be used as an input for foveated display rendering or for human-computer interaction. Traditional eye monitoring techniques are camera-based or video-based and rely on active illumination of the eye, eye image acquisition, and extraction of eye features such as the pupil center and corneal glints. The power consumption, form factor, computational cost, and latency of such eye monitoring techniques can be a significant burden for more user-friendly next-generation HMD, AR, and VR systems (e.g., lighter-weight, battery-powered, and more fully featured systems).
Disclosure of Invention
Embodiments of the systems, devices, methods, and apparatus described in this disclosure utilize one or more self-mixing interferometry (SMI) sensors to track eye movement. In some embodiments, the SMI sensor may be used alone or in combination with a camera to determine gaze vector or eye position.
In a first aspect, the present disclosure describes an eye tracking device. The eye tracking device can include a head-mounted frame, an optical sensor subsystem mounted to the head-mounted frame, and a processor. The optical sensor subsystem can include a set of one or more SMI sensors. The processor can be configured to operate the optical sensor subsystem to cause the set of one or more SMI sensors to emit a set of one or more light beams toward an eye of a user; receive a set of one or more SMI signals from the set of one or more SMI sensors; and track movement of the eye using the set of one or more SMI signals.
In a second aspect, the present disclosure describes another eye tracking device. The eye tracking device can include a set of one or more SMI sensors, a camera, and a processor. The processor can be configured to cause the camera to acquire a set of images of an eye of a user at a first frequency; cause the set of one or more SMI sensors to emit a set of one or more light beams toward the eye of the user; sample a set of one or more SMI signals generated by the set of one or more SMI sensors at a second frequency that is greater than the first frequency; determine a gaze vector of the eye using at least one image of the set of images; and track movement of the eye using the set of one or more SMI signals.
In a third aspect, the present disclosure describes a method of tracking movement of an eye. The method can include operating an optical sensor subsystem such that a set of one or more SMI sensors in the optical sensor subsystem emit a set of one or more light beams toward an eye of a user; receiving a set of one or more SMI signals from the set of one or more SMI sensors; and tracking movement of the eye using the set of one or more SMI signals.
In addition to the exemplary aspects and embodiments, further aspects and embodiments will become apparent by reference to the drawings and by study of the following descriptions.
Drawings
The present disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
FIG. 1 illustrates an exemplary block diagram of an eye tracking device;
FIG. 2A shows an exemplary graph of tracked angular velocity of an eye;
FIG. 2B illustrates an exemplary diagram of a tracked gaze of an eye;
FIG. 3A illustrates a first exemplary eye tracking device wherein the optical sensing subsystem and the processor are mounted to a pair of eyeglasses;
FIG. 3B illustrates a second exemplary eye tracking device in which the optical sensing subsystem and processor are mounted to a VR headset;
FIG. 4A illustrates a side view of an exemplary set of SMI sensors that may be mounted to a head mounted frame and configured to emit light toward the eye;
FIG. 4B illustrates a front view of the eye and SMI sensor set illustrated in FIG. 4A;
FIG. 5 illustrates an exemplary front view of a first set of alternative SMI sensors that may be mounted to a head mounted frame and configured to emit light toward the eye;
FIG. 6 illustrates an exemplary front view of a second set of alternative SMI sensors that may be mounted to a head mounted frame and configured to emit light toward the eye;
FIG. 7 illustrates an exemplary side view of a third set of alternative SMI sensors that may be mounted to a head mounted frame and configured to emit light toward the eye;
FIG. 8A illustrates an exemplary use of an SMI sensor in combination with a beam splitter;
FIG. 8B illustrates an exemplary use of an SMI sensor in combination with a beam steering component;
FIG. 9A illustrates a first exemplary integration of an SMI sensor with a display subsystem;
FIG. 9B illustrates a second exemplary integration of an SMI sensor with a display subsystem;
FIG. 10 illustrates an exemplary set of components that may be included in an optical sensor subsystem of an eye tracking device;
FIG. 11A illustrates a first exemplary method for tracking eye movement using a set of one or more SMI sensors;
FIG. 11B illustrates a second exemplary method for tracking eye movement using a set of one or more SMI sensors in combination with a camera or other sensors; and
Fig. 12A and 12B illustrate how a set of one or more SMI sensors may be used to map one or more surfaces or structures of an eye.
The use of cross-hatching or shading in the accompanying figures is generally provided to clarify the boundaries between adjacent elements and to facilitate legibility of the figures. Accordingly, neither the presence nor the absence of cross-hatching or shading conveys or indicates any preference or requirement for particular materials, material properties, proportions of elements, dimensions of elements, commonalities of similarly illustrated elements, or any other characteristic, attribute, or property of any element shown in the figures.
Additionally, it should be understood that the proportions and dimensions (relative or absolute) of the various features and elements (and collections and groupings thereof), and the boundaries, separations, and positional relationships presented between them, are provided in the accompanying figures merely to facilitate an understanding of the various embodiments described herein, and thus may not necessarily be presented or illustrated to scale, and are not intended to indicate any preference or requirement for an illustrated embodiment to the exclusion of embodiments described with reference thereto.
Detailed Description
Reference will now be made in detail to the exemplary embodiments illustrated in the drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. On the contrary, it is intended to cover alternatives, modifications and equivalents as may be included within the spirit and scope of the embodiments as defined by the appended claims.
Most eye tracking systems are camera or image based and cannot track eye movements quickly enough or with high enough accuracy using reasonable amounts of power. There is a need for a lower power and lower cost system with higher accuracy and sensitivity.
Rapid and accurate detection and classification of eye movements (such as distinguishing between smooth pursuit, saccades, fixations, nystagmus, and blinks) can be mission critical, but it can also be challenging for camera-based, video-based, or photodetector-based tracking systems, especially under stringent power budgets. For example, a fixation may be as short as tens of microseconds, and eye movement may be as subtle as <0.25 degrees/second (deg/s) or <0.5 deg/s. Detecting such eye movements would ordinarily require a high-resolution, high-frame-rate image acquisition and processing system.
Previously, low-power imaging-based eye odometers have been proposed in which reduced-resolution, higher-frame-rate image capture is fused with higher-resolution, lower-frame-rate image capture for overall power savings (e.g., as compared to systems that only acquire higher-resolution images at higher frame rates). As an alternative, photodiode-array-based eye trackers with lower latency and lower power consumption have been proposed, but they have not proven useful in higher-resolution, higher-precision applications. Photodiode-array-based eye trackers are therefore better suited to binary sensing applications, such as waking a mission-critical video-based eye tracker, or waking a near-eye display (or HMD) system, in reliance on coarse gaze-region detection or a gaze-movement threshold.
The following description relates to the use of SMI sensors, alone or in combination with other sensors (such as cameras or other image-based sensors), to track eye movement. For purposes of this description, an SMI sensor is considered to include a light emitter (e.g., a laser light source) and an SMI signal detector (e.g., a light detector, such as a photodiode, or an electrical detector, such as a circuit that measures the junction voltage or drive current of the light emitter). Each of the one or more SMI sensors may emit one or more fixed or scanned beams toward one or more structures of the user's eye (e.g., toward one or more of the iris, sclera, pupil, lens, limbus, eyelid, etc. of the eye). The SMI sensor may operate in accordance with operational safety regulations so as not to harm the user's eyes.
After emitting the light beam, the SMI sensor may receive a retroreflected portion of the emitted light back into its resonant cavity. To obtain good-quality retroreflection, it can be useful to focus the beam emitted by the SMI sensor on the iris or sclera (or another diffusely scattering structure) of the eye rather than on the cornea or pupil of the eye. The retroreflected portion of the emitted light may mix with the light generated within the cavity, and the resulting phase change may generate an SMI signal that can be amplified and detected. The amplified SMI signal may be analyzed and, in some cases, multiple SMI signals may be analyzed. Rotational movement of the eye can be retrieved and reconstructed by tracking the phase or Doppler frequency of SMI signals obtained from multiple orientations and positions relative to the user's eye.
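By way of a hedged illustration (not part of the original disclosure), the velocity component of the illuminated eye structure along a beam axis can be estimated from the dominant beat frequency of a digitized SMI signal. The sample rate, emission wavelength, and simple FFT-peak estimator below are assumptions chosen for the example:

import numpy as np

def doppler_velocity(smi_samples: np.ndarray, sample_rate_hz: float,
                     wavelength_m: float = 940e-9) -> float:
    """Estimate the line-of-sight surface velocity (m/s) from one SMI burst."""
    windowed = smi_samples * np.hanning(len(smi_samples))   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    spectrum[0] = 0.0                                        # ignore the DC component
    freqs = np.fft.rfftfreq(len(smi_samples), d=1.0 / sample_rate_hz)
    f_doppler = freqs[np.argmax(spectrum)]                   # dominant beat frequency
    # For light retroreflected from a surface moving along the beam, f_D = 2*v/lambda.
    return f_doppler * wavelength_m / 2.0

A sequence of such velocity estimates, taken from beams with different orientations, is what allows the rotational movement of the eye to be reconstructed.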
Classification and quantification of user gaze behavior (such as smooth pursuit, saccades, fixations, nystagmus, and blinks) may be performed at a high sample rate using an SMI sensor, to facilitate efficient, high-fidelity rendering of digital content (numbers, text, or images) on a near-eye display (or HMD) system.
In some embodiments, the SMI sensor data or determination may be fused with absolute gaze direction sensing information (e.g., gaze vector sensing or gaze position sensing) acquired from a lower sample rate gaze imaging system (e.g., camera) to achieve absolute gaze tracking at a much higher speed and with good accuracy. The SMI sensor data may be obtained from one or several (e.g., two or three) SMI sensors, as compared to an imaging system that may include one million or more pixels. This may enable the SMI sensor to generate an SMI signal (or SMI sensor data) with much lower power consumption than imaging systems.
When the wavelength of the light emitted by an SMI sensor is modulated, the SMI signal obtained from the SMI sensor may be used for absolute ranging of surface, interface, and volume structures of the user's eye, with approximately 100 μm resolution. Such absolute distance measurements may provide anchors for tracking the displacement of the eye contour during eye rotation.
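A rough sketch of such wavelength-modulated ranging is shown below. It is an assumption-laden simplification (not from the disclosure): it counts interference fringes over one modulation ramp and applies the relation L ≈ N·λ²/(2·Δλ), whereas a practical implementation (see FIG. 10) would typically estimate the beat frequency with an FFT:

import numpy as np

def smi_range_m(ramp_samples: np.ndarray, wavelength_m: float,
                delta_lambda_m: float) -> float:
    """Estimate the target distance for one wavelength-modulation ramp."""
    ac = ramp_samples - np.mean(ramp_samples)               # remove the DC level
    crossings = np.count_nonzero(np.diff(np.signbit(ac)))   # sign changes
    fringes = crossings / 2.0                                # two crossings per fringe
    # Each fringe corresponds to one cycle of 2*L*delta_lambda/lambda^2.
    return fringes * wavelength_m**2 / (2.0 * delta_lambda_m)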
When the wavelength of the light emitted by one or more SMI sensors is modulated, and when at least one SMI sensor scans its light beam and/or emits multiple light beams, a displacement and/or velocity map (also known as a Doppler cloud) and/or a distance map (also known as a depth cloud) may be obtained or constructed. Additionally or alternatively, a Doppler cloud may be obtained or constructed when the wavelength of the light emitted by the one or more SMI sensors is not modulated. The Doppler cloud may have a natively high resolution, which may be, for example, on the μm or sub-μm level for a single-frame displacement, or as fine as the mm/s or deg/s level for speed. Single or multiple frames of the Doppler cloud may be treated as a differential depth cloud. Additionally or alternatively, the measurements in single or multiple frames of the Doppler cloud may be processed in real time to match a predefined and/or locally calibrated differential map or library, and to extract eye tracking information and/or position information (also referred to as pose information). The locally calibrated differential map or library may be obtained using, but not limited to, a camera, a depth cloud, and so on. The use of Doppler clouds, alone or in combination with depth clouds or other sensing modalities (e.g., eye camera images, motion sensors, etc.), can provide an accurate and efficient way of tracking eye movement or position information.
These and other systems, devices, methods, and apparatuses are described with reference to fig. 1-12B. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes only and should not be construed as limiting.
Directional terms such as "top", "bottom", "upper", "lower", "front", "rear", "above", "below", "left", "right", and the like are used with reference to the orientation of some of the components in some of the figures described below. Because components in various embodiments can be positioned in a number of different orientations, the directional terminology is used for purposes of illustration and is in no way limiting. The directional terminology is intended to be interpreted broadly and therefore should not be interpreted to exclude components oriented in a different manner. In addition, as used herein, the phrase "at least one of" preceding a series of items, with the term "and" or "or" separating any of the items, modifies the list as a whole rather than each member of the list. The phrase "at least one of" does not require the selection of at least one of each item listed; rather, the phrase allows for the inclusion of at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. For example, the phrases "at least one of A, B, and C" and "at least one of A, B, or C" each refer to A only, B only, or C only; any combination of A, B, and C; and/or at least one of each of A, B, and C. Similarly, it should be understood that the order of elements presented in a combined or separated list provided herein should not be construed as limiting the disclosure to only the order provided.
Fig. 1 shows an exemplary block diagram of an eye tracking device 100. The eye tracking device 100 may include a head-mounted frame 102, an optical sensor subsystem 104 mounted to the head-mounted frame 102, and a processor 106. In some embodiments, the eye tracking device 100 may also include one or more of a display subsystem 108, a communication subsystem 110 (e.g., a wireless and/or wired communication subsystem), and a power distribution subsystem 112. The processor 106, display subsystem 108, communication subsystem 110, and power distribution subsystem 112 may be partially or fully mounted to the head-mounted frame 102; may be in radio frequency (i.e., wireless) or electrical (e.g., wired) communication with one or more components mounted to the head-mounted frame 102 (e.g., housed in a box or an electronic device (such as a phone, or a wearable device such as a watch) that is in wireless or wired communication with one or more components mounted to the head-mounted frame 102); or may be distributed between the head-mounted frame 102 and such a box or electronic device. The optical sensor subsystem 104, the processor 106, the display subsystem 108, the communication subsystem 110, and/or the power distribution subsystem 112 may communicate over one or more buses 116, or over the air (e.g., wirelessly), using one or more communication protocols.
The head-mounted frame 102 may take the form of a pair of eyeglasses, a set of goggles, an augmented reality (AR) headset, a virtual reality (VR) headset, or another form of head-mounted frame.
The optical sensor subsystem 104 may include a set of one or more SMI sensors 114. Each SMI sensor of the set of one or more SMI sensors may include a light emitter and a light detector. The light emitter may include one or more of a vertical-cavity surface-emitting laser (VCSEL), an edge-emitting laser (EEL), a vertical external-cavity surface-emitting laser (VECSEL), a quantum dot laser (QDL), a quantum cascade laser (QCL), or a light-emitting diode (LED) (e.g., an organic LED (OLED), a resonant-cavity LED (RC-LED), a micro LED (mLED), a superluminescent LED (SLED), or an edge-emitting LED), and so on. In some cases, the light detector (or photodetector) may be positioned laterally adjacent to the light emitter (e.g., mounted or formed on a substrate on which the light emitter is mounted or formed). In other cases, the light detector may be stacked above or below the light emitter. For example, the light emitter may be a VCSEL, HCSEL, or EEL having primary and secondary emissions, and the light detector may be epitaxially formed in the same epitaxial stack as the light emitter such that the light detector receives some or all of the secondary emission. In these latter embodiments, the light emitter and the light detector may be similarly formed (e.g., both the light emitter and the light detector may comprise a multiple quantum well (MQW) structure, but the light emitter may be forward biased and the light detector (e.g., a resonant-cavity photodetector (RCPD)) may be reverse biased). Alternatively, the light detector may be formed on a substrate, and the light emitter may be formed separately and mounted to the substrate, or otherwise positioned with respect to the light detector, such that the secondary light emission of the light emitter impinges on the light detector.
In some embodiments, the optical sensor subsystem 104 may include a set of fixed or movable optical components (e.g., one or more lenses, gratings, filters, beam splitters, beam steering components, etc.). The optical sensor subsystem 104 may also include an image sensor (e.g., a camera including an image sensor).
The processor 106 may include any electronic device capable of processing, receiving, or transmitting data or instructions, whether such data or instructions are in the form of software or firmware or otherwise encoded. For example, the processor 106 may include a microprocessor, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a controller, or a combination of such devices. As described herein, the term "processor" is intended to encompass a single processor or processing unit, multiple processors, multiple processing units, or other suitably configured computing element.
In some embodiments, components of the eye tracking device 100 may be controlled by multiple processors. For example, select components of the eye tracking device 100 (e.g., the optical sensor subsystem 104) may be controlled by a first processor, and other components of the eye tracking device 100 (e.g., the display subsystem 108 and/or the communication subsystem 110) may be controlled by a second processor, where the first processor and the second processor may or may not be in communication with each other.
In some embodiments, display subsystem 108 may include a display having one or more light-emitting elements including, for example, LEDs, OLEDs, liquid crystal displays (LCDs), electroluminescent (EL) displays, or other types of display elements.
The communication subsystem 110 may enable the eye tracking device 100 to transmit data to, or receive data from, a user or another electronic device. The communication subsystem 110 may include a touch-sensitive input surface, a crown, one or more microphones or speakers, or a wired or wireless (e.g., radio frequency (RF) or optical) communication interface configured to transmit electronic, RF, or optical signals. Examples of wireless and wired communication interfaces include, but are not limited to, cellular, Wi-Fi, and Bluetooth communication interfaces.
The power distribution subsystem 112 may be implemented with any collection of power sources and/or conductors capable of delivering energy to the eye tracking device 100 or components thereof. In some cases, the power distribution subsystem 112 may include one or more batteries or rechargeable batteries. Additionally or alternatively, the power distribution subsystem 112 may include a power connector or power cord that may be used to connect the eye tracking device 100 or components thereof to a remote power source, such as a wall outlet, remote battery pack, or electronic device to which the eye tracking device 100 is tethered.
The processor 106 may be configured to operate the optical sensor subsystem 104. Operating the optical sensor subsystem 104 may include causing the power distribution subsystem 112 to power the optical sensor subsystem 104, providing control signals to the set of one or more SMI sensors 114, and/or providing control signals that electrically, electromechanically, or otherwise focus or adjust optical components of the optical sensor subsystem 104. Operating the optical sensor subsystem 104 may cause the set of one or more SMI sensors 114 to emit a set of one or more light beams 118 toward a user's eye 120. The processor 106 may also be configured to receive a set of one or more SMI signals from the set of one or more SMI sensors 114 and track rotational movement of the eye 120 using the set of one or more SMI signals.
In some cases, optical sensor subsystem 104, processor 106, display subsystem 108, communication subsystem 110, and/or power distribution subsystem 112 may communicate via one or more buses, which are generally depicted as bus 116.
In some cases, tracking rotational movement of the eye 120 may include estimating an angular velocity (or gaze movement) of the eye 120. In some cases, the angular velocity may be tracked in each of three orthogonal directions (e.g., in the x, y, and z directions). Tracking angular velocity in different directions may require scanning one or more light beams emitted by the set of one or more SMI sensors 114, splitting one or more light beams emitted by one or more SMI sensors of the set of one or more SMI sensors 114, or configuring the set of one or more SMI sensors 114 to emit two or more light beams in different directions (and preferably in different orthogonal directions).
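The need for beams in different (ideally orthogonal) directions can be made concrete with a small, hypothetical example that is not part of the disclosure: each SMI beam measures only the velocity component of the illuminated surface point along its own axis, so recovering a two- or three-dimensional velocity vector amounts to solving a small least-squares problem over the beam directions:

import numpy as np

def reconstruct_velocity(beam_dirs: np.ndarray, radial_speeds: np.ndarray) -> np.ndarray:
    """beam_dirs: (N, 3) unit vectors; radial_speeds: (N,) line-of-sight speeds (m/s)."""
    velocity, *_ = np.linalg.lstsq(beam_dirs, radial_speeds, rcond=None)
    return velocity  # (3,) velocity of the illuminated surface point

# Example with three roughly orthogonal beams (values are made up):
dirs = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
print(reconstruct_velocity(dirs, np.array([0.002, -0.001, 0.0005])))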
Tracking rotational movement of the eye 120 may also include tracking gaze (or gaze position) of the eye 120.
In some embodiments, the processor 106 may be configured to classify the user's rotational eye movements. For example, a rotational eye movement may be classified as at least one of smooth pursuit, a saccade, a fixation, nystagmus, or a blink. The processor 106 may then cause the display subsystem 108 to adjust the rendering of one or more images on the display in response to the classification of the rotational eye movement. In some embodiments, the processor 106 may be further configured to quantify the classified rotational eye movement (e.g., how often or how quickly the user blinks, or how much or how quickly the angular velocity of the user's eye 120 changes), and may further adjust the rendering of the one or more images in response to the quantified rotational eye movement.
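One minimal way such a classification might be implemented, assuming illustrative thresholds that are not taken from the disclosure, is a simple rule over the tracked angular speed plus a signal-loss flag for blinks:

def classify_eye_movement(angular_speed_dps: float, signal_lost: bool) -> str:
    """Coarse classification from angular speed (deg/s); thresholds are assumptions."""
    if signal_lost:
        return "blink"            # the eyelid interrupts the retroreflected light
    if angular_speed_dps < 0.5:
        return "fixation"
    if angular_speed_dps < 30.0:
        return "smooth pursuit"
    return "saccade"              # nystagmus would additionally require detecting
                                  # oscillatory velocity reversals over time

In practice, classification would likely also consider the time history of the velocity (e.g., for nystagmus), but the thresholding idea is the same.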
In some embodiments, the processor 106 may be configured to cause the display subsystem 108 to change the state of the display in response to movement of the eye 120. Changing the state of the display may include changing the displayed content, but may also or alternatively include transitioning the display from a low power or off state to a higher power or on state, or alternatively, transitioning the display from a higher power or on state to a low power or off state.
In some embodiments, the display may include an array of display pixels mounted on a substrate, and the set of one or more SMI sensors 114 may include at least one SMI sensor mounted on the substrate, adjacent to the array of display pixels, or within the array of display pixels. In other embodiments, SMI sensor 114 may be provided separate from the display.
Fig. 2A shows an exemplary graph 200 of tracked angular velocity (or gaze movement) of an eye. For example, graph 200 shows tracking angular velocity in only two orthogonal directions. When the outer surface of the eye or another structure is modeled as a two-dimensional object, the angular velocity can only be tracked in two dimensions. Alternatively, the angular velocity may be tracked more accurately in three dimensions.
Fig. 2B shows an exemplary graph 210 of tracked gaze (or gaze vector or gaze location) of an eye. For example, graph 210 shows tracking gaze in only two orthogonal directions. When the outer surface of the eye or another structure is modeled as a two-dimensional object, gaze can only be tracked in two dimensions. Alternatively, gaze may be tracked more accurately in three dimensions.
SMI-based sensing may be particularly useful when a device or application needs to detect and/or classify subtle eye movements (such as smooth pursuit, saccades, fixations, nystagmus, and blinks), because all of these eye movements may be detected and classified by tracking the angular velocity of the eye, and SMI-based sensing is typically faster, more accurate, and more energy efficient (e.g., consumes less power) than video-based or photodetector-based sensing when detecting changes in the angular velocity of the eye. Further, with initial and periodic (low-frequency) calibration of the gaze location, high-frequency gaze location (vector) tracking as shown in FIG. 2B may be obtained by integrating the gaze velocity (vector) tracking shown in FIG. 2A.
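A minimal sketch of this calibrate-and-integrate idea follows. It assumes an SMI-derived angular velocity stream and an occasional absolute gaze fix (e.g., from a camera); the names and units are illustrative only:

import numpy as np

class GazeIntegrator:
    """Dead-reckon gaze from high-rate angular velocity, re-anchored by absolute fixes."""

    def __init__(self):
        self.gaze_deg = np.zeros(2)                        # (horizontal, vertical) angles

    def on_absolute_gaze(self, gaze_deg: np.ndarray) -> None:
        self.gaze_deg = gaze_deg.astype(float)             # low-frequency calibration

    def on_smi_velocity(self, angular_velocity_dps: np.ndarray, dt_s: float) -> np.ndarray:
        self.gaze_deg = self.gaze_deg + angular_velocity_dps * dt_s
        return self.gaze_deg                               # high-frequency gaze estimate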
Fig. 3A shows a first exemplary eye tracking device, wherein an optical sensing subsystem 302 and a processor 304 are mounted to a pair of eyeglasses 300. For example, the eyeglass 300 (e.g., one type of head-mounted frame) is shown to include a first eyeglass frame 308 and a second eyeglass frame 310, a nose piece 312 connecting the first eyeglass frame 308 to the second eyeglass frame 310, a first temple 314 connected to the first eyeglass frame 308, and a second temple 316 connected to the second eyeglass frame 310. In some embodiments, glasses 300 may include a heads-up display or function as AR glasses.
Each of the first and second eyeglass frames 308, 310 may hold a respective lens, such as the first lens 318 or the second lens 320. The lenses 318, 320 may or may not magnify, focus, or otherwise alter the light passing through them. For example, the lenses 318, 320 may correct the user's vision, block bright or harmful light, or simply provide a physical barrier through which light passes with minimal or no adjustment. In some embodiments, the first lens 318 and the second lens 320 can be formed of glass or plastic. In some embodiments, the first lens 318 and/or the second lens 320 can function as a display (e.g., a passive display screen) onto which text, numbers, and/or images are projected by a display subsystem 322, which display subsystem 322 can also be mounted to the pair of eyeglasses. Alternatively, the first eyeglass frame 308 and/or the second eyeglass frame 310 may hold a transparent or translucent display (e.g., light-emitting diodes (LEDs), organic LEDs (OLEDs), or other light-emitting elements) that may be operated by the display subsystem 322 to display text, numbers, and/or images.
As another example, the optical sensing subsystem 302 may be configured as described with reference to one or more of fig. 1 and 5-9B, and/or the processor 304 may be configured to operate as described with reference to fig. 1 and 10-12B. One or more components of the optical sensing subsystem 302 (e.g., SMI sensors, optical components, cameras, etc.) may be mounted on one or more substrates attached to the first frame 308, the second frame 310, the nose piece 312, the first temple 314, the second temple 316, the first lens 318, or the second lens 320, or may be mounted directly to one or more of these components. Similarly, the processor 304, the display subsystem 322, the communication subsystem 324, and/or the power distribution subsystem 326 may be mounted to one or more of these components. In some embodiments, some or all of the optical sensing subsystem 302, the processor 304, the display subsystem 322, the communication subsystem 324, and/or the power distribution subsystem 326 may be mounted within one or more components of the eyeglass 300, within a device that is wirelessly or electrically connected to one or more components mounted to the eyeglass 300 (e.g., in a user's phone or a wearable device), or distributed between the eyeglass 300 and a device that is wirelessly or electrically connected to one or more components of the eyeglass 300.
Processor 304, display subsystem 322, communication subsystem 324, and power distribution subsystem 326 may be further configured or operated as described with reference to fig. 1.
Fig. 3B illustrates a second exemplary eye tracking device, wherein an optical sensing subsystem 352 and a processor 354 are mounted to a virtual reality (VR) headset 350. For example, the VR headset (one type of head-mounted frame) is shown to include a VR module 358 that can be attached to the head of a user by a strap 356.
VR module 358 may include a display subsystem 360. Display subsystem 360 may include a display for displaying text, numbers, and/or images.
As an example, the optical sensing subsystem 352 may be configured as described with reference to one or more of fig. 1 and 5-9B, and/or the processor 354 may be configured to operate as described with reference to fig. 1 and 10-12B. One or more components of the optical sensing subsystem 352 (e.g., SMI sensors, optical components, cameras, etc.) may be mounted on one or more substrates attached to the VR module 358 or may be mounted directly to the housing of the VR module 358. Similarly, the processor 354, the display subsystem 360, the communication subsystem 362, and/or the power distribution subsystem 364 may be mounted to the VR module 358. In some embodiments, some or all of the optical sensing subsystem 352, the processor 354, the display subsystem 360, the communication subsystem 362, and/or the power distribution subsystem 364 may be mounted within a device that is wirelessly or electrically connected to the VR module 358 (e.g., in a user's phone or in a wearable device), or distributed between the VR module 358 and a device that is wirelessly or electrically connected to the VR module 358.
Processor 354, display subsystem 360, communication subsystem 362, and power distribution subsystem 364 may be further configured or operated as described with reference to fig. 1.
Fig. 4A and 4B illustrate an exemplary set of SMI sensors 400, 402 that may be mounted to a head-mounted frame, such as one of the head-mounted frames described with reference to fig. 1-3B. The set of SMI sensors 400, 402 may form part of an optical sensor subsystem, such as one of the optical sensor subsystems described with reference to FIGS. 1-3B or elsewhere herein. The set of SMI sensors 400, 402 may be configured to emit light toward the eye 404. Fig. 4A shows a side view of an eye 404 and the set of SMI sensors 400, 402, and fig. 4B shows a front view of the eye 404 and the set of SMI sensors 400, 402.
For example, the set of SMI sensors 400, 402 is shown to include two SMI sensors in FIGS. 4A and 4B (e.g., first SMI sensor 400 and second SMI sensor 402). In other embodiments, the set of SMI sensors 400, 402 may include more or fewer SMI sensors. First SMI sensor 400 and second SMI sensor 402 may emit respective beams of light toward eye 404. In some embodiments, the light beams may be oriented in different directions, which may or may not be orthogonal directions. When the light beams are directed in different directions, the processor receiving the SMI signals generated by SMI sensors 400, 402 may track the movement of eye 404 in two dimensions (e.g., the processor may track the angular velocity of eye 404 in two dimensions). The light beams may strike the eye 404 at the same or different locations. For example, the light beam is shown striking the eye 404 at the same location.
In some embodiments, the light emitted by the SMI sensors 400, 402 may be directed or filtered by optics 406 or 408 (e.g., one or more lenses or beam steering components or other optical components).
Fig. 5 illustrates a front view of a set of alternative SMI sensors 500, 502, 504 that may be mounted to a head-mounted frame, such as one of the head-mounted frames described with reference to fig. 1-3B. The set of SMI sensors 500, 502, 504 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to FIGS. 1-3B or elsewhere herein. The set of SMI sensors 500, 502, 504 may be configured to emit light toward the eye 506.
In contrast to the set of SMI sensors described with reference to FIGS. 4A and 4B, the set of SMI sensors 500, 502, 504 shown in FIG. 5 includes three SMI sensors (e.g., first SMI sensor 500, second SMI sensor 502, and third SMI sensor 504). First SMI sensor 500, second SMI sensor 502, and third SMI sensor 504 may emit respective beams of light toward eye 506. In some embodiments, the light beams may be oriented in different directions, which may or may not be orthogonal directions. The processor receiving the SMI signals generated by the SMI sensors 500, 502, 504 may track movement of the eye 506 in three dimensions (e.g., the processor may track the angular velocity of the eye 506 in three dimensions) as the light beams are directed in different directions. The light beams may strike the eye 506 at the same or different locations. For example, the light beam is shown striking the eye 506 at the same location.
Fig. 6 illustrates an exemplary front view of a second set of alternative SMI sensors 600, 602, 604 that may be mounted to a head-mounted frame, such as one of the head-mounted frames described with reference to fig. 1-3B. The set of SMI sensors 600, 602, 604 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to FIGS. 1-3B or elsewhere herein. The set of SMI sensors 600, 602, 604 may be configured to emit light toward the eye.
The set of SMI sensors 600, 602, 604 shown in FIG. 6 includes three SMI sensors (e.g., a first SMI sensor 600, a second SMI sensor 602, and a third SMI sensor 604). First SMI sensor 600, second SMI sensor 602, and third SMI sensor 604 may emit respective beams of light toward eye 606. In some embodiments, the light beams may be oriented in different directions, which may or may not be orthogonal directions. The processor receiving the SMI signals generated by SMI sensors 600, 602, 604 may track movement of eye 606 in three dimensions (e.g., the processor may track the angular velocity of eye 606 in three dimensions) as the light beams are directed in different directions. In contrast to the set of SMI sensors described with reference to FIG. 5, two of the light beams impinge the eye 606 at the same location (e.g., at a first location), and one of the light beams impinges the eye 606 at a different location (e.g., at a second location different from the first location). Alternatively, all of the light beams may be directed to impinge the eye 606 at the same location, or all of the light beams may be directed to impinge the eye 606 at different locations.
The optical sensor subsystem including the set of SMI sensors 600, 602, 604 may also include a camera 608. Similar to the set of SMI sensors 600, 602, 604, the camera 608 may be mounted to a head-mounted frame. The camera 608 may be positioned and/or oriented to acquire images of the eye 606. An image may capture a portion or all of the eye 606. A processor configured to operate the optical sensor subsystem may be configured to track rotational movement of the eye 606 using images acquired by the camera 608 and SMI signals generated by the set of SMI sensors 600, 602, 604. For example, the processor may acquire a set of images of the eye at a first frequency using the camera 608. The processor may also sample the set of one or more SMI signals at a second frequency that is greater than the first frequency. The images and SMI signal samples may be acquired/sampled in a time-overlapping manner, or in parallel at different times. In some cases, the processor may acquire one or more images; analyze the image(s) to determine a position of the eye 606 relative to the SMI sensors 600, 602, 604 and/or the light beams emitted by the SMI sensors 600, 602, 604; and adjust the optical sensor subsystem as necessary to ensure that the light beams impinge on the desired structures of the eye 606. Adjusting the optical sensor subsystem may include, for example, one or more of the following: adjusting a beam steering component to steer a light beam, addressing and causing a particular subset of the SMI sensors 600, 602, 604 to emit light, and so on (see, e.g., the descriptions of fig. 7 and 8B). Alternatively (e.g., as an alternative to relying on the camera 608), the SMI sensors 600, 602, 604 may be used to perform range measurements as the user moves their eye 606, and the range measurements may be mapped to an eye model to determine whether the SMI sensors 600, 602, 604 are focused on the desired eye structure. The eye model may be a generic eye model, or an eye model generated for a particular user when the optical sensor subsystem is operated in a training and eye model generation mode.
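The range-to-eye-model check mentioned above might look something like the following sketch, which assumes a simple spherical eye model and made-up geometry and tolerances (the disclosure does not specify this math):

import numpy as np

def beam_hits_expected_structure(sensor_pos: np.ndarray, beam_dir: np.ndarray,
                                 eye_center: np.ndarray, eye_radius_m: float,
                                 measured_range_m: float, tol_m: float = 5e-4) -> bool:
    """Compare a measured SMI range with the range predicted by a spherical eye model."""
    rel = eye_center - sensor_pos
    along = np.dot(rel, beam_dir)                           # distance along the beam axis
    perp_sq = np.dot(rel, rel) - along**2                   # squared miss distance
    if perp_sq > eye_radius_m**2:
        return False                                        # beam misses the model entirely
    expected = along - np.sqrt(eye_radius_m**2 - perp_sq)   # first surface intersection
    return abs(measured_range_m - expected) < tol_m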
The optical sensor subsystem including the set of SMI sensors 600, 602, 604 may also include one or more light emitters 610, 612 capable of illuminating the eye 606 for capturing an image of the eye 606 (or for other purposes). The light emitters 610, 612 may take the form of LEDs, lasers, or other light emitting elements of a display, etc. The light emitters 610, 612 may emit visible light or non-visible light (e.g., IR light), depending on the type of light the camera 608 is configured to sense. The light emitters 610, 612 may be used to provide flood, scanning, or spot illumination.
In some embodiments, the processor may use SMI signals generated by a set of SMI sensors 600, 602, 604 to determine or track gaze vectors of eye 606. In some embodiments, the processor may determine or track the gaze vector using one or more images acquired with the camera 608. In some embodiments, the processor may track the gaze vector using one or more images acquired with camera 608 in combination with the SMI signal. For example, one or more images may be used to determine a gaze vector, and then the SMI signal may be used to track movement of the eye 606 and update the gaze vector (or in other words, determine movement of the gaze vector).
Fig. 7 illustrates an exemplary side view of a third set of alternative SMI sensors 700, 702, 704 that may be mounted to a head-mounted frame, such as one of the head-mounted frames described with reference to fig. 1-3B. The set of SMI sensors 700, 702, 704 may form part of an optical sensor subsystem, such as one of the optical subsystems described with reference to FIGS. 1-3B or elsewhere herein. The set of SMI sensors 700, 702, 704 may be configured to emit light toward the eye 706.
The set of SMI sensors 700, 702, 704 shown in FIG. 7 includes three SMI sensors (e.g., a first SMI sensor 700, a second SMI sensor 702, and a third SMI sensor 704). First, second, and third SMI sensors 700, 702, 704 may emit respective light beams toward eye 706, similar to how the light beams may be emitted by the set of SMI sensors described with reference to fig. 4A and 4B (or fig. 5 or 6). However, not all SMI sensors 700, 702, 704 may emit light beams at the same time. For example, second SMI sensor 702 and third SMI sensor 704 may be part of an addressable array of SMI sensors, which may include more than two SMI sensors in some cases. The SMI sensor array may be coupled to circuitry that may be used to address different SMI sensors or different SMI sensor subsets in the SMI sensor array.
In some cases, the light beams emitted by second SMI sensor 702 and third SMI sensor 704 (and in some cases, the light beams emitted by other SMI sensors in the SMI sensor array) may be directed to a shared mirror, a mirror group, a mirror array, or one or more other optical components 708.
In some cases, the optical sensor subsystem including the set of SMI sensors may also include a camera 710, which camera 710 may be used similarly to the camera described with reference to FIG. 6.
In some cases, the processor may selectively operate (e.g., activate and deactivate) different SMI sensors 702, 704 in the SMI sensor array (or different SMI sensor subsets) for different purposes. For example, the processor may operate (or use) circuitry to activate a particular SMI sensor or a particular subset of SMI sensors having a particular focus or focuses. In some cases, the processor may cause the camera 710 to acquire one or more images of the eye 706. The processor may then analyze the image to determine a gaze vector of the eye 706, and may activate one or more SMI sensors 702, 704 (in an SMI sensor array) focused on a particular structure or region (or structures or regions) of the eye 706. The processor may also activate other SMI sensors, such as SMI sensor 700.
Fig. 8A illustrates an exemplary use of an SMI sensor 800 in combination with a beam splitter 802. The SMI sensor 800 may be any of the SMI sensors described with reference to fig. 1-7, or any of the SMI sensors described below. Beam splitter 802 may be positioned to split a beam 804 emitted by the SMI sensor. The light beam may be split into a plurality of light beams 806, 808, 810 (e.g., two, three, or more light beams). In some cases, the beam splitter 802 may be associated with one or more lenses or beam steering components that redirect or steer the plurality of light beams 806, 808, 810. In some cases, the plurality of light beams 806, 808, 810 may be redirected toward a shared focal point on (or in) the eye.
FIG. 8B illustrates an exemplary use of SMI sensor 850 in combination with a beam steering component 852. The SMI sensor 850 may be any of the SMI sensors described with reference to fig. 1-7, or any of the SMI sensors described below. The beam steering component 852 may be positioned to steer the beam 854 emitted by the SMI sensor 850. The processor may be configured to operate the beam steering component 852 and steer the beam 854 to different structures or regions of the eye. In some embodiments, the beam steering component 852 may include a beam focusing component or lens positioning mechanism that may be adjusted to change the focus of the beam along its axis. In some embodiments, the beam steering component 852 may be replaced with a beam focusing component.
Fig. 9A illustrates a first exemplary integration of SMI sensor 900 with a display subsystem. The SMI sensor 900 may be any of the SMI sensors described with reference to FIGS. 1-7, or any of the SMI sensors described below. In some examples, the display subsystem may be the display subsystem described with reference to fig. 1, 3A, or 3B.
The display subsystem may include an array of display pixels 902, 904, 906 mounted or formed on a substrate 908. For example, the display subsystem is shown as including blue pixels 902, green pixels 904, and red pixels 906, but the display subsystem may include multiple instances of each blue pixel, green pixel, and red pixel. In some cases, the display pixels 902, 904, 906 may include LEDs or other types of light emitting elements.
The SMI sensor 900 may be mounted on a substrate 908. For example, SMI sensor 900 is shown mounted on a substrate 908 adjacent to an array of display pixels 902, 904, 906. Alternatively, the SMI sensor 900 may be mounted on the substrate 908 within an array of display pixels 902, 904, 906 (i.e., between display pixels). In some embodiments, more than one SMI sensor 900 may be mounted on the substrate 908, wherein each SMI sensor 900 is positioned adjacent to or within an array of display pixels 902, 904, 906. In some embodiments, SMI sensor 900 may emit IR light. In other embodiments, SMI sensor 900 may emit visible light, ultraviolet light, or other types of light. In some embodiments, one or more of the display pixels 902, 904, 906 may operate as an SMI sensor.
The shared waveguide 910 may be positioned to direct light beams emitted by the display pixels 902, 904, 906 and the SMI sensor 900 to a beam steering component 912, such as a set of one or more mirrors movable by a microelectromechanical system (MEMS). In alternative embodiments, a set of waveguides (e.g., a set of optical fibers) may be used to direct light emitted by display pixels 902, 904, 906 and SMI sensor 900 to beam steering component 912. The processor may operate the display pixels 902, 904, 906 and the beam steering component 912 to render text, numbers, or images on a display. In some cases, the display may include one or more lenses of a pair of eyeglasses, or a display within a VR headset.
Shared waveguide 910 (or a set of waveguides) may receive a return portion of the light emitted by SMI sensor 900, such as a portion of the light reflected or scattered from the eye, and direct the return portion of the emitted light toward and into the cavity of SMI sensor 900.
Fig. 9B illustrates a second exemplary integration of SMI sensor 950 with a display subsystem. The SMI sensor 950 may be any of the SMI sensors described with reference to FIGS. 1-7, or any of the SMI sensors described below. In some examples, the display subsystem may be the display subsystem described with reference to fig. 1, 3A, or 3B.
The display subsystem may include an array of display pixels 952, 954, 956 and an SMI sensor 950 mounted or formed on a substrate 958. Display pixels 952, 954, 956 and SMI sensor 950 may be configured as described with reference to fig. 9A.
A shared waveguide 960 (or a set of waveguides) may be positioned to direct the light beams emitted by the display pixels 952, 954, 956 and the SMI sensor 950 to a further shared waveguide 962, or a distal portion of the shared waveguide 960 (or distal portions of the set of waveguides) may be bent, and light may be emitted from an out-coupling of the further shared waveguide 962, or from an out-coupling of the shared waveguide 960 (or the set of waveguides). The processor may operate the display pixels 952, 954, 956 to project text, numbers, or images on a display.
A return portion of the light emitted by the SMI sensor 950, such as a portion of the light reflected or scattered from the eye, may pass through the waveguides 962, 960 and be redirected toward and into the cavity of the SMI sensor 950.
Fig. 10 illustrates an exemplary set of components 1000 that may be included in an optical sensor subsystem of an eye tracking device. The set of components 1000 is typically divided between a subset of optical or optoelectronic components 1002, a subset of analog components 1004, a subset of digital components 1006, and a subset of system components 1008 (e.g., processors, and in some cases, other control components).
The subset of optical or optoelectronic components 1002 can include a laser diode 1010 or another optical emitter having a resonant cavity. Component 1002 can also include a photodetector 1012 (e.g., a photodiode). The photodetector 1012 may be integrated into the same epitaxial stack as the laser diode 1010 (e.g., above, below, or adjacent to the laser diode 1010), or may be formed as a separate component stacked with or positioned adjacent to the laser diode 1010. Alternatively, the photodetector 1012 may be replaced or supplemented with a circuit that measures the junction voltage or drive current of the laser diode 1010 and electronically generates an SMI signal (i.e., without a photosensitive element). The combination of the laser diode 1010 and the photodetector 1012 or alternative circuitry for generating the SMI signal may be referred to as an SMI sensor.
The subset of optical or optoelectronic components 1002 can also include module level optics 1014 integrated with the laser diode 1010 and/or the photodetector 1012, and/or system level optics 1016. The module level optics 1014 and/or the system level optics 1016 may include, for example, lenses, beam splitters, beam steering components, and the like. The module level optics 1014 and/or the system level optics 1016 may determine where the emitted light and the return light (e.g., light scattered from the eye) are directed.
A subset of the analog components 1004 may include a digital-to-analog converter (DAC) 1018 and a current regulator 1020 for converting the drive current into the analog domain and providing it to the laser diode 1010. The component 1004 may also include a component 1022 for ensuring that the laser diode 1010 operates in accordance with operational safety specifications. Component 1004 may also include a transimpedance amplifier (TIA) and/or other amplifier 1024 for amplifying the SMI signal generated by photodetector 1012, and an analog-to-digital converter (ADC) for converting the amplified SMI signal to the digital domain. Component 1004 may also include a component for correcting the SMI signal as it is amplified or otherwise processed.
In some cases, a subset of analog components 1004 may interface with (e.g., be multiplexed with) more than one subset of optical or optoelectronic components 1002. For example, component 1004 can interface with two, three, or more SMI sensors.
The subset of digital components 1006 may include a scheduler 1026 for scheduling (e.g., associating) the supply of drive current to the laser diode 1010 with the supply of digitized photocurrent obtained from the photodetector 1012 to the system component 1008. Component 1006 may also include a drive current waveform generator 1028 that provides a digital drive current to DAC 1018. Component 1006 may further include a digital processing chain for processing the amplified and digitized output of photodetector 1012. The digital processing chain may include, for example, a time domain signal preprocessing circuit 1030, a Fast Fourier Transform (FFT) engine 1032, a frequency domain signal preprocessing circuit 1034, and a distance and/or speed estimator 1036. In some cases, some or all of components 1006 may be instantiated by a set of one or more processors.
A subset of the system components 1008 may include, for example, a system level scheduler 1038, which scheduler 1038 may schedule when an SMI sensor (or SMI sensor) or other components are used to track the position (e.g., gaze vector) or movement (e.g., angular velocity) of the eye. The component 1008 may also include other sensors, such as a camera 1040 or an Inertial Measurement Unit (IMU) 1042. In some cases, the component 1008 (or a processor thereof) may track the position or movement of the eye using one or more SMI signals acquired from one or more subsets of the component 1002, one or more images acquired from the camera 1040, and/or other measurements (e.g., measurements acquired by the IMU 1042). Component 1008 can also include a sensor fusion application 1044 and various other applications 1046.
Various types of drive currents may be used to drive the laser diode 1010. For example, the laser diode 1010 may be driven with a DC current (e.g., in a DC drive mode) for the purpose of performing Doppler analysis on the SMI signal generated by the photodetector 1012. Alternatively, when ranging is performed, the laser diode 1010 may be driven in a frequency-modulated continuous waveform (FMCW) mode (e.g., with a triangle wave drive current). Alternatively, when determining a relative displacement of the eye, the laser diode 1010 may be driven in a harmonic drive mode (e.g., with an IQ-modulated drive current).
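For illustration, the three drive regimes named above could be generated as follows; the bias, amplitude, and cycle counts are arbitrary assumptions rather than values from the disclosure:

import numpy as np

def drive_waveform(mode: str, n_samples: int, bias_ma: float = 2.0,
                   amp_ma: float = 0.5, cycles: int = 4) -> np.ndarray:
    """Return a drive-current waveform (mA) for one measurement frame."""
    t = np.arange(n_samples) / n_samples
    if mode == "dc":          # constant current: Doppler (velocity) measurements
        return np.full(n_samples, bias_ma)
    if mode == "fmcw":        # triangular sweep: absolute ranging
        tri = 2.0 * np.abs(2.0 * ((t * cycles) % 1.0) - 1.0) - 1.0
        return bias_ma + amp_ma * tri
    if mode == "harmonic":    # sinusoidal modulation: IQ/harmonic displacement sensing
        return bias_ma + amp_ma * np.sin(2.0 * np.pi * cycles * t)
    raise ValueError(f"unknown drive mode: {mode}")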
Fig. 11A illustrates a first exemplary method 1100 for tracking eye movement using a set of one or more SMI sensors 1104. Method 1100 includes operating optical sensor subsystem 1102 such that the set of one or more SMI sensors 1104 emit a set of one or more light beams toward the eyes of a user. The optical sensor subsystem 1102 and the SMI sensor 1104 may be configured similar to any of the optical sensor subsystems and SMI sensors described herein.
At 1106, method 1100 may include receiving a set of one or more SMI signals from the set of one or more SMI sensors 1104.
At 1108, method 1100 may include tracking rotational movement of the eye using the set of one or more SMI signals. Operations at 1108 may include estimating (at 1110) the linear and angular velocities of the eye using, for example, the SMI signals and Doppler interferometry. Operations at 1108 may also include estimating (at 1112) a range (or distance) to the eye, or estimating (at 1114) a surface quality (e.g., surface texture) of the eye. The estimated range or surface quality may be used not only to estimate the rotational movement of the eye, but also to determine (at 1116) the position of the eye, or the structure of the eye on which the set of one or more SMI sensors 1104 is focused.
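For reference, the relation between a measured Doppler shift and surface velocity, and one way several beams could be combined into an angular-velocity estimate, can be sketched as follows. This is a simplified geometric model under stated assumptions (known spot positions relative to the eye's rotation center and known beam directions); the function names, inputs, and least-squares formulation are illustrative, not the patent's estimator.

```python
import numpy as np

def surface_velocity_from_doppler(f_doppler_hz, wavelength_m=940e-9):
    """Line-of-sight surface velocity from a Doppler shift: v = f_D * lambda / 2."""
    return f_doppler_hz * wavelength_m / 2.0

def angular_velocity_from_beams(doppler_hz, spot_positions, beam_dirs,
                                wavelength_m=940e-9):
    """Least-squares estimate of the eye's angular velocity from several beams.

    Each beam i measures the projection of the surface velocity
    v_i = omega x r_i onto its unit direction d_i, i.e.
    f_D,i * lambda / 2 = omega . (r_i x d_i), one linear equation per beam.

    doppler_hz: (N,) Doppler shifts, one per beam
    spot_positions: (N, 3) illuminated-spot positions relative to the eye's
                    rotation center, in meters (assumed known)
    beam_dirs: (N, 3) unit vectors along each beam's line of sight
    """
    r = np.asarray(spot_positions, dtype=float)
    d = np.asarray(beam_dirs, dtype=float)
    v_los = np.asarray(doppler_hz, dtype=float) * wavelength_m / 2.0
    rows = np.cross(r, d)                       # row i = r_i x d_i
    omega, *_ = np.linalg.lstsq(rows, v_los, rcond=None)
    return omega                                # angular velocity in rad/s
```

At least three well-conditioned beams are needed for a full three-dimensional estimate, which is consistent with the sensor counts recited later in the claims.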
At 1118, the method 1100 may include using the output of operation 1108 to determine gaze movement.
At 1120, the method 1100 may include using the output of operation 1108 to identify a gaze wake event (e.g., the user opening their eyes, or the user looking in a particular direction, or the user performing a particular series of eye movements). In some cases, the operation or other operations at 1120 may include identifying a gaze sleep event (e.g., the user closing their eyes, the user looking in a particular direction, or the user performing a particular series of eye movements), a blink event, or other event.
At 1122, the method 1100 may include performing an operation (e.g., powering a head mounted display or other device on or off, answering a call, activating an application, adjusting volume, etc.) in response to the identified particular type of event.
At 1124, the method 1100 may include performing Doppler odometry to determine a change in position of the eye gaze vector. In some cases, the Doppler odometry may be performed using an Extended Kalman Filter (EKF).
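As one possible illustration of such a filter, the sketch below propagates a gaze-angle state with a constant-angular-velocity model and updates it with SMI-derived angular-rate measurements. Because the measurement model here is linear, this is the linear special case of an EKF; the state layout and all noise values are assumptions made for the example.

```python
import numpy as np

class GazeKalmanFilter:
    """Minimal constant-angular-velocity filter for a 2D gaze direction.

    State: [theta_x, theta_y, omega_x, omega_y] (gaze angles and angular rates).
    The SMI-derived angular velocity is the measurement; noise values are
    illustrative assumptions only.
    """
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.Q = q * np.eye(4)                      # process noise
        self.R = r * np.eye(2)                      # measurement noise
        self.H = np.array([[0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = dt                                # theta_x += omega_x * dt
        F[1, 3] = dt                                # theta_y += omega_y * dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, omega_meas):
        z = np.asarray(omega_meas, dtype=float)
        y = z - self.H @ self.x                     # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    @property
    def gaze_angles(self):
        return self.x[:2]
```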
At 1126, method 1100 may include updating a gaze to head mounted display (gaze-HMD) vector (i.e., determining how the eye gaze vector intersects the display, or how the eye gaze vector moves relative to the display).
At 1128, method 1100 may include causing a display subsystem of the HMD (or another display) to adjust rendering of text, numbers, or images on the display. In some cases, the adjustment may be responsive to classifying movement of the eye (e.g., as smooth tracking, saccadic, gazing, nystagmus, or blinking).
Fig. 11B illustrates a second exemplary method 1150 for tracking movement of an eye using the set of one or more SMI sensors 1104 described with reference to fig. 11A, operated in combination with a camera 1152 or other sensors (e.g., an IMU 1154, an outward-facing camera (OFC) 1156, etc.). The camera 1152 may be configured similarly to other cameras described herein.
At 1158, method 1150 may include acquiring a set of one or more images of the eye using camera 1152. In some embodiments, camera 1152 may acquire the set of images at a first frequency, and the SMI signals generated by the SMI sensors 1104 may be sampled at a second frequency (e.g., a second frequency synchronized with the first frequency). The frequencies may be the same or different, but in some embodiments the second frequency may be greater than the first frequency. In this way, the camera 1152, which generally consumes more power and produces a greater amount of data, may be used at a lower frequency to determine eye position or generate gaze vector data and to ensure that the SMI sensors are properly focused and producing good data, while the SMI sensors 1104, which generally consume less power, may be used more or less continuously and at a higher frequency to track eye (or gaze vector) movement between image captures by the camera 1152. In some embodiments, the images acquired by camera 1152 may be used to generate, update, or customize an eye model that may be used to direct or focus the light beams emitted by the SMI sensors.
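One way to picture this dual-rate operation is the loop sketched below, where low-rate camera frames re-anchor the gaze estimate and high-rate SMI samples propagate it in between. The rates and the camera(), smi(), and fuse() callables are placeholders supplied by the caller; nothing here corresponds to a specific device API.

```python
def run_dual_rate_tracking(duration_s, camera, smi, fuse,
                           camera_hz=30.0, smi_hz=1000.0):
    """Hypothetical dual-rate loop: camera frames give absolute re-anchors;
    SMI samples give cheap incremental updates between frames."""
    gaze = None
    n_smi_per_frame = int(smi_hz / camera_hz)
    n_frames = int(duration_s * camera_hz)
    for _ in range(n_frames):
        image = camera()                     # expensive, low rate
        gaze = fuse(gaze, image=image)       # absolute re-anchor from the image
        for _ in range(n_smi_per_frame):
            sample = smi()                   # cheap, high rate
            gaze = fuse(gaze, smi=sample)    # incremental update between frames
    return gaze
```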
At 1160, method 1150 may include estimating a gaze vector (or position) of the eye based on the image acquired by camera 1152.
At 1162, method 1150 may include determining or updating a head-to-HMD vector (head-HMD vector). In other words, the method 1150 may determine how the user's head is positioned relative to the display.
At 1164, method 1150 may include performing visual-Doppler odometry to determine a change in position of the eye gaze vector. In some cases, the visual-Doppler odometry may be performed using an Extended Kalman Filter (EKF). In contrast to the Doppler odometry performed in method 1100, the visual-Doppler odometry performed at 1164 may fuse image-based position (or gaze vector) analysis of the eye with the SMI-based movement (or position) analysis.
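One simple way to picture this fusion, short of a full EKF, is a complementary blend like the sketch below: SMI-derived gaze motion is integrated every sample, and the estimate is pulled toward the camera's absolute gaze estimate whenever an image is available. The function name and the blend weight are illustrative assumptions.

```python
import numpy as np

def fuse_gaze(prev_gaze_angles, smi_delta_angles,
              camera_gaze_angles=None, camera_weight=0.2):
    """Complementary-style fusion of SMI increments with camera re-anchors.
    camera_weight is an arbitrary illustrative constant."""
    gaze = (np.asarray(prev_gaze_angles, dtype=float)
            + np.asarray(smi_delta_angles, dtype=float))
    if camera_gaze_angles is not None:
        # Blend toward the absolute, image-based estimate when one exists.
        gaze = ((1.0 - camera_weight) * gaze
                + camera_weight * np.asarray(camera_gaze_angles, dtype=float))
    return gaze
```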
At 1126, method 1150 may include updating a gaze-to-head-mounted-display (gaze-HMD) vector (i.e., determining how the eye gaze vector intersects the display, or how the eye gaze vector moves relative to the display).
At 1166, the method 1150 may optionally perform inertial, visual, or visual-inertial odometry using the IMU 1154 or the output of the outward-facing camera 1156 (i.e., a camera focused on the environment surrounding the user rather than on the user's eyes). The visual-inertial odometry may then be used to determine or update an HMD-to-world (HMD-world) vector at 1168.
At 1170, method 1150 may include determining or updating a gaze-world vector. Such a vector may be used to augment the user's reality, for example, via a pair of glasses.
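The chain of vectors used in these steps can be illustrated with a small transform-composition sketch. The function name, frame conventions, and inputs below are assumptions for illustration: a gaze direction in the HMD frame is rotated into the world frame using the HMD-to-world pose (e.g., from visual-inertial odometry).

```python
import numpy as np

def gaze_world_from_chain(gaze_in_hmd, R_world_from_hmd, t_world_from_hmd):
    """Compose a gaze direction in the HMD frame with the HMD-to-world pose
    to obtain a gaze-world ray.

    gaze_in_hmd: 3-vector gaze direction expressed in the HMD frame
    R_world_from_hmd: 3x3 rotation taking HMD-frame vectors to the world frame
    t_world_from_hmd: HMD position in the world frame, used here as the ray
                      origin (an eye-to-HMD offset would be added in practice)
    """
    g = np.asarray(gaze_in_hmd, dtype=float)
    g = g / np.linalg.norm(g)
    gaze_dir_world = np.asarray(R_world_from_hmd, dtype=float) @ g
    ray_origin_world = np.asarray(t_world_from_hmd, dtype=float)
    return ray_origin_world, gaze_dir_world
```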
At 1128, method 1150 may include causing a display subsystem of the HMD (or another display) to adjust (e.g., in an AR or VR environment) the rendering of text, numbers, or images on the display. In some cases, the adjustment may be responsive to classifying the movement of the eye (e.g., as a smooth pursuit, a saccade, a fixation, a nystagmus, or a blink).
Figs. 12A and 12B illustrate how a set of one or more SMI sensors may be used to map one or more surfaces or structures of eye 1200. For example, figs. 12A and 12B illustrate a single SMI sensor 1202. In some examples, SMI sensor 1202 may be replaced with multiple SMI sensors (e.g., multiple discrete SMI sensors or an array of SMI sensors). The SMI sensor may be any of the SMI sensors described with reference to figs. 1-11B.
Fig. 12A and 12B each show two side views of the same eye 1200. The first side view (i.e., side views 1204 and 1206) in each figure shows a cross-section of eye 1200, and the second side view (i.e., side views 1208 and 1210) in each figure shows a computer-generated model of various eye structures identified by the processor after analysis of the SMI signals generated by the SMI sensor or the plurality of SMI signals generated by the plurality of different SMI sensors. Fig. 12A shows the eye 1200 in a first position, and fig. 12B shows the eye 1200 in a second position.
In the case of a single SMI sensor 1202, the SMI sensor 1202 may be mounted to the head-mounted frame by means of a MEMS or other structure 1212 that enables the light beam 1214 emitted by the SMI sensor 1202 to be scanned across the eye 1200, or the light beam 1214 emitted by the SMI sensor 1202 may be received by a set of one or more optical elements that may be adjusted to scan the light beam 1214 across the eye 1200. Alternatively, the light beam 1214 may be split using a beam splitter, and multiple light beams may impinge on the eye 1200 simultaneously or sequentially. Alternatively, the SMI sensor 1202 may be mounted to the head-mounted frame in a fixed position, and the user may be required to move their eye 1200 to different positions while the SMI sensor 1202 emits a light beam.
A processor, such as any of the processors described herein, may receive the SMI signals generated by the set of one or more SMI sensors and determine a set of ranges for a set of points on or in the eye using the SMI signals. The range may include an absolute range or a relative range. The processor may then generate a map of at least one structure of the eye using the set of ranges. The map may be a two-dimensional (2D) map or a three-dimensional (3D) map.
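As a sketch of turning per-beam ranges into such a map, the code below converts each sensor's range estimate and known beam geometry into 3D points. The sensor origins and beam directions are assumed to be known from calibration of the head-mounted frame; the function name and inputs are illustrative.

```python
import numpy as np

def build_eye_point_map(ranges_m, sensor_origins, beam_dirs):
    """Convert per-beam SMI range estimates into a 3D point map of the eye.

    ranges_m: (N,) ranges along each beam (absolute or relative, per the text)
    sensor_origins: (N, 3) emission points on the head-mounted frame
    beam_dirs: (N, 3) beam directions (need not be pre-normalized)
    Returns an (N, 3) array of points on (or in) the eye.
    """
    r = np.asarray(ranges_m, dtype=float)[:, None]
    o = np.asarray(sensor_origins, dtype=float)
    d = np.asarray(beam_dirs, dtype=float)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)   # unit directions
    return o + r * d                                   # point = origin + range * dir
```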
In some embodiments, the processor may be configured to identify a structure of the eye, or a boundary between a first structure of the eye and a second structure of the eye, using the map. The identified structures may include, for example, one or more of iris 1218, sclera 1220, pupil 1222, lens 1224, limbus 1226, eyelid, and the like.
In some embodiments, the processor may operate an optical sensor subsystem (e.g., a MEMS, one or more optical elements, a beam splitter, etc.) to direct one or more light beams toward the identified structure of the eye. In some cases, the identified structure may be more diffuse than another structure of the eye.
In some embodiments, the processor may be configured to determine a gaze vector 1216 of the eye using the map. In some embodiments, the processor may also or alternatively be configured to obtain or construct a Doppler cloud using the set of one or more SMI signals. The one or more SMI signals may correspond to simultaneously or sequentially projecting or emitting a plurality of light beams of the set of one or more light beams, and/or to scanning at least one light beam of the set of one or more light beams. The Doppler cloud may be obtained or constructed with or without VCSEL wavelength modulation. The processor may also or alternatively obtain or construct a depth cloud. The depth cloud, however, may only be obtained or constructed using VCSEL wavelength modulation.
For example, a Doppler cloud and/or a depth cloud may be obtained or constructed when the wavelength of light emitted by one or more SMI sensors is modulated, and when the light beam emitted by at least one SMI sensor is scanned and/or when multiple light beams are emitted. Additionally or alternatively, a Doppler cloud may be obtained or constructed when the wavelength of light emitted by one or more SMI sensors is not modulated. As described earlier, single or multiple frames of the Doppler cloud may be considered a differential depth cloud. Using measurements of single or multiple frames of the Doppler cloud processed in real time, a predefined and/or locally calibrated differential map or library may be matched, and/or eye tracking or position information may be extracted. As described herein, the locally calibrated differential map or library may be obtained using, for example, a camera, a depth cloud, and the like. In addition, Doppler clouds, alone or in combination with depth clouds or other sensing modalities (e.g., eye camera images, motion sensors, etc.), can provide an accurate and efficient way of tracking eye movement or position information.
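A minimal sketch of the matching step described above is shown below: a measured Doppler-cloud frame is compared against a library of precomputed differential signatures and the best match is returned. The library structure, distance metric, and function name are assumptions for illustration only.

```python
import numpy as np

def match_doppler_frame(frame, library):
    """Match a Doppler-cloud frame against a precomputed differential library.

    frame: 1D array of per-beam Doppler values for the current frame
    library: dict mapping an eye-movement/position label to a reference
             signature of the same length (a hypothetical, locally calibrated
             differential map)
    Returns (best_label, residual).
    """
    f = np.asarray(frame, dtype=float)
    best_label, best_err = None, np.inf
    for label, signature in library.items():
        err = np.linalg.norm(f - np.asarray(signature, dtype=float))
        if err < best_err:
            best_label, best_err = label, err
    return best_label, best_err
```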
The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the embodiments. However, after reading this description, it will be apparent to one skilled in the art that the embodiments may be practiced without these specific details. Thus, the foregoing descriptions of specific embodiments described herein are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art in light of the above teachings, upon reading this specification.
As described above, one aspect of the present technology may be to collect and use data that may be obtained from a variety of sources, including biometric data (e.g., surface quality of a user's skin or fingerprint). The present disclosure contemplates that in some cases, the collected data may include personal information data that uniquely identifies, or may be used to identify, locate, or contact a particular person. Such personal information data may include, for example, biometric data (e.g., fingerprint data) and data linked thereto (e.g., demographic data, location-based data, telephone numbers, email addresses, home addresses, data or records related to the user's health or fitness level (e.g., vital sign measurements, medication information, exercise information), date of birth, or any other identifying information or personal information).
The present disclosure recognizes that the use of such personal information data in the present technology may be used to benefit users. For example, personal information data may be used to authenticate a user to access his device or to collect performance metrics for user interaction with an enhanced or virtual world. In addition, the present disclosure contemplates other uses for personal information data that are beneficial to the user. For example, health and fitness data may be used to provide insight into the overall health of a user, or may be used as positive feedback to individuals using technology to pursue health goals.
The present disclosure contemplates that entities responsible for collecting, analyzing, disclosing, transmitting, storing, or otherwise using such personal information data will adhere to established privacy policies and/or privacy practices. In particular, such entities should implement and adhere to privacy policies and practices that are recognized as meeting or exceeding industry or government requirements for maintaining the privacy and security of personal information data. Such policies should be readily accessible to the user and should be updated as the collection and/or use of the data changes. Personal information from users should be collected for legal and reasonable uses by the entities and not shared or sold outside of those legal uses. In addition, such collection/sharing should occur after informed consent is received from the user. Moreover, such entities should consider taking any steps necessary to safeguard and secure access to such personal information data and to ensure that others having access to the personal information data adhere to their privacy policies and procedures. In addition, such entities may subject themselves to third-party evaluations to demonstrate compliance with widely accepted privacy policies and practices. In addition, policies and practices should be adapted to the particular types of personal information data being collected and/or accessed, and to applicable laws and standards, including jurisdiction-specific considerations. For example, in the United States, the collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly. Thus, different privacy practices should be maintained for different personal data types in each country.
In spite of the foregoing, the present disclosure also contemplates embodiments in which a user selectively prevents use or access to personal information data. That is, the present disclosure contemplates that hardware elements and/or software elements may be provided to prevent or block access to such personal information data. For example, with respect to advertisement delivery services, the techniques of this disclosure may be configured to allow a user to choose to "opt-in" or "opt-out" to participate in the collection of personal information data during or at any time after registration with the service. As another example, the user may choose not to provide data to the targeted content delivery service. In yet another example, the user may choose to limit the length of time that data is maintained, or to completely prohibit development of a baseline profile for the user. In addition to providing the "opt-in" and "opt-out" options, the present disclosure contemplates providing notifications related to accessing or using personal information. For example, the user may be notified that his personal information data will be accessed when the application is downloaded, and then be reminded again just before the personal information data is accessed by the application.
Further, it is the intent of the present disclosure that personal information data should be managed and handled so as to minimize the risk of inadvertent or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting the data once it is no longer needed. In addition, and when applicable, including in certain health-related applications, data de-identification may be used to protect the privacy of the user. De-identification may be facilitated, where appropriate, by removing particular identifiers (e.g., date of birth, etc.), controlling the amount or specificity of the stored data (e.g., collecting location data at the city level rather than at the address level), controlling how data is stored (e.g., aggregating data across users), and/or by other methods.
Thus, while the present disclosure broadly covers the use of personal information data to implement one or more of the various disclosed embodiments, the present disclosure also contemplates that the various embodiments may be implemented without accessing such personal information data. That is, various embodiments of the present technology are not rendered inoperable by the lack of all or a portion of such personal information data. For example, content may be selected and delivered to the user by inferring preferences based on non-personal information data or an absolute minimum amount of personal information, such as content requested by a device associated with the user, other non-personal information available to the content delivery service, or publicly available information.

Claims (27)

1. An eye tracking device, comprising:
a head-mounted frame;
an optical sensor subsystem mounted to the head-mounted frame and comprising,
a set of one or more self-mixing interferometry (SMI) sensors; and
a processor configured to:
operate the optical sensor subsystem to cause the set of one or more SMI sensors to emit a set of one or more light beams toward an eye of a user;
receive a set of one or more SMI signals from the set of one or more SMI sensors; and
track movement of the eye using the set of one or more SMI signals.
2. The eye tracking device according to claim 1, wherein tracking the movement of the eye comprises estimating an angular velocity of the eye.
3. The eye tracking device according to claim 1, wherein:
the set of one or more SMI sensors includes at least two SMI sensors; and
the tracked movement of the eye includes a tracked angular velocity of the eye in two dimensions.
4. The eye tracking device according to claim 1, wherein:
the set of one or more SMI sensors includes at least three SMI sensors; and
the tracked movement of the eye includes a tracked angular velocity of the eye in three dimensions.
5. The eye tracking device according to claim 1, wherein the headset frame comprises:
a first eyeglass frame and a second eyeglass frame;
a nose bridge connecting the first eyeglass frame to the second eyeglass frame;
a first temple connected to the first eyeglass frame; and
a second temple connected to the second eyeglass frame.
6. The eye tracking device according to claim 1, further comprising:
a display subsystem including a display mounted to the head-mounted frame.
7. The eye tracking device according to claim 6, wherein:
the processor is configured to:
classify the movement of the eye as at least one of: a blink, a smooth pursuit, a saccade, a fixation, or a nystagmus; and
cause the display subsystem to adjust rendering of one or more images on the display in response to the classified movement of the eye.
8. The eye tracking device according to claim 7, wherein:
the processor is configured to:
quantify the classified movement of the eye; wherein
the rendering of the one or more images is further adjusted in response to the quantified movement of the eye.
9. The eye tracking device according to claim 6, wherein:
the processor is configured to:
cause the display subsystem to change a state of the display in response to the movement of the eye.
10. The eye tracking device according to claim 6, wherein:
the display subsystem includes a display pixel array mounted on a substrate; and
the set of one or more SMI sensors includes at least one SMI sensor mounted on the substrate adjacent to, or within, the display pixel array.
11. The eye tracking device according to claim 1, wherein:
the optical sensor subsystem includes
a beam splitter positioned to split a beam of light emitted by an SMI sensor of the set of one or more SMI sensors.
12. The eye tracking device according to claim 1, wherein:
the optical sensor subsystem includes
a waveguide positioned to direct a light beam emitted by an SMI sensor of the set of one or more SMI sensors to a plurality of outcouplings of the waveguide.
13. The eye tracking device according to claim 1, wherein:
the optical sensor subsystem includes
a beam steering component positioned to steer a light beam emitted by an SMI sensor of the set of one or more SMI sensors; and
the processor is configured to operate the beam steering component to steer the light beam.
14. The eye tracking device according to claim 1, wherein:
the optical sensor subsystem includes
a lens positioned to focus a light beam emitted by an SMI sensor of the set of one or more SMI sensors; and
a lens positioning mechanism; and
the processor is configured to operate the lens positioning mechanism to focus the light beam on a structure of the eye.
15. The eye tracking device according to claim 1, wherein:
the set of one or more SMI sensors includes
an SMI sensor array; and
circuitry configured to address different SMI sensors or different subsets of SMI sensors in the array of SMI sensors; and
the processor is configured to use the circuitry to selectively operate the different SMI sensors or the different subsets of SMI sensors.
16. The eye tracking device according to claim 1, wherein:
the processor is configured to:
determine a set of ranges for a set of points on or in the eye.
17. The eye tracking device according to claim 16, wherein:
the processor is configured to:
generate a map of at least one structure of the eye using the set of ranges.
18. The eye tracking device according to claim 17, wherein:
the processor is configured to:
identify a structure of the eye using the map; and
operate the optical sensor subsystem to direct at least one light beam of the set of one or more light beams toward the identified structure of the eye.
19. The eye tracking device according to claim 16, wherein the set of ranges comprises a set of absolute ranges.
20. The eye tracking device according to claim 16, wherein the set of ranges comprises a set of relative ranges.
21. An eye tracking device, comprising:
a set of one or more self-mixing interferometry (SMI) sensors;
a camera; and
a processor configured to:
cause the camera to acquire a set of images of an eye of a user at a first frequency;
cause the set of one or more SMI sensors to emit a set of one or more light beams toward the eye of the user;
sample a set of one or more SMI signals generated by the set of one or more SMI sensors at a second frequency that is greater than the first frequency;
determine a gaze vector of the eye using at least one image of the set of images; and
track movement of the eye using the set of one or more SMI signals.
22. The eye tracking device according to claim 21, wherein:
the processor is configured to:
update the gaze vector using the tracked movement of the eye.
23. A method of tracking movement of an eye, comprising:
operating an optical sensor subsystem to cause a set of one or more self-mixing interferometry (SMI) sensors in the optical sensor subsystem to emit a set of one or more light beams toward the eye of a user;
receiving a set of one or more SMI signals from the set of one or more SMI sensors; and
tracking the movement of the eye using the set of one or more SMI signals.
24. The method of claim 23, wherein tracking the movement of the eye using the set of one or more SMI signals comprises:
the set of one or more SMI signals is used to estimate an angular velocity of the eye.
25. The method of claim 23, further comprising:
at least one of the following:
simultaneously projecting a plurality of light beams of the set of one or more light beams; or
sequentially emitting the plurality of light beams; or
scanning at least one light beam of the set of one or more light beams; wherein
tracking the movement of the eye using the set of one or more SMI signals includes:
constructing a Doppler cloud using the set of one or more SMI signals.
26. The method of claim 25, wherein tracking the movement of the eye using the set of one or more SMI signals further comprises:
extracting eye position information by matching one or more frames of the Doppler cloud with a differential map.
27. The method of claim 23, further comprising:
modulating the emitted set of one or more light beams while
simultaneously projecting a plurality of light beams of the set of one or more light beams; or
sequentially emitting the plurality of light beams; or
scanning at least one light beam of the set of one or more light beams; wherein
tracking the movement of the eye using the set of one or more SMI signals includes:
constructing a depth cloud using the set of one or more SMI signals.
CN202310073547.3A 2021-09-22 2022-09-22 Eye tracking using self-mixing interferometry Pending CN116482854A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/247,188 2021-09-22
US202217947874A 2022-09-19 2022-09-19
US17/947,874 2022-09-19
CN202211169329.1 2022-09-22

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN202211169329.1 Division 2021-09-22 2022-09-22

Publications (1)

Publication Number Publication Date
CN116482854A true CN116482854A (en) 2023-07-25

Family

ID=87245285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310073547.3A Pending CN116482854A (en) 2021-09-22 2022-09-22 Eye tracking using self-mixing interferometry

Country Status (1)

Country Link
CN (1) CN116482854A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination