US10401953B2 - Systems and methods for eye vergence control in real and augmented reality environments - Google Patents

Info

Publication number
US10401953B2
Authority
US
United States
Prior art keywords
user
eyes
virtual
viewer
response
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US15/335,262
Other versions
US20170131764A1 (en)
Inventor
Botond Bognar
Matthew D Moller
Stephen R Ellis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pillantas Inc
Original Assignee
Pillantas Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US201562246200P
Application filed by Pillantas Inc
Priority to US15/335,262
Assigned to PILLANTAS INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOGNAR, Botond; ELLIS, STEPHEN R; MOLLER, MATTHEW D
Publication of US20170131764A1
Priority claimed from US15/849,394 (external priority, US10466780B1)
Publication of US10401953B2
Application granted
Application status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G06F3/013 Eye tracking input arrangements
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
    • G06F3/04845 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, for image manipulation, e.g. dragging, rotation
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

Systems and methods are provided that allow a person to exert control in real space as well as in virtual spaces. Eye tracking is employed to determine the direction of the viewer's gaze, and head tracking can be used as needed to provide a stable virtual environment. Eye vergence can then be used as a triggering mechanism to initiate responses stored in association with objects in the field of view. Additionally, eye vergence can be used to provide continuous control.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. provisional patent application No. 62/246,200 filed on Oct. 26, 2015 and entitled “Eye Controlled Virtual User Interface and Associated Information Structure” which is incorporated herein by reference in its entirety.

BACKGROUND

Field of the Invention

The invention is in the field of human-machine interfaces and more particularly in the field of user control of real and virtual user interfaces based on eye tracking.

Related Art

Augmented Reality (AR) headsets include wearable eyeglasses with optical combiners that can act as transparent screens and are used to produce virtual images that are superimposed over a real-world background. U.S. Pat. No. 9,035,955, invented by Keane et al., and U.S. Patent Application Publication No. 2015/0248169, invented by Abovitz et al., disclose examples of AR glasses.

Virtual Reality (VR) headsets include wearable viewing screens that block out vision of the real world and are used to create a view of a completely synthetic environment. U.S. Pat. No. 8,957,835, invented by Hoellwarth, discloses an example of a VR headset.

Mixed Reality (MR) can be thought of as a continuum of user interaction environments that range between a real environment and a totally virtual environment (Milgram & Colquhoun, 1999). MR blends virtual worlds with the real world to produce new environments and visualizations in which digital objects can appear to interact with objects of the real world to varying degrees, rather than just being passive within the real-world context.

SUMMARY

The present invention provides methods for a person to exert control through eye vergence, the simultaneous movement of the pupils toward or away from one another during binocular fixation. These methods can be used to exert control in real space as well as in virtual spaces. These methods require the presence of at least some hardware, including hardware capable of tracking both eyes and hardware for analyzing the eye tracking output. Hardware is also necessary for implementing the intended control, whether by rendering a virtual reality differently in response to the eye vergence, or by similarly triggering actions in the real world, such as dimming the room lights, changing a TV channel, or causing a robotic camera to take a picture. Hardware is described herein primarily in terms of head-mounted devices that incorporate all of the components necessary to track eye movement and head movement, to render virtual or digital images, and to implement other methods of the invention that detect eye vergence and act accordingly. It will be understood, however, that any, some, or all of these functions can be performed by hardware located off of the person's body.

An exemplary method comprises, or simply consists of or consists essentially of, the steps of fixating on an object and causing a response by eye vergence. In various embodiments the object is a real object or a virtual object, and in the latter embodiments the virtual object can exist in a virtual reality, augmented reality, or mixed reality. In various embodiments the response is a virtual response, that is, a response affecting the presentation of a virtual reality to the person, or a real response, that is, affecting the person's real world environment. Examples of virtual responses include translations and rotations of the virtual object, where the translations can be either towards or away from the person in addition to laterally. Another exemplary virtual response is to trigger new displays, such as new levels of a menu structure. Virtual responses include launching applications that manifest in the virtual reality, selecting virtual objects, tagging objects, playing sounds, acquiring a “screen shot,” adjusting parametric data associated with the object, and so forth.

BRIEF DESCRIPTION OF DRAWINGS

FIGS. 1A and 1B are side and top views, respectively, of an augmented reality headset, according to various embodiments of the invention, worn on the head to produce virtual images that augment the viewer's real world view.

FIGS. 2A and 2B are side and top views, respectively, of an augmented reality headset, according to various embodiments of the invention, worn on the head to produce virtual images that augment the viewer's real world view.

FIGS. 3A and 3B are side and top views, respectively, of a virtual reality headset, according to various embodiments of the invention, worn on the head to produce a virtual reality.

FIG. 4 shows a perspective view of a sequence of virtual three dimensional (3D) user interface display surfaces according to various embodiments of the invention.

FIGS. 5A and 5B, 6A and 6B, 7A and 7B, and 8A and 8B are perspective and side views, respectively, of a user stepping through successive display surfaces, according to various embodiments of the invention.

FIG. 9 is a graph illustrating how z-depth for binocular fixation varies as a person's eyes fixate on objects at various distances.

FIG. 10 is a flowchart representation of a method according to various embodiments of the invention.

FIG. 11 illustrates a virtual display surface embodiment comprising multiple intersecting spheres, according to various embodiments of the invention.

FIG. 12 shows the virtual display surface embodiment of FIG. 11 further comprising a virtual enlarged sphere with multiple tiled images produced in response to a selection.

DETAILED DESCRIPTION OF THE INVENTION

The following description is presented to enable persons skilled in the art to create and use the eye vergence controlled systems and methods described herein. Various modifications to the embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the inventive subject matter. Moreover, in the following description, numerous details are set forth for the purpose of explanation. However, one of ordinary skill in the art will realize that the inventive subject matter might be practiced without the use of these specific details. In other instances, well known machine components, processes and data structures are shown in block diagram form in order not to obscure the disclosure with unnecessary detail. Identical reference numerals may be used to represent different views of the same item in different drawings. Flowcharts in drawings referenced below are used to represent processes. A hardware processor system may be configured to perform some of these processes. Modules within flow diagrams representing computer implemented processes represent the configuration of a processor system according to computer program code to perform the acts described with reference to these modules. Thus, the inventive subject matter is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.

While methods of the present invention can be implemented in any of AR, MR, or VR, they will be described with reference to the AR and VR headsets discussed with reference to the following drawings, which can be used to implement these methods. It will be understood that other systems or devices that are not worn on the user's head at all, or that locate some but not all components on the head, can also be used to generate virtual environments for the user. Thus, methods of the invention described herein are not limited to implementation by head-mounted devices and can also employ back-projected screens for stereoscopic presentations, for example. Finally, methods of the invention can be implemented in real-world contexts without implementing any of AR, MR, or VR.

FIG. 1A illustrates an exemplary AR headset embodiment 100 mounted on a head 102 of a user, also referred to herein as the person or viewer, to produce virtual or digital images that augment the user's view of the real world. The headset 100 includes means for securing the headset 100 to the user's head 102, such as the arms of a pair of glasses, an adjustable headband, or a partial or full helmet, where such means are represented herein generally by mount 104. The headset 100 also includes a transparent display screen 106 and a transparent protective visor 108. In various embodiments the display screen 106 only appears transparent but is actually opaque and made to appear transparent by optical means. The display screen 106 and the protective visor 108 are both suspended from the mount 104 so as to be disposed in front of the viewer's eyes 110, with the display screen 106 positioned between the eyes 110 and the protective visor 108. The protective visor 108 also may be disposed to provide optical power, such as in ordinary reading glasses, to produce the virtual images that the user views. The optical system itself may be customized to provide personalized optical corrections for users who normally need spectacles. The headset 100 further includes an eye tracking module 118 to monitor movements of both eyes 110 and a head tracking module 120 to monitor movement of the viewer's head 102. Exemplary eye tracking modules 118 suitable for headset 100, as well as eye tracking eyeglasses, are available from both SensoMotoric Instruments, Inc. of Boston, Mass. and Pupil Labs UG of Berlin, Germany. Head tracking is typically performed with coils that pick up induced currents from constant or pulsed magnetic fields, or by camera-based processing of fixed active or passive fiducial markers on rigid bodies attached to the head 102. A user's gaze direction in space is determined by first determining the eyes' gaze direction with respect to the head 102 and then adding it to the head's forward direction in space.
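As a minimal illustration of this composition of eye-in-head gaze with head orientation, consider the following Python sketch. It assumes a yaw-only head pose and a right-handed coordinate system with the forward direction along −z; the function names are illustrative and not part of the disclosed hardware.

```python
import numpy as np

def yaw_rotation(yaw_rad: float) -> np.ndarray:
    """Rotation about the vertical (y) axis; a yaw-only head pose is assumed."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def gaze_in_world(head_yaw_rad: float, eye_dir_in_head: np.ndarray) -> np.ndarray:
    """Gaze direction in space: the head's orientation applied to the
    eye-in-head gaze vector reported by the eye tracking module."""
    d = yaw_rotation(head_yaw_rad) @ eye_dir_in_head
    return d / np.linalg.norm(d)

# Head turned 30 degrees to one side, eyes deflected 10 degrees the other way.
eye_dir = yaw_rotation(np.radians(-10.0)) @ np.array([0.0, 0.0, -1.0])
print(gaze_in_world(np.radians(30.0), eye_dir))  # net gaze ~20 degrees off forward
```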

An image generation device 122, consisting of a computer graphics system to calculate a geometric projection of a synthetic environment and a physical visual display to present particular views of this environment, is configured to produce images 116L and 116R (FIG. 1B; shown together as ray 107 in FIG. 1A) on the display screen 106 that are perceived by the viewer as augmenting their view of the real world as seen through the display screen 106 and visor 108. Image generation device 122 can include a processor to render the images 116L and 116R, or the images 116L, 116R can be provided to the image generation device 122 from the processor 126 described below.

A spatial sensor 124 detects physical objects and maps those objects so that, for example, virtual images can be correctly and accurately displayed in relation to the real-world objects. In some embodiments, the spatial sensor 124 includes an infrared (IR) detector, LIDAR, or a camera that detects markers on rigid bodies. In some embodiments, the spatial sensor 124 measures the distances from the headset 100 to the real-world objects, and as a user turns her head 102, a map can be produced from the distance data, where the map includes three-dimensional geometric representations of objects in the environment. A suitable example of such mapping is simultaneous localization and mapping (SLAM), which can be performed either by a processor within the spatial sensor 124 or by a processor 126, described below, in communication with the spatial sensor 124.

Processor 126 is configured with instructions stored within a non-transitory storage device 128. The processor 126 is in communication with other components such as eye tracking module 118, head tracking module 120, spatial sensor 124, and image generation device 122 to work in conjunction with processing within these components or to provide the computation power these components lack.

In FIGS. 1A and 1B, because of binocular presentation, the user perceives a virtual three-dimensional (3D) image 112, which corresponds to the projected image 116, at a spatial location at distance AA from the eyes 110. The virtual 3D image 112 is perceived as being disposed beyond the display screen 106, such that the display screen 106 is seen as being disposed between the eyes 110 and the virtual 3D image 112. A virtual sight line 114 extends from each of the user's eyes 110, through the display screen 106 and through the protective visor 108, to the perceived location of the virtual 3D image 112, which is perceived by the viewer to be at a distance AA from the eyes 110.

The top view of FIG. 1B shows sight lines for the left and right eyes, 114L and 114R, that the user mentally combines to perceive the virtual 3D image 112 at the distance AA. The image generation device 122 produces a left real image 116L on a left portion 106L of the display screen 106 and a right real image 116R on a right portion 106R of the display screen 106. In some embodiments, the image generation device 122 is embedded within the display screen 106 and includes optics to provide sufficient eye relief to allow comfortable viewing. In the illustrated embodiment, the image generation device projects the left and right real images 116L, 116R onto the display screen 106. Thus, the user's left eye 110L receives a left eye view along the virtual left eye sight line 114L and the user's right eye 110R receives a right eye view along the right eye sight line 114R to perceive the virtual 3D image 112 at the distance AA.
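The perceived distance AA is set by the vergence geometry of the two sight lines 114L and 114R. The following is a minimal sketch of that geometry, assuming a symmetric fixation point directly ahead and an illustrative interpupillary distance; neither value comes from the disclosure.

```python
import numpy as np

def fixation_depth(ipd_m: float, vergence_deg: float) -> float:
    """z-depth of a symmetric binocular fixation point: each eye rotates
    inward by half the vergence angle, so z = (IPD / 2) / tan(vergence / 2)."""
    half_angle = np.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / np.tan(half_angle)

# With a 63 mm interpupillary distance, a 3.6 degree vergence angle
# corresponds to a fixation depth of roughly one meter.
print(f"{fixation_depth(0.063, 3.6):.2f} m")  # ~1.00 m
```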

FIG. 2A shows a second exemplary AR headset embodiment 200 worn on the head 102 of a user to produce virtual images that augment the user's view of the real world. The headset 200 similarly includes a mount 204, as described above, and a transparent protective visor 208. The protective visor 208 is suspended from the mount 204 so as to be disposed in front of a user's eyes 110 and may act as a semitransparent mirror with optical power, as discussed. The headset 200 further includes a head tracking module 220, an eye tracking module 218, and a spatial sensor 224, as generally described with respect to headset 100. The headset 200 further includes an image generation device 222 that in these embodiments projects an image 216 onto the retina of each of the user's eyes 110 as indicated by an image projection ray 207. More particularly, in some embodiments, the image generation device 222 projects the image onto an inside surface of the protective visor 208, which reflects the projected image back to the user's eyes 110. A processor 226 and non-transitory storage device 228 including instructions also comprise parts of the headset 200.

The user perceives a virtual 3D image 212, which corresponds to the projected image 216 at a spatial location and at a distance BB from the eyes 110. The virtual 3D image 212 is disposed in front of the user's eyes 110 with the protective visor 208 disposed between the second virtual 3D image 212 and the user's eye 110. A line of sight 214 extends from each of the user's eyes 110, through the protective visor 208, to the perceived virtual 3D image 212, which is perceived by the user to be at a distance BB from the eyes 110.

FIG. 2B is a top view of the headset 200 showing left and right sight lines 214L and 214R that the user mentally combines to perceive the virtual 3D image 212 as described with respect to the headset 100.

FIG. 3A shows an exemplary embodiment of a VR headset 300 mounted on a head 102 of a user to produce interactive 3D virtual images. The third headset embodiment 300 includes a mount 304 and an opaque display screen 306. In these embodiments either the mount 304, or the mount 304 in combination with the display screen 306, blocks the user's view of the real world either completely or substantially. The display screen 306 is attached to the mount 304 so as to be disposed in front of a user's eyes 110.

The headset 300 further includes a head tracking module 320, an eye tracking module 318, and a spatial sensor 324, as generally described with respect to headset 100. The headset 300 further includes an image generation device 322 that produces an image 316 on the display screen 306, for example, by any of the methods of the prior embodiments. A processor 326 and non-transitory storage device 328 including instructions also comprise parts of the headset 300.

The user perceives a virtual 3D image 312, which corresponds to the image 316 at a spatial location at distance CC from the eyes 110. The virtual 3D image 312 is disposed in front of the user's eyes 110 with the display screen 306 disposed between the virtual 3D image 312 and the user's eyes 110. A sight line 314 extends from each of the user's eyes 110, through the display screen 306 to the perceived virtual 3D image 312, which is perceived by the user to be at a distance CC from the eyes 110.

FIG. 3B is a top view of the headset 300 showing left and right sight lines 314L and 314R that the user mentally combines to perceive the virtual 3D image 312 as described with respect to the headset 100.

User control through tracking of eye vergence will now be described. The following embodiments describe methods that can be performed by a viewer to exert control through the use of eye movements, as well as methods of providing virtual realities, detecting eye movements, and varying the virtual reality in response to the eye movements. The methods of use do not necessarily require the viewer to wear a headset at all; in these embodiments eye tracking and head tracking, as well as image production, are all performed by external means. In some embodiments the viewer wears a headset with eye tracking and head tracking, but the images are produced by means that are external to the headset. Various other combinations of headset-mounted and external components could also be employed, so long as the user's eyes are tracked, and optionally the user's head is tracked and/or images are projected before the user's eyes. For instance, one camera on a cell phone can be configured to perform eye tracking, and the detection of eye vergence triggers the acquisition of a picture with the opposite-facing camera. In addition to triggered responses, continuous and/or nondiscrete actions may be possible based on variable amounts of voluntary eye vergence.

Methods of the invention that provide virtual reality interfaces and track eye movement to allow a user to control the interface can be performed by headsets like headsets 100-300 as well as systems in which various or all components are external to the headset. For instance, the processor can be worn elsewhere on the body, or can be located off of the body but nearby, and either in Wi-Fi or wired communication with the headset. The computer instructions stored on the non-transitory memory device, whether as part of a headset or remote thereto, can include instructions necessary to direct the processor to perform the steps of these methods.

These methods, and the user methods, will first be illustrated with respect to a user controlling items like menu structures in a virtual environment. Generally, a system of the invention such as a headset produces a sense of virtual reality by rendering a virtual environment, which in one embodiment may be seen on a portion of a surface in front of the user, where that virtual surface is at a distance from the user's eyes. The user accordingly perceives the virtual reality, separate from the real world or integrated therewith, with a gaze that fixates at the distance of the virtual surface. The user's head is tracked to maintain the virtual space, and the user's eyes are tracked to determine a change of fixation and to approximate the new fixation distance, sometimes denoted as z. The user can control menu structures, and achieve other controls, by aiming her head to direct her gaze in a particular orientation and by then refocusing her eyes such that her eye vergence serves to trigger a response. Eye vergence alone, or in combination with another signal such as a blink or a head nod, can trigger a response or control an object, virtual or real. These secondary signals can also be used to confirm a command made by eye vergence. While the first examples show menu structures, eye vergence can similarly be used to control virtual objects in the virtual reality: to reposition or rotate them, bring them forward, push them away, or manipulate any other parameters such as color, associated data, etc.

It is further noted that the vergence detection can be “off-axis” such that the viewer's gaze line is not in line with her head's orientation, depending on the deflection of the eyes and/or depending on where the object is located within the field of view. Also, the eye gaze direction may be added to the head orientation on which the eyes rest, in some embodiments.

FIG. 4 shows an exemplary composite view of a sequence of four virtual 3D user interface display surfaces produced in accordance with some embodiments. These successive surfaces can represent different levels of a menu structure, for example. A first virtual display surface 402 is shown disposed at a first distance x1 from a user's head 102. A second virtual display surface 404 is shown disposed at a second distance x2 from the user's head 102. A third virtual display surface 406 is shown disposed at a third distance x3 from the user's head 102. A fourth virtual display surface 408 is shown disposed at a fourth distance x4 from the user's head 102. In this example, each closer display surface represents a successively deeper level of the menu structure.

In some embodiments, the distances at which successive virtual display surfaces are displayed are selected to simulate physical distances a user may encounter in the real world. For example, the distances can be selected to simulate a user searching for a book in a library. As a further example, the first virtual display surface 402 can be displayed at a ‘searching’ x1 distance that is approximately eight times normal ‘reading distance,’ i.e., a distance that a person typically holds a book from their eyes when reading. The second virtual display surface 404 can be displayed at an ‘approach’ distance x2 of four times the normal reading distance, the third virtual display surface 406 can be displayed at a ‘reach-out’ distance x3 of twice the normal reading distance, while the fourth virtual display surface 408 can be displayed at the ‘reading’ distance x4, equal to the normal reading distance. Here, the virtual distances x1 to x4 are selected to simulate a user approaching a menu structure, where the objects in the closest display surface are perceived to be at a distance in the virtual environment where the user can grasp them. As described further below, the user moves through the several virtual display surfaces by adjusting eye fixation, and as one virtual display surface is selected and presented to the user, the prior one is removed or deemphasized, for instance, by being blurred, made increasingly transparent, or given reduced contrast.
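As a small worked example of these spacings, the sketch below computes the four surface distances as multiples of an assumed normal reading distance of 0.4 m; the disclosure does not specify a numeric reading distance, so that value is illustrative only.

```python
READING_DISTANCE_M = 0.4  # assumed typical reading distance; not specified in the text

# Multiples of the reading distance for the four menu levels of FIG. 4.
SURFACE_MULTIPLES = {"searching": 8, "approach": 4, "reach-out": 2, "reading": 1}

surface_distances_m = {level: mult * READING_DISTANCE_M
                       for level, mult in SURFACE_MULTIPLES.items()}
print(surface_distances_m)
# {'searching': 3.2, 'approach': 1.6, 'reach-out': 0.8, 'reading': 0.4}
```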

The number of display surfaces, and the distances therebetween, are not limited by the preceding example. In various embodiments the distances can be varied dynamically. For example, these distances for a given menu structure can be varied in an AR or MR environment in response to the real-world component thereof. The same menu, when the user is in an open area, can have wider spacings between virtual display surfaces than when the user is in a confined space, such as a bus seat. Mapping from a spatial sensor can be used to dynamically set such spacings.

In operation, the first virtual display surface 402 is initially displayed, being the highest level of the menu structure, and includes a plurality of selectable items A1-A8, each associated with an alternative next lower level of the menu structure. In some embodiments, associated with each item on the first virtual display surface 402 is a visual aid at a distance that is closer to the user than the item itself; the visual aid appears to the user to be in front of the associated item.

Visual aids provide virtual cues to the viewer to aid the process of eye vergence, but once a person acquires the skill, such cues typically become unnecessary. In various embodiments, the salience of the visual aid can be varied by making it blink or shimmer, larger or smaller, providing leading movement, or by varying its alpha, for example. Salience can be adjusted to make visual aids apparent without being distracting, and can become more or less salient as one's gaze approaches or moves away from the visual aids. In some instances, salience can be varied by the image generation device to lead the viewer to modify the dynamic properties of their eye vergence, such as to accelerate or decelerate eye vergence, creating a feedback that can be used, for example, to draw a virtual object closer or push it away. Here, rather than a trigger response, the system provides for continuous control. Visual aids can be represented as a visual ladder, in some instances, rather than a single spot, where the successive points of the ladder trigger differing actions and have different saliences.
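One way to realize such gaze-dependent salience is sketched below: the opacity of a visual aid ramps up as the gaze fixation approaches it. The function shape and the angular thresholds are illustrative assumptions, not values from the disclosure.

```python
def visual_aid_alpha(gaze_to_aid_deg: float,
                     full_salience_deg: float = 2.0,
                     falloff_deg: float = 10.0) -> float:
    """Opacity (alpha) of a visual aid versus the angular distance between
    the viewer's gaze and the aid: fully salient inside `full_salience_deg`,
    fading linearly to invisible at `falloff_deg`."""
    if gaze_to_aid_deg <= full_salience_deg:
        return 1.0
    if gaze_to_aid_deg >= falloff_deg:
        return 0.0
    return 1.0 - (gaze_to_aid_deg - full_salience_deg) / (falloff_deg - full_salience_deg)

print(visual_aid_alpha(6.0))  # 0.5: halfway through the falloff band
```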

In the example of FIG. 4, the second virtual display surface 404 is displayed in response to the user selecting item A1 by bringing her binocular fixation off of the first virtual display surface 402 and towards the visual aid associated with item A1, changing eye vergence. While eye vergence alone can be sufficient for some control actions, items can also be selected, or actions triggered or confirmed, by an additional blink, wink, nod, button press, spoken command, and so forth.

FIGS. 5A-8B show a user navigating through an exemplary sequence of fixation points and a corresponding sequence of virtual display surfaces in accordance with some embodiments. FIGS. 5A-5B are perspective and side views, respectively, of a user whose head 102 is disposed at a first distance Y1A from a first virtual display surface S1 and a distance Y1B from a first fixation point FPIS11 associated with a first item IS11 displayed on the first virtual display surface S1. The distance Y1A is a greater distance than Y1B, and therefore the first fixation point FPIS11 is disposed in front of the first virtual display surface S1, relative to the position of the user's head 102. In this embodiment the first virtual display surface S1, and the subsequent display surfaces, are spherically curved to indicate that, from the perspective of the user, all points on the surface S1 are equidistant. It should be noted that projection onto a specifically spherical surface is not a requirement so long as an intercept between a gaze direction and the surface may be determined. It is also noted that the term “fixation point” is used here in place of the terms “focal point” and “focus point” in the priority application.

An optional virtual visual aid B1 can be provided to the user in the form of a visual aid beam, or ray, or beam spot that shines on the surface S1 to indicate to the user where the user is looking, according to the head tracking module. The user can adjust her head position to direct the visual aid B1 to a selectable item on the surface S1. In the illustrative example in FIG. 5A, the visual aid B1 is directed at selectable item IS11. In accordance with some embodiments, to select item IS11, the user brings her fixation to the associated fixation point FPIS11, and optionally signals with an eye blink. An eye tracking module detects the change in user fixation and any blinking. In response to the signal while fixating on the fixation point FPIS11, a processor causes an image generation device to display a second virtual display surface S2A, as shown in FIGS. 6A-6B.

FIGS. 6A-6B show perspective and side views, respectively, of a viewer's head disposed at a distance Y1A from the first virtual display surface S1 and disposed at a second distance Y2A from a second virtual display surface S2A and at a distance Y2B from a second fixation point FPIS23 associated with a third selectable item IS23 displayed on the second virtual display surface S2A. The first virtual display surface S1 may continue to be displayed, although it can be deemphasized by being blurred or dimmed, for example. In the example in FIG. 6A, the virtual visual aid beam B1 is directed at item IS23. Moreover, the virtual visual aid B1 optionally can extend behind the second surface S2A to extend to item IS11. Thus, the visual aid beam B1 can illuminate the search path used to arrive at the second virtual display surface S2A. A third surface is selected in the same manner as above, as shown in FIGS. 7A-7B, and a fourth surface as shown in FIGS. 8A-8B.

Referring again to FIG. 6A, a back fixation point FB21 is shown for use to traverse backward from the second surface S2A to the first surface S1. From FIGS. 6A-6B, it can be seen that the back fixation point FB21 is disposed more distant from the user than is the second virtual display surface S2A. In accordance with some embodiments, the user can navigate back through the menu structure analogously to how the user navigates forward. In some embodiments, the second surface S2A disappears (not shown) upon selection of the first virtual display surface S1.

Table 1 is an exemplary information structure stored in a computer readable storage device and used to configure a processor, while the first virtual display surface is displayed, to control an image generation device to transition to a different virtual display surface in response to information received from a head tracker device and information received from an eye tracking module, in accordance with some embodiments.

TABLE 1
While Currently Displaying Virtual Display Surface S1

Detected Head Direction   Detected Fixation Point   Next Virtual Display Surface
IS11                      FPIS11                    S2A
IS12                      FPIS12                    S2B
IS13                      FPIS13                    S2C
IS14                      FPIS14                    S2D
IS15                      FPIS15                    S2E
IS16                      FPIS16                    S2F

Thus, for example, in accordance with the information structure of Table 1, while the first virtual surface S1 is displayed, virtual display surface S2C is displayed in response to detection of the user's head direction toward IS13 together with the user's eyes fixating upon FPIS13 and blinking.

Table 2 is an example information structure stored in a computer readable storage device and used to configure a processor, while the second virtual display surface S2A is displayed, to control an image generation device to transition to a different virtual display surface in response to information received from a head tracker device and information received from an eye tracking module, in accordance with some embodiments.

TABLE 2
While Currently Displaying Virtual Display Surface S2A

Detected Head Direction   Detected Fixation Point   Next Virtual Display Surface
IS21                      FP21                      S3A
IS22                      FP22                      S3B
IS23                      FP23                      S3C

Persons skilled in the art will appreciate that, in accordance with some embodiments, each virtual image, item, object, menu, and so forth that is displayed to the viewer can be associated with a spatial location off of the active visual surface and some response that can be triggered, at least in part, by fixating upon that location.
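A minimal sketch of such an information structure is given below, encoding Tables 1 and 2 as a nested mapping from the current surface and a (head direction, fixation point) pair to the next surface. The dictionary layout and the mandatory-blink rule are illustrative assumptions; the text treats the blink as an optional confirmation signal.

```python
from typing import Optional

# Hypothetical encoding of the Table 1 and Table 2 information structures.
TRANSITIONS = {
    "S1": {("IS11", "FPIS11"): "S2A", ("IS12", "FPIS12"): "S2B",
           ("IS13", "FPIS13"): "S2C", ("IS14", "FPIS14"): "S2D",
           ("IS15", "FPIS15"): "S2E", ("IS16", "FPIS16"): "S2F"},
    "S2A": {("IS21", "FP21"): "S3A", ("IS22", "FP22"): "S3B",
            ("IS23", "FP23"): "S3C"},
}

def next_surface(current: str, head_direction: str, fixation_point: str,
                 blink_detected: bool) -> Optional[str]:
    """Return the next virtual display surface, or None if no transition fires."""
    if not blink_detected:
        return None
    return TRANSITIONS.get(current, {}).get((head_direction, fixation_point))

print(next_surface("S1", "IS13", "FPIS13", blink_detected=True))  # S2C
```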

FIG. 9 shows an exemplary graph 900 of the z-depth 910 of a user's eyes as they fixate from a near object to a far object to one in between. The z-depth 910 remains fairly constant while fixation remains on any object, then shifts quickly to the next. A simplistic means of detecting the vergence of the viewer's eyes that is sufficient to trigger a response is to monitor the z-depth for a change of more than a threshold distance. Thus, the change from one average z-depth 910 to another beyond a threshold could be a trigger. A first derivative of the z-depth 910 as a function of time is approximately zero while the viewer fixates on each object, and spikes either positive or negative when the eyes converge or diverge. Thus, the first derivative, if it exceeds a threshold, or indeed any computable dynamic characteristic of motion, can be another possible trigger. Similarly, one can sample the z-depth 910 with a given frequency, create a rolling average over n samples, and then monitor the difference between two such rolling averages, where the two are offset in time by some number of samples. Such an example is illustrated by lines 920 and 930; although they are best fits to the curve, one can see how an average of the z-depth values will change, as will the difference between them. Thus, another trigger can be that the difference between two rolling averages exceeds a threshold. In further embodiments, the z-depth data is processed, such as by smoothing, before being analyzed to detect eye vergence. Data processing of the z-depth data can include applying windowing functions. Further, median values can be used in place of averages. In still further embodiments a second derivative of the z-depth data is determined and can be used to vary salience values of visual aids, for example.
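The offset-rolling-average trigger just described can be sketched as follows. The window size, offset, and threshold are illustrative assumptions, as is the class interface; the disclosure specifies only the comparison of two time-offset rolling averages against a threshold.

```python
from collections import deque

class VergenceTrigger:
    """Detects a vergence event by comparing two rolling averages of
    z-depth samples offset in time, as described for FIG. 9."""

    def __init__(self, window: int = 10, offset: int = 10,
                 threshold_m: float = 0.25):
        self.samples = deque(maxlen=window + offset)
        self.window = window
        self.offset = offset
        self.threshold_m = threshold_m

    def update(self, z_depth_m: float) -> bool:
        """Feed one z-depth sample; return True when the difference between
        the recent and the time-offset rolling average exceeds the threshold."""
        self.samples.append(z_depth_m)
        if len(self.samples) < self.window + self.offset:
            return False
        recent = list(self.samples)[-self.window:]
        older = list(self.samples)[:self.window]
        return abs(sum(recent) / self.window -
                   sum(older) / self.window) > self.threshold_m

# Example: fixation shifts from ~2.0 m to ~0.5 m and fires the trigger.
trigger = VergenceTrigger()
for z in [2.0] * 20 + [0.5] * 20:
    if trigger.update(z):
        print("vergence event at z =", z)
        break
```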

FIG. 10 is a flowchart representation of an exemplary method 1000 of the present invention for providing control to a user. The method 1000 begins with a step 1010 of storing an association between an object and a response in a non-transitory storage device. The object can be real, such as a lamp, or an item shown in a 2D display such as an advertisement displayed on a cell phone, or can be a virtual object in a virtual reality or in an augmented or mixed reality. Storing the association allows the response to be retrieved as needed when a selection implicates the object.

An optional next step 1020 comprises providing a digital image. The step is optional inasmuch as embodiments of the present invention do not require the creation of virtual objects but can operate on real objects as well. In a further optional step 1030, a visual aid is provided in the viewer's field of view and proximate to the object. Where the object is a real object, headsets like those described above, as well as other systems, can be used to display visual aids to the viewer even in the absence of any other digital image content.

In an optional step 1040, an eye tracking module is used to detect binocular fixation of the user's eyes upon the object. This step is also optional inasmuch as in some instances one could look in the direction of an object without actually fixating on it. Detecting binocular fixation upon the object can include determining that the binocular fixation of the eyes is within a threshold distance of the object. In a step 1050, vergence of the viewer's eyes is detected, again using the eye tracking module. In both steps, monitoring the z-depth of the binocular fixation of the eyes can be employed. Step 1050 can optionally include receiving a confirmation signal from the viewer.

In a step 1060 the response is provided. The response can be either discrete or not. In those embodiments in which the response is not discrete, for example, the response can be continuous with a feedback loop between steps 1050 and 1060 such that the response is dependent upon continuous monitoring of eye vergence.
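For concreteness, the flow of method 1000 can be sketched end to end as below. The sample-stream interface, object name, and threshold are all hypothetical, and vergence is detected here by the simple z-depth-change criterion described with FIG. 9.

```python
def provide_control(samples, associations, threshold_m=0.5):
    """Minimal sketch of method 1000 under assumed interfaces: `samples`
    yields (object_under_gaze, z_depth_m) pairs from an eye tracking
    module, and `associations` maps objects to stored responses (step 1010)."""
    fixated, last_z = None, None
    for obj, z in samples:
        if obj in associations:
            fixated = obj                              # step 1040: fixation on the object
        if fixated is not None and last_z is not None and abs(z - last_z) > threshold_m:
            associations[fixated]()                    # step 1060: provide the response
            return
        last_z = z

# Gaze rests on a lamp at ~2 m, then converges to ~0.5 m (step 1050 trigger).
stream = [("lamp", 2.0)] * 5 + [("lamp", 0.5)]
provide_control(stream, {"lamp": lambda: print("dim the room lights")})
```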

FIGS. 11 and 12 illustrate another virtual display surface embodiment of the invention. FIG. 11 shows a virtual display surface comprising multiple intersecting virtual spheres 1102-1114, each corresponding to a different dataset. In the illustrated example, the spheres are labeled to indicate the types of data they are associated with, e.g., work, sports, holiday. To select a dataset, in some embodiments, the user directs her gaze at a sphere of the surface and draws her gaze towards her, possibly assisted by a visual aid seen proximate to the sphere. In some instances the selection is confirmed by a signal like a wink or head nod. In response, a processor causes an image generation device to produce a new virtual image such as the virtual enlarged sphere shown in FIG. 12 with numerous image tiles posted on it.

The foregoing description and drawings of embodiments in accordance with the present invention are merely illustrative of the principles of the invention. Therefore, it will be understood that various modifications can be made to the embodiments by those skilled in the art without departing from the spirit and scope of the invention. The use of the term “means” within a claim of this application is intended to invoke 112(f) only as to the limitation to which the term attaches and not to the whole claim, while the absence of the term “means” from any claim should be understood as excluding that claim from being interpreted under 112(f). As used in the claims of this application, “configured to” and “configured for” are not intended to invoke 112(f).

Claims (12)

What is claimed is:
1. A system for providing control to a user, the system comprising:
a non-transitory storage device storing an association between an object and a response;
an eye tracking module configured to determine lines of sight for both of the user's eyes; and
a processor configured to
receive the lines of sight, determine the user's gaze therefrom, and measure the viewer's eye vergence, and
provide the response following first a determination of a binocular fixation upon the object that includes determining that the binocular fixation of the eyes is within a threshold distance of the object and includes monitoring the z-depth of the binocular fixation of the eyes and determining that the z-depth has remained within a threshold range for at least a threshold dwell time, and then a detection of a simultaneous movement of the pupils of the viewer's eyes toward or away from one another, wherein the response is a virtual response that changes a presentation of a virtual reality to the user.
2. The system of claim 1 wherein the system comprises a headset including a display screen and an image generation device.
3. The system of claim 2 further comprising a spatial sensor.
4. The system of claim 1 further comprising a head tracking module.
5. The system of claim 1 wherein the processor is further configured to generate a visual aid in the viewer's field of view proximate to the object.
6. A method for providing control to a user comprising:
storing, in a non-transitory storage device, an association between an object and a response;
using an eye tracking module to first detect binocular fixation of the user's eyes upon the object, then to detect a simultaneous movement of the pupils of the viewer's eyes toward or away from one another, wherein detecting binocular fixation upon the object includes determining that the binocular fixation of the eyes is within a threshold distance of the object and includes monitoring the z-depth of the binocular fixation of the eyes and determining that the z-depth has remained within a threshold range for at least a threshold dwell time; and then
providing the response, wherein the response is a virtual response that changes a presentation of a virtual reality to the user.
7. The method of claim 6 wherein detecting the vergence of the viewer's eyes includes monitoring the z-depth of the binocular fixation of the eyes and determining that the z-depth has changed by more than a threshold distance.
8. The method of claim 6 wherein detecting the vergence of the viewer's eyes includes monitoring the z-depth of the binocular fixation of the eyes and determining that a first derivative of the z-depth, or a difference between offset rolling averages, has changed by more than a threshold amount.
9. The method of claim 6 further comprising, using an image generation device, generating a visual aid in the viewer's field of view proximate to the object.
10. The method of claim 9 further comprising varying the salience of the visual aid as a function of the viewer's gaze.
11. The method of claim 10 further comprising varying the salience of the visual aid as a function of the viewer's eye vergence.
12. The method of claim 6 further comprising detecting a blink using the eye tracking module, wherein providing the response is further responsive to detecting the blink.
US15/335,262 2015-10-26 2016-10-26 Systems and methods for eye vergence control in real and augmented reality environments Active US10401953B2 (en)

Priority Applications (2)

Application Number           Priority Date  Filing Date  Title
US201562246200P              2015-10-26     2015-10-26
US15/335,262 (US10401953B2)  2015-10-26     2016-10-26   Systems and methods for eye vergence control in real and augmented reality environments

Applications Claiming Priority (2)

Application Number           Priority Date  Filing Date  Title
US15/335,262 (US10401953B2)  2015-10-26     2016-10-26   Systems and methods for eye vergence control in real and augmented reality environments
US15/849,394 (US10466780B1)  2015-10-26     2017-12-20   Systems and methods for eye tracking calibration, eye vergence gestures for interface control, and visual aids therefor

Related Child Applications (1)

Application Number           Relation              Priority Date  Filing Date  Title
US15/849,394 (US10466780B1)  Continuation-In-Part  2015-10-26     2017-12-20   Systems and methods for eye tracking calibration, eye vergence gestures for interface control, and visual aids therefor

Publications (2)

Publication Number    Publication Date
US20170131764A1 (en)  2017-05-11
US10401953B2 (en)     2019-09-03

Family

ID=58630763

Family Applications (1)

Application Number  Status  Publication        Priority Date  Filing Date  Title
US15/335,262        Active  US10401953B2 (en)  2015-10-26     2016-10-26   Systems and methods for eye vergence control in real and augmented reality environments

Country Status (3)

Country Link
US (1) US10401953B2 (en)
EP (1) EP3369091A4 (en)
WO (1) WO2017075100A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104750234B (en) * 2013-12-27 2018-12-21 中芯国际集成电路制造(北京)有限公司 The interactive approach of wearable smart machine and wearable smart machine
JP2017058971A (en) * 2015-09-16 2017-03-23 株式会社バンダイナムコエンターテインメント Program and image formation device
US10466780B1 (en) * 2015-10-26 2019-11-05 Pillantas Systems and methods for eye tracking calibration, eye vergence gestures for interface control, and visual aids therefor
US10401953B2 (en) * 2015-10-26 2019-09-03 Pillantas Inc. Systems and methods for eye vergence control in real and augmented reality environments

Patent Citations (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5583795A (en) * 1995-03-17 1996-12-10 The United States Of America As Represented By The Secretary Of The Army Apparatus for measuring eye gaze and fixation duration, and method therefor
US20050073136A1 (en) * 2002-10-15 2005-04-07 Volvo Technology Corporation Method and arrangement for interpreting a subjects head and eye activity
US20040240709A1 (en) 2003-04-22 2004-12-02 Garth Shoemaker Method and system for controlling detail-in-context lenses through eye and position tracking
US7532230B2 (en) 2004-01-29 2009-05-12 Hewlett-Packard Development Company, L.P. Method and system for communicating gaze in an immersive virtual environment
US20090153796A1 (en) * 2005-09-02 2009-06-18 Arthur Rabner Multi-functional optometric-ophthalmic system for testing, diagnosing, or treating vision or eyes of a subject, and methodologies thereof
US20110109880A1 (en) * 2006-01-26 2011-05-12 Ville Nummela Eye Tracker Device
US20100033333A1 (en) * 2006-06-11 2010-02-11 Volvo Technology Corp Method and apparatus for determining and analyzing a location of visual interest
US20100118141A1 (en) * 2007-02-02 2010-05-13 Binocle Control method based on a voluntary ocular signal particularly for filming
US20100201621A1 (en) * 2007-08-07 2010-08-12 Osaka Electro-Communication University Moving object detecting apparatus, moving object detecting method, pointing device, and storage medium
US20110019874A1 (en) * 2008-02-14 2011-01-27 Nokia Corporation Device and method for determining gaze direction
US20150277123A1 (en) * 2008-04-06 2015-10-01 David Chaum Near to eye display and appliance
US8957835B2 (en) 2008-09-30 2015-02-17 Apple Inc. Head-mounted display apparatus for retaining a portable electronic device with display
US20120194550A1 (en) * 2010-02-28 2012-08-02 Osterhout Group, Inc. Sensor-based command and control of external devices with feedback from the external device to the ar glasses
US20120212484A1 (en) * 2010-02-28 2012-08-23 Osterhout Group, Inc. System and method for display content placement using distance and location information
US20130156265A1 (en) * 2010-08-16 2013-06-20 Tandemlaunch Technologies Inc. System and Method for Analyzing Three-Dimensional (3D) Media Content
US8913790B2 (en) 2010-08-16 2014-12-16 Mirametrix Inc. System and method for analyzing three-dimensional (3D) media content
US9454225B2 (en) 2011-02-09 2016-09-27 Apple Inc. Gaze-based display control
US9170645B2 (en) * 2011-05-16 2015-10-27 Samsung Electronics Co., Ltd. Method and apparatus for processing input in mobile terminal
US20120300061A1 (en) * 2011-05-25 2012-11-29 Sony Computer Entertainment Inc. Eye Gaze to Alter Device Behavior
US8885882B1 (en) * 2011-07-14 2014-11-11 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction
US20130050432A1 (en) * 2011-08-30 2013-02-28 Kathryn Stone Perez Enhancing an object of interest in a see-through, mixed reality display device
US20150049114A1 (en) * 2011-09-30 2015-02-19 Kevin A. Geisner Exercising applications for personal audio/visual system
US8235529B1 (en) * 2011-11-30 2012-08-07 Google Inc. Unlocking a screen using eye tracking information
US20130147686A1 (en) * 2011-12-12 2013-06-13 John Clavin Connecting Head Mounted Displays To External Displays And Other Communication Networks
US20130169532A1 (en) * 2011-12-29 2013-07-04 Grinbath, Llc System and Method of Moving a Cursor Based on Changes in Pupil Position
US20130176533A1 (en) * 2012-01-06 2013-07-11 Hayes Solos Raffle Structured Light for Eye-Tracking
US20130241805A1 (en) * 2012-03-15 2013-09-19 Google Inc. Using Convergence Angle to Select Among Different UI Elements
US20130293580A1 (en) * 2012-05-01 2013-11-07 Zambala Lllp System and method for selecting targets in an augmented reality environment
US20130300635A1 (en) * 2012-05-09 2013-11-14 Nokia Corporation Method and apparatus for providing focus correction of displayed information
US9035955B2 (en) 2012-05-16 2015-05-19 Microsoft Technology Licensing, Llc Synchronizing virtual actor's performances to a speaker's voice
US20140002442A1 (en) * 2012-06-29 2014-01-02 Mathew J. Lamb Mechanism to give holographic objects saliency in multiple spaces
US20140002444A1 (en) * 2012-06-29 2014-01-02 Darren Bennett Configuring an interaction zone within an augmented reality environment
US20150288944A1 (en) * 2012-09-03 2015-10-08 SensoMotoric Instruments Gesellschaft für innovative Sensorik mbH Head mounted system and method to compute and render a stream of digital images using a head mounted display
US20140361984A1 (en) * 2013-06-11 2014-12-11 Samsung Electronics Co., Ltd. Visibility improvement method based on eye tracking, machine-readable storage medium and electronic device
US20140380230A1 (en) * 2013-06-25 2014-12-25 Morgan Kolya Venable Selecting user interface elements via position signal
US20150248169A1 (en) 2013-07-12 2015-09-03 Magic Leap, Inc. Method and system for generating a virtual user interface related to a physical entity
US20150091943A1 (en) * 2013-09-30 2015-04-02 Lg Electronics Inc. Wearable display device and method for controlling layer in the same
US20150091780A1 (en) 2013-10-02 2015-04-02 Philip Scott Lyren Wearable Electronic Device
US20160239081A1 (en) 2013-11-01 2016-08-18 Sony Corporation Information processing device, information processing method, and program
US20150185475A1 (en) * 2013-12-26 2015-07-02 Pasi Saarikko Eye tracking apparatus, method and system
US20150192992A1 (en) * 2014-01-03 2015-07-09 Harman International Industries, Inc. Eye vergence detection on a display
US9442567B2 (en) 2014-01-23 2016-09-13 Microsoft Technology Licensing, Llc Gaze swipe selection
US20150339468A1 (en) * 2014-05-23 2015-11-26 Samsung Electronics Co., Ltd. Method and apparatus for user authentication
US20160026242A1 (en) * 2014-07-25 2016-01-28 Aaron Burns Gaze-based object placement within a virtual reality environment
US20160093113A1 (en) * 2014-09-30 2016-03-31 Shenzhen Estar Technology Group Co., Ltd. 3d holographic virtual object display controlling method based on human-eye tracking
US20160179336A1 (en) 2014-12-19 2016-06-23 Anthony Ambrus Assisted object placement in a three-dimensional visualization system
US20160260258A1 (en) * 2014-12-23 2016-09-08 Meta Company Apparatuses, methods and systems coupling visual accommodation and visual convergence to the same plane at any depth of an object of interest
US20160246384A1 (en) 2015-02-25 2016-08-25 Brian Mullins Visual gestures for a head mounted device
US20170131764A1 (en) * 2015-10-26 2017-05-11 Pillantas Inc. Systems and methods for eye vergence control

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Crossman, et al., "Feedback control of hand-movement and Fitts' Law", 1983, The Quarterly Journal of Experimental Psychology Section A: Human Experimental Psychology, 35:2, 251-278.
Definition of Binocular Fixation, Mosby's Medical Dictionary, https://medical-dictionary.thefreedictionary.com/binocular+fixation (Year: 2009). *
Hoffman, et al., "Vergence-accommodation conflicts hinder visual performance and cause visual fatigue", Journal of Vision, Mar. 2008, vol. 8, 33.
PCT/US16/58944, International Search Report and Written Opinion, dated Jan. 13, 2017.
Ranney, "Mice get under foot", InfoWorld, Aug. 19, 1985.
Rolland, et al., "Head-mounted display systems", Encyclopedia of Optical Engineering DOI: 10.1081/E-EOE-120009801, Taylor & Francis, London, available as early as Jan. 1, 2005, 14 pages.
Yucha et al., "Evidence-based practice in biofeedback and neurofeedback", 2008, Association for Applied Psychophysiology and Biofeedback.
Zangemeister et al., "Active head rotations and eye-head coordination", 1981, Annals of the New York Academy of Sciences, pp. 540-559.

Also Published As

Publication number Publication date
EP3369091A1 (en) 2018-09-05
US20170131764A1 (en) 2017-05-11
EP3369091A4 (en) 2019-04-24
WO2017075100A1 (en) 2017-05-04

Similar Documents

Publication Publication Date Title
US7369101B2 (en) Calibrating real and virtual views
US9030532B2 (en) Stereoscopic image display
KR101789357B1 (en) Automatic focus improvement for augmented reality displays
US9213405B2 (en) Comprehension and intent-based content for augmented reality displays
KR101960980B1 (en) Optimized focal area for augmented reality displays
US9891435B2 (en) Apparatus, systems and methods for providing motion tracking using a personal viewing device
EP2638693B1 (en) Automatic variable virtual focus for augmented reality displays
TWI597623B (en) Wearable behavior-based vision system
US9380287B2 (en) Head mounted system and method to compute and render a stream of digital images using a head mounted display
Drascic et al. Perceptual issues in augmented reality
CA2545202C (en) Method and apparatus for calibration-free eye tracking
EP3014338B1 (en) Tracking head movement when wearing mobile device
US9489044B2 (en) Visual stabilization system for head-mounted displays
KR20150086388A (en) People-triggered holographic reminders
US9122321B2 (en) Collaboration environment using see through displays
JP6144681B2 (en) Head mounted display with iris scan profiling function
US9372348B2 (en) Apparatus and method for a bioptic real time video system
US8743187B2 (en) Three-dimensional (3D) imaging based on motion parallax
US20130342572A1 (en) Control of displayed content in virtual environments
EP3097460B1 (en) Gaze swipe selection
US7626569B2 (en) Movable audio/video communication interface system
US20130328925A1 (en) Object focus in a mixed reality environment
US20140002442A1 (en) Mechanism to give holographic objects saliency in multiple spaces
CN104067160B (en) Method for centering image content in a display screen using eye tracking
JP2016507805A (en) Direct interaction system for mixed reality environments

Legal Events

Date Code Title Description
AS Assignment

Owner name: PILLANTAS INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOGNAR, BOTOND;MOLLER, MATTHEW D;ELLIS, STEPHEN R;REEL/FRAME:040213/0718

Effective date: 20161102

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE