US20140247210A1 - Zonal gaze driven interaction - Google Patents
Zonal gaze driven interaction
- Publication number: US20140247210A1 (application US 14/195,755)
- Authority
- US
- United States
- Prior art keywords
- contact
- computer
- user
- gaze
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F3/013—Eye tracking input arrangements
- G06F3/02—Input arrangements using manually operated switches, e.g. using keyboards or dials
- G06F3/0227—Cooperation and interconnection of the input arrangement with other functional units of a computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/0484—Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
- G06F3/0485—Scrolling or panning
Definitions
- the present disclosure relates to human-computer interaction generally and more specifically to gaze detection.
- Human-computer interaction generally relates to the input of information to and control of a computer by a user.
- Many common and popular computer programs and operating systems have been developed to function primarily with input methods involving physical contact or manipulation (e.g., a mouse or a keyboard). This type of physical input method is referred to herein as contact-required input. It can be difficult for people who desire to use non-contact input methods to interact with these computer programs and operating systems to their full potential. Some people must use non-contact input methods for various reasons (e.g., because of an injury or disability).
- An example of a non-contact input device is an eye tracking device such as that described in U.S. Pat. No. 7,572,008.
- Eye tracking devices can operate on the principle of illuminating an eye with infrared light and utilizing an image sensor to detect the reflection of the light off the eye.
- a processor can use the image sensor's data to calculate the direction of a user's gaze.
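The patent does not detail the gaze-computation step. One common approach in such trackers is to fit a calibration mapping from image-sensor features (e.g., the pupil-to-glint vector) to display coordinates. The sketch below is purely illustrative and not taken from the patent; the polynomial feature set and the nine-point calibration mentioned in the comments are assumptions.

```python
import numpy as np

# Hypothetical sketch: map a pupil-minus-glint vector (from the image
# sensor) to a screen coordinate using a second-order polynomial fitted
# during calibration. Real trackers (e.g., U.S. Pat. No. 7,572,008)
# are considerably more involved.

def features(v):
    x, y = v
    return np.array([1.0, x, y, x * y, x * x, y * y])

def fit_calibration(eye_vectors, screen_points):
    """Least-squares fit from calibration samples (e.g., nine points)."""
    A = np.array([features(v) for v in eye_vectors])   # shape (n, 6)
    B = np.array(screen_points)                        # shape (n, 2)
    coeffs, *_ = np.linalg.lstsq(A, B, rcond=None)
    return coeffs                                      # shape (6, 2)

def gaze_target(coeffs, eye_vector):
    return features(eye_vector) @ coeffs               # (x, y) on the display
```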
- a form of human-computer interaction is touch-based interaction on a computer, tablet, phone or the like, whereby a user interacts with the device by touching and by performing gestures (e.g., multi-finger gestures) on a touch-sensitive device (e.g., a touchscreen).
- This and other forms of user interaction require a very physical connection between the device and the user, often requiring multiple points of physical contact between the user and the touch-sensitive device (e.g., for multi-finger gestures).
- Some interaction methods are not intuitive and the user may not know for sure if the eye tracking is functioning or the exact location of the cursor. Some interaction methods result in a cognitive disruption whereby after the user has triggered a movement of a cursor, the user must anticipate the future location of the cursor and adjust accordingly.
- Embodiments of the present disclosure include computer systems that can be controlled with non-contact inputs through zonal control.
- a non-contact input tracks a non-contact action performed by a user.
- a computer's display, and beyond, can be separated into a number of discrete zones according to a configuration. Each zone is associated with a computer function. The zones and/or their functions can, but need not, be indicated to the user.
- the user can perform the various computer functions by performing non-contact actions detected by the non-contact input.
- upon indicating a desired zone associated with a particular function, the user can provide an activation signal of intent.
- the activation signal of intent can be a contact-required or non-contact action, such as a button press or dwelling gaze, respectively.
- the computer system can use the indicated zone (e.g., indicated by the user's non-contact actions) to perform the function associated with that zone.
- Embodiments of the present disclosure include a computer system that can be controlled with non-contact inputs, such as eye-tracking devices.
- a visual indicator can be presented on a display to indicate the location where a computer function will take place (e.g., a common cursor).
- the visual indicator can be moved to a gaze target in response to continued detection of an action (e.g., touchpad touch) by a user for a predetermined period of time.
- the delay between the action and the movement of the visual indicator provides an opportunity to provide an indication to the user where the visual indicator will be located after a movement, allowing for less of a cognitive disruption after the visual indicator has appeared at a new location.
- the delay can also allow a user time to “abort” movement of the visual indicator.
- the visual indicator can be controlled with additional precision as the user moves gaze while continuing the action (e.g., continued holding of the touchpad).
- Embodiments of the present disclosure include a computer system that can be controlled with non-contact inputs, such as eye-tracking devices.
- a computer can enlarge a portion of a display adjacent a first gaze target in response to detecting a first action (e.g., pressing a touchpad).
- the computer can then allow a user to position a second gaze target in the enlarged portion (e.g., by looking at the desired location) and perform a second action in order to perform a computer function at that location.
- the enlarging can allow a user to identify a desired location for a computer function (e.g., selecting an icon) with greater precision.
- Embodiments of the present disclosure include a computer system that can be controlled with non-contact inputs, such as eye-tracking devices.
- Various combinations of non-contact actions and contact-required actions can be performed to cause a computer to perform certain computer functions.
- Functions can include scroll functions, movements of visual indicators, zooming of the display, and selecting further functions to perform.
- Combinations of non-contact actions and contact-required actions can include pressing buttons and/or touching touch-sensitive devices while looking at certain places on or off of a display.
- FIG. 1 is a schematic representation of a computer system incorporating non-contact inputs according to certain embodiments.
- FIG. 2A is a graphical depiction of a display as rendered or presented on the display device of FIG. 1 according to certain embodiments.
- FIG. 2B is a graphical depiction of the display of FIG. 2A in zonal control mode with a first configuration according to certain embodiments.
- FIG. 3 is a flow chart depicting a process for zonal control according to certain embodiments.
- FIG. 4 is a graphical depiction of a display as rendered or presented on the display device of FIG. 1 while in zonal control mode with a second configuration according to certain embodiments.
- FIG. 5 is a graphical depiction of a display with a visual indicator according to certain embodiments.
- FIG. 6 is a flow chart diagram of delay warp as performed by a computer according to certain embodiments.
- FIG. 7A is a flow chart depicting a multi-step click functionality according to some embodiments.
- FIG. 7B is a flow chart depicting a multi-step click functionality according to some embodiments.
- FIG. 8 is a graphical depiction of a display according to certain embodiments.
- FIG. 9 is a graphical depiction of a display according to certain embodiments.
- FIG. 10 is a graphical depiction of a menu according to certain embodiments.
- FIG. 11 is a graphical depiction of a display according to certain embodiments.
- FIG. 12 is a graphical depiction of a display according to certain embodiments.
- FIG. 13 is a graphical depiction of a display according to certain embodiments.
- FIG. 14 is a graphical depiction of a display according to certain embodiments.
- FIG. 15 is a graphical depiction of a display according to certain embodiments.
- FIG. 16 is a graphical depiction of a display according to certain embodiments.
- FIG. 17 is a graphical depiction of a display according to certain embodiments.
- FIG. 18 is a graphical depiction of a display according to certain embodiments.
- FIG. 19 is a graphical depiction of a display according to certain embodiments.
- FIG. 20 is a graphical depiction of a display according to certain embodiments.
- FIG. 21 is a graphical depiction of a display according to certain embodiments.
- FIG. 22A is a graphical depiction of a display according to certain embodiments.
- FIG. 22B is a graphical depiction of the display of FIG. 22A showing a menu according to certain embodiments.
- FIG. 22C is a graphical depiction of the display of FIG. 22A showing a menu according to certain embodiments.
- FIG. 23 is a graphical depiction of a display according to certain embodiments.
- FIG. 24A is a flow chart of a non-contact action according to certain embodiments.
- FIG. 24B is a flow chart of a non-contact action according to certain embodiments.
- FIG. 24C is a flow chart of a contact-required action according to certain embodiments.
- FIG. 24D is a flow chart of a non-contact action according to certain embodiments.
- FIG. 25 is a flow chart of a delay warp 2500 with a visual marker according to certain embodiments.
- FIG. 26 is a flow chart of a delay warp 2600 without a visual marker according to certain embodiments.
- FIG. 27 is a flow chart of a delay warp 2700 with a hidden visual indicator according to certain embodiments.
- FIG. 28 is a flow chart of a delay warp 2800 according to certain embodiments.
- FIG. 29 is a flow chart depicting a two-step click 2900 according to certain embodiments.
- a computer system can be controlled with non-contact inputs through zonal control.
- a non-contact input that is an eye-tracking device is used to track the gaze of a user.
- a computer's display can be separated into a number of discrete zones according to a configuration. Each zone is associated with a computer function. The zones and/or their functions can, but need not, be indicated to the user.
- the user can perform the various functions by moving gaze towards the zone associated with that function and providing an activation signal of intent.
- the activation signal of intent can be a contact-required or non-contact action such as a button press or dwelling gaze, respectively.
- a computer system can implement a delay warp when being controlled with non-contact inputs.
- a cursor can be presented on a display to indicate the location where a computer function will occur upon a further action (e.g., a click).
- the cursor can be moved to a gaze target in response to continued detection of an action (e.g., continued touching of a touchpad) by a user for a predetermined period of time.
- the delay between the action and the movement of the cursor provides an opportunity to provide an indication to the user where the visual indicator will be located after a movement, allowing for less of a cognitive disruption after the visual indicator has appeared at a new location.
- the delay gives a user an opportunity to “abort” movement of the cursor.
- the cursor can be further controlled with additional precision as the user moves gaze, moves a mouse, or swipes a touchpad while continuing the action (e.g., continued holding of the touchpad).
- a computer system can allow for increased certainty and precision when targeting elements through non-contact inputs.
- a user can look at a group of elements and perform an action. If the computer cannot determine with certainty which element is targeted by the user, the computer can enlarge and/or separate the elements and allow the user to further focus gaze on the desired element, whereupon performing a second action, the computer will perform the desired function (e.g., selecting an icon) upon the targeted element.
- the desired function e.g., selecting an icon
- a computer system can be controlled through various combinations of non-contact actions and contact-required actions. Scrolling, cursor movements, zooming, and other functions can be controlled through combinations of non-contact actions and/or contact-required actions. Such combinations can include pressing buttons and/or touching touch-sensitive devices while looking at certain places on or off of a display.
- FIG. 1 is a schematic representation of a computer system 100 incorporating non-contact inputs 106 according to certain embodiments.
- the computer system 100 (hereinafter, “computer”) can be implemented in a single housing (e.g., a tablet computer), or can be implemented in several housings connected together by appropriate power and/or data cables (e.g., a standard desktop computer with a monitor, keyboard, and other devices connected to the main housing containing the desktop computer's CPU).
- any reference to an element existing “in” the computer 100 indicates the element is a part of the computer system 100 , rather than physically within a certain housing.
- the computer 100 can include a processor 102 connected to or otherwise in communication with a display device 104 , a non-contact input 106 , and a contact-required input 108 .
- the processor 102 can include a non-contact interpreter 112 , as described in further detail below.
- the term processor 102 refers to one or more individual processors within the computer system, individually or as a group, as appropriate.
- the computer 100 can include programming 116 stored on permanent, rewritable, or transient memory that enables the processor 102 to perform the functionality described herein, including zonal control, delay warp, and two-step click, as well as other functionality.
- the programming when executed by the processor 102 , causes the processor 102 to perform operations described herein.
- the programming may comprise processor-specific programming generated by a compiler and/or an interpreter from code written in any suitable computer-programming language.
- suitable computer-programming languages include C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, Action script, and the like.
- the memory may be a computer-readable medium such as (but not limited to) an electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions.
- Non-limiting examples of such optical, magnetic, or other storage devices include read-only (“ROM”) device(s), random-access memory (“RAM”) device(s), magnetic disk(s), magnetic tape(s) or other magnetic storage, memory chip(s), an ASIC, configured processor(s), optical storage device(s), floppy disk(s), CD-ROM, DVD, or any other medium from which a computer processor can read instructions.
- Contact-required inputs 108 can be any device for accepting user input that requires physical manipulation or physical contact (hereinafter, “contact-required actions”). Examples of contact-required inputs 108 include keyboards, mice, switches, buttons, touchpads, touchscreens, touch-sensitive devices, and other inputs that require physical manipulation or physical contact. Examples of contact-required actions include tapping, clicking, swiping, pressing (e.g., a key), and others.
- contact-required actions further include actions that require either physical contact through another device (e.g., using a touchscreen with a stylus) or close proximity to the contact-required input (e.g., hovering or swiping a finger above a touchscreen that responds to fingers in close proximity, such as a projected capacitance touchscreen).
- Signals generated from a user performing contact-required actions as received by contact-required inputs 108 are referred to as contact-based signals 110 .
- references to contact-required actions can include combinations of contact-required actions (e.g., holding a first button while pressing a second button).
- Non-contact inputs 106 can be any device capable of receiving user input without physical manipulation or physical contact. Examples of non-contact inputs 106 include eye-tracking devices, microphones, cameras, light sensors, and others. When a user performs an action detectable by a non-contact input 106 (hereinafter, “non-contact action”), the non-contact interpreter 112 generates a non-contact signal 114 based on the non-contact action performed by the user.
- Non-contact actions can include moving gaze (e.g., moving the direction of the gaze of one or more eyes), gaze saccade, fixating gaze, dwelling gaze (e.g., fixating gaze substantially at a single target for a predetermined length of time), a blink (e.g., closing one or both eyes for a detectable length of time), and others.
- the non-contact interpreter 112 can send different non-contact signals 114 .
- a user moving gaze can result in a first non-contact signal 114 containing information about the movement and/or new direction of gaze, while a user blinking can result in a non-contact signal 114 indicative that the user blinked.
- the non-contact signal 114 can be used by the processor 102 to perform various tasks, as described in further detail below.
- references to non-contact actions can include combinations of non-contact actions (e.g., blinking while saying “click”).
- references to an action can include combinations of contact-required actions and non-contact actions (e.g., holding a button while saying “click”).
- the processor 102 can use non-contact signals 114 to emulate contact-required signals 110 .
- the processor 102 can use non-contact signals 114 containing information about a user moving gaze to a first target (e.g., computer icon) and dwelling gaze on that target in order to emulate contact-required signals 110 of moving a cursor to the first target and clicking on that target.
- Computer functions can be any type of action performable on a computer, such as a mouse click; a scroll/pan action; a magnifying action; a zoom action; a touch input/action; a gesture input/action; a voice command; a call-up of a menu; the activation of Eye Tracking/Eye Control/Gaze interaction; the pausing of Eye Tracking/Eye Control/Gaze interaction; adjusting the brightness of the backlight of the device; activating sleep, hibernate, or other power saving mode of the device; resuming from sleep, hibernate, or other power saving mode of the device; or others.
- the computer functions are emulated such that the computer 100 behaves as if it is detecting solely a contact-required action.
- FIG. 2A is a graphical depiction of a display 200 as rendered or presented on a display device 104 according to certain embodiments. While presented with reference to the particular operating system shown in FIG. 2A , the embodiments and disclosure herein can be easily adapted to other operating systems (e.g., Microsoft Windows®, Apple iOS®, or Google Android®) and computer programs to perform various functions thereof.
- the operating system of FIG. 2A includes various computer functionality available through contact-required inputs 108 (e.g., touchscreen taps and gestures). Table 1, below, describes several of these functions and examples of associated contact-required actions to perform each function.
- a computer 100 can use detected non-contact actions (e.g., movement of gaze and fixation on target) as instruction to perform various computer functions.
- the computer can perform a computer function instigated by a non-contact action by either performing the computer function directly (e.g., opening the apps list) or by emulating the contact-required action associated with the computer function (e.g., emulating a swipe up from the center of the Start screen).
- FIG. 2B is a graphical depiction of the display 200 of FIG. 2A in zonal control mode 206 according to certain embodiments.
- the display 200 is separated into a first configuration 204 of eight zones 202 , however any number of zones 202 and any different configuration can be used.
- the zones 202 and/or lines between the zones 202 can, but need not, be presented to the user on the display device 104 (e.g., highlighting of a zone 202 or lines separating the zones 202 ).
- Zones 202 are used to enable non-contact control of the computer 100 .
- Each zone 202 can be associated with a particular computer function.
- the zones 202 are divided and/or located such that a “dead zone” of no functionality exists between some or all of the zones 202 , to ensure that measurement errors and/or data noise do not cause undesired effects.
- hysteresis can be used to avoid inadvertently selecting an undesired function (e.g., by increasing the boundaries of a zone 202 while the gaze target is in the zone 202 or by introducing a certain amount of delay when the gaze target moves out of a particular zone 202 , before altering any performed function).
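As an illustration of the dead-zone and hysteresis behavior just described, the following Python sketch assumes axis-aligned rectangular zones and an illustrative 40-pixel hysteresis margin; neither the zone shape nor the margin value is specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str          # e.g., "open_apps_list" (illustrative binding)
    x0: float; y0: float; x1: float; y1: float

    def contains(self, x, y, grow=0.0):
        return (self.x0 - grow <= x <= self.x1 + grow and
                self.y0 - grow <= y <= self.y1 + grow)

HYSTERESIS = 40.0  # pixels; an assumed margin by which a zone grows

def active_zone(zones, gaze, previous):
    """Hit-test with dead zones and hysteresis.

    Gaps between zone rectangles act as dead zones (no function).
    The previously active zone is tested with enlarged bounds so
    noisy gaze samples near a boundary do not flicker out of it.
    """
    x, y = gaze
    if previous is not None and previous.contains(x, y, grow=HYSTERESIS):
        return previous
    for zone in zones:
        if zone.contains(x, y):
            return zone
    return None  # gaze is in a dead zone
```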
- the computer function associated with each zone 202 can, but need not, be presented to the user on the display device 104 (e.g., a text box).
- the computer 100 can include a non-contact input 106 that is an eye-tracking device.
- An eye-tracking device can detect eye indications of a user.
- eye indications is inclusive of detecting the direction of a user's gaze, detecting changes in the direction of a user's gaze (e.g., eye movement), detecting blinking of one or both eyes, and detecting other information from a user's eye or eyes.
- a non-contact target 208 is a computed location of where a user's non-contact action is directed.
- the non-contact target 208 can be graphically represented on the display, such as by a symbol shown in FIG. 2B .
- the non-contact target 208 is a gaze target, or the point where the user's gaze is directed.
- the non-contact target 208 can be indicated by 3-D gestures, facial orientation, or other non-contact actions.
- the non-contact target 208 can, but need not, be depicted in some fashion on the display 200 (e.g., presenting a symbol at the non-contact target or highlighting elements or zones at or near the non-contact target).
- a zone 202 can be located outside of the display 200 and the non-contact target 208 need not be constrained to the display 200 of the computer 100 .
- a zone 202 can be located a distance to the left of a display device 104 and can be associated with a certain computer function. The user can perform the function in that zone 202 , as described in further detail below, by focusing gaze to the left of the display device 104 . Determination of a zone 202 outside of the display 200 can occur via an imaging device forming part of an eye tracking device or a separate imaging device.
- a user's gaze is determined to be directed towards an area outside of the display 200 , the direction of the gaze can be determined as herein described and if the gaze target falls within the bounds of a zone 202 outside of the display 200 , an appropriate function can be performed as further described herein.
- statistical analysis can be applied to the detected gaze target and/or detected movements of the gaze target in order to determine whether the gaze target is in a particular zone 202 .
- a lockout time can be implemented whereby if a user activates a function associated with a zone 202 , the function associated with the zone 202 cannot be activated (e.g., in the same manner or in a different manner) until the expiration of a certain length of time.
- the size of the zone 202 decreases for a period of time such that a more deliberate gaze by the user is required to activate the function associated with that particular zone 202 again.
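The lockout and zone-shrink behaviors might be layered on top of the same hit test. In the hypothetical sketch below, the lockout duration and shrink factor are invented values, and shrinking is expressed as a size multiplier a caller would apply to the zone's hit-test bounds.

```python
import time

LOCKOUT_S = 2.0   # illustrative lockout duration (not from the patent)
SHRUNK = 0.5      # zones shrink to half size while locked out (illustrative)

class ZoneLockout:
    """Dampen re-activation of a just-used zone."""
    def __init__(self):
        self._fired = {}   # zone name -> monotonic time of last activation

    def on_activated(self, zone_name):
        self._fired[zone_name] = time.monotonic()

    def scale(self, zone_name):
        """Size multiplier for the zone's hit test: smaller while locked
        out, so a more deliberate gaze is needed to re-activate it.
        (A strict lockout variant could return 0.0 instead.)"""
        last = self._fired.get(zone_name)
        if last is not None and time.monotonic() - last < LOCKOUT_S:
            return SHRUNK
        return 1.0
```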
- FIG. 3 is a flow chart depicting a process 300 for zonal control according to certain embodiments. With reference to FIGS. 2B and 3 , one embodiment of zonal control is discussed below.
- the user can generate a mode-enable signal of intent 310 .
- the mode-enable signal of intent 310 can be generated by the user performing a non-contact action or a contact-required action.
- the computer 100 detects the mode-enable signal of intent 310 at block 302 and enters a zonal control mode 206 . During zone control mode 206 , the computer 100 tracks the non-contact target 208 at block 304 .
- the computer 100 detects an activation signal of intent 312 at block 306
- the computer 100 at block 308 , then performs a computer function associated with the zone 202 in which the non-contact target 208 is located at the time of the activation signal of intent 312 .
- the activation signal of intent 312 can, but need not, be the same type of signal of intent as the mode-enable signal of intent 310 .
- Examples of signals of intent include contact-required actions and non-contact actions.
- the computer 100 is always in a zonal control mode 206 (i.e., no separate mode-enable signal of intent 310 is necessary), whereupon receiving an activation signal of intent 312 , the computer 100 will perform the function associated with the zone 202 in which the non-contact target 208 is located at the time of the activation signal of intent 312 .
- the computer 100 can, but need not, provide visual feedback that an activation signal of intent 312 was received, that the non-contact target 208 was in a particular zone 202 , and/or that a particular function was activated.
- a user can speak out loud “zone mode” (i.e., perform a non-contact action) to generate the mode-enable signal of intent 310 .
- the computer 100 enters zonal control mode 206 .
- the user can then focus gaze somewhere in the zone 202 associated with the computer function Open Apps List 210 .
- the user can dwell gaze (i.e., perform a non-contact action) over the zone 202 associated with the computer function Open Apps List 210 for a predetermined amount of time (e.g., 3 seconds) to generate an activation signal of intent 312 .
- the computer 100 can detect that the user is dwelling gaze at least by detecting that the non-contact target 208 that is a gaze target dwells at a location (e.g., does not move substantially or moves only as much as expected for a user attempting to look at the same location) for a predetermined amount of time.
- the computer 100 can perform the Open Apps List 210 function (e.g., by directly performing the function or by simulating a touchscreen swipe up from the center of the Start screen). The computer 100 can then exit out of zonal control mode 206 .
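Tying process 300 together, a hypothetical event loop is sketched below. The `tracker.next_gaze()` interface and the `functions` mapping are assumptions, the 3-second dwell comes from the example above, the dwell radius is illustrative, and `active_zone` refers to the earlier zone hit-test sketch.

```python
import time

DWELL_S = 3.0         # dwell time from the example above
DWELL_RADIUS = 50.0   # max gaze wander (pixels) still counted as a dwell

def run_zonal_control(tracker, zones, functions):
    """Process 300: track gaze, detect a dwell, run the zone's function.

    `tracker.next_gaze()` and the `functions` mapping (e.g.,
    {"open_apps_list": open_apps_list}) are assumed interfaces.
    """
    dwell_start, anchor, previous = None, None, None
    while True:
        gaze = tracker.next_gaze()                 # block 304: track target
        zone = active_zone(zones, gaze, previous)  # earlier sketch
        previous = zone
        if zone is None or anchor is None or _dist(gaze, anchor) > DWELL_RADIUS:
            dwell_start, anchor = time.monotonic(), gaze   # restart dwell
            continue
        if time.monotonic() - dwell_start >= DWELL_S:   # block 306: activation
            functions[zone.name]()                      # block 308: perform
            return                                      # exit zonal control mode

def _dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```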
- a user can focus gaze somewhere in the zone 202 associated with the computer function Display Charms 222 .
- the user can depress a hardware button (i.e., perform a contact-required action) to generate an activation signal of intent 312 .
- the computer 100 can perform the Display Charms 222 function.
- no mode-enable signal of intent 310 is necessary.
- Graphical representations of the zones 202 may disappear upon an action by a user, or after a predetermined period of time. In the absence of graphical representations of the zones 202 , the zonal control mode 206 still functions as described.
- Some examples of possible computer functions associated with potential zones 202 include Open App Bar 214 , Move App Bar 216 , Hide App Bar 218 , Previous App 220 , Split Window/Close App 224 , and others.
- a signal of intent can be any non-contact action detectable by the computer 100 .
- a signal of intent can be any contact-required action detectable by the computer 100 .
- a signal of intent can be selection and/or activation of an icon in an input menu.
- the shape of the zones 202 and computer functions associated with each zone 202 can change depending on the state of the computer 100 .
- the new window that appears can include different zones 202 with different computer functions associated therewith.
- FIG. 4 is a graphical depiction of a display 400 as rendered or presented on the display device 104 of FIG. 1 while in zonal control mode 206 with a second configuration 402 according to certain embodiments.
- the display 200 is separated into a second configuration 402 of zones 202 .
- the second configuration 402 of zones 202 can be associated with the state of the computer 100 after the Open Apps List 210 function has been performed.
- zones 202 associated with various computer functions, including start 212 , display charms 222 , scroll right 404 , scroll left 406 , zoom out 408 , and zoom in 410 .
- the second configuration 402 need not be dependent on a new screen being displayed on the display 200 , but can be associated with any state of the computer 100 .
- the zoom in 410 and scroll left 406 zones 202 may not be a part of the second configuration 402 until needed (e.g., until a zoom out 408 or scroll right 404 function has been performed, respectively).
- the zones 202 in the second configuration 402 can otherwise perform similarly to the zones 202 of the first configuration 204 .
- zones 202 can overlap, such that multiple computer functions are performed simultaneously from activation of the zones 202 when the gaze target is within both zones 202 .
- the overlapping zones 202 were associated with scrolling functions, such as a scroll up function and a scroll right function, activation of the zones 202 (e.g., by moving a gaze target into the zones) can result in the window scrolling diagonally up and to the right.
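One way the overlap could be realized is by summing per-zone scroll vectors, as in this illustrative fragment (zone names and vectors are assumptions):

```python
# Illustrative: each zone carries a scroll vector; overlapping zones sum.
SCROLL_VECTORS = {"scroll_up": (0, -1), "scroll_right": (1, 0)}

def combined_scroll(active_zone_names):
    dx = sum(SCROLL_VECTORS[z][0] for z in active_zone_names if z in SCROLL_VECTORS)
    dy = sum(SCROLL_VECTORS[z][1] for z in active_zone_names if z in SCROLL_VECTORS)
    return dx, dy   # e.g., {"scroll_up", "scroll_right"} -> (1, -1): diagonal
```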
- FIGS. 8-10 demonstrate an embodiment whereby a zone is in the form of a menu overlaid atop a computer display.
- the menu may appear or disappear based on a contact or non-contact input action.
- the menu comprises options for selection representing computer functions.
- the options may be selected by gazing at an item and providing an activation signal of intent.
- a user may gaze at an item representing a common computer function known as a “left click”; the user may then fixate on the item, thus providing the activation signal of intent.
- the computer will perform a “left click” at the next location at which the user fixates or provides another activation signal of intent. In this manner, the user may select the function he or she desires to execute, and then select the location upon the display at which the function is to be executed.
- a user can look at a first zone, then look at a location away from the first zone to perform a computer function associated with the zone in which the non-contact target was located. For example, a user can look at a menu item as described above, initiate an activation signal of intent, then look at an icon, and then initiate a second activation signal of intent.
- the computer 100 can determine the function to be performed on the icon based on where the user's gaze was located (i.e., at the zone).
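The menu interaction described above reduces to a small two-step sequence: the first activation signal of intent arms a function, and the second applies it at the new gaze target. A hypothetical sketch, reusing the `Zone` type from the earlier example and assuming a `tracker.next_activation()` interface:

```python
def menu_interaction(tracker, menu_zones, perform):
    """Arm a function from a gaze menu, then apply it at the next target.

    `tracker.next_activation()` is an assumed interface that returns the
    gaze target at the moment an activation signal of intent arrives
    (e.g., a fixation or a button press).
    """
    target = tracker.next_activation()               # first signal of intent
    armed = next((z.name for z in menu_zones
                  if z.contains(*target)), None)     # e.g., "left_click"
    if armed is None:
        return
    location = tracker.next_activation()             # second signal of intent
    perform(armed, location)                         # e.g., left click there
```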
- any function possible on a computer can be performed (e.g., directly performed or emulated) through the use of zonal control as described herein.
- performed functions can include those such as opening an application, navigating the display (e.g., navigating to a new page by swiping a finger across the display), zooming, hardware buttons (e.g., a home button or return button), multi-finger gestures, and others.
- Some single and multi-finger gestures can include tap, double-tap, long press, press and hold, touch and hold, slide, swipe, drag, press and drag, fling, flick, turn, rotate, scroll, pan, pinch, stretch, edge swipe/bezel swipe, and others.
- zonal control can be used to perform functions including mouse movement, mouse clicking (e.g., single click, double click, right click, or click and drag), keyboard presses, keyboard combinations, or other functions related to contact-required actions.
- the computer 100 can be controlled through one or both of the non-contact input 106 and the contact-required input 108 .
- FIG. 5 is a graphical depiction of a display 500 with a visual indicator 502 according to certain embodiments.
- the visual indicator 502 is used like a mouse cursor to help a user perform computer functions (e.g., clicking, dragging, and others).
- the visual indicator 502 is an indication of where the effect of an additional action (e.g., a mouse click, a touchpad tap, or other contact-required or non-contact action) will occur.
- the computer 100 can generate a visual indicator 502 on the display at an estimated location of the non-contact target 208 , as described above.
- the visual indicator 502 can be displayed at an estimated gaze target of the user.
- the estimated gaze target is calculated by an eye-tracking device or by a computer 100 using information from an eye-tracking device.
- the computer 100 contains programming 116 enabling the processor 102 to perform a delay warp, as described below.
- FIG. 6 is a flow chart diagram of delay warp 600 as performed by a computer 100 according to certain embodiments.
- a computer 100 can optionally perform a click according to input from a contact-required input, such as a computer mouse or a touchpad, at block 602 .
- a user can perform an action which is detected at block 606 .
- the computer 100 upon detecting an action at block 606 , can display a visual marker at an estimated gaze target of the user.
- the visual marker can be an indication of where the cursor will move, as described herein.
- the user can perform a contact-required action, such as touching a touch-sensitive device.
- the computer 100 can delay, for a predetermined amount of time, at block 610 . This delay can be utilized by the computer 100 to provide sufficient time for a user to alter action (e.g., decide not to move the cursor) and for the computer 100 to be certain of the user's intention.
- the computer 100 can move the visual indicator 502 to the gaze target at block 614 .
- otherwise, the computer 100 can do nothing, go back to having a contact-required input move the visual indicator 502 (e.g., cursor), or perform a click at the visual indicator 502 or move the visual indicator 502 back to its original location.
- the delay warp 600 additionally includes optional block 618 where the computer 100 determines whether the action is still detected (i.e., after the visual indicator 502 has moved to the gaze target). If the action is not still detected, such as if the user is no longer touching the touch-sensitive device, the computer 100 can perform additional functions as necessary (e.g., perform a “click” where the visual indicator 502 was last located, or do nothing) at path 620 . However, if the action is still detected at optional block 618 , the computer 100 can slowly move (e.g., with more precision) the visual indicator 502 (e.g., a cursor) according to movements of the user's gaze, a computer mouse, or a touchpad at optional block 622 .
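A condensed sketch of delay warp 600 follows; the `touchpad`, `tracker`, and `cursor` interfaces are hypothetical, and the 200 ms delay is the figure used in the example below.

```python
import time

WARP_DELAY_S = 0.2   # 200 ms, per the example below

def delay_warp(touchpad, tracker, cursor):
    """Delay warp (FIG. 6): move the cursor to the gaze target only if
    the touch is held for the full delay; releasing early aborts."""
    if not touchpad.is_touched():          # block 606: detect action
        return
    target = tracker.gaze_target()
    cursor.show_marker(target)             # preview where the cursor would go
    deadline = time.monotonic() + WARP_DELAY_S
    while time.monotonic() < deadline:     # block 610: delay
        if not touchpad.is_touched():      # user aborted the move
            cursor.hide_marker()
            return
        time.sleep(0.01)
    cursor.hide_marker()
    cursor.move_to(target)                 # block 614: warp to gaze target
    while touchpad.is_touched():           # blocks 618/622: fine adjustment
        cursor.nudge_toward(tracker.gaze_target())   # slow, precise follow
        time.sleep(0.01)
```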
- when a user desires to select an element 504 on a display 500 , such as an icon, the user can look at the icon and touch the touchpad for the predetermined period of time, such as 200 ms, after which the computer 100 will move the visual indicator 502 to the icon. If the user, before the predetermined period of time has elapsed, decides not to have the visual indicator 502 moved to the gaze target, the user can cease touching the touchpad.
- the user can touch the touchpad or click a mouse button and wait the predetermined period of time so that the visual indicator 502 moves to the gaze target. Thereafter, the user can continue touching the touchpad or holding the mouse button while moving gaze away from the visual indicator 502 (e.g., above, below, or to the side) in order to move the visual indicator 502 with fine-tune adjustments until the visual indicator 502 is in a desired location, at which point the user can cease touching the touchpad or holding the mouse button in order to perform an action at the desired location (e.g., click an icon).
- the user can touch the touchpad or click a mouse button and wait the predetermined period of time so that the visual indicator 502 moves to the gaze target. Thereafter, the user can continue touching the touchpad or holding the mouse button while moving the cursor with the touchpad or mouse in order to move the visual indicator 502 (e.g., cursor) with fine-tune adjustments until the visual indicator 502 is in a desired location, at which point the user can cease touching the touchpad or holding the mouse button in order to perform an action at the desired location (e.g., click an icon).
- a user can look at a desired screen element 504 (e.g., an icon, a window, or other graphical user interface (“GUI”) element) in order to direct the visual indicator 502 to that element 504 .
- the user can perform an additional action (e.g., tap a touchpad).
- the visual indicator 502 may not be regularly displayed, but as the user moves gaze around the display 500 , any elements 504 at or adjacent the gaze target can be highlighted or otherwise distinguished as being at or near the gaze target.
- a visual indicator 502 enables a user to see the effect of non-contact actions on the computer 100 before performing additional actions (e.g., non-contact actions or contact-required actions).
- when a user intends to move the visual indicator 502 or other graphical element 504 on a display, the user looks at the desired destination of the visual indicator 502 .
- the eye-tracking device calculates an estimated gaze target based on the user's gaze.
- the user then activates a non-contact input 106 or a contact-required input 108 , for example by tapping a touchpad.
- the computer 100 does not perform a computer function.
- the visual indicator 502 is shown on the display 500 at the estimated gaze target. This visual indicator 502 or a separate visual marker can then demonstrate to the user the location to which the visual indicator 502 will be moved. If the user determines to proceed, the visual indicator 502 will be moved after a predetermined period of time. The user can indicate a desire to proceed by initiating an action (e.g., a contact-required action such as moving an input device) or by simply waiting for the predetermined period of time to expire.
- the user may perform a specific action such as removing contact with the input device, tapping the input device or pressing and holding the input device.
- these actions cause the computer 100 to perform a specific function, such as tapping to open an icon, dragging such as dragging an item on a GUI, zooming upon an item, or others.
- Actions that are normally performed with an input device would be readily understood by a person skilled in the art.
- the user can determine that an adjustment is required in order to more accurately reflect the desired movement location of the visual indicator 502 .
- the user can gaze at a different location in order to change the gaze target, or the user can perform a small movement with an input device (e.g., move a computer mouse or move touch on a touchpad) to adjust the location of the visual indicator 502 after the visual indicator 502 has moved to the gaze target.
- a computer 100 includes a touchpad as a contact-required input device and an eye tracking device capable of determining a user's gaze target.
- the computer 100 utilizes input from both the touchpad and the eye tracking device to allow a user to navigate through user interfaces. Most frequently this is achieved through moving a visual indicator 502 on a display 500 .
- the computer 100 utilizes gesture type commands used on the touchpad, for example a swipe across the touchpad by a user to move to the next element 504 in a series, or a pinch gesture on the touchpad to zoom a displayed element 504 .
- the computer 100 delays performing a computer function for a predetermined period of time. During this period of time, a visual indicator 502 is shown at an estimated gaze target of the user. The estimated gaze target is calculated based on information from the eye tracking device.
- the computing system moves the location of the visual indicator 502 on the display 500 to the gaze target.
- the user can then move the visual indicator 502 by moving their finger on the touchpad.
- the user can perform another action during the predetermined period of time—such as the aforementioned gesture type commands, or simply remove their finger from the touchpad to cancel any action.
- the computer 100 can locate the visual indicator 502 at an element 504 in proximity to the actual gaze target.
- This element 504 can be a Graphical User Interface (GUI) object, such as a button, text box, menu, or the like.
- the computer 100 can determine which element 504 at which to locate the visual indicator 502 based on a weighting system whereby some elements 504 have predetermined weights higher than other elements 504 . For example, a button can have a higher weighting than a text box.
- the determination of which element 504 at which to place the visual indicator 502 can also consider proximity to the gaze target.
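The patent does not give a scoring rule for this weighting; one plausible rendering combines the per-type weight with gaze distance, e.g. picking the element that maximizes weight/(1 + distance). The weights and the scoring rule below are illustrative only.

```python
# Illustrative element weights: buttons attract the indicator more
# strongly than text boxes (values invented for the sketch).
TYPE_WEIGHTS = {"button": 2.0, "menu": 1.5, "text_box": 1.0}

def snap_target(elements, gaze):
    """Pick the element to receive the visual indicator.

    `elements` is a list of (kind, center_x, center_y) tuples; the
    score trades the element's weight off against gaze distance.
    """
    def score(el):
        kind, cx, cy = el
        d = ((cx - gaze[0]) ** 2 + (cy - gaze[1]) ** 2) ** 0.5
        return TYPE_WEIGHTS.get(kind, 1.0) / (1.0 + d)
    return max(elements, key=score, default=None)
```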
- the computer 100 can provide tactile or audible feedback indicating that a gaze target is able to be determined.
- the feedback will indicate to the user whether the system is functioning correctly and if not, it will allow the user to alter their behavior to accommodate the function or non-function of the eye tracking device.
- This feedback can be in the form of a touchpad providing haptic feedback when an estimated gaze position has been determined during a cursor movement procedure.
- FIG. 25 is a flow chart of a delay warp 2500 with a visual marker according to certain embodiments.
- the visual marker is an indication of where the visual indicator (e.g., cursor) might jump or “warp” under certain conditions.
- a contact-required action is detected, such as a user touching a touchpad.
- the computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2504 . In some embodiments, the delay can be 0 seconds (i.e., no delay).
- the computer 100 causes a visual marker to be displayed at the non-contact target (e.g., gaze target).
- an additional delay can be incorporated after block 2506 and before block 2508 .
- the computer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad. If the contact-required action is maintained, the computer 100 , at block 2510 , then ceases to display the visual marker and moves the visual indicator (e.g., cursor) to the non-contact target. If the contact-required action is not maintained at block 2508 , the computer 100 can cease displaying the visual marker at block 2512 and execute a click at the location of the visual indicator (e.g., cursor) at block 2514 .
- FIG. 26 is a flow chart of a delay warp 2600 without a visual marker according to certain embodiments.
- a contact-required action is detected, such as a user touching a touchpad.
- the computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2604 .
- the computer 100 moves the visual indicator (e.g., cursor) to the non-contact target (e.g., gaze target).
- an additional delay can be incorporated after block 2606 and before block 2608 .
- the computer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad.
- If the contact-required action is maintained, the process is finished at block 2610 . If the contact-required action is not maintained at block 2608 , the computer 100 , at block 2612 , can move the visual indicator back to its original position prior to the movement from block 2606 . Next, the computer 100 can execute a click at the location of the visual indicator at block 2614 .
- FIG. 27 is a flow chart of a delay warp 2700 with a hidden visual indicator according to certain embodiments.
- the visual indicator is in a hidden state. Block 2702 includes all instances where the visual indicator is hidden, including if it has never been displayed previously.
- a contact-required action is detected, such as a user touching a touchpad.
- the computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2706 .
- the computer 100 can display either a visual marker or the visual indicator (e.g., cursor) at the non-contact target (e.g., gaze target).
- an additional delay can be incorporated after block 2708 and before block 2710 .
- the computer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad. If the contact-required action is maintained, the computer 100 moves the visual indicator to the non-contact target at block 2712 . If the contact-required action is not maintained at block 2710 , the computer 100 , at block 2714 , executes a click at the non-contact target.
- FIG. 28 is a flow chart of a delay warp 2800 according to certain embodiments.
- a contact-required action is detected, such as a user touching a touchpad.
- the computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2804 .
- the computer 100 determines whether the contact-required action includes signals to move the visual indicator (e.g., cursor). Such actions can include swiping a finger along a touchpad or moving a mouse.
- the computer 100 moves the visual indicator pursuant to the contact-required action. If the contact-required action does not include signals to move the visual indicator (e.g., the user touches a touchpad and does not move the finger around), the computer, at optional block 2810 , can show a visual marker or the visual indicator at the non-contact target (e.g., gaze target), but then determines, at block 2812 , whether the contact-required action is maintained (e.g., the user touches and keeps touching the touchpad).
- the computer 100 executes a click at the original location of the visual indicator. If the contact-required action is maintained, the computer 100 , at block 2816 , moves the visual indicator to the non-contact target. Then, at block 2818 , the computer 100 determines whether the contact-required action now includes signals to move the visual indicator (e.g., the user, after seeing the visual indicator move to the non-contact target, begins moving the finger around on the touchpad). If the contact-required action does not now include signals to move the visual indicator, the computer finishes the process at block 2822 .
- if the contact-required action does now include signals to move the visual indicator, the computer 100 moves the visual indicator pursuant to the new contact-required action at block 2820 .
- a “click” or other function may be performed upon release of the contact-required action.
- the movement of the visual indicator at block 2820 is slower than the movement of the visual indicator at block 2808 .
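A hypothetical rendering of delay warp 2800, with `touchpad.movement()` assumed to return the finger's displacement since the touch began; the block numbers in the comments track the flow described above.

```python
import time

def delay_warp_2800(touchpad, tracker, cursor, delay_s=0.2):
    touchpad.wait_for_touch()                    # block 2802: detect action
    time.sleep(delay_s)                          # block 2804: delay
    if touchpad.movement() != (0, 0):            # block 2806: touch is moving
        cursor.follow(touchpad)                  # block 2808: ordinary move
        return
    origin = cursor.position()
    cursor.show_marker(tracker.gaze_target())    # optional block 2810
    if not touchpad.is_touched():                # block 2812: released early
        cursor.click_at(origin)                  # block 2814: click in place
        return
    cursor.move_to(tracker.gaze_target())        # block 2816: warp
    if touchpad.movement() != (0, 0):            # block 2818: fine adjustment
        cursor.follow(touchpad, slow=True)       # block 2820: slower than 2808
    # block 2822: done; a click may be performed on release of the touch
```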
- a user can perform non-contact actions detectable by a computer 100 through a non-contact input 106 , such as an eye-tracking device.
- a user can direct gaze at an element 504 on a display 500 and then perform an additional action in order to perform a computer function (e.g., a click) upon the element 504 upon which the user's gaze is directed.
- the computer 100 may not display any visual indication of the location of a user's gaze target.
- FIG. 7A is a flow chart depicting a multi-step click functionality 700 according to some embodiments.
- the computer 100 detects a user's gaze target at block 702 .
- the user's gaze target can be located adjacent to or away from a display.
- the computer 100 detects a contact-required action (e.g., a button press or a touchpad tap) at block 704 .
- the computer 100 then performs a computer function at block 706 , dependent on the gaze target and the contact-required action.
- no visual indication is displayed to the user until after block 704 , upon which the computer 100 highlights the element 504 located at or near the gaze target.
- FIG. 7B is a flow chart depicting a multi-step click functionality 710 according to some embodiments.
- the computer 100 detects a user's gaze target at block 702 .
- the computer 100 detects a first action (e.g., a button press or a touchpad tap) at block 704 .
- the computer 100 determines whether there are multiple small elements 504 located sufficiently close together adjacent the gaze target.
- multiple elements 504 include instances where each element 504 is sufficiently close to the others that the computer 100 determines additional accuracy is necessary in order for the user to select the desired element 504.
- the computer presents a zoomed image of a portion of the display near the user's gaze target.
- the portion of the display can be solely the elements 504 or can be the elements 504 and surrounding aspects of the display (e.g., backgrounds).
- the computer 100 continues to detect the user's gaze target at block 716.
- the computer 100 can additionally highlight any element 504 selected by the user's gaze target at block 716.
- the computer 100 can detect a second action at block 718.
- the computer 100 can optionally highlight the element at the gaze target at block 708 and then perform a computer function dependent upon the element at the gaze target at block 706 (e.g., selecting the element located at the gaze target).
- the computer can simply continue directly to optional block 708 and block 706.
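- The branch at block 712 can be illustrated with a short sketch. The 40-pixel "sufficiently close" radius and the tuple-based element representation below are assumptions for illustration only; the text does not fix a threshold.

```python
import math

def multi_step_click(elements, gaze, ambiguity_radius=40.0):
    """Sketch of the FIG. 7B decision: after the first action, either act on
    a single unambiguous element or zoom in and await a second action.

    elements: non-empty list of (x, y) element centers; gaze: (x, y) target.
    Returns ("click", element) or ("zoom", nearby_elements).
    """
    near = [e for e in elements
            if math.hypot(e[0] - gaze[0], e[1] - gaze[1]) <= ambiguity_radius]
    if len(near) <= 1:
        # Blocks 708/706: highlight and act on the single candidate (or the
        # nearest element when none falls inside the radius).
        target = near[0] if near else min(
            elements, key=lambda e: math.hypot(e[0] - gaze[0], e[1] - gaze[1]))
        return ("click", target)
    # Block 714: multiple small elements near the gaze target, so present a
    # zoomed image and re-detect gaze plus a second action (blocks 716/718).
    return ("zoom", near)
```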
- a computer 100 can dynamically adapt to a user's desire to select small or numerous elements on a display using non-contact actions. If a user attempts to select small or numerous elements using non-contact actions, such as through eye-tracking, the computer 100 can dynamically zoom in to allow the user to have better control for picking the correct element.
- the first actions and second actions can be contact-required or non-contact actions.
- the first and second actions detected at blocks 704 and/or 718 can be the pressing of a button or touching of a touchpad. In some embodiments, the first and second actions detected at blocks 704 and/or 718 can be releasing a button that has already been pressed and/or ceasing to touch a touchpad that has previously been touched. For example, a user can depress a button while moving gaze around a display and release the button when it is desired that the computer function take place. In some embodiments, the second action is a release of a button while the first action is a depression of the same button. Additionally, the first action and second action are generally the same type of action (e.g., a button press), but need not be.
- the computer 100 can highlight the gaze target and/or the elements at or near the gaze target while the button is depressed.
- the computer 100 can highlight the group of elements, instead of a single element.
- the computer 100 can zoom in on the display when a user initiates an action (e.g., a contact-required action).
- the computer 100 can zoom in on the display without first determining whether there are multiple small elements located near the gaze target.
- the computer 100 can otherwise function as described above with reference to FIG. 7B .
- the computer 100 can determine at which element 504 to locate the gaze target based on a weighting system whereby some items have predetermined weights higher than other items. For example, a button can have a higher weighting than a text box. The determination of at which element 504 to place the cursor can also consider proximity to the estimated gaze position.
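- As an illustration of such a weighting system, the sketch below scores each candidate element by a per-type weight divided by distance to the estimated gaze position. The weight values and the exact 1/(1+distance) formula are assumptions; the text states only that predetermined weights and proximity can be considered.

```python
import math

def pick_element(candidates, gaze, type_weights=None):
    """Choose the element to receive the gaze target.

    candidates: dicts like {"kind": "button", "x": 100, "y": 40}.
    gaze: (x, y) estimated gaze position in display coordinates.
    """
    weights = type_weights or {"button": 2.0, "hyperlink": 1.5, "textbox": 1.0}

    def score(el):
        d = math.hypot(el["x"] - gaze[0], el["y"] - gaze[1])
        # Higher predetermined weight and closer proximity both raise the score.
        return weights.get(el["kind"], 1.0) / (1.0 + d)

    return max(candidates, key=score)
```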
- the computer 100 can provide tactile or audible feedback indicating that an estimated gaze position is able to be determined.
- the feedback will indicate to the user whether the system is functioning correctly and, if not, will allow the user to alter their behavior to accommodate the function or non-function of the eye tracking device.
- This feedback can be in the form of a touchpad providing haptic feedback when an estimated gaze position has been determined during a cursor movement procedure.
- FIG. 29 is a flow chart depicting a two-step click 2900 according to certain embodiments.
- a computer 100 can determine a non-contact target (e.g., gaze target) at block 2902 and detect a first action at block 2904.
- the first action can be a contact-required action (e.g., a button click) or a non-contact action.
- multiple elements provided on a display can be highlighted at once as the computer 100 determines a non-contact target is located on or near the elements.
- the computer 100 determines if an element at the non-contact target can be reliably determined; in other words, the computer 100 determines whether it is able to determine, with enough certainty, which element the user is intending to target.
- the computer 100 can consider multiple parameters to determine which element or elements the user is intending to target. These parameters can be pre-set and/or user-set, but comprise two decision points. Firstly, the computer 100 determines an expected tracking deviation based on factors that can include user preferences set by the user. Such factors can include, for example: whether the user desires speed or accuracy; the user's eye tracking quality, including whether the user is wearing glasses, contact lenses, or the like; the quality of a calibration function mapping the user's gaze; estimates of offsets of the detected gaze relative to the actual location of the user's gaze; signal noise in the eye tracking system; the configuration, type, and frequency of the eye tracker; or a global parameter for determining expected gaze point deviation, which may override the other factors.
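- One hypothetical way to fold these factors into a single figure is sketched below. The patent lists the factors without a formula, so every coefficient here is an illustrative assumption.

```python
def expected_deviation(base_px=30.0, wears_glasses=False, calib_quality=1.0,
                       noise_px=0.0, prefers_speed=False, global_override=None):
    """Estimate the expected gaze-point deviation in pixels."""
    if global_override is not None:
        return global_override          # a global parameter overrides the rest
    dev = base_px
    if prefers_speed:
        dev *= 1.25                     # speed preference tolerates more error
    if wears_glasses:
        dev *= 1.5                      # glasses/contacts can degrade tracking
    dev /= max(calib_quality, 0.1)      # better calibration, smaller deviation
    return dev + noise_px               # additive signal-noise term
```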
- the computer 100 can optionally highlight the element at the non-contact target at block 2908.
- the computer 100 can then perform a computer function associated with the element at the non-contact target (e.g., selecting the element or opening the element or others).
- the computer 100 determines possible targets at block 2912 based on factors including: geometries of elements near the gaze target, such as how close together multiple elements are and the size and/or shape of the elements; the layout of the elements; the visual point of gravity of the elements, representing weightings of the elements such that a gaze target may be weighted towards an element in a gravity-like manner; grouping criteria, including grouping elements of the same interaction type; contextual input from the user, such as a preference towards avoiding functions such as delete or the like; and whether, based on the user's gaze point, the user has seen an element highlight.
- the computer 100 determines the properties of a region in which to present a visualization of all potential elements.
- the computer 100 may consider various factors, such as the size and layout of elements including the possible targets determined at block 2912, a maximum size for a visualization of spaced-apart elements, a magnification level set by the user or predetermined by the computer 100, and grouping criteria for analyzing displayed elements.
- the computer 100, at block 2914, can then present a visualization of all potential elements identified at block 2912, with each element spaced further apart.
- elements displayed at block 2914 can be highlighted as a user's gaze locates upon them.
- An example of spacing further apart at block 2914 can include simply zooming in on the area where the elements are located (i.e., increasing the space between the elements with respect to the size of the display device 104).
- spacing further apart at block 2914 can include enlarging and/or moving each individual element further apart on the display to provide greater separation (e.g., moving four elements in a square configuration taking up a small portion of the display to a line formation extending across substantially most of the display).
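- A minimal sketch of such a spaced-apart presentation follows, laying the candidates out as an enlarged row across the display (e.g., four clustered elements becoming a line formation). The row layout and the 2x enlargement factor are assumptions for illustration.

```python
def space_apart(elements, display_w, display_h, scale=2.0):
    """Re-present candidate elements spaced further apart (cf. block 2914).

    Returns placement records for an evenly spaced, enlarged row across
    most of the display, with a 10% margin on each side (assumed).
    """
    margin = 0.1 * display_w
    step = (display_w - 2 * margin) / max(len(elements) - 1, 1)
    return [{"element": el, "x": margin + i * step, "y": display_h / 2,
             "scale": scale}
            for i, el in enumerate(elements)]
```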
- the computer 100 can then determine the non-contact target again and detect a second action in order to identify a targeted element.
- the computer 100 can optionally highlight the targeted element at block 2908.
- the computer 100 can perform a computer function associated with the targeted element at block 2910 (e.g., clicking the element).
- the computer 100 can receive a maintained first action, such as constant pressure on a touchpad, a held down button, or the like.
- the user is actively selecting elements and thus is prepared for multiple elements to be highlighted prior to block 2906 and at block 2914.
- the user releases the maintained first action and the computer 100 performs a computer function associated with the element at the non-contact target.
- the computer 100 is able to detect non-contact actions to switch the focus of windows in the display.
- the non-active window becomes the active window.
- when an eye tracking device determines the location of a user's gaze relative to a display 200 and the computer 100 determines the gaze target to be located in a window or area not currently in an active state, the user can instruct the computer 100 to make the window or area active by providing a contact-required action whilst gazing at the non-active window or area.
- the user can fixate their gaze within the non-active window or area for a predetermined period of time instead of providing a contact-required action, in order to make the window or area active.
- a user can scroll a window or screen area by looking towards an edge of the screen, window, or area and initiating an action such as a contact-required action.
- the area or window can scroll up, down, left, right, or diagonally.
- the contents of the area or window at the bottom of the area or window can scroll upwards to the top of the window or area, effectively revealing new information from the bottom of the area or window.
- This functionality can operate in the same manner for every direction at which the user is gazing.
- the computer 100 will scroll the window or area so that the point identified by the user is moved to a predetermined location in the window (e.g., the top of the window or the center of the window).
- based on a gaze offset or deviation, the computer 100 can determine an area near the edge (e.g., top/bottom or left/right) in which to enact the functions described.
- the computer 100 is able to directly scroll the window by a predetermined amount for gaze interactions.
- the computer 100 uses the gaze interactions to emulate button presses, such as presses of arrow buttons (e.g., arrow up or arrow down) or page buttons (e.g., page up or page down), in order to scroll the window.
- the computer 100 can provide a visual indication (e.g., a different cursor or other indication) that the gaze target is located in a particular zone or area defined for scrolling (e.g., near the top edge of the screen for scrolling up).
- a user can scroll a window by holding a contact-required input (e.g., a button or touchpad) while gazing in a direction of scroll (e.g., top, bottom, or side of the computer screen).
- a gaze to an edge of a screen, window, or area can scroll information from that edge towards the center of the screen, window, or area.
- the scrolling can be momentum based scrolling, meaning that the rate of scrolling will gradually increase to a predetermined maximum speed the longer the contact-required input is held (e.g., the longer the button is pressed or the longer the touchpad is touched without release).
- the scrolling will not cease immediately, but will rather slow rapidly until coming to a complete stop.
- the speed at which the scrolling increases whilst the button or touchpad is held down and at which the scrolling decreases after release can be adjusted by the computer 100 .
- the adjustment can be altered by a user.
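- The momentum behavior above can be sketched as a speed function of hold time and time since release. The linear ramp and the constants below are illustrative assumptions; the text says only that both the ramp-up and the slow-down rates can be adjusted.

```python
def scroll_speed(t_held, t_since_release=None,
                 v_max=2000.0, accel=800.0, decel=4000.0):
    """Momentum-based scroll speed in pixels per second.

    t_held: seconds the button/touchpad has been (or was) held
    t_since_release: seconds since release, or None while still held
    """
    v = min(v_max, accel * t_held)                # ramp toward the maximum
    if t_since_release is None:
        return v                                  # input still held
    return max(0.0, v - decel * t_since_release)  # rapid slow-down, then stop
```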
- a user can scroll a window simply by looking at the edge of the window (e.g., the top, bottom, or sides of the window).
- the window can scroll by a predetermined increment for each look towards the edge of the window.
- the scrolling will only happen if the user looks towards the edge of the window and simultaneously initiates an action, such as a non-contact action like a voice command.
- the increment can depend on the location of the user's gaze and can update continuously. In this way, natural scrolling is achieved; for example, if a user gazes constantly towards the edge of a map, the map scrolls by small increments so that it pans smoothly.
- the scrolling can occur without the user simultaneously initiating an action.
- a user can scroll a window by looking at a desired location and performing an action (e.g., pressing a button or saying “scroll”), at which point the computer 100 will cause the portion of the window and/or display located at the gaze target to move to the center of the window and/or display.
- a computer 100 can determine whether to scroll or zoom, depending upon the location of the gaze target (e.g., the location of the gaze target with respect to a window). For example, if a user looks at a window containing a map and moves the gaze target to an edge of the map, pressing a button can cause the map to scroll so that the targeted location is now at the center of the map.
- the speed of an action can be controlled based on the location of the gaze target (e.g., the location of the gaze target with respect to a window). For example, when a user looks at the edge of a window and presses a button, the window can scroll quickly, whereas if the user looks somewhere between the same edge of the window and the center of the window, the window would scroll less quickly.
- the scrolling is based on the length or pressure of contact by a user upon a key, button, or touchpad. For example, a window can scroll by greater increments when a user holds a key, button, or touchpad down longer.
- scrolling can be terminated by performing or ceasing actions (e.g., contact-required actions). In some embodiments, scrolling is slowed before completely terminating. In some embodiments, scrolling can be slowed by the user gradually moving the gaze target away from a predetermined area (e.g., edge of the screen or area where the gaze target was upon initiation of scrolling).
- a tap of a contact-required input can cause a visual indicator to appear on the display, or alternatively enact a click at the last known position of a visual indicator or at the gaze location. Thereafter, holding the finger on the contact-required input (e.g., holding a finger on a touchpad) can move the visual indicator to the gaze target.
- movement of a computer mouse can cause the visual indicator to appear on the display at the gaze target.
- a user can initiate an additional action (e.g., a contact-required action such as pressing and holding a mouse button) on an element (e.g., an icon) while using a computer mouse.
- the user's gaze can then be moved towards a desired destination and the mouse button can be released in order to drag the element to the desired destination (e.g., to move an item into another folder).
- a user can select an item on a display by locating a visual indicator on or near the item either by using a gaze enabled visual indicator movement method described previously, or by moving a computer mouse, touchpad or the like.
- the user can then hold down an activator, such as a computer mouse button, or maintain contact on a touchpad to select the item; when the user releases the activator, the item moves to the location of the user's gaze on the display. Therefore, the user can “grab” an item such as an icon and “drag” the icon to the user's gaze location, whereupon, on release of the activator, the icon is relocated to the user's gaze location.
- performing an action such as a contact-required action can cause the visual indicator to move to the gaze target.
- holding the contact-required action (e.g., pressing and holding the mouse button) can move the visual indicator to the gaze target and allow the user to fine-tune the position of the visual indicator before releasing the contact-required action (e.g., releasing the mouse button) to perform an action at the visual indicator's location (e.g., select an element). While the contact-required action is being held, large movements of the gaze target can translate to smaller movements of the visual indicator, in order to allow a user to more finely adjust the location of the visual indicator.
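- A minimal sketch of this reduced-gain behavior follows, assuming a fixed gain of 0.2; the text does not specify the ratio.

```python
def fine_tune(indicator, gaze, prev_gaze, action_held, gain=0.2):
    """While the contact-required action is held, apply gaze movement to the
    visual indicator at a fraction of its actual magnitude.

    indicator, gaze, prev_gaze: (x, y) tuples in display coordinates.
    """
    if not action_held:
        return indicator                 # no adjustment once released
    dx = gaze[0] - prev_gaze[0]
    dy = gaze[1] - prev_gaze[1]
    return (indicator[0] + gain * dx, indicator[1] + gain * dy)
```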
- initiating an action can open a menu at the gaze target.
- a user can look at the desired menu item and initiate a second action (e.g., second contact-required action or release of a maintained first contact-required action) to select that menu item.
- holding gaze on or near the menu item for a predetermined period of time can act to select that menu item.
- an edge menu or button can be opened (e.g., displayed) by holding a contact-required action, looking to the edge of the display, and releasing the contact-required action.
- the computer 100 presents a visual indication that opening a menu is possible when the user looks to the edge of the screen with the contact-required action held. If the user looks away from the edge of the display without releasing the contact-required action, the menu does not open. For example, the user can hold or maintain a contact-required action and look towards an edge of a display. A glow or other visual indicator can appear at the edge of the display, indicating that a menu can be opened.
- If the contact-required action is ceased while the glow or other visual indicator is present, the menu appears on screen. If the contact-required action is ceased while the glow or other visual indicator is not present, no menu appears on screen.
- the menu can occupy the full screen or part of the screen.
- the menu can alternatively be a button, for example a button representing a “back” movement in a web browser or the like.
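- The hold-glow-release sequence can be sketched as a small per-frame update. The 30-pixel edge band is an illustrative assumption.

```python
def edge_menu_step(glow_was_shown, action_held, gaze_x, display_w, edge_px=30):
    """One frame of the edge-menu interaction.

    Returns (show_glow, open_menu): the glow appears while the action is held
    and gaze is at an edge; releasing while the glow is shown opens the menu,
    and releasing otherwise does nothing.
    """
    at_edge = gaze_x <= edge_px or gaze_x >= display_w - edge_px
    if action_held:
        return (at_edge, False)       # glow tracks gaze while the action is held
    return (False, glow_was_shown)    # on release: open only if glow was present
```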
- initiating an action (e.g., a double-tap on a touchpad) will zoom the display in or out at the gaze target.
- initiating an action (e.g., tapping the touchpad while holding down a shift button) can zoom the display in at the gaze target.
- movement of two fingers towards each other such as a “pinch” movement on a touchpad, touch screen or the like can enact a zoom in or out, and movement of two fingers away from each other can enact the opposite zoom in or out movement.
- the user looks towards the edge of the display device 104.
- the computer 100 determines that the gaze target is at or near the edge of the display device 104 and activates a non-contact input mode.
- the computer 100 displays an input menu over or adjacent the display 200. When the user looks away from the edge of the display 200, the input menu can disappear immediately, remain on the display 200 indefinitely, or disappear after a predetermined amount of time.
- the user can activate an icon on the input menu by performing a contact-required action while gazing at an icon or by using a non-contact activation method such as dwelling their gaze in the vicinity of an icon for a predetermined period of time, for example one second, or blinking an eye or eyes, which can be interpreted by the computer as an activation command.
- Each icon can be associated with a computer function.
- the computer 100 can provide an indication of activation, such as a change in the appearance of the icon, a sound, physical feedback (e.g., a haptic response), or other indication.
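- Dwell activation can be sketched as a small timer keyed to the icon under gaze. The one-second default follows the example above, while the polling structure and hit-testing are assumed details.

```python
import time

class DwellActivator:
    """Fire an icon's function after gaze rests on it for dwell_s seconds."""

    def __init__(self, dwell_s=1.0):
        self.dwell_s = dwell_s
        self.current = None   # icon currently under gaze
        self.since = 0.0      # when gaze arrived on it

    def update(self, icon_under_gaze):
        """Call each frame with the icon under gaze (or None).

        Returns an icon to activate, or None.
        """
        now = time.monotonic()
        if icon_under_gaze != self.current:
            self.current, self.since = icon_under_gaze, now
            return None
        if self.current is not None and now - self.since >= self.dwell_s:
            fired, self.current = self.current, None
            return fired      # caller performs the icon's computer function
        return None
```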
- a place cursor icon can be activated to place the mouse cursor on a desired point or position.
- the place cursor icon can be used for mouse-over functions (e.g., functions where a mouse click is not desired).
- a gaze scroll icon can be activated to enable gaze-controlled scrolling within a scrollable window, as described in further detail below.
- a left click icon can be activated to perform a single left-click (e.g., emulate a physical left-click on an attached computer mouse).
- a double click icon can be activated to perform a double left-click (e.g., emulate a physical double-click on an attached computer mouse).
- a right click icon can be activated to perform a single right-click (e.g., emulate a physical right-click on an attached computer mouse).
- a gaze drag and drop icon can be activated to enable the gaze drag and drop mode.
- the gaze drag and drop mode allows a user to use non-contact input to emulate a drag and drop action on a physical mouse (e.g., click and hold the mouse, move the mouse, release the mouse), as described in further detail below.
- a gaze keyboard icon can be activated to open an on-screen, gaze-enabled keyboard for typing using gaze, as described in further detail below.
- a settings icon can be activated to open a settings window or dialog.
- the gaze scroll icon can be activated to enable gaze-controlled scrolling.
- gaze-controlled scrolling the user can scroll windows (e.g., up and down, as well as left and right) using non-contact inputs.
- the user can place a scroll indicator on the window and look above the scroll indicator to scroll up, look below the scroll indicator to scroll down, look to the left of the scroll indicator to scroll left, and look to the right of the scroll indicator to scroll right.
- the user can place the scroll indicator by first enabling gaze-controlled scrolling (e.g., dwell gaze on the gaze scroll icon), then looking at any scrollable area and dwelling gaze upon that area until the scroll indicator appears.
- the gaze drag and drop icon can be activated to enable the gaze drag and drop mode.
- the user can gaze at a first location and provide a user signal (e.g., dwelling gaze, blinking, winking, blinking in a pattern, or using a contact input such as a button) which causes the computer to emulate a mouse click and hold at the first location.
- the user can then move gaze to a second location and provide a second user signal, which causes the computer to emulate moving the mouse to the second position and releasing the mouse button.
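- The emulation sequence can be sketched as follows. The EmulatedMouse stub stands in for whatever OS-level input-injection facility an implementation would use, which the text leaves open.

```python
class EmulatedMouse:
    """Stand-in for an OS input-injection API; the print calls are a stub."""
    def move(self, x, y): print(f"mouse move to ({x}, {y})")
    def press(self):      print("mouse button down")
    def release(self):    print("mouse button up")

def gaze_drag_and_drop(first_gaze, second_gaze, mouse):
    """Emulate click-and-hold at the first gaze location on the first user
    signal, then move to the second location and release on the second.
    """
    mouse.move(*first_gaze)   # first user signal received here
    mouse.press()
    mouse.move(*second_gaze)  # second user signal received here
    mouse.release()

gaze_drag_and_drop((120, 300), (640, 180), EmulatedMouse())
```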
- the icon selected may not be de-selected unless a new selection on the input menu is made. In other embodiments, the icon can be de-selected, such as by gazing at the same icon again.
- the computer 100 can provide for a portion of the display to be zoomed (e.g., displayed at a lower resolution). For example, when the user selects the “left click” icon on the input menu and gazes at a portion of the display, an area around the gaze point on the display can zoom so that the user can then select the intended target for their action with greater ease by gazing at the enlarged portion of the display.
- the user can perform a certain action (e.g., a contact-required action or a non-contact action) in order to select an area of the display upon which a computer function is to be performed, at which point the computer 100 can display an input menu at or around that point to select the desired computer function.
- the input menu can also be provided external to the display 200 and/or display device 104.
- for example, the input menu can be provided on an input device such as an eye tracking device, on the housing of a display, or on a separate device.
- the menu can then comprise a separate display, or another means of conveying information to a user, such as light emitting diodes, switches, or the like.
- the action of choosing an icon on the external input menu is shown as a transparent image of that icon at an appropriate position on the regular display.
- the gaze target used to identify the desired icon to be activated can be located outside the display 200 and/or display device 104 .
- a user can perform computer functions on or at the area at or near the gaze target via voice interaction. Once an area has been selected (e.g., by focusing gaze at the area), an action can be performed by the user speaking certain words such as “open”, “click” and the like, which would be readily understood by a skilled addressee.
- selection of a computer function on an input menu can comprise multiple steps and can also comprise multiple menus. For example, a user can select an icon or menu using any input method herein described, and then select a second icon or menu for selecting a computer function to be performed.
- a gaze-tracking device can discern the identity of the user (e.g., through a user's gaze patterns or biometric data such as distance between eyes and iris sizes) in order to customize functionality for that particular user (e.g., use particular menus and/or features or set up desired brightness settings or other settings).
- a computer 100 will perform certain functions based on a discernible series or sequence of gaze movements (e.g., movements of the gaze target).
- a computer 100 can determine what various computer functions are available to be performed in whole or in part by non-contact actions based on what elements are presented on the display 200 .
- performing an action can cause information to be presented on the display 200 based on the gaze target. For example, pressing a button can cause a list of active computer programs to appear on the display 200 .
- a user can interact with the presented information (e.g., list of active computer programs) by gazing at parts of the information. For example, a user can press a button to cause a list of active computer programs to appear on the display 200 , then look at a particular program and release the button in order to cause that particular program to gain focus on the computer 100 .
- Location of a gaze target can be determined from detection of various actions, including movement of the eyes, movements of the face, and movements of facial features.
- a scrolling function (e.g., scrolling a page up or down) can be controlled by the user's face tilting (e.g., up or down) while the user reads the display 200, wherein the computer 100 does not control the scrolling function based on eye tracking at that time.
- camera based gaze detection systems can rely on facial recognition processing to detect facial features such as nose, mouth, distance between the two eyes, head pose, chin etc. Combinations of these facial features can be used to determine the gaze target.
- for vertical scrolling (e.g., a scroll up function and/or a scroll down function), the detection of the gaze target can rely solely or in part on detected eyelid position(s).
- Eyelid position detection is effective for determining changes in gaze target in a vertical direction, but is less effective for determining changes in gaze target in a horizontal direction.
- images of the head pose can be used instead.
- a gaze target can be determined to be within scroll zones only when the user's face is determined to be oriented in the general direction of the display device 104 .
- a head pose indicating more than seven degrees off to a side from the display device 104 is an indication that the user is unlikely to be looking at the content (e.g., the display 200 ) presented on the display device 104 .
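- This gating can be sketched as follows, using the seven-degree figure above; the 50-pixel scroll-zone height is an illustrative assumption.

```python
def active_scroll_zone(gaze_y, display_h, head_yaw_deg,
                       zone_px=50, yaw_limit_deg=7.0):
    """Return "up", "down", or None for the scroll zone under the gaze target.

    The gaze target only counts as being inside a scroll zone when the
    user's face is oriented toward the display (yaw within the limit).
    """
    if abs(head_yaw_deg) > yaw_limit_deg:
        return None                     # user likely not looking at the content
    if gaze_y <= zone_px:
        return "up"
    if gaze_y >= display_h - zone_px:
        return "down"
    return None
```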
- a gaze target can occupy a smaller (e.g., more sensitive and/or accurate) or larger (e.g., less sensitive and/or accurate) area relative to the display device 104 .
- Calibration of the gaze detection components can also play a role in the accuracy and sensitivity of gaze target calculations.
- Accuracy or sensitivity can dictate the relationship between an actual direction of a user's gaze and the calculated gaze target. The disclosed embodiments can function even if the relationship between the actual gaze direction and calculated gaze target is not direct.
- the gaze target can be calibrated by using input from a touch screen to assist with calibration.
- the computer 100 can prompt the user to look at and touch the same point(s) on the display device 104 .
- a calibration process can be performed in the background without prompting the user or interrupting the user's normal interaction with the computer 100 .
- while normally operating the computer 100, a user will be pressing buttons, hyperlinks, and other portions of the display 200, display device 104, and/or computer 100 having known positions. It can be assumed that the user will normally also be looking at the buttons, hyperlinks, etc. at the same time.
- the computer 100 can recognize the touch point or click point as the direction of the user's gaze and then correct any discrepancies between the direction of the user's gaze and the calculated gaze target.
- Such a background calibration process can be helpful in order to slowly improve calibration as the user interacts with the computer 100 over time.
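- One hypothetical realization of this background calibration pairs each click or touch on a control with a known position with the concurrent gaze estimate, and maintains a slowly adapting offset. The exponential running average (alpha) is an illustrative assumption.

```python
class BackgroundCalibrator:
    """Slowly correct gaze estimates from clicks on known positions."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha    # small alpha: calibration improves slowly over time
        self.dx = 0.0
        self.dy = 0.0

    def observe_click(self, click_point, gaze_estimate):
        # Assume the user was looking at what they clicked (see above).
        ex = click_point[0] - gaze_estimate[0]
        ey = click_point[1] - gaze_estimate[1]
        self.dx += self.alpha * (ex - self.dx)
        self.dy += self.alpha * (ey - self.dy)

    def correct(self, gaze_estimate):
        return (gaze_estimate[0] + self.dx, gaze_estimate[1] + self.dy)
```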
- a computer 100 is able to determine when a user is reading elements 504 on a display 200 rather than attempting to control the computer 100 . For example, detection of whether a user is reading can be based on detecting and evaluating saccades and whether an eye fixates or dwells on or around a constant point on the display. This information can be used to determine indicators of reading as distinguished from a more fixed gaze.
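- A crude sketch of such a reading detector follows. The dispersion threshold and rightward-saccade ratio are assumptions, as the text names only saccades and fixations as the signals.

```python
def looks_like_reading(samples, saccade_px=25.0, min_saccades=3):
    """Guess whether chronological (x, y) gaze samples indicate reading.

    Reading tends to produce many small, mostly rightward saccades between
    brief fixations, unlike a steady dwell on a constant point.
    """
    jumps = [(b[0] - a[0], b[1] - a[1]) for a, b in zip(samples, samples[1:])
             if abs(b[0] - a[0]) > saccade_px or abs(b[1] - a[1]) > saccade_px]
    if len(jumps) < min_saccades:
        return False                    # mostly fixed gaze, not reading
    rightward = sum(1 for dx, dy in jumps if dx > 0 and abs(dx) > abs(dy))
    return rightward / len(jumps) > 0.6
```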
- the computer 100 is configured such that scrolling functions can be initiated even when the user is determined to be reading. For instance when the user is looking at a map, the scrolling (e.g., panning) should be initiated relatively faster than when the user is reading text (e.g., a word-processor document).
- any dwell time before triggering a scroll function when reading text can be longer than for reviewing maps and other graphical content, and the scroll zones and/or scroll interactions can be chosen differently in each case.
- the scroll zone(s) may have to be made larger in the case of a map or other graphical content in order to make the computer 100 sufficiently responsive, while scroll zone(s) for a text document may be smaller because a scroll action is typically not required until the user is reading text very close (e.g., 5 lines) to the bottom or top of a window.
- a computer 100 can be placed in one or more modes, wherein each mode enables different computer functions to be performed in response to a user performing various actions.
- the computer 100 can be configured to use gaze data patterns (e.g., the frequency with which gaze targets appear in certain positions or locations relative to the display device 104) to determine with greater accuracy, based on statistical analysis, when a user is looking at a particular zone 202 or element 504.
- FIGS. 8-24D depict graphical representations and flow charts of various embodiments and functionality disclosed herein.
- a computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs.
- Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
- Embodiments of the methods disclosed herein may be performed in the operation of such computing devices.
- the order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
- any reference to a series of examples is to be understood as a reference to each of those examples disjunctively (e.g., “Examples 1-4” is to be understood as “Examples 1, 2, 3, or 4”).
- Example 1 is a method of controlling a computer.
- the method includes presenting a display having a visual indicator; detecting a gaze target of a user; detecting a contact-required action of a user; and performing a computer function in response to the contact-required action and the gaze target.
- Performing the computer function includes performing a first function.
- the first function can be scrolling a first portion of the display, in response to detection of the contact-required action, based on the location of the gaze target with respect to the display.
- the first function can be scrolling a second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display.
- the first function can be moving the visual indicator to the gaze target in response to the contact-required action.
- the first function can be moving the visual indicator at a first rate slower than a second rate of movement of the gaze target in response to continued detection of the contact-required action.
- the first function can be zooming a third portion of the display adjacent the gaze target in response to the contact-required action.
- the first function can be performing a second function, in response to the contact-required action, based on the gaze target when the gaze target is outside the display.
- the first function can be performing a second computer function, in response to the contact-required action, based on a sequence of movements of the gaze target.
- Example 2 is the method of example 1, where the first function is scrolling the second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display.
- the continued detection of the contact-required action is detection of continued touching of a touchpad.
- Example 3 is the method of examples 1 or 2, where the first function additionally includes increasing the momentum of the scrolling until detection of continued touching of the touchpad ceases.
- Example 4 is the method of example 1 where the first function is moving the visual indicator to the gaze target in response to the contact-required action.
- the contact-required action is one of the group consisting of touching a touchpad or moving a computer mouse.
- Example 5 is the method of example 1, where the first function is zooming a third portion of the display adjacent the gaze target in response to the contact-required action.
- the contact-required action is actuation of a scroll wheel.
- Example 6 is the method of example 1, where the first function is zooming a third portion of the display adjacent the gaze target in response to the contact-required action.
- the contact-required action is a combination of depressing a button and performing a second contact-required action.
- Example 7 is the method of example 1, where the contact-required action is touching a touch-sensitive device.
- Example 8 is a computing device having a computer including a display device for presenting a display having a visual indicator.
- the computer further includes a contact-required input and a non-contact input.
- the computer is programmed to detect a non-contact target of a user and detect a contact-required action of the user.
- the computer is further programmed to perform a computer function, in response to the contact-required action, based on the non-contact target, wherein performing the computer function includes performing a first function.
- the first function can be scrolling a first portion of the display, in response to the contact-required action, based on the location of the non-contact target with respect to the display.
- the first function can be scrolling a second portion of the display, in response to continued detection of the contact-required action, based on the location of the non-contact target with respect to the display.
- the first function can be moving the visual indicator to the non-contact target in response to the contact-required action.
- the first function can be moving the visual indicator at a first rate slower than a second rate of movement of the non-contact target in response to continued detection of the contact-required action.
- the first function can be zooming a third portion of the display adjacent the non-contact target in response to the contact-required action.
- the first function can be performing a second function, in response to the contact-required action, based on the non-contact target when the non-contact target is outside the display.
- the first function can be performing a second computer function, in response to the contact-required action, based on a sequence of movements of the non-contact target.
- Example 9 is the computing device of example 8, where the first function is scrolling the second portion of the display, in response to continued detection of the contact-required action, based on the location of the non-contact target with respect to the display.
- the continued detection of the contact-required action is detection of continued touching of a touchpad.
- Example 10 is the computing device of example 9, where the first function additionally includes increasing the momentum of the scrolling until detection of continued touching of the touchpad ceases.
- Example 11 is the computing device of example 8, where the first function is moving the visual indicator to the non-contact target in response to the contact-required action.
- the contact-required action is one of the group consisting of touching a touchpad or moving a computer mouse.
- Example 12 is the computing device of example 8, where the first function is zooming a third portion of the display adjacent the non-contact target in response to the contact-required action.
- the contact-required action is actuation of a scroll wheel.
- Example 13 is the computing device of example 8, where the first function is zooming a third portion of the display adjacent the non-contact target in response to the contact-required action.
- the contact-required action is a combination of depressing a button and performing a second contact-required action.
- Example 14 is the computing device of example 8, where the contact-required action is touching a touchpad.
- Example 15 is the computing device of example 8, where the non-contact input is a gaze-tracking device and the non-contact target is a gaze target.
- Example 16 is a system having a computer, the computer including a display device for presenting a display having a visual indicator; an eye-tracking device for detecting a gaze target; a contact-required input for detecting a contact-required action; and a processor operably connected to the eye-tracking device, contact-required input, and display device.
- the computer further includes programming enabling the processor to perform a computer function, in response to the contact-required action, based on the gaze target, wherein performing the computer function includes performing a first function.
- the first function can be scrolling a first portion of the display, in response to the contact-required action, based on the location of the gaze target with respect to the display.
- the first function can be scrolling a second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display.
- the first function can be moving the visual indicator to the gaze target in response to the contact-required action.
- the first function can be moving the visual indicator at a first rate slower than a second rate of movement of the gaze target in response to continued detection of the contact-required action.
- the first function can be zooming a third portion of the display adjacent the gaze target in response to the contact-required action.
- the first function can be performing a second function, in response to the contact-required action, based on the gaze target when the gaze target is outside the display.
- the first function can be performing a second computer function, in response to the contact-required action, based on a sequence of movements of the gaze target.
- Example 17 is the system of example 16 where the first function is scrolling the second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display.
- the continued detection of the contact-required action is detection of continued touching of a touchpad.
- Example 18 is the system of example 16, where the first function additionally includes increasing the momentum of the scrolling until detection of continued touching of the touchpad ceases.
- Example 19 is the system of example 16, where the first function is moving the visual indicator to the gaze target in response to the contact-required action.
- the contact-required action is one of the group consisting of touching a touchpad or moving a computer mouse.
- Example 20 is the system of example 16, where the first function is zooming a third portion of the display adjacent the gaze target in response to the contact-required action.
- the contact-required action can be actuation of a scroll wheel.
- the contact-required action can be a combination of depressing a button and performing a second contact-required action.
- Example 21 is the system of example 16, where the contact-required action is touching a touchpad.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Position Input By Displaying (AREA)
Abstract
A computer system can be controlled with non-contact inputs through zonal control. In an embodiment, a non-contact input that is an eye-tracking device is used to track the gaze of a user. A computer's display, and beyond, can be separated into a number of discrete zones according to a configuration. Each zone is associated with a computer function. The zones and/or their functions can, but need not, be indicated to the user. The user can perform the various functions by moving gaze towards the zone associated with that function and providing an activation signal of intent. The activation signal of intent can be a contact-required or non-contact action, such as a button press or dwelling gaze, respectively.
Description
- The present application claims the benefit of U.S. Provisional Patent Application No. 61/771,659 filed Mar. 1, 2013 and U.S. Provisional Patent Application No. 61/905,536 filed Nov. 18, 2013, both of which are hereby incorporated by reference in their entirety. The present application further incorporates by reference U.S. patent application Ser. No. 13/802,240 filed Mar. 13, 2013.
- The present disclosure relates to human-computer interaction generally and more specifically to gaze detection.
- Human-computer interaction generally relates to the input of information to and control of a computer by a user. Many common and popular computer programs and operating systems have been developed to function primarily with input methods involving physical contact or manipulation (e.g., a mouse or a keyboard). This type of physical input method is referred to herein as contact-required input. It can be difficult for people who desire to use non-contact input methods to interact with these computer programs and operating systems to their full potential. Some people must use non-contact input methods for various reasons (e.g., because of an injury or disability).
- An example of a non-contact input device is an eye tracking device such as that described in U.S. Pat. No. 7,572,008. Eye tracking devices can operate on the principle of illuminating an eye with infrared light and utilizing an image sensor to detect the reflection of the light off the eye. A processor can use the image sensor's data to calculate the direction of a user's gaze.
- However, as technology progresses, computer programs and operating systems incorporate new forms of human-computer interaction, based on contact-required inputs, to enable both simple and complex functionality. An example of a form of human-computer interaction is touch-based interaction on a computer, tablet, phone or the like, whereby a user interacts with the device by touching and by performing gestures (e.g., multi-finger gestures) on a touch-sensitive device (e.g., a touchscreen). This and other forms of user interaction require a very physical connection between the device and the user, often requiring multiple points of physical contact between the user and the touch-sensitive device (e.g., for multi-finger gestures).
- It can be desirable to develop human-computer interaction methods based on non-contact inputs with the ability to perform both simple and complex functionality. It can be further desirable to develop human-computer interaction methods based on non-contact inputs that can function effectively on computing devices developed for use primarily with contact-required inputs.
- Many non-contact interactions lack the clear definition and identification of contact methods, therefore it can sometimes be ambiguous as to the intention of a non-contact input command. In order to assist with this problem, it has previously been proposed to utilize a non-contact input such as eye-tracking with a contact-required input device, such as a computer mouse or touchpad. For example, U.S. Pat. No. 6,204,828 describes a system whereby display of a cursor on screen is suspended and displayed at a user's gaze location upon movement by a computer mouse.
- Some interaction methods are not intuitive and the user may not know for sure if the eye tracking is functioning or the exact location of the cursor. Some interaction methods result in a cognitive disruption whereby after the user has triggered a movement of a cursor, the user must anticipate the future location of the cursor and adjust accordingly.
- It can be desirable to signal to the user as early as possible the future location of the cursor while determining whether the user intends on triggering a mouse movement. Further, as eye tracking systems may not provide 100% accuracy, the determined gaze position to which a cursor will move may not be the position intended by the user. It can be desirable to assist a user with more accurately determining how and when to use a non-contact input, such as eye tracking, in combination with a contact-required input, such as a touchpad or mouse.
- The term “embodiment” and like terms are intended to refer broadly to all of the subject matter of this disclosure and the claims below. Statements containing these terms should be understood not to limit the subject matter described herein or to limit the meaning or scope of the claims below. Embodiments of the present disclosure covered herein are defined by the claims below, not this summary. This summary is a high-level overview of various aspects of the disclosure and introduces some of the concepts that are further described in the Detailed Description section below. This summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used in isolation to determine the scope of the claimed subject matter. The subject matter should be understood by reference to appropriate portions of the entire specification of this disclosure, any or all drawings and each claim.
- Embodiments of the present disclosure include computer systems that can be controlled with non-contact inputs through zonal control. In one embodiment, a non-contact input tracks a non-contact action performed by a user. A computer's display, and beyond, can be separated into a number of discrete zones according to a configuration. Each zone is associated with a computer function. The zones and/or their functions can, but need not, be indicated to the user. The user can perform the various computer functions by performing non-contact actions detected by the non-contact input. Upon indicating a desired zone associated with a particular function, the user can provide an activation signal of intent. The activation signal of intent can be a contact-required or non-contact action, such as a button press or dwelling gaze, respectively. Upon receiving the activation signal of intent, the computer system can use the indicated zone (e.g., indicated by the user's non-contact actions) to perform the function associated with that zone.
- Embodiments of the present disclosure include a computer system that can be controlled with non-contact inputs, such as eye-tracking devices. A visual indicator can be presented on a display to indicate the location where a computer function will take place (e.g., a common cursor). The visual indicator can be moved to a gaze target in response to continued detection of an action (e.g., touchpad touch) by a user for a predetermined period of time. The delay between the action and the movement of the visual indicator provides an opportunity to provide an indication to the user where the visual indicator will be located after a movement, allowing for less of a cognitive disruption after the visual indicator has appeared at a new location. Optionally, the delay can also allow a user time to “abort” movement of the visual indicator. Additionally, once the visual indicator has moved, the visual indicator can be controlled with additional precision as the user moves gaze while continuing the action (e.g., continued holding of the touchpad).
- Embodiments of the present disclosure include a computer system that can be controlled with non-contact inputs, such as eye-tracking devices. A computer can enlarge a portion of a display adjacent a first gaze target in response to detecting a first action (e.g., pressing a touchpad). The computer can then allow a user to position a second gaze target in the enlarged portion (e.g., by looking at the desired location) and perform a second action in order to perform a computer function at that location. The enlarging can allow a user to identify a desired location for a computer function (e.g., selecting an icon) with greater precision.
- Embodiments of the present disclosure include a computer system that can be controlled with non-contact inputs, such as eye-tracking devices. Various combinations of non-contact actions and contact-required actions can be performed to cause a computer to perform certain computer functions. Functions can include scroll functions, movements of visual indicators, zooming of the display, and selecting further functions to perform. Combinations of non-contact actions and contact-required actions can include pressing buttons and/or touching touch-sensitive devices while looking at certain places on or off of a display.
- The specification makes reference to the following appended figures, in which use of like reference numerals in different figures is intended to illustrate like or analogous components.
- FIG. 1 is a schematic representation of a computer system incorporating non-contact inputs according to certain embodiments.
- FIG. 2A is a graphical depiction of a display as rendered or presented on the display device of FIG. 1 according to certain embodiments.
- FIG. 2B is a graphical depiction of the display of FIG. 2A in zonal control mode with a first configuration according to certain embodiments.
- FIG. 3 is a flow chart depicting a process for zonal control according to certain embodiments.
- FIG. 4 is a graphical depiction of a display as rendered or presented on the display device of FIG. 1 while in zonal control mode with a second configuration according to certain embodiments.
- FIG. 5 is a graphical depiction of a display with a visual indicator according to certain embodiments.
- FIG. 6 is a flow chart diagram of delay warp as performed by a computer according to certain embodiments.
- FIG. 7A is a flow chart depicting a multi-step click functionality according to some embodiments.
- FIG. 7B is a flow chart depicting a multi-step click functionality according to some embodiments.
- FIG. 8 is a graphical depiction of a display according to certain embodiments.
- FIG. 9 is a graphical depiction of a display according to certain embodiments.
- FIG. 10 is a graphical depiction of a menu according to certain embodiments.
- FIG. 11 is a graphical depiction of a display according to certain embodiments.
- FIG. 12 is a graphical depiction of a display according to certain embodiments.
- FIG. 13 is a graphical depiction of a display according to certain embodiments.
- FIG. 14 is a graphical depiction of a display according to certain embodiments.
- FIG. 15 is a graphical depiction of a display according to certain embodiments.
- FIG. 16 is a graphical depiction of a display according to certain embodiments.
- FIG. 17 is a graphical depiction of a display according to certain embodiments.
- FIG. 18 is a graphical depiction of a display according to certain embodiments.
- FIG. 19 is a graphical depiction of a display according to certain embodiments.
- FIG. 20 is a graphical depiction of a display according to certain embodiments.
- FIG. 21 is a graphical depiction of a display according to certain embodiments.
- FIG. 22A is a graphical depiction of a display according to certain embodiments.
- FIG. 22B is a graphical depiction of the display of FIG. 22A showing a menu according to certain embodiments.
- FIG. 22C is a graphical depiction of the display of FIG. 22A showing a menu according to certain embodiments.
- FIG. 23 is a graphical depiction of a display according to certain embodiments.
- FIG. 24A is a flow chart of a non-contact action according to certain embodiments.
- FIG. 24B is a flow chart of a non-contact action according to certain embodiments.
- FIG. 24C is a flow chart of a contact-required action according to certain embodiments.
- FIG. 24D is a flow chart of a non-contact action according to certain embodiments.
- FIG. 25 is a flow chart of a delay warp 2500 with a visual marker according to certain embodiments.
- FIG. 26 is a flow chart of a delay warp 2600 without a visual marker according to certain embodiments.
- FIG. 27 is a flow chart of a delay warp 2700 with a hidden visual indicator according to certain embodiments.
- FIG. 28 is a flow chart of a delay warp 2800 according to certain embodiments.
- FIG. 29 is a flow chart depicting a two-step click 2900 according to certain embodiments.
- A computer system can be controlled with non-contact inputs through zonal control. In an embodiment, a non-contact input that is an eye-tracking device is used to track the gaze of a user. A computer's display can be separated into a number of discrete zones according to a configuration. Each zone is associated with a computer function. The zones and/or their functions can, but need not, be indicated to the user. The user can perform the various functions by moving gaze towards the zone associated with that function and providing an activation signal of intent. The activation signal of intent can be a contact-required or non-contact action such as a button press or dwelling gaze, respectively.
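- As a concrete illustration, zonal control can be reduced to a lookup from gaze target to zone, with the zone's associated function performed upon the activation signal of intent. This is a minimal sketch; the rectangle-based zone representation is an assumption, as the configuration format is not specified.

```python
def zone_for(gaze, zones):
    """Return the zone containing the gaze target, or None.

    zones: dicts like {"rect": (x0, y0, x1, y1), "function": callable}.
    """
    for z in zones:
        x0, y0, x1, y1 = z["rect"]
        if x0 <= gaze[0] <= x1 and y0 <= gaze[1] <= y1:
            return z
    return None

def on_activation_signal(gaze, zones):
    """On an activation signal of intent (e.g., button press or dwelling
    gaze), perform the function associated with the indicated zone."""
    z = zone_for(gaze, zones)
    if z is not None:
        z["function"]()
```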
- A computer system can implement a delay warp when being controlled with non-contact inputs. In an embodiment, a cursor can be presented on a display to indicate the location where a computer function will occur upon a further action (e.g., a click). The cursor can be moved to a gaze target in response to continued detection of an action (e.g., continued touching of a touchpad) by a user for a predetermined period of time. The delay between the action and the movement of the cursor provides an opportunity to provide an indication to the user where the visual indicator will be located after a movement, allowing for less of a cognitive disruption after the visual indicator has appeared at a new location. Optionally, the delay gives a user an opportunity to “abort” movement of the cursor. Additionally, once the cursor has moved, the cursor can be further controlled with additional precision as the user moves gaze, moves a mouse, or swipes a touchpad while continuing the action (e.g., continued holding of the touchpad).
- A computer system can allow for increased certainty and precision when targeting elements through non-contact inputs. In an embodiment, a user can look at a group of elements and perform an action. If the computer cannot determine with certainty which element is targeted by the user, the computer can enlarge and/or separate the elements and allow the user to further focus gaze on the desired element, whereupon performing a second action, the computer will perform the desired function (e.g., selecting an icon) upon the targeted element.
- A computer system can be controlled through various combinations of non-contact actions and contact-required actions. Scrolling, cursor movements, zooming, and other functions can be controlled through combinations of non-contact actions and/or contact-required actions. Such combinations can include pressing buttons and/or touching touch-sensitive devices while looking at certain places on or off of a display.
- These illustrative examples are given to introduce the reader to the general subject matter discussed here and are not intended to limit the scope of the disclosed concepts. The following sections describe various additional features and examples with reference to the drawings in which like numerals indicate like elements, and directional descriptions are used to describe the illustrative embodiments but, like the illustrative embodiments, should not be used to limit the present disclosure. The elements included in the illustrations herein may be drawn not to scale. As used herein, examples listed with the use of exempli gratia (“e.g.”) are non-limiting examples.
-
FIG. 1 is a schematic representation of a computer system 100 incorporating non-contact inputs 106 according to certain embodiments. The computer system 100 (hereinafter, “computer”) can be implemented in a single housing (e.g., a tablet computer), or can be implemented in several housings connected together by appropriate power and/or data cables (e.g., a standard desktop computer with a monitor, keyboard, and other devices connected to the main housing containing the desktop computer's CPU). As used herein, any reference to an element existing “in” the computer 100 indicates the element is a part of the computer system 100, rather than physically within a certain housing. - The
computer 100 can include a processor 102 connected to or otherwise in communication with a display device 104, a non-contact input 106, and a contact-required input 108. The processor 102 can include a non-contact interpreter 112, as described in further detail below. As used herein, the term processor 102 refers to one or more individual processors within the computer system, individually or as a group, as appropriate. The computer 100 can include programming 116 stored on permanent, rewritable, or transient memory that enables the processor 102 to perform the functionality described herein, including zonal control, delay warp, and two-step click, as well as other functionality. The programming (e.g., computer-executable instructions or other code), when executed by the processor 102, causes the processor 102 to perform operations described herein. The programming may comprise processor-specific programming generated by a compiler and/or an interpreter from code written in any suitable computer-programming language. Non-limiting examples of suitable computer-programming languages include C, C++, C#, Visual Basic, Java, Python, Perl, JavaScript, ActionScript, and the like. The memory may be a computer-readable medium such as (but not limited to) an electronic, optical, magnetic, or other storage device capable of providing a processor with computer-readable instructions. Non-limiting examples of such optical, magnetic, or other storage devices include read-only memory (“ROM”) device(s), random-access memory (“RAM”) device(s), magnetic disk(s), magnetic tape(s) or other magnetic storage, memory chip(s), an ASIC, configured processor(s), optical storage device(s), floppy disk(s), CD-ROM, DVD, or any other medium from which a computer processor can read instructions. - Contact-required
inputs 108 can be any device for accepting user input that requires physical manipulation or physical contact (hereinafter, “contact-required actions”). Examples of contact-required inputs 108 include keyboards, mice, switches, buttons, touchpads, touchscreens, touch-sensitive devices, and other inputs that require physical manipulation or physical contact. Examples of contact-required actions include tapping, clicking, swiping, pressing (e.g., a key), and others. As used herein, the term “contact-required actions” further includes actions that require either physical contact through another device (e.g., using a touchscreen with a stylus) or close proximity to the contact-required input (e.g., hovering or swiping a finger above a touchscreen that responds to fingers in close proximity, such as a projected capacitance touchscreen). Signals generated from a user performing contact-required actions as received by contact-required inputs 108 are referred to as contact-based signals 110. Where appropriate, references to contact-required actions can include combinations of contact-required actions (e.g., holding a first button while pressing a second button). -
Non-contact inputs 106 can be any device capable of receiving user input without physical manipulation or physical contact. Examples of non-contact inputs 106 include eye-tracking devices, microphones, cameras, light sensors, and others. When a user performs an action detectable by a non-contact input 106 (hereinafter, “non-contact action”), the non-contact interpreter 112 generates a non-contact signal 114 based on the non-contact action performed by the user. Non-contact actions can include moving gaze (e.g., moving the direction of the gaze of one or more eyes), gaze saccade, fixating gaze, dwelling gaze (e.g., fixating gaze substantially at a single target for a predetermined length of time), a blink (e.g., blinking one or more eyes once or in a discernible pattern, or closing one or both eyes for a longer length of time), performing vocal commands (e.g., saying “click” or “open”), facial recognition (e.g., recognizing features and movements of the user's face), 3-D gestures (e.g., recognizing movements of a user, a user's appendage, or an object held by the user in 3-D space, such as waving), and others. Depending on the action performed, the non-contact interpreter 112 can send different non-contact signals 114. For example, a user moving gaze can result in a first non-contact signal 114 containing information about the movement and/or new direction of gaze, while a user blinking can result in a non-contact signal 114 indicative that the user blinked. The non-contact signal 114 can be used by the processor 102 to perform various tasks, as described in further detail below. Where appropriate, references to non-contact actions can include combinations of non-contact actions (e.g., blinking while saying “click”). - Additionally, where appropriate, references to an action can include combinations of contact-required actions and non-contact actions (e.g., holding a button while saying “click”).
- In some embodiments, the
processor 102 can use non-contact signals 114 to emulate contact-required signals 110. For example, the processor 102 can use non-contact signals 114 containing information about a user moving gaze to a first target (e.g., computer icon) and dwelling gaze on that target in order to emulate contact-required signals 110 of moving a cursor to the first target and clicking on that target. - The embodiments disclosed herein include the use of non-contact actions and contact-required actions, or non-contact actions alone, to perform various computer functions. Computer functions can be any type of action performable on a computer, such as a mouse click; a scroll/pan action; a magnifying action; a zoom action; a touch input/action; a gesture input/action; a voice command; a call-up of a menu; the activation of Eye Tracking/Eye Control/Gaze interaction; the pausing of Eye Tracking/Eye Control/Gaze interaction; adjusting the brightness of the backlight of the device; activating sleep, hibernate, or other power saving mode of the device; resuming from sleep, hibernate, or other power saving mode of the device; or others. In some cases, the computer functions are emulated such that the
computer 100 behaves as if it is detecting solely a contact-required action. - Zonal Control
-
FIG. 2A is a graphical depiction of a display 200 as rendered or presented on a display device 104 according to certain embodiments. While presented with reference to the particular operating system shown in FIG. 2A, the embodiments and disclosure herein can be easily adapted to other operating systems (e.g., Microsoft Windows®, Apple iOS®, or Google Android®) and computer programs to perform various functions thereof. The operating system of FIG. 2A includes various computer functionality available through contact-required inputs 108 (e.g., touchscreen taps and gestures). Table 1, below, describes several of these functions and examples of associated contact-required actions to perform each function. -
TABLE 1
Computer Function | Example of Associated Contact-required Action
Open Start Menu/Start Screen | Swipe in from the right edge of the display 200 and tap a “Start” button that appears.
Open Apps List | Swipe up from the center of the Start screen.
Previous App | Swipe in from the left edge of the display 200.
Display Charms | Swipe in from the right edge of the display 200.
Open App Bar | Swipe in from the top or bottom edge of the display 200.
Move App Bar | Tap and drag the app bar.
Hide App Bar | Tap and hold and swipe down the app bar.
Split Window | Tap and drag the app from the top edge of the display 200 towards the middle of the display 200, and then towards either the left or right edge of the display 200.
Close App | Tap and drag the app from the top edge of the display 200 to the bottom of the display 200.
- In order to accommodate performing these and other computer functions without relying solely on contact-required actions, a
computer 100 can use detected non-contact actions (e.g., movement of gaze and fixation on target) as instruction to perform various computer functions. The computer can perform a computer function instigated by a non-contact action by either performing the computer function directly (e.g., opening the apps list) or by emulating the contact-required action associated with the computer function (e.g., emulating a swipe up from the center of the Start screen). -
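By way of illustration only, and not as part of the original disclosure, this choice between direct performance and emulation can be sketched as a dispatch table in Python; the function names, the emulation helper, and the display dimensions below are all hypothetical:

```python
# Hypothetical display size used by the emulated swipe coordinates.
WIDTH, HEIGHT = 1920, 1080

def open_apps_list_directly():
    ...  # assumed hook into the operating system's own API

def emulate_swipe(start, end):
    ...  # assumed injector of synthetic touchscreen events

# Each computer function maps to either a direct call or an emulated gesture.
FUNCTIONS = {
    "Open Apps List": open_apps_list_directly,
    "Display Charms": lambda: emulate_swipe(
        (WIDTH - 1, HEIGHT // 2), (WIDTH - 200, HEIGHT // 2)),  # edge swipe from the right
}

def perform(function_name):
    FUNCTIONS[function_name]()
```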
FIG. 2B is a graphical depiction of the display 200 of FIG. 2A in zonal control mode 206 according to certain embodiments. The display 200 is separated into a first configuration 204 of eight zones 202; however, any number of zones 202 and any different configuration can be used. The zones 202 and/or lines between the zones 202 can, but need not, be presented to the user on the display device 104 (e.g., highlighting of a zone 202 or lines separating the zones 202). Zones 202 are used to enable non-contact control of the computer 100. Each zone 202 can be associated with a particular computer function. In some embodiments, the zones 202 are divided and/or located such that a “dead zone” of no functionality exists between some or all of the zones 202, to ensure that measurement errors and/or data noise do not cause undesired effects. In some embodiments, hysteresis can be used to avoid inadvertently selecting an undesired function (e.g., by increasing the boundaries of a zone 202 while the gaze target is in the zone 202, or by introducing a certain amount of delay when the gaze target moves out of a particular zone 202 before altering any performed function). -
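The dead-zone and hysteresis behavior described above can be sketched as follows; the Zone structure and the pixel thresholds are illustrative assumptions rather than values from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Zone:
    function: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x, y, margin=0.0):
        # A positive margin grows the zone (hysteresis); a negative margin
        # shrinks it, leaving a "dead zone" of no functionality at the borders.
        return (self.x0 - margin <= x <= self.x1 + margin
                and self.y0 - margin <= y <= self.y1 + margin)

def zone_at(x, y, zones, active=None, hysteresis=30, dead_zone=10):
    # Keep the currently active zone while the gaze target stays inside
    # its enlarged boundaries, so measurement noise does not flip the selection.
    if active is not None and active.contains(x, y, margin=hysteresis):
        return active
    # Otherwise require the gaze target to be well inside a zone.
    for zone in zones:
        if zone.contains(x, y, margin=-dead_zone):
            return zone
    return None  # the gaze target is in a dead zone between zones
```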
zone 202 can, but need not, be presented to the user on the display device 104 (e.g., a text box). Thecomputer 100 can include anon-contact input 106 that is an eye-tracking device. An eye-tracking device can detect eye indications of a user. As used herein, the term “eye indications” is inclusive of detecting the direction of a user's gaze, detecting changes in the direction of a user's gaze (e.g., eye movement), detecting blinking of one or both eyes, and detecting other information from a user's eye or eyes. Anon-contact target 208 is a computed location of where a user's non-contact action is directed. Thenon-contact target 208 can be graphically represented on the display, such as by a symbol shown inFIG. 2 . In the example of an eye-tracking device, thenon-contact target 208 is a gaze target, or the point where the user's gaze is directed. Thenon-contact target 208 can be indicated by 3-D gestures, facial orientation, or other non-contact actions. Thenon-contact target 208 can, but need not, be depicted in some fashion on the display 200 (e.g., presenting a symbol at the non-contact target or highlighting elements or zones at or near the non-contact target). - In some embodiments, a
zone 202 can be located outside of thedisplay 200 and thenon-contact target 208 need not be constrained to thedisplay 200 of thecomputer 100. For example, azone 202 can be located a distance to the left of adisplay device 104 and can be associated with a certain computer function. The user can perform the function in thatzone 202, as described in further detail below, by focusing gaze to the left of thedisplay device 104. Determination of azone 202 outside of thedisplay 200 can occur via an imaging device forming part of an eye tracking device or a separate imaging device. If a user's gaze is determined to be directed towards an area outside of thedisplay 200, the direction of the gaze can be determined as herein described and if the gaze target falls within the bounds of azone 202 outside of thedisplay 200, an appropriate function can be performed as further described herein. - In some embodiments, statistical analysis can be applied to the detected gaze target and/or detected movements of the gaze target in order to determine whether the gaze target is in a
particular zone 202. - In some embodiments, a lockout time can be implemented whereby if a user activates a function associated with a
zone 202, the function associated with thezone 202 cannot be act rated (e.g., activated in the same manner or in a different manner) until the expiration of a certain length of time. In some embodiments, after a user activates a function associated with azone 202, the size of thezone 202 decreases for a period of time such that a more deliberate gaze by the user is required to activate the function associated with thatparticular zone 202 again. -
FIG. 3 is a flow chart depicting a process 300 for zonal control according to certain embodiments. With reference to FIGS. 2B and 3, one embodiment of zonal control is discussed below. -
intent 310. The mode-enable signal ofintent 310 can be generated by the user performing a non-contact action or a contact-required action. Thecomputer 100 detects the mode-enable signal ofintent 310 atblock 302 and enters azonal control mode 206. Duringzone control mode 206, thecomputer 100 tracks thenon-contact target 208 atblock 304. When thecomputer 100 detects an activation signal ofintent 312 atblock 306, thecomputer 100, atblock 308, then performs a computer function associated with thezone 202 in which thenon-contact target 208 is located at the time of the activation signal ofintent 312. The activation signal ofintent 312 can, but need not, be the same type of signal of intent as the mode-enable signal ofintent 310. Examples of signals of intent include contact-required actions and non-contact actions. In some embodiments, thecomputer 100 is always in a zonal control mode 206 (i.e., no separate mode-enable signal ofintent 310 is necessary), whereupon receiving an activation signal ofintent 312, thecomputer 100 will perform the function associated with thezone 202 in which thenon-contact target 208 is located at the time of the activation signal ofintent 312. Thecomputer 100 can, but need not, provide visual feedback that an activation signal ofintent 312 was received, that thenon-contact target 208 was in aparticular zone 202, and/or that a particular function was activated. - In a first example, a user can speak out loud “zone mode” (i.e., perform a non-contact action) to generate the mode-enable signal of
intent 310. Upon detecting this mode-enable signal ofintent 310, thecomputer 100 enterszonal control mode 206. The user can then focus gaze somewhere in thezone 202 associated with the computer functionOpen Apps List 210. The user can dwell gaze (i.e., perform a non-contact action) over thezone 202 associated with the computer functionOpen Apps List 210 for a predetermined amount of time (e.g., 3 seconds) to generate an activation signal ofintent 312. Thecomputer 100 can detect that the user is dwelling gaze at least by detecting that thenon-contact target 208 that is a gaze target dwells at a location (e.g., does not move substantially or moves only as much as expected for a user attempting to look at the same location) for a predetermined amount of time. Upon detecting the activation signal ofintent 312, thecomputer 100 can perform theOpen Apps List 210 function (e.g., by directly performing the function or by simulating a touchscreen swipe up from the center of the Start screen). Thecomputer 100 can then exit out ofzonal control mode 206. - In a second example, a user can focus gaze somewhere in the
zone 202 associated with the computerfunction Display Charms 222. The user can depress a hardware button (i.e., perform a contact-required action) to generate an activation signal ofintent 312. Upon detecting the activation signal ofintent 312, thecomputer 100 can perform theDisplay Charms 222 function. In this second example, no mode-enable signal ofintent 310 is necessary. - Graphical representations of the
zones 202 may disappear upon an action by a user, or after a predetermined period of time. In the absence of graphical representations of thezones 202, thezonal control mode 206 still functions as described - Some examples of possible computer functions associated with
potential zones 202 includeOpen App Bar 214,Move App Bar 216, HideApp Bar 218,Previous App 220, Split Window/Close App 224, and others. - A signal of intent can be any non-contact action detectable by the
computer 100. A signal of intent can be any contact-required action detectable by thecomputer 100. In some embodiments, a signal of intent can be selection and/or activation of an icon in an input menu. - In some embodiments, the shape of the
zones 202 and computer functions associated with eachzone 202 can change depending on the state of thecomputer 100. For example, upon using zonal control to perform the computer functionOpen Apps List 210, the new window that appears can includedifferent zones 202 with different computer functions associated therewith.FIG. 4 is a graphical depiction of adisplay 400 as rendered or presented on thedisplay device 104 ofFIG. 1 while inzonal control mode 206 with asecond configuration 402 according to certain embodiments. Thedisplay 200 is separated into asecond configuration 402 ofzones 202. Thesecond configuration 402 ofzones 202 can be associated with the state of thecomputer 100 after theOpen Apps List 210 function has been performed. In this configuration, there arezones 202 associated with various computer functions, includingstart 212, displaycharms 222, scroll right 404, scroll left 406, zoom out 408, and zoom in 410. Thesecond configuration 402 need not be dependent on a new screen being displayed on thedisplay 200, but can be associated with any state of thecomputer 100. For example, the zoom in 410 and scroll left 406zones 202 may not be a part of thesecond configuration 402, until needed (e.g., a zoom out 408 or scroll right 404 function has been performed, respectively). Thezones 202 in thesecond configuration 402 can otherwise perform similarly to thezones 202 of thefirst configuration 204. - In some embodiments,
zones 202 can overlap, such that multiple computer functions are performed simultaneously from activation of thezones 202 when the gaze target is within bothzones 202. For example, if the overlappingzones 202 were associated with scrolling functions, such as a scroll up function and a scroll right function, activation of the zones 202 (e.g., by moving a gaze target into the zones) can result in the window scrolling diagonally up and to the right. -
FIGS. 8-10 demonstrate an embodiment whereby a zone is in the form of a menu overlaid atop a computer display. The menu may appear or disappear based on a contact or non-contact input action. The menu comprises options for selection representing computer functions. The options may be selected by gazing at an item and providing an activation signal of intent. By way of example, a user may gaze at an item representing a common computer function known as a “left click” and fixate at the item, thus providing the activation signal of intent. Once the activation signal of intent has been provided, the computer will perform a “left click” at the next location at which the user fixates or provides another activation signal of intent. In this manner, the user may select the function he or she desires to execute, and then select the location upon the display at which the function is to be executed. - In some embodiments, a user can look at a first zone, then look at a location away from the first zone to perform a computer function associated with the zone in which the non-contact target was located. For example, a user can look at a menu item as described above, initiate an activation signal of intent, then look at an icon, and then initiate a second activation signal of intent. In this example, the
computer 100 can determine the function to be performed on the icon based on where the user's gaze was located (i.e., at the zone). -
- Delay Warp
- The
computer 100 can be controlled through one or both of the non-contact input 106 and the contact-required input 108. -
FIG. 5 is a graphical depiction of a display 500 with a visual indicator 502 according to certain embodiments. The visual indicator 502 is used like a mouse cursor to help a user perform computer functions (e.g., clicking, dragging, and others). The visual indicator 502 is an indication of where the effect of an additional action (e.g., a mouse click, a touchpad tap, or other contact-required or non-contact action) will occur. The computer 100 can generate a visual indicator 502 on the display at an estimated location of the non-contact target 208, as described above. - In one embodiment, the
visual indicator 502 can be displayed at an estimated gaze target of the user. The estimated gaze target is calculated by an eye-tracking device or by a computer 100 using information from an eye-tracking device. The computer 100 contains programming 116 enabling the processor 102 to perform a delay warp, as described below. -
FIG. 6 is a flow chart diagram of delay warp 600 as performed by a computer 100 according to certain embodiments. A computer 100 can optionally perform a click according to input from a contact-required input, such as a computer mouse or a touchpad, at block 602. A user can perform an action which is detected at block 606. In one embodiment, upon detecting an action at block 606, the computer 100 can display a visual marker at an estimated gaze target of the user. The visual marker can be an indicator of where the cursor will move, as described herein. For example, the user can perform a contact-required action, such as touching a touch-sensitive device. Upon detecting the action at block 606, the computer 100 can delay, for a predetermined amount of time, at block 610. This delay can be utilized by the computer 100 to provide sufficient time for a user to alter the action (e.g., decide not to move the cursor) and for the computer 100 to be certain of the user's intention. At block 612, if the computer 100 still detects the action (e.g., the user continues to touch the touch-sensitive device), the computer 100 can move the visual indicator 502 to the gaze target at block 614. If the action is not still detected, the computer 100 can, at block 602, do nothing, go back to having a contact-required input move the visual indicator 502 (e.g., cursor), or perform a click at or move the visual indicator 502 to its original location. - In an embodiment, the
delay warp 600 additionally includes optional block 618, where the computer 100 determines whether the action is still detected (i.e., after the visual indicator 502 has moved to the gaze target). If the action is not still detected, such as if the user is no longer touching the touch-sensitive device, the computer 100 can perform additional functions as necessary (e.g., perform a “click” where the visual indicator 502 was last located, or do nothing) at path 620. However, if the action is still detected at optional block 618, the computer 100 can slowly move (e.g., with more precision) the visual indicator 502 (e.g., a cursor) according to movements of the user's gaze, a computer mouse, or a touchpad at optional block 622. -
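The flow of FIG. 6 can be condensed into a short sketch; every callback below (touch_down, gaze_target, move_cursor, click, cursor_pos) is a hypothetical stand-in for a platform service, since the disclosure describes behavior rather than an API:

```python
import time

WARP_DELAY = 0.2  # illustrative predetermined delay (e.g., 200 ms)

def delay_warp(touch_down, gaze_target, move_cursor, click, cursor_pos):
    # touch_down() -> bool; gaze_target() and cursor_pos() -> (x, y).
    target = gaze_target()     # sample the gaze target when the touch begins
    time.sleep(WARP_DELAY)     # the delay: lifting the finger aborts the warp
    if not touch_down():
        click(cursor_pos())    # aborted: e.g., click at the original location
        return
    move_cursor(target)        # action maintained: warp the cursor to the gaze
    while touch_down():
        # While contact continues, keep refining the position; a fuller
        # implementation would damp these movements for extra precision.
        move_cursor(gaze_target())
        time.sleep(0.016)
```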
element 504 on adisplay 500, such as an icon, the user can look at the icon and touch the touchpad for the predetermined period of time, such as 200 ms, after which thecomputer 100 will move thevisual indicator 502 to the icon. If the user, before the predetermined period of time has elapsed, decides not to have thevisual indicator 502 moved to the gaze target, the user can cease touching the touchpad. - In an embodiment, the user can touch the touchpad or click a mouse button and wait the predetermined period of time so that the
visual indicator 502 moves to the gaze target. Thereafter, the user can continue touching the touchpad or holding the mouse button while moving gaze away from the visual indicator 502 (e.g., above, below, or to the side) in order to move thevisual indicator 502 with fine-tune adjustments until thevisual indicator 502 is in a desired location, at which point the user can cease touching the touchpad or holding the mouse button in order to perform an action at the desired location (e.g., click an icon). - In some embodiments, the user can touch the touchpad or click a mouse button and wait the predetermined period of time so that the
visual indicator 502 moves to the gaze target. Thereafter, the user can continue touching the touchpad or holding the mouse button while moving the cursor with the touchpad or mouse in order to move the visual indicator 502 (e.g., cursor) with fine-tune adjustments until thevisual indicator 502 is in a desired location, at which point the user can cease touching the touchpad or holding the mouse button in order to perform an action at the desired location (e.g., click an icon). - A user can look at a desired screen element 504 (e.g., an icon, a window, or other graphical user interface (“GUI”) element) in order to direct the
visual indicator 502 to thatelement 504. In order to perform a desired computer function (e.g., a click), the user can perform an additional action (e.g., tap a touchpad). - In some embodiments, the
visual indicator 502 may not be regularly displayed, but as the user moves gaze around thedisplay 500, anyelements 504 at or adjacent the gaze target can be highlighted or otherwise distinguish theelement 504 as at or near the gaze target. - In some embodiments the use of a
visual indicator 502 enables a user to see the effect of non-contact actions on thecomputer 100 before performing additional actions (e.g., non-contact actions or contact-required actions). When a user intends to move thevisual indicator 502 or othergraphical element 504 on a display, the user looks at the desired destination of thevisual indicator 502. The eye-tracking device calculates an estimated gaze target based on the user's gaze. The user then activates anon-contact input 106 or a contact-requiredinput 108, for example by tapping a touchpad. For a predetermined period of time, for example 200 ms, thecomputer 100 does not perform a computer function. - During this predetermined time, the
visual indicator 502 is shown on the display 500 at the estimated gaze target. This visual indicator 502 or a separate visual marker can then demonstrate to the user the location to which the visual indicator 502 will be moved. If the user determines to proceed, the visual indicator 502 will be moved after a predetermined period of time. The user can indicate a desire to proceed by initiating an action (e.g., a contact-required action such as moving an input device) or by simply waiting for the predetermined period of time to expire. -
computer 100 to perform a specific function, such as tapping to open an icon, dragging such as dragging an item on a GUI, zooming upon an item, or others. Actions that are normally performed with an input device would be readily understood by a person skilled in the art. - If the user is not satisfied with the location of the
visual indicator 502, the user can determine that an adjustment is required in order to more accurately reflect the desired movement location of thevisual indicator 502. The user can gaze at a different location in order to change the gaze target, or the user can perform a small movement with an input device (e.g., move a computer mouse or move touch on a touchpad) to adjust the location of thevisual indicator 502 after thevisual indicator 502 has moved to the gaze target. - In this manner, natural interaction with a touchpad accommodates gaze information. If a user places their finger on a touchpad in order to perform a gesture such as a swipe or movement on the touchpad, the movement can override movement of the mouse cursor to the gaze location.
- In one embodiment, a
computer 100 includes a touchpad as a contact-required input device and an eye tracking device capable of determining a user's gaze target. - The
computer 100 utilizes input from both the touchpad and the eye tracking device to allow a user to navigate through user interfaces. Most frequently this is achieved through moving a visual indicator 502 on a display 500. - The
computer 100 utilizes gesture type commands used on the touchpad, for example a swipe across the touchpad by a user to move to the next element 504 in a series, or a pinch gesture on the touchpad to zoom a displayed element 504. -
computer 100 delays performing a computer function for a predetermined period of time. During this period of time, avisual indicator 502 is shown at an estimated gaze target of the user. The estimated gaze target is calculated based on information from the eye tracking device. - After the predetermined period of time expires, the computing system moves the location of the
visual indicator 502 on thedisplay 500 to the gaze target. The user can then move thevisual indicator 502 by moving their finger on the touchpad. - If the user does not wish for the
visual indicator 502 to be moved to the gaze target, the user can perform another action during the predetermined period of time—such as the aforementioned gesture type commands, or simply remove their finger from the touchpad to cancel any action. - In an embodiment, the
computer 100 can locate thevisual indicator 502 at anelement 504 in proximity to the actual gaze target. Thiselement 504 can be a Graphical User Interface (GUI) object, for example, such as a button text box, menu or the like. Thecomputer 100 can determine whichelement 504 at which to locate thevisual indicator 502 based on a weighting system whereby someelements 504 have predetermined weights higher thanother elements 504. For example, a button can have a higher weighting than a text box. The determination of whichelement 504 at which to place thevisual indicator 502 can also consider proximity to the gaze target. - In an embodiment, the
computer 100 can provide tactile or audible feedback indicating that a gaze target is able to be determined. In this way, the feedback will indicate to the user whether the system is functioning correctly and if not, it will allow the user to alter their behavior to accommodate the function or non-function of the eye tracking device. This feedback can be in the form of a touchpad providing haptic feedback when an estimated gaze position has been determined during a cursor movement procedure. -
FIG. 25 is a flow chart of a delay warp 2500 with a visual marker according to certain embodiments. The visual marker is an indication of where the visual indicator (e.g., cursor) might jump or “warp” under certain conditions. At block 2502, a contact-required action is detected, such as a user touching a touchpad. The computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2504. In some embodiments, the delay can be 0 seconds (i.e., no delay). After the delay, at block 2506, the computer 100 causes a visual marker to be displayed at the non-contact target (e.g., gaze target). In alternate embodiments, an additional delay can be incorporated after block 2506 and before block 2508. At block 2508, the computer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad. If the contact-required action is maintained, the computer 100, at block 2510, then ceases to display the visual marker and moves the visual indicator (e.g., cursor) to the non-contact target. If the contact-required action is not maintained at block 2508, the computer 100 can cease displaying the visual marker at block 2512 and execute a click at the location of the visual indicator (e.g., cursor) at block 2514. -
FIG. 26 is a flow chart of a delay warp 2600 without a visual marker according to certain embodiments. At block 2602, a contact-required action is detected, such as a user touching a touchpad. The computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2604. After the delay, at block 2606, the computer 100 moves the visual indicator (e.g., cursor) to the non-contact target (e.g., gaze target). In alternate embodiments, an additional delay can be incorporated after block 2606 and before block 2608. At block 2608, the computer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad. If the contact-required action is maintained, the process is finished at block 2610. If the contact-required action is not maintained at block 2608, the computer 100, at block 2612, can move the visual indicator back to its original position prior to the movement from block 2606. Next, the computer 100 can execute a click at the location of the visual indicator at block 2614. -
FIG. 27 is a flow chart of a delay warp 2700 with a hidden visual indicator according to certain embodiments. At block 2702, the visual indicator is in a hidden state. Block 2702 includes all instances where the visual indicator is hidden, including if it has never been displayed previously. At block 2704, a contact-required action is detected, such as a user touching a touchpad. The computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2706. After the delay, at optional block 2708, the computer 100 can display either a visual marker or the visual indicator (e.g., cursor) at the non-contact target (e.g., gaze target). In alternate embodiments, an additional delay can be incorporated after block 2708 and before block 2710. At block 2710, the computer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad. If the contact-required action is maintained, the computer 100 moves the visual indicator to the non-contact target at block 2712. If the contact-required action is not maintained at block 2710, the computer 100, at block 2714, executes a click at the non-contact target. -
FIG. 28 is a flow chart of a delay warp 2800 according to certain embodiments. At block 2802, a contact-required action is detected, such as a user touching a touchpad. The computer 100 waits for the duration of a predefined length of time (e.g., delay) at block 2804. At block 2806, the computer 100 determines whether the contact-required action includes signals to move the visual indicator (e.g., cursor). Such actions can include swiping a finger along a touchpad or moving a mouse. If the contact-required action includes signals to move the visual indicator (e.g., the user touches a touchpad and moves the finger around), the computer 100, at block 2808, moves the visual indicator pursuant to the contact-required action. If the contact-required action does not include signals to move the visual indicator (e.g., the user touches a touchpad and does not move the finger around), the computer, at optional block 2810, can show a visual marker or the visual indicator at the non-contact target (e.g., gaze target), but then determines, at block 2812, whether the contact-required action is maintained (e.g., the user touches and keeps touching the touchpad). If the contact-required action is not maintained, the computer 100, at block 2814, executes a click at the original location of the visual indicator. If the contact-required action is maintained, the computer 100, at block 2816, moves the visual indicator to the non-contact target. Then, at block 2818, the computer 100 determines whether the contact-required action now includes signals to move the visual indicator (e.g., the user, after seeing the visual indicator move to the non-contact target, begins moving the finger around on the touchpad). If the contact-required action does not now include signals to move the visual indicator, the computer finishes the process at block 2822. However, if the contact-required action does now include signals to move the visual indicator, the computer 100 moves the visual indicator pursuant to the new contact-required action at block 2820. Optionally, upon release of the contact-required action, a “click” or other function may be performed. In some embodiments, the movement of the visual indicator at block 2820 is slower than the movement of the visual indicator at block 2808. -
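A condensed sketch of this FIG. 28 decision flow follows; the touch and cursor objects and their methods are hypothetical, as is the delay value:

```python
import time

DELAY = 0.2  # illustrative predefined delay

def delay_warp_with_override(touch, gaze_target, cursor):
    # `touch` and `cursor` are assumed platform objects; the disclosure
    # describes behavior rather than an interface.
    time.sleep(DELAY)
    if touch.is_moving():                # contact includes movement signals:
        cursor.move_by(touch.delta())    # ordinary touchpad control wins
        return
    if not touch.is_down():              # released while stationary:
        cursor.click()                   # click at the cursor's original location
        return
    cursor.move_to(gaze_target())        # maintained and still: warp to gaze
    while touch.is_down():
        if touch.is_moving():            # post-warp motion fine-tunes the cursor,
            cursor.move_by(touch.delta(), gain=0.3)  # slower than the branch above
        time.sleep(0.016)
    cursor.click()                       # optionally, a click can fire on release
```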
optional block 2808, thecomputer 100 can display either a visual marker or the visual indicator (e.g., cursor) at the non-contact target (e.g., gaze target). Atblock 2810, thecomputer 100 determines if the contact-required action is maintained, such as if the user is still touching the touchpad. If the contact-required action is maintained, thecomputer 100 moves the visual indicator to the non-contact target atblock 2812. If the contact-required action is not maintained atblock 2808, thecomputer 100, atblock 2814, executes a click at the noxi-contact target. - Two-Step Click
- A user can perform non-contact actions detectable by a
computer 100 through anon-contact input 106, such as an eye-tracking device. In some embodiments, a user can direct gaze at anelements 504 on adisplay 500 and then perform an additional action in order to perform a computer function (e.g., a click) upon theelement 504 upon which the user's gaze is directed. Thecomputer 100 may not display any visual indication of the location of a user's gaze target. Although embodiments described herein refer to a single “additional action” following an initial non-contact action, is should be appreciated that multiple “additional actions” (e.g., a sequence of keystrokes or any sequential or simultaneous combination of non-contact and/or contact-required actions) may be required to trigger a particular computer function in other embodiments. -
FIG. 7A is a flow chart depicting a multi-step click functionality 700 according to some embodiments. In a general embodiment, the computer 100 detects a user's gaze target at block 702. The user's gaze target can be located adjacent to or away from a display. The computer 100 then detects a contact-required action (e.g., a button press or a touchpad tap) at block 704. The computer 100 then performs a computer function at block 706, dependent on the gaze target and the contact-required action. -
computer 100 highlights theelement 504 located at or near the gaze target. -
FIG. 7B is a flow chart depicting a multi-step click functionality 710 according to some embodiments. The computer 100 detects a user's gaze target at block 702. The computer 100 detects a first action (e.g., a button press or a touchpad tap) at block 704. At block 712, the computer 100 then determines whether there are multiple small elements 504 located sufficiently close together adjacent the gaze target. In some instances, multiple elements 504 include instances where the elements 504 are sufficiently close to one another that the computer 100 determines additional accuracy is necessary in order for the user to select the desired element 504. At block 714, the computer presents a zoomed image of a portion of the display near the user's gaze target. The portion of the display can be solely the elements 504 or can be the elements 504 and surrounding aspects of the display (e.g., backgrounds). The computer 100 continues to detect the user's gaze target at block 716. Optionally, the computer 100 can additionally highlight any element 504 selected by the user's gaze target at block 716. The computer 100 can detect a second action at block 718. Upon detecting the second action, the computer 100 can optionally highlight the element at the gaze target at block 708 and then perform a computer function dependent upon the element at the gaze target at block 706 (e.g., selecting the element located at the gaze target). In situations where the computer 100 does not detect multiple small elements at block 712, the computer can simply continue directly to optional block 708 and block 706. -
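The flow of FIG. 7B reduces to a compact sketch; each callback below is an assumed placeholder, and the ambiguity radius is illustrative:

```python
AMBIGUITY_RADIUS = 60  # assumed distance within which elements count as "close together"

def multi_step_click(get_gaze, wait_for_action, elements_near, zoom_to, click):
    gaze = get_gaze()
    wait_for_action()                               # first action (e.g., button press)
    candidates = elements_near(gaze, AMBIGUITY_RADIUS)
    if len(candidates) > 1:                         # too many small elements to be sure:
        zoom_to(gaze)                               # present a zoomed image of the region
        gaze = get_gaze()                           # re-detect gaze in the zoomed view
        wait_for_action()                           # second action confirms the choice
        candidates = elements_near(gaze, AMBIGUITY_RADIUS)
    if candidates:
        click(candidates[0])                        # perform the function on the target
```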
computer 100 can dynamically adapt to a user's desire to select small or numerous elements on a display using non-contact actions. If a user attempts to select small or numerous elements using non-contact actions, such as through eye-tracking, thecomputer 100 can dynamically zoom in to allow the user to have better control for picking the correct element. As described above, the first actions and second actions can be contact-required or non-contact actions. - In some embodiments, the first and second actions detected at blocks 704 and/or 718 can be the pressing of a button or touching of a touchpad. In some embodiments, the first and second action detected at blocks 704 and/or 718 can be releasing a button that has already been pressed and/or ceasing to touch a touchpad that has previously been touched. For example, a user can depress a button while moving gaze around a display and release the button when it is desired that the computer function take place. In some embodiments, the second action is a release of a button while the first action is a depression of the same button. Additionally, the first action and second action are generally the same type of actions (e.g., a button press), but need not be.
- In some embodiments, the
computer 100 can highlight the gaze target and/or the elements at or near the gaze target while the button is depressed. When the user looks at a group of elements that are sufficiently small and close together that thecomputer 100 will zoom into them (as described above with reference to block 714), thecomputer 100 can highlight the group of elements, instead of a single element. - In an embodiment, the
computer 100 can zoom in on the display when a user initiates an action (e.g., a contact-required action). Thecomputer 100 can zoom in on the display without first determining whether there are multiple small elements located near the gaze target. Thecomputer 100 can otherwise function as described above with reference toFIG. 7B . - In one embodiment, the
computer 100 can determine whichelement 504 at which to locate the gaze target based on a weighting system whereby some items have predetermined weights higher than other items. For example, a button can have a higher weighting than a text box. The determination of whichelement 504 at which to place the cursor can also consider proximity to the estimated gaze position. - In another embodiment, the
computer 100 can provide tactile or audible feedback indicating that an estimated gaze position is able to be determined. In this way, the feedback will indicate to the user whether the system is functioning correctly and if not it will allow the user to alter their behavior to accommodate the function or non-function of the eye tracking device. This feedback can be in the form of a touchpad providing haptic feedback when an estimated gaze position has been determined during a cursor movement procedure. -
FIG. 29 is a flow chart depicting a two-step click 2900 according to certain embodiments. A computer 100 can determine a non-contact target (e.g., gaze target) at block 2902 and detect a first action at block 2904. The first action can be a contact-required action (e.g., a button click) or a non-contact action. Optionally, multiple elements provided on a display can be highlighted at once as the computer 100 determines a non-contact target is located on or near the elements. At block 2906, the computer 100 determines if an element at the non-contact target can be reliably determined; in other words, the computer 100 determines whether it is able to determine, with enough certainty, which element the user is intending to target. The computer 100 can consider multiple parameters to determine which element or elements the user is intending to target. These parameters can be pre-set and/or user-set, but comprise two decision points. Firstly, the computer 100 determines expected tracking deviation based on factors that can include user preferences set by the user. Such preferences can include, for example, whether the user desires speed or accuracy; user eye tracking quality, including whether the user is wearing glasses, contact lenses, or the like; the quality of a calibration function mapping the user's gaze; estimates of offsets of the detected user's gaze relative to the actual location of the user's gaze; signal noise in the eye tracking system; the configuration, type, and frequency of the eye tracker; or a global parameter for determining expected gaze point deviation, which may override other factors. If the desired element can be reliably determined, the computer 100 can optionally highlight the element at the non-contact target at block 2908. The computer 100 can then perform a computer function associated with the element at the non-contact target (e.g., selecting the element or opening the element or others). Secondly, the computer 100 determines possible targets at block 2912 based on factors including geometries of elements near the gaze target, such as how close together multiple elements are and the size and/or shape of the elements; layout of the elements; visual point of gravity of the elements, representing weightings of the elements such that a gaze target may be weighted towards an element in a gravity-like manner; grouping criteria, including grouping elements of the same interaction type; contextual input from the user, such as a preference towards avoiding functions such as delete or the like; and whether, based on the user's gaze point, the user has seen an element highlight. At block 2914, the computer 100 determines the properties of a region in which to present a visualization of all potential elements. The computer 100 may consider various factors such as the size and layout of elements, including possible targets determined at block 2912; a maximum size of a visualization of spaced apart elements; a magnification level set by the user or predetermined by the computer 100; and grouping criteria for analyzing displayed elements. The computer 100, at block 2914, can then present a visualization of all potential elements identified in block 2912 with each element spaced further apart. Optionally, elements displayed at block 2914 can be highlighted as a user's gaze locates upon them. An example of spacing further apart at block 2914 can include simply zooming in on the area where the elements are located (i.e., increasing the space between the elements with respect to the size of the display device 104).
Another example of spacing further apart at block 2914 can include enlarging and/or moving each individual element further apart on the display to provide greater separation (e.g., moving four elements in a square configuration taking up a small portion of the display to a line formation extending across substantially most of the display). In subsequent blocks, the computer 100 can determine the non-contact target again and detect a second action in order to identify a targeted element. The computer 100 can optionally highlight the targeted element at block 2908. The computer 100 can perform a computer function associated with the targeted element at block 2910 (e.g., clicking the element). -
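The reliability test of block 2906 can be sketched as a comparison between an expected tracking deviation and the geometry of nearby elements. The formula and constants below are assumptions made for illustration, since the disclosure lists factors without prescribing any arithmetic:

```python
import math
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    cx: float
    cy: float
    radius: float  # coarse bounding radius of the on-screen element

def expected_deviation(prefers_speed=False, wears_glasses=False,
                       calibration_error=15.0, signal_noise=5.0):
    # Assumed combination of a few of the factors listed above.
    deviation = calibration_error + signal_noise
    if wears_glasses:
        deviation *= 1.5
    if prefers_speed:
        deviation *= 1.25
    return deviation

def resolve_target(gaze, elements):
    deviation = expected_deviation()
    hits = [e for e in elements
            if math.hypot(e.cx - gaze[0], e.cy - gaze[1]) <= deviation + e.radius]
    if len(hits) == 1:
        return hits[0]  # reliably determined: act on it directly
    return hits         # zero or several candidates: space them apart and retry
```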
FIG. 29 , atblock 2904 thecomputer 100 can receive a maintained first action, such as constant pressure on a touchpad, a held down button, or the like. As the user is maintaining a first action, the user is actively selecting elements and thus is prepared for multiple elements to be highlighted prior to block 2906 and atblock 2914. Atblock 2910 the user releases the maintained first action and thecomputer 100 performs a computer function associated with the element at the non-contact target. - In some embodiments, the
computer 100 is able to detect non-contact actions to switch the focus of windows in the display. In an embodiment, when a user looks at a non-active window and initiates an action (e.g., a contact-required action or a non-contact action), the non-active window becomes the active window. In this embodiment, an eye tracking device determines the location of a user's gaze relative to adisplay 200, if thecomputer 100 determines the gaze target to be located in a window or area not currently in an active state, the user can instruct thecomputer 100 to make the window or area active by providing a contact-required action whilst gazing at the non-active window or area. In alternate embodiments, the user can fixate their gaze within the non-active window or area for a predetermined period of time instead of providing a contact-required action, in order to make the window or area active. - In an embodiment, a user can scroll a window or screen area by looking towards an edge of the screen, window, or area and initiating an action such as a contact-required action. Depending on whether the user was looking towards the top, bottom, left, right, or diagonal edge of the screen, window, or area, the area or window can scroll up, down, left, right or diagonal. For example, if a user is looking towards the bottom of an area or window and initiates an action such as a contact-required action, the contents of the area or window at the bottom of the area or window can scroll upwards to the top of the window or area, effectively revealing new information from the bottom of the area or window. This functionality can operate in the same manner for every direction at which the user is gazing. For example if the user is gazing towards the left edge when initiating an action the information at the left edge will move towards the right edge and effectively scroll information from of the left of the area or window towards the right of the area or window. In an embodiment, when a user looks at a point within a window or area and initiates an action (e.g., a contact-required action), the
computer 100 will scroll the window or area so that point identified by the user is moved to a predetermined location in the window (e.g., the top of the window or the center of the window). In these embodiments, where an area such as an edge, top/bottom, left/right, or other is mentioned, thecomputer 100 can determine based on a gaze offset or deviation an area near the edge, top/bottom, left/right, or other in which to enact the functions described. - In some embodiments, the
computer 100 is able to directly scroll the window by a predetermined amount for gaze interactions. In alternate embodiments, thecomputer 100 uses the gaze interactions to emulate button presses, such as presses of arrow buttons (e.g., arrow up or arrow down) or page buttons (e.g., page up or page down), in order to scroll the window. - In some embodiments, the
computer 100 can provide a visual indication (e.g., a different cursor or other indication) that the gaze target is located in a particular zone or area defined for scrolling (e.g., near the top edge of the screen for scrolling up). - In an embodiment, a user can scroll a window by holding a contact-required input (e.g., a button or touchpad) while gazing in a direction of scroll (e.g., top, bottom, or side of the computer screen). A gaze to an edge of a screen, window, or area can scroll information from that edge towards the center of the screen, window, or area. The scrolling can be momentum based scrolling, meaning that the rate of scrolling will gradually increase to a predetermined maximum speed the longer the contact-required input is held (e.g., the longer the button is pressed or the longer the touchpad is touched without release). In some embodiments, once the button or touchpad is released, the scrolling will not cease immediately, but will rather slow rapidly until coming to a complete stop. The speed at which the scrolling increases whilst the button or touchpad is held down and at which the scrolling decreases after release can be adjusted by the
computer 100. Optionally the adjustment can be altered by a user. - In some embodiments, a user can scroll a window simply by looking at the edge of the window (e.g., the top, bottom, or sides of the window). The window can scroll by a predetermined increment for each look towards the edge of the window. In some embodiments, the scrolling will only happen if the user looks towards the edge of the window and simultaneously initiates an action, such as a non-contact action like a voice command. In other embodiments, the increment can depend on the location of the user's gaze and can update continuously. In this way natural scrolling is achieved, for example if a user gazes towards the edge of a map constantly the map scrolls by small increments to smoothly scroll the map. The scrolling can occur without the user simultaneously initiating an action.
- In some embodiments, a user can scroll a window by looking at a desired location and performing an action (e.g. pressing a button or saying “scroll”), at which point the
computer 100 will cause the portion of the window and/or display located at the gaze target to move to the center of the window and/or display. In some embodiments, acomputer 100 can determine whether to scroll or zoom, depending upon the location of the gaze target (e.g., location of the gaze target with respect to a window). For example if a user looks at a window containing a map and moves the gaze target to an edge of the map, pressing a button can cause the map to scroll so the targeted location is now at the center of the map. However, if the same user looks at or near the center of the map, pressing a button can cause the map to zoom. In some additional embodiments, the speed of an action (e.g., zooming or scrolling) can be controlled based on the location of the gaze target (e.g., location of the gaze target with respect to a window). For example, when a user looks at the edge of a window acrd presses a button, the window can scroll quickly, whereas if the user looks somewhere between the same edge of the window and the center of the window, the window would scroll less quickly. - In some embodiments, the scrolling is based on the length or pressure of contact by a user upon a key, button, or touchpad. For example, a window can scroll by greater increments when a user holds a key, button, or touchpad down longer.
- In various embodiments, scrolling can be terminated by performing or ceasing actions (e.g., contact-required actions). In some embodiments, scrolling is slowed before completely terminating. In some embodiments, scrolling can be slowed by the user gradually moving the gaze target away from a predetermined area (e.g., edge of the screen or area where the gaze target was upon initiation of scrolling).
- In some embodiments, a tap of a contact-required input (e.g., quick button press, quick tap on a touchpad, or movement of a computer mouse) can cause a visual indicator to appear on the display, or alternatively enact a click at the last known position of a visual indicator or at the gaze location. Thereafter, holding the finger on the contact-required input (e.g., holding a finger on a touchpad) can move the visual indicator to the gaze target.
- In some embodiments, movement of a computer mouse can cause the visual indicator to appear on the display at the gaze target. In some embodiments, after the visual indicator has appeared at the gaze target, use of a contact-required input (e.g., computer mouse) to move the visual indicator (e.g., cursor) can be slowed while the visual indicator is near the user's gaze target. In some embodiments, a user can initiate an additional action (e.g., a contact-required action such as pressing and holding a mouse button) on an element (e.g., an icon) while using a computer mouse. The user's gaze can then be moved towards a desired destination and the mouse button can be released in order to drag the element to the desired destination (e.g., to move an item into another folder). In this manner, a user can select an item on a display by locating a visual indicator on or near the item either by using a gaze-enabled visual indicator movement method described previously, or by moving a computer mouse, touchpad, or the like. The user can then hold down an activator such as a computer mouse or maintain contact on a touchpad to select the item; when the user releases the activator, the item moves to the location of the user's gaze on the display. Therefore, the user can "grab" an item such as an icon and "drag" the icon to the user's gaze location, whereupon on release of an activator the icon is relocated to the user's gaze location.
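- A minimal sketch of the "grab" and "drag" interaction described above follows; the gaze-tracker and item interfaces (current_target, move_to) are hypothetical names used only for illustration.

```python
class GazeDragController:
    """On activator press, grab the item under the visual indicator;
    on activator release, relocate the grabbed item to the gaze target."""

    def __init__(self, gaze_tracker):
        self.gaze_tracker = gaze_tracker  # assumed to expose current_target()
        self.grabbed_item = None

    def on_activator_down(self, item_under_indicator):
        # Grab the item under the visual indicator, if any.
        self.grabbed_item = item_under_indicator

    def on_activator_up(self):
        # Relocate the grabbed item to the user's gaze location.
        if self.grabbed_item is not None:
            x, y = self.gaze_tracker.current_target()
            self.grabbed_item.move_to(x, y)
            self.grabbed_item = None
```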
- In some embodiments, performing an action such as a contact-required action (e.g., clicking a mouse button) can cause the visual indicator to move to the gaze target. In some embodiments, holding the contact-required action (e.g., pressing and holding the mouse button) can move the visual indicator to the gaze target and allow the user to fine-tune the position of the visual indicator before releasing the contact-required action (e.g., releasing the mouse button) to perform an action at the visual indicator's location (e.g., select an element). While the contact-required action is being held, large movements of the gaze target can translate to smaller movements of the visual indicator, in order to allow a user to more finely adjust the location of the visual indicator.
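- The hold-to-fine-tune behavior described above might be sketched as follows; the 0.2 gain factor is an assumption chosen only to illustrate that large gaze movements map to small indicator movements.

```python
class FineTuneIndicator:
    """Indicator jumps to the gaze target when the contact-required action
    begins; while the action is held, gaze displacement is scaled down so
    the user can finely adjust the indicator before releasing."""

    GAIN = 0.2  # assumed fraction of gaze movement applied while held

    def __init__(self):
        self.pos = (0.0, 0.0)
        self.holding = False

    def on_action_pressed(self, gaze_pos):
        self.pos = gaze_pos   # snap to the gaze target
        self.holding = True

    def on_gaze_moved(self, old_gaze, new_gaze):
        if self.holding:
            dx = new_gaze[0] - old_gaze[0]
            dy = new_gaze[1] - old_gaze[1]
            self.pos = (self.pos[0] + self.GAIN * dx,
                        self.pos[1] + self.GAIN * dy)

    def on_action_released(self):
        # Perform the action (e.g., select an element) at the indicator.
        self.holding = False
        return self.pos
```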
- In an embodiment, initiating an action (e.g., a contact-required action) can open a menu at the gaze target. A user can look at the desired menu item and initiate a second action (e.g., second contact-required action or release of a maintained first contact-required action) to select that menu item. In some embodiments, holding gaze on or near the menu item for a predetermined period of time can act to select that menu item.
- In an embodiment, an edge menu or button can be opened (e.g., displayed) by holding a contact-required action, looking to the edge of the display, and releasing the contact-required action. In some cases, the
computer 100 presents a visual indication that opening a menu is possible when the user looks to the edge of the screen with the contact-required action held. If the user looks away from the edge of the display without releasing the contact-required action, the menu does not open. For example, the user can hold or maintain a contact-required action and look towards an edge of a display. A glow or other visual indicator can appear at the edge of the display indicating a menu can be opened. If the contact-required action is ceased while the glow or other visual indicator is present, the menu appears on screen. If the contact-required action is ceased while the glow or other visual indicator is not present, no menu appears on screen. The menu can occupy the full screen or part of the screen. The menu can alternatively be a button, for example a button representing a "back" movement in a web browser or the like. - In an embodiment, initiating an action (e.g., a contact-required action such as actuating a scroll wheel) will zoom the display in or out at the gaze target. In an embodiment, initiating an action (e.g., a double-tap on a touchpad) can zoom the display in at the gaze target. In an embodiment, initiating an action (e.g., tapping the touchpad while holding down a shift button) can zoom the display out at the gaze target. In a further embodiment, movement of two fingers towards each other such as a "pinch" movement on a touchpad, touch screen, or the like can enact a zoom in or out, and movement of two fingers away from each other can enact the opposite zoom in or out movement.
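- The hold-look-release edge-menu interaction described above is essentially a small state machine, sketched below; the show_glow, hide_glow, and open_menu callbacks are assumed user-interface hooks, not part of this disclosure.

```python
from enum import Enum, auto

class EdgeMenuState(Enum):
    IDLE = auto()        # no contact-required action held
    HELD = auto()        # action held, gaze away from the edge
    EDGE_ARMED = auto()  # action held, gaze at the edge (glow shown)

class EdgeMenuController:
    def __init__(self, show_glow, hide_glow, open_menu):
        self.state = EdgeMenuState.IDLE
        self.show_glow, self.hide_glow, self.open_menu = (
            show_glow, hide_glow, open_menu)

    def on_action_held(self):
        if self.state is EdgeMenuState.IDLE:
            self.state = EdgeMenuState.HELD

    def on_gaze_update(self, gaze_at_edge):
        if self.state is EdgeMenuState.HELD and gaze_at_edge:
            self.state = EdgeMenuState.EDGE_ARMED
            self.show_glow()   # indicate that a menu can be opened
        elif self.state is EdgeMenuState.EDGE_ARMED and not gaze_at_edge:
            self.state = EdgeMenuState.HELD
            self.hide_glow()

    def on_action_released(self):
        # The menu opens only if the action ceases while the glow is shown.
        if self.state is EdgeMenuState.EDGE_ARMED:
            self.hide_glow()
            self.open_menu()
        self.state = EdgeMenuState.IDLE
```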
- In an embodiment, in order to activate a non-contact input mode, the user looks towards the edge of the
display device 104. The computer 100 determines that the gaze target is at or near the edge of the display device 104 and activates a non-contact input mode. In some embodiments, the computer 100 displays an input menu over or adjacent the display 200. When the user looks away from the edge of the display 200, the input menu can disappear immediately, remain on the display 200 indefinitely, or disappear after a predetermined amount of time. The user can activate an icon on the input menu by performing a contact-required action while gazing at an icon or by using a non-contact activation method such as dwelling their gaze in the vicinity of an icon for a predetermined period of time, for example one second, or blinking an eye or eyes, which can be interpreted by the computer as an activation command. Each icon can be associated with a computer function. When an icon is activated, the computer 100 can provide an indication of activation, such as a change in the appearance of the icon, a sound, physical feedback (e.g., a haptic response), or other indication. - A place cursor icon can be activated to place the mouse cursor on a desired point or position. The place cursor icon can be used for mouse-over functions (e.g., functions where a mouse click is not desired). A gaze scroll icon can be activated to enable gaze-controlled scrolling within a scrollable window, as described in further detail below. A left click icon can be activated to perform a single left-click (e.g., emulate a physical left-click on an attached computer mouse). A double click icon can be activated to perform a double left-click (e.g., emulate a physical double-click on an attached computer mouse). A right click icon can be activated to perform a single right-click (e.g., emulate a physical right-click on an attached computer mouse). A gaze drag and drop icon can be activated to enable the gaze drag and drop mode. The gaze drag and drop mode allows a user to use non-contact input to emulate a drag and drop action on a physical mouse (e.g., click and hold the mouse, move the mouse, release the mouse), as described in further detail below. A gaze keyboard icon can be activated to open an on-screen, gaze-enabled keyboard for typing using gaze, as described in further detail below. A settings icon can be activated to open a settings window or dialog.
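- Dwell-based activation of an input-menu icon, as described above, might be sketched as follows. The one-second dwell follows the example given in the text; the 40-pixel tolerance and the icon interface (x, y, activate) are assumptions.

```python
import time

class DwellActivator:
    DWELL_SECONDS = 1.0   # example dwell period from the text
    TOLERANCE_PX = 40.0   # assumed hit radius around each icon

    def __init__(self, icons):
        self.icons = icons          # assumed: objects with .x, .y, .activate()
        self.candidate = None
        self.dwell_start = None

    def on_gaze(self, gx, gy):
        hit = next((i for i in self.icons
                    if abs(gx - i.x) <= self.TOLERANCE_PX
                    and abs(gy - i.y) <= self.TOLERANCE_PX), None)
        if hit is not self.candidate:
            # Gaze moved to a different icon (or off all icons): restart dwell.
            self.candidate = hit
            self.dwell_start = time.monotonic() if hit else None
        elif hit and time.monotonic() - self.dwell_start >= self.DWELL_SECONDS:
            hit.activate()          # e.g., emulate a left-click
            self.candidate, self.dwell_start = None, None
```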
- As described above, the gaze scroll icon can be activated to enable gaze-controlled scrolling. When gaze-controlled scrolling is activated, the user can scroll windows (e.g., up and down, as well as left and right) using non-contact inputs. In one embodiment, the user can place a scroll indicator on the window and look above the scroll indicator to scroll up, look below the scroll indicator to scroll down, look to the left of the scroll indicator to scroll left, and look to the right of the scroll indicator to scroll right. The user can place the scroll indicator by first enabling gaze-controlled scrolling (e.g., dwell gaze on the gaze scroll icon), then looking at any scrollable area and dwelling gaze upon that area until the scroll indicator appears. When the user desires to disable gaze-controlled scrolling, the user can gaze outside of the screen or back to the input menu.
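- Scrolling relative to a placed scroll indicator, as described above, reduces to computing a scroll vector from the offset between the gaze target and the indicator. In the sketch below, the dead zone, gain, and pixels-per-second units are assumptions.

```python
def gaze_scroll_vector(gaze, indicator, deadzone=30.0, gain=0.5):
    """Return (horizontal, vertical) scroll speeds: gaze above the
    indicator scrolls up, below scrolls down, left scrolls left, and
    right scrolls right, with speed growing with distance."""
    def axis(delta):
        if abs(delta) <= deadzone:
            return 0.0
        sign = 1.0 if delta > 0 else -1.0
        return sign * gain * (abs(delta) - deadzone)

    return (axis(gaze[0] - indicator[0]), axis(gaze[1] - indicator[1]))
```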
- As described above, the gaze drag and drop icon can be activated to enable the gaze drag and drop mode. When the gaze drag and drop mode is enabled, the user can gaze at a first location and provide a user signal (e.g., dwelling gaze, blinking, winking, blinking in a pattern, or using a contact input such as a button) which causes the computer to emulate a mouse click and hold at the first location. The user can then move gaze to a second location and provide a second user signal, which causes the computer to emulate moving the mouse to the second position and releasing the mouse button.
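- The two-signal gaze drag-and-drop emulation described above can be sketched as follows; mouse_down, mouse_move, and mouse_up stand in for whatever operating-system event-injection facility is available and are assumed names.

```python
class GazeDragAndDrop:
    """First user signal: emulate click-and-hold at the first gaze
    location. Second user signal: emulate moving the mouse to the
    second location and releasing the button."""

    def __init__(self, mouse_down, mouse_move, mouse_up):
        self.mouse_down, self.mouse_move, self.mouse_up = (
            mouse_down, mouse_move, mouse_up)
        self.drag_origin = None

    def on_user_signal(self, gaze_pos):
        """Called on each user signal (dwell, blink, wink, button, etc.)."""
        if self.drag_origin is None:
            self.mouse_down(*gaze_pos)   # press and hold at first location
            self.drag_origin = gaze_pos
        else:
            self.mouse_move(*gaze_pos)   # move to the second location
            self.mouse_up(*gaze_pos)     # release the emulated button
            self.drag_origin = None
```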
- In some embodiments, once a user has selected an icon on the input menu, the selected icon may not be de-selected unless a new selection on the input menu is made. In other embodiments, the icon can be de-selected, such as by gazing at the same icon again.
- In some embodiments, to facilitate using the action selected on the input menu, the
computer 100 can provide for a portion of the display to be zoomed (e.g., displayed at a lower resolution). For example, when the user selects the “left click” icon on the input menu and gazes at a portion of the display, an area around the gaze point on the display can zoom so that the user can then select the intended target for their action with greater ease by gazing at the enlarged portion of the display. - In some cases, the user can perform a certain action (e.g., a contact-required action or non-contact action) in order to select an area of the display upon which a computer function is to be performed, at which point the
computer 100 can display an input menu at or around that point to select the desired computer function. - In a further improvement, the input menu can also be provided external to the
display 200 and/or display device 104. In this manner, it can be provided on an input device such as an eye tracking device, on the housing of a display, or on a separate device. The menu can then comprise a separate display, or another means of conveying information to a user such as light-emitting diodes, switches, or the like. In another embodiment, the action of choosing an icon on the external input menu is shown as a transparent image of that icon at an appropriate position on the regular display. In some embodiments, the gaze target used to identify the desired icon to be activated can be located outside the display 200 and/or display device 104. - In some embodiments, a user can perform computer functions on or at the area at or near the gaze target via voice interaction. Once an area has been selected (e.g., by focusing gaze at the area), an action can be performed by the user speaking certain words such as "open", "click", and the like, which would be readily understood by a skilled addressee.
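- Voice interaction at the gaze target, as described above, amounts to dispatching a recognized word to a function applied at the gaze location. The command table and handler names below are hypothetical; the recognized word would come from whatever speech recognizer is in use.

```python
def handle_voice_command(word, gaze_target, actions):
    """Trigger the computer function matching a spoken word at the
    area selected by gaze."""
    handler = actions.get(word.lower())
    if handler is not None:
        handler(*gaze_target)

# Illustrative wiring (all handlers are hypothetical):
# actions = {"open": open_item_at, "click": click_at}
# handle_voice_command("click", gaze_tracker.current_target(), actions)
```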
- In some embodiments, selection of a computer function on an input menu can comprise multiple steps and can also comprise multiple menus. For example, a user can select an icon or menu using any input method herein described, and then select a second icon or menu for selecting a computer function to be performed.
- In some embodiments, a gaze-tracking device can discern the identity of the user (e.g., through a user's gaze patterns or biometric data such as distance between eyes and iris sizes) in order to customize functionality for that particular user (e.g., use particular menus and/or features or set up desired brightness settings or other settings).
- In some embodiments, a
computer 100 will perform certain functions based on a discernible series or sequence of gaze movements (e.g., movements of the gaze target). - In some embodiments, a
computer 100 can determine which computer functions are available to be performed in whole or in part by non-contact actions based on what elements are presented on the display 200. - In some embodiments, performing an action can cause information to be presented on the
display 200 based on the gaze target. For example, pressing a button can cause a list of active computer programs to appear on the display 200. A user can interact with the presented information (e.g., the list of active computer programs) by gazing at parts of the information. For example, a user can press a button to cause a list of active computer programs to appear on the display 200, then look at a particular program and release the button in order to cause that particular program to gain focus on the computer 100.
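- The hold-button program switcher described above might be sketched as follows; the program objects (screen_rect, focus) and the list-display callbacks are assumed interfaces used only for illustration.

```python
class GazeProgramSwitcher:
    """Pressing the button shows the list of active programs; releasing
    it gives focus to the program entry the user is gazing at."""

    def __init__(self, programs, show_list, hide_list):
        self.programs = programs    # assumed: .screen_rect(), .focus()
        self.show_list, self.hide_list = show_list, hide_list

    def on_button_pressed(self):
        self.show_list(self.programs)

    def on_button_released(self, gaze_pos):
        self.hide_list()
        for program in self.programs:
            if program.screen_rect().contains(*gaze_pos):
                program.focus()     # the gazed-at program gains focus
                break
```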
- Many examples presented herein are described with respect to gaze tracking. It will be understood that, where applicable, tracking of other non-contact actions (e.g., 3-D gestures or others) can be substituted for gaze tracking. When tracking of other non-contact actions is substituted for gaze tracking, references to gaze targets should be considered references to non-contact targets. - Location of a gaze target can be determined from detection of various actions, including movement of the eyes, movements of the face, and movements of facial features. For example, a scrolling function (e.g., scrolling a page up or down) can be controlled by the user's face tilting (e.g., up or down) while the user reads the
display 200, wherein the computer 100 does not control the scrolling function based on eye tracking at that time. - In some cases, camera-based gaze detection systems can rely on facial recognition processing to detect facial features such as the nose, mouth, distance between the two eyes, head pose, chin, etc. Combinations of these facial features can be used to determine the gaze target. For example, in embodiments where vertical scrolling (e.g., a scroll up function and/or a scroll down function) is to be performed based on face images from a camera, the detection of the gaze target can rely solely or in part on detected eyelid position(s). When the user gazes at the lower portion of the
display device 104, the eye will be detected as being more closed, whereas when the user gazes at the top of the display device 104, the eye will be detected as being more open. - Eyelid position detection is good for determining changes in gaze target in a vertical direction, but not as effective for determining changes in gaze target in a horizontal direction. To better determine changes in gaze target in a horizontal direction, images of the head pose can be used instead. In such cases, a gaze target can be determined to be within scroll zones only when the user's face is determined to be oriented in the general direction of the
display device 104. As a general rule, whenever a user looks at an object more than seven degrees off from a direct forward line of sight, the user's head will immediately turn in the direction of that object. Thus, a head pose indicating more than seven degrees off to a side from the display device 104 is an indication that the user is unlikely to be looking at the content (e.g., the display 200) presented on the display device 104.
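- The head-pose gate and the eyelid-based vertical estimate described above can be sketched as follows. The seven-degree threshold follows the text; the normalized eyelid-openness scale and the linear vertical mapping are assumptions.

```python
def face_oriented_at_display(head_yaw_deg):
    """Treat the gaze target as potentially within scroll zones only when
    the face is within roughly seven degrees of facing the display."""
    return abs(head_yaw_deg) <= 7.0

def vertical_gaze_estimate(eyelid_openness, screen_height):
    """Coarse vertical estimate from eyelid position: a more closed eye
    suggests gaze near the bottom of the display, a more open eye gaze
    near the top (openness normalized to 0..1, y = 0 at the top)."""
    openness = min(max(eyelid_openness, 0.0), 1.0)
    return (1.0 - openness) * screen_height
```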
- Depending on the sensitivity and accuracy of the gaze detection components, which can be dictated by camera resolution, processing power, available memory, and the like, a gaze target can occupy a smaller (e.g., more sensitive and/or accurate) or larger (e.g., less sensitive and/or accurate) area relative to the display device 104. Calibration of the gaze detection components can also play a role in the accuracy and sensitivity of gaze target calculations. Accuracy or sensitivity can dictate the relationship between an actual direction of a user's gaze and the calculated gaze target. The disclosed embodiments can function even if the relationship between the actual gaze direction and the calculated gaze target is not direct. - In some embodiments, the gaze target can be calibrated by using input from a touch screen to assist with calibration. For example, the
computer 100 can prompt the user to look at and touch the same point(s) on the display device 104. Alternatively, such a calibration process can be performed in the background without prompting the user or interrupting the user's normal interaction with the computer 100. For example, while normally operating the computer 100, a user will be pressing buttons, hyperlinks, and other portions of the display 200, display device 104, and/or computer 100 having known positions. It can be assumed that the user will normally also be looking at the buttons, hyperlinks, etc. at the same time. Thus, the computer 100 can recognize the touch point or click point as the direction of the user's gaze and then correct any discrepancies between the direction of the user's gaze and the calculated gaze target. Such a background calibration process can be helpful in order to slowly improve calibration as the user interacts with the computer 100 over time.
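- The background calibration described above can be approximated by folding each click-versus-gaze discrepancy into a running offset, as in the sketch below; the exponential smoothing factor is an assumption chosen so that calibration improves slowly over time.

```python
class BackgroundCalibrator:
    ALPHA = 0.05  # assumed small weight per observation

    def __init__(self):
        self.offset = (0.0, 0.0)

    def on_click(self, click_pos, raw_gaze_target):
        # Assume the user was looking at the point they clicked or touched.
        ex = click_pos[0] - raw_gaze_target[0]
        ey = click_pos[1] - raw_gaze_target[1]
        self.offset = (self.offset[0] + self.ALPHA * (ex - self.offset[0]),
                       self.offset[1] + self.ALPHA * (ey - self.offset[1]))

    def corrected(self, raw_gaze_target):
        # Apply the learned correction to subsequent gaze targets.
        return (raw_gaze_target[0] + self.offset[0],
                raw_gaze_target[1] + self.offset[1])
```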
- In some embodiments, a computer 100 is able to determine when a user is reading elements 504 on a display 200 rather than attempting to control the computer 100. For example, detection of whether a user is reading can be based on detecting and evaluating saccades and whether an eye fixates or dwells on or around a constant point on the display. This information can be used to determine indicators of reading as distinguished from a more fixed gaze. In some embodiments, the computer 100 is configured such that scrolling functions can be initiated even when the user is determined to be reading. For instance, when the user is looking at a map, the scrolling (e.g., panning) should be initiated relatively faster than when the user is reading text (e.g., a word-processor document). Thus, any dwell time before triggering a scroll function when reading text can be longer than for reviewing maps and other graphical content, and the scroll zones and/or scroll interactions can be chosen differently in each case. For example, the scroll zone(s) may have to be made larger in the case of a map or other graphical content in order to make the computer 100 sufficiently responsive, while scroll zone(s) for a text document may be smaller because a scroll action is typically not required until the user is reading text very close (e.g., 5 lines) to the bottom or top of a window.
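- The content-aware tuning described above might be sketched as follows. The five-line text zone follows the example in the text; the dwell times and the map zone size are assumptions.

```python
def scroll_config(content_type, window_height, line_height):
    """Return dwell time and scroll-zone size for the given content:
    graphical content (e.g., a map) gets larger zones and a shorter
    dwell so panning feels responsive; text scrolls only when reading
    near the top or bottom of the window."""
    if content_type == "graphical":
        return {"dwell_seconds": 0.2,                 # assumed
                "zone_height": 0.25 * window_height}  # assumed
    return {"dwell_seconds": 0.8,                     # assumed
            "zone_height": 5 * line_height}           # ~5 lines, per the text
```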
- Various embodiments are disclosed herein that can correlate certain actions with computer functions. In additional embodiments, where applicable, a computer 100 can be placed in one or more modes, wherein each mode enables different computer functions to be performed in response to a user performing various actions. - In some embodiments, the
computer 100 can be configured to use gaze data patterns (e.g., the frequency with which gaze targets appear in certain positions or locations relative to the display device 104) to determine with greater accuracy, based on statistical analysis, when a user is looking at a particular zone 202 or element 504.
FIGS. 8-24D depict graphical representations and flow charts of various embodiments and functionality disclosed herein. - Although embodiments have been described referencing contact-required and non-contact actions, it is intended to be understood that these actions can be interchanged. In other words, if an embodiment is described using a contact-required action such as movement of a mouse, touchpad contact, pressing of a button, or the like, it is intended that such an action can also be performed by using a non-contact method such as a voice command, gesture, gaze movement, or the like.
- All patents, publications and abstracts cited above are incorporated herein by reference in their entirety. Any headers used herein are for organizational purposes only and are not to be construed to limit the disclosure or claims in any way. Various embodiments have been described. It should be recognized that these embodiments are merely illustrative of the principles of the present disclosure. Numerous modifications and adaptations thereof will be readily apparent to those skilled in the art without departing from the spirit and scope of the present disclosure as defined in the following claims.
- Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
- The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general purpose computing apparatus to a specialized computing apparatus implementing one or more embodiments of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
- Embodiments of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
- The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited.
- As used below, any reference to a series of examples is to be understood as a reference to each of those examples disjunctively (e.g., “Examples 1-4” is to be understood as “Examples 1, 2, 3, or 4”).
- Example 1 is a method of controlling a computer. The method includes presenting a display having a visual indicator; detecting a gaze target of a user; detecting a contact-required action of a user; and performing a computer function in response to the contact-required action and the gaze target. Performing the computer function includes performing a first function. The first function can be scrolling a first portion of the display, in response to detection of the contact-required action, based on the location of the gaze target with respect to the display. The first function can be scrolling a second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display. The first function can be moving the visual indicator to the gaze target in response to the contact-required action. The first function can be moving the visual indicator at a first rate slower than a second rate of movement of the gaze target in response to continued detection of the contact-required action. The first function can be zooming a third portion of the display adjacent the gaze target in response to the contact-required action. The first function can be performing a second function, in response to the contact-required action, based on the gaze target when the gaze target is outside the display. The first function can be performing a second computer function, in response to the contact-required action, based on a sequence of movements of the gaze target.
- Example 2 is the method of example 1, where the first function is scrolling the second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display. The continued detection of the contact-required action is detection of continued touching of a touchpad.
- Example 3 is the method of examples 1 or 2, where the first function additionally includes increasing the momentum of the scrolling until detection of continued touching of the touchpad ceases.
- Example 4 is the method of example 1 where the first function is moving the visual indicator to the gaze target in response to the contact-required action. The contact-required action is one of the group consisting of touching a touchpad or moving a computer mouse.
- Example 5 is the method of example 1, where the first function is zooming a third portion of the display adjacent the gaze target in response to the contact-required action. The contact-required action is actuation of a scroll wheel.
- Example 6 is the method of example 1, where the first function is zooming a third portion of the display adjacent the gaze target in response to the contact-required action. The contact-required action is a combination of depressing a button and performing a second contact-required action.
- Example 7 is the method of example 1, where the contact-required action is touching a touch-sensitive device.
- Example 8 is a computing device having a computer including a display device for presenting a display having a visual indicator. The computer further includes a contact-required input and a non-contact input. The computer is programmed to detect a non-contact target of a user and detect a contact-required action of the user. The computer is further programmed to perform a computer function, in response to the contact-required action, based on the non-contact target, wherein performing the computer function includes performing a first function. The first function can be scrolling a first portion of the display, in response to the contact-required action, based on the location of the non-contact target with respect to the display. The first function can be scrolling a second portion of the display, in response to continued detection of the contact-required action, based on the location of the non-contact target with respect to the display. The first function can be moving the visual indicator to the non-contact target in response to the contact-required action. The first function can be moving the visual indicator at a first rate slower than a second rate of movement of the non-contact target in response to continued detection of the contact-required action. The first function can be zooming a third portion of the display adjacent the non-contact target in response to the contact-required action. The first function can be performing a second function, in response to the contact-required action, based on the non-contact target when the non-contact target is outside the display. The first function can be performing a second computer function, in response to the contact-required action, based on a sequence of movements of the non-contact target.
- Example 9 is the computing device of example 8, where the first function is scrolling the second portion of the display, in response to continued detection of the contact-required action, based on the location of the non-contact target with respect to the display. The continued detection of the contact-required action is detection of continued touching of a touchpad.
- Example 10 is the computing device of example 9, where the first function additionally includes increasing the momentum of the scrolling until detection of continued touching of the touchpad ceases.
- Example 11 is the computing device of example 8, where the first function is moving the visual indicator to the non-contact target in response to the contact-required action. The contact-required action is one of the group consisting of touching a touchpad or moving a computer mouse.
- Example 12 is the computing device of example 8, where the first function is zooming a third portion of the display adjacent the non-contact target in response to the contact-required action. The contact-required action is actuation of a scroll wheel.
- Example 13 is the computing device of example 8, where the first function is zooming a third portion of the display adjacent the non-contact target in response to the contact-required action. The contact-required action is a combination of depressing a button and performing a second contact-required action.
- Example 14 is the computing device of example 8, where the contact-required action is touching a touchpad.
- Example 15 is the computing device of example 8, where the non-contact input is a gaze-tracking device and the non-contact target is a gaze target.
- Example 16 is a system having a computer, the computer including a display device for presenting a display having a visual indicator; an eye-tracking device for detecting a gaze target; a contact-required input for detecting a contact-required action; and a processor operably connected to the eye-tracking device, contact-required input, and display device. The computer further includes programming enabling the processor to perform a computer function, in response to the contact-required action, based on the non-contact target, wherein performing the computer function includes performing a first function. The first function can be scrolling a first portion of the display, in response to the contact-required action, based on the location of the gaze target with respect to the display. The first function can be scrolling a second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display. The first function can be moving the visual indicator to the gaze target in response to the contact-required action. The first function can be moving the visual indicator at a first rate slower than a second rate of movement of the gaze target in response to continued detection of the contact-required action. The first function can be zooming a third portion of the display adjacent the gaze target in response to the contact-required action. The first function can be performing a second function, in response to the contact-required action, based on the gaze target when the gaze target is outside the display. The first function can be performing a second computer function, in response to the contact-required action, based on a sequence of movements of the gaze target.
- Example 17 is the system of example 16 where the first function is scrolling the second portion of the display, in response to continued detection of the contact-required action, based on the location of the gaze target with respect to the display. The continued detection of the contact-required action is detection of continued touching of a touchpad.
- Example 18 is the system of example 16, where the first function additionally includes increasing the momentum of the scrolling until detection of continued touching of the touchpad ceases.
- Example 19 is the system of example 16, where the first function is moving the visual indicator to the gaze target in response to the contact-required action. The contact-required action is one of the group consisting of touching a touchpad or moving a computer mouse.
- Example 20 is the system of example 16, where the first function is zooming a third portion of the display adjacent the gaze target in response to the contact-required action. The contact-required action can be actuation of a scroll wheel. The contact-required action can be a combination of depressing a button and performing a second contact-required action.
- Example 21 is the system of example 16, where the contact-required action is touching a touchpad.
Claims (28)
1. A method of controlling a computer, comprising:
detecting a non-contact target, wherein the non-contact target is indicative of a location of a first non-contact action performed by a user;
detecting an activation signal of intent; and
performing a computer function associated with a zone in which the non-contact target is or was located;
wherein performing the computer function is in response to detecting the activation signal of intent.
2. The method of claim 1 , wherein:
detecting the non-contact target is performed by an eye-tracking device; and
the non-contact target is a gaze target.
3. The method of claim 2 , wherein detecting the activation signal of intent is based on information provided by the eye-tracking device.
4. The method of claim 3 , wherein detecting the activation signal of intent is detecting the gaze target fixating at a location for a predetermined length of time.
5. The method of claim 3 , wherein the detecting the activation signal of intent is detecting a blink based on information provided by the eye-tracking device.
6. The method of claim 3 , wherein detecting the activation signal of intent is detecting a saccade based on information provided by the eye-tracking device.
7. The method of claim 1 , wherein the performing the computer function includes emulating a contact signal associated with the computer function.
8. The method of claim 1 additionally comprising detecting a mode-enable signal of intent before the performing the computer function.
9. The method of claim 1 , wherein the activation signal of intent is provided by contact with a physical device.
10. The method of claim 1 , further comprising presenting a graphical representation of at least one zone on a display.
11. The method of claim 1 , wherein the zone is located over elements presented on the display such that detecting an activation signal of intent does not trigger a function based on items displayed under the zone.
12. The method of claim 1 , wherein the zone is located outside the display.
13. The method of claim 1 additionally comprising:
visually presenting the zone on the display;
ceasing presenting the zone on the display.
14. The method of claim 1 wherein the activation signal of intent is a saccade wherein the non-contact target crosses into or out of the zone.
15. The method of claim 1 wherein the zone is adjacent an area with no associated computer function.
16. A computing device, comprising:
a computer including a display device for presenting a display and a non-contact input for detecting a non-contact action;
wherein the computer is programmed to:
detect a non-contact target through the non-contact input, the non-contact target indicative of a direction of a first non-contact action performed by a user;
detect an activation signal of intent; and
perform a computer function associated with a zone in which the non-contact target is located in response to the activation signal of intent.
17. The computing device of claim 16 , wherein the non-contact input is an eye-tracking device and the non-contact target is a gaze target.
18. The computing device of claim 17 , wherein the activation signal of intent is detected by the eye-tracking device.
19. The computing device of claim 17 , wherein the activation signal of intent is a blink.
20. The computing device of claim 17 , wherein the activation signal of intent is a signal generated in response to the computer detecting dwelling of the gaze target for a predetermined length of time.
21. The computing device of claim 16 , wherein the computer is operable to perform the computer function by emulating a contact signal.
22. The computing device of claim 16 , wherein the computer is programmed to not perform the computer function in response to the activation signal of intent until after detecting a mode-enable signal of intent.
23. A system, comprising:
a computer including:
a display device for presenting a display;
an eye-tracking device for detecting eye indications;
a processor operably connected to the eye-tracking device and the display device; and
programming enabling the processor to:
determine a gaze target from the eye indications;
determine a computer function associated with a zone in which the gaze target is located;
perform the computer function in response to an activating signal of intent.
24. The system of claim 23 , wherein the activating signal of intent is determined from the eye indications.
25. The system of claim 24 wherein the activating signal of intent is based on a dwelling of the gaze target for a predetermined length of time.
26. The system of claim 24 , wherein:
the eye indications include a blink; and
the activating signal of intent is based on the blink.
27. The system of claim 23 , wherein the programming enables the processor to perform the computer function by emulating a contact-required signal.
28. The system of claim 23 , wherein the programming enables the processor to not perform the computer function in response to the activating signal of intent until after detecting a mode-enable signal of intent.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/195,755 US20140247210A1 (en) | 2013-03-01 | 2014-03-03 | Zonal gaze driven interaction |
US15/446,843 US20170177078A1 (en) | 2013-03-01 | 2017-03-01 | Gaze based selection of a function from a menu |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361771659P | 2013-03-01 | 2013-03-01 | |
US201361905536P | 2013-11-18 | 2013-11-18 | |
US14/195,755 US20140247210A1 (en) | 2013-03-01 | 2014-03-03 | Zonal gaze driven interaction |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/446,843 Continuation US20170177078A1 (en) | 2013-03-01 | 2017-03-01 | Gaze based selection of a function from a menu |
Publications (1)
Publication Number | Publication Date |
---|---|
US20140247210A1 true US20140247210A1 (en) | 2014-09-04 |
Family
ID=50473763
Family Applications (11)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/195,789 Active US9619020B2 (en) | 2013-03-01 | 2014-03-03 | Delay warp gaze interaction |
US14/195,755 Abandoned US20140247210A1 (en) | 2013-03-01 | 2014-03-03 | Zonal gaze driven interaction |
US14/195,743 Abandoned US20140247232A1 (en) | 2013-03-01 | 2014-03-03 | Two step gaze interaction |
US14/986,141 Abandoned US20160116980A1 (en) | 2013-03-01 | 2015-12-31 | Two step gaze interaction |
US15/446,843 Abandoned US20170177078A1 (en) | 2013-03-01 | 2017-03-01 | Gaze based selection of a function from a menu |
US15/449,058 Active 2034-03-25 US10545574B2 (en) | 2013-03-01 | 2017-03-03 | Determining gaze target based on facial features |
US16/459,780 Abandoned US20190324534A1 (en) | 2013-03-01 | 2019-07-02 | Two Step Gaze Interaction |
US16/564,416 Abandoned US20200004331A1 (en) | 2013-03-01 | 2019-09-09 | Two Step Gaze Interaction |
US16/716,073 Active US11604510B2 (en) | 2013-03-01 | 2019-12-16 | Zonal gaze driven interaction |
US17/156,083 Abandoned US20210141451A1 (en) | 2013-03-01 | 2021-01-22 | Two Step Gaze Interaction |
US18/177,489 Active US11853477B2 (en) | 2013-03-01 | 2023-03-02 | Zonal gaze driven interaction |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/195,789 Active US9619020B2 (en) | 2013-03-01 | 2014-03-03 | Delay warp gaze interaction |
Family Applications After (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/195,743 Abandoned US20140247232A1 (en) | 2013-03-01 | 2014-03-03 | Two step gaze interaction |
US14/986,141 Abandoned US20160116980A1 (en) | 2013-03-01 | 2015-12-31 | Two step gaze interaction |
US15/446,843 Abandoned US20170177078A1 (en) | 2013-03-01 | 2017-03-01 | Gaze based selection of a function from a menu |
US15/449,058 Active 2034-03-25 US10545574B2 (en) | 2013-03-01 | 2017-03-03 | Determining gaze target based on facial features |
US16/459,780 Abandoned US20190324534A1 (en) | 2013-03-01 | 2019-07-02 | Two Step Gaze Interaction |
US16/564,416 Abandoned US20200004331A1 (en) | 2013-03-01 | 2019-09-09 | Two Step Gaze Interaction |
US16/716,073 Active US11604510B2 (en) | 2013-03-01 | 2019-12-16 | Zonal gaze driven interaction |
US17/156,083 Abandoned US20210141451A1 (en) | 2013-03-01 | 2021-01-22 | Two Step Gaze Interaction |
US18/177,489 Active US11853477B2 (en) | 2013-03-01 | 2023-03-02 | Zonal gaze driven interaction |
Country Status (6)
Country | Link |
---|---|
US (11) | US9619020B2 (en) |
EP (1) | EP2962175B1 (en) |
KR (1) | KR20160005013A (en) |
CN (1) | CN105339866B (en) |
ES (1) | ES2731560T3 (en) |
WO (1) | WO2014134623A1 (en) |
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150338915A1 (en) * | 2014-05-09 | 2015-11-26 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
WO2016008354A1 (en) * | 2014-07-14 | 2016-01-21 | Huawei Technologies Co., Ltd. | System and method for display enhancement |
US20160085408A1 (en) * | 2014-09-22 | 2016-03-24 | Lenovo (Beijing) Limited | Information processing method and electronic device thereof |
EP3002949A1 (en) * | 2014-10-01 | 2016-04-06 | Samsung Electronics Co., Ltd. | Display apparatus and method for controlling the same |
WO2017031089A1 (en) * | 2015-08-15 | 2017-02-23 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US9619020B2 (en) | 2013-03-01 | 2017-04-11 | Tobii Ab | Delay warp gaze interaction |
US20170123492A1 (en) * | 2014-05-09 | 2017-05-04 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US20170295177A1 (en) * | 2015-05-21 | 2017-10-12 | Tencent Technology (Shenzhen) Company Limited | Identity verification method, terminal, and server |
US20170308163A1 (en) * | 2014-06-19 | 2017-10-26 | Apple Inc. | User detection by a computing device |
US20170357440A1 (en) * | 2016-06-08 | 2017-12-14 | Qualcomm Incorporated | Providing Virtual Buttons in a Handheld Device |
US20170374212A1 (en) * | 2016-06-23 | 2017-12-28 | Fuji Xerox Co., Ltd. | Information processing apparatus, information processing system, and image forming apparatus |
CN107544432A (en) * | 2016-06-28 | 2018-01-05 | 发那科株式会社 | Life determining apparatus, life determining method and computer-readable recording medium |
WO2018031284A1 (en) * | 2016-08-08 | 2018-02-15 | Microsoft Technology Licensing, Llc | Interacting with a clipboard store |
US9952883B2 (en) | 2014-08-05 | 2018-04-24 | Tobii Ab | Dynamic determination of hardware |
US10025379B2 (en) | 2012-12-06 | 2018-07-17 | Google Llc | Eye tracking wearable devices and methods for use |
US20180205675A1 (en) * | 2017-01-17 | 2018-07-19 | Samsung Electronics Co., Ltd. | Message generation method and wearable electronic device for supporting the same |
US20180239422A1 (en) * | 2017-02-17 | 2018-08-23 | International Business Machines Corporation | Tracking eye movements with a smart device |
US20180275753A1 (en) * | 2017-03-23 | 2018-09-27 | Google Llc | Eye-signal augmented control |
US10248192B2 (en) | 2014-12-03 | 2019-04-02 | Microsoft Technology Licensing, Llc | Gaze target application launcher |
EP3451135A4 (en) * | 2016-04-26 | 2019-04-24 | Sony Corporation | Information processing device, information processing method, and program |
US20190155495A1 (en) * | 2017-11-22 | 2019-05-23 | Microsoft Technology Licensing, Llc | Dynamic device interaction adaptation based on user engagement |
US10317995B2 (en) | 2013-11-18 | 2019-06-11 | Tobii Ab | Component determination and gaze provoked interaction |
US20190187870A1 (en) * | 2017-12-20 | 2019-06-20 | International Business Machines Corporation | Utilizing biometric feedback to allow users to scroll content into a viewable display area |
US10353463B2 (en) * | 2016-03-16 | 2019-07-16 | RaayonNova LLC | Smart contact lens with eye driven control system and method |
US10366540B2 (en) * | 2017-03-23 | 2019-07-30 | Htc Corporation | Electronic apparatus and method for virtual reality or augmented reality system |
US10534526B2 (en) | 2013-03-13 | 2020-01-14 | Tobii Ab | Automatic scrolling based on gaze detection |
US10558262B2 (en) | 2013-11-18 | 2020-02-11 | Tobii Ab | Component determination and gaze provoked interaction |
US20200050280A1 (en) * | 2018-08-10 | 2020-02-13 | Beijing 7Invensun Technology Co., Ltd. | Operation instruction execution method and apparatus, user terminal and storage medium |
US20200070722A1 (en) * | 2016-12-13 | 2020-03-05 | International Automotive Components Group Gmbh | Interior trim part of motor vehicle |
US10719127B1 (en) * | 2018-08-29 | 2020-07-21 | Rockwell Collins, Inc. | Extended life display by utilizing eye tracking |
CN111602102A (en) * | 2018-02-06 | 2020-08-28 | 斯玛特艾公司 | Method and system for visual human-machine interaction |
US11009698B2 (en) * | 2019-03-13 | 2021-05-18 | Nick Cherukuri | Gaze-based user interface for augmented and mixed reality device |
US20210232219A1 (en) * | 2018-05-31 | 2021-07-29 | Sony Corporation | Information processing apparatus, information processing method, and program |
US11320974B2 (en) * | 2020-03-31 | 2022-05-03 | Tobii Ab | Method, computer program product and processing circuitry for pre-processing visualizable data |
US11320900B2 (en) | 2016-03-04 | 2022-05-03 | Magic Leap, Inc. | Current drain reduction in AR/VR display systems |
US11361540B2 (en) * | 2020-02-27 | 2022-06-14 | Samsung Electronics Co., Ltd. | Method and apparatus for predicting object of interest of user |
US11467408B2 (en) | 2016-03-25 | 2022-10-11 | Magic Leap, Inc. | Virtual and augmented reality systems and methods |
US11556175B2 (en) | 2021-04-19 | 2023-01-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Hands-free vehicle sensing and applications as well as supervised driving system using brainwave activity |
WO2023130148A1 (en) * | 2022-01-03 | 2023-07-06 | Apple Inc. | Devices, methods, and graphical user interfaces for navigating and inputting or revising content |
US11703990B2 (en) * | 2020-08-17 | 2023-07-18 | Microsoft Technology Licensing, Llc | Animated visual cues indicating the availability of associated content |
US11720171B2 (en) | 2020-09-25 | 2023-08-08 | Apple Inc. | Methods for navigating user interfaces |
US11966055B2 (en) | 2018-07-19 | 2024-04-23 | Magic Leap, Inc. | Content interaction driven by eye metrics |
US11972046B1 (en) * | 2022-11-03 | 2024-04-30 | Vincent Jiang | Human-machine interaction method and system based on eye movement tracking |
US12039142B2 (en) | 2020-06-26 | 2024-07-16 | Apple Inc. | Devices, methods and graphical user interfaces for content applications |
Families Citing this family (118)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US20140187322A1 (en) * | 2010-06-18 | 2014-07-03 | Alexander Luchinskiy | Method of Interaction with a Computer, Smartphone or Computer Game |
US9922651B1 (en) * | 2014-08-13 | 2018-03-20 | Rockwell Collins, Inc. | Avionics text entry, cursor control, and display format selection via voice recognition |
US10037086B2 (en) | 2011-11-04 | 2018-07-31 | Tobii Ab | Portable device |
US9406103B1 (en) | 2012-09-26 | 2016-08-02 | Amazon Technologies, Inc. | Inline message alert |
DE212014000045U1 (en) | 2013-02-07 | 2015-09-24 | Apple Inc. | Voice trigger for a digital assistant |
US10558272B2 (en) | 2013-06-20 | 2020-02-11 | Uday Parshionikar | Gesture control via eye tracking, head tracking, facial expressions and other user actions |
US10089786B2 (en) * | 2013-08-19 | 2018-10-02 | Qualcomm Incorporated | Automatic customization of graphical user interface for optical see-through head mounted display with user interaction tracking |
JP2016529635A (en) * | 2013-08-27 | 2016-09-23 | オークランド ユニサービシズ リミテッド | Gaze control interface method and system |
JP2015049544A (en) * | 2013-08-29 | 2015-03-16 | オリンパス株式会社 | Parameter change device and method |
KR20150031986A (en) * | 2013-09-17 | 2015-03-25 | 삼성전자주식회사 | Display apparatus and control method thereof |
US9468373B2 (en) | 2013-09-24 | 2016-10-18 | Sony Interactive Entertainment Inc. | Gaze tracking variations using dynamic lighting position |
WO2015048030A1 (en) * | 2013-09-24 | 2015-04-02 | Sony Computer Entertainment Inc. | Gaze tracking variations using visible lights or dots |
US9781360B2 (en) | 2013-09-24 | 2017-10-03 | Sony Interactive Entertainment Inc. | Gaze tracking variations using selective illumination |
US20150169048A1 (en) * | 2013-12-18 | 2015-06-18 | Lenovo (Singapore) Pte. Ltd. | Systems and methods to present information on device based on eye tracking |
US10180716B2 (en) | 2013-12-20 | 2019-01-15 | Lenovo (Singapore) Pte Ltd | Providing last known browsing location cue using movement-oriented biometric data |
US9633252B2 (en) | 2013-12-20 | 2017-04-25 | Lenovo (Singapore) Pte. Ltd. | Real-time detection of user intention based on kinematics analysis of movement-oriented biometric data |
KR20150083553A (en) * | 2014-01-10 | 2015-07-20 | 삼성전자주식회사 | Apparatus and method for processing input |
KR102254169B1 (en) * | 2014-01-16 | 2021-05-20 | 삼성전자주식회사 | Dispaly apparatus and controlling method thereof |
JP2015133088A (en) * | 2014-01-16 | 2015-07-23 | カシオ計算機株式会社 | Gui system, display processing device, input processing device, and program |
US11907421B1 (en) * | 2014-03-01 | 2024-02-20 | sigmund lindsay clements | Mixed reality glasses operating public devices with gaze and secondary user input |
KR20150108216A (en) * | 2014-03-17 | 2015-09-25 | 삼성전자주식회사 | Method for processing input and an electronic device thereof |
WO2015157308A1 (en) * | 2014-04-07 | 2015-10-15 | Cubic Corporation | Systems and methods for queue management |
US20150301598A1 (en) * | 2014-04-22 | 2015-10-22 | Kabushiki Kaisha Toshiba | Method, electronic device, and computer program product |
US10802582B1 (en) * | 2014-04-22 | 2020-10-13 | sigmund lindsay clements | Eye tracker in an augmented reality glasses for eye gaze to input displayed input icons |
KR102217336B1 (en) * | 2014-04-22 | 2021-02-19 | 엘지전자 주식회사 | Method for controlling mobile terminal |
US10416759B2 (en) * | 2014-05-13 | 2019-09-17 | Lenovo (Singapore) Pte. Ltd. | Eye tracking laser pointer |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
CN106464959B (en) * | 2014-06-10 | 2019-07-26 | 株式会社索思未来 | Semiconductor integrated circuit and the display device and control method for having the semiconductor integrated circuit |
USD754184S1 (en) * | 2014-06-23 | 2016-04-19 | Google Inc. | Portion of a display panel with an animated computer icon |
US10606920B2 (en) * | 2014-08-28 | 2020-03-31 | Avaya Inc. | Eye control of a text stream |
US9798383B2 (en) * | 2014-09-19 | 2017-10-24 | Intel Corporation | Facilitating dynamic eye torsion-based eye tracking on computing devices |
CN104331219A (en) * | 2014-10-29 | 2015-02-04 | 广东睿江科技有限公司 | Icon displaying method, device and system |
US10645218B2 (en) * | 2014-10-31 | 2020-05-05 | Avaya Inc. | Contact center interactive text stream wait treatments |
WO2016072965A1 (en) * | 2014-11-03 | 2016-05-12 | Bayerische Motoren Werke Aktiengesellschaft | Method and system for calibrating an eye tracking system |
US9535497B2 (en) | 2014-11-20 | 2017-01-03 | Lenovo (Singapore) Pte. Ltd. | Presentation of data on an at least partially transparent display based on user focus |
KR20160080851A (en) * | 2014-12-29 | 2016-07-08 | 엘지전자 주식회사 | Display apparatus and controlling method thereof |
WO2016112531A1 (en) * | 2015-01-16 | 2016-07-21 | Hewlett-Packard Development Company, L.P. | User gaze detection |
JP2016151798A (en) * | 2015-02-16 | 2016-08-22 | ソニー株式会社 | Information processing device, method, and program |
US10860094B2 (en) * | 2015-03-10 | 2020-12-08 | Lenovo (Singapore) Pte. Ltd. | Execution of function based on location of display at which a user is looking and manipulation of an input device |
FR3034215B1 (en) * | 2015-03-27 | 2018-06-15 | Valeo Comfort And Driving Assistance | CONTROL METHOD, CONTROL DEVICE, SYSTEM AND MOTOR VEHICLE COMPRISING SUCH A CONTROL DEVICE |
JP6553418B2 (en) * | 2015-06-12 | 2019-07-31 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America | Display control method, display control device and control program |
US10043281B2 (en) | 2015-06-14 | 2018-08-07 | Sony Interactive Entertainment Inc. | Apparatus and method for estimating eye gaze location |
KR20170130582A (en) * | 2015-08-04 | 2017-11-28 | 구글 엘엘씨 | Hover behavior for gaze interaction in virtual reality |
US10248280B2 (en) * | 2015-08-18 | 2019-04-02 | International Business Machines Corporation | Controlling input to a plurality of computer windows |
WO2017033777A1 (en) * | 2015-08-27 | 2017-03-02 | 株式会社コロプラ | Program for controlling head-mounted display system |
CN108604116A (en) * | 2015-09-24 | 2018-09-28 | 托比股份公司 | It can carry out the wearable device of eye tracks |
US10565446B2 (en) | 2015-09-24 | 2020-02-18 | Tobii Ab | Eye-tracking enabled wearable devices |
US10594974B2 (en) | 2016-04-07 | 2020-03-17 | Tobii Ab | Image sensor for vision based on human computer interaction |
WO2017186320A1 (en) | 2016-04-29 | 2017-11-02 | Tobii Ab | Eye-tracking enabled wearable devices |
US10678327B2 (en) * | 2016-08-01 | 2020-06-09 | Microsoft Technology Licensing, Llc | Split control focus during a sustained user interaction |
US10444983B2 (en) * | 2016-09-20 | 2019-10-15 | Rohde & Schwarz Gmbh & Co. Kg | Signal analyzing instrument with touch gesture control and method of operating thereof |
US10345898B2 (en) * | 2016-09-22 | 2019-07-09 | International Business Machines Corporation | Context selection based on user eye focus |
CN106407760B (en) * | 2016-09-22 | 2021-09-24 | 上海传英信息技术有限公司 | User terminal and application program hiding method |
CN107015637B (en) * | 2016-10-27 | 2020-05-05 | 阿里巴巴集团控股有限公司 | Input method and device in virtual reality scene |
CN106681509A (en) * | 2016-12-29 | 2017-05-17 | 北京七鑫易维信息技术有限公司 | Interface operating method and system |
CN106843400A (en) * | 2017-02-07 | 2017-06-13 | 宇龙计算机通信科技(深圳)有限公司 | A kind of terminal and the method for information display based on terminal |
US10244204B2 (en) * | 2017-03-22 | 2019-03-26 | International Business Machines Corporation | Dynamic projection of communication data |
US10248197B2 (en) * | 2017-04-27 | 2019-04-02 | Imam Abdulrahman Bin Faisal University | Systems and methodologies for real time eye tracking for electronic device interaction |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
US10303715B2 (en) | 2017-05-16 | 2019-05-28 | Apple Inc. | Intelligent automated assistant for media exploration |
US10515270B2 (en) * | 2017-07-12 | 2019-12-24 | Lenovo (Singapore) Pte. Ltd. | Systems and methods to enable and disable scrolling using camera input |
US10496162B2 (en) * | 2017-07-26 | 2019-12-03 | Microsoft Technology Licensing, Llc | Controlling a computer using eyegaze and dwell |
CN107390874A (en) * | 2017-07-27 | 2017-11-24 | 深圳市泰衡诺科技有限公司 | A kind of intelligent terminal control method and control device based on human eye |
KR20200101906A (en) | 2017-08-23 | 2020-08-28 | 뉴레이블 인크. | Brain-computer interface with high-speed eye tracking features |
US11023040B2 (en) | 2017-09-21 | 2021-06-01 | Tobii Ab | Systems and methods for interacting with a computing device using gaze information |
CN111052046B (en) | 2017-09-29 | 2022-06-03 | 苹果公司 | Accessing functionality of an external device using a real-world interface |
US10678116B1 (en) | 2017-11-09 | 2020-06-09 | Facebook Technologies, Llc | Active multi-color PBP elements |
JP7496776B2 (en) * | 2017-11-13 | 2024-06-07 | ニューラブル インコーポレイテッド | Brain-Computer Interface with Adaptation for Fast, Accurate and Intuitive User Interaction - Patent application |
CN108171673B (en) * | 2018-01-12 | 2024-01-23 | 京东方科技集团股份有限公司 | Image processing method and device, vehicle-mounted head-up display system and vehicle |
JP2021511567A (en) | 2018-01-18 | 2021-05-06 | ニューラブル インコーポレイテッド | Brain-computer interface with adaptation for fast, accurate, and intuitive user interaction |
US11194161B2 (en) | 2018-02-09 | 2021-12-07 | Pupil Labs Gmbh | Devices, systems and methods for predicting gaze-related parameters |
US11556741B2 (en) | 2018-02-09 | 2023-01-17 | Pupil Labs Gmbh | Devices, systems and methods for predicting gaze-related parameters using a neural network |
US11393251B2 (en) | 2018-02-09 | 2022-07-19 | Pupil Labs Gmbh | Devices, systems and methods for predicting gaze-related parameters |
US20190253700A1 (en) | 2018-02-15 | 2019-08-15 | Tobii Ab | Systems and methods for calibrating image sensors in wearable apparatuses |
US10713813B2 (en) | 2018-02-22 | 2020-07-14 | Innodem Neurosciences | Eye tracking method and system |
US10671890B2 (en) | 2018-03-30 | 2020-06-02 | Tobii Ab | Training of a neural network for three dimensional (3D) gaze prediction |
WO2019190561A1 (en) | 2018-03-30 | 2019-10-03 | Tobii Ab | Deep learning for three dimensional (3d) gaze prediction |
US10534982B2 (en) | 2018-03-30 | 2020-01-14 | Tobii Ab | Neural network training for three dimensional (3D) gaze prediction with calibration parameters |
US10558895B2 (en) | 2018-03-30 | 2020-02-11 | Tobii Ab | Deep learning for three dimensional (3D) gaze prediction |
CN108520728B (en) * | 2018-04-20 | 2020-08-04 | 京东方科技集团股份有限公司 | Backlight adjusting method and device, computing device, display device and storage medium |
US10871874B2 (en) | 2018-05-09 | 2020-12-22 | Mirametrix Inc. | System and methods for device interaction using a pointing device and attention sensing device |
CN112041788B (en) * | 2018-05-09 | 2024-05-03 | 苹果公司 | Selecting text input fields using eye gaze |
US10528131B2 (en) | 2018-05-16 | 2020-01-07 | Tobii Ab | Method to reliably detect correlations between gaze and stimuli |
WO2019221724A1 (en) | 2018-05-16 | 2019-11-21 | Tobii Ab | Method to reliably detect correlations between gaze and stimuli |
US20210256353A1 (en) | 2018-05-17 | 2021-08-19 | Tobii Ab | Autoencoding generative adversarial network for augmenting training data usable to train predictive models |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
CN112703464B (en) | 2018-07-20 | 2024-05-24 | 托比股份公司 | Distributed gaze point rendering based on user gaze |
US10664050B2 (en) | 2018-09-21 | 2020-05-26 | Neurable Inc. | Human-computer interface using high-speed and accurate tracking of user interactions |
US11353952B2 (en) | 2018-11-26 | 2022-06-07 | Tobii Ab | Controlling illuminators for optimal glints |
CN109683705A (en) * | 2018-11-30 | 2019-04-26 | 北京七鑫易维信息技术有限公司 | Method, device, and system for controlling interactive controls through gaze fixation |
SE542553C2 (en) * | 2018-12-17 | 2020-06-02 | Tobii Ab | Gaze tracking via tracing of light paths |
US10969910B2 (en) * | 2018-12-18 | 2021-04-06 | Ford Global Technologies, Llc | Variable size user input device for vehicle |
CN109799908B (en) * | 2019-01-02 | 2022-04-01 | 东南大学 | Image zooming and dragging method based on eye movement signal |
US11537202B2 (en) | 2019-01-16 | 2022-12-27 | Pupil Labs Gmbh | Methods for generating calibration data for head-wearable devices and eye tracking system |
CN110007766A (en) * | 2019-04-15 | 2019-07-12 | 中国航天员科研训练中心 | Gaze cursor display and control method and system |
EP3979896A1 (en) | 2019-06-05 | 2022-04-13 | Pupil Labs GmbH | Devices, systems and methods for predicting gaze-related parameters |
US11907417B2 (en) | 2019-07-25 | 2024-02-20 | Tectus Corporation | Glance and reveal within a virtual environment |
US11650660B2 (en) * | 2019-08-02 | 2023-05-16 | Canon Kabushiki Kaisha | Electronic device, control method, and non-transitory computer readable medium |
CN112584280B (en) * | 2019-09-27 | 2022-11-29 | 百度在线网络技术(北京)有限公司 | Control method, apparatus, device, and medium for a smart device |
US10901505B1 (en) * | 2019-10-24 | 2021-01-26 | Tectus Corporation | Eye-based activation and tool selection systems and methods |
US11662807B2 (en) | 2020-01-06 | 2023-05-30 | Tectus Corporation | Eye-tracking user interface for virtual tool control |
USD931896S1 (en) * | 2019-11-20 | 2021-09-28 | Beijing Zhangdianzishi Technology Co., Ltd. | Display screen or portion thereof with an animated graphical user interface |
US10955988B1 (en) | 2020-02-14 | 2021-03-23 | Lenovo (Singapore) Pte. Ltd. | Execution of function based on user looking at one area of display while touching another area of display |
US11695758B2 (en) * | 2020-02-24 | 2023-07-04 | International Business Machines Corporation | Second factor authentication of electronic devices |
GB2597533B (en) * | 2020-07-28 | 2022-11-23 | Sony Interactive Entertainment Inc | Gaze tracking system and method |
JP7034228B1 (en) * | 2020-09-30 | 2022-03-11 | 株式会社ドワンゴ | Eye tracking system, eye tracking method, and eye tracking program |
US11503998B1 (en) | 2021-05-05 | 2022-11-22 | Innodem Neurosciences | Method and a system for detection of eye gaze-pattern abnormalities and related neurological diseases |
US20220397975A1 (en) * | 2021-06-09 | 2022-12-15 | Bayerische Motoren Werke Aktiengesellschaft | Method, apparatus, and computer program for touch stabilization |
US20230030433A1 (en) * | 2021-07-27 | 2023-02-02 | App-Pop-Up Inc. | System and method for adding and simultaneously displaying auxiliary content to main content displayed via a graphical user interface (gui) |
JP2023024153A (en) * | 2021-08-06 | 2023-02-16 | トヨタ自動車株式会社 | Information input system |
CN113610897A (en) * | 2021-08-19 | 2021-11-05 | 北京字节跳动网络技术有限公司 | Method, apparatus, and device for testing a cursor control device |
US20230081605A1 (en) * | 2021-09-16 | 2023-03-16 | Apple Inc. | Digital assistant for moving and copying graphical elements |
US11592899B1 (en) | 2021-10-28 | 2023-02-28 | Tectus Corporation | Button activation within an eye-controlled user interface |
US11619994B1 (en) | 2022-01-14 | 2023-04-04 | Tectus Corporation | Control of an electronic contact lens using pitch-based eye gestures |
US11874961B2 (en) | 2022-05-09 | 2024-01-16 | Tectus Corporation | Managing display of an icon in an eye tracking augmented reality device |
US12061776B2 (en) * | 2022-06-24 | 2024-08-13 | Google Llc | List navigation with eye tracking |
US20240111361A1 (en) * | 2022-09-27 | 2024-04-04 | Tobii Dynavox Ab | Method, System, and Computer Program Product for Drawing and Fine-Tuned Motor Controls |
Family Cites Families (121)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5103498A (en) | 1990-08-02 | 1992-04-07 | Tandy Corporation | Intelligent help system |
US5390281A (en) | 1992-05-27 | 1995-02-14 | Apple Computer, Inc. | Method and apparatus for deducing user intent and providing computer implemented services |
US5471542A (en) | 1993-09-27 | 1995-11-28 | Ragland; Richard R. | Point-of-gaze tracker |
EP1515248A3 (en) | 1994-12-16 | 2005-07-20 | Canon Kabushiki Kaisha | Hierarchical data display method and information processing system for realizing it |
US6393584B1 (en) | 1995-04-26 | 2002-05-21 | International Business Machines Corporation | Method and system for efficiently saving the operating state of a data processing system |
US6011555A (en) | 1996-04-03 | 2000-01-04 | International Business Machine Corp. | Method and apparatus for a drop down control that changes contents dynamically |
US5835083A (en) | 1996-05-30 | 1998-11-10 | Sun Microsystems, Inc. | Eyetrack-driven illumination and information display |
US5731805A (en) * | 1996-06-25 | 1998-03-24 | Sun Microsystems, Inc. | Method and apparatus for eyetrack-driven text enlargement |
US5850211A (en) | 1996-06-26 | 1998-12-15 | Sun Microsystems, Inc. | Eyetrack-driven scrolling |
US6021403A (en) | 1996-07-19 | 2000-02-01 | Microsoft Corporation | Intelligent user assistance facility |
US6351273B1 (en) | 1997-04-30 | 2002-02-26 | Jerome H. Lemelson | System and methods for controlling automatic scrolling of information on a display or screen |
US6882354B1 (en) | 1997-09-17 | 2005-04-19 | Sun Microsystems, Inc. | Scroll bars with user feedback |
US6067565A (en) | 1998-01-15 | 2000-05-23 | Microsoft Corporation | Technique for prefetching a web page of potential future interest in lieu of continuing a current information download |
US6085226A (en) | 1998-01-15 | 2000-07-04 | Microsoft Corporation | Method and apparatus for utility-directed prefetching of web pages into local cache using continual computation and user models |
TW405135B (en) | 1998-03-19 | 2000-09-11 | Hitachi Ltd | Vacuum insulated switch apparatus |
US6204828B1 (en) * | 1998-03-31 | 2001-03-20 | International Business Machines Corporation | Integrated gaze/manual cursor positioning system |
US7634528B2 (en) | 2000-03-16 | 2009-12-15 | Microsoft Corporation | Harnessing information about the timing of a user's client-server interactions to enhance messaging and collaboration services |
US20070078552A1 (en) | 2006-01-13 | 2007-04-05 | Outland Research, Llc | Gaze-based power conservation for portable media players |
JP4693329B2 (en) | 2000-05-16 | 2011-06-01 | スイスコム・アクチエンゲゼルシヤフト | Command input method and terminal device |
DK1285409T3 (en) | 2000-05-16 | 2005-08-22 | Swisscom Mobile Ag | Process of biometric identification and authentication |
US6603491B2 (en) | 2000-05-26 | 2003-08-05 | Jerome H. Lemelson | System and methods for controlling automatic scrolling of information on a display or screen |
US6873314B1 (en) | 2000-08-29 | 2005-03-29 | International Business Machines Corporation | Method and system for the recognition of reading, skimming and scanning from eye-gaze patterns |
US7091928B2 (en) | 2001-03-02 | 2006-08-15 | Rajasingham Arjuna Indraeswara | Intelligent eye |
US7013258B1 (en) | 2001-03-07 | 2006-03-14 | Lenovo (Singapore) Pte. Ltd. | System and method for accelerating Chinese text input |
US6578962B1 (en) | 2001-04-27 | 2003-06-17 | International Business Machines Corporation | Calibration-free eye gaze tracking |
US6886137B2 (en) | 2001-05-29 | 2005-04-26 | International Business Machines Corporation | Eye gaze control of dynamic information presentation |
JP4228617B2 (en) | 2001-09-07 | 2009-02-25 | ソニー株式会社 | Information processing apparatus and information processing method |
US20030052903A1 (en) | 2001-09-20 | 2003-03-20 | Weast John C. | Method and apparatus for focus based lighting |
SE524003C2 (en) | 2002-11-21 | 2004-06-15 | Tobii Technology Ab | Procedure and facility for detecting and following an eye and its angle of view |
US7379560B2 (en) | 2003-03-05 | 2008-05-27 | Intel Corporation | Method and apparatus for monitoring human attention in dynamic power management |
US20050047629A1 (en) * | 2003-08-25 | 2005-03-03 | International Business Machines Corporation | System and method for selectively expanding or contracting a portion of a display using eye-gaze tracking |
US7365738B2 (en) | 2003-12-02 | 2008-04-29 | International Business Machines Corporation | Guides and indicators for eye movement monitoring systems |
US7317449B2 (en) | 2004-03-02 | 2008-01-08 | Microsoft Corporation | Key-based advanced navigation techniques |
US9076343B2 (en) | 2004-04-06 | 2015-07-07 | International Business Machines Corporation | Self-service system for education |
US7486302B2 (en) | 2004-04-14 | 2009-02-03 | Noregin Assets N.V., L.L.C. | Fisheye lens graphical user interfaces |
DK1607840T3 (en) | 2004-06-18 | 2015-02-16 | Tobii Technology Ab | Eye control of a computer device |
US20060066567A1 (en) | 2004-09-29 | 2006-03-30 | Scharenbroch Gregory K | System and method of controlling scrolling text display |
US7614011B2 (en) | 2004-10-21 | 2009-11-03 | International Business Machines Corporation | Apparatus and method for display power saving |
ITFI20040223A1 (en) | 2004-10-29 | 2005-01-29 | Sr Labs S R L | METHOD AND INTEGRATED VISUALIZATION, PROCESSING AND ANALYSIS SYSTEM OF MEDICAL IMAGES |
JP4356594B2 (en) | 2004-11-22 | 2009-11-04 | ソニー株式会社 | Display device, display method, display program, and recording medium on which display program is recorded |
US20060192775A1 (en) | 2005-02-25 | 2006-08-31 | Microsoft Corporation | Using detected visual cues to change computer system operating states |
TWI405135B (en) | 2005-05-17 | 2013-08-11 | Ibm | System, method and recording medium |
US7339834B2 (en) | 2005-06-03 | 2008-03-04 | Sandisk Corporation | Starting program voltage shift with cycling of non-volatile memory |
US20060256133A1 (en) * | 2005-11-05 | 2006-11-16 | Outland Research | Gaze-responsive video advertisement display |
JP5036177B2 (en) | 2005-12-12 | 2012-09-26 | オリンパス株式会社 | Information display device |
WO2007089198A1 (en) | 2006-02-01 | 2007-08-09 | Tobii Technology Ab | Generation of graphical feedback in a computer system |
US8793620B2 (en) | 2011-04-21 | 2014-07-29 | Sony Computer Entertainment Inc. | Gaze-assisted computer interface |
FR2898289B1 (en) | 2006-03-10 | 2009-01-30 | Alcatel Sa | INTERFACE STRUCTURE BETWEEN TWO MECHANICAL PIECES IN MOTION, METHOD FOR ITS IMPLEMENTATION, AND APPLICATION TO VACUUM PUMPS |
US9407747B2 (en) | 2006-03-14 | 2016-08-02 | Nokia Technologies Oy | Mobile device and method |
GB0618979D0 (en) | 2006-09-27 | 2006-11-08 | Malvern Scient Solutions Ltd | Cursor control method |
US8756516B2 (en) | 2006-10-31 | 2014-06-17 | Scenera Technologies, Llc | Methods, systems, and computer program products for interacting simultaneously with multiple application programs |
US20080114614A1 (en) | 2006-11-15 | 2008-05-15 | General Electric Company | Methods and systems for healthcare application interaction using gesture-based interaction enhanced with pressure sensitivity |
US7783077B2 (en) * | 2006-12-01 | 2010-08-24 | The Boeing Company | Eye gaze tracker system and method |
EP1970649A1 (en) | 2007-03-16 | 2008-09-17 | Navitas | Device for adjusting the subcooling of the coolant downstream from the condenser of a refrigeration system and system including this device |
US20080270474A1 (en) | 2007-04-30 | 2008-10-30 | Searete Llc | Collecting influence information |
US7890549B2 (en) | 2007-04-30 | 2011-02-15 | Quantum Leap Research, Inc. | Collaboration portal (COPO) a scaleable method, system, and apparatus for providing computer-accessible benefits to communities of users |
US20080320418A1 (en) | 2007-06-21 | 2008-12-25 | Cadexterity, Inc. | Graphical User Friendly Interface Keypad System For CAD |
US20100018223A1 (en) * | 2007-08-15 | 2010-01-28 | Sundhar Shaam P | Tabletop Quick Cooling Device |
DE102007048666A1 (en) | 2007-10-10 | 2009-04-16 | Bayerische Motoren Werke Aktiengesellschaft | Twin-scroll turbocharger |
DE102007062661A1 (en) | 2007-12-24 | 2009-06-25 | J. Eberspächer GmbH & Co. KG | collector |
US8155479B2 (en) | 2008-03-28 | 2012-04-10 | Intuitive Surgical Operations Inc. | Automated panning and digital zooming for robotic surgical systems |
US8620913B2 (en) | 2008-04-07 | 2013-12-31 | Microsoft Corporation | Information management through a single application |
US20090273562A1 (en) | 2008-05-02 | 2009-11-05 | International Business Machines Corporation | Enhancing computer screen security using customized control of displayed content area |
US8226574B2 (en) | 2008-07-18 | 2012-07-24 | Honeywell International Inc. | Impaired subject detection system |
US20100079508A1 (en) | 2008-09-30 | 2010-04-01 | Andrew Hodge | Electronic devices with gaze detection capabilities |
WO2010051037A1 (en) | 2008-11-03 | 2010-05-06 | Bruce Reiner | Visually directed human-computer interaction for medical applications |
US8788977B2 (en) | 2008-11-20 | 2014-07-22 | Amazon Technologies, Inc. | Movement recognition as input mechanism |
US20100182232A1 (en) | 2009-01-22 | 2010-07-22 | Alcatel-Lucent Usa Inc. | Electronic Data Input System |
JP2010170388A (en) * | 2009-01-23 | 2010-08-05 | Sony Corp | Input device and method, information processing apparatus and method, information processing system, and program |
US8537181B2 (en) | 2009-03-09 | 2013-09-17 | Ventana Medical Systems, Inc. | Modes and interfaces for observation, and manipulation of digital images on computer screen in support of pathologist's workflow |
EP2237237B1 (en) * | 2009-03-30 | 2013-03-20 | Tobii Technology AB | Eye closure detection using structured illumination |
ATE527934T1 (en) | 2009-04-01 | 2011-10-15 | Tobii Technology Ab | ADAPTIVE CAMERA AND ILLUMINATOR EYE TRACKER |
US20100283722A1 (en) | 2009-05-08 | 2010-11-11 | Sony Ericsson Mobile Communications Ab | Electronic apparatus including a coordinate input surface and method for controlling such an electronic apparatus |
US20100295774A1 (en) | 2009-05-19 | 2010-11-25 | Mirametrix Research Incorporated | Method for Automatic Mapping of Eye Tracker Data to Hypermedia Content |
WO2010141403A1 (en) | 2009-06-01 | 2010-12-09 | Dynavox Systems, Llc | Separately portable device for implementing eye gaze control of a speech generation device |
KR20100136616A (en) | 2009-06-19 | 2010-12-29 | 삼성전자주식회사 | Method and apparatus for reducing the multi touch input error in portable communication system |
CN101943982B (en) | 2009-07-10 | 2012-12-12 | 北京大学 | Method for manipulating image based on tracked eye movements |
US20110045810A1 (en) | 2009-08-20 | 2011-02-24 | Oto Technologies, Llc | Semantic callback triggers for an electronic document |
US20110119361A1 (en) | 2009-11-17 | 2011-05-19 | Oto Technologies, Llc | System and method for managing redacted electronic documents using callback triggers |
US9141189B2 (en) | 2010-08-26 | 2015-09-22 | Samsung Electronics Co., Ltd. | Apparatus and method for controlling interface |
WO2012054231A2 (en) | 2010-10-04 | 2012-04-26 | Gerard Dirk Smits | System and method for 3-d projection and enhancements for interactivity |
KR101731346B1 (en) | 2010-11-12 | 2017-04-28 | 엘지전자 주식회사 | Method for providing display image in multimedia device and thereof |
WO2012107892A2 (en) * | 2011-02-09 | 2012-08-16 | Primesense Ltd. | Gaze detection in a 3d mapping environment |
JP2014077814A (en) | 2011-02-14 | 2014-05-01 | Panasonic Corp | Display control device and display control method |
US9478143B1 (en) * | 2011-03-25 | 2016-10-25 | Amazon Technologies, Inc. | Providing assistance to read electronic books |
GB2490864A (en) | 2011-05-09 | 2012-11-21 | Nds Ltd | A device with gaze tracking and zoom |
US8988350B2 (en) | 2011-08-20 | 2015-03-24 | Buckyball Mobile, Inc | Method and system of user authentication with bioresponse data |
WO2013033842A1 (en) | 2011-09-07 | 2013-03-14 | Tandemlaunch Technologies Inc. | System and method for using eye gaze information to enhance interactions |
US8970452B2 (en) | 2011-11-02 | 2015-03-03 | Google Inc. | Imaging method |
US8611015B2 (en) | 2011-11-22 | 2013-12-17 | Google Inc. | User interface |
US8235529B1 (en) | 2011-11-30 | 2012-08-07 | Google Inc. | Unlocking a screen using eye tracking information |
US10013053B2 (en) | 2012-01-04 | 2018-07-03 | Tobii Ab | System for gaze interaction |
JP5945417B2 (en) | 2012-01-06 | 2016-07-05 | 京セラ株式会社 | Electronics |
KR101850034B1 (en) | 2012-01-06 | 2018-04-20 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
JP5832339B2 (en) | 2012-03-04 | 2015-12-16 | アルパイン株式会社 | Scale display method and apparatus for scaling operation |
WO2013144807A1 (en) | 2012-03-26 | 2013-10-03 | Primesense Ltd. | Enhanced virtual touchpad and touchscreen |
US9308439B2 (en) | 2012-04-10 | 2016-04-12 | Bally Gaming, Inc. | Controlling three-dimensional presentation of wagering game content |
KR102166254B1 (en) | 2012-04-11 | 2020-10-15 | 삼성전자주식회사 | Method and system to share, synchronize contents in cross platform environments |
US9423870B2 (en) * | 2012-05-08 | 2016-08-23 | Google Inc. | Input determination method |
WO2013168171A1 (en) | 2012-05-10 | 2013-11-14 | Umoove Services Ltd. | Method for gesture-based operation control |
WO2013168173A1 (en) | 2012-05-11 | 2013-11-14 | Umoove Services Ltd. | Gaze-based automatic scrolling |
US9111380B2 (en) | 2012-06-05 | 2015-08-18 | Apple Inc. | Rendering maps |
JP5963584B2 (en) | 2012-07-12 | 2016-08-03 | キヤノン株式会社 | Electronic device and control method thereof |
US20140026098A1 (en) | 2012-07-19 | 2014-01-23 | M2J Think Box, Inc. | Systems and methods for navigating an interface of an electronic device |
US9423871B2 (en) * | 2012-08-07 | 2016-08-23 | Honeywell International Inc. | System and method for reducing the effects of inadvertent touch on a touch screen controller |
US10139937B2 (en) | 2012-10-12 | 2018-11-27 | Microsoft Technology Licensing, Llc | Multi-modal user expressions and user intensity as interactions with an application |
US8963806B1 (en) | 2012-10-29 | 2015-02-24 | Google Inc. | Device authentication |
JP2014092940A (en) | 2012-11-02 | 2014-05-19 | Sony Corp | Image display device and image display method and computer program |
US20140168054A1 (en) | 2012-12-14 | 2014-06-19 | Echostar Technologies L.L.C. | Automatic page turning of electronically displayed content based on captured eye position data |
KR102062310B1 (en) | 2013-01-04 | 2020-02-11 | 삼성전자주식회사 | Method and apparatus for prividing control service using head tracking in an electronic device |
US20140195918A1 (en) | 2013-01-07 | 2014-07-10 | Steven Friedlander | Eye tracking user interface |
US9244529B2 (en) | 2013-01-27 | 2016-01-26 | Dmitri Model | Point-of-gaze estimation robust to head rotations and/or device rotations |
US9864498B2 (en) | 2013-03-13 | 2018-01-09 | Tobii Ab | Automatic scrolling based on gaze detection |
WO2014134623A1 (en) | 2013-03-01 | 2014-09-04 | Tobii Technology Ab | Delay warp gaze interaction |
US20140247208A1 (en) | 2013-03-01 | 2014-09-04 | Tobii Technology Ab | Invoking and waking a computing device from stand-by mode based on gaze detection |
US20150009238A1 (en) | 2013-07-03 | 2015-01-08 | Nvidia Corporation | Method for zooming into and out of an image shown on a display |
WO2015048030A1 (en) * | 2013-09-24 | 2015-04-02 | Sony Computer Entertainment Inc. | Gaze tracking variations using visible lights or dots |
US9400553B2 (en) * | 2013-10-11 | 2016-07-26 | Microsoft Technology Licensing, Llc | User interface programmatic scaling |
US10558262B2 (en) | 2013-11-18 | 2020-02-11 | Tobii Ab | Component determination and gaze provoked interaction |
US10317995B2 (en) | 2013-11-18 | 2019-06-11 | Tobii Ab | Component determination and gaze provoked interaction |
US20150143293A1 (en) | 2013-11-18 | 2015-05-21 | Tobii Technology Ab | Component determination and gaze provoked interaction |
2014
- 2014-03-03 WO PCT/US2014/020024 patent/WO2014134623A1/en active Application Filing
- 2014-03-03 CN CN201480024417.XA patent/CN105339866B/en active Active
- 2014-03-03 ES ES14716455T patent/ES2731560T3/en active Active
- 2014-03-03 KR KR1020157027213A patent/KR20160005013A/en not_active Application Discontinuation
- 2014-03-03 EP EP14716455.2A patent/EP2962175B1/en active Active
- 2014-03-03 US US14/195,789 patent/US9619020B2/en active Active
- 2014-03-03 US US14/195,755 patent/US20140247210A1/en not_active Abandoned
- 2014-03-03 US US14/195,743 patent/US20140247232A1/en not_active Abandoned
2015
- 2015-12-31 US US14/986,141 patent/US20160116980A1/en not_active Abandoned
2017
- 2017-03-01 US US15/446,843 patent/US20170177078A1/en not_active Abandoned
- 2017-03-03 US US15/449,058 patent/US10545574B2/en active Active
2019
- 2019-07-02 US US16/459,780 patent/US20190324534A1/en not_active Abandoned
- 2019-09-09 US US16/564,416 patent/US20200004331A1/en not_active Abandoned
- 2019-12-16 US US16/716,073 patent/US11604510B2/en active Active
2021
- 2021-01-22 US US17/156,083 patent/US20210141451A1/en not_active Abandoned
2023
- 2023-03-02 US US18/177,489 patent/US11853477B2/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050030322A1 (en) * | 1998-06-25 | 2005-02-10 | Gardos Thomas R. | Perceptually based display |
US20070279591A1 (en) * | 2006-05-31 | 2007-12-06 | Sony Ericsson Mobile Communications Ab | Display based on eye information |
US20120105486A1 (en) * | 2009-04-09 | 2012-05-03 | Dynavox Systems Llc | Calibration free, motion tolerent eye-gaze direction detector with contextually aware computer interaction and communication methods |
US20140334666A1 (en) * | 2009-04-09 | 2014-11-13 | Dynavox Systems Llc | Calibration free, motion tolerant eye-gaze direction detector with contextually aware computer interaction and communication methods |
US20110115883A1 (en) * | 2009-11-16 | 2011-05-19 | Marcus Kellerman | Method And System For Adaptive Viewport For A Mobile Device Based On Viewing Angle |
US8762846B2 (en) * | 2009-11-16 | 2014-06-24 | Broadcom Corporation | Method and system for adaptive viewport for a mobile device based on viewing angle |
US20110175932A1 (en) * | 2010-01-21 | 2011-07-21 | Tobii Technology Ab | Eye tracker based contextual action |
US20120256967A1 (en) * | 2011-04-08 | 2012-10-11 | Baldwin Leo B | Gaze-based content display |
US20130132867A1 (en) * | 2011-11-21 | 2013-05-23 | Bradley Edward Morris | Systems and Methods for Image Navigation Using Zoom Operations |
US20130135196A1 (en) * | 2011-11-29 | 2013-05-30 | Samsung Electronics Co., Ltd. | Method for operating user functions based on eye tracking and mobile device adapted thereto |
US20140002352A1 (en) * | 2012-05-09 | 2014-01-02 | Michal Jacob | Eye tracking based selective accentuation of portions of a display |
Non-Patent Citations (1)
Title |
---|
Dictionary.com, "adjacent," in Dictionary.com Unabridged. Source location: Random House, Inc. http://dictionary.reference.com/browse/adjacent, 18 November 2011, page 1. * |
Cited By (72)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10025379B2 (en) | 2012-12-06 | 2018-07-17 | Google Llc | Eye tracking wearable devices and methods for use |
US10545574B2 (en) | 2013-03-01 | 2020-01-28 | Tobii Ab | Determining gaze target based on facial features |
US9619020B2 (en) | 2013-03-01 | 2017-04-11 | Tobii Ab | Delay warp gaze interaction |
US10534526B2 (en) | 2013-03-13 | 2020-01-14 | Tobii Ab | Automatic scrolling based on gaze detection |
US10317995B2 (en) | 2013-11-18 | 2019-06-11 | Tobii Ab | Component determination and gaze provoked interaction |
US10558262B2 (en) | 2013-11-18 | 2020-02-11 | Tobii Ab | Component determination and gaze provoked interaction |
US10564714B2 (en) * | 2014-05-09 | 2020-02-18 | Google Llc | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US20160085302A1 (en) * | 2014-05-09 | 2016-03-24 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US20170123492A1 (en) * | 2014-05-09 | 2017-05-04 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US10620700B2 (en) * | 2014-05-09 | 2020-04-14 | Google Llc | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US20150338915A1 (en) * | 2014-05-09 | 2015-11-26 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US9823744B2 (en) * | 2014-05-09 | 2017-11-21 | Google Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US9600069B2 (en) | 2014-05-09 | 2017-03-21 | Google Inc. | Systems and methods for discerning eye signals and continuous biometric identification |
US11972043B2 (en) | 2014-06-19 | 2024-04-30 | Apple Inc. | User detection by a computing device |
US20170308163A1 (en) * | 2014-06-19 | 2017-10-26 | Apple Inc. | User detection by a computing device |
US11556171B2 (en) | 2014-06-19 | 2023-01-17 | Apple Inc. | User detection by a computing device |
US11307657B2 (en) | 2014-06-19 | 2022-04-19 | Apple Inc. | User detection by a computing device |
US10664048B2 (en) * | 2014-06-19 | 2020-05-26 | Apple Inc. | User detection by a computing device |
WO2016008354A1 (en) * | 2014-07-14 | 2016-01-21 | Huawei Technologies Co., Ltd. | System and method for display enhancement |
US9952883B2 (en) | 2014-08-05 | 2018-04-24 | Tobii Ab | Dynamic determination of hardware |
US20160085408A1 (en) * | 2014-09-22 | 2016-03-24 | Lenovo (Beijing) Limited | Information processing method and electronic device thereof |
EP3737104A1 (en) * | 2014-10-01 | 2020-11-11 | Samsung Electronics Co., Ltd. | Display apparatus and method for controlling the same |
EP3002949A1 (en) * | 2014-10-01 | 2016-04-06 | Samsung Electronics Co., Ltd. | Display apparatus and method for controlling the same |
US10114463B2 (en) | 2014-10-01 | 2018-10-30 | Samsung Electronics Co., Ltd | Display apparatus and method for controlling the same according to an eye gaze and a gesture of a user |
US10248192B2 (en) | 2014-12-03 | 2019-04-02 | Microsoft Technology Licensing, Llc | Gaze target application launcher |
US10432624B2 (en) * | 2015-05-21 | 2019-10-01 | Tencent Technology (Shenzhen) Company Limited | Identity verification method, terminal, and server |
US10992666B2 (en) * | 2015-05-21 | 2021-04-27 | Tencent Technology (Shenzhen) Company Limited | Identity verification method, terminal, and server |
US20170295177A1 (en) * | 2015-05-21 | 2017-10-12 | Tencent Technology (Shenzhen) Company Limited | Identity verification method, terminal, and server |
GB2561455B (en) * | 2015-08-15 | 2022-05-18 | Google Llc | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
GB2561455A (en) * | 2015-08-15 | 2018-10-17 | Google Llc | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
WO2017031089A1 (en) * | 2015-08-15 | 2017-02-23 | Eyefluence, Inc. | Systems and methods for biomechanically-based eye signals for interacting with real and virtual objects |
US11320900B2 (en) | 2016-03-04 | 2022-05-03 | Magic Leap, Inc. | Current drain reduction in AR/VR display systems |
US11775062B2 (en) | 2016-03-04 | 2023-10-03 | Magic Leap, Inc. | Current drain reduction in AR/VR display systems |
US11402898B2 (en) * | 2016-03-04 | 2022-08-02 | Magic Leap, Inc. | Current drain reduction in AR/VR display systems |
US10353463B2 (en) * | 2016-03-16 | 2019-07-16 | RaayonNova LLC | Smart contact lens with eye driven control system and method |
US11966059B2 (en) | 2016-03-25 | 2024-04-23 | Magic Leap, Inc. | Virtual and augmented reality systems and methods |
US11467408B2 (en) | 2016-03-25 | 2022-10-11 | Magic Leap, Inc. | Virtual and augmented reality systems and methods |
EP3451135A4 (en) * | 2016-04-26 | 2019-04-24 | Sony Corporation | Information processing device, information processing method, and program |
US10719232B2 (en) * | 2016-06-08 | 2020-07-21 | Qualcomm Incorporated | Providing virtual buttons in a handheld device |
US20170357440A1 (en) * | 2016-06-08 | 2017-12-14 | Qualcomm Incorporated | Providing Virtual Buttons in a Handheld Device |
US20170374212A1 (en) * | 2016-06-23 | 2017-12-28 | Fuji Xerox Co., Ltd. | Information processing apparatus, information processing system, and image forming apparatus |
CN107544432A (en) * | 2016-06-28 | 2018-01-05 | 发那科株式会社 | Life determining apparatus, life determining method and computer-readable recording medium |
US10627993B2 (en) | 2016-08-08 | 2020-04-21 | Microsoft Technology Licensing, Llc | Interacting with a clipboard store |
WO2018031284A1 (en) * | 2016-08-08 | 2018-02-15 | Microsoft Technology Licensing, Llc | Interacting with a clipboard store |
US11420558B2 (en) * | 2016-12-13 | 2022-08-23 | International Automotive Components Group Gmbh | Interior trim part of motor vehicle with thin-film display device |
US20200070722A1 (en) * | 2016-12-13 | 2020-03-05 | International Automotive Components Group Gmbh | Interior trim part of motor vehicle |
US20180205675A1 (en) * | 2017-01-17 | 2018-07-19 | Samsung Electronics Co., Ltd. | Message generation method and wearable electronic device for supporting the same |
US10812418B2 (en) * | 2017-01-17 | 2020-10-20 | Samsung Electronics Co., Ltd. | Message generation method and wearable electronic device for supporting the same |
US20180239422A1 (en) * | 2017-02-17 | 2018-08-23 | International Business Machines Corporation | Tracking eye movements with a smart device |
US20180275753A1 (en) * | 2017-03-23 | 2018-09-27 | Google Llc | Eye-signal augmented control |
US10366540B2 (en) * | 2017-03-23 | 2019-07-30 | Htc Corporation | Electronic apparatus and method for virtual reality or augmented reality system |
US10627900B2 (en) * | 2017-03-23 | 2020-04-21 | Google Llc | Eye-signal augmented control |
US10732826B2 (en) * | 2017-11-22 | 2020-08-04 | Microsoft Technology Licensing, Llc | Dynamic device interaction adaptation based on user engagement |
US20190155495A1 (en) * | 2017-11-22 | 2019-05-23 | Microsoft Technology Licensing, Llc | Dynamic device interaction adaptation based on user engagement |
US20190187870A1 (en) * | 2017-12-20 | 2019-06-20 | International Business Machines Corporation | Utilizing biometric feedback to allow users to scroll content into a viewable display area |
US11029834B2 (en) * | 2017-12-20 | 2021-06-08 | International Business Machines Corporation | Utilizing biometric feedback to allow users to scroll content into a viewable display area |
CN111602102A (en) * | 2018-02-06 | 2020-08-28 | 斯玛特艾公司 | Method and system for visual human-machine interaction |
US20210232219A1 (en) * | 2018-05-31 | 2021-07-29 | Sony Corporation | Information processing apparatus, information processing method, and program |
US11487355B2 (en) * | 2018-05-31 | 2022-11-01 | Sony Corporation | Information processing apparatus and information processing method |
US11966055B2 (en) | 2018-07-19 | 2024-04-23 | Magic Leap, Inc. | Content interaction driven by eye metrics |
US20200050280A1 (en) * | 2018-08-10 | 2020-02-13 | Beijing 7Invensun Technology Co., Ltd. | Operation instruction execution method and apparatus, user terminal and storage medium |
US10719127B1 (en) * | 2018-08-29 | 2020-07-21 | Rockwell Collins, Inc. | Extended life display by utilizing eye tracking |
US11009698B2 (en) * | 2019-03-13 | 2021-05-18 | Nick Cherukuri | Gaze-based user interface for augmented and mixed reality device |
US11361540B2 (en) * | 2020-02-27 | 2022-06-14 | Samsung Electronics Co., Ltd. | Method and apparatus for predicting object of interest of user |
US11320974B2 (en) * | 2020-03-31 | 2022-05-03 | Tobii Ab | Method, computer program product and processing circuitry for pre-processing visualizable data |
US11853539B2 (en) | 2020-03-31 | 2023-12-26 | Tobii Ab | Method, computer program product and processing circuitry for pre-processing visualizable data |
US12039142B2 (en) | 2020-06-26 | 2024-07-16 | Apple Inc. | Devices, methods and graphical user interfaces for content applications |
US11703990B2 (en) * | 2020-08-17 | 2023-07-18 | Microsoft Technology Licensing, Llc | Animated visual cues indicating the availability of associated content |
US11720171B2 (en) | 2020-09-25 | 2023-08-08 | Apple Inc. | Methods for navigating user interfaces |
US11556175B2 (en) | 2021-04-19 | 2023-01-17 | Toyota Motor Engineering & Manufacturing North America, Inc. | Hands-free vehicle sensing and applications as well as supervised driving system using brainwave activity |
WO2023130148A1 (en) * | 2022-01-03 | 2023-07-06 | Apple Inc. | Devices, methods, and graphical user interfaces for navigating and inputting or revising content |
US11972046B1 (en) * | 2022-11-03 | 2024-04-30 | Vincent Jiang | Human-machine interaction method and system based on eye movement tracking |
Also Published As
Publication number | Publication date |
---|---|
US20140247232A1 (en) | 2014-09-04 |
WO2014134623A1 (en) | 2014-09-04 |
US20190324534A1 (en) | 2019-10-24 |
EP2962175B1 (en) | 2019-05-01 |
US11853477B2 (en) | 2023-12-26 |
US20200004331A1 (en) | 2020-01-02 |
US9619020B2 (en) | 2017-04-11 |
US20160116980A1 (en) | 2016-04-28 |
ES2731560T3 (en) | 2019-11-15 |
US20140247215A1 (en) | 2014-09-04 |
CN105339866B (en) | 2018-09-07 |
US20220253134A1 (en) | 2022-08-11 |
KR20160005013A (en) | 2016-01-13 |
CN105339866A (en) | 2016-02-17 |
US20170177078A1 (en) | 2017-06-22 |
US20210141451A1 (en) | 2021-05-13 |
US20170177079A1 (en) | 2017-06-22 |
US20230205316A1 (en) | 2023-06-29 |
EP2962175A1 (en) | 2016-01-06 |
US11604510B2 (en) | 2023-03-14 |
US10545574B2 (en) | 2020-01-28 |
Similar Documents
Publication | Title |
---|---|
US11853477B2 (en) | Zonal gaze driven interaction |
EP3088997A1 (en) | Delay warp gaze interaction |
KR101541881B1 (en) | Gaze-assisted computer interface |
US20170068416A1 (en) | Systems And Methods for Gesture Input |
KR20140117469A (en) | System for gaze interaction |
EP3321791B1 (en) | Gesture control and interaction method and device based on touch-sensitive surface and display |
CN111045519A (en) | Human-computer interaction method, device and equipment based on eye movement tracking |
US20150193139A1 (en) | Touchscreen device operation |
TWI564780B (en) | Touchscreen gestures |
US9940900B2 (en) | Peripheral electronic device and method for using same |
WO2016137839A1 (en) | Transparent full-screen text entry interface |
US20220244791A1 (en) | Systems And Methods for Gesture Input |
WO2015167531A2 (en) | Cursor grip |
TWI776013B (en) | Operating method for touch display device |
US20200233578A1 (en) | Operating method for touch display device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: TOBII AB, SWEDEN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HENDEREK, DAVID FIGGINS;OLSSON, ANDERS;SAEVMARKER, MAGNUS CARL OLOF;AND OTHERS;SIGNING DATES FROM 20151216 TO 20151218;REEL/FRAME:037451/0503 |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |