WO2015183766A1 - Gaze tracking for one or more users - Google Patents

Gaze tracking for one or more users

Info

Publication number
WO2015183766A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
eye
capture camera
eye capture
time
Prior art date
Application number
PCT/US2015/032334
Other languages
French (fr)
Inventor
Vaibhav Thukral
Ibrahim Eden
Shivkumar SWAMINATHAN
David Nister
Morgan Venable
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Technology Licensing, LLC
Priority to EP15728308.6A (EP3149559A1)
Priority to CN201580028775.2A (CN106662916A)
Publication of WO2015183766A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20Input arrangements for video game devices
    • A63F13/21Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • a user may map directions from a current location to an amusement park using a mobile device.
  • a user may read a book using a tablet device.
  • Various types of input may be used to perform tasks, such as touch gestures, mouse input, keyboard input, voice commands, motion control, etc.
  • An object detection component may, for example, be configured to visually detect body movement of a user as input for performing tasks and/or commands.
  • a gaze tracking component may be associated with a user tracking component and an eye capture camera configuration.
  • a user may take affirmative action to provide opt-in consent to allow the gaze tracking component to perform gaze tracking of the user and/or other users.
  • the user tracking component may comprise a depth camera, a passive sensor, an active sensor, an infrared device, a time of flight device, a camera, or any other type of tracking device.
  • the eye capture camera configuration may comprise a plurality of eye capture cameras (e.g., relatively high resolution cameras comprising narrow field of view lenses).
  • the eye capture cameras are configured according to a fixed view frustum configuration, as opposed to a pan/tilt or other movement configuration that may otherwise reduce durability and/or increase cost of the eye capture camera configuration due to, among other things, moving parts and/or associated controls.
  • the gaze tracking component maintains the eye capture cameras in a powered down state (e.g., a low power state or an off state) when not in active use for gaze tracking (e.g., an eye capture camera may be powered on while a user is detectable by the eye capture camera, and may be turned off when the user is not detectable by the eye capture camera such as due to the user moving away from the eye capture camera), which may reduce power consumption and/or bandwidth consumption.
  • the gaze tracking component may utilize the user tracking component to obtain user tracking data for a user.
  • the gaze tracking component may evaluate the user tracking data to identify a spatial location of the user.
  • An eye capture camera may be selected from the eye capture camera configuration based upon the eye capture camera having a view frustum corresponding to the spatial location.
  • the eye capture camera may be invoked to obtain eye region imagery of the user.
  • eye capture cameras having view frustums that do not correspond to the spatial location may be powered down or maintained in the powered down state.
  • the gaze tracking component may generate gaze tracking information for the user based upon the eye region imagery.
  • Various tasks may be performed based upon the gaze tracking information (e.g., a videogame command may be performed, interaction with a user interface may be facilitated, a file may be opened, an application may be executed, a song may be played, a movie may be played, and/or a wide variety of other computing commands may be performed).
  • the gaze tracking component may be configured to concurrently track gaze tracking information for multiple users that are detected by the user tracking component.
  • Fig. 1 is a flow diagram illustrating an exemplary method of gaze tracking.
  • Fig. 2A is a component block diagram illustrating an exemplary system for gaze tracking.
  • FIG. 2B is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component utilizes a user tracking component to obtain first user tracking data for a first user.
  • Fig. 2C is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component utilizes one or more eye capture cameras for gaze tracking.
  • Fig. 2D is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component utilizes one or more eye capture cameras for gaze tracking.
  • Fig. 2E is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component selectively utilizes one or more eye capture cameras for gaze tracking of multiple users.
  • Fig. 2F is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component selectively utilizes one or more eye capture cameras for gaze tracking of multiple users.
  • Fig. 3A is an illustration of an example of performing a first task based upon gaze tracking information for a first user.
  • Fig. 3B is an illustration of an example of performing a second task based upon gaze tracking information for a first user.
  • FIG. 4 is an illustration of an exemplary computer readable medium wherein processor-executable instructions configured to embody one or more of the provisions set forth herein may be comprised.
  • FIG. 5 illustrates an exemplary computing environment wherein one or more of the provisions set forth herein may be implemented.
  • a user tracking component may be utilized to obtain user tracking data for one or more users (e.g., users providing opt-in consent for gaze tracking).
  • the eye capture camera may be invoked to obtain eye region imagery of the user.
  • the eye region imagery may be used to generate gaze tracking information, which may be used to perform various tasks such as opening files, executing applications, controlling videogames, and/or interacting with user interfaces.
  • Eye capture cameras may be maintained in a powered down state (e.g., turned off) when not actively tracking a user, which may reduce power and/or bandwidth consumption.
  • gaze tracking information may be concurrently generated and/or tracked for multiple users (e.g., a first user may control a first avatar of a videogame using eye commands, and a second user may concurrently control a second avatar of the videogame using eye commands).
  • a user tracking component may be configured to track spatial locations of one or more users.
  • the user tracking component may comprise a depth camera, a passive sensor, an active sensor, an infrared device, a time of flight device, a camera, and/or any other tracking device.
  • An eye capture camera configuration may comprise a plurality of eye capture cameras configured to obtain gaze tracking information by capturing imagery depicting eyes of users.
  • an eye capture camera may have a pixel resolution that is greater than the pixel resolution of the user tracking component (e.g., a relatively lower resolution camera may be used to track spatial locations of users, while a relatively higher resolution camera may be used to track user eyes such as a resolution capable of capturing about 150 pixels or more across an eye of a user (e.g., 160 pixels across an eye in an x or horizontal direction)).
  • the eye capture cameras of the eye capture camera configuration may be configured according to a fixed view frustum configuration (e.g., an eye capture camera may have a fixed field of view and/or may have a stationary non-pan non-tilt configuration that lacks moving parts otherwise used to pan/tilt the camera), which may mitigate cost and/or reliability issues otherwise resulting from a pan/tilt or other moveable configuration.
  • the eye capture camera comprises a pupil illumination structure (e.g., a bright pupil ring around the eye capture camera) configured to invoke a pupil response in an eye, which may be detected by the eye capture camera for gaze tracking.
  • an LED dark pupil structure may be turned on to create glint corneal reflections, which may be detected by the eye capture camera for gaze tracking.
  • two or more eye capture cameras may be configured to capture imagery within overlapping view frustums, which may mitigate distortion of or within the imagery (e.g., which may be more pronounced at edges of the imagery).
  • a first eye capture camera may be configured to capture imagery within a first view frustum having a first depth
  • a second eye capture camera may be configured to capture imagery within a second view frustum having a second depth different than the first depth.
  • Eye capture cameras may be selectively utilized to concurrently track gaze tracking information of one or more users.
  • eye capture cameras may be maintained in a powered down state when not being utilized for gaze tracking (e.g., an eye capture camera may be powered on for gaze tracking when a user is within a view frustum of the eye capture camera, and the eye capture camera may be powered down when the user leaves the view frustum), which may mitigate power and/or bandwidth consumption.
  • the user tracking component may be utilized to obtain first user tracking data for a first user at a first time T1 (e.g., the user may sit on a couch in a living room within which the user tracking component is located).
  • the first user tracking data may be evaluated to identify a first spatial location of the first user at the first time T1 (e.g., a spatial location of the couch within the living room).
  • a first eye capture camera within the eye capture camera configuration may be selected based upon the first eye capture camera having a first view frustum corresponding to the first spatial location (e.g., the first eye capture camera may be positioned towards the couch where the user is sitting, and thus may be capable of capturing imagery of the first user's eyes).
  • the first eye capture camera may be invoked to obtain first eye region imagery of the first user at or around the first time T1 (e.g., about 150 pixels or more across at least one eye of the first user).
  • first eye capture camera may be powered on and instructed to capture imagery that may depict the first user's eyes.
  • first gaze tracking information for the first user may be generated based upon the first eye region imagery (e.g., the first eye region imagery may comprise a plurality of images indicating pupil/eye movement by the first user).
  • a task may be performed based upon the first gaze tracking information.
  • the first gaze tracking information may indicate that the user looked left, which may be mapped to a command that may be executed to perform a task (e.g., a look left gaze input may be mapped to a steer car left input for a driving videogame; the look left gaze input may be mapped to a play previous song input for a music player app; the look left gaze input may be mapped to a backspace input for a typing interface; etc.).
  • a wide variety of tasks may be performed (e.g., controlling a videogame based upon analog and/or digital commands derived from gaze tracking information), and merely a few examples are provided.
  • a second eye capture camera may be selected from the eye capture camera configuration based upon the second eye capture camera having a second view frustum corresponding to the first spatial location (e.g., the user may be sitting on a portion of the couch that corresponds to an overlap between the first view frustum of the first eye capture camera and the second view frustum of the second eye capture camera).
  • the second eye capture camera may be invoked to obtain second eye region imagery of the first user at or around the first time T1.
  • the first eye region imagery and the second eye region imagery may be combined (e.g., using image stitching functionality, measurement combination functionality, and/or any other technique(s)) to generate the first gaze tracking information.
  • Gaze tracking may be performed for the first user as the first user moves around the living room, such as within detectable range of the user tracking component.
  • the user tracking component may obtain first user tracking data indicating that the first user is at a second spatial location at a second time T2 (e.g., the first user may have walked from the couch to a table in the living room).
  • a third eye capture camera may be selected from the eye capture camera configuration based upon the third eye capture camera having a third view frustum corresponding to the second spatial location (e.g., the user may walk into a third view frustum 220 associated with a third eye capture camera 206 illustrated in Fig. 2A).
  • the first eye capture camera is transitioned into a powered down state at or around the second time T2.
  • the third eye capture camera may be invoked to obtain third eye region imagery of the first user at or around the second time T2.
  • Third gaze tracking information may be generated for the first user at or around the second time T2 based upon the third eye region imagery.
  • eye capture cameras may be selectively powered on for obtaining eye region imagery of the first user, and may be selectively powered down when not in use (e.g., an eye capture camera may be powered down when the first user is not within a view frustum of the eye capture camera).
  • spatial location data of the first user may be evaluated to predict a potential new spatial location for the first user.
  • previous spatial location data may indicate that the first user is within the first view frustum but is walking towards the second view frustum (e.g., and thus will presumably enter the second view frustum within a particular time/duration).
  • the second eye capture camera may be awakened into a capture ready state for obtaining eye region imagery (e.g., slightly) prior to when the first user is expected/predicted to enter the second view frustum, based upon the spatial location data. In this way, lag associated with obtaining gaze tracking information between multiple eye capture cameras may be mitigated.
  • Gaze tracking information may be concurrently tracked for multiple users.
  • the user tracking component may be utilized to obtain second user tracking data for a second user at the first time T1.
  • the second user tracking data may be evaluated to identify a spatial location of the second user at the first time T1.
  • An eye capture camera may be selected from the eye capture camera configuration based upon the eye capture camera having a view frustum corresponding to the spatial location of the second user at the first time T1.
  • the eye capture camera may be invoked to obtain eye region imagery of the second user at or around the first time T1.
  • Gaze tracking information may be generated for the second user based upon the eye region imagery of the second user at the first time T1.
  • gaze tracking may be concurrently performed for multiple users, which may allow multiple users to perform tasks (e.g., the first user may control a first avatar of a videogame, and the second user may control a second avatar of the videogame).
  • the method ends.
  • Figs. 2A-2F illustrate examples of a system 201 for gaze tracking.
  • Fig. 2A illustrates an example 200 of a gaze tracking component 214.
  • the gaze tracking component 214 may be configured to utilize a user tracking component 212 to track spatial locations of one or more users.
  • the gaze tracking component 214 may selectively invoke eye capture cameras of an eye capture camera configuration to obtain eye region imagery of users at various times for gaze tracking purposes.
  • the eye capture camera configuration comprises one or more eye capture cameras, such as a first eye capture camera 202 configured to obtain imagery from a first view frustum 216, a second eye capture camera 204 configured to obtain imagery from a second view frustum 218, a third eye capture camera 206 configured to obtain imagery from a third view frustum 220, a fourth eye capture camera 208 configured to obtain imagery from a fourth view frustum 222, a fifth eye capture camera 210 configured to obtain imagery from a fifth view frustum 224, and/or other eye capture cameras (e.g., a relatively high resolution camera, such as a camera of about 40MP or greater comprising a narrow field of view lens having a horizontal view of about 20 degrees to about 40 degrees (e.g., about a 22 degree horizontal view) and a vertical view of about 10 degrees to about 30 degrees (e.g., about a 17 degree vertical view)).
  • one or more view frustums may overlap, which may mitigate lens distortion around edges of imagery obtained by eye capture cameras.
  • an eye capture camera may be transitioned into a powered down state, which may reduce power and/or bandwidth consumption.
  • Fig. 2B illustrates an example 230 of the gaze tracking component 214 utilizing the user tracking component 212 to obtain first user tracking data for a first user 232 at a first time T1.
  • the gaze tracking component 214 may evaluate the first user tracking data to identify a first spatial location of the first user 232 at the first time T1.
  • the gaze tracking component 214 may turn on 234 the first eye capture camera 202 and may invoke the first eye capture camera 202 to obtain first eye region imagery of the first user 232 at or around the first time T1 (e.g., the first eye capture camera 202 may capture imagery comprising about 150 pixels or more across at least one eye of the first user 232).
  • First gaze tracking information may be generated for the first user 232 at the first time T1 based upon the first eye region imagery.
  • One or more tasks may be performed based upon the first gaze tracking information (e.g., the first user 232 may blink a left eye in order to play a song).
  • Fig. 2C illustrates an example 240 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking.
  • the gaze tracking component 214 may determine that the user tracking component 212 obtained first user tracking data indicating that the first user 232 is located at a second spatial location at a second time T2. Because the second spatial location corresponds to the second view frustum 218 of the second eye capture camera 204, the gaze tracking component 214 may turn on 244 the second eye capture camera 204 and may invoke the second eye capture camera 204 to obtain second eye region imagery of the first user 232 at or around the second time T2. First gaze tracking information for the first user 232 may be generated for the first user 232 at the second time T2 based upon the second eye region imagery.
  • One or more tasks may be performed based upon the first gaze tracking information (e.g., the first user 232 may blink a right eye in order to stop playing a song).
  • the gaze tracking component 214 may power down 242 the first eye capture camera 202 into a powered down state because the second spatial location of the first user 232 at the second time T2 does not correspond to the first view frustum 216 of the first eye capture camera 202.
  • Fig. 2D illustrates an example 250 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking.
  • the gaze tracking component 214 may determine that the user tracking component 212 obtained first user tracking data indicating that the first user 232 is located at a third spatial location at a third time T3.
  • the gaze tracking component 214 may turn on 252 the third eye capture camera 206 and may invoke the third eye capture camera 206 to obtain third eye region imagery of the first user 232 at or around the third time T3.
  • the gaze tracking component 214 may combine (e.g., stitch together) the second eye region imagery obtained by the second eye capture camera 204 and the third eye region imagery obtained by the third eye capture camera 206 to generate gaze tracking information for the user at the third time T3.
  • One or more tasks may be performed based upon the gaze tracking information (e.g., the first user 232 may look right in order to skip to a next song to play).
  • Fig. 2E illustrates an example 260 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking of multiple users.
  • the second eye capture camera 204 and the third eye capture camera 206 may be invoked to capture gaze tracking information for the first user 232 at a fourth time T4 based upon the first user 232 being spatially located in an overlap region between the second view frustum 218 and the third view frustum 220.
  • the gaze tracking component 214 may utilize the user tracking component 212 to obtain second user tracking data for a second user 262 at the fourth time T4.
  • the gaze tracking component 214 may evaluate the second user tracking data to identify a spatial location of the second user 262 at the fourth time T4.
  • the gaze tracking component 214 may turn on 262 the fifth eye capture camera 210 and may invoke the fifth eye capture camera 210 to obtain eye region imagery of the second user 262 at or around the fourth time T4. Gaze tracking information may be generated for the second user 262 based upon this eye region imagery.
  • One or more tasks may be performed on behalf of the first user 232 based upon gaze tracking information of the first user 232 at the fourth time T4 and/or one or more tasks may be performed on behalf of the second user 262 based upon gaze tracking information of the second user 262 at the fourth time T4.
  • Fig. 2F illustrates an example 270 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking of multiple users.
  • the user tracking component 212 may obtain first user tracking data indicating that the first user 232 is within a fourth spatial location at a fifth time T5 and may obtain second user tracking data indicating that the second user 262 is within a fifth spatial location at the fifth time T5.
  • the gaze tracking component 214 may turn on 278 the fourth eye capture camera 208 and may invoke the fourth eye capture camera 208 to obtain eye region imagery of the first user 232 at or around the fifth time T5 and eye region imagery of the second user 262 at or around the fifth time T5.
  • the gaze tracking component may power down 272 the second eye capture camera 204, power down 274 the third eye capture camera 206, and power down 280 the fifth eye capture camera 210 based upon the second view frustum 218, the third view frustum 220, and the fifth view frustum 224 not corresponding to the fourth spatial location and/or the fifth spatial location.
  • Gaze tracking information may be generated for the first user 232 at the fifth time T5 and for the second user 262 at the fifth time T5 based upon the eye region imagery of the first user 232 and the second user 262 captured by the fourth eye capture camera 208.
  • One or more tasks may be performed on behalf of the first user 232 based upon the gaze tracking information of the first user 232 at the fifth time T5 and/or one or more tasks may be performed on behalf of the second user 262 based upon the gaze tracking information of the second user 262 at the fifth time T5.
  • Fig. 3A illustrates an example 300 of performing a first task (e.g., a videogame command) based upon gaze tracking information 302 for a first user at a first time T1.
  • a gaze tracking component 304 may generate the gaze tracking information 302 based upon eye region imagery of the first user obtained by one or more eye capture cameras.
  • the gaze tracking component 304 may invoke a first eye capture camera to obtain first eye region imagery of the first user at the first time T1 (e.g., the first user may look up and to the right) based upon a user tracking component indicating that a first spatial location of the first user at the first time T1 is within a first view frustum of the first eye capture camera.
  • the gaze tracking component 304 may determine that a look up and right gaze input is mapped to a move avatar up and right videogame command 306 for an adventure videogame 308. Accordingly, an avatar 310 may be moved 312 up and to the right.
  • Fig. 3B illustrates an example 320 of performing a second task (e.g., a videogame command) based upon second gaze tracking information 322 for the first user at a second time T2.
  • the gaze tracking component 304 may generate the second gaze tracking information 322 based upon second eye region imagery of the first user obtained by one or more eye capture cameras. For example, the gaze tracking component 304 may invoke a second eye capture camera to obtain second eye region imagery of the first user at the second time T2 (e.g., the first user may look down) based upon the user tracking component indicating that a second spatial location of the first user at the second time T2 is within a second view frustum of the second eye capture camera.
  • the gaze tracking component 304 may determine that a look down gaze input is mapped to a move avatar down videogame command 324 for the adventure videogame 308. Accordingly, the avatar 310 may be moved 326 down.
  • Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein.
  • An example embodiment of a computer-readable medium or a computer-readable device is illustrated in Fig. 4, wherein the implementation 400 comprises a computer-readable medium 408, such as a CD-R, DVD-R, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data 406.
  • This computer-readable data 406, such as binary data comprising at least one of a zero or a one, in turn comprises a set of computer instructions 404 configured to operate according to one or more of the principles set forth herein.
  • the processor-executable computer instructions 404 are configured to perform a method 402, such as at least some of the exemplary method 100 of Fig. 1, for example.
  • the processor-executable instructions 404 are configured to implement a system, such as at least some of the exemplary system 201 of Figs. 2A-2F, for example.
  • Many such computer-readable media are devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a controller and the controller can be a component.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
  • article of manufacture as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
  • Fig. 5 and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein.
  • the operating environment of Fig. 5 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment.
  • Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Computer readable instructions may be distributed via computer readable media
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • Fig. 5 illustrates an example of a system 500 comprising a computing device 512 configured to implement one or more embodiments provided herein.
  • computing device 512 includes at least one processing unit 516 and memory 518.
  • memory 518 may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example) or some combination of the two. This configuration is illustrated in Fig. 5 by dashed line 514.
  • device 512 may include additional features and/or functionality.
  • device 512 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like.
  • Such additional storage is illustrated in Fig. 5 by storage 520.
  • computer readable instructions to implement one or more embodiments provided herein may be in storage 520.
  • Storage 520 may also store other computer readable instructions to implement an operating system, an application program, and the like.
  • Computer readable instructions may be loaded in memory 518 for execution by processing unit 516, for example.
  • Computer readable media includes computer storage media.
  • Computer storage media includes volatile and nonvolatile, removable and nonremovable media implemented in any method or technology for storage of information such as computer readable instructions or other data.
  • Memory 518 and storage 520 are examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 512.
  • Computer storage media does not, however, include propagated signals. Rather, computer storage media excludes propagated signals. Any such computer storage media may be part of device 512.
  • Device 512 may also include communication connection(s) 526 that allows device 512 to communicate with other devices.
  • Communication connection(s) 526 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 512 to other computing devices.
  • Communication connection(s) 526 may include a wired connection or a wireless connection. Communication connection(s) 526 may transmit and/or receive communication media.
  • Computer readable media may include communication media.
  • Communication media typically embodies computer readable instructions or other data in a “modulated data signal” such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • Device 512 may include input device(s) 524 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device.
  • Output device(s) 522 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 512.
  • Input device(s) 524 and output device(s) 522 may be connected to device 512 via a wired connection, wireless connection, or any combination thereof.
  • an input device or an output device from another computing device may be used as input device(s) 524 or output device(s) 522 for computing device 512.
  • Components of computing device 512 may be connected by various interconnects, such as a bus.
  • Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like.
  • components of computing device 512 may be interconnected by a network.
  • memory 518 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
  • a computing device 530 accessible via a network 528 may store computer readable instructions to implement one or more embodiments provided herein.
  • Computing device 512 may access computing device 530 and download a part or all of the computer readable instructions for execution.
  • computing device 512 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 512 and some at computing device 530.
  • one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described.
  • the order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.
  • “first,” “second,” and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc.
  • a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object.
  • exemplary is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous.
  • “or” is intended to mean an inclusive “or” rather than an exclusive “or”.
  • “a” and “an” as used in this application are generally to be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form.
  • at least one of A and B and/or the like generally means A or B and/or both A and B.
  • such terms are intended to be inclusive in a manner similar to the term “comprising”.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

One or more techniques and/or systems are provided for gaze tracking of one or more users. A user tracking component (e.g., a depth camera or a relatively lower resolution camera) may be utilized to obtain user tracking data for a user. The user tracking data is evaluated to identify a spatial location of the user. An eye capture camera (e.g., a relatively higher resolution camera) may be selected from an eye capture camera configuration based upon the eye capture camera having a view frustum corresponding to the spatial location of the user. The eye capture camera may be invoked to obtain eye region imagery of the user. Other eye capture cameras within the eye capture camera configuration are maintained in a powered down state to reduce power and/or bandwidth consumption. Gaze tracking information may be generated based upon the eye region imagery, and may be used to perform a task.

Description

GAZE TRACKING FOR ONE OR MORE USERS
BACKGROUND
[0001] Many users perform tasks using computing devices. In an example, a user may map directions from a current location to an amusement park using a mobile device. In another example, a user may read a book using a tablet device. Various types of input may be used to perform tasks, such as touch gestures, mouse input, keyboard input, voice commands, motion control, etc. An object detection component may, for example, be configured to visually detect body movement of a user as input for performing tasks and/or commands.
SUMMARY
[0002] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key factors or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
[0003] Among other things, one or more systems and/or techniques for gaze tracking are provided herein. A gaze tracking component may be associated with a user tracking component and an eye capture camera configuration. A user may take affirmative action to provide opt-in consent to allow the gaze tracking component to perform gaze tracking of the user and/or other users. The user tracking component may comprise a depth camera, a passive sensor, an active sensor, an infrared device, a time of flight device, a camera, or any other type of tracking device. The eye capture camera configuration may comprise a plurality of eye capture cameras (e.g., relatively high resolution cameras comprising narrow field of view lenses). In an example, the eye capture cameras are configured according to a fixed view frustum configuration, as opposed to a pan/tilt or other movement configuration that may otherwise reduce durability and/or increase cost of the eye capture camera configuration due to, among other things, moving parts and/or associated controls. In an example, the gaze tracking component maintains the eye capture cameras in a powered down state (e.g., a low power state or an off state) when not in active use for gaze tracking (e.g., an eye capture camera may be powered on while a user is detectable by the eye capture camera, and may be turned off when the user is not detectable by the eye capture camera such as due to the user moving away from the eye capture camera), which may reduce power consumption and/or bandwidth consumption.
[0004] In an example of gaze tracking, the gaze tracking component may utilize the user tracking component to obtain user tracking data for a user. The gaze tracking component may evaluate the user tracking data to identify a spatial location of the user. An eye capture camera may be selected from the eye capture camera configuration based upon the eye capture camera having a view frustum corresponding to the spatial location. The eye capture camera may be invoked to obtain eye region imagery of the user. In an example, eye capture cameras having view frustums that do not correspond to the spatial location may be powered down or maintained in the powered down state. The gaze tracking component may generate gaze tracking information for the user based upon the eye region imagery. Various tasks may be performed based upon the gaze tracking information (e.g., a videogame command may be performed, interaction with a user interface may be facilitated, a file may be opened, an application may be executed, a song may be played, a movie may be played, and/or a wide variety of other computing commands may be performed). In an example, the gaze tracking component may be configured to concurrently track gaze tracking information for multiple users that are detected by the user tracking component.
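The selection logic summarized above can be pictured as a small control loop: identify the user's spatial location, invoke only the camera whose view frustum covers that location, and keep the others powered down. The following Python sketch is illustrative only and is not part of the disclosure; the Frustum and EyeCaptureCamera names, the 2-D (x, z) floor-plane coordinates, and the simple containment test are assumptions made for the example.

```python
# Illustrative sketch only: Frustum, EyeCaptureCamera, and the (x, z)
# floor-plane coordinates are assumptions for this example, not elements
# of the disclosed system.
from dataclasses import dataclass


@dataclass
class Frustum:
    x_min: float
    x_max: float
    z_min: float  # depth range covered by the fixed view frustum
    z_max: float

    def contains(self, location):
        x, z = location
        return self.x_min <= x <= self.x_max and self.z_min <= z <= self.z_max


@dataclass
class EyeCaptureCamera:
    name: str
    frustum: Frustum
    powered_on: bool = False

    def capture_eye_region(self):
        # Placeholder for obtaining high-resolution eye region imagery.
        return f"eye region imagery from {self.name}"


def track_gaze(user_location, cameras):
    """Invoke only the camera(s) whose view frustum covers the user's
    spatial location; keep every other camera powered down."""
    imagery = []
    for cam in cameras:
        if cam.frustum.contains(user_location):
            cam.powered_on = True            # selected for gaze tracking
            imagery.append(cam.capture_eye_region())
        else:
            cam.powered_on = False           # reduce power/bandwidth use
    return imagery
```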
[0005] To the accomplishment of the foregoing and related ends, the following description and annexed drawings set forth certain illustrative aspects and implementations. These are indicative of but a few of the various ways in which one or more aspects may be employed. Other aspects, advantages, and novel features of the disclosure will become apparent from the following detailed description when considered in conjunction with the annexed drawings.
DESCRIPTION OF THE DRAWINGS
[0006] Fig. 1 is a flow diagram illustrating an exemplary method of gaze tracking.
[0007] Fig. 2A is a component block diagram illustrating an exemplary system for gaze tracking.
[0008] Fig. 2B is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component utilizes a user tracking component to obtain first user tracking data for a first user.
[0009] Fig. 2C is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component utilizes one or more eye capture cameras for gaze tracking.
[0010] Fig. 2D is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component utilizes one or more eye capture cameras for gaze tracking.
[0011] Fig. 2E is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component selectively utilizes one or more eye capture cameras for gaze tracking of multiple users.
[0012] Fig. 2F is a component block diagram illustrating an exemplary system for gaze tracking where a gaze tracking component selectively utilizes one or more eye capture cameras for gaze tracking of multiple users.
[0013] Fig. 3A is an illustration of an example of performing a first task based upon gaze tracking information for a first user.
[0014] Fig. 3B is an illustration of an example of performing a second task based upon gaze tracking information for a first user.
[0015] Fig. 4 is an illustration of an exemplary computer readable medium wherein processor-executable instructions configured to embody one or more of the provisions set forth herein may be comprised.
[0016] Fig. 5 illustrates an exemplary computing environment wherein one or more of the provisions set forth herein may be implemented.
DETAILED DESCRIPTION
[0017] The claimed subject matter is now described with reference to the drawings, wherein like reference numerals are generally used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth to provide an understanding of the claimed subject matter. It may be evident, however, that the claimed subject matter may be practiced without these specific details.
In other instances, structures and devices are illustrated in block diagram form in order to facilitate describing the claimed subject matter.
[0018] One or more techniques and/or systems for gaze tracking are provided herein. A user tracking component may be utilized to obtain user tracking data for one or more users (e.g., users providing opt-in consent for gaze tracking). When a user is identified as being in a spatial location corresponding to a view frustum of an eye capture camera, then the eye capture camera may be invoked to obtain eye region imagery of the user. The eye region imagery may be used to generate gaze tracking information, which may be used to perform various tasks such as opening files, executing applications, controlling videogames, and/or interacting with user interfaces. Eye capture cameras may be maintained in a powered down state (e.g., turned off) when not actively tracking a user, which may reduce power and/or bandwidth consumption. In an example, gaze tracking information may be concurrently generated and/or tracked for multiple users (e.g., a first user may control a first avatar of a videogame using eye commands, and a second user may concurrently control a second avatar of the videogame using eye commands).
[0019] An embodiment of gaze tracking is illustrated by an exemplary method 100 of Fig. 1. At 102, the method starts. A user tracking component may be configured to track spatial locations of one or more users. In an example, the user tracking component may comprise a depth camera, a passive sensor, an active sensor, an infrared device, a time of flight device, a camera, and/or any other tracking device. An eye capture camera configuration may comprise a plurality of eye capture cameras configured to obtain gaze tracking information by capturing imagery depicting eyes of users. In an example, an eye capture camera may have a pixel resolution that is greater than the pixel resolution of the user tracking component (e.g., a relatively lower resolution camera may be used to track spatial locations of users, while a relatively higher resolution camera may be used to track user eyes such as a resolution capable of capturing about 150 pixels or more across an eye of a user (e.g., 160 pixels across an eye in an x or horizontal direction)). In an example, the eye capture cameras of the eye capture camera configuration may be configured according to a fixed view frustum configuration (e.g., an eye capture camera may have a fixed field of view and/or may have a stationary non-pan non-tilt configuration that lacks moving parts otherwise used to pan/tilt the camera), which may mitigate cost and/or reliability issues otherwise resulting from a pan/tilt or other moveable configuration. In an example of an eye capture camera, the eye capture camera comprises a pupil illumination structure (e.g., a bright pupil ring around the eye capture camera) configured to invoke a pupil response in an eye, which may be detected by the eye capture camera for gaze tracking. In another example of an eye capture camera, an LED dark pupil structure may be turned on to create glint corneal reflections, which may be detected by the eye capture camera for gaze tracking. In an example, two or more eye capture cameras may be configured to capture imagery within overlapping view frustums, which may mitigate distortion of or within the imagery (e.g., which may be more pronounced at edges of the imagery). In an example where at least two eye capture cameras are configured to capture imagery at different depths, a first eye capture camera may be configured to capture imagery within a first view frustum having a first depth, and a second eye capture camera may be configured to capture imagery within a second view frustum having a second depth different than the first depth.
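To make the resolution figure concrete, a simple pinhole-camera estimate relates sensor width, horizontal field of view, and subject distance to the number of pixels spanning an eye. The sketch below is illustrative; the 3 cm eye width, the roughly 8000-pixel sensor width, and the 2.5 m distance are assumed values, not figures from the disclosure.

```python
import math


def pixels_across_eye(sensor_width_px, hfov_deg, distance_m, eye_width_m=0.03):
    """Pinhole-model estimate of how many sensor pixels span one eye.

    Assumes the eye is about eye_width_m wide (an illustrative 3 cm here)
    and is viewed head-on at distance_m from the camera.
    """
    scene_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    return sensor_width_px * eye_width_m / scene_width_m


# A roughly 8000-pixel-wide sensor (a "40MP-class" camera) with a 22 degree
# horizontal view at 2.5 m comfortably exceeds the ~150-pixel target.
print(round(pixels_across_eye(8000, 22.0, 2.5)))  # about 247 pixels
```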
[0020] Eye capture cameras may be selectively utilized to concurrently track gaze tracking information of one or more users. In an example, eye capture cameras may be maintained in a powered down state when not being utilized for gaze tracking (e.g., an eye capture camera may be powered on for gaze tracking when a user is within a view frustum of the eye capture camera, and the eye capture camera may be powered down when the user leaves the view frustum), which may mitigate power and/or bandwidth consumption.
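A minimal sketch of this selective power management, reusing the Frustum and EyeCaptureCamera types from the earlier sketch; the policy shown (power a camera on whenever any tracked user is inside its frustum, otherwise power it down) is one plausible reading of the text rather than the claimed implementation.

```python
def reconcile_power_states(cameras, user_locations):
    """Power on each camera covering at least one tracked user; power down
    the rest. Reuses the EyeCaptureCamera/Frustum types from the earlier
    sketch; the policy is an assumption for illustration."""
    for cam in cameras:
        needed = any(cam.frustum.contains(loc) for loc in user_locations)
        cam.powered_on = needed  # True -> active for gaze tracking, False -> powered down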
[0021] At 104, the user tracking component may be utilized to obtain first user tracking data for a first user at a first time T1 (e.g., the user may sit on a couch in a living room within which the user tracking component is located). At 106, the first user tracking data may be evaluated to identify a first spatial location of the first user at the first time T1 (e.g., a spatial location of the couch within the living room). At 108, a first eye capture camera within the eye capture camera configuration may be selected based upon the first eye capture camera having a first view frustum corresponding to the first spatial location (e.g., the first eye capture camera may be positioned towards the couch where the user is sitting, and thus may be capable of capturing imagery of the first user's eyes). At 110, the first eye capture camera may be invoked to obtain first eye region imagery of the first user at or around the first time T1 (e.g., about 150 pixels or more across at least one eye of the first user). In an example, if the first eye capture camera was in a powered down state, then the first eye capture camera may be powered on and instructed to capture imagery that may depict the first user's eyes.
[0022] At 112, first gaze tracking information for the first user (e.g., corresponding to the first time T1) may be generated based upon the first eye region imagery (e.g., the first eye region imagery may comprise a plurality of images indicating pupil/eye movement by the first user). A task may be performed based upon the first gaze tracking information. For example, the first gaze tracking information may indicate that the user looked left, which may be mapped to a command that may be executed to perform a task (e.g., a look left gaze input may be mapped to a steer car left input for a driving videogame; the look left gaze input may be mapped to a play previous song input for a music player app; the look left gaze input may be mapped to a backspace input for a typing interface; etc.). It may be appreciated that a wide variety of tasks may be performed (e.g., controlling a videogame based upon analog and/or digital commands derived from gaze tracking information), and that merely a few examples are provided.
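The mapping from gaze inputs to commands can be represented as a simple lookup table keyed by the active application and the detected gesture. The gesture and command names in the following sketch are the hypothetical examples from the paragraph above, not an exhaustive or claimed set.

```python
# Hypothetical lookup table built from the examples in the text above.
GAZE_COMMAND_MAP = {
    ("driving_videogame", "look_left"): "steer_car_left",
    ("music_player", "look_left"): "play_previous_song",
    ("typing_interface", "look_left"): "backspace",
}


def perform_task(active_app, gaze_input):
    """Map a detected gaze input to a command for the active application."""
    command = GAZE_COMMAND_MAP.get((active_app, gaze_input))
    if command is not None:
        print(f"executing '{command}' for {active_app}")
    return command


perform_task("music_player", "look_left")  # -> executing 'play_previous_song' ...
```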
[0023] In an example of gaze tracking where the user is located within overlapping view frustums, a second eye capture camera may be selected from the eye capture camera configuration based upon the second eye capture camera having a second view frustum corresponding to the first spatial location (e.g., the user may be sitting on a portion of the couch that corresponds to an overlap between the first view frustum of the first eye capture camera and the second view frustum of the second eye capture camera). The second eye capture camera may be invoked to obtain second eye region imagery of the first user at or around the first time T1. The first eye region imagery and the second eye region imagery may be combined (e.g., using image stitching functionality, measurement combination functionality, and/or any other technique(s)) to generate the first gaze tracking information.
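As one hedged illustration of the measurement-combination option, per-camera gaze estimates obtained in an overlap region could be fused with a weighted average; a real system might instead stitch imagery or use more sophisticated fusion. The function and the sample values below are assumptions for the sketch.

```python
def combine_gaze_estimates(estimates, weights=None):
    """Fuse per-camera gaze estimates (here, 2-D gaze direction tuples) from
    cameras whose frustums overlap, using a simple weighted average. This is
    a stand-in for the stitching/measurement-combination step in the text."""
    if weights is None:
        weights = [1.0] * len(estimates)
    total = sum(weights)
    dims = len(estimates[0])
    return tuple(
        sum(w * est[i] for w, est in zip(weights, estimates)) / total
        for i in range(dims)
    )


# Two cameras observing the same user in an overlap region:
print(combine_gaze_estimates([(0.10, -0.02), (0.14, 0.00)]))  # approx. (0.12, -0.01)
```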
[0024] Gaze tracking may be performed for the first user as the first user moves around the living room, such as within detectable range of the user tracking component. In an example, the user tracking component may obtain first user tracking data indicating that the first user is at a second spatial location at a second time T2 (e.g., the first user may have walked from the couch to a table in the living room). A third eye capture camera may be selected from the eye capture camera configuration based upon the third eye capture camera having a third view frustum corresponding to the second spatial location (e.g., the user may walk into a third view frustum 220 associated with a third eye capture camera 206 illustrated in Fig. 2A). If the first view frustum of the first eye capture camera does not correspond to the second spatial location, then the first eye capture camera is transitioned into a powered down state at or around the second time T2. The third eye capture camera may be invoked to obtain third eye region imagery of the first user at or around the second time T2. Third gaze tracking information may be generated for the first user at or around the second time T2 based upon the third eye region imagery. In this way, eye capture cameras may be selectively powered on for obtaining eye region imagery of the first user, and may be selectively powered down when not in use (e.g., an eye capture camera may be powered down when the first user is not within a view frustum of the eye capture camera).
[0025] In an example, spatial location data of the first user may be evaluated to predict a potential new spatial location for the first user. For example, previous spatial location data may indicate that the first user is within the first view frustum but is walking towards the second view frustum (e.g., and thus will presumably enter the second view frustum within a particular time/duration). Accordingly, the second eye capture camera may be awakened into a capture ready state for obtaining eye region imagery (e.g., slightly) prior to when the first user is expected/predicted to enter the second view frustum, based upon the spatial location data. In this way, lag associated with obtaining gaze tracking information between multiple eye capture cameras may be mitigated.
[0026] Gaze tracking information may be concurrently generated for multiple users. In an example, the user tracking component may be utilized to obtain second user tracking data for a second user at the first time T1. The second user tracking data may be evaluated to identify a spatial location of the second user at the first time T1. An eye capture camera may be selected from the eye capture camera configuration based upon the eye capture camera having a view frustum corresponding to the spatial location of the second user at the first time T1. The eye capture camera may be invoked to obtain eye region imagery of the second user at or around the first time T1. Gaze tracking information may be generated for the second user based upon the eye region imagery of the second user at the first time T1. In this way, gaze tracking may be concurrently performed for multiple users, which may allow multiple users to perform tasks (e.g., the first user may control a first avatar of a videogame, and the second user may control a second avatar of the videogame). At 114, the method ends.
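Returning to the prediction described in paragraph [0025], the spatial location history can be extrapolated so that a camera is awakened slightly before the user enters its view frustum. The sketch below uses a simple constant-velocity model; the look-ahead interval and the camera methods are illustrative assumptions only.

```python
def predict_location(history, lookahead_s):
    """
    Predict a future (x, y) location from the two most recent (timestamp, (x, y))
    samples using a constant-velocity model.
    """
    if len(history) < 2:
        return history[-1][1]
    (t0, p0), (t1, p1) = history[-2], history[-1]
    dt = t1 - t0
    if dt <= 0:
        return p1
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * lookahead_s, p1[1] + vy * lookahead_s)

def prewake_cameras(cameras, history, lookahead_s=0.5):
    """Awaken any camera whose frustum will contain the predicted location."""
    predicted = predict_location(history, lookahead_s)
    for cam in cameras:
        if cam.frustum_contains(predicted):
            cam.power_on()   # enter a capture ready state before the user arrives
```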
[0027] Figs. 2A-2F illustrate examples of a system 201 for gaze tracking. Fig. 2A illustrates an example 200 of a gaze tracking component 214. The gaze tracking component 214 may be configured to utilize a user tracking component 212 to track spatial locations of one or more users. The gaze tracking component 214 may selectively invoke eye capture cameras of an eye capture camera configuration to obtain eye region imagery of users at various times for gaze tracking purposes. In an example, the eye capture camera configuration comprises one or more eye capture cameras, such as a first eye capture camera 202 configured to obtain imagery from a first view frustum 216, a second eye capture camera 204 configured to obtain imagery from a second view frustum 218, a third eye capture camera 206 configured to obtain imagery from a third view frustum 220, a fourth eye capture camera 208 configured to obtain imagery from a fourth view frustum 222, a fifth eye capture camera 210 configured to obtain imagery from a fifth view frustum 224, and/or other eye capture cameras (e.g., a relatively high resolution camera, such as a camera of about 40MP or greater, comprising a narrow field of view lens having a horizontal view of about 20 degrees to about 40 degrees (e.g., about a 22 degree horizontal view) and a vertical view of about 10 degrees to about 30 degrees (e.g., about a 17 degree vertical view)). In an example, one or more view frustums may overlap, which may mitigate lens distortion around edges of imagery obtained by eye capture cameras. When not in use (e.g., when user tracking data, obtained from the user tracking component 212, indicates a user is not within a view frustum of an eye capture camera), an eye capture camera may be transitioned into a powered down state, which may reduce power and/or bandwidth consumption.
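To gauge whether a narrow field-of-view camera such as the one described above can deliver roughly 150 pixels across an eye, one can estimate the pixel footprint from the horizontal resolution, the horizontal field of view, and the subject distance. The concrete figures below (an 8000-pixel-wide sensor standing in for a ~40MP camera, a 3 m viewing distance, and a 30 mm eye-region width) are assumptions chosen only to make the arithmetic checkable, not parameters taken from the disclosure.

```python
import math

def pixels_across_eye(sensor_width_px, horizontal_fov_deg, distance_m, eye_width_m=0.03):
    """Approximate pixels spanning an eye at a given distance, pinhole-camera model."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(horizontal_fov_deg / 2))
    return sensor_width_px * eye_width_m / scene_width_m

# Example: ~8000-pixel-wide sensor, about a 22 degree horizontal view, user 3 m away.
print(round(pixels_across_eye(8000, 22, 3.0)))   # roughly 200 pixels, above the ~150 pixel figure
```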
[0028] Fig. 2B illustrates an example 230 of the gaze tracking component 214 utilizing the user tracking component 212 to obtain first user tracking data for a first user 232 at a first time T1. The gaze tracking component 214 may evaluate the first user tracking data to identify a first spatial location of the first user 232 at the first time T1. Because the first spatial location corresponds to the first view frustum 216 of the first eye capture camera 202, the gaze tracking component 214 may turn on 234 the first eye capture camera 202 and may invoke the first eye capture camera 202 to obtain first eye region imagery of the first user 232 at or around the first time T1 (e.g., the first eye capture camera 202 may capture imagery comprising about 150 pixels or more across at least one eye of the first user 232). First gaze tracking information may be generated for the first user 232 at the first time T1 based upon the first eye region imagery. One or more tasks may be performed based upon the first gaze tracking information (e.g., the first user 232 may blink a left eye in order to play a song).
[0029] Fig. 2C illustrates an example 240 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking. The gaze tracking component 214 may determine that the user tracking component 212 obtained first user tracking data indicating that the first user 232 is located at a second spatial location at a second time T2. Because the second spatial location corresponds to the second view frustum 218 of the second eye capture camera 204, the gaze tracking component 214 may turn on 244 the second eye capture camera 204 and may invoke the second eye capture camera 204 to obtain second eye region imagery of the first user 232 at or around the second time T2. Gaze tracking information may be generated for the first user 232 at the second time T2 based upon the second eye region imagery. One or more tasks may be performed based upon the gaze tracking information (e.g., the first user 232 may blink a right eye in order to stop playing a song). The gaze tracking component 214 may power down 242 the first eye capture camera 202 into a powered down state because the second spatial location of the first user 232 at the second time T2 does not correspond to the first view frustum 216 of the first eye capture camera 202.
[0030] Fig. 2D illustrates an example 250 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking. The gaze tracking component 214 may determine that the user tracking component 212 obtained first user tracking data indicating that the first user 232 is located at a third spatial location at a third time T3. Because the third spatial location corresponds to the second view frustum 218 of the second eye capture camera 204 and the third view frustum 220 of the third eye capture camera 206 (e.g., the third spatial location of the first user 232 at the third time T3 may correspond to an overlap between the second view frustum 218 and the third view frustum 220), the gaze tracking component 214 may turn on 252 the third eye capture camera 206 and may invoke the third eye capture camera 206 to obtain third eye region imagery of the first user 232 at or around the third time T3. In an example, the gaze tracking component 214 may combine (e.g., stitch together) the second eye region imagery obtained by the second eye capture camera 204 and the third eye region imagery obtained by the third eye capture camera 206 to generate gaze tracking information for the user at the third time T3. One or more tasks may be performed based upon the gaze tracking information (e.g., the first user 232 may look right in order to skip to a next song to play).
[0031] Fig. 2E illustrates an example 260 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking of multiple users. In an example, the second eye capture camera 204 and the third eye capture camera 206 may be invoked to capture gaze tracking information for the first user 232 at a fourth time T4 based upon the first user 232 being spatially located in an overlap region between the second view frustum 218 and the third view frustum 220. The gaze tracking component 214 may utilize the user tracking component 212 to obtain second user tracking data for a second user 262 at the fourth time T4. The gaze tracking component 214 may evaluate the second user tracking data to identify a spatial location of the second user 262 at the fourth time T4. Because the spatial location corresponds to the fifth view frustum 224 of the fifth eye capture camera 210, the gaze tracking component 214 may turn on 262 the fifth eye capture camera 210 and may invoke the fifth eye capture camera 210 to obtain eye region imagery of the second user 262 at or around the fourth time T4. Gaze tracking information may be generated for the second user 262 at the fourth time T4. One or more tasks may be performed on behalf of the first user 232 based upon gaze tracking information of the first user 232 at the fourth time T4 and/or one or more tasks may be performed on behalf of the second user 262 based upon gaze tracking information of the second user 262 at the fourth time T4.
[0032] Fig. 2F illustrates an example 270 of the gaze tracking component 214 selectively utilizing one or more eye capture cameras for gaze tracking of multiple users. In an example, the user tracking component 212 may obtain first user tracking data indicating that the first user 232 is within a fourth spatial location at a fifth time T5 and may obtain second user tracking data indicating that the second user 262 is within a fifth spatial location at the fifth time T5. Because the fourth spatial location and the fifth spatial location correspond to the fourth view frustum 222 of the fourth eye capture camera 208, the gaze tracking component 214 may turn on 278 the fourth eye capture camera 208 and may invoke the fourth eye capture camera 208 to obtain eye region imagery of the first user 232 at or around the fifth time T5 and eye region imagery of the second user 262 at or around the fifth time T5. The gaze tracking component may power down 272 the second eye capture camera 204, power down 274 the third eye capture camera 206, and power down 280 the fifth eye capture camera 210 based upon the second view frustum 218, the third view frustum 220, and the fifth view frustum 224 not corresponding to the fourth spatial location and/or the fifth spatial location. Gaze tracking information may be generated for the first user 232 at the fifth time T5 and for the second user 262 at the fifth time T5 based upon the eye region imagery of the first user 232 and the second user 262 captured by the fourth eye capture camera 208. One or more tasks may be performed on behalf of the first user 232 based upon the gaze tracking information of the first user 232 at the fifth time T5 and/or one or more tasks may be performed on behalf of the second user 262 based upon the gaze tracking information of the second user 262 at the fifth time T5.
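When a single camera's frustum covers both users, as in Fig. 2F, one hypothetical way to keep the per-user bookkeeping straight is to crop a per-user eye region out of the shared frame using each user's tracked location and then generate gaze tracking information per user. The bounding-box input, NumPy-style slicing, and `estimate_gaze` callback below are assumptions for illustration, not elements defined by the disclosure.

```python
def per_user_gaze(frame, tracked_users, estimate_gaze):
    """
    Produce {user_id: gaze_info} from one shared camera frame.

    `tracked_users` maps a user id to an eye-region bounding box (left, top, right,
    bottom) in frame pixel coordinates, as reported by the user tracking component.
    `estimate_gaze` turns a cropped eye-region image into gaze tracking information.
    """
    results = {}
    for user_id, (left, top, right, bottom) in tracked_users.items():
        eye_region = frame[top:bottom, left:right]   # e.g., a NumPy image slice
        results[user_id] = estimate_gaze(eye_region)
    return results
```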
[0033] Fig. 3A illustrates an example 300 of performing a first task (e.g., a videogame command) based upon gaze tracking information 302 for a first user at a first time T1. A gaze tracking component 304 may generate the gaze tracking information 302 based upon eye region imagery of the first user obtained by one or more eye capture cameras. For example, the gaze tracking component 304 may invoke a first eye capture camera to obtain first eye region imagery of the first user at the first time T1 (e.g., the first user may look up and to the right) based upon a user tracking component indicating that a first spatial location of the first user at the first time T1 is within a first view frustum of the first eye capture camera. The gaze tracking component 304 may determine that a look up and right gaze input is mapped to a move avatar up and right videogame command 306 for an adventure videogame 308. Accordingly, an avatar 310 may be moved 312 up and to the right.
[0034] Fig. 3B illustrates an example 320 of performing a second task (e.g., a videogame command) based upon second gaze tracking information 322 for the first user at a second time T2. The gaze tracking component 304 may generate the second gaze tracking information 322 based upon second eye region imagery of the first user obtained by one or more eye capture cameras. For example, the gaze tracking component 304 may invoke a second eye capture camera to obtain second eye region imagery of the first user at the second time T2 (e.g., the first user may look down) based upon the user tracking component indicating that a second spatial location of the first user at the second time T2 is within a second view frustum of the second eye capture camera. The gaze tracking component 304 may determine that a look down gaze input is mapped to a move avatar down videogame command 324 for the adventure videogame 308. Accordingly, the avatar 310 may be moved 326 down.
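The avatar control in Figs. 3A and 3B reduces to mapping a recognized gaze direction to a movement delta for the avatar 310. The direction names, step size, and coordinate convention below are illustrative assumptions rather than values taken from the disclosure.

```python
# Hypothetical mapping from recognized gaze inputs to (dx, dy) avatar movement,
# with +x to the right and +y upward.
GAZE_TO_MOVE = {
    "look_up_right": (1, 1),
    "look_down": (0, -1),
    "look_left": (-1, 0),
    "look_right": (1, 0),
}

def move_avatar(avatar_pos, gaze_input, step=1):
    """Apply the movement mapped to a gaze input; unknown inputs leave the avatar in place."""
    dx, dy = GAZE_TO_MOVE.get(gaze_input, (0, 0))
    return (avatar_pos[0] + dx * step, avatar_pos[1] + dy * step)

# Example: a "look up and right" gaze at time T1 moves the avatar up and to the right.
print(move_avatar((10, 10), "look_up_right"))   # (11, 11)
```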
[0035] Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device is illustrated in Fig. 4, wherein the implementation 400 comprises a computer-readable medium 408, such as a CD-R, DVD-R, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data 406. This computer-readable data 406, such as binary data comprising at least one of a zero or a one, in turn comprises a set of computer instructions 404 configured to operate according to one or more of the principles set forth herein. In some embodiments, the processor-executable computer instructions 404 are configured to perform a method 402, such as at least some of the exemplary method 100 of Fig. 1, for example. In some embodiments, the processor-executable instructions 404 are configured to implement a system, such as at least some of the exemplary system 201 of Figs. 2A-2F, for example. Many such computer-readable media are devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
[0036] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing at least some of the claims.
[0037] As used in this application, the terms "component," "module," "system", "interface", and/or the like are generally intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
[0038] Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term "article of manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. Of course, many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
[0039] Fig. 5 and the following discussion provide a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein. The operating environment of Fig. 5 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
[0040] Although not required, embodiments are described in the general context of "computer readable instructions" being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
[0041] Fig. 5 illustrates an example of a system 500 comprising a computing device 512 configured to implement one or more embodiments provided herein. In one configuration, computing device 512 includes at least one processing unit 516 and memory 518. Depending on the exact configuration and type of computing device, memory 518 may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example) or some combination of the two. This configuration is illustrated in Fig. 5 by dashed line 514.
[0042] In other embodiments, device 512 may include additional features and/or functionality. For example, device 512 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic storage, optical storage, and the like. Such additional storage is illustrated in Fig. 5 by storage 520. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in storage 520. Storage 520 may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in memory 518 for execution by processing unit 516, for example.
[0043] The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and nonremovable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Memory 518 and storage 520 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 512. Computer storage media does not, however, include propagated signals. Rather, computer storage media excludes propagated signals. Any such computer storage media may be part of device 512.
[0044] Device 512 may also include communication connection(s) 526 that allows device 512 to communicate with other devices. Communication connection(s) 526 may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting computing device 512 to other computing devices. Communication connection(s) 526 may include a wired connection or a wireless connection. Communication connection(s) 526 may transmit and/or receive communication media.
[0045] The term "computer readable media" may include communication media. Communication media typically embodies computer readable instructions or other data in a "modulated data signal" such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" may include a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
[0046] Device 512 may include input device(s) 524 such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, video input devices, and/or any other input device. Output device(s) 522 such as one or more displays, speakers, printers, and/or any other output device may also be included in device 512. Input device(s) 524 and output device(s) 522 may be connected to device 512 via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device(s) 524 or output device(s) 522 for computing device 512.
[0047] Components of computing device 512 may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of computing device 512 may be interconnected by a network. For example, memory 518 may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
[0048] Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 530 accessible via a network 528 may store computer readable instructions to implement one or more embodiments provided herein. Computing device 512 may access computing device 530 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 512 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 512 and some at computing device 530.
[0049] Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which, if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.
[0050] Further, unless specified otherwise, "first," "second," and/or the like are not intended to imply a temporal aspect, a spatial aspect, an ordering, etc. Rather, such terms are merely used as identifiers, names, etc. for features, elements, items, etc. For example, a first object and a second object generally correspond to object A and object B or two different or two identical objects or the same object.
[0051] Moreover, "exemplary" is used herein to mean serving as an example, instance, illustration, etc., and not necessarily as advantageous. As used herein, "or" is intended to mean an inclusive "or" rather than an exclusive "or". In addition, "a" and "an" as used in this application are generally to be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form. Also, "at least one of A and B" and/or the like generally means A or B and/or both A and B. Furthermore, to the extent that "includes", "having", "has", "with", and/or variants thereof are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising".
[0052] Also, although the disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art based upon a reading and understanding of this specification and the annexed drawings. The disclosure includes all such modifications and alterations and is limited only by the scope of the following claims. In particular regard to the various functions performed by the above described components (e.g., elements, resources, etc.), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., that is functionally equivalent), even though not structurally equivalent to the disclosed structure. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application.

Claims

1. A system for gaze tracking, comprising:
a gaze tracking component configured to:
utilize a user tracking component to obtain first user tracking data for a first user at a first time T1;
evaluate the first user tracking data to identify a first spatial location of the first user at the first time T1;
select a first eye capture camera from an eye capture camera configuration based upon the first eye capture camera having a first view frustum corresponding to the first spatial location, the eye capture camera configuration comprising a plurality of eye capture cameras having a fixed view frustum configuration;
invoke the first eye capture camera to obtain first eye region imagery of the first user at the first time T1; and
generate first gaze tracking information for the first user at the first time T1 based upon the first eye region imagery.
2. The system of claim 1, the gaze tracking component configured to:
predict a potential new spatial location of the first user; and
responsive to the potential new spatial location corresponding to a second view frustum of a second eye capture camera, awaken the second eye capture camera into a capture ready state for obtaining eye region imagery.
3. The system of claim 1, the gaze tracking component configured to:
maintain one or more eye capture cameras of the eye capture camera configuration in a powered down state at the first time T1 based upon the one or more eye capture cameras having view frustums not corresponding to the first spatial location.
4. The system of claim 1, the gaze tracking component configured to:
responsive to the first user tracking data indicating that the first user is, at a second time T2, within a second spatial location to which the first view frustum does not correspond, transition the first eye capture camera into a powered down state at the second time T2.
5. The system of claim 1, the gaze tracking component configured to:
select a second eye capture camera from the eye capture camera configuration based upon the second eye capture camera having a second view frustum corresponding to the first spatial location; invoke the second eye capture camera to obtain second eye region imagery of the first user at the first time T1; and
combine the first eye region imagery and the second eye region imagery to generate the first gaze tracking information.
6. The system of claim 1, the gaze tracking component configured to:
responsive to the first user tracking data indicating that the first user is, at a second time T2, within a second spatial location:
select a second eye capture camera from the eye capture camera configuration based upon the second eye capture camera having a second view frustum corresponding to the second spatial location;
transition, at the second time T2, the first eye capture camera into a powered down state based upon the first view frustum not corresponding to the second spatial location;
invoke the second eye capture camera to obtain second eye region imagery of the first user at the second time T2; and
generate second gaze tracking information for the first user at the second time T2 based upon the second eye region imagery.
7. A method for gaze tracking, comprising:
utilizing a user tracking component to obtain first user tracking data for a first user at a first time T1;
evaluating the first user tracking data to identify a first spatial location of the first user at the first time T1;
selecting a first eye capture camera from an eye capture camera configuration based upon the first eye capture camera having a first view frustum corresponding to the first spatial location, the eye capture camera configuration comprising a plurality of eye capture cameras having a fixed view frustum configuration;
invoking the first eye capture camera to obtain first eye region imagery of the first user at the first time T1; and
generating first gaze tracking information for the first user at the first time T1 based upon the first eye region imagery.
8. The method of claim 7, comprising:
maintaining one or more eye capture cameras of the eye capture camera configuration in a powered down state at the first time Tl based upon the one or more eye capture cameras having view frustums not corresponding to the first spatial location.
9. The method of claim 7, comprising:
responsive to the first user tracking data indicating that the first user is, at a second time T2, within a second spatial location:
selecting a second eye capture camera from the eye capture camera configuration based upon the second eye capture camera having a second view frustum corresponding to the second spatial location;
transitioning, at the second time T2, the first eye capture camera into a powered down state based upon the first view frustum not corresponding to the second spatial location;
invoking the second eye capture camera to obtain second eye region imagery of the first user at the second time T2; and
generating second gaze tracking information for the first user at the second time T2 based upon the second eye region imagery.
10. The method of claim 7, comprising:
utilizing the user tracking component to obtain second user tracking data for a second user at the first time T1;
evaluating the second user tracking data to identify a second spatial location of the second user at the first time T1;
selecting a second eye capture camera from the eye capture camera configuration based upon the second eye capture camera having a second view frustum corresponding to the second spatial location;
invoking the second eye capture camera to obtain second eye region imagery of the second user at the first time T1; and
generating second gaze tracking information for the second user at the first time T1 based upon the second eye region imagery.
PCT/US2015/032334 2014-05-30 2015-05-25 Gaze tracking for one or more users WO2015183766A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15728308.6A EP3149559A1 (en) 2014-05-30 2015-05-25 Gaze tracking for one or more users
CN201580028775.2A CN106662916A (en) 2014-05-30 2015-05-25 Gaze tracking for one or more users

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/291,254 2014-05-30
US14/291,254 US20150346814A1 (en) 2014-05-30 2014-05-30 Gaze tracking for one or more users

Publications (1)

Publication Number Publication Date
WO2015183766A1 true WO2015183766A1 (en) 2015-12-03

Family

ID=53373617

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/032334 WO2015183766A1 (en) 2014-05-30 2015-05-25 Gaze tracking for one or more users

Country Status (5)

Country Link
US (1) US20150346814A1 (en)
EP (1) EP3149559A1 (en)
CN (1) CN106662916A (en)
TW (1) TW201544996A (en)
WO (1) WO2015183766A1 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170064209A1 (en) * 2015-08-26 2017-03-02 David Cohen Wearable point of regard zoom camera
US10397546B2 (en) 2015-09-30 2019-08-27 Microsoft Technology Licensing, Llc Range imaging
US9799161B2 (en) * 2015-12-11 2017-10-24 Igt Canada Solutions Ulc Enhanced electronic gaming machine with gaze-aware 3D avatar
US10523923B2 (en) 2015-12-28 2019-12-31 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
US10462452B2 (en) 2016-03-16 2019-10-29 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
CN108733203A (en) * 2017-04-20 2018-11-02 上海耕岩智能科技有限公司 A kind of method and apparatus of eyeball tracking operation
US11153465B2 (en) * 2017-06-21 2021-10-19 Dell Products L.P. System and method of processing video of a tileable wall
US10585277B2 (en) * 2017-08-31 2020-03-10 Tobii Ab Systems and methods for tracking a gaze of a user across a multi-display arrangement
JP7223303B2 (en) * 2019-03-14 2023-02-16 日本電気株式会社 Information processing device, information processing system, information processing method and program
WO2020209491A1 (en) 2019-04-11 2020-10-15 Samsung Electronics Co., Ltd. Head-mounted display device and operating method of the same
CN110171427B (en) * 2019-05-30 2020-10-27 北京七鑫易维信息技术有限公司 Sight tracking method, device and system
FR3099837A1 (en) * 2019-08-09 2021-02-12 Orange Establishment of communication by analyzing eye movements
US11382713B2 (en) * 2020-06-16 2022-07-12 Globus Medical, Inc. Navigated surgical system with eye to XR headset display calibration
US20230308505A1 (en) * 2022-03-22 2023-09-28 Microsoft Technology Licensing, Llc Multi-device gaze tracking

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0714081A1 (en) * 1994-11-22 1996-05-29 Sensormatic Electronics Corporation Video surveillance system
GB2379354A (en) * 2001-07-31 2003-03-05 Hewlett Packard Co Monitoring system with motion-dependent resolution selection
WO2013025354A2 (en) * 2011-08-18 2013-02-21 Qualcomm Incorporated Smart camera for taking pictures automatically
US20130178287A1 (en) * 2010-12-13 2013-07-11 Microsoft Corporation Human-computer interface system having a 3d gaze tracker
EP2699022A1 (en) * 2012-08-16 2014-02-19 Alcatel Lucent Method for provisioning a person with information associated with an event

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030038754A1 (en) * 2001-08-22 2003-02-27 Mikael Goldstein Method and apparatus for gaze responsive text presentation in RSVP display
US8292433B2 (en) * 2003-03-21 2012-10-23 Queen's University At Kingston Method and apparatus for communication between humans and devices
CN1293446C (en) * 2005-06-02 2007-01-03 北京中星微电子有限公司 Non-contact type visual control operation system and method
US7878910B2 (en) * 2005-09-13 2011-02-01 Igt Gaming machine with scanning 3-D display system
US8077914B1 (en) * 2006-08-07 2011-12-13 Arkady Kaplan Optical tracking apparatus using six degrees of freedom
EP2235713A4 (en) * 2007-11-29 2012-04-25 Oculis Labs Inc Method and apparatus for display of secure visual content
US20100079508A1 (en) * 2008-09-30 2010-04-01 Andrew Hodge Electronic devices with gaze detection capabilities
ES2880475T3 (en) * 2009-04-01 2021-11-24 Tobii Ab Visual representation system with illuminators for gaze tracking
DE112011105941B4 (en) * 2011-12-12 2022-10-20 Intel Corporation Scoring the interestingness of areas of interest in a display element
CA2882606A1 (en) * 2012-08-22 2014-02-27 Neuro Assessment Systems Inc. Method and apparatus for assessing neurocognitive status


Also Published As

Publication number Publication date
CN106662916A (en) 2017-05-10
US20150346814A1 (en) 2015-12-03
EP3149559A1 (en) 2017-04-05
TW201544996A (en) 2015-12-01

Similar Documents

Publication Publication Date Title
US20150346814A1 (en) Gaze tracking for one or more users
US20220334646A1 (en) Systems and methods for extensions to alternative control of touch-based devices
KR101950641B1 (en) Scene analysis for improved eye tracking
US10416789B2 (en) Automatic selection of a wireless connectivity protocol for an input device
US9342160B2 (en) Ergonomic physical interaction zone cursor mapping
CA2942377C (en) Object tracking in zoomed video
US9658695B2 (en) Systems and methods for alternative control of touch-based devices
US20090153468A1 (en) Virtual Interface System
US20160088060A1 (en) Gesture navigation for secondary user interface
JP2016510144A (en) Detection of natural user input involvement
US10474324B2 (en) Uninterruptable overlay on a display
KR20160106653A (en) Coordinated speech and gesture input
KR102448223B1 (en) Media capture lock affordance for graphical user interfaces
BR112020009381A2 (en) haptic method and device for capturing haptic content from an object
US9857869B1 (en) Data optimization
US9507429B1 (en) Obscure cameras as input
US9898183B1 (en) Motions for object rendering and selection
US9761009B2 (en) Motion tracking device control systems and methods
US11199906B1 (en) Global user input management
US20240187687A1 (en) Smart home automation using multi-modal contextual information
Yeo et al. OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera
Rodrigues et al. Can People With High Physical Movement Restrictions Access to Any Computer? The CaNWII Tool

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15728308

Country of ref document: EP

Kind code of ref document: A1

DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
REEP Request for entry into the european phase

Ref document number: 2015728308

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015728308

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE