WO2015065478A1 - Gaze-assisted touchscreen inputs - Google Patents

Gaze-assisted touchscreen inputs

Info

Publication number
WO2015065478A1
Authority
WO
WIPO (PCT)
Prior art keywords
touch
signal
gaze
user
touchscreen
Prior art date
Application number
PCT/US2013/068125
Other languages
French (fr)
Inventor
Nathan R. Andrysco
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corporation filed Critical Intel Corporation
Priority to PCT/US2013/068125 priority Critical patent/WO2015065478A1/en
Priority to JP2016524529A priority patent/JP6165979B2/en
Priority to US14/127,955 priority patent/US9575559B2/en
Priority to CN201380080038.8A priority patent/CN105593785B/en
Priority to EP13896672.6A priority patent/EP3063602B1/en
Publication of WO2015065478A1 publication Critical patent/WO2015065478A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/041Indexing scheme relating to G06F3/041 - G06F3/045
    • G06F2203/04104Multi-touch detection in digitiser, i.e. details about the simultaneous detection of a plurality of touching locations, e.g. multiple fingers or pen and finger
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/048Indexing scheme relating to G06F3/048
    • G06F2203/04808Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously; e.g. using several fingers or a combination of fingers and pen

Definitions

  • the present disclosure relates generally to the field of data processing, and more particularly, to gaze-assisted touchscreen inputs.
  • users of touchscreen-based devices, such as tablets and smartphones, are often frustrated by the devices' limited ability to differentiate between different kinds of touches and by the devices' tendency to respond in unexpected ways to stray touches.
  • touch accuracy is compromised by the parallax effect, in which the desired location of touch does not align with the actual location of touch.
  • FIG. 1 is a block diagram of an illustrative computing system configured for gaze- assisted touchscreen inputs, in accordance with various embodiments.
  • FIG. 2 is a block diagram of an illustrative gaze-assisted touchscreen input system that may be implemented by the computing system of FIG. 1, in accordance with various embodiments.
  • FIG. 3 illustrates a scenario for the generation of a gaze location signal when a user views a touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
  • FIG. 4 illustrates a region of a user's gaze on the touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
  • FIGS. 5 and 6 illustrate a region of a user's gaze, touches and displays rendered on the touchscreen of the computing system of FIG. 1 before and after processing of touch signals of a user, in accordance with various embodiments.
  • FIG. 7 illustrates two users viewing the touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
  • FIGS. 8 and 9 illustrate gaze regions, touches and displays rendered on the touchscreen of the computing system of FIG. 1 before and after processing of touch signals of two users, in accordance with various embodiments.
  • FIG. 10 illustrates a scenario for the generation of a position signal when a user views the touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
  • FIGS. 11 and 12 illustrate displays rendered on the touchscreen of the computing system of FIG. 1 before and after processing of a position signal, in accordance with various embodiments.
  • FIGS. 13-15 are flow diagrams of illustrative processes for generating gaze-assisted touchscreen inputs, in accordance with various embodiments.
  • a computing system may receive a gaze location signal indicative of a region of a user's gaze on a touchscreen, receive a touch signal indicative of a touch of the user on the touchscreen, and generate an input signal for the computing system, based at least in part on the gaze location signal and the touch signal.
  • the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present disclosure, are synonymous.
  • the phrase “coupled” may mean that two or more elements are in direct physical or electrical contact, or that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other (e.g., via one or more intermediate elements, which may perform their own transformations or have their own effects).
  • two elements may be coupled to each other when both elements communicate with a common element (e.g., a memory device).
  • the term “logic” may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
  • a signal may be "received" by a component if it is generated externally or internally to that component, and acknowledged and/or processed by that component.
  • FIG. 1 depicts an illustrative computing system 100 configured for gaze- assisted touchscreen inputs, in accordance with various embodiments.
  • the computing system 100 may be configured to receive a gaze location signal indicative of a region of a user's gaze on a touchscreen of the computing system, receive a touch signal indicative of a touch of the user on the touchscreen, and generate an input signal for the computer system, based at least in part on the gaze location signal and the touch signal.
  • the computing system 100 may include a personal computing device 102, a touchscreen 104, and a remote computing device 106. Each of the personal computing device 102, the touchscreen 104 and the remote computing device 106 may include gaze-assisted touchscreen input components (illustrated in FIG. 1 as the gaze-assisted touchscreen input components 114, 116 and 118, respectively).
  • Gaze- assisted touchscreen input operations may be distributed between the gaze-assisted touchscreen input components 114, 116 and 118 of the computing system 100 as suitable.
  • Gaze- assisted touchscreen input operations may be distributed between the personal computing device 102, the touchscreen 104 and the remote computing device 106 as discussed herein, but any other combination of more or fewer components, and any other distribution of the operations, may be used.
  • the gaze-assisted touchscreen input component 114 or the gaze-assisted touchscreen input component 118 may be omitted, and all suitable gaze-assisted touchscreen input operations (e.g., any of those described herein) may be performed by the remaining gaze-assisted touchscreen input component(s).
  • the computing system 100 may be configured as the gaze-assisted touchscreen input system 200 discussed below with reference to FIG. 2. Except for the gaze-assisted touchscreen input teachings of the present disclosure incorporated therein, the personal computing device 102, the touchscreen 104, and the remote computing device 106 may be a broad range of such devices known in the art. Specific, but not limiting, examples are described below.
  • Communication between the components of the computing system 100 may be enabled by the communication pathways 108, 110 and 112.
  • the communication pathways 108, 110 and 112 may each include wired communication pathways and/or wireless communication pathways, over direct couplings, and/or over personal, local and/or wide area networks.
  • the touchscreen 104 and the remote computing device 106 may include suitable hardware for supporting the communication pathways 108, 110 and 112, such as network interface cards, modems, WiFi devices, Bluetooth devices, and so forth.
  • the communication pathways 108, 110 and 112 may be direct communication pathways between the components as illustrated in FIG. 1.
  • references to "direct" communication pathways between two components of the computing system 100 of FIG. 1 may refer to a communication pathway that does not route through another illustrated component, but that may route through other non-illustrated devices (e.g., routers and/or switches).
  • Each of the devices included in the computing system 100 may include a processing device and a storage device.
  • the processing device may include one or more processing devices, such as one or more processing cores, ASICs, electronic circuits, processors (shared, dedicated, or group), combinational logic circuits, and/or other suitable components that may be configured to process electronic data.
  • the storage device may include any suitable memory or mass storage devices (such as solid-state drive, diskette, hard drive, compact disc read only memory (CD-ROM) and so forth).
  • Each of the computing devices included in the computing system 100 may include one or more buses (and bus bridges, if suitable) to communicatively couple the processing device, the storage device, and any other devices included in the respective computing devices.
  • the storage device may include a set of computational logic, which may include one or more copies of computer readable media having instructions stored therein which, when executed by the processing device of the computing device, may cause the computing device to implement any of the techniques and methods disclosed herein, or any portion thereof.
  • the computational logic may include any of the logic discussed below with reference to FIG. 2.
  • the personal computing device 102 may be a tablet or smartphone, and the touchscreen 104 may be integral to the tablet or smartphone (e.g., forming a surface of the tablet or smartphone).
  • touchscreen 104 may be a standalone device (e.g., a drawing tablet) and the personal computing device 102 may be a desktop computer configured to perform gaze-assisted touchscreen input operations (such as those described herein) based on touch data transmitted from the touchscreen 104 to the personal computing device 102 through a wired or wireless communication pathway 108.
  • the personal computing device 102 may be a computing device that is configured to be worn on the body of the user (e.g., integrated into a garment, accessory or other wearable support structure).
  • a wearable personal computing device 102 may include glasses, a headset, a hair accessory (e.g., a headband or barrette), an ear piece, jewelry (e.g., brooch, earrings or a necklace), a wrist band (e.g., a wristwatch), a neck band (e.g., a tie or scarf), a garment (e.g., a shirt, pants, dress skirt or jacket), shoes, a lanyard or nametag, a contact lens, or an implantable support structure, among others.
  • the personal computing device 102 may be a wearable computing device including an image capture device (e.g., the image capture device 232 of FIG. 2, discussed below). In some embodiments, the personal computing device 102 may be a wrist-mounted computing device having an image capture device. In some embodiments, the personal computing device 102 may be a glasses-mounted computing device having an image capture device facing the wearer. In some embodiments, the personal computing device 102 may be a wearable computing device that includes a "world-facing" image capture device (i.e., an image capture device directed away from the wearer).
  • the personal computing device 102 may be a desktop or stand-alone computing device or a computing device configured for carrying in a pocket, backpack or other carrying case, and for operation with one or more of a user's hands.
  • Examples of computing devices that may serve as the personal computing device 102 include cellular phones, smartphones, other handheld mobile communication devices, tablets, electronic book readers, personal digital assistants, laptops, or other such computing devices.
  • although the personal computing device 102 (and other components described herein) may be referred to in the singular, any number of personal computing devices may be included in the personal computing device 102 (and similarly, any component may include multiple such components).
  • gaze-assisted touchscreen input operations of the computing system 100 may be controlled by an app or plug-in on the personal computing device 102, for example.
  • the personal computing device 102 may include two or more computing devices, one of which has more computing resources (e.g., processing power, memory, and/or communication bandwidth) than the other.
  • the personal computing device 102 may include a larger tablet computing device and a smaller wrist- or glasses-mounted computing device.
  • data captured and preliminarily processed by the smaller computing device (e.g., image, audio, or other sensor data) may be transmitted to the larger computing device for further processing.
  • the computing system 100 may include a touchscreen 104.
  • a "touchscreen" may include a device that provides a screen on which a visual display is rendered that may be controlled by contact with a user's finger or other contact instrument (e.g., a stylus).
  • the primary contact instrument discussed herein may be a user's finger, but any suitable contact instrument may be used in place of a finger.
  • Non-limiting examples of touchscreen technologies that may be used to implement the touchscreen 104 include resistive touchscreens, surface acoustic wave touchscreens, capacitive touchscreens, infrared-based touchscreens, and any other suitable touchscreen technology.
  • the touchscreen 104 may include suitable sensor hardware and logic to generate a touch signal.
  • a touch signal may include information regarding a location of the touch (e.g., one or more sets of (x,y) coordinates describing an area, shape or skeleton of the touch), a pressure of the touch (e.g., as measured by area of contact between a user's finger or a deformable stylus and the touchscreen 104, or by a pressure sensor), a duration of contact, any other suitable information, or any combination of such information.
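  • As a concrete illustration only (not part of the disclosure), such a touch signal could be carried in a small record like the Python sketch below; the field names and types are assumptions chosen for readability.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class TouchSignal:
            """Illustrative container for the touch information described above."""
            contact_points: List[Tuple[float, float]]  # (x, y) coordinates outlining the area, shape or skeleton of the touch
            pressure: float                            # e.g., normalized contact area or a pressure-sensor reading
            duration_ms: float                         # duration of contact, in milliseconds

        # Example: a brief, light, highly localized contact (a likely "tap")
        tap_like = TouchSignal(contact_points=[(120.0, 348.0)], pressure=0.2, duration_ms=80.0)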
  • the touchscreen 104 may be configured to stream the touch signal to the personal computing device 102 and/or the remote computing device 106 via a wired or wireless communication pathway (e.g., the pathways 108 and 112, respectively).
  • the touchscreen 104 may be connected locally to (or integrated with) the personal computing device 102.
  • the remote computing device 106 may include one or more servers
  • the communication pathway 110 between the personal computing device 102 and the remote computing device 106, and communication pathway 112 between the touchscreen 104 and the remote computing device 106, may be configured according to any remote wired or wireless communication protocol.
  • the remote computing device 106 may have more computing resources (e.g., processing power, memory, and/or communication bandwidth) than the personal computing device 102 or the touchscreen 104.
  • data captured and preliminarily processed by the personal computing device 102 and/or the touchscreen 104 (e.g., touch data embodied in a touch signal) may be transmitted over the communication pathways 110 and/or 112 to the remote computing device 106 for further processing.
  • the remote computing device 106 may perform most of the gaze-assisted touchscreen input operations discussed below with reference to FIG. 2.
  • the remote computing device 106 may include a storage device for storing touch signals, gaze location signals (discussed below), or any other data that may be accessed when the computing system 100 performs a gaze-assisted touchscreen input operation in accordance with the techniques disclosed herein.
  • one or more of the communication pathways between components of the computing system 100 may not be included.
  • the touchscreen 104 may not communicate directly with the remote computing device 106 via the communication pathway 112, but may communicate with the remote computing device 106 via the personal computing device 102 and the communication pathways 108 and 110.
  • FIG. 2 is a block diagram of an illustrative gaze-assisted touchscreen input system 200, in accordance with various embodiments.
  • the system 200 may include input/output (I/O) devices 228, processing logic 202, and a storage device 226.
  • the system 200 may be implemented by the computing system 100 of FIG. 1, in accordance with various embodiments.
  • the components of the system 200 may be distributed in any suitable manner among one or more of the components of the computing system 100.
  • Components of the system 200 may be described as implemented by the computing system 100 for illustrative purposes, but the system 200 may be implemented by any suitably configured computing device or collection of computing devices.
  • the system 200 may be implemented by the personal computing device 102 of the computing system 100. In some such embodiments, the touchscreen 104 may be integral to the personal computing device 102.
  • the system 200 may be configured to perform any of a number of gaze-assisted touchscreen input operations.
  • the system 200 may be configured to receive a touch signal indicative of a touch of a user on a touchscreen of the system 200, receive a gaze location signal indicative of a region of a user's gaze on the touchscreen, and generate an input signal based at least in part on the gaze location signal and the touch signal.
  • the input signal may, e.g., be provided to an operating system of the system 200, an application running on the system 200, another device in communication with the system 200, or any other component internal or external to the system 200.
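  • Purely for illustration, the input signal handed to an operating system or application might bundle the selected touch type with the touch location and, when multiple users are tracked, an associated user; the record below is an assumption about a plausible shape, not the patent's interface.

        from dataclasses import dataclass
        from typing import Optional, Tuple

        @dataclass
        class InputSignal:
            """Illustrative input signal derived from a touch signal and a gaze location signal."""
            touch_type: str                    # e.g., "tap", "swipe", "pinch", "spread", or "none"
            location: Tuple[float, float]      # representative (x, y) location of the touch on the touchscreen
            user_id: Optional[int] = None      # associated user, when more than one gaze region is tracked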
  • the system 200 may not include the gaze location logic 204, but may be coupled with the gaze location logic 204 (embodied in, e.g., a separate device) via a wired or wireless communication pathway so as to be able to receive signals from and/or send signals to the gaze location logic 204.
  • the system 200 may not include the touch detection logic 206, but may be coupled with the touch detection logic 206 (embodied in, e.g., a separate device) via a wired or wireless communication pathway so as to be able to receive signals from and/or send signals to the touch detection logic 206.
  • the system 200 may not be configured for display adjustment (as discussed below), and thus may not include the viewing position logic 212 and/or the display adjustment logic 216.
  • the system 200 may include the I/O devices 228.
  • the I/O devices 228 may include a touchscreen 104, an image capture device 232 and other devices 234.
  • the touchscreen 104 may take the form of any of the embodiments discussed above with reference to FIG. 1.
  • the image capture device 232 may include one or more cameras. As used herein, the term "camera” may include still image cameras and video cameras. A camera may be analog or digital. In some embodiments, the image capture device 232 may capture high-definition video. In some embodiments, the image capture device 232 may be configured to stream image data (e.g., video data) to the personal computing device 102 and/or the remote computing device 106 via a wired or wireless communication pathway (e.g., the pathways 108 and 112, respectively). In some embodiments, the image capture device 232 may be connected locally to (or integrated with) the personal computing device 102, while in other embodiments, the image capture device 232 may be remote from the personal computing device 102.
  • the image capture device 232 may use any imaging wavelength (e.g., visible or infrared light).
  • the image capture device 232 may include a visible light camera and an infrared camera, and may combine the images captured by these devices or treat them separately.
  • the image capture device 232 may include two or more cameras having different orientations (e.g., one camera that is mounted on a wearable personal computing device 102 and faces away from the user in a "world-facing" orientation, and one camera that is mounted on the personal computing device 102 and faces toward the user when the personal computing device 102 is in use).
  • the image capture device 232 may include a single image capture device (e.g., a single camera).
  • the image capture device 232 may include an array camera, in which multiple image sensors and/or lenses may be used to capture multiple images of a scene.
  • the image capture device 232 may include a processing device which is configured to execute any known technique for combining the images or provide various image browsing experiences (e.g., in conjunction with other components of the computing system 100).
  • the image capture device 232 may include a depth camera, which may provide information about the depth of various objects in the imaged scene. Some depth cameras may use a time-of-flight technique to determine depth information.
  • the image capture device 232 may be mounted on or proximate to the touchscreen 104, and may capture one or more images of a user of the touchscreen 104. These images may be used to determine a region of the user's gaze (e.g., as discussed below with reference to the gaze location logic 204) and/or to determine a position of the user's eyes relative to the touchscreen 104 (e.g., as discussed below with reference to the viewing position logic 212).
  • the image capture device 232 may be mounted in a wearable personal computing device 102 that attaches on or near a user's eyes, and may capture images of the touchscreen 104 while the touchscreen 104 is being used. These images may be used to determine a region of the user's gaze (e.g., as discussed below with reference to the gaze location logic 204) and/or to determine a position of the user's eyes relative to the touchscreen 104 (e.g., as discussed below with reference to the viewing position logic 212).
  • the other devices 234 included in the I/O devices 228 may include any suitable input, output or storage devices, for example.
  • Devices that may be included in the other devices 234 may include proximity sensors (which may be mounted in a user's glasses and in the touchscreen 104, and may generate a signal indicative of the distance between the user's eyes and the touchscreen 104), one or more microphones (which may be mounted on or proximate to the touchscreen 104 and may triangulate the position of the user's head based on analysis of the user's voice), or any other suitable devices.
  • the other devices 234 may include one or more light sources that may operate in conjunction with the image capture device 232 to generate visible, infrared or other types of light during image capture to aid in the identification of various features in the image.
  • some known eye tracking techniques use one or more infrared LEDs to provide illumination of a user's face and generate reflections on the surface of the cornea. The reflections may be used to locate the eye and the center of the cornea in the image.
  • the system 200 may include the processing logic 202.
  • the processing logic 202 may include a number of logic components.
  • the processing logic 202 may include gaze location logic 204.
  • the gaze location logic 204 may be configured to generate a gaze location signal indicative of a region of a user's gaze on the touchscreen 104.
  • a region of a user's gaze may include the one or more locations on the touchscreen 104 which are viewed by the highest-acuity region of the user's eyes.
  • the processing logic 202 may include image capture logic 210, which may be coupled to the gaze location logic 204 and may be configured to receive an image of the user's eyes from the image capture device 232.
  • the gaze location logic 204 may be configured to generate the gaze location signal based at least in part on the image received from the image capture device 232.
  • FIG. 3 depicts two views 302 and 304 of a scenario for the generation of a gaze location signal when a user 306 views the touchscreen 104 of the system 200, in accordance with various embodiments.
  • the touchscreen 104 is shown as included in the personal computing device 102 (which may be, for example, a smartphone or tablet device).
  • the gaze of the user 306 may be directed to the touchscreen 104, and in particular, to a region 312 on the touchscreen 104.
  • the user's eyes 310 may be located at a distance z above the touchscreen 104 in a direction perpendicular to a surface of the touchscreen 104.
  • the angle α1 may represent the angle at which the pupil 308 is directed, as measured from the horizontal plane 314 of the eyes 310.
  • the angle α2 may represent the angle at which the pupil 308 is directed, as measured from the vertical plane 316 of the user's eyes 310.
  • the user's gaze may be characterized by the distance z and the angles α1 and α2, and the gaze location signal (indicative of the gaze region 312) generated by the gaze location logic 204 accordingly.
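  • A minimal sketch of this geometry is given below, under the assumption that the gaze point is found by intersecting the gaze ray with the screen plane; the conventions used (the eyes' footprint on the screen at (x0, y0), α1 measured downward from the horizontal plane of the eyes, α2 measured from the vertical plane) are illustrative choices, since the disclosure does not fix exact coordinate conventions.

        import math

        def estimate_gaze_point(x0, y0, z, alpha1, alpha2):
            """Rough gaze-point estimate on the touchscreen plane.

            x0, y0: screen coordinates of the point directly beneath the user's eyes (assumed known).
            z: perpendicular distance from the eyes to the screen plane.
            alpha1: angle (radians) of the gaze direction below the horizontal plane of the eyes.
            alpha2: azimuth (radians) of the gaze direction, measured from the vertical plane of the eyes.
            """
            d = z / math.tan(alpha1)            # horizontal distance covered by the gaze ray before reaching the screen
            return (x0 + d * math.sin(alpha2),  # offset along one screen axis
                    y0 + d * math.cos(alpha2))  # offset along the other screen axis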
  • the gaze location logic 204 may use any suitable measurements from any suitable devices to determine the gaze region 312.
  • Existing technologies for eye tracking include some which use multiple cameras arranged to capture images of a user's eyes in a stereo configuration that enables the use of triangulation techniques to determine distance from the camera arrangement.
  • Some technologies employ a physical model of the eye, which may include reflection and refraction properties of the cornea, the location of the center of the pupil and the center of curvature of the cornea, the offset of the fovea from the optical axis, the radius of curvature of the cornea, and other physical parameters. Any suitable gaze tracking technology may be implemented by the gaze location logic 204.
  • the gaze region 312 may be characterized in any of a number of ways. In some embodiments, the gaze region 312 may be characterized as a point on the
  • the gaze location signal may represent the coordinates of the point in a coordinate system for the touchscreen 104 (e.g., (x,y) coordinates in a two-dimensional coordinate system in the plane of the
  • the gaze region 312 may be characterized as an area of the touchscreen 104.
  • the area may have any suitable shape.
  • the gaze region 312 may be a circle, and the gaze location signal may represent coordinates of the center of the circle and may also represent the radius of the circle.
  • the gaze region 312 may be an ellipse, and the gaze location signal may represent coordinates of the foci of the ellipse and the lengths of the major and minor axes of the ellipse.
  • FIG. 4 illustrates an elliptical gaze region 312 on the touchscreen 104, having a major axis 402, a minor axis 404, and a center 406.
  • the touchscreen 104 may be partitioned into a number of labeled rectangles or other polygons, and the gaze region may include one or more of these partitions.
  • the boundaries and labels of the partitions may be stored in the storage device 226.
  • the gaze location signal may represent the labels of each partition included in the gaze region.
  • the gaze region 312 may have any shape (e.g., an irregular shape), and the gaze location signal may represent coordinates of the perimeter of the gaze region 312.
  • the gaze location logic 204 may use any suitable characterization of the gaze region 312.
  • the shape and/or size of the gaze region 312 may depend on the precision with which the gaze location logic 204 is able to determine where the gaze of the user 306 is directed.
  • the gaze location logic 204 may identify an elliptical gaze region with a minor axis corresponding to a direction in which the gaze of the user 306 may be determined with greater precision and a major axis corresponding to a direction in which the gaze of the user 306 may be determined with lesser precision.
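  • A hedged sketch of one way such an elliptical gaze region could be represented and tested for containment follows; the rotation field is an assumption, included so the major axis can point along whichever direction the gaze estimate is least precise.

        import math
        from dataclasses import dataclass

        @dataclass
        class EllipticalGazeRegion:
            """Illustrative elliptical gaze region 312: center, semi-axes, and orientation."""
            cx: float           # center 406, x coordinate
            cy: float           # center 406, y coordinate
            semi_major: float   # half of the major axis 402
            semi_minor: float   # half of the minor axis 404
            rotation: float     # orientation of the major axis, in radians

            def contains(self, x: float, y: float) -> bool:
                # Express the point in the ellipse's own frame, then apply the standard ellipse test.
                dx, dy = x - self.cx, y - self.cy
                cos_r, sin_r = math.cos(self.rotation), math.sin(self.rotation)
                u = dx * cos_r + dy * sin_r
                v = -dx * sin_r + dy * cos_r
                return (u / self.semi_major) ** 2 + (v / self.semi_minor) ** 2 <= 1.0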
  • the processing logic 202 may include touch detection logic 206.
  • the touch detection logic 206 may be configured to generate a touch signal indicative of a touch of a user on the touchscreen 104.
  • a touch signal may include information regarding a location of the touch (e.g., one or more sets of (x,y) coordinates describing an area, shape or skeleton of the touch), a pressure of the touch (e.g., as measured by area of contact between a user's finger or a deformable stylus and the touchscreen 104, or by a pressure sensor), a duration of contact, any other suitable information, or any combination of such information.
  • the processing logic 202 may include input registration logic 208.
  • the input registration logic 208 may be coupled to the gaze location logic 204 and the touch detection logic 206.
  • the input registration logic 208 may be configured to receive the gaze location signal from the gaze location logic 204 and to receive the touch signal from the touch detection logic 206.
  • the input registration logic 208 may also be configured to generate an input signal based at least in part on the gaze location signal and the touch signal.
  • an "input signal" may be any signal provided as a user input.
  • An input signal may be provided to a hardware or software component of the system 200 and/or to a hardware or software component of a device separate from the system 200.
  • Examples of input signals may include a user's touch on a particular portion of the touchscreen 104 and the properties of that touch. Other examples of input signals may be a signal indicating a user selection of a particular option displayed on the touchscreen 104, the user invocation of a particular function through contact with the touchscreen 104, or any other signal indicative of a user input.
  • the input signal generated by the input registration logic 208 may be generated at the operating system level of the system 200.
  • an operating system of the system 200 may be configured to generate touch signals that can be queried or otherwise monitored by applications running in the operating system (e.g., a map application may include a function that re-centers the map in response to a user tap at a particular location, and information about the tap and the location of the tap may be provided by an operating system-level function invoked by the map application).
  • the input registration logic 208 may evaluate touch signals at the operating system level before they are provided to applications, and thereby may serve to "filter” such touch signals.
  • the input registration logic 208 may operate at the application level, and may be used by a particular application to "filter” or otherwise process touch signals provided to the application by the operating system- level functions.
  • the input registration logic 208 may be configured to generate the input signal through selection of one of a plurality of predetermined touch types based at least in part on the touch signal.
  • predetermined touch types include a tap, a swipe, a pinch, and a spread.
  • a tap may include a momentary single contact between the touchscreen 104 and a user (e.g., through a single finger or stylus).
  • a swipe may include an extended single contact between the touchscreen 104 and the user over a line or curve (e.g., as may be useful when a user moves her finger from right to left to turn a page of a book rendered on the touchscreen 104).
  • a pinch may include two simultaneous points of contact between the touchscreen 104 and the user, with those points of contact drawn together on the surface of the touchscreen 104 (e.g., as may be useful when a user brings her fingers closer together on the touchscreen 104 to zoom into a portion of a displayed webpage).
  • a spread may include two simultaneous points of contact between the touchscreen 104 and the user, with those points of contact drawn apart on the surface of the touchscreen 104.
  • touch types include press-and-hold, rotate, and slide-and-drag, for example.
  • Different touch types may be associated with different regions of the touchscreen 104; for example, a "flick" touch type may be recognized by the system 200 when the user touches a point proximate to an edge of the touchscreen 104 and quickly and briefly slides her finger toward the interior of the touchscreen 104.
  • Characteristics of various touch types may be stored in the storage device 226, and may be accessed by the input registration logic 208 (e.g., when the input registration logic 208 compares a received touch signal to the stored characteristics of various touch types in order to select a touch type that best corresponds to the received touch signal).
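  • The comparison of a received touch signal against stored touch-type characteristics could, for example, look like the sketch below; the characteristics table and its thresholds are illustrative assumptions rather than values from the disclosure.

        # Illustrative characteristics that might be kept in the storage device 226;
        # the thresholds are assumed, not disclosed.
        TOUCH_TYPE_CHARACTERISTICS = {
            "tap":   {"max_duration_ms": 200.0, "max_travel": 5.0},
            "slide": {"min_duration_ms": 150.0, "min_travel": 30.0},
        }

        def select_preliminary_touch_type(duration_ms, travel):
            """Pick the stored touch type whose characteristics best match the touch.

            duration_ms: how long the contact lasted.
            travel: distance (in screen units) between the first and last contact points.
            Returns a touch-type name, or "none" when nothing matches.
            """
            tap = TOUCH_TYPE_CHARACTERISTICS["tap"]
            slide = TOUCH_TYPE_CHARACTERISTICS["slide"]
            if duration_ms <= tap["max_duration_ms"] and travel <= tap["max_travel"]:
                return "tap"
            if duration_ms >= slide["min_duration_ms"] and travel >= slide["min_travel"]:
                return "slide"
            return "none"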
  • the input signal generated by the input registration logic 208 may indicate which touch type is associated with a detected touch.
  • the input registration logic 208 may be configured to select one of the plurality of predetermined touch types based at least in part on the touch signal and the gaze location signal.
  • the touch types stored in the storage device 226 may include one or more non-gaze- associated types and one or more gaze-associated types.
  • a non-gaze-associated type may be a touch type whose location on the touchscreen does not typically correspond with the user's gaze region. In other words, a non-gaze-associated type represents a touch action that a user will perform without looking at the portion of the touchscreen on which the touch action is performed.
  • a swipe may be a non- gaze-associated type, in that users do not typically look at the same region of the touchscreen in which they're performing a swipe.
  • a pinch may be another example of a non-gaze-associated type.
  • a gaze-associated type may be a touch type whose location on the screen does typically correspond with the user's gaze region.
  • a tap may be a gaze-associated type, in that users typically look at the same region of the touchscreen in which they are tapping.
  • Whether a touch type is gaze-associated or non-gaze-associated may vary depending upon the context (e.g., depending upon which application is executing on the system 100 and displaying a user interface on the touchscreen 104). For example, some applications may use a swipe touch type in different regions of the touchscreen 104 to indicate user selection of various options. In such applications, a swipe touch type may be gaze-associated in that a user will typically look to the region of the touchscreen 104 corresponding to her selection. In other applications, a swipe touch type may be used to unlock a portion of a user interface (e.g., a control panel) or move to a previous document in a sequence of documents, for example.
  • a swipe touch type may be used to unlock a portion of a user interface (e.g., a control panel) or move to a previous document in a sequence of documents, for example.
  • a swipe touch type may not be gaze-associated, meaning that users will often look at regions of the screen other than the touched region when performing the swipe.
  • the storage device 226 may store information about whether various touch types are gaze-associated or non-gaze-associated in various contexts (e.g., in various applications, operating systems, or other operating environments).
  • the input registration logic 208 may be configured to select a touch type based on the gaze location signal by selecting a touch type that is gaze-associated or non-gaze-associated depending on the relative locations of the touch and the gaze region. In particular, the input registration logic 208 may determine, based at least in part on the touch signal, that the touch was located outside of the gaze region. In response to this determination, the input registration logic 208 may select a non-gaze-associated touch type for the touch. In some embodiments, in response to a determination that the touch was located within the gaze region, the input registration logic 208 may select a gaze-associated or non-gaze-associated touch type for the touch.
  • FIG. 5 illustrates the gaze region 312 and several touches 502, 504 and 506 on the touchscreen 104.
  • the touches 502 and 504 may represent touches that have a short duration and are highly localized relative to the extended contact area of the touch 506.
  • the touch detection logic 206 may analyze the characteristics of the touches 502, 504 and 506 (e.g., against a set of predetermined touch types stored in the storage device 226, as discussed above) and may select a preliminary touch type for each of the touches 502, 504 and 506 before any gaze location information is available, received and/or processed. In some embodiments, this preliminary determination may be made by the input registration logic 208; for ease of illustration, this preliminary determination will be discussed as performed by the touch detection logic 206.
  • the touch detection logic 206 may determine that the touches 502 and 504 are best classified as “taps” based on the duration of contact and the area of contact, while the touch 506 is best classified as a "slide.”
  • the touch detection logic 206 (or the input registration logic 208, as appropriate) may generate a preliminary touch type signal for each of these touches indicative of the corresponding touch type.
  • the input registration logic 208 may receive the preliminary touch type signals (or may receive the touch signals from the touch detection logic 206 without preliminary touch type identification) and may determine whether a location of each touch is within the gaze region 312. If a touch location is not within the gaze region 312, the input registration logic 208 may select a non-gaze-associated touch type for that touch. If a touch location is within the gaze region 312, the input registration logic 208 may select a gaze-associated or a non-gaze associated-touch type for that touch. For example, as illustrated in FIG. 5, the touch 502 is located within the gaze region 312.
  • the input registration logic 208 may generate an input signal indicating that the touch 502 is a tap.
  • the touch 504 is not located within the gaze region 312. If a tap is a gaze-associated touch type, the input registration logic 208 may not generate an input signal indicating that the touch 504 is a tap even if the non-location characteristics of the touch 504 (e.g., the area and duration of contact) are compatible with the characteristics of a tap. Instead, the input registration logic 208 may seek another touch type compatible with the characteristics of the touch 504. If no suitable touch type can be found, the input registration logic 208 may select a "none" type. In some embodiments, the input registration logic 208 may select a "none" type by ignoring the touch 504 for the purposes of generating an input signal (e.g., the touch 504 may be treated as an incidental contact between the user and the touchscreen 104).
  • the touch 506 is located outside the gaze region 312. However, if the characteristics of the touch 506 are compatible with the characteristics of a slide (e.g., as stored in the storage device 226), and if a slide is a non- gaze-associated touch type, the input registration logic 208 may generate an input signal indicating that the touch 506 is a slide.
  • the input registration logic 208 may not require a touch to be strictly within a gaze region for the touch to be designated as a gaze-associated touch type.
  • a touch may be partially within the gaze region and partially outside of the gaze region.
  • a touch may commence within the gaze region and end outside of the gaze region.
  • a touch need only be within a predetermined distance of the gaze region to be designated as a gaze-associated touch type (if appropriate).
  • the predetermined distance may be an absolute distance (e.g., 1 centimeter), a relative distance (e.g., within a distance of a gaze region less than or equal to 10% of a radius of the gaze region), or any other suitable distance.
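  • A minimal sketch of this gaze-aware filtering is shown below. It assumes the circular characterization of the gaze region described earlier (center plus radius), an illustrative tolerance distance, and a lookup table of gaze-associated flags such as might be kept in the storage device 226; none of these specific values come from the disclosure.

        import math

        # Illustrative flags: which touch types typically coincide with the user's gaze region.
        GAZE_ASSOCIATED = {"tap": True, "slide": False, "swipe": False, "pinch": False}
        TOLERANCE = 1.0  # assumed allowance beyond the gaze region (e.g., 1 centimeter)

        def confirm_touch_type(preliminary_type, touch_xy, gaze_center, gaze_radius):
            """Keep a gaze-associated type only when the touch is within or near the gaze region.

            Returns the confirmed touch type, or "none" when a gaze-associated type is
            reported far from where the user is looking (likely an incidental contact).
            """
            if not GAZE_ASSOCIATED.get(preliminary_type, False):
                return preliminary_type  # non-gaze-associated types pass through unchanged
            distance_past_region = math.hypot(touch_xy[0] - gaze_center[0],
                                              touch_xy[1] - gaze_center[1]) - gaze_radius
            if distance_past_region <= TOLERANCE:
                return preliminary_type  # inside or close enough to the gaze region
            return "none"                # treat as a stray touch and ignore it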
  • the processing logic 202 may include display logic 214.
  • the display logic 214 may be coupled to the touchscreen 104, and may be configured to cause the display of various visual elements on the touchscreen 104.
  • the display logic 214 may be coupled to the input registration logic 208, and may be configured to cause the display, on the touchscreen 104, of one or more visual elements based on the input signal generated by the input registration logic 208.
  • FIG. 5 illustrates a display 500 on the touchscreen 104 that may be provided by the display logic 214.
  • the display 500 may include multiple visual elements, such as the letter blocks 508 and the theme change area 510.
  • because the input registration logic 208 may generate a "tap" input signal in response to the touch 502, as discussed above, the display logic 214 may cause the display, on the touchscreen 104, of a visual element based on this input signal.
  • such a visual element is shown in the display 600 of FIG. 6 as the shaded box 602.
  • similarly, because the input registration logic 208 may generate a "slide" input signal in response to the touch 506, the display logic 214 may cause the display, on the touchscreen 104, of a visual element based on this input signal.
  • Such a visual element is shown in the display 600 as the theme graphic 604, which may replace the theme graphic 512 of the display 500.
  • the gaze location logic 204 may be configured to generate multiple gaze location signals, each corresponding to a different user viewing the touchscreen 104.
  • the touch detection logic 206 may be configured to generate multiple touch signals, each corresponding to different touches on the touchscreen 104.
  • the input registration logic 208 may be configured to receive the multiple location signals and the multiple touch signals, and determine which touch signals correspond to which users by comparing the locations of the touch signals to the gaze regions for each user.
  • the input registration logic 208 may be configured to receive location signals corresponding to the gaze regions of each of two or more users, receive a touch signal, identify the gaze region closest to the location of the touch signal, and associate the touch signal with the user corresponding to the closest gaze region.
  • the input registration logic 208 may receive multiple touch signals, associate the touch signals with different users based on the proximity of the locations of the touch signals to different gaze regions (indicated by different gaze location signals), and generate multiple different input signals based at least in part on the received gaze location signals and the received touch signals.
  • the touch detection logic 206 may generate the multiple touch signals at least partially in parallel.
  • the input registration logic 208 may generate the multiple input signals at least partially in parallel.
  • FIG. 7 illustrates first and second users 702 and 704 viewing the touchscreen 104 (as shown, included in the personal computing device 102).
  • the gaze of the first user 702 may be directed to a first region 706 on the touchscreen 104
  • the gaze of the second user 704 may be directed to a second region 708 on the touchscreen 104.
  • the first and second gaze regions 706 and 708 are illustrated as superimposed on the display 800 of the touchscreen 104 in FIG. 8. As illustrated in FIG. 8, the first and second gaze regions 706 and 708 may have different shapes, and may have different locations on the touchscreen 104.
  • the gaze location logic 204 may generate first and second gaze location signals indicative of the first and second gaze regions 706 and 708, and may provide these gaze location signals to the input registration logic 208.
  • FIG. 8 also illustrates two touches 802 and 804. As shown, the touch 802 falls within the first gaze region 706.
  • the input registration logic 208 may be configured to receive the first gaze location signal (indicative of the first gaze region 706), receive the first touch signal (indicative of the touch 802) and determine that the touch 802 falls within the first gaze region 706. In response to that determination, the input registration logic 208 may determine that the touch 802 was performed by the first user 702, and may generate an input signal associated with the first user 702.
  • the input registration logic 208 may also be configured to receive the second gaze location signal (indicative of the second gaze region 708), receive the second touch signal (indicative of the touch 804) and determine that the touch 804 falls at least partially within the second gaze region 708.
  • in response to that determination, the input registration logic 208 may determine that the touch 804 was performed by the second user 704, and may generate an input signal associated with the second user 704.
  • the input registration logic 208 may receive touch signals indicative of the touches 802 and 804 in parallel, in rapid succession, or in any suitable order relative to receipt of the gaze location signals indicative of the first and second gaze regions 706 and 708. Thus, the input registration logic 208 may evaluate all received touch signals (e.g., within a given window of time) against all received gaze location signals to determine which touch signals may correspond with the same user as a particular gaze location signal. In the example of FIG. 8, the input registration logic 208 may determine that the touch 802 is closer to the first gaze region 706 than to the second gaze region 708, and in response, determine that the touch 802 is not associated with the second user 704. Alternately, the input registration logic 208 may determine that the touch 802 is farther than a predetermined distance away from the second gaze region 708, and in response, determine that the touch 802 is not associated with the second user 704.
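  • One simple way to express this association step is sketched below, again assuming circular gaze regions and an illustrative maximum association distance; the data shapes are assumptions made for readability.

        import math

        MAX_ASSOCIATION_DISTANCE = 5.0  # assumed tolerance (e.g., centimeters) between a touch and a gaze region

        def associate_touches_with_users(touches, gaze_regions):
            """Assign each touch to the user whose (circular) gaze region is closest.

            touches: dict mapping touch_id -> (x, y) touch location.
            gaze_regions: dict mapping user_id -> (cx, cy, radius) gaze region.
            Returns a dict mapping touch_id -> user_id, or -> None when no gaze region
            is within MAX_ASSOCIATION_DISTANCE of the touch.
            """
            assignments = {}
            for touch_id, (tx, ty) in touches.items():
                best_user, best_dist = None, float("inf")
                for user_id, (cx, cy, radius) in gaze_regions.items():
                    # Distance from the touch to the edge of the gaze region
                    # (zero or negative when the touch lies inside the region).
                    dist = math.hypot(tx - cx, ty - cy) - radius
                    if dist < best_dist:
                        best_user, best_dist = user_id, dist
                assignments[touch_id] = best_user if best_dist <= MAX_ASSOCIATION_DISTANCE else None
            return assignments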
  • the display logic 214 may be configured to cause the display, on the touchscreen 104, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal.
  • the first and second visual elements may be displayed simultaneously.
  • the display 800 also includes first and second visual elements 806 and 808.
  • the first and second visual elements 806 and 808 of FIG. 8 are avatars, and may represent player characters in a computer game or representatives in a virtual world environment, for example.
  • the first visual element 806 may be associated with the first user 702 (e.g., the first visual element 806 may be a player character controlled by the first user 702) and the second visual element 808 may be associated with the second user 704.
  • the input registration logic 208 may generate an input signal indicating that the first user 702 wishes to move the first visual element 806 to the location of the touch 802 and an input signal indicating that the second user 704 wishes to move the second visual element 808 to the location of the touch 804.
  • the display logic 214 may cause the display 900 of FIG. 9 on the touchscreen 104. As shown in FIG. 9, the first visual element 806 is relocated to the location 902 of the touch 802 and the second visual element 808 is relocated to the location 904 of the touch 804.
  • the input registration logic 208 may distinguish input signals from multiple users on a single touchscreen, and may enable multi-user computing applications such as game playing, editing of documents, simultaneous web browsing, or any other multi-user scenario.
  • the processing logic 202 may include viewing position logic 212.
  • the viewing position logic 212 may be coupled to the input registration logic 208 and may generate a position signal indicative of a position of the user's eyes relative to the touchscreen 104.
  • the viewing position logic 212 may be coupled to the image capture logic 210, and may be configured to generate the position signal based at least in part on an image of the user's eyes received from the image capture device 232.
  • FIG. 10 depicts two views 1002 and 1004 of a scenario for the generation of a position signal when a user 1006 views the touchscreen 104 (shown as included in the personal computing device 102).
  • the user's eyes 1010 may be located at a distance z above the touchscreen 104 in a direction perpendicular to a surface of the touchscreen 104.
  • a reference point 1008 may be defined on the touchscreen 104 (or in any location whose position is defined with reference to the touchscreen 104). In some embodiments, the reference point 1008 may be a point at which the image capture device 232 is located on the personal computing device 102.
  • the angle β1 may represent the angle at which the user's eyes 1010 are located, as measured from the horizontal plane 1014 of the surface of the touchscreen 104.
  • the angle β2 may represent the angle at which the center point 1018 between the user's eyes 1010 is located, as measured from the vertical plane 1016 of the reference point 1008.
  • the position of the user's eyes may be characterized by the distance z and the angles β1 and β2, and the position signal generated accordingly by the viewing position logic 212.
  • the viewing position logic 212 may use any suitable measurements to determine the position of the user's eyes for generating the position signal.
  • some existing technologies use images of the user's face, captured by an image capture device 232 mounted in a known position relative to the touchscreen 104, to create a three- dimensional model of landmarks on the user's face, and thereby determine the position of the user's eyes relative to the touchscreen 104.
  • one or more devices may be included in a head-mounted device (e.g., radio frequency identification tags included in a pair of glasses), and these devices may communicate with cooperating devices mounted on or proximate to the touchscreen 104 (e.g., radio frequency identification tag readers) to determine the relative position between the user's eyes and the touchscreen 104 (e.g., based on the strength of the radio frequency signals detected). Any known technique for head position modeling may be implemented by the viewing position logic 212.
  • the processing logic 202 may include display adjustment logic 216.
  • the display adjustment logic 216 may be coupled to the viewing position logic 212, and may be configured to generate an adjustment signal indicative of a desired visual distortion based at least in part on the position signal generated by the viewing position logic 212.
  • the display adjustment logic 216 may be configured to determine an angle at which the user is viewing the touchscreen 104 (e.g., based on the position signal generated by the viewing position logic 212) and generate an adjustment signal to correct the display by visually distorting the displayed elements so that they appear to the user the same as they would appear if the user were viewing the touchscreen 104 in a direction perpendicular to a surface plane of the touchscreen 104.
  • an angle at which the user is viewing the touchscreen may include one or more angular measurements representing the position of the user's eyes relative to an axis that is perpendicular to the surface plane of the touchscreen.
  • an angle may include two angular measurements.
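  • As an illustration, the two angular measurements could be computed from the position signal as sketched below, assuming the eye position has already been resolved into in-plane offsets (dx, dy) from a reference point on the touchscreen and a perpendicular distance z; these names are assumptions, not terms from the disclosure.

        import math

        def viewing_angles(dx, dy, z):
            """Angular offsets of the viewer from the axis perpendicular to the touchscreen.

            dx, dy: in-plane offsets of the midpoint between the user's eyes from the reference point.
            z: perpendicular distance from the user's eyes to the screen plane.
            Returns (theta_x, theta_y) in radians; (0.0, 0.0) means the user is viewing
            the touchscreen along the perpendicular axis.
            """
            return math.atan2(dx, z), math.atan2(dy, z)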
  • the display adjustment logic 216 may be configured to generate the adjustment signal in order to correct the apparent distortion of a display on the touchscreen 104 that occurs when a user views the touchscreen 104 from an angle other than an angle perpendicular to the surface plane of the touchscreen 104.
  • Certain examples of this distortion may be referred to as the "keystone effect” or "tombstone effect.”
  • FIG. 11 illustrates an example of this distortion. In FIG. 11, a desired display 1100 is displayed (e.g., by the display logic 214) on the touchscreen 104.
  • when the user views the touchscreen 104 from an angle other than perpendicular to its surface, however, the display 1100 will appear distorted.
  • the portion 1104 of the display 1100 closer to the user will be enlarged relative to the portion 1106 of the display 1100 farther from the user.
  • the display adjustment logic 216 may be configured to use the position signal generated by the viewing position logic 212 to generate an adjustment signal indicative of a distortion of the display 1100 so that a user viewing the touchscreen 104 from the position indicated by the position signal will see the display 1100 appropriately dimensioned.
  • the display logic 214 may be coupled with the display adjustment logic 216, and may be configured to cause the display, on the touchscreen 104, of one or more visual elements adjusted in accordance with the adjustment signal.
  • FIG. 12 illustrates a display 1200 that may be generated by the display logic 214 based on the adjustment signal generated by the display adjustment logic 216.
  • the display 1200 may be distorted with respect to the desired display 1100.
  • the display adjustment logic 216 may include a threshold of distortion such that, unless the user's position indicates that the distortion of the display should exceed the threshold, no adjustment should be made. Such a threshold may prevent the display adjustment logic 216 from making frequent and slight adjustments to the display on the touchscreen 104, which may be disconcerting for the user.
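  • The thresholding behavior described above might be realized along the lines of the simplified sketch below, which compensates only for per-axis foreshortening (a full correction would typically apply a perspective, keystone-style warp); the threshold value and the cosine scale model are assumptions for illustration.

        import math

        DISTORTION_THRESHOLD = 0.05  # assumed: ignore corrections smaller than 5%

        def display_adjustment(theta_x, theta_y):
            """Simplified adjustment from the viewing angles (radians) off the perpendicular axis.

            Returns per-axis scale factors that roughly cancel the viewer's oblique
            foreshortening, or None when the required correction falls below the
            threshold (in which case no adjustment is made).
            """
            scale_x = math.cos(theta_x)  # apparent foreshortening along the x axis
            scale_y = math.cos(theta_y)  # apparent foreshortening along the y axis
            distortion = max(1.0 - scale_x, 1.0 - scale_y)
            if distortion < DISTORTION_THRESHOLD:
                return None  # below threshold: leave the display unadjusted
            # Stretch each axis by the inverse factor; in practice the content may then
            # need cropping or uniform shrinking to stay within the screen bounds.
            return (1.0 / scale_x, 1.0 / scale_y)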
  • FIG. 13 is a flow diagram illustrating a process 1300 for generating gaze-assisted touchscreen inputs (e.g., inputs based on contact with the touchscreen 104), in accordance with some embodiments.
  • the operations of the process 1300 (and the other processes described herein), although illustrated as performed in a particular sequence for the sake of illustration, may be performed in parallel as suitable or in any other order. For example, operations related to receiving a location signal may be performed in parallel, partially in parallel, or in any suitable order, relative to operations related to receiving a touch signal.
  • Operations of the process 1300 may be described as performed by components of the system 200, as embodied in the computing system 100, for illustrative purposes, but the operations of the process 1300 (and the other processes described herein) may be performed by any suitably configured computing device or collection of computing devices. Any of the operations of the process 1300 (and the other processes described herein) may be performed in accordance with any of the embodiments of the systems 100 and 200 described herein.
  • the process 1300 may begin at the operation 1302, in which a gaze location signal indicative of a region of a user's gaze on the touchscreen 104 may be received (e.g., by the input registration logic 208).
  • the gaze location signal may be generated in accordance with any of the embodiments described herein. In some embodiments, the gaze location signal may be generated based on an image of the user's eyes from the image capture device 232.
  • at the operation 1304, a touch signal indicative of a touch of the user on the touchscreen 104 may be received (e.g., by the input registration logic 208).
  • the touch signal may be generated in accordance with any of the embodiments described herein.
  • at the operation 1306, an input signal may be generated (e.g., by the input registration logic 208), based at least in part on the gaze location signal (received at the operation 1302) and the touch signal (received at the operation 1304).
  • the input signal may be generated in accordance with any of the embodiments described herein.
  • the operation 1306 may include selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and the input signal may indicate the selected touch type.
  • the plurality of predetermined touch types may include one or more non- gaze-associated types and one or more gaze-associated types.
  • Selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal may include determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, and, in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type.
  • the process 1300 may then end.
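As a concrete illustration of operations 1302-1306, the following Python sketch selects a gaze-associated or non-gaze-associated touch type depending on whether the touch falls inside the gaze region. The `GazeRegion` class, the touch dictionary layout, and the example type vocabularies are assumptions for illustration, not structures defined by the disclosure.

```python
from dataclasses import dataclass


@dataclass
class GazeRegion:
    """Circular gaze region: center (x, y) and radius, in screen pixels."""
    x: float
    y: float
    radius: float

    def contains(self, px, py):
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2


# Illustrative touch-type vocabularies; the real sets would be defined elsewhere.
GAZE_ASSOCIATED_TYPES = {"tap", "pinch", "spread"}
NON_GAZE_ASSOCIATED_TYPES = {"flick", "edge_swipe"}


def generate_input_signal(gaze_region, touch):
    """Sketch of operation 1306: pick a touch type using the gaze region.

    `touch` is assumed to be a dict with "x", "y" and a "candidate_types"
    list produced by ordinary touch classification.
    """
    inside_gaze = gaze_region.contains(touch["x"], touch["y"])
    for touch_type in touch["candidate_types"]:
        if inside_gaze and touch_type in GAZE_ASSOCIATED_TYPES:
            return {"type": touch_type, "x": touch["x"], "y": touch["y"]}
        if not inside_gaze and touch_type in NON_GAZE_ASSOCIATED_TYPES:
            return {"type": touch_type, "x": touch["x"], "y": touch["y"]}
    # e.g., a touch outside the gaze region with no non-gaze-associated
    # interpretation is ignored.
    return None
```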
  • FIG. 14 is a flow diagram illustrating a process 1400 for generating gaze-assisted touchscreen inputs (e.g., inputs based on contact with the touchscreen 104), in accordance with some embodiments.
  • the process 1400 may begin at the operation 1402, in which multiple gaze location signals, each indicative of a region of a corresponding user's gaze on the touchscreen 104, may be received (e.g., by the input registration logic 208).
  • the gaze location signals may be generated in accordance with any of the embodiments described herein.
  • the gaze location signals may be generated based on one or more images of the users' eyes from the image capture device 232.
  • the remaining discussion of FIG. 14 may refer to two users (a first user and a second user), but the process 1400 may be applied to any number of users.
  • the multiple gaze location signals may be received at the operation 1402 at least partially in parallel.
  • at the operation 1404, a touch signal indicative of a touch of a user on the touchscreen 104 may be received (e.g., by the input registration logic 208).
  • the touch signal may be generated in accordance with any of the embodiments described herein.
  • the touch signal may not identify which user performed the touch.
  • the operation 1404 may include receiving two touch signals indicative of touches of one or more users on the touchscreen 104.
  • multiple touch signals may be received at the operation 1404 at least partially in parallel.
  • at the operation 1406, an input signal may be generated (e.g., by the input registration logic 208) based at least in part on the gaze location signals (received at the operation 1402) and the touch signal (received at the operation 1404).
  • the input signal may be generated in accordance with any of the embodiments described herein.
  • the location of the touch (indicated by the touch signal received at the operation 1404) may be compared (e.g., by the input registration logic 208) with the gaze regions of the first and second users (indicated by the gaze location signals received at the operation 1402), and the touch may be assigned to one of the first and second users based on that comparison (e.g., by the input registration logic 208).
  • when multiple touch signals are received, each touch signal may be assigned to one of the first and second users as discussed above (e.g., by the system 200).
  • a first touch may be associated with a first user and a second touch may be associated with a second user.
  • first and second input signals, corresponding to the first and second touches, may be generated (e.g., by the system 200).
  • the display of a visual element based at least in part on the input signal generated at the operation 1406 may be caused (e.g., by the display logic 214). For example, the movement of a visual element associated with the first user may be displayed in response to the first input signal.
  • in embodiments in which first and second input signals are generated at the operation 1406, the simultaneous display, on the touchscreen 104, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal may be caused (e.g., by the system 200).
  • An example of such a display was discussed above with reference to FIGS. 8-9.
  • the process 1400 may then end.
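The comparison of a touch location against multiple users' gaze regions in process 1400 might look like the following sketch, which simply assigns each touch to the user whose gaze region center lies nearest; the function name and data layout are hypothetical.

```python
def assign_touch_to_user(touch, gaze_centers_by_user):
    """Assign a touch to whichever user's gaze region center lies closest
    to the touch location. `gaze_centers_by_user` maps a user identifier
    to an (x, y) tuple for that user's gaze region.
    """
    tx, ty = touch["x"], touch["y"]
    return min(
        gaze_centers_by_user,
        key=lambda user: (gaze_centers_by_user[user][0] - tx) ** 2
        + (gaze_centers_by_user[user][1] - ty) ** 2,
    )


# Illustrative use with two users and two touches received in parallel:
gazes = {"user_1": (120.0, 400.0), "user_2": (900.0, 380.0)}
touches = [{"x": 140.0, "y": 410.0}, {"x": 880.0, "y": 360.0}]
assignments = [assign_touch_to_user(t, gazes) for t in touches]
# assignments == ["user_1", "user_2"] for these coordinates
```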
  • FIG. 15 is a flow diagram illustrating a process 1500 for generating gaze-assisted touchscreen inputs (e.g., inputs based on contact with the touchscreen 104), in accordance with some embodiments.
  • the process 1500 may begin at the operation 1502, in which a position signal indicative of a position of the user's eyes relative to the touchscreen 104 may be received (e.g., by the viewing position logic 212).
  • the position signal may be generated in accordance with any of the embodiments described herein (e.g., those discussed above with reference to FIGS. 10-12). For example, the position signal may be generated based on one or more images of the user's eyes from the image capture device 232.
  • an adjustment signal indicative of a desired visual distortion based at least in part on the position signal received at the operation 1502 may be generated (e.g., by the display adjustment logic 216).
  • the adjustment signal may be generated in accordance with any of the embodiments described herein.
  • the adjustment signal may indicate adjustments to the display of a visual element on the touchscreen 104 to correct a keystone or related visual effect arising from the user's perspective on the touchscreen 104.
  • one or more visual elements distorted in accordance with the adjustment signal may be caused to be displayed on the touchscreen 104 (e.g., by the display logic 214). An example of such a display was discussed above with reference to FIG. 12.
  • the process 1500 may then end.
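A compact sketch of process 1500 is shown below: an eye position is converted to viewing angles and handed to a display-warping step. The geometry is deliberately simplified, and `apply_warp`, the position-signal dictionary, and the function names are hypothetical placeholders rather than interfaces from the disclosure.

```python
import math


def eye_position_to_view_angles(eye_x, eye_y, eye_z, screen_center=(0.0, 0.0)):
    """Convert an eye position (screen-plane coordinates plus perpendicular
    distance z, in the same units) into horizontal/vertical viewing angles
    measured from the screen normal. Purely illustrative geometry.
    """
    dx = eye_x - screen_center[0]
    dy = eye_y - screen_center[1]
    return math.atan2(dx, eye_z), math.atan2(dy, eye_z)


def process_1500_step(position_signal, display):
    """One pass of process 1500: position signal -> adjustment -> display.

    `position_signal` is assumed to be a dict with "x", "y" and "z";
    `display` is assumed to expose an `apply_warp(angles)` method.
    """
    angles = eye_position_to_view_angles(
        position_signal["x"], position_signal["y"], position_signal["z"])
    display.apply_warp(angles)  # corresponds to generating and applying the adjustment signal
```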
  • the processes 1300, 1400 and 1500 may be combined in any desired combination to perform touchscreen-related data processing operations.
  • the process 1500 may be performed (e.g., by the system 200) to continually adjust the display on the touchscreen 104 in response to the position of the user's eyes.
  • the adjusted display may include different visual elements associated with different users, the display of which may be adjusted in response to input signals from the different users generated in accordance with the process 1400.
  • the combined process may also include the gaze-associated and non- gaze-associated touch type operations discussed above with reference to various embodiments of the process 1300. Accordingly, any desired combination of these processes may be performed to improve a user's touchscreen experience.
  • Various ones of the embodiments disclosed herein may improve the quality of experience of a user of a touchscreen device.
  • some embodiments may improve the ability of computing systems to distinguish between two potential touch points that are close together on a touchscreen; by using gaze location information, the computing system may improve its ability to identify the desired touch point.
  • Embodiments that distinguish between gaze-associated and non-gaze-associated touch types may improve the computing system's ability to distinguish between different touch types (e.g., reducing the likelihood that a pinch will be mistaken for a tap), enabling better interaction between the user and the computing system.
  • Some embodiments that employ the display adjustment techniques disclosed herein may better align the points on the touchscreen that a user believes she has touched with the points she has actually touched, reducing user frustration.
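One way to realize the touch-point disambiguation described above is to score nearby candidate targets by a blend of touch distance and gaze distance, as in the sketch below; the weighting scheme, function name, and example coordinates are illustrative assumptions rather than the disclosed method.

```python
import math


def resolve_ambiguous_touch(touch_xy, candidate_targets, gaze_xy, gaze_weight=0.5):
    """Pick the most likely intended target for a touch that falls between
    closely spaced targets, biasing the choice toward the user's gaze.

    `candidate_targets` maps a target name to its (x, y) center. The linear
    weighting is an example policy, not taken from the disclosure.
    """
    def score(target_xy):
        d_touch = math.dist(touch_xy, target_xy)
        d_gaze = math.dist(gaze_xy, target_xy)
        return (1.0 - gaze_weight) * d_touch + gaze_weight * d_gaze

    return min(candidate_targets, key=lambda name: score(candidate_targets[name]))


# Two small buttons 30 px apart; the touch is ambiguous, the gaze is not.
targets = {"save": (100.0, 200.0), "delete": (130.0, 200.0)}
chosen = resolve_ambiguous_touch((115.0, 203.0), targets, gaze_xy=(101.0, 199.0))
# chosen == "save"
```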
  • Example 1 is a computing system with gaze-assisted touchscreen inputs, including input registration logic to: receive a touch signal indicative of a touch of a user on a touchscreen of the computing system, receive a gaze location signal indicative of a region of a user's gaze on the touchscreen, and generate an input signal to the computer system based at least in part on the gaze location signal and the touch signal.
  • Example 2 may include the subject matter of Example 1, and may further include gaze location logic, coupled to the input registration logic, to generate the gaze location signal.
  • Example 3 may include the subject matter of Example 2, and may further include image capture logic, coupled to the gaze location logic, to receive an image of the user's eyes from an image capture device, wherein the gaze location logic is to generate the gaze location signal based at least in part on the received image.
  • Example 4 may include the subject matter of any of Examples 1-3, and may further include touch detection logic, coupled to the input registration logic, to generate the touch signal.
  • Example 5 may include the subject matter of any of Examples 1-4, and may further specify that the input registration logic is to generate the input signal through selection of one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and that the input signal indicates the selected touch type.
  • Example 6 may include the subject matter of Example 5, and may further specify that the plurality of predetermined touch types includes one or more non-gaze- associated types and one or more gaze-associated types, and that selection of one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal includes: determination, based at least in part on the touch signal, that the touch was located outside of the region of the user's gaze; and, in response to the determination that the touch was located outside of the region of the user's gaze, selection of a non-gaze-associated type.
  • Example 7 may include the subject matter of any of Examples 1-6, and may further specify that the input registration logic is further to: receive a second touch signal indicative of a second touch of the user on the touchscreen; receive a second gaze location signal indicative of a second region of the user's gaze on the touchscreen; determine, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze; determine, based at least in part on the second touch signal, that the second touch signal is not compatible with any non-gaze-associated touch types; and in response to a determination that the second touch was located outside of the second region of the user's gaze and a determination that the second touch signal is not compatible with any non-gaze-associated touch types, ignore the second touch signal for the purposes of generating an input signal.
  • Example 8 may include the subject matter of any of Examples 1-7, and may further specify that the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and that the input registration logic is to receive a second touch signal indicative of a touch of a second user on the touchscreen, receive a second gaze location signal indicative of a region of the second user's gaze on the touchscreen, and generate a second input signal based at least in part on the second gaze location signal and the second touch signal.
  • Example 9 may include the subject matter of Example 8, and may further include touch detection logic, coupled to the input registration logic, to generate the first and second touch signals at least partially in parallel.
  • Example 10 may include the subject matter of any of Examples 8-9, and may further include display logic to cause the display, on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal, the first and second visual elements displayed simultaneously.
  • Example 11 may include the subject matter of any of Examples 1-10, and may further specify that the input registration logic is to receive a position signal indicative of a position of the user's eyes relative to the touchscreen, and that the input signal is based at least in part on the position signal.
  • Example 12 may include the subject matter of Example 11, and may further include viewing position logic, coupled to the input registration logic, to generate the position signal.
  • Example 13 may include the subject matter of any of Examples 11-12, and may further include: display adjustment logic, coupled to the viewing position logic, to receive the position signal and to generate an adjustment signal indicative of a desired visual distortion based at least in part on the position signal; and display logic, coupled with the display adjustment logic, to cause the display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
  • Example 14 is a method for generating gaze-assisted touchscreen inputs for a computing system, including: receiving, by the computing system, a gaze location signal indicative of a region of a user's gaze on a touchscreen of the computing system; receiving, by the computing system, a touch signal indicative of a touch of the user on the touchscreen; and generating, by the computing system, an input signal based at least in part on the gaze location signal and the touch signal.
  • Example 15 may include the subject matter of Example 14, and may further include receiving, by the computing system, an image of the user's eyes from an image capture device, wherein the gaze location signal is based at least in part on the received image.
  • Example 16 may include the subject matter of any of Examples 14-15, and may further specify that generating an input signal based at least in part on the gaze location signal and the touch signal includes selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and that the input signal indicates the selected touch type.
  • Example 17 may include the subject matter of Example 16, wherein the plurality of predetermined touch types includes one or more non-gaze-associated types and one or more gaze-associated types, and wherein selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal includes: determining that the touch signal indicates that the touch was located outside of the region of the user's gaze; and, in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type.
  • Example 18 may include the subject matter of any of Examples 14-17, and may further include: receiving a second touch signal indicative of a second touch of the user on the touchscreen; receiving a second gaze location signal indicative of a second region of the user's gaze on the touchscreen; determining, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze; determining, based at least in part on the second touch signal, that the second touch signal is not compatible with any non-gaze-associated touch types; and in response to determining that the second touch was located outside of the second region of the user's gaze and that the second touch signal is not compatible with any non-gaze-associated touch types, ignoring the second touch signal for the purposes of generating an input signal.
  • Example 19 may include the subject matter of any of Examples 14-18, wherein the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and wherein the method may further include: receiving, by the computing system, a second gaze location signal indicative of a region of a second user's gaze on the touchscreen; receiving, by the computing system, a second touch signal indicative of a touch of the second user on the touchscreen; and generating, by the computing system, a second input signal based at least in part on the second gaze location signal and the second touch signal.
  • Example 20 may include the subject matter of Example 19, wherein receiving the first touch signal is performed by the computing system at least partially in parallel with receiving the second touch signal.
  • Example 21 may include the subject matter of any of Examples 19-20, and may further include causing simultaneous display, by the computing system on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal.
  • Example 22 may include the subject matter of any of Examples 14-21, and may further include receiving, by the computing system, a position signal indicative of a position of the user's eyes relative to the touchscreen, wherein the touch signal indicative of a touch of the user on the touchscreen is based at least in part on the position signal.
  • Example 23 may include the subject matter of Example 22, and may further include: generating, by the computing system, an adjustment signal indicative of a desired visual distortion based at least in part on the position signal; and causing display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
  • Example 24 is one or more computer readable media having instructions thereon that, when executed by one or more processing devices of a computing device, cause the computing device to perform the method of any of Examples 14-23.
  • Example 25 is an apparatus including one or more processing devices and one or more computer readable media having instructions thereon that, when executed by the one or more processing devices, cause the apparatus to perform the method of any of Examples 14-23.
  • Example 26 is a system with gaze-assisted touchscreen inputs, including: means for receiving a gaze location signal indicative of a region of a user's gaze on a touchscreen of a computing system; means for receiving a touch signal indicative of a touch of the user on the touchscreen; and means for generating an input signal for the system based at least in part on the gaze location signal and the touch signal.
  • Example 27 may include the subject matter of Example 26, and may further include means for generating the gaze location signal.
  • Example 28 may include the subject matter of Example 27, wherein the means for generating the gaze location signal includes means for receiving an image of the user's eyes from an image capture device, wherein the gaze location signal is based at least in part on the received image.
  • Example 29 may include the subject matter of any of Examples 26-28, and may further include means for generating the touch signal.
  • Example 30 may include the subject matter of any of Examples 26-29, and may further specify that the means for generating an input signal based at least in part on the gaze location signal and the touch signal includes means for selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and that the input signal indicates the selected touch type.
  • Example 31 may include the subject matter of Example 30, and may further specify that the plurality of predetermined touch types includes one or more non-gaze- associated types and one or more gaze-associated types, and that the means for selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal includes: means for determining that the touch signal indicates that the touch was located outside of the region of the user's gaze; and means for, in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type.
  • Example 32 may include the subject matter of any of Examples 26-31, and may further include: means for receiving a second touch signal indicative of a second touch of the user on the touchscreen; means for receiving a second gaze location signal indicative of a second region of the user's gaze on the touchscreen; means for determining, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze; and means for determining, based at least in part on the second touch signal, that the second touch signal is not compatible with any non- gaze-associated touch types; wherein the means for generating an input signal is configured to, in response to determining that the second touch was located outside of the second region of the user's gaze and that the second touch signal is not compatible with any non-gaze-associated touch types, ignore the second touch signal.
  • Example 33 may include the subject matter of any of Examples 26-32, and may further specify that the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and that the system may further include: means for receiving a second gaze location signal indicative of a region of a second user's gaze on the touchscreen; means for receiving a second touch signal indicative of a touch of the second user on the touchscreen; and means for generating a second input signal based at least in part on the second gaze location signal and the second touch signal.
  • Example 34 may include the subject matter of Example 33, wherein the means for receiving the first touch signal is configured to receive the first touch signal at least partially in parallel with the reception of the second touch signal by the means for receiving the second touch signal.
  • Example 35 may include the subject matter of any of Examples 33-34, and may further include means for causing simultaneous display, on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal.
  • Example 36 may include the subject matter of any of Examples 26-35, and may further include means for receiving a position signal indicative of a position of the user's eyes relative to the touchscreen, wherein the touch signal indicative of a touch of the user on the touchscreen is based at least in part on the position signal.
  • Example 37 may include the subject matter of Example 36, and may further include means for generating the position signal.
  • Example 38 may include the subject matter of Example 37, and may further include: means for generating an adjustment signal indicative of a desired visual distortion based at least in part on the position signal, and means for causing display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.


Abstract

Embodiments related to gaze-assisted touchscreen inputs are disclosed. In some embodiments, a computing system may receive a gaze location signal indicative of a region of a user's gaze on a touchscreen, receive a touch signal indicative of a touch of the user on the touchscreen, and generate an input signal for the computing system based at least in part on the gaze location signal and the touch signal. Other embodiments may be disclosed and/or claimed.

Description

GAZE-ASSISTED TOUCHSCREEN INPUTS
Technical Field
The present disclosure relates generally to the field of data processing, and more particularly, to gaze-assisted touchscreen inputs.
Background
Users of touchscreen-based devices (such as tablets and smartphones) are often frustrated by the devices' limited ability to differentiate between different kinds of touches and to respond in unexpected ways to stray touches. Additionally, when users view these devices at unusual angles (e.g., from the extreme right or left of the device), touch accuracy is compromised by the parallax effect, in which the desired location of touch does not align with the actual location of touch. These performance limitations may significantly reduce a user's quality of experience with touchscreen devices.
Brief Description of the Drawings
Embodiments will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements. Embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings.
FIG. 1 is a block diagram of an illustrative computing system configured for gaze- assisted touchscreen inputs, in accordance with various embodiments.
FIG. 2 is a block diagram of an illustrative gaze-assisted touchscreen input system that may be implemented by the computing system of FIG. 1, in accordance with various embodiments.
FIG. 3 illustrates a scenario for the generation of a gaze location signal when a user views a touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
FIG. 4 illustrates a region of a user's gaze on the touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
FIGS. 5 and 6 illustrate a region of a user's gaze, touches and displays rendered on the touchscreen of the computing system of FIG. 1 before and after processing of touch signals of a user, in accordance with various embodiments.
FIG. 7 illustrates two users viewing the touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
FIGS. 8 and 9 illustrate gaze regions, touches and displays rendered on the touchscreen of the computing system of FIG. 1 before and after processing of touch signals of two users, in accordance with various embodiments.
FIG. 10 illustrates a scenario for the generation of a position signal when a user views the touchscreen of the computing system of FIG. 1, in accordance with various embodiments.
FIGS. 11 and 12 illustrate displays rendered on the touchscreen of the computing system of FIG. 1 before and after processing of a position signal, in accordance with various embodiments.
FIGS. 13-15 are flow diagrams of illustrative processes for generating gaze-assisted touchscreen inputs, in accordance with various embodiments.
Detailed Description
Embodiments related to gaze-assisted touchscreen inputs are disclosed. In some embodiments, a computing system may receive a gaze location signal indicative of a region of a user's gaze on a touchscreen, receive a touch signal indicative of a touch of the user on the touchscreen, and generate an input signal for the computing system, based at least in part on the gaze location signal and the touch signal.
In the following detailed description, reference is made to the accompanying drawings which form a part hereof wherein like numerals designate like parts throughout, and in which is shown by way of illustration embodiments that may be practiced. It is to be understood that other embodiments may be utilized and structural or logical changes may be made without departing from the scope of the present disclosure. Therefore, the following detailed description is not to be taken in a limiting sense.
Various operations may be described as multiple discrete actions or operations in turn, in a manner that is most helpful in understanding the claimed subject matter. However, the order of description should not be construed as to imply that these operations are necessarily order dependent. In particular, these operations may not be performed in the order of presentation. Operations described may be performed in a different order than the described embodiment. Various additional operations may be performed and/or described operations may be omitted in additional embodiments.
For the purposes of the present disclosure, the phrase "A and/or B" means (A), (B), or (A and B). For the purposes of the present disclosure, the phrase "A, B, and/or C" means (A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).
The description uses the phrases "in an embodiment," or "in embodiments," which may each refer to one or more of the same or different embodiments. Furthermore, the terms "comprising," "including," "having," and the like, as used with respect to embodiments of the present disclosure, are synonymous. As used herein, the phrase "coupled" may mean that two or more elements are in direct physical or electrical contact, or that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other (e.g., via one or more intermediate elements, which may perform their own transformations or have their own effects). For example, two elements may be coupled to each other when both elements communicate with a common element (e.g., a memory device). As used herein, the term "logic" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and/or memory (shared, dedicated, or group) that execute one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality. As used herein, a signal may be "received" by a component if it is generated externally or internally to that component, and acknowledged and/or processed by that component.
FIG. 1 depicts an illustrative computing system 100 configured for gaze- assisted touchscreen inputs, in accordance with various embodiments. In some embodiments, the computing system 100 may be configured to receive a gaze location signal indicative of a region of a user's gaze on a touchscreen of the computing system, receive a touch signal indicative of a touch of the user on the touchscreen, and generate an input signal for the computer system, based at least in part on the gaze location signal and the touch signal. The computing system 100 may include a personal computing device 102, a touchscreen 104, and a remote computing device 106. Each of the personal computing device 102, the touchscreen 104 and the remote computing device 106 may include gaze-assisted touchscreen input components (illustrated in FIG. 1 as gaze-assisted touchscreen input components 114, 116 and 118, respectively). Gaze- assisted touchscreen input operations may be distributed between the gaze-assisted touchscreen input components 114, 116 and 118 of the computing system 100 as suitable. Several examples of the distribution of gaze-assisted touchscreen input operations between the personal computing device 102, the touchscreen 104 and the remote computing device 106 are discussed herein, but any other combination of more or fewer components, and any other distribution of the operations, may be used. For example, in some embodiments, the gaze-assisted touchscreen input component 114 or the gaze-assisted touchscreen input component 118 may be omitted, and all suitable gaze-assisted touchscreen input operations (e.g., any of those described herein) may be performed by the remaining gaze-assisted touchscreen input component(s). In some embodiments, the computing system 100 may be configured as the gaze-assisted touchscreen input system 200 discussed below with reference to FIG. 2. Except for the gaze-assisted touchscreen input teachings of the present disclosure incorporated therein, the personal computing device 102, the touchscreen 104, and the remote computing device 106 may be a broad range of such devices known in the art. Specific, but not limiting, examples are described below.
Communication between the components of the computing system 100 may be enabled by the communication pathways 108, 110 and 112. The communication pathways 108, 110 and 112 may each include wired communication pathways and/or wireless communication pathways, over direct couplings, and/or over personal, local and/or wide area networks. Each of the personal computing device 102, the
touchscreen 104 and the remote computing device 106 may include suitable hardware for supporting the communication pathways 108, 110 and 112, such as network interface cards, modems, WiFi devices, Bluetooth devices, and so forth. In some embodiments, the communication pathways 108, 110 and 112 may be direct communication pathways between the components as illustrated in FIG. 1. As used herein, references to "direct" communication pathways between two components of the computing system 100 of FIG. 1 (or any system or device disclosed herein) may refer to a communication pathway that does not route through another illustrated component, but that may route through other non-illustrated devices (e.g., routers and/or switches). Each of the devices included in the computing system 100 may
include a processing device and a storage device (not shown). The processing device may include one or more processing devices, such as one or more processing cores, ASICs, electronic circuits, processors (shared, dedicated, or group), combinational logic circuits, and/or other suitable components that may be configured to process electronic data. The storage device may include any suitable memory or mass storage devices (such as solid-state drive, diskette, hard drive, compact disc read only memory (CD-ROM) and so forth). Each of the computing devices included in the computing system 100 may include one or more buses (and bus bridges, if suitable) to communicatively couple the processing device, the storage device, and any other devices included in the respective computing devices. The storage device may include a set of computational logic, which may include one or more copies of computer readable media having instructions stored therein which, when executed by the processing device of the computing device, may cause the computing device to implement any of the techniques and methods disclosed herein, or any portion thereof. In some embodiments, the computational logic may include any of the logic discussed below with reference to FIG. 2.
Although illustrated as three separate components in FIG. 1, the personal computing device 102, the touchscreen 104, and the remote computing device 106 may be combined or divided in any desired manner. For example, in some embodiments of the computing system 100, the personal computing device 102 may be a tablet or smartphone, and the touchscreen 104 may be integral to the tablet or smartphone (e.g., forming a surface of the tablet or smartphone). In some embodiments, the
touchscreen 104 may be a standalone device (e.g., a drawing tablet) and the personal computing device 102 may be a desktop computer configured to perform gaze-assisted touchscreen input operations (such as those described herein) based on touch data transmitted from the touchscreen 104 to the personal computing device 102 through a wired or wireless communication pathway 108. A number of additional combinations are described herein.
The personal computing device 102 may be a computing device that
is integrated into a garment, accessory or other support structure that is configured to be worn on the body of the user (or "wearer"). Examples of suitable support structures for a wearable personal computing device 102 may include glasses, a headset, a hair accessory (e.g., a headband or barrette), an ear piece, jewelry (e.g., brooch, earrings or a necklace), a wrist band (e.g., a wristwatch), a neck band (e.g., a tie or scarf), a garment (e.g., a shirt, pants, dress skirt or jacket), shoes, a lanyard or nametag, a contact lens, or an implantable support structure, among others. In some embodiments, the personal computing device 102 may be a wearable computing device including an image capture device (e.g., the image capture device 232 of FIG. 2, discussed below). In some embodiments, the personal computing device 102 may be a wrist-mounted computing device having an image capture device. In some embodiments, the personal computing device 102 may be a glasses-mounted computing device having an image capture device facing the wearer. In some embodiments, the personal computing device 102 may be a wearable computing that includes a "world-facing" image capture device (i.e., an image capture device directed away from the wearer).
The personal computing device 102 may be a desktop or stand-alone
computing device, or a computing device configured for carrying in a pocket, backpack or other carrying case, and for operation with one or more of a user's hands. Examples of computing devices that may serve as the personal computing device 102 include cellular phones, smartphones, other handheld mobile communication devices, tablets, electronic book readers, personal digital assistants, laptops, or other such computing devices. Although the personal computing device 102 (and other components described herein) may be referred to in the singular, any number of personal computing devices may be included in the personal computing device 102 (and similarly, any component may include multiple such components).
Image processing and other operations performed by the personal
computing device 102 may be controlled by an app or plug-in on the personal computing device 102, for example. In some embodiments, the personal computing device 102 may include two or more computing devices, one of which has more computing resources (e.g., processing power, memory, and/or communication bandwidth) than
another. For example, the personal computing device 102 may include a larger tablet computing device and a smaller wrist- or glasses-mounted computing device. In such embodiments, data captured and preliminarily processed by the smaller computing device (e.g., image, audio, or other sensor data) may be transmitted from the smaller computing device to the larger computing device for further processing. The computing system 100 may include a touchscreen 104. As used herein, a "touchscreen" may include a device that provides a screen on which a visual display is rendered that may be controlled by contact with a user's finger or other contact instrument (e.g., a stylus). For ease of discussion, the primary contact instrument discussed herein may be a user's finger, but any suitable contact instrument may be used in place of a finger. Non-limiting examples of touchscreen technologies that may be used to implement the touchscreen 104 include resistive touchscreens, surface acoustic wave touchscreens, capacitive touchscreens, infrared-based touchscreens, and any other suitable touchscreen technology.
The touchscreen 104 may include suitable sensor hardware and logic to generate a touch signal. A touch signal may include information regarding a location of the touch (e.g., one or more sets of (x,y) coordinates describing an area, shape or skeleton of the touch), a pressure of the touch (e.g., as measured by area of contact between a user's finger or a deformable stylus and the touchscreen 104, or by a pressure sensor), a duration of contact, any other suitable information, or any combination of such information. In some embodiments, the touchscreen 104 may be configured to stream the touch signal to the personal computing device 102 and/or the remote computing device 106 via a wired or wireless communication pathway (e.g., the pathways 108 and 112, respectively). In some embodiments, as noted above, the touchscreen 104 may be connected locally to (or integrated with) the personal computing device 102.
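The touch information enumerated above (contact coordinates, pressure, duration) could be carried in a structure along the lines of the following hypothetical container; the field names, and the extra timestamp field in particular, are illustrative only.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TouchSignal:
    """Hypothetical container for the touch information described above."""
    contact_points: List[Tuple[float, float]]  # (x, y) sets describing the touch area/shape
    pressure: float = 0.0       # e.g., derived from contact area or a pressure sensor
    duration_ms: float = 0.0    # duration of contact
    # Extra illustrative field, not listed in the disclosure:
    timestamp_ms: float = 0.0   # when the contact began
```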
The remote computing device 106 may include one or more servers
(e.g., arranged in a "cloud" computing configuration) or other computing devices remote from the personal computing device 102 and/or the
touchscreen 104. The communication pathway 110 between the personal computing device 102 and the remote computing device 106, and communication pathway 112 between the touchscreen 104 and the remote computing device 106, may be configured according to any remote wired or wireless communication protocol. In some embodiments, the remote computing device 106 may have more computing resources (e.g., processing power, memory, and/or communication bandwidth) than the personal computing device 102 or the touchscreen 104. Thus, in some embodiments, data captured and preliminarily processed by the personal computing device 102 and/or the touchscreen 104 (e.g., touch data embodied in a touch signal) may be transmitted over the communication pathways 110 and/or 112 to the remote computing device 106 for further processing. In some embodiments, the remote computing device 106 may perform most of the gaze-assisted touchscreen input operations discussed below with reference to FIG. 2. In some embodiments, the remote computing device 106 may include a storage device for storing touch signals, gaze location signals (discussed below), or any other data that may be accessed when the computing system 100 performs a gaze-assisted touchscreen input operation in accordance with the techniques disclosed herein.
In some embodiments of the gaze-assisted touchscreen input systems disclosed herein, one or more of the communication pathways between components of the computing system 100 may not be included. For example, in some embodiments, the touchscreen 104 may not communicate directly with the remote computing device 106 via the communication pathway 112, but may communicate with the remote computing device 106 via the personal computing device 102 and the communication pathways 108 and 110.
FIG. 2 is a block diagram of an illustrative gaze-assisted touchscreen input system 200, in accordance with various embodiments. The system 200 may include input/output (I/O) devices 228, processing logic 202, and a storage device 226. The system 200 may be implemented by the computing system 100 of FIG. 1, in accordance with various embodiments. In particular, the components of the system 200 may be distributed in any suitable manner among one or more of the components of the computing system 100. Components of the system 200 may be described as implemented by the computing system 100 for illustrative purposes, but the system 200 may be implemented by any suitably configured computing device or collection of computing devices. In some embodiments, the system 200 may be implemented by the personal computing device 102 of the computing system 100. In some such
embodiments, the touchscreen 104 may be integral to the personal computing device 102.
The system 200 may be configured to perform any of a number of gaze-assisted touchscreen input operations. For example, the system 200 may be configured to receive a touch signal indicative of a touch of a user on a touchscreen of the system 200, receive a gaze location signal indicative of a region of a user's gaze on the touchscreen, and generate an input signal based at least in part on the gaze location signal and the touch signal. The input signal may, e.g., be provided to an operating system of the system 200, an application running on the system 200, another device in communication with the system 200, or any other component internal or external to the system 200.
Although a number of components of the system 200 are illustrated in FIG. 2, various embodiments may omit components as appropriate for the gaze-assisted touchscreen input operations to be performed. For example, in some embodiments, the system 200 may not include the gaze location logic 204, but may be coupled with the gaze location logic 204 (embodied in, e.g., a separate device) via a wired or wireless communication pathway so as to be able to receive signals from and/or send signals to the gaze location logic 204. In another example, in some embodiments, the system 200 may not include the touch detection logic 206, but may be coupled with the touch detection logic 206 (embodied in, e.g., a separate device) via a wired or wireless communication pathway so as to be able to receive signals from and/or send signals to the touch detection logic 206. In another example, some embodiments of
the system 200 may not be configured for display adjustment (as discussed below), and thus may not include the viewing position logic 212 and/or the display adjustment logic 216.
As noted above, the system 200 may include the I/O devices 228. The I/O devices 228 may include a touchscreen 104, an image capture device 232 and other devices 234. The touchscreen 104 may take the form of any of the embodiments discussed above with reference to FIG. 1.
In some embodiments, the image capture device 232 may include one or more cameras. As used herein, the term "camera" may include still image cameras and video cameras. A camera may be analog or digital. In some embodiments, the image capture device 232 may capture high-definition video. In some embodiments, the image capture device 232 may be configured to stream image data (e.g., video data) to the personal computing device 102 and/or the remote computing device 106 via a wired or wireless communication pathway (e.g., the pathways 108 and 112, respectively). In some embodiments, the image capture device 232 may be connected locally to (or integrated with) the personal computing device 102, while in other embodiments, the image capture device 232 may be remote from the personal computing device 102.
The image capture device 232 may use any imaging wavelength (e.g., visible or infrared light). In some embodiments, the image capture device 232 may include a visible light camera and an infrared camera, and may combine the images captured by these devices or treat them separately. In some embodiments, the image capture device 232 may include two or more cameras having different orientations (e.g., one camera that is mounted on a wearable personal computing device 102 and faces away from the user in a "world-facing" orientation, and one camera that is mounted on the personal computing device 102 and faces toward the user when the personal computing device 102 is in use). In some embodiments, the image capture device 232 may include a single image capture device (e.g., a single camera).
The image capture device 232 may include an array camera, in which
multiple lenses enable simultaneous capture of multiple images of the same subject. Each image captured by an array camera may vary by exposure time, focal distance, white balance, etc. The image capture device 232 may include a processing device which is configured to execute any known technique for combining the images or provide various image browsing experiences (e.g., in conjunction with other components of the computing system 100). In some embodiments, the image capture device 232 may include a depth camera, which may provide information about the depth of various objects in the imaged scene. Some depth cameras may use a time-of-flight technique to determine depth information.
In some embodiments, the image capture device 232 may be mounted on or proximate to the touchscreen 104, and may capture one or more images of a user of the touchscreen 104. These images may be used to determine a region of the user's gaze (e.g., as discussed below with reference to the gaze location logic 204) and/or to determine a position of the user's eyes relative to the touchscreen 104 (e.g., as discussed below with reference to the viewing position logic 212). In some embodiments, the image capture device 232 may be mounted in a wearable personal computing device 102 that attaches on or near a user's eyes, and may capture images of the touchscreen 104 while the touchscreen 104 is being used. These images may be used to determine a region of the user's gaze (e.g., as discussed below with reference to the gaze location logic 204) and/or to determine a position of the user's eyes relative to the
touchscreen 104 (e.g., as discussed below with reference to the viewing position logic 212).
The other devices 234 included in the I/O devices 228 may include any suitable input, output or storage devices, for example. Devices that may be included in the other devices 234 may include proximity sensors (which may be mounted in a user's glasses and in the touchscreen 104, and may generate a signal indicative of the distance between the user's eyes and the touchscreen 104), one or more microphones (which may be mounted on or proximate to the touchscreen 104 and may triangulate the position of the user's head based on analysis of the user's voice), or any other suitable devices. In some embodiments, the other devices 234 may include one or more light sources that may operate in conjunction with the image capture device 232 to generate visible, infrared or other types of light during image capture to aid in the identification of various features in the image. For example, some known eye tracking techniques use one or more infrared LEDs to provide illumination of a user's face and generate reflections on the surface of the cornea. The reflections may be used to locate the eye and the center of the cornea in the image.
As noted above, the system 200 may include the processing logic 202. The processing logic 202 may include a number of logic components. In some embodiments, the processing logic 202 may include gaze location logic 204. The gaze location logic 204 may be configured to generate a gaze location signal indicative of a region of a user's gaze on the touchscreen 104. A region of a user's gaze may include the one or more locations on the touchscreen 104 which are viewed with the highest acuity region of the user's eyes. In some embodiments, the processing logic 202 may include image capture logic 210, which may be coupled to the gaze location logic 204 and may be configured to receive an image of the user's eyes from the image capture device 232. The gaze location logic 204 may be configured to generate the gaze location signal based at least in part on the image received from the image capture device 232.
FIG. 3 depicts two views 302 and 304 of a scenario for the generation of a gaze location signal when a user 306 views the touchscreen 104 of the system 200, in accordance with various embodiments. In particular, the touchscreen 104 is shown as included in the personal computing device 102 (which may be, for example, a smartphone or tablet device). The gaze of the user 306 may be directed to the touchscreen 104, and in particular, to a region 312 on the touchscreen 104. The user's eyes 310 may be located at a distance z above the touchscreen 104 in a direction perpendicular to a surface of the touchscreen 104. The angle al may represent the angle at which the pupil 308 is directed, as measured from the horizontal plane 314 of the eyes 310. The angle ct2 may represent the angle at which the pupil 308 is directed, as measured from the vertical plane 316 of the user's eyes 310. The user's gaze may be characterized by the distance z, and the angles al and a2, and the location signal (indicative of the gaze region 312) generated by the gaze location logic 204 accordingly.
The angles and distances represented in FIG. 3 are simply illustrative, and the gaze location logic 204 may use any suitable measurements from any suitable devices to determine the gaze region 312. Existing technologies for eye tracking include some which use multiple cameras arranged to capture images of a user's eyes in a stereo configuration that enables the use of triangulation techniques to determine distance from the camera arrangement. Some technologies employ a physical model of the eye, which may include reflection and refraction properties of the cornea, the location of the center of the pupil and the center of curvature of the cornea, the offset of the fovea from the optical axis, the radius of curvature of the cornea, and other physical parameters. Any suitable gaze tracking technology may be implemented by the gaze location logic 204.
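Under one simplified reading of the FIG. 3 geometry, the distance z and the angles α1 and α2 determine a gaze point on the screen by basic trigonometry, as sketched below. Real gaze trackers, as noted above, use richer models (corneal reflections, stereo triangulation), so this is only an illustrative approximation with a hypothetical function name.

```python
import math


def gaze_point_from_angles(z, alpha1, alpha2, eye_xy=(0.0, 0.0)):
    """Estimate where the gaze line meets the screen plane.

    z      -- perpendicular distance from the eyes to the touchscreen
    alpha1 -- gaze angle measured from the horizontal plane of the eyes (radians)
    alpha2 -- gaze angle measured from the vertical plane of the eyes (radians)
    eye_xy -- projection of the eye position onto the screen plane (assumed known)

    Deliberately simplified: real gaze trackers use corneal-reflection models
    and stereo imagery, as noted above.
    """
    dx = z * math.tan(alpha2)
    dy = z * math.tan(alpha1)
    return eye_xy[0] + dx, eye_xy[1] + dy
```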
The gaze region 312 may be characterized in any of a number of ways. In some embodiments, the gaze region 312 may be characterized as a point on the
touchscreen 104. In some such embodiments, the gaze location signal may represent the coordinates of the point in a coordinate system for the touchscreen 104 (e.g., (x,y) coordinates in a two-dimensional coordinate system in the plane of the
touchscreen 104). In some embodiments, the gaze region 312 may be characterized as an area of the touchscreen 104. The area may have any suitable shape. For example, the gaze region 312 may be a circle, and the gaze location signal may represent coordinates of the center of the circle and may also represent the radius of the circle. In another example, the gaze region 312 may be an ellipse, and the gaze location signal may represent coordinates of the foci of the ellipse and the lengths of the major and minor axes of the ellipse. FIG. 4 illustrates an elliptical gaze region 312 on the touchscreen 104, having a major axis 402, a minor axis 404, and a center 406. In another example, the touchscreen 104 may be partitioned into a number of labeled rectangles or other polygons, and the gaze region may include one or more of these partitions. The boundaries and labels of the partitions may be stored in the storage device 226. In some such embodiments, the gaze location signal may represent the labels of each partition included in the gaze region. In another example, the gaze region 312 may have any shape (e.g., an irregular shape), and the gaze location signal may represent coordinates of the perimeter of the gaze region 312. The gaze location logic 204 may use any suitable characterization of the gaze region 312. The shape and/or size of the gaze region 312 may depend on the precision with which the gaze location logic 204 is able to determine where the gaze of the user 306 is directed. For example, the gaze location logic 204 may identify an elliptical gaze region with a minor axis corresponding to a direction in which the gaze of the user 306 may be determined with greater precision and a major axis corresponding to a direction in which the gaze of the user 306 may be determined with lesser precision.
In some embodiments, the processing logic 202 may include touch detection logic 206. The touch detection logic 206 may be configured to generate a touch signal indicative of a touch of a user on the touchscreen 104. A touch signal may include information regarding a location of the touch (e.g., one or more sets of (x,y) coordinates describing an area, shape or skeleton of the touch), a pressure of the touch (e.g., as measured by area of contact between a user's finger or a deformable stylus and the touchscreen 104, or by a pressure sensor), a duration of contact, any other suitable information, or any combination of such information.
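The kinds of information listed above might be grouped into a single structure along the following lines; the field names are hypothetical and are not drawn from any particular touch controller or operating system API.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TouchSignal:
    """Illustrative container for the touch information described above."""
    points: List[Tuple[float, float]]  # one or more (x, y) samples describing the contact
    pressure: float                    # e.g., contact area or a pressure-sensor reading
    duration_ms: float                 # how long the contact lasted
    timestamp_ms: float = 0.0          # when the contact began

    @property
    def start(self) -> Tuple[float, float]:
        return self.points[0]

    @property
    def end(self) -> Tuple[float, float]:
        return self.points[-1]

# A brief, localized contact of the kind later discussed with FIG. 5.
tap_like = TouchSignal(points=[(52.0, 88.0)], pressure=0.4, duration_ms=90.0)
```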
In some embodiments, the processing logic 202 may include input registration logic 208. The input registration logic 208 may be coupled to the gaze location logic 204 and the touch detection logic 206. The input registration logic 208 may be configured to receive the gaze location signal from the gaze location logic 204 and to receive the touch signal from the touch detection logic 206. The input registration logic 208 may also be configured to generate an input signal based at least in part on the gaze location signal and the touch signal. As used herein, an "input signal" may be any signal provided as a user input. An input signal may be provided to a hardware or software component of the system 200 and/or to a hardware or software component of a device separate from the system 200. Examples of input signals may include a user's touch on a particular portion of the touchscreen 104 and the properties of that touch. Other examples of input signals may be a signal indicating a user selection of a particular option displayed on the touchscreen 104, the user invocation of a particular function through contact with the touchscreen 104, or any other signal indicative of a user input. In some embodiments, the input signal generated by the registration logic 208 may be generated at the operating system level of the system 200. For example, an operating system of the system 200 may be configured to generate touch signals that can be queried or otherwise monitored by applications running in the operating system (e.g., a map application may include a function that re-centers the map in response to a user tap at a particular location, and information about the tap and the location of the tap may be provided by an operating system-level function invoked by the map application). In such embodiments, the input registration logic 208 may evaluate touch signals at the operating system level before they are provided to applications, and thereby may serve to "filter" such touch signals. In some embodiments, the input registration logic 208 may operate at the application level, and may be used by a particular application to "filter" or otherwise process touch signals provided to the application by the operating system- level functions.
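The "filtering" layer described above could sit between the operating system's raw touch events and an application callback, roughly as in this sketch; the event format, the candidate_type key, the gaze-region object, and the callback are all assumed interfaces rather than any real operating system API.

```python
def filter_touch_events(os_touch_events, gaze_region, deliver_to_app):
    """Illustrative operating-system-level filter between raw touch events
    and an application callback.

    os_touch_events : iterable of (x, y, event_dict) tuples (hypothetical format)
    gaze_region     : object exposing a contains(x, y) method
    deliver_to_app  : callable invoked only for events that pass the filter
    """
    for x, y, event in os_touch_events:
        in_gaze = gaze_region.contains(x, y)
        if event.get("candidate_type") == "tap" and not in_gaze:
            # Example policy: a tap-like contact far from the gaze region is
            # treated as incidental and never reaches the application.
            continue
        # Otherwise annotate the event and hand it on to the application.
        deliver_to_app(x, y, dict(event, in_gaze_region=in_gaze))
```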
In some embodiments, the input registration logic 208 may be configured to generate the input signal through selection of one of a plurality of predetermined touch types based at least in part on the touch signal. Examples of predetermined touch types include a tap, a swipe, a pinch, and a spread. A tap may include a momentary single contact between the touchscreen 104 and a user (e.g., through a single finger or stylus). A swipe may include an extended single contact between the touchscreen 104 and the user over a line or curve (e.g., as may be useful when a user moves her finger from right to left to turn a page of a book rendered on the touchscreen 104). A pinch may include two simultaneous points of contact between the touchscreen 104 and the user, with those points of contact drawn together on the surface of the touchscreen 104 (e.g., as may be useful when a user brings her fingers closer together on the touchscreen 104 to zoom into a portion of a displayed webpage). A spread may include two simultaneous points of contact between the touchscreen 104 and the user, with those points of contact drawn apart on the surface of the touchscreen 104. Other examples of touch types include press-and-hold, rotate, and slide-and-drag, for example. Different touch types may be associated with different regions of the touchscreen 104; for example, a "flick" touch type may be recognized by the system 200 when the user touches a point proximate to an edge of the touchscreen 104 and quickly and briefly slides her finger toward the interior of the touchscreen 104. Characteristics of various touch types may be stored in the storage device 226, and may be accessed by the input registration logic 208 (e.g., when the input registration logic 208 compares a received touch signal to the stored characteristics of various touch types in order to select a touch type that best corresponds to the received touch signal). In some embodiments, as discussed below, the input signal generated by the input registration logic 208 may indicate which touch type is associated with a detected touch.
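Comparing a received touch signal against stored touch-type characteristics might reduce to simple threshold checks, as in the following sketch; the thresholds and feature names are invented for illustration and are not the characteristics actually held in the storage device 226.

```python
def classify_touch(duration_ms, path_length_px, contact_count,
                   pinch_distance_change_px=0.0):
    """Pick the best-matching predetermined touch type for a touch signal.

    The thresholds below are illustrative stand-ins for stored touch-type
    characteristics; a real system would tune them per device and context.
    """
    if contact_count >= 2:
        if pinch_distance_change_px < 0:
            return "pinch"    # two contacts drawn together
        if pinch_distance_change_px > 0:
            return "spread"   # two contacts drawn apart
        return "none"
    if duration_ms < 200 and path_length_px < 10:
        return "tap"          # momentary, highly localized contact
    if path_length_px >= 10:
        return "swipe"        # extended contact over a line or curve
    if duration_ms >= 500:
        return "press-and-hold"
    return "none"

print(classify_touch(duration_ms=90, path_length_px=3, contact_count=1))    # tap
print(classify_touch(duration_ms=400, path_length_px=180, contact_count=1)) # swipe
```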
In some embodiments, the input registration logic 208 may be configured to select one of the plurality of predetermined touch types based at least in part on the touch signal and the gaze location signal. As noted above, in some embodiments, the touch types stored in the storage device 226 may include one or more non-gaze-associated types and one or more gaze-associated types. A non-gaze-associated type may be a touch type whose location on the touchscreen does not typically correspond with the user's gaze region. In other words, a non-gaze-associated type represents a touch action that a user will perform without looking at the portion of the touchscreen on which the touch action is performed. In some embodiments, a swipe may be a non-gaze-associated type, in that users do not typically look at the same region of the touchscreen in which they are performing a swipe. A pinch may be another example of a non-gaze-associated type. A gaze-associated type may be a touch type whose location on the screen does typically correspond with the user's gaze region. In some embodiments, a tap may be a gaze-associated type, in that users typically look at the same region of the touchscreen in which they are tapping.
Whether a touch type is gaze-associated or non-gaze-associated may vary depending upon the context (e.g., depending upon which application is executing on the system 100 and displaying a user interface on the touchscreen 104). For example, some applications may use a swipe touch type in different regions of the touchscreen 104 to indicate user selection of various options. In such applications, a swipe touch type may be gaze-associated in that a user will typically look to the region of the touchscreen 104 corresponding to her selection. In other applications, a swipe touch type may be used to unlock a portion of a user interface (e.g., a control panel) or move to a previous document in a sequence of documents, for example. In such applications, a swipe touch type may not be gaze-associated, meaning that users will often look at regions of the screen other than the touched region when performing the swipe. The storage device 226 may store information about whether various touch types are gaze-associated or non-gaze-associated in various contexts (e.g., in various applications, operating systems, or other operating environments).
In some embodiments, the input registration logic 208 may be configured to select a touch type based on the gaze location signal by selecting a touch type that is gaze-associated or non-gaze-associated depending on the relative locations of the touch and the gaze region. In particular, the input registration logic 208 may determine, based at least in part on the touch signal, that the touch was located outside of the gaze region. In response to this determination, the input registration logic 208 may select a non-gaze-associated touch type for the touch. In some embodiments, in response to a determination by the input registration logic 208 that the touch was located within the gaze region, the input registration logic 208 may select a gaze-associated or non-gaze-associated touch type for the touch.
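The selection rule described above, which restricts touches outside the gaze region to non-gaze-associated types, might look roughly like this; the table of which types are gaze-associated is hypothetical and, as noted above, would in practice vary by context.

```python
# Hypothetical per-context table of which stored touch types are gaze-associated.
GAZE_ASSOCIATED = {"tap": True, "swipe": False, "pinch": False, "slide": False}

def select_touch_type(candidate_types, touch_xy, gaze_region):
    """Choose a touch type given the touch location and the gaze region.

    candidate_types : touch types whose stored characteristics already match
                      the touch signal, in order of preference
    gaze_region     : object exposing a contains(x, y) method
    """
    inside = gaze_region.contains(*touch_xy)
    for t in candidate_types:
        if inside:
            # Inside the gaze region, either kind of type may be selected.
            return t
        if not GAZE_ASSOCIATED.get(t, False):
            # Outside the gaze region, only non-gaze-associated types qualify.
            return t
    return "none"  # No compatible type; treat the touch as incidental contact.
```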
For example, FIG. 5 illustrates the gaze region 312 and several touches 502, 504 and 506 on the touchscreen 104. The touches 502 and 504 may represent touches that have a short duration and are highly localized relative to the extended contact area of the touch 506. In some embodiments, the touch detection logic 206 may analyze the characteristics of the touches 502, 504 and 506 (e.g., against a set of predetermined touch types stored in the storage device 226, as discussed above) and may select a preliminary touch type for each of the touches 502, 504 and 506 before any gaze location information is available, received and/or processed. In some embodiments, this preliminary determination may be made by the input registration logic 208; for ease of illustration, this preliminary determination will be discussed as performed by the touch detection logic 206. For example, the touch detection logic 206 may determine that the touches 502 and 504 are best classified as "taps" based on the duration of contact and the area of contact, while the touch 506 is best classified as a "slide." The touch detection logic 206 (or the input registration logic 208, as appropriate) may generate a preliminary touch type signal for each of these touches indicative of the corresponding touch type.
The input registration logic 208 may receive the preliminary touch type signals (or may receive the touch signals from the touch detection logic 206 without preliminary touch type identification) and may determine whether a location of each touch is within the gaze region 312. If a touch location is not within the gaze region 312, the input registration logic 208 may select a non-gaze-associated touch type for that touch. If a touch location is within the gaze region 312, the input registration logic 208 may select a gaze-associated or a non-gaze-associated touch type for that touch. For example, as illustrated in FIG. 5, the touch 502 is located within the gaze region 312. If a tap is a gaze-associated touch type, and the characteristics of the touch 502 are compatible with the characteristics of a tap (e.g., as stored in the storage device 226), the input registration logic 208 may generate an input signal indicating that the touch 502 is a tap.
The touch 504, however, is not located within the gaze region 312. If a tap is a gaze-associated touch type, the input registration logic 208 may not generate an input signal indicating that the touch 504 is a tap even if the non-location characteristics of the touch 504 (e.g., the area and duration of contact) are compatible with the characteristics of a tap. Instead, the input registration logic 208 may seek another touch type compatible with the characteristics of the touch 504. If no suitable touch type can be found, the input registration logic 208 may select a "none" type. In some embodiments, the input registration logic 208 may select a "none" type by ignoring the touch 504 for the purposes of generating an input signal (e.g., the touch 504 may be treated as an incidental contact between the user and the touchscreen 104).
As further illustrated in FIG. 5, the touch 506 is located outside the gaze region 312. However, if the characteristics of the touch 506 are compatible with the characteristics of a slide (e.g., as stored in the storage device 226), and if a slide is a non-gaze-associated touch type, the input registration logic 208 may generate an input signal indicating that the touch 506 is a slide.
In some embodiments, the input registration logic 208 may not require a touch to be strictly within a gaze region for the touch to be designated as a gaze-associated touch type. For example, a touch may be partially within the gaze region and partially outside of the gaze region. In another example, a touch may commence within the gaze region and end outside of the gaze region. In another example, a touch need only be within a predetermined distance of the gaze region to be designated as a gaze-associated touch type (if appropriate). The predetermined distance may be an absolute distance (e.g., 1 centimeter), a relative distance (e.g., within a distance of a gaze region less than or equal to 10% of a radius of the gaze region), or any other suitable distance.
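A sketch of the "within a predetermined distance" relaxation follows, assuming a circular gaze region for simplicity; the absolute and relative margins correspond to the 1-centimeter and 10%-of-radius examples above, but the function and its parameters are illustrative only.

```python
import math

def near_gaze_region(touch_xy, center_xy, radius,
                     absolute_margin=None, relative_margin=None):
    """Treat a touch as gaze-associated if it falls within, or close enough
    to, a circular gaze region.

    For simplicity this sketch assumes a circular region; the margin may be an
    absolute distance (e.g., the pixel equivalent of 1 cm) or a fraction of
    the region's radius (e.g., 0.10 for the 10%-of-radius example above).
    """
    margin = 0.0
    if absolute_margin is not None:
        margin = max(margin, absolute_margin)
    if relative_margin is not None:
        margin = max(margin, relative_margin * radius)
    distance = math.dist(touch_xy, center_xy)
    return distance <= radius + margin

print(near_gaze_region((110, 0), (0, 0), radius=100, relative_margin=0.10))  # True
print(near_gaze_region((115, 0), (0, 0), radius=100, relative_margin=0.10))  # False
```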
Returning to FIG. 2, in some embodiments, the processing logic 202 may include display logic 214. The display logic 214 may be coupled to the touchscreen 104, and may be configured to cause the display of various visual elements on the touchscreen 104. In some embodiments, the display logic 214 may be coupled to the input registration logic 208, and may be configured to cause the display, on the touchscreen 104, of one or more visual elements based on the input signal generated by the input registration logic 208.
For example, FIG. 5 illustrates a display 500 on the touchscreen 104 that may be provided by the display logic 214. As shown in FIG. 5, the display 500 may include multiple visual elements, such as the letter blocks 508 and the theme change area 510. If the input registration logic 208 generates a "tap" input signal in response to the touch 502, as discussed above, the display logic 214 may cause the display, on the touchscreen 104, of a visual element based on this input signal. Such a visual element is shown in the display 600 of FIG. 6 as the shaded box 602. If the input registration logic 208 generates a "slide" input signal in response to the touch 506, as discussed above, the display logic 214 may cause the display, on the touchscreen 104, of a visual element based on this input signal. Such a visual element is shown in the display 600 as the theme graphic 604, which may replace the theme graphic 512 of the display 500.
In some embodiments, the gaze location logic 204 may be configured to generate multiple gaze location signals, each corresponding to a different user viewing the touchscreen 104. The touch detection logic 206 may be configured to generate multiple touch signals, each corresponding to a different touch on the touchscreen 104. In some embodiments, the input registration logic 208 may be configured to receive the multiple gaze location signals and the multiple touch signals, and determine which touch signals correspond to which users by comparing the locations of the touch signals to the gaze regions for each user. In particular, the input registration logic 208 may be configured to receive gaze location signals corresponding to the gaze regions of each of two or more users, receive a touch signal, identify the gaze region closest to the location of the touch signal, and associate the touch signal with the user corresponding to the closest gaze region. In some embodiments, the input registration logic 208 may receive multiple touch signals, associate the touch signals with different users based on the proximity of the locations of the touch signals to different gaze regions (indicated by different gaze location signals), and generate multiple different input signals based at least in part on the received gaze location signals and the received touch signals. In some embodiments, the touch detection logic 206 may generate the multiple touch signals at least partially in parallel. In some embodiments, the input registration logic 208 may generate the multiple input signals at least partially in parallel.
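The proximity-based association of touches with users might be sketched as follows, with each gaze region reduced to its center point for simplicity; the identifiers and the optional distance cutoff are illustrative assumptions.

```python
import math

def assign_touches_to_users(touches, gaze_regions, max_distance=None):
    """Associate each touch with the user whose gaze region center is closest.

    touches      : dict of touch_id -> (x, y)
    gaze_regions : dict of user_id -> (center_x, center_y), a simplified
                   stand-in for each user's gaze location signal
    max_distance : optional predetermined distance beyond which no user is
                   assigned (the touch is then treated as unattributed)
    """
    assignments = {}
    for touch_id, txy in touches.items():
        best_user, best_d = None, float("inf")
        for user_id, cxy in gaze_regions.items():
            d = math.dist(txy, cxy)
            if d < best_d:
                best_user, best_d = user_id, d
        if max_distance is not None and best_d > max_distance:
            best_user = None  # Touch is too far from every gaze region.
        assignments[touch_id] = best_user
    return assignments

# Touches analogous to 802 and 804 assigned to the first and second users.
print(assign_touches_to_users(
    {"802": (40, 60), "804": (180, 220)},
    {"user_702": (35, 55), "user_704": (175, 230)}))
```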
FIG. 7 illustrates first and second users 702 and 704 viewing the touchscreen 104 (as shown, included in the personal computing device 102). The gaze of the first user 702 may be directed to a first region 706 on the touchscreen 104, and the gaze of the second user 704 may be directed to a second region 708 on the touchscreen 104. The first and second gaze regions 706 and 708 are illustrated as superimposed on the display 800 of the touchscreen 104 in FIG. 8. As illustrated in FIG. 8, the first and second gaze regions 706 and 708 may have different shapes, and may have different locations on the touchscreen 104. The gaze location logic 204 may generate first and second gaze location signals indicative of the first and second gaze regions 706 and 708, and may provide these gaze location signals to the input registration logic 208.
FIG. 8 also illustrates two touches 802 and 804. As shown, the touch 802 falls within the first gaze region 706. In some embodiments, the input registration logic 208 may be configured to receive the first gaze location signal (indicative of the first gaze region 706), receive the first touch signal (indicative of the touch 802) and determine that the touch 802 falls within the first gaze region 706. In response to that
determination, the input registration logic 208 may determine that the touch 802 was performed by the first user 702, and may generate an input signal associated with the first user 702. The input registration logic 208 may also be configured to receive the second gaze location signal (indicative of the second gaze region 708), receive the second touch signal (indicative of the touch 804) and determine that the touch 804 falls at least partially within the second gaze region 708. In response to that determination, the input registration logic 208 may determine that the touch 804 was performed by the second user 704, and may generate an input signal associated with the second user 704.
In some embodiments, the input registration logic 208 may receive touch signals indicative of the touches 802 and 804 in parallel, in rapid succession, or in any suitable order relative to receipt of the gaze location signals indicative of the first and second gaze regions 706 and 708. Thus, the input registration logic 208 may evaluate all received touch signals (e.g., within a given window of time) against all received gaze location signals to determine which touch signals may correspond with the same user as a particular gaze location signal. In the example of FIG. 8, the input registration logic 208 may determine that the touch 802 is closer to the first gaze region 706 than to the second gaze region 708, and in response, determine that the touch 802 is not associated with the second user 704. Alternately, the input registration logic 208 may determine that the touch 802 is farther than a predetermined distance away from the second gaze region 708, and in response, determine that the touch 802 is not associated with the second user 704.
In embodiments in which the input registration logic 208 generates different input signals corresponding to different users, the display logic 214 may be configured to cause the display, on the touchscreen 104, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal. The first and second visual elements may be displayed simultaneously.
Returning to FIG. 8, the display 800 also includes first and second visual elements 806 and 808. The first and second visual elements 806 and 808 of FIG. 8 are avatars, and may represent player characters in a computer game or representatives in a virtual world environment, for example. The first visual element 806 may be associated with the first user 702 (e.g., the first visual element 806 may be a player character controlled by the first user 702) and the second visual element 808 may be associated with the second user 704. In some embodiments, in response to receiving the first and second gaze location signals and the touch signals indicative of the touches 802 and 804, the input registration logic 208 may generate an input signal indicating that the first user 702 wishes to move the first visual element 806 to the location of the touch 802 and an input signal indicating that the second user 704 wishes to move the second visual element 808 to the location of the touch 804. In response to these input signals, the display logic 214 may cause the display 900 of FIG. 9 on the touchscreen 104. As shown in FIG. 9, the first visual element 806 is relocated to the location 902 of the touch 802 and the second visual element 808 is relocated to the location 904 of the touch 804. In this manner, the input registration logic 208 may distinguish input signals from multiple users on a single touchscreen, and may enable multi-user computing applications such as game playing, editing of documents, simultaneous web browsing, or any other multi-user scenario.
Returning to FIG. 2, in some embodiments, the processing logic 202 may include viewing position logic 212. The viewing position logic 212 may be coupled to the input registration logic 208 and may generate a position signal indicative of a position of the user's eyes relative to the touchscreen 104. In some embodiments, the viewing position logic 212 may be coupled to the image capture logic 210, and may be configured to generate the position signal based at least in part on an image of the user's eyes received from the image capture device 232.
FIG. 10 depicts two views 1002 and 1004 of a scenario for the generation of a position signal when a user 1006 views the touchscreen 104 (shown as included in the personal computing device 102). The user's eyes 1010 may be located at a distance z above the touchscreen 104 in a direction perpendicular to a surface of the
touchscreen 104. A reference point 1008 may be defined on the touchscreen 104 (or in any location whose position is defined with reference to the touchscreen 104). In some embodiments, the reference point 1008 may be a point at which the image capture device 232 is located on the personal computing device 102. The angle β1 may represent the angle at which the user's eyes 1010 are located, as measured from the horizontal plane 1014 of the surface of the touchscreen 104. The angle β2 may represent the angle at which the center point 1018 between the user's eyes 1010 is located, as measured from the vertical plane 1016 of the reference point 1008. The position of the user's eyes may be characterized by the distance z, and the angles β1 and β2, and the position signal generated accordingly by the viewing position logic 212.
The angles and distances represented in FIG. 10 are simply illustrative, and the viewing position logic 212 may use any suitable measurements to determine the position of the user's eyes for generating the position signal. For example, some existing technologies use images of the user's face, captured by an image capture device 232 mounted in a known position relative to the touchscreen 104, to create a three- dimensional model of landmarks on the user's face, and thereby determine the position of the user's eyes relative to the touchscreen 104. In some embodiments, one or more devices may be included in a head-mounted device (e.g., radio frequency identification tags included in a pair of glasses), and these devices may communicate with cooperating devices mounted on or proximate to the touchscreen 104 (e.g., radio frequency identification tag readers) to determine the relative position between the user's eyes and the touchscreen 104 (e.g., based on the strength of the radio frequency signals detected). Any known technique for head position modeling may be implemented by the viewing position logic 212.
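One plausible reduction of the FIG. 10 measurements to an eye position in screen coordinates is sketched below; the trigonometric decomposition assumes the reference point 1008 is the coordinate origin, and both the decomposition and the sample values are offered for illustration only.

```python
import math

def eye_position_from_angles(z, beta1_deg, beta2_deg):
    """Estimate the eye position in screen coordinates from (z, beta1, beta2).

    Assumptions (illustrative only): z is the perpendicular height of the eyes
    above the screen, beta1 the elevation measured from the screen's
    horizontal plane 1014, and beta2 the azimuth measured from the vertical
    plane 1016 through the reference point 1008, taken here as the origin.
    """
    # Horizontal distance from the reference point to the point under the eyes.
    horizontal = z / math.tan(math.radians(beta1_deg)) if beta1_deg < 90.0 else 0.0
    x = horizontal * math.sin(math.radians(beta2_deg))
    y = horizontal * math.cos(math.radians(beta2_deg))
    return (x, y, z)

print(eye_position_from_angles(z=30.0, beta1_deg=60.0, beta2_deg=10.0))
```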
In some embodiments, the processing logic 202 may include display adjustment logic 216. The display adjustment logic 216 may be coupled to the viewing position logic 212, and may be configured to generate an adjustment signal indicative of a desired visual distortion based at least in part on the position signal generated by the viewing position logic 212. In particular, the display adjustment logic 216 may be configured to determine an angle at which the user is viewing the touchscreen 104 (e.g., based on the position signal generated by the viewing position logic 212) and generate an adjustment signal to correct the display by visually distorting the displayed elements so that they appear to the user the same as they would appear if the user were viewing the touchscreen 104 in a direction perpendicular to a surface plane of the touchscreen 104. As used herein, "an angle at which the user is viewing the touchscreen" may include one or more angular measurements representing the position of the user's eyes relative to an axis that is perpendicular to the surface plane of the touchscreen. For example, an angle may include two angular measurements. In some embodiments, the display adjustment logic 216 may be configured to generate the adjustment signal in order to correct the apparent distortion of a display on the touchscreen 104 that occurs when a user views the touchscreen 104 from an angle other than an angle perpendicular to the surface plane of the touchscreen 104. Certain examples of this distortion may be referred to as the "keystone effect" or "tombstone effect." FIG. 11 illustrates an example of this distortion. In FIG. 11, a desired display 1100 is displayed (e.g., by the display logic 214) on the touchscreen 104. However, when the touchscreen 104 is viewed from an angle other than perpendicular to the surface plane of the touchscreen 104 (e.g., as illustrated by the perspective view 1102), the display 1100 will appear distorted. In particular, as illustrated in FIG. 11, the portion 1104 of the display 1100 closer to the user will be enlarged relative to the portion 1106 of the display 1100 farther from the user.
The display adjustment logic 216 may be configured to use the position signal generated by the viewing position logic 212 to generate an adjustment signal indicative of a distortion of the display 1100 so that a user viewing the touchscreen 104 from the position indicated by the position signal will see the display 1100 appropriately dimensioned. In particular, the display logic 214 may be coupled with the display adjustment logic 216, and may be configured to cause the display, on the
touchscreen 104, of one or more visual elements distorted in accordance with the adjustment signal generated by the display adjustment logic 216. For example, FIG. 12 illustrates a display 1200 that may be generated by the display logic 214 based on the adjustment signal generated by the display adjustment logic 216. The display 1200 may be distorted with respect to the desired display 1100. However, when the display 1200 is rendered on the touchscreen 104, a user viewing the touchscreen 104 from the position indicated by the position signal will see the perspective view 1202, correctly rendering the desired display 1100 to the user. In some embodiments, the display adjustment logic 216 may include a threshold of distortion such that, unless the user's position indicates that the distortion of the display should exceed the threshold, no adjustment should be made. Such a threshold may prevent the display adjustment logic 216 from making frequent and slight adjustments to the display on the touchscreen 104, which may be disconcerting for the user.
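A deliberately simplified sketch of such a pre-distortion follows: it only pulls the near corners of the displayed image inward based on the viewing elevation and applies the no-adjustment threshold described above. A full implementation would instead compute a projective (homography) warp from the complete position signal; the scaling factor and the assumption that the user sits beyond the bottom edge are illustrative only.

```python
import math

def keystone_corners(width, height, view_elevation_deg, threshold_deg=5.0):
    """Compute pre-distorted corner positions for the displayed image.

    view_elevation_deg is the user's viewing elevation above the screen plane;
    90 degrees means viewing the screen straight on (no correction needed).
    """
    tilt = 90.0 - view_elevation_deg
    if abs(tilt) <= threshold_deg:
        # Viewing angle is close enough to perpendicular; keep the display as-is.
        return [(0, 0), (width, 0), (width, height), (0, height)]
    # Illustrative shrink factor: the further from perpendicular, the more the
    # near edge is pulled inward to counteract its apparent enlargement.
    shrink = 0.5 * width * (1.0 - math.cos(math.radians(tilt)))
    # Assume the user views from beyond the bottom edge: adjust the bottom corners.
    return [(0, 0), (width, 0), (width - shrink, height), (shrink, height)]

print(keystone_corners(1080, 1920, view_elevation_deg=55.0))
```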
FIG. 13 is a flow diagram illustrating a process 1300 for generating gaze-assisted touchscreen inputs (e.g., inputs based on contact with the touchscreen 104), in accordance with some embodiments. The operations of the process 1300 (and the other processes described herein), although illustrated as performed in a particular sequence for the sake of illustration, may be performed in parallel as suitable or in any other order. For example, operations related to receiving a location signal may be performed in parallel, partially in parallel, or in any suitable order, relative to operations related to receiving a touch signal.
Operations of the process 1300 (and the other processes described herein) may be described as performed by components of the system 200, as embodied in the computing system 100, for illustrative purposes, but the operations of the process 1300 (and the other processes described herein) may be performed by any suitably configured computing device or collection of computing devices. Any of the operations of the process 1300 (and the other processes described herein) may be performed in accordance with any of the embodiments of the systems 100 and 200 described herein.
The process 1300 may begin at the operation 1302, in which a gaze location signal indicative of a region of a user's gaze on the touchscreen 104 may be received (e.g., by the input registration logic 208). The gaze location signal may be generated in accordance with any of the embodiments described herein. In some embodiments, the gaze location signal may be generated based on an image of the user's eyes from the image capture device 232.
At the operation 1304, a touch signal indicative of a touch of the user on the touchscreen 104 may be received (e.g., by the input registration logic 208). The touch signal may be generated in accordance with any of the embodiments described herein.
At the operation 1306, an input signal may be generated (e.g., by the input registration logic 208), based at least in part on the gaze location signal (received at the operation 1302) and the touch signal (received at the operation 1304). The input signal may be generated in accordance with any of the embodiments described herein. In some embodiments, the operation 1306 may include selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and the input signal may indicate the selected touch type. In some such embodiments, the plurality of predetermined touch types may include one or more non-gaze-associated types and one or more gaze-associated types. Selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal may include determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, and, in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type. The process 1300 may then end.
FIG. 14 is a flow diagram illustrating a process 1400 for generating gaze-assisted touchscreen inputs (e.g., inputs based on contact with the touchscreen 104), in accordance with some embodiments. The process 1400 may begin at the operation 1402, in which multiple gaze location signals, each indicative of a region of a corresponding user's gaze on the touchscreen 104, may be received (e.g., by the input registration logic 208). The gaze location signals may be generated in accordance with any of the embodiments described herein. For example, the gaze location signals may be generated based on one or more images of the users' eyes from the image capture device 232. For illustrative purposes, the remaining discussion of FIG. 14 may refer to two users (a first user and a second user), but the process 1400 may be applied to any number of users. In some embodiments, the multiple gaze location signals may be received at the operation 1402 at least partially in parallel.
At the operation 1404, a touch signal indicative of a touch of a user on the touchscreen 104 may be received (e.g., by the input registration logic 208). The touch signal may be generated in accordance with any of the embodiments described herein. In some embodiments, the touch signal may not identify which user performed the touch. In some embodiments, the operation 1404 may include receiving two touch signals indicative of the touch of one or more users on the touchscreen 104. In some embodiments, multiple touch signals may be received at the operation 1404 at least partially in parallel.
At the operation 1406, an input signal may be generated (e.g., by the input registration logic 208) based at least in part on the gaze location signals (received at the operation 1402) and the touch signal (received at the operation 1404). The input signal may be generated in accordance with any of the embodiments described herein. In some embodiments, the location of the touch (indicated by the touch signal received at the operation 1404) may be compared (e.g., by the input registration logic 208) with the gaze regions of the first and second users (indicated by the gaze location signals received at the operation 1402), and the touch may be assigned to one of the first and second users (e.g., by the input registration logic 208). This assignment may be based on proximity to the gaze regions, and may be executed in accordance with any of the embodiments discussed herein (e.g., those discussed above with reference to FIGS. 7-9). In embodiments of the process 1400 in which two touch signals are received at the operation 1404, each touch signal may be assigned to one of the first and second users as discussed above (e.g., by the system 200). In some embodiments, a first touch may be associated with a first user and a second touch may be associated with a second user. Thereafter, first and second input signals, corresponding to the first and second touches, may be generated (e.g., by the system 200).
At the operation 1408, the display of a visual element based at least in part on the input signal generated at the operation 1406 may be caused (e.g., by the display logic 214). For example, the movement of a visual element associated with the first user
(e.g., a visual element representing the first user's avatar in a computer game) in response to the input signal may be caused. In embodiments in which first and second input signals are generated at the operation 1406, the simultaneous display, on the touchscreen 104, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal, may be caused (e.g., by the system 200). An example of such a display was discussed above with reference to FIGS. 8-9. The process 1400 may then end.
FIG. 15 is a flow diagram illustrating a process 1500 for generating gaze-assisted touchscreen inputs (e.g., inputs based on contact with the touchscreen 104), in accordance with some embodiments. The process 1500 may begin at the
operation 1502, in which a position signal indicative of a position of the user's eyes relative to the touchscreen 104 may be received (e.g., by the viewing position logic 212).
The position signal may be generated in accordance with any of the embodiments described herein (e.g., those discussed above with reference to FIGS. 10-12). For example, the position signal may be generated based on one or more images of the user's eyes from the image capture device 232.
At the operation 1504, an adjustment signal indicative of a desired visual distortion based at least in part on the position signal received at the operation 1502 may be generated (e.g., by the display adjustment logic 216). The adjustment signal may be generated in accordance with any of the embodiments described herein. In some embodiments, the adjustment signal may indicate adjustments to the display of a visual element on the touchscreen 104 to correct a keystone or related visual effect arising from the user's perspective on the touchscreen 104.
At the operation 1506, one or more visual elements distorted in accordance with the adjustment signal may be caused to be displayed on the touchscreen 104 (e.g., by the display logic 214). An example of such a display was discussed above with reference to FIGS. 11-12. The process 1500 may then end.

In various embodiments, the processes 1300, 1400 and 1500 may be combined in any desired combination to perform touchscreen-related data processing operations. For example, in some embodiments, the process 1500 may be performed (e.g., by the system 200) to continually adjust the display on the touchscreen 104 in response to the position of the user's eyes. In addition, the adjusted display may include different visual elements associated with different users, the display of which may be adjusted in response to input signals from the different users generated in accordance with the process 1400. The combined process may also include the gaze-associated and non-gaze-associated touch type operations discussed above with reference to various embodiments of the process 1300. Accordingly, any desired combination of these processes may be performed to improve a user's touchscreen experience.
Various ones of the embodiments disclosed herein may improve the quality of experience of a user of a touchscreen device. In particular, some embodiments may improve the ability of computing systems to distinguish between two potential touch points that are close together on a touchscreen; by using gaze location information, the computing system may improve its ability to identify the desired touch point.
Embodiments that distinguish between gaze-associated and non-gaze-associated touch types may improve the computing system's ability to distinguish between different touch types (e.g., reducing the likelihood that a pinch will be mistaken for a tap), enabling better interaction between the user and the computing system. Some embodiments that employ the display adjustment techniques disclosed herein may better align the points on the touchscreen that a user believes she has touched with the points she has actually touched, reducing user frustration.
The following paragraphs describe examples of embodiments of the present disclosure. Example 1 is a computing system with gaze-assisted touchscreen inputs, including input registration logic to: receive a touch signal indicative of a touch of a user on a touchscreen of the computing system, receive a gaze location signal indicative of a region of a user's gaze on the touchscreen, and generate an input signal to the computing system based at least in part on the gaze location signal and the touch signal.
Example 2 may include the subject matter of Example 1, and may further include gaze location logic, coupled to the input registration logic, to generate the gaze location signal. Example 3 may include the subject matter of Example 2, and may further include image capture logic, coupled to the gaze location logic, to receive an image of the user's eyes from an image capture device, wherein the gaze location logic is to generate the gaze location signal based at least in part on the received image.
Example 4 may include the subject matter of any of Examples 1-3, and may further include touch detection logic, coupled to the input registration logic, to generate the touch signal.
Example 5 may include the subject matter of any of Examples 1-4, and may further specify that the input registration logic is to generate the input signal through selection of one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and that the input signal indicates the selected touch type.
Example 6 may include the subject matter of Example 5, and may further specify that the plurality of predetermined touch types includes one or more non-gaze-associated types and one or more gaze-associated types, and that selection of one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal includes: determination, based at least in part on the touch signal, that the touch was located outside of the region of the user's gaze; and, in response to the determination that the touch was located outside of the region of the user's gaze, selection of a non-gaze-associated type.
Example 7 may include the subject matter of any of Examples 1-6, and may further specify that the input registration logic is further to: receive a second touch signal indicative of a second touch of the user on the touchscreen; receive a second gaze location signal indicative of a second region of the user's gaze on the touchscreen; determine, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze; determine, based at least in part on the second touch signal, that the second touch signal is not compatible with any non-gaze-associated touch types; and in response to a determination that the second touch was located outside of the second region of the user's gaze and a determination that the second touch signal is not compatible with any non-gaze-associated touch types, ignore the second touch signal for the purposes of generating an input signal. Example 8 may include the subject matter of any of Examples 1-7, and may further specify that the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and that the input registration logic is to receive a second touch signal indicative of a touch of a second user on the touchscreen, receive a second gaze location signal indicative of a region of the second user's gaze on the touchscreen, and generate a second input signal based at least in part on the second gaze location signal and the second touch signal.
Example 9 may include the subject matter of Example 8, and may further include touch detection logic, coupled to the input registration logic, to generate the first and second touch signals at least partially in parallel.
Example 10 may include the subject matter of any of Examples 8-9, and may further include display logic to cause the display, on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal, the first and second visual elements displayed simultaneously.
Example 11 may include the subject matter of any of Examples 1-10, and may further specify that the input registration logic is to receive a position signal indicative of a position of the user's eyes relative to the touchscreen, and that the input signal is based at least in part on the position signal.
Example 12 may include the subject matter of Example 11, and may further include viewing position logic, coupled to the input registration logic, to generate the position signal.
Example 13 may include the subject matter of any of Examples 11-12, and may further include: display adjustment logic, coupled to the viewing position logic, to receive the position signal and to generate an adjustment signal indicative of a desired visual distortion based at least in part on the position signal; and display logic, coupled with the display adjustment logic, to cause the display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
Example 14 is a method for generating gaze-assisted touchscreen inputs for a computing system, including: receiving, by the computing system, a gaze location signal indicative of a region of a user's gaze on a touchscreen of the computing system;
receiving, by the computing system, a touch signal indicative of a touch of the user on the touchscreen; and generating, by the computing system, an input signal for the computing system based at least in part on the gaze location signal and the touch signal.
Example 15 may include the subject matter of Example 14, and may further include receiving, by the computing system, an image of the user's eyes from an image capture device, wherein the gaze location signal is based at least in part on the received image.
Example 16 may include the subject matter of any of Examples 14-15, and may further specify that generating an input signal based at least in part on the gaze location signal and the touch signal includes selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and that the input signal indicates the selected touch type.
Example 17 may include the subject matter of Example 16, wherein the plurality of predetermined touch types includes one or more non-gaze-associated types and one or more gaze-associated types, and wherein selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal includes: determining that the touch signal indicates that the touch was located outside of the region of the user's gaze; and, in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type.
Example 18 may include the subject matter of any of Examples 14-17, and may further include: receiving a second touch signal indicative of a second touch of the user on the touchscreen; receiving a second gaze location signal indicative of a second region of the user's gaze on the touchscreen; determining, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze; determining, based at least in part on the second touch signal, that the second touch signal is not compatible with any non-gaze-associated touch types; and in response to determining that the second touch was located outside of the second region of the user's gaze and that the second touch signal is not compatible with any non-gaze-associated touch types, ignoring the second touch signal for the purposes of generating an input signal.
Example 19 may include the subject matter of any of Examples 14-18, wherein the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and wherein the method may further include: receiving, by the computing system, a second gaze location signal indicative of a region of a second user's gaze on the touchscreen; receiving, by the computing system, a second touch signal indicative of a touch of the second user on the touchscreen; and generating, by the computing system, a second input signal based at least in part on the second gaze location signal and the second touch signal.
Example 20 may include the subject matter of Example 19, wherein receiving the first touch signal is performed by the computing system at least partially in parallel with receiving the second touch signal.
Example 21 may include the subject matter of any of Examples 19-20, and may further include causing simultaneous display, by the computing system on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal.
Example 22 may include the subject matter of any of Examples 14-21, and may further include receiving, by the computing system, a position signal indicative of a position of the user's eyes relative to the touchscreen, wherein the touch signal indicative of a touch of the user on the touchscreen is based at least in part on the position signal.
Example 23 may include the subject matter of Example 22, and may further include: generating, by the computing system, an adjustment signal indicative of a desired visual distortion based at least in part on the position signal; and causing display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
Example 24 is one or more computer readable media having instructions thereon that, when executed by one or more processing devices of a computing device, cause the computing device to perform the method of any of Examples 14-23.
Example 25 is an apparatus including one or more processing devices and one or more computer readable media having instructions thereon that, when executed by the one or more processing devices, cause the apparatus to perform the method of any of Examples 14-23.
Example 26 is a system with gaze-assisted touchscreen inputs, including: means for receiving a gaze location signal indicative of a region of a user's gaze on a touchscreen of a computing system; means for receiving a touch signal indicative of a touch of the user on the touchscreen; and means for generating an input signal for the system based at least in part on the gaze location signal and the touch signal.
Example 27 may include the subject matter of Example 26, and may further include means for generating the gaze location signal.
Example 28 may include the subject matter of Example 27, wherein the means for generating the gaze location signal includes means for receiving an image of the user's eyes from an image capture device, wherein the gaze location signal is based at least in part on the received image.
Example 29 may include the subject matter of any of Examples 26-28, and may further include means for generating the touch signal.
Example 30 may include the subject matter of any of Examples 26-29, and may further specify that the means for generating an input signal based at least in part on the gaze location signal and the touch signal includes means for selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and that the input signal indicates the selected touch type.
Example 31 may include the subject matter of Example 30, and may further specify that the plurality of predetermined touch types includes one or more non-gaze- associated types and one or more gaze-associated types, and that the means for selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal includes: means for determining that the touch signal indicates that the touch was located outside of the region of the user's gaze; and means for, in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type.
Example 32 may include the subject matter of any of Examples 26-31, and may further include: means for receiving a second touch signal indicative of a second touch of the user on the touchscreen; means for receiving a second gaze location signal indicative of a second region of the user's gaze on the touchscreen; means for determining, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze; and means for determining, based at least in part on the second touch signal, that the second touch signal is not compatible with any non- gaze-associated touch types; wherein the means for generating an input signal is configured to, in response to determining that the second touch was located outside of the second region of the user's gaze and that the second touch signal is not compatible with any non-gaze-associated touch types, ignore the second touch signal.
Example 33 may include the subject matter of any of Examples 26-32, and may further specify that the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and that the system may further include: means for receiving a second gaze location signal indicative of a region of a second user's gaze on the touchscreen; means for receiving a second touch signal indicative of a touch of the second user on the touchscreen; and means for generating a second input signal based at least in part on the second gaze location signal and the second touch signal.
Example 34 may include the subject matter of Example 33, wherein the means for receiving the first touch signal is configured to receive the first touch signal at least partially in parallel with the reception of the second touch signal by the means for receiving the second touch signal.
Example 35 may include the subject matter of any of Examples 33-34, and may further include means for causing simultaneous display, on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal.
Example 36 may include the subject matter of any of Examples 26-35, and may further include means for receiving a position signal indicative of a position of the user's eyes relative to the touchscreen, wherein the touch signal indicative of a touch of the user on the touchscreen is based at least in part on the position signal.
Example 37 may include the subject matter of Example 36, and may further include means for generating the position signal.
Example 38 may include the subject matter of Example 37, and may further include: means for generating an adjustment signal indicative of a desired visual distortion based at least in part on the position signal, and means for causing display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
Although certain embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a wide variety of alternate and/or equivalent embodiments or implementations calculated to achieve the same purposes may be substituted for the embodiments shown and described without departing from the scope. Those with skill in the art will readily appreciate that embodiments may be implemented in a very wide variety of ways. This application is intended to cover any adaptations or variations of the embodiments discussed herein.

Claims

What is claimed is:
1. A computing system with gaze-assisted touchscreen inputs, comprising:
input registration logic to:
receive a touch signal indicative of a touch of a user on a touchscreen of the computing system,
receive a gaze location signal indicative of a region of a user's gaze on the touchscreen, and
generate an input signal to the computing system based at least in part on the gaze location signal and the touch signal.
2. The computing system of claim 1, further comprising:
gaze location logic, coupled to the input registration logic, to generate the gaze location signal.
3. The computing system of claim 2, further comprising:
image capture logic, coupled to the gaze location logic, to receive an image of the user's eyes from an image capture device;
wherein the gaze location logic is to generate the gaze location signal based at least in part on the received image.
4. The computing system of claim 1, further comprising:
touch detection logic, coupled to the input registration logic, to generate the touch signal.
5. The computing system of claim 1, wherein:
the input registration logic is to generate the input signal through selection of one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and
the input signal indicates the selected touch type.
6. The computing system of claim 5, wherein the plurality of predetermined touch types includes one or more non-gaze-associated types and one or more gaze-associated types, and wherein selection of one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal comprises:
determination, based at least in part on the touch signal, that the touch was located outside of the region of the user's gaze; and in response to the determination that the touch was located outside of the region of the user's gaze, selection of a non-gaze-associated type.
7. The computing system of claim 1, wherein the input registration logic is further to: receive a second touch signal indicative of a second touch of the user on the touchscreen;
receive a second gaze location signal indicative of a second region of the user's gaze on the touchscreen;
determine, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze;
determine, based at least in part on the second touch signal, that the second touch signal is not compatible with any non-gaze-associated touch types; and
in response to a determination that the second touch was located outside of the second region of the user's gaze and a determination that the second touch signal is not compatible with any non-gaze-associated touch types, ignore the second touch signal for the purposes of generating an input signal.
8. The computing system of claim 1, wherein the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and wherein:
the input registration logic is to:
receive a second touch signal indicative of a touch of a second user on the touchscreen,
receive a second gaze location signal indicative of a region of the second user's gaze on the touchscreen, and
generate a second input signal based at least in part on the second gaze location signal and the second touch signal.
9. The computing system of claim 8, further comprising:
touch detection logic, coupled to the input registration logic, to generate the first and second touch signals at least partially in parallel.
10. The computing system of claim 8, further comprising:
display logic to cause the display, on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal, the first and second visual elements displayed simultaneously.
11. The computing system of any one of claims 1-10, wherein the input registration logic is further to receive a position signal indicative of a position of the user's eyes relative to the touchscreen, and wherein the input signal is based at least in part on the position signal.
12. The computing system of claim 11, further comprising:
viewing position logic, coupled to the input registration logic, to generate the position signal.
13. The computing system of claim 11, further comprising:
display adjustment logic, coupled to the viewing position logic, to receive the position signal and to generate an adjustment signal indicative of a desired visual distortion based at least in part on the position signal; and
display logic, coupled with the display adjustment logic, to cause the display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
14. A method for generating gaze-assisted touchscreen inputs for a computing system, comprising:
receiving, by the computing system, a gaze location signal indicative of a region of a user's gaze on a touchscreen of the computing system;
receiving, by the computing system, a touch signal indicative of a touch of the user on the touchscreen; and
generating, by the computing system, an input signal for the computing system based at least in part on the gaze location signal and the touch signal.
15. The method of claim 14, further comprising:
receiving, by the computing system, an image of the user's eyes from an image capture device;
wherein the gaze location signal is based at least in part on the received image.
16. The method of claim 14, wherein:
generating an input signal based at least in part on the gaze location signal and the touch signal comprises selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and
the input signal indicates the selected touch type.
17. The method of claim 16, wherein the plurality of predetermined touch types includes one or more non-gaze-associated types and one or more gaze-associated types, and wherein selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal comprises:
determining that the touch signal indicates that the touch was located outside of the region of the user's gaze; and
in response to determining that the touch signal indicates that the touch was located outside of the region of the user's gaze, selecting a non-gaze-associated type.
18. The method of claim 14, further comprising:
receiving a second touch signal indicative of a second touch of the user on the touchscreen;
receiving a second gaze location signal indicative of a second region of the user's gaze on the touchscreen;
determining, based at least in part on the second touch signal, that the second touch was located outside of the second region of the user's gaze;
determining, based at least in part on the second touch signal, that the second touch signal is not compatible with any non-gaze-associated touch types; and
in response to determining that the second touch was located outside of the second region of the user's gaze and that the second touch signal is not compatible with any non-gaze-associated touch types, ignoring the second touch signal for the purposes of generating an input signal.
19. The method of claim 14, wherein the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and wherein the method further comprises:
receiving, by the computing system, a second gaze location signal indicative of a region of a second user's gaze on the touchscreen;
receiving, by the computing system, a second touch signal indicative of a touch of the second user on the touchscreen; and
generating, by the computing system, a second input signal based at least in part on the second gaze location signal and the second touch signal.
20. The method of claim 19, further comprising:
causing simultaneous display, by the computing system on the touchscreen, of a first visual element based at least in part on the first input signal and a second visual element based at least in part on the second input signal.
21. The method of claim 14, further comprising:
receiving, by the computing system, a position signal indicative of a position of the user's eyes relative to the touchscreen;
wherein the touch signal indicative of a touch of the user on the touchscreen is based at least in part on the position signal.
22. The method of claim 21, further comprising:
generating, by the computing system, an adjustment signal indicative of a desired visual distortion based at least in part on the position signal; and
causing display, on the touchscreen, of one or more visual elements distorted in accordance with the adjustment signal.
23. One or more computer readable media having instructions thereon that, when executed by one or more processing devices of a computing device, cause the computing device to perform the method of any of claims 14-22.
24. A system with gaze-assisted touchscreen inputs, comprising:
means for receiving a gaze location signal indicative of a region of a user's gaze on a touchscreen of a computing system;
means for receiving a touch signal indicative of a touch of the user on the touchscreen; and
means for generating an input signal for the system based at least in part on the gaze location signal and the touch signal.
25. The system of claim 24, wherein:
the means for generating an input signal based at least in part on the gaze location signal and the touch signal comprises means for selecting one of a plurality of predetermined touch types based at least in part on the gaze location signal and the touch signal, and
the input signal indicates the selected touch type.
26. The system of claim 24, wherein the gaze location signal is a first gaze location signal, the user is a first user, the touch signal is a first touch signal, and the input signal is a first input signal, and wherein the system further comprises:
means for receiving a second gaze location signal indicative of a region of a second user's gaze on the touchscreen;
means for receiving a second touch signal indicative of a touch of the second user on the touchscreen; and
means for generating a second input signal based at least in part on the second gaze location signal and the second touch signal.
27. The system of claim 24, further comprising:
means for receiving a position signal indicative of a position of the user's eyes relative to the touchscreen;
wherein the touch signal indicative of a touch of the user on the touchscreen is based at least in part on the position signal.
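
By way of illustration only, the sketches below show one possible arrangement of logic along the lines of the claims above. They are minimal, hedged examples: every name, data structure, touch type, and threshold they introduce is an assumption made for this write-up, not part of the claimed subject matter or of any disclosed implementation. The first sketch pairs a touch signal with a gaze location signal to generate an input signal, roughly in the manner of claims 1 and 14; the circular gaze region is one assumed model among many possible ones.

```python
# Illustrative sketch only: the class names, fields, and circular gaze region are
# assumptions introduced for this example, not the claimed or disclosed design.
from dataclasses import dataclass


@dataclass
class TouchSignal:
    x: float  # touch location on the touchscreen, in pixels
    y: float


@dataclass
class GazeLocationSignal:
    cx: float      # center of the estimated gaze region, in pixels
    cy: float
    radius: float  # extent of the gaze region (e.g. eye-tracker uncertainty)


@dataclass
class InputSignal:
    x: float
    y: float
    gaze_associated: bool  # True if the touch fell within the user's gaze region


def register_input(touch: TouchSignal, gaze: GazeLocationSignal) -> InputSignal:
    """Generate an input signal based at least in part on both the touch and gaze signals."""
    dx, dy = touch.x - gaze.cx, touch.y - gaze.cy
    inside_gaze = dx * dx + dy * dy <= gaze.radius ** 2
    return InputSignal(x=touch.x, y=touch.y, gaze_associated=inside_gaze)


if __name__ == "__main__":
    print(register_input(TouchSignal(100, 120), GazeLocationSignal(110, 115, 50)))
```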
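
A second sketch, again with hypothetical touch-type names, shows how a touch might be resolved to one of a plurality of predetermined touch types: touches outside the gaze region are restricted to non-gaze-associated types (claims 6 and 17) and ignored when they are not compatible with any of those types (claims 7 and 18). How a raw touch is classified into a gesture is assumed to happen upstream and is not shown.

```python
# Illustrative sketch only: the touch-type names and the compatibility sets are
# hypothetical; classification of the raw touch into a gesture is assumed upstream.
GAZE_ASSOCIATED_TYPES = {"tap", "double_tap"}             # assumed gaze-associated types
NON_GAZE_ASSOCIATED_TYPES = {"swipe", "two_finger_zoom"}  # assumed non-gaze-associated types


def select_touch_type(gesture: str, touch_inside_gaze: bool):
    """Select one of the predetermined touch types, or return None to ignore the touch."""
    if touch_inside_gaze:
        # Inside the gaze region, any compatible predetermined type may be selected.
        if gesture in GAZE_ASSOCIATED_TYPES | NON_GAZE_ASSOCIATED_TYPES:
            return gesture
        return None
    # Outside the gaze region, only a non-gaze-associated type is selected;
    # a touch compatible with none of them is ignored for input generation.
    return gesture if gesture in NON_GAZE_ASSOCIATED_TYPES else None


print(select_touch_type("swipe", touch_inside_gaze=False))  # -> 'swipe'
print(select_touch_type("tap", touch_inside_gaze=False))    # -> None (ignored)
```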
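
The multi-user claims (8-10 and 19-20) contemplate separate input signals for separate users, with the touch signals handled at least partially in parallel and each resulting input signal driving its own, simultaneously displayed visual element. The sketch below assumes some upstream mechanism has already attributed each touch and gaze signal to a particular user; that attribution is not shown, and the values are made up for the demo.

```python
# Illustrative sketch only: per-user attribution of touch and gaze signals is assumed
# to be handled elsewhere; the coordinate values below are made up for the demo.
from concurrent.futures import ThreadPoolExecutor


def register_input(user_id, touch_xy, gaze_center, gaze_radius):
    """Generate an input signal for one user from that user's own touch and gaze signals."""
    dx = touch_xy[0] - gaze_center[0]
    dy = touch_xy[1] - gaze_center[1]
    inside_gaze = dx * dx + dy * dy <= gaze_radius ** 2
    return {"user": user_id, "touch": touch_xy, "gaze_associated": inside_gaze}


signals = {
    "user_1": ((100, 120), (110, 115), 50),  # (touch, gaze center, gaze radius)
    "user_2": ((800, 400), (600, 350), 60),
}

# The first and second touch signals may be handled at least partially in parallel;
# each resulting input signal could then drive its own, simultaneously displayed element.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda kv: register_input(kv[0], *kv[1]), signals.items()))

for input_signal in results:
    print(input_signal)
```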
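
Finally, claims 11-13 and 21-22 add a position signal indicating where the user's eyes are relative to the touchscreen, together with a display adjustment that distorts visual elements in accordance with an adjustment signal. The sketch below uses a simple anamorphic (per-axis) scaling to compensate for an oblique viewing angle; that particular distortion and the geometry behind it are assumptions, not the correction contemplated by the disclosure.

```python
# Illustrative sketch only: the anamorphic (per-axis) scaling below is an assumed
# example of a "desired visual distortion"; units and geometry are made up.
import math


def adjustment_signal(eye_position, screen_center):
    """Derive per-axis scale factors from the position of the user's eyes relative to the screen."""
    ex, ey, ez = eye_position        # eye position in screen coordinates (mm), ez = distance
    cx, cy = screen_center
    yaw = math.atan2(ex - cx, ez)    # horizontal viewing angle
    pitch = math.atan2(ey - cy, ez)  # vertical viewing angle
    # Compensate for foreshortening by stretching each axis by 1 / cos(angle).
    return {"scale_x": 1.0 / math.cos(yaw), "scale_y": 1.0 / math.cos(pitch)}


def distort_element(rect, adjustment, screen_center):
    """Scale a rectangular visual element (x, y, w, h) about the screen center."""
    x, y, w, h = rect
    cx, cy = screen_center
    sx, sy = adjustment["scale_x"], adjustment["scale_y"]
    return (cx + (x - cx) * sx, cy + (y - cy) * sy, w * sx, h * sy)


adj = adjustment_signal(eye_position=(300, 0, 400), screen_center=(0, 0))
print(adj)                                        # scale_x = 1.25 for this geometry
print(distort_element((-50, -20, 100, 40), adj, (0, 0)))
```

With the eyes 400 mm from the screen and offset 300 mm to the side, for instance, this particular sketch stretches the horizontal axis by a factor of 1.25.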

Priority Applications (5)

Application Number Priority Date Filing Date Title
PCT/US2013/068125 WO2015065478A1 (en) 2013-11-01 2013-11-01 Gaze-assisted touchscreen inputs
JP2016524529A JP6165979B2 (en) 2013-11-01 2013-11-01 Gaze-assisted touch screen input
US14/127,955 US9575559B2 (en) 2013-11-01 2013-11-01 Gaze-assisted touchscreen inputs
CN201380080038.8A CN105593785B (en) 2013-11-01 2013-11-01 Stare auxiliary touch-screen input
EP13896672.6A EP3063602B1 (en) 2013-11-01 2013-11-01 Gaze-assisted touchscreen inputs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2013/068125 WO2015065478A1 (en) 2013-11-01 2013-11-01 Gaze-assisted touchscreen inputs

Publications (1)

Publication Number Publication Date
WO2015065478A1 (en) 2015-05-07

Family

ID=53004878

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/068125 WO2015065478A1 (en) 2013-11-01 2013-11-01 Gaze-assisted touchscreen inputs

Country Status (5)

Country Link
US (1) US9575559B2 (en)
EP (1) EP3063602B1 (en)
JP (1) JP6165979B2 (en)
CN (1) CN105593785B (en)
WO (1) WO2015065478A1 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8885882B1 (en) * 2011-07-14 2014-11-11 The Research Foundation For The State University Of New York Real time eye tracking for human computer interaction
KR20150083553A (en) * 2014-01-10 2015-07-20 삼성전자주식회사 Apparatus and method for processing input
US10209779B2 (en) * 2014-02-21 2019-02-19 Samsung Electronics Co., Ltd. Method for displaying content and electronic device therefor
KR101619651B1 (en) * 2014-11-26 2016-05-10 현대자동차주식회사 Driver Monitoring Apparatus and Method for Controlling Lighting thereof
JP2016163166A (en) * 2015-03-02 2016-09-05 株式会社リコー Communication terminal, interview system, display method, and program
TW201703722A (en) * 2015-07-21 2017-02-01 明達醫學科技股份有限公司 Measurement apparatus and operating method thereof
US9990044B2 (en) * 2015-10-30 2018-06-05 Intel Corporation Gaze tracking system
US9905244B2 (en) * 2016-02-02 2018-02-27 Ebay Inc. Personalized, real-time audio processing
US9898082B1 (en) * 2016-11-01 2018-02-20 Massachusetts Institute Of Technology Methods and apparatus for eye tracking
CN107820599B (en) * 2016-12-09 2021-03-23 深圳市柔宇科技股份有限公司 User interface adjusting method and system and head-mounted display device
US11507216B2 (en) 2016-12-23 2022-11-22 Realwear, Inc. Customizing user interfaces of binary applications
US11099716B2 (en) 2016-12-23 2021-08-24 Realwear, Inc. Context based content navigation for wearable display
US10620910B2 (en) 2016-12-23 2020-04-14 Realwear, Inc. Hands-free navigation of touch-based operating systems
US10936872B2 (en) 2016-12-23 2021-03-02 Realwear, Inc. Hands-free contextually aware object interaction for wearable display
JP6253861B1 (en) * 2017-03-15 2017-12-27 三菱電機株式会社 Touch gesture determination device, touch gesture determination method, touch gesture determination program, and touch panel input device
JP7099444B2 (en) * 2017-04-03 2022-07-12 ソニーグループ株式会社 Information processing equipment, information processing methods, and programs
US10304209B2 (en) 2017-04-19 2019-05-28 The Nielsen Company (Us), Llc Methods and systems to increase accuracy of eye tracking
US10474417B2 (en) * 2017-07-20 2019-11-12 Apple Inc. Electronic device with sensors and display devices
US11029834B2 (en) * 2017-12-20 2021-06-08 International Business Machines Corporation Utilizing biometric feedback to allow users to scroll content into a viewable display area
CN109101110A (en) * 2018-08-10 2018-12-28 北京七鑫易维信息技术有限公司 A kind of method for executing operating instructions, device, user terminal and storage medium
TWI734024B (en) * 2018-08-28 2021-07-21 財團法人工業技術研究院 Direction determination system and direction determination method
US10761648B2 (en) * 2019-01-16 2020-09-01 Michael D. Marra Gaze detection interlock feature for touch screen devices
US11169653B2 (en) 2019-01-18 2021-11-09 Dell Products L.P. Asymmetric information handling system user interface management
US11009907B2 (en) 2019-01-18 2021-05-18 Dell Products L.P. Portable information handling system user interface selection based on keyboard configuration
US11347367B2 (en) * 2019-01-18 2022-05-31 Dell Products L.P. Information handling system see do user interface management
DE112019007085T5 (en) 2019-03-27 2022-01-20 Intel Corporation Intelligent scoreboard setup and related techniques
US11379016B2 (en) 2019-05-23 2022-07-05 Intel Corporation Methods and apparatus to operate closed-lid portable computers
TWI804671B (en) * 2019-08-28 2023-06-11 財團法人工業技術研究院 Interaction display method and interaction display system
US11733761B2 (en) 2019-11-11 2023-08-22 Intel Corporation Methods and apparatus to manage power and performance of computing devices based on user presence
US11809535B2 (en) 2019-12-23 2023-11-07 Intel Corporation Systems and methods for multi-modal user device authentication
US11360528B2 (en) 2019-12-27 2022-06-14 Intel Corporation Apparatus and methods for thermal management of electronic user devices based on user activity
CN112987930A (en) * 2021-03-17 2021-06-18 读书郎教育科技有限公司 Method for realizing convenient interaction with large-size electronic product
US12112011B2 (en) 2022-09-16 2024-10-08 Apple Inc. System and method of application-based three-dimensional refinement in multi-user communication sessions
US12099653B2 (en) 2022-09-22 2024-09-24 Apple Inc. User interface response based on gaze-holding event assessment
US12118200B1 (en) 2023-06-02 2024-10-15 Apple Inc. Fuzzy hit testing
US12113948B1 (en) 2023-06-04 2024-10-08 Apple Inc. Systems and methods of managing spatial groups in multi-user communication sessions

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11296304A (en) * 1998-04-10 1999-10-29 Ricoh Co Ltd Screen display inputting device and parallax correcting method
SE524003C2 (en) * 2002-11-21 2004-06-15 Tobii Technology Ab Procedure and facility for detecting and following an eye and its angle of view
US20110304606A1 (en) * 2010-06-14 2011-12-15 Oto Technologies, Llc Method and system for implementing look-ahead protection in a computing device
US8493390B2 (en) * 2010-12-08 2013-07-23 Sony Computer Entertainment America, Inc. Adaptive displays using gaze tracking
US8766936B2 (en) * 2011-03-25 2014-07-01 Honeywell International Inc. Touch screen and method for providing stable touches
JP2012203671A (en) * 2011-03-25 2012-10-22 Nec Casio Mobile Communications Ltd Electronic apparatus, information generation method and program
KR101810170B1 (en) * 2011-10-10 2017-12-20 삼성전자 주식회사 Method and apparatus for displaying image based on user location
US9691125B2 (en) * 2011-12-20 2017-06-27 Hewlett-Packard Development Company L.P. Transformation of image data based on user position
KR101620777B1 (en) * 2012-03-26 2016-05-12 애플 인크. Enhanced virtual touchpad and touchscreen
KR20150031986A (en) * 2013-09-17 2015-03-25 삼성전자주식회사 Display apparatus and control method thereof

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6204828B1 (en) * 1998-03-31 2001-03-20 International Business Machines Corporation Integrated gaze/manual cursor positioning system
US20120272179A1 (en) * 2011-04-21 2012-10-25 Sony Computer Entertainment Inc. Gaze-Assisted Computer Interface
US20130145304A1 (en) * 2011-12-02 2013-06-06 International Business Machines Corporation Confirming input intent using eye tracking
US20130169560A1 (en) * 2012-01-04 2013-07-04 Tobii Technology Ab System for gaze interaction
KR20130081117A (en) * 2012-01-06 2013-07-16 엘지전자 주식회사 Mobile terminal and control method therof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022160933A1 (en) * 2021-01-26 2022-08-04 Huawei Technologies Co.,Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions
US11474598B2 (en) 2021-01-26 2022-10-18 Huawei Technologies Co., Ltd. Systems and methods for gaze prediction on touch-enabled devices using touch interactions

Also Published As

Publication number Publication date
EP3063602A4 (en) 2017-08-02
JP6165979B2 (en) 2017-07-19
CN105593785B (en) 2019-11-12
US20150338914A1 (en) 2015-11-26
US9575559B2 (en) 2017-02-21
EP3063602A1 (en) 2016-09-07
EP3063602B1 (en) 2019-10-23
CN105593785A (en) 2016-05-18
JP2016533575A (en) 2016-10-27

Similar Documents

Publication Publication Date Title
US9575559B2 (en) Gaze-assisted touchscreen inputs
CN114303120B (en) virtual keyboard
US9395821B2 (en) Systems and techniques for user interface control
US12008151B2 (en) Tracking and drift correction
US20220100265A1 (en) Dynamic configuration of user interface layouts and inputs for extended reality systems
CN116348836A (en) Gesture tracking for interactive game control in augmented reality
US20190384450A1 (en) Touch gesture detection on a surface with movable artifacts
EP2509070B1 (en) Apparatus and method for determining relevance of input speech
US9589325B2 (en) Method for determining display mode of screen, and terminal device
KR20120068253A (en) Method and apparatus for providing response of user interface
US20210117078A1 (en) Gesture Input Method for Wearable Device and Wearable Device
US10607069B2 (en) Determining a pointing vector for gestures performed before a depth camera
CN114647317A (en) Remote touch detection enabled by a peripheral device
WO2019085519A1 (en) Method and device for facial tracking
US12093461B2 (en) Measurement based on point selection
Schneider et al. Towards around-device interaction using corneal imaging
Qin et al. Selecting Real-World Objects via User-Perspective Phone Occlusion
Yeo et al. OmniSense: Exploring Novel Input Sensing and Interaction Techniques on Mobile Device with an Omni-Directional Camera
JP2012155480A (en) Input device for portable information apparatus

Legal Events

Date Code Title Description

WWE Wipo information: entry into national phase
Ref document number: 14127955
Country of ref document: US

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 13896672
Country of ref document: EP
Kind code of ref document: A1

REEP Request for entry into the european phase
Ref document number: 2013896672
Country of ref document: EP

ENP Entry into the national phase
Ref document number: 2016524529
Country of ref document: JP
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE