WO2015042292A1 - Systems and methods for providing response to user input using information about state changes predicting future user input - Google Patents
- Publication number
- WO2015042292A1 (PCT/US2014/056361)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- user input
- data
- model
- prediction
- low
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
- G06F3/0416—Control or interface arrangements specially adapted for digitisers
- G06F3/04162—Control or interface arrangements specially adapted for digitisers for exchanging data with external devices, e.g. smart pens, via the digitiser sensing hardware
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/048—Fuzzy inferencing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
- G06F3/03545—Pens or stylus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/041—Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/041—Indexing scheme relating to G06F3/041 - G06F3/045
- G06F2203/04101—2.5D-digitiser, i.e. digitiser detecting the X/Y position of the input means, finger or stylus, also when it does not touch, but is proximate to the digitiser's interaction surface and also measures the distance of the input means within a short range in the Z direction, possibly with a separate measurement setup
Definitions
- This Application includes an appendix consisting of 10 pages entitled “Planes on a Snake: a Model for Predicting Contact Location Free-Space Pointing Gestures,” which is incorporated into and part of the present disclosure.
- the present invention relates in general to the field of user input, and in particular to systems and methods that include a facility for predicting user input.
- FIG. 1 is a three-dimensional graph illustrating modeling of pre-touch data.
- FIG. 2 is a three-dimensional graph illustrating actual pre-touch data.
- FIG. 3 is a three-dimensional graph illustrating an example of a liftoff step.
- FIG. 4 is a three-dimensional graph illustrating an example of a corrective approach step.
- FIG. 5 is a three-dimensional graph illustrating an example of a drop-down or ballistic step.
- references to "one embodiment" or "an embodiment" in the present disclosure are not necessarily references to the same embodiment; such references mean at least one.
- touch may be used to describe the periods of time in which a user's finger, a stylus, an object, or a body part is detected by the sensor. In some embodiments, these detections occur only when the user is in physical contact with a sensor, or a device in which it is embodied. In other embodiments, the sensor may be tuned to allow the detection of "touches" or "contacts" that are hovering a fixed distance above the touch surface.
- End-to-end latency, the total time required between a user's input and the presentation of the system's response to that input, is a known limiting factor in user performance.
- latency is especially apparent in touch-based systems. Users of such systems have been found to have impaired performance under as little as 25 ms of latency, and can notice the effects of even a 2 ms delay between the time of a touch and the system's response.
- Actual latency refers to the total amount of time required for a system to compute and present a response to a user selection or input. Actual latency is endemic to interactive computing. As discussed herein, there is substantial potential to reduce actual latency if predictive methods are used to anticipate the position of user inputs and user states. Such predictions, if sufficiently accurate, may permit a system to respond to an input, or begin responding, before or concurrently with the input itself. Timed correctly, a system's response to a predicted input can be aligned with the moment of the user's actual input. Moreover, if the user's actual input was predicted with sufficient accuracy, the time required for the system's response to that input can be reduced.
- the time between the user's actual selection and the system's response to that actual selection can be less than the actual latency. While this does not reduce the total amount of time required to respond to the predicted input, i.e., actual latency, it does reduce the system's apparent latency, that is, the total amount of time between the actual input and the system's response to the actual input.
- the disclosed system and method provides faster response to user input by intelligently caching information about graphical state changes and application state changes based on predictions of future user input. By sensing the movement of a user's finger as it approaches the touch surface, the disclosed systems and methods can, with a degree of accuracy, predict future input events, such as the location of a future touch, by applying a model of user input.
- the model of user input uses current and previous input events to predict future input events. For example, by looking at the path of a finger through the air above a touch screen, the disclosed systems and methods can, with some degree of accuracy, predict the location at which the finger will make contact with the display.
- prediction about future user input is paired with software or hardware that prepares the user interface and application state to respond quickly to the predicted input in the event that it occurs.
- the disclosed system and method can predict future input events with some degree of accuracy.
- the high-speed, low-latency nature of such input devices may provide ample and timely input events to make these predictions.
- Predicted input events can include, but are not limited to, touchdown location (location where a finger/pen/hand/etc. will make contact with the display), touchup location (position where finger/pen/etc. will be lifted from the display), single or multi-finger gestures, dragging path, and others. Predicted events are discussed in further detail below.
- the predicted input events may include a prediction of timing, that is, when the event will be made.
- predicted input events may additionally include a measure of probability (e.g., between 0% and 100%) indicating the confidence that the model associates with the predicted event.
- a model can predict multiple future events, and assign a probability to each of them indicating the likelihood of their actual occurrence in the future.
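- By way of a non-limiting illustration (not part of the claimed disclosure), the sketch below shows one possible in-memory representation of such predicted events, carrying a location, an estimated time of occurrence, and a probability; the field names and the `best_prediction` helper are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PredictedEvent:
    """One candidate future input event produced by the prediction model."""
    kind: str           # e.g. "touchdown", "touchup", "drag"
    x: float            # predicted x position on the touch surface
    y: float            # predicted y position on the touch surface
    time_ms: float      # predicted time until the event occurs, in milliseconds
    probability: float  # model confidence in [0.0, 1.0]

def best_prediction(events: List[PredictedEvent],
                    min_probability: float = 0.5) -> Optional[PredictedEvent]:
    """Return the most likely predicted event above a confidence threshold."""
    candidates = [e for e in events if e.probability >= min_probability]
    return max(candidates, key=lambda e: e.probability) if candidates else None
```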
- predicted events may be paired with system components that prepare the device or application for those future events.
- an "Open" button in a GUI could pre-cache the contents of the current directory when the predicted events from the model indicate that the "Open" button was likely to be pressed.
- the GUI may be able to show the user the contents of the current directory faster because of the pre-caching than it would be able to had no prediction occurred.
- a "Save" button in a GUI that has two visual appearances, pressed and unpressed.
- the software could pre-render the pressed appearance of the "Save" button so that it is able to quickly render this appearance once the input event is actually performed.
- software may wait until the input event occurs before rendering the pressed appearance, resulting in a longer delay between input event and graphical response to that input.
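- A minimal sketch of how a GUI element might consume such predictions to warm its caches is shown below, covering the "Open" pre-caching and "Save" pre-rendering examples in spirit; the `PredictiveButton` class, its callbacks, and the confidence threshold are illustrative assumptions, not an API defined by the disclosure.

```python
import os

class PredictiveButton:
    """Button that prepares expensive work when a press is predicted."""

    def __init__(self, label, prepare, on_press):
        self.label = label
        self.prepare = prepare    # work that can safely run ahead of the touch
        self.on_press = on_press  # work required once the press actually occurs
        self._prepared = None

    def on_prediction(self, event, threshold=0.7):
        # A full implementation would also hit-test (event.x, event.y)
        # against this button's bounds; that check is omitted here.
        if event.kind == "touchdown" and event.probability >= threshold:
            self._prepared = self.prepare()

    def on_touch(self):
        # Use the pre-computed result if the prediction fired; fall back otherwise.
        data = self._prepared if self._prepared is not None else self.prepare()
        self._prepared = None
        return self.on_press(data)

# Example: an "Open" button pre-caching the current directory listing.
open_button = PredictiveButton(
    label="Open",
    prepare=lambda: sorted(os.listdir(".")),
    on_press=lambda entries: entries,  # e.g. hand the listing to a file dialog
)
```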
- the user input event is a temporary conclusion of interaction by the user and the cached data consists of commands to put the device into a low power mode. In this manner, the device can be configured to predict that the user will not touch the touch interface again or will pause before the next touch, and save substantial power by throttling down parts of the device.
- the model and prediction of touch location are used to correct for human error on touch. For example, when pressing a button that is near other buttons, the finger approach and model can be used by the processor to determine that the user intended to hit the button on the left but instead hit the left edge of the button on the right.
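- To illustrate the correction idea only: candidate buttons could be scored against a blend of the registered contact point and the location the approach model predicted, as in the sketch below; the weighting scheme is an assumption for the example, not the method of the disclosure.

```python
def resolve_intended_button(buttons, contact_xy, predicted_xy, weight=0.7):
    """Pick the button the user most likely intended to press.

    buttons: dict mapping a button id to its center (x, y) in pixels.
    contact_xy: where the touch was actually registered.
    predicted_xy: where the approach model expected the touch to land.
    weight: how much to trust the model relative to the raw contact point.
    """
    bx = weight * predicted_xy[0] + (1.0 - weight) * contact_xy[0]
    by = weight * predicted_xy[1] + (1.0 - weight) * contact_xy[1]
    return min(buttons,
               key=lambda b: (buttons[b][0] - bx) ** 2 + (buttons[b][1] - by) ** 2)
```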
- a model of the movement of the user's finger is constructed.
- the model outputs one or more predicted locations and one or more predicted timings of a touch by the finger.
- the pre-touch data is shown in black, and the modeling involves three main steps: the initial rise (in red), a corrective movement towards the target (in blue), and a final drop-down action (in green).
- a plane is fit, and the plane is projected onto the touch surface. The intersection of the projected plane and the touch surface may be used to provide a region of probable touch position.
- the initial rise may yield a larger region of probability (red rectangle)
- the corrective movement may yield a smaller region (blue rectangle)
- the final drop down action may yield an even smaller region (green rectangle).
- the prediction may be narrowed by fitting a parabola to the approach data.
- the model is adaptive, in that it may provide an increasingly narrow region of likely touch events as the user's gesture continues towards the screen.
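- One way such a plane fit and projection could be realized is sketched below: a total-least-squares plane is fit to the sampled 3D finger positions and intersected with the touch surface (z = 0) to obtain a line bounding the region of probable touch locations. This is a simplified illustration under those assumptions, not the exact fitting procedure of the disclosure.

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit; returns (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    return centroid, vh[-1]  # vh[-1] is the direction of least variance

def intersect_with_surface(centroid, normal, z=0.0):
    """Line where the fitted plane meets the touch surface z = const.

    Returns (point_on_line, unit_direction), or None if the plane is
    (nearly) parallel to the surface.
    """
    surface_normal = np.array([0.0, 0.0, 1.0])
    direction = np.cross(normal, surface_normal)
    if np.linalg.norm(direction) < 1e-9:
        return None
    # Solve for a point on both planes: normal . (p - centroid) = 0 and p_z = z.
    A = np.vstack([normal, surface_normal])
    b = np.array([normal @ centroid, z])
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point, direction / np.linalg.norm(direction)
```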
- high-fidelity tracking may require sensing capabilities beyond those of typical modern touch devices, though not beyond those of typical stylus-tracking technologies.
- high-fidelity tracking may require sensing capabilities that are too expensive today for commercial implementation in common devices, but are likely to be incorporated into common devices in the near future.
- high-fidelity tracking may require a combination of sensors including, e.g., using the combined fidelity of separate inputs such as video input and capacitive input.
- pre-touch information is used to predict user actions for computing devices, and particularly user actions for mobile devices such as touch pads and smart phones. Specifically, in an embodiment, pre-touch information may be used to predict where, and when, the user is going to touch a device.
- Participants executed the data collection study using two touch devices, constrained to a surface.
- a 10-inch tablet was responsible for the trial elements and user feedback. This was the main surface, where participants were required to execute the trial actions.
- the gesture starting position is important to the approach, defining the horizontal angle of attack. To control for this angle, we asked participants to start all gestures from a phone positioned between the user and the tablet. To start a trial, the participant was required to touch and hold the phone display until audio feedback indicated the trial start. Both the phone and the tablet were centered on the user's position and positioned 13 cm and 30 cm, respectively, from the edge of the table.
- the devices and the marker tracking system were connected to a PC, which controlled the flow of the experiment.
- the computer ran a Python application designed to: (1) read the position and rotation of the artifact; (2) receive touch down and up from tablet and phone; (3) issue commands to the tablet; and (4) log all the data.
- the computer was not responsible for any touch or visual feedback; all visuals were provided by the tablet.
- the participant was required to touch and hold the phone display, which triggered the system to advance to the next trial, shown on the tablet display.
- users were requested to wait for an acoustic cue, output by the phone and randomly triggered between 0.7 and 1 second after the trial was displayed.
- the task consisted of tapping a specified location, following a straight or elbow path, or following instructions to draw simple shapes.
- the users were instructed to return to the phone, to indicate that the trial was finished, and wait for the acoustic feedback to start the next trial. Any erroneous task was repeated, with feedback indicating failure provided by the tablet.
- Participants completed a consent form and a questionnaire to collect demographic information. They then received instruction on how to interact with the apparatus, and completed 30 training trials to practice the acoustic feedback, the task requested, and the overall flow of the trials. After the execution of each trial, a dialog box appeared to indicate the result.
- Tasks were designed according to three independent variables: starting position (9 starting positions for gestures and 5 for tapping, evenly distributed on the tablet surface), action type (tap, gesture, and draw actions), and direction (left, right, up, down).
- Participants executed the tasks using either a pen artifact or a finger glove.
- Each participant performed 6 repetitions of touch actions, 2 for each gesture combination of position and direction, and once for draw actions for a total of 330 actions per study.
- the ordering of the trials was randomized across participants. Participants were required to execute two sessions, one using a pen artifact and another tracking the finger. The ordering for the two sessions was round-robin between participants.
- Figure 2 shows an example of data collected for a single trial.
- In black are all the pre-touch points, starting on the phone position and ending on the target position on the tablet.
- the purple X represents the registered touch point in the tablet display.
- the data so collected revealed a distinctive three-phase approach gesture, whose components we refer to as the liftoff, the corrective approach, and the drop-down.
- three identifiable velocity elements were also included: top overall speed, initial descent and final descent.
- this information may be used to identify when the liftoff step is terminated, and/or when to start to look for an initial descent.
- the initial descent is defined as the point when the finger starts vertically moving towards the touch display.
- initial descent may be identified by determining when the finger's acceleration, in the z values, crosses a zero value. Even when the acceleration crosses zero, however, such a change in acceleration is not necessarily indicative that the finger will accelerate towards the display without further adjustments. Rather, it has been discovered that there is often a deceleration before the final descent is initiated.
- this detail provides fundamental information as to when the touch is going to happen, and is indicative of the final drop-down, ballistic, step. In an embodiment, these cues help detect each of the three steps described next.
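- A rough sketch of how those velocity cues might be detected from a stream of sampled finger heights follows; the frame-differencing, thresholds, and phase labels are assumptions for illustration, and a practical implementation would add smoothing and hysteresis.

```python
def detect_phases(z_samples, dt):
    """Label each height sample as liftoff, corrective, or ballistic.

    z_samples: finger height above the surface at successive frames.
    dt: time between frames, in seconds.
    """
    phases = [None] * min(2, len(z_samples))
    descending = False
    for i in range(2, len(z_samples)):
        vz = (z_samples[i] - z_samples[i - 1]) / dt        # vertical velocity
        vz_prev = (z_samples[i - 1] - z_samples[i - 2]) / dt
        az = (vz - vz_prev) / dt                           # vertical acceleration
        if vz > 0:
            phases.append("liftoff")      # finger moving away from the display
            descending = False
        elif not descending:
            phases.append("corrective")   # initial descent toward the target
            descending = True
        elif az < 0:
            phases.append("ballistic")    # accelerating drop-down to contact
        else:
            phases.append("corrective")
    return phases
```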
- a model comprising three steps successfully generalizes the touch approach to a surface of interest.
- the liftoff is defined as the portion of a movement where the user starts moving the finger away from the display. It is characterized by an increase in vertical (upward) speed and a direction towards the target. While the direction of the liftoff is not always directly aligned with the target and often requires a corrective approach, a plane fit (by minimizing error) to the liftoff data and intersected with the tablet surface is enough to create a prediction of the location (i.e., a predicted region) of a touch event very early on, and thus also to predict a low likelihood of touch in some parts of the display.
- Figure 3 shows an example of a liftoff step.
- the rise is fit by a plane that deviates slightly to the left of the target.
- movement may be fast and in the general direction of the target, but may require future corrections.
- Figure 4 shows an example of the corrective approach step.
- the correction is compensating for the liftoff deviation.
- a model may account for this deviation by fitting a new plane and reducing the predicted touch area.
- the corrective approach is characterized by an inversion in vertical velocity; this is because the finger is beginning its initial descent towards the target. A slight decrease in the overall velocity may be observed; given the significant reduction of vertical velocity, such a decrease may suggest that the horizontal velocity is increasing, thus compensating for the slowdown in vertical velocity. This effect is believed to be a result of the finger moving away from the plane defined during liftoff as it corrects its path toward the target.
- a second plane is fit to the data points that deviate from the liftoff defined plane.
- the model may presume that the deviation of the surface intersection of the plane formed from corrective data, relative to the surface intersection of the plane formed from liftoff data, has a strong correlation to the final target position. For example, if a deviation to the left of the liftoff plane is observed, the right side of the liftoff plane's region can be disregarded because the target is likely also to the left of the liftoff plane.
- a rapid downward movement indicates that the third step, the drop-down or ballistic step, has been reached.
- a third plane (i.e., a ballistic plane) may be fit to the drop-down data points.
- the third plane may account for deviation from the corrective approach plane, and in an embodiment, the model also attempts to fit a parabola to the drop-down / ballistic event.
- the model may accurately predict a touch event with some degree of likelihood.
- the model may be used to predict the touch (to a very high degree of likelihood), within a circle of 1.5cm in radius, from a vertical distance of 2.5cm.
- the model may be used to accurately predict a non-interrupted touch (e.g., without a change in the user's desire, and without an external event that may move the surface) within a circle of 1.5cm in radius, from a vertical distance of 2.5cm.
- the finger is relatively close to the tablet, speeding up towards the target.
- the finger may be speeding up due to the force of gravity, or due to the user employing a final adjustment that speeds up the finger until it touches the display.
- this dropdown or ballistic step is characterized by a significant increase in vertical velocity and may be accompanied by a second deviation from the corrective approach.
- the ballistic step is the last step of the pre-touch movement; it is the step during which, if completed, the user will touch the display.
- the movement during the ballistic step may also be fit to a plane.
- the ballistic step movement is fit to a plane when a material deviation from the corrective approach plane is detected.
- the plane is fitted to the data points that deviate from the corrective plane.
- the ballistic step movement is modeled as a parabola to further reduce the size of the probable area of touch.
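- As an illustration only, the drop-down samples could be fit with a parabola in height versus horizontal travel, and the parabola's root ahead of the finger then estimates where it meets the surface; the straight-line parameterization of the final approach is an assumption made for this sketch.

```python
import numpy as np

def predict_landing_point(points):
    """Fit z as a parabola in horizontal travel and extrapolate to z = 0.

    points: Nx3 array of (x, y, z) samples from the drop-down phase.
    Returns an (x, y) estimate of the touch location, or None if no
    real root lies ahead of the finger.
    """
    pts = np.asarray(points, dtype=float)
    xy, z = pts[:, :2], pts[:, 2]
    direction = xy[-1] - xy[0]                 # assumed straight final approach
    norm = np.linalg.norm(direction)
    if norm < 1e-9:
        return None
    direction = direction / norm
    s = (xy - xy[0]) @ direction               # horizontal travel of each sample
    coeffs = np.polyfit(s, z, 2)               # z(s) = a*s^2 + b*s + c
    roots = np.roots(coeffs)
    ahead = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= s[-1]]
    if not ahead:
        return None
    return tuple(xy[0] + min(ahead) * direction)
```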
- predictions and states can be modeled.
- Examples of predictions and states that can be modeled include, e.g., the moment at which a touch or other event will occur, the location of the touch or other event, a confidence level associated with the predicted touch or other event, an identification of which gesture is being predicted, an identification of which hand is being used, an identification of which arm is being used, handedness, an estimate of how soon a prediction can be made, user state (including but not limited to: frustrated, tired, shaky, drinking, intention of the user, level of confusion, and other physical and psychological states), a biometric identification of which of multiple users is touching the sensor (e.g., which player in a game of chess), and orientation or intended orientation of the sensor (e.g., landscape vs. portrait).
- predictions and states can be used not only to reduce latency but in other software functions and decision making.
- the trajectory of the user's finger from the "T" key toward the "H" key on a virtual keyboard that is displayed on the sensor can be used to compare the currently typed word to a dictionary to increase the accuracy of predictive text analysis (e.g., real time display of a word that is predicted while a user is typing the word).
- Such trajectory can also be used, e.g., to increase the target size of letters that are predicted to be pressed next.
- Such trajectory may be interpreted in software over time to define a curve that is used by the model for prediction of user input location and time.
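- A hedged sketch of the keyboard example follows: keys whose letters are consistent with dictionary continuations of the typed prefix, and which lie in the predicted touch region, are given a larger effective hit target. The key-layout format, dictionary, and boost factor are illustrative assumptions.

```python
def boost_likely_keys(key_bounds, predicted_xy, prefix, dictionary, boost=1.5):
    """Return a hit-target scale factor for each key on a virtual keyboard.

    key_bounds: dict mapping a letter to its (x, y, width, height) rectangle.
    predicted_xy: (x, y) touch location predicted by the pre-touch model.
    prefix: the word typed so far, e.g. "t" while the user aims for "the".
    dictionary: iterable of known words used for next-letter prediction.
    """
    next_letters = {w[len(prefix)].lower()
                    for w in dictionary
                    if len(w) > len(prefix) and w.lower().startswith(prefix.lower())}
    px, py = predicted_xy
    scales = {}
    for letter, (x, y, w, h) in key_bounds.items():
        in_region = (x <= px <= x + w) and (y <= py <= y + h)
        # Enlarge keys that both match the language model and lie in the
        # predicted region; leave all other keys at their normal size.
        scales[letter] = boost if (letter in next_letters and in_region) else 1.0
    return scales
```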
- the predictions and states described above can be also used in software to reject false positives in software interpretation of user input.
- the sensor senses an area corresponding to the contact area between the finger and the display. This contact area is mapped to a pixel in one of many ways, such as picking the centroid, the center of mass, the top of the bounding box, etc.
- the predictive model as described above can be used to inform the mapping of contact area to pixels based on information about the approach and the likely shape of the finger pressing into the screen. The contact area does not always coincide with the intended target. Models have been proposed that attempt to correct this difference. The availability of pre-touch as described above can be used to educate the models by not only providing the contact area but also distinguishing touches that are equivalently sensed yet have distinct approaches.
- a trajectory with a final approach arching from the left might be intended for a target left of the initial contact, whereas an approach with a strong vertical drop might be intended for the target closest to the fingernail.
- interpretation of the contact shape (currently based solely on the sensed touch region) might also benefit from the approach trajectory. For example, as a user gestures to unlock a mobile device, the sensed region of the finger shifts slightly, due to the angle of attack on touch-down. Data concerning how the finger approaches can be used to understand the shifts of contact shape and determine if they are intentional (finger rocking) or just secondary effects of a fast approach that initiates a finger roll after touch-down.
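- The centroid-style mapping mentioned above can be written in a few lines, as in the sketch below; the optional offset derived from the approach trajectory is purely illustrative of how pre-touch data might inform the mapping and is not a formula given in the disclosure.

```python
def contact_to_pixel(contact_cells, approach_offset=(0.0, 0.0)):
    """Map a sensed contact area to a single pixel coordinate.

    contact_cells: non-empty iterable of (x, y) sensor cells reported as touched.
    approach_offset: an (dx, dy) correction derived from the pre-touch
        trajectory (e.g. shifting toward the inferred target); defaults to none.
    """
    cells = list(contact_cells)
    cx = sum(x for x, _ in cells) / len(cells)  # centroid x of the contact patch
    cy = sum(y for _, y in cells) / len(cells)  # centroid y of the contact patch
    return cx + approach_offset[0], cy + approach_offset[1]
```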
- the model predictions described above indicate where the user is most likely going to touch and what regions of the display are not likely to receive touches.
- One problem of touch technology is palm rejection, i.e., how does a system decide when a touch is intentional versus when a touch is a false positive due to hand parts other than the finger being sensed. Once a prediction is made, any touch recognized outside the predicted area can be safely classified as a false positive and ignored. This effectively allows the user to rest her hand on the display, or even train the sensor to differentiate between an approach intended to grasp the device (a low approach from the side) and a tap (as described by our data collection).
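- A minimal sketch of this palm-rejection idea, assuming a circular predicted region: contacts falling outside the region are treated as unintentional and dropped. The region shape, radius, and pixel-density value are assumptions for the example.

```python
import math

def filter_touches(touches, predicted_xy, radius_cm=1.5, px_per_cm=40.0):
    """Keep only touches that fall inside the predicted touch region.

    touches: list of (x, y) contact points in pixels.
    predicted_xy: predicted touch location in pixels, or None when no
        prediction is available (in which case nothing is rejected).
    radius_cm: radius of the predicted region (e.g. ~1.5 cm near contact).
    px_per_cm: display density used to convert the radius to pixels.
    """
    if predicted_xy is None:
        return touches
    radius_px = radius_cm * px_per_cm
    px, py = predicted_xy
    return [(x, y) for (x, y) in touches
            if math.hypot(x - px, y - py) <= radius_px]
```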
- each block of the block diagrams or operational illustrations, and combinations of blocks in the block diagrams or operational illustrations may be implemented by means of analog or digital hardware and computer program instructions.
- These computer program instructions may be stored on computer-readable media and provided to a processor of a general purpose computer, special purpose computer, ASIC, or other programmable data processing apparatus, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the functions/acts specified in the block diagrams or operational block or blocks.
- the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations.
- two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
- At least some aspects disclosed can be embodied, at least in part, in software. That is, the techniques may be carried out in a special purpose or general purpose computer system or other data processing system in response to its processor, such as a microprocessor, executing sequences of instructions contained in a memory, such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
- processor such as a microprocessor
- a memory such as ROM, volatile RAM, non-volatile memory, cache or a remote storage device.
- Routines executed to implement the embodiments may be implemented as part of an operating system, firmware, ROM, middleware, service delivery platform, SDK (Software Development Kit) component, web services, or other specific application, component, program, object, module or sequence of instructions referred to as "computer programs.” Invocation interfaces to these routines can be exposed to a software development community as an API (Application Programming Interface).
- the computer programs typically comprise one or more instructions set at various times in various memory and storage devices in a computer, and that, when read and executed by one or more processors in a computer, cause the computer to perform operations necessary to execute elements involving the various aspects.
- a non-transitory machine-readable medium can be used to store software and data which, when executed by a data processing system, cause the system to perform various methods.
- the executable software and data may be stored in various places including for example ROM, volatile RAM, non-volatile memory and/or cache. Portions of this software and/or data may be stored in any one of these storage devices.
- the data and instructions can be obtained from centralized servers or peer-to-peer networks. Different portions of the data and instructions can be obtained from different centralized servers and/or peer-to-peer networks at different times and in different communication sessions or in a same communication session.
- the data and instructions can be obtained in entirety prior to the execution of the applications. Alternatively, portions of the data and instructions can be obtained dynamically, just in time, when needed for execution. Thus, it is not required that the data and instructions be on a machine-readable medium in entirety at a particular instance of time.
- Examples of computer-readable media include but are not limited to recordable and non-recordable type media such as volatile and non-volatile memory devices, read only memory (ROM), random access memory (RAM), flash memory devices, floppy and other removable disks, magnetic disk storage media, optical storage media (e.g., Compact Disk Read-Only Memory (CD ROMS), Digital Versatile Disks (DVDs), etc.), among others.
- a machine readable medium includes any mechanism that provides (e.g., stores) information in a form accessible by a machine (e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.).
- a machine e.g., a computer, network device, personal digital assistant, manufacturing tool, any device with a set of one or more processors, etc.
- hardwired circuitry may be used in combination with software instructions to implement the techniques.
- the techniques are neither limited to any specific combination of hardware circuitry and software nor to any particular source for the instructions executed by the data processing system.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Software Systems (AREA)
- Fuzzy Systems (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Mathematical Physics (AREA)
- User Interface Of Digital Computer (AREA)
- Input From Keyboards Or The Like (AREA)
Priority Applications (10)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112016006090A BR112016006090A2 (en) | 2013-09-18 | 2014-09-18 | systems and methods for providing user input response using state change information predicting future user input |
AU2014323480A AU2014323480A1 (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input |
CN201480051211.6A CN105556438A (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input |
EP14845628.8A EP3047360A4 (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input |
CA2923436A CA2923436A1 (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes and predicting future user input |
MX2016003408A MX2016003408A (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input. |
JP2016543990A JP2016534481A (en) | 2013-09-18 | 2014-09-18 | System and method for providing a response to user input using information regarding state changes and predictions of future user input |
SG11201601852SA SG11201601852SA (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input |
KR1020167008137A KR20160058117A (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input |
IL244456A IL244456A0 (en) | 2013-09-18 | 2016-03-06 | Systems and methods for providing response to user input using information about state changes predicting future user input |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361879245P | 2013-09-18 | 2013-09-18 | |
US61/879,245 | 2013-09-18 | ||
US201361880887P | 2013-09-21 | 2013-09-21 | |
US61/880,887 | 2013-09-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015042292A1 true WO2015042292A1 (en) | 2015-03-26 |
Family
ID=52689400
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2014/056361 WO2015042292A1 (en) | 2013-09-18 | 2014-09-18 | Systems and methods for providing response to user input using information about state changes predicting future user input |
Country Status (12)
Country | Link |
---|---|
US (1) | US20150134572A1 (en) |
EP (1) | EP3047360A4 (en) |
JP (1) | JP2016534481A (en) |
KR (1) | KR20160058117A (en) |
CN (1) | CN105556438A (en) |
AU (1) | AU2014323480A1 (en) |
BR (1) | BR112016006090A2 (en) |
CA (1) | CA2923436A1 (en) |
IL (1) | IL244456A0 (en) |
MX (1) | MX2016003408A (en) |
SG (1) | SG11201601852SA (en) |
WO (1) | WO2015042292A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3818441A4 (en) * | 2018-08-27 | 2021-09-01 | Samsung Electronics Co., Ltd. | Methods and systems for managing an electronic device |
WO2022256125A1 (en) * | 2021-06-01 | 2022-12-08 | Microsoft Technology Licensing, Llc | Digital marking prediction by posture |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9715282B2 (en) * | 2013-03-29 | 2017-07-25 | Microsoft Technology Licensing, Llc | Closing, starting, and restarting applications |
US9483134B2 (en) | 2014-10-17 | 2016-11-01 | Elwha Llc | Systems and methods for actively resisting touch-induced motion |
US20170123622A1 (en) * | 2015-10-28 | 2017-05-04 | Microsoft Technology Licensing, Llc | Computing device having user-input accessory |
US10552752B2 (en) * | 2015-11-02 | 2020-02-04 | Microsoft Technology Licensing, Llc | Predictive controller for applications |
US9847079B2 (en) * | 2016-05-10 | 2017-12-19 | Google Llc | Methods and apparatus to use predicted actions in virtual reality environments |
EP3400505A1 (en) | 2016-05-10 | 2018-11-14 | Google LLC | Volumetric virtual reality keyboard methods, user interface, and interactions |
CN108604122B (en) * | 2016-05-10 | 2022-06-28 | 谷歌有限责任公司 | Method and apparatus for using predicted actions in a virtual reality environment |
US10732759B2 (en) | 2016-06-30 | 2020-08-04 | Microsoft Technology Licensing, Llc | Pre-touch sensing for mobile interaction |
US10061430B2 (en) * | 2016-09-07 | 2018-08-28 | Synaptics Incorporated | Touch force estimation |
GB201618288D0 (en) * | 2016-10-28 | 2016-12-14 | Remarkable As | Interactive displays |
EP3316186B1 (en) * | 2016-10-31 | 2021-04-28 | Nokia Technologies Oy | Controlling display of data to a person via a display apparatus |
CN108604142B (en) * | 2016-12-01 | 2021-05-18 | 华为技术有限公司 | Touch screen device operation method and touch screen device |
US10261685B2 (en) * | 2016-12-29 | 2019-04-16 | Google Llc | Multi-task machine learning for predicted touch interpretations |
US20180239509A1 (en) * | 2017-02-20 | 2018-08-23 | Microsoft Technology Licensing, Llc | Pre-interaction context associated with gesture and touch interactions |
CN110199242B (en) * | 2017-02-24 | 2023-08-29 | 英特尔公司 | Configuring a basic clock frequency of a processor based on usage parameters |
US11119621B2 (en) | 2018-09-11 | 2021-09-14 | Microsoft Technology Licensing, Llc | Computing device display management |
US11717748B2 (en) * | 2019-11-19 | 2023-08-08 | Valve Corporation | Latency compensation using machine-learned prediction of user input |
US11354969B2 (en) * | 2019-12-20 | 2022-06-07 | Igt | Touch input prediction using gesture input at gaming devices, and related devices, systems, and methods |
KR20220004894A (en) * | 2020-07-03 | 2022-01-12 | 삼성전자주식회사 | Device and method for reducing display output latency |
KR20220093860A (en) * | 2020-12-28 | 2022-07-05 | 삼성전자주식회사 | Method for processing image frame and electronic device supporting the same |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6486874B1 (en) * | 2000-11-06 | 2002-11-26 | Motorola, Inc. | Method of pre-caching user interaction elements using input device position |
GB0315151D0 (en) * | 2003-06-28 | 2003-08-06 | Ibm | Graphical user interface operation |
US7379562B2 (en) * | 2004-03-31 | 2008-05-27 | Microsoft Corporation | Determining connectedness and offset of 3D objects relative to an interactive surface |
US20060244733A1 (en) * | 2005-04-28 | 2006-11-02 | Geaghan Bernard O | Touch sensitive device and method using pre-touch information |
US7567240B2 (en) * | 2005-05-31 | 2009-07-28 | 3M Innovative Properties Company | Detection of and compensation for stray capacitance in capacitive touch sensors |
US20090243998A1 (en) * | 2008-03-28 | 2009-10-01 | Nokia Corporation | Apparatus, method and computer program product for providing an input gesture indicator |
WO2010047994A2 (en) * | 2008-10-20 | 2010-04-29 | 3M Innovative Properties Company | Touch systems and methods utilizing customized sensors and genericized controllers |
US20100153890A1 (en) * | 2008-12-11 | 2010-06-17 | Nokia Corporation | Method, Apparatus and Computer Program Product for Providing a Predictive Model for Drawing Using Touch Screen Devices |
US20100315266A1 (en) * | 2009-06-15 | 2010-12-16 | Microsoft Corporation | Predictive interfaces with usability constraints |
US8484573B1 (en) * | 2012-05-23 | 2013-07-09 | Google Inc. | Predictive virtual keyboard |
US9122351B2 (en) * | 2013-03-15 | 2015-09-01 | Verizon Patent And Licensing Inc. | Apparatus for detecting proximity of object near a touchscreen |
-
2014
- 2014-09-18 AU AU2014323480A patent/AU2014323480A1/en not_active Abandoned
- 2014-09-18 BR BR112016006090A patent/BR112016006090A2/en not_active Application Discontinuation
- 2014-09-18 US US14/490,363 patent/US20150134572A1/en not_active Abandoned
- 2014-09-18 SG SG11201601852SA patent/SG11201601852SA/en unknown
- 2014-09-18 WO PCT/US2014/056361 patent/WO2015042292A1/en active Application Filing
- 2014-09-18 KR KR1020167008137A patent/KR20160058117A/en not_active Application Discontinuation
- 2014-09-18 CA CA2923436A patent/CA2923436A1/en not_active Abandoned
- 2014-09-18 EP EP14845628.8A patent/EP3047360A4/en not_active Withdrawn
- 2014-09-18 MX MX2016003408A patent/MX2016003408A/en unknown
- 2014-09-18 CN CN201480051211.6A patent/CN105556438A/en active Pending
- 2014-09-18 JP JP2016543990A patent/JP2016534481A/en active Pending
-
2016
- 2016-03-06 IL IL244456A patent/IL244456A0/en unknown
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110175832A1 (en) * | 2010-01-19 | 2011-07-21 | Sony Corporation | Information processing apparatus, operation prediction method, and operation prediction program |
US20120169646A1 (en) * | 2010-12-29 | 2012-07-05 | Microsoft Corporation | Touch event anticipation in a computing device |
US20130082962A1 (en) * | 2011-09-30 | 2013-04-04 | Samsung Electronics Co., Ltd. | Method and apparatus for handling touch input in a mobile terminal |
US20130181908A1 (en) * | 2012-01-13 | 2013-07-18 | Microsoft Corporation | Predictive compensation for a latency of an input device |
EP2634680A1 (en) * | 2012-02-29 | 2013-09-04 | BlackBerry Limited | Graphical user interface interaction on a touch-sensitive device |
Non-Patent Citations (1)
Title |
---|
See also references of EP3047360A4 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3818441A4 (en) * | 2018-08-27 | 2021-09-01 | Samsung Electronics Co., Ltd. | Methods and systems for managing an electronic device |
US11216330B2 (en) | 2018-08-27 | 2022-01-04 | Samsung Electronics Co., Ltd. | Methods and systems for managing an electronic device |
WO2022256125A1 (en) * | 2021-06-01 | 2022-12-08 | Microsoft Technology Licensing, Llc | Digital marking prediction by posture |
US11803255B2 (en) | 2021-06-01 | 2023-10-31 | Microsoft Technology Licensing, Llc | Digital marking prediction by posture |
Also Published As
Publication number | Publication date |
---|---|
SG11201601852SA (en) | 2016-04-28 |
KR20160058117A (en) | 2016-05-24 |
BR112016006090A2 (en) | 2017-08-01 |
JP2016534481A (en) | 2016-11-04 |
EP3047360A1 (en) | 2016-07-27 |
MX2016003408A (en) | 2016-06-30 |
CA2923436A1 (en) | 2015-03-26 |
US20150134572A1 (en) | 2015-05-14 |
CN105556438A (en) | 2016-05-04 |
IL244456A0 (en) | 2016-04-21 |
EP3047360A4 (en) | 2017-07-19 |
AU2014323480A1 (en) | 2016-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150134572A1 (en) | Systems and methods for providing response to user input information about state changes and predicting future user input | |
US11599154B2 (en) | Adaptive enclosure for a mobile computing device | |
US10592050B2 (en) | Systems and methods for using hover information to predict touch locations and reduce or eliminate touchdown latency | |
US9298266B2 (en) | Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects | |
US8365091B2 (en) | Non-uniform scrolling | |
US9317171B2 (en) | Systems and methods for implementing and using gesture based user interface widgets with camera input | |
CN103858073A (en) | Touch free interface for augmented reality systems | |
US20090102604A1 (en) | Method and system for controlling computer applications | |
WO2015196703A1 (en) | Application icon display method and apparatus | |
Xia et al. | Zero-latency tapping: using hover information to predict touch locations and eliminate touchdown latency | |
JP2014501413A (en) | User interface, apparatus and method for gesture recognition | |
US10228794B2 (en) | Gesture recognition and control based on finger differentiation | |
Bonnet et al. | Extending the vocabulary of touch events with ThumbRock | |
WO2014029245A1 (en) | Terminal input control method and apparatus | |
US9958946B2 (en) | Switching input rails without a release command in a natural user interface | |
US20150268736A1 (en) | Information processing method and electronic device | |
US20230259265A1 (en) | Devices, methods, and graphical user interfaces for navigating and inputting or revising content | |
US10133346B2 (en) | Gaze based prediction device and method | |
KR101405344B1 (en) | Portable terminal and method for controlling screen using virtual touch pointer | |
JP2024512246A (en) | Virtual auto-aiming | |
CN116166161A (en) | Interaction method based on multi-level menu and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | WWE | Wipo information: entry into national phase | Ref document number: 201480051211.6; Country of ref document: CN |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14845628; Country of ref document: EP; Kind code of ref document: A1 |
 | ENP | Entry into the national phase | Ref document number: 2923436; Country of ref document: CA |
 | REEP | Request for entry into the european phase | Ref document number: 2014845628; Country of ref document: EP |
 | WWE | Wipo information: entry into national phase | Ref document number: 2014845628; Country of ref document: EP |
 | WWE | Wipo information: entry into national phase | Ref document number: 244456; Country of ref document: IL |
 | WWE | Wipo information: entry into national phase | Ref document number: MX/A/2016/003408; Country of ref document: MX |
 | ENP | Entry into the national phase | Ref document number: 2016543990; Country of ref document: JP; Kind code of ref document: A |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | ENP | Entry into the national phase | Ref document number: 20167008137; Country of ref document: KR; Kind code of ref document: A |
 | REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112016006090; Country of ref document: BR |
 | ENP | Entry into the national phase | Ref document number: 2014323480; Country of ref document: AU; Date of ref document: 20140918; Kind code of ref document: A |
 | ENP | Entry into the national phase | Ref document number: 112016006090; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20160318 |