WO2013071198A2 - Finger-mapped character entry systems - Google Patents

Finger-mapped character entry systems Download PDF

Info

Publication number
WO2013071198A2
Authority
WO
WIPO (PCT)
Prior art keywords
finger
gesture
input
fingers
user
Prior art date
Application number
PCT/US2012/064563
Other languages
French (fr)
Other versions
WO2013071198A3 (en)
Inventor
Joseph T. LAPP
Original Assignee
Lapp Joseph T
Priority date
Filing date
Publication date
Application filed by Lapp Joseph T filed Critical Lapp Joseph T
Publication of WO2013071198A2 publication Critical patent/WO2013071198A2/en
Priority to US14/272,736 priority Critical patent/US10082950B2/en
Publication of WO2013071198A3 publication Critical patent/WO2013071198A3/en
Priority to US16/037,077 priority patent/US11086509B2/en
Priority to US17/106,861 priority patent/US20210109651A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus

Definitions

  • a finger-mapped character entry system is a user interface for entering text on a touchscreen device. It is analogous to a virtual keyboard, but it has no buttons. It may replace the virtual keyboard on a touchscreen device, literally occurring in the same space that would otherwise show a virtual keyboard. The space where the user enters characters into the system is called an "input area.”
  • a finger-mapped character entry system may have multiple input areas serving different purposes, such as one for each hand or one for character entry and another for cursor entry.
  • any reference to a "character entry system,” or just a “system,” is a reference to a finger-mapped character entry system.
  • An “action” is a gesture or other user interface interaction that a user performs.
  • An “event” is a notice of an action as received by the character entry system.
  • the "host” or “host system” is the application or operating system in which the character entry system runs and to which the system reports information about user actions.
  • a “behavior” is a specific function or activity that a character entry system or host might perform in response to a user action, such as moving a cursor or inserting a character into text.
  • a “request” is a message that a character entry system sends to the host to ask the host to perform a behavior.
  • Input areas may be modal, supporting different "input modes.”
  • a character entry system having only one input area normally has at least two input modes, one for character actions and another for cursor actions.
  • "Character actions” include character insertion and character deletion gestures.
  • “Cursor actions” include cursor gestures and text highlighting gestures for clipboard operations. The user switches between input modes as needed to perform these actions. This document defines gestures for selecting the input mode, but an input area may also provide buttons for selecting the input mode.
  • Finger-mapped character entry systems get away without keyboard buttons because they accept gestures and can identify the finger that performs each gesture.
  • the effect of a gesture is a function of both the gesture and the finger that performs it, and the gestures available to each finger are capable of expressing a great variety of values. All together, a character entry system is capable of providing a single hand with enough gestural values to represent every character found on a conventional physical keyboard.
  • Finger mapping requires that the user place the hand on the input area in a particular fashion that is called “home position.” In home position, the hand takes a posture similar to the posture of a hand resting on the home row of a physical keyboard. The fingers typically gesture by sliding up and down on the touchscreen from this position, usually to perform a multi-valued stroke. In a multi-valued stroke, the direction the finger slides selects a set of values, and the distance the finger slides in that direction selects a value from the set. Multi-valued strokes allow a single finger to express many values without requiring the hand to move.
  • Finger mapping also supports multi-valued "sweeps" involving two (or more) fingers sliding at once.
  • Before a user can employ finger mapping on a character entry system, the system must be calibrated for the user's hand, recording information such as the location and orientation of the home position, the width of the hand, and the reach of the fingers.
  • the calibration values may even be different for the left and right hands.
  • Calibrations may also specify preferences for the length and timing properties of gestures, including the gestures of input modes not involving finger mapping. Since multiple users may use a single touchscreen device, an implementation may need to track multiple calibrations and allow for their dynamic selection. This document defines many calibration measures and describes several techniques for calibrating hands for finger mapping.
  • Character entry systems can be classified according to the maximum number of values a multi-valued gesture takes, the number of simultaneous finger or thumb touches it requires, and the language or language group for which it is designed.
  • This document concludes with a specification of a two-valued two-touch English system, employing multi-valued gestures of at most two values and requiring only two simultaneous inputs. Two-valued systems are easiest to learn to use, and two-input systems are compatible with the broadest range of devices.
  • the specification also describes a three-touch variant in which the user's thumb may rest unobtrusively on the touchscreen.
  • a user inputs characters into a character entry system by performing gestures on a touchscreen.
  • One or more regions of the touchscreen must be able to interpret the gestures.
  • Each region of the screen that is capable of interpreting gestures of the character entry system is called an "input area.” It is envisioned that a character entry system would normally appear in place of the conventional virtual keyboard.
  • the host determines when an input area is enabled, where on the touchscreen it is positioned, and what size and shape the input area takes. For example, one device may have a specific region of the screen always reserved for the input area, while another may display the input area only when the user selects an input field on the screen, removing the input area after the user has completed entering the field.
  • the period during which an input area is enabled is called the "input session.”
  • a user employs a character entry system to input text into the device, but a character entry system only issues requests to its host, and the host may interpret those requests any way it deems appropriate.
  • When entering text, the host typically displays the text on the screen as it is entered.
  • the portion of the screen available to the host for responding to character entry system requests is called the "host area.” Because it is possible to implement character entry systems that do not render anything to the input area, it is possible for an input area and a host area to partially or wholly overlap, provided that the host area won't recognize the input area's gestures.
  • Each input area is dedicated to a different host and hence also to a different host area.
  • One input area is for character input, the other for moving the cursor.
  • Each input area is dedicated to a different user for use in a multi-user game.
  • One input area provides a tutorial for use of the second input area.
  • a calibration selection drop-down button or radio button for allowing the user to select from previously established calibrations. This is helpful when multiple users use the same device; this button selects the user. To support this, there should be a means for entering new users and assigning them names. The calibrations made while using the system as this user become associated with that user.
  • the input area can also display information that may be helpful to the user, such as a characterization of the calibration currently in effect. This document describes some useful visual feedback components as they become relevant.
  • Because the character entry system is designed mainly as a system of gestures performed on the input area, an implementation must be careful to distinguish gestures from interactions with visual interface elements.
  • One approach is to place visual elements on the input area where they won't interfere with the gestures, or to only recognize tap gestures in regions of the input area where there are no buttons.
  • Another approach is to distinguish button presses from gestures by interpreting a finger held in one place for an extended period of time as being a button press. This can be disambiguated from the "hold gesture” described later by recognizing a hold gesture only when the user simultaneously performs another gesture.
  • In order to allow for the entry of a large number of characters, a finger-mapped character entry system must recognize a large number of simple, distinct gestures in the limited space of an input area.
  • This section defines gestures that are useful for this purpose, independently of the behaviors that a character entry system may assign to them. The behavior of a gesture may depend on the input mode active at the time of the gesture or the particular finger that performs the gesture.
  • This section often specifies multiple approaches for computing values associated with gestures. In each case, it is important that a character entry system choose one of the approaches for computing the value and be consistent with the value's computation.
  • Finger location: the "location" of a finger on a touchscreen is given by a point. It is not given by an area, as one might expect, since the finger actually touches an area of the screen.
  • the touchscreen device's operating system typically reduces the touched area to a single representative point.
  • Stroke: a gesture whereby a user places a finger on the touchscreen, drags the finger some distance across the touchscreen, and lifts the finger. Multiple fingers may be engaged in strokes simultaneously.
  • the "starting location" of a stroke is the location at which the finger began the stroke.
  • the "current location" of a finger is the location of the finger at some point during a stroke, prior to lifting the finger at the end of the stroke.
  • the "ending location" of a stroke is the location at which the finger is lifted upon completing a stroke.
  • the "vector" of an in-progress stroke is the line segment that begins at the stroke's starting location and ends at the stroke's current location.
  • the "vector" of a completed stroke is the line segment that begins at the stroke's starting location and ends at the stroke's ending location.
  • the vector is straight regardless of the path that the finger traced.
  • a vector also has a "direction,” which points from the starting location to the current or ending location.
  • Stroke length: the "length" of a stroke is the length of the stroke's vector.
  • Stroke angle: the "angle" of a stroke is the angle of the stroke's vector relative to some fixed axis against which the character entry system measures all angles. Normally an edge of the screen serves as this fixed axis.
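The stroke vector, length, and angle definitions above can be sketched as follows. This is an illustrative sketch, not part of the specification: the function names are invented, and the angle is measured here against the screen's horizontal edge as the fixed axis.

```python
import math

def stroke_vector(start, end):
    # Vector of a stroke: from the starting location to the
    # current (or ending) location, regardless of the traced path.
    return (end[0] - start[0], end[1] - start[1])

def stroke_length(start, end):
    # The "length" of a stroke is the length of its vector.
    vx, vy = stroke_vector(start, end)
    return math.hypot(vx, vy)

def stroke_angle(start, end):
    # The "angle" of a stroke, measured against a fixed axis
    # (here the screen's horizontal edge), in degrees in [0, 360).
    vx, vy = stroke_vector(start, end)
    return math.degrees(math.atan2(vy, vx)) % 360
```

Note that the vector is straight regardless of the path the finger traced, so only the two endpoints matter.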
  • a "two-fingered sweep” is a gesture in which the user simultaneously sweeps two fingers across the touchscreen along roughly parallel paths; it is a gesture consisting of the simultaneous, parallel strokes of two fingers.
  • a user employing a two-fingered sweep envisions touching both fingers to the screen at the same time, sweeping them for the same amount of time, and lifting them from the screen at the same time.
  • users will vary in their ability to master the timing, so the gesture should implement a heuristic for detecting and approximating the sweep.
  • One possible heuristic is for the character entry system to delay for a predetermined amount of time the recognition of any single-finger stroke it detects. If the touchscreen registers a second finger while the first is still active and before the delay period has expired, the gesture is recognized as a two-fingered sweep; otherwise the gesture is recognized as a single-finger stroke. Once recognized as a two-finger sweep, the gesture ends when at least one of the fingers is lifted.
  • This approach allows the character entry system to provide the user with feedback about the nature of the gesture during the gesture instead of having to wait for the gesture to end before recognizing it.
  • the delay period could, but need not, be part of the calibration. A reasonable delay period would be 200ms.
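The delay heuristic above might be sketched as a small state machine. The class and method names are hypothetical; the 200ms figure is the text's suggested default.

```python
class SweepDetector:
    """Sketch of the delay heuristic: a one-finger touch is held undecided
    for DELAY_MS; if a second finger lands before the delay expires (while
    the first is still down), the gesture is a two-fingered sweep."""
    DELAY_MS = 200  # reasonable default per the text; may be calibrated

    def __init__(self):
        self.first_down_at = None   # timestamp of the first finger's touch
        self.kind = None            # 'stroke' or 'sweep' once decided

    def finger_down(self, now_ms):
        if self.first_down_at is None:
            self.first_down_at = now_ms              # start the delay period
        elif self.kind is None and now_ms - self.first_down_at < self.DELAY_MS:
            self.kind = 'sweep'                      # second finger in time

    def tick(self, now_ms):
        # Delay expired with no second finger: commit to a single stroke.
        if (self.kind is None and self.first_down_at is not None
                and now_ms - self.first_down_at >= self.DELAY_MS):
            self.kind = 'stroke'
        return self.kind
```

Deciding the gesture's kind this early is what allows the system to give the user feedback during the gesture rather than after it ends.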
  • Finger start locations: the starting locations of both fingers' strokes.
  • Finger end locations: the ending locations of both fingers' strokes.
  • Sweep vector: either the vector of one of the finger strokes, or the line segment that connects the location midway between the starting locations of the fingers and the location midway between the current (or ending) locations of the fingers.
  • the midway locations may be calculated as the averages of the coordinates; midway between (a, b) and (c, d) is (a/2 + c/2, b/2 + d/2).
  • the direction of the sweep vector points from the midpoint between the starting locations to the midpoint between the current (or ending) locations.
  • Sweep length: the length of the sweep vector.
  • the length of the sweep vector as computed from the midpoints will not, in general, equal the average of the lengths of the sweep's two strokes.
  • the average of the lengths may be used as an approximation at some sacrifice to the sensitivity of the gesture to the particulars of the sweep vector.
  • Sweep angle: either the angle of the stroke of one of the fingers or the average of the angles of the strokes of both fingers.
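A minimal sketch of the midpoint-based computations described above; the function names are illustrative, and the angle approximation shown is the averaging option.

```python
import math

def midpoint(p, q):
    # Midway between (a, b) and (c, d) is ((a + c)/2, (b + d)/2).
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def sweep_vector(start1, start2, end1, end2):
    # Sweep vector: from the midpoint of the starting locations
    # to the midpoint of the current (or ending) locations.
    m0 = midpoint(start1, start2)
    m1 = midpoint(end1, end2)
    return (m1[0] - m0[0], m1[1] - m0[1])

def sweep_length(start1, start2, end1, end2):
    # Sweep length: the length of the sweep vector.
    vx, vy = sweep_vector(start1, start2, end1, end2)
    return math.hypot(vx, vy)

def sweep_angle(angle1_deg, angle2_deg):
    # Approximation using the average of the two strokes' angles.
    return (angle1_deg + angle2_deg) / 2
```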
  • Finger separation distance: either the distance between the two finger start locations or the distance between the two finger end locations.
  • Line separation distance: the distance between the two parallel lines traced by the fingers.
  • the vector of each finger's stroke may be thought of as part of an infinitely long line, and the line separation distance is the distance between these two lines.
  • Although the distance between parallel lines is a standard mathematical calculation, the two fingers may not trace exactly parallel lines. An approximation must be used instead, such as calculating the line separation distance as the finger separation distance times the sine of the sweep angle.
  • Finger identities: the names of the particular fingers that are performing the sweep, when this can be determined, as within a finger map.
  • Common line: a "common line" sweep is a two-fingered sweep in which the fingers trace roughly the same line. This can be detected as a sweep in which the line separation distance is less than a certain value. Ideally, this value would be a function of the finger separation distance, as the farther apart the fingers are, the more tolerant the character entry system should be of error. A line separation distance of under 4mm plus 15% of the finger separation distance is reasonable. The 4mm constant reflects the fact that users cannot achieve arbitrary accuracy.
  • Common line perpendicular: a "common line perpendicular" sweep is a two-fingered sweep in which the fingers trace lines that are roughly perpendicular to the sweep's orientation line. To detect this gesture the character entry system compares the sweep angle to the angle of the orientation line. It is reasonable to consider the sweep to be common line perpendicular when the difference between these angles falls in the range of 90 degrees plus or minus 25 degrees (or some other variation around a right angle suitable for the particular system).
  • An "askew" sweep is a two-fingered sweep that is neither common line nor common line perpendicular.
  • Adjacent-finger: an "adjacent-finger" sweep is a two-fingered sweep in which the fingers appear to be immediately adjacent to each other. Fingers are considered to be adjacent when the finger separation distance is less than a certain maximum. Ideally, this value would be a function of the size of the user's fingers and hence part of the calibration. However, in situations where a character entry system may need to determine adjacency prior to having a calibration, 20mm is a reasonable maximum. In an adjacent-finger sweep, finger and line separation distances are not informative beyond their use in identifying the sweep as adjacent-finger.
  • a "spaced-finger" sweep is a two-fingered sweep that is not adjacent-finger.
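The sweep-classification thresholds above might be sketched as follows. The constants are the defaults suggested in the text; ideally they would come from the calibration, and the function names are assumptions.

```python
# Defaults from the text; ideally these come from the calibration.
COMMON_LINE_BASE_MM = 4.0     # absolute accuracy floor for line separation
COMMON_LINE_FRACTION = 0.15   # plus 15% of the finger separation distance
ADJACENT_MAX_MM = 20.0        # pre-calibration adjacency maximum
PERP_TOLERANCE_DEG = 25.0     # tolerance around a right angle

def is_common_line(line_sep_mm, finger_sep_mm):
    # Common line: line separation under 4mm + 15% of finger separation.
    limit = COMMON_LINE_BASE_MM + COMMON_LINE_FRACTION * finger_sep_mm
    return line_sep_mm < limit

def is_common_line_perpendicular(sweep_angle_deg, orientation_angle_deg):
    # Common line perpendicular: the difference between the sweep angle
    # and the orientation line's angle is within 90 +/- 25 degrees.
    diff = abs(sweep_angle_deg - orientation_angle_deg) % 180
    return abs(diff - 90.0) <= PERP_TOLERANCE_DEG

def is_adjacent(finger_sep_mm):
    # Adjacent-finger: finger separation below the adjacency maximum.
    return finger_sep_mm < ADJACENT_MAX_MM
```

A sweep that passes neither the common-line nor the perpendicular test would be classified as askew, and one failing the adjacency test as spaced-finger.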
  • the two-fingered sweep can be generalized into a sweep of any number of fingers - a "multi-fingered" sweep. Additional fingers added to the gesture during the gesture could be included as part of the gesture.
  • the sweep vector could be the vector of the stroke of any one of the fingers, or it could be the vector going from the average of the starting locations to the average of the ending locations, or by employing some other center of mass calculation for the starting and ending locations.
  • the finger separation distance could be the distance between the furthest-separated fingers, either at the start or end of the sweep, and the line orientation could be the line through these same two finger locations. The line separation distance would likely need to be the smallest line separation distance among all the fingers when calculated pairwise, so that the character entry system can detect a user's attempt to be common line.
  • a “linear gesture” is a gesture with a length and a two-valued direction.
  • the length is computed as a function of the degree to which the gesture projects onto one or more predetermined lines called “reference lines.”
  • the two directional values are given by the direction of the gesture along one of the reference lines.
  • Linear gestures have an effect similar to that of scrolling the contents of a mobile touchscreen that allows for vertical scrolling but not horizontal scrolling: the screen scrolls to the degree that the finger moves up and down along the screen, while ignoring horizontal motions of the finger. This example corresponds to a vertical reference line, but the reference lines of linear gestures may have any slope, and there may be more than one reference line.
  • Reference lines pre-exist the input of a linear gesture, but the particular reference lines employed by a linear gesture may be a function of the region of the touchscreen in which the gesture begins, a function of the particular fingers performing the gesture, or a function of the direction in which the gesture begins. In the latter case, the character entry system would need to implement a gesture length threshold beyond which the direction of the gesture and the reference lines are selected.
  • Although reference lines are "lines," for purposes of determining a linear gesture's length and direction their only salient characteristic is their slope; an implementation projects measures of the gesture onto reference lines, so the location of a reference line in space doesn't matter.
  • a "linear stroke” is the linear version of a one-finger stroke. It is a stroke whose length is computed by projecting the vector of the stroke onto a reference line; the length of the projection is the length of the gesture. The direction of the linear stroke is the direction of the projected vector along the reference line.
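The projection described above can be sketched as follows. The function name, the degree-valued slope parameter, and the +1/-1 direction encoding are illustrative assumptions.

```python
import math

def linear_stroke(start, end, ref_slope_deg):
    """Project the stroke's vector onto a reference line of the given
    slope; only the slope matters, not the line's position. Returns
    (length, direction), where direction is +1 or -1 along the line."""
    vx = end[0] - start[0]
    vy = end[1] - start[1]
    rad = math.radians(ref_slope_deg)
    ux, uy = math.cos(rad), math.sin(rad)   # unit vector along the line
    proj = vx * ux + vy * uy                # signed scalar projection
    direction = 1 if proj >= 0 else -1
    return abs(proj), direction
```

For example, a stroke from (0, 0) to (3, 4) projected onto a vertical reference line has length 4; its horizontal component is ignored, just as horizontal motion is ignored in the scrolling analogy above.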
  • a “linear sweep” is a linear version of the two-fingered sweep. To the user, the gesture is identical to a two-fingered sweep, though its length and direction are computed differently.
  • a linear sweep's length can be defined using either one reference line or two. When using one reference line, the length is the length of the projection of the sweep vector onto the reference line. The direction of the linear sweep is the direction along the reference line that the projection of the sweep vector points. If each finger has an associated reference line, it's most intuitive to measure the linear sweep against a third reference line that bisects the region between the two finger reference lines.
  • one of the reference lines applies to one finger and the other applies to the other finger.
  • Each finger is treated as performing a linear stroke relative to its applicable reference line, and the length of the linear sweep is the average of the lengths of the two linear strokes.
  • the direction of this linear sweep can be indicated as either the direction of one of the linear strokes or as the direction of the sweep vector (which might itself be that of one of the strokes).
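The two-reference-line scheme can be sketched as below: each finger's stroke is projected onto its own reference line, the lengths are averaged, and a sign check flags disagreeing stroke directions. The names and the choice to return None on disagreement are assumptions; a system could instead report an error.

```python
import math

def _signed_projection(start, end, ref_slope_deg):
    # Signed length of the stroke's vector projected onto a line of the
    # given slope; only the slope of the reference line matters.
    rad = math.radians(ref_slope_deg)
    return ((end[0] - start[0]) * math.cos(rad)
            + (end[1] - start[1]) * math.sin(rad))

def linear_sweep_two_refs(stroke1, stroke2, ref1_deg, ref2_deg):
    # Each finger performs a linear stroke against its own reference line;
    # the sweep's length is the average of the two projected lengths.
    p1 = _signed_projection(stroke1[0], stroke1[1], ref1_deg)
    p2 = _signed_projection(stroke2[0], stroke2[1], ref2_deg)
    if (p1 >= 0) != (p2 >= 0):
        return None   # directions disagree: ignore the gesture
    return ((abs(p1) + abs(p2)) / 2, 1 if p1 >= 0 else -1)
```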
  • the character entry system may choose to ignore the gesture or report an error when the directions of the two linear strokes do not agree.
  • the linear stroke directions disagree when the projection of the vector of one of the strokes onto the vector line of the other stroke results in a vector that points in the direction opposite that of the other stroke.

3.4. Multi-Tap Gestures

  • a "multi-tap gesture” is a gesture in which the user uses one or more fingers to tap the touchscreen two or more times in rapid succession and concludes by performing a gesture with the final tap. That is, instead of lifting the finger or fingers from the touchscreen after contacting the touchscreen for the final tap, the finger or fingers instead remain on the touchscreen and perform a gesture before lifting.
  • the number of taps that the user performs in a multi-tap gesture is called the "tap count.”
  • a two-tap gesture is called a “double-tap gesture” and a three-tap gesture is called a “triple-tap gesture.” It's also possible to have tap counts of four or more.
  • a “tap stroke” is a gesture in which the user taps the touchscreen a number of times equal to the tap count, and rather than lift the finger from the touchscreen after performing the final tap, the user performs a stroke with the finger.
  • the character entry system registers the gesture as a stroke with a tap count. It's useful to refer to "double-tap strokes" and “triple-tap strokes.” Tap strokes may also be linear.
  • a “tap sweep” is a gesture in which the user taps the touchscreen simultaneously with two fingers, doing so a number of times equal to the tap count, and rather than lift the fingers from the touchscreen after performing the final tap, the user performs a two-fingered sweep with the fingers.
  • the character entry system registers the gesture as a two-fingered sweep with a tap count. It's useful to refer to "double-tap sweeps" and “triple-tap sweeps.” Tap sweeps may also be linear.
  • Multi-tap gestures are governed by the following values:
  • Maximum tap length: it is possible that a user intending to tap a finger instead causes the touchscreen to register a short stroke.
  • the character entry system should only register strokes and two-fingered sweeps that are longer than the "maximum tap length"; shorter gestures should be registered as taps. 4mm is a reasonable maximum tap length, but ideally it would be tailored to the user.
  • In order to register successive touches on the touchscreen as taps of a single multi-tap gesture, the character entry system must implement a "tap timeout.” The timeout period begins the instant the finger touches the touchscreen. The user may only increase the tap count by touching the touchscreen again within the timeout period. This period includes the time required for the user to lift the finger and place it again. If at the end of the timeout period no concluding gesture has been registered and the finger or fingers are not at the tap locations, the gesture is interpreted as a tap sequence but not a multi-tap gesture. 300ms is a reasonable tap timeout, but ideally it would be tailored to the user.
  • Of these timing values, the tap timeout may be the most important for the user to control.
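Tap-count tracking under these two values might be sketched as follows, using the suggested 4mm maximum tap length and 300ms tap timeout. The class name and the returned tuples are hypothetical.

```python
MAX_TAP_LENGTH_MM = 4.0   # movements shorter than this register as taps
TAP_TIMEOUT_MS = 300      # window within which the next touch extends the count

class MultiTapTracker:
    """Sketch: counts rapid taps; a final touch that moves farther than
    the maximum tap length becomes the concluding stroke of the gesture."""
    def __init__(self):
        self.tap_count = 0
        self.last_touch_ms = None

    def touch_down(self, now_ms):
        # The timeout begins the instant the finger touches the screen.
        if (self.last_touch_ms is not None
                and now_ms - self.last_touch_ms > TAP_TIMEOUT_MS):
            self.tap_count = 0          # timeout expired: start a new sequence
        self.tap_count += 1
        self.last_touch_ms = now_ms

    def touch_up(self, moved_mm):
        if moved_mm > MAX_TAP_LENGTH_MM:
            # The final touch moved far enough to be a concluding stroke:
            # report a stroke carrying the accumulated tap count.
            return ('tap-stroke', self.tap_count)
        return ('tap', self.tap_count)
```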
  • a "multi-valued gesture” is a stroke or two-fingered sweep that selects a value from a set of values according to the length and direction of the gesture.
  • the set of values may depend on the region of the input area where the gesture begins, but the gesture is not selecting from among screen elements with fixed locations that pre-exist the gesture. Linear strokes and sweeps may also be multi-valued.
  • the region of the input area where the gesture begins, combined with the type and direction of the gesture, selects the set from which the gesture chooses.
  • the length of the gesture selects a value from the set.
  • the values need not be uniformly distributed across the length, and the sets need not have the same number of values. Not every gesture type or direction need be associated with a set of values, and not all of the value sets need be unique to the region, gesture type, or gesture direction.
  • the character entry system may partition the input area into regions, perhaps doing so as a dynamic function of the calibration in effect. Any one-finger stroke within a given region has access to the same value sets, regardless of where in the region the stroke starts, and regardless of where the stroke ends, even if it ends in a different region.
  • a two-fingered sweep within the same region may provide access to a different collection of value sets, but as with one-finger strokes, every two-fingered sweep within a given region provides access to the same value sets.
  • value sets may be associated with gesture direction by assigning value sets to ranges of the gesture angle.
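A toy sketch of selecting a value set by angle range and a value by length. The angle ranges, the 8mm length band, and the uniform spacing are illustrative only; as noted above, values need not be uniformly distributed across the length, and the sketch ignores ranges that wrap past 0 degrees.

```python
def select_value(value_sets, angle_deg, length_mm):
    """value_sets maps (angle_lo, angle_hi) ranges to ordered value lists.
    The gesture's direction picks the set; its length picks the value.
    Returns None when no set is associated with the direction."""
    BAND_MM = 8.0   # illustrative length allotted to each value
    for (lo, hi), values in value_sets.items():
        if lo <= angle_deg < hi:
            index = min(int(length_mm // BAND_MM), len(values) - 1)
            return values[index]
    return None
```

For example, with an upward range mapped to ['a', 'b'], a short upward stroke selects 'a' and a longer one selects 'b', without the finger ever having to target a fixed on-screen location.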
  • It may be useful for the character entry system to show the user the value that would be selected were the user to lift the finger (or fingers) at some point during a multi-valued gesture. This can only be done if the character entry system selects a value set for the gesture prior to the completion of the gesture -- in fact, near the gesture's origination.
  • a character entry system can accomplish this by implementing a "set selection threshold," which is the minimum gesture length beyond which the value set will be selected. Once selected, the user cannot subsequently change the value set by changing the direction of the gesture. Should the user change the direction anyway, the character entry system could ignore the gesture, report an error, continue to select from the set as a function of length, or apply some other way of influencing the selection.
  • a set selection threshold also gives the user some room for error even when the character entry system is not providing feedback. For example, a user preparing to begin a stroke may be moving a finger towards the bottom of the touchscreen in order to place the finger at the beginning of a stroke that will move up the screen. The momentum of moving the finger down to the starting position may produce a brief downward movement at the beginning of the stroke. This will frustrate the user if the character entry system detects this movement too soon and selects a value set accordingly.
  • the device clicks audibly as the finger enters a value's length range.
  • the device audibly announces the value that would be selected were the user to lift the finger (or fingers) at its present location, announcing that value every time the finger moves into the value's length range.
  • the system pops up a small box that shows a list of all the values in the set for the direction selected, highlighting the value that would be chosen were the user to lift the finger (or fingers). The highlighted value changes as the user moves the finger. If the system implements the gesture with reversible directions (see below), the list could include all of the values in the sets for both directions.
  • a “reversible gesture” is a gesture that would report a length and a direction when complete but whose length can be increased or decreased dynamically during the gesture and whose eventually-to-be-reported direction may change during the gesture.
  • the user may also nullify a reversible gesture after starting the gesture in order to prevent the character entry system from registering and acting upon the gesture.
  • Reversible gestures are useful for providing users with feedback during the gesture to help them see what value the gesture would take were they to end the gesture. They also allow a user to undo a gesture if they catch themselves prior to completing it.
  • Reversibility is defined only for those gestures that increase in length the farther the finger (or fingers) move from their starting locations.
  • the gesture is said to "reverse” the instant the gesture length decreases during the gesture, and it is said to be “in reversal” while the length decreases. Whenever the gesture length increases, the gesture is said to "resume.”
  • the gesture ends when the user lifts the finger (or fingers) from the touchscreen, and the direction and length of the gesture at that point becomes the direction and length reported to the character entry system for the gesture.
  • a reversible gesture has a "no-value threshold," which is given as a gesture length.
  • the instant a gesture in reversal assumes a length less than (or equal to) the no-value threshold, the gesture is nullified, such that if the user were to lift the finger (or fingers) at this point, the character entry system would ignore the gesture as if the user had input no gesture.
  • the gesture would be reported should the user end the gesture without again reversing it.
  • a reversible multi-valued gesture has an additional constraint, at least when the character entry system is providing feedback about the current value of the gesture: the gesture's value set can only be changed by returning the finger (or fingers) to the no-value threshold. This essentially resets the gesture so that a new value set can be selected by subsequently exceeding the multi-valued gesture's set selection threshold again, a threshold which may actually equal the no-value threshold. Since the no-value threshold is generally close to the starting location of the finger or fingers, a user can only change the value set, mid-gesture, by first returning the finger or fingers to their approximate starting locations. However, a character entry system may opt to
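The reversal, no-value, and value-set-reset behavior described above might be tracked as in this hypothetical sketch; the class, method names, and sample threshold are illustrative:

```python
NO_VALUE_THRESHOLD = 10.0  # gesture length at or below which a lifted gesture is ignored

class ReversibleGesture:
    """Tracks a reversible gesture's length over time. Lengths are the
    finger's distance from its starting location, sampled as it moves."""

    def __init__(self, no_value_threshold=NO_VALUE_THRESHOLD):
        self.no_value_threshold = no_value_threshold
        self.length = 0.0
        self.in_reversal = False
        self.value_set = None   # locked value set; reset at the no-value threshold

    def update(self, length, value_set=None):
        # The gesture is "in reversal" whenever its length decreases.
        self.in_reversal = length < self.length
        if self.in_reversal and length <= self.no_value_threshold:
            self.value_set = None  # reset: a new value set may be selected later
        elif value_set is not None and self.value_set is None:
            self.value_set = value_set
        self.length = length

    def end(self):
        """Finger lifted: report (value_set, length), or None if nullified."""
        if self.length <= self.no_value_threshold:
            return None
        return (self.value_set, self.length)
```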
  • a “scrubbing gesture” is a gesture consisting of constituent gestures that trace back and forth two or more times along roughly the same path.
  • the motion of a scrubbing gesture is similar to the motion of rubbing an eraser back and forth on a page to erase a mark -- for example, left, right, and left again, or up, down, and up again.
  • the constituent gestures must be of a kind having a length and a direction, and all the constituent gestures of a given scrubbing gesture must be of the same kind.
  • the value of a scrubbing gesture is a list of the vectors traced by the constituent gestures, in the order in which they were traced. Because at least one reversal is required, every scrubbing gesture reports at least two vectors. The number of vectors is called the "scrub count.” The gesture ends when the user lifts the fingers.
  • Strokes and sweeps are suitable constituents of a scrubbing gesture. Should the user move the finger (or fingers) in one direction for the stroke or sweep and then lift the fingers, the gesture ends and registers as a stroke or sweep. However, if after moving the fingers in that first direction, the user keeps the fingers on the touchscreen and moves the fingers in the reverse direction, the gesture suddenly becomes a scrubbing gesture, provided that it is not also reversible. To allow the user some room for error when ending strokes and sweeps, it is useful to define a minimal length for the first reversing constituent gesture; upon exceeding this length, the gesture is henceforth a scrubbing gesture.
  • the vectors of the constituent gestures must be roughly parallel. This is established by only detecting scrubbing gestures as those in which each of the vectors has an angle relative to the first vector of less than a specific value. A maximum of a 30 degree difference among vectors is reasonable.
  • An implementation has several options for enforcing this. The implementation could declare a gesture to be a scrubbing gesture should the second vector be within 30 degrees of the first and then end the scrubbing gesture reporting only those subsequent vectors consecutive with the first that were within 30 degrees of the first. This effectively truncates a gesture at the point where it ceases to be a scrubbing gesture. Another option is to refuse to report a scrubbing gesture at all if any of the vectors exceeds the maximum angle.
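The angle constraint and the truncation option could be implemented along these lines. The 30-degree limit comes from the text; flipping alternate vectors onto the first vector's direction before comparison is an assumption about how the anti-parallel vectors of a back-and-forth motion are measured against the first:

```python
import math

MAX_SCRUB_ANGLE_DEG = 30.0  # maximum deviation from the first vector

def angle_between(v1, v2):
    """Unsigned angle in degrees between two 2-D vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def truncate_scrub(vectors, max_angle=MAX_SCRUB_ANGLE_DEG):
    """Keep only the leading run of vectors roughly parallel to the first.
    Consecutive constituent gestures reverse direction, so every other
    vector is flipped onto the first vector's direction before comparing."""
    if not vectors:
        return []
    first = vectors[0]
    kept = [first]
    flip = -1  # the second vector points opposite the first
    for v in vectors[1:]:
        flipped = (v[0] * flip, v[1] * flip) if flip < 0 else v
        if angle_between(first, flipped) > max_angle:
            break  # truncate where the gesture ceases to be a scrub
        kept.append(v)
        flip = -flip
    return kept
```

The alternative policy in the text (refusing to report the gesture at all) would simply return an empty list whenever the loop breaks early.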
  • Scrubbing gestures cannot be reversible. Additionally, scrubbing gestures require extra effort to distinguish them from reversible gestures.
  • One way to distinguish a scrubbing gesture from a reversible gesture is to only recognize reversible gestures as slow-moving gestures, indicating that the user is uncertain and might like feedback during the gesture for assistance; a fast-moving gesture would then be interpreted as a scrubbing gesture.
  • the most effective method for distinguishing scrubbing gestures from reversible gestures is to only recognize as scrubbing gestures those gestures that satisfy all of the following conditions: (1) the gesture conforms to scrubbing gesture requirements, (2) is performed with at least a certain level of speed, and (3) has at least a minimum scrub count. Minimum scrub counts of 4 and 6 are reasonable because the user interprets these as going back and forth 2 or 3 times, respectively. Scrubbing gesture speed might be measured as the number of vectors per unit time.
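These conditions might be checked as follows. The rate computation interprets "vectors per unit time" as completed vectors over elapsed time, which is an assumption; the minimum count and rate are illustrative values in the ranges the text suggests:

```python
MIN_SCRUB_COUNT = 4           # at least 2 back-and-forth motions
MIN_VECTORS_PER_SECOND = 4.0  # speed floor separating scrubs from slow reversals

def is_scrubbing(vector_times, min_count=MIN_SCRUB_COUNT,
                 min_rate=MIN_VECTORS_PER_SECOND):
    """vector_times: timestamps (seconds) at which each constituent vector
    completed. Returns True only for fast gestures with enough reversals;
    slower back-and-forth motion is left to be a reversible gesture."""
    if len(vector_times) < min_count:
        return False
    elapsed = vector_times[-1] - vector_times[0]
    if elapsed <= 0:
        return True  # all vectors effectively instantaneous
    rate = (len(vector_times) - 1) / elapsed
    return rate >= min_rate
```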
  • a “hold gesture” is a placement of a finger or thumb, or a combination of fingers and thumbs, on the input area and holding them there for an extended period of time. Hold gestures are useful for engaging input modes and for allowing stray fingers or thumbs to rest on the input area without affecting the data being input. Some hold gestures may be held simultaneously with the input of other gestures, analogously to holding a shift or control key down while continuing to type other keys on a conventional keyboard. There are two kinds of hold gestures -- "modal holds” and “anchor holds.”
  • a "modal hold” is a hold that activates an input mode.
  • the input mode lasts for the duration of the hold gesture. It is possible that from that activated input mode the user could again change to another input mode by means other than ending the modal hold, such as via a particular gesture available within the mode or by simultaneously engaging a second hold gesture.
  • a modal hold allows the user to select an input mode with one finger or thumb while simultaneously entering gestures with the remaining fingers (and possibly thumb) of the same hand. This is most flexibly accomplished by allowing the thumb to hold the mode and the other fingers to perform gestures for that mode. However, it may also be accomplished by holding the pinky finger or the index finger. It is difficult to hold a mode with the middle two fingers.
  • An “anchor hold” is a hold that the character entry system ignores.
  • An anchor hold allows the user to keep a finger or thumb on the input area while performing other gestures, without the finger or thumb influencing the input area.
  • Touchscreen operating systems usually require that the user specify regions of the touchscreen that are to be sensitive to touch inputs, but those regions are usually rectangular.
  • If a character entry system requires an irregularly shaped region of insensitivity, it may have to implement it itself, which it could do as an anchor hold.
  • To detect a hold gesture, the character entry system must disambiguate the gesture from other gestures the user could be inputting.
  • One or more regions of the input area can be dedicated to receiving only hold gestures, or only certain hold gestures.
  • the sizes, shapes, and locations of the regions might be a function of the particular calibration in effect.
  • the gesture preceding and ending in the hold can select the particular hold gesture in effect. For example, the tap count of a tap hold could select among input modes.
  • the particular finger or thumb (or combination) performing the gesture could identify both the gesture as a hold and the particular kind of hold being performed. Normally, this technique entails identifying the starting location of the gesture and hence may be viewed as an instance of identifying holds by region.
  • the hold gesture can require a qualifying preceding gesture. For example, a region of the input area might accept many different gestures, only some of which result in a hold gesture when the user ends the gesture but keeps the finger or thumb (or combination) on the screen to persist the hold.
  • a finger or thumb can be interpreted as a hold once the gesture has been held for a certain length of time without significant motion of the touch inputs, as explained below. 1.5 seconds might be a reasonable threshold for this determination.
  • a hold gesture is capable of being interpreted in one of two ways, and one of those ways is as a modal hold
  • the modal hold interpretation can be selected by inputting another gesture simultaneous with the hold; the hold gesture isn't identified until a concurrent gesture is.
  • a user could place a finger to begin a multi-valued stroke. However, the user leaves the finger in place for a while, resulting in a hold.
  • the character entry system could initially assume the user wants to see a list of the stroke's possible values and show them. If after showing them, the user instead enters a stroke with another finger, while retaining the hold, the character entry system would know to re-interpret the hold as a modal hold. If the user proceeds with the stroke, the system remains in the help mode.
  • a character entry system generally cannot identify a hold by touch inputs that do not move at all, and it cannot require that the touch inputs of an in-process hold never move at all. If the user is simultaneously entering gestures with the same hand that holds the mode, the finger or thumb holding the mode is sure to jiggle. The holding finger or thumb may even gradually drift across the touchscreen as the user works. Hence, the character entry system should instead measure the time-averaged speed of motion of the touch inputs.
  • the oldest and most recent locations of the touch input, according to the remaining queue, are then computed, as is the distance between those locations. This distance is then divided by the period of the polling window, providing an average speed for the polling period.
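The queue-based speed measurement might look like this minimal sketch; the window length and speed cutoff are illustrative values, not from the text:

```python
from collections import deque

POLL_WINDOW = 0.5      # seconds of touch history considered
HOLD_SPEED_MAX = 15.0  # px/s below which the touch counts as holding still

class HoldDetector:
    """Keeps a queue of timestamped touch locations and reports the
    time-averaged speed over the polling window, so that jiggle and slow
    drift do not prevent a gesture from being interpreted as a hold."""

    def __init__(self, window=POLL_WINDOW):
        self.window = window
        self.samples = deque()  # (t, x, y)

    def add(self, t, x, y):
        self.samples.append((t, x, y))
        # Drop samples older than the polling window.
        while self.samples and t - self.samples[0][0] > self.window:
            self.samples.popleft()

    def average_speed(self):
        if len(self.samples) < 2:
            return 0.0
        t0, x0, y0 = self.samples[0]
        t1, x1, y1 = self.samples[-1]
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        return dist / (t1 - t0)  # distance between oldest and newest samples

    def is_holding_still(self):
        return self.average_speed() <= HOLD_SPEED_MAX
```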
  • While detecting excessive motion may prevent the character entry system from interpreting some gestures as holds, it may be reasonable to allow the user to move some holds without ending the hold. In this case, the character entry system may opt to ignore motion detected during an already established hold. In these situations, the system need not bother monitoring gesture speed.
  • the "home position” is a characterization of the posture, location, extent, and orientation of a hand that is calibrated for use with a finger-mapped character entry system.
  • the term “home position” is meant to conjure both the posture that a hand takes to enter characters via the system and the place on the touchscreen where the user places the hand. It is analogous to "home row” on a conventional keyboard, which brings to mind both a posture of the hands and a location of the hands on the keyboard. However, unlike the "home row", the home position can be dynamically relocated on the input area.
  • “home position” also refers to the collection of measures that characterize the hand and hence is part of a user's calibration.
  • There are no buttons with which a hand in home position aligns. Instead of pressing buttons, the user performs up, down, and side-to-side strokes and sweeps with the fingers, always returning to home position should any gesture cause the hand to stray a little. Most gestures, however, should not move the hand at all.
  • the fingers only touch the touchscreen to enter a character and never rest on the screen between strokes.
  • the system employs a characterization of the hand in home position to determine which fingers are involved in a gesture.
  • the system also employs knowledge of whether the left or right hand is performing the gesture, knowledge that is provided at calibration. As explained under “Hand Calibration,” the user can change hands or relocate home position at any time, as desired. It's probably wise for users to train both hands to use the system so that the user can periodically switch hands and give each hand a turn to rest.
  • the user must keep the hand roughly in home position while entering characters. If the hand shifts too far out of the calibrated home position, finger gestures will not produce the expected results. To prevent this, the user may need to anchor the thumb on the device to help stabilize the hand. Depending on the device, the user may be able to anchor the thumb on the device somewhere off of the touchscreen. However, it may sometimes be more convenient or even necessary to anchor the thumb directly on the touchscreen. Finger-mapped character entry systems allow this on touchscreens that can support the additional touch input, implementing the thumb touch as an anchor hold. If the specific finger map of a character entry system employs only one-finger strokes, a two-input touchscreen suffices. If the finger map also employs two-fingered sweeps, the touchscreen must be able to support three simultaneous inputs.
  • An “up” or “upward” stroke of a finger in home position is the stroke a finger makes when it moves away from the palm, requiring it to uncurl a little from the home position posture.
  • a “down” or “downward” stroke is the stroke a finger makes when it moves towards the palm, requiring it to curl more than is natural for home position posture.
  • “up” or “upward” sweeps involve fingers stroking upward
  • “down” or “downward” sweeps involve fingers stroking downward.
  • Finger mapping is a technique for making user interface behaviors a function of both the finger that performs a gesture and the gesture that the finger performs. To identify the finger, finger mapping compares the location at which the finger begins a gesture to a characterization of the calibrated home position. The user must generally keep the hand in home position to ensure proper finger identification, so specialized gestures are required to associate each finger with a breadth of behaviors. Hence, finger mapping also includes a collection of recommended gestures. An association of behaviors with fingers and their gestures is called a "finger map.”
  • Fingers are mapped while the user's hand is in home position.
  • the location and orientation of home position varies from calibration to calibration, as do parameters characterizing the size of the user's hand and the reach of the user's fingers.
  • Finger mapping necessarily occurs as a function of the home position and is here defined with respect to the following home position properties:
  • Finger rest locations are the point locations of the fingers were they to touch the touchscreen in their relaxed postures while in home position. In practice, these points will vary over time, as the fingers won't keep returning to the exact locations while entering characters. Instead, finger rest locations are used for particular instants in time as approximate characterizations of the home position.
  • Finger reference lines -- These are lines, one line for each finger, that each finger roughly parallels as it makes up and down strokes from home position. Each finger's reference line is positioned so that it intersects the finger's rest location at time of home position calibration.
  • Gesture lengths -- These are the minimum and maximum lengths of the gestures of each finger.
  • the strongest user calibration records gesture lengths for each finger separately.
  • gesture lengths are associated with each value of each multi-valued gesture available in the finger map. For example, a character entry system will typically assign linear multi-valued gestures to each finger, such that each linear gesture's reference line is the finger's reference line. The gesture lengths would then be the minimum and maximum lengths of the up and down strokes to select each of the available values. The gesture lengths characterize the finger reach inclinations of each person's hand.
  • Finger regions -- These are the regions of the input area that are assigned to each individual finger. Any stroke that starts in a region is assumed to be performed by the finger to which the region is assigned. Each finger region should be at least as long as the longest stroke the finger is capable of performing, so that the user can begin the stroke from either the top or the bottom of the region. Finger regions are retained by the calibration in relative terms so that the home position may be moved around and re-oriented and yet still employ the calibration regions.
  • One finger region is associated with each finger of the hand.
  • If the touchscreen detects a touch event in one of these regions and the touch event begins a new gesture, the member of the hand touching the region is assumed to be the particular finger previously associated with the region. Identifying the finger also selects the set of possible gestures the finger may perform or partake in.
  • the character entry system interprets the gesture according to both the finger used and the gesture it performs.
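Region-based finger identification reduces to a point-in-region lookup at touch-down. A minimal sketch, using axis-aligned rectangles purely for brevity in place of the trapezoidal regions a real calibration would produce; all names are illustrative:

```python
def make_regions(rects):
    """rects: {finger: (x0, y0, x1, y1)} in input-area coordinates."""
    return dict(rects)

def identify_finger(regions, point):
    """Return the finger whose region contains the gesture's starting point,
    or None if the point lies outside every finger region."""
    x, y = point
    for finger, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return finger
    return None
```

Once the finger is identified, the system would dispatch to the gestures and behaviors that the finger map assigns to that finger.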
  • each “outer” coinciding line is the line computed as the mirror image of the finger's inner coinciding line when mirrored around that finger's reference line. The outer boundaries of the index and pinky finger regions coincide with these lines.
  • each finger region is determined separately for each finger.
  • Each finger has an associated maximum gesture length.
  • the maximum gesture length of an upward gesture may be different from that of a downward gesture, but this algorithm treats these two maximum lengths as being equal.
  • the upper and lower boundaries of a finger's region coincide with the two lines that satisfy these criteria: (1) the lines are perpendicular to the finger's reference line, and (2) the lines intersect the reference line at a distance from the finger's rest location equal to the finger's maximum gesture length.
  • the boundary-coinciding lines defined above together form four isosceles trapezoids, one for each finger.
  • the boundary of the region of any given finger is the particular trapezoid that contains the finger's rest location.
  • Extending the boundaries also provides users with the flexibility to perform side gestures intended for one finger using an adjacent finger; in this case, the regions must extend far enough to accommodate locations where the adjacent finger might start the gesture. Note that the outside boundary of the index finger should not be extended too far to the side, as this might encroach on the space where the user might place a thumb on the touchscreen.
  • Although the home position calibration places the home position at an exact location on the input area, with specific finger rest locations and reference lines, etc., the calibration can be translated and rotated to new locations and orientations.
  • the rest location of the index finger is treated as the home position's absolute location
  • the orientation line is treated as the home position's current orientation. All other properties are interpreted as relative to these more dynamically changeable properties. From this perspective, the calibration is independent of location and orientation and is only situated on the input area as most recently specified by the user.
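Situating a location-and-orientation-independent calibration might be done with a rotation followed by a translation, as in this sketch; the relative-coordinate convention (offsets from the index rest location, orientation line along the +x axis) is an assumption:

```python
import math

def situate_calibration(relative_points, index_rest, orientation_deg):
    """Place a location-independent calibration onto the input area.
    relative_points: {name: (dx, dy)} offsets measured from the index
    finger's rest location with the orientation line along the +x axis.
    index_rest is the new absolute index rest location; orientation_deg is
    the new angle of the orientation line. Returns absolute coordinates."""
    theta = math.radians(orientation_deg)
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    placed = {}
    for name, (dx, dy) in relative_points.items():
        # Rotate the stored offset, then translate to the index rest location.
        rx = dx * cos_t - dy * sin_t
        ry = dx * sin_t + dy * cos_t
        placed[name] = (index_rest[0] + rx, index_rest[1] + ry)
    return placed
```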
  • the following gestures are especially useful to fingers in home position.
  • a particular character entry system specifies exactly which gestures are available to each finger and what behaviors these gestures produce -- this is a "finger map.” There will likely be a different finger map for each character entry input mode of the system. Finger maps generally employ some combination of the following gestures:
  • Multi-valued gestures -- The values of a multi-valued gesture are selected as a function of the length of the gesture. This length can be given by the normal form of a multi-valued gesture, where it is computed according to the finger's distance from a point. This approach works well when the gesture is not reversible. The length can also be computed by employing linear multi-valued gestures. In this second case the reference lines of a linear gesture are exactly the reference lines of the fingers as specified for the home position, so that the lengths are computed by projection onto the reference lines.
  • This second case is well-suited to gestures that are reversible, so that the user reversing a finger back to the starting location has room for error; any location where the user might normally start a gesture with the finger then becomes a valid location for resetting the gesture.
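Computing a linear gesture's length by projection onto the finger's reference line, and selecting a value from that length, might look like this sketch; the value-range table is illustrative:

```python
import math

def projected_length(start, current, reference_angle_deg):
    """Length of a linear multi-valued gesture, computed by projecting the
    finger's displacement onto the finger's reference line. The result is
    signed: positive along the reference direction, negative against it."""
    theta = math.radians(reference_angle_deg)
    ux, uy = math.cos(theta), math.sin(theta)  # unit vector of the reference line
    dx, dy = current[0] - start[0], current[1] - start[1]
    return dx * ux + dy * uy

def select_value(length, value_ranges):
    """value_ranges: list of (min_len, max_len, value); the gesture's
    absolute projected length picks the value whose range contains it."""
    for lo, hi, value in value_ranges:
        if lo <= abs(length) < hi:
            return value
    return None
```

Because the length is computed by projection, small sideways wobble of the finger does not change the selected value.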
  • Two-fingered sweeps -- The hand can also easily perform up and down two-fingered sweeps with any pair of fingers. These may also be implemented as multi-valued gestures when more values are needed than are available from single-finger strokes alone. Adjacent-finger sweeps are probably easier on users than spaced-finger sweeps. Besides, several spaced-finger sweeps are part of the standard gesture repertoire and probably ought not be redefined.
  • a linear multivalued two-fingered sweep can be defined either with respect to the two reference lines of the fingers performing the sweep, or with respect to a third line that bisects the region between the two finger reference lines, as explained for linear gestures.
  • Positional gestures -- Normally finger mapping treats all gestures begun in a particular finger region as equivalent, regardless of where in the region they start. It is also possible to partition a finger region along the axis of its reference line and to distinguish gestures of a given finger by the partition in which they begin. For example, a finger region could be split vertically in half at the finger rest location, and the character entry system could distinguish between gestures begun above the rest location from those begun below the rest location. These gestures are thus "positional.” Up/down positional gestures are incompatible with linear multi-valued gestures whose reference lines coincide with the finger's reference lines, but they are particularly well suited for side gestures. Extending the example of splitting the finger region at the rest location, a side gesture that begins above the rest location could have a different meaning from one that begins below the rest location.
  • Multi-tap gestures -- Any of the gestures here may be further embellished by making it a multi-tap gesture; the tap count would then select an alternative behavior.
  • a character entry system might support reversible gestures in order to assist users who are learning the system, or just to give the user more flexibility while entering characters.
  • Reversible gestures -- Any gesture producing a length and a direction can be made reversible.
  • Scrubbing gestures -- Scrubbing gestures involving fingers going back and forth are available, but they must be distinguished from reversible gestures. They can be distinguished by the finger involved; by only recognizing scrubbing gestures that go side to side, should all of the reversible gestures go up and down; or by monitoring speed and scrub count, as explained under "Scrubbing Gestures.”
  • Hold gestures are not included among the list of useful finger map gestures because, in the parlance of this document, they may be used to select input modes. Different input modes for character entry may have different finger maps. So hold gestures can be used to select new interpretations of gestures, but they participate in selecting the finger map rather than being part of a finger map.
  • Finger mapping requires a characterization of a user's hand in home position. This characterization is called the "calibration" for the user's hand.
  • This section defines methods for calibrating a hand or for selecting a pre-existing calibration, and it ends with some suggestions for conveying the calibration to the user.
  • the gestures provide varying levels of calibration. Ideally, each hand would be calibrated separately, but if one hand is calibrated, the calibration can be borrowed for the second hand.
  • the "hand selection slide” selects the hand (left or right) with which the user wishes to input characters. It also establishes a minimal home position calibration. If the minimal calibration approximates a previously stored, more complete calibration, the character entry system may interpret the gesture as a selection of that previous calibration, perhaps by first popping up a dialog to confirm or to select among multiple matching calibrations. Whether or not the system pops up such a dialog can be a matter of configuration.
  • the hand selection slide thus makes it easy for the user to move an existing calibration to new locations on the input area and to quickly switch hands as desired.
  • the gesture is a spaced-finger common-line two-fingered sweep.
  • the two fingers are assumed to be the index and pinky fingers.
  • the gesture requires a distinction between left and right sides of the input area, as well as top and bottom.
  • a sweep to the left identifies a left hand calibration
  • a sweep to the right identifies a right hand calibration.
  • the locations of the fingers at the end of the sweep are taken to be the rest locations of the index and pinky fingers in home position.
  • the system may assume that the user is selecting a pre-existing calibration that has a hand breadth within 5% or so of the breadth given by the hand selection slide, if such a pre-existing calibration exists.
  • the calibration is geometrically translated to the location given by the rest location of the index finger, and it is rotated so that its orientation line coincides with the line extending from the newly established index finger rest location to the newly established pinky finger rest location. Since the input area is assumed to have an official top and bottom, the rotation can properly orient the top and bottom of the calibration. Should the breadth of the new calibration be more than a few percent different from that of the old calibration, it is an option for the character entry system to expand or compress the home position characterization along the dimension of the orientation line in order to accommodate the size change. This may also be configurable behavior.
  • the system can apply a default computation of the calibration.
  • a reasonable default places the rest locations of the remaining two fingers equally spaced along the home position's orientation line between the index and pinky finger rest locations, but offset from the orientation line.
  • the middle finger can be offset above the line by 1/6th of the home position breadth, and the ring finger can be offset by 1/12th of the home position breadth.
  • the reference lines of the index, middle, ring, and pinky fingers can be assumed to be at angles of 80, 85, 90, and 100 degrees with the orientation line, respectively, for the angles on the thumb side of the reference line above the orientation line (facing the top of the input area).
  • the calibration can also assume that the calibrated gesture lengths are a certain percentage of the home position breadth, according to the needs of the particular character entry system.
  • Default finger regions can be derived from this information, but the region calculations should use larger maximum gesture lengths to accommodate error in the calibration.
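The default calibration described above might be computed as follows. This sketch assumes the orientation line has already been rotated onto the x axis, with "above" the line as -y (toward the top of a screen whose y axis grows downward); the 1/6 and 1/12 offsets and the 80/85/90/100 degree reference angles come from the text:

```python
DEFAULT_REFERENCE_ANGLES = {"index": 80, "middle": 85, "ring": 90, "pinky": 100}

def default_calibration(index_rest, pinky_rest):
    """Compute a default home position calibration from the index and pinky
    rest locations given by a hand selection slide."""
    ix, iy = index_rest
    px, py = pinky_rest
    breadth = abs(px - ix)  # home position breadth along the orientation line
    rest = {
        "index": (ix, iy),
        # Middle and ring fingers are equally spaced between index and pinky,
        # offset above the orientation line by 1/6 and 1/12 of the breadth.
        "middle": (ix + (px - ix) / 3, iy - breadth / 6),
        "ring": (ix + 2 * (px - ix) / 3, iy - breadth / 12),
        "pinky": (px, py),
    }
    return {"rest_locations": rest,
            "reference_angles": dict(DEFAULT_REFERENCE_ANGLES)}
```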
  • a default calibration for a hand selection slide given only the selection of hand and the index and pinky finger rest locations.
  • a hand selection slide consisting of more than two fingers. This would be a "multi-fingered sweep," defined previously as an embellishment of two-fingered sweeps.
  • the end locations of each of the fingers would become the rest locations for the fingers. If only three fingers are used in the gesture, the rest location of the missing finger can be assumed to be halfway between the rest locations of the fingers bordering the gap.
  • a hand selection slide of four fingers (a "four-finger hand selection slide") is ideal because it allows the user to specify rest locations for all of the fingers at once.
  • tickling is a gesture that calibrates a particular finger in home position. From home position, a finger “tickles” by performing scrubbing gestures up and down above and below the finger's rest location. The vectors of the scrubbing are averaged together to form an average vector, and the finger's rest location, reference line, and gesture lengths are deduced from this average vector. Multiple tickling gestures may be performed at once, but they are each interpreted independently. Tickling involving fewer than four simultaneous fingers modifies a pre-existing calibration; a four-finger tickling is capable of calibrating all four fingers at once. However, in all cases, tickling requires that the hand -- left or right -- have been established prior to the tickling. The number of fingers that can tickle simultaneously is limited by the capabilities of the touchscreen, as well as by the implementation of the character entry system.
  • up and down finger gestures are usually interpreted as multi-valued strokes or sweeps.
  • a sophisticated character entry system may also allow these multi-valued strokes and sweeps to be reversible, and reversible gestures can be ambiguous with scrubbing gestures.
  • scrubbing gestures can be distinguished from reversible gestures by speed and vector count. It's therefore reasonable to identify a tickling gesture as a scrubbing stroke generating at least 4.5 vectors per second and having at least 5 vectors (a scrub count of 5).
  • a user can think of this as 3 or more back and forth (up and down) motions of the finger. It's reasonable for the character entry system to allow tickling at any time so that the user can recalibrate one or more troublesome fingers when needed.
  • the character entry system identifies each finger engaged in tickling according to an already existing finger mapping, which is given by a home position calibration.
  • This existing home position calibration can be the one computed by default from a preceding hand selection gesture.
  • the vectors of the tickling are used to change the calibration for that finger as follows:
  • the rest location of the finger is a point somewhere on the average vector. If the character entry system asks the user to attempt to tickle up and down equal distances around the rest location, the rest location would be the midpoint of the vector; the rest location depends on how the system interprets the tickling.
  • the reference line of the finger is the line coincident with the average vector.
  • the gesture lengths of the finger are computed as a function of both the finger's identity and the length of the average vector computed for the finger. This computation will vary from character entry system to character entry system. However, it is useful to assume that the average vector represents most of the reach available to the finger, but not all of that reach. For example, the average vector's length may be assumed to be 85% of the length available to the finger.
  • a user performing a tickling can scrub the fingers up and down to varying distances. If the character entry system is to derive gesture lengths based on the lengths of the scrubbing vectors, the system will need to specify the linear extent to which the user should perform the tickling. For example, the system could specify that the user is only to perform tickling at the maximum length of the first values for multi-valued gestures. Another system could ask the user to tickle at the maximum comfortable distances available to the fingers for any gesture. Additionally, in order to assign rest locations, the system must specify whether the tickling consists only of up strokes, only of down strokes, or of strokes that travel through the extents of both up and down strokes.
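Deriving one finger's calibration from its tickling vectors might look like this sketch. The midpoint interpretation of the rest location and the 85% reach fraction follow the text; flipping down strokes onto the up direction before averaging is an assumption, as are all names:

```python
import math

REACH_FRACTION = 0.85  # the average vector is assumed to cover 85% of the reach

def calibrate_from_tickling(vectors):
    """vectors: list of ((x0, y0), (x1, y1)) start/end pairs of the tickling
    strokes of one finger. Returns (rest_location, reference_angle_deg,
    max_gesture_length) deduced from the average vector."""
    fx = vectors[0][1][0] - vectors[0][0][0]
    fy = vectors[0][1][1] - vectors[0][0][1]
    sx = sy = cx = cy = 0.0
    for (x0, y0), (x1, y1) in vectors:
        dx, dy = x1 - x0, y1 - y0
        if dx * fx + dy * fy < 0:
            dx, dy = -dx, -dy  # flip a down stroke onto the up direction
        sx += dx; sy += dy
        cx += (x0 + x1) / 2; cy += (y0 + y1) / 2
    n = len(vectors)
    avg = (sx / n, sy / n)
    rest_location = (cx / n, cy / n)  # midpoint interpretation of the tickling
    reference_angle = math.degrees(math.atan2(avg[1], avg[0]))
    max_gesture_length = math.hypot(*avg) / REACH_FRACTION
    return rest_location, reference_angle, max_gesture_length
```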
  • the character entry system may optionally discard the active home position calibration and generate one anew.
  • the system must use other means for identifying the individual fingers.
  • the four-finger gesture itself can be recognized as a common-line perpendicular multi-valued sweep of four fingers, or simply as four fingers engaged in tickling along roughly parallel vectors. These ticklings form a rough line, and because the input area has a known left and right (as required by the hand selection slide), the previously established hand-in-use specifies which finger is which: when using the left hand, the rightmost tickling calibrates the index finger, and when using the right hand, the leftmost tickling calibrates the index finger. From this information the entire home position calibration can be computed.
  • implementation can be specifically designed to detect such combination gestures because both the hand selection slide and the multi-finger tickling gestures are forms of multi-finger sweeps. In the transition described here, the sweep is common line for its first vector and common line perpendicular for subsequent vectors, making the combination gesture quite distinctive.
  • Multi-valued gestures require users to move their fingers over different lengths to select the different values.
  • the lengths over which users are comfortable moving their fingers to select a value will vary from user to user and perhaps hand to hand.
  • These gesture lengths are part of the home position calibration.
  • the calibrations provided by a hand selection slide or finger tickling assign gesture lengths, but they may not be right for the user.
  • the user may wish to perform a gesture for refining just the gesture lengths.
  • the conventional pinching and zooming gestures are useful for this purpose.
  • a pinching gesture is one in which two fingers are placed on the touchscreen and subsequently moved closer together
  • a zooming gesture is one in which the two fingers are subsequently moved farther apart.
  • a character entry system may also find these gestures useful for refining gesture lengths other than those that are multi-valued. Since the pinch and zoom gestures are also useful for representing copy and paste, gesture length refinement should employ these gestures as part of a double-tap.
  • the pinching and zooming gestures for refining gesture length are most useful while a calibration is active, because these gestures modify the active calibration.
  • vertical gestures are those that move roughly perpendicular to the home position orientation line
  • Pinching and zooming motions in which the fingers move roughly perpendicular to the orientation line refine the vertical gestures, and those that move roughly parallel to the orientation line refine the horizontal gestures.
  • the verticality of the pinching and zooming gestures may also be interpreted relative to their orientation in the input area.
  • a pinching gesture reduces gesture lengths, and a zooming gesture increases gesture lengths.
  • An intuitive approach is to reduce or increase gesture lengths in a proportion equal to the proportion by which the user reduces or increases the distance between his or her fingers during the pinching or zooming gesture.
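The proportional refinement just described can be expressed directly as a sketch; the function name and the list representation of a finger's gesture lengths are illustrative.

```python
def refine_gesture_lengths(lengths, start_separation, end_separation):
    """Scale gesture lengths by the proportion the user changed the
    distance between the two fingers: pinching (end < start) shrinks
    the lengths, zooming (end > start) grows them.
    """
    scale = end_separation / start_separation
    return [length * scale for length in lengths]
```

For example, pinching from a 100-unit separation down to 50 units halves every gesture length.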
  • the home position of a hand may gradually change as a user enters characters into the input area. Ideally the character entry system would detect these changes and update the home position calibration accordingly.
  • the hand may gradually drift across the screen, the hand's orientation may rotate a little, and reference lines may change as the hand relaxes. If these changes occur gradually enough, it is possible for the character entry system to track them.
  • One way to track them is through a dynamic recalibration technique that constantly adjusts the calibration according to the user's gesture history. The technique recalibrates the home position at regular intervals.
  • the interval can be any length of time and is called the "recalibration interval."
  • the recalibration at the end of each interval is derived from data collected over a preceding amount of time called the “monitoring window.”
  • the monitoring window needs to be long enough to have enough data from which to make an accurate calibration, but short enough to ignore data that applies to a hand's previous home position characterization. Since the user may take frequent breaks, the monitoring window is best measured by number of gestures performed. Moreover, since each finger must be calibrated separately, the technique maintains a separate monitoring window for each finger. 10 seconds is a reasonable recalibration interval, and 50 gestures is a reasonable monitoring window.
  • the technique employs a "staleness window." This is the maximum amount of time for which gestures in a finger queue are valid contributors to a recalibration. Presumably, gestures that were performed too long ago should have no bearing on the current characterization of the home position.
  • the staleness window prevents this technique from combining the gestures a user performs prior to taking a break with the gestures the user performs upon resuming after the break. A staleness window of 2 or 3 minutes is reasonable.
  • the character entry system manages a queue of gestures for each finger.
  • the system appends a characterization of each finger's gesture to that finger's gesture queue.
  • the characterization consists of the vectors produced by the finger, an indication of which vector was the first of the gesture, and the time at which the gesture was initiated.
  • the gesture is recorded to each participating finger's queue but only includes the vectors of the gesture performed by the particular finger. In multi-finger gestures, each finger will have its first vector of the gesture designated as such.
  • This particular technique requires that only upward and downward strokes and sweeps be placed on the queue; the technique ignores all other gestures and so does not recalibrate based on these other gestures. Additionally, each queue must be managed so that, at least at the time it is examined for a possible recalibration, it never contains more gestures than specified by the monitoring window.
  • the character entry system examines each finger's queue. All gestures marked with times older than the staleness window are discarded. If after discarding stale gestures, a finger's queue contains a number of gestures equal to the monitoring window, the system recalibrates that finger.
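The per-interval queue maintenance just described can be sketched as follows, using the suggested parameter values. The record layout and function names are assumptions for illustration, not the patent's own code.

```python
import time
from collections import deque, namedtuple

STALENESS_WINDOW = 150.0   # seconds; within the suggested 2-3 minute range
MONITORING_WINDOW = 50     # gestures per finger

# One queue entry per gesture, per the characterization described above:
# the finger's vectors, which vector was first, and the start time.
GestureRecord = namedtuple("GestureRecord", ["vectors", "first_index", "start_time"])

def maybe_recalibrate(finger_queues, recalibrate, now=None):
    """Discard stale gestures from every finger's queue, then recalibrate
    any finger whose queue still holds a full monitoring window.

    `finger_queues` maps a finger id to a deque of GestureRecords, and
    `recalibrate` is a per-finger callback; both names are illustrative.
    """
    now = time.monotonic() if now is None else now
    for finger, queue in finger_queues.items():
        while queue and now - queue[0].start_time > STALENESS_WINDOW:
            queue.popleft()  # too old to contribute to recalibration
        if len(queue) >= MONITORING_WINDOW:
            recalibrate(finger, list(queue))
```

A deque keeps the oldest gestures at the front, so pruning by staleness is a simple series of `popleft` calls.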
  • the recalibration of a finger first computes a preliminary rest location for the finger.
  • Each gesture in the finger's queue has a vector designated as the initial vector, and each of these initial vectors has a starting location. Furthermore, each initial vector can be classified as an upward stroke or a downward stroke, as explained in the description of the home position.
  • the queue effectively contains a collection of gesture starting locations and their associated upward/downward directions.
  • the number of upward-directed start locations is the "up count."
  • the number of downward-directed start locations is the "down count."
  • This recalibration technique defines a minimum number of start locations that are required to compute a preliminary rest location from start locations, a number that can be at most half the monitoring window size, but which must contain enough data to be meaningful. This minimum is the "minimum rest location source size.” 20 is a reasonable minimum rest location source size.
  • By default, the recalibration's preliminary rest location is the existing calibration's rest location. If the up count and the down count are each greater than or equal to the minimum rest location source size, the preliminary rest location is instead computed from the up and down start locations. In this case, the upward-directed start locations are all averaged together to form a single location coordinate that serves as endpoint A. The downward-directed start locations are also averaged together to form a single location coordinate that serves as endpoint B. These locations are averaged by separately averaging each of their coordinate values. The preliminary rest location is then the midpoint of the segment connecting endpoints A and B.
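In outline, the preliminary rest location computation might look like this; the names and the coordinate representation are illustrative, with the 20-location minimum from above used as the default.

```python
def preliminary_rest_location(starts, existing_rest, minimum_source=20):
    """Compute a preliminary rest location from gesture start locations.

    `starts` is a list of ((x, y), direction) pairs, direction "up" or
    "down". Falls back to the existing calibration's rest location when
    either direction lacks enough data. Names here are illustrative.
    """
    ups = [loc for loc, d in starts if d == "up"]
    downs = [loc for loc, d in starts if d == "down"]
    if len(ups) < minimum_source or len(downs) < minimum_source:
        return existing_rest

    def centroid(points):
        xs, ys = zip(*points)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    ax, ay = centroid(ups)    # endpoint A: average up-stroke start
    bx, by = centroid(downs)  # endpoint B: average down-stroke start
    return ((ax + bx) / 2, (ay + by) / 2)  # midpoint of segment AB
```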
  • the recalibration computes the new reference line. It does this by averaging together all of the vectors in the finger's gesture queue. It suffices to average just the gestures' initial vectors, since finger identity is determined by where gestures start. The average is computed as the "average vector" described for the tickling gesture. This average vector has a location on the input area, not just a slope. The line coincident with this average vector is taken as the finger's new reference line.
  • the technique computes the finger's new rest location from the preliminary rest location and the new reference line.
  • the preliminary rest location may or may not occur on the reference line, where the actual rest location is required to be.
  • the finger's new rest location is taken to be the point on the reference line that is closest to the preliminary rest location.
  • the calibration approach just described completely moves the home position to the new location at every recalibration interval. This may result in a drastic change of experience for the user.
  • the recalibration process can make the home position change more gradual with two modifications to the above algorithm. First, the actual preliminary rest location is taken to be the midpoint between the previous rest location and the newly computed preliminary rest location described above. Second, the actual new reference line is taken to be the line that bisects the region between the previous reference line and the newly computed reference line described above. These two modifications gradually move the rest location and reference lines to their apparent new positions by only moving halfway to the new positions in each recalibration.
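The final steps, projecting the rest location onto the new reference line and moving only halfway per recalibration, can be sketched as below. This simplification represents the reference line as a point plus a direction vector and applies the halfway move to the rest location only; the bisecting of the previous and new reference lines is omitted, and all names are illustrative.

```python
def project_onto_line(point, line_point, line_dir):
    """Closest point to `point` on the line through `line_point` with
    direction `line_dir` (orthogonal projection)."""
    px, py = point
    lx, ly = line_point
    dx, dy = line_dir
    t = ((px - lx) * dx + (py - ly) * dy) / (dx * dx + dy * dy)
    return (lx + t * dx, ly + t * dy)

def gradual_rest_location(previous_rest, preliminary_rest, line_point, line_dir):
    """Move only halfway from the previous rest location toward the new
    preliminary rest location, then snap the result onto the new
    reference line, so each recalibration shifts home position gradually."""
    midx = (previous_rest[0] + preliminary_rest[0]) / 2
    midy = (previous_rest[1] + preliminary_rest[1]) / 2
    return project_onto_line((midx, midy), line_point, line_dir)
```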
  • a character entry system may provide software "wizards" that guide the user through a calibration process.
  • a wizard might have the user enter one gesture after another in order to measure the user's preferences for performing that gesture, or the wizard might just ask the user to enter a pre-specified sentence and infer the user's preferences from the resulting gestures.
  • the wizard can be made available, such as from a menu item or from a special gesture that opens up the wizard.
  • One possible gesture for this could be a double- or triple-tap hand selection slide, which would also have the benefit of specifying the desired hand while opening the wizard.
  • Finger-mapped character entry systems are designed to allow the user to input characters without having to look at the input area. However, there are times when it can be useful to visually depict information about the calibration currently in effect, such as when the user lifts a hand from the input area and is inclined to place the hand back in home position without performing another hand selection slide or otherwise recalibrating.
  • Visual feedback may also help users who are just learning to use the character entry system.
  • Some options for providing visual feedback:
  • When the finger-mapped region of the input area is apparent to the user, it becomes possible for the character entry system to treat the remaining portions of the input area differently. For example, if a gesture is initiated somewhere off of the finger mapping, the system could immediately activate the cursor mode for interpretation of the gesture, allowing the user to readily perform cursor actions from a character mode without first having to perform a gesture whose purpose is only to activate a cursor mode.
  • There are many ways to lay out the visual interface of a finger-mapped character entry system, but a user will only find two systems compatible if they employ the same gestures for character input and cursor movement. The particular gestures for selecting text and clipboard operations may also be crucial to the user. For this reason, finger-mapped character entry systems are best classified by the input modes they employ, the gestures each input mode supports, and the means they provide for transitioning among input modes. Together these details compose a system's specification.
  • the input mode is the organizing component of a system's specification. When an input session first begins, the input area starts in some input mode. The input mode determines what is displayed in the input area and how the input area behaves.
  • the user changes the active mode by performing a gesture or pressing a button.
  • the system may also asynchronously change the active mode, such as on timeout.
  • Relationships among input modes may be complicated, with some only being available from certain other input modes, under certain conditions, while others are always available. These relationships are best represented in a state diagram showing input modes and the events transitioning the active mode among them. It may be useful for a particular implementation to show a name or symbol for the active mode somewhere on the input area, but this need not technically be part of the specification.
  • it is possible for a character entry system to implement only a single mode, but it's usually best for a system to have at least a character mode and a cursor mode.
  • the character mode employs finger mapping to allow the user to input a large variety of characters, while the cursor mode more flexibly accommodates gestures that are more suggestive of their respective cursor operations.
  • the user need not first calibrate home position in order to move the cursor.
  • a character entry system that supports a character mode and a cursor mode would normally begin an input session in cursor mode. This allows the user to immediately cursor about when the input area appears.
  • the user performs a hand selection slide, simultaneously selecting a hand and at least a default home position calibration. Since this gesture is a spaced-finger common-line two- fingered sweep, it is convenient to have an adjacent-finger common-line two-fingered sweep, in any direction, activate cursor mode. Because a user may periodically forget which input mode is currently active, both of these gestures should be available from each input mode. A new hand selection slide in character mode will just assign a new home position, while any attempt to switch to cursor mode from cursor mode will simply be ignored. This way the gesture always guarantees the input mode.
  • A and B -- Set active mode to character mode with a left or right hand calibration, respectively.
  • the most effective way for a character mode to accommodate a large number of characters is to assign multi-valued gestures to the fingers of the finger mapping.
  • characters can be input just by moving the fingers up and down (as explained for the home position).
  • the direction that a finger moves selects a character set, and the distance the finger moves selects a character from the set.
  • these gestures should be implemented as linear multi-valued gestures.
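The selection rule just described, where direction picks the character set and distance picks the character within it, can be illustrated as follows. The character layout shown is purely hypothetical and is not the system's actual finger map; in a two-valued system each direction offers exactly a short and a long entry.

```python
def select_character(finger, dy, short_long_boundary, charmap):
    """Map a linear multi-valued up/down stroke to a character.

    `charmap[finger][direction]` is an ordered list of characters; the
    stroke's length picks the first (short) or second (long) entry.
    """
    direction = "up" if dy > 0 else "down"
    index = 0 if abs(dy) < short_long_boundary else 1
    return charmap[finger][direction][index]

# Hypothetical two-valued layout for a single finger:
layout = {"index": {"up": ["e", "w"], "down": ["t", "b"]}}
```

A 12-unit upward stroke against a 20-unit boundary selects the short "up" character; a 25-unit downward stroke selects the long "down" character.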
  • Example of a finger map with character actions represented by linear multi-valued strokes. Note that this diagram depicts finger start locations, not finger rest locations. A finger's stroke may begin anywhere within the finger's region, since the character is selected by the length of the stroke, not its position within the finger's region.
  • Example of a finger map with character actions represented by linear multi-valued two-fingered sweeps. This diagram likewise depicts finger start locations, not finger rest locations.
  • Each finger's stroke may begin anywhere within the finger's region, since the character is selected by the length of the sweep, not by positions within the finger's region.
  • sweep lengths are traced by the midpoints between the fingers.
  • An input mode may provide temporary access to another input mode through the use of a hold gesture.
  • An input mode that is active only temporarily before returning to the preceding input mode is called a "submode.”
  • a hold gesture generally induces a submode for the duration of the hold gesture.
  • Employing a hold gesture is analogous to holding the shift, control, or command keys down on a physical keyboard.
  • Useful hold gesture combinations while finger mapping include holding the thumb down (anywhere to the outside of the index finger), holding any one finger down (usually the index or pinky finger), or holding both the thumb and a finger down. While holding the gesture, the user simultaneously performs a separate gesture with other fingers, even with the same hand, and the simultaneous gesture takes on a meaning specific to the induced submode.
  • the user interface could also implement buttons on the input area that the thumb might press, allowing the thumb to select from among multiple submodes. However, provided that gestures are available for selecting the submode any time the user might need to do so, these buttons need not be part of the system's specification.
  • A - Thumb hold, allowing other fingers to do any gestures.
  • B - Pinky hold, allowing other fingers to do finger map gestures.
  • C - Thumb anchor, steadying fingers doing finger map gestures.
  • the character entry system may, for particular input modes, allow the user to anchor the thumb on the touchscreen with a hold gesture.
  • the system may identify a thumb anchor as a hold gesture that is performed to the outside of the index finger region -- that is, as a hold gesture performed next to the region mapped for the fingers, closer to the index finger region than to the pinky finger region.
  • the character entry system simply ignores the gesture, while still detecting and interpreting other gestures that are simultaneously performed.
  • the system could additionally allow the thumb to specify modes by interpreting double-tap and triple-tap thumb holds as specifying a mode rather than as anchoring the thumb.
  • the behaviors that gestures implement may be a function of the hand -- left or right -- that performs them.
  • the character entry system may opt to implement identical behaviors for analogous fingers on each hand, so that the gestures of the two hands are identical in mirror image. This allows the user to employ either hand for character entry. It also facilitates training both hands. It may make sense to violate the mirror image for some side gestures to ensure that the gesture remains suggestive of the direction in which the gesture moves the cursor. It's also possible to specify a system that allows both hands to simultaneously enter characters into one or two input areas, each employing a different finger map, analogously to typing with both hands on a physical keyboard.

8. Two-Valued English System
  • the system may be implemented as two-touch, three-touch, or configurably either two-touch or three-touch.
  • the two-touch implementations are available to the greatest number of devices, which at this time are mostly only two-touch capable.
  • the three-touch implementation allows the thumb to touch the touchscreen while entering characters, either to just steady the hand or to select the input mode.
  • the gestures and finger maps of this two- valued English system have been selected to be as intuitive and mnemonic as possible.
  • the input area has an inherent left side, right side, top, and bottom.
  • the horizontal is the direction that runs left and right parallel with the top and bottom, and the vertical is the direction that runs up and down parallel with the sides.
  • the right hand could be placed in a home position whose orientation line is only slightly rotated clockwise from the vertical.
  • the system employs separate definitions of "up", "down", "left", and "right" for finger mapped and non-finger mapped gestures.
  • for finger mapped gestures, "up" and "down" have the meanings ascribed to "up" and "down" for home position. These gestures are roughly perpendicular to home position's orientation line.
  • the "left" and "right" gestures of a finger mapping are those that are neither up nor down, with left and right gestures concluding further left or right of where they start, respectively. These gestures are roughly parallel with the home position's orientation line, which need not be horizontally oriented.
  • for non-finger mapped gestures, "up" and "down" gestures are approximately vertical, while "left" and "right" gestures are approximately horizontal. Approximately vertical gestures can be taken as gestures whose angles are within 30 degrees of the vertical, and approximately horizontal gestures can be taken as gestures whose angles are within 30 degrees of the horizontal.
  • Multi-valued gestures in this system have exactly two values for each direction of the gesture.
  • the values available to the shorter strokes are called the "short" gestures, and those available to the longer strokes are called the "long" gestures.
  • Multi-valued gestures can be divided into three classes: gestures performed outside of a finger mapping, up/ down gestures within a finger mapping, and side gestures within a finger mapping.
  • the character entry system should allow the user to separately set the boundary length between short and long gestures for each of these classes.
  • a reasonable default for the former is 20% of the home position breadth and 15% of the home position breadth for the latter.
  • a calibration to the user's hand in home position would refine these lengths further as a function of the finger that performs the gesture.
  • the finger maps for the left and right hands are defined as being identical for analogous fingers. For example, any action available to the index finger of the left hand is also available to the index finger of the right hand. However, the finger maps are only mirror images for the up/down gestures; side gestures retain their same left-right sense on each hand. Since a short right stroke of the index finger on the right hand produces a space, a short right stroke of the index finger on the left hand also produces a space. Because it is easier for the left index finger to move right than for the right index finger to move right, the middle finger is also mapped to produce a space for a short right stroke. This allows the index finger of the left hand to start in the middle finger region -- and be detected as the middle finger -- and move right to produce a space.
  • the finger maps of this system abide by the principle that the side gestures of the index and middle fingers be identical and that the side gestures of the ring and pinky fingers be identical.
  • the finger mapping must support cross mapping for adjacent fingers; it is not enough to simply define a finger's region by the maximum stroke lengths expected by that particular finger.
  • the finger regions must be extended to accommodate the maximum stroke lengths of their adjacent fingers. The easiest way to do this is to compute the minimal sizes of the finger regions, compute the bounding box of these regions, and then extend the side boundaries of each finger to the borders of the bounding box. (This technique was depicted in an earlier diagram.)
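The bounding-box extension just described can be sketched as follows: compute the common bounding box of the minimal regions, then stretch each region to its borders. This sketch assumes fingers laid out left to right with strokes extending vertically, so only the top and bottom boundaries are extended; the rectangle representation is illustrative.

```python
def extend_finger_regions(regions):
    """Extend each finger's minimal region to the common bounding box.

    `regions` is a list of (left, bottom, right, top) rectangles, one per
    finger, ordered left to right. Each region's bottom and top are
    stretched to the bounding box of all regions so that a finger's
    region accommodates the maximum stroke lengths of adjacent fingers.
    """
    bottom = min(r[1] for r in regions)  # lowest extent of any region
    top = max(r[3] for r in regions)     # highest extent of any region
    return [(left, bottom, right, top) for left, _, right, _ in regions]
```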
  • this area may be used to register thumb holds.
  • the behavior of the thumb hold could be configurable. It could serve as a thumb anchor and be ignored, or it could select a cursor submode, as explained below.
  • the input session begins in navigation mode. From here the user can employ gestures or press implementation-specific interface buttons to activate other input modes.
  • the means for accessing input modes varies from input mode to input mode. There are three classes of input modes: cursor modes for moving the cursor around via freeform gestures, character modes for entering characters via finger mapping gestures, and button modes for entering characters using conventional virtual keyboards.
  • Navigation Mode -- This is a cursor mode that allows the user to move the cursor around the text. Clipboard operations are also possible from this input mode.
  • Highlighting Mode -- This is a cursor mode in which the user highlights text. The clipboard operations are available from this input mode.
  • Number/Pinky Mode -- This is a character mode for entering number characters. It is only available as a submode of the lowercase and uppercase modes. The mode may also provide access to application or operating system functions.
  • Keyboard Mode -- This is a button mode that allows the user to enter characters via a conventional keyboard. This mode provides easy access to the standard way to input text for users who have not trained with the gesture-based system.
  • Circles designate input modes and dashed regions designate superstates.
  • Solid arrows designate non-nesting transitions between input modes. Dotted arrows depict transitions between primary input modes and their submodes.
  • the primary input modes are navigation mode, lowercase mode, and uppercase mode.
  • the transition away from a primary mode to a submode is temporary and lasts only for the duration of the hold gesture that induced the submode; the dotted return arrows labeled "release hold" restore the active mode to the input mode that was active at the time the hold gesture was performed.
  • the arrows that point from a superstate to an input mode are transitions that are available to each input mode within the superstate.
  • the navigation mode is shown in a bold circle because it is the first input mode of the input session.
  • the black dot represents an exit from the character entry system by closing the input area.
  • Case-Change Tap -- This is a two-fingered double-tap performed on a finger mapping. If the two tapping fingers are not adjacent fingers (if they are index and ring, index and pinky, or middle and pinky), then after the tap gesture times out waiting for a third tap, the gesture completes as a case-change tap. This gesture is a toggle between the lowercase and uppercase modes.
  • Pinky Hold -- This is a hold of the pinky finger on a finger mapping. While the pinky finger is down, the remaining three fingers may perform gestures for selecting behaviors not available to the lowercase and uppercase modes.
  • Cursor Mode Slide -- This is an adjacent-finger two-fingered sweep to either the left or the right, performed anywhere on the input area.
  • Optional-Tap Thumb Hold -- This is a hold gesture that could be performed by any finger or thumb but which in practice is normally the thumb. Using the thumb allows the user to perform cursoring gestures simultaneously with the hold.
  • the hold is optionally the concluding gesture of a triple-tap, but no taps are required.
  • the triple-tap is optional to allow the user to perform the same gesture that also activates highlighting mode from a character mode, when the character entry system supports thumb-hold cursoring and highlighting from character modes.
  • the gesture opens a conventional virtual keyboard in the input area.
  • Double-Tap Thumb Hold -- This is an optionally supported gesture. While in a finger mapping mode, the thumb may perform a double-tap hold gesture -- ending the double-tap with a hold -- to enter navigation mode as a submode. While the thumb is held, the regular navigation mode gestures are available to the other fingers. Normally the index finger performs the navigation mode gestures.
  • Triple-Tap Thumb Hold -- This is an optionally supported gesture, which should be supported if the preceding double-tap thumb hold transition is also supported. While in a finger mapping mode, the thumb may perform a triple-tap hold gesture -- ending the triple-tap with a hold -- to enter highlighting mode as a submode. While the thumb is held, the regular highlighting mode gestures are available to the other fingers. Normally the index finger performs the highlighting mode gestures.
  • the character entry system need not retain state information about text that may be in the host area.
  • the host is responsible for maintaining the text.
  • the character entry system only makes requests of the host, and only hosts that support the particular requests perform the associated behaviors. However, user actions in the character entry system are intended to produce specific behaviors in the host, and the host should honor the requests as much as it is able. In particular, only a single contiguous length of text should be highlighted at any time. When text is highlighted, the cut and copy gestures are available. If text is highlighted when the user performs a gesture that deletes a character or a word, only the highlighted text should be deleted. If text is highlighted when the user inputs a new character, the new character should replace the highlighted text. Likewise, if a paste gesture is performed while text is highlighted, and if the clipboard is non-empty, the contents of the clipboard should replace the highlighted region. Finally, if the cursor is moved while a region of text is highlighted, without also changing the extent of the highlighted region, existing highlighting should be removed (without removing the associated text).
  • the character entry system may support thumb holds in the region of the input area on the thumb-side of a finger mapping. It is an option for the system to accept thumb anchoring here and never to interpret thumb gestures, so that the thumb only ever helps to steady the hand in home position. However, it is also an option to employ thumb holds for implementing cursor submodes of the character modes. In this case, a double-tap thumb hold activates navigation mode for the duration of the hold, and a triple-tap thumb hold activates highlighting mode for the duration of the hold.
  • the character entry system could interpret a gesture performed outside the region of a finger mapping as a potential cursor gesture.
  • the system would activate navigation mode when it detects a gesture initiated outside of the finger mapping region and not recognized by character mode but known to navigation mode. The gesture is then interpreted from navigation mode. When the gesture completes, the system remains in navigation mode, unless the gesture itself selected a different input mode.
  • the above state diagram does not depict this possible transition to navigation mode. To support this feature, it would be helpful to depict the finger mapping region on the input area so the user can see where cursor gestures might be initiated.
  • the thumb may still anchor in the cursor-gesture sensitive space, provided that it is kept steady so it doesn't register cursor gestures.
  • a number of actions are available to all of the primary input modes -- lowercase mode, uppercase mode, and navigation mode.
  • the gestures of these actions are identical in all of these modes.
  • the gestures are not sensitive to the finger mapping when performed in a character mode.
  • the interpretations of these gestures are identical across input modes as well.
  • Highlight All -- In this gesture, the user quickly slides a finger on the input area in the rough outline of a circle.
  • This gesture may be distinguished from the cut gesture by comparing the bounding box of the finger's path to the distance between the start and end locations of the finger. If the smallest dimension of the bounding box is at least 40% greater than the distance between the finger's starting and ending locations, the user can be considered to have performed a circle.
  • This gesture issues a request to highlight all of the text.
  • Cut -- In this gesture, the user quickly slides the finger on the input area in the rough shape of an X, without lifting the finger during the gesture.
  • This gesture may be distinguished from the highlight all gesture by comparing the bounding box of the gesture to the distance between the start and end locations of the finger. If the smallest dimension of the bounding box is no more than 40% greater than the distance between the finger's starting and ending locations, the user can be considered to have performed a cut.
  • This gesture issues a request to delete the highlighted text.
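The bounding-box heuristic described in the bullets above can be sketched as follows. This is a minimal illustration rather than part of the specification; the `classify_closed_shape` function name and the point-list representation of the finger's path are assumptions.

```python
import math

def classify_closed_shape(path):
    """Classify a finger's traced path as a rough circle (highlight all)
    or an X shape (cut), using the 40% bounding-box heuristic above.

    path: list of (x, y) points sampled along the finger's trace.
    """
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    # Smallest dimension of the bounding box of the finger's path.
    min_dim = min(max(xs) - min(xs), max(ys) - min(ys))
    # Distance between the finger's starting and ending locations.
    (x0, y0), (x1, y1) = path[0], path[-1]
    span = math.hypot(x1 - x0, y1 - y0)
    # A circle returns near its starting point, so its bounding box is
    # large relative to the start-to-end distance; an X is not.
    return "circle" if min_dim >= 1.4 * span else "cut"
```

For example, a path that loops back near its start classifies as a circle, while the two joined diagonal strokes of an X classify as a cut.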
  • Paste -- This is a conventional touchscreen zooming gesture (as in pinching/zooming). The start and end sizes of the gesture are not significant, as they are in its conventional zoom-out interpretation. This gesture issues a paste-from-clipboard request. If no text is highlighted, the host inserts the contents of the clipboard at the cursor location. If text is highlighted, the host deletes the highlighted text and inserts the contents of the clipboard in its place.
  • the gesture may be performed in any direction (with any orientation).
  • Undo -- This is a left-right scrub with a scrub count of at least three. This gesture performs an undo operation, which may be implemented by either the character entry system or the host. Note that the interpretation of "left" and "right" depends on whether the undo is being performed from a cursor mode or a character mode, the latter of which implements finger mapping.
  • Redo -- This is a double-tap horizontal scrub with a scrub count of at least three. This gesture performs a redo operation, which may be implemented by the character entry system or the host. It redoes a preceding undo. Note that the interpretation of "left" and "right" depends on whether the redo is being performed from a cursor mode or a character mode, the latter of which implements finger mapping.
  • the character modes are the input modes for entering characters into the character entry system. They are lowercase mode, uppercase mode, and number/pinky mode. Number/pinky mode is a submode of each of the other two. They are all finger mapped, employing the home position calibrations that are established by the most recent hand selection slide, adjusted by gesture length refinement, tickling, dynamic recalibration, and user-controlled calibration preferences.
  • Tab -- This is a double-tap stroke of any length to the right, performed by either the index finger or the ring finger. This gesture issues a tab to the host. It is up to the host to decide how to interpret the tab. Some hosts interpret all tab characters as insertions of tabs into the text. Some hosts interpret tab characters as navigation among fields, hence the support for tabs from navigation mode.
  • the lowercase and uppercase modes both support the following gestures:
  • New Line -- This is a double-tap stroke to the left of any length performed by the ring or pinky finger.
  • the gesture issues a new line sequence to the host.
  • the new line sequence varies by operating system. For example, Unix-variety operating systems use the ASCII line feed, while Microsoft Windows-variety operating systems use the two-character sequence ASCII carriage return followed by ASCII line feed.
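As a small sketch, the new line sequence could be looked up from the host's operating-system convention (the Windows convention is ASCII carriage return followed by line feed). The dictionary keys here are hypothetical identifiers, not part of the specification.

```python
# Hypothetical mapping from host OS family to its new line sequence.
NEW_LINE_SEQUENCES = {
    "unix": "\n",       # ASCII line feed (decimal 10)
    "windows": "\r\n",  # ASCII carriage return (13) followed by line feed (10)
}

def new_line_sequence(host_os):
    """Return the sequence the New Line gesture should issue to the host,
    defaulting to a bare line feed for unknown hosts."""
    return NEW_LINE_SEQUENCES.get(host_os, "\n")
```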
  • Line Break -- This is a triple-tap stroke to the left of any length performed by the ring or pinky finger. It issues a line break to the host. Line breaks do not correspond to ASCII characters. Many applications on a standard computer employing a conventional physical keyboard allow the user to press shift-enter in order to insert a "soft return" or "line break." This gesture provides line breaks for hosts that support it.
  • Delete Word -- This is a long stroke to the left performed by either the index or middle finger. It issues a request to the host to delete the previous word. If no text is highlighted, the host deletes the word that precedes the cursor, if there is any. If text is highlighted, the host only deletes the highlighted text.
  • Dash -- This is a short stroke to the right performed by either the ring finger or the pinky. It requests the insertion of a dash (decimal 45 in ASCII) character.
  • the character modes map most characters and symbols to up and down strokes and sweeps of the finger mapping.
  • Each finger may perform a short stroke up, a long stroke up, a short stroke down, or a long stroke down. These strokes may be embellished by a double-tap or triple-tap, causing them to select different characters according to the tap count.
  • each pair of adjacent fingers (index and middle, middle and ring, or ring and pinky) may jointly perform a sweep.
  • the following tables depict the various up/down strokes and sweeps available from lowercase mode. Although the index finger column of each table is on the left side of the table, the tables characterize the finger maps for both the left and right hands.
  • the tables for uppercase mode are not shown.
  • the uppercase mode tables can be computed from these lowercase mode tables by transforming the letter cases of all the letter characters; all lowercase letters would become uppercase letters, and all uppercase letters would become lowercase letters.
  • the tables indicate the characters that the character entry system requests the host to insert.
  • Number/pinky mode is a submode of both lowercase mode and uppercase mode.
  • the finger map for this submode is the same regardless of which input mode it was activated from.
  • the pinky finger remains in a hold gesture for the duration of number/pinky mode, so the remaining fingers select the character.
  • This system specification assigns characters to both the short and long up and down strokes, but it does not assign characters or other behaviors to the double-tap or triple-tap versions of these gestures.
  • the double-tap and triple-tap gestures of number/pinky mode are therefore available to request application or operating system specific behaviors.
  • the following table depicts the strokes available from number/pinky mode. Although the index finger column is on the left side of the table, the table characterizes the finger map for both the left and right hands. This table also indicates the characters that the character entry system requests the host to insert.
  • the cursor modes are navigation mode and highlighting mode.
  • Navigation mode moves the cursor around without highlighting any of the text
  • highlighting mode moves the cursor around, highlighting all text over which the cursor passes.
  • when the host receives requests to move the cursor without also extending a highlight, the host should remove any existing highlighting (but not the highlighted characters themselves).
  • the cursor modes do not employ finger mapping, so the user may enter the gestures anywhere on the input area using any finger.
  • Navigation mode and highlighting mode share the same gestures except for the following, which are only available to the navigation mode:
  • the gesture issues a new line sequence to the host.
  • the new line sequence varies by operating system. For example, Unix-variety operating systems use the ASCII line feed, while Microsoft Windows-variety operating systems use the two-character sequence ASCII carriage return followed by ASCII line feed. It is useful to have a new line gesture available from cursor mode so that the user can repeatedly hit enter on input fields until encountering an input field requiring editing.
  • Tab -- This is a double-tap stroke of any length to the right. This gesture issues a tab to the host. It is up to the host to decide how to interpret the tab. Some hosts interpret all tab characters as insertions of tabs into the text. Some hosts interpret tab characters as navigation among fields, hence the support for tabs from navigation mode.
  • Word Left -- This is a long stroke to the left. It requests that the host move the cursor one word to the left. If the cursor is in the middle of a word, the host should move the cursor to the beginning of the word. If the cursor is between words, the host should move the cursor to the start of the preceding word. Hosts that do not support word-left behavior may instead implement character-left.
  • Word Right -- This is a long stroke to the right. It requests that the host move the cursor to the start of the next word in the text. Hosts that do not support word-right behavior may instead implement character-right.
  • Hosts that do not support paragraph-down behavior may instead implement line-down.
  • the button modes allow the user to pull up a conventional virtual keyboard as desired. They dispense with the gesture system and display buttons for character entry.
  • Keyboard mode displays a conventional virtual keyboard. This virtual keyboard may itself be modal, but this specification doesn't dictate those modes.
  • Symbol table mode displays a tabular list of character symbols available for entry. In addition to symbols available through the system's finger mapping, this table may display international symbols, mathematical symbols, etc.
  • the symbols and keyboard keys are buttons that the user presses to make requests of the host, such as to insert or delete a character.
  • the virtual keyboard and symbol tables presumably have a way to switch between the two modes, as depicted in the state diagram. This specification doesn't dictate the mechanism though.
  • the user may also completely close either the virtual keyboard or symbol table, returning the character entry system to the gesture method of input.
  • the button modes could have a button for accomplishing this, or they may employ the Close Input Area gesture for this. From these modes, the Close Input Area gesture wouldn't close the input area; the gesture would just close the input mode. Upon closing either of these input modes, the navigation mode becomes the system's active mode.


Abstract

This document describes a class of systems for entering characters into a touchscreen device, such as an Apple iPad, an Android tablet, or a mobile phone with a touch-sensitive screen. These character entry systems allow the user to enter characters relatively fast using just one hand without needing to look at a keyboard. The user may enter characters with either the left or the right hand, may alternate hands periodically as desired, and on large devices, may even periodically reposition the hand. These systems accomplish this by defining special gestures and identifying the fingers being used to perform the gestures. Because fingers are identified by a "finger mapping" process, these systems can be called "finger-mapped character entry systems." The class of finger-mapped character entry systems is language-independent; this document defines them in a language-independent way. However, a particular character entry system may be optimized for a specific language or group of languages. This document concludes by specifying a character entry system optimized for English.

Description

Table of Contents
1. Introduction
2. Input Areas
3. Gesture Types
3.1. Basic Terminology
3.2. Two-Fingered Sweeps
3.3. Linear Gestures
3.4. Multi-Tap Gestures
3.5. Multi-Valued Gestures
3.6. Reversible Gestures
3.7. Scrubbing Gestures
3.8. Hold Gestures
4. Home Position
5. Finger Mapping
6. Calibration
6.1. Hand Selection Slide
6.2. Tickling
6.3. Gesture Length Refinement
6.4. Dynamic Recalibration
6.5. Wizard-Based Calibration
6.6. Visual Representation
7. System Specification
8. Two-Valued English System
8.1. Gesture Mapping
8.2. Input Session
8.3. Common Actions
8.4. Character Modes
8.5. Cursor Modes
8.6. Button Modes
1. Introduction
A finger-mapped character entry system is a user interface for entering text on a touchscreen device. It is analogous to a virtual keyboard, but it has no buttons. It may replace the virtual keyboard on a touchscreen device, literally occurring in the same space that would otherwise show a virtual keyboard. The space where the user enters characters into the system is called an "input area." A finger-mapped character entry system may have multiple input areas serving different purposes, such as one for each hand or one for character entry and another for cursor entry.
In this document, any reference to a "character entry system," or just a "system," is a reference to a finger-mapped character entry system. An "action" is a gesture or other user interface interaction that a user performs. An "event" is a notice of an action as received by the character entry system. The "host" or "host system" is the application or operating system in which the character entry system runs and to which the system reports information about user actions. A "behavior" is a specific function or activity that a character entry system or host might perform in response to a user action, such as moving a cursor or inserting a character into text. A "request" is a message that a character entry system sends to the host to ask the host to perform a behavior.
Input areas may be modal, supporting different "input modes." A character entry system having only one input area normally has at least two input modes, one for character actions and another for cursor actions. "Character actions" include character insertion and character deletion gestures. "Cursor actions" include cursor gestures and text highlighting gestures for clipboard operations. The user switches between input modes as needed to perform these actions. This document defines gestures for selecting the input mode, but an input area may also provide buttons for selecting the input mode.
Finger-mapped character entry systems dispense with keyboard buttons because they accept gestures and can identify the finger that performs each gesture. The effect of a gesture is a function of both the gesture and the finger that performs it, and the gestures available to each finger are capable of expressing a great variety of values. Altogether, a character entry system is capable of providing a single hand with enough gestural values to represent every character found on a conventional physical keyboard.
The technique for associating gestures with fingers is called "finger mapping." Finger mapping requires that the user place the hand on the input area in a particular fashion that is called "home position." In home position, the hand takes a posture similar to the posture of a hand resting on the home row of a physical keyboard. The fingers typically gesture by sliding up and down on the touchscreen from this position, usually to perform a multi-valued stroke. In a multi-valued stroke, the direction the finger slides selects a set of values, and the distance the finger slides in that direction selects a value from the set. Multi-valued strokes allow a single finger to express many values without requiring the hand to move. Because the gesture allows the finger to begin anywhere in the vertical space assigned to the finger, relative to the hand's home position, the gesture does not require the user to look at the input area to perform it. Finger mapping also supports multi-valued "sweeps" involving two (or more) fingers sliding at once.
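As an illustration of a multi-valued stroke, the following sketch resolves a stroke to one value by letting the direction select a set of values and the stroke's length select a value within the set. The two-element value sets, the `long_threshold` length, and screen coordinates with y growing downward are all assumptions made for this example; in practice the length threshold would come from the calibration.

```python
import math

def resolve_multi_valued_stroke(start, end, value_sets, long_threshold=40.0):
    """Resolve a multi-valued stroke: the direction picks a value set, and
    the stroke length (short vs. long) picks a value within that set.

    value_sets: e.g. {"up": ("a", "b"), "down": ("c", "d")}, where the
    first element is the short-stroke value and the second the long one.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    direction = "up" if dy < 0 else "down"  # screen y increases downward
    length = math.hypot(dx, dy)
    return value_sets[direction][1 if length >= long_threshold else 0]
```

A long upward stroke from (10, 100) to (12, 30) would select the second value of the "up" set, regardless of where in the finger's assigned vertical space the stroke began.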
Before a user can employ finger mapping on a character entry system, the system must be calibrated for the user's hand, recording information such as the location and orientation of the home position, the width of the hand, and the reach of the fingers. The calibration values may even be different for the left and right hands. Calibrations may also specify preferences for the length and timing properties of gestures, including the gestures of input modes not involving finger mapping. Since multiple users may use a single touchscreen device, an implementation may need to track multiple calibrations and allow for their dynamic selection. This document defines many calibration measures and describes several techniques for calibrating hands for finger mapping.
Character entry systems can be classified according to the maximum number of values a multi-valued gesture takes, the number of simultaneous finger or thumb touches it requires, and the language or language group for which it is designed. This document concludes with a specification of a two-valued two-touch English system, employing multi-valued gestures of at most two values and requiring only two simultaneous inputs. Two-valued systems are easiest to learn to use, and two-input systems are compatible with the broadest range of devices. The specification also describes a three-touch variant in which the user's thumb may rest unobtrusively on the touchscreen.
2. Input Areas
A user inputs characters into a character entry system by performing gestures on a touchscreen. One or more regions of the touchscreen must be able to interpret the gestures. Each region of the screen that is capable of interpreting gestures of the character entry system is called an "input area." It is envisioned that a character entry system would normally appear in place of the conventional virtual keyboard.
Input areas located in place of the conventional virtual keyboard.
The host determines when an input area is enabled, where on the touchscreen it is positioned, and what size and shape the input area takes. For example, one device may have a specific region of the screen always reserved for the input area, while another may display the input area only when the user selects an input field on the screen, removing the input area after the user has completed entering the field. The period during which an input area is enabled is called the "input session."
Generally, a user employs a character entry system to input text into the device, but a character entry system only issues requests to its host, and the host may interpret those requests any way it deems appropriate. When entering text, the host typically displays the text on the screen as it is entered. The portion of the screen available to the host for responding to character entry system requests is called the "host area." Because it is possible to implement character entry systems that do not render anything to the input area, it is possible for an input area and a host area to partially or wholly overlap, provided that the host area won't recognize the input area's gestures.
Alternative arrangements of input and host areas.
It's also possible for an application or operating system to employ multiple input areas at once. Here are some possible configurations:
• Each input area is dedicated to a different host and hence also to a different host area.
• Each input area is dedicated to a different hand.
• One input area is for character input, the other for moving the cursor.
• Each input area is dedicated to a different user for use in a multi-user game.
• One input area provides a tutorial for use of the second input area.
Here are some user interface elements that a character entry system might provide, in addition to supporting gestures for making host requests:
• A menu button for pulling up a dialog providing information or access to configuration or preferences.
• A help button for quick access to dialogs that explain the character entry system, or a button for initiating a tutorial.
• A calibration selection drop-down button or radio button for allowing the user to select from previously established calibrations. This is helpful when multiple users use the same device; this button selects the user. To support this, there should be a means for entering new users and assigning them names. The calibrations made while using the system as this user become associated with that user.
• Buttons for selecting the input mode, as alternatives to gestures, or in place of gestures where no gestures are available for the input mode.
• A button for closing the input area or character entry system.
Of course, any of these behaviors could be made available via gestures as well. The following are some examples of behaviors that might be especially useful as gestures:
• Spell check the current word or auto-correct the spelling of the current word.
• Pull up a definition of the current word or highlighted term.
• Transform the current word or the highlighted selection, such as to change the case of its characters.
• Format the highlighted selection, such as to make it bold, italic, underlined, etc. (which may require support from the host for formatting).
• Perform a behavior that is specific to the application or operating system, such as opening an application menu or changing the screen brightness.
The input area can also display information that may be helpful to the user, such as a characterization of the calibration currently in effect. This document describes some useful visual feedback components where they pertain.
Since the character entry system is designed mainly as a system of gestures performed on the input area, an implementation must be careful to distinguish gestures from interactions with visual interface elements. One approach is to place visual elements on the input area where they won't interfere with the gestures, or to only recognize tap gestures in regions of the input area where there are no buttons. Another approach is to distinguish button presses from gestures by interpreting a finger held in one place for an extended period of time as being a button press. This can be disambiguated from the "hold gesture" described later by recognizing a hold gesture only when the user simultaneously performs another gesture.
3. Gesture Types
In order to allow for the entry of a large number of characters, a finger-mapped character entry system must recognize a large number of simple, distinct gestures in the limited space of an input area. This section defines gestures that are useful for this purpose, independently of the behaviors that a character entry system may assign to them. The behavior of a gesture may depend on the input mode active at the time of the gesture or on the particular finger that performs the gesture. This section often specifies multiple approaches for computing values associated with gestures. In each case, it is important that a character entry system choose one of the approaches for computing the value and be consistent with the value's computation.
3.1. Basic Terminology
The gestures of this section are defined using the following terminology:
• Finger location ~ The "location" of a finger on a touchscreen is given by a point. It is not given by an area, as one might expect since the finger actually touches an area of the screen. The touchscreen device's operating system typically determines the coordinates of this point.
• Stroke ~ A "stroke" is a gesture whereby a user places a finger on the touchscreen, drags the finger some distance across the touchscreen, and lifts the finger. Multiple fingers may be engaged in strokes simultaneously.
• Starting location ~ The "starting location" of a stroke is the location at which the finger began the stroke.
• Current location ~ The "current location" of a finger is the location of the finger at some point during a stroke, prior to lifting the finger at the end of the stroke.
• Ending location ~ The "ending location" of a stroke is the location at which the finger is lifted upon completing a stroke.
• Vector ~ The "vector" of an in-progress stroke is the line segment that begins at the stroke's starting location and ends at the stroke's current location. The "vector" of a completed stroke is the line segment that begins at the stroke's starting location and ends at the stroke's ending location. The vector is straight regardless of the path that the finger traced. A vector also has a "direction," which points from the starting location to the current or ending location.
• Stroke length ~ The "length" of a stroke is the length of the stroke's vector.
• Stroke angle ~ The "angle" of a stroke is the angle of the stroke's vector relative to some fixed axis against which the character entry system measures all angles. Normally an edge of the screen serves as this fixed axis.
Illustration for terminology characterizing a single-finger stroke
3.2. Two-Fingered Sweeps
A "two-fingered sweep" (or just "sweep") is a gesture in which the user simultaneously sweeps two fingers across the touchscreen along roughly parallel paths; it is a gesture consisting of the simultaneous, parallel strokes of two fingers. A user employing a two-fingered sweep envisions touching both fingers to the screen at the same time, sweeping them for the same amount of time, and lifting them from the screen at the same time. In practice, users will vary in their ability to master the timing, so the system should implement a heuristic for detecting and approximating the sweep.
One possible heuristic is for the character entry system to delay for a predetermined amount of time the recognition of any single-finger stroke it detects. If the touchscreen registers a second finger while the first is still active and before the delay period has expired, the gesture is recognized as a two-fingered sweep; otherwise the gesture is recognized as a single-finger stroke. Once recognized as a two-finger sweep, the gesture ends when at least one of the fingers is lifted. This approach allows the character entry system to provide the user with feedback about the nature of the gesture during the gesture instead of having to wait for the gesture to end before recognizing it. The delay period could, but need not, be part of the calibration. A reasonable delay period would be 200ms.
Examples of a single hand performing two-fingered sweeps
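The delay heuristic above might be sketched as a small recognizer. The class and method names are illustrative only; a real implementation would be driven by the platform's touch events and their timestamps.

```python
class SweepRecognizer:
    """Sketch of the delay heuristic: a single-finger stroke is promoted
    to a two-fingered sweep only if a second finger touches down while
    the first is still active and within the delay period."""

    DELAY_MS = 200  # the reasonable delay period suggested above

    def __init__(self):
        self.first_down_time = None
        self.gesture = None  # None, "pending", "stroke", or "sweep"

    def finger_down(self, t_ms):
        if self.first_down_time is None:
            self.first_down_time = t_ms
            self.gesture = "pending"  # may still become a sweep
        elif self.gesture == "pending" and t_ms - self.first_down_time <= self.DELAY_MS:
            self.gesture = "sweep"    # second finger arrived in time

    def poll(self, t_ms):
        """Commit to a single-finger stroke once the delay expires."""
        if self.gesture == "pending" and t_ms - self.first_down_time > self.DELAY_MS:
            self.gesture = "stroke"
        return self.gesture
```

A second finger at 150 ms yields a sweep; a first finger still alone at 250 ms commits to a single-finger stroke, at which point the system can begin giving feedback about the gesture.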
Two-fingered sweeps have the following properties:
• Finger start locations ~ These are the starting locations of both fingers' strokes.
• Finger end locations -- These are the ending locations of both fingers' strokes.
• Sweep vector ~ This is either the vector of one of the finger strokes or it is the line segment that connects the location midway between the starting locations of the fingers and the location midway between the current (or ending) locations of the fingers. The midway locations ("midpoints") may be calculated as the averages of the coordinates; midway between (a, b) and (c, d) is (a/2 + c/2, b/2 + d/2). When computing the sweep vector from the midpoints, the direction of the sweep vector points from the midpoint between the starting locations to the midpoint between the current (or ending) locations.
• Sweep length ~ This is the length of the sweep vector. When the strokes of the two fingers are parallel, the length of the sweep vector as computed from the midpoints will equal the average of the lengths of the sweep's two strokes. The average of the lengths may be used as an approximation at some sacrifice to the sensitivity of the gesture to the particulars of the sweep vector.
• Sweep angle ~ This is either the angle of the stroke of one of the fingers or the average of the angles of the strokes of both fingers.
• Finger separation distance -- This is either the distance between the two finger start locations or the distance between the two finger end locations.
• Line separation distance -- This is the distance between the two parallel lines traced by the fingers. The vector of each finger's stroke may be thought of as part of an infinitely long line, and the line separation distance is the distance between these two lines. Although the distance between parallel lines is a standard mathematical calculation, the two fingers may not trace exactly parallel lines. An approximation must be used instead, such as calculating the line separation distance as the finger separation distance times the sine of the angle between the sweep vector and the orientation line.
• Orientation line -- This is either the line that passes through the finger start locations or the line that passes through the finger end locations. These are distinguished as the "starting" and "ending" orientation lines.
• Finger identities ~ These are the names of the particular fingers that are performing the sweep, when this can be determined, as within a finger map.
Properties of a two-fingered sweep, illustrating some calculation methods
The following are some useful characterizations of two-fingered sweeps:
• Common line ~ A "common line" sweep is a two-fingered sweep in which the fingers trace roughly the same line. This can be detected as a sweep in which the line separation distance is less than a certain value. Ideally, this value would be a function of the finger separation distance, as the farther apart the fingers are, the more tolerant the character entry system should be of error. A line separation distance of under 4mm plus 15% of the finger separation distance is reasonable. The 4mm constant reflects the fact that users cannot achieve arbitrary accuracy.
• Common line perpendicular -- A "common line perpendicular" sweep is a two-fingered sweep in which the fingers trace lines that are roughly perpendicular to the sweep's orientation line. To detect this gesture the character entry system compares the sweep angle to the angle of the orientation line. It is reasonable to consider the sweep to be common line perpendicular when the difference between these angles falls in the range of 90 degrees plus or minus 25 degrees (or some other variation around a right angle suitable for the particular system).
• Askew ~ An "askew" sweep is a two-fingered sweep that is neither common line nor common line perpendicular.
• Adjacent-finger ~ An "adjacent-finger" sweep is a two-fingered sweep in which the fingers appear to be immediately adjacent to each other. Fingers are considered to be adjacent when the finger separation distance is less than a certain maximum. Ideally, this value would be a function of the size of the user's fingers and hence part of the calibration. However, in situations where a character entry system may need to determine adjacency prior to having a calibration, 20mm is a reasonable maximum. In an adjacent-finger sweep, finger and line separation distances are not informative beyond their use in identifying the sweep as adjacent-finger.
• Spaced-finger ~ A "spaced-finger" sweep is a two-fingered sweep that is not adjacent-finger.
A "common line" two-fingered sweep, in which the line separation distance is less than the finger separation distance times 0.15 plus 4mm
A "common line perpendicular" two-fingered sweep
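The midpoint, separation, and common-line calculations described above can be sketched as follows. The function name is illustrative; coordinates are assumed to be in millimeters so the 4mm constant applies directly, and the approximation takes the sine of the angle between the sweep vector and the starting orientation line.

```python
import math

def sweep_properties(f1_start, f1_end, f2_start, f2_end):
    """Compute a two-fingered sweep's length, finger separation, line
    separation (approximated), and common-line classification.
    All points are (x, y) pairs; units are assumed to be millimeters."""
    # Sweep vector from the midpoint of the starts to the midpoint of the ends.
    msx, msy = (f1_start[0] + f2_start[0]) / 2, (f1_start[1] + f2_start[1]) / 2
    mex, mey = (f1_end[0] + f2_end[0]) / 2, (f1_end[1] + f2_end[1]) / 2
    sweep_length = math.hypot(mex - msx, mey - msy)

    # Finger separation distance, measured at the start locations.
    finger_sep = math.hypot(f2_start[0] - f1_start[0], f2_start[1] - f1_start[1])

    # Approximate line separation: finger separation times the sine of the
    # angle between the sweep vector and the starting orientation line.
    sweep_angle = math.atan2(mey - msy, mex - msx)
    orient_angle = math.atan2(f2_start[1] - f1_start[1], f2_start[0] - f1_start[0])
    line_sep = abs(finger_sep * math.sin(sweep_angle - orient_angle))

    # Common-line test: under 4mm plus 15% of the finger separation.
    is_common_line = line_sep < 4.0 + 0.15 * finger_sep
    return sweep_length, finger_sep, line_sep, is_common_line
```

Two fingers sweeping along the same horizontal line classify as common line, while side-by-side fingers sweeping perpendicular to their orientation line do not.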
The two-fingered sweep can be generalized into a sweep of any number of fingers -- a "multi-fingered" sweep. Additional fingers added during the gesture could be included as part of the gesture. The sweep vector could be the vector of the stroke of any one of the fingers, the line segment going from the average of the starting locations to the average of the ending locations, or the result of some other center-of-mass calculation over the starting and ending locations. The finger separation distance could be the distance between the furthest-separated fingers, either at the start or end of the sweep, and the orientation line could be the line through these same two finger locations. The line separation distance would likely need to be the smallest line separation distance among all the fingers when calculated pairwise, so that the character entry system can detect a user's attempt to be common line.
3.3. Linear Gestures
A "linear gesture" is a gesture with a length and a two-valued direction. The length is computed as a function of the degree to which the gesture projects onto one or more predetermined lines called "reference lines." The two directional values are given by the direction of the gesture along one of the reference lines. Linear gestures have an effect similar to that of scrolling the contents of a mobile touchscreen that allows for vertical scrolling but not horizontal scrolling: the screen scrolls to the degree that the finger moves up and down along the screen, while ignoring horizontal motions of the finger. This example corresponds to a vertical reference line, but the reference lines of linear gestures may have any slope, and there may be more than one reference line.
Reference lines pre-exist the input of a linear gesture, but the particular reference lines employed by a linear gesture may be a function of the region of the touchscreen in which the gesture begins, a function of the particular fingers performing the gesture, or a function of the direction in which the gesture begins. In the latter case, the character entry system would need to implement a gesture length threshold beyond which the direction of the gesture and reference lines are selected. Although reference lines are "lines," for purposes of determining a linear gesture's length and direction, their only salient characteristics are their slopes; an implementation projects measures of the gesture onto reference lines, so the location of the reference line in space doesn't matter.
There are linear versions of one-finger strokes and two-fingered sweeps. A "linear stroke" is the linear version of a one-finger stroke. It is a stroke whose length is computed by projecting the vector of the stroke onto a reference line; the length of the projection is the length of the gesture. The direction of the linear stroke is the direction of the projected vector along the reference line.
[Figure: Computing the length of a linear stroke relative to a reference line.]
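A minimal sketch of this projection (assuming 2D points and a reference line characterized only by its slope angle, per the note above that the line's position in space doesn't matter; the names are illustrative):

```python
import math

def linear_stroke(start, end, ref_slope_deg):
    """Return (length, direction) of a stroke projected onto a reference line.

    direction is +1 along the reference line's slope angle and -1 opposite it,
    giving the gesture its two-valued direction.
    """
    vx, vy = end[0] - start[0], end[1] - start[1]
    rad = math.radians(ref_slope_deg)
    ux, uy = math.cos(rad), math.sin(rad)   # unit vector along the reference line
    proj = vx * ux + vy * uy                # signed scalar projection
    return abs(proj), (1 if proj >= 0 else -1)
```

For a vertical reference line (90 degrees), a stroke from (0, 0) to (3, 4) has length 4: the horizontal component is ignored, just as horizontal finger motion is ignored when scrolling vertically.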
A "linear sweep" is a linear version of the two-fingered sweep. To the user, the gesture is identical to a two-fingered sweep, though its length and direction are computed differently. A linear sweep's length can be defined using either one reference line or two. When using one reference line, the length is the length of the projection of the sweep vector onto the reference line. The direction of the linear sweep is the direction along the reference line that the projection of the sweep vector points. If each finger has an associated reference line, it's most intuitive to measure the linear sweep against a third reference line that bisects the region between the two finger reference lines.
When employing linear sweeps that use two reference lines, one of the reference lines applies to one finger and the other applies to the other finger. Each finger is treated as performing a linear stroke relative to its applicable reference line, and the length of the linear sweep is the average of the lengths of the two linear strokes. The direction of this linear sweep can be indicated as either the direction of one of the linear strokes or as the direction of the sweep vector (which might itself be that of one of the strokes). The character entry system may choose to ignore the gesture or report an error when the directions of the two linear strokes do not agree. The linear stroke directions disagree when the projection of the vector of one of the strokes onto the vector line of the other stroke results in a vector that points in the direction opposite that of the other stroke.

3.4. Multi-Tap Gestures
A "multi-tap gesture" is a gesture in which the user uses one or more fingers to tap the touchscreen two or more times in rapid succession and concludes by performing a gesture with the final tap. That is, instead of lifting the finger or fingers from the touchscreen after contacting the touchscreen for the final tap, the finger or fingers instead remain on the touchscreen and perform a gesture before lifting.
The number of taps that the user performs in a multi-tap gesture is called the "tap count." A two-tap gesture is called a "double-tap gesture" and a three-tap gesture is called a "triple-tap gesture." It's also possible to have tap counts of four or more.
A "tap stroke" is a gesture in which the user taps the touchscreen a number of times equal to the tap count, and rather than lift the finger from the touchscreen after performing the final tap, the user performs a stroke with the finger. The character entry system registers the gesture as a stroke with a tap count. It's useful to refer to "double- tap strokes" and "triple-tap strokes." Tap strokes may also be linear.
A "tap sweep" is a gesture in which the user taps the touchscreen simultaneously with two fingers, doing so a number of times equal to the tap count, and rather than lift the fingers from the touchscreen after performing the final tap, the user performs a two- fingered sweep with the fingers. The character entry system registers the gesture as a two-fingered sweep with a tap count. It's useful to refer to "double-tap sweeps" and "triple-tap sweeps." Tap sweeps may also be linear.
Multi-tap gestures are governed by the following values:
• Tap radius ~ Successive taps of a single multi-tap gesture must occur at roughly the same location (or locations) on the touchscreen. When a character entry system first registers a touch, it looks for subsequent touches within a specified radius of each of the original touches. The maximum distance allowed between successive taps is called the "tap radius." It is also possible to require that all successive taps be no further than the tap radius from the initial tap. 4mm is a reasonable tap radius, but ideally it would be tailored to the user.
• Maximum tap length -- It is possible that a user intending to tap a finger instead causes the touchscreen to register a short stroke. The character entry system should only register strokes and two-fingered sweeps that are longer than the "maximum tap length"; shorter gestures should be registered as taps. 4mm is a reasonable maximum tap length, but ideally it would be tailored to the user.
• Tap timeout ~ In order to register successive touches on the touchscreen as taps of a single multi-tap gesture, the character entry system must implement a "tap timeout." The timeout period begins the instant the finger touches the touchscreen. The user may only increase the tap count by touching the touchscreen again within the timeout period. This period includes the time required for the user to lift the finger and place it again. If at the end of the timeout period no concluding gesture has been registered and the finger or fingers are not at the tap locations, the gesture is interpreted as a tap sequence but not a multi-tap gesture. 300ms is a reasonable tap timeout, but ideally it would be tailored to the user.
All of these values are suitable for inclusion in the per-user calibration. However, the tap timeout may be the most important for the user to control.
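The three values above might be combined into simple bookkeeping along these lines (a hedged sketch; the class name and the choice of measuring the tap radius from the initial tap, which the text notes as one possibility, are assumptions):

```python
import math

# Suggested defaults from the text; ideally tailored to the user's calibration.
TAP_RADIUS = 4.0      # mm -- maximum distance between successive taps
MAX_TAP_LENGTH = 4.0  # mm -- longer gestures register as strokes, not taps
TAP_TIMEOUT = 0.300   # seconds -- window for the next touch, from touch-down

class MultiTapTracker:
    def __init__(self):
        self.tap_count = 0
        self.anchor = None     # location of the initial tap
        self.last_down = None  # time the most recent touch began

    def touch_down(self, loc, t):
        near = (self.anchor is not None and
                math.dist(loc, self.anchor) <= TAP_RADIUS)
        in_time = (self.last_down is not None and
                   t - self.last_down <= TAP_TIMEOUT)
        if near and in_time:
            self.tap_count += 1  # another tap of the same multi-tap gesture
        else:
            self.tap_count = 1   # start a new (potential) multi-tap gesture
            self.anchor = loc
        self.last_down = t

    def touch_up(self, loc):
        """Classify the lift: a tap if the finger moved no more than the
        maximum tap length, otherwise a stroke carrying the tap count."""
        kind = "tap" if math.dist(loc, self.anchor) <= MAX_TAP_LENGTH else "stroke"
        return (kind, self.tap_count)
```

A tap, a re-touch within the tap radius and timeout, and a concluding stroke would then register as a double-tap stroke.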
3.5. Multi-Valued Gestures
A "multi-valued gesture" is a stroke or two-fingered sweep that selects a value from a set of values according to the length and direction of the gesture. The set of values may depend on the region of the input area where the gesture begins, but the gesture is not selecting from among screen elements with fixed locations that pre-exist the gesture. Linear strokes and sweeps may also be multi-valued.
The region of the input area where the gesture begins, combined with the type and direction of the gesture, together select the set from which the gesture chooses. The length of the gesture selects a value from the set. The values need not be uniformly distributed across the length, and the sets need not have the same number of values. Not every gesture type or direction need be associated with a set of values, and not all of the value sets need be unique to the region, gesture type, or gesture direction.
[Figure: Examples of the values of multi-valued gestures as a function of region of the input area and the type, direction, and length of the gesture. Gesture lengths are computed as described in the text.]
To clarify, the character entry system may partition the input area into regions, perhaps doing so as a dynamic function of the calibration in effect. Any one-finger stroke within a given region has access to the same value sets, regardless of where in the region the stroke starts, and regardless of where the stroke ends, even if it ends in a different region. A two-fingered sweep within the same region may provide access to a different collection of value sets, but as with one-finger strokes, every two-fingered sweep within a given region provides access to the same value sets. There may be applications where having a multi-valued gesture cross from one region to another produces a different value set, perhaps even representing a single value by doing so, but such interpretations may not be useful to character entry systems. Finally, value sets may be associated with gesture direction by assigning value sets to ranges of the gesture angle.
It may be useful for the character entry system to show the user the value that would be selected were the user to lift the finger (or fingers) at some point during a multi-valued gesture. This can only be done if the character entry system selects a value set for the gesture prior to the completion of the gesture -- in fact, soon after the gesture originates. A character entry system can accomplish this by implementing a "set selection threshold," which is the minimum gesture length beyond which the value set will be selected. Once selected, the value set cannot subsequently be changed by changing the direction of the gesture. Should the user change the direction anyway, the character entry system could ignore the gesture, report an error, continue to select from the set as a function of length, or apply some other way of influencing the selection.
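A minimal sketch of length-based value selection gated by a set selection threshold (the value sets, step size, and thresholds here are invented for illustration; a real system derives them from the finger map and per-user calibration):

```python
# Illustrative constants -- not prescribed by the text.
SET_SELECTION_THRESHOLD = 6.0  # mm of gesture length before the set locks in
STEP = 8.0                     # mm of length allotted to each value in a set

# Value sets keyed by the gesture's two-valued direction (e.g. along a
# vertical reference line); sets need not be the same size.
VALUE_SETS = {+1: ["a", "b", "c"], -1: ["d", "e", "f"]}

def select_value(length, direction):
    """Pick a value from the direction's set by gesture length, or None while
    the gesture is still below the set selection threshold (no set chosen)."""
    if length < SET_SELECTION_THRESHOLD:
        return None
    values = VALUE_SETS[direction]
    index = min(int(length // STEP), len(values) - 1)  # clamp to the last value
    return values[index]
```

Values need not be uniformly distributed as they are here; a table of per-value length ranges, recorded per finger in the calibration, would serve the same role.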
A set selection threshold also gives the user some room for error even when the character entry system is not providing feedback. For example, a user preparing to begin a stroke may be moving a finger towards the bottom of the touchscreen in order to place the finger at the beginning of a stroke that will move up the screen. The momentum of moving the finger down to the starting position may produce a brief downward movement at the beginning of the stroke. This will frustrate the user if the character entry system detects this too soon and selects a value set accordingly.
There are some visual techniques for helping users to learn the values of multi-valued gestures. These techniques would kick in if the user has taken too long to complete the gesture. 500ms may be a reasonable timeout for some users, but most likely users will have different preferences for this timeout. Once the timeout occurs, one or more of the following features might engage, possibly according to the configured preferences:
• The device clicks audibly as the finger enters a value's length range.
• The device audibly announces the value that would be selected were the user to lift the finger (or fingers) at its present location, announcing that value every time the finger moves into the value's length range.
• The system pops up a small box that shows the value that would be selected.
• The system pops up a small box that shows a list of all the values in the set for the direction selected, highlighting the value that would be chosen were the user to lift the finger (or fingers). The highlighted value changes as the user moves the finger. If the system implements the gesture with reversible directions (see below), the list could include all of the values in the sets for both directions.
• The system inserts into the text at the current cursor location the character that would be selected should the finger (or fingers) be lifted, replacing this character as the user moves among the multiple values. The character might be displayed in a conspicuous style to indicate that it is provisional. This feature might require explicit support from the host and only makes sense for hosts that display text.

3.6. Reversible Gestures
A "reversible gesture" is a gesture that would report a length and a direction when complete but whose length can be increased or decreased dynamically during the gesture and whose eventually-to-be-reported direction may change during the gesture. The user may also nullify a reversible gesture after starting the gesture in order to prevent the character entry system from registering and acting upon the gesture.
Reversible gestures are useful for providing users with feedback during the gesture to help them see what value the gesture would take were they to end the gesture. They also allow a user to undo a gesture if they catch themselves prior to completing it.
Reversibility is defined only for those gestures that increase in length the farther the finger (or fingers) move from their starting locations. The gesture is said to "reverse" the instant the gesture length decreases during the gesture, and it is said to be "in reversal" while the length decreases. Whenever the gesture length increases, the gesture is said to "resume." The gesture ends when the user lifts the finger (or fingers) from the touchscreen, and the direction and length of the gesture at that point becomes the direction and length reported to the character entry system for the gesture.
A reversible gesture has a "no-value threshold," which is given as a gesture length. The instant a gesture in reversal assumes a length less than (or equal to) the no-value threshold, the gesture is nullified, such that if the user were to lift the finger (or fingers) at this point, the character entry system would ignore the gesture as if the user had input no gesture. However, if while in the no-value threshold the user keeps the fingers in contact with the touchscreen and resumes the gesture, the gesture would be reported should the user end the gesture without again reversing it.
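In code, the no-value bookkeeping might look like the following sketch (the 3 mm threshold and class name are illustrative; lengths are in whatever units the gesture lengths use):

```python
NO_VALUE_THRESHOLD = 3.0  # illustrative; generally near the starting location

class ReversibleGesture:
    def __init__(self):
        self.prev_length = 0.0
        self.in_reversal = False
        self.nullified = False

    def update(self, length):
        """Feed the current gesture length each time the finger moves."""
        if length < self.prev_length:
            self.in_reversal = True       # the gesture "reverses"
        elif length > self.prev_length:
            self.in_reversal = False      # the gesture "resumes"
        if self.in_reversal and length <= NO_VALUE_THRESHOLD:
            self.nullified = True         # lifting now would report nothing
        elif length > NO_VALUE_THRESHOLD:
            self.nullified = False        # resuming past the threshold revives it
        self.prev_length = length

    def finish(self, length, direction):
        """On lift: None for a nullified gesture, else (length, direction)."""
        return None if self.nullified else (length, direction)
```

Reversing into the no-value threshold nullifies the gesture, but keeping the finger down and resuming past the threshold makes it reportable again, as described above.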
A reversible multi-valued gesture has an additional constraint, at least when the character entry system is providing feedback about the current value of the gesture: the gesture's value set can only be changed by returning the finger (or fingers) to the no-value threshold. This essentially resets the gesture so that a new value set can be selected by subsequently exceeding the multi-valued gesture's set selection threshold again, a threshold which may actually equal the no-value threshold. Since the no-value threshold is generally close to the starting location of the finger or fingers, a user can only change the value set, mid-gesture, by first returning the finger or fingers to their approximate starting locations. However, a character entry system may opt to implement a reversible multi-valued gesture so that once a value set has been selected, no other value set can be selected for the duration of the gesture, even if reversed.
[Figure: Example of a reversing multi-valued gesture that allows value set changes, resulting in the value 'H'.]
[Figure: Example of a reversing multi-valued linear stroke that allows value set changes, resulting in the value 'F'.]
3.7. Scrubbing Gestures
A "scrubbing gesture" is a gesture consisting of constituent gestures that trace back and forth two or more times along roughly the same path. The motion of a scrubbing gesture is similar to the motion of rubbing an eraser back and forth on a page to erase a mark ~ for example, left, right, and left again, or up, down, and up again. The constituent gestures must be of a kind having a length and a direction, and all the constituent gestures of a given scrubbing gesture must be of the same kind. The value of a scrubbing gesture is a list of the vectors traced by the constituent gestures, in the order in which they were traced. Because at least one reversal is required, every scrubbing gesture reports at least two vectors. The number of vectors is called the "scrub count." The gesture ends when the user lifts the fingers.
Strokes and sweeps are suitable constituents of a scrubbing gesture. Should the user move the finger (or fingers) in one direction for the stroke or sweep and then lift the fingers, the gesture ends and registers as a stroke or sweep. However, if after moving the fingers in that first direction, the user keeps the fingers on the touchscreen and moves the fingers in the reverse direction, the gesture suddenly becomes a scrubbing gesture, provided that it is not also reversible. To allow the user some error with ending strokes and sweeps, it is useful to define a minimal length of the first reversing constituent gesture; upon exceeding this length, the gesture is henceforth a scrubbing.
The vectors of the constituent gestures must be roughly parallel. This is established by only detecting scrubbing gestures as those in which each of the vectors has an angle relative to the first vector less than a specific value. A maximum of a 30 degree difference among vectors is reasonable. An implementation has several options for enforcing this. The implementation could declare a gesture to be a scrubbing gesture should the second vector be within 30 degrees of the first and then end the scrubbing gesture, reporting only those subsequent vectors consecutive with the first that were within 30 degrees of the first. This effectively truncates a gesture at the point where it ceases to be a scrubbing gesture. Another option is to refuse to report a scrubbing gesture at all if any of the vectors exceeds the maximum angle.
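The truncation option might be sketched as follows (a hedged illustration; the unsigned-angle comparison, which ignores the back-and-forth reversal of sense between constituent vectors, is an assumption about how "angle relative to the first vector" is meant):

```python
import math

MAX_ANGLE_DEG = 30.0  # the reasonable maximum suggested in the text

def angle_between(v1, v2):
    """Unsigned angle in degrees between the lines of two vectors. Taking the
    absolute value of the dot product ignores sense, since alternate
    constituents of a scrub point opposite ways along roughly the same line."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = abs(dot) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(min(1.0, cos)))

def truncate_scrub(vectors):
    """Keep the leading run of constituent vectors within MAX_ANGLE_DEG of the
    first, truncating where the gesture ceases to be a scrubbing gesture."""
    kept = [vectors[0]]
    for v in vectors[1:]:
        if angle_between(vectors[0], v) > MAX_ANGLE_DEG:
            break
        kept.append(v)
    return kept
```

The refusal option would instead report no scrubbing gesture whenever any vector exceeds the maximum angle.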
[Figure: Example scrubbing strokes with scrub count 3.]
Scrubbing gestures cannot be reversible. Additionally, scrubbing gestures require extra effort to distinguish them from reversible gestures. One way to distinguish a scrubbing gesture from a reversible gesture is to only recognize reversible gestures as slow-moving gestures, indicating that the user is uncertain and might like feedback during the gesture for assistance; a fast-moving gesture would then be interpreted as a scrubbing. However, it is likely the user will perform the first vector of a reversible gesture quickly, and a user meaning to quickly undo an in-progress reversible gesture may quickly reverse the gesture back to its starting location. Hence, perhaps the most effective method for distinguishing scrubbing gestures from reversible gestures, where the two gestures may co-occur, is to only recognize as scrubbing gestures those gestures that satisfy all of the following conditions: (1) the gesture conforms to scrubbing gesture requirements, (2) is performed with at least a certain level of speed, and (3) has at least a minimum scrub count. Minimum scrub counts of 4 and 6 are reasonable because the user interprets these as going back and forth 2 or 3 times, respectively. Scrubbing gesture speed might be measured as the number of vectors per unit time.
3.8. Hold Gestures
A "hold gesture" is a placement of a finger or thumb, or a combination of fingers and thumbs, on the input area and holding them there for an extended period of time. Hold gestures are useful for engaging input modes and for allowing stray fingers or thumbs to rest on the input area without affecting the data being input. Some hold gestures may be held simultaneously with the input of other gestures, analogously to holding a shift or control key down while continuing to type other keys on a conventional keyboard. There are two kinds of hold gestures ~ "modal holds" and "anchor holds."
A "modal hold" is a hold that activates an input mode. The input mode lasts for the duration of the hold gesture. It is possible that from that activated input mode the user could again change to another input mode by means other than ending the modal hold, such as via a particular gesture available within the mode or by simultaneously engaging a second hold gesture. Normally, a modal hold allows the user to select an input mode with one finger or thumb while simultaneously entering gestures with the remaining fingers (and possibly thumb) of the same hand. This is most flexibly accomplished by allowing the thumb to hold the mode and the other fingers to perform gestures for that mode. However, it may also be accomplished by holding the pinky finger or the index finger. It is difficult to hold a mode with the middle two fingers.
An "anchor hold" is a hold that the character entry system ignores. An anchor hold allows the user to keep a finger or thumb on the input area while performing other gestures, without the finger or thumb influencing the input area. Touchscreen operating systems usually require that the user specify regions of the touchscreen that are to be sensitive to touch inputs, but those regions are usually rectangular. When a character entry system requires an irregularly shaped region of insensitivity, it may have to implement it itself, which it could do as an anchor hold.
To detect a hold gesture, the character entry system must disambiguate the gesture from other gestures the user could be inputting. The following are some techniques that may be employed for identifying hold gestures:
• One or more regions of the input area can be dedicated to receiving only hold gestures, or only certain hold gestures. The sizes, shapes, and locations of the regions might be a function of the particular calibration in effect. The gesture preceding and ending in the hold can select the particular hold gesture in effect. For example, the tap count of a tap hold could select among input modes.
• The particular finger or thumb (or combination) performing the gesture could identify both the gesture as a hold and the particular kind of hold being performed. Normally, this technique entails identifying the starting location of the gesture and hence may be viewed as an instance of identifying holds by region.
• The hold gesture can require a qualifying preceding gesture. For example, a region of the input area might accept many different gestures, only some of which result in a hold gesture when the user ends the gesture but keeps the finger or thumb (or combination) on the screen to persist the hold.
• A finger or thumb (or a combination of fingers and thumbs) can be interpreted as a hold once the gesture has been held for a certain length of time without significant motion of the touch inputs, as explained below. 1.5 seconds might be a reasonable threshold for this determination.
• When a hold gesture is capable of being interpreted in one of two ways, and one of those ways is as a modal hold, the modal hold interpretation can be selected by inputting another gesture simultaneous with the hold; the hold gesture isn't identified until a concurrent gesture is. For example, a user could place a finger to begin a multi-valued stroke. However, the user leaves the finger in place for a while, resulting in a hold. The character entry system could initially assume the user wants to see a list of the stroke's possible values and show them. If after showing them, the user instead enters a stroke with another finger, while retaining the hold, the character entry system would know to re-interpret the hold as a modal hold. If the user proceeds with the stroke, the system remains in the help mode.
Because the user's fingers cannot be trusted to remain perfectly still, a character entry system generally cannot identify a hold by touch inputs that do not move at all, and it cannot require that the touch inputs of an in-process hold never move at all. If the user is simultaneously entering gestures with the same hand that holds the mode, the finger or thumb holding the mode is sure to jiggle. The holding finger or thumb may even gradually drift across the touchscreen as the user works. Hence, the character entry system should instead measure the time-averaged speed of motion of the touch inputs.
There are many ways to measure the time-averaged speed of a touch input. Perhaps one of the simplest ways is as follows. First, establish a maximum average speed for the touch input. Exceeding this speed will cause the input to be interpreted as motion, and should this occur prior to establishing the gesture as a hold, it may prevent the gesture from being interpreted as a hold. Second, establish a speed polling window. Rather than setting an asynchronous timer, this can be done by maintaining a queue of touchscreen events for the touch input in question. Every new event for this touch input results in an examination of the queue. During this examination, toss out all events that contain no state information relevant to the most recent period of time given by the polling window. The oldest and most recent locations of the touch input, according to the remaining queue, are then computed, as is the distance between those locations. This distance is then divided by the period of the polling window, providing an average speed for the polling period. By this method, for suitable values of the polling window, only a sustained high speed will trigger motion detection. In particular, quick jitters of a finger or thumb are not likely to register as motion.
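The queue-based method just described might be sketched as follows (the window and speed constants are illustrative assumptions, as is the class name; per the text, both would suit per-user calibration):

```python
import math
from collections import deque

POLLING_WINDOW = 0.5  # seconds of history examined per event (illustrative)
MAX_AVG_SPEED = 10.0  # mm/s above which the touch counts as motion (illustrative)

class HoldMotionDetector:
    def __init__(self):
        self.events = deque()  # queue of (time_seconds, (x_mm, y_mm))

    def on_event(self, t, loc):
        """Record a touchscreen event and report whether sustained motion was
        detected over the polling window."""
        self.events.append((t, loc))
        # Toss out events with no state relevant to the polling window.
        while self.events and self.events[0][0] < t - POLLING_WINDOW:
            self.events.popleft()
        oldest = self.events[0][1]
        distance = math.dist(oldest, loc)
        # Divide by the window period: only sustained speed triggers detection.
        return (distance / POLLING_WINDOW) > MAX_AVG_SPEED
```

A quick jitter of a millimeter or two averages out to a low speed over the window, while a sustained drift of tens of millimeters does not.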
While detecting excessive motion may prevent the character entry system from interpreting some gestures as holds, it may be reasonable to allow the user to move some holds without ending the hold. In this case, the character entry system may opt to ignore motion detected during an already established hold. In these situations, the system need not bother monitoring gesture speed.
Because some people have more jittery hands than others, and because some people operate at a higher speed than others, it may be helpful to include parameters governing hold gestures as part of the user's calibration.
4. Home Position
The "home position" is a characterization of the posture, location, extent, and orientation of a hand that is calibrated for use with a finger-mapped character entry system. The term "home position" is meant to conjure both the posture that a hand takes to enter characters via the system and the place on the touchscreen where the user places the hand. It is analogous to "home row" on a conventional keyboard, which brings to mind both a posture of the hands and a location of the hands on the keyboard. However, unlike the "home row", the home position can be dynamically relocated on the input area. In practice, "home position" also refers to the collection of measures that characterize the hand and hence is part of a user's calibration.
The posture of a hand in home position is easy to explain. First, imagine both hands on a conventional keyboard with the fingers on the home row. That is, on a QWERTY keyboard, the fingers of the left hand are on "A-S-D-F", and the fingers of the right hand are on "J-K-L-;". Furthermore, the hands are slightly curled, and the thumbs are held in the air. That's the home row posture on a conventional keyboard. Now, leaving the hands in place, remove the keyboard, and put a touchscreen tablet or mobile phone under either hand in any orientation -- under just one hand, but it doesn't matter which one. Lift the fingers slightly off the touchscreen, and the hand is now in home position. To help keep the hand in home position, optionally rest the thumb either on the touchscreen or on the edge of the tablet or phone, just off the touchscreen. The middle two fingers may also relax slightly so that they are no longer perfectly in line with the index and pinky fingers, as required by the conventional keyboard.

There are no buttons with which a hand in home position aligns. Instead of pressing buttons, the user performs up, down, and side-to-side strokes and sweeps with the fingers, always returning to home position should any gesture cause the hand to stray a little. Most gestures, however, should not move the hand at all. The fingers only touch the touchscreen to enter a character and never rest on the screen between strokes.

The system employs a characterization of the hand in home position to determine which fingers are involved in a gesture. The system also employs knowledge of whether the left or right hand is performing the gesture, knowledge that is provided at calibration. As explained under "Hand Calibration," the user can change hands or relocate home position at any time, as desired. It's probably wise for users to train both hands to use the system so that the user can periodically switch hands and give each hand a turn to rest.
The user must keep the hand roughly in home position while entering characters. If the hand shifts too far out of the calibrated home position, finger gestures will not produce the expected results. To prevent this, the user may need to anchor the thumb on the device to help stabilize the hand. Depending on the device, the user may be able to anchor the thumb on the device somewhere off of the touchscreen. However, it may sometimes be more convenient or even necessary to anchor the thumb directly on the touchscreen. Finger-mapped character entry systems allow this on touchscreens that can support the additional touch input, implementing the thumb touch as an anchor hold. If the specific finger map of a character entry system employs only one-finger strokes, a two-input touchscreen suffices. If the finger map also employs two-fingered sweeps, the touchscreen must be able to support three simultaneous inputs.
An "up" or "upward" stroke of a finger in home position is the stroke a finger makes when it moves away from the palm, requiring it to uncurl a little from the home position posture. A "down" or "downward" stroke is the stroke a finger makes when it moves towards the palm, requiring it to curl more than is natural for home position posture. Likewise, "up" or "upward" sweeps involve fingers stroking upward, and "down" or "downward" sweeps involve fingers stroking downward.
Note that users may find it difficult to "push" their fingers up to generate an upward stroke or sweep from home position. This pushing action is similar to attempting to point a finger while keeping the finger in contact with the touchscreen. While this action will work, particularly on larger input areas, it is not an easy technique to master. Instead, users should think of dragging their fingers up by brushing their fingertips against the touchscreen. This way the hand remains more curled -- more in home position ~ not only easing character entry but also speeding up character entry.
5. Finger Mapping
"Finger mapping" is a technique for making user interface behaviors a function of both the finger that performs a gesture and the gesture that the finger performs. To identify the finger, finger mapping compares the location at which the finger begins a gesture to a characterization of the calibrated home position. The user must generally keep the hand in home position to ensure proper finger identification, so specialized gestures are required to associate each finger with a breadth of behaviors. Hence, finger mapping also includes a collection of recommended gestures. An association of behaviors with fingers and their gestures is called a "finger map."
Fingers are mapped while the user's hand is in home position. The location and orientation of home position varies from calibration to calibration, as do parameters characterizing the size of the user's hand and the reach of the user's fingers. Finger mapping necessarily occurs as a function of the home position and is here defined with respect to the following home position properties:
• Hand used ~ This is which hand the user is using, the left or the right.
• Finger rest locations ~ These are the point locations of the fingers were they to touch the touchscreen in their relaxed postures while in home position. In practice, these points will vary over time, as the fingers won't keep returning to the exact locations while entering characters. Instead, finger rest locations are used for particular instants in time as approximate characterizations of the home position.
• Breadth -- This is the distance from the index finger rest location to the pinky finger rest location at the time of home position calibration.
• Orientation line -- This is the line connecting the index finger rest location to the pinky finger rest location at the time of home position calibration.
• Finger reference lines ~ These are lines, one line for each finger, that each finger roughly parallels as it makes up and down strokes from home position. Each finger's reference line is positioned so that it intersects the finger's rest location at time of home position calibration.
• Gesture lengths -- These are the minimum and maximum lengths of the gestures of each finger. The strongest user calibration records gesture lengths for each finger separately. In particular, gesture lengths are associated with each value of each multi-valued gesture available in the finger map. For example, a character entry system will typically assign linear multi-valued gestures to each finger, such that each linear gesture's reference line is the finger's reference line. The gesture lengths would then be the minimum and maximum lengths of the up and down strokes to select each of the available values. The gesture lengths characterize the finger reach inclinations of each person's hand.
• Finger regions -- These are the regions of the input area that are assigned to each individual finger. Any stroke that starts in a region is assumed to be performed by the finger to which the region is assigned. Each finger region should be at least as long as the longest stroke the finger is capable of performing, so that the user can begin the stroke from either the top or the bottom of the region. Finger regions are retained by the calibration in relative terms so that the home position may be moved around and re-oriented and yet still employ the calibration regions.
(Figure: Some home position properties for a placement of the right hand. Gesture lengths are not shown because they don't have fixed positions.)
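The home position properties above can be gathered into a single calibration record. The following Python sketch is illustrative only -- none of these names come from the specification, and a real implementation would track additional state:

```python
from dataclasses import dataclass

Point = tuple[float, float]  # (x, y) touchscreen coordinates


@dataclass
class FingerCalibration:
    rest_location: Point       # finger rest location at calibration time
    reference_angle: float     # angle of the finger's reference line, degrees
    min_gesture_length: float  # minimum stroke length for the finger
    max_gesture_length: float  # maximum stroke length for the finger


@dataclass
class HomePositionCalibration:
    hand: str                              # "left" or "right"
    fingers: dict[str, FingerCalibration]  # keyed "index", "middle", "ring", "pinky"

    @property
    def breadth(self) -> float:
        """Distance from the index finger rest location to the pinky finger rest location."""
        ix, iy = self.fingers["index"].rest_location
        px, py = self.fingers["pinky"].rest_location
        return ((px - ix) ** 2 + (py - iy) ** 2) ** 0.5
```

Finger regions are omitted here because, as described above, they can be derived from the rest locations, reference lines, and gesture lengths.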
One finger region is associated with each finger of the hand. When the touchscreen detects a touch event in one of these regions, and the touch event begins a new gesture, the member of the hand touching the region is assumed to be the particular finger previously associated with the region. Identifying the finger also selects the set of possible gestures the finger may perform or partake in. The character entry system interprets the gesture according to both the finger used and the gesture it performs.
There are multiple ways to compute the shape, size, and locations of the finger regions. In the absence of a direct specification of this information, reasonable finger regions can be deduced from the finger rest locations and finger reference lines. One algorithm for accomplishing this follows:
1. If the reference lines of adjacent fingers are parallel, the boundary between their regions coincides with a parallel line placed halfway between the reference lines. If the reference lines are not parallel, determine the angle between them at their point of intersection; the boundary between the regions coincides with the line that both bisects this angle and intersects the same point. The boundaries merely coincide with these lines, rather than being the lines themselves, because the finger regions are not infinitely long.
2. The reference lines of the index and pinky fingers bisect their regions. The previous step determined the coinciding boundary lines between these fingers and their adjacent fingers. These are the "inner" coinciding lines of these fingers. The "outer" coinciding lines of the index and pinky fingers are determined separately for each finger. Each "outer" coinciding line is the line computed as the mirror image of the finger's inner coinciding line when mirrored around that finger's reference line. The outer boundaries of the index and pinky finger regions coincide with these lines.
3. The upper and lower boundaries of each finger region are determined separately for each finger. Each finger has an associated maximum gesture length. Technically, the maximum gesture length of an upward gesture may be different from that of a downward gesture, but this algorithm treats these two maximum lengths as being equal. The upper and lower boundaries of a finger's region coincide with the two lines that satisfy these criteria: (1) the lines are perpendicular to the finger's reference line, and (2) the lines intersect the reference line at a distance from the finger's rest location equal to the finger's maximum gesture length.
4. The boundary-coinciding lines defined above together form four isosceles trapezoids, one for each finger. The boundary of the region of any given finger is the particular trapezoid that contains the finger's rest location.
The boundaries so far defined are the minimum regions required for each finger. An implementation may simplify finger identity computations by extending the boundaries beyond these to align some of their edges along common boundaries. Extending the boundaries also provides users with the flexibility to perform side gestures intended for one finger using an adjacent finger; in this case, the regions must extend far enough to accommodate locations where the adjacent finger might start the gesture. Note that the outside boundary of the index finger should not be extended too far to the side, as this might encroach on the space where the user might place a thumb on the touchscreen.
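For illustration, the algorithm above can be sketched in the simplified case where all finger reference lines are parallel and vertical, so every region reduces to a rectangle. This Python sketch uses illustrative names; the general case requires the angle-bisection and trapezoid construction described above:

```python
def finger_regions(rest_xs, max_lengths, rest_y=0.0):
    """Compute rectangular finger regions for the simplified case in which
    all finger reference lines are parallel and vertical.

    rest_xs: x-coordinates of the finger rest locations, in order across
        the hand. max_lengths: maximum gesture length for each finger.
    Returns a list of (x_min, x_max, y_min, y_max) rectangles.
    """
    regions = []
    for i, x in enumerate(rest_xs):
        # Step 1: side boundaries halfway between adjacent parallel reference lines.
        if i > 0:
            left = (rest_xs[i - 1] + x) / 2
        else:
            # Step 2: the outer boundary mirrors the inner boundary
            # across the finger's own reference line.
            left = x - (rest_xs[1] - x) / 2
        if i < len(rest_xs) - 1:
            right = (x + rest_xs[i + 1]) / 2
        else:
            right = x + (x - rest_xs[-2]) / 2
        # Step 3: upper and lower boundaries at the maximum gesture length,
        # treated here as equal for upward and downward gestures.
        regions.append((left, right, rest_y - max_lengths[i], rest_y + max_lengths[i]))
    return regions
```

With non-parallel reference lines, the halfway lines become angle bisectors and the rectangles become the isosceles trapezoids of step 4.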
(Figure: Illustration of the sample algorithm for determining finger regions, showing the finger reference lines, finger rest locations, midpoints between reference lines, resulting angles, and the maximum gesture length of each finger. Instead of showing angle bisections at the intersections of reference lines, the illustration shows some of the angles that result from these bisections. The outermost boundary shows how an implementation might extend the regions to simplify finger identity computations. All lengths and angles shown were estimated, not measured.)
Although the home position calibration places the home position at an exact location on the input area, with specific finger rest locations and reference lines, etc., the calibration can be translated and rotated to new locations and orientations. To accomplish this, the rest location of the index finger is treated as the home position's absolute location, and the orientation line is treated as the home position's current orientation. All other properties are interpreted as relative to these more dynamically changeable properties. From this perspective, the calibration is independent of location and orientation and is only situated on the input area as most recently specified by the user.
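Reinterpreting the calibration relative to the index finger rest location and the orientation line amounts to a standard rigid transform. A sketch in Python, with illustrative names and an assumed calibration-relative frame in which the orientation line runs along the positive x-axis:

```python
import math


def situate(relative_points, new_index_rest, orientation_degrees):
    """Place calibration-relative points onto the input area.

    relative_points: points expressed relative to the index finger rest
        location, with the orientation line along the positive x-axis.
    new_index_rest: where the index finger rest location now falls.
    orientation_degrees: angle of the orientation line on the input area.
    """
    a = math.radians(orientation_degrees)
    cos_a, sin_a = math.cos(a), math.sin(a)
    ox, oy = new_index_rest
    out = []
    for x, y in relative_points:
        # Rotate into the new orientation, then translate to the new location.
        out.append((ox + x * cos_a - y * sin_a, oy + x * sin_a + y * cos_a))
    return out
```

Applying this transform to every stored point (rest locations, region corners) re-situates the whole calibration without recomputing it.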
A number of gestures are especially useful to fingers from home position. A particular character entry system specifies exactly which gestures are available to each finger and what behaviors these gestures produce -- this is a "finger map." There will likely be a different finger map for each character entry input mode of the system. Finger maps generally employ some combination of the following gestures:
• Multi-valued gestures -- The values of a multi-valued gesture are selected as a function of the length of the gesture. This length can be given by the normal form of a multi-valued gesture, where it is computed according to its distance from a point. That approach works well when the gesture is not reversible. The length can also be computed by employing linear multi-valued gestures. In this second case the reference lines of a linear gesture are exactly the reference lines of the fingers as specified for the home position, so that the lengths are computed by projection onto the reference lines. This second case is well-suited to gestures that are reversible, since the user reversing a finger back to the starting location has room for error; any location where the user might normally start a gesture with the finger then becomes a valid location for resetting the gesture.
• Single-finger strokes -- The hand easily performs up and down strokes of individual fingers without requiring the hand to move from home position. These strokes are typically implemented as multi-valued gestures.
• Two-fingered sweeps -- The hand can also easily perform up and down two-fingered sweeps with any pair of fingers. These may also be implemented as multi-valued gestures when more values are needed than are available from single-finger strokes alone. Adjacent-finger sweeps are probably easier on users than spaced-finger sweeps. Besides, several spaced-finger sweeps are part of the standard gesture repertoire and probably ought not be redefined. A linear multi-valued two-fingered sweep can be defined either with respect to the two reference lines of the fingers performing the sweep, or with respect to a third line that bisects the region between the two finger reference lines, as explained for linear gestures.
• Side gestures -- It is also possible to do side-to-side gestures from home position. These are strokes and sweeps in which the user does not appear to intend to move the fingers parallel to their reference lines. An angle of 30 degrees or more off of a reference line should be sufficient for detecting a side gesture. Side gestures can be multi-valued, though this is discouraged for the complexity it adds to the finger map. Furthermore, allowing two-fingered sweeps to appear as side gestures of a finger map may preclude the use of the standard gesture for enabling a cursor mode (see later in this document). It is also possible to distinguish between side gestures according to which side of the finger's reference line the gesture begins on, where the sides are delimited by the finger's rest location. Side gestures may cross finger regions, but they must begin in a particular finger region.
• Positional gestures -- Normally finger mapping treats all gestures begun in a particular finger region as equivalent, regardless of where in the region the gesture started. It is also possible to partition a finger region along the axis of its reference line and to distinguish gestures of a given finger by the partition in which they begin. For example, a finger region could be split vertically in half at the finger rest location, and the character entry system could distinguish between gestures begun above the rest location and those begun below the rest location. These gestures are thus "positional." Up/down positional gestures are incompatible with linear multi-valued gestures whose reference lines coincide with the fingers' reference lines, but they are particularly well suited for side gestures. Extending the example of splitting the finger region at the rest location, a side gesture that begins above the rest location could have a different meaning from one that begins below the rest location.
(Figure: Examples of positional side gestures. Each finger region is partitioned into three regions along the finger's reference line.)
• Multi-tap gestures -- Any of the gestures here may be further embellished by making it a multi-tap gesture. The tap count would select an alternative interpretation of the gesture. For example, suppose an up-stroke of the index finger produces the lowercase letter 'a'. A double-tap up-stroke of this finger could instead produce the uppercase letter 'A'. This could be applied across all of the gestures for letters. Multi-taps of two-fingered sweeps are also possible. Triple-tap gestures could access infrequently used symbol characters. Note that if a finger region supports a multi-tap gesture, as defined in this document, that finger region will be unable to interpret a gesture consisting only of taps -- one without a concluding movement of the finger -- unless it's a multi-finger tap whose taps are not ambiguous with any other multi-finger tap gestures.
• Reversible gestures -- A character entry system might support reversible gestures in order to assist users who are learning the system, or just to give the user more flexibility while entering characters. As defined for reversible gestures, any gesture producing a length and a direction can be made reversible.
• Scrubbing gestures -- Scrubbing gestures, involving fingers going back and forth, are available, but they must be distinguished from reversible gestures. They can be distinguished by the finger involved; by only recognizing scrubbing gestures that go side to side, provided all of the reversible gestures go up and down; or by monitoring speed and scrub count, as explained under "Scrubbing Gestures."
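As one example of how a finger map might resolve the gestures above, the following sketch selects the value of a linear multi-valued gesture by projecting the gesture's displacement onto the finger's reference line. The names are illustrative, and the handling of direction and thresholds is one plausible choice among several:

```python
import math


def select_value(start, end, reference_angle_degrees, value_max_lengths):
    """Select a linear multi-valued gesture's value for one finger.

    The gesture's displacement (start to end points) is projected onto the
    finger's reference line; the projected length picks a value from
    value_max_lengths, a list of maximum lengths for successive values.
    Returns (direction, value_index), or None if the gesture is longer
    than any calibrated value.
    """
    a = math.radians(reference_angle_degrees)
    dx, dy = end[0] - start[0], end[1] - start[1]
    # Signed length of the displacement projected onto the reference line.
    length = dx * math.cos(a) + dy * math.sin(a)
    direction = "up" if length >= 0 else "down"
    for i, max_len in enumerate(value_max_lengths):
        if abs(length) <= max_len:
            return direction, i
    return None
```

A real system would use per-finger, per-direction calibrated lengths, as described under "Gesture lengths."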
Hold gestures are not included among the list of useful finger map gestures because, in the parlance of this document, they may be used to select input modes. Different input modes for character entry may have different finger maps. So hold gestures can be used to select new interpretations of gestures, but they participate in selecting the finger map rather than being part of a finger map.
6. Calibration
Finger mapping requires a characterization of a user's hand in home position. This characterization is called the "calibration" for the user's hand. This section defines methods for calibrating a hand or for selecting a pre-existing calibration, and it ends with some suggestions for conveying the calibration to the user. The different calibration gestures provide varying levels of calibration detail. Ideally, each hand would be calibrated separately, but if only one hand is calibrated, its calibration can be borrowed for the second hand.
6.1. Hand Selection Slide
The "hand selection slide" selects the hand (left or right) with which the user wishes to input characters. It also establishes a minimal home position calibration. If the minimal calibration approximates a previously stored, more complete calibration, the character entry system may interpret the gesture as a selection of that previous calibration, perhaps by first popping up a dialog to confirm or to select among multiple matching calibrations. Whether or not the system pops up such a dialog can be a matter of configuration. The hand selection slide thus makes it easy for the user to move an existing calibration to new locations on the input area and to quickly switch hands as desired.
The gesture is a spaced-finger common-line two-fingered sweep. The two fingers are assumed to be the index and pinky fingers. The gesture requires a distinction between left and right sides of the input area, as well as top and bottom. A sweep to the left identifies a left hand calibration, and a sweep to the right identifies a right hand calibration. The locations of the fingers at the end of the sweep are taken to be the rest locations of the index and pinky fingers in home position. The system may assume that the user is selecting a pre-existing calibration that has a hand breadth within 5% or so of the breadth given by the hand selection slide, if such a pre-existing calibration exists.
If a matching calibration is found and is to be employed by the character entry system, the calibration is geometrically translated to the location given by the rest location of the index finger, and it is rotated so that its orientation line coincides with the line extending from the newly established index finger rest location to the newly established pinky finger rest location. Since the input area is assumed to have an official top and bottom, the rotation can properly orient the top and bottom of the calibration. Should the breadth of the new calibration be more than a few percent different from that of the old calibration, the character entry system has the option to expand or compress the home position characterization along the dimension of the orientation line in order to accommodate the size change. This may also be configurable behavior.
If no matching pre-existing calibration is available, or if the system does not attempt to match a pre-existing calibration, the system can apply a default computation of the calibration. A reasonable default places the rest locations of the remaining two fingers equally spaced along the home position's orientation line between the index and pinky finger rest locations, but offset from the orientation line. The middle finger can be offset above the line by 1/6th of the home position breadth, and the ring finger can be offset by 1/12th of the home position breadth. The reference lines of the index, middle, ring, and pinky fingers can be assumed to be at angles of 80, 85, 90, and 100 degrees with the orientation line, respectively, measuring the angles on the thumb side of each reference line, above the orientation line (facing the top of the input area). The calibration can also assume that the calibrated gesture lengths are a certain percentage of the home position breadth, according to the needs of the particular character entry system.
Default finger regions can be derived from this information, but the region calculations should use larger maximum gesture lengths to accommodate error in the calibration.
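The default calibration just described can be sketched as follows. The 1/6 and 1/12 offsets, equal spacing, and 80/85/90/100-degree reference-line angles follow the text; the coordinate conventions and names are assumptions:

```python
import math


def default_calibration(index_rest, pinky_rest):
    """Default rest locations and reference-line angles from a two-finger
    hand selection slide.

    Works in a frame where the orientation line runs from the index rest
    location to the pinky rest location, with "above" the line taken as
    the perpendicular direction toward the top of the input area.
    """
    ix, iy = index_rest
    px, py = pinky_rest
    breadth = math.hypot(px - ix, py - iy)
    # Unit vectors along and perpendicular to the orientation line.
    ux, uy = (px - ix) / breadth, (py - iy) / breadth
    vx, vy = -uy, ux  # perpendicular, taken as "above" the line

    def at(frac, offset):
        # A point at fraction `frac` along the orientation line,
        # offset above it by `offset` times the breadth.
        return (ix + ux * frac * breadth + vx * offset * breadth,
                iy + uy * frac * breadth + vy * offset * breadth)

    rests = {
        "index": index_rest,
        "middle": at(1 / 3, 1 / 6),   # 1/6th of the breadth above the line
        "ring": at(2 / 3, 1 / 12),    # 1/12th of the breadth above the line
        "pinky": pinky_rest,
    }
    angles = {"index": 80.0, "middle": 85.0, "ring": 90.0, "pinky": 100.0}
    return rests, angles
```

Gesture lengths would then be assigned as a percentage of the breadth, per the needs of the particular character entry system.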
(Figure: A default calibration for a hand selection slide, given only the selection of hand and the index and pinky finger rest locations.)
On touchscreens supporting more than two simultaneous inputs, it may be useful to support a hand selection slide consisting of more than two fingers. This would be a "multi-fingered sweep," defined previously as an embellishment of two-fingered sweeps. The end locations of each of the fingers would become the rest locations for the fingers. If only three fingers are used in the gesture, the rest location of the missing finger can be assumed to be halfway between the rest locations of the fingers bordering the gap. A hand selection slide of four fingers -- a "four-finger hand selection slide" -- is ideal because it allows the user to specify rest locations for all of the fingers at once.
6.2. Tickling
"Tickling" is a gesture that calibrates a particular finger in home position. From home position, a finger "tickles" by performing scrubbing gestures up and down above and below the finger's rest location. The vectors of the scrubbing are averaged together to form an average vector, and the finger's rest location, reference line, and gesture lengths are deduced from this average vector. Multiple tickling gestures may be performed at once, but they are each interpreted independently. Tickling involving fewer than four simultaneous fingers modifies a pre-existing calibration; a four-finger tickling is capable of calibrating all four fingers at once. However, in all cases, tickling requires that the hand -- left or right -- have been established prior to the tickling. The number of fingers that can tickle simultaneously is limited by the capabilities of the touchscreen, as well as by the implementation of the character entry system.
In a character entry input mode, up and down finger gestures are usually interpreted as multi-valued strokes or sweeps. A sophisticated character entry system may also allow these multi-valued strokes and sweeps to be reversible, and reversible gestures can be ambiguous with scrubbing gestures. As explained for scrubbing gestures, scrubbing gestures can be distinguished from reversible gestures by speed and vector count. It's therefore reasonable to identify a tickling gesture as a scrubbing stroke generating at least 4.5 vectors per second and having at least 5 vectors (a scrub count of 5). A user can think of this as 3 or more back and forth (up and down) motions of the finger. It's reasonable for the character entry system to allow tickling at any time so that the user can recalibrate one or more troublesome fingers when needed.
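The suggested thresholds for recognizing tickling -- at least 5 vectors at a rate of at least 4.5 vectors per second -- might be checked as follows. This is a sketch; whether the rate divides the vector count or the interval count is an implementation choice:

```python
def is_tickling(vector_times, min_rate=4.5, min_count=5):
    """Decide whether a scrubbing stroke qualifies as tickling: at least
    min_count vectors (the scrub count), generated at a rate of at least
    min_rate vectors per second.

    vector_times: timestamps in seconds at which each scrubbing vector
    of the stroke completed, in order.
    """
    if len(vector_times) < min_count:
        return False
    elapsed = vector_times[-1] - vector_times[0]
    if elapsed <= 0:
        return True  # all vectors effectively instantaneous
    return len(vector_times) / elapsed >= min_rate
```

A user can think of satisfying these thresholds as 3 or more quick back-and-forth motions of the finger, as noted above.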
When fewer than four fingers tickle simultaneously, the character entry system identifies each finger engaged in tickling according to an already existing finger mapping, which is given by a home position calibration. This existing home position calibration can be the one computed by default from a preceding hand selection gesture. When a finger completes its tickling gesture (lifts from the touchscreen), the vectors of the tickling (its scrubbing vectors) are used to change the calibration for that finger as follows:
1. Compute the "average vector." There are many ways to compute the average vector. One way is to determine the "best fit" vector using statistics. Another is simply to average the starting and ending locations of the vectors. Under the latter approach, the coordinates of the starting locations of all the vectors are put into one coordinate pool, and those of the ending locations are put in another pool. The x-values of all the coordinates in each pool are averaged together to produce an average x-value for the pool, and likewise for the y-values. Each pool now corresponds to an "average coordinate" given by the pool's average x-value and average y-value. The average vector is then the vector extending from the starting location pool's average coordinate to the ending location pool's average coordinate.
2. The rest location of the finger is a point somewhere on the average vector. If the character entry system asks the user to attempt to tickle up and down equal distances around the rest location, the rest location would be the midpoint of the vector; the rest location depends on how the system interprets the tickling.
3. The reference line of the finger is the line coincident with the average vector.
4. The gesture lengths of the finger are computed as a function of both the finger's identity and the length of the average vector computed for the finger. This computation will vary from character entry system to character entry system. However, it is useful to assume that the average vector represents most of the reach available to the finger, but not all of that reach. For example, the average vector's length may be assumed to be 85% of the length available to the finger.
A user performing a tickling can scrub the fingers up and down to varying distances. If the character entry system is to derive gesture lengths based on the lengths of the scrubbing vectors, the system will need to specify the linear extent to which the user should perform the tickling. For example, the system could specify the user is only to perform tickling at the maximum length of the first values for multi-valued gestures. Another system could ask the user to tickle at the maximum comfortable distances available to the fingers for any gesture. Additionally, in order to assign rest locations, the system must specify whether the tickling consists only of up strokes, only of down strokes, or of strokes that travel through the extents of both up and down strokes.
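Step 1's coordinate-pool averaging can be sketched directly. This sketch assumes the scrubbing vectors have already been oriented consistently (e.g. all taken in the upward direction), so that alternating up and down strokes do not cancel each other out:

```python
def average_vector(vectors):
    """Average a finger's scrubbing vectors by pooling start and end
    coordinates, per step 1 above.

    vectors: list of ((x1, y1), (x2, y2)) start/end location pairs.
    Returns the average vector as a (start, end) pair of coordinates.
    """
    n = len(vectors)
    # Average coordinate of the starting-location pool.
    sx = sum(v[0][0] for v in vectors) / n
    sy = sum(v[0][1] for v in vectors) / n
    # Average coordinate of the ending-location pool.
    ex = sum(v[1][0] for v in vectors) / n
    ey = sum(v[1][1] for v in vectors) / n
    return (sx, sy), (ex, ey)
```

The finger's reference line is then the line through these two average coordinates, and the rest location is a point on the average vector chosen according to how the system interprets the tickling.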
When all four fingers tickle simultaneously, the character entry system may optionally discard the active home position calibration and generate one anew. Since it is not employing previously established finger regions, the system must use other means for identifying the individual fingers. The four-finger gesture itself can be recognized as a common-line perpendicular multi-valued sweep of four fingers, or simply as four fingers engaged in tickling along roughly parallel vectors. These ticklings form a rough line, and because the input area has a known left and right (as required by the hand selection slide), the previously established hand-in-use specifies which finger is which: when using the left hand, the rightmost tickling calibrates the index finger, and when using the right hand, the leftmost tickling calibrates the index finger. From this information the entire home position calibration can be computed.
It is possible to allow the user to combine a hand selection slide with subsequent tickling without first lifting the fingers to complete the preceding hand selection slide. This is most tempting on systems that support four-finger hand selection slides. An implementation can be specifically designed to detect such combination gestures because both the hand selection slide and the multi-finger tickling gestures are forms of multi-finger sweeps. In the transition described here, the sweep is common-line for its first vector and common-line perpendicular for subsequent vectors, making the combination gesture quite distinctive.
6.3. Gesture Length Refinement
Multi-valued gestures require users to move their fingers over different lengths to select the different values. The lengths over which users are comfortable moving their fingers to select a value will vary from user to user and perhaps hand to hand. These gesture lengths are part of the home position calibration. The calibrations provided by a hand selection slide or finger tickling assign gesture lengths, but they may not be right for the user. The user may wish to perform a gesture for refining just the gesture lengths. The conventional pinching and zooming gestures are useful for this purpose.
A pinching gesture is one in which two fingers are placed on the touchscreen and subsequently moved closer together, and a zooming gesture is one in which the two fingers are subsequently moved farther apart. A character entry system may also find these gestures useful for refining gesture lengths other than those that are multi-valued. Since the pinch and zoom gestures are also useful for representing copy and paste, gesture length refinement should employ these gestures as part of a double-tap.
The pinching and zooming gestures for refining gesture length are most useful while a calibration is active, because they modify this calibration. Define "vertical gestures" as those that move roughly perpendicular to the home position orientation line, and "horizontal gestures" as those that move roughly parallel to this orientation line. Pinching and zooming motions in which the fingers move roughly perpendicular to the orientation line refine the vertical gestures, and those that move roughly parallel to the orientation line refine the horizontal gestures. The verticality of the pinching and zooming gestures may also be interpreted relative to their orientation in the input area.
A pinching gesture reduces gesture lengths, and a zooming gesture increases gesture lengths. There are many approaches to calculating the changes these gestures induce in lengths. An intuitive approach is to reduce or increase gesture lengths in a proportion equal to the proportion by which the user reduces or increases the distance between his or her fingers during the pinching or zooming gesture.
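The proportional approach can be sketched as follows (Python; illustrative names):

```python
def refine_gesture_lengths(lengths, start_distance, end_distance):
    """Scale calibrated gesture lengths in the same proportion by which a
    pinching or zooming gesture changed the distance between the fingers.

    start_distance / end_distance: finger separation at the start and end
    of the gesture. A pinch (end < start) reduces the lengths; a zoom
    (end > start) increases them.
    """
    scale = end_distance / start_distance
    return [length * scale for length in lengths]
```

A system would apply this either to the vertical or the horizontal gesture lengths, depending on the orientation of the pinch or zoom relative to the orientation line.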
6.4. Dynamic Recalibration
The home position of a hand may gradually change as a user enters characters into the input area. Ideally the character entry system would detect these changes and update the home position calibration accordingly. The hand may gradually drift across the screen, the hand's orientation may rotate a little, and reference lines may change as the hand relaxes. If these changes occur gradually enough, it is possible for the character entry system to track them. One way to track them is through a dynamic recalibration technique that constantly adjusts the calibration according to the user's gesture history. The technique recalibrates the home position at regular intervals. The interval can be any length of time and is called the "recalibration interval." The recalibration at the end of each interval is derived from data collected over a preceding amount of time called the "monitoring window." The monitoring window needs to be long enough to have enough data from which to make an accurate calibration, but short enough to ignore data that applies to a hand's previous home position characterization. Since the user may take frequent breaks, the monitoring window is best measured by number of gestures performed. Moreover, since each finger must be calibrated separately, the technique maintains a separate monitoring window for each finger. 10 seconds is a reasonable recalibration interval, and 50 gestures is a reasonable monitoring window.
In addition, the technique employs a "staleness window." This is the maximum amount of time for which gestures in a finger queue are valid contributors to a recalibration. Presumably, gestures that were performed too long ago should have no bearing on the current characterization of the home position. The staleness window prevents this technique from combining the gestures a user performs prior to taking a break with the gestures the user performs upon resuming after the break. A staleness window of 2 or 3 minutes is reasonable.
To implement this recalibration technique, the character entry system manages a queue of gestures for each finger. As the user gestures into the character entry system to perform user interface behaviors, the system appends a characterization of each finger's gesture to that finger's gesture queue. The characterization consists of the vectors produced by the finger, an indication of which vector was the first of the gesture, and the time at which the gesture was initiated. When a finger participates in a multi- finger gesture, the gesture is recorded to each participating finger's queue but only includes the vectors of the gesture performed by the particular finger. In multi-finger gestures, each finger will have its first vector of the gesture designated as such.
This particular technique requires that only upward and downward strokes and sweeps be placed on the queue; the technique ignores all other gestures and so does not recalibrate based on these other gestures. Additionally, each queue must be managed so that, at least at the time it is examined for a possible recalibration, it never contains more gestures than specified by the monitoring window.
At every recalibration interval, the character entry system examines each finger's queue. All gestures marked with times older than the staleness window are discarded. If after discarding stale gestures, a finger's queue contains a number of gestures equal to the monitoring window, the system recalibrates that finger.
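The per-finger queue maintenance might look like this. In this Python sketch, the 2.5-minute staleness window and 50-gesture monitoring window are example values within the ranges suggested above:

```python
import time
from collections import deque

STALENESS_WINDOW = 150.0  # seconds (2.5 minutes)
MONITORING_WINDOW = 50    # gestures

def prune_and_check(queue, now=None):
    """Drop stale gestures from a finger's queue and report whether the
    finger has enough fresh gestures to be recalibrated.

    queue: deque of (start_time, gesture) tuples, oldest first, already
    capped at MONITORING_WINDOW entries as gestures are appended.
    """
    if now is None:
        now = time.time()
    while queue and now - queue[0][0] > STALENESS_WINDOW:
        queue.popleft()  # performed too long ago to contribute
    return len(queue) == MONITORING_WINDOW
```

When this check succeeds at a recalibration interval, the system proceeds to recompute the finger's rest location and reference line from the queued gestures.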
The recalibration of a finger first computes a preliminary rest location for the finger. Each gesture in the finger's queue has a vector designated as the initial vector, and each of these initial vectors has a starting location. Furthermore, each initial vector can be classified as an upward stroke or a downward stroke, as explained in the description of the home position. Hence, the queue effectively contains a collection of gesture starting locations and their associated upward/downward directions. The number of upward-directed start locations is the "up count," and the number of downward-directed start locations is the "down count." This recalibration technique defines a minimum number of start locations that are required to compute a preliminary rest location from start locations, a number that can be at most half the monitoring window size, but which must contain enough data to be meaningful. This minimum is the "minimum rest location source size." 20 is a reasonable minimum rest location source size.
If either the up count or the down count is less than the minimum rest location source size, the recalibration's preliminary rest location is the existing calibration's rest location. If the up count and the down count are each greater than or equal to the minimum rest location source size, the preliminary rest location is computed from the up and down start locations. In this case, the upward-directed start locations are all averaged together to form a single location coordinate that serves as endpoint A. The downward- directed start locations are also averaged together to form a single location coordinate that serves as endpoint B. These locations are averaged by separately averaging each of their coordinate values. The preliminary rest location is then the midpoint of the segment connecting endpoints A and B.
Next the recalibration computes the new reference line. It does this by averaging together all of the vectors in the finger's gesture queue. It suffices to average just each gesture's initial vector, since finger identity is determined by where gestures start. The average is computed as the "average vector" described for the tickling gesture, resulting in a single average vector. This average vector has a location on the input area, not just a slope. The line coincident with this average vector is taken as the finger's new reference line.
Finally, the technique computes the finger's new rest location from the preliminary rest location and the new reference line. The preliminary rest location may or may not fall on the reference line, where the actual rest location is required to be. The finger's new rest location is taken to be the point on the reference line that is closest to the preliminary rest location. Its calculation is standard geometry.
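The standard geometry in question is orthogonal projection. A minimal sketch, assuming the reference line is represented by a point on the line and a direction vector:

```python
def nearest_point_on_line(point, line_point, line_dir):
    """Return the point on the line through `line_point` with direction
    `line_dir` that is closest to `point` (orthogonal projection)."""
    px, py = point
    lx, ly = line_point
    dx, dy = line_dir
    # Scalar projection of (point - line_point) onto the direction vector.
    t = ((px - lx) * dx + (py - ly) * dy) / (dx * dx + dy * dy)
    return (lx + t * dx, ly + t * dy)
```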
The calibration approach just described moves the home position entirely to its new location at every recalibration interval, which may abruptly change the experience for the user. The recalibration process can make the home position change more gradual with two modifications to the above algorithm. First, the actual preliminary rest location is taken to be the midpoint between the previous rest location and the newly computed preliminary rest location described above. Second, the actual new reference line is taken to be the line that bisects the region between the previous reference line and the newly computed reference line described above. These two modifications gradually move the rest location and reference line to their apparent new positions by only moving halfway to the new positions in each recalibration.
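These two halfway adjustments might be sketched as follows. Representing the reference line by an angle in radians is an assumption made here for illustration; since lines are undirected, angles are compared modulo π:

```python
import math

def blended_recalibration(prev_rest, new_rest, prev_angle, new_angle):
    """Move only halfway toward the newly computed calibration.

    The rest location moves to the midpoint of the old and new rest
    locations; the reference line rotates to the angle bisecting the old
    and new reference-line angles (in radians, modulo pi).
    """
    rest = ((prev_rest[0] + new_rest[0]) / 2.0,
            (prev_rest[1] + new_rest[1]) / 2.0)
    # Signed angular difference between undirected lines, in (-pi/2, pi/2].
    diff = (new_angle - prev_angle + math.pi / 2) % math.pi - math.pi / 2
    angle = prev_angle + diff / 2.0
    return rest, angle
```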
6.5. Wizard-Based Calibration
This document has provided several means by which a user can calibrate home position using simple gestures. The simplicity of these gestures necessarily limits how specifically tailored the calibration can be to a user's needs. To allow users to specify more accurate calibrations, a character entry system may provide software "wizards" that guide the user through a calibration process. For example, a wizard might have the user enter one gesture after another in order to measure the user's preferences for performing that gesture, or the wizard might just ask the user to enter a pre-specified sentence and infer the user's preferences from the resulting gestures. There are many means by which the wizard can be made available, such as from a menu item or from a special gesture that opens the wizard. One possible gesture for this could be a double- or triple-tap hand selection slide, which would also have the benefit of specifying the desired hand while opening the wizard.
6.6. Visual Representation
Finger-mapped character entry systems are designed to allow the user to input characters without having to look at the input area. However, there are times when it can be useful to visually depict information about the calibration currently in effect, such as when the user lifts a hand from the input area and is inclined to place the hand back in home position without performing another hand selection slide or otherwise
recalibrating the input area. Visual feedback may also help users who are just learning to use the character entry system. Here are some options for providing visual feedback:
• The name of the user or calibration currently in effect.
• The name or a symbol for the hand currently in use.
• A special coloring of the finger regions when finger mapping, perhaps one color for the entire finger-mapped region, or different colors for the regions.
• Lines marking the borders between adjacent finger regions.
• A single line, or an especially conspicuous line, marking the outside boundary of the index finger region, perhaps along with a clear indication of which side of this line the fingers go, since that differs between the two hands.
When the finger mapped region of the input area is apparent to the user, it becomes possible for the character entry system to treat the remaining portions of the input area differently. For example, if a gesture is initiated somewhere off of the finger mapping, the system could immediately activate the cursor mode for interpretation of the gesture, allowing the user to readily perform cursor actions from a character mode without first having to perform a gesture whose purpose is only to activate a cursor mode.
7. System Specification
There are many ways to lay out the visual interface of a finger-mapped character entry system, but a user will only find two systems compatible if they employ the same gestures for character input and cursor movement. The particular gestures for selecting text and clipboard operations may also be crucial to the user. For this reason, finger-mapped character entry systems are best classified by the input modes they employ, the gestures each input mode supports, and the means they provide for transitioning among input modes. Together these details compose a system's specification.
The input mode is the organizing component of a system's specification. When an input session first begins, the input area starts in some input mode. The input mode determines what is displayed in the input area and how the input area behaves. The following are examples of input modes that a character entry system may support:
• Calibration mode
• Cursor mode
• Character mode
• Capital letters mode
• Numeric keypad mode
• Symbol characters mode
• Button-based keyboard mode
• Help or tutorial mode
Only one input mode is active at any time in an input area. This is the "active mode." The user changes the active mode by performing a gesture or pressing a button. The system may also asynchronously change the active mode, such as on timeout.
Relationships among input modes may be complicated, with some only being available from certain other input modes, under certain conditions, while others are always available. These relationships are best represented in a state diagram showing input modes and the events transitioning the active mode among them. It may be useful for a particular implementation to show a name or symbol for the active mode somewhere on the input area, but this need not technically be part of the specification.
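Such a state diagram reduces to a transition table. The sketch below uses illustrative mode and event names that are not part of any particular specification:

```python
# Hypothetical transition table: (active_mode, event) -> next_mode.
TRANSITIONS = {
    ("cursor", "hand_selection_slide"): "character",
    ("character", "hand_selection_slide"): "character",  # just recalibrates
    ("character", "cursor_mode_slide"): "cursor",
    ("cursor", "cursor_mode_slide"): "cursor",  # ignored; mode unchanged
}

def next_mode(active_mode, event):
    """Return the new active mode, or the current mode if the event
    triggers no transition from this mode."""
    return TRANSITIONS.get((active_mode, event), active_mode)
```

Because unmatched (mode, event) pairs leave the mode unchanged, the mode-change gestures always guarantee the resulting input mode, as described below.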
It is possible for a character entry system to implement only a single mode, but it's usually best for a system to have at least a character mode and a cursor mode. The character mode employs finger mapping to allow the user to input a large variety of characters, while the cursor mode more flexibly accommodates gestures that are more suggestive of their respective cursor operations. Moreover, by having a separate cursor mode, the user need not first calibrate home position in order to move the cursor.
A character entry system that supports a character mode and a cursor mode would normally begin an input session in cursor mode. This allows the user to immediately cursor about when the input area appears. To switch to character mode, the user performs a hand selection slide, simultaneously selecting a hand and at least a default home position calibration. Since this gesture is a spaced-finger common-line two- fingered sweep, it is convenient to have an adjacent-finger common-line two-fingered sweep, in any direction, activate cursor mode. Because a user may periodically forget which input mode is currently active, both of these gestures should be available from each input mode. A new hand selection slide in character mode will just assign a new home position, while any attempt to switch to cursor mode from cursor mode will simply be ignored. This way the gesture always guarantees the input mode.
Figure imgf000043_0001
A B C D
Useful mode change gestures. A and B - Set active mode to character mode with a left or right hand calibration, respectively.
C and D - Set active mode to cursor mode.
The most effective way for a character mode to accommodate a large number of characters is to assign multi-valued gestures to the fingers of the finger mapping. This way, as the hand is held in home position, characters can be input just by moving the fingers up and down (as explained for the home position). This way, the direction that a finger moves selects a character set, and the distance the finger moves selects a character from the set. To enable these gestures to be reversible, they should be implemented as linear multi-valued gestures. By also supporting sweeps, side gestures, and tap gestures ~ multi-valued or not ~ many characters become available for input.
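Decoding such a linear multi-valued stroke can be sketched as a table lookup, where the direction selects a character set and the stroke length selects a character from the set. The character assignments below are purely illustrative:

```python
# Hypothetical per-finger character sets: direction selects the set,
# stroke length (short vs. long) selects the character within the set.
CHAR_SETS = {
    ("index", "up"): ["e", "q"],    # [short, long] -- illustrative only
    ("index", "down"): ["t", "z"],
}

def decode_stroke(finger, direction, length, long_threshold):
    """Decode a linear multi-valued stroke into a character, or None."""
    chars = CHAR_SETS.get((finger, direction))
    if chars is None:
        return None
    return chars[1] if length >= long_threshold else chars[0]
```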
stroke lengths
Figure imgf000044_0001
Example of a finger map with character actions represented by linear multi-valued strokes. Note that this diagram depicts finger start locations, not finger rest locations. A finger's stroke may begin anywhere within the finger's region, since the character is selected by the length of the stroke, not its position within the finger's region.
Figure imgf000045_0001
Example of a finger map with character actions represented by linear multi-valued two-fingered sweeps. Note that this diagram depicts finger start locations, not finger rest locations. Each finger's stroke may begin anywhere within the finger's region, since the character is selected by the length of the sweep, not by positions within the finger's region. Here sweep lengths are traced by the midpoints between the fingers.
An input mode may provide temporary access to another input mode through the use of a hold gesture. An input mode that is active only temporarily before returning to the preceding input mode is called a "submode." A hold gesture generally induces a submode for the duration of the hold gesture. Employing a hold gesture is analogous to holding the shift, control, or command keys down on a physical keyboard. Useful hold gesture combinations while finger mapping include holding the thumb down (anywhere to the outside of the index finger), holding any one finger down (usually the index or pinky finger), or holding both the thumb and a finger down. While holding the gesture, the user simultaneously performs a separate gesture with other fingers, even with the same hand, and the simultaneous gesture takes on a meaning specific to the induced submode. The user interface could also implement buttons on the input area that the thumb might press, allowing the thumb to select from among multiple submodes. However, provided that gestures are available for selecting the submode any time the user might need to do so, these buttons need not be part of the system's specification.
Figure imgf000046_0001
A B C
Modal holds and thumb anchor.
A - Thumb hold, allowing other fingers to do any gestures.
B - Pinky hold, allowing other fingers to do finger map gestures.
C - Thumb anchor, steadying fingers doing finger map gestures.
To enter characters in a mode that employs finger mapping, the user must keep the hand positioned properly over the regions mapped to the individual fingers. The system allows the user some freedom to shift the hand, particularly with dynamic recalibration, but if the user is walking, in a car, or on a plane, this may not be enough. To allow the user to steady the hand, the character entry system may, for particular input modes, allow the user to anchor the thumb on the touchscreen with a hold gesture. The system may identify a thumb anchor as a hold gesture that is performed to the outside of the index finger region ~ that is, as a hold gesture performed next to the region mapped for the fingers, closer to the index finger region than to the pinky region. To interpret this gesture as a thumb anchor, the character entry system simply ignores the gesture, while still detecting and interpreting other gestures that are simultaneously performed. The system could additionally allow the thumb to specify modes by interpreting double-tap and triple-tap thumb holds as specifying a mode rather than as anchoring the thumb.
Finally, the behaviors that gestures implement may be a function of the hand ~ left or right ~ that performs them. The character entry system may opt to implement identical behaviors for analogous fingers on each hand, so that the gestures of the two hands are identical in mirror image. This allows the user to employ either hand for character entry. It also facilitates training both hands. It may make sense to violate the mirror image for some side gestures to ensure that the gesture remains suggestive of the direction in which the gesture moves the cursor. It's also possible to specify a system that allows both hands to simultaneously enter characters into one or two input areas, each employing a different finger map, analogously to typing with both hands on a physical keyboard.
8. Two-Valued English System
Here is a specification for a two-valued character entry system for English. This system employs multi-valued gestures of at most two values for each gesture direction. Limiting gestures to at most two values makes the system easier for users to learn, since users need only train their fingers to perform strokes of two lengths. The system may be implemented as two-touch, three-touch, or configurably either two-touch or three-touch. The two-touch implementations are available to the greatest number of devices, which at this time are mostly only two-touch capable. The three-touch implementation allows the thumb to touch the touchscreen while entering characters, either to just steady the hand or to select the input mode. The gestures and finger maps of this two-valued English system have been selected to be as intuitive and mnemonic as possible.
8.1. Gesture Mapping
This system employs a single input area. The input area has an inherent left side, right side, top, and bottom. The horizontal is the direction that runs left and right parallel with the top and bottom, and the vertical is the direction that runs up and down parallel with the sides. When the user puts the hand in home position, the system requires that the hand be oriented so that the fingers move upward, per the home position
characterization, by moving towards the top of the input area, even if at a steep angle. For example, the right hand could be placed in a home position whose orientation line is only slightly rotated clockwise from the vertical.
The system employs separate definitions of "up", "down", "left", and "right" for finger mapped and non-finger mapped gestures. With finger mapped gestures, "up" and "down" have the meanings ascribed to "up" and "down" for home position. These gestures are roughly perpendicular to home position's orientation line. The "left" and "right" gestures of a finger mapping are those that are neither up nor down, with left and right gestures concluding further left or right of where they start, respectively. These gestures are roughly parallel with the home position's orientation line, which need not be horizontally oriented. With non-finger mapped gestures, "up" and "down" gestures are approximately vertical, while "left" and "right" gestures are approximately horizontal. Approximately vertical gestures can be taken as gestures whose angles are within 30 degrees of the vertical, and approximately horizontal gestures can be taken as gestures whose angles are within 30 degrees of the horizontal.
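The angular classification of non-finger-mapped gestures might be sketched as follows, assuming coordinates in which y increases toward the top of the input area (many touchscreen APIs invert this):

```python
import math

def classify_free_gesture(start, end):
    """Classify a non-finger-mapped gesture as up/down/left/right/None.

    Approximately vertical means within 30 degrees of the vertical;
    approximately horizontal means within 30 degrees of the horizontal.
    Diagonal gestures fall into neither class.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    # Angle from the horizontal, folded into 0..90 degrees.
    angle = math.degrees(math.atan2(abs(dy), abs(dx)))
    if angle >= 60:                      # within 30 degrees of vertical
        return "up" if dy > 0 else "down"
    if angle <= 30:                      # within 30 degrees of horizontal
        return "right" if dx > 0 else "left"
    return None
```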
Multi-valued gestures in this system have exactly two values for each direction of the gesture. The values available to the shorter strokes are called the "short" gestures, and those available to the longer strokes are called the "long" gestures. Multi-valued gestures can be divided into three classes: gestures performed outside of a finger mapping, up/down gestures within a finger mapping, and side gestures within a finger mapping. The character entry system should allow the user to separately set the boundary length between short and long gestures for each of these classes. In general, it is helpful for "long" gestures performed outside of a finger mapping to be longer than those performed from a finger mapping. A reasonable default is 20% of the home position breadth for the former and 15% of the home position breadth for the latter. Ideally, a calibration to the user's hand in home position would refine these lengths further as a function of the finger that performs the gesture.
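These defaults amount to a simple computation. The function and ratio parameters below are merely illustrative of the suggested 20% and 15% defaults:

```python
def long_stroke_thresholds(home_breadth,
                           outside_ratio=0.20, mapped_ratio=0.15):
    """Default boundary lengths between short and long gestures.

    Gestures performed outside a finger mapping default to 20% of the
    home position breadth; gestures from a finger mapping default to 15%.
    Both ratios are user-adjustable.
    """
    return {"outside": home_breadth * outside_ratio,
            "mapped": home_breadth * mapped_ratio}
```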
The finger maps for the left and right hands are defined as being identical for analogous fingers. For example, any action available to the index finger of the left hand is also available to the index finger of the right hand. However, the finger maps are only mirror images for the up/down gestures; side gestures retain their same left-right sense on each hand. Since a short right stroke of the index finger on the right hand produces a space, a short right stroke of the index finger on the left hand also produces a space. Because it is easier for the left index finger to move right than for the right index finger to move right, the middle finger is also mapped to produce a space for a short right stroke. This allows the index finger of the left hand to start in the middle finger region -- and be detected as the middle finger ~ and move right to produce a space. This is far easier on the left hand. But then the middle finger of the right hand must also be similarly mapped to maintain the direct analogy between the hands. A similar situation holds for ring and pinky fingers. Therefore, the finger maps of this system abide by the principle that the side gestures of the index and middle fingers be identical and that the side gestures of the ring and pinky fingers be identical.
Users may have varying preferences ~ for either hand ~ about whether they cross one finger over to another finger's region to start a side gesture. As a result, the finger mapping must support cross mapping for adjacent fingers; it is not enough to simply define a finger's region by the maximum stroke lengths expected by that particular finger. To support cross mapping, the finger regions must be extended to accommodate the maximum stroke lengths of their adjacent fingers. The easiest way to do this is to compute the minimal sizes of the finger regions, compute the bounding box of these regions, and then extend the side boundaries of each finger to the borders of the bounding box. (This technique was depicted in an earlier diagram.)
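Assuming axis-aligned rectangular regions placed side by side, with strokes running vertically, the bounding-box extension might be sketched as:

```python
def extend_regions_to_bounding_box(regions):
    """Extend each finger region's stroke-direction extent to the
    bounding box of all regions.

    regions: list of (left, bottom, right, top) rectangles, one per
    finger, ordered index to pinky. Each region keeps its own left/right
    span, but its bottom and top are stretched to the common bounding
    box, so a finger crossing into a neighbor's region can still
    complete its maximum-length stroke.
    """
    bottom = min(r[1] for r in regions)
    top = max(r[3] for r in regions)
    return [(left, bottom, right, top) for left, _, right, _ in regions]
```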
Depending on the size of the input area, the size of the finger region bounding box, and the location where the user places the hand in home position, there may be a significant amount of unused space in the input area on the thumb side of the finger mapping. There are multiple uses for this space. If the touchscreen is capable of reporting three simultaneous touch inputs, this area may be used to register thumb holds. The behavior of the thumb hold could be configurable. It could serve as a thumb anchor and be ignored, or it could select a cursor submode, as explained below.
8.2. Input Session
The input session begins in navigation mode. From here the user can employ gestures or press implementation-specific interface buttons to activate other input modes. The means for accessing input modes varies from input mode to input mode. There are three classes of input modes: cursor modes for moving the cursor around via freeform gestures, character modes for entering characters via finger mapping gestures, and button modes for entering characters using conventional virtual keyboards.
This character entry system supports the following input modes:
• Navigation Mode ~ This is a cursor mode that allows the user to move the cursor around the text. Clipboard operations are also possible from this input mode.
• Highlighting Mode -- This is a cursor mode in which the user highlights text. The clipboard operations are available from this input mode.
• Lowercase Mode ~ This is a character mode that makes nearly all of the
conventional keyboard characters available by gestures. It provides both lowercase and uppercase letters, but the lowercase letters are more accessible.
• Uppercase Mode ~ This is a character mode that makes nearly all of the
conventional keyboard characters available by gestures. It provides both lowercase and uppercase letters, but the uppercase letters are more accessible.
• Number/Pinky Mode ~ This is a character mode for entering number characters. It is only available as a submode of the lowercase and uppercase modes. The mode may also provide access to application or operating system functions.
• Keyboard Mode ~ This is a button mode that allows the user to enter characters via a conventional keyboard. This mode provides easy access to the standard way to input text for users who have not trained with the gesture-based system.
• Symbol Table Mode -- This is a button mode that displays in tabular form the range of symbol characters available for easy selection by the user.
The following state diagram illustrates the transitions among input modes:
Figure imgf000050_0001
State transition diagram for the input modes. Circles designate input modes and dashed regions designate superstates. Solid arrows designate non-nesting transitions between input modes. Dotted arrows depict transitions between primary input modes and their submodes. The primary input modes are navigation mode, lowercase mode, and uppercase mode. The transition away from a primary mode to a submode is temporary and lasts only for the duration of the hold gesture that induced the submode; the dotted return arrows labeled "release hold" restore the active mode to the input mode that was active at the time the hold gesture was performed. The arrows that point from a superstate to an input mode are transitions that are available to each input mode within the superstate. The navigation mode is shown in a bold circle because it is the first input mode of the input session. The black dot represents an exit from the character entry system by closing the input area.
The gestures that transition among input modes are as follows:
• Lowercase Slide ~ This is a hand selection slide to either the left or the right, performed anywhere on the input area.
• Uppercase Slide ~ This is a double-tap hand selection slide to either the left or the right, performed anywhere on the input area.
• Case-Change Tap -- This is a two-fingered double-tap performed on a finger mapping. If the two tapping fingers are not adjacent fingers (if they are index and ring, index and pinky, or middle and pinky), then after the tap gesture times out waiting for a third tap, the gesture completes as a case-change tap. This gesture is a toggle between the lowercase and uppercase modes.
• Pinky Hold ~ This is a hold of the pinky finger on a finger mapping. While the pinky finger is down, the remaining three fingers may perform gestures for selecting behaviors not available to the lowercase and uppercase modes.
• Cursor Mode Slide ~ This is an adjacent-finger two-fingered sweep to either the left or the right, performed anywhere on the input area.
• Optional-Tap Thumb Hold ~ This is a hold gesture that could be performed by any finger or thumb but which in practice is normally the thumb. Using the thumb allows the user to perform cursoring gestures simultaneously with the hold. The hold is optionally the concluding gesture of a triple-tap, but no taps are required. The triple-tap is optional to allow the user to perform the same gesture that also activates highlighting mode from a character mode, when the character entry system supports thumb-hold cursoring and highlighting from character modes.
• Keyboard Slide-Up ~ This is a double-tap spaced-finger common line
perpendicular two-fingered sweep towards the top of the input area. The gesture opens a conventional virtual keyboard in the input area.
• Symbol Table Slide-Up ~ This is a triple-tap spaced-finger common line
perpendicular two-fingered sweep towards the top of the input area. The gesture opens a table of symbol characters in the input area.
• Double-Tap Thumb Hold ~ This is an optionally supported gesture. While in a finger mapping mode, the thumb may perform a double-tap hold gesture ~ ending the double-tap with a hold ~ to enter navigation mode as a submode. While the thumb is held, the regular navigation mode gestures are available to the other fingers. Normally the index finger performs the navigation mode gestures.
• Triple-Tap Thumb Hold ~ This is an optionally supported gesture, which should be supported if the preceding double-tap thumb hold transition is also supported. While in a finger mapping mode, the thumb may perform a triple-tap hold gesture ~ ending the triple-tap with a hold ~ to enter highlighting mode as a submode. While the thumb is held, the regular highlighting mode gestures are available to the other fingers. Normally the index finger performs the highlighting mode gestures.
• Close Input Area ~ This is a double-tap spaced-finger common line perpendicular two-fingered sweep towards the bottom of the input area. The gesture closes the input area, ending the input session.
The character entry system need not retain state information about text that may be in the host area. The host is responsible for maintaining the text. Generally, an
implementation of the character entry system only makes requests of the host, and only hosts that support the particular requests perform the associated behaviors. However, user actions in the character entry system are intended to produce specific behaviors in the host, and the host should honor the requests as much as it is able. In particular, only a single contiguous length of text should be highlighted at any time. When text is highlighted, the cut and copy gestures are available. If text is highlighted when the user performs a gesture that deletes a character or a word, only the highlighted text should be deleted. If text is highlighted when the user inputs a new character, the new character should replace the highlighted text. Likewise, if a paste gesture is performed while text is highlighted, and if the clipboard is non-empty, the contents of the clipboard should replace the highlighted region. Finally, if the cursor is moved while a region of text is highlighted, without also changing the extent of the highlighted region, existing highlighting should be removed (without removing the associated text).
On touchscreen devices that support three or more simultaneous touch inputs, the character entry system may support thumb holds in the region of the input area on the thumb-side of a finger mapping. It is an option for the system to accept thumb anchoring here and never to interpret thumb gestures, so that the thumb only ever helps to steady the hand in home position. However, it is also an option to employ thumb holds for implementing cursor submodes of the character modes. In this case, a double-tap thumb hold activates navigation mode for the duration of the hold, and a triple-tap thumb hold activates highlighting mode for the duration of the hold.
In addition, the character entry system could interpret a gesture performed outside the region of a finger mapping as a potential cursor gesture. The system would activate navigation mode when it detects a gesture initiated outside of the finger mapping region and not recognized by character mode but known to navigation mode. The gesture is then interpreted from navigation mode. When the gesture completes, the system remains in navigation mode, unless the gesture itself selected a different input mode. The above state diagram does not depict this possible transition to navigation mode. To support this feature, it would be helpful to depict the finger mapping region on the input area so the user can see where cursor gestures might be initiated. When this feature is implemented, the thumb may still anchor in the cursor-gesture sensitive space, provided that it is kept steady so it doesn't register cursor gestures.
8.3. Common Actions
A number of actions are available to all of the primary input modes ~ lowercase mode, uppercase mode, and navigation mode. The gestures of these actions are identical in all of these modes. In particular, the gestures are not sensitive to the finger mapping when performed in a character mode. The interpretations of these gestures are identical across input modes as well. These actions are the following:
• Highlight All -- In this gesture the user quickly slides a finger on the input area in the rough outline of a circle. This gesture may be distinguished from the cut gesture by comparing the bounding box of the finger's path to the distance between the start and end locations of the finger. If the smallest dimension of the bounding box is at least 40% greater than the distance between the finger's starting and ending locations, the user can be considered to have performed a circle. This gesture issues a request to highlight all of the text.
• Cut ~ In this gesture the user quickly slides the finger on the input area in the rough shape of an X, without lifting the finger during the gesture. This gesture may be distinguished from the highlight all gesture by comparing the bounding box of the gesture to the distance between the start and end locations of the finger. If the smallest dimension of the bounding box is no more than 40% greater than the distance between the finger's starting and ending locations, the user can be considered to have performed a cut. This gesture issues a request to delete the highlighted text.
• Copy ~ This is a conventional touchscreen pinching gesture (as in pinching/ zooming). The start and end sizes of the gesture are not significant, as they are in its conventional zoom-in interpretation. This gesture issues a request to copy the highlighted text to the clipboard without deleting it. The gesture may be performed in any direction (with any orientation).
• Paste— This is a conventional touchscreen zooming gesture (as in pinching/ zooming). The start and end sizes of the gesture are not significant, as they are in its conventional zoom-out interpretation. This gesture issues a paste-from- clipboard request. If no text is highlighted, the host inserts the contents of the clipboard at the cursor location. If text is highlighted, the host deletes the
highlighted text and replaces it with the contents of the clipboard. The gesture may be performed in any direction (with any orientation).
• Undo ~ This is a left-right scrub with a scrub count of at least three. This gesture performs an undo operation, which may be implemented by either the character entry system or the host. Note that the interpretation of "left" and "right" depends on whether the undo is being performed from a cursor mode or a character mode, the latter of which implements finger mapping.
• Redo ~ This is a double-tap horizontal scrub with a scrub count of at least three. This gesture performs a redo operation, which may be implemented by the character entry system or the host. It redoes a preceding undo. Note that the interpretation of "left" and "right" depends on whether the redo is being performed from a cursor mode or a character mode, the latter of which implements finger mapping.
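The bounding-box criterion that separates the highlight-all circle from the cut X, described in the first two bullets above, can be sketched as:

```python
import math

def classify_closed_gesture(path):
    """Distinguish the highlight-all circle from the cut X.

    path: list of (x, y) locations traced by the finger. Compares the
    smallest dimension of the path's bounding box to the distance
    between the gesture's start and end locations: at least 40% greater
    indicates a circle (highlight all), otherwise an X (cut).
    """
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    min_dim = min(max(xs) - min(xs), max(ys) - min(ys))
    dist = math.dist(path[0], path[-1])
    return "highlight_all" if min_dim > 1.4 * dist else "cut"
```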
8.4. Character Modes
The character modes are the input modes for entering characters into the character entry system. They are lowercase mode, uppercase mode, and number/pinky mode. Number/pinky mode is a submode of each of the other two. They are all finger mapped, employing the home position calibrations that are established by the most recent hand selection slide, adjusted by gesture length refinement, tickling, dynamic recalibration, and user-controlled calibration preferences.
All three character modes support the following gestures:
• Space ~ This is a short stroke to the right performed by either the index finger or the middle finger. It requests that a space character be inserted.
• Tab ~ This is a double-tap stroke of any length to the right, performed by either the index finger or the ring finger. This gesture issues a tab to the host. It is up to the host to decide how to interpret the tab. Some hosts interpret all tab characters as insertions of tabs into the text. Some hosts interpret tab characters as navigation among fields, hence the support for tabs from navigation mode.
• Delete Character ~ This is a short stroke to the left performed by either the index or middle finger. It requests that the host delete the previous character. If no text is highlighted, the host deletes the character that precedes the cursor, if there is any. If text is highlighted, the host only deletes the highlighted text.
In addition to the calibration gestures, the lowercase and uppercase modes both support the following gestures:
• New Line ~ This is a double-tap stroke to the left of any length performed by the ring or pinky finger. The gesture issues a new line sequence to the host. The new line sequence varies by operating system. For example, Unix-variety operating systems use ASCII line feeds, while Microsoft Windows-variety operating systems use the two-character sequence ASCII carriage return followed by ASCII line feed.
• Line Break ~ This is a triple-tap stroke to the left of any length performed by the ring or pinky finger. It issues a line break to the host. Line breaks do not correspond to ASCII characters. Many applications on a standard computer employing a conventional physical keyboard allow the user to press shift-enter in order to insert a "soft return" or "line break." This gesture provides line breaks for hosts that support it.
• Delete Word ~ This is a long stroke to the left performed by either the index or middle finger. It issues a request to the host to delete the previous word. If no text is highlighted, the host deletes the word that precedes the cursor, if there is any. If text is highlighted, the host deletes only the highlighted text.
• Dash ~ This is a short stroke to the right performed by either the ring finger or the pinky. It requests the insertion of a dash character (decimal 45 in ASCII).
• Em-Dash -- This is a long stroke to the right performed by either the ring finger or the pinky. It requests the insertion of an em-dash. If the host is not capable of representing an em-dash in the encoding of the text, the host may insert two dash characters. If the character entry system is able to determine that the host cannot represent the em-dash, it can request the insertion of the two dash characters.
• Underscore -- This is a triple-tap stroke to the right performed by any finger in the finger mapping. It requests the insertion of an underscore character. Any finger is capable of performing an underscore because one user may think of an
underscore as a variant of the space, which is assigned to the index and middle fingers, while another user may think of it as a variant of the dash, which is assigned to the ring and pinky fingers.
The character modes map most characters and symbols to up and down strokes and sweeps of the finger mapping. Each finger may perform a short stroke up, a long stroke up, a short stroke down, or a long stroke down. These strokes may be embellished by a double-tap or triple-tap, causing them to select different characters according to the tap count. Furthermore, each pair of adjacent fingers (index and middle, middle and ring, or ring and pinky) may perform a short sweep up, a long sweep up, a short sweep down, or a long sweep down. Embellishing these sweeps with double-taps or triple-taps provides access to still more characters.
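A compact way to see how these attributes combine is as a lookup keyed on the finger set, direction, length class, and tap count. The sketch below is a minimal illustration in Python; the character assignments and function names are hypothetical, since the actual assignments appear only in the tables that follow.

```python
# Minimal sketch of a finger-mapped stroke/sweep lookup.
# Keys: (fingers, direction, length class, tap count) -> character.
# The character assignments below are ILLUSTRATIVE ONLY; the real
# assignments appear in the specification's tables.

LOWERCASE_MAP = {
    (("index",), "up", "short", 0): "e",           # single-finger stroke
    (("index",), "up", "long", 0): "r",
    (("index",), "down", "short", 0): "t",
    (("index", "middle"), "up", "short", 0): "s",  # sweep by an adjacent pair
    (("index",), "up", "short", 2): "q",           # double-tap embellishment
}

def lookup_character(fingers, direction, length, taps=0):
    """Return the character for a stroke or sweep, or None if unassigned."""
    return LOWERCASE_MAP.get((tuple(fingers), direction, length, taps))

print(lookup_character(["index"], "up", "short"))            # e
print(lookup_character(["index", "middle"], "up", "short"))  # s
```

Unassigned combinations simply return nothing, leaving them free for application-specific behaviors, as the specification notes for number/pinky mode.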
The following tables depict the various up/down strokes and sweeps available from lowercase mode. Although the index finger column of each table is on the left side of the table, the tables characterize the finger maps for both the left and right hands. The tables for uppercase mode are not shown; they can be derived from the lowercase mode tables by swapping the case of every letter character, so that all lowercase letters become uppercase and all uppercase letters become lowercase. The tables indicate the characters that the character entry system requests the host to insert.
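The uppercase derivation described above amounts to a case swap applied over the lowercase table. A hedged sketch (the gesture keys and table contents are illustrative placeholders, not the specification's actual assignments):

```python
# Sketch: derive the uppercase-mode table from the lowercase-mode table
# by swapping the case of every letter; non-letter characters pass through.
# Gesture keys and characters here are illustrative placeholders.

def derive_uppercase_table(lowercase_table):
    return {gesture: ch.swapcase() if ch.isalpha() else ch
            for gesture, ch in lowercase_table.items()}

lowercase = {
    ("index", "up", "short"): "e",
    ("ring", "down", "long"): "-",   # dashes are unaffected by case
}
uppercase = derive_uppercase_table(lowercase)
print(uppercase[("index", "up", "short")])  # E
print(uppercase[("ring", "down", "long")])  # -
```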
Figure imgf000056_0001
Lowercase mode up/down strokes (without preceding taps).
Figure imgf000056_0002
Lowercase mode up/down sweeps (without preceding taps).
Figure imgf000057_0001
Lowercase mode up/down double-tap strokes.
Figure imgf000057_0002
Lowercase mode up/down double-tap sweeps.
Figure imgf000058_0001
Lowercase and uppercase mode up/down triple-tap strokes.
Figure imgf000058_0002
Lowercase and uppercase mode up/down triple-tap sweeps.
Number/pinky mode is a submode of both lowercase mode and uppercase mode. The finger map for this submode is the same regardless of which input mode it was activated from. The pinky finger remains in a hold gesture for the duration of number/pinky mode, so the remaining fingers select the character. This system specification assigns characters to both the short and long up and down strokes, but it does not assign characters or other behaviors to the double-tap or triple-tap versions of these gestures. The double-tap and triple-tap gestures of number/pinky mode are therefore available to request application-specific or operating-system-specific behaviors.
The following table depicts the strokes available from number/pinky mode. Although the index finger column is on the left side of the table, the table characterizes the finger map for both the left and right hands. This table also indicates the characters that the character entry system requests the host to insert.
Figure imgf000059_0001
Number/pinky mode up/down strokes.
8.5. Cursor Modes
The cursor modes are navigation mode and highlighting mode. Navigation mode moves the cursor around without highlighting any of the text, and highlighting mode moves the cursor around, highlighting all text over which the cursor passes. When the host receives requests to move the cursor without also extending a highlight, the host should remove any existing highlighting (but not the highlighted characters themselves). The cursor modes do not employ finger mapping, so the user may enter the gestures anywhere on the input area using any finger.
Navigation mode and highlighting mode share the same gestures except for the following, which are only available to the navigation mode:
• New Line -- This is a double-tap stroke to the left of any length. The gesture issues a new line sequence to the host. The new line sequence varies by operating system. For example, Unix-variety operating systems use the ASCII line feed, while Microsoft Windows-variety operating systems use the two-character sequence ASCII carriage return followed by ASCII line feed. It is useful to have a new line gesture available from cursor mode so that the user can repeatedly hit enter on input fields until encountering an input field requiring editing.
• Tab ~ This is a double-tap stroke of any length to the right. This gesture issues a tab to the host. It is up to the host to decide how to interpret the tab. Some hosts interpret all tab characters as insertions of tabs into the text. Some hosts interpret tab characters as navigation among fields, hence the support for tabs from navigation mode.
Navigation mode and highlighting mode share the following gestures:
• Character Left ~ This is a short stroke to the left. It requests that the host move the cursor one character closer to the start of the text.
• Word Left ~ This is a long stroke to the left. It requests that the host move the cursor one word to the left. If the cursor is in the middle of a word, the host should move the cursor to the beginning of the word. If the cursor is between words, the host should move the cursor to the start of the preceding word. Hosts that do not support word-left behavior may instead implement character-left.
• Beginning of Line -- This is a triple-tap stroke of any length to the left. It requests that the host move the cursor to the beginning of the current line.
• Character Right ~ This is a short stroke to the right. It requests that the host move the cursor one character closer to the end of the text.
• Word Right ~ This is a long stroke to the right. It requests that the host move the cursor to the start of the next word in the text. Hosts that do not support word-right behavior may instead implement character-right.
• End of Line ~ This is a triple-tap stroke of any length to the right. It requests that the host move the cursor to the end of the current line.
• Up Line ~ This is a short stroke up. It requests that the host move the cursor to the previous line in the text, positioning the cursor at the same offset into the line.
• Up Paragraph ~ This is a long stroke up. It requests that the host move the cursor one paragraph earlier in the text. If the cursor is presently in a paragraph, the host should move the cursor to the beginning of the paragraph. If the cursor is between paragraphs, the host should move the cursor to the beginning of the previous paragraph. Hosts that do not support paragraph-up behavior may instead implement line-up.
• Beginning of Text ~ This is a double-tap up of any length. It requests that the host move the cursor to the beginning of the text. The system may also implement this behavior as a triple-tap up of any length.
• Down Line ~ This is a short stroke down. It requests that the host move the cursor to the next line in the text, positioning the cursor at the same offset into the line.
• Down Paragraph ~ This is a long stroke down. It requests that the host move the cursor to the beginning of the next paragraph. Hosts that do not support paragraph-down behavior may instead implement line-down.
• End of Text ~ This is a double-tap down of any length. It requests that the host move the cursor to the end of the text. The system may also implement this behavior as a triple-tap down of any length.
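Taken together, the cursor-mode gestures above reduce to a small dispatch table keyed on direction, length class, and tap count. The following is a hedged sketch; the request strings and function names are illustrative stand-ins, not identifiers from the specification.

```python
# Sketch of cursor-mode gesture dispatch. Multi-tap gestures match at any
# length, so the length class collapses to "any" when taps are present.
# Request names are illustrative, not taken from the specification.

CURSOR_GESTURES = {
    ("left",  "short", 0): "character-left",
    ("left",  "long",  0): "word-left",
    ("left",  "any",   3): "beginning-of-line",  # triple-tap
    ("right", "short", 0): "character-right",
    ("right", "long",  0): "word-right",
    ("right", "any",   3): "end-of-line",        # triple-tap
    ("up",    "short", 0): "up-line",
    ("up",    "long",  0): "up-paragraph",
    ("up",    "any",   2): "beginning-of-text",  # double-tap
    ("down",  "short", 0): "down-line",
    ("down",  "long",  0): "down-paragraph",
    ("down",  "any",   2): "end-of-text",        # double-tap
}

def dispatch_cursor_gesture(direction, length, taps=0):
    """Map a detected gesture to the request issued to the host."""
    key = (direction, "any", taps) if taps else (direction, length, taps)
    return CURSOR_GESTURES.get(key)

print(dispatch_cursor_gesture("left", "long"))         # word-left
print(dispatch_cursor_gesture("up", "short", taps=2))  # beginning-of-text
```

Because the cursor modes do not employ finger mapping, the dispatch depends only on the gesture's shape, not on which finger or region produced it.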
8.6. Button Modes
The button modes allow the user to pull up a conventional virtual keyboard as desired. They dispense with the gesture system and display buttons for character entry. There are two button modes ~ keyboard mode and symbol table mode. Keyboard mode displays a conventional virtual keyboard. This virtual keyboard may itself be modal, but this specification doesn't dictate those modes. Symbol table mode displays a tabular list of character symbols available for entry. In addition to symbols available through the system's finger mapping, this table may display international symbols, mathematical symbols, etc. The symbols and keyboard keys are buttons that the user presses to make requests of the host, such as to insert or delete a character.
The virtual keyboard and symbol table presumably have a way to switch between the two modes, as depicted in the state diagram. This specification doesn't dictate the mechanism, though. The user may also completely close either the virtual keyboard or the symbol table, returning the character entry system to the gesture method of input. The button modes could provide a button for accomplishing this, or they may employ the Close Input Area gesture. From these modes, the Close Input Area gesture wouldn't close the input area; it would just close the input mode. Upon closing either of these input modes, navigation mode becomes the system's active mode.
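The button-mode transitions described above form a small state machine: the two button modes switch between each other, and closing either one activates navigation mode. A hedged sketch with illustrative state and event names:

```python
# Sketch of the button-mode state machine. Closing either button mode
# returns the system to navigation mode, as described above. State and
# event names are illustrative, not from the specification.

TRANSITIONS = {
    ("keyboard", "switch"): "symbol_table",
    ("symbol_table", "switch"): "keyboard",
    ("keyboard", "close"): "navigation",
    ("symbol_table", "close"): "navigation",
}

class InputModeMachine:
    def __init__(self, mode="navigation"):
        self.mode = mode

    def handle(self, event):
        # Unknown (mode, event) pairs leave the current mode unchanged.
        self.mode = TRANSITIONS.get((self.mode, event), self.mode)
        return self.mode

machine = InputModeMachine("keyboard")
print(machine.handle("switch"))  # symbol_table
print(machine.handle("close"))   # navigation
```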

Claims

Claims:
1. A method of user input on a touch-sensitive surface associated with at least one host that interprets user requests in response to user input, said method comprising the steps of: defining an input area having a plurality of input regions, associating each combination of input regions with a set of possible user requests, detecting all gesturing fingers simultaneously in contact with the touch-sensitive surface, each gesturing finger initially having touched the input area in a particular input region and traversed a path, the path having a length greater than or equal to a specified set selection threshold distance, detecting a gesture that said gesturing fingers compose, determining a gesture length as a function of the paths traversed by said gesturing fingers, defining the regions touched as the combination of regions initially touched by said gesturing fingers, selecting, as a function of said gesture length, a user request from among the possible user requests associated with said regions touched, said function yielding distinct user requests for at least two distinct values of gesture length; and issuing said user request to said at least one host.
2. A method according to claims 1, 10, 11, 13 or 24, wherein said gesture includes at least one additional attribute independent of both said regions touched and said gesture length, and wherein said selecting step further comprises selecting said user request as a function of said at least one additional attribute.
3. A method according to claims 2, 10, 11, 13, 24 further comprising the step of determining a gesture direction as a function of the paths traversed by said gesturing fingers, independently of said gesture length, and wherein said selecting step comprises selecting said user request as a function of said gesture direction.
4. A method according to claim 3, 10, 11, 13, 24 each of said input regions having a reference line indicating two opposite directions, the path of each of said gesturing fingers having a first location and a last location; for each of said gesturing fingers, determining the vector from the finger's first location to said finger's last location, producing a projected vector by projecting said vector onto the reference line for the input region that said finger initially touched, and identifying the direction of said finger as the direction along said reference line that said projected vector points; determining said gesture direction as a function of the projected vectors of each of said gesturing fingers; and further selecting said user request as a function of said gesture direction.
5. A method according to claim 4, 10, 11, 13, 24 wherein exactly two fingers compose said gesturing fingers, said method further comprising producing a test vector by projecting the projected vector of one of said two fingers onto the reference line of the other of said two fingers, and requiring that said test vector and the projected vector of said other of two fingers both point in the same direction along said reference line.
6. A method according to claim 2, 10, 11, 13, 24 wherein said gesture is a multi-tap gesture, said gesturing fingers each having a tap count, said method comprising determining a gesture tap count as a function of the tap counts of each of said gesturing fingers, and further said selecting step comprising selecting said user request as a function of said gesture tap count.
7. A method according to claim 2, 10, 11, 13, 24, wherein said gesturing fingers each further having paths that have at some point registered at least a minimum speed of travel during a polling window.
8. A method according to claim 7, 10, 11, 13, 24, further comprising the steps of detecting one or more hold fingers simultaneously in contact with the touch-sensitive surface, each hold finger initially having touched the input area in a particular input region and traversed a path of at least one point, the path never having registered a minimum speed of travel during any polling window;
determining that the combination of said hold fingers represents a modal hold; further selecting said user request as a function of said modal hold.
9. A method according to claim 1, 10, 11, 13, 24, wherein said user request is issued to said at least one host when at least one of said gesturing fingers lifts from the touch-sensitive surface.
10. A method according to claim 9, 13, 24, wherein exactly one finger composes said gesturing fingers.
11. A method according to claim 9, 13, 24, wherein exactly two fingers compose said gesturing fingers.
12. A method according to claim 11, wherein both of said two fingers having lifted and said user request issued, further comprising the steps of detecting all subsequent gesturing fingers simultaneously in contact with the touch-sensitive surface, each gesturing finger initially having touched the input area in a particular input region and traversed a path, the path having a length greater than or equal to a specified set selection threshold distance, exactly one subsequent gesturing finger composing said subsequent gesturing fingers, determining a finger length as a function of the path traversed by said subsequent gesturing finger, defining the subsequent region touched as the region initially touched by said subsequent gesturing finger, selecting, as a function of said finger length, a subsequent user request from among the possible user requests assigned to said subsequent region touched, said function yielding distinct user requests for at least two distinct values of finger length; and issuing said subsequent user request to said at least one host.
13. A method according to claim 1, 10, 11, wherein said input area is contiguous, said input regions partitioning said input area and forming an adjacency series (i.e. every location in said input area is also a location in exactly one input region, and every region is adjacent to two other regions, except for the regions at the ends of the series).
14. A method according to claim 13, wherein said input area has a hand calibration, said hand calibration is calibrated to a particular hand of a particular user, and said input regions are determined as a function of said hand calibration.
15. A method according to claim 14, wherein said hand calibration comprises at least two rest locations and a reference line passing through each of said at least two rest locations, the boundaries between adjacent input regions bisecting the area between the reference lines of said adjacent input regions.
16. A method according to claim 15, said hand calibration further comprising an indication for whether it represents the user's left hand or the user's right hand.
17. A method according to claim 14, said hand calibration having been established prior to said gesture according to a prior calibration gesture.
18. A method according to claim 17, said hand calibration comprising at least two rest locations and a reference line passing through each of said at least two rest locations, the boundaries between adjacent input regions bisecting the area between the reference lines of said adjacent input regions.
19. A method according to claim 18, said hand calibration comprising at least four rest locations, each rest location corresponding to a finger on a person's hand, and an indication for whether said hand calibration represents the user's left hand or the user's right hand.
20. A method according to claim 19, said touch-sensitive surface having assigned left and right sides and assigned top and bottom sides, said hand calibration including an indication of whether it represents the user's left hand or the user's right hand, said prior calibration gesture being a hand selection slide of at least two fingers, the direction of said hand selection slide indicating whether said hand calibration represents a right hand or a left hand, and the locations at which said at least two fingers lift indicating rest locations, any remaining locations and the reference lines for all four rest locations having been deduced from a previously established hand calibration.
21. A method according to claim 19, said touch-sensitive surface having assigned left and right sides and assigned top and bottom sides, said prior calibration gesture being a tickling gesture, the user having input multiple vectors for each finger as part of said tickling gesture, said multiple vectors having been averaged together into average vectors, each of said four rest locations residing at the midpoint of said average vectors, and the reference lines of each of said at least four rest locations coinciding with said average vectors.
22. A method according to claim 1, 10, 11, 13, wherein said user request is a request to perform a text editing function.
23. A method according to claim 1, wherein said user request represents a keyboard input.
24. A method according to claim 23, wherein said user request is a request to input a particular character.
25. A system for user input on a touch-sensitive surface associated with at least one host that interprets user requests in response to user input, said system comprising: means for defining an input area having a plurality of input regions, means for associating each combination of input regions with a set of possible user requests, means for detecting all gesturing fingers simultaneously in contact with the touch-sensitive surface, each gesturing finger initially having touched the input area in a particular input region and traversed a path, the path having a length greater than or equal to a specified set selection threshold distance, means for detecting a gesture that said gesturing fingers compose, means for determining a gesture length as a function of the paths traversed by said gesturing fingers, means for defining the regions touched as the combination of regions initially touched by said gesturing fingers, means for selecting, as a function of said gesture length, a user request from among the possible user requests associated with said regions touched, said function yielding distinct user requests for at least two distinct values of gesture length; and means for issuing said user request to said at least one host.
26. A system according to claims 25, 34, 35, 37 or 48, wherein said gesture includes at least one additional attribute independent of both said regions touched and said gesture length, and wherein said means for selecting further comprises means for selecting said user request as a function of said at least one additional attribute.
27. A system according to claims 26, 34, 35, 37, 48 further comprising means for determining a gesture direction as a function of the paths traversed by said gesturing fingers, independently of said gesture length, and wherein said means for selecting comprises means for selecting said user request as a function of said gesture direction.
28. A system according to claim 27, 34, 35, 37, 48 each of said input regions having a reference line indicating two opposite directions, the path of each of said gesturing fingers having a first location and a last location; means, for each of said gesturing fingers, determining the vector from the finger's first location to said finger's last location, means for producing a projected vector by projecting said vector onto the reference line for the input region that said finger initially touched, and means for identifying the direction of said finger as the direction along said reference line that said projected vector points; means for determining said gesture direction as a function of the projected vectors of each of said gesturing fingers; and further means for selecting said user request as a function of said gesture direction.
29. A system according to claim 28, 34, 35, 37, 48 wherein exactly two fingers compose said gesturing fingers, said system further comprising means for producing a test vector by projecting the projected vector of one of said two fingers onto the reference line of the other of said two fingers, and means for requiring that said test vector and the projected vector of said other of two fingers both point in the same direction along said reference line.
30. A system according to claim 26, 34, 35, 37, 48 wherein said gesture is a multi- tap gesture, said gesturing fingers each having a tap count, said system comprising means for determining a gesture tap count as a function of the tap counts of each of said gesturing fingers, and further said means for selecting comprising means for selecting said user request as a function of said gesture tap count.
31. A system according to claim 26, 34, 35, 37, 48, wherein said gesturing fingers each further having paths that have at some point registered at least a minimum speed of travel during a polling window.
32. A system according to claim 31, 34, 35, 37, 48, further comprising means for detecting one or more hold fingers simultaneously in contact with the touch-sensitive surface, each hold finger initially having touched the input area in a particular input region and traversed a path of at least one point, the path never having registered a minimum speed of travel during any polling window; and means for determining that the combination of said hold fingers represents a modal hold; further means for selecting said user request as a function of said modal hold.
33. A system according to claim 25, 34, 35, 37, 48, wherein said user request is issued to said at least one host when at least one of said gesturing fingers lifts from the touch-sensitive surface.
34. A system according to claim 33, 37, 48, wherein exactly one finger composes said gesturing fingers.
35. A system according to claim 33, 37, 48, wherein exactly two fingers compose said gesturing fingers.
36. A system according to claim 35, wherein both of said two fingers having lifted and said user request issued, said system further comprising means for detecting all subsequent gesturing fingers simultaneously in contact with the touch-sensitive surface, each gesturing finger initially having touched the input area in a particular input region and traversed a path, the path having a length greater than or equal to a specified set selection threshold distance, exactly one subsequent gesturing finger composing said subsequent gesturing fingers, means for determining a finger length as a function of the path traversed by said subsequent gesturing finger, means for defining the subsequent region touched as the region initially touched by said subsequent gesturing finger, means for selecting, as a function of said finger length, a subsequent user request from among the possible user requests assigned to said subsequent region touched, said function yielding distinct user requests for at least two distinct values of finger length; and means for issuing said subsequent user request to said at least one host.
37. A system according to claim 25, 34, 35, wherein said input area is contiguous, said input regions partitioning said input area and forming an adjacency series (i.e. every location in said input area is also a location in exactly one input region, and every region is adjacent to two other regions, except for the regions at the ends of the series).
38. A system according to claim 37, wherein said input area has a hand calibration, said hand calibration is calibrated to a particular hand of a particular user, and said input regions are determined as a function of said hand calibration.
39. A system according to claim 38, wherein said hand calibration comprises at least two rest locations and a reference line passing through each of said at least two rest locations, the boundaries between adjacent input regions bisecting the area between the reference lines of said adjacent input regions.
40. A system according to claim 39, said hand calibration further comprising an indication for whether it represents the user's left hand or the user's right hand.
41. A system according to claim 38, said hand calibration having been established prior to said gesture according to a prior calibration gesture.
42. A system according to claim 41, said hand calibration comprising at least two rest locations and a reference line passing through each of said at least two rest locations, the boundaries between adjacent input regions bisecting the area between the reference lines of said adjacent input regions.
43. A system according to claim 42, said hand calibration comprising at least four rest locations, each rest location corresponding to a finger on a person's hand, and an indication for whether said hand calibration represents the user's left hand or the user's right hand.
44. A system according to claim 43, said touch-sensitive surface having assigned left and right sides and assigned top and bottom sides, said hand calibration including an indication of whether it represents the user's left hand or the user's right hand, said prior calibration gesture being a hand selection slide of at least two fingers, the direction of said hand selection slide indicating whether said hand calibration represents a right hand or a left hand, and the locations at which said at least two fingers lift indicating rest locations, any remaining locations and the reference lines for all four rest locations having been deduced from a previously established hand calibration.
45. A system according to claim 43, said touch-sensitive surface having assigned left and right sides and assigned top and bottom sides, said prior calibration gesture being a tickling gesture, the user having input multiple vectors for each finger as part of said tickling gesture, said multiple vectors having been averaged together into average vectors, each of said four rest locations residing at the midpoint of said average vectors, and the reference lines of each of said at least four rest locations coinciding with said average vectors.
46. A system according to claim 25, 34, 35, 37, wherein said user request is a request to perform a text editing function.
47. A system according to claim 25, wherein said user request represents a keyboard input.
48. A system according to claim 47, wherein said user request is a request to input a particular character.
49. A method of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said method comprising the steps of: defining an input area having a plurality of input regions, each input region corresponding to a different finger, detecting finger movement in at least one of said regions, determining a character entered based on a combination of the detected finger movement and the region in which it was detected; and displaying the determined character in said host area.
50. A method of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said method comprising the steps of:
defining an input area, detecting finger movement of at least one finger in said input area, determining a character based on a start position, direction and length of movement; and displaying the determined character in said host area.
51. A method of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said method comprising the steps of:
defining an input area, detecting finger movement of at least one finger in said input area, determining a character corresponding to the detected finger movement in said input area, based at least in part on a direction of movement not parallel to an edge of said screen; and displaying the determined character in said host area.
52. A method of accepting user input to a device having a touch sensitive screen, using a gesture entry system for identifying gestures performed by fingers on said screen, said method comprising the steps of:
detecting at least one hold input from at least one hold finger; detecting at least one gesture input from at least one gesturing finger; and identifying user input from a combination of said hold and gesture inputs.
53. A method of accepting user input to a device having a touch sensitive screen, using a gesture entry system for identifying user input based on gestures performed by fingers on said screen, said method comprising the steps of:
defining an input area having a plurality of input regions, each input region corresponding to a different finger, detecting finger movement in at least two of said regions, identifying said user input based on the combination of detected finger movements in said at least two regions.
54. A method of accepting user input to a device having a touch sensitive screen, using a gesture entry system for identifying user input based on gestures performed by fingers on said screen, said method comprising the steps of:
defining an input area having a plurality of input regions, each input region corresponding to a different finger, detecting finger movement in at least one of said regions, identifying said user input based on the combination of the detected movement and the region in which it is detected.
55. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system
comprising:
means for defining an input area, means for detecting finger movement of at least one finger in said input area, means for determining a character based on a start position, direction and length of movement; and means for displaying the determined character in said host area.
56. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system comprising:
means for defining an input area; means for detecting finger movement of at least one finger in said input area; means for determining a character corresponding to the detected finger movement in said input area, based at least in part on a direction of movement not parallel to an edge of said screen; and means for displaying the determined character in said host area.
57. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system comprising:
means for defining an input area having a plurality of input regions, each input region corresponding to a different finger; means for detecting finger movement in at least one of said regions; means for determining a character entered based on a combination of the detected finger movement and the region in which it was detected; and means for displaying the determined character in said host area.
58. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system comprising:
means for defining an input area; means for detecting finger movement of at least one finger in said input area; means for determining a character based on a start position, direction and length of movement; and means for displaying the determined character in said host area.
59. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system comprising:
means for defining an input area; means for detecting finger movement of at least one finger in said input area; means for determining a character corresponding to the detected finger movement in said input area, based at least in part on a direction of movement not parallel to an edge of said screen; and means for displaying the determined character in said host area.
60. A system of accepting user input to a device having a touch sensitive screen, using a gesture entry system for identifying gestures performed by fingers on said screen, said system comprising:
means for detecting at least one hold input from at least one hold finger; means for detecting at least one gesture input from at least one gesturing finger; and means for identifying user input from a combination of said hold and gesture inputs.
61. A system of accepting user input to a device having a touch sensitive screen, using a gesture entry system for identifying user input based on gestures performed by fingers on said screen, said system comprising:
means for defining an input area having a plurality of input regions, each input region corresponding to a different finger; means for detecting finger movement in at least two of said regions; and means for identifying said user input based on the combination of detected finger movements in said at least two regions.
62. A system of accepting user input to a device having a touch sensitive screen, using a gesture entry system for identifying user input based on gestures performed by fingers on said screen, said system comprising:
means for defining an input area having a plurality of input regions, each input region corresponding to a different finger; means for detecting finger movement in at least one of said regions; and means for identifying said user input based on the combination of the detected movement and the region in which it is detected.
63. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system comprising:
means for defining an input area; means for detecting finger movement of at least one finger in said input area; means for determining a character corresponding to the detected finger movement, based on a start position, direction and length of movement; and means for displaying the determined character in said host area.
64. A system of finger-mapped character entry on a touch sensitive screen having at least one host area for host display in response to user input, said system comprising:
means for defining an input area; means for detecting finger movement of at least one finger in said input area; means for determining a character corresponding to the detected finger movement in said input area, based at least in part on a direction of movement not parallel to an edge of said screen; and means for displaying the determined character in said host area.
PCT/US2012/064563 2011-11-09 2012-11-09 Finger-mapped character entry systems WO2013071198A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/272,736 US10082950B2 (en) 2011-11-09 2014-05-08 Finger-mapped character entry systems
US16/037,077 US11086509B2 (en) 2011-11-09 2018-07-17 Calibrated finger-mapped gesture systems
US17/106,861 US20210109651A1 (en) 2011-11-09 2020-11-30 Calibration Gestures For Finger-Mapped Gesture Systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161557570P 2011-11-09 2011-11-09
US61/557,570 2011-11-09

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US14272736 A-371-Of-International 2012-11-09
US14/272,736 Continuation-In-Part US10082950B2 (en) 2011-11-09 2014-05-08 Finger-mapped character entry systems
US16/037,077 Division US11086509B2 (en) 2011-11-09 2018-07-17 Calibrated finger-mapped gesture systems

Publications (2)

Publication Number Publication Date
WO2013071198A2 true WO2013071198A2 (en) 2013-05-16
WO2013071198A3 (en) 2016-05-19

Family

ID=48290775

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/064563 WO2013071198A2 (en) 2011-11-09 2012-11-09 Finger-mapped character entry systems

Country Status (1)

Country Link
WO (1) WO2013071198A2 (en)


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6570557B1 (en) * 2001-02-10 2003-05-27 Finger Works, Inc. Multi-touch system and method for emulating modifier keys via fingertip chords
US7057607B2 (en) * 2003-06-30 2006-06-06 Motorola, Inc. Application-independent text entry for touch-sensitive display
US8564543B2 (en) * 2006-09-11 2013-10-22 Apple Inc. Media player with imaged based browsing
US20090249258A1 (en) * 2008-03-29 2009-10-01 Thomas Zhiwei Tang Simple Motion Based Input System
US8723795B2 (en) * 2008-04-24 2014-05-13 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103560942A (en) * 2013-10-09 2014-02-05 广东欧珀移动通信有限公司 Notification message immediate processing method and system and mobile terminal
CN103560942B (en) * 2013-10-09 2016-08-31 广东欧珀移动通信有限公司 Method, system and the mobile terminal of a kind of quick process announcement information
US20150116214A1 (en) * 2013-10-29 2015-04-30 Anders Grunnet-Jepsen Gesture based human computer interaction
US9304597B2 (en) * 2013-10-29 2016-04-05 Intel Corporation Gesture based human computer interaction
US20150143277A1 (en) * 2013-11-18 2015-05-21 Samsung Electronics Co., Ltd. Method for changing an input mode in an electronic device
US10545663B2 (en) * 2013-11-18 2020-01-28 Samsung Electronics Co., Ltd Method for changing an input mode in an electronic device
US10019109B2 (en) 2016-06-28 2018-07-10 Google Llc Enhancing touch-sensitive device precision
US10739912B2 (en) 2016-06-28 2020-08-11 Google Llc Enhancing touch-sensitive device precision
US10496273B2 (en) 2017-03-27 2019-12-03 Google Llc Dismissing displayed elements
US11573641B2 (en) * 2018-03-13 2023-02-07 Magic Leap, Inc. Gesture recognition system and method of using same
US20230152902A1 (en) * 2018-03-13 2023-05-18 Magic Leap, Inc. Gesture recognition system and method of using same
EP4057116A1 (en) * 2021-03-09 2022-09-14 Adatype AB Method for biometrically optimizing a virtual keyboard

Also Published As

Publication number Publication date
WO2013071198A3 (en) 2016-05-19

Similar Documents

Publication Publication Date Title
US10983694B2 (en) Disambiguation of keyboard input
US10908815B2 (en) Systems and methods for distinguishing between a gesture tracing out a word and a wiping motion on a touch-sensitive keyboard
Li et al. The 1line keyboard: a QWERTY layout in a single line
US20160239137A1 (en) Method for interacting with a dynamic tactile interface
US20160364138A1 (en) Front touchscreen and back touchpad operated user interface employing semi-persistent button groups
US8884872B2 (en) Gesture-based repetition of key activations on a virtual keyboard
US8059101B2 (en) Swipe gestures for touch screen keyboards
CN101937313B (en) A kind of method and device of touch keyboard dynamic generation and input
JP6115867B2 (en) Method and computing device for enabling interaction with an electronic device via one or more multi-directional buttons
US9535603B2 (en) Columnar fitted virtual keyboard
WO2013071198A2 (en) Finger-mapped character entry systems
US20170017393A1 (en) Method for controlling interactive objects from a touchpad of a computerized device
US20150143276A1 (en) Method for controlling a control region of a computerized device from a touchpad
US20150100910A1 (en) Method for detecting user gestures from alternative touchpads of a handheld computerized device
US20100259482A1 (en) Keyboard gesturing
EP2653955B1 (en) Method and device having touchscreen keyboard with visual cues
WO2014058948A1 (en) A split virtual keyboard on a mobile computing device
WO2014059060A1 (en) Text entry using shapewriting on a touch-sensitive input panel
JP2013527539A5 (en)
Gaur AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12848688

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct app. not ent. europ. phase

Ref document number: 12848688

Country of ref document: EP

Kind code of ref document: A2