WO2015109468A1 - Functionality to reduce the time it takes for a device to receive and process input - Google Patents


Info

Publication number
WO2015109468A1
Authority
WO
WIPO (PCT)
Prior art keywords
input
text
letters
keys
candidate characters
Prior art date
Application number
PCT/CN2014/071189
Other languages
English (en)
Inventor
Guo Bin Shen
Matthew Robert Scott
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Priority to PCT/CN2014/071189 priority Critical patent/WO2015109468A1/fr
Publication of WO2015109468A1 publication Critical patent/WO2015109468A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04886Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F3/04883Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/232Orthographic correction, e.g. spell checking or vowelisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/274Converting codes to words; Guess-ahead of partial word inputs

Definitions

  • computing devices e.g., personal mobile devices such as smartphones, tablets, etc.
  • client devices are typically configured with an interaction device that presents a configuration or arrangement (e.g., a QWERTY keyboard) that may limit users to providing Latin-based input.
  • a user of a computing device wanting to communicate a message comprising Chinese characters typically has to select keys representing letters of the Latin alphabet (e.g., 'a', 'b', 'c', 'd' through 'z').
  • the user provides input in "Pinyin", which is the official phonetic system for transcribing the sound of Chinese characters into Latin-based script.
  • the computing devices may be configured with logic to translate the input from the user (e.g., a string of letters from the Latin alphabet) into Chinese characters and display the Chinese characters on the computing device. Then, the user providing the input can review the message (e.g., a text message, an electronic mail message, etc.) in Chinese and send the message when it is ready to be communicated to a recipient or to a recipient device. Accordingly, even though the final message communicated by the computing device may be transcribed to Chinese (e.g., Chinese characters), the input provided by the user is Latin-based input (e.g., the letters from a Latin alphabet).
  • the Latin alphabet (e.g., the modern version used in the English, German and/or Spanish languages) includes a limited number of letters (e.g., twenty-six letters - 'a', 'b', 'c', 'd' through 'z').
  • the computing devices are able to present a key or a selectable option representing each letter even when display space of a computing device may be limited.
  • many non Latin-based phonetic and/or writing systems, such as those in East Asian countries and Arabic-speaking countries (e.g., Arabic script), include hundreds if not thousands of different characters. Consequently, computing devices are unable to realistically present a key or a selectable option representing each character of some non Latin-based writing systems because there is not enough display space.
  • a user must input a large number of letters to communicate a message in a non Latin-based phonetic or writing system.
  • a single Chinese character or a group of Chinese characters may correlate to, or be transcribed with, a large amount of Latin-based letters. Therefore, the amount of time it takes to select keys representing the Latin-based letters and communicate a message is significantly increased.
  • a user may often expend additional input time correcting errors in the input and/or revising the input.
  • the techniques and/or systems described herein minimize (e.g., decrease) an amount of time it takes for a user to provide input to a computing device or enter input via a user interface of the computing device (e.g., select a key or an option).
  • the techniques and/or systems described herein may also minimize an amount of time it takes a user to select a candidate character in a particular language, the candidate character representing, or being associated with, the provided input. Further, the techniques and/or systems described herein provide an efficient error correction or text revision mechanism.
  • FIG. 1 illustrates an example environment including a computing device that implements an input module to process input received via an interaction device, in accordance with various embodiments.
  • FIG. 2 illustrates components of the computing device that operates the input module, in accordance with various embodiments.
  • FIG. 3 illustrates example diagrams of a first type of input directed to selecting one or more keys presented on a user interface, in accordance with various embodiments.
  • FIG. 4 illustrates example diagrams of a second type of input directed to changing text associated with the first type of input, in accordance with various embodiments.
  • FIG. 5 illustrates an example user interface of a computing device that receives and processes the first type of input and the second type of input to determine that the user is instructing the computing device to add text to text already entered and displayed, in accordance with various embodiments.
  • FIG. 6 illustrates another example user interface of a computing device that receives and processes the first type of input and the second type of input to determine that the user is instructing the computing device to add text to text already entered and displayed, in accordance with various embodiments.
  • FIG. 7 illustrates an example user interface of a computing device that receives and processes the first type of input and the second type of input to determine that the user is instructing the computing device to add a pronunciation indication, in accordance with various embodiments.
  • FIG. 8 illustrates example user interfaces of computing devices that receive and process shorthand versions of the first type of input, in accordance with various embodiments.
  • FIG. 9 illustrates an example user interface of a computing device that presents a reconfigured keyboard, in accordance with various embodiments.
  • FIG. 10 illustrates another example user interface of a computing device that presents a reconfigured keyboard, in accordance with various embodiments.
  • FIG. 11 illustrates an example process that changes text entered via the first type of input based at least on the second type of input, in accordance with various embodiments.
  • FIG. 12 illustrates an example process that reduces a number of candidate characters associated with text entered via the first type of input based at least on the second type of input, in accordance with various embodiments.
  • FIG. 13 illustrates an example process that associates a change instructed by the second type of input with a selection of a key representing a letter, in accordance with various embodiments.
  • FIG. 14 illustrates an example user interface of a computing device that maintains information related to a pressed or unpressed state for individual keys of a keyboard and visually distinguishes between keys based on the maintained state information, in accordance with various embodiments.
  • FIG. 15 illustrates another example user interface of a computing device that maintains information related to a pressed or unpressed state for individual keys of a keyboard and visually distinguishes between keys based on the maintained state information, in accordance with various embodiments.
  • FIG. 16 illustrates an example user interface of a computing device that determines a location to initiate a correction or revision in an embodiment where an individual letter occurs more than once in a revision string, in accordance with various embodiments.
  • FIG. 17 illustrates an example process that maintains state information (e.g., pressed or unpressed) for a plurality of keys presented via a virtual user interface, in accordance with various embodiments.
  • FIG. 18 illustrates an example process that provides functionality to select an instance of a single letter that is repeated in text being corrected or revised, in accordance with various embodiments.
  • the techniques and/or systems described herein minimize (e.g., decrease) an amount of time it takes for a user to provide input to a computing device or enter input via a user interface of the computing device (e.g., select a key or an option).
  • the techniques and/or systems described herein may also minimize an amount of time it takes a user to select a candidate character in a particular language, the candidate character representing the provided input.
  • the techniques and/or systems described herein provide an efficient error correction or text revision mechanism. Consequently, the techniques and/or systems discussed herein improve the user input experience by providing features and/or functions not currently implemented on computing devices.
  • the techniques and/or systems reduce memory and computing resource requirements of a computing device configured to receive and process input entered by a user at least because the input time is minimized.
  • An amount of time it takes for a user to provide input (e.g., type a message) to a computing device is typically affected by a distance an input object must move to select various keys (e.g., as presented on a display surface of a user interface).
  • the amount of time it takes for the user to provide input may be directly correlated to a number of keys for a user to select.
  • the amount of time it takes for a user to provide input to a computing device may also be affected by the amount of time it takes the user to select an intended character from multiple candidate characters, specifically for non-Latin phonetics.
  • the amount of time it takes for a user to provide input to a computing device may further be affected by the amount of time it takes the user to correct an error in the input or revise the input.
  • users typically have a difficult time pinpointing a location in displayed text at which to initiate a correction or revision at least because input objects (e.g., fingers) may be large and the display space and displayed text may be small (e.g., on a smartphone).
  • users often inefficiently delete large portions of text to return to the location where an error occurs or where a revision is to be made, and then re-type the text that was deleted.
  • the techniques and/or systems described herein improve a user input experience by reducing a number of keys to be selected to enter text, by minimizing a distance that an input object travels to select one or more options and by implementing efficient error correction or text revision.
  • the techniques and/or systems describe a computing device that may receive and process first input provided by a user, the input selecting keys to enter text. The computing device may display the text so that the user can review the entered text.
  • the techniques and/or systems describe receiving and processing second input that changes or modifies the displayed text corresponding to the first input.
  • the second input may provide an instruction to change or modify the entered text.
  • the second input may provide an instruction to add text (e.g., one or more additional letters) to the letters already displayed without having to physically contact keys representing the additional letters on a user interface.
  • the second input may provide an instruction to add a pronunciation indication to the letters already displayed (e.g., change the pronunciation of the displayed text from a first pronunciation to a second pronunciation, such as from a first tone to a second tone).
  • the second input may provide a basis for reducing a number of candidate characters in a non Latin-based language (e.g., Chinese) to be presented for selection (e.g., by a user of the computing device).
  • the first type of input may comprise direct input.
  • direct input may include tapping an individual key to select a letter to enter (e.g., 'a', 'b', 'c', 'd' and so forth).
  • Another example of direct input may include swiping from one key to another key, or in the direction of another key, to select multiple letters.
  • the second type of input may comprise indirect input.
  • Indirect input for example, may include a flick motion or a gesture motion associated with (e.g., initiated on) an individual key, as further discussed herein.
  • indirect input may also include a swipe motion.
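The tap/flick distinction above can be sketched with simple speed and duration thresholds. This is a minimal illustration under stated assumptions, not the patented method: the threshold values, the function name, and the four-way direction encoding are all hypothetical.

```python
import math

# Hypothetical thresholds; a real implementation would tune these per device.
FLICK_MIN_SPEED = 500.0   # pixels per second
FLICK_MAX_DURATION = 0.2  # seconds

def classify_input(start, end, duration):
    """Classify a touch as direct input (a tap) or indirect input (a flick).

    start/end are (x, y) positions in pixels; duration is in seconds.
    Returns ('tap', None) or ('flick', direction) where direction is one of
    'up', 'down', 'left', 'right'.
    """
    dx, dy = end[0] - start[0], end[1] - start[1]
    distance = math.hypot(dx, dy)
    speed = distance / duration if duration > 0 else 0.0
    # A flick is a quick, roughly linear movement; anything else is a tap.
    if speed < FLICK_MIN_SPEED or duration > FLICK_MAX_DURATION:
        return ('tap', None)
    if abs(dx) >= abs(dy):
        return ('flick', 'right' if dx > 0 else 'left')
    return ('flick', 'down' if dy > 0 else 'up')
```

Using both a speed characteristic and a directional characteristic, as the description suggests, lets the same key accept a tap (select the letter) and a flick (issue a change instruction) without ambiguity.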
  • the computing device may be configured to implement efficient error correction or text revision.
  • the techniques and/or systems describe maintaining information indicating that individual keys of a plurality of keys presented via a user interface have been selected to enter text.
  • the techniques and/or systems describe visually distinguishing, on the user interface, between the keys that have been selected to enter the text and a plurality of other keys presented via the user interface that have not been selected to enter the text (e.g., a "pressed" state or an "unpressed" state).
  • a user may quickly identify and select a letter in the text for which a key was already selected to correct or revise the text.
  • the computing device receives a selection of a previously selected key that indicates a location within the text to initiate a correction to the text and/or a revision to the text.
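The pressed/unpressed bookkeeping described above can be sketched as follows. The class and method names are hypothetical, and disambiguating a letter that repeats in the entered text (the FIG. 16 case) is deliberately left as a separate step.

```python
class KeyStateTracker:
    """Track a pressed/unpressed state per key so previously selected keys
    can be visually distinguished and reused to pinpoint a correction."""

    def __init__(self):
        self.entered = []     # letters in the order they were entered
        self.pressed = set()  # keys currently in the 'pressed' state

    def select(self, letter):
        """Record a direct selection of a key representing a letter."""
        self.entered.append(letter)
        self.pressed.add(letter)

    def is_pressed(self, letter):
        """Whether the key should be rendered in the 'pressed' state."""
        return letter in self.pressed

    def correction_index(self, letter):
        """A selection of a previously pressed key indicates where to start
        a correction: return that letter's index in the entered text
        (first occurrence; repeated letters need extra disambiguation)."""
        if letter not in self.pressed:
            return None
        return self.entered.index(letter)
```

The user interface would query `is_pressed` when drawing each key, which is how the selected and unselected keys are visually distinguished.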
  • letters refer to the letters of a Latin alphabet (e.g., 'a', 'b', 'c', 'd' through 'z').
  • a computing device may display a virtual QWERTY keyboard with an individual key representing an individual letter.
  • the computing device may be configured to display text that directly reflects the keys/letters selected.
  • "characters" refer to transcriptions of a non Latin-based language (e.g., Chinese characters). Once the keys are selected, the computing device may also be configured to display text in characters that are associated with the keys/letters selected (e.g., Chinese characters associated with input in Pinyin).
  • the computing device may display text based on two different writing systems (e.g., input in Pinyin and Chinese characters).
  • the Pinyin phonetic system mentioned above includes the following vowels that may be commonly input to a computing device by a user (e.g., particularly users in China, Taiwan, Singapore): 'a', 'an', 'ao', 'ai', 'ang', 'e', 'en', 'er', 'ei', 'eng', 'i', 'ia', 'iu', 'ie', 'in', 'ing', 'iao', 'ian', 'iang', 'iong', 'o', 'ong', 'ou', 'u', 'un', 'ua', 'uo', 'ue', 'u', 'ui'.
  • the base vowels in Pinyin include 'a', 'e', 'i', 'o', and 'u'.
  • a compound vowel in Pinyin (referred to herein as a “compound Pinyin vowel”) is a combination of at least two letters. As seen above, many of the Pinyin compound vowels start with a Pinyin base vowel and end with the letter 'n' or the letters 'ng'.
  • the Pinyin phonetic system includes three compound consonants (e.g., 'zh', 'ch', and 'sh') that may be commonly input to a computing device by a user.
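The observation that many compound Pinyin vowels end in 'n' or 'ng' is what makes an append-style gesture useful. The sketch below checks whether appending such an ending to already-entered letters yields a valid compound vowel; the lookup table is an illustrative subset of the inventory listed above, and the function name is an assumption.

```python
# Illustrative subset of the compound Pinyin vowels listed above that end
# in 'n' or 'ng'; a complete table would cover the full inventory.
COMPOUND_VOWELS = {'an', 'ang', 'en', 'eng', 'in', 'ing', 'un', 'ong',
                   'ian', 'iang', 'iong', 'uan', 'uang'}

def complete_with_nasal(prefix, ending):
    """Append 'n' or 'ng' (e.g., supplied by a flick gesture) to letters
    already entered, accepting the result only when its tail is a known
    compound Pinyin vowel; otherwise return None."""
    candidate = prefix + ending
    for n in range(len(candidate), 1, -1):
        if candidate[-n:] in COMPOUND_VOWELS:
            return candidate
    return None
```

For example, after directly entering 'ua', a gesture meaning "add 'ng'" yields the valid compound vowel 'uang', whereas the same gesture after a non-vowel prefix would be rejected.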
  • FIG. 1 illustrates an example environment 100 that enables a user of a computing device 102 to effectively and efficiently provide input (e.g., enter text).
  • the computing device 102 in the example environment 100 provides a user with input features and/or functions not currently implemented on computing devices; such features and/or functions decrease the amount of time it takes to provide input. Consequently, the computing device 102 is able to efficiently receive and process the input provided by the user. Where a user is described herein as providing input to, or entering text on, a computing device, a reciprocal operation of the computing device sensing and/or receiving the input and displaying the entered text is also performed.
  • the input provided by a user may be directed to generating and/or communicating a message to a message recipient or a stored contact (e.g., a text message, an instant message, an electronic mail message, a social networking message, etc.).
  • the techniques and/or systems discussed herein may be implemented in association with input directed to entering text for other types of device functionality (e.g., note taking, browsing, searching, writing, gaming, etc.).
  • the user of the computing device 102 may use an input object 104 (e.g., a finger, a stylus, a pen, or other pointing mechanism, etc.) to select (e.g., press, tap, or in some way activate) one or more keys presented via an interaction device 106.
  • the keys presented via the interaction device 106 may include keys or selectable options representing letters.
  • the interaction device 106 may comprise a touch screen that configures and presents a particular layout or arrangement of keys in a keyboard 108, such as a QWERTY keyboard.
  • the interaction device 106 presents the keyboard 108 as part of a larger user interface 110.
  • the user interface may include a virtual keyboard, an input string presentation area 112 and a candidate characters presentation area 114, etc.
  • the computing device 102 may operate an input module 116.
  • the input module 116 is configured to receive and analyze input received from the input object 104.
  • the user may employ the input object 104 to input the text string "shuang".
  • the computing device 102 may display the input string in an input string presentation area 112 in a first portion of the user interface 110 that is part of the computing device 102, or in some way coupled to the computing device 102.
  • the computing device 102 may be configured to determine second text, such as candidate characters displayed in a candidate characters presentation area 114, the candidate characters representing the input string displayed in the input string presentation area 112.
  • some non Latin-based languages comprise a large number of candidate characters, and thus, there may be a large number of candidate characters associated with an input string.
  • the input module 116 discussed herein is configured to reduce a number of candidate characters presented in the candidate characters presentation area 114 based on an analysis of the input, thereby making it easier and more efficient for the user to make a selection of a character (e.g., from the eight candidate characters displayed in the candidate characters presentation area 114 of FIG. 1).
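The candidate-reduction idea can be sketched as follows: without the second input, every character reachable from the entered prefix is a candidate; the second input completes the string and narrows the list. The dictionary entries, characters, and function names below are illustrative assumptions standing in for a full input-method dictionary.

```python
# Illustrative Pinyin-to-character dictionary (an assumption for this
# sketch; a real input method would consult a full dictionary).
CANDIDATES = {
    'shua': ['刷', '耍'],
    'shuan': ['栓', '拴'],
    'shuang': ['双', '霜', '爽'],
}

def candidates_for(prefix, dictionary):
    """Every candidate reachable from the entered prefix (potentially many)."""
    return [ch for py, chars in dictionary.items()
            if py.startswith(prefix) for ch in chars]

def reduce_candidates(prefix, completion, dictionary):
    """Use the second (indirect) input, e.g. a flick that adds 'ng', to
    narrow the list to characters matching the completed input string."""
    return dictionary.get(prefix + completion, [])
```

After directly entering 'shua', the unreduced list contains candidates for 'shua', 'shuan', and 'shuang'; a flick that adds 'ng' narrows it to the 'shuang' candidates only.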
  • the input module 116 is configured to determine that the input includes a first type of input 118 (represented as a dashed line in this document) and a second type of input 120 (represented as a solid line in this document).
  • the illustrated first type of input 118 (also referred to as first input herein) and the second type of input 120 (also referred to as second input herein) are associated with the 'uang' portion of the input string "shuang" displayed in the input string presentation area 112.
  • the first type of input 118 comprises a user selection of letters represented by keys on the keyboard 108 (e.g., direct input or direct selection).
  • the user employs the input object 104 to directly select the 'u' and the 'a'.
  • the user may select the 'u' and the 'a' by employing the input object 104 to provide continuous physical contact across multiple keys/letters.
  • the user may slide or swipe the input object 104 from the 'u' to the 'a' without breaking or interrupting physical contact with a surface of the user interface (e.g., a touch screen).
  • the user may select the 'u' and the 'a' by employing the input object 104 to individually tap or press the key representing each letter (e.g., physically contact or touch/tap the 'u', break or interrupt the physical contact, and then physically contact or touch/tap the 'a').
  • the interaction device 106 may sense the first type of input 118 and the computing device 102 may then receive signals representing the first type of input 118 from the interaction device 106.
  • the second type of input 120 comprises motion by the input object 104 that may instruct or command the computing device 102 to add text to the text selected using the first type of input 118 (e.g., add an 'ng' to the 'ua' or to the 'shua').
  • the second type of input 120 may be indirect input that may not directly select a particular key (e.g., actually tapping or touching a key representing a letter) configured as part of a layout or arrangement of selectable keys on the keyboard 108. Rather, the second type of input 120 instructs or commands the computing device 102 to add text (e.g., the 'ng') to the input string without having to select the key representing the 'n' or the 'g'.
  • the second type of input 120 may comprise a "flick" motion that may include distinguishing characteristics such as a speed characteristic and a directional characteristic.
  • a flick motion may comprise a quick, linear movement of the input object 104 with respect to a selectable key (e.g., the 'a' key in FIG. 1). Therefore, the interaction device 106 may sense the second type of input 120 and the computing device 102 may then receive signals representing the second type of input 120 from the interaction device 106.
  • the second type of input 120 allows the user to save time providing input, for example, because the user does not have to physically contact (e.g., touch or tap) keys representing the additional letters (e.g., the key representing the 'n' or the key representing the 'g').
  • the second type of input 120 may instruct or command the computing device 102 to add a pronunciation indication to the input string or to the letters selected using the first type of input 118, as further discussed herein.
  • the input module 116 is configured to receive signals representing sensed input (e.g., the first type of input 118 and the second type of input 120) and process the signals to determine text to display (e.g., via user interface 110). Accordingly, a computing device 102 operating the input module 116 provides a reduced number of selectable keys a user selects to provide input. In implementations where the computing device 102 correlates or translates the input, the input module 116 may identify candidate characters associated with the input string and then display the candidate characters in a candidate characters presentation area 114 of the user interface 110 for selection. In various embodiments, the candidate characters may be based solely on the first input 118 and the second input 120. In alternative embodiments, the candidate characters may be based on the first input 118, the second input 120 and previous input received before the first input (e.g., the 'sh' in "shuang").
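One way to picture the second type of input is as a small table from flick direction to the change applied to the entered text. The concrete direction-to-action assignment below is a hypothetical choice, not one specified by the description; tones are written with the common digit convention (e.g., 'shuang1' for the first tone).

```python
# Hypothetical mapping from flick direction to the change applied to the
# text entered via the first (direct) input.
FLICK_ACTIONS = {
    'up':    lambda text: text + 'n',   # add a single additional letter
    'right': lambda text: text + 'ng',  # add two additional letters at once
    'down':  lambda text: text + '4',   # add a pronunciation indication
}

def apply_second_input(text, direction):
    """Apply the indirect (second) input to the text entered by direct
    input; unmapped directions leave the text unchanged."""
    action = FLICK_ACTIONS.get(direction)
    return action(text) if action else text
```

Because the same gesture carries an instruction rather than a key position, the user never has to travel to, or contact, the keys representing the 'n' or the 'g'.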
  • FIG. 2 illustrates a diagram 200 that describes components of the computing device 102 that operates the input module 116.
  • the computing device 102 includes, or is in some way coupled to, an interaction device 106 that enables a user of the computing device 102 to provide input to a keyboard 108 of a user interface 110.
  • the computing device 102 may include, but is not limited to, a mobile computing device, a distributed computing device, or an embedded computing device.
  • a computing device may include any one of a variety of devices, such as a smart phone, a mobile phone, a personal digital assistant (PDA), a tablet computer, a laptop computer, a desktop computer, a portable computer, an electronic book device, a gaming console, a personal media player device, a server computer, an automotive computer, a computerized appliance or any other electronic device that may receive signals from the interaction device 106 that represent sensed input (e.g., the first type of input 118 and the second type of input 120 of FIG. 1) and then process the signals to display text that reflects the input from the user (e.g., the entered text).
  • the interaction device 106 may be any one of a variety of interaction devices.
  • the interaction device 106 may then be configured to provide signals that report interaction positions (e.g., selection or activation) of the one or more input objects 104 with respect to the presented keys or options.
  • the interaction device 106 may sense two-dimensional and/or three-dimensional positions of input objects.
  • the interaction device 106 may comprise a direct touch device that displays virtual keys (e.g., on a touch screen, etc.), an air gesture sensing device that may use cameras or other image sensing techniques to determine object position and object speed with respect to options (e.g., keys representing letters), an indirect touch device (e.g., a touchpad, a click-pad, a pressure-pad, etc.), or any other device capable of receiving the first type of input and the second type of input and providing digital signals to the computing device 102.
  • the interaction device 106 may be an input mechanism that is part of the computing device 102.
  • the interaction device 106 may be a separate input mechanism that is connectable to the computing device 102.
  • the interaction device 106 may be a device that is permanently attached to the computing device 102 (e.g., as part of a production process such as before purchase of the computer device 102 by a consumer), or a device that is freely attachable and/or detachable (e.g., as part of a post-production process such as after purchase of the computer device 102 by a consumer).
  • the interaction device 106 may include an input detection area or other input acquiring mechanism (e.g., the user interface 108, an interaction plane, etc.).
  • the input detection area may be opaque, transparent, or a combination of both.
  • the interaction device 106 is configured to sense and determine a position of one or more input objects (e.g., input object 104) or parts of an object (e.g., detection points of an arm or hand to determine an air gesture, etc.) with respect to the detection area. Moreover, the interaction device 106 may sense multiple positions of input representing movement from a first location to a second location (e.g., movement from the 'u' to the 'a' in FIG. 1).
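Sensing movement from a first location to a second location (e.g., from the 'u' to the 'a') amounts to hit-testing a sampled path against key geometry. The rectangle coordinates and function names below are assumptions for illustration; real layouts come from the device's keyboard configuration.

```python
# Assumed key geometry: each key is an axis-aligned rectangle on the
# detection area, given as (x_min, y_min, x_max, y_max) in pixels.
KEY_RECTS = {
    'u': (60, 0, 70, 10),
    'a': (0, 10, 10, 20),
}

def key_at(point, key_rects=KEY_RECTS):
    """Return the key under a sensed position, or None between keys."""
    x, y = point
    for key, (x0, y0, x1, y1) in key_rects.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return key
    return None

def letters_from_path(points, key_rects=KEY_RECTS):
    """Map a sampled movement path (e.g., a swipe from 'u' to 'a') to the
    sequence of distinct keys it crosses, ignoring gaps between keys."""
    letters = []
    for p in points:
        key = key_at(p, key_rects)
        if key is not None and (not letters or letters[-1] != key):
            letters.append(key)
    return letters
```

The same hit-testing serves both input styles: a single sensed position resolves a tap, while a sequence of positions resolves a continuous swipe across multiple keys.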
  • the computing device 102 may include one or more processors 204 and memory 206.
  • the processor(s) 204 may be a single processing unit or a number of units, each of which could include multiple different processing units.
  • the processor(s) 204 may include a microprocessor, a microcomputer, a microcontroller, a digital signal processor, a central processing unit (CPU), a graphics processing unit (GPU), etc.
  • the techniques described herein can be performed, at least in part, by one or more hardware logic components.
  • illustrative types of hardware logic components include a Field-programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), an Application-Specific Standard Products (ASSP), a state machine, a Complex Programmable Logic Device (CPLD), other logic circuitry, a system on chip (SoC), and/or any other devices that manipulate signals based on operational instructions.
  • the processor(s) 204 may be configured to fetch and execute computer-readable instructions stored in the memory 206.
  • the memory 206 may include one or a combination of computer-readable media.
  • “computer-readable media” includes computer storage media and communication media.
  • Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data.
  • Computer storage media includes, but is not limited to, phase change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable ROM (EEPROM), flash memory or other memory technology, compact disk ROM (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store information for access by a computing device.
  • communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave.
  • computer storage media does not include communication media.
  • the memory 206 includes the input module 116 discussed above.
  • the input module 116 may include one or more of an input analysis module 208, a candidate character identification module 210, a character selection module 212, a revision module 214 and an input learning module 216.
  • the memory may also include one or more setting(s) 218 that define one or more input classification(s) 220 and/or one or more layout configuration(s) 222. Each of the components illustrated in FIG. 2 and mentioned above is further discussed below.
  • module is intended to represent example divisions of the software for purposes of discussion, and is not intended to represent any type of requirement or required method, manner or organization. Accordingly, while various "modules" are discussed, their functionality and/or similar functionality could be arranged differently (e.g., combined into a fewer number of modules, broken into a larger number of modules, etc.). Further, while certain functions and modules are described herein as being implemented by software and/or firmware executable on a processor, in other embodiments, any or all of the modules may be implemented in whole or in part by hardware (e.g., as an ASIC, a specialized processing unit, etc.) to execute the described functions. In some instances, the functions and/or modules may be implemented as part of an operating system. In other instances, the functions and/or modules are implemented as part of a device driver (e.g., a driver for a touch surface), firmware, and so on.
  • Example functionality, or applications, operable on the computing device 102 to which a user may provide input may include a text messaging application, an electronic mail application, a social networking application, an instant messaging application, a browsing application, a gaming application, a media player application, a data processing application, and so forth.
  • the input module 116 is configured to receive the sensed input (e.g., signals representing location/position of the input, speed of the input, etc.) from the interaction device 106.
  • the input analysis module 208 then analyzes the input to determine an input string to present via an input string presentation area 112 of the computing device 102 (e.g., so the user can view text that reflects the input). For instance, the input analysis module 208 may distinguish between the first type of input 118 and the second type of input 120. According to the example of FIG. 1,
  • the input analysis module 208 may determine that the user swiped from the 'u' to the 'a' and then provided a downward flick motion to add an 'ng' to the 'ua' (e.g., thereby providing the compound Pinyin vowel 'uang' without having to tap or swipe over each individual key).
  • the input analysis module 208 may access the input classification(s) 220 in the settings 218 to determine instructions associated with the second type of input 120.
  • the input classification(s) 220 may be part of functionality activated by a user of the computing device 102.
  • an input classification 220 may be associated with a default instruction indicating that an upward flick on a key representing an 'a', an 'e' or an 'i' adds an 'n' while a downward flick on a key representing an 'a', an 'e' or an 'i' adds an 'ng'.
  • the input classifications 220 may be custom defined by the user of the computing device 102.
  • the user may define an input classification 220 to be associated with an instruction indicating that an upward flick on a key representing an 'a', an 'e' or an 'i' adds an 'ng' while a downward flick on a key representing an 'a', an 'e' or an 'i' adds an 'n'.
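The default and user-defined classifications above amount to a lookup from a (key, flick direction) pair to the letters that the flick appends. A minimal sketch, assuming a dictionary structure and function names that are illustrative rather than taken from the patent:

```python
# A minimal sketch of the input classification lookup described above; the
# dictionary structure and names are illustrative assumptions, not the
# patent's implementation.

DEFAULT_CLASSIFICATIONS = {
    # default instruction: upward flick on 'a', 'e' or 'i' adds 'n',
    # downward flick adds 'ng'
    ("a", "up"): "n", ("a", "down"): "ng",
    ("e", "up"): "n", ("e", "down"): "ng",
    ("i", "up"): "n", ("i", "down"): "ng",
}

# a user-defined classification may reverse the defaults
USER_DEFINED = {("a", "up"): "ng", ("a", "down"): "n"}

def apply_flick(input_string, key, direction,
                classifications=DEFAULT_CLASSIFICATIONS):
    """Append the letters mapped to (key, direction), if any."""
    suffix = classifications.get((key, direction), "")
    return input_string + suffix
```

For example, with the defaults a downward flick on the 'a' of 'ua' yields 'uang', while the user-defined table reverses the up/down mapping.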
  • the input analysis module 208 is configured to access input classification(s) 220 after determining input is the second type of input 120 (e.g., a flick motion) associated with a particular letter selected as part of the first type of input 118 (e.g., the 'a' in FIG. 1).
  • a computing device 102 may configure separate settings 218 for different users of the computing device 102 (e.g., two or more users). For example, the computing device 102 may maintain multiple user profiles, and each user profile may have its own settings 218.
  • the setting(s) 218 may also comprise layout configuration(s) 222 for keyboards (e.g., the keyboard 108).
  • a particular layout configuration 222 may be selected by the user of the computing device 102 and presented via the interaction device 106 so that it is easier to provide input.
  • a particular layout configuration 222 may swap keys on a standard keyboard layout (e.g., QWERTY keyboard) to minimize distances between two or more keys commonly selected next to each other (e.g., 'ia' or 'sh' in Pinyin).
  • a layout configuration 222 may swap the key representing the 'a' with the key representing the 'h' so the key representing the 'a' is closer to the keys representing the 'u', the 'i' and the 'o' and the key representing the 'h' is closer to the keys representing the 'z', the 'x' and the 'c'.
  • Other layout configurations are contemplated and/or may be implemented for a second user of the computing device 102 and/or for users of computing devices that may use other non-Latin based writing systems such as Arabic script or Cyrillic rather than Chinese characters, for example.
  • the candidate character identification module 210 is configured to determine a set of candidate characters associated with the input (e.g., characters associated with the first input 118). In some scenarios, the candidate character identification module 210 may reduce, based on the second type of input 120, a number of candidate characters from a first number to a second number. The character selection module 212 may then present the second number of candidate characters for selection and receive a selection of a candidate character (e.g., to add to a message). As an example, the input string "shishi" may initially be associated with more than forty candidate characters.
  • the candidate character identification module 210 may reduce the number of candidate characters associated with "shishi" from more than forty candidate characters to about six candidate characters. Therefore, a user of the computing device can efficiently make a selection of a candidate character (e.g., to be added to a message or entered as text).
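The narrowing step can be sketched as a filter over a candidate table keyed by the input string; the characters, tone numbers, and names below are illustrative stand-ins, not the patent's dictionary:

```python
# Hypothetical candidate table: input string -> (character, tone) pairs.
CANDIDATES = {
    "shi": [("是", 4), ("时", 2), ("十", 2), ("事", 4), ("师", 1)],
}

def narrow_candidates(input_string, tone):
    """Reduce the candidate set to the characters matching the added tone."""
    return [ch for ch, t in CANDIDATES.get(input_string, []) if t == tone]
```

A user adding a fourth-tone indication to "shi" would then be offered only the fourth-tone candidates instead of the full set.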
  • the candidate character identification module 210 may determine that the first input and the second input are associated with a lone candidate character, and therefore, the candidate character identification module 210 may automatically add the lone candidate character to the message independent of a selection.
  • the revision module 214 is configured to maintain information indicating that individual keys of a keyboard 108 (e.g., virtual or mechanical) have been selected to enter text. Then, in response to receiving an indication to correct the entered text, the revision module 214 may visually distinguish between the keys that have been selected to enter the text and a plurality of other keys presented via the keyboard 108 that have not been selected to enter the text (e.g., a "pressed" state or an "unpressed" state).
  • the input learning module 216 is configured to learn common movement characteristics of the input object 104 that may be associated with the first input and/or the second input (e.g., flick motion speed, swipe speed, flick motion direction, swipe direction, flick motion distance, swipe distance, etc.). Accordingly, the input analysis module 208 may better evaluate the input and determine that the input is associated with a particular instruction (e.g., to add one or more letters to an input string or to add a pronunciation indication to an input string). To help with the evaluation and the classification of the first input and/or the second input, the input learning module 216 may employ one or more of a variety of different machine learning algorithms such as neural networks, decision trees, support vector machines, and so forth.
  • the input learning module 216 maintains a temporary historical window of user input actions and their outcomes: positive if the actions resulted in input finalization, or negative if the actions were later reversed as part of error correction.
  • the outcomes may be considered as classification labels, and for each one, a corresponding feature vector may be constructed whose components contain related user actions in normalized form.
  • the feature vectors may be aggregated into a two-dimensional training data matrix where each row represents one sample, and the labels may be aggregated into a one-dimensional labels matrix where each row represents the expected class, with a 1:1 row-level correspondence.
  • a background training thread may be launched if it does not already exist, and the historical window may be flushed.
  • the constructed matrices may be used to train an incremental support vector machine with a linear kernel.
  • the output is an updated model used within the input analysis module 208 to evaluate, through statistical prediction, whether new input is likely to have a positive outcome when deciding whether or not to make an association with a particular action or instruction.
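The training loop above can be sketched as an incrementally trained linear SVM. The following uses plain hinge-loss SGD as a stand-in for whatever solver the system actually employs; the class name, hyperparameters, and label convention are assumptions:

```python
import numpy as np

class IncrementalLinearSVM:
    """Minimal hinge-loss SGD linear SVM; a stand-in sketch for the
    incremental support vector machine with a linear kernel described above."""

    def __init__(self, n_features, lr=0.1, reg=0.01):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr
        self.reg = reg

    def partial_fit(self, X, y):
        # y in {-1, +1}: +1 if the actions led to input finalization,
        # -1 if they were later reversed during error correction.
        for xi, yi in zip(X, y):
            if yi * (xi @ self.w + self.b) < 1:   # hinge-loss violation
                self.w += self.lr * (yi * xi - self.reg * self.w)
                self.b += self.lr * yi
            else:
                self.w -= self.lr * self.reg * self.w

    def predict(self, X):
        # statistical prediction of the outcome class for new input
        return np.where(np.asarray(X) @ self.w + self.b >= 0, 1, -1)
```

Each flush of the historical window would call `partial_fit` on the newly constructed training matrix, updating the model used by the input analysis module.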
  • FIG. 3 illustrates example diagrams (300(A), 300(B), 300(C)) of the first type of input 118.
  • the first type of input 118 may comprise direct input that includes a user selection of keys representing letters.
  • the user selection of a key 302 involves a direct or an actual selection such that the input object 104 physically contacts a location of the key 302 on the user interface 110 or is within a defined proximity of the location of the key 302.
  • the size of the key 302 with respect to the input object 104 may not be drawn to scale (e.g., on a smart phone device the size of the key 302 may typically be smaller compared to the size shown in FIG. 3).
  • diagram 300(A) which shows a top view of an actual user selection
  • the input object 104 may physically contact the key 302 (e.g., an individual key tap, as part of a multi-key swipe, etc.).
  • the diagram 300(B) shows a side view of a direct or an actual user selection where the input object 104 physically contacts the key 302.
  • the diagram 300(C) shows a side view of a direct or an actual user selection of the key 302 where the input object 104 is not physically contacting the key 302, but is within a defined proximity to indicate selection or activation of the key 302.
  • the defined proximity may indicate that the input object 104 must be positioned within three-dimensional space over or above the key 302.
  • the input object 104 may have to be within a threshold distance 304 (e.g., five millimeters, one centimeter, two centimeters, etc.) of the surface within which the key 302 is presented.
  • the threshold distance 304 may be a setting 218.
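The proximity test reduces to a simple check, sketched below with an illustrative default threshold; in practice the value would come from the settings 218, and the function name is an assumption:

```python
def key_selected(physical_contact: bool, hover_distance_mm: float,
                 threshold_mm: float = 10.0) -> bool:
    """A key counts as selected on physical contact, or when the input
    object hovers within the threshold distance 304 above the key."""
    return physical_contact or hover_distance_mm <= threshold_mm
```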
  • FIG. 4 illustrates example diagrams (400(A) and 400(B)) of the second type of input 120.
  • the second type of input 120 comprises motion by the input object 104 that may instruct or command the computing device 102 to change or modify the first type of input 118 (e.g., the text entered via the first input).
  • the second type of input 120 comprises indirect input in which a user does not directly select the displayed keys.
  • the instruction or command associated with the indirect input may add one or more letters (e.g., add a suffix) to the letter represented by the key 302 without the input object 104 having to actually select (e.g., physically contact) keys representing the one or more additional letters.
  • the second type of input 120 may also instruct or command the computing device 102 to add a pronunciation indication to the letter represented by the key 302 or to a string of letters of which the letter represented by the key 302 is a part.
  • the second type of input 120 may comprise a flick motion that may include a speed characteristic and a directional characteristic (e.g., in two-dimensional space or in three dimensional space).
  • the input analysis module 208 may distinguish between the first type of input 118 and the second type of input 120 and then classify the input.
  • the second type of input 120 may comprise a swipe motion.
  • a user may swipe up from the 'e', the 'i' or the 'o' to add an 'ng' since there are no letter keys above (e.g., the 'e', the 'i' or the 'o' are included in the top row of letters). Or, the user may swipe down from the 'z' or the 'c' to add an 'h' since there are no letter keys below (the 'z' and the 'c' are included in the bottom row of letters). As a result, a swipe up from the 'e', the 'i' or the 'o' may not be confused with direct input (e.g., a swipe to another letter).
  • the second type of input may comprise a gesture motion.
  • a gesture motion may comprise movement of the input object 104, with respect to a key 302, in multiple different directions (e.g., a circle gesture motion, a right angle gesture motion, etc.).
  • the illustration in 400(A) shows a top view of the input object 104 moving from a first position to a second position.
  • the illustration in 400(B) shows a side view of the input object 104 moving from the first position to the second position.
  • the first position may be associated with a user selection of the key 302 discussed above with respect to FIG. 3 (e.g., the first type of input 118).
  • the movement from the first position to the second position may be associated with the second type of input 120.
  • the input analysis module 208 may determine the motion is a flick motion, and therefore, the motion is the second type of input 120 because (i) the motion breaks contact between the surface of the display and the input object 104 or at least increases a distance 402 between the surface of the display and the input object 104, and (ii) the movement speed of the input object 104 meets or exceeds a flick threshold speed in a particular direction 404 from the key 302 (e.g., from a top-down view the particular direction may be up, down, left, right, an up-left diagonal, an up-right diagonal, a down-left diagonal, a down-right diagonal, etc.).
  • the second type of input 120 may include faster movement of the input object 104 compared to the first type of input 118.
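One way to implement the flick-versus-direct distinction is to compare the motion's speed against a threshold and snap its direction to the eight compass directions named above. The threshold value and function names below are illustrative assumptions, not the patent's parameters:

```python
import math

# Illustrative flick threshold speed; a real system would tune or learn this.
FLICK_THRESHOLD_SPEED = 300.0  # pixels per second

def classify_motion(dx, dy, dt):
    """Return ('flick', direction) when the motion meets the flick threshold
    speed, snapping the direction to one of eight compass directions;
    otherwise treat the motion as direct input (e.g., a tap or swipe)."""
    speed = math.hypot(dx, dy) / dt
    if speed < FLICK_THRESHOLD_SPEED:
        return ("direct", None)
    angle = math.degrees(math.atan2(-dy, dx)) % 360  # screen y grows downward
    directions = ["right", "up-right", "up", "up-left",
                  "left", "down-left", "down", "down-right"]
    return ("flick", directions[int((angle + 22.5) // 45) % 8])
```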
  • FIG. 5 illustrates an example user interface 500 of a computing device 102 that receives and processes the first type of input 118 and the second type of input 120 to determine that the user is instructing the computing device 102 to add text to text already entered and displayed.
  • the first type of input 118 and the second type of input 120 may be provided to the computing device 102 by a user to enter a compound Pinyin vowel.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'e'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the upward direction (as represented by element 502), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'n' to the 'e' to produce the Pinyin compound vowel 'en'.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'e'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the downward direction (as represented by element 504), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'ng' to the 'e' to produce the Pinyin compound vowel 'eng'.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'e'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the rightward direction (as represented by element 506), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'r' to the 'e' to produce the Pinyin compound vowel 'er'.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the keys representing the letters 'i' and 'a'.
  • the first input may comprise a swipe from the key representing the letter 'i' to the key representing the letter 'a' (as represented by element 508).
  • the user may employ the input object 104 to provide second input comprising a flick motion in the upward direction (as represented by element 510), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'n' to produce the Pinyin compound vowel 'ian'.
  • the user may employ the input object 104 to provide second input comprising a flick motion in the downward direction (as represented by element 512), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'ng' to produce the Pinyin compound vowel 'iang'.
  • a combination of the first input 118 (e.g., the physical selection of the keys representing the letters 'e' or 'ia') and the second input 120 (e.g., a flick motion associated with a key representing the letter 'e' or the letter 'a') may be used to provide an input string without the user having to select each key/letter in the input string (e.g., the 'i', the 'a', the 'n', the 'g' to enter 'iang').
  • the computing device 102 may determine which letters to add based on a direction of the flick motion.
  • the correspondence or mapping between a particular direction of a flick motion and the one or more letters to add may be arbitrary (e.g., may be user defined and/or user modifiable).
  • the combination of the first input 118and the second input 120 may be associated with a particular sound or syllable in a phonetic system (e.g., a compound Pinyin vowel).
  • the first input and the second input may be utilized to input other strings of text, such as the Pinyin compound vowels 'an', 'ang', 'in', 'ing', 'iong', 'ong', 'un' and so forth.
  • the first and second input described above may also be used in association with other commonly used multi-letter text strings or phrases in other phonetic and/or writing systems as well (e.g. other languages).
  • a flick motion can add common suffixes in English, such that a first flick direction initiated on a particular option adds an 'ing' (e.g., "patenting" or "describing") and a second flick direction initiated on the particular option adds an 'ed' (e.g., "patented" or "described").
  • a user can define particular text (e.g., one or more letters) to add to text already entered based on a flick motion.
  • the first input may be utilized to input a word or phrase and the second input may be used to change the tense of the word or the phrase.
  • the user may enter, e.g., via first input, the word "take” (e.g., present tense).
  • the user may flick up if the user wants to change the word to a past tense (e.g., "took”).
  • the user may flick down to change the word to a progressive tense (e.g., "taking”).
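The English suffix and tense-change behaviors above can be sketched with one lookup: a flick either appends a suffix or, for a word with a defined irregular form, swaps the whole word. The tables are hypothetical stand-ins:

```python
# Hypothetical English mappings: one flick direction appends a suffix, while
# verbs listed in TENSES swap to a different tense instead.
SUFFIXES = {"up": "ing", "down": "ed"}
TENSES = {"take": {"up": "took", "down": "taking"}}

def apply_english_flick(word, direction):
    if word in TENSES and direction in TENSES[word]:
        return TENSES[word][direction]   # irregular verb: swap the whole form
    return word + SUFFIXES.get(direction, "")
```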
  • FIG. 6 illustrates another example user interface 600 of a computing device 102.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 's'. Then the user may employ the input object 104 to provide second input comprising a flick motion in the rightward direction (as represented by element 602), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'h' to the 's' to produce the compound Pinyin consonant 'sh' in the input string.
  • a direction of the flick motion may be associated with a rule.
  • the rule may define that the flick motion may have to be directed towards the letter to be added, as shown by element 602 where the flick motion is directed from the 's' to the 'h'.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'z'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the downward direction (as represented by element 604), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'h' to the 'z' to produce the compound Pinyin consonant 'zh' in the input string.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'c'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the upward direction (as represented by element 606), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, an 'h' to the 'c' to produce the compound Pinyin consonant 'ch' in the input string.
  • a flick motion in any one of multiple directions may add the same letter. For instance, if the flick motion is associated with the letter 'a', then the computing device 102 may be configured to determine a direction of the flick motion to distinguish between adding an 'n' or an 'ng'. In contrast, a flick motion in any direction that is associated with one of the letters 's', 'z' or 'c' may be configured to add an 'h' (to produce a compound Pinyin consonant). Thus, the user may flick in any direction to add an 'h' to each of the letters 's', 'z' or 'c'.
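The direction-sensitive and direction-insensitive cases above can share one lookup table by using a wildcard entry; the table structure and names are assumptions for illustration:

```python
ANY = "*"  # wildcard entry matching a flick in any direction

FLICK_SUFFIXES = {
    ("a", "up"): "n", ("a", "down"): "ng",   # direction decides 'n' vs 'ng'
    ("s", ANY): "h", ("z", ANY): "h", ("c", ANY): "h",  # any direction adds 'h'
}

def flick_suffix(key, direction):
    """Look up an exact (key, direction) entry first, then a wildcard one."""
    return FLICK_SUFFIXES.get((key, direction),
                              FLICK_SUFFIXES.get((key, ANY), ""))
```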
  • first input and second input may comprise a stroke (e.g., a single stroke).
  • a first stroke may input 'sh' (represented by element 602 in FIG. 6) and then a second stroke, after the first stroke, may input 'uang' (as discussed with respect to FIG. 1). Therefore, individual strokes may provide a natural separation between a consonant (e.g., 'sh') and a vowel ('uang') to input a larger text string (e.g., 'shuang').
  • the second type of input 120 may be defined by the user to input frequently used phrases. For example, a user may define that a selection of a key representing the letter 'w' (e.g., the first type of input 118) followed by an upward flick motion (e.g., the second type of input 120) indicates the input of "women" (我们) and the selection of a key representing the letter 'w' (e.g., the first type of input 118) followed by a downward flick motion (e.g., the second type of input 120) indicates the input of 'wan shang chi shen
  • a user may define that a selection of a key representing the letter 'n' (e.g., the first type of input 118) followed by an upward flick motion (e.g., the second type of input 120) indicates the input of "nihao" (你好) and the selection of a key representing the letter 'n' (e.g., the first type of
  • FIG. 7 illustrates an example user interface 700 of a computing device 102 that receives and processes the first type of input 118 and the second type of input 120 to determine that the user is instructing the computing device 102 to add a pronunciation indication.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'a'.
  • the user may employ the input object 104 to enter second input comprising a flick motion in the rightward direction (as represented by element 702), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, a pronunciation indication (e.g., a [-] or a "macron" that represents a flat or high level tone in Pinyin) to produce an 'ā' in the input string.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'a'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the upward/rightward direction (as represented by element 704), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, a pronunciation indication (e.g., a [ ' ] or an "acute accent" that represents a rising or a high-rising tone in Pinyin) to produce an 'á' in the input string.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'a'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the upward/leftward direction (as represented by element 706), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, a pronunciation indication (e.g., a [ ` ] or a "grave accent" that represents a falling or a high-falling tone in Pinyin) to produce an 'à' in the input string.
  • a user may employ the input object 104 to provide, and correspondingly the computing device 102 therefore receives, first input that selects the key representing the letter 'a'. Then the user may employ the input object 104 to enter second input comprising a flick motion in the downward direction (as represented by element 708), which the computing device 102 receives as an instruction to add, e.g., based on an input classification 220, a pronunciation indication (e.g., a [ ˇ ] or a "caron" that represents a falling-rising or a low tone in Pinyin) to produce an 'ǎ' in the input string.
  • the direction of the flick motion may intuitively correspond to a direction of the pronunciation indication as it appears in relation to a letter. This may make it easier for a user of the computing device to remember and/or adopt the input mechanisms discussed herein.
  • the [-] or the macron is visually flat and therefore may be associated with a flat direction (e.g., leftward flick or rightward flick).
  • the [ ' ] or the acute accent visually appears to go upward and rightward and therefore may be associated with an upward/rightward flick motion (e.g., as represented by element 704).
  • the [ ` ] or the grave accent visually appears to go upward and leftward and therefore may be associated with an upward/leftward flick motion (e.g., as represented by element 706).
  • the [ ˇ ] or the caron visually appears to point downward and therefore may be associated with the downward flick motion (e.g., as represented by element 708).
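The direction-to-diacritic mapping above can be sketched with Unicode combining marks; the direction names are assumptions, and NFC normalization composes the mark and vowel into the precomposed toned letter:

```python
import unicodedata

# Flick direction -> combining tone mark, mirroring the visual shape of each
# Pinyin tone diacritic (macron flat, acute up-right, grave up-left, caron down).
TONE_MARKS = {
    "right": "\u0304",     # combining macron: flat/high level tone
    "up-right": "\u0301",  # combining acute accent: rising tone
    "up-left": "\u0300",   # combining grave accent: falling tone
    "down": "\u030c",      # combining caron: falling-rising tone
}

def add_tone(vowel: str, direction: str) -> str:
    """Attach the tone mark for the flick direction and compose via NFC."""
    mark = TONE_MARKS.get(direction)
    return unicodedata.normalize("NFC", vowel + mark) if mark else vowel
```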
  • the first input and the second input may be utilized to input other commonly used pronunciation indications in other phonetic and/or writing systems as well (e.g. other languages).
  • a flick motion can add a [ ~ ] or tilde, as used in Spanish.
  • the input analysis module 208 may be configured to evaluate the second type of input 120 on a base vowel (e.g., 'a', 'e', 'i', 'o', 'u') as one of an instruction to add text (e.g., add an 'n' or an 'ng') or an instruction to add a pronunciation indication.
  • the input analysis module 208 may be configured to evaluate two instances of the second type of input 120 in association with a selection of a key associated with a base vowel (e.g., 'a', 'e', 'i', 'o', 'u').
  • the input analysis module 208 may be configured to add letters (e.g., add an 'ng' to an 'a') in accordance with a first detected flick motion and then add a pronunciation indication (e.g., add an [-] or a macron to the 'a') in accordance with a second detected flick motion that occurs after the first detected flick motion (e.g., within a threshold period of time).
  • the input analysis module 208 may be configured to add a pronunciation indication (e.g., add an [-] or a macron to the 'a') in accordance with a first detected flick motion and then add letters (e.g., add an 'ng' to an 'a') in accordance with a second detected flick motion that occurs after the first detected flick motion (e.g., within the threshold period of time).
  • the second type of input may be applied to a key/letter different than the key/letter on which the second type of input was initiated (e.g., a letter selected before or after the letter with which the second type of input is associated).
  • the input analysis module 208 may intelligently determine the letter in an input string to which to add a pronunciation indication (e.g., a tone).
  • the second type of input may add a tone to a candidate character to reduce a number of candidate characters presented for selection.
  • FIG. 8 illustrates an example user interface 800 of computing devices 102 that receive and process shorthand versions of the first type of input 118.
  • Shorthand versions of the first type of input 118 comprise scaled-down swipe distances, or shortened swipe distances (in the direction of a particular key/letter).
  • regular input to enter an input string of "xiang" may include first swiping from the key representing the 'x' to the key representing the 'i' (e.g., first input as represented by element 802) and then swiping from the key representing the 'i' to the key representing the 'a' (as represented by element 804). Then, a flick motion (e.g., second input as represented by element 806) may instruct the computing device 102 to add an 'ng' to the 'xia' to produce the input string "xiang".
  • the shorthand version of the regular input is illustrated by elements 808, 810, and 812. That is, the motion of the input object 104 represented by element 808 is the shorthand version of the motion represented by element 802, the motion of the input object 104 represented by element 810 is the shorthand version of the motion represented by element 804, and the motion of the input object 104 represented by element 812 corresponds to the flick motion represented by element 806 (e.g., the characteristics of the flick motion, other than the location, may remain the same).
  • a user may not be required to move the input object 104 the complete keyboard "layout" distance from the 'x' to the 'i' or to move the input object 104 the complete "layout" distance from the 'i' to the 'a'. Rather, the user may swipe in the direction of the intended input key without moving the input object 104 the complete layout distance.
  • the flick motion (e.g., as represented by element 812) may indicate an end to the shorthand swipes (represented by elements 808 and 810).
  • the computing device 102 may compare the shorthand version to different input classification(s) to determine that the shorthand version is a scaled-down version of particular regular input associated with the entry of text (e.g., "xiang").
  • the comparison may comprise determining that a direction characteristic of the individual swipes (e.g., represented by elements 808 and 810) is within a direction characteristic threshold and that a length characteristic of the individual swipes is within a length characteristic threshold before identifying the shorthand input as being associated with the particular regular input.
  • the input learning module 216 is configured to learn the shorthand versions of the regular input so that the input analysis module 208 can intelligently associate the shorthand version with the correct regular input.
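The direction-and-length comparison described above can be sketched as follows. The angular tolerance and minimum length fraction are illustrative assumptions; the disclosure only says the characteristics are compared against thresholds:

```python
import math

# Sketch: a shorthand swipe matches a stored "regular" swipe when its
# direction is within an angular threshold of the regular swipe's direction
# and its length is at least some fraction of the full layout distance.

DIRECTION_THRESHOLD_DEG = 20.0   # assumed angular tolerance
MIN_LENGTH_FRACTION = 0.25       # assumed minimum scaled-down length

def matches_regular_swipe(swipe, regular):
    """Each argument is ((x0, y0), (x1, y1)) in keyboard coordinates."""
    def vec(seg):
        (x0, y0), (x1, y1) = seg
        return x1 - x0, y1 - y0

    sx, sy = vec(swipe)
    rx, ry = vec(regular)
    s_len = math.hypot(sx, sy)
    r_len = math.hypot(rx, ry)
    if s_len == 0 or r_len == 0:
        return False

    # Angle between the shorthand direction and the regular direction.
    cos_a = max(-1.0, min(1.0, (sx * rx + sy * ry) / (s_len * r_len)))
    angle = math.degrees(math.acos(cos_a))
    return angle <= DIRECTION_THRESHOLD_DEG and s_len >= MIN_LENGTH_FRACTION * r_len

# A short swipe toward the same key still matches the regular 'x'-to-'i' input.
regular_x_to_i = ((0.0, 0.0), (10.0, 0.0))
print(matches_regular_swipe(((0.0, 0.0), (3.0, 0.5)), regular_x_to_i))  # True
```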
  • FIG. 9 illustrates an example user interface 900 of a computing device 102 that presents a reconfigured keyboard or a reconfigured layout of options.
  • the reconfigured keyboard may be a user-defined or automatic/default layout configuration 222 in the setting(s) 218 that minimizes the distance between keys the user often selects.
  • a user that commonly provides input in Pinyin may often select a key representing an 'h' directly after selecting a key representing a 'z', an 's' or an 'c'.
  • the user may often select a key representing an 'a' directly before or after selecting a key representing the 'u', the 'i' or the 'o'.
  • the reconfigured keyboard may swap, or switch, the key representing the 'a' and the key representing the 'h' (e.g., the shaded options) from a common QWERTY keyboard configuration to minimize the distances between keys typically selected in close proximity to one another (e.g., directly before or directly after). Consequently, the key representing the 'h' is closer to the key representing the 'z', the key representing the 's' and the key representing the 'c'. Moreover, the key representing the 'a' is closer to the key representing the 'u', the key representing the 'i' and the key representing the 'o'.
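The idea behind the reconfigured layouts of FIGS. 9 and 10 can be checked with a small sketch: swapping two keys should shorten the total travel distance for the frequently adjacent Pinyin key pairs. The key coordinates and pair list below are illustrative assumptions, not data from the disclosure:

```python
import math

# Approximate QWERTY key centers as (column, row); rows are offset slightly.
QWERTY = {
    'q': (0.0, 0), 'w': (1.0, 0), 'e': (2.0, 0), 'r': (3.0, 0), 't': (4.0, 0),
    'y': (5.0, 0), 'u': (6.0, 0), 'i': (7.0, 0), 'o': (8.0, 0), 'p': (9.0, 0),
    'a': (0.5, 1), 's': (1.5, 1), 'd': (2.5, 1), 'f': (3.5, 1), 'g': (4.5, 1),
    'h': (5.5, 1), 'j': (6.5, 1), 'k': (7.5, 1), 'l': (8.5, 1),
    'z': (1.0, 2), 'x': (2.0, 2), 'c': (3.0, 2), 'v': (4.0, 2), 'b': (5.0, 2),
    'n': (6.0, 2), 'm': (7.0, 2),
}

# Pinyin pairs often typed in sequence: 'h' after 'z'/'s'/'c', 'a' near 'u'/'i'/'o'.
FREQUENT_PAIRS = [('z', 'h'), ('s', 'h'), ('c', 'h'), ('u', 'a'), ('i', 'a'), ('o', 'a')]

def total_travel(layout, pairs):
    return sum(math.dist(layout[a], layout[b]) for a, b in pairs)

def swap(layout, k1, k2):
    new = dict(layout)
    new[k1], new[k2] = layout[k2], layout[k1]
    return new

# Swapping 'a' and 'h' (as in FIG. 9) shortens the total travel distance.
reconfigured = swap(QWERTY, 'a', 'h')
print(total_travel(reconfigured, FREQUENT_PAIRS) < total_travel(QWERTY, FREQUENT_PAIRS))  # True
```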
  • FIG. 10 illustrates another example user interface 1000 of a computing device 102 that presents a reconfigured keyboard.
  • the key representing the letter 'h' and the key representing the letter 'a' are swapped (e.g., from a QWERTY keyboard perspective). Further, the key representing the letter 'x' and the key representing the letter 'c' are swapped.
  • the key representing the 'h' is closer to the key representing the 'z', the key representing the 's' and the key representing the 'c'.
  • the key representing the 'a' is closer to the key representing the 'u', the key representing the 'i' and the key representing the 'o'.
  • the techniques and/or systems described herein may also be applicable to other common user interface or keyboard configurations.
  • the first type of input 118 (e.g., the swiping) and the second type of input 120 (e.g., the flick motion) may be implemented in accordance with a standard numeric key layout used to provide input since the base vowels are individually located on different numerical keys (e.g., the 'a' is on the '2', the 'e' is on the '3', the 'i' is on the '4', the 'o' is on the '6', and the 'u' is on the '8').
  • the flick motions may be used to efficiently add an 'n' or an 'ng' to select base vowels (e.g., the user may flick upwards from the '2' key to add an 'n' to the 'a' or flick downwards from the '2' key to add an 'ng' to the 'a').
  • FIGS. 11, 12 and/or 13 illustrate example processes depicted as logical flow graphs, which represent a sequence of operations that can be implemented in hardware, software, firmware, or a combination thereof.
  • the operations represent computer-executable instructions that, when executed by one or more processors, configure a computing device to perform the recited operations.
  • computer- executable instructions include routines, programs, objects, components, data structures, and the like that configure a computing device to perform particular functions or implement particular abstract data types.
  • any or all of the operations may be implemented in whole or in part by hardware (e.g., as an ASIC, a specialized processing unit, etc.) to execute the described functions.
  • the functions and/or modules are implemented as part of an operating system.
  • the functions and/or modules are implemented as part of a device driver (e.g., a driver for a touch surface), firmware, and so on.
  • FIG. 11 illustrates an example process 1100 that changes text entered via the first type of input 118 based on the second type of input 120.
  • the input analysis module 208 receives first input that selects one or more keys that represent text (e.g., one or more letters). As discussed above, the input analysis module 208 may receive signals representing the first input from the interaction device 106. Accordingly, the interaction device 106 is configured to sense the input provided by a user via the input object 104 and then generate the signals to provide to the input analysis module 208.
  • [0101] At operation 1104, the input analysis module 208 receives second input after the first input that instructs the computing device 102 to change the text or letters entered in association with the first input. Again, the input analysis module 208 may receive signals representing the second input from the interaction device 106.
  • the input analysis module 208 determines, based on an analysis of the second input, the change to the text to produce changed text.
  • the second input is a flick motion that instructs the computing device 102 to add additional text (e.g., a suffix, 'n', 'ng', 'ed', 'ing', etc.) to the text entered based on the first input (e.g., 'a', 'e', 'i', 'ia', 'io', 'patent', 'describ', etc.).
  • the user may provide the second input independent of selecting additional keys representing the additional text.
  • the user does not have to actually select keys representing letters of the additional text, thereby speeding up the process of providing input and, correspondingly, receiving and processing the input.
  • the second input is a flick motion that instructs the computing device 102 to add a pronunciation indication to the text or letters entered based on the first input.
  • the computing device 102 may present the changed text.
  • the computing device 102 may display the changed text so a user of the computing device 102 can view the input string in the input string presentation area 112.
  • the example process 1100 may be performed for individual input strings that may comprise part of a word or phrase or a whole word or phrase. Moreover, the example process 1100 may be performed for individual input strings that may comprise part of a message or a whole message to be communicated.
  • FIG. 12 illustrates an example process 1200 that reduces a number of candidate characters associated with text entered via the first type of input 118 based on the second type of input 120.
  • the input analysis module 208 receives first input that selects one or more keys representing one or more letters.
  • the input analysis module 208 may receive signals indicating that a user has swiped from the 'i' to the 'a' (e.g., as represented by element 508 in FIG. 5).
  • the candidate character identification module 210 identifies a number (e.g., one or more) of candidate characters associated with the one or more letters represented by the keys selected.
  • the identified candidate characters may be a representation or translation of the one or more letters (e.g., represent the pronunciation of 'ia' in another language).
  • the identified candidate characters may be a representation or translation of the one or more letters (e.g., the 'ia') and previous input that had already been entered by a user and received by the computing device 102 (e.g., an 'x' input before the 'ia').
  • the input analysis module 208 receives second input that is associated with a last selection of a key representing a last letter of the one or more letters selected in operation 1202.
  • the first input may select a single key representing a single letter (e.g., an individual tap of one of 'a', 'e', 'i', 'o', 'u'), and therefore, the last letter is the single letter selected.
  • the first input may select multiple keys representing multiple letters (e.g., a swipe from 'i' to 'a'), and therefore, the last letter is the 'a'.
  • a determination of the second input indicates an end to, or completion of, the first input.
  • the input analysis module 208 may access input classification(s) 220 for the last letter. Accordingly, in some implementations the input classification(s) 220 may be organized or arranged according to letters because second input may be associated with different instructions for individual letters. For example, a flick motion in a first direction on an 'i' may add an 'n' to produce the base Pinyin vowel 'in' and a flick motion in a second direction on an 'i' may add an 'ng' to produce the base Pinyin vowel 'ing'.
  • a flick motion in the same first direction on a 'u' may also add an 'n' to produce the base Pinyin vowel 'un', but a flick motion in the same second direction on 'u' may add a [¨] to produce the base Pinyin vowel 'ü' instead of adding an 'ng' since there is no base Pinyin vowel of 'ung'.
  • mapping the second input to an instruction may depend on a specific letter selected (e.g., last letter).
  • the second input may be universally mapped to an instruction for all letters.
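The per-letter organization of the input classification(s) 220 can be sketched as a lookup table. The direction labels ('first', 'second') and the fallback behavior are assumptions for illustration; the 'i' and 'u' mappings follow the examples in the two bullets above:

```python
# Sketch: the same flick direction maps to different instructions depending
# on the last letter selected, because some combinations (e.g., 'ung') do
# not exist as base Pinyin vowels.

INPUT_CLASSIFICATIONS = {
    'i': {'first': 'in', 'second': 'ing'},  # flick adds 'n' or 'ng' to 'i'
    'u': {'first': 'un', 'second': 'ü'},    # no 'ung' in Pinyin, so 'ü' instead
}

def classify_flick(last_letter, direction):
    # Fall back to the unchanged letter when no mapping exists.
    return INPUT_CLASSIFICATIONS.get(last_letter, {}).get(direction, last_letter)

print(classify_flick('i', 'second'))  # 'ing'
print(classify_flick('u', 'second'))  # 'ü'
```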
  • the candidate character identification module 210 reduces the number of candidate characters from a first number to a second number based on the second input. For example, if the second input instructs the computing device 102 to add text to the text entered in association with the first input, then the candidate character identification module 210 may reduce the number of candidate characters based on the first input text and the additional second input text. In another example, if the second input instructs the computing device 102 to add a pronunciation indication, then the candidate character identification module 210 may reduce the number of candidate characters based on the pronunciation indication (e.g., by eliminating other pronunciations).
  • [0111] At operation 1210, the candidate character identification module 210 presents the candidate characters for selection. For example, the candidate character identification module 210 may present the candidate characters (e.g., the second number) in the candidate character presentation area 114 in the user interface 110 of the computing device 102, as seen in FIG. 1.
  • the character selection module 212 receives a selection of a candidate character (e.g., a user selection). In response to receiving the selection, the computing device 102 may add the selected character to a message or other device function.
  • the example process 1200 may be implemented in association with a selection of individual characters or multiple characters.
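The reduction step of process 1200 can be sketched as a filter over candidate metadata. The tiny candidate table below is illustrative only (a real module would draw on a full dictionary):

```python
# Sketch of operation 1208: once a flick adds a tone to the input string,
# the candidate set shrinks from a first number to a smaller second number.

CANDIDATES = [
    {'char': '想', 'pinyin': 'xiang', 'tone': 3},
    {'char': '香', 'pinyin': 'xiang', 'tone': 1},
    {'char': '向', 'pinyin': 'xiang', 'tone': 4},
    {'char': '相', 'pinyin': 'xiang', 'tone': 1},
]

def reduce_by_tone(candidates, tone):
    return [c for c in candidates if c['tone'] == tone]

first = [c for c in CANDIDATES if c['pinyin'] == 'xiang']   # first number
second = reduce_by_tone(first, 1)                           # second number
print(len(first), len(second))  # 4 2
```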
  • FIG. 13 illustrates an example process 1300 that associates a change instructed by the second type of input 120 with a selection of a key representing a letter.
  • the input analysis module 208 identifies a letter associated with second input (e.g., as discussed above with respect to operation 1206).
  • the letter identified may be an individual tap of a single letter (e.g., 'a', 'e', 'i', 'o', 'u', 'z', 's', 'c', etc.) or the identified letter may be a last letter of a swipe (e.g., the 'a' after a user swipes from 'i' to 'a').
  • the input analysis module 208 may determine that input is the second type of input 120 (e.g., a flick motion) at least because the speed of movement of the input object 104 may meet or exceed a threshold speed and/or the movement of the input object 104 may increase distance between a display surface that presents a keyboard 108 and the input object 104.
  • the input analysis module 208 determines a direction of the second input.
  • the input analysis module 208 accesses an input classification 220 to determine a change to the text associated with the first input based on the direction of the second input.
  • the second input may instruct the computing device to add an 'n' to an 'i' in association with a first direction and to add an 'ng' to an 'i' in association with a second direction.
  • the second input may instruct the computing device 102 to add a [-] or a macron to an 'a' in association with a first direction and to add a [`] or a grave accent to an 'a' in association with a second direction.
  • the candidate character identification module 210 reduces a number of candidate characters from a first number to a second number based on the change determined in operation 1306.
  • the computing device 102 may also be configured to implement efficient error correction or text revision mechanisms (e.g., for text written in any language or based on any phonetic system).
  • the revision module 214 may be configured to maintain information indicating that individual keys of a plurality of keys presented via a keyboard 108, e.g., on a user interface 110, have been selected to enter text.
  • the revision module 214 may visually distinguish, on the keyboard 108, between the keys that have been selected to enter the text and a plurality of other keys presented via the keyboard 108 that have not been selected to enter the text (e.g., a "pressed" state or an "unpressed" state). As a result, a user may quickly identify and select a letter in the text for which a key was already selected to correct or revise the text without having to delete a large portion of text already entered.
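The pressed/unpressed bookkeeping described above can be sketched with a small state tracker. The class and method names are assumptions for illustration:

```python
# Sketch: record which keys were selected while entering text, then report
# which keys should be visually distinguished (pressed vs. unpressed state).

class KeyStateTracker:
    def __init__(self, all_keys):
        self.all_keys = set(all_keys)
        self.pressed = set()

    def record(self, key):
        self.pressed.add(key)

    def pressed_keys(self):
        return sorted(self.pressed)

    def unpressed_keys(self):
        return sorted(self.all_keys - self.pressed)

    def clear(self):
        # E.g., after a message is sent or a candidate character is chosen.
        self.pressed.clear()

tracker = KeyStateTracker('abcdefghijklmnopqrstuvwxyz')
for key in 'guanh':
    tracker.record(key)
print(tracker.pressed_keys())  # ['a', 'g', 'h', 'n', 'u']
```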
  • FIG. 14 illustrates an example user interface 1400 of a computing device 102 that maintains information related to a pressed or unpressed state for individual keys of a keyboard 108 and visually distinguishes between keys based on the maintained state information.
  • a user may employ the input object 104 to select, in order, keys corresponding to 'g', 'u', 'a', 'n' and 'h' to enter the text "guanh" (e.g., which is incorrect in this example).
  • the entered text may be displayed via the user interface 110 (e.g., in the input string presentation area 112) of the computing device 102.
  • the user may realize that the entered text is incorrect and/or needs to be revised so that it correctly recites "guang” (e.g., replace the 'h' with a 'g').
  • the revision module 214 maintains one of a pressed state or an unpressed state for individual ones of the keys presented via the keyboard 108.
  • the revision module 214 maintains information indicating that each of 'g', 'u', 'a', 'n', and 'h' have been pressed, and therefore the revision module 214 retains a pressed state for the keys representing each of 'g', 'u', 'a', 'n', and 'h'.
  • the revision module 214 maintains information indicating that the other keys (e.g., representing the 'q', 'w', 'e', 'r', 't', 'y', 'o', 'p' and so forth) have not been pressed, and therefore the revision module 214 retains an unpressed state for these other keys.
  • Consequently, the user may not have to delete (e.g., press the backspace key multiple times) additional text (e.g., five letters, ten letters, etc.) already entered in order to make the correction or revision.
  • the selection of keys and/or the tracking and maintenance of the selected keys is part of an input mode where the user is providing input that generates the input string.
  • the user may discontinue the input mode in response to receiving an indication to switch from the input mode to a revision mode. For example, the user may touch or contact an area of a text string that roughly indicates a revision position. Then, the system may automatically expand a selection window that covers keys nearby (e.g., the keys representing the 'g', 'u', 'a', 'n' and 'h').
  • the revision module 214 is configured to switch from the input mode to a revision mode to revise or correct at least a portion of the text displayed in the input string presentation area 112.
  • the text displayed in the input string presentation area 112 may be part of a message to be communicated or a whole message to be communicated.
  • the indication to switch includes a user selection of a backspace key 1402.
  • the user may review the displayed text in the input string presentation area 112 and realize that the text is incorrect or needs to be revised.
  • the revision module 214 accesses the maintained state information for keys of the keyboard 108 and visually distinguishes between keys with a pressed state and keys with an unpressed state.
  • the revision module 214 shades each of the keys representing 'g', 'u', 'a', 'n' and 'h' to visually distinguish them from the other options.
  • the revision module 214 may visually distinguish keys by: changing colors of a key or a letter on a key, highlighting a key or a letter on a key, changing a size of a key or a letter on a key, underlining a letter on a key, italicizing a letter on a key and so forth.
  • the revision module 214 may visually distinguish keys via lighting behind the keys.
  • the user selects a key (e.g., the 'h' represented by element 1404) indicating a location at which to initiate a correction or revision to the text already entered.
  • the revision module 214 may identify the location selected in the input string (e.g., the underlined 'h') in response to receiving a user selection of a key indicating the location at which to initiate the correction or revision to the text.
  • the revision module 214 may place a cursor before or after the location selected (e.g., place the cursor before or after the underlined 'h').
  • the user can correct the incorrect text or revise the text (e.g., select the key representing the 'g' as shown by element 1406).
  • the computing device 102 may highlight the underlined 'h' and replace the underlined 'h' with a 'g' upon receiving the correct selection.
  • if the revision module 214 locates a cursor after the underlined 'h', the user may select the backspace key 1402 to delete the underlined 'h' before selecting the key representing the 'g' as shown by element 1406.
  • if the revision module 214 locates a cursor before the underlined 'h', the user may select a deletion key to delete the underlined 'h' before selecting the key representing the 'g' as shown by element 1406. Ultimately, the user corrects the input string so that the text recites "guang" instead of "guanh".
  • a selection of the backspace key 1402 deletes the last letter or the letter at the end of the input string and subsequently highlights the previous input keys (e.g., the 'g', the V, the 'a' and the 'n').
  • a user of the computing device 102 may swipe left or right on the space bar key to move the cursor left or right, respectively (e.g., move the cursor from one letter to the next similar to a left or right arrow key).
  • a two finger swipe (e.g., the user contacts the user interface with two fingers next to one another) left or right anywhere on the user interface 110 and/or the keyboard 108 may instruct the computing device 102 to move the cursor to the left or the right, respectively (e.g., move one letter left or move one letter right).
  • the computing device 102 may be configured to recognize that two finger input is associated with cursor movement.
  • This functionality may be useful in situations where the insertion is to occur at the cursor position and/or the keyboard 108 does not include specific keys dedicated to cursor movement (e.g., left, right, down, up arrow keys). Similarly, a single finger swipe up or down on the spacebar may move the cursor up and down, and/or a two finger swipe up or down anywhere on the user interface 110 may move the cursor up and down, respectively.
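The cursor-movement gestures above can be sketched as a small dispatch function. The gesture encoding (finger count, direction, whether the touch landed on the space bar) is an assumption for illustration:

```python
# Sketch: a one-finger swipe on the space bar, or a two-finger swipe anywhere,
# moves the cursor one position in the swipe direction; other gestures are
# left for ordinary text input.

def move_cursor(cursor, text_length, fingers, direction, on_space_bar):
    if fingers == 2 or (fingers == 1 and on_space_bar):
        if direction == 'left':
            return max(0, cursor - 1)          # clamp at start of text
        if direction == 'right':
            return min(text_length, cursor + 1)  # clamp at end of text
    return cursor  # gesture not recognized as cursor movement

cursor = 5  # e.g., at the end of "guanh"
cursor = move_cursor(cursor, 5, fingers=2, direction='left', on_space_bar=False)
print(cursor)  # 4
```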
  • the indication to switch from the input mode to the revision mode may be conveyed via selection of keys or options other than the backspace key 1402 (e.g., a deletion key, an 'alt' key, a 'ctrl' key or a specific key generated and presented to provide a switching function from the input mode to the revision mode).
  • FIG. 15 illustrates another example user interface 1500 of a computing device 102.
  • a user may employ the input object 104 to select, in order, keys corresponding to 'g', 'u', 'a', 'n', 'c', 'h', 'u', 'a', 'n' and 'g' to enter the text "guanchuang" (e.g., which is incorrect in this example).
  • the revision module 214 maintains one of a pressed state or an unpressed state for individual ones of the keys presented via the keyboard.
  • the revision module 214 maintains information indicating that each of the 'g', 'u', 'a', 'n', 'c', 'h', 'u', 'a', 'n' and 'g' have been pressed, and therefore the revision module 214 retains a pressed state for the keys representing 'g', 'u', 'a', 'n', 'c', 'h', 'u', 'a', 'n' and 'g'.
  • the revision module 214 maintains information indicating that the other keys (e.g., representing the 'q', 'w', 'e', 'r', 't', 'y', 'o', 'p' and so forth) have not been pressed, and therefore the revision module 214 retains an unpressed state for these other keys.
  • the indication to switch from the input mode to the revision mode may include physical contact between the input object 104 and a location within a general area 1502 on a user interface (e.g., a touch screen) where the user wants to correct or revise the text. It is often difficult for a user to pinpoint an exact location for a revision using the input object 104 when the display screen is limited in size (e.g., it is difficult for a large finger to accurately instruct the computing device 102 to move the cursor to an exact location between letters to initiate a correction or revision).
  • the revision module 214 is configured to identify a general, or rough, area surrounding a location of physical contact between the input object 104 and the user interface.
  • the area may include a threshold number or length of letters or marks surrounding the location of physical contact or a threshold distance (e.g., up, down, left and/or right) from the location of physical contact.
  • the input object 104 may physically contact the screen at the 'n' within area 1502 and therefore, the revision module 214 may determine and/or present the area 1502, which includes the 'u', 'a', 'n', 'c', 'h' (e.g., a threshold may be two letters from the point of physical contact).
  • After receiving the physical contact to indicate the switch from the input mode to the revision mode, the revision module 214 accesses the maintained state information and visually distinguishes between keys with a pressed state and keys with an unpressed state. With respect to FIG. 15, the revision module 214 shades each of the keys representing 'u', 'a', 'n', 'c' and 'h' to visually distinguish them from the other keys.
  • the visual distinction of keys that have already been selected is limited to the text present within area 1502. Therefore, there may be some keys with a retained pressed state that are not visually distinguished (e.g., the 'g'). Accordingly, the determination of area 1502 in response to the physical contact allows the computing device 102 to minimize the correction area and reduce a number of keys (e.g., visually distinguished keys) selectable to initiate a correction or revision.
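The expansion of a rough touch into a correction area can be sketched as follows. The two-letter threshold matches the example above; the function name is an assumption:

```python
# Sketch: keep the letters within a threshold of the contact index and
# highlight only the keys that appear in that area (as in FIG. 15, where
# touching the first 'n' of "guanchuang" yields the area 'uanch').

AREA_THRESHOLD = 2  # two letters on either side of the contact point

def revision_area(text, contact_index, threshold=AREA_THRESHOLD):
    start = max(0, contact_index - threshold)
    end = min(len(text), contact_index + threshold + 1)
    area = text[start:end]
    return area, sorted(set(area))  # (visible area, keys to highlight)

area, keys_to_highlight = revision_area('guanchuang', 3)  # index 3 is the 'n'
print(area, keys_to_highlight)  # uanch ['a', 'c', 'h', 'n', 'u']
```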
  • the user selects a key (e.g., the 'n' represented by element 1504) indicating a location at which to initiate a correction or revision to the text already entered (e.g., the revision module 214 may move the cursor between the 'n' and the 'c'). Then, the user can correct the incorrect text or revise the text (e.g., select the key representing the 'g' as shown by element 1506). For example, the computing device 102 may insert a 'g' between the 'n' and the 'c' upon receiving the selection. Consequently, the user corrects the input string so that the text recites "guangchuang" instead of "guanchuang".
  • the indication may include a selection of the backspace key
  • FIG. 16 illustrates an example user interface 1600 of a computing device 102 that determines a location to initiate a correction or revision in an embodiment where an individual letter occurs more than once in a revision string.
  • the revision module 214 may determine an input string (e.g., a message or part of a message) or a general area (e.g., area 1502 in FIG. 15) that includes multiple instances of a single letter.
  • the revision module 214 is configured to provide functionality to receive a selection of an instance of multiple instances of a single letter at which to initiate a correction or revision to the text.
  • a user may provide an indication for the revision module 214 to switch from the input mode to the revision mode.
  • the revision module 214 may then visually distinguish the keys representing the 'u', the 'a', the 'n', the 'h' and the 'c' from the other keys on the keyboard. To correct the original entered text "guanhchuang" so that it recites "guangchuang", the user needs to replace the first 'h' with a 'g'.
  • the revision module 214 is configured to provide functionality so a user can select one of multiple instances of a single letter as a location at which to initiate a correction or revision to the text.
  • the revision module 214 may generate and present individual options representing each instance, as represented by element 1606, where the user selects between a left 'h' corresponding to the first 'h' in area 1604 and a right 'h' corresponding to the second 'h' in area 1604. If there are more than two instances, then the revision module 214 may present three instance options, four instance options, five instance options, and so forth.
  • the user can flick left 1608 to locate the cursor at the first 'h' in area 1604 or flick right 1610 to locate the cursor at the second 'h' in area 1604.
  • the user may select the left 'h' in element 1606 to move the cursor to the first 'h' in area 1604 or the user may flick left 1608 to move the cursor to the first 'h' in area 1604.
  • the user may then select the key 1612 representing the 'g' to replace the first 'h' with a 'g' to arrive at the correct text "guangchuang".
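The FIG. 16 flow — finding repeated instances of a letter and letting a flick pick one — can be sketched as follows. The flick encoding and function names are assumptions:

```python
# Sketch: when the selected letter occurs more than once in the revision
# area, list each occurrence and let a left/right flick choose which
# occurrence the cursor should move to.

def letter_instances(text, letter):
    return [i for i, ch in enumerate(text) if ch == letter]

def pick_instance(instances, flick):
    if len(instances) == 1:
        return instances[0]           # lone instance: no choice needed
    return instances[0] if flick == 'left' else instances[-1]

text = 'guanhchuang'
instances = letter_instances(text, 'h')
print(instances)                        # [4, 6]
print(pick_instance(instances, 'left'))  # 4 (the first 'h')
```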
  • the revision module 214 is configured to clear the maintained and retained state information, and start over, after a particular event.
  • the event may be based on an indication that a message or text is complete (e.g., a send message/text request is received, a store message/text request is received, or a search request is received).
  • the event may be the selection of a candidate character, as discussed with respect to FIG. 1.
  • the event may be a switch from one device application to another.
  • FIGS. 17 and 18 illustrate example processes depicted as logical flow graphs, which represent a sequence of operations that can be implemented in hardware, software, firmware, or a combination thereof.
  • the operations represent computer-executable instructions that, when executed by one or more processors, configure a computing device to perform the recited operations.
  • computer-executable instructions include routines, programs, objects, components, data structures, and the like that perform particular functions or implement particular abstract data types.
  • any or all of the operations may be implemented in whole or in part by hardware (e.g., as an ASIC, a specialized processing unit, etc.) to execute the described functions.
  • the functions and/or modules are implemented as part of an operating system.
  • the functions and/or modules are implemented as part of a device driver (e.g., a driver for a touch surface), firmware, and so on.
  • FIG. 17 illustrates an example process 1700 that maintains state information for individual keys of a keyboard (e.g., the keyboard 108).
  • the revision module 214 maintains information indicating that individual keys of a plurality of keys presented have been selected (e.g., pressed, swiped, tapped) to enter text. For example, the revision module 214 may maintain a pressed state for each of the keys representing the letters in: "guanh" as shown in FIG. 14, "guanchuang" as shown in FIG. 15, or "guanhchuang" as shown in FIG. 16.
  • the revision module 214 receives an indication to revise or correct the text entered in operation 1702. For example, a user may review the text and realize a correction or revision needs to be made, and therefore, may provide the indication via selecting the backspace key or via physically contacting an area in which to make the correction or the revision.
  • the revision module 214 visually distinguishes between the keys that have been selected and other presented keys that have not been selected to enter the text. For example, the revision module 214 may shade or highlight the keys representing the 'g', the 'u', the 'a', the 'n' and the 'h', as shown in FIG. 14. In various embodiments, the revision module 214 may also visually distinguish between selected keys that do not represent letters. For example, the revision module 214 may highlight a selected key representing a period or comma.
  • [0146] At operation 1708, the revision module 214 receives a selection of an individual key indicating a location within the text to initiate a correction or a revision. For example, a user may select the key representing the 'h' (as represented by element 1404 in FIG. 14) as the location within the text "guanh" at which to initiate a correction or a revision.
  • the revision module 214 receives an instruction to initiate the correction or the revision to the text at the location.
  • the instruction may be a user selection of the key representing the 'g' (as represented by element 1406 in FIG. 14) to replace the 'h' in "guanh".
  • FIG. 18 illustrates an example process 1800 that provides functionality to select an instance of a single letter that is repeated in text being corrected or revised.
  • the computing device 102 receives a request to enter text.
  • the request may indicate that a user of the computing device 102 would like to input text for a message or to type a message.
  • the revision module 214 maintains, as part of the input mode, a pressed state for a plurality of letters (e.g., as presented on selectable keys) selected to enter the text.
  • the revision module 214 displays the text so the user can review the text.
  • the revision module 214 receives an indication to switch from the text input mode to a revision mode to correct or revise the text or a portion of the text.
  • the revision module 214 visually distinguishes the pressed states of one or more of the plurality of letters that were selected from unpressed states of other letters that were not selected to enter the text.
  • the revision module 214 receives a selection of one of the one or more letters visually distinguished to convey a pressed state.
  • the revision module 214 determines whether the letter selected in operation 1812 occurs more than once in the text, or in a portion of the text identified as a correction or revision area. If the answer to decision operation 1814 is "No", then at operation 1816, the revision module 214 determines a location of the lone instance of the letter selected in operation 1812 within the text, and at operation 1818, the revision module 214 receives an instruction to implement a correction or revision at the location determined in operation 1816. [0156] If the answer at decision operation 1814 is "Yes", then at operation 1820, the revision module 214 distinguishes between individual instances of the multiple instances of the letter selected in operation 1812.
  • the revision module 214 then receives a selection of an instance indicating a location to implement a correction or revision, and at operation 1818, the revision module 214 receives an instruction to implement the correction or the revision at the location indicated in operation 1822.
  • the correction or the revision may include, but is not limited to, the deletion of a letter, the addition of a letter, the insertion of a letter, or a combination thereof.
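The revision flow described above (operations 1802 through 1822) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names are hypothetical, and the key-selection user interface is reduced to plain function arguments.

```python
def pressed_letters(text: str) -> set:
    """Letters whose keys are shown in a pressed (highlighted) state (operation 1810)."""
    return {ch for ch in text if ch.isalpha()}

def instances(text: str, letter: str) -> list:
    """All locations of `letter` within the text (decision operation 1814)."""
    return [i for i, ch in enumerate(text) if ch == letter]

def resolve_location(text: str, letter: str, choose=None) -> int:
    """Location at which to implement a correction or revision."""
    locs = instances(text, letter)
    if len(locs) == 1:
        return locs[0]       # lone instance: location is unambiguous (operation 1816)
    return choose(locs)      # user picks among distinguished instances (operations 1820-1822)

def replace_at(text: str, location: int, letter: str) -> str:
    """Implement a replacement correction at the location (operation 1818)."""
    return text[:location] + letter + text[location + 1:]

text = "guanh"
assert pressed_letters(text) == {"g", "u", "a", "n", "h"}
loc = resolve_location(text, "h")       # 'h' occurs once, so its location (4) is unambiguous
corrected = replace_at(text, loc, "g")  # the user then selects 'g' to replace it
assert corrected == "guang"
# 'n' occurs twice in "ninhao": the UI distinguishes the instances and the user picks one
assert resolve_location("ninhao", "n", choose=lambda locs: locs[1]) == 2
```

The `choose` callback stands in for the disambiguation UI of operation 1820; in the single-instance path it is never consulted.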
  • Embodiment A a method comprising: receiving first input selecting one or more keys representing one or more letters; identifying a first number of candidate characters associated with the one or more letters based at least in part on the first input; receiving second input, after the first input, the second input being associated with a last selection of a key of the one or more keys representing a last letter of the one or more letters; reducing, based at least on the second input, the first number of candidate characters to a second number of candidate characters that is less than the first number of candidate characters; and displaying candidate characters for selection based at least on the second number of candidate characters.
  • Embodiment B the method of embodiment A, wherein the second number of candidate characters comprises at least two candidate characters and the method further comprises receiving a selection of one of the at least two candidate characters.
  • Embodiment C the method of embodiment A or embodiment B, further comprising: analyzing the second input; and determining, based at least on the analyzing of the second input, that the second input indicates an instruction to add one or more additional letters to the one or more letters, wherein the reducing the number of candidate characters is based at least on a string of letters that combines the one or more letters and the one or more additional letters.
  • Embodiment D the method of embodiment C, wherein the instruction to add the one or more additional letters to the one or more letters is determined independent of a selection of the one or more additional letters.
  • Embodiment E the method of embodiment A or embodiment B, further comprising: analyzing the second input; and determining, based at least on the analyzing of the second input, that the second input indicates an instruction to add a pronunciation indication to the last letter, wherein the reducing the number of candidate characters is based at least on the pronunciation indication.
  • Embodiment F the method of any one of embodiments A through E, wherein the first input comprises physical contact between an input object and the one or more keys.
  • Embodiment G the method of embodiment F, wherein the one or more keys comprises multiple keys and the physical contact comprises continuous physical contact where the input object swipes the multiple keys.
  • Embodiment H the method of embodiment F, wherein the one or more keys comprises multiple keys and the physical contact comprises the input object tapping individual ones of the multiple keys.
  • Embodiment I the method of any one of embodiments A through H, wherein the second input comprises a flick motion that: increases a distance between an input object and a display surface that presents the one or more keys representing the one or more letters; and meets or exceeds a flick threshold speed.
  • Embodiment J the method of embodiment I, further comprising: determining a direction of the flick motion; and adding, based at least on the direction of the flick motion, one or more additional letters to the one or more letters to produce a string of letters that reduces the number of candidate characters associated with the one or more letters from the first number of candidate characters to the second number of candidate characters.
  • Embodiment K the method of embodiment J, wherein the string of letters comprises at least a compound Pinyin vowel and the one or more additional letters comprises one of an 'n' or an 'ng'.
  • Embodiment L the method of embodiment I, further comprising adding, based at least on the flick motion, one or more additional letters to the one or more letters to produce a string of letters that reduces the number of candidate characters associated with the one or more letters from the first number of candidate characters to the second number of candidate characters.
  • Embodiment M the method of embodiment L, wherein the string of letters comprises a compound Pinyin consonant and the one or more additional letters comprises an 'h'.
  • Embodiment N the method of embodiment I, further comprising: determining a direction of the flick motion; and adding, based at least on the direction of the flick motion, a pronunciation indication to at least one letter of the one or more letters to reduce the number of candidate characters associated with the one or more letters from the first number of candidate characters to the second number of candidate characters.
  • Embodiment O the method of embodiment N, wherein the pronunciation indication comprises a Pinyin tone.
  • Embodiment P one or more computer-readable storage media comprising instructions that, when executed on one or more processors, configure a computing device to perform operations comprising: analyzing second input after receiving first input that selects one or more keys representing text, the second input instructing a change to the text independent of selecting an additional key; determining, based at least on the analyzing of the second input, the change to the text to produce changed text; and presenting the changed text.
  • Embodiment Q the one or more computer-readable storage media of embodiment P, wherein the operations further comprise determining, based at least on the analyzing of the second input, that the second input indicates an instruction to combine the text with additional text.
  • Embodiment R the one or more computer-readable storage media of embodiment Q, wherein the operations further comprise reducing, based at least on the instruction, a number of candidate characters representative of a pronunciation of the text from a first number of candidate characters to a second number of candidate characters that is less than the first number of candidate characters.
  • Embodiment S the one or more computer-readable storage media of embodiment Q or embodiment R, wherein the text comprises at least one of an 'a', an 'e', an 'i', an 'o', or a 'u' and the additional text includes a suffix comprising one of an 'n' or an 'ng'.
  • Embodiment T the one or more computer-readable storage media of embodiment P, wherein the operations further comprise determining, based at least on the analyzing of the second input, that the second input indicates an instruction to add a pronunciation indication to the text.
  • Embodiment U the one or more computer-readable storage media of embodiment T, wherein the operations further comprise reducing, based at least on the instruction, a number of candidate characters representative of the pronunciation of the text from a first number of candidate characters to a second number of candidate characters that is less than the first number of candidate characters.
  • Embodiment V the one or more computer-readable storage media of embodiment T or embodiment U, wherein the pronunciation indication comprises a Pinyin tone.
  • Embodiment W the one or more computer-readable storage media of any one of embodiments P through V, wherein the second input comprises a flick motion that: increases a distance between an input object and a display surface presenting the one or more keys representing the text; and meets or exceeds a flick threshold speed.
  • Embodiment X the one or more computer-readable storage media of any one of embodiments P through W, wherein the second input indicates completion of the first input and a combination of the first input and the second input indicates entry of a syllable in Pinyin.
  • Embodiment Y the one or more computer-readable storage media of any one of embodiments P through X, wherein the operations further comprise re-configuring a QWERTY keyboard layout to minimize a distance between keys representing at least two base vowels, the base vowels comprising 'a', 'e', 'i', 'o', and 'u'.
  • Embodiment Z the one or more computer-readable storage media of any one of embodiments P through Y, wherein the operations further comprise re-configuring a QWERTY keyboard layout to minimize a distance between a key representing a letter 'h' and at least one key representing at least one letter of a 'z', 'c', or 's'.
  • Embodiment AA a method comprising: receiving first input selecting one or more keys representing one or more letters; receiving second input after the first input, the second input indicating an instruction to change the one or more letters independent of a selection of one or more additional keys representing one or more additional letters; accessing a classifier to determine the change to the one or more letters; reducing, based at least on the determined change to the one or more letters, a number of candidate characters representative of a pronunciation of at least the one or more letters from a first number of candidate characters to a second number of candidate characters that is less than the first number of candidate characters; and displaying the second number of candidate characters.
  • Embodiment BB the method of embodiment AA, wherein the second input is associated with a last selection of a key representing a last letter of the one or more letters and the classifier accessed is associated with the last letter.
  • Embodiment CC the method of embodiment BB, wherein if the last letter is identified to be one of a 'z', a 's', or a 'c', then the classifier determines the change to the one or more letters to be an addition of an 'h' to the one or more letters.
  • Embodiment DD the method of embodiment BB, wherein if the last letter is identified to be one of an 'a', an 'e', an 'i', an 'o' or a 'u', then the classifier determines the change to the one or more letters to be an addition of an 'n' or an 'ng' to the one or more letters.
  • Embodiment EE the method of embodiment BB, wherein if the last letter is identified to be one of an 'a', an 'e', an 'i', an 'o' or a 'u', then the classifier determines the change to the one or more letters to be one of a plurality of different pronunciation indications to add to the last letter.
  • Embodiment FF a system comprising: one or more processors; one or more memories storing instructions that, when executed on the one or more processors, configure a computing device to perform operations comprising: analyzing second input after receiving first input that selects one or more keys representing text, the second input instructing a change to the text independent of selecting an additional key; determining, based at least on the analyzing of the second input, the change to the text to produce changed text; and presenting the changed text.
  • Embodiment GG the system of embodiment FF, wherein the operations further comprise determining, based at least on the analyzing of the second input, that the second input indicates an instruction to combine the text with additional text.
  • Embodiment HH the system of embodiment GG, wherein the operations further comprise reducing, based at least on the instruction, a number of candidate characters representative of a pronunciation of the text from a first number of candidate characters to a second number of candidate characters that is less than the first number of candidate characters.
  • Embodiment II the system of embodiment GG or embodiment HH, wherein the text comprises at least one of an 'a', an 'e', an 'i', an 'o', or a 'u' and the additional text includes a suffix comprising one of an 'n' or an 'ng'.
  • Embodiment JJ the system of embodiment FF, wherein the operations further comprise determining, based at least on the analyzing of the second input, that the second input indicates an instruction to add a pronunciation indication to the text.
  • Embodiment KK the system of embodiment JJ, wherein the operations further comprise reducing, based at least on the instruction, a number of candidate characters representative of the pronunciation of the text from a first number of candidate characters to a second number of candidate characters that is less than the first number of candidate characters.
  • Embodiment LL the system of embodiment JJ or embodiment KK, wherein the pronunciation indication comprises a Pinyin tone.
  • Embodiment MM the system of any one of embodiments FF through LL, wherein the second input comprises a flick motion that: increases a distance between an input object and a display surface presenting the one or more keys representing the text; and meets or exceeds a flick threshold speed.
  • Embodiment NN the system of any one of embodiments FF through MM, wherein the second input indicates completion of the first input and a combination of the first input and the second input indicates entry of a syllable in Pinyin.
  • Embodiment OO the system of any one of embodiments FF through NN, wherein the operations further comprise re-configuring a QWERTY keyboard layout to minimize a distance between keys representing at least two base vowels, the base vowels comprising 'a', 'e', 'i', 'o', and 'u'.
  • Embodiment PP the system of any one of embodiments FF through OO, wherein the operations further comprise re-configuring a QWERTY keyboard layout to minimize a distance between a key representing a letter 'h' and at least one key representing at least one letter of a 'z', 'c', or 's'.
  • Embodiment QQ a method comprising: maintaining, as part of an input mode, a pressed state for a plurality of keys selected to enter text; displaying the text; receiving an indication to switch from the input mode to a revision mode to correct or revise at least a portion of the text displayed; visually distinguishing the pressed state of one or more keys of the plurality of keys from an unpressed state of one or more other keys not selected to enter the text; receiving a selection of one of the one or more keys, the selection indicating a location within the at least the portion of the text displayed to initiate a correction or a revision to the text; receiving the correction or the revision to the text; and implementing the correction or the revision to the text at the location.
  • Embodiment RR the method of embodiment QQ, wherein the correction or the revision comprises a replacement letter to correct an incorrect letter at the location within the at least the portion of the text displayed.
  • Embodiment SS the method of embodiment QQ, wherein the correction or the revision comprises a selection of a key representing an additional letter to insert at the location within the at least the portion of the text displayed.
  • Embodiment TT the method of any one of embodiments QQ through SS, further comprising: receiving a request to input a message; and commencing the maintaining in response to receiving the request to input the message.
  • Embodiment UU the method of embodiment TT, further comprising: receiving another request to communicate the message; and ending the maintaining in response to receiving the other request to communicate the message.
  • Embodiment VV the method of any one of embodiments QQ through UU, wherein the visually distinguishing of the pressed state of an individual key comprises presenting multiple instances of the individual key.
  • Embodiment WW the method of embodiment VV, wherein individual instances of the multiple instances correlate to a repeated letter within the at least the portion of the text displayed.
  • Embodiment XX the method of any one of embodiments QQ through WW, wherein the indication comprises a selection of a backspace key.
  • Embodiment YY the method of any one of embodiments QQ through WW, wherein the indication comprises an instruction to place a cursor within the at least the portion of the text displayed.
  • Embodiment ZZ one or more computer-readable storage media comprising instructions that, when executed on one or more processors, configure a computing device to perform operations comprising: maintaining information indicating that individual keys of a plurality of keys have been selected to enter text; receiving an indication to correct or to revise the entered text; visually distinguishing, on a virtual keyboard, between the plurality of keys that have been selected to enter the text and a plurality of other keys that have not been selected to enter the text; receiving a selection of an individual key of the plurality of keys, the selection indicating a location within the text; and receiving an instruction to correct or to revise the text at the location.
  • Embodiment AAA a system comprising: one or more processors; one or more memories storing instructions that, when executed on the one or more processors, configure a computing device to perform operations comprising: maintaining, as part of an input mode, a pressed state for a plurality of keys selected to enter text; displaying the text; receiving an indication to switch from the input mode to a revision mode to correct or revise at least a portion of the text displayed; visually distinguishing the pressed state of one or more keys of the plurality of keys from an unpressed state of one or more other keys not selected to enter the text; receiving a selection of one of the one or more keys, the selection indicating a location within the at least the portion of the text displayed to initiate a correction or a revision to the text; receiving the correction or the revision to the text; and implementing the correction or the revision to the text at the location.
  • Embodiment BBB the system of embodiment AAA, wherein the correction or the revision comprises a replacement letter to correct an incorrect letter at the location within the at least the portion of the text displayed.
  • Embodiment CCC the system of embodiment AAA, wherein the correction or the revision comprises a selection of a key representing an additional letter to insert at the location within the at least the portion of the text displayed.
  • Embodiment DDD the system of any one of embodiments AAA through CCC, wherein the visually distinguishing of the pressed state of an individual key comprises presenting multiple instances of the individual key.
  • Embodiment EEE the system of embodiment DDD, wherein individual instances of the multiple instances correlate to a repeated letter within the at least the portion of the text displayed.
  • Embodiment FFF the system of any one of embodiments AAA through EEE, wherein the indication comprises a selection of a backspace key.
  • Embodiment GGG the system of any one of embodiments AAA through EEE, wherein the indication comprises an instruction to place a cursor within the at least the portion of the text displayed.
  • Embodiment HHH a user interface comprising: a virtual keyboard that displays a plurality of keys representing letters; a first subset of keys of the plurality of keys that have not been selected to enter text displayed via the user interface; and a second subset of keys of the plurality of keys that have been selected to enter the text displayed via the user interface, wherein the user interface distinguishes individual keys in the second subset of keys from individual keys in the first subset of keys.
  • Embodiment III the user interface of embodiment HHH, wherein the distinguishing of the individual keys in the second subset of keys from the individual keys in the first subset of keys occurs as part of a text revision mode.
  • Embodiment JJJ the user interface of embodiment HHH or embodiment III, wherein the distinguishing of the individual keys in the second subset of keys from the individual keys in the first subset of keys occurs in response to sensing an indication to revise the text or to correct the text.
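Embodiments A through PP above can be illustrated with a short sketch of flick detection, last-letter classification, and candidate reduction. Everything specific here is an assumption made for illustration: the threshold value, the direction-to-suffix mapping, and the toy candidate table are invented, and the patent leaves those details to the classifier.

```python
FLICK_THRESHOLD_SPEED = 1.5  # hypothetical units

def is_flick(distance_increase: float, speed: float) -> bool:
    """Second input qualifies as a flick when the input object moves away from
    the display surface and meets or exceeds the flick threshold speed."""
    return distance_increase > 0 and speed >= FLICK_THRESHOLD_SPEED

def classify_change(last_letter: str, direction: str = None):
    """Change implied by a flick, keyed on the last selected letter (embodiments BB-EE):
    'z'/'c'/'s' take an 'h'; a base vowel takes an 'n' or 'ng' suffix by flick direction
    (the direction mapping is an assumed example)."""
    if last_letter in "zcs":
        return "h"
    if last_letter in "aeiou":
        return {"up": "n", "down": "ng"}.get(direction)
    return None

def reduce_candidates(table: dict, letters: str, suffix: str = "") -> dict:
    """Reduce a first number of candidates to a second, smaller number using the
    string that combines the selected letters with the flick-implied suffix."""
    key = letters + (suffix or "")
    return {pinyin: chars for pinyin, chars in table.items() if pinyin.startswith(key)}

# toy Pinyin-syllable -> candidate-character table
table = {"zha": ["扎"], "zhan": ["站"], "zhang": ["张"], "za": ["杂"]}

first = reduce_candidates(table, "zha")          # first number: 3 matching syllables
assert set(first) == {"zha", "zhan", "zhang"}
assert is_flick(distance_increase=0.4, speed=2.0)
suffix = classify_change("a", direction="down")  # downward flick after the last letter 'a'
second = reduce_candidates(table, "zha", suffix) # second number: 1 matching syllable
assert set(second) == {"zhang"}
assert classify_change("z") == "h"               # 'z' + flick -> compound consonant 'zh'
```

The same filtering step serves both change types: an appended suffix narrows by string prefix, while a tone indication (embodiments E, N, O) would instead filter a table keyed by toned syllables.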

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

Techniques and/or systems are described for minimizing (e.g., decreasing) the time it takes a user to provide input to a computing device, or to enter input via a user interface of the computing device (e.g., to select a key or an option). The techniques and/or systems described herein may also minimize the time it takes a user to select a candidate character in a particular language, the candidate character representing the provided input. In addition, the techniques and/or systems described herein provide an efficient error-correction or text-revision mechanism.
PCT/CN2014/071189 2014-01-23 2014-01-23 Functionality to reduce the time it takes a device to receive and process input WO2015109468A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/071189 WO2015109468A1 (fr) 2014-01-23 2014-01-23 Functionality to reduce the time it takes a device to receive and process input

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/071189 WO2015109468A1 (fr) 2014-01-23 2014-01-23 Functionality to reduce the time it takes a device to receive and process input

Publications (1)

Publication Number Publication Date
WO2015109468A1 true WO2015109468A1 (fr) 2015-07-30

Family

ID=53680581

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/071189 WO2015109468A1 (fr) 2014-01-23 2014-01-23 Functionality to reduce the time it takes a device to receive and process input

Country Status (1)

Country Link
WO (1) WO2015109468A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1542596A (zh) * 2003-04-30 2004-11-03 Character and text unit input correction system
CN101002198A (zh) * 2004-06-23 2007-07-18 Google Inc. Systems and methods for spelling correction of non-Roman characters and words
CN103218054A (zh) * 2012-01-20 2013-07-24 International Business Machines Corporation Method and system for character correction


Similar Documents

Publication Publication Date Title
US10275152B2 (en) Advanced methods and systems for text input error correction
US8560974B1 (en) Input method application for a touch-sensitive user interface
US9811193B2 (en) Text entry for electronic devices
US8922489B2 (en) Text input using key and gesture information
US10268370B2 (en) Character input device and character input method with a plurality of keypads
US8957868B2 (en) Multi-touch text input
KR101323281B1 (ko) Input device and character input method
US20140078065A1 (en) Predictive Keyboard With Suppressed Keys
US20120047454A1 (en) Dynamic Soft Input
US8952897B2 (en) Single page soft input panels for larger character sets
JP6426417B2 (ja) Electronic device, method and program
US20140354550A1 (en) Receiving contextual information from keyboards
US20190196712A1 (en) Systems and Methods for Facilitating Data Entry into Small Screen Electronic Devices
CN103176737A (zh) Method and device for multi-touch-based correction in a handwritten sentence system
US9395911B2 (en) Computer input using hand drawn symbols
KR100651396B1 (ko) Character recognition apparatus and method
JP6430198B2 (ja) Electronic device, method and program
KR20100024471A (ko) Hangul input method and apparatus for entering an initial, medial, or final sound at once using a touch screen
JP5977764B2 (ja) Information input system and information input method using extension keys
KR20130010252A (ko) Apparatus and method for resizing a virtual keyboard
US9298366B2 (en) Electronic device, method and computer readable medium
US9274609B2 (en) Inputting radical on touch screen device
KR100506231B1 (ko) Apparatus and method for inputting characters in a terminal with a touch screen
TW202230093A (zh) Apparatus and method for inputting logographic characters into an electronic device
US20150347004A1 (en) Indic language keyboard interface

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14880360

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14880360

Country of ref document: EP

Kind code of ref document: A1