CN113407100B - Time-based word segmentation - Google Patents

Time-based word segmentation

Info

Publication number
CN113407100B
Authority
CN
China
Prior art keywords
score
input
computing device
character
sequence
Prior art date
Legal status
Active
Application number
CN202110569755.3A
Other languages
Chinese (zh)
Other versions
CN113407100A (en)
Inventor
Thomas Deselaers
Daniel Martin Keysers
Abraham Murray
Shumin Zhai
Current Assignee
Google LLC
Original Assignee
Google LLC
Priority date
Filing date
Publication date
Application filed by Google LLC
Priority to CN202110569755.3A
Publication of CN113407100A
Application granted
Publication of CN113407100B

Links

Classifications

    • G06N5/04: Computing arrangements using knowledge-based models; inference or reasoning models
    • G06N5/041: Inference or reasoning models; abduction
    • G06F3/0237: Character input methods using prediction or retrieval techniques
    • G06F3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F3/04886: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area into independently controllable areas, e.g. virtual keyboards or menus
    • G06F40/216: Natural language analysis; parsing using statistical methods
    • G06F40/242: Lexical tools; dictionaries
    • G06F40/284: Recognition of textual entities; lexical analysis, e.g. tokenisation or collocates
    • G06V30/32: Character recognition; recognising digital ink

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Character Discrimination (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to time-based word segmentation. A computing device is described that receives a first input of a first text character at an initial time and a second input of a second text character at a subsequent time. Based on the first and second text characters, the computing device determines a first character sequence that does not include a space character between the first and second text characters and a second character sequence that includes a space character between the first and second text characters. The computing device determines a first score associated with the first character sequence and a second score associated with the second character sequence. The computing device adjusts the second score, based on a duration between the initial and subsequent times, to determine a third score, and the computing device outputs the second character sequence in response to determining that the third score exceeds the first score.

Description

Time-based word segmentation
Statement of divisional status
This application is a divisional of Chinese patent application No. 201680024744.4, filed on August 5, 2016.
Background
Some computing devices (e.g., mobile phones, tablet computers, etc.) may provide a graphical keyboard or handwriting input feature as part of a graphical user interface for composing text using a presence-sensitive input device, such as a trackpad or touch screen. Such computing devices may rely on auto-completion and character recognition systems to correct spelling and grammar errors, perform word segmentation (e.g., by inserting space characters that separate the text input into multiple words), and perform other character and word recognition techniques that assist a user in entering typed or handwritten text. However, some auto-completion systems may be limited in their ability to infer the text that the user intends and may introduce corrections that are inconsistent with it. As a result, the user may need to expend additional effort to remove, delete, or otherwise undo erroneous corrections.
Drawings
FIG. 1 is a conceptual diagram illustrating an example computing device configured to divide text input into two or more words in accordance with one or more techniques of this disclosure.
FIG. 2 is a block diagram illustrating an example computing device configured to divide text input into two or more words, in accordance with one or more aspects of the present disclosure.
FIG. 3 is a conceptual diagram of an example distribution of total score increases that vary based on duration between text input portions, in accordance with one or more techniques of the present disclosure.
FIG. 4 is a flowchart illustrating example operations of an example computing device configured to divide text input into two or more words, in accordance with one or more aspects of the present disclosure.
Detailed Description
In general, the present disclosure relates to techniques for partitioning text input into one or more words by applying a language model and/or a spatial model to the text input in conjunction with temporal characteristics of the text input. For example, a computing device may provide a graphical keyboard or handwriting input feature as part of a graphical user interface through which a user may provide text input (e.g., a sequence of text characters) using a presence-sensitive input component of the computing device, such as a trackpad or touch screen. As feedback that the computing device is accurately interpreting the text input, the computing device may present graphical output generated based on the text input. Rather than presenting the exact sequence of text characters that the computing device derives from the text input, the computing device analyzes the sequence of text characters to determine word boundaries and spelling or grammar errors, which the computing device uses to automatically insert spaces and correct errors before presenting the graphical output at the screen.
The computing device utilizes the language model and/or the spatial model to determine, with a degree of certainty or "total score" (e.g., a probability derived from a language model score and/or a spatial model score), whether a portion of the text input is intended to represent one or more individual letters, combinations of letters, or words of a dictionary (e.g., a lexicon). If the language model and/or the spatial model indicates that a portion of the text input is likely a misspelling of one or more letters, combinations of letters, or words in the dictionary, the computing device may replace the misspelled portion of the received text input with one or more corrected letters, combinations of letters, or words from the dictionary. The computing device may insert a space into the text input at each word boundary identified by the language model and/or the spatial model to clearly divide the graphical output of the text input into one or more clearly identifiable words.
To improve the accuracy of the language model and/or the spatial model and to better perform word segmentation, the computing device also uses the temporal characteristics of the text input to determine whether a particular portion of the text input represents a word break or space between words, even when those words are not the highest-ranked words in the dictionary. In other words, even when a word break or space is unlikely in a particular language context, the computing device uses the language model and/or the spatial model in conjunction with the temporal characteristics of the input to determine whether the user intends to enter a word break or "space" in the text input.
For example, the computing device may infer that a short delay or pause between receiving text input of two consecutive characters is an indication that the user does not intend to specify a space or word boundary in the text input, and that a long delay between receiving text input of two consecutive characters is an indication that the user intends to enter a space or word boundary in the text input. Thus, if the computing device detects a short delay between consecutive character inputs, the computing device may ignore the delay and treat the consecutive character inputs as forming part of a single word. However, if the computing device detects a long delay between consecutive character inputs, the computing device may increase the total score of candidate word pairs that include a word break or space between the consecutive characters. The computing device may adjust the total score according to the duration of the delay in order to increase the likelihood that the computing device accurately identifies word breaks or spaces based on intended pauses in the text input.
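As a rough illustration of this idea, the Python sketch below adjusts the total score of a candidate segmentation that places a space at a pause, scaling the boost with the pause duration. The function name, the 400-millisecond cutoff, and the scaling values are illustrative assumptions, not values taken from this disclosure.

```python
def adjust_space_score(total_score: float, pause_seconds: float,
                       short_pause: float = 0.4) -> float:
    """Return an adjusted score for a candidate containing a space or word break."""
    if pause_seconds < short_pause:
        # Short delay: likely not an intended boundary; slightly penalize the space.
        return total_score * 0.9
    # Longer delay: scale the boost with the pause duration (capped at 2 s).
    return total_score * (1.0 + min(pause_seconds, 2.0))

print(adjust_space_score(0.3, 0.2))  # short pause -> 0.27 (slightly reduced)
print(adjust_space_score(0.3, 1.2))  # long pause  -> 0.66 (boosted)
```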
FIG. 1 is a conceptual diagram illustrating a computing device 100 as an example computing device configured to divide text input into two or more words in accordance with one or more techniques of this disclosure. In the example of fig. 1, computing device 100 is a wearable computing device (e.g., a computerized watch or a so-called smart watch device). However, in other examples, computing device 100 may be a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a laptop computer, a portable gaming device, a portable media player, an electronic book reader, a television platform, an automotive computing platform or system, a fitness tracker, or any other type of mobile or non-mobile computing device that receives typed or handwritten text input from a user.
Computing device 100 may include a presence-sensitive display 112. The presence-sensitive display 112 of the computing device 100 may serve as an input component and as an output component of the computing device 100. Presence-sensitive display 112 is implemented using various techniques. For example, presence-sensitive display 112 may function as a presence-sensitive input device using a presence-sensitive screen, such as a resistive touch screen, a surface acoustic wave touch screen, a capacitive touch screen, a projected capacitive touch screen, a pressure-sensitive screen, an acoustic pulse recognition touch screen, a camera and display system, or another presence-sensitive screen technology. Presence-sensitive display 112 may serve as an output component such as a display device using any one or more of a Liquid Crystal Display (LCD), a dot matrix display, a Light Emitting Diode (LED) display, an Organic Light Emitting Diode (OLED) display, electronic ink, or similar monochrome or color display capable of outputting visual information to a user of computing device 100.
Presence-sensitive display 112 of computing device 100 may include a presence-sensitive screen that receives tactile user input from a user of computing device 100 and presents output. Presence-sensitive display 112 may receive indications of tactile user input (e.g., a user touching or pointing to one or more locations of presence-sensitive display 112 with a finger or stylus) by detecting one or more tap and/or non-tap gestures from a user of computing device 100, and in response to the input, computing device 100 may cause presence-sensitive display 112 to present output. Presence-sensitive display 112 may present output as part of a graphical user interface (e.g., screen shots 114A and 114B) that may relate to functionality provided by computing device 100, such as receiving text input from a user. For example, presence-sensitive display 112 may present a graphical keyboard with which user 118 may provide keyboard-based text input and/or a handwriting input feature with which user 118 may provide handwritten text input.
User 118 may interact with computing device 100 by providing one or more tap or non-tap gestures at or near presence-sensitive display 112 to enter text input. When user 118 enters handwritten text input, the handwritten text input may be print, cursive, or any other form of writing or drawing, as opposed to keyboard-based text input. In the example of FIG. 1, user 118 writes (e.g., with a finger or stylus) a mix of print and cursive letters h-i-t-h-e-r-e between times t0 and t13. FIG. 1 shows that user 118 writes the letter h beginning at time t0 and ending at time t1, the letter i beginning at time t2 and ending at time t3, and the letter t beginning at time t4 and ending at time t5. After a pause between times t5 and t6, FIG. 1 shows user 118 writing the letter h beginning at time t6 and ending at time t7, the letter e beginning at time t8 and ending at time t9, the letter r beginning at time t10 and ending at time t11, and the letter e beginning at time t12 and ending at time t13.
Computing device 100 may include text entry module 120 and character recognition module 122. Modules 120 and 122 may perform operations using software, hardware, firmware, or a mixture of hardware, software, and/or firmware residing in computing device 100 and executing on computing device 100. Computing device 100 may utilize multiple processors to execute modules 120 and 122 and/or may execute modules 120 and 122 as virtual machines executing on underlying hardware. In some examples, presence-sensitive display 112 and modules 120 and 122 may be disposed remotely from computing device 100, and computing device 100 may access presence-sensitive display 112 and modules 120 and 122 remotely, for example, as one or more network services accessible via a network cloud.
Text entry module 120 may manage a user interface provided by computing device 100 at presence-sensitive display 112 for handling text input from a user. For example, the text entry module 120 may cause the computing device 100 to present a graphical keyboard or handwriting input feature as part of a graphical user interface (e.g., screen shot 114A) by which a user, such as user 118, may provide text input (e.g., a sequence of text characters) using the presence-sensitive display 112. As a form of feedback that computing device 100 is accurately receiving handwritten text input at presence-sensitive display 112, text entry module 120 may cause computing device 100 to display a trace or "ink" (e.g., screen shot 114A) corresponding to the location of presence-sensitive display 112 at which the text input was received. In addition to or as an alternative to feedback that computing device 100 is accurately interpreting text input received by presence-sensitive display 112, text entry module 120 may cause computing device 100 to present individual characters that computing device 100 inferred from the text input as graphical output (e.g., screen shot 114B).
When user 118 provides a tap or non-tap gesture input at presence-sensitive display 112, text entry module 120 may receive information from presence-sensitive display 112 regarding an indication of user input detected at presence-sensitive display 112. Text entry module 120 may determine a sequence of touch events based on information received from presence-sensitive display 112. Each touch event in the sequence may include data regarding where, when, and from what direction the presence-sensitive display 112 detected the user input. Text entry module 120 may invoke character recognition module 122 to process and interpret text characters associated with the text input by outputting a sequence of touch events to character recognition module 122. In response to outputting the sequence of touch events, the text entry module 120 may receive from the character recognition module 122 an indication of one or more text characters or words separated by spaces that the character recognition module 122 derived from the touch events. Text entry module 120 may cause presence-sensitive display 112 to present text characters received from character recognition module 122 as a graphical output (e.g., screen shot 114B).
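The disclosure does not specify a concrete touch-event format. As a rough sketch, each touch event might carry a location, a time, and an action component, as in the following Python illustration; the class, field names, and values are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    DOWN = "down"   # contact begins
    MOVE = "move"   # contact moves across the display
    UP = "up"       # contact ends

@dataclass
class TouchEvent:
    x: float        # location on the presence-sensitive display
    y: float
    time: float     # seconds since input began
    action: Action

# The letter "h" drawn between t0 and t1 might produce events like these:
events = [
    TouchEvent(10.0, 40.0, 0.00, Action.DOWN),
    TouchEvent(10.0, 10.0, 0.15, Action.MOVE),
    TouchEvent(22.0, 25.0, 0.30, Action.MOVE),
    TouchEvent(22.0, 40.0, 0.45, Action.UP),
]
```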
Character recognition module 122 may perform character-level and/or word-level recognition operations on the sequence of touch events determined by text entry module 120 from text input provided at presence-sensitive display 112. Character recognition module 122 may perform character-level recognition of the text input by determining a sequence of text characters based on the touch events received from text entry module 120. Additionally, character recognition module 122 may perform word-level recognition of the text input to determine word sequences made up of the individual characters determined from the touch events. For example, using the spatial model, character recognition module 122 may interpret the sequence of touch events as selections of keys of a graphical keyboard presented at presence-sensitive display 112 and determine a sequence of individual characters corresponding to the key selections, along with a spatial model score indicating, with a degree of certainty, a likelihood that the sequence of touch events represents the key selections. Alternatively, using stroke recognition techniques and the spatial model, character recognition module 122 may interpret the sequence of touch events as a sequence of strokes of handwritten text input and determine a sequence of individual characters corresponding to the strokes, along with a spatial model score indicating a likelihood that the sequence of touch events represents the strokes of the individual letters. Character recognition module 122 may determine, based on the spatial model score, a total score indicating with a degree of certainty a likelihood that the sequence of touch events represents the text input.
Rather than merely outputting to text entry module 120 the raw sequence of text characters that character recognition module 122 derives from the sequence of touch events, character recognition module 122 may perform additional analysis on the sequence of touch events to identify potential word boundaries and spelling or grammar errors associated with the text input. Character recognition module 122 may automatically insert spaces and correct potential errors in the character sequences derived from the touch events before outputting text characters to text entry module 120 for presentation at presence-sensitive display 112.
Character recognition module 122 may divide the text input into one or more words by using aspects of the spatial model and/or the language model in conjunction with the temporal characteristics of the text input. In operation, after receiving a first input of at least one first text character detected at presence-sensitive display 112 at an initial time, computing device 100 may receive a second input of at least one second text character detected at presence-sensitive display 112 at a subsequent time. For example, between initial times t0 and t5, presence-sensitive display 112 may detect initial handwritten text input as user 118 gestures the letters h-i-t at or near locations of presence-sensitive display 112. Between subsequent times t6 and t13, presence-sensitive display 112 may detect subsequent handwritten text input as user 118 gestures the letters h-e-r-e at or near locations of presence-sensitive display 112. Presence-sensitive display 112 may output information to text entry module 120 indicating the locations (e.g., x, y coordinate information) and times at which presence-sensitive display 112 detected the initial and subsequent handwritten text inputs.
Text entry module 120 may assemble the location and time information received from presence-sensitive display 112 into a time-ordered sequence of touch events. Text entry module 120 may pass the sequence of touch events, or a pointer to the location in memory of computing device 100 where the sequence of touch events is stored, to character recognition module 122 for conversion into a sequence of text characters.
Using the spatial model and other stroke recognition techniques, character recognition module 122 may interpret the sequence of touch events received from text entry module 120 as a sequence of written strokes that form a sequence of individual characters. Character recognition module 122 may derive the total score based at least in part on the spatial model scores assigned to the individual character sequences by the spatial model.
For example, character recognition module 122 may characterize portions of the touch events as defining distinct vertical strokes, horizontal strokes, curved strokes, diagonal strokes, and so on. Character recognition module 122 may assign a score or rank to potential characters that most closely resemble the strokes defined by the touch events and combine the individual scores (e.g., by product, sum, average, etc.) to determine a total score associated with the text input. The total score or ranking may indicate a degree of likelihood or confidence that one or more of the touch events correspond to a stroke or combination of strokes associated with a particular text character. Character recognition module 122 may generate the character sequence based at least in part on the total score or ranking and other factors. For example, based on the touch events associated with times t0 through t13, character recognition module 122 may define the character sequence as h-i-t-h-e-r-e.
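The sketch below shows one way the per-character scores described above could be combined into a total score, here as a sum of log-probabilities; the disclosure leaves the combination open (e.g., product, sum, average), so this choice and the confidence values are assumptions.

```python
import math

def total_spatial_score(char_scores):
    """Combine per-character recognition probabilities into one total score."""
    return sum(math.log(p) for p in char_scores)

# Per-character confidences for h-i-t-h-e-r-e (illustrative values only):
scores = [0.92, 0.88, 0.95, 0.90, 0.85, 0.91, 0.87]
print(total_spatial_score(scores))  # higher (less negative) means more confident
```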
Rather than merely outputting a sequence of characters generated from a touch event, character recognition module 122 may perform additional character and word recognition operations to more accurately determine text characters that user 118 intends to input at presence-sensitive display 112. The character recognition module 122 may determine a first character sequence that does not include space characters between the at least one first text character and the at least one second text character and a second character sequence that includes space characters between the at least one first text character and the at least one second text character based at least in part on the at least one first text character and the at least one second text character.
For example, character recognition module 122 may input the character sequence h-i-t-h-e-r-e into a language model that compares the character sequence with individual words and phrases in a dictionary (e.g., a lexicon). When the user provides handwritten text input recognized as h-i-t-h-e-r-e by character recognition module 122, the language model may assign a corresponding language model score or rank to each word or phrase in the dictionary that may potentially represent the text input the user intended to enter at presence-sensitive display 112. Using the respective language model score for each potential word or phrase and the spatial-model-based score determined from the touch events, character recognition module 122 may determine a respective "total" score for each potential word or phrase.
For example, the language model may identify the phrases "hi there" and "hit here" as possible representations of the sequence of text characters. Because the phrase "hi there" is more common in English than the phrase "hit here", the language model may assign a higher language model score to the phrase "hi there" than it assigns to the phrase "hit here". Character recognition module 122 may in turn assign a higher total score (i.e., a first score) to the phrase "hi there" than the total score (i.e., a second score) that character recognition module 122 assigns to the phrase "hit here". In other words, based on the information stored in the dictionary and the language model, character recognition module 122 may determine that the first character sequence "hi there", which does not include a space character between the letters h-i-t received between initial times t0 and t5 and the letters h-e-r-e received between subsequent times t6 and t13, is more likely to represent the handwritten text input received between times t0 and t13 than the second character sequence "hit here", which does include a space character between the letters h-i-t received between initial times t0 and t5 and the letters h-e-r-e received between subsequent times t6 and t13.
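A minimal sketch of this language-model comparison follows, using invented unigram frequencies in place of a real lexicon; a production language model would use n-gram statistics rather than independent word frequencies, so both the counts and the unigram assumption are illustrative.

```python
import math

# Invented unigram frequencies standing in for a real lexicon:
WORD_FREQ = {"hi": 5_000_000, "there": 4_000_000, "hit": 900_000, "here": 3_500_000}
TOTAL = sum(WORD_FREQ.values())

def lm_score(phrase):
    """Log-probability of a phrase under a simple unigram model."""
    return sum(math.log(WORD_FREQ[w] / TOTAL) for w in phrase.split())

first_score = lm_score("hi there")   # no space at the pause boundary
second_score = lm_score("hit here")  # space at the pause boundary
assert first_score > second_score    # "hi there" is the more common phrase
```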
To improve the accuracy of the text recognition techniques performed by character recognition module 122 and to better perform word segmentation, character recognition module 122 also uses the temporal characteristics of the text input detected at presence-sensitive display 112 to determine which one or more individual words in the dictionary are more likely to represent the text input. In particular, character recognition module 122 uses the temporal characteristics of the text input to determine whether user 118 intends to enter a break or "space" in the text input by determining whether user 118 paused between entering consecutive characters in the sequence. Character recognition module 122 may determine whether a sufficient duration elapsed between receipt of an initial portion of text input associated with the end of an initial character and a subsequent portion of text input associated with the beginning of a subsequent character to indicate a high likelihood that the user intends to designate a space or break between the initial and subsequent characters. Character recognition module 122 may infer that a shorter delay between receiving text input associated with two consecutive characters is an indication that the user does not intend to specify a space or word boundary in the text input, and that a longer delay between receiving text input associated with two consecutive characters is an indication that the user intends to enter a space or word boundary in the text input.
Character recognition module 122 may adjust the second score (e.g., the total score associated with "hit here") based on the duration between the initial time and the subsequent time to determine a third score associated with the second character sequence. For example, even though the language model of character recognition module 122 may determine that the phrase "hi there" is more common in English and therefore has a higher language model score than the phrase "hit here", character recognition module 122 may boost the total score of the character sequence "hit here" due to the pause identified between times t5 and t6, i.e., after user 118 draws the letter t and before user 118 draws the letter h. By adjusting the total score of the character sequence "hit here" in response to the pause, character recognition module 122 may assign a higher score to the phrase "hit here" than the score that character recognition module 122 assigns to the phrase "hi there". In this manner, character recognition module 122 may enable computing device 100 to receive indications of spaces or breaks in the text input by recognizing pauses in the text input.
In response to determining that the third score (e.g., the adjusted score associated with "hit here") exceeds the first score (e.g., the score associated with "hi there"), computing device 100 may output an indication of the second character sequence for display. In other words, after character recognition module 122 adjusts the score of the character sequence "hit here" based on the temporal characteristics of the text input, character recognition module 122 may determine whether the adjusted score of the character sequence "hit here" exceeds the scores of the other potential character sequences output from the language model. In the example of FIG. 1, character recognition module 122 may determine that the adjusted score of "hit here" exceeds the score of "hi there" and output the character sequence "hit here" to text entry module 120 for presentation at presence-sensitive display 112.
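Putting the pieces together for the FIG. 1 example, the sketch below shows the second score being adjusted into a third score by a pause-dependent boost, after which the adjusted candidate wins the comparison. The numeric log-scores and the boost function are illustrative assumptions.

```python
def pause_boost(log_score: float, pause_seconds: float) -> float:
    """Add a pause-dependent bonus to a candidate that breaks at the pause."""
    return log_score + 2.0 * min(pause_seconds, 2.0)

first_score = -3.4    # "hi there": no break at the pause (higher LM score)
second_score = -4.9   # "hit here": break at the pause (lower LM score)
third_score = pause_boost(second_score, pause_seconds=1.1)  # -> -2.7

chosen = "hit here" if third_score > first_score else "hi there"
print(chosen)  # -> "hit here"
```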
Text entry module 120 may receive data from character recognition module 122 indicating the character sequence "hit here". Text entry module 120 may use the data from character recognition module 122 to generate an updated graphical user interface containing the characters "hit here" and send instructions to presence-sensitive display 112 for displaying the updated user interface (e.g., screen shot 114B).
In this way, a computing device in accordance with the described techniques may better predict word breaks or spaces in text input than other systems. By using the temporal characteristics of text input to enhance the output of the language model and/or spatial model and other components of the text input system, the computing device may make text entry more intuitive by allowing a user to more easily see whether the computing device is accurately interpreting the input. By predicting word breaks and space entries more accurately, the computing device may receive fewer inputs from the user to correct erroneous word break or space predictions. By receiving fewer inputs, the computing device may process fewer instructions and use less power. Thus, the computing device may receive text input more quickly and consume less battery power than other systems.
Fig. 2 is a block diagram illustrating computing device 200 as an example computing device configured to divide text input into two or more words, in accordance with one or more aspects of the present disclosure. Computing device 200 of fig. 2 is described below within the context of computing device 100 of fig. 1. In some examples, computing device 200 of fig. 2 represents an example of computing device 100 of fig. 1. Fig. 2 illustrates only one particular example of computing device 200; many other examples of computing device 200 may be used in other instances and may include a subset of the components included in example computing device 200 or may include additional components not shown in fig. 2.
As shown in the example of fig. 2, computing device 200 includes a presence-sensitive display 212, one or more processors 240, one or more input components 242, one or more communication units 244, one or more output components 246, and one or more storage components 248. Presence-sensitive display 212 includes a display component 202 and a presence-sensitive input component 204.
The one or more storage components 248 of the computing device 200 are configured to store the text entry module 220 and the character recognition module 222, the character recognition module 222 further including a Time Model (TM) module 226, a Language Model (LM) module 224, and a Space Model (SM) module 228. In addition, storage component 248 is configured to store dictionary data store 234A and threshold data store 234B. Data stores 234A and 234B may be collectively referred to herein as "data stores 234".
Communication channel 250 may interconnect each of components 202, 204, 212, 220, 222, 224, 226, 228, 234, 240, 242, 244, 246, and 248 for inter-component communication (physically, communicatively, and/or operatively). In some examples, communication channel 250 may include a system bus, a network connection, an interprocess communication data structure, or any other method for transmitting data.
One or more input components 242 of the computing device 200 may receive input. Examples of inputs are tactile inputs, audio inputs, image inputs, and video inputs. In one example, the input component 242 of the computing device 200 includes a presence-sensitive display, a touch-sensitive screen, a mouse, a keyboard, a voice response system, a microphone, or any other type of device for detecting input from a person or machine. In some examples, the input component 242 includes one or more sensor components, such as one or more position sensors (GPS component, wi-Fi component, cellular component), one or more temperature sensors, one or more movement sensors (e.g., accelerometer, gyroscope), one or more pressure sensors (e.g., barometer), one or more ambient light sensors, and one or more other sensors (e.g., microphone, camera, video camera, body camera, eyewear, or other camera devices operatively coupled to the computing device 200, infrared proximity sensor, hygrometer, etc.).
One or more output components 246 of computing device 200 can generate output. Examples of outputs are haptic outputs, audio outputs, still image outputs, and video outputs. In one example, output component 246 of computing device 200 includes a presence-sensitive display, a sound card, a video graphics adapter card, speakers, a Cathode Ray Tube (CRT) monitor, a Liquid Crystal Display (LCD), or any other type of device for generating output to a person or machine.
The one or more communication units 244 of the computing device 200 may communicate with external devices via one or more wired and/or wireless networks by transmitting and/or receiving network signals over one or more networks. For example, the communication unit 244 may be configured to communicate over a network with a remote computing system that processes text input and performs word segmentation of the text input using time and language model characteristics as described herein. In response to outputting the indication of the sequence of touch events via communication unit 244 for transmission to the remote computing system, modules 220 and/or 222 may receive an indication of the sequence of characters from the remote computing system via communication unit 244. Examples of communication unit 244 include a network interface card (e.g., such as an ethernet card), an optical transceiver, a radio frequency transceiver, a GPS receiver, or any other type of device that may send and/or receive information. Other examples of communication unit 244 may include a short wave radio, a cellular data radio, a wireless network radio, and a Universal Serial Bus (USB) controller.
Presence-sensitive display 212 of computing device 200 includes display component 202 and presence-sensitive input component 204. Display component 202 may be a screen at which information is displayed by presence-sensitive display 212, and presence-sensitive input component 204 may detect objects at and/or near display component 202. As one example range, presence-sensitive input component 204 may detect an object, such as a finger or stylus, that is 2 inches or less from display component 202. Presence-sensitive input component 204 may determine a location (e.g., [x, y] coordinates) of display component 202 at which the object was detected. In another example range, presence-sensitive input component 204 may detect objects 6 inches or less from display component 202, and other ranges are possible. Presence-sensitive input component 204 may determine the location of display component 202 selected by the user's finger using capacitive, inductive, and/or optical recognition techniques. In some examples, presence-sensitive input component 204 also provides output to a user through the use of tactile, audio, or video stimuli as described with respect to display component 202. In the example of fig. 2, presence-sensitive display 212 may present a user interface (such as a graphical user interface for receiving text input and outputting a sequence of characters inferred from the text input, as shown in screen shots 114A and 114B of fig. 1).
Although presence-sensitive display 212 is illustrated as an internal component of computing device 200, presence-sensitive display 212 may also represent an external component that shares a data path with computing device 200 for transmitting and/or receiving inputs and outputs. For example, in one example, presence-sensitive display 212 represents built-in components of computing device 200 that are located within an external package of computing device 200 and physically connected to the external package of computing device 200 (e.g., a screen on a mobile phone). In another example, presence-sensitive display 212 represents an external component of computing device 200 that is located outside of and physically separate from the enclosure or housing of computing device 200 (e.g., a monitor, projector, etc. that shares a wired and/or wireless data path with computing device 200).
Presence-sensitive display 212 of computing device 200 may receive tactile input from a user of computing device 200. The presence-sensitive display 212 may receive indications of tactile input by detecting one or more tap or non-tap gestures (e.g., a user touching or pointing to one or more locations of the presence-sensitive display 212 with a finger or stylus) from a user of the computing device 200. Presence-sensitive display 212 may present output to a user. Presence-sensitive display 212 may present the output as a graphical user interface (e.g., as screen shots 114A and 114B of fig. 1) that may be associated with functions provided by various functions of computing device 200. For example, presence-sensitive display 212 may present various user interfaces for components of a computing platform, operating system, application, or service (e.g., electronic messaging application, navigation application, internet browser application, mobile operating system, etc.) executing at computing device 200 or accessible by computing device 200. The user may interact with a corresponding user interface to cause computing device 200 to perform operations related to one or more different functions. For example, text entry module 220 may cause presence-sensitive display 212 to present a graphical user interface associated with text input functions of computing device 200. A user of computing device 200 may view the output presented as feedback associated with the text input function and provide input to presence-sensitive display 212 using the text input function to compose additional text.
Presence-sensitive display 212 of computing device 200 may detect two-dimensional and/or three-dimensional gestures as input from a user of computing device 200. For example, a sensor of presence-sensitive display 212 may detect movement of the user (e.g., moving a hand, arm, pen, stylus, etc.) within a threshold distance of the sensor of presence-sensitive display 212. Presence-sensitive display 212 may determine a two-dimensional or three-dimensional vector representation of the movement and associate the vector representation with a gesture input having multiple dimensions (e.g., a swipe, pinch, tap, stroke, etc.). In other words, presence-sensitive display 212 may detect multi-dimensional gestures without requiring the user to gesture at or near a screen or surface at which presence-sensitive display 212 outputs information for display. Instead, presence-sensitive display 212 may detect multi-dimensional gestures performed at or near a sensor that may or may not be positioned near the screen or surface at which presence-sensitive display 212 outputs information for display.
The one or more processors 240 may implement functions and/or execute instructions associated with the computing device 200. Examples of processor 240 include an application processor, a display controller, an auxiliary processor, one or more sensor hubs, and any other hardware configured to act as a processor, a processing unit, or a processing device. The modules 220, 222, 224, 226, and 228 may be operated by the processor 240 to perform various actions, operations, or functions of the computing device 200. For example, the processor 240 of the computing device 200 may retrieve and execute instructions stored by the storage component 248 that cause the processor 240 to execute the operational modules 220, 222, 224, 226, and 228. The instructions, when executed by the processor 240, may cause the computing device 200 to store information within the storage component 248.
One or more storage components 248 within computing device 200 may store information for processing during operation of computing device 200 (e.g., computing device 200 may store data accessed by modules 220, 222, 224, 226, and 228 during execution at computing device 200). In some examples, storage component 248 is temporary storage, meaning that the primary purpose of storage component 248 is not long-term storage. Storage component 248 on computing device 200 may be configured as volatile memory for short-term storage of information and, thus, does not retain stored content if powered down. Examples of volatile memory include random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), and other forms of volatile memory known in the art.
In some examples, storage component 248 further includes one or more computer-readable storage media. In some examples, storage component 248 includes one or more non-transitory computer-readable storage media. Storage component 248 may be configured to store a greater amount of information than is typically stored by volatile memory. Storage component 248 may be further configured as non-volatile memory space for long-term storage of information that retains information across power on/off cycles. Examples of non-volatile memory include magnetic hard disks, optical disks, floppy disks, flash memory, or forms of electrically programmable memory (EPROM) or electrically erasable programmable memory (EEPROM). Storage component 248 may store program instructions and/or information (e.g., data) associated with modules 220, 222, 224, 226, and 228 and data store 234. Storage component 248 may include memory configured to store data or other information associated with modules 220, 222, 224, 226, and 228 and data store 234.
Text entry module 220 may include all of the functionality of text entry module 120 of computing device 100 of fig. 1 and may perform operations similar to text entry module 120 for managing a user interface provided by computing device 200 at presence-sensitive display 212 for handling text input from a user. Text entry module 220 may send information over communication channel 250 that causes display component 202 of presence-sensitive display 212 to present a graphical keyboard or handwriting input feature as part of a graphical user interface (e.g., screen shot 114A) by which a user, such as user 118, may provide text input (e.g., a sequence of text characters) through tap and non-tap gestures at presence-sensitive input component 204. Text entry module 220 may cause display component 202 to present a trace or "ink" corresponding to the locations of presence-sensitive input component 204 at which the text input was received (e.g., screen shot 114A), and may also cause display component 202 to display the individual characters inferred from the text input by character recognition module 222 as graphical output (e.g., screen shot 114B).
Character recognition module 222 may include all of the functionality of character recognition module 122 of computing device 100 of fig. 1 and may perform similar operations to perform character-level and/or word-level recognition on the sequence of touch events determined by text entry module 220 from text input provided at presence-sensitive display 212. Character recognition module 222 performs character-level and/or word-level recognition operations on touch events using SM module 228, LM module 224, and TM module 226.
Threshold data store 234B may include one or more time thresholds, distance or space-based thresholds, probability thresholds, or other comparison values used by character recognition module 222 to infer characters from text input. A threshold stored at threshold data store 234B may be a variable threshold (e.g., based on a function or a lookup table) or a fixed value. For example, threshold data store 234B may include a first time threshold (e.g., 400 milliseconds) and a second time threshold (e.g., 1 second). Character recognition module 222 may compare the duration of a pause between consecutive character inputs to each of the first and second thresholds. If the duration of the pause satisfies the first threshold (e.g., greater than 400 milliseconds), character recognition module 222 may increase the probability or score of a character sequence that includes a break or space corresponding to the pause by a first amount. If the duration of the pause satisfies the second threshold (e.g., greater than 1 second), character recognition module 222 may increase the probability or score of a character sequence that includes a break or space corresponding to the pause by a second amount that exceeds the first amount. If the duration of the pause satisfies neither the first threshold nor the second threshold (e.g., less than 400 milliseconds), character recognition module 222 may reduce the probability or score of a character sequence that includes a break or space corresponding to the pause.
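The two-threshold comparison described above might look like the following sketch, using the example values of 400 milliseconds and 1 second from the text; the specific boost and penalty amounts are assumptions.

```python
FIRST_THRESHOLD = 0.4   # seconds (example value from the text)
SECOND_THRESHOLD = 1.0  # seconds (example value from the text)

def threshold_adjustment(score: float, pause_seconds: float) -> float:
    """Adjust the score of a candidate with a break/space at the pause."""
    if pause_seconds > SECOND_THRESHOLD:
        return score + 2.0      # second, larger boost
    if pause_seconds > FIRST_THRESHOLD:
        return score + 0.5      # first, smaller boost
    return score - 0.5          # short pause: penalize the space or break

print(threshold_adjustment(0.0, 0.2))  # -0.5
print(threshold_adjustment(0.0, 0.6))  #  0.5
print(threshold_adjustment(0.0, 1.3))  #  2.0
```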
In some examples, the thresholds stored at threshold data store 234B may be variable thresholds and may change dynamically over time. For example, based on previous inputs, character recognition module 222 may learn (e.g., using a machine learning system) characteristics of typical input from user 118 and modify the thresholds stored at threshold data store 234B based on the learned characteristics. For example, character recognition module 222 may set the thresholds based on the amount of time user 118 typically takes between entering different letters, words, and phrases, and based on the amount of time user 118 typically takes between entering different letters of the same word.
In some examples, the amount by which character recognition module 222 increases or decreases the probability or score of a character sequence may be determined as one or more functions of the pause duration. For example, character recognition module 222 may determine, from a first data set (e.g., based on a first function of the duration or from a first lookup table of values), a first amount by which to increase the score of a character string. Character recognition module 222 may determine, from a second data set (e.g., based on a second function of the duration or from a second lookup table of values), a second amount by which to increase the score of a character string. As explained in more detail with respect to fig. 3, the first data set and the second data set may represent two disjoint data sets separated by at least one order of magnitude. In some examples, the order of magnitude may be a factor (e.g., 10) or an offset (e.g., a fixed amount). For example, if the pause duration is greater than or equal to 1 second, character recognition module 222 may increase the score by a greater amount than it does for pauses that last less than 1 second.
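One way to realize boost amounts drawn from two disjoint sets separated by roughly an order of magnitude is sketched below; the particular functions and ranges are assumptions for illustration only.

```python
def boost_amount(pause_seconds: float) -> float:
    """Boost drawn from one of two disjoint ranges depending on pause length."""
    if pause_seconds >= 1.0:
        # Second data set: amounts in [1.0, 2.0], roughly 10x the first set.
        return min(1.0 + (pause_seconds - 1.0), 2.0)
    # First data set: amounts in [0.0, 0.1).
    return 0.1 * pause_seconds

print(boost_amount(0.5))  # 0.05 (first set)
print(boost_amount(1.5))  # 1.5  (second set)
```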
SM module 228 may receive a sequence of touch events as input and output the character or sequence of characters most likely to represent the sequence of touch events, along with a degree of certainty or spatial model score indicating the likelihood or accuracy with which the sequence of touch events defines the character sequence. In other words, SM module 228 may perform handwriting recognition techniques to interpret touch events as strokes and strokes as characters, and/or interpret touch events as selections or gestures at keys of a keyboard and the key selections or gestures as characters of words. Character recognition module 222 may use the spatial model scores output from SM module 228 in determining a total score for the one or more potential words that character recognition module 222 outputs in response to text input.
LM module 224 may receive a character sequence as input and output one or more candidate words or word pairs that LM module 224 identifies from dictionary data store 234A as potential replacements for the character sequence in a language context (e.g., a sentence in the written language). For example, LM module 224 may assign language model probabilities to one or more candidate words or word pairs located in dictionary data store 234A that include at least some of the same characters as the entered character sequence. The language model probability assigned to each of the one or more candidate words or word pairs indicates a degree of certainty or likelihood that the candidate word or word pair is typically found after, before, and/or within a word sequence (e.g., a sentence) generated from text input detected by presence-sensitive input component 204 before and/or after receipt of the current character sequence being analyzed by LM module 224.
Dictionary data store 234A may include one or more sorted databases (e.g., hash tables, linked lists, sorted arrays, graphs, etc.) that represent dictionaries of one or more written languages. Each dictionary may include a list of words and phrases within a written language vocabulary (e.g., including grammar, slang, and colloquial word usage). LM module 224 of character recognition module 222 may perform a lookup of a character sequence in dictionary data store 234A by comparing portions of the sequence with each word in dictionary data store 234A. LM module 224 may assign a similarity coefficient (e.g., a Jaccard similarity coefficient) to each word in dictionary data store 234A based on the comparison and determine one or more candidate words from dictionary data store 234A that have the greatest similarity coefficients. In other words, the one or more candidate words with the greatest similarity coefficients may represent the potential words in dictionary data store 234A whose spellings most closely match the spelling of the character sequence. LM module 224 may determine one or more candidate words that include some or all of the characters of the character sequence and determine that the one or more candidate words with the highest similarity coefficients represent potential corrected spellings of the character sequence. In some examples, the candidate word with the highest similarity coefficient matches the character sequence generated from the sequence of touch events. For example, candidate words and phrases for the character sequence h-i-t-h-e-r-e may include "hi", "hit", "here", "hi there", and "hit here".
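The candidate lookup described above can be sketched as follows, ranking a toy lexicon by Jaccard similarity over character sets; the mini-lexicon and the use of character-set similarity (rather than a more elaborate string metric) are illustrative assumptions.

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the character sets of two strings (spaces ignored)."""
    sa, sb = set(a.replace(" ", "")), set(b.replace(" ", ""))
    return len(sa & sb) / len(sa | sb)

lexicon = ["hi", "hit", "here", "hi there", "hit here", "hello"]
sequence = "hithere"
ranked = sorted(lexicon, key=lambda w: jaccard(sequence, w), reverse=True)
print(ranked[:2])  # entries whose spelling most closely matches the sequence
```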
LM module 224 may be an n-gram language model. An n-gram language model may provide a probability distribution for an item xi (a letter or a word) in a sequence of contiguous items based on the previous items in the sequence (i.e., P(xi | xi-(n-1), ..., xi-1)). Similarly, an n-gram language model may provide a probability distribution for an item xi in a sequence of contiguous items based on the previous and subsequent items in the sequence (i.e., P(xi | xi-(n-1), ..., xi+(n-1))). For example, a bigram language model (an n-gram model where n = 2) may provide a first probability that the word "there" follows the word "hi" in a sequence (i.e., a sentence) and a different probability that the word "here" follows the word "hit" in a different sentence. A trigram language model (an n-gram model where n = 3) may provide a probability that the word "here" follows a sequence of two preceding words.
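A toy bigram model illustrates these conditional probabilities; the counts are invented for the example, not drawn from any real corpus.

```python
from collections import Counter

# Invented counts from a toy corpus:
bigram_counts = Counter({("hi", "there"): 120, ("hit", "here"): 3, ("hit", "the"): 40})
unigram_counts = Counter({"hi": 150, "hit": 60})

def p_next(word, prev):
    """P(word | prev) under the bigram model."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(p_next("there", prev="hi"))  # 0.8
print(p_next("here", prev="hit"))  # 0.05
```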
In response to receiving the sequence of characters, LM module 224 may output the one or more words and word pairs from dictionary data store 234A that have the highest similarity coefficients and highest language model scores for the sequence. Character recognition module 222 may perform further operations to determine which highest-ranked word or word pair to output to text entry module 220 as the sequence of characters that best represents the sequence of touch events received from text entry module 220. Character recognition module 222 can combine the language model scores output from LM module 224 with the spatial model scores output from SM module 228 to derive a total score indicating a likelihood that the sequence of touch events defined by the text input represents each highest-ranked word or word pair in dictionary data store 234A.
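For illustration only, a minimal sketch of combining a spatial model score and a language model score into a single total score. The log-space addition and the example weights and probabilities are assumptions for the example, not values from this disclosure.

```python
import math

def total_score(spatial_p: float, language_p: float,
                w_spatial: float = 1.0, w_language: float = 1.0) -> float:
    """Weighted sum of log-probabilities; higher is better."""
    return w_spatial * math.log(spatial_p) + w_language * math.log(language_p)

# Two candidate renderings of the same touch-event sequence.
print(total_score(spatial_p=0.70, language_p=0.40))  # e.g., "hi there"
print(total_score(spatial_p=0.65, language_p=0.10))  # e.g., "hit here"
```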
To enhance the word segmentation capability of character recognition module 222 and to detect intended breaks or spaces in the text input, TM module 226 may further analyze, on behalf of character recognition module 222, the touch events received from text entry module 220 and adjust, if needed, the respective total scores associated with the one or more candidate words output from LM module 224. TM module 226 may determine start and end time components associated with each character in the sequence of characters that character recognition module 222 infers from the sequence of touch events received from text entry module 220. Based on the start and end time components associated with each character in the sequence of characters, TM module 226 may determine the duration that elapses after user 118 completes a character and before user 118 begins a subsequent character. TM module 226 may determine that a longer duration between consecutive characters indicates an intended break or space in the text input, and that a shorter duration indicates no intended break or space in the text input.
TM module 226 may boost the total scores of phrases that have spaces or breaks at locations corresponding to longer pauses in the text input. In some examples, TM module 226 may boost the total scores of phrases that have no spaces or breaks at locations corresponding to shorter pauses in the text input.
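For illustration only, a minimal sketch of the time-model adjustment just described: boost a candidate's total score when its space placement agrees with the observed pause. The threshold and boost amounts are assumptions for the example.

```python
PAUSE_THRESHOLD_S = 0.4  # assumed pause length suggesting an intended break

def adjust_for_pause(total: float, has_space_at_pause: bool,
                     pause_duration_s: float, boost: float = 2.0) -> float:
    """Raise the score of candidates whose space placement matches the pause."""
    if pause_duration_s >= PAUSE_THRESHOLD_S and has_space_at_pause:
        return total + boost  # long pause, candidate has a space there
    if pause_duration_s < PAUSE_THRESHOLD_S and not has_space_at_pause:
        return total + boost  # short pause, candidate has no space there
    return total

print(adjust_for_pause(-3.0, has_space_at_pause=True, pause_duration_s=0.8))
```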
In addition to spatial, temporal, and language model features, character recognition module 222 may also rely on other characteristics of the text input to infer the intended characters of the text input. For example, character recognition module 222 may rely on other spatial or distance characteristics of the text input to determine the sequence of characters that most likely represents the text input. Character recognition module 222 may infer that user 118 likely intends a break or space between two consecutive characters of text input when presence-sensitive input component 204 detects that the locations of presence-sensitive input component 204 at which the two consecutive characters are entered are a greater distance apart.
For example, based on the distance between two consecutive portions of text input, character recognition module 222 may increase the score of a character sequence that includes a space or break in response to determining that the distance between the characters delimiting the space or break satisfies a distance threshold. Conversely, character recognition module 222 may decrease the score of a character sequence that includes a space or break in response to determining that the distance between the characters delimiting the space or break does not satisfy the distance threshold.
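For illustration only, a minimal sketch of this distance-based adjustment. The pixel units, threshold, and adjustment amount are assumptions for the example.

```python
DISTANCE_THRESHOLD_PX = 120  # assumed gap suggesting an intended space

def adjust_for_distance(score: float, gap_px: float, delta: float = 1.0) -> float:
    """Raise or lower a space-containing candidate's score based on gap size."""
    return score + delta if gap_px >= DISTANCE_THRESHOLD_PX else score - delta

print(adjust_for_distance(-3.0, gap_px=150))  # boosted: wide gap between strokes
print(adjust_for_distance(-3.0, gap_px=30))   # penalized: narrow gap
```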
In this way, a computing device operating in accordance with the described techniques may predict where to insert a space into a text input based on temporal information, language models, and spatial information associated with the text input. Any other combination of temporal, linguistic, and spatial information may also be used, including a machine-learned function that takes as input metrics of the two portions before and after a potential space.
In some examples, the computing device may use a weighted combination of the temporal and language models and spatial distances, and in some examples, the computing device may use time-based boosting. In other words, if the computing device determines that the user waited more than a certain amount of time between writing two consecutive characters or groups of characters, it is highly likely that a word ended before the pause. The computing device may compare the duration of the pause to a fixed threshold and add a relatively large boost to the language model score of a character sequence that includes a space at that point.
In some examples, the computing device may automatically tune the weights of the time model, the language model, and the spatial signals by using Minimum Error Rate Training (MERT). By using MERT, the computing device may automatically adjust the parameters to minimize the error rate on a set of tuning samples. For example, the computing device may collect training samples of users writing multiple words on the particular type of device (e.g., phone, watch, tablet, etc.) for which tuning is needed. In other examples, the computing device may collect training samples from external data sets (e.g., during training of an entire system or service on which the computing device depends and from which the computing device is separate).
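For illustration only, a minimal sketch of weight tuning against an error rate on tuning samples. This random-search stand-in is not actual MERT; the sample format and feature ordering (time, language, spatial) are assumptions for the example.

```python
import random

def error_rate(weights, samples):
    """Fraction of samples where the weighted score picks the wrong candidate."""
    errors = 0
    for features_per_candidate, correct_index in samples:
        scores = [sum(w * f for w, f in zip(weights, feats))
                  for feats in features_per_candidate]
        if scores.index(max(scores)) != correct_index:
            errors += 1
    return errors / len(samples)

def tune(samples, trials=1000, seed=7):
    """Random search over weights; keep the setting with the lowest error rate."""
    rng = random.Random(seed)
    best = ((1.0, 1.0, 1.0), error_rate((1.0, 1.0, 1.0), samples))
    for _ in range(trials):
        w = tuple(rng.uniform(0.0, 2.0) for _ in range(3))
        err = error_rate(w, samples)
        if err < best[1]:
            best = (w, err)
    return best

# Each sample: ([per-candidate feature vectors (time, LM, spatial)], correct index).
samples = [([(0.9, 0.2, 0.5), (0.1, 0.8, 0.5)], 1),
           ([(0.8, 0.7, 0.6), (0.2, 0.3, 0.4)], 0)]
print(tune(samples))
```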
In some examples, when the time threshold associated with a pause-related boost has elapsed, the computing device may remove previously written strokes, or previously output ink, from display. Likewise, the computing device may provide a further indication that character recognition of the text input has been finalized (e.g., in the context of a scrolling handwriting pane, previously written strokes may be moved out of view, so the user will immediately recognize that writing new content will begin a new word).
In some examples, the computing device may perform similar character recognition techniques with a swipe or continuous gesture keyboard. That is, during gesture entry, the computing device may infer the end of a word when the user stops the gesture. However, in some examples, the computing device may ignore the stop as an end of word, for example, if the break between two gestures is very short. A particular advantage of this technique may be in providing continuous gestures, using a gesture keyboard, for certain languages that allow long compound words (e.g., German).
In other words, during gesture entry, some computing systems may insert a space after each gesture is completed. For languages with compound words, such as German, inserting a space after each gesture ends may sometimes result in too many spaces in the text input. If the time between two gestures is very short, a computing device according to the described techniques may avoid inserting a space between the two consecutive gesture inputs. In some examples, for languages that use few spaces, the computing device may, by default, avoid inserting a space after a gesture and insert a space only if a sufficiently long pause occurs between successive gestures.
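For illustration only, a minimal sketch of this gesture-keyboard behavior: a space is inserted after a completed gesture only when the pause before the next gesture is long enough. The threshold value and function names are assumptions for the example.

```python
GESTURE_PAUSE_THRESHOLD_S = 0.6  # assumed pause length justifying a space

def join_gesture_words(words: list[str], pauses_s: list[float]) -> str:
    """Join gesture-recognized words, adding spaces only after long pauses."""
    out = [words[0]]
    for word, pause in zip(words[1:], pauses_s):
        out.append((" " if pause >= GESTURE_PAUSE_THRESHOLD_S else "") + word)
    return "".join(out)

# German-style compound: a short pause keeps the parts joined.
print(join_gesture_words(["Haus", "tür"], [0.2]))  # "Haustür"
print(join_gesture_words(["Haus", "tür"], [1.0]))  # "Haus tür"
```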
FIG. 3 is a conceptual diagram of graph 300, an example distribution of total score increases that vary based on the duration between portions of text input, in accordance with one or more techniques of this disclosure. For purposes of illustration, FIG. 3 is described below within the context of computing device 100 of FIG. 1.
Graph 300 includes data set 310A and data set 310B. Both data sets 310A and 310B represent a total score increase as a function of time, where time corresponds to the duration of a pause between consecutive characters of text input. Data sets 310A and 310B are two disjoint data sets separated by at least one order of magnitude (denoted "boost"). In some examples, the order of magnitude may be a factor (e.g., 10) or an offset (e.g., a fixed amount). In some examples, the magnitude may be such that the increase defined by data set 310B is high enough that the resulting total score of the candidate character string indicates a likelihood of approximately 100%.
Character recognition module 122 may rely on the functions representing data sets 310A and 310B to calculate an increase in the total score of a character sequence having a space or break corresponding to a pause in text entry. For example, character recognition module 122 may recognize a pause between times t5 and t6 in the sequence of touch events associated with text input received by computing device 100.
In response to determining that the duration between times t5 and t6 satisfies a first level threshold, character recognition module 122 may increase the total score of the character sequence "hit here" by a first amount corresponding to the amount at point 312A in graph 300, based on the duration between times t5 and t6. In response to determining that the duration between times t5 and t6 satisfies a second level threshold, character recognition module 122 may increase the total score of the character sequence "hit here" by a second amount corresponding to the amount at point 312B in graph 300, based on the duration between times t5 and t6.
As shown in FIG. 3, the first amount at point 312A is from a first data set corresponding to data set 310A, and the second amount at point 312B is from a second data set corresponding to data set 310B. Data sets 310A and 310B are two disjoint data sets separated by at least one order of magnitude (denoted "boost"). In this manner, if the duration associated with a pause satisfies a first time threshold (e.g., 400 milliseconds), character recognition module 122 may make a character sequence derived from text input having the pause between consecutive characters more likely to include a space between the consecutive characters. Additionally, if the duration associated with the pause satisfies a second time threshold (e.g., 1 second), character recognition module 122 may make a character sequence derived from the text input almost certain to include a space between the consecutive characters.
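For illustration only, a minimal sketch of the two-tier boost illustrated by graph 300: pauses that satisfy the first threshold get a modest increase (data set 310A), and pauses that satisfy the second threshold get an increase at least an order of magnitude larger (data set 310B). The thresholds follow the examples above; the boost values themselves are assumptions for the example.

```python
FIRST_THRESHOLD_S = 0.4   # e.g., 400 milliseconds
SECOND_THRESHOLD_S = 1.0  # e.g., 1 second

def pause_boost(pause_s: float) -> float:
    """Score increase as a function of pause duration between characters."""
    if pause_s >= SECOND_THRESHOLD_S:
        return 10.0 * pause_s   # data set 310B: near-certain word break
    if pause_s >= FIRST_THRESHOLD_S:
        return 1.0 * pause_s    # data set 310A: likely word break
    return 0.0

print(pause_boost(0.5))  # modest boost, like point 312A
print(pause_boost(1.2))  # order-of-magnitude larger boost, like point 312B
```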
FIG. 4 is a flowchart illustrating example operations performed by an example computing device configured to divide text input into two or more words, in accordance with one or more aspects of the present disclosure. The process of FIG. 4 may be performed by one or more processors of a computing device, such as computing device 100 of FIG. 1 and/or computing device 200 of FIG. 2. In some examples, the steps of the process of FIG. 4 may be repeated, omitted, and/or performed in any order. For purposes of illustration, FIG. 4 is described below within the context of computing device 100 of FIG. 1.
In the example of FIG. 4, computing device 100 may receive (400) a first input of at least one first text character at an initial time, and computing device 100 may receive (410) a second input of at least one second text character at a subsequent time. For example, presence-sensitive display 112 may detect initial text input as user 118 gestures at or near presence-sensitive display 112 to draw or write the letters h-i-t between times t0 and t5. Presence-sensitive display 112 may detect subsequent text input as user 118 gestures at or near presence-sensitive display 112 to draw or write the letters h-e-r-e between times t6 and t13.
Computing device 100 may determine (420) a first score for a first character sequence that does not include a space character between the at least one first character and the at least one second character. Computing device 100 may determine (430) a second score for a second character sequence that includes a space character between the at least one first character and the at least one second character. For example, text entry module 120 may process the initial and subsequent text inputs into a sequence of touch events that define when and where presence-sensitive display 112 detected that user 118 drew the letters h-i-t-h-e-r-e. The spatial model of character recognition module 122 may generate a sequence of characters, along with scores associated with the touch events, based on the sequence of touch events, and input the sequence of characters into the language model. The language model of character recognition module 122 may output the two candidate strings "hi there" and "hit here" as potential character strings that user 118 intended to enter. Character recognition module 122 may assign the first score to the candidate string "hi there" and may assign the second score to the candidate string "hit here". The first score may be based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score may be based on at least one of a second language model score or a second spatial model score associated with the second character sequence.
Computing device 100 may adjust (440) the second score based on the duration between the initial time and the subsequent time to determine a third score for the second character sequence. For example, character recognition module 122 may compare the amount of time between time t5 (when user 118 finished entering the initial text input associated with the letters h-i-t) and time t6 (when user 118 began entering the subsequent text input associated with the letters h-e-r-e) to one or more time thresholds indicating an intended break or space in the text input. If the pause between times t5 and t6 satisfies the one or more time thresholds, character recognition module 122 may increase the second score to determine the third score for the candidate string "hit here".
Computing device 100 may determine (450) whether the third score exceeds the first score. For example, after adjusting the scores based on the temporal characteristics of the text input, character recognition module 122 may output the candidate string with the highest score to text entry module 120.
If the third score exceeds the first score after the adjustment for the pause, computing device 100 may output (460) an indication of the second character sequence. For example, character recognition module 122 may output the string "hit here" to text entry module 120 so that text entry module 120 may cause presence-sensitive display 112 to display the phrase "hit here" (e.g., as in screenshot 114B).
However, if the third score does not exceed the first score after the adjustment for the pause, computing device 100 may refrain (470) from outputting an indication of the second character sequence and instead output an indication of the first character sequence. For example, character recognition module 122 may output the string "hi there" to text entry module 120 so that text entry module 120 may cause presence-sensitive display 112 to display the more common phrase "hi there" despite the pause between times t5 and t6.
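For illustration only, a minimal sketch tying together the flow of FIG. 4: score both candidate segmentations, adjust the with-space score for the observed pause, and output whichever candidate wins. All numeric values are assumptions for the example.

```python
PAUSE_THRESHOLD_S = 0.4
BOOST = 2.0

def choose_segmentation(no_space_score: float, with_space_score: float,
                        pause_s: float) -> str:
    """Apply the pause boost to the with-space candidate and pick the winner."""
    third = with_space_score + (BOOST if pause_s >= PAUSE_THRESHOLD_S else 0.0)
    return "with_space" if third > no_space_score else "no_space"

# "hi there" (first score) vs "hit here" (second score) with a 0.8 s pause.
print(choose_segmentation(no_space_score=-2.0, with_space_score=-3.0,
                          pause_s=0.8))  # "with_space" -> "hit here"
```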
Clause 1. A method comprising: receiving, by a computing device, a second input of at least one second text character at a subsequent time after receiving a first input of at least one first text character at an initial time; determining, by the computing device, a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes space characters between the at least one first text character and the at least one second text character, and the first character sequence does not include space characters between the at least one first text character and the at least one second text character; determining, by the computing device, a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence; adjusting, by the computing device, the second score based on the duration between the initial time and the subsequent time to determine a third score associated with the second character sequence; and outputting, by the computing device for display, an indication of the second character sequence in response to determining that the third score exceeds the first score.
Clause 2. The method of clause 1, wherein adjusting the second score comprises: increasing, by the computing device, the second score based on the duration to determine the third score.
Clause 3. The method of clause 2, wherein increasing the second score comprises: increasing, by the computing device, the second score by a first amount based on the duration in response to determining that the duration satisfies a first level threshold; and increasing, by the computing device, the second score by a second amount based on the duration in response to determining that the duration satisfies a second level threshold.
Clause 4. The method of clause 3, wherein increasing the second score further comprises: determining, by the computing device, the first amount from a first data set; and determining, by the computing device, the second amount from a second data set, wherein the first data set and the second data set are two disjoint data sets separated by at least one order of magnitude.
Clause 5. The method of any of clauses 1 to 4, wherein: receiving the first input comprises detecting, by the computing device, a first selection of one or more keys of a keyboard; and receiving the second input comprises detecting, by the computing device, a second selection of one or more keys of the keyboard.
Clause 6. The method of clause 5, wherein the keyboard is a graphical keyboard or a physical keyboard.
Clause 7. The method of any of clauses 1 to 6, wherein: receiving the first input comprises detecting, by the computing device, a first handwriting input at a presence-sensitive input device; and receiving the second input comprises detecting, by the computing device, a second handwriting input at the presence-sensitive input device.
Clause 8. The method of clause 7, further comprising: determining, by the computing device, based on the first handwriting input, a first location of the presence-sensitive input device at which the first input of the at least one first text character was received; determining, by the computing device, based on the second handwriting input, a second location of the presence-sensitive input device at which the second input of the at least one second text character was received; and adjusting, by the computing device, the second score based on a distance between the first location and the second location to determine the third score.
Clause 9. The method of clause 8, wherein adjusting the second score comprises: increasing, by the computing device, the second score based on the distance in response to determining that the distance satisfies a distance threshold; and in response to determining that the distance does not satisfy the distance threshold, decreasing, by the computing device, the second score based on the distance.
Clause 10. The method of any of clauses 1 to 9, further comprising: in response to determining that the first score exceeds the third score: refraining from outputting an indication of the second character sequence; and outputting, by the computing device, an indication of the first character sequence for display.
Clause 11. A computing device, comprising: a presence-sensitive display; at least one processor; and at least one module operable by the at least one processor to: receive a second input of at least one second text character detected by the presence-sensitive display at a subsequent time after receiving a first input of at least one first text character detected by the presence-sensitive display at an initial time; determine a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes space characters between the at least one first text character and the at least one second text character, and the first character sequence does not include space characters between the at least one first text character and the at least one second text character; determine a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence; adjust the second score based on the duration between the initial time and the subsequent time to determine a third score associated with the second character sequence; and in response to determining that the third score exceeds the first score, output an indication of the second character sequence for display at the presence-sensitive display.
Clause 12. The computing device of clause 11, wherein the at least one module is further operable by the at least one processor to adjust the second score by at least: increasing the second score based on the duration to determine the third score.
Clause 13. The computing device of clause 12, wherein the at least one module is further operable by the at least one processor to increase the second score by at least: in response to determining that the duration satisfies a first level threshold, increasing the second score by a first amount based on the duration; and in response to determining that the duration satisfies a second level threshold, increasing the second score by a second amount based on the duration.
Clause 14. The computing device of clause 13, wherein the at least one module is further operable by the at least one processor to increase the second score by at least: determining the first amount from a first data set; and determining the second amount from a second data set, wherein the first data set and the second data set are two disjoint data sets separated by at least one order of magnitude.
Clause 15. The computing device of any of clauses 11 to 14, wherein the at least one module is further operable by the at least one processor to: receive the first input by at least detecting a first handwriting input at the presence-sensitive display; and receive the second input by at least detecting a second handwriting input at the presence-sensitive display.
Clause 16. A computer-readable storage medium comprising instructions that, when executed by at least one processor of a computing device, cause the at least one processor to: receive a second input of at least one second text character at a subsequent time after receiving a first input of at least one first text character at an initial time; determine a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes space characters between the at least one first text character and the at least one second text character, and the first character sequence does not include space characters between the at least one first text character and the at least one second text character; determine a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence; adjust the second score based on the duration between the initial time and the subsequent time to determine a third score associated with the second character sequence; and output, for display, an indication of the second character sequence in response to determining that the third score exceeds the first score.
Clause 17. The computer-readable storage medium of clause 16, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to adjust the second score by at least: increasing the second score based on the duration to determine the third score.
Clause 18. The computer-readable storage medium of clause 17, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to increase the second score by at least: in response to determining that the duration satisfies a first level threshold, increasing the second score by a first amount based on the duration; and in response to determining that the duration satisfies a second level threshold, increasing the second score by a second amount based on the duration.
Clause 19. The computer-readable storage medium of clause 18, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to increase the second score by at least: determining the first amount from a first data set; and determining the second amount from a second data set, wherein the first data set and the second data set are two disjoint data sets separated by at least one order of magnitude.
Clause 20. The computer-readable storage medium of any of clauses 16 to 19, comprising additional instructions that, when executed by the at least one processor of the computing device, cause the at least one processor to: receive the first input by at least detecting a first handwriting input at a presence-sensitive input device; and receive the second input by at least detecting a second handwriting input at the presence-sensitive input device.
Clause 21. A system comprising means for performing any of the methods according to clauses 1-10.
Clause 22. A computing device comprising means for performing any of the methods according to clauses 1-10.
Clause 23. The computing device of clause 11, further comprising means for performing any of the methods according to clauses 1 to 10.
In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, which includes any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) a non-transitory, tangible computer-readable storage medium or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code, and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor", as used herein, may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware modules and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handheld device, an integrated circuit (IC), or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but such components, modules, or units do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Various examples have been described. These and other examples are within the scope of the following claims.

Claims (15)

1. A method, comprising:
determining, by a computing device, one or more time thresholds based on a plurality of prior user inputs using a machine learning model;
receiving, by the computing device, a second input of at least one second text character at a subsequent time after receiving a first input of at least one first text character at an initial time;
determining, by the computing device, a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes space characters between the at least one first text character and the at least one second text character, and the first character sequence does not include the space characters between the at least one first text character and the at least one second text character;
determining, by the computing device, a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence, and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence;
using at least one of the one or more time thresholds, adjusting, by the computing device, the second score based on an amount of time between the first input and the second input to determine a third score associated with the second character sequence;
determining, by the computing device, whether to output an indication of the first character sequence or an indication of the second character sequence based on the first score and the third score; and
in response to determining to output the indication of the second character sequence, outputting, by the computing device, the indication of the second character sequence for display.
2. The method of claim 1, further comprising:
at least one of the one or more time thresholds is updated by the computing device using the machine learning model based on the first input and the second input.
3. The method of claim 1, wherein adjusting the second score comprises, responsive to determining that the amount of time between the first input and the second input is greater than the at least one of the one or more time thresholds, increasing, by the computing device, the second score to determine the third score.
4. A method according to any one of claims 1-3, wherein:
receiving the first input includes detecting, by the computing device, a first selection of one or more keys of a keyboard; and
receiving the second input includes: a second selection of the one or more keys of the keyboard is detected by the computing device.
5. The method of claim 4, wherein the keyboard is a graphical keyboard or a physical keyboard.
6. A method according to any one of claims 1-3, wherein:
receiving the first input includes: detecting, by the computing device, a first handwriting input at a presence-sensitive input device; and
receiving the second input includes: a second handwriting input is detected at the presence-sensitive input device by the computing device.
7. The method of claim 6, further comprising:
determining, by the computing device, a first location of the presence-sensitive input device based on the first handwriting input at which the first input of the at least one first text character was received;
determining, by the computing device, a second location of the presence-sensitive input device based on the second handwriting input at which the second input of the at least one second text character was received; and
adjusting, by the computing device, the second score based on a distance between the first location and the second location to determine the third score.
8. The method of claim 7, wherein adjusting the second score comprises:
increasing, by the computing device, the second score based on the distance in response to determining that the distance satisfies a distance threshold; and
in response to determining that the distance does not satisfy the distance threshold, decreasing, by the computing device, the second score based on the distance.
9. A method according to any one of claims 1-3, further comprising:
in response to determining to output the indication of the first character sequence:
refraining from outputting the indication of the second character sequence; and
outputting, by the computing device, the indication of the first character sequence for display.
10. A computing device, comprising:
a presence-sensitive display;
at least one processor; and
a storage device storing at least one module executable by the at least one processor to:
determine one or more time thresholds based on a plurality of prior user inputs using a machine learning model;
receive a second input of at least one second text character detected by the presence-sensitive display at a subsequent time after receiving an indication of a first input of at least one first text character detected by the presence-sensitive display at an initial time;
determine a first character sequence and a second character sequence based on the at least one first text character and the at least one second text character, wherein the second character sequence includes space characters between the at least one first text character and the at least one second text character, and the first character sequence does not include the space characters between the at least one first text character and the at least one second text character;
determine a first score associated with the first character sequence and a second score associated with the second character sequence, wherein the first score is based on at least one of a first language model score or a first spatial model score associated with the first character sequence and the second score is based on at least one of a second language model score or a second spatial model score associated with the second character sequence;
using at least one of the one or more time thresholds, adjust the second score based on an amount of time between the first input and the second input to determine a third score associated with the second character sequence;
determine whether to output an indication of the first character sequence or an indication of the second character sequence based on the first score and the third score; and
in response to determining to output the indication of the second character sequence, output the indication of the second character sequence for display by the presence-sensitive display.
11. The computing device of claim 10, wherein the at least one module is further executable by the at least one processor to update at least one of the one or more time thresholds based on the first input and the second input using the machine learning model.
12. The computing device of claim 10, wherein the at least one module is further executable by the at least one processor to adjust the second score by being executable by the at least one processor to: in response to determining that the amount of time between the first input and the second input is greater than the at least one of the one or more time thresholds, increase the second score to determine the third score.
13. The computing device of any of claims 10-12, wherein:
the first input includes a first handwriting input;
the second input includes a second handwriting input; and
the at least one module is further executable by the at least one processor to adjust the second score by at least being executable to:
determine, based on the first handwriting input, a first location of the presence-sensitive display at which the first input of the at least one first text character was received;
determine, based on the second handwriting input, a second location of the presence-sensitive display at which the second input of the at least one second text character was received; and
adjust the second score based on a distance between the first location and the second location to determine the third score.
14. A system comprising means for performing any of the methods of claims 1-9.
15. A computer-readable storage medium encoded with instructions that, when executed, cause one or more processors of a wearable device to perform any of the methods of claims 1-9.
CN202110569755.3A 2015-08-26 2016-08-05 Time-based word segmentation Active CN113407100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110569755.3A CN113407100B (en) 2015-08-26 2016-08-05 Time-based word segmentation

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US14/836,113 2015-08-26
US14/836,113 US10402734B2 (en) 2015-08-26 2015-08-26 Temporal based word segmentation
CN202110569755.3A CN113407100B (en) 2015-08-26 2016-08-05 Time-based word segmentation
CN201680024744.4A CN107850950B (en) 2015-08-26 2016-08-05 Time-based word segmentation
PCT/US2016/045713 WO2017034779A1 (en) 2015-08-26 2016-08-05 Temporal based word segmentation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201680024744.4A Division CN107850950B (en) 2015-08-26 2016-08-05 Time-based word segmentation

Publications (2)

Publication Number Publication Date
CN113407100A CN113407100A (en) 2021-09-17
CN113407100B true CN113407100B (en) 2024-03-29

Family

ID=56799554

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201680024744.4A Active CN107850950B (en) 2015-08-26 2016-08-05 Time-based word segmentation
CN202110569755.3A Active CN113407100B (en) 2015-08-26 2016-08-05 Time-based word segmentation

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201680024744.4A Active CN107850950B (en) 2015-08-26 2016-08-05 Time-based word segmentation

Country Status (4)

Country Link
US (2) US10402734B2 (en)
EP (2) EP3644163B1 (en)
CN (2) CN107850950B (en)
WO (1) WO2017034779A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402734B2 (en) 2015-08-26 2019-09-03 Google Llc Temporal based word segmentation
CN108304367B (en) * 2017-04-07 2021-11-26 腾讯科技(深圳)有限公司 Word segmentation method and device
CN107273356B (en) 2017-06-14 2020-08-11 北京百度网讯科技有限公司 Artificial intelligence based word segmentation method, device, server and storage medium
CN107086027A (en) * 2017-06-23 2017-08-22 青岛海信移动通信技术股份有限公司 Character displaying method and device, mobile terminal and storage medium
KR102509822B1 (en) * 2017-09-25 2023-03-14 삼성전자주식회사 Method and apparatus for generating sentence
KR20210061523A (en) * 2019-11-19 2021-05-28 삼성전자주식회사 Electronic device and operating method for converting from handwriting input to text

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2463230A1 (en) * 2001-10-15 2003-04-24 Jonathon Leigh Napper A method and apparatus for decoding handwritten characters
EP2535844A2 (en) * 2011-06-13 2012-12-19 Google Inc. Character recognition for overlapping textual user input
EP2703955A1 (en) * 2012-08-31 2014-03-05 BlackBerry Limited Scoring predictions based on prediction length and typing speed
US8701032B1 (en) * 2012-10-16 2014-04-15 Google Inc. Incremental multi-word recognition
US9081482B1 (en) * 2012-09-18 2015-07-14 Google Inc. Text input suggestion ranking

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6340967B1 (en) 1998-04-24 2002-01-22 Natural Input Solutions Inc. Pen based edit correction interface method and apparatus
US6938222B2 (en) * 2002-02-08 2005-08-30 Microsoft Corporation Ink gestures
JP4177335B2 (en) 2003-05-02 2008-11-05 富士通株式会社 Handwritten character input device and handwritten character input processing method
KR101027167B1 (en) 2005-12-13 2011-04-05 인터내셔널 비지네스 머신즈 코포레이션 Autocompletion method and system
CN101382844A (en) 2008-10-24 2009-03-11 上海埃帕信息科技有限公司 Method for inputting spacing participle
US8515969B2 (en) * 2010-02-19 2013-08-20 Go Daddy Operating Company, LLC Splitting a character string into keyword strings
US8310461B2 (en) 2010-05-13 2012-11-13 Nuance Communications Inc. Method and apparatus for on-top writing
GB201200643D0 (en) * 2012-01-16 2012-02-29 Touchtype Ltd System and method for inputting text
US8768062B2 (en) * 2010-11-09 2014-07-01 Tata Consulting Services Limited Online script independent recognition of handwritten sub-word units and words
US20120167009A1 (en) 2010-12-22 2012-06-28 Apple Inc. Combining timing and geometry information for typing correction
DE112011105305T5 (en) * 2011-06-03 2014-03-13 Google, Inc. Gestures for text selection
US8094941B1 (en) 2011-06-13 2012-01-10 Google Inc. Character recognition for overlapping textual user input
US8667414B2 (en) * 2012-03-23 2014-03-04 Google Inc. Gestural input at a virtual keyboard
US9047268B2 (en) 2013-01-31 2015-06-02 Google Inc. Character and word level language models for out-of-vocabulary text input
US9008429B2 (en) * 2013-02-01 2015-04-14 Xerox Corporation Label-embedding for text recognition
US9384403B2 (en) * 2014-04-04 2016-07-05 Myscript System and method for superimposed handwriting recognition technology
CN104598937B (en) * 2015-01-22 2019-03-12 百度在线网络技术(北京)有限公司 The recognition methods of text information and device
US10402734B2 (en) 2015-08-26 2019-09-03 Google Llc Temporal based word segmentation

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2463230A1 (en) * 2001-10-15 2003-04-24 Jonathon Leigh Napper A method and apparatus for decoding handwritten characters
EP2535844A2 (en) * 2011-06-13 2012-12-19 Google Inc. Character recognition for overlapping textual user input
EP2703955A1 (en) * 2012-08-31 2014-03-05 BlackBerry Limited Scoring predictions based on prediction length and typing speed
US9081482B1 (en) * 2012-09-18 2015-07-14 Google Inc. Text input suggestion ranking
US8701032B1 (en) * 2012-10-16 2014-04-15 Google Inc. Incremental multi-word recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Neural-network-based word segmentation method; Xu Bingzheng et al.; Journal of Chinese Information Processing; 1993-12-31 (No. 02); pp. 36-44 *

Also Published As

Publication number Publication date
US20170061291A1 (en) 2017-03-02
US10402734B2 (en) 2019-09-03
CN107850950A (en) 2018-03-27
EP3644163B1 (en) 2023-06-28
CN107850950B (en) 2021-06-01
EP3644163A1 (en) 2020-04-29
CN113407100A (en) 2021-09-17
US20190362251A1 (en) 2019-11-28
US10846602B2 (en) 2020-11-24
EP3274792A1 (en) 2018-01-31
EP3274792B1 (en) 2020-04-15
WO2017034779A1 (en) 2017-03-02

Similar Documents

Publication Publication Date Title
US10671281B2 (en) Neural network for keyboard input decoding
CN113407100B (en) Time-based word segmentation
CN109120511B (en) Automatic correction method, computing device and system based on characteristics
CN107430448B (en) Anti-learning techniques for adaptive language models in text entry
US9552080B2 (en) Incremental feature-based gesture-keyboard decoding
US10095405B2 (en) Gesture keyboard input of non-dictionary character strings
US8756499B1 (en) Gesture keyboard input of non-dictionary character strings using substitute scoring
US10146764B2 (en) Dynamic key mapping of a graphical keyboard
EP3241105B1 (en) Suggestion selection during continuous gesture input
US9952763B1 (en) Alternative gesture mapping for a graphical keyboard

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant